Removing TSM for Databases

2018-10-31 Thread Thomas Denier
We have a 64 bit x86 Linux server on which TSM for Databases (for Oracle) 7.1 
was installed but never used. We would like to remove this TSM component. I 
have so far been completely unsuccessful in finding instructions for doing 
this. Where is the uninstall procedure documented?

Thomas Denier,
Thomas Jefferson University



The information contained in this transmission contains privileged and 
confidential information. It is intended only for the use of the person named 
above. If you are not the intended recipient, you are hereby notified that any 
review, dissemination, distribution or duplication of this communication is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender by reply email and destroy all copies of the original message.

CAUTION: Intended recipients should NOT use email communication for emergent or 
urgent health care matters.


Re: counting files in directories for a node

2017-11-16 Thread Thomas Denier
My previous response omitted the "type='FILE'", which is needed to get the 
result you want.

Thomas Denier,
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Thursday, November 16, 2017 10:18
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] counting files in directories for a node

Tsm server 7.1.7.100

I have a request to count the number of files in each directory on the e: drive 
for a node.
The desired output is path name and file count.

Any pointers on where to start?

I can do it given a specific path,

Select count(*) from backups where node_name=bla and hl_name=bla and type='FILE'

How do I extend this to all directories in a filespace?

Thanks for the help.


Re: counting files in directories for a node

2017-11-16 Thread Thomas Denier
You will need something like:

select hl_name,count(ll_name) from backups where node_name='JUPITER' and 
filespace_id=3 group by hl_name

The "group by" clause triggers the desired counting by directory.

You might be able to use something like "filespace_name='\\jupiter\e$'" as an 
alternative to the "filespace_id" test; I am not sure whether you would have 
problems with Unicode characters or not.

Depending on your exact goal, you might need to add "state='ACTIVE_VERSION'" to 
the selection criteria. If you omit this condition, the counts will include 
copies of recently deleted files and copies of older versions of recently 
updated files.
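Putting those pieces together, a hedged sketch of the full query (the node name 'JUPITER' and filespace_id 3 are the placeholders from the example above; substitute your own):

```sql
-- Sketch only: counts active backed-up files per directory for one filespace.
select hl_name, count(ll_name) as file_count
  from backups
 where node_name='JUPITER'
   and filespace_id=3
   and type='FILE'
   and state='ACTIVE_VERSION'
 group by hl_name
```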

Thomas Denier,
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Thursday, November 16, 2017 10:18
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] counting files in directories for a node

Tsm server 7.1.7.100

I have a request to count the number of files in each directory on the e: drive 
for a node.
The desired output is path name and file count.

Any pointers on where to start?

I can do it given a specific path,

Select count(*) from backups where node_name=bla and hl_name=bla and type='FILE'

How do I extend this to all directories in a filespace?

Thanks for the help.


Odd ANR2716E messages

2017-10-13 Thread Thomas Denier
One of our Windows client backups has had a minor but very puzzling problem on 
three of the last four days. On each of the three days the following sequence 
of events occurred:

1.The TSM server displayed the message "ANR2716E Schedule prompter was not able 
to contact client TJVDPMHD using type 1" three minutes and ten or eleven 
seconds after the nominal starting time for the backup.
2.The contact attempt was retried successfully thirty seconds after the error 
message.
3.The backup ran successfully and with no further sign of network communication 
issues.

The client system has a network interface dedicated to TSM traffic. This 
interface is on the same subnet as one of the network interfaces on the system 
hosting the TSM server. There are 24 other client systems on the subnet. None 
of the 24 have shown any recent signs of network communications issues.

A "query node" command reports that the client system is running 64 bit Windows 
7 and using TSM 6.2.4.0 client code.

The TSM server code is at level 6.3.5.0 and is running under zSeries Linux. I 
checked the various log files in /var/log and found no sign of network errors 
within the last few days.

Does anyone know of an explanation for the odd combination of consistent 
behavior on a 24 hour time scale and inconsistent behavior on a 30 second time 
scale?

Thomas Denier,
Thomas Jefferson University


Re: ndmp

2017-08-01 Thread Thomas Denier
You might be better off having proxy systems access the NAS contents using CIFS 
and/or NFS, and having the proxy systems use the backup/archive client to back 
up the NAS contents.

My department supports Commvault as well as TSM (the result of a merger of 
previously separate IT organizations). The Commvault workload includes a NAS 
server on the same scale as yours. Our Commvault representative advised us to 
forget about Commvault's NDMP support and use the Commvault analog of the 
approach described in the previous paragraph.

The subject of NAS backup coverage arose at an IBM training/marketing event for 
the Spectrum family of products. The IBM representative who responded was not 
as bluntly dismissive of NDMP as our Commvault representative, but he sounded 
decidedly unenthusiastic when he mentioned NDMP as a possible approach to NAS 
backups.

Thomas Denier,
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Remco 
Post
Sent: Monday, July 31, 2017 16:41
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ndmp

Hi all,

I’m working on a large TSM implementation for a customer who also has HDS NAS 
systems, and quite some data in those systems, more than 100 TB that needs to 
be backed up. We were planning to go 100% directory container for the new 
environment, but alas IBM's "best of both worlds" (DISK & FILE) doesn't support 
NDMP and I don't like FILE with deduplication (too much of a hassle), so is it 
really true, are we really stuck with tape? Isn't it about time, after so many 
years, that IBM finally gives us a decent solution to back up NAS systems?

--

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622



Re: doing something like an incremental restore

2017-06-21 Thread Thomas Denier
See the documentation for the "fromdate" and "fromtime" options.
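For example, a hedged sketch (the source path, destination, and timestamp are placeholders, not taken from the thread):

```
dsmc restore e:\* d:\staging\ -subdir=yes -replace=all -fromdate=06/20/2017 -fromtime=00:00
```

Only objects backed up on or after the given date and time are restore candidates, so the bulk of the data restored earlier is skipped and you pick up just the changed and added files.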

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Wednesday, June 21, 2017 15:23
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] doing something like an incremental restore

Working on a strange project.

First, I have a Windows server with a 13 TB filesystem which needs to be copied 
to another server.

Given our network circumstances (the server is 60 miles away), it was quicker to 
restore its data to a local machine.
However, there may have been changes while that was restoring.
So, how to restore only changed or added data?

Replace=no is not the answer, but I didn't see anything better in the client 
manual.

Client version 7.1.6.

Thanks for any suggestions.


Re: Multiple NFS mounts to same DataDomain

2017-02-14 Thread Thomas Denier
I have successfully removed directories from file device classes a number of 
times. The sequence of events was as follows:
1.Remove the directory from the list in the device class definition to prevent 
allocation of new scratch volumes on the directory.
2.List filling volumes on the directory and make them read only, preventing 
appends to existing volumes on the directory.
3.Run "move data" commands to relocate the contents of volumes to other 
directories.
4.Wait for database backups and pending volumes to age off.

The Linux administrator thought step 3 would have better overall throughput 
with multiple streams of "move data" commands. I used the "-P" option of 
"xargs" to manage the multiple streams.
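The steps above can be sketched as administrative commands (a sketch only; the device class name, directory paths, and volume name are assumptions, not from my environment):

```
/* Step 1: drop the retiring directory from the device class definition. */
update devclass DDFILEDEV dir=/DD/tsm1/mnt2,/DD/tsm1/mnt3

/* Step 2: stop appends to existing volumes in that directory. */
update volume /DD/tsm1/mnt1/* access=readonly

/* Step 3: relocate each volume's contents (repeat per volume). */
move data /DD/tsm1/mnt1/00000123.BFS wait=yes
```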

Thomas Denier,
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Tuesday, February 14, 2017 08:35
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Multiple NFS mounts to same DataDomain

Arnaud's discussion on the another thread is SO interesting (Availability for 
Spectrum Protect 8.1 server software for Linux on power system).

It got me thinking of our problems . . .

> NFS, whose performance is not that good on AIX systems

Agreed!!!  After getting DataDomain system and using NFS we were/are VERY 
unhappy with the NFS performance.

Our Unix admins worked with IBM/AIX support, and finally got an admission that 
the problem is AIX/NFS using a single TCP socket for all writes.  The 
workaround was to use multiple mount points to the same NFS share and spread 
writes (somehow) across them. He did this and got higher throughput.

So now I'm wondering if we could use multiple NFS mounts to the same DD for our 
file device pools.

  aix:  /DD/tsm1/mnt1    dd: /data/col1/tsm1/mnt1
        /DD/tsm1/mnt2        /data/col1/tsm1/mnt2
        /DD/tsm1/mnt3        /data/col1/tsm1/mnt2

Then use multiple dirs for the FILE device class:

   define devclass DDFILEDEV devtype=file dir=/DD/tsm1/mnt1,/DD/tsm1/mnt2,/DD/tsm1/mnt3

According to the link to dsmISI again, TSM will roughly balance across the 
multiple mount points, hopefully giving better write throughput.  I've been 
VERY reluctant to try this since it appears once you add a dir to a file device 
devclass, it's there forever!


I'm curious if anyone is doing this.

Rick



Re: Archive deletion

2016-10-25 Thread Thomas Denier
My best guess is that inventory expiration won't remove an archived directory 
if archived copies of any of the directory's contents still exist, and that the 
order of events in inventory expiration prevents a single expiration process 
from working its way up the directory tree. If this theory is correct, the 
successive expirations behave as follows:

1.Remove files and empty directories (if any).
2.Remove directories that had files but no sub-directories.
3.Remove directories that had one level of sub-directories.
4.Remove directories that had two levels of sub-directories.

and so on.
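If this theory is correct, repeated expiration runs scoped to the node should eventually clear everything; a hedged sketch (the node name is a placeholder):

```
expire inventory node=MYNODE wait=yes
```

Repeat until a check such as "select count(*) from archives where node_name='MYNODE'" returns zero.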

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
Eric van (ITOPT3) - KLM
Sent: Tuesday, October 25, 2016 07:57
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Archive deletion

Hi guys!
I was always under the impression that deleted archives were deleted immediately 
from TSM. I just discovered that this is not the case. Although a client can no 
longer retrieve an archive after deletion, the archive files are still present 
in TSM. A select * from archives where node_name='MYNODE' confirms this.
The strange thing is that the archive objects of type=file are removed by the 
first running expiration process, but the archive objects of type=dir are not! 
I had to run multiple consecutive expirations on my server to get rid of all of 
them. Each run removed a few of them and after the 5th run all archive objects 
were gone. I really don't understand this behavior...
Thanks in advance for any explanation!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering


Re: Prompted Clients Missing backups

2016-10-24 Thread Thomas Denier
Are the one time schedule tests run at about the same time of day as the 
failing daily schedules? If not, the missed backups are probably the result of 
some change in the environment with time of day. The most obvious possibility 
is that someone is routinely shutting down the problem children just before 
leaving work and starting them up just after arriving at work. More exotic 
possibilities would include network maintenance or upgrades that are being 
performed piecemeal outside of office hours.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Huebschman, George
Sent: Monday, October 24, 2016 11:23
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Prompted Clients Missing backups

Hello everyone,
TSM server - Server Version 6, Release 3, Level 5.100 TSM server is on AIX, 
7.1.0.0

I am having an ongoing problem with a small set of clients that miss nightly 
backups.
All our client schedules are server prompted.

They worked just fine before an incident where the SAN was disrupted.

I go in daily and restart the dsmcad.
There are no errors in the dsmerror.log or dsmsched.log.
There are entries in the dsmsched.log that show the time scheduled for the 
daily backup was picked up early in the day, but then nothing happens.
(It seems that might be irrelevant given that they are server prompted; I 
have not worked with server-prompted schedules before.) These are Windows servers.

To get these servers to back up, instead of running a manual dsmc -i from the 
client, we have a manually triggered "ONCE" backup script that creates a one 
time backup schedule for that client, then runs it immediately.
All that the script does is copy the existing schedule, prefix it with 
"ONCE_", then change the schedule date and time to (startd=today startt=now 
perunits=onetime).
THAT works.
So communication from the TSM server to the client should not be a problem.
And compatibility between the client versions and the TSM Server version should 
not be an issue.

So, if it can work by schedule, why doesn't it?

I get these error messages in the actlog:
ANR2716E Schedule prompter was not able to contact client   using 
type 1 (xx.xxx.xx.xxx  )


User response: Verify that the address type and address are correct for this 
client. The only valid address type is 1 (for TCP/IP). Make sure that the 
client scheduler is not using an invalid address, obtained at the time the 
client scheduler was started, from either the client's options file or from the 
command line. Verify that the client scheduler for node node name is running 
and that the necessary communication links to that scheduler are operational. 
Firewalls must allow traffic from the server to the client and from the client 
to the server without the session timing out. Ensure that the DNS configuration 
is correct.

Clients are not identical:

Node Name: 

                Client Version: Version 6, release 4, level 2.0
                       Locked?: No
                   Compression: Client
Last Communication Method Used: Tcp/Ip
  Maximum Mount Points Allowed: 1
            Session Initiation: ClientOrServer
                 Deduplication: ClientOrServer
                    Hypervisor:
                Client OS Name: WIN:Windows Server 2008 R2
 Client Processor Architecture: x64


Node Name: 

                Client Version: Version 7, release 1, level 6.0
         Invalid Sign-on Count: 0
                       Locked?: No
                   Compression: Client
Last Communication Method Used: Tcp/Ip
  Maximum Mount Points Allowed: 1
            Session Initiation: ClientOrServer
                 Deduplication: ServerOnly
                    Hypervisor: VMware
                Client OS Name: WIN:Windows Server 2012 R2
 Client Processor Architecture: x64

George Huebschman



**
This email and any files transmitted with it are intended solely for the use of 
the individual or agency to whom they are addressed.
If you have received this email in error please notify the Navy Exchange 
Service Command e-mail administrator. This footnote also confirms that this 
email message has been scanned for the presence of computer viruses.

Thank You!
**

Re: Decommision VM?

2016-10-03 Thread Thomas Denier
The 30 day retention period for a specific backup copy starts when TSM 
discovers that the corresponding file on the client system's disk has been 
deleted or updated. TSM never got the opportunity to make that discovery for 
backup copies of the files on the client system's disk at the time of the last 
backup. Those backup copies will stay around until a TSM administrator forces 
the TSM server to get rid of them. I usually deal with this by executing 
"delete filespace" commands on a deletion date negotiated with the client 
system administrator (usually 60 days after the last backup session).
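When the negotiated date arrives, the cleanup looks like this (the node name is a placeholder; the wildcard removes all of the node's filespaces):

```
delete filespace DECOMMED_NODE * type=any wait=yes
```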

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Monday, October 03, 2016 12:50
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Decommision VM?

I decommissioned a VM on 9/1/2016.  We have a 30 day retention of data.  The 
filespace still has not been removed even though it is now 10/3/2016.  Why not?

Output from a q file shows the filespace is decommissioned.

   Platform: TDP VMware
 Filespace Type: API:TSMVM
  Is Filespace Unicode?: No
   Capacity: 0 bytes
   Pct Util: 0.0
Last Backup Start Date/Time: 08/30/2016 03:13:32
 Days Since Last Backup Started: 34
   Last Backup Completion Date/Time: 08/30/2016 04:09:46
   Days Since Last Backup Completed: 34
Last Full NAS Image Backup Completion Date/Time:
Days Since Last Full NAS Image Backup Completed:
Last Backup Date/Time From Client (UTC): 08/30/2016 08:00:19
Last Archive Date/Time From Client (UTC):
    Last Replication Start Date/Time: 10/03/2016 10:49:29
 Days Since Last Replication Started: <1
Last Replication Completion Date/Time: 10/03/2016 11:08:51
Days Since Last Replication Completed: <1

   Backup Replication Rule Name: DEFAULT
  Backup Replication Rule State: Enabled
  Archive Replication Rule Name: DEFAULT
 Archive Replication Rule State: Enabled
 Space Management Replication Rule Name: DEFAULT
Space Management Replication Rule State: Enabled
   At-risk type: Default interval
   At-risk interval:
 Decommissioned: Yes
Decommissioned Date: 09/01/2016 13:09:12
MAC Address:


Re: occupancy

2016-09-16 Thread Thomas Denier
If the storage pools are both sequential,

move nodedata dino from=devt_prim to=win2k_prim

will consolidate all of the DINO C drive backup files into the WIN2K_PRIM 
storage pool.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Jeanne 
Bruno
Sent: Friday, September 16, 2016 09:32
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] occupancy

Hello.   I just stumbled on this yesterday and I don't know when this happened or 
what I did to make it happen. (I'm assuming it was part of a different 
storage pool/domain at some point.)

I have a node which I'll call DINO, it's been backing up fine for some time now.
When I do an 'FI' on it, I see this:

Node Name: DINO
       Filespace Name: DINO\SystemState\NULL\System State\SystemState
                 FSID: 1
             Platform: WinNT
       Filespace Type: VSS
Is Filespace Unicode?: Yes
             Capacity: 0 KB
             Pct Util: 0.0

Node Name: DINO
       Filespace Name: \\dino\c$
                 FSID: 2
             Platform: WinNT
       Filespace Type: NTFS
Is Filespace Unicode?: Yes
             Capacity: 99 GB
             Pct Util: 36.7

Looks OK, but when I do an 'occ' on it, I see this:

DINO  Bkup  DINO\SystemState\NULL\System State\SystemState  1  WIN2K_PRIM   90,132  -   6,420.42
DINO  Bkup  \\dino\c$                                       2  DEVT_PRIM       314  -       3.54
DINO  Bkup  \\dino\c$                                       2  WIN2K_PRIM  113,919  -  54,917.20

I need to get rid of the c$ data in the storage pool called 'DEVT_PRIM'.  Both 
have a FSID of '2'.  If I delete the filespace with FSID=2, will it delete the 
occupancy from both storage pools?   (I can always back it up again afterwards.)   
Or is there a way to delete just the FSID=2 data for storage pool DEVT_PRIM?

Thanks!


Jeannie Bruno
Senior Systems Analyst
jbr...@cenhud.com<mailto:jbr...@cenhud.com>
Central Hudson Gas & Electric
(845) 486-5780


Re: Invoking dsmadmc from powershell

2016-09-02 Thread Thomas Denier
I have successfully used the code below as part of a process for setting up 
schedule changes to work around network maintenance:

# Save current drive letter and path.
Push-Location
# Move to location needed for TSM administrator interface.
C:
cd "\Program Files\Tivoli\TSM\baclient"
$dsmadmc="C:\Program Files\Tivoli\TSM\baclient\dsmadmc"
$domain_results = &$dsmadmc -id=tsmadmin -password=xyzzy -comma -dataonly=yes 
'select node_name,domain_name from nodes'
$late_results = &$dsmadmc -id=tsmadmin -password=xyzzy -comma -dataonly=yes 
'query event * * begind=08/28/2016 begint=04:01 endt=12:00'
# Return to original location.
Pop-Location

Each of the TSM administrator commands enclosed in apostrophes is on a single 
line in the script; I suspect that the commands will be split in the process of 
e-mail transmission.

Thomas Denier,
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Steven 
Harris
Sent: Thursday, September 01, 2016 18:39
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Invoking dsmadmc from powershell

Hi All

I have to run some security checks from time to time. It's a tedious process and 
error prone, so a smart person would automate this. I have no ability to 
install anything, so I must work with what I have.

for unix I can do something like this,

===

id=SCRIPT
SVR=TSM1
pass=somethingsecret

alias admc="dsmadmc -errorlogn=./dsmerror.log -se=${SVR} -id=$id -pa=$pass 
-tabd -dataonly=y"

admc "select server_name, current timestamp from status"

admc "select location from dbspace" | xargs -I {} find  {} ! -perm -g= -o !
-perm -o= -ls

===

The second of those lists only those database files with permissions set 
invalidly.


I'd like to do something similar with Windows.  The equivalent of find is not 
hard to do with PowerShell, but I'm having trouble invoking dsmadmc with 
parameters.

Does anyone have a magic incantation to accomplish this?

The best I've been able to come up with is


$DSM_DIR="C:\Program File\Tivoli\TSM\baclient"
$dsmadmc="$DSM_DIR\dsmadmc.exe"
$env:DSM_DIR=$DSM_DIR
$env:Path=$env:Path+";"+$DSM_DIR
$SVR="TSM1"
$id="SCRIPT"
$pass="somethingsecret"
Invoke-Command -FilePath $dsmadmc -ArgumentList '-se=$SVR -id=$id -pa=$pass 
-tabd -dataonly=y "q st"''


But that's not substituting as I would like.

Thanks


Steve.

Steven Harris
TSM Admin, Canberra Australia



Extra client sessions

2016-08-31 Thread Thomas Denier
We are occasionally seeing some odd behavior in our TSM environment.

We write incoming client files to sequential disk storage pools. Almost all of 
our client nodes use the default maxnummp value of 1.

When the odd behavior occurs, a number of clients will go through the following 
sequence of events:
1.The TSM server will send a request to start a backup.
2.The client will almost immediately open a TCP connection to be used as a 
producer session (a session used to obtain information from the TSM database).
3.Somewhere between tens of seconds and a few minutes later the client will 
open a TCP connection to be used as a consumer session (a session used to send 
copies of new and changed files).
4.Sometime later the client will open a third TCP connection and start using it 
as a consumer session.
5.The TSM server will report large numbers of transaction failures because it 
considers the original consumer session to be tying up the one mount point 
allowed for the node and hence has no way of storing files arriving on the new 
consumer session.

In most cases, all of the affected clients will hit step four within an 
interval of a couple of minutes.

My current theory is that step four occurs when the client system detects a 
condition that is viewed as a fatal error in the original consumer session, 
triggering the opening of a replacement consumer session. In most cases the TSM 
server never detects a problem with the original consumer session, and 
eventually terminates the session after five hours of inactivity (we have 
database backups that can legitimately go through long periods with no data 
transfer). More rarely the TSM server eventually reports that the original 
consumer session was severed.

We occasionally see cases where the replacement consumer session is in turn 
replaced by another new session, and even cases where the latter session is 
replaced by yet another session.

Our client population is a bit over half Windows, but almost all instances of 
the odd behavior involve only Windows client systems.

The affected systems are frequently split between two data centers, each with 
its own TSM server.

We have usually not found any correlation between the odd TSM behavior and 
issues with other applications. The most recent case was an exception. There 
were some e-mail delivery failures at about the same time as step four of the 
odd TSM behavior. The failures occurred when e-mail servers were unable to 
perform LDAP queries.

When we have asked our Network Operations group to check on previous 
occurrences of the odd behavior they have consistently reported that they found 
no evidence of a network problem.

Each of our TSM servers runs under zSeries Linux on a z10 BC. Each server has a 
VIPA address with two associated network interfaces on different subnets.

I would welcome any suggestions for finding the underlying cause of the odd 
behavior.

Thomas Denier,
Thomas Jefferson University


Re: removing offsite data for a particular node

2016-06-03 Thread Thomas Denier
If you are using collocation groups successfully, you could migrate all the 
nodes in a collocation group to Amazon and then execute "delete volume" 
commands for the copy pool volumes belonging to the group.  In this context, 
using collocation groups successfully means avoiding situations where data from 
two or more collocation groups ends up on the same tape volume. Such situations 
can occur because nodes were moved between groups or because the copy pool ran 
low on scratch volumes at some point. You can use the "query nodedata" command 
to figure out which volumes belong to each collocation group and to identify 
volumes split between groups.

If the process described above is unsuitable, I think you could use the 
following process at multiple times during the migration process:

1. Use output from "query nodedata" to identify copy pool volumes with large 
amounts of data from nodes that have been migrated to Amazon.
2. Execute "delete volume" commands for the volumes identified in step 1.
3. Execute a "backup stgpool" command to write new copies of files that came 
from unmigrated nodes and were deleted in step 2.
4. Send the volumes written in step 3 to the vault.
5. Recall the volumes cleared in step 2.
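In command form, the steps above might look like the following dsmadmc macro 
sketch. The pool names (TAPEPOOL, OFFSITEPOOL) and volume names (VOL001, 
VOL002, VOL003) are placeholders, not taken from the original question:

```
/* Step 1: see which copy pool volumes hold data from migrated nodes */
query nodedata * stgpool=OFFSITEPOOL

/* Step 2: discard volumes dominated by already-migrated nodes */
delete volume VOL001 discarddata=yes
delete volume VOL002 discarddata=yes

/* Step 3: re-create offsite copies for files from unmigrated nodes */
backup stgpool TAPEPOOL OFFSITEPOOL wait=yes

/* Steps 4 and 5 are physical tape movements; access/location updates
   track them in TSM */
update volume VOL003 access=offsite location="vault"
```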

You will need to think very carefully about the recoverability implications. In 
particular, you will need to avoid having all of the offsite copies of specific 
files end up onsite at the same time. If space at the vault is very tight, this 
might entail the use of a temporary storage location separate from either the 
vault or your data center.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Friday, June 03, 2016 11:19
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] removing offsite data for a particular node

We are slowly moving our primary tsm data storage out into the amazon cloud.

Since this is by definition off site, our off site tape pool can go away.
At least that is the current thinking, and must happen because our 3494 
libraries go out of support next year.

Given this, How, once a node's data is out in amazon, can I remove its data 
from the offsite pool.
We are stretched very thin, the offsite library is full, and no chance of 
adding more slots.

Any help appreciated.


Fixing level for ASNODENAME vulnerability

2016-02-24 Thread Thomas Denier
We are trying to figure out how to deal with the bug described in 
http://www-01.ibm.com/support/docview.wss?uid=swg21975957. The document at that 
URL includes a table with information about the availability of fixes for 
various server code levels. The row for TSM 6.3 has a cell stating that the 
fixing level is 6.3.5.1. Two cells to the right in the same row, customers are 
advised to contact IBM support and request 6.3.5.110 or later. Am I missing 
something that makes it possible for the two cells to be logically compatible?

Thomas Denier
Thomas Jefferson University


Re: Copy Mode: Absolute misunderstanding

2016-01-22 Thread Thomas Denier
Is RETONLY also set to 30 days? If RETONLY is longer than 30 days, backup 
copies of some files deleted before the absolute backups would be retained for 
more than 30 days after those backups.
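If the two settings need to be aligned, RETONLY lives in the backup copy 
group. A hedged sketch, assuming the default STANDARD domain, policy set, and 
management class names (yours may differ):

```
/* Inspect the current retention settings */
query copygroup STANDARD STANDARD STANDARD type=backup format=detailed

/* Match RETONLY to the 30-day RETEXTRA, then activate the change */
update copygroup STANDARD STANDARD STANDARD type=backup retonly=30
activate policyset STANDARD STANDARD
```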

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Arbogast, Warren K

We have implemented directory container pools in TSM version 7.1.4 and are 
happy with it.  All new backups are written to the DEDUP pool, and old backups 
are being migrated from the VTL to the DEDUP pool through a process of 
replicating existing nodes' files to a DEDUP pool on a target server,  then 
replicating them back to the DEDUP pool on the source server. That process 
works well.

There is an urgency to empty the storage previously used on the VTL so it can 
be re-purposed. To hurry the migration to the DEDUP pool along we devised a 
strategy of running FULL backups (COPY MODE: ABSOLUTE) on certain small servers 
in policy domains whose destination was the DEDUP pool. That would promote all 
of a node’s previous backups on the VTL to Inactive status. And, since the 
pertinent RETAIN EXTRA VERSIONS setting was ’30’, we expected within 30 days 
all inactive versions would be expired and removed from the VTL.

It’s 30 days later, and the strategy did not work perfectly. There are 
thousands of files remaining on the VTL for nodes which had a COPY MODE: 
ABSOLUTE backup.

What am I misunderstanding about COPY MODE: ABSOLUTE?  I had understood it 
would force a 100% full backup, with no mitigation by include and exclude  
statements. Apparently that’s not the case. Could someone clarify how it works?




Future of Web client

2015-12-14 Thread Thomas Denier
My employer is reviewing application compatibility issues for a planned upgrade 
to Internet Explorer 11. When I was checking browser requirements for TSM 
components I noticed that the Web client requires a Java plugin. If I 
understand the terminology correctly Google Chrome has dropped support for Java 
plugins and Microsoft Edge, the successor to Internet Explorer, has never 
supported Java plugins. Does IBM have a long term strategy for delivering the 
Web client functionality as support for Java plugins becomes increasingly rare?

Thomas Denier
Thomas Jefferson University


Operations Center browser requirements

2015-11-24 Thread Thomas Denier
I am trying to make sense of the Operations Center browser requirements shown 
in http://www-01.ibm.com/support/docview.wss?uid=swg21653418.

One of the headings in the browser section reads "TSM Operations Center V710 
minimum requirement:" and one of the bullet points under this heading reads 
"Microsoft Internet Explorer versions 9 and 10". If the heading used the word 
"minimum" as shown and the bullet point mentioned only version 9, it would be 
clear that versions 9, 10, and 11 were supported. If the heading omitted the 
word "minimum" and the bullet point were as shown, it would be clear that 
versions 9 and 10 were supported but not version 11. As it is, I see no way of 
determining which reading IBM really intended. Is there any IBM document in 
which the browser requirements are stated unambiguously?

Thomas Denier
Thomas Jefferson University


Re: tsm server restore

2015-11-19 Thread Thomas Denier
-Tim Brown wrote:
Attempting to restore server via TSM physical to VM, have done this many times 
successfully.
Our last attempt though has failed, and we get BSOD.

Not sure why this server is acting differently.

BSOD happens so fast that we can't capture screen.

Any suggestions
-

Are the device drivers on the physical system compatible with the virtual 
devices provided by the virtualization environment?

Thomas Denier
Thomas Jefferson University


Re: sql assistance with getting files backed up and their sizes

2015-11-10 Thread Thomas Denier
The symptoms you are describing sound more like retries of files that changed 
while TSM was reading them than growth in the size of individual files. Having 
the 'compress' option set to 'no' would, as far as I know, rule out the 
possibility of growth in the size of individual backup files. The worst growth 
I have ever seen for individual files was about 30%, not the almost 100% you 
mention. Retries caused by changing files would be noted explicitly in the 
client log files.

Thomas Denier
Thomas Jefferson University

-Gary Lee wrote: -

I have at least two linux clients v6.4 tsm server 6.3.4 whose bytes backed up 
are nearly double the bytes inspected.
Compress is set to no.

I would like to find the offending files which are growing.

The following script shows files backed up yesterday, for a specific node, 
(thanks andy raibeck), but I would like to add and order by file size.

How do I add the object_id field for comparison with the contents table, but 
not print it?
Script follows:

[Script removed]


Re: TS3500 library changes to Max Cartridges and Max VIO slots

2015-10-12 Thread Thomas Denier
I think you will need to quiesce tape activity and execute "audit library" with 
"refreshstate=yes" on each of the library managers.
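For example (LIBONSITE and LIBOFFSITE are placeholder library names):

```
/* Run on each library manager after the TS3500 changes, with tape
   activity quiesced */
audit library LIBONSITE refreshstate=yes
audit library LIBOFFSITE refreshstate=yes
```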

Thomas Denier
Thomas Jefferson University

Zoltan Forray wrote:

Our TS3500 is configured as 2-logical libraries with n-slots configured for 
each.  We just reached the maximum number of cartridges I can load into one of 
these libraries due to the Max Cartridges value.  I want to adjust the 
2-logical libraries to shift slots from one to the other.  Also want to reduce 
the VIO Slots (currently at default/max of 255 for each library).

When I tried to change the Max. Cartridges value for one of the libraries, I 
got the message "WARNING - Changing Maximum settings for a logical library may 
require reconfiguration of the host applications for selected logical library"

The TS3500 is solely used for TSM.  2-of my 7-TSM servers are assigned as 
library managers of the 2-logical libraries (1-onsite and 1-offsite tapes).  
All TSM servers are RH Linux.

Will I need to restart/reboot the 2-TSM servers that manage the libraries if I 
make this change?  Will it impact all of the TSM servers?



TSM 7.1.3 documention

2015-09-30 Thread Thomas Denier
I have been looking over the documentation for TSM 7.1.3. Most of the material 
found in the "Administrator's Guide" in the documentation for earlier levels 
seems to be gone. For example, I have not been able to find any detailed 
information on using a recovery plan file to support a database restore. Am I 
missing something, or has the documentation really been hollowed out to the 
extent I suspect?

Thomas Denier
Thomas Jefferson University


Re: How to dedicate a network port for backup

2015-08-25 Thread Thomas Denier
-Original Message-
In one of our setups, the customer has given us a dedicated network port for 
backup. We have to enable this IP and port on both the TSM server side and the 
client side for backup.

Basic details are

Server physical host name is : eccdb
Service ip address for SAP application: 192.168.1.100

/etc/hosts having the first entry against Service IP address, ie
 192.168.1.100  eccdb
...
...
172.16.10.100   eccbkp

They have dedicated the IP address 172.16.10.100 for backup.
They are also running LAN-free backup, and I have updated the HL address in 
the storage agent config. But when we start the storage agent service, it 
picks up 192.168.1.100. I checked the dsmsta process; it is listening on port 
1500 against IP address 192.168.1.100.

I have also updated the HL address in the TSM server for the storage agent.
My backup is running fine, but it runs over the service IP address, whereas it 
should run on the 172.16.10.100 IP address, as specified in the config file 
dsmsta.opt.

Does anyone have any idea on this?
-End Original Message-

I think you will need to execute a 'route add' command on eccdb to specify the 
eccbkp interface as the preferred interface for packets destined for the subnet 
on which the TSM server is located.

As far as I know, the effects of a 'route add' command do not persist across 
reboots in Linux or Unix environments. You will probably need to arrange to 
have the 'route add' command run as part of the system initialization process. 
Different implementations/distributions offer different options for doing this, 
and different sites may have differing conventions on this point.
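As an illustration only (the subnet, netmask, and interface name below are 
invented, since the original post does not give the TSM server's subnet):

```
# Non-persistent route: send traffic for the TSM server's subnet via the
# backup interface (eth1 standing in for the eccbkp interface)
route add -net 172.16.20.0 netmask 255.255.255.0 dev eth1
```

On RHEL-family distributions, an equivalent persistent route can be placed in 
/etc/sysconfig/network-scripts/route-eth1; other distributions have their own 
conventions for this.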

Thomas Denier
Thomas Jefferson University



Re: Windows and symbolic links

2015-08-14 Thread Thomas Denier
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bill 
Boyer
Sent: Friday, August 14, 2015 10:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Windows and symbolic links

Windows 2012R2 server

TSM 7.1.2.0 client

The customer has a drive with a symbolic link off to a UNC share path. When the 
backup runs it backs up the drive, but does not follow the symbolic link and 
backup the UNC share. Which is what we really want to accomplish. The 
FOLLOWSYMBOLIC=YES is ignored in the DSM.OPT file. I know it says *NIX only, 
but I was hoping.

Anybody know of a way to get TSM to follow that symbolic link on the drive?

Bill Boyer

-End of Original Message-

I think a statement like

domain ALL-LOCAL \\server1\share1

will get the share backed up, if the account used to run backups has the 
necessary permissions. The client scheduler service typically runs under the 
system account, which usually does not have access to resources owned by 
other systems.

Thomas Denier
Thomas Jefferson University Hospital


Flashcopy Manager and EMC XtremIO

2015-07-23 Thread Thomas Denier
We are considering an EMC XtremIO flash array for a large database application. 
I am trying to find out whether Flashcopy Manager supports this device. IBM 
seems to be unable to state a consistent position on this. Some IBM Web pages 
state that the only storage arrays supported by Flashcopy Manager are IBM 
arrays and non-IBM arrays accessed through the IBM SVC. Other IBM Web pages and 
some Rocket Software Web pages indicate that some EMC arrays are supported with 
the help of the Rocket Device Adapter Pack. All such pages I have found so far 
agree that VNX and VMAX arrays are supported; they split on whether support 
extends to the non-VMAX portion of the Symmetrix product line. We recently had 
a meeting with IBM to discuss the possibility of migrating from value unit 
licensing to capacity licensing for TSM and related products. One of the IBM 
representatives told us that Flashcopy Manager supports EMC arrays by way of a 
non-IBM adapter. His phrasing implied that this support extends to all EMC 
storage arrays.

Does Flashcopy Manager support the XtremIO array or not?

Thomas Denier
Thomas Jefferson University


Re: Tape Encryption

2015-07-08 Thread Thomas Denier
The Redbook IBM Tivoli Storage Manager: Building a Secure Environment 
(SG24-7505-00) goes into a bit more detail.

A stolen storage pool tape is not, in and of itself, a security exposure; the 
thief will not have access to the TSM database entry containing the encryption 
key. If someone steals a storage pool tape and the various items needed for a 
database restore (database backup tape, volume history file, and device 
configuration file), they can decrypt the contents of the storage pool tape, as 
long as they have the necessary hardware and the knowledge needed to carry out 
what amounts to a TSM DR process.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
McWilliams, Eric
Sent: Wednesday, July 08, 2015 2:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tape Encryption

We are currently encrypting our data as it is being written to tape.  The 
auditors want to know how the encryption keys are managed.  All I can find is 
that the keys are managed by the Tivoli Storage Manager.

Does anyone have any documentation that explains how the keys are managed and 
what keeps someone from decrypting a tape that is lost or stolen?

tsm: q dev ltodevc f=d

 Device Class Name: LTODEVC
Device Access Strategy: Sequential
Storage Pool Count: 1
   Device Type: LTO
Format: DRIVE
 Est/Max Capacity (MB):
   Mount Limit: DRIVES
  Mount Wait (min): 60
 Mount Retention (min): 60
  Label Prefix: ADSM
  Drive Letter:
   Library: MEDSLIB
 Directory:
   Server Name:
  Retry Period:
Retry Interval:
  Twosided:
Shared:
High-level Address:
  Minimum Capacity:
  WORM: No
  Drive Encryption: On
   Scaled Capacity:
   Primary Allocation (MB):
 Secondary Allocation (MB):
   Compression:
 Retention:
Protection:
   Expiration Date:
  Unit:
  Logical Block Protection: No
Last Update by (administrator):
 Last Update Date/Time: 12/08/2014 13:14:44

   Volume Name: XXX
 Storage Pool Name: TAPEPOOL
 Device Class Name: LTODEVC
Estimated Capacity: 2.3 T
   Scaled Capacity Applied:
  Pct Util: 100.0
 Volume Status: Full
Access: Read/Write
Pct. Reclaimable Space: 0.0
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 1
 Write Pass Number: 1
 Approx. Date Last Written: 07/02/2015 05:16:24
Approx. Date Last Read: 07/02/2015 05:16:24
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location:
Volume is MVS Lanfree Capable : No
Last Update by (administrator):
 Last Update Date/Time: 06/30/2015 18:17:40
  Begin Reclaim Period:
End Reclaim Period:
  Drive Encryption Key Manager: Tivoli Storage Manager
   Logical Block Protected: No

Thanks

Eric

**
*** CONFIDENTIALITY NOTICE ***

 This message and any included attachments are from MedSynergies, Inc. and are 
intended only for the addressee. The contents of this message contain 
confidential information belonging to the sender that is legally protected. 
Unauthorized forwarding, printing, copying, distribution, or use of such 
information is strictly prohibited and may be unlawful. If you are not the 
addressee, please promptly delete this message and notify the sender of the 
delivery error by e-mail or contact MedSynergies, Inc. at 
postmas...@medsynergies.com.


Operations Center processor requirements

2015-06-09 Thread Thomas Denier
We recently had an IBM presentation on the future of TSM. The presenter told us 
that Operations Center was going to change our minds about needing a third 
party management facility for TSM (we currently use TSMManager). My management 
has asked me to check on the requirements for installing Operations Center. I 
found an IBM tool for estimating Operations Center resource requirements at:

http://www-01.ibm.com/support/docview.wss?uid=swg21641684aid=1

I requested estimates for a configuration with Operations Center on a separate 
server. The Operations Center host is predicted to need a tenth of a processor 
core. Each TSM instance is predicted to need at least one processor core to 
support the Operations Center. The estimate for the hub instance sometimes 
reaches 1.1 cores, depending on the estimates of administrator activity levels.

We have three production TSM server instances (one of them a dedicated library 
manager) split across two z10 systems with two IFL's each. The predictions from 
the IBM tool imply that interaction with the Operations Center will consume all 
of the processor capacity on the system with the library manager and half of 
the processor capacity on the other system.

Are the estimates from the IBM tool reasonably accurate?

Thomas Denier
Thomas Jefferson University


Re: Moving nodes to a new policy

2015-06-05 Thread Thomas Denier
This should work fine. There will be some additional complications if you use 
the TSM central scheduler to trigger backups for the affected nodes.

You will need to copy schedule definitions from the old policy domain to the 
new one, unless the new one already has suitable schedule definitions.

You will need to execute define association commands.

You may need to restart the schedule services (Windows) or processes 
(Linux/Unix) on the affected nodes. My best guess is that this will be 
necessary if prompt mode scheduling is used and will not be necessary if 
polling mode scheduling is used.
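In command form, the move might look like this; the domain, schedule, and node 
names are placeholders, not taken from the original question:

```
/* Copy the client schedule into the new domain if it is not already there */
copy schedule OLDDOMAIN DAILY_INCR NEWDOMAIN DAILY_INCR

/* Move the node, then re-create its schedule association */
update node NODE1 domain=NEWDOMAIN
define association NEWDOMAIN DAILY_INCR NODE1
```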

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Thursday, June 04, 2015 8:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Moving nodes to a new policy

If I have nodes that are currently under a policy that keeps 5 backup versions, 
can I move them to another policy which keeps only 2 backup versions?

Will TSM then start expiring all the older backup versions of files which are 
no longer needed?

We have two nodes which are taking up a lot of backup space in TSM and want to 
reduce this.


Re: Documenting TSM

2015-05-27 Thread Thomas Denier
DR documentation presents special problems in at least two respects.

You will need to ensure that DR documentation survives the type of disaster 
covered by the documentation. Depending on your employer's policies and 
infrastructure, this may entail treating DR documentation as an exception to 
normal policies governing storage of online documentation.

The level of detail in DR documentation should allow for the unpleasant 
possibility of having the procedure carried out by consultants familiar with 
the relevant products but not familiar with your site's local conventions. I 
still remember the occasion some years ago when two groups of IS staff arrived 
at the machine room at the same time to work on unrelated problems. One group 
consisted of both Unix administrators on the IS Department payroll at the time. 
The other group consisted of the primary TSM administrator (me) and a co-worker 
who was both the backup TSM administrator and the primary system administrator 
for the MVS system that hosted the TSM server.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, May 27, 2015 9:14 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Documenting TSM

I find it's good to split documents into three different types - configuration 
(long-term, can be complex), operations (daily, should be simple), and DR 
(infrequent, but still should be simple). All of our documents live in a Twiki 
wiki.

On the configuration side, I try not to duplicate the vendor (IBM, library
vendor) docs too much, but will fill in holes where they're inadequate, and 
definitely will document local customizations and conventions. I might not 
document exact steps or commands for these, though. I try to split out the 
TSM docs from the library docs.

On the operations side, I've found it's good to be as explicit and procedural 
as possible. Daily tasks and alerts both come through as tickets, with a link 
to the specific step-by-step procedure needed. The goal is for someone without 
a deep knowledge of TSM to be able to accomplish these tasks. As needed, I 
might combine the software and hardware steps here, for the sake of simplicity.

Like-wise, our DR docs are painfully explicit. In an emergency, you don't want 
to be thinking too hard, as you'll be scrambling and stressed even if 
everything is going right.

On Sat, May 23, 2015 at 08:45:23AM +0300, madu...@gmail.com wrote:
 Starting to document the installing/implementing of a TSM 7.1 environment,
 and I'm wondering what type of information and what amount of detail
 others are putting into their docs. How have you documented TSM? Security?
 Database?
 Scripts? Installation steps? Troubleshooting? Installing clients? ...etc.

 Does anyone have a good documentation example for day-to-day
 operation/administration?

 -mad

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: 3584 questions

2015-05-22 Thread Thomas Denier
The 3584 is a SCSI library with a mechanical design similar to that of a 3494.
In more recent times IBM has marketed the 3584 or a very similar successor as
the TS3500.

We used to do what amounted to a 3494 to 3584 migration during disaster 
recovery tests; our own system had a 3494, but our hot site vendor provided a 
3584.

I checked our old DR procedure. It does not cover the checkout operation, since 
all the volumes available at the test had been checked out and sent to an 
offsite vault at some point in the past. In outline the process was as follows:

Update 3494 tape drive paths to online=no.
Execute define library for the 3584.
Execute the related define drive and define path commands (including 
defining a path to the library).
Update tape device classes to use the new library.
Check volumes into the new library.
Execute an audit library command with checklabel=barcode for the new 
library.
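For reference, the outline above maps onto administrative commands roughly like the following. This is a sketch rather than the original procedure: the server, library, drive, and device names are placeholders, and the exact operands should be checked against the administrator's reference for your server level.

```
update path SERVER1 DRIVE3494A srctype=server desttype=drive library=LIB3494 online=no
define library LIB3584 libtype=scsi
define path SERVER1 LIB3584 srctype=server desttype=library device=/dev/IBMchanger0
define drive LIB3584 DRIVE1
define path SERVER1 DRIVE1 srctype=server desttype=drive library=LIB3584 device=/dev/IBMtape0
update devclass TAPECLASS library=LIB3584
checkin libvolume LIB3584 search=yes checklabel=barcode status=private
audit library LIB3584 checklabel=barcode
```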

The device for the new server-to-library path will probably follow a distinctly
different naming convention than its 3494 counterpart.

Most commands that refer to a library name need somewhat different operands for
a SCSI library (such as a 3584) than for a 3494. You will need to review any
such commands executed as part of your automated housekeeping or as part of 
manual procedures such as adding tape volumes.

Thomas Denier
Thomas Jefferson University

-Original Message-

We are looking at replacing our 3494 libraries with 3584s.
Think that is the correct number.

Are they similar enough that I can simply check out the volumes from the old 
libraries and check into the new?

We are keeping our ts1120 drives and transferring them into the new robots.

Anyone with experience doing this move?

Thanks for any help.


Share permission changes

2015-05-11 Thread Thomas Denier
One of our TSM servers is in the process of backing up a large part of the 
contents of a Windows 2008 file server. I contacted the system administrator. 
He told me that he had changed share permissions but not security permissions, 
and did not expect all the files in the share to be backed up. Based on my 
limited knowledge of share permissions I wouldn't have expected that either. Is 
it normal for a share permissions change to have this effect? How easy is it to 
make a security permissions change while trying to make a share permissions 
change?

Thomas Denier,
Thomas Jefferson University


ANR3497W

2015-04-23 Thread Thomas Denier
We have a number of TSM server instances running under zSeries Linux. These
were created with 6.2.2.0 server code, subsequently upgraded to 6.2.5.0, and
upgraded to 6.3.5.0 in early April. One of the server instances is now
displaying the message:

ANR3497W Reorganization is required on excluded table BACKUP_OBJECTS. The
reason code is 2.

every few days. The messages manual and various technotes mention the following 
three options:

1. Do nothing, and risk ongoing growth in disk space used by the database and
ongoing decline in performance.
2. Enable online reorgs of BACKUP_OBJECTS, lock out smaller reorgs for periods
that might stretch into months, and risk crashing the server.
3. Take the server down for many hours to perform an offline reorg.
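For what it's worth, options 2 and 3 are driven by server options along the following lines. This is only a sketch; I am assuming the option names from the 6.3 server documentation, so verify them before editing dsmserv.opt:

```
* dsmserv.opt: enable online table reorganization generally,
* or keep BACKUP_OBJECTS excluded from online reorgs
ALLOWREORGTABLE YES
DISABLEREORGTABLE BACKUP_OBJECTS
```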

The discussion of the advantages and disadvantages of each option is maddeningly
vague. All sorts of things might happen, but hardly anything will happen.
Timing is discussed only in terms like "many hours". As far as I can tell, the
writers' main goal was to ensure that IBM will never be blamed for giving bad
advice, and they achieved this goal by the simple expedient of refusing to give
any advice.

Is there any documentation available that provides real help in deciding which 
option would be best for our situation?

Thomas Denier
Thomas Jefferson University


Re: Linux 6.4 client hangs on starting dsmc

2015-04-16 Thread Thomas Denier
If X11 support is available I would suggest running dsm as well. My site 
recently had a problem with a Windows client. The dsm command found and 
reported a bad options file line during initialization. The dsmc command 
started with no complaints and then behaved strangely.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Francisco Javier
Sent: Thursday, April 16, 2015 12:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Linux 6.4 client hangs on starting dsmc

Try executing dsmc from the command line; perhaps there is a problem with some
configuration in the option files.

Regards


2015-04-16 11:38 GMT-05:00 Arbogast, Warren K warbo...@iu.edu:

 A Linux client has been missing its scheduled backups.  The TSM client
 is at version 6.4.0.0, and our TSM server is running 7.1.1.108 on a
 Redhat 6 OS.

 The client admin reports that it hangs immediately when dsmc is
 started, but the admin can telnet successfully from the client to
 the TSM server over ports 1500 and 1542, so we have crossed ‘firewall
 problem’ off the list of possible causes.

 ‘ssl yes’ and ‘sslfipsmode yes’ are specified in dsm.sys, but the
 admin tried commenting out ‘sslfipsmode yes’ and running dsmc, with
 the same result.

 dsmerror.log is empty, and there are no recent messages in dsmwebcl.log.

 The admin reports that selinux is running, but that ‘nothing has changed’
 in its configuration recently.  Since backups had been running
 successfully till a week ago, certainly something has changed, but we
 can’t find it.

 Where else should we look for the cause of the immediate hang when
 dsmc is started?

 With many thanks,
 Keith Arbogast
 Indiana University



Problem with domain option

2015-03-20 Thread Thomas Denier
We have a client system running 64 bit Windows 2008 and using TSM 7.1.1.0 
client code. An H drive that does not need backup coverage was recently added 
to the system. The system administrator added the following line to dsm.opt:

domain  ALL-LOCAL -h:

The GUI client claims that this line is invalid and helpfully offers to turn 
it into a comment line. The line is almost identical to one of the examples in 
the Windows client manual; the only difference is specifying "-h:" rather than
the "-c:" in the example. I have used a Windows port of the Linux od utility
to verify that the only non-printing characters in dsm.opt are the normal line 
end characters. The option is apparently acceptable to the client scheduler 
service; the service will start and remain started with the option in effect. I 
don't know how the option affects scheduled backups; the system administrator 
has accepted the helpful offer from the GUI a number of times, and has 
managed to coordinate acceptance of the offer with scheduler service restarts 
in such a way that the option has never been in effect during a scheduled 
backup.

Is there something genuinely wrong with the domain option, or some quirk in 
the GUI's validity checking that we can work around (for example, stricter 
rules about capitalization or extra white space than the scheduler service and 
the documentation)?

Thomas Denier
Thomas Jefferson University


Restartable restores and command line options

2015-03-20 Thread Thomas Denier
We recently discovered that a restarted restore won't necessarily inherit all 
of the command line options specified for the original restore. I don't know 
whether this is a bug or a feature for which I have not found the 
documentation. We started a large command line restore with -quiet and 
-resourceutilization=10 options. The command started 10 concurrent sessions 
and did not list the names of files being restored. The command eventually 
failed because the destination file system filled up. This resulted in a 
restartable restore. The client system administrator added space to the file 
system and executed a dsmc restart restore command. The restarted restore 
only started one session and displayed a message for each file restored.
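In command form, the sequence looked roughly like this (the file specification is a placeholder, not the actual path involved):

```
dsmc restore "E:\data\*" -subdir=yes -quiet -resourceutilization=10
rem destination file system filled; space was added, then:
dsmc restart restore
rem the restarted restore used one session and did not honor -quiet
```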

The client system was using 64 bit Windows 2008 and TSM 7.1.1.0 client code. 
The TSM server was using zSeries Linux and TSM 6.2.5.0 server code.

Thomas Denier
Thomas Jefferson University


Re: Problem with domain option

2015-03-20 Thread Thomas Denier
Andy,

Your suspicion was correct. I displayed the line with od options that cause 
each byte to be displayed as both a character and an unsigned integer. The 
integer value for the character in front of the drive letter turned out to be 
150 (en-dash in the Latin 1 code page) rather than the correct 45. The Windows
system administrator used Notepad to delete the en-dash and insert an ASCII 
hyphen. Once this was done the GUI started with no error messages.
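The check described above can be reproduced with a short shell session. The sample below deliberately writes byte 150 decimal (octal 226), the en-dash position in the Windows Latin 1 code page, where the ASCII hyphen (decimal 45) belongs; the file name is arbitrary.

```shell
# Write a dsm.opt-style line with a stray en-dash byte (octal 226 = decimal 150)
# in front of the drive letter instead of an ASCII hyphen.
printf 'domain ALL-LOCAL \226h:\n' > /tmp/dsm.opt.sample

# od -c shows each byte as a character; the stray byte prints as its
# octal value 226, while a genuine hyphen prints as '-'.
od -An -c /tmp/dsm.opt.sample
```

On a UTF-8 file the same en-dash would instead appear as the three bytes 342 200 223.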

Thank you for your assistance.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew 
Raibeck
Sent: Friday, March 20, 2015 12:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Problem with domain option

No bugs with DOMAIN processing that I am aware of that could cause this. My 
first guess would be that the character preceding the 'h' is not an ASCII minus 
sign character (0x2D), but some kind of typographical hyphen or dash.

Suggestions:

* Open the dsm.opt file with a bona fide plain text editor (not a word 
processor or editor that might default to non-plain text behavior). Remove the 
offending line, and carefully type it in manually (no copy & paste from other
source). Then see if the problem persists.

* Send me the dsm.opt file and I'll have a look at it. If you go this route but 
don't send it today, I won't be able to get back to you until the following 
week.

Regards,

- Andy



Andrew Raibeck | Tivoli Storage Manager Level 3 Technical Lead | 
stor...@us.ibm.com

IBM Tivoli Storage Manager links:
Product support:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

Online documentation:
http://www.ibm.com/support/knowledgecenter/SSGSG7/welcome
Product Wiki:
https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2015-03-20
11:03:24:

 From: Thomas Denier thomas.den...@jefferson.edu
 To: ADSM-L@VM.MARIST.EDU
 Date: 2015-03-20 11:06
 Subject: Problem with domain option
 Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU

 We have a client system running 64 bit Windows 2008 and using TSM 7.
 1.1.0 client code. An H drive that does not need backup coverage was
 recently added to the system. The system administrator added the
 following line to dsm.opt:

 domain  ALL-LOCAL -h:

 The GUI client claims that this line is invalid and helpfully
 offers to turn it into a comment line. The line is almost identical to
 one of the examples in the Windows client manual; the only difference
 is specifying -h: rather than the -c: in the example.
 I have used a Windows port of the Linux od utility to verify that
 the only non-printing characters in dsm.opt are the normal line end
 characters. The option is apparently acceptable to the client
 scheduler service; the service will start and remain started with the
 option in effect. I don't know how the option affects scheduled
 backups; the system administrator has accepted the helpful offer
 from the GUI a number of times, and has managed to coordinate
 acceptance of the offer with scheduler service restarts in such a way
 that the option has never been in effect during a scheduled backup.

 Is there something genuinely wrong with the domain option, or some
 quirk in the GUI's validity checking that we can work around (for
 example, stricter rules about capitalization or extra white space than
 the scheduler service and the documentation)?

 Thomas Denier
 Thomas Jefferson University


Fix for privilege escalation bug

2015-03-12 Thread Thomas Denier
We have a considerable number of Linux TSM clients running on 32 bit x86 
processors and currently using either 6.2.2.0 or 6.2.4.0 client code. These 
client code levels have the privilege escalation bug described in the IBM
bulletin "Tivoli Storage Manager Stack-based Buffer Overflow Elevation of
Privilege: CVE-2014-6184". This bug is fixed in 6.2.5.4 client code. The README
file for the 6.2.5.4 patch level has a link for Linux x86_64 client 
requirements but no corresponding link for the 32 bit x86 architecture. Does 
this imply that IBM is not providing the bug fix for 32 bit x86 systems?

Thomas Denier
Thomas Jefferson University


Privilege escalation bug

2015-02-25 Thread Thomas Denier
I received a security bulletin from IBM yesterday regarding "Tivoli Storage
Manager Stack-based Buffer Overflow Elevation of Privilege: CVE-2014-6184". The
affected version/release combinations listed in the bulletin run from 5.4 to
6.3. We still have one Linux system with 5.3 client code. Can I treat the list 
of affected releases as an explicit assurance that the 5.3 client does not have 
the vulnerability discussed in the bulletin? The alternative possibility that 
worries me is that 5.4 is the oldest level IBM thought it worthwhile to check.

Thomas Denier
Thomas Jefferson University


Re: Privilege escalation bug

2015-02-25 Thread Thomas Denier
TSM 6.1 and all Version 5 releases are past normal end of support. The security 
bulletin advises customers with support extensions on 5.4, 5.5, or 6.1 to 
contact IBM Support.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [OITS]
Sent: Wednesday, February 25, 2015 11:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Privilege escalation bug

Is the 5.3 release so old that it is considered out of support?



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Thomas 
Denier
Sent: Wednesday, February 25, 2015 9:56 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Privilege escalation bug

I received a security bulletin from IBM yesterday regarding "Tivoli Storage
Manager Stack-based Buffer Overflow Elevation of Privilege: CVE-2014-6184". The
affected version/release combinations listed in the bulletin run from 5.4 to
6.3. We still have one Linux system with 5.3 client code. Can I treat the list 
of affected releases as an explicit assurance that the 5.3 client does not have 
the vulnerability discussed in the bulletin? The alternative possibility that 
worries me is that 5.4 is the oldest level IBM thought it worthwhile to check.

Thomas Denier
Thomas Jefferson University

[Confidentiality notice:]
***
This e-mail message, including attachments, if any, is intended for the person 
or entity to which it is addressed and may contain confidential or privileged 
information.  Any unauthorized review, use, or disclosure is prohibited.  If 
you are not the intended recipient, please contact the sender and destroy the 
original message, including all copies, Thank you.
***


Re: Privilege escalation bug

2015-02-25 Thread Thomas Denier
The body of the bulletin I received states that the affected platforms are AIX, 
HP-UX, Linux, Solaris, and Mac.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Wednesday, February 25, 2015 12:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Privilege escalation bug

Does not specifically say if it includes SOLARIS (only says *UNIX, Linux, and
OS X allows local users to gain privileges via unspecified vectors.*).
Do I assume since it says UNIX that SOLARIS is included?  We have some old Domino
Solaris boxes (supposed to go away some time soon) still running 6.1.3



On Wed, Feb 25, 2015 at 10:56 AM, Thomas Denier thomas.den...@jefferson.edu
 wrote:

 I received a security bulletin from IBM yesterday regarding Tivoli
 Storage Manager Stack-based Buffer Overflow Elevation of Privilege:
 CVE-2014-6184. The affected version/release combinations listed in
 the bulletin run from 5.4 to 6.3. We still have one Linux system with
 5.3 client code. Can I treat the list of affected releases as an
 explicit assurance that the 5.3 client does not have the vulnerability
 discussed in the bulletin? The alternative possibility that worries me
 is that 5.4 is the oldest level IBM thought it worthwhile to check.

 Thomas Denier
 Thomas Jefferson University




--
*Zoltan Forray*
TSM Software & Hardware Administrator
Hobbit / Xymon Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html



Re: Privilege escalation bug

2015-02-25 Thread Thomas Denier
I signed up for a subscription for notices related to TSM. The trailer 
information on the privilege escalation bulletin advises using the URL:

https://www.ibm.com/support/mynotifications

to subscribe or unsubscribe.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Wednesday, February 25, 2015 3:01 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Privilege escalation bug

Where are you getting the bulletins/alerts from?  I wouldn't have known about it
if it wasn't for your posting.  I have passed this on to my folks
- we too have old clients going back to 5.3 and older (IRIX?)

On Wed, Feb 25, 2015 at 12:55 PM, Thomas Denier thomas.den...@jefferson.edu
 wrote:

 The body of the bulletin I received states that the affected platforms
 are AIX, HP-UX, Linux, Solaris, and Mac.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
 Of Zoltan Forray
 Sent: Wednesday, February 25, 2015 12:12 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Privilege escalation bug

 Does not specifically say if it includes SOLARIS (only says *UNIX,
 Linux, and OS X allows local users to gain privileges via unspecified 
 vectors.*).
 Do I assume since it says UNIX that SOLARIS is included?  We have some
 old Domino Solaris boxes (supposed to go away some time soon)
 still running 6.1.3



 On Wed, Feb 25, 2015 at 10:56 AM, Thomas Denier 
 thomas.den...@jefferson.edu
  wrote:

  I received a security bulletin from IBM yesterday regarding Tivoli
  Storage Manager Stack-based Buffer Overflow Elevation of Privilege:
  CVE-2014-6184. The affected version/release combinations listed in
  the bulletin run from 5.4 to 6.3. We still have one Linux system
  with
  5.3 client code. Can I treat the list of affected releases as an
  explicit assurance that the 5.3 client does not have the
  vulnerability discussed in the bulletin? The alternative possibility
  that worries me is that 5.4 is the oldest level IBM thought it worthwhile 
  to check.
 
  Thomas Denier
  Thomas Jefferson University
 



 --
 *Zoltan Forray*
 TSM Software & Hardware Administrator
 Hobbit / Xymon Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807




--
*Zoltan Forray*
TSM Software & Hardware Administrator
Hobbit / Xymon Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807



Journal based backup support for Windows 2003

2015-02-20 Thread Thomas Denier
One of the TSM client systems at our site is a very large file server. Weekday 
backups are consistently taking over 22 hours, with occasional instances of a 
backup being missed because its predecessor was still running. The system is an 
obvious candidate for journal based backups. The system runs Windows 2003 and 
currently has 6.2.4.0 client code installed.

When I was checking for compatibility issues I found the following statement on 
page 111 of the Installation and User's Guide for the TSM 6.2 Windows client:

"Journal-based backup is supported for all Windows clients."

I advised the client system administrator to configure the system for journal 
based backups, and that process was completed yesterday.

I just discovered the following on page 31 of the same document:

"Journal-based backup can be used for all Windows clients, except for clients
running on Windows Server 2003 systems."

Which of the two contradictory statements is true?

At best I will end up spending a significant amount of time verifying that
the client configuration described above is supported. At worst I have been 
tricked into advising a customer to use an unsupported configuration. The TSM 
6.2 developers apparently didn't understand, or didn't care, that sloppy work 
does real harm to customers. The TSM 7.1 server code is currently awaiting a 
maintenance level that addresses the second crippling bug found after the code 
was released, which suggests that the same attitude toward sloppy work prevails 
to this day.

Thomas Denier
Thomas Jefferson University


Re: Downloading TSM v6.3.5

2015-02-05 Thread Thomas Denier
I just went through the process of getting backleveled server code with 
licenses from Passport Advantage and getting the server code I really wanted to 
install from the site Richard Rhodes mentioned. I discovered a document at

http://www-01.ibm.com/support/docview.wss?uid=swg21667074

which explains how to extract the license package from a Passport Advantage 
download and install the license with one rpm (Linux) or installp (AIX) 
command. This takes a lot of the pain out of needing two TSM server levels. 
When I first installed Version 6 I installed a very early 6.2 level from 
Passport Advantage and then upgraded to the then current 6.2.4.0. This time I 
was able to skip the installation of the Passport Advantage download.

The process would have been even easier if Passport Advantage made license 
packages available as separate downloads.

Thomas Denier
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, February 05, 2015 11:55 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Downloading TSM v6.3.5

I got mine (for Linux) from here:

ftp://service.boulder.ibm.com//storage/tivoli-storage-management/maintenance/server/v6r3/

On Thu, Feb 5, 2015 at 11:08 AM, Rhodes, Richard L.  
rrho...@firstenergycorp.com wrote:

 Ok, I'm confused!

 I wanted to download AIX TSM v6.3.5.
 I logged onto PassportAdvantage and could only find v6.3.4.
 All kinds searching couldn't find v6.3.5.
 My team lead found a IBM web site with a FTP link for v6.3.5.

 I don't get it.  Has IBM gone back to FTP site for all but the base
 release again?

 So some versions are on PassportAdvantage and others aren't.

 Rick





 -

 The information contained in this message is intended only for the
 personal and confidential use of the recipient(s) named above. If the
 reader of this message is not the intended recipient or an agent
 responsible for delivering it to the intended recipient, you are
 hereby notified that you have received this document in error and that
 any review, dissemination, distribution, or copying of this message is
 strictly prohibited. If you have received this communication in error,
 please notify us immediately, and delete the original message.




--
*Zoltan Forray*
TSM Software  Hardware Administrator
BigBro / Hobbit / Xymon Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html



Re: TSM rant

2015-01-30 Thread Thomas Denier
The 7.1.1.100 server code has a rather serious bug affecting restores. If copy 
storage pool volumes are available the TSM server will mount both primary pool 
volumes and copy pool volumes when performing a restore. This is expected to be 
fixed in 7.1.1.200. The last time I checked, the target date for 7.1.1.200 was 
second quarter of 2015. That pretty much rules out a 6.2 to 7.1 upgrade, unless 
you are prepared to live with the bug described above. We are currently at 
6.2.5.0, and expect to upgrade to 6.3.5.0 or 6.3.5.100.

Thomas Denier,
Thomas Jefferson University

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Friday, January 30, 2015 8:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM rant

We just completed our conversion from v5 to v6.2.5 the end of December.  In 
general we're very happy with v6.2.5, but then we don't use any of the newer 
features like dedup - and don't plan to.  (we have DataDomain for our dedup 
load)

V6 has really solved our v5 pain points:  very long expirations (+24hr) and 
very slow db processing.  We run a morning report every morning against our 
TSM servers.  It generates a lot of info about an instance that we keep for 
documentation/reporting.  Some of our morning reports ran over an hour due to 
heavy SQL cmds.  Now on V6 the morning report runs in a few minutes!  V6 has 
definitely raised the scalability of TSM.

Yes, I have a long list of complaints about TSM, but in general we are happy 
with V6.

Now that we are completely on V6.2.5, we have to upgrade quickly due to ending 
support in April.  Our debate is going to v6.3.x or jumping to 7.1.x.  I'd be 
interested in any recommendations for this.

Thanks

Rick


PS:  Just in case you are curious, here are the reports we generate in our 
morning report:

  print = r000 report index and beginning time stamp 
  print = r010 activity summary 
  print = r011 admin schedule activity 
  print = r015 scratch count 
  print = r019 scratch tape usage 
  print = r020 tape vol summary for all TSM instances
  print = r021 reclaimable tapes by pct-reclaim 
  print = r022 volume info 
  print = r023 volumes per stgpool status and maxscratch  
  print = r024 volume average utilization by stgpool 
  print = r025 q dr 
  print = r030 q path (not emailed)
  print = r036 drive activity 
  print = r040 q db
  print = r045 q log
  print = r050 log consumption and utilization
  print = r055 log pin info (not emailed)
  print = r065 q sess
  print = r070 q stgpool
  print = r075 q copygroup (not emailed)
  print = r076 q events for exceptions - missed backups (not emailed)
  print = r077 slow backups
  print = r080 db backups
  print = r085 expiration - completions 
  print = r090 expiration - detail (not emailed)
  print = r095 drive and media errors
  print = r097 nodes with tcp_ip or tcp_name changes
  print = r100 recplan dir listings
  print = r105 q volhost type=dbb
  print = r110 q volhost type=dbs (not emailed)
  print = r120 stgpool volumes: 7 day trend (not emailed)
  print = r125 aix errpt
  ###print = r129 tdp notes - summary 
  ###print = r130 tdp notes - full (not emailed)
  ###print = r131 tdp notes - incremental (not emailed)
  ###print = r132 tdp notes - logs (not emailed)
  print = r140 session per node where count > 1 (not emailed) 
  print = r141 q option (not emailed)
  print = r145 occupancy by server 
  print = r150 occupancy by domain 
  print = r152 occupancy by stgpool 
  print = r153 occupancy by collocation group 
  print = r155 occupancy by node (not emailed)
  print = r157 q audotocc (not emailed)
  print = r159 nodes locked (not emailed)
  print = r160 nodes with no associations
  print = r161 nodes with no associations EXCLUSION LIST
  print = r165 nodes never backed up (not emailed)
  print = r166 zzrt nodes with associations (not emailed)
  print = r167 nodes by collocation group (not emailed)
  print = r170 filespaces not backed up in 7 days (not emailed)
  print = r175 filespaces never backed up (not emailed)
  print = r180 server critical errors
  print = r184 backup objects and bytes per domain (not emailed)
  print = r185 backup objects and bytes per node (not emailed)
  print = r190 q vol  for tape (not emailed)
  print = r195 q libvol (not emailed)
  print = r999 report end timestamp






-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Remco 
Post
Sent: Thursday, January 29, 2015 6:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM rant

 On 29 Jan 2015, at 22:39, Skylar Thompson skyl...@u.washington.edu 
 wrote:

 TSM between v6.1 and the end of v6.2 was really rough, mostly related
 to DB2. By v6.3 it got a lot more stable. I'm glad we upgraded from v5
 early, though, since we really benefit from DB2's improved indexing
 and table compression - between two TSM instances we have close to 2
 billion file versions tracked by TSM

Availability of 7.1.1.200 server code

2015-01-28 Thread Thomas Denier
We have been preparing to upgrade our TSM servers from 6.2.5.0 to 7.1.1.100. We 
recently found out about APAR IT02929, which describes a bug that causes TSM to 
mount both primary storage pool volumes and copy storage pool volumes for a 
client restore. This sounds disruptive enough to rule out installing 7.1.1.100. 
The bug is supposed to be fixed in 7.1.1.200. Is this level available? I have 
not been able to find it on the Passport Advantage Web site or on 
ftp.software.ibm.com. If it is not available now, will it be available by early 
March (the latest time I would feel comfortable starting an upgrade process 
with the intention of finishing before end of support for 6.2)? Should I give 
up on Version 7 for the time being and settle for upgrading to 6.3.5.0?

Thomas Denier,
Thomas Jefferson University


Re: TSM on Linux Box

2015-01-21 Thread Thomas Denier
Have you rebooted the system since adding the inittab entry? Entries in inittab 
are only processed during system initialization.

Thomas Denier
Thomas Jefferson University

-Jeanie Bruno wrote-

Hello.  On a windows box, when installing TSM, there's the option towards the 
end of the client install to choose whether you want the TSM server to start up 
automatically when the client server (node) has been restarted.
How do I do this on a linux box?  I thought if I put:
item::once:/usr/bin/dsmc sched > /dev/null 2>&1 # TSM scheduler
in the etc/inittab file, this was the auto startup.

But this does not auto start the 'dsmc schedule', because the node missed the 
backup and I have to manually do the 'nohup' command.

I'm not seeing anything on the GUI and I don't know if I need anything 
additional in the .opt or .sys file.
I have 'managedservices  webclient' currently in the .sys file.

In this link:  
http://kb.mit.edu/confluence/display/istcontrib/TSM+6+for+Linux+-+Install,+Configure,+Set+Up,+and+Confirm+the+Scheduler
It mentions that using the /etc/inittab file to autostart doesn't seem to work any 
more and it suggests to create a /etc/init/dsm-sched.conf file with some 
start,stop,respawn and exec options.
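For readers hitting the same problem, here is a sketch of the kind of Upstart job the linked KB article describes. The runlevels, file name, and binary path are assumptions, and this applies only to distributions that use Upstart; on inittab-based systems a "respawn" action rather than "once" is the usual fix.

```
# /etc/init/dsm-sched.conf -- hypothetical Upstart job for the TSM scheduler
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/bin/dsmc sched >/dev/null 2>&1
```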


DB2 LOCKLIST parameter

2015-01-09 Thread Thomas Denier
Inventory expiration processes on one of our TSM servers have been failing 
occasionally with no obvious explanation. We were able to get trace data for 
the most recent failure. IBM reviewed the trace data and advised us to increase 
the DB2 LOCKLIST parameter. They referred us to a technote at 
http://www-01.ibm.com/support/docview.wss?uid=swg21430874 for information on 
calculating the new value for the parameter.
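For reference, applying the change itself is a routine DB2 configuration update; the hard part is only choosing the value. A sketch, assuming the default instance owner tsminst1 and database name TSMDB1; the value shown is a placeholder in 4 KB pages, not a recommendation:

```
# Hypothetical example of applying a new LOCKLIST value (run as the instance owner)
su - tsminst1
db2 connect to TSMDB1
db2 update db cfg for TSMDB1 using LOCKLIST 122880
db2 get db cfg for TSMDB1 | grep -i locklist
```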

The instructions in the document are puzzling, to put it politely. The document 
notes that deduplication greatly increases the need for row locks, but 
recommends the same LOCKLIST setting whether deduplication is being used or 
not. The recommended value is based on gigabytes of data moved rather than 
number of files moved. This sounds reasonable for environments with 
deduplication but is inexplicable in environments without deduplication. The 
document makes reference to concurrent data movement. In one of the examples 
given, all incoming client data in a four hour period and all data moved by 
migration in the same period is counted as concurrent data movement. The 
other example treats data movement spread over eight hours as concurrent. As 
far as I can see, this makes sense only if every database transaction triggered 
by client sessions and background processes remains uncommitted until all data 
movement activity ends.

One of our clients is a 32 bit Windows file server with 18 million files on 
rather slow disk drives. The backup for this client starts in the middle of our 
intended backup window and almost always runs through the day and an hour or 
two into the next backup window. The TSM server can run for months at a time 
with at least one client session in progress at all times. The examples in the 
technote seem to imply that all data movement occurring during those months 
should be counted as concurrent.

Is there any documentation available in which the criteria for selecting a 
LOCKLIST setting are explained more clearly?

We are currently using TSM 6.2.5.0 server code running under zSeries Linux. We 
are preparing to upgrade to TSM 7.1.1.100. We are not currently using 
deduplication. We may use deduplication for backups of databases on client 
systems after the upgrade. We don't have the CPU and memory resources to use 
deduplication for all client files.

Thomas Denier
Thomas Jefferson University


Re: SQL for highest volume label in library

2014-12-11 Thread Thomas Denier
select max(volume_name) from libvolumes where library_name='...'
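As a sanity check of that query's semantics, here it is run against a mock table with the sqlite3 CLI. The schema and rows are invented; the real LIBVOLUMES table has more columns, but MAX() over a character column behaves the same way.

```shell
# Mock LIBVOLUMES table to illustrate SELECT MAX(volume_name); rows are invented.
sqlite3 /tmp/libvol.db <<'EOF'
DROP TABLE IF EXISTS libvolumes;
CREATE TABLE libvolumes (library_name TEXT, volume_name TEXT);
INSERT INTO libvolumes VALUES ('LIB1','A00001'),('LIB1','A00017'),
                              ('LIB1','A00009'),('LIB2','B00003');
SELECT MAX(volume_name) FROM libvolumes WHERE library_name='LIB1';
EOF
```

Note that MAX() compares character strings lexically, so it returns the "highest" label only when labels are fixed-width and uniformly formatted.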

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
EJ van (ITOPT3) - KLM
Sent: Thursday, December 11, 2014 10:41 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SQL for highest volume label in library

Hi TSM-ers!
Does anybody know a SQL statement to retrieve the highest volume label of all 
tapes in a library? Is it possible with one single SQL query?
Thanks for any help in advance!
Kind regards,
Eric van Loon
AF/KLM Storage Engineering


For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message.

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt.
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered number 
33014286



Inventory expiration failure

2014-12-09 Thread Thomas Denier
The inventory expiration process on one of our TSM servers was reported as 
failed this morning, but I couldn't find any message reporting a reason for the 
failure. The output from the dsmadmc command used to execute the "expire 
inventory" command was captured to a file. When I use the grep command with 
the -v option to filter out the large numbers of ANR0165I and ANR0166I 
messages, the remaining lines are as follows:

IBM Tivoli Storage Manager
Command Line Administrative Interface - Version 6, Release 2, Level 4.0
(c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved.

Session established with server DC2P1: Linux/s390x
  Server Version 6, Release 2, Level 5.0
  Server date/time: 12/09/14   09:41:30  Last access: 12/09/14   09:41:30

ANS8000I Server command: 'expire inventory wait=y'
ANR0984I Process 813 for EXPIRE INVENTORY started in the FOREGROUND at 09:41:30 
AM.
ANR0811I Inventory client file expiration started as process 813.
ANR2369I Database backup volume and recovery plan file expiration starting 
under process 813.
ANR0167I Inventory file expiration process 813 processed for 88 minutes.
ANR0812I Inventory file expiration process 813 completed: processed 269 nodes, 
examined 5384354 objects, deleting 5384246 backup objects, 77 archive objects, 
0 DB backup volumes, and 0 recovery plan files. 0 objects were retried and 1 
errors were encountered.
ANR0987I Process 813 for EXPIRE INVENTORY running in the FOREGROUND processed 
5,384,354 items with a completion state of FAILURE at 11:10:06 AM.
ANR1893E Process 813 for EXPIRE INVENTORY completed with a completion state of 
FAILURE.
ANS8001I Return code 19.

ANS8002I Highest return code was 19.

The number of nodes reported processed in the ANR0812I message matches the 
number of rows in the "nodes" SQL table. The server runs under zSeries Linux 
and uses TSM 6.2.5.0 server code. Are there any known problems that could cause 
a return code of 19 with no explanation?
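The filtering step described above can be reproduced with a pipeline like the following. The sample log lines are invented; only the message numbers come from this post.

```shell
# Drop the routine ANR0165I/ANR0166I progress messages from a captured
# dsmadmc log, keeping everything else (sample lines are made up).
cat > /tmp/expire.out <<'EOF'
ANR0165I Inventory client file expiration processing node NODE_A.
ANR0811I Inventory client file expiration started as process 813.
ANR0166I Inventory client file expiration processed 1000 objects.
ANR1893E Process 813 for EXPIRE INVENTORY completed with a completion state of FAILURE.
EOF
grep -vE 'ANR0165I|ANR0166I' /tmp/expire.out
```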

Thomas Denier
Thomas Jefferson University Hospital


Re: TSM level for deduplication

2014-12-08 Thread Thomas Denier
Bent,

TSM 7.1.1.000 had a bug that sometimes caused restores of large files to fail. 
IBM considered the bug serious enough to warrant removing 7.1.1.000 from its 
software distribution servers.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bent 
Christensen
Sent: Saturday, December 06, 2014 6:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: TSM level for deduplication

Hi Thomas,

when you are calling 7.1.1 "an utter disaster" when it comes to dedup, then 
what issues are you referring to?

I have been using 7.1.1 in a production environment dedupping some 500 TB, 
approx 400 nodes, without any bigger issues for more than a year now.

Surely, there are still lots of not-very-well-documented features in TSM 7, 
and I am not at all impressed by IBM support, and especially not DB2 support 
and their lack of willingness to recognize TSM DB2 as being a production 
environment, but when it comes to dedupping it has been smooth sailing for us 
up until now.


 - Bent


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On behalf of Thomas 
Denier [thomas.den...@jefferson.edu]
Sent: 5 December 2014 20:56
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM level for deduplication

My management is very eager to deploy TSM deduplication in our production 
environment. We have been testing deduplication on a TSM 6.2.5.0 test server, 
but the list of known bugs makes me very uncomfortable about using that level 
for production deployment of deduplication. The same is true of later Version 6 
levels and TSM 7.1.0. TSM 7.1.1.000 was an utter disaster. Is there any 
currently available level in which the deduplication code is really fit for 
production use?

IBM has historically described patch levels as being less thoroughly tested 
than maintenance levels. Because of that I have avoided patch levels unless 
they were the only option for fixing crippling bugs in code we were already 
using.
Is that attitude still warranted? In particular, is that attitude warranted for 
TSM 7.1.1.100?

Has IBM dropped any hints about the likely availability date for TSM 7.1.2.000?

Thomas Denier
Thomas Jefferson University Hospital




Re: TSM level for deduplication

2014-12-08 Thread Thomas Denier
The web page at

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli+Storage+Manager/page/TSM+Schedule+for+Fix-Packs

has the note "Removed from FTP site 12/1 due to IT05283. Replaced by 
7.1.1.100." in reference to 7.1.1.000. I just looked at the FTP site and 
7.1.1.000 is indeed still there. I overestimated IBM's ability to keep track of 
the contents of its own Web sites and FTP servers.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of J. 
Pohlmann
Sent: Monday, December 08, 2014 12:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM level for deduplication

FYI - 7.1.1.000 is still on the FTP site. 7.1.1.100 is also on the FTP site.
Ref http://www-01.ibm.com/support/docview.wss?uid=swg24035122

Best regards,

Joerg Pohlmann





TSM level for deduplication

2014-12-05 Thread Thomas Denier
My management is very eager to deploy TSM deduplication in our production
environment. We have been testing deduplication on a TSM 6.2.5.0 test server,
but the list of known bugs makes me very uncomfortable about using that
level for production deployment of deduplication. The same is true of later
Version 6 levels and TSM 7.1.0. TSM 7.1.1.000 was an utter disaster. Is there
any currently available level in which the deduplication code is really fit
for production use?

IBM has historically described patch levels as being less thoroughly tested
than maintenance levels. Because of that I have avoided patch levels unless they
were the only option for fixing crippling bugs in code we were already using.
Is that attitude still warranted? In particular, is that attitude warranted for
TSM 7.1.1.100?

Has IBM dropped any hints about the likely availability date for TSM 7.1.2.000?

Thomas Denier
Thomas Jefferson University Hospital




Re: corrupted db on test system

2014-11-13 Thread Thomas Denier
David Tyree wrote-

We are running TSM v6.3.1 on a test box; a drive failed, and the server 
now has a corrupted database and won't start.

I did the "dsmserv remove db tsmdb1" to get rid of the old 
database and now I'm about to start the actual database restore.
While I was waiting for the offsite database backup tape to be 
brought back onsite I issued the "dsmserv restore db" to see what would happen. 
It failed like I expected because it couldn't find the tape in the library.
At least it's seeing the library and all the drives.
My question is how do I check in that tape to the library?
I can physically put the tape in the library but until TSM knows that it's 
actually checked in I'm stuck.
TSM won't see it until it's checked in but TSM is down.

How would I be able to read that tape once I get it on site?

The restore process gets all its information about tape volumes from the volume 
history file and possibly the device configuration file.

One approach recommended by some TSM sites is to use a device configuration 
file modified to show a manual library, and then use a library control 
utility to mount the tape when requested.
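As a sketch of the manual-library approach, the modified device configuration file would contain entries along these lines. The library, device class, drive, server, and device names here are all hypothetical and would need to match your environment.

```
/* Hypothetical devconfig entries describing a manual library */
DEFINE LIBRARY MANLIB LIBTYPE=MANUAL
DEFINE DEVCLASS TAPECLASS DEVTYPE=LTO LIBRARY=MANLIB
DEFINE DRIVE MANLIB DRIVE1
DEFINE PATH SERVER1 DRIVE1 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=MANLIB DEVICE=/dev/IBMtape0
```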

If you want to use a device configuration file that shows the actual library 
type, the process depends on the type of library.

For a SCSI library you would need to get the tape into the library and add a 
volume location record for the volume to the device configuration file.

For a 3494 I think you would just need to put the volume in the library and 
use a library control utility to set the category appropriately.

Thomas Denier
Thomas Jefferson University Hospital


Re: Strange tcp_address value

2014-11-06 Thread Thomas Denier
A traceroute from the TSM server shows a typical route to one of my employer's 
core routers. There is no output after that. In particular, there are no lines 
with asterisks indicating timeouts.

I don't have logon access to the two clients, and I don't see how running 
tracerte commands on the clients would help. The two client systems are 
connecting regularly with addresses in the appropriate subnet. There is no 
evidence of any current source of 192.168 client addresses.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-

The TCP address is mostly chosen by the network, and you can try tracerte from 
both ends to identify the issue. 

By Sarav
+65-82284384


 On 6 Nov 2014, at 12:45 am, Thomas Denier thomas.den...@jefferson.edu wrote:
 
 If I execute the command:
 
 select node_name,tcp_address from nodes
 
 on one of our TSM servers, two nodes have the same, very strange, 
 value for the
 address: 192.168.30.4. The same address appears in the corresponding 
 output fields from 'query node' with 'format=detailed'.
 
 This address does not belong to my employer. All of the network 
 interfaces on the TSM server have addresses in one of the officially 
 defined private address ranges. This has been the case since the TSM server 
 code was first installed.
 Given that, I don't see how a system with the address 192.168.30.4 
 could ever have connected to the TSM server.
 
 I see session start messages for both nodes on a daily basis. There 
 are no error messages for these sessions except for an occasional 
 expired password message. Even when that happens, subsequent sessions 
 run without errors, indicating that a new password was negotiated 
 successfully. The origin addresses for the sessions look perfectly 
 reasonable. They are in the same private address range as the TSM 
 server addresses, and in the right subnet for the building the client 
 systems are in. Every relevant statement I have found in the TSM 
 documentation indicates that the tcp_address field should be updated to match 
 the session origin address.
 
 When the TSM central scheduler attempts to request a backup of one of 
 the nodes it attempts to contact an address in the same subnet as the 
 session origin addresses.
 
 The TSM server is running TSM 6.2.5.0 server code under zSeries Linux. 
 The two clients are running Windows XP and using TSM 6.2.2.0 client 
 code. The two clients are administered by the same group of people.
 
 Does anyone know where the strange address could have come from, or 
 how to get the TSM to track the node addresses correctly in the future?
 
 Thomas Denier
 Thomas Jefferson University Hospital
 The information contained in this transmission contains privileged and 
 confidential information. It is intended only for the use of the person named 
 above. If you are not the intended recipient, you are hereby notified that 
 any review, dissemination, distribution or duplication of this communication 
 is strictly prohibited. If you are not the intended recipient, please contact 
 the sender by reply email and destroy all copies of the original message.
 
 CAUTION: Intended recipients should NOT use email communication for emergent 
 or urgent health care matters.


Re: Strange tcp_address value

2014-11-06 Thread Thomas Denier
The two clients are connecting to the TSM server regularly with distinct 
addresses in the appropriate subnet. Even if misconfiguration of the IP stacks 
on the clients produced 192.168 addresses sometime in the past, TSM should have 
replaced the 192.168 addresses in the Nodes table with the session origin 
addresses the nodes are currently using.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Grigori Solonovitch
Sent: Wednesday, November 05, 2014 11:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Strange tcp_address value

Most likely you have clients with duplicate IP addresses. Windows selects some 
strange addresses in this case. Please check the client configuration 
(preferably with ipconfig /all) and try restarting the TSM services on the clients.

Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank Kuwait, 
www.ahliunited.com.kw


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Thomas 
Denier
Sent: 05 11 2014 7:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Strange tcp_address value

[quoted original message trimmed]




Re: Strange tcp_address value

2014-11-06 Thread Thomas Denier
The /etc/hosts file defines 'localhost', names for the addresses of the 
various network interfaces on the TSM server, and some special IP Version 6 
addresses (which should be irrelevant, since my employer is still using only IP 
Version 4).

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Anjaneyulu Pentyala
Sent: Wednesday, November 05, 2014 11:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Strange tcp_address value

Please check the /etc/hosts entries on your TSM server. An /etc/hosts entry may 
have been added with the wrong IP address; however, the nodes are contacting the 
server by DNS name. Check the nodes' IP addresses in DNS:

nslookup tcp_name

Regards
Anjen

Aanjaneyulu Penttyala
Technical Services Team Leader
SSO MR Storage
Regional Delivery Center Bangalore
Delivery Center India, GTS Service Delivery
Phone: +91-80-40258197
Mobile: +91- 849781
e-mail: anjan.penty...@in.ibm.com
MTP, K Block, 4F, Bangalore, India




[earlier messages in the thread trimmed]


Re: Strange tcp_address value

2014-11-06 Thread Thomas Denier
A tcpclientaddress option with a 192.168 address would cause the TSM central 
scheduler to attempt to open a connection to that 192.168 address when the time 
came to run a scheduled backup. When the central scheduler tries to request a 
backup of either of the clients involved in the problem TSM reports failure to 
connect to an address in the correct subnet.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Kirubhakaran, Wellington
Sent: Thursday, November 06, 2014 12:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Strange tcp_address value

Kindly check dsm.sys to see whether the tcpclientaddress option is specified.

Regards,
Wellington
+965 50369767

[earlier messages in the thread trimmed]

Re: Strange tcp_address value

2014-11-06 Thread Thomas Denier
A laptop at someone's home would only be able to connect to the TSM server 
using a VPN tunnel. I have a home network that uses 192.168 addresses, and 
recently had occasion to use VPN to open an administrator session to the TSM 
server. I checked the origin address the TSM server reported for this session. 
It was in one of the on-campus subnets my employer uses for network 
infrastructure.

Even if the two client systems somehow managed to connect to the TSM server in 
the past using 192.168 addresses, they have not done so in the last five days, 
and have repeatedly logged on with addresses in the right subnet. TSM should 
have replaced the 192.168 addresses in the Nodes table with the session origin 
addresses that have been used in the last few days.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Thursday, November 06, 2014 3:21 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Strange tcp_address value

Don't worry. 192.168.*.* addresses are typically assigned locally by something 
like DHCP. They are not exposed outside your local network. In fact they can't 
be, because each of us has our own set of 192.168 addresses. Most home routers 
use this address range. This is quite normal. Read 
http://en.wikipedia.org/wiki/Private_network

This could mean that somebody on staff took their TSM-backed-up laptop home, 
and while it was connected to their home network, the automatic scheduled 
backup happened. This may be something you want to allow.

For serious tracking of nodes and their hardware, I use GUIDs, not IP addresses.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 6 Nov 2014, Anjaneyulu Pentyala wrote:

[quoted message trimmed]

Re: Strange tcp_address value

2014-11-06 Thread Thomas Denier
I found two ANR1639I messages within the last five days. Neither involved a 
node with a 192.168 address in the Nodes table. One showed a change of GUID, 
and the other showed an IP address change, with both old and new addresses in 
legitimate on-campus subnets.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Thursday, November 06, 2014 7:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Strange tcp_address value

Check if your actlog has any ANR1639I messages.  This is thrown when the TSM 
server detects an IP address change on a node.  





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Thomas 
Denier
Sent: Wednesday, November 05, 2014 11:45 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Strange tcp_address value

[quoted original message trimmed]




Strange tcp_address value

2014-11-05 Thread Thomas Denier
If I execute the command:

select node_name,tcp_address from nodes

on one of our TSM servers, two nodes have the same, very strange, value for the
address: 192.168.30.4. The same address appears in the corresponding output
fields from 'query node' with 'format=detailed'.

This address does not belong to my employer. All of the network interfaces on
the TSM server have addresses in one of the officially defined private address
ranges. This has been the case since the TSM server code was first installed.
Given that, I don't see how a system with the address 192.168.30.4 could ever
have connected to the TSM server.

I see session start messages for both nodes on a daily basis. There are no error
messages for these sessions except for an occasional expired password
message. Even when that happens, subsequent sessions run without errors,
indicating that a new password was negotiated successfully. The origin
addresses for the sessions look perfectly reasonable. They are in the same
private address range as the TSM server addresses, and in the right subnet
for the building the client systems are in. Every relevant statement I have
found in the TSM documentation indicates that the tcp_address field should
be updated to match the session origin address.

When the TSM central scheduler attempts to request a backup of one of the
nodes it attempts to contact an address in the same subnet as the session
origin addresses.

The TSM server is running TSM 6.2.5.0 server code under zSeries Linux. The
two clients are running Windows XP and using TSM 6.2.2.0 client code. The
two clients are administered by the same group of people.

Does anyone know where the strange address could have come from, or
how to get the TSM to track the node addresses correctly in the future?
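
A quick way to look for any other nodes carrying a stray private address is a
LIKE filter on the Nodes table (a sketch; adjust the pattern to whatever range
you consider suspect):

select node_name, tcp_address from nodes where tcp_address like '192.168.%'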

Thomas Denier
Thomas Jefferson University Hospital


Deduplication anomalies

2014-10-22 Thread Thomas Denier
I am trying to determine the causes of two anomalies in the behavior of a
deduplicated storage pool in our TSM test environment. The test environment
uses TSM 6.2.5.0 server code running under zSeries Linux. The environment has
been using only server side deduplication since early September. Some tests
before that time used client side deduplication.

The first anomaly has to do with reclamation of the deduplicated storage pool.
For the last several days 'reclaim stgpool' commands have ended immediately
with the message:

ANR2111W RECLAIM STGPOOL: There is no data to process for LDH.

This was surprising, given the amount of duplicate data reported by 'identify
duplicates' processes. Yesterday I discovered that the storage pool had several
volumes that were eligible for reclamation with the threshold that had been
specified in the 'reclaim stgpool' commands. There had been a successful
storage pool backup after the then most recent client backup sessions. I was
able to perform 'move data' commands for each of the eligible volumes.

The second anomaly has to do with filling volumes. The deduplicated storage
pool has 187 filling volumes with a reported occupancy of 0.0. Most of these
also have the percentage of reclaimable space reported as 0.0, and all have
the percentage of reclaimable space below 20. Most of the last write
dates are concentrated in three afternoons. I maintain a document in
which I log changes in the test environment and observations of the
behavior of the environment. This document does not show any change
in the environment or any observed anomalies on the days when most
of the low occupancy volumes were last written. The test environment
has two collocation groups. I have verified that the deduplicated storage
pool is configured for collocation by group and that every node is in
one of the collocation groups. All of the volumes in the storage pool
have an access setting of 'READWRITE'. I have tried performing 'move
data' commands for a few of the low occupancy volumes. The test
environment consistently allocated a new scratch volume for output
rather than adding the contents of the input volume to one of the
few filling volumes with substantial amounts of data or to one of the
many other low occupancy volumes.

Web searches for the ANR2111W message turned up nothing except
reminders that a storage pool backup is needed before reclamation.
Web searches for various groups of keywords related to the second
anomaly have turned up nothing recognizably relevant.
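
For reference, the low occupancy filling volumes can be listed with a
select along these lines (a sketch; 'LDH' is the pool named in the
ANR2111W message, and the one percent cutoff is arbitrary):

select volume_name, pct_utilized, pct_reclaim, last_write_date from volumes where stgpool_name='LDH' and status='FILLING' and pct_utilized<1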

Thomas Denier
Thomas Jefferson University Hospital


Re: Script snipet for current hour

2014-10-21 Thread Thomas Denier
You can run something like:

select database_name from db where hour(current_timestamp)>16

The return code will be 0 if the hour is greater than 16 and 11 otherwise.
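
As a sketch of how this test could drive a server script (the script and
schedule names here are hypothetical; TSM server scripts support
'if (rc_ok)' tests and goto labels):

select database_name from db where hour(current_timestamp)>16
if (rc_ok) goto toolate
/* ... the real work, possibly rescheduling itself ... */
exit
toolate:
delete schedule RETRY_SCHED type=administrative
exit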

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [OITS]
Sent: Tuesday, October 21, 2014 11:20 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Script snipet for current hour

Does anyone have a bit of script code that evaluates the current hour when 
the script is running?

Something like
IF SELECT HOUR(CURRENT_TIMESTAMP) FROM LOG > 16 GOTO ...

I've poked around ADSM.QuickFacts and see some potential hints.  SELECT 
HOUR(CURRENT_TIMESTAMP) FROM LOG will return the current hour, but I can't get 
an if test to evaluate that hour saved as a script.

My goal: A script runs and it schedules a secondary script to run at a future 
time.

When the secondary script runs, it tests for various conditions (active node 
sessions, reclamation, etc.) and MIGHT reschedule itself for STARTT=NOW+00:15.

But, that secondary script will ideally also test for the time that it started. 
 If the start time is past a given time of day (say 16:00), I want the script 
to skip its intended purpose and delete the schedule that fired it off.


Re: maxscratch

2014-10-09 Thread Thomas Denier
If scratch tapes are also used for purposes other than storage pool volumes, 
such as database backups, the sum of the maxscratch values should be a bit less 
than the number of volumes in the library.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Thursday, October 09, 2014 8:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] maxscratch

Eric,

That is correct.  If you use maxscratch, it should be set to the total number 
of volumes you want a particular storagepool to use.  If you have multiple 
storagepools sharing a tape library (virtual or not), the sum of their 
maxscratches should equal the number of volumes in the library.  Using 
maxscratch in this way lets you forecast when you will need additional volumes. 
 Alternatively, you can set maxscratch to a high number and not worry about it; 
you however lose the information about how soon you might need to add new 
volumes.
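
The bookkeeping described above amounts to simple arithmetic; a sketch with
made-up numbers (the pool names and counts are illustrative, not from this
thread):

```shell
#!/bin/sh
# Hypothetical figures: two pools sharing one library of 5000 volumes.
LIB_VOLUMES=5000
ONSITE_MAXSCRATCH=3200
OFFSITE_MAXSCRATCH=1500

total=$((ONSITE_MAXSCRATCH + OFFSITE_MAXSCRATCH))
headroom=$((LIB_VOLUMES - total))
echo "allocated=$total headroom=$headroom"
# headroom reaching 0 means the pools can between them consume every volume
# in the library -- time to order media (or keep a margin for database
# backups that also draw from the scratch pool, as noted earlier).
```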

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
EJ van (SPLXM) - KLM
Sent: Thursday, October 09, 2014 6:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] maxscratch

Hi Steven!
So if I understand your explanation correctly, for a storagepool used by one 
server only, the maxscratch should always be equal to the total amount of 
volumes in this pool? So when adding new scratches, one has to raise the 
maxscratch with an equal amount?
Kind regards,
Eric van Loon
AF/KLM Storage Engineering


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Steven 
Langdale
Sent: donderdag 9 oktober 2014 10:20
To: ADSM-L@VM.MARIST.EDU
Subject: Re: maxscratch

Eric

It's the max number of vols that can be used from the scratch pool by the 
particular stgpool.  so in your instance (assuming you have a single stgpool 
with all 4000 private takes in it AND it got them from the scratch pool), if 
you set it to 1000 you won't be able to use anymore new scratch tapes until the 
usage goes below 1000.

You can look as it as a way to stop a single stpool stealing all of the scratch 
tapes.

Steven

On 9 October 2014 08:53, Loon, EJ van (SPLXM) - KLM eric-van.l...@klm.com
wrote:

 Hi guys!
 I never used the maxscratch value for our VTL libraries, but I'm just
 wondering how it works.
 From the TSM Reference manual:

 MAXSCRatch
 Specifies the maximum number of scratch volumes that the server can
 request for this storage pool. This parameter is optional. You can
 specify an integer from 0 to 100000000. By allowing the server to
 request scratch volumes as needed, you avoid having to define each volume to 
 be used.

 What is meant by scratch volumes here? The total amount of scratch
 tapes when you start with an empty TSM server or the amount of
 scratches in the current situation?
 For instance, I have a server with 4000 private tapes and 1000 scratch
 tapes. Should I set it to 1000 or 5000?
 Thanks for your help in advance!
 Kind regards,
 Eric van Loon
 AF/KLM Storage Engineering
 
 For information, services and offers, please visit our web site:
 http://www.klm.com. This e-mail and any attachment may contain
 confidential and privileged material intended for the addressee only.
 If you are not the addressee, you are notified that no part of the
 e-mail or any attachment may be disclosed, copied or distributed, and
 that any other action related to this e-mail or attachment is strictly
 prohibited, and may be unlawful. If you have received this e-mail by
 error, please notify the sender immediately by return e-mail, and delete this 
 message.

 Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or
 its employees shall not be liable for the incorrect or incomplete
 transmission of this e-mail or any attachments, nor responsible for any delay 
 in receipt.
 Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal
 Dutch
 Airlines) is registered in Amstelveen, The Netherlands, with
 registered number 33014286
 




Re: Shared libraries and DR - configuration questions

2014-08-25 Thread Thomas Denier
Matthew McGeary wrote:

We're in the process of setting up a second TSM server at our head office to 
split the backup load.  Primary storage is disk, so that's no problem but we 
offsite to LTO4 tape.

My original thought was that we'd set up library sharing using either the 
current TSM server as the manager or a new TSM server instance that is solely 
responsible for library operations.  I'm leaning towards a new, 
library-manager-only TSM server, with two TSM library clients.

Where I'm unclear is how DR works with a shared library manager.  Do we go 
through the process of restoring the TSM library manager instance first, then 
the two (or more) library clients?  Or (if we have two libraries available at 
our DR site) can I restore the two library clients independently, each 
attached to their own library?

I recently performed a recovery test with a library manager and one client in 
our TSM test environment. The test environment is a scaled down version of a 
production environment with a library manager and a library client at one site 
and a second library client at another site. The recovery worked just as you 
suspect: I recovered the library manager first and then used the recovered 
library manager to support recovery of the library client.

We used to use a commercial hot site to recover a TSM server with an IBM 3494 
tape library. We had to reconfigure the recovered server to use the TS3500 
library supplied by the hot site. I think you could just as well reconfigure 
each of your library clients to use a non-shared tape library. If you do this 
you will need to update the device configuration files to match the recovery 
site tape infrastructure before starting the database restores and subsequently 
update the TSM databases to match the recovery site tape infrastructure. If the 
hot site provides SCSI libraries the device configuration file updates will 
include adding records showing the storage slot element numbers for tape 
volumes.

Thomas Denier
Thomas Jefferson University Hospital




Database restore oddity

2014-08-19 Thread Thomas Denier
We have a TSM server instance dedicated to use as a library manager. It 
performs a
daily database snapshot to virtual volumes on a TSM server instance at another
location. Both instances use TSM 6.2.5.0 and run under zSeries Linux. Yesterday
afternoon I tested part of our DR process. The test involved restoring the 
latest
library manager database snapshot to a third Linux image. I did not start the
restored TSM server after the restore finished. The database snapshot from the
original library manager failed this morning with the following message:

ANR4370E The source server is not authorized to create volumes on target server
DC1P1.

The documentation for this message advised me to execute an 'update server'
command with 'forcesync=yes'. I executed such a command on the original
library manager and was able to rerun the snapshot successfully.

TSM documentation states that a database restore changes the verification
token. I thought this meant that the restored TSM server would need the
'update server' with 'forcesync=yes' in order to communicate with the
server holding the virtual volumes; I did not expect a restore to one
system to invalidate a verification token on a server not directly involved
in the restore.  Is the behavior described above an expected result of a
database restore?

Thomas Denier
Thomas Jefferson University Hospital


Re: Backup fails with no error message

2014-07-22 Thread Thomas Denier
Andy,

It looks like the problem was in fact a shortage of memory. The problem 
starting occurring again this past Saturday. A backup with tracing after an 
earlier occurrence of the problem ran out of stack space. I attempted to raise 
the stack size limit from its default of about 32 MB to its hard limit of about 
4 GB before running another backup with tracing. For some reason I only got 
half the requested limit. The backup failed, but produced a useful error 
message for the first time in the history of this problem: an ANS1225E message 
indicating that the client software was unable to obtain memory needed for file 
compression. I was able to rerun the backup successfully after using the 
'ulimit' command to allow unlimited memory size and data segment size. The 
default soft limits are in fact much smaller than the corresponding values on 
most of our other systems. The default data segment size is about 128 MB and 
the default memory size is about 1 GB. I am currently trying to get the system 
vendor to sign off on a request to allow unlimited memory and data segment 
sizes for backups of the resource group disks.
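
For anyone hitting the same wall, the shell-level fix looks roughly like this
(a sketch only -- supported flags and units vary by platform: on AIX `ulimit -s`
counts 512-byte blocks, on Linux kilobytes -- so the exact values are
illustrative):

```shell
#!/bin/sh
# Raise the soft (-S) limits for the current shell before launching dsmc.
# This only works up to the hard (-H) limits, which root may need to raise too.
ulimit -Sd unlimited 2>/dev/null || echo "data segment capped at $(ulimit -Hd)"
ulimit -Sm unlimited 2>/dev/null || echo "memory capped at $(ulimit -Hm)"
echo "soft stack limit now: $(ulimit -Ss)"
# then run the backup in the same shell, e.g.:
#   dsmc inc /main/UT -servername=DC1P1_MAIN
```

Note that limits set this way apply only to the current shell and its
children, so the scheduler or init script that launches dsmc must do it.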

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew 
Raibeck
Sent: Wednesday, July 09, 2014 10:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup fails with no error message

It is a puzzler.

Just to verify: you have checked dsmerror.log as well for error messages and 
found nothing? Another thought is to check the TSM server activity log for any 
tell tale error or warning messages that might provide a hint.

The TSM client return codes are derived directly from the severity of the 
messages issued during whatever operation is running. ANSnnnnI messages are RC
0; ANSnnnnW are RC 8; and ANSnnnnE or ANSnnnnS are RC 12. The exceptions are
related to skipped files: these exception messages are ANSnnnnE but the
return code handling sets the RC to 4. The highest severity prevails, so if,
for example, an ANSnnnnW (RC 8) and ANSnnnnE (RC 12) are issued, then the RC
will be 12. We have had the odd skipped file message that is not setting the 
RC to 4, but those have been fixed via APARs, and in any case I would still 
expect some error message in the log. If you inspect the error log, let me
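
That severity-to-RC mapping can be sketched as a small shell function (the
message identifiers in the loop are made up; only the trailing severity letter
matters, and the skipped-file exception that forces RC 4 is left out for
brevity):

```shell
#!/bin/sh
# Map a message's trailing severity letter to its return code and keep the
# maximum, mirroring "the highest severity prevails".
rc_for() {    # $1 = message id such as ANS1076E
    case "$1" in
        *I)    echo 0  ;;   # informational
        *W)    echo 8  ;;   # warning
        *E|*S) echo 12 ;;   # error / severe
        *)     echo 0  ;;
    esac
}

max_rc=0
for msg in ANS1898I ANS1228W ANS1076E; do   # hypothetical message stream
    rc=$(rc_for "$msg")
    if [ "$rc" -gt "$max_rc" ]; then max_rc=$rc; fi
done
echo "final rc=$max_rc"   # prints: final rc=12
```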

The GlobalRC trace example I showed you illustrates when a non-zero producing 
message sets the return code. Thus when whatever message is processed that 
trips the RC 12, I would expect to see it in the trace. If you have trace files 
from when the problem did not occur, and the RC was 0, then I would not expect 
to see any of the GlobalRC messages in the trace.

I am a little surprised if no such error message appears in the dsmerror.log 
file. I have recently seen one case where the client experiences an out of 
memory error but no message was written to the console, schedule log, or error 
log. However the SERVICE trace is still sufficient to reveal the problem. What 
are the ulimits set to for this client, and is there an unusually large number of
files in any of these file systems? Are we talking about millions of files, and 
maybe the file system is on the cusp of running out of memory during backup? 
It's a long shot, but figured I'd mention it.

If you are willing to continue to run the tracing, it would be a good idea.
If the problem persists but you are unable to obtain a trace, open a PMR and 
we'll have to come up with an alternative way to figure out what is going on.

Regards,

- Andy



Andrew Raibeck | Tivoli Storage Manager Level 3 Technical Lead | 
stor...@us.ibm.com

IBM Tivoli Storage Manager links:
Product support:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

Online documentation:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
+Documentation+Central/page/Tivoli+Storage+Manager
Product Wiki:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
+Storage+Manager/page/Home

ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2014-07-09
11:41:51:

 From: Thomas Denier thomas.den...@jefferson.edu
 To: ADSM-L@vm.marist.edu,
 Date: 2014-07-09 11:42
 Subject: Re: Backup fails with no error message Sent by: ADSM: Dist
 Stor Manager ADSM-L@vm.marist.edu

 The regularly scheduled backup ran successfully on Tuesday morning.
 The scheduled backup this morning failed with exit status 12 and no
 error message. The backup start and end times indicated that the
 failure occurred while processing a different file system in the same
 resource group.

 I ran a backup of the file system with service tracing enabled. The
 TSM client eventually crashed with a segmentation fault.  I found two
 trace files, neither of which contained 'GlobalRC'. The core file from
 the crash consumed nearly all of the remaining space

Re: Backup fails with no error message

2014-07-10 Thread Thomas Denier
The error log entries for each day usually include a few messages about
files not found or files that changed while TSM was reading them. There
is a warning message each day noting that a specific directory is excluded.
The directory is named in an 'exclude.dir' statement and is the top level
directory for a file system listed in a 'domain' statement. I have asked the
system vendor for clearance to remove the file system from the domain
statement. I have not gotten a response so far. There are no messages that
have any evident connection to the exit status of 12 or to stopping the
backup prematurely.

The file system in which the backup stopped from June 27 to July 7 has
about 12.6 GB of free space. The file system in which the backup stopped
yesterday has about 6.5 GB of free space. The file system used for TSM logs
has about 3.8 GB of free space. Neither of the file systems in which the
backup stopped at one time or another has millions of files; a successful
backup of the entire resource group early this morning inspected 559,009
files.

The backup that got a segmentation fault apparently ran out of stack space;
the error report in the output from 'errpt -a' includes the words 'Too many
stack elements'. The soft limit on the stack size for root is 65,536 512-byte
blocks. The hard limit is 8,388,608 blocks. Are there any published
recommendations for resource limits for the TSM client?

I looked over the other error reports in the output from 'errpt -a'. I didn't
find anything recognizably relevant around the times when 'dsmc' ended
with exit status 12, in the interval between the successful /main/UT backup
on June 26 and the failed backup on June 27, or in the interval between the
successful /main/UT backup on July 8 and the failed backup on July 9.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew
Raibeck
Sent: Wednesday, July 09, 2014 10:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup fails with no error message

It is a puzzler.

Just to verify: you have checked dsmerror.log as well for error messages and
found nothing? Another thought is to check the TSM server activity log for
any tell tale error or warning messages that might provide a hint.

The TSM client return codes are derived directly from the severity of the
messages issued during whatever operation is running. ANSnnnnI messages are
RC 0; ANSnnnnW are RC 8; and ANSnnnnE or ANSnnnnS are RC 12. The exceptions
are related to skipped files: these exception messages are ANSnnnnE but
the return code handling sets the RC to 4. The highest severity prevails, so
if, for example

Re: Backup fails with no error message

2014-07-09 Thread Thomas Denier
The regularly scheduled backup ran successfully on Tuesday morning.
The scheduled backup this morning failed with exit status 12 and no
error message. The backup start and end times indicated that the
failure occurred while processing a different file system in the same
resource group.

I ran a backup of the file system with service tracing enabled. The
TSM client eventually crashed with a segmentation fault.  I found two
trace files, neither of which contained 'GlobalRC'. The core file from
the crash consumed nearly all of the remaining space in the root file
system. As far  as I can tell, a system administrator responding to an
automated alert removed the core file without consulting me.

I ran a backup of the entire resource group without tracing. This was
successful.

I am thinking of upgrading the client software, even though none of
the bug fixes listed has any obvious connection to the behavior I am
seeing.

Should I just keep trying the tracing every time a backup fails and
hope I eventually get lucky and obtain a useful trace?

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew 
Raibeck
Sent: Monday, July 07, 2014 4:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup fails with no error message

Thomas,

Run the failing backup command and this time add these parameters:

-traceflags=service -tracefile=/sometracefilename

For example:

dsmc inc /main/UT -servername=DC1P1_MAIN -traceflags=service 
-tracefile=/tsmtrace.out

Name the trace file whatever you want, just make sure to put it in a file 
system with room for a potentially large trace file.

Note: If you anticipate GB and GB of output, you can add the option
-tracemax=1024 to wrap the trace file at 1 GB. The risk is, if whatever happens 
is not immediately causing the backup to stop, the needed trace lines could be 
written over due to wrapping. But based on your description, off-hand I'd say 
the backup stops when the problem occurs so the risk due to wrapping should be 
low.

After the backup finishes with the RC 12, scan the trace for 'GlobalRC' (without 
the quotes) and you should find lines like these:

07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 428): 
msgNum = 1076 changed the Global RC.
07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 429): 
Old values: rc = 0, rcMacroMax = 0, rcMax = 0.
07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 443): 
New values: rc = 12, rcMacroMax = 12, rcMax = 12.

This will show you which message is driving the RC change. In my example, 
msgNum = 1076 corresponds to ANS1076E

Based on the message, you might be able to figure out the rest; but at the 
least you have a trace file you can send in to support.
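
The scan amounts to a grep over the trace file. In this sketch the trace
fragment is faked with a here-document so the extraction is reproducible; the
file names are illustrative, and against a real trace the first command would
simply be `grep GlobalRC /tsmtrace.out`:

```shell
#!/bin/sh
# Fake a fragment of a SERVICE trace, then pull out the message number that
# changed the global return code.
cat > /tmp/tsmtrace.sample <<'EOF'
07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 428): msgNum = 1076 changed the Global RC.
07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 443): New values: rc = 12, rcMacroMax = 12, rcMax = 12.
EOF
msgnum=$(grep 'changed the Global RC' /tmp/tsmtrace.sample \
         | sed 's/.*msgNum = \([0-9]*\).*/\1/')
echo "message number: $msgnum"   # 1076, i.e. ANS1076E in Andy's example
```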

Regards,

- Andy



Andrew Raibeck | Tivoli Storage Manager Level 3 Technical Lead | 
stor...@us.ibm.com

IBM Tivoli Storage Manager links:
Product support:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

Online documentation:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
+Documentation+Central/page/Tivoli+Storage+Manager
Product Wiki:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
+Storage+Manager/page/Home

ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2014-07-07
15:59:57:

 From: Thomas Denier thomas.den...@jefferson.edu
 To: ADSM-L@vm.marist.edu,
 Date: 2014-07-07 16:00
 Subject: Backup fails with no error message Sent by: ADSM: Dist Stor
 Manager ADSM-L@vm.marist.edu

 We have an AIX system on which backups of a specific file system
 terminate with exit status 12 but with no error message indicating a
 reason for this exit status.
 If I execute the command

 dsmc inc /main/UT -servername=DC1P1_MAIN

 as root, I will see typical messages about the number of files
 processed and about specific files being backed up, followed by the
 usual summary messages. The exit status will be 12. The summary
 statistics will show a number of files
examined
 equal to about half the number of files present in the file system.
 There will not
 be any error message explaining the exit status or the failure to
 examine
the
 entire file system.

 The DCIP1_MAIN stanza in dsm.sys has some unusual features because it
 is
used
 to back up one of the resource groups for a clustered environment. The
stanza
 includes three 'domain' statements listing the file systems in the
 resource group.
 The stanza includes a 'nodename' option specifying the node name that
owns the
 backup files from the resource group. The stanza includes an 'asnode'
option
 specifying the node name used to authenticate sessions from the
 cluster
node
 involved (we and the system vendor were not able to agree on an
acceptable
 arrangement for storing a TSM password within the resource group

Re: Backup fails with no error message

2014-07-08 Thread Thomas Denier
Andy,

The failure did not occur when I ran the backup with service tracing. Further 
testing
revealed that the failure no longer occurred even in the absence of tracing. I 
don't
know whether the circumventions mentioned in my original e-mail ever had any 
real
effect.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew 
Raibeck
Sent: Monday, July 07, 2014 4:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup fails with no error message

Thomas,

Run the failing backup command and this time add these parameters:

-traceflags=service -tracefile=/sometracefilename

For example:

dsmc inc /main/UT -servername=DC1P1_MAIN -traceflags=service 
-tracefile=/tsmtrace.out

Name the trace file whatever you want, just make sure to put it in a file 
system with room for a potentially large trace file.

Note: If you anticipate GB and GB of output, you can add the option
-tracemax=1024 to wrap the trace file at 1 GB. The risk is, if whatever happens 
is not immediately causing the backup to stop, the needed trace lines could be 
written over due to wrapping. But based on your description, off-hand I'd say 
the backup stops when the problem occurs so the risk due to wrapping should be 
low.

After the backup finishes with the RC 12, scan the trace for 'GlobalRC' (without 
the quotes) and you should find lines like these:

07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 428): 
msgNum = 1076 changed the Global RC.
07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 429): 
Old values: rc = 0, rcMacroMax = 0, rcMax = 0.
07/07/2014 16:12:14.122 [003772] [3812] : ..\..\common\ut\GlobalRC.cpp ( 443): 
New values: rc = 12, rcMacroMax = 12, rcMax = 12.

This will show you which message is driving the RC change. In my example, 
msgNum = 1076 corresponds to ANS1076E

Based on the message, you might be able to figure out the rest; but at the 
least you have a trace file you can send in to support.

Regards,

- Andy



Andrew Raibeck | Tivoli Storage Manager Level 3 Technical Lead | 
stor...@us.ibm.com

IBM Tivoli Storage Manager links:
Product support:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

Online documentation:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
+Documentation+Central/page/Tivoli+Storage+Manager
Product Wiki:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
+Storage+Manager/page/Home

ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2014-07-07
15:59:57:

 From: Thomas Denier thomas.den...@jefferson.edu
 To: ADSM-L@vm.marist.edu,
 Date: 2014-07-07 16:00
 Subject: Backup fails with no error message Sent by: ADSM: Dist Stor
 Manager ADSM-L@vm.marist.edu

 We have an AIX system on which backups of a specific file system
 terminate with exit status 12 but with no error message indicating a
 reason for this exit status.
 If I execute the command

 dsmc inc /main/UT -servername=DC1P1_MAIN

 as root, I will see typical messages about the number of files
 processed and about specific files being backed up, followed by the
 usual summary messages. The exit status will be 12. The summary
 statistics will show a number of files
examined
 equal to about half the number of files present in the file system.
 There will not
 be any error message explaining the exit status or the failure to
 examine
the
 entire file system.

 The DCIP1_MAIN stanza in dsm.sys has some unusual features because it
 is
used
 to back up one of the resource groups for a clustered environment. The
stanza
 includes three 'domain' statements listing the file systems in the
 resource group.
 The stanza includes a 'nodename' option specifying the node name that
owns the
 backup files from the resource group. The stanza includes an 'asnode'
option
 specifying the node name used to authenticate sessions from the
 cluster
node
 involved (we and the system vendor were not able to agree on an
acceptable
 arrangement for storing a TSM password within the resource group).
 This stanza works fine for the other file systems in the same resource
 group,
and
 worked fine for /main/UT up until June 26.

 I have found two ways to circumvent the problem. One circumvention is
 to
run
 the command

 dsmc inc /main/UT/ -subdir=y -servername=DC1P1_MAIN

 to back up the top level directory of the file system rather than the
 file system as such. An 'lsfs' command shows nothing unusual about the
 file system;
it is
 a jfs2 file system, like all the other file systems, and uses the same
mount
 options as the other file systems. The other circumvention is to add
 an 'exclude.dir' line for a specific subdirectory of /main/UT to the
 include/exclude file. The subdirectory came under suspicion because it
 was last updated a
few
 hours after the last

Backup fails with no error message

2014-07-07 Thread Thomas Denier
We have an AIX system on which backups of a specific file system terminate with
exit status 12 but with no error message indicating a reason for this exit 
status.
If I execute the command

dsmc inc /main/UT -servername=DC1P1_MAIN

as root, I will see typical messages about the number of files processed and 
about
specific files being backed up, followed by the usual summary messages. The exit
status will be 12. The summary statistics will show a number of files examined
equal to about half the number of files present in the file system. There will 
not
be any error message explaining the exit status or the failure to examine the
entire file system.

The DCIP1_MAIN stanza in dsm.sys has some unusual features because it is used
to back up one of the resource groups for a clustered environment. The stanza
includes three 'domain' statements listing the file systems in the resource 
group.
The stanza includes a 'nodename' option specifying the node name that owns the
backup files from the resource group. The stanza includes an 'asnode' option
specifying the node name used to authenticate sessions from the cluster node
involved (we and the system vendor were not able to agree on an acceptable
arrangement for storing a TSM password within the resource group). This
stanza works fine for the other file systems in the same resource group, and
worked fine for /main/UT up until June 26.

I have found two ways to circumvent the problem. One circumvention is to run
the command

dsmc inc /main/UT/ -subdir=y -servername=DC1P1_MAIN

to back up the top level directory of the file system rather than the file 
system
as such. An 'lsfs' command shows nothing unusual about the file system; it is
a jfs2 file system, like all the other file systems, and uses the same mount
options as the other file systems. The other circumvention is to add an
'exclude.dir' line for a specific subdirectory of /main/UT to the 
include/exclude
file. The subdirectory came under suspicion because it was last updated a few
hours after the last fully successful backup.

The client code is TSM 6.4.1.0. The client OS is AIX 7.1. The TSM server is TSM
6.2.5.0 running under zSeries Linux.

Does anyone recognize this as a known problem? If not, does anyone have
suggestions for presenting the problem to TSM support? I am having
difficulty imagining any kind of productive interaction if I don't have a
message identifier to report.

Thomas Denier
Thomas Jefferson University Hospital


Re: New management class

2014-07-03 Thread Thomas Denier
The 'dsmc query backup' command reports management classes as part of the 
information it displays about backup files.

You can also start the GUI client, select 'Restore', and expand the part of the 
file tree you are interested in. The file characteristics shown include 
management classes. Once you have examined the files you are interested in you 
can exit without starting a restore.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-

If I move a node to a new management class (e.g. include * 15day), is there a 
message in either the server actlog or the client dsmched.log that indicates 
that backup entries have been rebound to the new class?  I'm looking for some 
indication that the change to the new management class was effective.

David


Re: bind all files except .pst to new management class?

2014-06-24 Thread Thomas Denier
I think this would have the desired effect on backup copies of files that still 
exist on the client system, including backup copies of older versions of those 
files. The proposed change would not affect management class assignments for 
backup copies of files that have been deleted from the client system.

Thomas Denier
Thomas Jefferson University Hospital

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Arbogast, Warren K
Sent: Tuesday, June 24, 2014 10:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] bind all files except .pst to new management class?

A user wishes to bind all files except .pst files to a new management class 
with more ample retention settings.  What's the best way to do this?  Is it 
sufficient to bind all .pst files to a second management class --to keep them 
from being bound to the new management class?

Something like this:
include ?:\...\*  ample-mc
include ?:\...\*.pst  meager-mc

Thank you,
Keith Arbogast
Indiana University




Re: 3584 upgrade: 3592-E05 to 3592-E07

2014-06-19 Thread Thomas Denier
I'm not sure the problem is as bad as you think. Automatic tape libraries tend 
to manage scratch
tapes first-in first-out in an effort to distribute tape wear evenly. My guess 
is that your library
will use all of the JC volumes inserted at the start of the conversion before 
using any of the JA
or JB volumes scratched after that. If you later clean out scratched JA and JB 
volumes and add
JC scratch volumes, my guess is that the new batch of JC volumes will be used 
before any JA
or JB volumes scratched after that.
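The guessed first-in first-out behavior can be pictured as a simple queue. This is a toy model of an assumption about library wear-leveling, not documented TSM internals:

```python
from collections import deque

# Scratch pool modeled as a FIFO queue: volumes checked in earlier are
# reused earlier (the assumed wear-leveling behavior).
scratch = deque()

def insert_scratch(volumes):
    scratch.extend(volumes)      # newly inserted volumes join the back

def next_scratch():
    return scratch.popleft()     # oldest scratch volume is used first

insert_scratch(["JC0001", "JC0002"])   # new JC media checked in up front
insert_scratch(["JA0001"])             # a JA volume drops to scratch later
order = [next_scratch() for _ in range(3)]
```

Under this model all of the up-front JC volumes are consumed before the later-scratched JA volume, which is the behavior the paragraph above is counting on.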

For storage pool volumes you could eject JA and JB volumes in PENDING status to 
eliminate
any worries about the volumes being reused. You could temporarily increase the 
reuse delay
to give you a bigger window of opportunity for ejecting volumes and/or maintain 
the normal
degree of readiness for restoring older versions of the TSM database.

Thomas Denier
Thomas Jefferson University Hospital   

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Thursday, June 19, 2014 12:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] 3584 upgrade: 3592-E05 to 3592-E07

Hi,

This weekend we are upgrading all of our tape drives (in 2 libraries) from 
3592-E05 (TS1120) to 3592-E07 (TS1140).

The libraries currently have a mix of JA and JB media.
We will be migrating over time to all new JC media.

JA tapes are READONLY   on 3592-E07 drives.
JB tapes are READ/WRITE on 3592-E07 drives.

The plan is to:
- Export out of the lib all current scratch JA/JB media.
- Mark all existing JA/JB media ReadOnly.
- Insert lots of new JC media.
- TSM will write to the new JC tapes.
- As JA/JB tapes drop to scratch, export them out of the
Library, and insert more JC media.

When JA/JB tapes drop to SCRATCH status, I'm not sure I will always be able to 
export them before TSM might try and reuse them.  If TSM tries to use a JB 
tape, that is OK - it will work.  But what if TSM selects a JA tape and tries 
to use it.  That would fail since
3592-E07 drives can READ JA media but NOT WRITE to them.

Q) Is TSM smart enough to not load a JA media into a 3592-E07
   drive for writing?

TSM has the media_type of all tapes (q libvol f=d), so I would think it should 
know not to do this.

Thanks

Rick






-
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.


Re: Lib client mounts and firewall timeouts.

2014-05-27 Thread Thomas Denier
-Steve Harris wrote: -

I have a situation that is causing me grief.  As part of a V5 to V6
upgrade I have implemented library managers.  These live in one part
of
the network and the library clients live in another separated by a
firewall.  The customer insists that timeouts be implemented on the
firewall for any session over 60 minutes: its a security thing for
some
reason and is non-negotiable.

At times I get a lot of mounts queued, in the past when these were
local
mounts, they would eventually resolve themselves but now they time
out
in the firewall, never complete, and I get a cascading blockage
until
the whole server grinds to a halt.

I'm told I can set resourcetimeout to less than the firewall timeout
and
that will cause the mounts to fail, but a lot of these are oracle
and
DB2 backups and they won't retry in a reasonable manner.

Yes, I could use device classes and mount limits to reserve drives, and
I
could put some stuff on disk that now goes direct to tape, but
neither
of those are palatable.

Of course the easiest thing would be to have the library clients use
keepalives on their sessions, as was added in recent versions for
NDMP
backups.  I have raised an RFE to this effect at

http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=54030

and I'd appreciate your votes.

Does anyone have bright ideas on how to proceed?  I have thought
about SSL port forwarding, but apparently bypassing the controls that
way is frowned upon. Even if the RFE gets up, it won't help me as
half of the clients are still TSM 5.5 for the next six months or so
while we cut them over.

If your TSM servers run under Linux you can use libkeepalive to
make TCP connections use keepalive packets. We also have firewalls
with a one hour timeout between our library manager and its
clients. We had the kind of problems you describe when we first
set up our current TSM environment. We have never had any
trouble with firewall timeouts since we installed
libkeepalive and set the appropriate environment variables
for the TSM server processes.
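For anyone curious what the preload wrapper arranges under the covers, here is a rough Python equivalent of enabling keepalive on a single socket. The TCP_KEEP* tuning constants are Linux-specific and the timer values are arbitrary examples, not recommendations:

```python
import socket

def keepalive_socket(idle=60, interval=10, count=5):
    """Create a TCP socket with keepalive enabled, roughly what the
    libkeepalive LD_PRELOAD wrapper does for every new connection."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Per-connection timers: start probing after 'idle' seconds of silence,
    # probe every 'interval' seconds, give up after 'count' failed probes.
    # These constants exist on Linux; other platforms differ.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return s
```

With an idle timer comfortably under the firewall's one-hour limit, the probes keep the connection-tracking entry alive.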

Thomas Denier
Thomas Jefferson University Hospital


Re: Bandwidth

2014-05-14 Thread Thomas Denier
-Tom Taylor wrote: -

Good morning,

I run TSM 6.3.4


How do I throttle bandwidth so that the clients don't choke the
network
during backups. I have already set a large window for the clients to
use,
and I am reading about client side de-duplication, and adaptive file
backup. Are these the only two avenues to reduce the bandwidth used
by
TSM?

Client compression is another possible means of reducing the amount
of data transferred.

As far as I know, TSM has no facilities for limiting the peak rate
of data transfer. Your network infrastructure might have that sort
of capability. For example, some routers can limit the rate at
which data is transferred between a specific pair of addresses.
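Router-side limiting of this kind is typically a token bucket. A minimal sketch of the idea, purely for illustration (TSM itself offers nothing like this):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: tokens accrue at a fixed rate up to a
    burst capacity, and a send is allowed only if enough tokens remain."""
    def __init__(self, rate_bytes_per_sec, burst):
        self.rate = rate_bytes_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last attempt.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True          # send now
        return False             # caller should wait and retry
```

A bucket with rate 1000 bytes/s and burst 500 allows a 400-byte send immediately, then rejects a second 400-byte send until enough tokens have refilled.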

Thomas Denier
Thomas Jefferson University Hospital


Re: How does the Backups table get updated?

2014-05-02 Thread Thomas Denier
-Rick Adamson wrote: -

TSM 6.3.4 on Windows.
Recently I found that our DB2 clients were not expiring old backups
and some systems had accumulated them for some time.
After the DBA's have corrected the situation on a particular machine
and I query the Backups table all of the objects are still reported,
even though the Occupancy table and DB2 client show the reduction.

Does anyone know what triggers the update to the backups table to
purge the old object records?

Have you run 'expire inventory' since the DB2 clients requested
deletion of old backups?

Thomas Denier
Thomas Jefferson University Hospital


Exit status from 'dsmserv restore db'

2014-04-14 Thread Thomas Denier
I have been testing disaster recovery procedures for our TSM 6.2.5.0
servers, which run under zSeries Linux. As far as I can tell, 'dsmserv'
commands with 'restore db' always end with exit status 0, even when there
are errors that prevent a successful database restore, such as inability
to mount any of the relevant database backup volumes. Is this a known
bug? Has it been fixed in a more recent release or maintenance level?

Thomas Denier,
Thomas Jefferson University Hospital


Re: Recovering Linux TSM server from partial filesystem failure

2014-03-11 Thread Thomas Denier
-Zoltan Forray wrote: -

2.  When doing postmortem on this failed server (still waiting for
results
from hardware diagnostics - my OS guy is head to the offsite location
to
check on the results and to start reinstalling the OS), I notice
this
message from my monitoring system:

3/6/2014 8:00:11 PM ANR2971E Database backup/restore/rollforward
terminated
- DB2 sqlcode -980 error.

Unfortunately, everywhere I Google sqlcode's, there is no *-980* ?
Anybody
have a better magic decoder ring to tell me what this is saying?

Google would normally treat a minus sign as a request for Web pages
not containing a specific word, so that a search for 'sqlcode -980'
would find pages containing 'sqlcode' but not '980' (with or without
a preceding sign). I don't know whether there is a way to have a
minus sign treated as part of a search term.

Thomas Denier
Thomas Jefferson University Hospital


Re: Daily surge in use of LGTMPTSP table space

2014-02-21 Thread Thomas Denier
About a month ago I reported a rapid rise in the use of the LGTMPTSP
table space on one of our Version 6 TSM servers. We have since discovered
a connection with client behavior. The surge in table space usage starts
when one of the Windows clients starts backing up its E drive, which
contains about 20 million files. About 15 minutes into the surge another
Windows client starts backing up a drive with about 5 million files. We
ended up adding about 16 GB of space to the file system used for the
LGTMPTSP table space. This has so far been sufficient to prevent any more
server crashes. The amount of additional space was chosen because we have
historically added database space in 16 GB increments; IBM declined to
offer any advice on estimating the amount of space we needed to add.
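Lacking a sizing formula from IBM, a crude watchdog on the file system holding the table space at least gives warning before a crash. A sketch, assuming you know the mount point; the 16 GB threshold is just the increment mentioned above, not a recommendation:

```python
import shutil

def free_gb(path):
    """Free space in GiB on the file system containing 'path'."""
    return shutil.disk_usage(path).free / 2**30

def low_space(path, threshold_gb=16.0):
    """True if free space has dropped below the safety margin added
    for LGTMPTSP growth (threshold is a placeholder value)."""
    return free_gb(path) < threshold_gb
```

Run periodically against the table space's mount point, this would flag the nightly surge before the file system fills.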

Thomas Denier
Thomas Jefferson University Hospital


Daily surge in use of LGTMPTSP table space

2014-01-20 Thread Thomas Denier
On one of our Version 6 TSM servers the amount of space used by the
LGTMPTSP table space is nearly constant for most of the day, but rises
sharply starting around 8:30 PM and drops back to normal by 9:30 PM.
The increase in usage has ranged from 7 to 15 GB lately. On some days
the file system containing that table space fills up and the server
crashes (even though it has substantial amounts of free space in other
database file systems).

We rarely have any processes running at the time of the surge. We have
client backups running at that time, but have not been able to find any
unusual client activity that is correlated with the surge. We are not
using deduplication.

The TSM server is at the 6.2.5.0 code level and runs under SLES 11 SP1
on zSeries hardware.

We opened a problem ticket with IBM earlier today, and have just
received a request for an assortment of log files.

Thomas Denier
Thomas Jefferson University Hospital


TSM server maintenance and shell profile changes

2013-12-05 Thread Thomas Denier
We have a number of Version 6 TSM servers running under zSeries Linux.
These were installed with 6.2.2.0 server code and subsequently upgraded
to 6.2.5.0. 

We had 'backup stgpool' processes hang one day last week. The hung
processes had to wait a couple of hours for tape drives to become
available, but did not move any data even when tape drives became
available. The processes were on a library manager client, and
there is a firewall between that TSM server and the library manager.
The TCP connections requesting tape drives for the 'backup stgpool'
processes were severed by the firewall because of inactivity timeouts.

The timeouts should have been prevented by the line:

export LD_PRELOAD=/usr/lib/libkeepalive.so

in the .profile file for the instance user. This line would have
forced the use of a wrapper for the routine that opens TCP
connections. The wrapper would have enabled the transmission of
keep-alive packets over otherwise idle connections. The 'export' was
missing.
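A quick sanity check after any server maintenance is to confirm that the instance user's environment still carries the preload. A small sketch; the library path is the one from our .profile:

```python
import os

def keepalive_preloaded(env=None):
    """True if the environment carries the libkeepalive preload
    (e.g. LD_PRELOAD=/usr/lib/libkeepalive.so)."""
    env = os.environ if env is None else env
    return "libkeepalive.so" in env.get("LD_PRELOAD", "")
```

Running this (or an equivalent one-liner) as the instance user right after an upgrade would have caught the missing 'export' immediately.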

Our local documentation for the TSM configuration indicates that the
'export' was originally placed after the lines introduced with the
comment:

# The following three lines have been added by IBM DB2 instance utilities.

Is it possible that the 6.2.5.0 installer removed everything after that
comment (including the 'export') and wrote replacements for the lines
added by the 6.2.2.0 installer, but did not replace the 'export'?

Thomas Denier
Thomas Jefferson University Hospital


Windows installer error

2013-11-19 Thread Thomas Denier
I just had Windows 7 installed on my workstation, which was previously
running Windows XP. The installation process did not preserve the TSM
client software I had installed under Windows XP. Attempts to install the
client software under Windows 7 fail with the following message:

The operating system is not adequate for running IBM Tivoli Storage
Manager Client.

When the failure occurs I am offered the opportunity to view the
installer log, which contains the following message:

Error 2732: Directory Manager not initialized.

I have been assured that I have administrative rights for the
workstation, and I was able to install and use the viewer component of
TSMManager and the Google Chrome browser earlier today. The error occurs
regardless of whether I specify 'Open' or 'Run as administrator' when
I start the installation. The error occurs for the 6.2.1.0, 6.2.2.0,
and 6.2.4.0 levels of the 32 bit client. Removing the folder used for
files created by unpacking the distribution file does not help.
Rebooting does not help.

I would welcome any suggestions for resolving this problem.

Thomas Denier
Thomas Jefferson University Hospital


Re: Windows installer error

2013-11-19 Thread Thomas Denier
-Jim Schneider wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Schneider, Jim 
Sent by: ADSM: Dist Stor Manager 
Date: 11/19/2013 14:34
Subject: Re: [ADSM-L] Windows installer error


I think Windows 7 is a 64-bit operating system.

Jim

It is apparently available in both 32 and 64 bit versions, since
Microsoft provides instructions for finding out whether a
Windows 7 system is 32 or 64 bit. Following these instructions
revealed that the copy on my workstation is 64 bit. I was
able to install the 64 bit 6.2.4.0 client successfully.
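For what it's worth, any process can check its own bitness from its pointer size. Note this reports the process, not necessarily the OS: a 32 bit program on 64 bit Windows still sees 4-byte pointers.

```python
import struct
import sys

# 4-byte pointers -> 32 bit process, 8-byte pointers -> 64 bit process.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit process on {sys.platform}")
```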

Thank you for your assistance.

Thomas Denier
Thomas Jefferson University Hospital


Re: help - tsm vg upgrade test

2013-09-11 Thread Thomas Denier
-Richard Rhodes wrote: -

  Then . . . get this ERROR:
 Beginning database preparation...
 sh[129]: /usr/tivoli/tsm/upgrade/bin/dsmupgrd:  not found.
 Preparation completed with return code 499

It can't find /usr/tivoli/tsm/upgrade/bin/dsmupgrd, which exists:
  lparB:/home/root==ls -ld /usr/tivoli/tsm/upgrade/bin/dsmupgrd
  -rwxr-xr-x1 root system 21221457 Oct 24 2011
/usr/tivoli/tsm/upgrade/bin/dsmupgrd

I remember once seeing a message like this because the interpreter
named in the first line of the script did not exist. The situation
was something like the following:

The first line of the script was

#! /bin/sh

and the system did not in fact have a binary named 'sh' in /bin.

Thomas Denier,
Thomas Jefferson University Hospital


Re: DR with automated library

2013-08-13 Thread Thomas Denier
-David Ehresman wrote: -

Has anyone successfully done a DR (test) using an automated library
(as opposed to defining a manual library)?  If so, what are the
needed changes to devconfig?

We used to do this successfully with a Version 5 server. The
original server had an IBM 3494 library. The replacement
configuration at our DR site changed over the years. We
initially got a 3494, and later got a TS3500. We created
a device configuration file from scratch based on the
special file names for tape and library devices and
on the results of queries issued with tape utility
programs.

I have never had occasion to try this with our Version 6
servers; our tape library is several miles from either of
the TSM servers. I gather that the Version 6 device
configuration file contains information about the state of
the database that cannot be created from scratch.

As far as I can tell from our Version 5 DR instructions,
the changes needed for a pre-existing device configuration
file would be as follows:

1.If changing from 3494 to SCSI library or vice versa,
replace the 'DEFINE LIBRARY' command.

2.If changing library type or keeping the same library
type but changing the special file name for the library,
update the 'DEFINE PATH' command for the library.

3.If the DR site has fewer tape drives than the original
configuration, remove excess 'DEFINE DRIVE' and 'DEFINE
PATH' commands.

4.If the DR site has a SCSI library, add or modify
'element=' operands in the 'DEFINE DRIVE' commands to
match the DR site drive locations.

5.If the DR site has different special file names for
tape drives, update the 'DEFINE PATH' commands for
the drives.

6.If the DR site has a SCSI library, add or modify
'/* LIBRARYINVENTORY ...' lines to match the volume
locations in the replacement library.
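Steps like number 5 lend themselves to scripting. A sketch that retargets the DEVICE= operand on DEFINE PATH lines; the old-to-new device-name mapping is hypothetical:

```python
import re

# Hypothetical mapping from original special file names to the DR site's.
DR_DEVICES = {"/dev/rmt0": "/dev/rmt4", "/dev/rmt1": "/dev/rmt5"}

DEFINE_PATH = re.compile(r"\s*DEFINE\s+PATH\b", re.IGNORECASE)

def retarget_paths(lines, mapping=DR_DEVICES):
    """Rewrite DEVICE= operands on DEFINE PATH lines of a device
    configuration file; all other lines pass through unchanged."""
    out = []
    for line in lines:
        if DEFINE_PATH.match(line):
            for old, new in mapping.items():
                line = line.replace(f"DEVICE={old}", f"DEVICE={new}")
        out.append(line)
    return out
```

The same pattern extends to the other edits in the list (dropping excess DEFINE DRIVE commands, updating element= operands), with the caveat noted above that a Version 6 device configuration file cannot simply be built from scratch.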

Thomas Denier
Thomas Jefferson University Hospital


DR tests with multiple TSM servers

2013-07-31 Thread Thomas Denier
Until fairly recently my employer has had one production TSM server with
a DR plan involving a commercial hot site provider. We now have multiple
production servers at multiple locations and a DR plan using our own
facilities. We think we understand how recovery from a real disaster would
work, but we are having trouble figuring out how to run DR tests in the
new environment.

We have two IBM z10 systems in different locations. Each has a Linux
image for hosting the production TSM instance or instances for its
location. One site has a single TSM instance for storing client data.
The other site has a client data storage instance, which also serves
as a configuration manager for both sites, and a small instance
configured as a library manager.

Each site has a primary pool on sequential disk files and a copy pool
on tape volumes in a shared tape library at a third location.

We make extensive use of server to server communications. I have already
mentioned configuration management and library sharing. We use virtual
volumes to send recovery plans between locations and to send library
manager database backups to the other location. We have command routing
between any pair of servers available for the convenience of the TSM
administrators.

Our DR plan involves a standby Linux image at each location. Each
standby image will have empty versions of the instance or instances from
the other location installed and ready for database restores.

We would like to be able to test the database restore process while
all the production instances are active. We are prepared to suspend
normal tape activity during DR tests. We would like to be able to
run test client restores from the recreated instances.

We are a bit nervous about the idea of two TSM instances on different
Linux images with the same server name and with both configured for
communications with other servers.

One of the options we are considering is to execute a 'set
servername' command as soon as possible after a TSM database restore
to eliminate the server name collisions as quickly as possible. We
have already thought of several complications that would result from
this approach. We would need to execute some 'define server' commands.
We would need to change the ownership of tape volumes used during
tests. In some cases we would need to update a device configuration
file to support a TSM database restore using a renamed library
manager and then update the library definition in the restored
database.

We would appreciate any advice or warnings from TSM administrators
who have run DR tests in environments similar to ours.

Thomas Denier
Thomas Jefferson University Hospital


Windows client memory leak

2013-07-26 Thread Thomas Denier
We have a number of Windows systems with TSM 6.2.2.0 client code. It is
by now well known that the client scheduler service at this level retains
the maximum amount of memory needed during the backup window through
intervals between backup windows.

We have installed 6.2.4.0 client code on a few of our Windows systems.
This eliminates the behavior described above, but our Windows administrators
are reporting evidence of a memory leak. The amount of memory used by the
client scheduler service when it is waiting for the next backup to start
seems to grow by about 10 KB per day. Are other sites seeing this? Is
there an available service level with a fix for this, or a prediction
for a fix in a future service level?

Thomas Denier
Thomas Jefferson University Hospital


Re: Rename Node and Rename Filespace

2013-07-09 Thread Thomas Denier
-George Huebschman wrote: -

I want to verify that I understand node and filespace renaming
correctly:

I need to rename a node that is backing up under the wrong name;
Linux_SRV2 has been backing up as Linux_SRV1.
Ultimately I need both SRV1 and SRV2.
I don't want to lose the data already backed up when I start the
actual
SRV1 backing up as SRV1.
I don't want to orphaned filespaces out there either, meaning that I
want
to avoid three sets of filespaces; New SRV1, New SRV2, and Old
SRV2/1

I recall having to rename a node about 6 years ago.  It was a little
different, I just wanted to rename the node, I wasn't playing
musical
chairs with name, and it was Windows and NAS, not Linux.
In that case I  ended up with filespaces for the old and new client
names.
I think in that case, I renamed the client from the dsm.opt file
(Windows) without renaming the node at the TSM server.  Then backups
occurred and it was too late to rename the filespaces.

In reading this thread,
 http://adsm.org/forum/showthread.php?22687-Renaming-a-node ,
I see that Chad Small states that the filespaces stay with the node
when
it is renamed.

 - So...all should be well if I rename the node at the server and at
the
client in a timely fashion.  I do not need to rename the filespaces
at
all.

I have never needed to rename file spaces when renaming Unix or Linux
nodes. I frequently do need to rename filespaces when renaming
Windows nodes. The Windows machine name is embedded in many of the file
space names. If the node name change is associated with a machine name
change (the usual situation at our site) the file spaces should
be renamed to match the new machine name. I have no experience with
renaming NAS nodes.

Thomas Denier
Thomas Jefferson University Hospital


Re: select question on backups table

2013-05-07 Thread Thomas Denier
-Gary Lee wrote: -

Tsm server 5.5.4.

Windows client 6.3.0
Will the following select show me all backups at and below the
designated subdirectory?

select * from backups where node_name='CASTSTORAGE' AND -
upper(hl_NAME) like '\BIOMECHANICS\DICKIN\FATIGUE STUDY\FATIGUE
STUDY'

If not, where have I gone wrong?

Selects against the Version 5 'backups' table tend to be very
time consuming. It would almost certainly be faster to use client
facilities to find the backup files. Note that a TSM administer
with system privilege can do this from a Windows system other
than the one the files came from.

Thomas Denier
Thomas Jefferson University Hospital


Re: SELECT statements and column widths

2013-04-17 Thread Thomas Denier
-Neil Schofield wrote: -

The fact that only the FILESPACE_NAME column is twice as wide as it
should be makes me wonder if it's a bug?

It probably has something to do with the possible occurrence of
Unicode characters outside the ASCII character set in filespace
names.

Thomas Denier
Thomas Jefferson University Hospital

Re: TS3500/3584 managed by Linux host

2013-04-15 Thread Thomas Denier
-Zoltan Forray wrote: -

I am trying to prepare for the move from our 3494 to a TS3500/3584.

With the 3494, it needed port 3494 open on the host system that was
the
library manager, for the ibmatl program to function.

What ports are needed open through the firewall of the Linux servers
that
are library managers (is it as simple as 3584?)?  How about for AME?
Any
additional ports needed for this function from the TS3500 to each
library
manager server (I assume the keys are only stored with the server
that
manages the tapes, not each server that uses a tape - or is this a
bad/wrong assumption?)

A TS3500 is either a relabeled 3584 or a compatible successor to
the 3584; different Web sites give conflicting information on this
point.

All communications between the TSM server code and a TS3500 travel
over the SAN connections between the two.

Thomas Denier
Thomas Jefferson University Hospital

Re: Moving TSM server from AIX to Linux

2013-04-04 Thread Thomas Denier
Was the Solaris system using SPARC processors? Solaris on SPARC
and Linux on IBM mainframe share the big-endian byte order. I
would be much less optimistic about a database backup and
restore involving platforms with opposite byte order, such as
a backup from AIX and a restore to Linux on x86.
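Checking which side of the byte-order divide a host sits on takes one line; the assertion just demonstrates what "little-endian" means for the in-memory encoding of an integer:

```python
import struct
import sys

# A DB restore across platforms with opposite byte order (e.g. big-endian
# zSeries to little-endian x86) is the risky case flagged above.
native = sys.byteorder                 # 'little' or 'big'
sample = struct.pack("=I", 1)          # native-order encoding of integer 1
assert (sample[0] == 1) == (native == "little")
print(f"This host is {native}-endian")
```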

Thomas Denier
Thomas Jefferson University Hospital

-Gary Lee wrote: -

From: Lee, Gary 

Haven't tried it with v6, but that's how I moved a server from
solaris to suse linux.  The linux was running as a guest under VM on
our mainframe.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
Of James Choate

Is it really as easy as just installing a new server on the new
platform and then restoring a db?

I wasn't aware that you could restore a db across platforms.  I
thought you would have to export/import the nodes out of the AIX
server into the Linux server.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
Of Lee, Gary

Shouldn't be a big deal.

However, tougher with v6 than v5.

You should be able to restore a db backup after installing the new
server.

Then, define storage pools, tape drives, and libraries as necessary.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
Of Troy A Cross

I've been tasked with the project to move our TSM server from AIX to
Linux (prefer RedHat or CentOS).

Any pointers?

Thanks, Troy

Server upgrade oddities

2013-03-13 Thread Thomas Denier
Yesterday I upgraded the TSM server code on one of our zSeries Linux
systems from 6.2.2.0 to 6.2.5.0. There were two exciting moments that
didn't occur when I performed the same upgrade in a test environment.

The first exciting moment occurred while I was running './install.bin -i
console' to install the updated server code. I accepted the default
locale and selected the server as the only product to be installed. I
eventually saw the pre-installation summary and pressed the 'Enter' key
to proceed. Some minutes later a long list of subroutine names appeared
on the screen. I scrolled the screen buffer back to the top of the list
and discovered a message stating that malloc had discovered corrupted
data in memory. Many minutes after that the installer reported that it
had successfully installed the TSM server, DB2, and the TSM client API.
Is the reported memory corruption a known problem? Is it reasonable to
take the claim of successful installation at face value?

The system hosts two TSM server instances: a library manager and an
instance for storing client files. The library manager was started in
the foreground, halted, and restarted in the background without incident.
The file storage instance was started in the foreground and completed
initialization with no obvious signs of trouble. When I halted the
instance it apparently went into a loop that consumed most of the CPU
capacity. A default 'kill' signal had no effect. When the instance had
used more than five minutes of CPU time I reluctantly resorted to the
'kill -9' command. I was then able to restart the instance in the
background. Is looping after a halt request a known problem for the
6.2.5.0 server level?

Thomas Denier,
Thomas Jefferson University Hospital

6.2.5 server level

2013-02-15 Thread Thomas Denier
We are preparing to upgrade our TSM 6.2.2.0 servers. These run under zSeries 
Linux.
For the time being we are stuck at some maintenance level of the 6.2 server code
because we have Linux clients supported by 5.5 client code but not by Version 6
client code. We had been thinking of upgrading to 6.2.4.0 server code. I notice
that 6.2.5.0 has been available for a couple of months. Has anyone had 
strikingly
good or bad experiences with the 6.2.5.0 server code level?

Thomas Denier
Thomas Jefferson University Hospital 

Re: Tape library possible replacement - push/pull

2013-02-04 Thread Thomas Denier
-Zoltan Forray wrote: -

As this project moves forward, I am doing more and more research
since I am
unfamiliar with this beast (3494 has been my baby since it was
installed in
1995).

The book talks about *to communicate with a server, the IBM System
Storage
TS3500 Tape Library uses a Fibre Channel interface (also called a
port).*

Why?  What is this fibre channel interface used for?   The 3494 use
IP to
communicate/manage the library and the drives inside the library used
fibre
plus an RS422 connection to the library manager.

FCP has a two level addressing structure: each storage device port is
identified by a WWPN (World Wide Port Name), and each device
supported by a given port is identified by a LUN (Logical Unit
Number). On a disk array, a LUN identifies a block of disk storage
that host systems will treat as if it were a physical disk drive.
Each tape drive in a TS3500 has a LUN used for drive operations.
At least one drive will be configured as a control path. A drive
configured that way will have a second LUN. Commands and data sent
to the second LUN will be forwarded to the library manager firmware.
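The two-level addressing can be pictured as a small mapping. The WWPNs below are made up, and the presence of a second LUN is what marks a drive configured as a control path:

```python
# Toy model of FCP addressing: a WWPN identifies a port, and LUNs identify
# the devices reachable behind it. All values here are invented.
ts3500 = {
    "50:05:07:63:00:c0:12:34": {          # drive configured as control path
        0: "tape drive (data path)",
        1: "library control path (media changer)",
    },
    "50:05:07:63:00:c0:12:35": {          # ordinary drive, data path only
        0: "tape drive (data path)",
    },
}

def is_control_path(library, wwpn):
    """A drive acting as a control path exposes a second LUN."""
    return len(library.get(wwpn, {})) > 1
```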

Thomas Denier
Thomas Jefferson University Hospital 

Admin_center account

2013-01-30 Thread Thomas Denier
Our TSM servers have an Admin_center administrator account that was created
automatically when the server software was installed. The account is intended
for use with the ISC (Integrated Solutions Console), which we have never
installed. Even though the account is locked, I have been getting questions
about it in connection with a security audit. Is there any reason not to
remove the account? Would there be any particular difficulty in recreating
the account in the unlikely event that we install ISC someday? Is there any
risk that IBM would use removal of the account as a pretext for refusing to
support our TSM installation?

Thomas Denier
Thomas Jefferson University Hospital

Re: Tape library possible replacement - push/pull

2013-01-25 Thread Thomas Denier
-Zoltan Forray wrote: -

We are starting to examine the idea of replacing our aging 3494 (installed
in 1995) due to difficulty in getting parts and more frequent outages due
to robot issues.

Since we have 17 TS1130 drives, the natural transition would be to a
TS3500/3584.

If we go down this route, one issue might be the floorspace/placement,
which might require a weekend eject-push-pull-reload.

Never having dealt with any tape library other than the 3494, I am trying
to collect as much info as I can on doing a swap-out.

Is it as simple as defining the new library, moving the drives/paths to
it, changing the devclass to point to the new library, and checking in
everything (scratch/private)? Am I missing anything?

TS3500 libraries are often equipped with a sort of virtualization
facility, called ALMS, that allows setting up one or more logical
libraries within a physical TS3500. If you end up with this facility
installed, you will have to deal with a two-tier checkin process:
getting volumes into the right logical library, and then getting
them recognized by TSM. I think you can configure the library so
that it associates volume number ranges with specific logical
libraries and automatically puts newly inserted volumes into the
appropriate logical libraries.
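If memory serves, the TS3500 calls this a cartridge assignment policy. Conceptually it is just a range lookup, something like the sketch below; the volume-serial ranges and logical library names are invented for illustration:

```python
# Sketch of range-based assignment of newly inserted volumes to logical
# libraries (the TS3500 "cartridge assignment policy" idea). The ranges
# and library names here are made up.

ASSIGNMENTS = [
    ("TSM000", "TSM499", "PROD_TSM"),
    ("TSM500", "TSM599", "TEST_TSM"),
]

def logical_library_for(volser):
    """Return the logical library a newly inserted volume lands in,
    or None if no range matches (an operator must assign it manually)."""
    for low, high, library in ASSIGNMENTS:
        if low <= volser <= high:
            return library
    return None
```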

If you use the tape drive encryption capabilities, you will need
to use the TS3500 management interface to enable encryption. My
experience with TS3500s and encryption involves application
managed encryption. I don't know whether there are any additional
complications if you use library managed encryption or system
managed encryption.

Thomas Denier
Thomas Jefferson University Hospital

TS3500 test environment

2013-01-09 Thread Thomas Denier
I am in the process of setting up a test environment for our Version 6
TSM servers. The servers run under zSeries Linux and are currently at
the 6.2.2.0 level. The main motivation for setting up a test environment
at this time is to support an upgrade in the near future.

Our production environment uses a TS3500 tape library with the ALMS
feature. We will need tape infrastructure available to the test
environment for some tests, but we do not wish to have any tape drives
permanently attached to the test environment. Is it possible to have a
logical library remain in existence with tape volumes assigned but with
no tape drives assigned?

Thomas Denier
Thomas Jefferson University

Deduplication candidates

2013-01-09 Thread Thomas Denier
We are evaluating the potential usefulness of TSM deduplication on our
Version 6 TSM servers. The servers run under zSeries Linux and are
currently at the 6.2.2.0 level. We would not start using deduplication
without upgrading to 6.2.4.0 or higher.

Parts of our workload, such as database backups sent to the servers using
the TSM API, are clearly good candidates for deduplication. Other parts
of the workload, such as large collections of medical images, are
clearly poor candidates. However, there are two major groups of files we
are not sure about.

Most of our Oracle servers regularly write Oracle exports to local disk.
Each set of exports is subsequently backed up using the backup/archive
client. Are exports of a given database done on different days likely to
have significant numbers of matching blocks?

All of our SQL Server hosts perform daily dumps (as our DBAs call them)
or backups (as the vendor now prefers to call them) of database contents
to local disk files. The output files are subsequently backed up using
the backup/archive client. Currently, the database contents are
compressed in most cases. If we could persuade the DBAs to give up the
compression, would dumps of a given database done on different days be
likely to have significant numbers of matching blocks?
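One way to get a feel for the compression question is a quick experiment: hash fixed-size blocks of two near-identical "daily dumps", once raw and once compressed. Note that TSM's server-side deduplication actually uses variable-length chunking, so this fixed-block sketch only illustrates the general effect of compression on duplicate detection, not the product's real behavior:

```python
import hashlib
import random
import zlib

BLOCK = 4096

def block_hashes(data):
    # Hash fixed-size blocks (a simplification; TSM chunks variably).
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def matching_blocks(a, b):
    # Count positions where both streams have an identical block.
    return sum(x == y for x, y in zip(block_hashes(a), block_hashes(b)))

# Two "daily dumps" that differ in a single same-length record.
random.seed(1)
words = ["alpha", "gamma", "delta", "theta"]   # all the same length
records = [random.choice(words) for _ in range(20000)]
day1 = " ".join(records).encode()
records[10000] = "omega"                       # one changed record
day2 = " ".join(records).encode()

raw = matching_blocks(day1, day2)
comp = matching_blocks(zlib.compress(day1), zlib.compress(day2))
# Raw: every block except the one containing the change still matches.
# Compressed: the streams diverge from the change point onward, so far
# fewer blocks match -- which is why compressing before backup tends to
# defeat block-level deduplication.
```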

Thomas Denier
Thomas Jefferson University Hospital

Re: TS3500 test environment

2013-01-09 Thread Thomas Denier
I agree that we would not be able to retain a control path to
the partition when all the tape drives were gone, but that would
not in itself be a problem at times when we were not performing
tests involving tape activity. The question is whether the TS3500
firmware would allow us to remove all the drives and the control
path from a partition with tape volumes, leaving the partition
unusable until we brought back at least one tape drive.

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Paul Zarnowski 
Sent by: ADSM: Dist Stor Manager 
Date: 01/09/2013 11:56
Subject: Re: [ADSM-L] TS3500 test environment


On TS3500s, I believe the library control interface for a partition
is assigned to one of the tape drives in that partition.  Thus, with
no drives, there can be no control of the partition.  I believe the
answer is that this is not possible - you must have at least one
drive assigned to each partition.

..Paul

At 11:25 AM 1/9/2013, Thomas Denier wrote:
I am in the process of setting up a test environment for our Version 6
TSM servers. The servers run under zSeries Linux and are currently at
the 6.2.2.0 level. The main motivation for setting up a test environment
at this time is to support an upgrade in the near future.

Our production environment uses a TS3500 tape library with the ALMS
feature. We will need tape infrastructure available to the test
environment for some tests, but we do not wish to have any tape drives
permanently attached to the test environment. Is it possible to have a
logical library remain in existence with tape volumes assigned but with
no tape drives assigned?

Thomas Denier
Thomas Jefferson University


--
Paul Zarnowski                          Ph: 607-255-4757
CIT Infrastructure / Storage Services   Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu
