Aw: Re: [ADSM-L] TSM Server on RHEL Cluster

2017-11-10 Thread John Keyes
I think all of this mentions Tivoli Automation...

But I just found the solution. In //sqllib/db2nodes.cfg the 
hostname is configured. I replaced it with a symlink to a locally stored and 
correctly configured db2nodes.cfg. Now TSM starts without problems on each 
server.
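
The fix can be sketched in a couple of shell commands. The paths below are illustrative stand-ins (on a real instance the file lives under the instance owner's sqllib directory); the point is that each cluster node resolves db2nodes.cfg to a local copy naming its own hostname:

```shell
# Illustrative sketch of the db2nodes.cfg symlink fix. Temp dirs stand
# in for <instance home>/sqllib and a node-local config directory.
SQLLIB=$(mktemp -d)
LOCALCFG=$(mktemp -d)

# db2nodes.cfg format: <node-number> <hostname> <logical-port>.
# A correctly configured local copy names this host:
printf '0 %s 0\n' "$(hostname)" > "$LOCALCFG/db2nodes.cfg"

# Replace the shared copy with a symlink to the local one, so the file
# always names whichever node the instance is starting on:
ln -sf "$LOCALCFG/db2nodes.cfg" "$SQLLIB/db2nodes.cfg"

cat "$SQLLIB/db2nodes.cfg"
```

On the real cluster the same symlink would be created once per node, each node keeping its own local db2nodes.cfg.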

Kind Regards,
John

> Sent: Friday, 10 November 2017 at 14:21
> From: "Marc Lanteigne" 
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM Server on RHEL Cluster
>
> Hi John,
>
> I have not done it myself, but it appears supported in a cluster:
> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.3/srv.admin/c_cluster_linux.html
>
>
> -
> Thanks,
> Marc...
> 
> Marc Lanteigne
> Accelerated Value Specialist for Spectrum Protect
> 416.478.0233 | marclantei...@ca.ibm.com
> Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern
>
> Follow me on: Twitter, developerWorks, LinkedIn
>
>
> -Original Message-
> From: John Keyes [mailto:ra...@gmx.net]
> Sent: Friday, November 10, 2017 8:26 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] TSM Server on RHEL Cluster
>
> Hello,
>
> I am trying to "clusterfy" a client's TSM 6 server setup. I know IBM doesn't
> directly support this, but it should still be possible.
> I now have the same version of everything installed and a working RHEL
> cluster setup.
> The TSM installation in /opt is local on each server, but the instance
> home, database, logs and storage are on shared storage. I configured a
> new (isolated) instance with the same properties (user, name, directories)
> as the original instance, so that everything should be in the right place
> when I mount the shared storage.
> However, something is off, and every time I try to start the instance, it
> seems that TSM can't find the DB2 database.
>
> Has anyone ever tried something similar and can maybe provide some
> insight?
>
> Kind Regards,
> John
>


Fw: TSM Server on RHEL Cluster

2017-11-10 Thread John Keyes
And I forgot to attach the error message:
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR7801I Subsystem process ID is 22482.
ANR0900I Processing options file /tsminst1/cfg/dsmserv.opt.
ANR7814I Using instance directory /tsminst1/cfg.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR3336W Default certificate labeled TSM Server SelfSigned Key in key data base 
is down level.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0151W Database manager fails to start. For more information about the 
failure, issue the db2start command.
ANR0171I dbiconn.c(1916): Error detected on 0:1, database in evaluation mode.
ANR0169E An unexpected error has occurred and the TSM server is stopping.
ANR0162W Supplemental database diagnostic information:  -1:58031:-1031 
([IBM][CLI Driver] SQL1031N  The database directory cannot be found on the 
indicated file system.  SQLSTATE=58031
).

Transaction hash table contents (slots=256):
  *** no transactions found ***

Lock hash table contents (slots=3002):
Note: Enabling trace class TMTIMER will provide additional timing info on the 
following locks
  *** no locks found ***


> Hello,
>
> I am trying to "clusterfy" a client's TSM 6 server setup. I know IBM doesn't 
> directly support this, but it should still be possible.
> I now have the same version of everything installed and a working RHEL cluster 
> setup.
> The TSM installation in /opt is local on each server, but the instance home, 
> database, logs and storage are on shared storage. I configured a new 
> (isolated) instance with the same properties (user, name, directories) as the 
> original instance, so that everything should be in the right place when I 
> mount the shared storage.
> However, something is off, and every time I try to start the instance, it seems 
> that TSM can't find the DB2 database.
>
> Has anyone ever tried something similar and can maybe provide some insight?
>
> Kind Regards,
> John
>


Fw: TSM Server on RHEL Cluster

2017-11-10 Thread John Keyes
> Sent: Friday, 10 November 2017 at 13:24
> From: "John Keyes" 
> To: ADSM-L@VM.MARIST.EDU
> Subject: TSM Server on RHEL Cluster
>
> Hello,
>
> I am trying to "clusterfy" a client's TSM 6 server setup. I know IBM doesn't 
> directly support this, but it should still be possible.
> I now have the same version of everything installed and a working RHEL cluster 
> setup.
> The TSM installation in /opt is local on each server, but the instance home, 
> database, logs and storage are on shared storage. I configured a new 
> (isolated) instance with the same properties (user, name, directories) as the 
> original instance, so that everything should be in the right place when I 
> mount the shared storage.
> However, something is off, and every time I try to start the instance, it seems 
> that TSM can't find the DB2 database.
>
> Has anyone ever tried something similar and can maybe provide some insight?
>
> Kind Regards,
> John
>


TSM Server on RHEL Cluster

2017-11-10 Thread John Keyes
Hello,

I am trying to "clusterfy" a client's TSM 6 server setup. I know IBM doesn't 
directly support this, but it should still be possible.
I now have the same version of everything installed and a working RHEL cluster 
setup.
The TSM installation in /opt is local on each server, but the instance home, 
database, logs and storage are on shared storage. I configured a new 
(isolated) instance with the same properties (user, name, directories) as the 
original instance, so that everything should be in the right place when I 
mount the shared storage.
However, something is off, and every time I try to start the instance, it seems 
that TSM can't find the DB2 database.

Has anyone ever tried something similar and can maybe provide some insight?

Kind Regards,
John


Aw: Re: [ADSM-L] TSM 6 to 7 Upgrade questions

2017-10-19 Thread John Keyes
Thank you, that's what I needed to know.

Best Regards,
John

> Sent: Thursday, 19 October 2017 at 12:56
> From: "Marc Lanteigne" 
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM 6 to 7 Upgrade questions
>
> Just to clarify this statement: “Depending on old OS and new OS, you may want 
> to upgrade before or after the move.”
> 
>  
> 
> You will want to move from v6 to v6, then upgrade to v7
> 
> Or upgrade to v7, and move from v7.
> 
>  
> 
> I have seen cases where the new OS didn’t support v6, so they upgrade on the 
> old machine to v7 first, then migrate to a new machine running V7.  Also seen 
> cases where the old OS didn’t support v7, so move from v6 to v6 on the new 
> machine, then upgrade to v7.
> 
>  
> 
> Because the upgrade and the move are two separate tasks, they don’t need to 
> happen at the same time.  The order will depend on the environment.
> 
>  
> 
> -
> 
> Thanks,
> Marc...
> 
> 
> 
> Marc Lanteigne
> 
> Accelerated Value Specialist for Spectrum Protect
> 
>  
> 
>  
> 
> From: Marc Lanteigne [mailto:marclantei...@ca.ibm.com] 
> Sent: Thursday, October 19, 2017 7:42 AM
> To: ADSM: Dist Stor Manager 
> Cc: ADSM-L 
> Subject: Re: [ADSM-L] TSM 6 to 7 Upgrade questions
> 
>  
> 
> Hi John,
> 
>  
> 
> You are actually doing two separate tasks.  Moving to a different machine and 
> upgrading the software.  You have to do one before the other.  Depending on 
> old OS and new OS, you may want to upgrade before or after the move.  
> 
>  
> 
> The actual move is very similar to a DR exercise.  At a very high level:
> 
> - install OS
> 
> - install Spectrum Protect 
> 
> - restore DB
> 
> - swing storage from old to new server
> 
>  
> 
> Marc...
> 
> 
> Sent from my iPhone using IBM Verse
> 
>   _  
> 
> On Oct 19, 2017, 7:19:13 AM, ra...@gmx.net <mailto:ra...@gmx.net>  wrote:
> 
> From: ra...@gmx.net <mailto:ra...@gmx.net> 
> To: ADSM-L@VM.MARIST.EDU <mailto:ADSM-L@VM.MARIST.EDU> 
> Cc: 
> Date: Oct 19, 2017, 7:19:13 AM
> Subject: [ADSM-L] TSM 6 to 7 Upgrade questions
> 
> Hello,
> I'm planning to upgrade a client's TSM 6.3 server to TSM 7.1 on new hardware. 
> However, the documentation I found 
> (https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/srv.install/t_srv_upgrade63_71.html)
>  only describes an in-place upgrade, but I need to move to new 
> hardware. Is there a way to extract the database and import it on the new 
> system, as there was for the TSM 5 to 6 upgrade? Can I use the v5 dsmupgrd 
> utilities to extract the TSM 6 database to tape, and then TSM 7 dsmserv 
> insertdb to import the database again?
> Best Regards,
> John
>


TSM 6 to 7 Upgrade questions

2017-10-19 Thread John Keyes
Hello,

I'm planning to upgrade a client's TSM 6.3 server to TSM 7.1 on new hardware. 
However, the documentation I found 
(https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/srv.install/t_srv_upgrade63_71.html)
 only describes an in-place upgrade, but I need to move to new 
hardware. Is there a way to extract the database and import it on the new 
system, as there was for the TSM 5 to 6 upgrade? Can I use the v5 dsmupgrd utilities 
to extract the TSM 6 database to tape, and then TSM 7 dsmserv insertdb to 
import the database again?


Best Regards,
John
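
The move Marc outlines in his replies (install on the new box, restore the DB, swing the storage) might be driven roughly as follows; the device class name, instance path, and credentials are placeholders, not details from the thread, and the commands are echoed rather than executed since they need live servers:

```shell
# Hypothetical command outline for the cross-machine move.
# On the old server: take a full DB backup, keeping the volume history
# and device configuration files with it.
BACKUP_CMD='dsmadmc -id=admin -pa=*** "backup db devclass=FILECLASS type=full"'

# On the new server: same server level installed, dsmserv.opt,
# volhist.out and devconfig.out copied over, then restore:
RESTORE_CMD='dsmserv -i /tsminst1 restore db'

echo "$BACKUP_CMD"
echo "$RESTORE_CMD"
# After the restore, re-point the storage pool disks / tape library
# to the new host before bringing the server up in production.
```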


Re: Strange problem with the same ACTIVELOG files opened multiple times

2016-12-07 Thread Dury, John C.
Well at least I feel a little better that it is happening to someone else also. 
We were also having a problem with the DB Backup completely hanging. I thought 
maybe it was just stuck, but it actually happened on a Friday and was still 
running as of Monday morning but making no progress. Originally the DB backup 
was going to an NFS-mounted filesystem, but after moving it to a local RAID 1 
group and applying some other recommended settings in /etc/sysctl.conf, the hung DB 
backup issue seems to be solved. Our DB and ACTIVELOG files are all on local 
SSD also. The BACKUPPOOL volumes are on an NFS (10G) mounted filesystem that 
lives on a Data Domain. I do have an open PMR with IBM about the excessive 
number of times each /tsmactivelog S00* file is open. It has been sent to 
development for investigation. I will post any updates here.
John



Graham Stewart  Tue, 06 Dec 2016 11:09:00 -0800
We see something very similar (TSM 7.1.6.0, RHEL 7.2):

- 83570 open active S00*.log files, under db2sysc
- 193 S00*.log files
- DB backups sometimes (but not always) slow down, showing no progress,
but eventually get going again. We've never rebooted under this
scenario, and the DB backups always complete successfully, but some days
take twice as long as other days.

No NFS. All local disk: SSD (DB and logs) and SATA (storage pools). No
deduplication.

We haven't yet placed a support call with IBM about the variance in DB
backup times, but intend to.

--
Graham Stewart
Network and Storage Services Manager
Information Technology Services
University of Toronto Libraries
416-978-6337

On 12/06/2016 01:41 PM, Dury, John C. wrote:
We continue to see this on all 4 TSM 7.x servers. I do not see this behavior on
the TSM 6.x servers at all. Anyone else running TSM 7.x on RHEL 7.x?
The /tsmactivelog/NODE/LOGSTREAM/S0018048.LOG is open simultaneously a
total of 386 times. This can't be normal behavior.



We have two TSM v7.1.7.0 servers running on different RHEL 7.x servers. The
primary storage pool is BACKUPPOOL, which has its volumes in the local OS
mounted as NFS volumes across a 10g network connection. The volumes live on the
Data Domain, which does its own deduplication in the background. We have a
schedule that does a full TSM DB backup daily. The target is a separate file
system but is also mounted as an NFS on the Data Domain across a 10g network
connection. The TSM active log is mounted on local disk. The TSM DB is also
mounted locally on a RAID of SSD drives for performance.
The issue I am seeing is that although there are only 261
S00*.LOG files in /tsmactivelog, they appear to all be open multiple times.
"lsof|grep -i tsmactive|wc -l"
command tells me that there are 94576 files opened in /tsmactivelog. The
process that has the /TSMACTIVELOG files opened is db2sysc. I've never seen
this on our TSM 6.x server. It's almost as if the active log file is opened but
then never closed. It isn't a gradual climb to a high number of mounts either.
10 minutes after booting the server, there are an excessive number of files
mounted in /tsmactivelog. This behavior is happening even when the server is
extremely idle with very few (or none) sessions and/or processes running.
The issue we keep seeing every few days is processes like the TSM full DB
backup, runs for a few minutes, and then progress just stops. After cancelling
it, it never comes down so I am forced to HALT the server and then reboot it.
It seems like the excessive number of opened files in /tsmactivelog and the
hanging DB Backup feel related but I am not sure.
I've been working with IBM on the hanging processes but so far they are also
stumped but they agree the two issues seem like they should be related.

I'm hoping someone out there might have some ideas.


Re: Strange problem with the same ACTIVELOG files opened multiple times

2016-12-06 Thread Dury, John C.
We continue to see this on all 4 TSM 7.x servers. I do not see this behavior on 
the TSM 6.x servers at all. Anyone else running TSM 7.x on RHEL 7.x?
The /tsmactivelog/NODE/LOGSTREAM/S0018048.LOG is open simultaneously a 
total of 386 times. This can't be normal behavior.



We have two TSM v7.1.7.0 servers running on different RHEL 7.x servers. The
primary storage pool is BACKUPPOOL, which has its volumes in the local OS
mounted as NFS volumes across a 10g network connection. The volumes live on the
Data Domain, which does its own deduplication in the background. We have a
schedule that does a full TSM DB backup daily. The target is a separate file
system but is also mounted as an NFS on the Data Domain across a 10g network
connection. The TSM active log is mounted on local disk. The TSM DB is also
mounted locally on a RAID of SSD drives for performance.
The issue I am seeing is that although there are only 261
S00*.LOG files in /tsmactivelog, they appear to all be open multiple times.
"lsof|grep -i tsmactive|wc -l"
command tells me that there are 94576 files opened in /tsmactivelog. The
process that has the /TSMACTIVELOG files opened is db2sysc. I've never seen
this on our TSM 6.x server. It's almost as if the active log file is opened but
then never closed. It isn't a gradual climb to a high number of mounts either.
10 minutes after booting the server, there are an excessive number of files
mounted in /tsmactivelog. This behavior is happening even when the server is
extremely idle with very few (or none) sessions and/or processes running.
The issue we keep seeing every few days is processes like the TSM full DB
backup, runs for a few minutes, and then progress just stops. After cancelling
it, it never comes down so I am forced to HALT the server and then reboot it.
It seems like the excessive number of opened files in /tsmactivelog and the
hanging DB Backup feel related but I am not sure.
I've been working with IBM on the hanging processes but so far they are also
stumped but they agree the two issues seem like they should be related.

I'm hoping someone out there might have some ideas.


Strange problem with the same ACTIVELOG files opened multiple times

2016-11-11 Thread Dury, John C.
We have two TSM v7.1.7.0 servers running on different RHEL 7.x servers. The 
primary storage pool is BACKUPPOOL, which has its volumes in the local OS 
mounted as NFS volumes across a 10g network connection. The volumes live on the 
Data Domain, which does its own deduplication in the background. We have a 
schedule that does a full TSM DB backup daily. The target is a separate file 
system but is also mounted as an NFS on the Data Domain across a 10g network 
connection. The TSM active log is mounted on local disk. The TSM DB is also 
mounted locally on a RAID of SSD drives for performance.
The issue I am seeing is that although there are only 261 
S00*.LOG files in /tsmactivelog, they appear to all be open multiple times.
"lsof|grep -i tsmactive|wc -l"
command tells me that there are 94576 files opened in /tsmactivelog. The 
process that has the /TSMACTIVELOG files opened is db2sysc. I've never seen 
this on our TSM 6.x server. It's almost as if the active log file is opened but 
then never closed. It isn't a gradual climb to a high number of mounts either. 
10 minutes after booting the server, there are an excessive number of files 
mounted in /tsmactivelog. This behavior is happening even when the server is 
extremely idle with very few (or none) sessions and/or processes running.
The issue we keep seeing every few days is processes like the TSM full DB 
backup, runs for a few minutes, and then progress just stops. After cancelling 
it, it never comes down so I am forced to HALT the server and then reboot it. 
It seems like the excessive number of opened files in /tsmactivelog and the 
hanging DB Backup feel related but I am not sure.
I've been working with IBM on the hanging processes but so far they are also 
stumped but they agree the two issues seem like they should be related.

I'm hoping someone out there might have some ideas.
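
One way to tell whether this is many descriptors on a few files (DB2 keeping the logs open per agent) or genuinely many files is to count both. The pipeline below runs against canned lsof-style lines so the numbers are reproducible; on the affected server the same pipeline would be fed from `lsof` directly, and the NODE0000/LOGSTREAM0000 names are illustrative:

```shell
# Canned lsof-style output (last field is the file path):
sample='db2sysc 100 tsminst1 12u REG 8,1 512 /tsmactivelog/NODE0000/LOGSTREAM0000/S0018048.LOG
db2sysc 100 tsminst1 13u REG 8,1 512 /tsmactivelog/NODE0000/LOGSTREAM0000/S0018048.LOG
db2sysc 100 tsminst1 14u REG 8,1 512 /tsmactivelog/NODE0000/LOGSTREAM0000/S0018049.LOG'

# Open descriptors referencing the active log (what "lsof | wc -l" counts):
printf '%s\n' "$sample" | grep -c tsmactivelog

# Distinct log files actually open:
printf '%s\n' "$sample" | awk '{print $NF}' | sort -u | wc -l
```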


server script to find what schedules a node is associated with?

2016-10-04 Thread Dury, John C.
Anyone have any idea how to query a TSM server (v6 or v7) to find what backup 
schedules a node is associated with? I'm trying to do the reverse of a "q 
assoc" command: looking for a way to supply a node parameter to a script and 
have the TSM server tell me the list of schedules the node is in.
.
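
One way to invert "q assoc" is a SELECT against the server's ASSOCIATIONS table; the admin ID, password, and node name below are placeholders:

```shell
NODE="SOMENODE"   # placeholder node name
SQL="select domain_name, schedule_name from associations where node_name='${NODE}'"

# Echoed rather than executed, since it needs a live server;
# -dataonly=yes keeps the output script-friendly:
echo "dsmadmc -id=admin -password=*** -dataonly=yes \"$SQL\""
```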


Aw: Re: [ADSM-L] postschedulecmd question

2016-06-20 Thread John Keyes
I see, thank you.
It's a bummer, really, since other backup solutions like Bareos provide 
more elaborate control over post-schedule events.

Best regards,
John

> Sent: Thursday, 16 June 2016 at 15:46
> From: "Pagnotta, Pamela (CONTR)" 
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] postschedulecmd question
>
> Hi John,
>
> We use these commands sparingly so I am just reading the documentation. The 
> way I read the description, it seems as if the postschedulecmd will run 
> regardless of the return code of the actual scheduled backup.
>
> Notes:
> If the postschedulecmd command does not complete with return code 0, the 
> client will report that the scheduled event completed with return code 8 
> (unless the scheduled operation encounters a more severe error yielding a 
> higher return code). If you do not want the postschedulecmd command to be 
> governed by this rule, you can create a script or batch file that invokes the 
> command and exits with return code 0. Then configure postschedulecmd to 
> invoke the script or batch file. The return code for the postnschedulecmd 
> command is not tracked, and does not influence the return code of the 
> scheduled event.
>
>
> Also, looking at the preschedulecmd notes, it says
>
> Note:
> Successful completion of the preschedulecmd command is considered to be a 
> prerequisite to running the scheduled operation. If the preschedulecmd 
> command does not complete with return code 0, the scheduled operation and any 
> postschedulecmd and postnschedulecmd commands will not run. The client 
> reports that the scheduled event failed, and the return code is 12. If you do 
> not want the preschedulecmd command to be governed by this rule, you can 
> create a script or batch file that invokes the command and exits with return 
> code 0. Then configure preschedulecmd to invoke the script or batch file. The 
> return code for the prenschedulecmd command is not tracked, and does not 
> influence the return code of the scheduled event.
>
>
> So it appears that the postschedulecmd would run regardless of the RC of the 
> scheduled backup, but not if there is a non-zero RC during the preschedulecmd.
>
>
> Pam
>
> Pam Pagnotta
> Sr. System Engineer
> Criterion Systems, Inc./ActioNet
> Contractor to US. Department of Energy
> Office of the CIO/IM-622
> Office: 301-903-5508
> Mobile: 301-335-8177
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of John 
> Keyes
> Sent: Thursday, June 16, 2016 9:28 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] postschedulecmd question
>
> Hi guys, I did not find an answer on the internet to the following question:
> Will a command defined as postschedulecmd be executed if the schedule itself 
> fails?
>
> Best regards,
> John
>
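
The workaround the quoted documentation describes (wrapping the real command in a script that always exits 0) could look like the sketch below; the wrapped command path is a placeholder:

```shell
# Create a wrapper whose own return code is always 0, so a failing
# post command cannot flip the scheduled event to RC 8.
cat > /tmp/post-wrapper.sh <<'EOF'
#!/bin/sh
# /opt/scripts/real-post-cmd.sh is a placeholder for the real command.
/opt/scripts/real-post-cmd.sh "$@"
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "post command failed with rc=$rc" >&2
fi
exit 0
EOF
chmod +x /tmp/post-wrapper.sh
```

The client option would then point at the wrapper, e.g. `postschedulecmd "/tmp/post-wrapper.sh"` (or wherever the wrapper is installed).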


postschedulecmd question

2016-06-16 Thread John Keyes
Hi guys, I did not find an answer on the internet to the following question:
Will a command defined as postschedulecmd be executed if the schedule itself 
fails?

Best regards,
John


Re: All volumes in STGPOOL show as full. Please help.

2016-01-14 Thread Dury, John C.
We do not have a backup of the DEDUP pool but it is being replicated to a 
different TSM server. The copy stgpool option is disabled.
John


-Original Message-
From: Stef Coene [mailto:stef.co...@docum.org] 
Sent: Thursday, January 14, 2016 8:54 AM
To: ADSM: Dist Stor Manager; Dury, John C.
Subject: Re: All volumes in STGPOOL show as full. Please help.

Hi,

There is one question not asked: do you have a backup of the dedup pool?
Per default, TSM will only reclaim volumes after the data is copied to a copy 
storage pool. You can overrule that in the dsmserv.opt config file.


Stef


Re: All volumes in STGPOOL show as full. Please help.

2016-01-13 Thread Dury, John C.
I tried reclamation, but it fails since all volumes in stgpool DEDUP show as 
full. An audit ran to successful completion, but the volume stays as FULL. Even a 
MOVE DATA of a volume in the DEDUP pool runs to successful completion, but the 
status stays as FULL.
John


-Original Message-
From: Rettenberger Sabine [mailto:sabine.rettenber...@wolf-heiztechnik.de] 
Sent: Wednesday, January 13, 2016 10:35 AM
To: ADSM: Dist Stor Manager
Cc: Dury, John C.
Subject: AW: All volumes in STGPOOL show as full. Please help.

Hi John,

did you try a RECLAIM at the DEDUP-Pool to get the volumes free?
What says an AUDIT VOLUME of the empty volumes?
Are they really empty?

Best regards
Sabine

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Dury, 
John C.
Sent: Wednesday, 13 January 2016 16:01
To: ADSM-L@VM.MARIST.EDU
Subject: All volumes in STGPOOL show as full. Please help.

We have 2 storage pools. A BACKUPPOOL and a DEDUP pool. All nightly backups 
come into the BACKUPPOOL and then migrate to the DEDUP pool for permanent 
storage. All volumes in the DEDUP pool are showing FULL although the pool is 
only 69% in use. I tried doing a move data on a volume in the DEDUP pool to the 
BACKUPPOOL just to free space so reclamation could run, and although it says it 
ran to successful completion, the volume still shows as FULL. So for whatever 
reason, all volumes in the DEDUP pool are never freeing up. I ran an audit on 
the same volume I tried to the MOVE DATA command on and it also ran to 
successful completion. No idea what is going on here but hopefully someone else 
has an idea. If our BACKUPPOOL fills up, we can't back anything up any more and 
we will have a catastrophe. The BACKUPPOOL is roughly 15T and 15% full and I 
have no way to increase it.
Please reply directly to my email address and list as I am currently subscribed 
as digest only.
TSM Server is 6.3.5.300 on RHEL 5


All volumes in STGPOOL show as full. Please help.

2016-01-13 Thread Dury, John C.
We have 2 storage pools. A BACKUPPOOL and a DEDUP pool. All nightly backups 
come into the BACKUPPOOL and then migrate to the DEDUP pool for permanent 
storage. All volumes in the DEDUP pool are showing FULL although the pool is 
only 69% in use. I tried doing a move data on a volume in the DEDUP pool to the 
BACKUPPOOL just to free space so reclamation could run, and although it says it 
ran to successful completion, the volume still shows as FULL. So for whatever 
reason, all volumes in the DEDUP pool are never freeing up. I ran an audit on 
the same volume I tried to the MOVE DATA command on and it also ran to 
successful completion. No idea what is going on here but hopefully someone else 
has an idea. If our BACKUPPOOL fills up, we can't back anything up any more and 
we will have a catastrophe. The BACKUPPOOL is roughly 15T and 15% full and I 
have no way to increase it.
Please reply directly to my email address and list as I am currently subscribed 
as digest only.
TSM Server is 6.3.5.300 on RHEL 5


Re: How can I exclude files from dedupe processing?

2015-08-03 Thread Dury, John C.
Unfortunately the final storage pool is the dedupe pool.





Date: Sun, 2 Aug 2015 16:32:17 +

From: Paul Zarnowski <p...@cornell.edu>

Subject: Re: How can I exclude files from dedupe processing?



Since the files are already using a separate management class, you can just 
change the destination storage pool for that class to go to a non-duplicated 
storage pool.



..Paul

(sent from my iPhone)



On Aug 2, 2015, at 11:07 AM, Dury, John C. <jd...@duqlight.com> wrote:



I have a 6.3.5.100 Linux server and several TSM 7.1.0.0 Linux clients. Those 
Linux clients are dumping several large Oracle databases using compression, 
and then those files are being backed up to TSM. Because the files are 
compressed when dumped via RMAN, they are not good candidates for dedupe 
processing. Is there any way to have them excluded from dedupe server processing? 
I know I can exclude them from client dedupe processing, which I am not 
doing on this client anyway. I have the SERVERDEDUPTXNLIMIT limit set to 
200, but these rman dumps are smaller than 200g. I have our DBAs investigating 
using TDP for Oracle, but until then, I would like to exclude these files 
from dedupe processing as I suspect it is causing issues with space 
reclamation. If it helps, these files are in their own management class also.

Ideas?


How can I exclude files from dedupe processing?

2015-08-02 Thread Dury, John C.
I have a 6.3.5.100 Linux server and several TSM 7.1.0.0 Linux clients. Those 
Linux clients are dumping several large Oracle databases using compression, and 
then those files are being backed up to TSM. Because the files are compressed 
when dumped via RMAN, they are not good candidates for dedupe processing. Is 
there any way to have them excluded from dedupe server processing? I know I 
can exclude them from client dedupe processing, which I am not doing on this 
client anyway. I have the SERVERDEDUPTXNLIMIT limit set to 200, but these rman 
dumps are smaller than 200g. I have our DBAs investigating using TDP for 
Oracle, but until then, I would like to exclude these files from dedupe 
processing as I suspect it is causing issues with space reclamation. If it 
helps, these files are in their own management class also.
Ideas?
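
Paul Zarnowski's suggestion in this thread (point the files' management class at a non-deduplicated pool) could be sketched as the admin commands below. Every name here (NODEDUPPOOL, FILECLASS, the STANDARD domain/set, RMANCLASS) is a placeholder, and the commands are echoed rather than executed:

```shell
# A FILE-class pool with deduplication off, as a target for the
# compressed RMAN dumps:
DEFINE_CMD="define stgpool NODEDUPPOOL FILECLASS maxscratch=100 deduplicate=no"

# Point the files' management class at it, then activate the set:
UPDATE_CMD="update copygroup STANDARD STANDARD RMANCLASS type=backup destination=NODEDUPPOOL"
ACTIVATE_CMD="activate policyset STANDARD STANDARD"

for c in "$DEFINE_CMD" "$UPDATE_CMD" "$ACTIVATE_CMD"; do
    echo "dsmadmc -id=admin -pa=*** \"$c\""
done
```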


Aw: Re: [ADSM-L] TSM TDPSQL crashes on log backup

2015-07-29 Thread John Keyes
Thanks for the answer. I tried the patch anyway, but as you said, it did not 
work.
Now I downgraded to 7.1.1.1 and it works again.

Best regards,
John


> Sent: Wednesday, 29 July 2015 at 14:33
> From: "Zoltan Forray" 
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM TDPSQL crashes on log backup
>
> If you go look at the ADSM-L archives you will see I just went through this
> exact same problem, a week or so ago. It is a known problem:
>
> http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/ibm-tsm-13/issued-with-tdp-sql-7-1-2-0-and-2014-sql-server-128607/
>
> IT09192: DATABASE BACKUPS USING DATA PROTECTION FOR SQL CAN CRASH DURING
> PROCESSING WITH REFERENCES TO MODULE CLR.DLL
>
> IBM recently released TDP SQL patch 7.1.2.1 but it DOES NOT fix this
> problem fully/yet - at least not on our system with this issue.  The APAR
> discusses the problem, and the FIN (fixed-in) level mentioned is 7.1.3.
>
> On Wed, Jul 29, 2015 at 6:16 AM, John Keyes  wrote:
>
> > Hello,
> >
> > I have a strange problem with one of my SQL Servers. It's a SQL Server
> > 2014, running on Windows Server 2012 R2. TSM BA and TDPSQL are installed in
> > version 7.1.2.0. Most databases use the simple recovery model, but a few have
> > a transaction log. VSS full backups are working fine as always. We set the
> > whole thing up a few weeks ago, but since last week every log backup, or
> > even legacy full backup for that matter, causes the TSM software to crash.
> > There are no error reports in the TSM log files; the Windows event viewer
> > shows this:
> > Faulting application name: tdpsqlc.exe, version: 7.1.2.0, time stamp:
> > 0x55102a96
> > Faulting module name: clr.dll, version: 4.0.30319.34209, time stamp:
> > 0x5348a1ef
> > Exception code: 0xc409
> > Fault offset: 0x00355714
> > Faulting process id: 0x1838
> > Faulting application start time: 0x01d0c9e477c55264
> > Faulting application path: C:\Progra~1\Tivoli\TSM\TDPSql\tdpsqlc.exe
> > Faulting module path:
> > C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
> > Report Id: bfee8f36-35d7-11e5-80cb-00155dfd9cbb
> > Faulting package full name:
> > Faulting package-relative application ID:
> >
> > I also rebooted the server and reinstalled TSM, but to no avail. Does
> > anyone have any ideas?
> >
> >
> > Kind Regards,
> > John
> >
>
>
>
> --
> *Zoltan Forray*
> TSM Software & Hardware Administrator
> Xymon Monitor Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html
>


TSM TDPSQL crashes on log backup

2015-07-29 Thread John Keyes
Hello,

I have a strange problem with one of my SQL Servers. It's a SQL Server 2014, 
running on Windows Server 2012 R2. TSM BA and TDPSQL are installed in version 
7.1.2.0. Most databases use the simple recovery model, but a few have a 
transaction log. VSS full backups are working fine as always. We set the whole 
thing up a few weeks ago, but since last week every log backup, or even legacy 
full backup for that matter, causes the TSM software to crash. There are no 
error reports in the TSM log files; the Windows event viewer shows this:
Faulting application name: tdpsqlc.exe, version: 7.1.2.0, time stamp: 0x55102a96
Faulting module name: clr.dll, version: 4.0.30319.34209, time stamp: 0x5348a1ef
Exception code: 0xc409
Fault offset: 0x00355714
Faulting process id: 0x1838
Faulting application start time: 0x01d0c9e477c55264
Faulting application path: C:\Progra~1\Tivoli\TSM\TDPSql\tdpsqlc.exe
Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
Report Id: bfee8f36-35d7-11e5-80cb-00155dfd9cbb
Faulting package full name:
Faulting package-relative application ID:

I also rebooted the server and reinstalled TSM, but to no avail. Does anyone 
have any ideas?


Kind Regards,
John


Re: Third party apps for mysql

2015-03-31 Thread Underdown,John William
Steve,

We use Percona XtraBackup or LVM snapshots to create MySQL backups to 
disk, then the TSM client to back them up to the TSM server.

john
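
For anyone curious how the two stages fit together, here is a minimal sketch of that flow. The staging paths, directory names, and XtraBackup options are placeholders, not details from John's actual setup:

```shell
# Stage 1 (hypothetical layout): dump MySQL to a local staging directory
# with Percona XtraBackup, then apply the redo log so the copy is consistent.
xtrabackup --backup  --target-dir=/backup/mysql/current
xtrabackup --prepare --target-dir=/backup/mysql/current

# Stage 2: send the staging directory to the TSM server with the
# ordinary backup-archive client.
dsmc incremental /backup/mysql/
```

An LVM snapshot of the MySQL data volume, mounted and backed up the same way with `dsmc`, is the alternative the post mentions.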

On 03/31/2015 06:57 AM, Steven Harris wrote:
> Hi All
>
> I've just been hit with a new app to run on RedHat 7 and mysql.
>
> There are a number of third party products out there to back up mysql to
> TSM, may I have your recommendations please?  Any horror stories or
> products to avoid are also appreciated.
>
> Thanks
>
> Steve.
>
> Steven Harris
> TSM Admin
> Canberra Australia





Re: In-VMguest file-level restore for Windows?

2015-03-24 Thread Underdown,John William
Hey, it's me again.

Using the "set access backup -TYPE=VM" option (see below) and then the 
in-guest VM Recovery Agent, I see all the VM nodes on the TSM server, but 
selecting any other VM gets a "not authorized to restore this virtual 
machine" error. A node can access its own backups with no problem.

It's a bit messy but seems doable. Has anyone else tried this before? Am 
I on the right track?

tsm> set access backup -TYPE=VM PWINAPP448 PWINAPP448
ANS1148I 'Set Access' command successfully completed

tsm> q access
Type     Node        User    Path
-------  ----------  ------  ----------------------
Backup   PWINAPP448  *       \VMFULL-PWINAPP448\*\*

On 03/24/2015 08:45 AM, John Underdown wrote:
> Hi All,
>
> this is the only reference I can find for in-guest file-level restore 
> for Windows, see below. It appears the VM guest uses both the 
> VMware Recovery Agent and the Tivoli Storage Manager backup-archive 
> client's "dsmc set access" option, but I can't figure out the 
> configuration. Can anyone provide instructions on how to set this up?
>
> I want each VM guest to have file-level restore to its own backup files 
> only.
>
> Any help in getting this resolved will be greatly appreciated.
>
> Thanks.
>
> john
>
> From IBM Tivoli Storage Manager for Virtual Environments Version 7.1
>
> These configurations require Data Protection for VMware Recovery Agent 
> to be installed in each VM guest. The mount and instant restore 
> processes are performed for a single partition from the backed up disk.
>
> The Data Protection for VMware Recovery Agent node name is typically 
> granted access only to the VM where it is running with the Tivoli 
> Storage Manager backup-archive client "dsmc set access" command. The 
> restore process is typically begun by a VMware user who logs in to the 
> guest machine of the VM.





In-VMguest file-level restore for Windows?

2015-03-24 Thread Underdown,John William
Hi All,

This is the only reference I can find for in-guest file-level restore 
for Windows, see below. It appears the VM guest uses both the VMware 
Recovery Agent and the Tivoli Storage Manager backup-archive client's "dsmc 
set access" option, but I can't figure out the configuration. Can anyone 
provide instructions on how to set this up?

I want each VM guest to have file-level restore to its own backup files only.

Any help in getting this resolved will be greatly appreciated.

Thanks.

john

 From IBM Tivoli Storage Manager for Virtual Environments Version 7.1

These configurations require Data Protection for VMware Recovery Agent 
to be installed in each VM guest. The mount and instant restore 
processes are performed for a single partition from the backed up disk.

The Data Protection for VMware Recovery Agent node name is typically 
granted access only to the VM where it is running with the Tivoli 
Storage Manager backup-archive client "dsmc set access" command. The 
restore process is typically begun by a VMware user who logs in to the 
guest machine of the VM.





Re: Re: [ADSM-L] Re: [ADSM-L] Exchange Backup with Flashcopy Manager EXTREMELY slow

2015-03-03 Thread John Keyes
Hello guys!

I opened a PMR and traced the problem. There is already an APAR open for this 
problem, and thankfully it has already been fixed in the TSM 7.1.1.3 patch.
http://www-01.ibm.com/support/docview.wss?uid=swg1IT01824
I'm currently running some test backups, but it looks very good. Just wanted to 
let you know in case anybody stumbles upon the same problem.
The fix will also be included in TSM BA client versions 6.3.3, 6.4.3 and 7.1.2, 
which will be released on April 17th, I think.

Regards,
John

> Gesendet: Sonntag, 01. März 2015 um 15:18 Uhr
> Von: "Del Hoobler" 
> An: ADSM-L@VM.MARIST.EDU
> Betreff: Re: [ADSM-L] Aw: Re: [ADSM-L]  Re: [ADSM-L] Exchange Backup with 
> Flashcopy Manager EXTREMELY slow
>
> Hi John,
> 
> You are skipping the integrity check already, and you are not sending
> any data across the network. This really only leaves a few things
> left to check: the mailbox history, the Exchange Server "prepare"
> function, and the snapshot itself.
> 
> I recommend opening a PMR to see where the issue is.
> The support team can get a trace and find out which
> step the delay is occurring in.
> 
> 
> Del
> 
> 
> 
> 
> 
> 
> "ADSM: Dist Stor Manager"  wrote on 02/27/2015 
> 03:46:19 AM:
> 
> > From: John Keyes 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 02/27/2015 03:48 AM
> > Subject: Aw: Re: [ADSM-L]  Re: [ADSM-L] Exchange Backup with 
> > Flashcopy Manager EXTREMELY slow
> > Sent by: "ADSM: Dist Stor Manager" 
> > 
> > Hello guys,
> > 
> > the time for our Exchange backups is still steadily increasing. As 
> > you can see below, a VSS snapshot now takes 5 times(!) as long 
> > as it did just 3 weeks ago. These log excerpts are all from the same
> > server and database and are all incremental backups (except the one 
> > on 02/22/2015, which is a full backup). I'm now absolutely sure this is
> > not normal, but I have no idea what the problem could be. The 
> > Exchange admin assured me that he changed nothing on the server.
> > The Exchange database volumes and the backup volumes all reside on 
> > the same storage, which is connected through Fibre Channel, so 
> > network traffic can't be the problem. 
> > Anyone got some ideas?
> > 
> > 02/05/2015 13:26:19 ANS1750I Volume mount point 'c:
> > \exchangedatabases\dag1-01\dag1-01.log' is mounted to volume 'C:
> > \ExchangeDatabases\DAG1-01\DAG1-01.log'. Using snapshot volume for 
> > 'C:\ExchangeDatabases\DAG1-01\DAG1-01.log' to backup.
> > 02/05/2015 13:28:20 VSS Backup operation completed with rc = 0.
> > 02/05/2015 13:28:20   Elapsed Processing Time: 209.85 seconds
> > 02/06/2015 16:10:10 
> > 
> > 02/07/2015 19:10:39 ANS1750I Volume mount point 'c:
> > \exchangedatabases\dag1-01\dag1-01.log' is mounted to volume 'C:
> > \ExchangeDatabases\DAG1-01\DAG1-01.log'. Using snapshot volume for 
> > 'C:\ExchangeDatabases\DAG1-01\DAG1-01.log' to backup.
> > 02/07/2015 19:12:42 VSS Backup operation completed with rc = 0.
> > 02/07/2015 19:12:42   Elapsed Processing Time: 206.52 seconds
> > 02/08/2015 01:33:13 
> > 
> > 02/15/2015 00:17:25 ANS1750I Volume mount point 'c:
> > \exchangedatabases\dag1-01\dag1-01.log' is mounted to volume 'C:
> > \ExchangeDatabases\DAG1-01\DAG1-01.log'. Using snapshot volume for 
> > 'C:\ExchangeDatabases\DAG1-01\DAG1-01.log' to backup.
> > 02/15/2015 00:20:32 VSS Backup operation completed with rc = 0.
> > 02/15/2015 00:20:32   Elapsed Processing Time: 406.69 seconds
> > 02/16/2015 00:13:56 
> > 
> > 02/20/2015 00:17:24 ANS1750I Volume mount point 'c:
> > \exchangedatabases\dag1-01\dag1-01.log' is mounted to volume 'C:
> > \ExchangeDatabases\DAG1-01\DAG1-01.log'. Using snapshot volume for 
> > 'C:\ExchangeDatabases\DAG1-01\DAG1-01.log' to backup.
> > 02/20/2015 00:19:54 VSS Backup operation completed with rc = 0.
> > 02/20/2015 00:19:54   Elapsed Processing Time: 423.87 seconds
> > 02/21/2015 00:14:38 
> > 
> > 02/21/2015 00:20:30 ANS1750I Volume mount point 'c:
> > \exchangedatabases\dag1-01\dag1-01.log' is mounted to volume 'C:
> > \ExchangeDatabases\DAG1-01\DAG1-01.log'. Using snapshot volume for 
> > 'C:\ExchangeDatabases\DAG1-01\D

Aw: Re: [ADSM-L] Re: [ADSM-L] Exchange Backup with Flashcopy Manager EXTREMELY slow

2015-02-27 Thread John Keyes
 mount point 
'c:\exchangedatabases\dag1-01\dag1-01.log' is mounted to volume 
'C:\ExchangeDatabases\DAG1-01\DAG1-01.log'. Using snapshot volume for 
'C:\ExchangeDatabases\DAG1-01\DAG1-01.log' to backup.
02/26/2015 21:21:19 VSS Backup operation completed with rc = 0.
02/26/2015 21:21:19   Elapsed Processing Time: 1,073.57 seconds

> Gesendet: Dienstag, 24. Februar 2015 um 13:36 Uhr
> Von: "Billaudeau, Pierre" 
> An: ADSM-L@VM.MARIST.EDU
> Betreff: Re: [ADSM-L] Aw: Re: [ADSM-L] Exchange Backup with Flashcopy Manager 
> EXTREMELY slow
>
> Hi John,
> I had a similar situation with Exchange 2010 TSM DP backups, and the 
> reason was that the VM had been vMotioned from one ESX host to another; the LAN speed went 
> from 1 Gb/s to 100 Mb/s. You might want to check this.
> 
> Pierre Billaudeau
> Administrateur stockage
> 
> -Message d'origine-
> De : ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] De la part de 
> Jeanne Bruno
> Envoyé : 23 février 2015 15:43
> À : ADSM-L@VM.MARIST.EDU
> Objet : Re: [ADSM-L] Aw: Re: [ADSM-L] Exchange Backup with Flashcopy Manager 
> EXTREMELY slow
> 
> Hello.  we skip integrity check as well.  Backups stored on a DS3512 device
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of John 
> Keyes
> Sent: Monday, February 23, 2015 2:47 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Aw: Re: [ADSM-L] Exchange Backup with Flashcopy Manager 
> EXTREMELY slow
> 
> Thank you Chavdar Cholev and Jeanne Bruno for sharing. I forgot to mention that we 
> do skip the integrity check, and there are only passive databases on the 
> server.
> Are your backups going to the TSM server or to local storage? I'm interested 
> either way.
> 
> Best Regards,
> John
> 
> 
> 
> Gesendet: Montag, 23. Februar 2015 um 20:20 Uhr
> Von: "Chavdar Cholev" 
> An: ADSM-L@VM.MARIST.EDU
> Betreff: Re: [ADSM-L] Exchange Backup with Flashcopy Manager EXTREMELY slow 
> We have a ~2 TB full Exchange backup and it takes ~20 hours. There is an Exchange 
> DB consistency check before the backup which takes a considerable amount of time.
> It can be switched off... but it is not recommended
> 
> On Monday, 23 February 2015, John Keyes  wrote:
> 
> > Hello fellow TSM Admins!
> >
> > I'm struggling with Exchange Backup which over the course of the last
> > few Days (mainly since the last full backup this weekend) takes about
> > 3-4 times as long as before.
> > We have an Exchange 2013 DAG with 48 databases and a dedicated VM to do
> > the backups (Flashcopy Manager 4.1.1, Backupdestination=local, on an
> > IBM V7000 storage system).
> > Right now even an incremental backup of 1 database takes ~12 minutes,
> > but the volume is only 30GB big and has only 5GB (5000 x 1MB Files) on
> > it. Two weeks ago this would only take 3 Minutes.
> > A Full backup takes about half an hour (so I couldn't even complete a
> > full backup for every database in one day). I also tried to
> > full-backup the same database twice right after one another, with no 
> > difference.
> >
> > This can hardly be normal, can it? Has anyone had the same problems
> > and maybe could point me in the right direction?
> > Also it would be great if anyone could share how much time their
> > backups take...
> >
> > Best regards,
> > John
> >
> 
> --
> 
> 
>


Aw: Re: [ADSM-L] Exchange Backup with Flashcopy Manager EXTREMELY slow

2015-02-23 Thread John Keyes
Thank you Chavdar Cholev and Jeanne Bruno for sharing. 
I forgot to mention that we do skip the integrity check, and there are only passive 
databases on the server.
Are your backups going to the TSM server or to local storage? I'm interested 
either way.
 
Best Regards,
John
 
 

Gesendet: Montag, 23. Februar 2015 um 20:20 Uhr
Von: "Chavdar Cholev" 
An: ADSM-L@VM.MARIST.EDU
Betreff: Re: [ADSM-L] Exchange Backup with Flashcopy Manager EXTREMELY slow
We have a ~2 TB full Exchange backup and it takes ~20 hours. There is an Exchange
DB consistency check before the backup which takes a considerable amount of time.
It can be switched off... but it is not recommended

On Monday, 23 February 2015, John Keyes  wrote:

> Hello fellow TSM Admins!
>
> I'm struggling with Exchange Backup which over the course of the last few
> Days (mainly since the last full backup this weekend) takes about 3-4 times
> as long as before.
> We have an Exchange 2013 DAG with 48 databases and a dedicated VM to do the
> backups (Flashcopy Manager 4.1.1, Backupdestination=local, on an IBM V7000
> storage system).
> Right now even an incremental backup of 1 database takes ~12 minutes, but
> the volume is only 30GB big and has only 5GB (5000 x 1MB Files) on it. Two
> weeks ago this would only take 3 Minutes.
> A Full backup takes about half an hour (so I couldn't even complete a full
> backup for every database in one day). I also tried to full-backup the same
> database twice right after one another, with no difference.
>
> This can hardly be normal, can it? Has anyone had the same problems and
> maybe could point me in the right direction?
> Also it would be great if anyone could share how much time their backups
> take...
>
> Best regards,
> John
>


Exchange Backup with Flashcopy Manager EXTREMELY slow

2015-02-23 Thread John Keyes
Hello fellow TSM Admins!
 
I'm struggling with an Exchange backup which, over the course of the last few days 
(mainly since the last full backup this weekend), takes about 3-4 times as long 
as before.
We have an Exchange 2013 DAG with 48 databases and a dedicated VM to do the 
backups (Flashcopy Manager 4.1.1, Backupdestination=local, on an IBM V7000 
storage system).
Right now even an incremental backup of 1 database takes ~12 minutes, but the 
volume is only 30 GB and has only 5 GB (5000 x 1 MB files) on it. Two weeks 
ago this would only take 3 minutes.
A full backup takes about half an hour (so I couldn't even complete a full 
backup for every database in one day). I also tried to full-backup the same 
database twice right after one another, with no difference.

This can hardly be normal, can it? Has anyone had the same problems and maybe 
could point me in the right direction?
Also it would be great if anyone could share how much time their backups take...
 
Best regards,
John


Re: recovering a DB from full backup when one of TSM DB DIR became corrupt and DB is in roll forward mode (need help)

2014-11-24 Thread Dury, John C.
The issue was that I needed a way to recreate the actual files in the dbdirs. 
It is fixed now.

 Original Message 
From: Erwann Simon
Sent: Mon, Nov 24, 2014 12:42 AM
To: ADSM: Dist Stor Manager ; Dury, John C.
CC:
Subject: Re: [ADSM-L] recovering a DB from full backup when one of TSM DB DIR 
became corrupt and DB is in roll forward mode (need help)


Hello John,
See both the Admin Guide and the Admin Reference regarding "dsmserv restore db". There 
are two recovery methods: one that deletes the logs (point-in-time restore; 
ignore it) and one that keeps and rolls forward the logs after the DB restore (most 
current state; use this one).
The command is pretty simple:
Log in as the instance owner
Change to the instance directory
Simply run: dsmserv restore db

Note that if you use the todate option of the dsmserv restore db command, 
you'll have a point-in-time restore, which deletes the logs!
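
The three steps listed above might look like this as a shell session; the instance owner name and home directory here are examples only, not values from the thread:

```shell
# 1. Log in as the DB2/TSM instance owner (name is an example)
su - tsminst1

# 2. Change to the instance directory that holds dsmserv.opt
cd /home/tsminst1/tsminst1

# 3. Roll-forward restore: restores the last full DB backup, then
#    replays the intact archive logs to the most current state
dsmserv restore db

# Avoid the point-in-time form here -- it deletes the logs:
#   dsmserv restore db todate=MM/DD/YYYY
```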



Le 24 novembre 2014 00:57:56 CET, "Dury, John C."  a écrit :
>Long story short, I have a full DB backup taken on a linux system
>running TSM 6.3.4.300 with the DB in roll forward mode. The DB is
>spread across 4 different dbdirs file systems. One of the file systems
>became corrupt and I need to recover the DB from the full backup.
>I have the dbdirs file systems back  and the active log and archive log
>directories are ok and were untouched. The dbdirs directories are back
>but there are no files in them.
>The active log and archive log directories are ok and uncorrupted.
>How can I restore the DB to the most recent point since the archivelog
>is still intact and recreate the files in the dbdirs so a restore can
>repopulate them from the full DB backup?

--
Erwann SIMON
Envoyé de mon téléphone Android avec K-9 Mail. Excusez la brièveté.


recovering a DB from full backup when one of TSM DB DIR became corrupt and DB is in roll forward mode (need help)

2014-11-23 Thread Dury, John C.
Long story short, I have a full DB backup taken on a linux system running TSM 
6.3.4.300 with the DB in roll forward mode. The DB is spread across 4 different 
dbdirs file systems. One of the file systems became corrupt and I need to 
recover the DB from the full backup.
I have the dbdirs file systems back  and the active log and archive log 
directories are ok and were untouched. The dbdirs directories are back  but 
there are no files in them.
The active log and archive log directories are ok and uncorrupted.
How can I restore the DB to the most recent point since the archivelog is still 
intact and recreate the files in the dbdirs so a restore can repopulate them 
from the full DB backup?


Re: Anyone else doing replica backups of exchange datastores? Need some help please.

2014-11-05 Thread Dury, John C.
This is exactly how I am trying to get the replica backup to work. It is 
running on the passive server, and I believe the DSMAGENT configuration is 
correct in terms of the proxy settings. I continue to work with IBM to get it 
working.




Hi John,

You can't run a backup of a replica from the primary server. You must be running 
that command on the passive server. The Microsoft Exchange Replica Writer will be 
running on the passive server. That is the machine you need to have DP/Exchange 
and the Windows BA Client installed and configured on. Also keep in mind that it's 
the DSMAGENT (Windows BA Client) that is the node that actually sends the data to 
the TSM Server. It uses the proxy capability to store the data on behalf of the 
DP/Exchange node. The service team can assist you through this.

Thank you,

Del

"ADSM: Dist Stor Manager" wrote on 10/31/2014 02:47:25 PM:

> From: "Dury, John C."
> To: ADSM-L AT VM.MARIST DOT EDU
> Date: 10/31/2014 02:49 PM
> Subject: Re: Anyone else doing replica backups of exchange datastores? Need some help please.
> Sent by: "ADSM: Dist Stor Manager"
>
> Thanks for the reply. I agree that it sounds like a configuration issue. We are 
> trying to do this via the command line so it can be scripted and therefore 
> automated. I do have a problem open with IBM and sent them some logs and lots of 
> information, but their first response was to tell me that it wasn't configured 
> correctly, as they were looking at early entries in the logs I sent them instead 
> of further down. Lesson learned: delete all of your logs before recreating a 
> problem, as support won't look at timestamps and will assume the first error they 
> see must be the problem. The command line I am using looks like this:
>
> C:\Program Files\Tivoli\TSM\TDPExchange>tdpexcc backup  incr /fromreplica 
> /backupmethod=vss /backupdestination=tsm /tsmoptfile="c:\Program 
> Files\Tivoli\TSM\TDPExchange\dsm.opt"
>
> I was getting it to create a TSM session, but the session would just be WAITING 
> on the server for hours and never actually did anything. FWIW, the datastore I am 
> testing with is tiny.
> Now when I try running the backup on the inactive CCR replica I get:
>
> Updating mailbox history on TSM Server...
> Mailbox history has been updated successfully.
> Querying Exchange Server to gather component information, please wait...
> ACN5241E The Microsoft Exchange Information Store is currently not running.
>
> but as I mentioned before, according to my Exchange admins, that service should 
> never be running on both sides of the CCR cluster.
> I have changed the dsm.opt and tdpexc.cfg options so many times with every option 
> I can think of, which is why I was hoping someone had a sanitized version of 
> their dsm.opt and tdpexc.cfg that are working in their environment that I could 
> look at.


Re: Anyone else doing replica backups of exchange datastores? Need some help please.

2014-10-31 Thread Dury, John C.
Thanks for the reply. I agree that it sounds like a configuration issue. We are 
trying to do this via the command line so it can be scripted and therefore 
automated. I do have a problem open with IBM and sent them some logs and lots 
of information, but their first response was to tell me that it wasn't 
configured correctly, as they were looking at early entries in the logs I sent 
them instead of further down. Lesson learned: delete all of your logs before 
recreating a problem, as support won't look at timestamps and will assume the first 
error they see must be the problem. The command line I am using looks like this:
C:\Program Files\Tivoli\TSM\TDPExchange>tdpexcc backup  incr 
/fromreplica /backupmethod=vss /backupdestination=tsm /tsmoptfile="c:\Program 
Files\Tivoli\TSM\TDPExchange\dsm.opt"
I was getting it to create a TSM session but the session would just be WAITING 
on the server for hours and never actually did anything. FWIW, the datastore I 
am testing with is tiny.
Now when I try running the backup on the inactive CCR replica I get:
Updating mailbox history on TSM Server...
Mailbox history has been updated successfully.
Querying Exchange Server to gather component information, please wait...
ACN5241E The Microsoft Exchange Information Store is currently not running.
but as I mentioned before, according to my exchange admins, that service should 
never be running on both sides of the CCR cluster.
I have changed the dsm.opt and tdpexc.cfg options so many times with every 
option I can think of, which is why I was hoping someone had a sanitized version 
of their dsm.opt and tdpexc.cfg that are working in their environment that I could 
look at.




Hi John,

This looks like a configuration issue to me. Exchange Server 2007 CCR and LCR 
replica copies can be backed up and restored by using the VSS method only. 
Microsoft does not allow Legacy backups of Exchange Server 2007 CCR and LCR 
replica copies. Also keep in mind that all VSS restores of a CCR or LCR replica 
can be restored only into the running instance of a storage group (primary, 
recovery, or alternate); Microsoft does not support VSS restores into a replica 
instance. If you want to back up from the replica copy when running in a CCR or 
LCR environment, specify the "FromReplica True" backup option in the Protect tab 
of the MMC GUI. You can also specify the /fromreplica parameter with the tdpexcc 
backup command on the command-line interface. Here is the important one... for 
CCR copies, you must run the backup while logged on to the secondary node of the 
cluster that currently contains the replica copy, and you must use the 
"FROMREPLICA" option. Here are a few more things:
http://www-01.ibm.com/support/knowledgecenter/SSTG2D_6.4.0/com.ibm.itsm.mail.exc.doc/c_dpfcm_bup_replica_exc.html?cp=SSTG2D_6.4.0&lang=en

If you are not able to get this working, you should open a PMR so that the 
service team can help you get the configuration working.

Del

"ADSM: Dist Stor Manager" wrote on 10/31/2014 08:40:00 AM:

> From: "Dury, John C."
> To: ADSM-L AT VM.MARIST DOT EDU
> Date: 10/31/2014 08:42 AM
> Subject: Anyone else doing replica backups of exchange datastores? Need some help please.
> Sent by: "ADSM: Dist Stor Manager"
>
> I have been trying to get this to work for days now and I don't seem to be 
> making any progress. I have tried all kinds of options in both the dsm.opt and 
> tdpexc.cfg files and I get various messages. The documentation in the TDP for 
> Exchange manual on doing replica backups is not very detailed at all.
> Our environment looks like this:
> We have two Windows 2008 Exchange 2007 servers set up for CCR replication to 
> each other. I am trying to do replica backups on the offline cluster member so 
> it doesn't affect performance on the live cluster member.
> The TSM server is 6.3.4.300 running on RHEL5 Linux.
> I thought I had it working, but nothing was actually backed up. The TSM session 
> was established on the TSM server but no data was actually sent. The sessions 
> appeared to be hung and seemed to eventually time out.
> The other message I was receiving on the offline Exchange cluster member was:
>
> Updating mailbox history on TSM Server...
> Mailbox history has been updated successfully.
> Querying Exchange Server to gather component information, please wait...
> ACN5241E The Microsoft Exchange Information Store is currently not running.
>
> but from what I was told by our Exchange experts, that service only runs on the 
> active cluster member and not on the offline member.
> Below are my sanitized dsm.opt and tdpexc.cfg:
>
> dsm.opt
> NODename exchange
> deduplication no
> CLUSTERnode yes
> COMPRESSIon Off
> COMPRESSalways On
> PASSWORDAccess Genera

Anyone else doing replica backups of exchange datastores? Need some help please.

2014-10-31 Thread Dury, John C.
I have been trying to get this to work for days now and I don't seem to be 
making any progress. I have tried all kinds of options in both the dsm.opt and 
tdpexc.cfg files and I get various messages. The documentation in the TDP for 
Exchange manual on doing replica backups is not very detailed at all.
Our environment looks like this.
We have two windows 2008 exchange 2007 servers setup for CCR replication to 
each other. I am trying to do replica backups on the offline cluster member so 
it doesn't affect performance on the live cluster member.
The TSM server is 6.3.4.300 running on RHEL5 linux.
I thought I had it working but nothing was actually backed up. The TSM session 
was established on the TSM server but no data was actually sent. The sessions 
appeared to be hung and seemed to eventually timeout.
The other message I was receiving on the offline exchange cluster member was
Updating mailbox history on TSM Server...
Mailbox history has been updated successfully.

Querying Exchange Server to gather component information, please wait...

ACN5241E The Microsoft Exchange Information Store is currently not running.
but from what I was told by our Exchange experts, that service only runs 
on the active cluster member and not on the offline member.
Below are my sanitized dsm.opt and tdpexc.cfg
dsm.opt
NODename  exchange
deduplication no
CLUSTERnode   yes
COMPRESSIon   Off
COMPRESSalwaysOn
PASSWORDAccessGenerate
resourceutilization 5
COMMMethodTCPip
TCPPort   1500
TCPServeraddress tsmserver
TCPWindowsize 128
TCPBuffSize   64
diskbuffsize 32
SCHEDMODE Prompted
SCHEDLOGRetention 14,d
HTTPport  1581
tdpexc.cfg
BUFFers 4
BUFFERSIze 8192
clusternode yes
compression off
compressalways on
LOGFile tdpexc.log
LOGPrune 60
MOUNTWait Yes
TEMPLOGRestorepath P:\TempRestoreLoc
LASTPRUNEDate 10/31/2014 06:55:09
BACKUPMETHod LEGACY
* BACKUPMETHod vss
RETRies 0
LANGuage ENU
BACKUPDESTination TSM
LOCALDSMAgentnode exchangeofflineclustertsmnodename
REMOTEDSMAgentnode  exchange
TEMPDBRestorepath P:\TempRestoreLoc
CLIENTACcessserver

Could someone who actually has this working send me their sanitized 
dsm.opt/tdpexc.cfg and the actual command used to do a replica backup from the
offline cluster member?


Manually Expire Archive

2014-06-11 Thread John Keyes
Hello,
 
I have a bit of a problem at hand. We operate a TSM server for a client. I 
have, of course, access to the TSM server but not to the backup clients.
We are doing forever-archives for 3 nodes, but since we are nearly out of 
scratch tapes in the library, the client decided he wants to switch to 1-year 
archives for one specific node.
So how can I expire just the archive data of this one node that is older than a 
year?
As far as I know, I cannot rebind existing archive data to other management 
classes, and if I change the expiration in this management class, the other 
nodes are also affected.
Does anyone have an idea?
 
Kind regards,
John


Re: file system backups of a Dell NDMP Equallogic device

2014-05-05 Thread Dury, John C.
Sorry for revisiting this, but I'm in a predicament now. Trying to back up the 
NDMP device is a miserable failure and frankly just ugly. I honestly can't see 
why anyone would use TSM to back up any NDMP devices, except maybe for speed 
reasons.

We decided to mount all of the NFS shares locally on the TSM server and allow 
them to be backed up that way, but now the problem is that even with 
resourceutilization set to 20, it still takes 18+ hours just to do an 
incremental, because there are millions and millions of files in all of those 
NFS shares. So this isn't going to work either. I can try the proxy node 
solution, but frankly I'm skeptical about it as well because of the tremendous 
number of small files. Of course this is all for a mission-critical application, 
so I have to come up with a workable solution, and I'm running out of ideas.

Help!







Oh, that works just fine.  Then you're backing up over NFS, no NDMP involved.
And TSM will not back up an NFS-mounted volume by default, so you won't get
multiple copies.

Put the virtualmountpoint names in the DOMAIN statement in dsm.sys of the
client you want to run the backups (or create dsmc incr commands that list the
sharenames, however you roll), fight through whatever permissions issues pop
up, and Bob's your uncle.  You'll get incremental-only backups of those files.

What you won't know for a while is how long it takes to noodle through the
filesystems across the NFS mount; it depends on how many kazillion objects are
in the directories.
If you list the names in the DOMAIN statement, you can add "RESOURCEUTILIZATION
10" to the dsm.sys and process 4 shares at once, if the directory noodling is
more time-consuming than the actual data transfer, which it usually is if these
shares are made of a lot of small files.

If you can't get through them by running 4 at a time, I've solved that before
by setting up multiple proxy clients (using GRANT PROXYNODE) to get even more
parallel streams running, but with all the backups stored under 1 nodename so
that it's easy to find them at restore time.

W
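
As a sketch, the proxy-node setup described above could look like the following; all node names and the admin credentials are illustrative, not from this thread:

```shell
# On the TSM server: allow several agent nodes to store data
# under one shared target nodename (names are made up).
dsmadmc -id=admin -password=secret \
  "grant proxynode target=NAS_BACKUP agent=NAS_AGENT1,NAS_AGENT2,NAS_AGENT3"

# On each client: back up a different share in parallel, but store
# everything under NAS_BACKUP so restores are all in one place.
dsmc incremental /NAS/NFS1 -asnodename=NAS_BACKUP   # run on agent 1
dsmc incremental /NAS/NFS2 -asnodename=NAS_BACKUP   # run on agent 2
```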









-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Dury, John C.
Sent: Friday, February 14, 2014 3:46 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: [ADSM-L] file system backups of a Dell NDMP Equallogic device



We are more concerned about file-level backups than an image backup. Eventually
the NAS devices will be replicating using Equallogic replication once we get
some more storage, but for now we want to make sure that the files in the NFS
shares are correctly backed up. I really wanted to avoid backing up the same
NFS data to multiple TSM nodes, since some of the NFS mounts are shared amongst
several servers/nodes. My strategy is to pick one TSM node, make sure it has
NFS mounts for all of the NFS shares that live on the NAS, and then just back
them up as virtualmountpoint(s), something like this:

/NAS exists off of root on the TSM node and is local

mount NFS1 as /NAS/NFS1
mount NFS2 as /NAS/NFS2
etc.

put an entry in dsm.sys: virtualmountpoint /NAS

and then just let incrementals run normally.

All restores would need to be done on the node that can see all of the NFS
mounts.

Think that will work?
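
A dsm.sys fragment matching that plan might look like the following sketch; the stanza name and server address are placeholders, not values from this thread:

```shell
* dsm.sys on the node that mounts all the NFS shares (illustrative)
SErvername  TSMSRV1
   COMMMethod         TCPip
   TCPServeraddress   tsmserver.example.com
   VIRTUALMountpoint  /NAS
   DOMain             /NAS
```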



























I agree with Wanda. Our strategy for our filers (BlueARC, Isilon) is to
back up at the file level exclusively, using NFS. Modern TSM servers support
no-query restores well enough that we can get a restore of the latest data
very quickly (make sure you have plenty of CPU and memory, along with very
fast database disks). To perform the backups efficiently, you might want to
think about splitting your data up into separate nodes or filespaces,
backed up with independent schedules, so that you're not bottlenecked on a
single component.

As far as I can tell, NDMP was written by storage vendors to make one buy
more expensive storage, and more of it than one needs.


You don't have to use tape.
You can do NDMP backups via TCP/IP to your regular TSM storage pool hierarchy.
But AFAIK you still have to do it at the volume/share level that the NAS device
understands; I don't think you can do it at the root.







Using "virtualmountpoint" is for backing up incrementally at the *file* level



via NFS or CIFS mounts, not NDMP, so I'm not sure which way you are headed.







Question is, what are you doing this for?



NDMP is a stupid, simplistic protocol.  You won't like what you have to do to



achieve an individual file restore.  If you are trying to get DR capability to



rebuild your NDMP shares in case of an emergency, it makes sense.  If you are



just trying to provide backup coverage to restore people's files like you would



from a file server, it may not.







I

Tivoli Admin Center frustrations

2014-05-03 Thread Dury, John C.
I apologize in advance, but I'm just getting more frustrated. I've installed 
both the Windows and Linux versions of TSM Admin Center 6.3.4.300, and both 
seemed to install correctly with no errors. After logging in correctly, I can't 
add a TSM server under the "Manage Server" option. I've tried 4 different web 
browsers (Opera, Firefox, Chrome and IE), and the only combination I could get 
to work correctly was IE and the Linux Admin Center. I've done some searching and 
followed some suggestions, like enabling JavaScript in Firefox, but it still 
hasn't helped. We also have TSM Manager licensed, but we were looking for more 
functionality (specifically Automatic client deployment) that Admin Center 
supplies, plus maybe saving some money. Admin Center has been available for how 
many years now and is still this buggy? One browser of four is all that seems 
to work out of the box, and only with the Linux version?
It seems like I am not the only one feeling the pain. Anyone have any 
suggestions? I can call IBM of course, but I'm afraid that will result in hours 
and hours trying to get it to work, and frankly I don't have the time (or maybe 
patience).


Re: TSM 7.1 install on AIX problems

2014-04-22 Thread John Monahan
RE: Your poor DB backup performance - check this out:
http://www-01.ibm.com/support/docview.wss?uid=swg21587513



___
John Monahan
Professional Services Consultant
Logicalis, Inc. 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Matthew McGeary
Sent: Tuesday, April 22, 2014 10:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM 7.1 install on AIX problems

I just did the install last week and didn't see any issues with the install 
script or memory dumps.  What version of AIX are you running?  I'm on 6.1 TL 7 
SP 6 (I updated from TL 7 SP 4 in order to do the TSM 7.1
install.)

I am seeing abysmal speeds during backup db commands (about half the average 
speed I was getting before), but that's neither here nor there.
Sigh.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Kevin Kettner 
To: ADSM-L@VM.MARIST.EDU
Date:   04/18/2014 12:19 PM
Subject:[ADSM-L] TSM 7.1 install on AIX problems
Sent by:"ADSM: Dist Stor Manager" 



I'm attempting to install TSM 7.1 on an AIX box for the first time. Did anyone 
else notice that the install.sh script is missing a "!" in the first line? It 
has "#/bin/sh" and it should have "#!/bin/sh". It's an easy fix, but makes me 
wonder about the quality of the rest of the code.
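A minimal sketch of the workaround, assuming GNU sed; the demo file here stands in for IBM's install.sh:

```shell
# Recreate the broken first line the installer shipped with,
# then patch it in place.
printf '#/bin/sh\necho hello\n' > /tmp/install_demo.sh
# Add the missing "!" only when the first line is exactly "#/bin/sh".
sed -i '1s|^#/bin/sh$|#!/bin/sh|' /tmp/install_demo.sh
head -1 /tmp/install_demo.sh   # prints "#!/bin/sh"
```

The anchored `^...$` pattern means re-running the command is harmless once the line is already fixed.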

After working around that it's throwing memory errors (see below), which I have 
a PMR open on. Has anyone else had this problem and found a solution? I Googled 
for people reporting either of these issues and didn't find anything, which 
seems odd, especially the install script issue. I can't be the only one to try 
to install it on AIX.

I'll post the fix when I figure it out or get it from IBM.

Bonus points for beating IBM support to the punch. ;-)

Thanks!

-Kevin

# ./install.sh
./install.sh[233]: 11993088 Memory fault(coredump) Unhandled exception 
Type=Segmentation error vmState=0x0004
J9Generic_Signal_Number=0004 Signal_Number=000b
Error_Value= Signal_Code=0033
Handler1=F1324208 Handler2=F131BF1C
R0=D336EDCC R1=3012FEF0 R2=F11C269C R3=F11C0450
R4= R5= R6= R7=0003
R8=0043 R9= R10=2FF22ABC R11=34E0
R12=03291A68 R13=30C4C400 R14=32040920 R15=F12891EC
R16=0007 R17= R18=F1326388 R19=30C4C450
R20=32BAE460 R21=32040938 R22= R23=3BC8
R24=10010E04 R25=F131D130 R26=301365A4 R27=007E
R28=CFA71C28 R29=F1325B7C R30=D3390410 R31=F11C0430
IAR=D33853A8 LR=D336EDE8 MSR=D032 CTR=D3631E70
CR=22004284 FPSCR=8200 XER=0005 TID=
MQ=
FPR0 32bc69000110 (f: 272.00, d: 2.697706e-64)
FPR1 41e0 (f: 0.00, d: 2.147484e+09)
FPR2 c1e0 (f: 0.00, d: -2.147484e+09)
FPR3 433001e0 (f: 31457280.00, d: 4.503600e+15)
FPR4 43300800 (f: 0.00, d: 4.512396e+15)
FPR5 41338518 (f: 0.00, d: 1.279256e+06)
FPR6 41338518 (f: 0.00, d: 1.279256e+06)
FPR7 433008138518 (f: 1279256.00, d: 4.512396e+15)
FPR8 0035002e00760032 (f: 7733298.00, d: 1.168203e-307)
FPR9 0030003100330030 (f: 3342384.00, d: 8.900711e-308)
FPR10 003700330030005f (f: 3145823.00, d: 1.279461e-307)
FPR11 0031003700350034 (f: 3473460.00, d: 9.457031e-308)
FPR12 3fe8 (f: 0.00, d: 7.50e-01)
FPR13 4028 (f: 0.00, d: 1.20e+01)
FPR14  (f: 0.00, d: 0.00e+00)
FPR15  (f: 0.00, d: 0.00e+00)
FPR16  (f: 0.00, d: 0.00e+00)
FPR17  (f: 0.00, d: 0.00e+00)
FPR18  (f: 0.00, d: 0.00e+00)
FPR19  (f: 0.00, d: 0.00e+00)
FPR20  (f: 0.00, d: 0.00e+00)
FPR21  (f: 0.00, d: 0.00e+00)
FPR22  (f: 0.00, d: 0.00e+00)
FPR23  (f: 0.00, d: 0.00e+00)
FPR24  (f: 0.00, d: 0.00e+00)
FPR25  (f: 0.00, d: 0.00e+00)
FPR26  (f: 0.00, d: 0.00e+00)
FPR27  (f: 0.00, d: 0.00e+00)
FPR28  (f: 0.00, d: 0.00e+00)
FPR29  (f: 0.00, d: 0.00e+00)
FPR30  (f: 0.00, d: 0.00e+00)
FPR31  (f: 0.00, d: 0.00e+00)
Target=2_40_20110203_074623 (AIX 7.1)
CPU=ppc (16 logical CPUs) (0x7c000 RAM)
--- Stack Backtrace ---
(0xD336E81C)
(0xD436CE48)
(0xD436F698)
(0xD4368D38)
(0xD4368B24)
(0xD369BBA0)
(0xD436A058)
(0xD436A200)
(0xD491BD18)
(0xD492236C)
(0xD4926438)
(0xD499BF48)
(0xD4965780)
(0xD4965A30)
(0xD49E4BAC)
(0xD49658E4)
(0xD4965E24)
(0xD496A6C4)
(0x100013C0)
(0xD04FED08)
---
JVMDUMP006I Processing dump event "gpf", detail "" - please wait.
JVMDUMP0

Re: TSM for VE and "monthly fulls"

2014-04-07 Thread John Monahan
I only had to do a one-time backup so I never did come up with something else. 
I'm going to ask about this at Edge.



___
John Monahan
Professional Services Consultant
Logicalis, Inc. 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Friday, March 21, 2014 3:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM for VE and "monthly fulls"

OH, bimps.
Glad to know that before I did all the work; you just saved me a lot of effort, 
and I thank you.

So when you got down that road, what did you end up doing instead?

Only other thing I can think of is to install the BA client in the guest and do 
an incremental once a month?

W


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of John 
Monahan
Sent: Thursday, March 20, 2014 1:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM for VE and "monthly fulls"

If you backup to a different nodename you will reset the CBT info. So you can 
do that monthly full backup to the different nodename, but when you switch back 
to the original, it will also require a full. So in essence you need to do 2 
back to back fulls to get this done. I've been down this road. 



___
John Monahan
Professional Services Consultant
Logicalis, Inc. 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Monday, March 10, 2014 11:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM for VE and "monthly fulls"

Would like to know if anyone has solved the conundrum of TSM VE and customers 
who want "monthly fulls" in the vault for a year or more.

Now I know that's a bad idea, and it doesn't play well with TSM 
incremental-forever.  But still there are customers who insist, or have 
managers with ill-thought-out pseudo-requirements for it, or parent firms who 
insist on it.  And I can work those issues with backupsets or exports or 
nodename-monthly node owners or sometimes even archives for other types of data.

But I have not been able to do an export on deduped VE filespaces.  It takes so 
long and puts on so many locks it has to be cancelled.
So I'm pondering the idea of doing a true VM full, to a different datamover 
with a different filespace owner.  But I don't really like that idea either 
because the full will have the snapshot outstanding for a long time, and the 
scheduling will be tricky so we don't overlap the incremental snapshots.

SO, anybody else have insight into this particular can of worms?

Thanks
Wanda

**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  | 
wanda.prat...@icfi.com<mailto:wanda.prat...@icfi.com>  |  
www.icfi.com<http://www.icfi.com> | 410-868-4872 (m) ICF International  | 7125 
Thomas Edison Dr., Suite 100, Columbia, Md |443-718-4900 (o)


Re: server takes long time to connect to via admin or backup client

2014-04-01 Thread Dury, John C.
Thanks for the reply. We aren't using ldap at all so I don't think that is the 
cause.

John



Frank 
Fegert<http://www.mail-archive.com/search?l=adsm-l@vm.marist.edu&q=from:%22Frank+Fegert%22>
 Mon, 31 Mar 2014 13:20:38 
-0700<http://www.mail-archive.com/search?l=adsm-l@vm.marist.edu&q=date:20140331>

Hello,



On Mon, Mar 31, 2014 at 09:00:10AM -0400, Dury, John C. wrote:

> This is a weird one and I have opened a pmr with IBM but I thought I would

> check with you all and see if anyone had any ideas what is going on.  I have

> a TSM 6.3.4.300 server running on RHEL5 with all the latest maintenance

> installed, and when I try to connect to it via an admin session, either

> locally from within the server via loopback (127.0.0.1) or remotely using an

> admin client, it seems to take several minutes before I connect and get any

> response back. SSHing into the server is almost immediate so it's not the OS.

> The server is not extremely busy and this happens when it is doing almost

> nothing and is very consistent. I have an almost identical TSM server that

> does not have this problem at all. I can immediately connect via an admin

> client and I immediately get a response. I have compared both the dsmserv.opt

> files on both servers as well as the /etc/sysctl.conf files and nothing seems

> to be out of place. I don't see anything odd in the db2diag.*.log file

> either. I'm just not sure where else to look or what could be causing this

> but it is definitely affecting backup performance since the backup clients

> can take several minutes just to connect to the server.

> Ideas?



just a wild guess, but take a look at this:



http://www.ibm.com/support/docview.wss?uid=swg21667740&myns=swgtiv&mynp=OCSSGSG7&mync=E



HTH & best regards,



Frank


server takes long time to connect to via admin or backup client

2014-03-31 Thread Dury, John C.
This is a weird one and I have opened a pmr with IBM but I thought I would 
check with you all and see if anyone had any ideas what is going on.
I have a TSM 6.3.4.300 server running on RHEL5 with all the latest maintenance 
installed, and when I try to connect to it via an admin session, either locally 
from within the server via loopback (127.0.0.1) or remotely using an admin 
client, it seems to take several minutes before I connect and get any response 
back. SSHing into the server is almost immediate so it's not the OS. The server 
is not extremely busy and this happens when it is doing almost nothing and is 
very consistent. I have an almost identical TSM server that does not have this 
problem at all. I can immediately connect via an admin client and I immediately 
get a response. I have compared both the dsmserv.opt files on both servers as 
well as the /etc/sysctl.conf files and nothing seems to be out of place. I 
don't see anything odd in the db2diag.*.log file either. I'm just not sure 
where else to look or what could be causing this but it is definitely affecting 
backup performance since the backup clients can take several minutes just to 
connect to the server.
Ideas?


Re: TSM for VE and "monthly fulls"

2014-03-20 Thread John Monahan
Before I started, I was told that as long as I kept the mode as IFINCR it 
would not reset the CBT info, but it did. This was on TSM for VE 6.4, VMware 
5.1, and a 6.3.3 server. Perhaps there is another way that would work without 
resetting CBT. If so, please share.

I did onetime IFINCR backups to a new datacenter nodename, new stgpool, using a 
new datamover during the day for a proof of concept. Since the data went to a 
new nodename it triggered a full backup. After switching back to their original 
datamover and datacenter node as part of the regular nightly schedule - that 
also triggered a full again for those VMs I was using for testing, with these 
messages:

ANS9387W Incremental backup unsuccessful for virtual machine 'XXX'. 
Performing Full backup instead.
ANS9384W Unable to get VMware Changed Block Tracking(CBT) data for virtual 
machine 'XX'. Full VM backup continues, and includes both used and unused 
areas of the disk.



___
John Monahan
Professional Services Consultant
Logicalis, Inc. 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rob 
Edwards
Sent: Thursday, March 20, 2014 8:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM for VE and "monthly fulls"

John,

In TSM 6.4 this behavior was changed so that we do not always reset the CBT on 
full backups. This was done, in part, to allow out of band full backups to be 
done to a different node / server without interrupting the incremental chain of 
the primary backup.

Are you seeing this issue with 6.4?

---
Rob Edwards
TSM Development
email: r...@us.ibm.com
tieline: 938.2784, external: 720.396.2784




From:   John Monahan 
To: ADSM-L@vm.marist.edu,
Date:   03/20/2014 01:48 AM
Subject:Re: TSM for VE and "monthly fulls"
Sent by:"ADSM: Dist Stor Manager" 



If you backup to a different nodename you will reset the CBT info. So you can 
do that monthly full backup to the different nodename, but when you switch back 
to the original, it will also require a full. So in essence you need to do 2 
back to back fulls to get this done. I've been down this road.



___
John Monahan
Professional Services Consultant
Logicalis, Inc.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Monday, March 10, 2014 11:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM for VE and "monthly fulls"

Would like to know if anyone has solved the conundrum of TSM VE and customers 
who want "monthly fulls" in the vault for a year or more.

Now I know that's a bad idea, and it doesn't play well with TSM 
incremental-forever.  But still there are customers who insist, or have 
managers with ill-thought-out pseudo-requirements for it, or parent firms who 
insist on it.  And I can work those issues with backupsets or exports or 
nodename-monthly node owners or sometimes even archives for other types of data.

But I have not been able to do an export on deduped VE filespaces.  It takes so 
long and puts on so many locks it has to be cancelled.
So I'm pondering the idea of doing a true VM full, to a different datamover 
with a different filespace owner.  But I don't really like that idea either 
because the full will have the snapshot outstanding for a long time, and the 
scheduling will be tricky so we don't overlap the incremental snapshots.

SO, anybody else have insight into this particular can of worms?

Thanks
Wanda

**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  | wanda.prat...@icfi.com< 
mailto:wanda.prat...@icfi.com>  |  www.icfi.com<http://www.icfi.com> |
410-868-4872 (m) ICF International  | 7125 Thomas Edison Dr., Suite 100, 
Columbia, Md |443-718-4900 (o)


Re: TSM for VE and "monthly fulls"

2014-03-19 Thread John Monahan
If you backup to a different nodename you will reset the CBT info. So you can 
do that monthly full backup to the different nodename, but when you switch back 
to the original, it will also require a full. So in essence you need to do 2 
back to back fulls to get this done. I've been down this road. 



___
John Monahan
Professional Services Consultant
Logicalis, Inc. 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Monday, March 10, 2014 11:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM for VE and "monthly fulls"

Would like to know if anyone has solved the conundrum of TSM VE and customers 
who want "monthly fulls" in the vault for a year or more.

Now I know that's a bad idea, and it doesn't play well with TSM 
incremental-forever.  But still there are customers who insist, or have 
managers with ill-thought-out pseudo-requirements for it, or parent firms who 
insist on it.  And I can work those issues with backupsets or exports or 
nodename-monthly node owners or sometimes even archives for other types of data.

But I have not been able to do an export on deduped VE filespaces.  It takes so 
long and puts on so many locks it has to be cancelled.
So I'm pondering the idea of doing a true VM full, to a different datamover 
with a different filespace owner.  But I don't really like that idea either 
because the full will have the snapshot outstanding for a long time, and the 
scheduling will be tricky so we don't overlap the incremental snapshots.

SO, anybody else have insight into this particular can of worms?

Thanks
Wanda

**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  | 
wanda.prat...@icfi.com<mailto:wanda.prat...@icfi.com>  |  
www.icfi.com<http://www.icfi.com> | 410-868-4872 (m) ICF International  | 7125 
Thomas Edison Dr., Suite 100, Columbia, Md |443-718-4900 (o)


Re: file system backups of a Dell NDMP Equallogic device

2014-02-14 Thread Dury, John C.
We are more concerned about file-level backups than an image backup. Eventually 
the NAS devices will be replicating using Equallogic replication once we get 
some more storage, but for now we want to make sure that the files in the NFS 
shares are correctly backed up. I really wanted to avoid backing up the same 
NFS data to multiple TSM nodes, since some of the NFS mounts are shared among 
several servers/nodes. My strategy is to pick one TSM node, make sure it has 
NFS mounts for all of the NFS shares that live on the NAS, and then just back 
them up as virtualmountpoint(s), something like this:

/NAS exists off of root on the TSM node and is local

mount NFS1 as /NAS/NFS1

mount NFS2 as /NAS/NFS2

etc.



Put an entry in dsm.sys: virtualmountpoint /NAS

and then just let incrementals run normally.

All restores would need to be done on the node that can see all of the NFS 
mounts.

Think that will work?
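Under the assumptions above, the setup on the chosen client node might look like the following; the NAS hostname (nas1) and the share names are placeholders. Note that each NFS mount is its own filesystem, so naming the mounts in dsm.sys is what makes dsmc traverse them:

```
# Mount every share from the NAS under one local tree
mkdir -p /NAS/NFS1 /NAS/NFS2
mount -t nfs nas1:/NFS1 /NAS/NFS1
mount -t nfs nas1:/NFS2 /NAS/NFS2

# dsm.sys additions on this node (option-file syntax, not shell):
#   VIRTUALMOUNTPOINT  /NAS/NFS1
#   VIRTUALMOUNTPOINT  /NAS/NFS2
#   DOMAIN             /NAS/NFS1 /NAS/NFS2
```

Declaring each mount as its own virtual mount point also gives each share its own filespace, which keeps expiration and restores per-share rather than one giant /NAS filespace.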













I agree with Wanda. Our strategy for our filers (BlueARC, Isilon) is to
backup at the file-level exclusively, using NFS. Modern TSM servers support
no-query restores well enough that we can get a restore of the latest data
very quickly (make sure you have plenty of CPU and memory, along with very
fast database disks). To perform the backups efficiently, you might want to
think about splitting your data up into separate nodes or filespaces,
backed up with independent schedules, so that you're not bottlenecked on a
single component.

As far as I can tell, NDMP was written by storage vendors to make one buy
more expensive storage, and more of it than one needs.



You don't have to use tape.
You can do NDMP backups via TCP/IP to your regular TSM storage pool hierarchy.
But AFAIK you still have to do it at the volume/share level that the NAS device
understands, I don't think you can do it at the root.

Using "virtualmountpoint" is for backing up incrementally at the *file* level
via NFS or CIFS mounts, not NDMP, so I'm not sure which way you are headed.

Question is, what are you doing this for?
NDMP is a stupid, simplistic protocol.  You won't like what you have to do to
achieve an individual file restore.  If you are trying to get DR capability to
rebuild your NDMP shares in case of an emergency, it makes sense.  If you are
just trying to provide backup coverage to restore people's files like you would
from a file server, it may not.

If you want to do NDMP via TCP/IP instead of direct to tape, reply with your
TSM server platform and server level, and I'll send you back the page reference
in the manual you need...

W









-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Dury, John C.
Sent: Thursday, February 13, 2014 2:11 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] file system backups of a Dell NDMP Equallogic device

We have two Dell NDMP storage devices and a TSM server at both sites. We'd like
to be able to do root, file-level backups (image backups don't help much), and
restores if necessary, of the entire NDMP device to the local TSM server. Can
someone point me in the right direction or tell me how they did it? NAS/NDMP is
pretty new to me, and from what I have read so far the documentation talks
about backing up directly to tape, which we don't have any more. All of our
storage is online.

What I was originally planning on doing was creating all of the NFS shares on
one linux server and backing them up as /virtualmountpoints. I'd like to set up
just one which points to the root of all the NFS filesystems on the NAS device,
but I see no way to do that either.
Any help is appreciated.

On 13 Feb 2014, at 20:11, Dury, John C.  wrote the
following:

> We have two Dell NDMP storage devices and a TSM server at both sites. We'd
> like to be able to do root, file-level backups (image backups don't help
> much), and restores if necessary, of the entire NDMP device to the local TSM
> server. Can someone point me in the right direction or tell me how they did
> it? NAS/NDMP is pretty new to me, and from what I have read so far the
> documentation talks about backing up directly to tape, which we don't have
> any more. All of our storage is online.
>
> What I was originally planning on doing was creating all of the NFS shares
> on one linux server and backing them up as /virtualmountpoints. I'd like to
> set up just one which points to the root of all the NFS filesystems on the
> NAS device, but I see no way to do that either.
> Any help is appreciated.


If supported by the Dell, NDMP to disk is even simpler than NDMP to tape...
just don't define any paths from the datamover to tape (which you don't have
anyway).

--

Met vriendelijke groeten/Kind Regards,

Remco Post
r.post AT plcs DOT nl
+31 6 248 21 622


file system backups of a Dell NDMP Equallogic device

2014-02-13 Thread Dury, John C.
We have two Dell NDMP storage devices and a TSM server at both sites. We'd like 
to be able to root file level (image backups don't help much) backups (and 
restores if necessary) of the entire  NDMP device to the local TSM server. Can 
someone point me in the right direction or tell me how they did it? NAS/NDMP is 
pretty new to me and from what I have read so far, the documentation talks 
about backing up directly to tape, which we don't have any more.  All of our 
storage is online.

What I was originally planning on doing, was creating all of the NFS shares on 
one linux server, and backing them up as /virtualmountpoints. I'd like to setup 
just one which points to the root of all the NFS systems on the NAS device but 
I see no way to do that either.
Any help is appreciated.


Re: TSM/VE and SRM Replication

2014-01-23 Thread Underdown,John William
We've been hit by this. Using the vSphere SDK for Perl, I wrote a script that 
runs daily to find these and send out an alert.

Here's the main loop:

Util::connect();

my $entity_views = Vim::find_entity_views(
view_type => 'VirtualMachine',
 properties => ['name','layoutEx']);

foreach my $ev (@$entity_views){
if (defined $ev->layoutEx){
   my $vmdk = $ev->layoutEx->file;
   foreach my $vmdk(@$vmdk) {
 if ($vmdk->name =~ /delta.vmdk/ && $vmdk->name =~ /_\d+-/){
   $buffer = $buffer . $ev->name . " " . $vmdk->name . "\n";
 }
   }

}
}

Util::disconnect();
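For completeness: vSphere SDK for Perl scripts pick up the toolkit's standard connection options via its Opts module, so a daily cron invocation could look like this (the script name and vCenter host are made up):

```
# Hypothetical invocation; --server/--username/--password are the
# standard VI Perl Toolkit connection options.
./find_stale_deltas.pl --server vcenter.example.com \
    --username monitor --password 'secret'
```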


On 01/23/2014 01:38 PM, Ryder, Michael S wrote:
> This warning is a "new" feature of vSphere 5.x -- prior to 5, you would not
> get any alert about consolidation being needed.  It's possible this was
> happening before and you just didn't know it...?
>
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003638
>
> Did your site recently upgrade to 5.5?  Personally, I didn't see these at
> 5.0 and now at 5.1 see these messages infrequently -- my current theory (at
> least for my site) is that this happens when an image/snapshot backup job
> is disturbed somehow, for example by a momentary interruption of network
> traffic.
>
> Best regards,
>
> Mike
> RMD IT, x7942
>
>
> On Thu, Jan 23, 2014 at 12:56 PM, Prather, Wanda 
> wrote:
>
>> Hi Dude!
>>
>> Yes, I have seen that happen (although in my case it was another product
>> not VMware SRM) .
>> It doesn't matter what product it is - if there is *anything* else in the
>> environment that is doing VM snapshots - including humans, you have to set
>> up your schedules so they don't conflict.
>> Else sooner or later this will happen.
>>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>> Bill Boyer
>> Sent: Thursday, January 23, 2014 10:39 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: [ADSM-L] TSM/VE and SRM Replication
>>
>> Has anyone using TSM/VE backups as well as VMware SRM for replication had
>> any conflicts with snapshots between the 2 products? We've had several
>> instances where VM's have issued an alert about Virtual Machine
>> consolidation needed. And just last night we had one of our file server
>> VM's have snapshots with invalid parents and the VM was totally
>> unresponsive.
>> These events seem to occur when the TSM/VE backup completes and the
>> snapshot is deleted. We are at VMware 5.5.
>>
>>
>>
>> Bill Boyer
>> DSS, Inc.
>> (610) 927-4407
>> "Enjoy life. It has an expiration date." - ??
>>





Re: TSM NDMP HNAS Question?

2013-12-12 Thread Underdown,John William
Dear W,

Thanks for the reply. I'm on a steep learning curve with NDMP. I followed 
these instructions, which were provided by another ADSM-L member:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=%2Fcom.ibm.itsm.srv.doc%2Ft_ndmp_nas_file_srvs_backup_restore.html

Performing NDMP filer to Tivoli Storage Manager server backups
You can back up data to a single Tivoli Storage Manager server rather 
than attaching a tape library to each NAS device.

on step 3.

register node nas1 nas1 type=nas domain=standard
define datamover nas1 type=nas hla=nas1 user=root password=* 
dataformat=netappdump

I get:

ANRD_2854923275 spiOutMsg(ndmpspi.c:1271) Thread<227>: NDMP Server 
does not support Token Base Incremental Backup.

i found this in the archives when searching for this error:

"BlueARC support pointed me in the right direction.
`ndmp-option -d tokens on`, followed by an NDMP service restart on the
NAS made backups work.  Now to figure out the rest of the NDMP voodoo.."


It's an old posting, but HNAS uses BlueARC's NDMP, and I have the HNAS docs 
and console access, so I'll follow this lead and see where it takes me.

If anyone else has any ideas, it would be greatly appreciated.

Many Thanks.

On 12/11/2013 11:23 AM, Prather, Wanda wrote:
> AFAIK any NAS box that supports NDMP can be backed up to TSM via TCP/IP and 
> ends up in the regular storage pool hierarchy, i.e. disk is fine.
>
> But I don't know that the NDMP protocol knows how to do a non-tape fibre 
> connection, any more than TSM does.
> Do you have any documentation on how that would work from the NDMP end?
>
> W
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Underdown,John William
> Sent: Tuesday, December 10, 2013 1:58 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] TSM NDMP HNAS Question?
>
> Hi All.
>
> i'm trying to determine if it's possible to setup a HNAS (Hitachi) 3080 NDMP 
> backup to TSM Server 6.3.4.0/RHEL 6.4. we don't have any tape in our 
> environment, virtual or otherwise, we're disk only. the SAN admin can create 
> a FC connection between the TSM server and the HNAS if it can be used for lan 
> free backups.
>
> i'm assuming these steps on the TSM server (excluding setting up policy 
> domain, copy group, storage pools, and etc):
>
> register node type=nas.
> define datamover type=nas
> define virtualfsmapping
>
> i don't know if this is possible without some type of library and if so how 
> does TSM negotiate the FC path?
>
> if someone could point me in the right direction, it would be greatly 
> appreciated.
>
> Many Thanks.
>
> john underdown





TSM NDMP HNAS Question?

2013-12-10 Thread Underdown,John William
Hi All.

I'm trying to determine if it's possible to set up an HNAS (Hitachi) 3080
NDMP backup to TSM Server 6.3.4.0/RHEL 6.4. We don't have any tape in
our environment, virtual or otherwise; we're disk only. The SAN admin
can create an FC connection between the TSM server and the HNAS if it can
be used for LAN-free backups.

I'm assuming these steps on the TSM server (excluding setting up the policy
domain, copy group, storage pools, etc.):

register node type=nas
define datamover type=nas
define virtualfsmapping

I don't know if this is possible without some type of library, and if so,
how does TSM negotiate the FC path?

If someone could point me in the right direction, it would be greatly
appreciated.

Many Thanks.

john underdown


Re: TSM VE 6.4

2013-11-18 Thread Underdown,John William
http://www-01.ibm.com/support/docview.wss?uid=swg21647461

Protection for VMware Version 6.4.1
_Supported VMware vSphere versions_

The following VMware vSphere versions are supported:

  * vSphere 4.1
  * vSphere 5.0
  * vSphere 5.1

http://www-01.ibm.com/support/docview.wss?uid=swg21652843

Data Protection for VMware Version 7.1
_
Supported VMware vSphere versions_

The following VMware vSphere versions are supported:

  * vSphere 5.0*
  * vSphere 5.1
  * vSphere 5.5

* = Full VM Instant Restore/Instant Access function only supported on 
vSphere 5.1 and 5.5


On 11/18/2013 11:04 AM, Ehresman,David E. wrote:
> Is vSphere 5.5 supported by TSM VE 6.4?  If so, can you point me to the 
> relevant documentation?
>
> David





Re: ANR1817W when exporting node to another server

2013-09-11 Thread Dury, John C.
That is exactly what I am trying to do.
John

- What happens if you specify a specific node name that hasn't been exported
- yet?



Re: ANR1817W when exporting node to another server

2013-09-11 Thread Dury, John C.
I found the problem. There was a leftover replicated node from previous 
testing, with the same name. Replicated nodes don't show up when doing a "q 
node" so I wasn't seeing it on the new server.
Thanks for the fresh eyes.
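For anyone who hits the same symptom: a node left in a replication-receive state can usually be located and cleaned up from the administrative command line. A hedged sketch (NODENAME is a placeholder; column names may differ slightly by server level):

```
select node_name, repl_mode, repl_state from nodes where repl_mode='RECEIVE'
remove replnode nodename
remove node nodename
```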

From: Dury, John C.
Sent: Wednesday, September 11, 2013 2:02 PM
To: 'ADSM-L (ADSM-L@VM.MARIST.EDU)'
Subject: Re: [ADSM-L] ANR1817W when exporting node to another server

That is exactly what I am trying to do.
John

- What happens if you specify a specific node name that hasn't been exported
- yet?



Re: ANR1817W when exporting node to another server

2013-09-11 Thread Dury, John C.
I have tried several combinations of the following and they all fail with the 
output listed below.

export node toserver=LINUXSERVER filespace=* filedata=all
export node toserver=LINUXSERVER filedata=all
export node toserver=LINUXSERVER filedata=all replacedefs=yes 
mergefilespaces=yes
export node toserver=LINUXSERVER filedata=all replacedefs=no mergefilespaces=no
export node toserver=LINUXSERVER filedata=all domain=standard



- Didn't notice the actual EXPORT command/options you used?



ANR1817W when exporting node to another server

2013-09-11 Thread Dury, John C.
We have 2 6.3.4.0 TSM servers, one AIX and one linux. We want to move all of 
the data from the AIX server to the linux server and then eventually retire the 
AIX server. I understand I can also do this with replication, but because the 
linux server will eventually be replicating to a second linux server at a 
remote site and until the whole process is complete, the AIX server will still 
be doing backups, I thought exporting node data would be a simpler solution.
I have the AIX and linux servers talking to each other and was able to 
successfully export admins from the AIX server directory to the linux server 
but when I try to export a node from AIX to linux, I get the following messages 
during the process and I'm not sure why. I've tried every combination of 
replacedefs and mergefilespaces but all get the same errors. Here is the output 
from the "export node". The output below was sanitized but should be easily 
readable.

Ideas anyone?


09/11/2013 08:03:32  ANR0984I Process 667 for EXPORT NODE started in the
  BACKGROUND at 08:03:32. (SESSION: 744051, PROCESS: 667)
09/11/2013 08:03:32  ANR0609I EXPORT NODE started as process 667. (SESSION:
  744051, PROCESS: 667)
09/11/2013 08:03:32  ANR0402I Session 744080 started for administrator ADMINID
  (Server) (Memory IPC). (SESSION: 744051, PROCESS: 667)
09/11/2013 08:03:32  ANR0408I Session 744081 started for server LINUXSERVER
  (Linux/x86_64) (Tcp/Ip) for general administration.
  (SESSION: 744051, PROCESS: 667)
09/11/2013 08:03:32  ANR0610I EXPORT NODE started by ADMINID as process 667.
  (SESSION: 744051, PROCESS: 667)
09/11/2013 08:03:33  ANR0635I EXPORT NODE: Processing node NODENAME in domain
  STANDARD. (SESSION: 744051, PROCESS: 667)
09/11/2013 08:03:33  ANR0637I EXPORT NODE: Processing file space \\NODENAME\c$
  for node NODENAME fsId 1 . (SESSION: 744051, PROCESS:
  667)
09/11/2013 08:03:33  ANR0637I EXPORT NODE: Processing file space \\NODENAME\d$
  for node NODENAME fsId 2 . (SESSION: 744051, PROCESS:
  667)
09/11/2013 08:04:09  ANR8337I LTO volume TAPEVOLUME mounted in drive TAPEDRIVE
  (/dev/rmt4). (SESSION: 744051, PROCESS: 667)
09/11/2013 08:04:09  ANR0512I Process 667 opened input volume TAPEVOLUME. 
(SESSION:
  744051, PROCESS: 667)
09/11/2013 08:05:37  ANR1817W EXPORT NODE: No valid nodes were identified for
  import on target  server LINUXSERVER. (SESSION: 744051,
  PROCESS: 667)
09/11/2013 08:05:37  ANR0562I EXPORT NODE: Data transfer complete, deleting
  temporary data. (SESSION: 744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0405I Session 744081 ended for administrator LINUXSERVER
  (Linux/x86_64). (SESSION: 744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0515I Process 667 closed volume TAPEVOLUME. (SESSION:
 744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0724E EXPORT NODE: Processing terminated abnormally -
  transaction failure. (SESSION: 744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0626I EXPORT NODE: Copied 1 node definitions. (SESSION:
  744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0627I EXPORT NODE: Copied 2 file spaces 0 archive
  files, 1 backup files, and 0 space managed files.
  (SESSION: 744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0629I EXPORT NODE: Copied 1589 bytes of data. (SESSION:
  744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0611I EXPORT NODE started by ADMINID as process 667 has
  ended. (SESSION: 744051, PROCESS: 667)
09/11/2013 08:05:37  ANR0986I Process 667 for EXPORT NODE running in the
  BACKGROUND processed 4 items for a total of 1,589 bytes
  with a completion state of FAILURE at 08:05:37. (SESSION:
  744051, PROCESS: 667)
09/11/2013 08:05:37  ANR1893E Process 667 for EXPORT NODE completed with a
  completion state of FAILURE. (SESSION: 744051, PROCESS:
  667)
09/11/2013 08:07:40  ANR2017I Administrator ADMINID issued command: QUERY
  ACTLOG search='process: 667'  (SESSION: 744051)


Re: expire inventory seems to be hanging and not processing all nodes

2013-08-09 Thread Dury, John C.
-How long have you been at 6.3.4.0? I've seen this on slightly older servers
-when Windows 2008 Server clients (and similar era Windows desktop, from what I
-hear) back up their system states. You get huge numbers of small objects, and
-expiration takes a long, long time to process those with no obvious progress.
-
-However, that's supposed to be fixed in 6.3.4.0. I'm wondering if you backed up
-some such clients before upgrading to 6.3.4.0?
-
-Nick

We've been running v6.3.4.0 for a few weeks now.


-I saw a similar behavior on my 6.3.3.100 on Windows earlier this week.
-
-Expiration appeared to be stuck for a couple of hours having processed only 10
-or 15 nodes of 200.
-
-Do you have a duration specified to limit time?  That might explain your 462 of
-595.
-
-I had also tried to cancel, and also did the halt to clear the process.
-
-Upon restarting, then a new expire, it again appeared to stall, but in time
-picked up and cruised through without issue.
-
-All expiration since that day has run fine, in "normal" behavior and timespan.

I do not have duration limit on the process. I looked at a previous night and 
it ran in 90 minutes or less. The current one has been running for a few hours 
and is still hung at 462 out of 595 nodes and the number of objects hasn't 
changed.
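If a runaway expiration is suspected, one way to bound the run while gathering data for IBM is to start it with an explicit duration and watch the process. A hedged sketch (parameter availability depends on server level; the RESOURCE thread count was added in 6.3):

```
expire inventory wait=yes duration=60 resource=4
query process
query actlog begintime=-01:00 search=expir*
```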




expire inventory seems to be hanging and not processing all nodes

2013-08-09 Thread Dury, John C.
I have an AIX server with TSM 6.3.4.0. Expiration is set to run at 2am and 
seems to get so far and then hangs. But what's odd is that it also seems to 
only process some of the nodes but not all. For example, I have it running now 
and it seems to be stuck:
   1 Expiration   Processed 462 nodes out of 595 total nodes,
   examined 4315 objects, deleting 4271 backup
   objects, 0 archive objects, 0 DB backup volumes,
   0 recovery plan files; 0 objects have been
   retried and 0 errors encountered.

I looked at previous nights and it ran successfully but only processed 462 
nodes out of 595 and says it ended successfully. If I cancel the one that is 
running now, it just hangs and never cancels. The only way to get rid of the 
process is to halt the server. I have opened a problem with IBM but I thought I 
would see if any of you have seen this issue.  Any idea why it isn't processing 
all 595 nodes?


I could scream!

2013-06-18 Thread Dury, John C.
Way back when we had 1 TSM server at v5.x, we asked IBM several times if there 
was a way to migrate a TSM server to a different platform (AIX --> Linux) and 
they said "no".  So we came up with a strategy to install a 2nd TSM server 
(Linux) at our alternate site and then have all local clients backup to it and 
then replicate to the main TSM server at our primary site (AIX). All local 
clients at primary site will do the same, backup to AIX server and then 
replicate to Linux server.  Once both servers have full copies of each other's 
data, we will change all replicated nodes on the linux server to regular nodes 
and change all remote nodes to then backup to the linux server. Afterwards the 
AIX server would be phased out and a new Linux server would be created and the 
whole process reversed.
Basically the goal was to get rid of the TSM AIX server and move to linux only 
since AIX hardware is much more expensive. So now with version 6.3.4, there is 
a way to migrate from an AIX 5.x server to a Linux x86_64 server which would 
have solved our problems. Where was this years ago? Of course now we have 2 
v6 servers and I am in the middle of the process described above.  Being able 
to move from AIX 5.x to Linux 6.3.4 would have been the solution we were 
looking for but now it's too late.
Sorry for my rant but this would have saved me a year's worth of work, at least.


Re: DP/Exchange mailbox restore (IMR) fix for Exchange 2010 with SP3 is ready!

2013-05-21 Thread John Stephens
Del:
Do you know if there is a fix for the Fastback for Exchange client for Exchange 
2010 SP3?
FB for EXCHange  version 6.1.5 will not run with exchange 2010 SP3 and a fix 
was in the works.
Thank you



John Stephens | CSE |STORServer, Inc
Mobile: 813.817.2526
john.steph...@storserver.com
www.STORServer.com

Sold with guaranteed data recovery - 
NOTHING ELSE MATTERS!



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: Monday, May 20, 2013 6:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DP/Exchange mailbox restore (IMR) fix for Exchange 2010 with 
SP3 is ready!

For those waiting for the Data Protection for Exchange mailbox restore (IMR) 
fix for Exchange 2010 with SP3...
... it's ready!

   http://www.ibm.com/support/docview.wss?uid=swg24034995


Note: Until the FlashCopy Manager for Microsoft Exchange 3.2.1 fix is 
available, FlashCopy Manager customers may install and use the Data Protection 
for Exchange 6.4.0.2 interim fix in their environment.
The Data  Protection for Microsoft Exchange fix provides equivalent 
functionality to FlashCopy Manager for Microsoft Exchange when used in 
FlashCopy Manager environments.

Thanks,

Del


Re: State of TSM Reporting

2013-04-05 Thread John Stephens
Hans:
TCR (an optional component of the Admin Center) has made great improvements 
over the last few years, but it still seems to have many moving parts.
I use STORServerConsole for all my reporting needs as well as complete TSM 
management functionality.



John Stephens | CSE |STORServer, Inc
Mobile: 813.817.2526
john.steph...@storserver.com



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Hans 
Christian Riksheim
Sent: Thursday, April 04, 2013 7:07 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] State of TSM Reporting

Hi.

We are going away from Perl and want to install a reporting feature for our TSM 
servers. What is the current state of TSM reporting? I got burned the last time 
I touched this, but that was some years ago.

Is it of acceptable quality today or is something new and better around the 
corner? Or should I just go for a 3rd party solution.
Suggestions?

Regards,

Hans Chr.


suggestions on setting up a Dell MD3260 as primary storage pool

2013-03-03 Thread Dury, John C.
We are changing our backup system and plan on using a Dell MD3260 as a primary 
disk storage device.  It will house all backups from the local site as well as 
replicated data from the remote site. It will function as a primary stgpool 
only. The tape library that is also attached will eventually be phased out.

My question is: what recommendations does anyone have on setting up this 
device? I was thinking of creating several 2T virtual disks and then presenting 
them to the Linux TSM server, but I am not sure whether I want to put them all 
in a volume group, since all of the striping is done on the back end of the 
device (all of the hard drives are in one giant dynamic disk pool). In each 2T 
disk, I was going to create 4 500G disk storage volumes, or maybe 2 1T disk 
storage volumes. Because the disk storage volumes will be permanent and not 
eventually offloaded to tape, I want to make sure I set this up correctly.
Any suggestions for best performance?  Ext3? Ext2?
Thanks.
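One common pattern for disk-only pools on this kind of array is a FILE device class spread across several filesystems (one per presented virtual disk) rather than raw DISK volumes. A hedged sketch, with all names, paths, and sizes as placeholder assumptions:

```
define devclass md3260class devtype=file maxcapacity=50g mountlimit=64 -
  directory=/tsm/fs1,/tsm/fs2,/tsm/fs3,/tsm/fs4
define stgpool md3260pool md3260class maxscratch=500
```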


TSM4VE Plugin Configuration?

2012-10-18 Thread Underdown,John William
I'm sure this is really simple, but I'm missing it. I set up TSM4VE 6.3 
according to the install guide, using the following:

datacenter STCVBLOCK
datacenter node STCVBLOCK_DC
datamovers LIN_VE_DM1,LIN_VE_DM2

Everything works using the dsmc (-asnode=STCVBLOCK_DC) and vmcli (see below) 
commands, but the vCenter plugin isn't using the correct TSM nodes. The plugin 
summary shows:

name: STCVBLOCK
TSM node name: STCVBLOCK
TSM node data mover nodes: STCVBLOCK_DC

i would think the plugin is getting the configuration from 
/home/tdpvmware/tdpvmware/config/profile, but this looks like it's set 
correctly.

VE_DATACENTER_NAME STCVBLOCK::STCVBLOCK_DC

Where are the TSM node names set for the vCenter plugin?

Many Thanks.

vmcli -f inquire_config -t TSM
#TASK 867 inquire_config 20121018121056320
#PARAM INSTALLED=TSM
#RUN 867 20121018121056320
#LANG en_US
#PARAM BACKEND=TSM
#PARAM OPERATION_TYPE 4
#PHASE_COUNT 4
#PHASE PREPARE
#PARAM BACKUP_TYPE=0
#PARAM TSM_SERVER_NAME=10.200.8.218
#PARAM TSMCLI_NODE_NAME=VBLOCKVCLI
#PARAM VCENTER_NODE_NAME=PWINCOR100
#PARAM DATACENTER_NODE_NAME=
#PARAM OFFLOAD_HOST_NAME=
#PARAM TSM_OPTFILE=/tmp/T4VE_X9vJb2
#PARAM INPUT_FILE=
#PARAM TRACEFILE=
#PARAM TRACEFLAGS=
#PHASE INITIALIZE
#PHASE INQUIRE_DATACENTER_NODES
#CHILD datacenternode:STCVBLOCK::STCVBLOCK_DC
#PARENT vcenternode:PWINCOR100
#CHILD datacenternode:STCVBLOCK::STCVBLOCK
#PARENT vcenternode:PWINCOR100
#PHASE INQUIRE_PROXY_NODES
#CHILD targetnode:STCVBLOCK_DC
#PARENT peernode:LIN_VE_DM1
#CHILD hladdress:10.200.8.218
#PARENT peernode:LIN_VE_DM1
#CHILD lladdress:46703
#PARENT peernode:LIN_VE_DM1
#CHILD targetnode:STCVBLOCK_DC
#PARENT peernode:LIN_VE_DM2
#CHILD hladdress:10.200.8.109
#PARENT peernode:LIN_VE_DM2
#CHILD lladdress:57686
#PARENT peernode:LIN_VE_DM2
#PARAM STATUS=success
#END RUN 867 20121018121059793
#END TASK 867
#INFO FMM16014I The return code is 0.
#END

-
NOTICE: This communication is intended only for the person or
entity to whom it is addressed and may contain confidential,
proprietary, and/or privileged material. Unless you are the
intended addressee, any review, reliance, dissemination,
distribution, copying or use whatsoever of this communication is
strictly prohibited. If you received this in error, please reply
immediately and delete the material from all computers. Email sent
through the Internet is not secure. Do not use email to send us
confidential information such as credit card numbers, PIN numbers,
passwords, Social Security Numbers, Account numbers, or other
important and confidential information.


Re: ANS8810E setting LD_LIBRARY_PATH.

2012-10-05 Thread Underdown,John William
Per Alex's suggestion, I'll put the export command in the dsmcad start script. 
But it appears that linking the TSM libcurl.so.4 library to the OS's copy fixes 
the problem; see below. This feels like a hack, though, and I don't know what 
issues it may cause. Any thoughts?

To me it looks like a conflict between the TSM client libraries and the OS's 
that needs to be corrected by Tivoli, but presently I'm working with first-level 
support on this and being asked for very basic information that doesn't apply 
to the problem.

mv /opt/tivoli/tsm/client/ba/bin/libcurl.so.4 
/opt/tivoli/tsm/client/ba/bin/libcurl.so.4.sav
ln -s /usr/lib64/libcurl.so.4 /opt/tivoli/tsm/client/ba/bin/libcurl.so.4
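The cleaner variant of Alex's suggestion is to scope the variable inside the dsmcad start script so it never leaks into login shells. A sketch of the relevant fragment (the script path and layout are assumptions, not taken from a shipped script):

```
# fragment of an init/start script for dsmcad (path assumed):
# set LD_LIBRARY_PATH only for the daemon, not globally
LD_LIBRARY_PATH=/opt/tivoli/tsm/client/ba/bin
export LD_LIBRARY_PATH
exec /opt/tivoli/tsm/client/ba/bin/dsmcad
```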

On 10/04/2012 06:21 PM, Alex Paschal wrote:
> Hi, John. Typically, instead of setting this variable globally, you
> would set it in a script before it calls dsmc, that way the variable's
> only set until the script ends. Only if you're running this as a
> dedicated service user, or only if it doesn't break anything, would I
> recommend setting the variable in the user's .kshrc, .bashrc,
> .bash_profile, .profile, or whatever.
>
>

Re: define server for replication

2012-10-04 Thread Dury, John C.
Excellent! This did the trick and I can now export a node to the other server.
Thanks for your help. I believe this will also solve my other thread, "[ADSM-L] 
move nodes from one server to a different server via export" 
(http://adsm.org/lists/html/ADSM-L/2012-10/msg00018.html).

Hi,

Define server uses the server name and serverpassword to validate, whereas ping
uses the admin name and password to validate the session. Make sure the admin
and password on the server where you issue the ping are also available on the
server you ping to.

Kind regards,
Karel

- Oorspronkelijk bericht -
Van: Dury, John C. 
Verzonden: woensdag 3 oktober 2012 23:34
Aan: ADSM-L AT VM.MARIST DOT EDU
Onderwerp: [ADSM-L] define server for replication

We have 2 datacenters and plan on having a TSM server at each site. Every
client will back up all of the nodes at that site to the TSM server at that site
and then replicate their data to the server at the other site. Both servers are
running TSM 6.3.2. One is AIX and one is Linux. The Linux server is brand new
and was just built. The AIX server has been around seemingly forever and is
currently backing up nodes from both sites.

I've read through the manual on replication and believe I understand how it
works. Just as a test, I successfully defined both servers to each other and
successfully replicated one small node from one server to the other.
For whatever reason, the servers stopped talking to each other (I can't find any
reason anywhere), so I decided I would delete the replicated node on both
servers and also delete each server from the other and redefine them.
When I try "def server <2ndserver> serverpassword=
hladdress=2ndserver.blah.blah lladdress=1500" (with correct values of course),
it works successfully but when I try "ping server 2ndserver" it always fails
with
"ANR4373E Session rejected by target server 2ndserver, reason: Authentication
Failure."
I've tried making sure that all of the below were done on both servers:
set serverhladdress
set serverlladdress
set serverpassword
and tried with both crossdefine on and crossdefine off.
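For reference, ANR4373E on PING SERVER is usually a mismatch between the password given on DEFINE SERVER and the target server's own SET SERVERPASSWORD value; the two definitions have to be mirror images. A hedged sketch with placeholder names and passwords:

```
/* On SERVER_A */
set servername server_a
set serverpassword secret_a
define server server_b serverpassword=secret_b hladdress=serverb.example.com lladdress=1500

/* On SERVER_B */
set servername server_b
set serverpassword secret_b
define server server_a serverpassword=secret_a hladdress=servera.example.com lladdress=1500
```

If the stored verification gets out of sync after deleting and redefining, UPDATE SERVER with FORCESYNC=YES on both sides may also be worth trying.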
I can't find any logic as to why this doesn't work when I know for a fact it
worked previously. There are no nodes type=server defined on either tsm server.
The userid I am using to do the command is the same on both servers and has the
same password although I don't think that matters.  Both servers have nothing
currently set for target replication server.

The replicated node I used for testing is also still defined on the target
server and I can't delete it either because it says "ANR1633E REMOVE NODE: Node
NODENAME is set up for replication and cannot be renamed or removed." The
replication settings on the replicated node on the target server are
replstate=disabled replmode=receive. I can't change the replmode no matter what
either.

Is anyone else successfully using replication? It worked once and hasn't since
and now as I am trying to set it all up again as if from scratch, I can't even
get the two servers to talk anymore.
Help!


ANS8810E setting LD_LIBRARY_PATH.

2012-10-04 Thread Underdown,John William
Hi All,

I'm having a problem with setting the LD_LIBRARY_PATH variable on a Linux box for
backing up VMs; see below. Any guidance on this would be most appreciated.

Thanks.

Problem Details
.
Product or Service: Tivoli Storage Manager Client Linux 6.3.L
Component ID: 5698ISMCL
.
Operating System: Linux
.
Problem title
ans8810E setting LD_LIBRARY_PATH
.
Problem description
On RHEL 5.x I would set LD_LIBRARY_PATH globally in the login profile. On
RHEL 6.x, when I do that, it breaks many of the RHEL commands; see below. What
is the best way to set the LD_LIBRARY_PATH variable?


# yum update
Fatal Python error: pycurl: libcurl link-time version is older than
compile-time version
Aborted (core dumped)

Red Hat Enterprise Linux Server release 6.3 (Santiago)  
2.6.32-279.9.1.el6.x86_64 

TSM  Client Version 6, Release 3, Level 0.15
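
A common alternative to a global export is to scope the variable to the one
process that needs it, so system tools such as yum keep resolving the stock
libcurl. A sketch, assuming a typical 64-bit client API path of
/opt/tivoli/tsm/client/api/bin64:

```shell
# The assignment on the command line is passed only to the child process;
# the login shell's environment is left untouched.
LD_LIBRARY_PATH=/opt/tivoli/tsm/client/api/bin64 sh -c 'printf "%s\n" "$LD_LIBRARY_PATH"'
```

The same form works for dsmc itself, or inside a small wrapper script around
the scheduler, instead of setting the variable in the login profile.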


-
NOTICE: This communication is intended only for the person or
entity to whom it is addressed and may contain confidential,
proprietary, and/or privileged material. Unless you are the
intended addressee, any review, reliance, dissemination,
distribution, copying or use whatsoever of this communication is
strictly prohibited. If you received this in error, please reply
immediately and delete the material from all computers. Email sent
through the Internet is not secure. Do not use email to send us
confidential information such as credit card numbers, PIN numbers,
passwords, Social Security Numbers, Account numbers, or other
important and confidential information.


Re: define server for replication

2012-10-04 Thread Dury, John C.
I was able to fix this and remove the replicated nodes. I had to redefine the
whole infrastructure for replication just to delete it. It turns out this
wasn't going to work for what we are trying to do anyway. See the next message,
"move nodes from one server to a different server via export".




Unfortunately I had already removed the replicated nodes on both servers and it
still won't let me delete it.
John




Hi John

About the second part of your message, "ANR1633E REMOVE NODE: Node NODENAME is
set up for replication and cannot be renamed or removed.":

You first need to remove the replication state ("remove replnode NODENAME") on
both servers (source and target) before you can remove the node.

Regards Robert
--

Date:Wed, 3 Oct 2012 17:34:22 -0400
From:"Dury, John C." 
Subject: define server for replication

We have 2 datacenters and plan on having a tsm server at each site. Every
client will backup all of the nodes at that site to the tsm server at that site
and then replicate their data to the server at the other site. Both servers are
running tsm 6.3.2. One is aix and one is linux. The linux server is brand new
and was just built. The aix server has been around seemingly forever and is
currently backing up nodes from both sites.

I've read through the manual on replication and believe I understand how it
works. Just as a test, I successfully defined both servers to each other and
successfully replicated one small node from one server to the other.
For whatever reason, the servers stopped talking to each other (I can't find
any reason anywhere) so I decided I would delete the replicated node on both
servers and also delete each server from the other and redefine them.
When I try "def server <2ndserver> serverpassword= hladdress=2ndserver.blah.blah
lladdress=1500" (with correct values of course), it works successfully but when
I try "ping server 2ndserver" it always fails with "ANR4373E Session rejected
by target server 2ndserver, reason: Authentication Failure."
I've tried making sure that all of the below were done on both servers:
set serverhladdress
set serverlladdress
set serverpassword
and tried with both crossdefine on and crossdefine off.
I can't find any logic as to why this doesn't work when I know for a fact it
worked previously. There are no nodes type=server defined on either tsm server.
The userid I am using to do the command is the same on both servers and has the
same password although I don't think that matters. Both servers have nothing
currently set for target replication server.

The replicated node I used for testing is also still defined on the target
server and I can't delete it either because it says "ANR1633E REMOVE NODE:
Node NODENAME is set up for replication and cannot be renamed or removed."
The replication settings on the replicated node on the target server are
replstate=disabled replmode=receive. I can't change the replmode no matter
what either.

Is anyone else successfully using replication? It worked once and hasn't since
and now as I am trying to set it all up again as if from scratch, I can't even
get the two servers to talk anymore.
Help!

--

End of ADSM-L Digest - 2 Oct 2012 to 3 Oct 2012 (#2012-229)
***


move nodes from one server to a different server via export

2012-10-04 Thread Dury, John C.
Please ignore my previous post about trying to def a target server to source 
server and it failing. It looks like it works even though I still can't ping 
the target server from the source server. The replication works correctly 
though.

A brief description of our environment. We have 2 datacenters. Let's call them 
Site A and site B. Both sites have TSM nodes which are all now currently 
backing up to the TSM server at site A.

We would like to have a tsm server at both sites and have the clients at each 
respective site backup to their local site, and then replicate that data to the 
other site.

I was hoping to be able to replicate node data from site A to site B and then 
convert that node to a regular node so the node at that site could then backup 
to site B.

It doesn't look like there is any way to do that so the next choice is to 
export the node from server A to server B and then change the node so it backs 
up to server B instead of server A. Once that is done, I can enable replication 
between sites for that node so it will replicate from site B to site A and thus 
we have a DR solution for all of our nodes. Because this is a tremendous amount
of data, I'd like to only have to send the data once from site A to site B via
the export, and then use replication to replicate daily changes from site B
back to site A, which should be minimal since the node data already
exists at site A.
Is this possible?
I have both servers defined to each other and when I try to export a node's 
data from site A to site B I get:
ANR0561E EXPORT NODE: Process aborted - sign on to server,  failed. 
(SESSION: 118112, PROCESS: 318)
Both servers have my admin id defined with the same password, and both servers
have the same server password (although I don't think that matters).
Site B was defined to site A via:
def server siteB pass=server hladdress=*.*.*.* lladdress=1500
I also tried the other syntax:
Def server siteB serverpass=password hladdress=*.*.*.* lladdress=1500
I also tried defining a node on siteB with the name of the server at siteA and 
type=server
Force sync also didn't help.
No matter what, I get the "sign on to server failed" message.
Is it me, or is "def server" extremely buggy?!
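
For reference, once the server definitions do authenticate, the server-to-server
export described above is a single administrative command per node; a sketch
with placeholder names (NODE1, and the siteB server definition from the message):

export node NODE1 filedata=all toserver=SITEB

After the export completes, the client's dsm.sys/dsm.opt can be pointed at the
siteB server and replication enabled from site B back to site A.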


Re: ADSM-L Digest - 2 Oct 2012 to 3 Oct 2012 (#2012-229)

2012-10-04 Thread Dury, John C.
Unfortunately I had already removed the replicated nodes on both servers and it 
still won't let me delete it.
John




Hi John

About the second part of your message, "ANR1633E REMOVE NODE: Node NODENAME is
set up for replication and cannot be renamed or removed.":

You first need to remove the replication state ("remove replnode NODENAME") on
both servers (source and target) before you can remove the node.

Regards Robert
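
Robert's sequence as a sketch, keeping the placeholder node name from the error
message (the first command must be run on both the source and the target server,
the second on the server holding the node):

remove replnode NODENAME
remove node NODENAME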


define server for replication

2012-10-03 Thread Dury, John C.
We have 2 datacenters and plan on having a tsm server at each site. Every
client will backup all of the nodes at that site to the tsm server at that site
and then replicate their data to the server at the other site. Both servers are
running tsm 6.3.2. One is aix and one is linux. The linux server is brand new
and was just built. The aix server has been around seemingly forever and is
currently backing up nodes from both sites.

I've read through the manual on replication and believe I understand how it 
works. Just as a test, I successfully defined both servers to each other and 
successfully replicated one small node from one server to the other.
For whatever reason, the servers stopped talking to each other (I can't find any
reason anywhere) so I decided I would delete the replicated node on both 
servers and also delete each server from the other and redefine them.
When I try "def server <2ndserver> serverpassword= 
hladdress=2ndserver.blah.blah lladdress=1500" (with correct values of course), 
it works successfully but when I try "ping server 2ndserver" it always fails 
with
"ANR4373E Session rejected by target server 2ndserver, reason: Authentication 
Failure."
I've tried making sure that all of the below were done on both servers:
set serverhladdress
set serverlladdress
set serverpassword
and tried with both crossdefine on and crossdefine off.
I can't find any logic as to why this doesn't work when I know for a fact it 
worked previously. There are no nodes type=server defined on either tsm server. 
The userid I am using to do the command is the same on both servers and has the 
same password although I don't think that matters.  Both servers have nothing 
currently set for target replication server.

The replicated node I used for testing is also still defined on the target 
server and I can't delete it either because it says "ANR1633E REMOVE NODE: Node 
NODENAME is set up for replication and cannot be renamed or removed." The
replication settings on the replicated node on the target server are 
replstate=disabled replmode=receive. I can't change the replmode no matter what 
either.

Is anyone else successfully using replication? It worked once and hasn't since 
and now as I am trying to set it all up again as if from scratch, I can't even 
get the two servers to talk anymore.
Help!


Re: TSM Server Upgrade from 5.5.5.2 to 6.3.2

2012-08-20 Thread Dury, John C.
I did the upgrade and for the most part it went smoothly. The upgrade wizard 
did insist on the directories being empty but I was able to create a 
subdirectory under my target directory and use that so I didn't have to have as 
much SAN space since I could reuse some directories.

John
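
The subdirectory trick John describes can be sketched like this (the paths are
assumptions; /tmp/tsmdb stands in for the real database filesystem). The
filesystem itself is not empty, but the upgrade wizard only checks the directory
you point it at:

```shell
# Create an empty subdirectory inside the (non-empty) v5 DB filesystem
# and hand that to the v6 upgrade wizard as the new database directory.
db_fs=/tmp/tsmdb          # stands in for the real /tsmdb filesystem
mkdir -p "$db_fs/v6"
ls -A "$db_fs/v6"         # nothing listed: the directory is empty
```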







With my upgrades I never did test the theory, but I think the directory you
specify must be empty.  You might try something like this:

/tsmdb/v6 instead of /tsmdb

Again this is a theory I did not test, but somewhere I saw an example of the DB
location that was not on the root of the file system.

Please post if this works, I am sure others would be interested.

Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of
Dury, John C.
Sent: Friday, August 17, 2012 11:06 AM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] TSM Server Upgrade from 5.5.5.2 to 6.3.2

I am planning on upgrading our TSM server from v5.5.5.2 to v6.3.2 this weekend.
I have checked and all of the prerequisites are in place and I think I am
ready. My only question is, we currently have the v5 database in several
directories that have several .DSM files in them but also have lots of free
space in them, more than enough to house the V6 DB even if it doubles in size.
According to the upgrade manual, the new directories for the v6 server must
be empty but I was hoping to be able to use the same directories where the
current v5 database files live since they have more than enough space and then
I would eventually delete the old DSM files from the v5 database once the
upgrade is complete.

When I go to do the upgrade/migration, will it let me use the same directories
even though they have one or two files in them already? I realize it might slow
the process down but I am limited on SAN space right now.

This e-mail (including any attachments) is confidential and may be legally
privileged. If you are not an intended recipient or an authorized
representative of an intended recipient, you are prohibited from using, copying
or distributing the information in this e-mail or its attachments. If you have
received this e-mail in error, please notify the sender immediately by return
e-mail and delete all copies of this message and any attachments.

Thank you.


TSM Server Upgrade from 5.5.5.2 to 6.3.2

2012-08-17 Thread Dury, John C.
I am planning on upgrading our TSM server from v5.5.5.2 to v6.3.2 this weekend. 
I have checked and all of the prerequisites are in place and I think I am 
ready. My only question is, we currently have the v5 database in several
directories that have several .DSM files in them but also have lots of free
space in them, more than enough to house the V6 DB even if it doubles in size.
According to the upgrade manual, the new directories for the v6 server must
be empty but I was hoping to be able to use the same directories where the 
current v5 database files live since they have more than enough space and then 
I would eventually delete the old DSM files from the v5 database once the 
upgrade is complete.
When I go to do the upgrade/migration, will it let me use the same directories 
even though they have one or two files in them already? I realize it might slow 
the process down but I am limited on SAN space right now.


backup tape stgpool to tape stgpool pins recovery log

2012-07-25 Thread Dury, John C.
I have 2 sl500 tape libraries attached via dark fiber (which doesn't appear to 
be getting anywhere close to the speeds it should but that's another issue) to 
an AIX TSM 5.5 server.  One is copy storage pool and the other a primary. I 
backup the primary to the copy storage pool daily. This process is taking an
extremely long time and seems to hang: there is at least one extremely large
file that, while it is copying, pins the recovery log and does not finish
before the recovery log is over 80% full, which causes TSM to slow to a crawl,
so I end up cancelling the backup storage pool process. This is hanging my TSM
server on a daily basis because the recovery log keeps getting pinned by the
backup storage pool process.
Is there any way I can find out which node (or node and filespace) is taking so
long to back up, so I can verify it really needs to be backed up at all?
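
One hedged way to hunt for the culprit is to ask the server which objects on
the volume being copied are largest, via the CONTENTS table. A sketch only:
VOL001 is a placeholder for the primary-pool volume the copy process is reading,
and this query can be slow on a large 5.5 database:

select node_name, filespace_name, file_size from contents where volume_name='VOL001' order by file_size desc

The node and filespace at the top of the result is the likely log-pinning
candidate.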


Re: TDP for VMWARE

2012-04-20 Thread John Morrison
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 16/04/12 11:33, Wei JW Ji wrote:
> Hi John,
>
> Have you tried to specify the TCPPORT into the dsm.opt of baclient?
>
> It works for me for the datamover node.
>
> Thanks,
> Hunter
This is fine!

This is for the VMcli config.


/opt/tivoli/tsm/tdpvmware/common/scripts/vmcliprofile
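
Hunter's workaround amounts to forcing the port in the backup-archive client
options rather than relying on the generated file; a sketch of the relevant
stanza, reusing the server name, address placeholder, and node name from the
generated dsm.sys shown elsewhere in this thread:

SERVERNAME        TSMCLIWRAPPER
   TCPSERVERADDRESS   some.ip.number
   TCPPORT            1507
   PASSWORDACCESS     GENERATE
   NODENAME           VC_VCLI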

- -- 
John Morrison BEng. tekn. systemadm, Uppsala universitet
Avd. IT och inköp
Nät- och serverdrift
ITC, Lägerhyddsv. 2B, Hus 3
PO Box 887, 751 08 Uppsala, Sweden
Tel: +46 18 471 7718,  Mob: +46 70 425 0477
E-mail:  john.morri...@uadm.uu.se
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJPkSgDAAoJEFoVp1K1f/uPpGYH/2DMlpDdkXpvutG/eQHKfq++
M3t1Jutv7hYn+5tVjMIBgfzThdE7REJbcwDxNGOYMYzQdKxDLUjNpDjkjvOth29B
nXR/1uIINAXzlww2aMc7v84owNWH00Sv9VUnQi2Db4FE2PYfcgd84pENx8YbyIab
1Yu2c5GsJTtik/CP/MnQ5+BKTWX24zqD7BMTO46XqluUlSpBSyvJyBQimn7hYoVO
RfrNg4uLr0S5jCnROKmiw+X7JhBYsQ1Q/FH2Q2sNy506bubSmy9WiwM2apnYRJyJ
9aqoc17PXoUKk6NnYPbzAMrH7CAyWI1MBGw/vcoFkSPWfWtLWQHZxJREEs4rfhk=
=bi89
-END PGP SIGNATURE-


Re: TSM 6.2.3 DB2 Deadlock

2012-04-16 Thread John Monahan
Are you also doing dedupe?

http://www-01.ibm.com/support/docview.wss?uid=swg1IC81115





___
John Monahan
Delivery Consultant
Logicalis, Inc. 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Monday, April 16, 2012 10:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM 6.2.3 DB2 Deadlock

I find my DB2 database in a deadlock condition.  It's been deadlocked for about 
40 hours.

It's run fine for many, many months until this weekend.

Running on Windows 2008 R2.

During node backup, migration kicked in (as it does frequently).  But, 
apparently, migration is trying to move a file that a node is backing up?

Anyone have a suggestion?  I've opened a PMR but don't have a response yet.

I hesitate to halt/restart the TSM service without some guidance.



Harold Vandeventer
Systems Programmer
State of Kansas - Department of Administration - Office of Information 
Technology Services harold.vandeven...@da.ks.gov
(785) 296-0631


Re: TDP for VMWARE

2012-04-16 Thread John Morrison
Cheers,

I've tried that already Rick. I saw the same, but to no avail.

John


Anybody else? :)


On 13/04/12 12:40, Rick Harderwijk wrote:
> John,
>
> The file is shown on my screen as having the # directly behind 1507 .
> Should there be a space between these two?
>
> Regards,
> Rick
> 2012/4/13 John Morrison 
>
> Hello
>
> I've run into a spot of bother with the data mover on linux not picking
> up the port number from the profile file in:
>
> /home/tdpvmware/tdpvmware/config/profile
>
> >>>>> VMCLI
> VMCLI_TRACE YES
> VE_TSM_SERVER_NAME some ip address # -s
> VE_TSM_SERVER_PORT 1507 # -p
> VE_TSMCLI_NODE_NAME VC_VCLI # -n
> VE_VCENTER_NODE_NAME   VC  # -v
> VE_TRACE_FILE tsmcli.trace # -x
> tsmcli trace file
> VE_TRACE_FLAGSapi api_detail # -y trace
> flags
> #VE_DATACENTER_NAME datacentername::datacenternodename
> VE_DATACENTER_NAME VC_DC::VC_DC
> VMCLI_TASK_EXPIRATION_TIME 864000# in
> seconds, defaults to 864000s = 10 days
> VMCLI_RESTORE_TASK_EXPIRATION_TIME 2592000   # in
> seconds, defaults to 2592000s = 30 days
> VMCLI_GRACE_PERIOD 2592000   # in
> seconds, defaults to 2592000s = 30 days
> VMCLI_SCHEDULER_INTERVAL   60 # in
> seconds, defaults to 1s
> VMCLI_DB_HOST  localhost
> VMCLI_DB_PORT   1527
> VMCLI_CACHE_EXPIRATION_TIME600   # in
> seconds, defaults to 600s = 10 min
> VMCLI_DB_NAME  VMCLIDB
> VMCLI_RECON_INTERVAL_FCM   600   #
> backend specific recon interval setting in seconds default 600s = 10 min
> VMCLI_RECON_INTERVAL_TSM   1200  #
> backend specific recon interval setting in seconds default 1200s = 20 min
> VMCLI_DB_BACKUP AT 00:00
> VMCLI_DB_BACKUP_VERSIONS   3
> VMCLI_LOG_DIR  logs
> DERBY_HOME  /home/tdpvmware/tdpvmware
> <<<
>
>
> - From this it generates a dsm.sys every 1 min in:
>
> /opt/tivoli/tsm/tdpvmware/tsmcli/bin64/dsm.sys
>
> SERVERNAME   TSMCLIWRAPPER
>  PASSWORDACCESS GENERATE
>  TCPSERVERADDRESS some ip number
>  NODENAME VC_VCLI
>
> But when the inquire command is run, it calls the default port 1500
> and not 1507
>
> Anybody seen this problem?
>
>>

-- 
John Morrison BEng. tekn. systemadm, Uppsala universitet
Avd. IT och inköp
Nät- och serverdrift
ITC, Lägerhyddsv. 2B, Hus 3
PO Box 887, 751 08 Uppsala, Sweden
Tel: +46 18 471 7718,  Mob: +46 70 425 0477
E-mail:  john.morri...@uadm.uu.se


Re: TDP for VMWARE

2012-04-13 Thread John Morrison
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Cheers,

I've tried that already Rick. I saw the same, but to no avail.

John

On 13/04/12 12:40, Rick Harderwijk wrote:
> John,
>
> The file is shown on my screen as having the # directly behind 1507 .
> Should there be a space between these two?
>
> Regards,
> Rick
> 2012/4/13 John Morrison 
>
> Hello
>
> I've run into a spot of bother with the data mover on linux not picking
> up the port number from the profile file in:
>
> /home/tdpvmware/tdpvmware/config/profile
>
> >>>>> VMCLI
> VMCLI_TRACE YES
> VE_TSM_SERVER_NAME some ip address # -s
> VE_TSM_SERVER_PORT 1507# -p
> VE_TSMCLI_NODE_NAME VC_VCLI # -n
> VE_VCENTER_NODE_NAME VC # -v
> VE_TRACE_FILE tsmcli.trace # -x
> tsmcli trace file
> VE_TRACE_FLAGS api api_detail # -y trace
> flags
> #VE_DATACENTER_NAME datacentername::datacenternodename
> VE_DATACENTER_NAME VC_DC::VC_DC
> VMCLI_TASK_EXPIRATION_TIME 864000 # in
> seconds, defaults to 864000s = 10 days
> VMCLI_RESTORE_TASK_EXPIRATION_TIME 2592000 # in
> seconds, defaults to 2592000s = 30 days
> VMCLI_GRACE_PERIOD 2592000 # in
> seconds, defaults to 2592000s = 30 days
> VMCLI_SCHEDULER_INTERVAL 60 # in
> seconds, defaults to 1s
> VMCLI_DB_HOST localhost
> VMCLI_DB_PORT 1527
> VMCLI_CACHE_EXPIRATION_TIME 600 # in
> seconds, defaults to 600s = 10 min
> VMCLI_DB_NAME VMCLIDB
> VMCLI_RECON_INTERVAL_FCM 600 #
> backend specific recon interval setting in seconds default 600s = 10 min
> VMCLI_RECON_INTERVAL_TSM 1200 #
> backend specific recon interval setting in seconds default 1200s = 20 min
> VMCLI_DB_BACKUP AT 00:00
> VMCLI_DB_BACKUP_VERSIONS 3
> VMCLI_LOG_DIR logs
> DERBY_HOME /home/tdpvmware/tdpvmware
> <<<
>
>
> - From this it generates a dsm.sys every 1 min in:
>
> /opt/tivoli/tsm/tdpvmware/tsmcli/bin64/dsm.sys
>
> SERVERNAME TSMCLIWRAPPER
> PASSWORDACCESS GENERATE
> TCPSERVERADDRESS some ip number
> NODENAME VC_VCLI
>
> But when the inquire command is run, it calls the default port 1500
> and not 1507
>
> Anybody seen this problem?
>
>>

- -- 
John Morrison BEng. tekn. systemadm, Uppsala universitet
Avd. IT och inköp
Nät- och serverdrift
ITC, Lägerhyddsv. 2B, Hus 3
PO Box 887, 751 08 Uppsala, Sweden
Tel: +46 18 471 7718,  Mob: +46 70 425 0477
E-mail:  john.morri...@uadm.uu.se
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJPiCAIAAoJEFoVp1K1f/uPvAgIAKkm4u58xX/jhN5BfKP28hkJ
YysR4vazDJDKRN9e3asrYK8vZ2dTqIunYUnCSjqS347M15F+r1MKYtPkY9T6t8se
a9RXx6iwO4hYLvs2T1BAcymxmRbl5n0MAG5jmYwUZUz5k6NmUkkbYWH/VskupuC0
acJVJC6YCLk4EXJMtiTF6UBwtop5ONtMfX4plPrhq+m+wkfba5flWz0I3s1wDmp2
B+q/rxCXHb9Dq6idup5IF8klik4Mdid00T6B2pTUtUvLLDBi6qVM6xryyppwKA4/
bPNxqDYlbeeHhOkHH32wMPstvPSyM5P8x1loyShIZy5rEVVR3wshlExKCgObzjQ=
=1h79
-END PGP SIGNATURE-


TDP for VMWARE

2012-04-13 Thread John Morrison
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello

I've run into a spot of bother with the data mover on linux not picking
up the port number from the profile file in:

/home/tdpvmware/tdpvmware/config/profile

>>> VMCLI
VMCLI_TRACE YES
VE_TSM_SERVER_NAME some ip address # -s
VE_TSM_SERVER_PORT 1507 # -p
VE_TSMCLI_NODE_NAME VC_VCLI # -n
VE_VCENTER_NODE_NAME   VC  # -v
VE_TRACE_FILE tsmcli.trace # -x
tsmcli trace file
VE_TRACE_FLAGSapi api_detail # -y trace
flags
#VE_DATACENTER_NAME datacentername::datacenternodename
VE_DATACENTER_NAME VC_DC::VC_DC
VMCLI_TASK_EXPIRATION_TIME 864000# in
seconds, defaults to 864000s = 10 days
VMCLI_RESTORE_TASK_EXPIRATION_TIME 2592000   # in
seconds, defaults to 2592000s = 30 days
VMCLI_GRACE_PERIOD 2592000   # in
seconds, defaults to 2592000s = 30 days
VMCLI_SCHEDULER_INTERVAL   60 # in
seconds, defaults to 1s
VMCLI_DB_HOST  localhost
VMCLI_DB_PORT   1527
VMCLI_CACHE_EXPIRATION_TIME600   # in
seconds, defaults to 600s = 10 min
VMCLI_DB_NAME  VMCLIDB
VMCLI_RECON_INTERVAL_FCM   600   #
backend specific recon interval setting in seconds default 600s = 10 min
VMCLI_RECON_INTERVAL_TSM   1200  #
backend specific recon interval setting in seconds default 1200s = 20 min
VMCLI_DB_BACKUP AT 00:00
VMCLI_DB_BACKUP_VERSIONS   3
VMCLI_LOG_DIR  logs
DERBY_HOME  /home/tdpvmware/tdpvmware
<<<


- From this it generates a dsm.sys every 1 min in:

/opt/tivoli/tsm/tdpvmware/tsmcli/bin64/dsm.sys

SERVERNAME   TSMCLIWRAPPER
 PASSWORDACCESS GENERATE
 TCPSERVERADDRESS some ip number
 NODENAME VC_VCLI

But when the inquire command is run, it calls the default port 1500
and not 1507

Anybody seen this problem?

- -- 
John Morrison BEng. tekn. systemadm, Uppsala universitet
Avd. IT och inköp
Nät- och serverdrift
ITC, Lägerhyddsv. 2B, Hus 3
PO Box 887, 751 08 Uppsala, Sweden
Tel: +46 18 471 7718,  Mob: +46 70 425 0477
E-mail:  john.morri...@uadm.uu.se
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJPh+6tAAoJEFoVp1K1f/uPmZEH/i2s7ZHKyfHaWF+B5soSLL1s
ax0XAP8AiDPEkVsOgPNqY9erZB5qSZ20WL2qdxWOYAwm2qJB+C8oGTyFFXzW4X2/
hiI6s5KVIFMgkRigwHO+Deu7DvMwjQPo50xMAR1mEPQzVYjNmfDxSxCRUT0mVzEU
8HrUhb16tVnGOes6JtMHrO4N1UmqkU+5UIXHSEUpKhsczvD79eQydPNogvNVqeCt
4KzoK+Ay/ZiN9KCeSkwJxPVtZtAfViW93pO1zhEjqzdsfjctr4IEyFCdn0tW4cdK
HriEF+TRRQdo/CJzUopBNcignctCrS65SgChwUn6y7c0LSK+iHvthLoKwtZ5Xuk=
=N+Gw
-END PGP SIGNATURE-


Re: RFE 17805

2012-03-29 Thread John Monahan
I believe TSM 6.3 node replication is one answer to your request.



___
John Monahan
Delivery Consultant
Logicalis, Inc. 

___
Business and technology working as one 

This e-mail message is intended only for the confidential and privileged use of 
the addressee (or others authorized to receive for the addressee). If you are 
not an intended recipient, please delete the e-mail and any attachments and 
notify the sender immediately

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
EJ van - SPLXO
Sent: Thursday, March 29, 2012 3:31 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: RFE 17805

Hi Remco!
I totally agree!
In our shop we had to create a dedicated server to enable backups in case of a 
disaster. It contains a standby primary pool with 500 tapes, just sitting 
there, costing money. This pool is located on the remote location and is 
readonly. When the primary pool is lost, we have to make this pool readwrite so 
backups can continue.
In this case you can see that the primary focus of development is still on 
restore, while backing up has become equally vital. Oracle databases rely on a
frequent backup of the archive logs. When the logs become full, Oracle simply stops.
I voted for your RFE and request others to do the same. It takes just one 
minute of your time.
Kind regards,
Eric van Loon

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Remco 
Post
Sent: woensdag 28 maart 2012 20:43
To: ADSM-L@VM.MARIST.EDU
Subject: RFE 17805

Hi all,

I submitted a RFE for TSM regarding the behavior of copy pools.

You can find the RFE via
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRFEs and search for 
RFE ID 17805.

I'd like to invite all of you to vote on this RFE if you find it useful, so
that development can get a good idea of how important this RFE would be to
customers.

Below you find the input I submitted:

Description: For one storage pool to be an exact replica of another
pool and be equivalent in every way, to the extent that if one pool becomes
unavailable all activity continues on the remaining pool, including write
activity.

This would also involve changing the behavior of commands like delete volume, 
migration, etc.

Use case:   In a dual-site setup with two tape libraries, one would
want operations to continue as seamless as possible in case of a site disaster. 
Current copy storage pools can't be promoted to primary pools, so with the 
current technology one has to create new storage pools on the second site, even 
though a copy storage pool is already available.

Business justification: there are many dual-site setups among IBM and its
customers. All data in a TSM server environment (disk) can be mirrored, but as
soon as the primary tape storage pools become unavailable, there is a lot of 
manual intervention involved before normal operations can continue.

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622

For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt. 
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered number 
33014286




Re: Antwort: Re: [ADSM-L] Sources of discrepancy between FILE volume size on disk and reported space?

2012-01-04 Thread Underdown,John William
Thanks Alex,

that works! it'll save me a TON of space!

is this a great forum or WHAT?!?!?!

On 01/04/2012 09:17 AM, Alexander Heindl wrote:
> use a nonblocked pool
> (you cannot change this setting - you'll need to move all data)
>
> consider some disadvantages of nonblocked data, e.g. no autocopy with
> "blocked" pools...
>
> Regards,
> Alex
>
>
>
>
> Von: "Allen S. Rout" 
> An: ADSM-L@VM.MARIST.EDU
> Datum: 04.01.2012 15:04
> Betreff: Re: [ADSM-L] Sources of discrepancy between FILE volume size on
> disk and reported space?
> Gesendet von: "ADSM: Dist Stor Manager" 
> ----
>
>
>
> On 01/04/2012 08:01 AM, Underdown,John William wrote:
>  > i have the same issue with our document-imaging backups. this is due to
>  > zillions of itty-bitty files being backed up, see below. btw, the
>  > TXNGROUPMAX option only applies to files being backed up, not moved or
>  > reclaimed. [...]
>
> Sounds like this could very well be my problem. Thanks!
>
> Shame I can't fix it for extant data. I might not be able to fix it at
> all: if the content managment system sends just a page, or just a
> packet of metadata, the txngroupmax will be irrelevant: transaction
> size = 1. :P
>
>
> - Allen S. Rout
>
>

-
NOTICE: This communication is intended only for the person or
entity to whom it is addressed and may contain confidential,
proprietary, and/or privileged material. Unless you are the
intended addressee, any review, reliance, dissemination,
distribution, copying or use whatsoever of this communication is
strictly prohibited. If you received this in error, please reply
immediately and delete the material from all computers. Email sent
through the Internet is not secure. Do not use email to send us
confidential information such as credit card numbers, PIN numbers,
passwords, Social Security Numbers, Account numbers, or other
important and confidential information.


Re: Sources of discrepancy between FILE volume size on disk and reported space?

2012-01-04 Thread Underdown,John William
i have the same issue with our document-imaging backups. this is due to 
zillions of itty-bitty files being backed up, see below. btw, the 
TXNGROUPMAX option only applies to files being backed up, not moved or 
reclaimed.

Volume 100% utilized, Estimated Capacity does not meet MAXCAPacity
Technote (FAQ)

Problem
The output from "q vol" on a FILE devclass volume in a TSM 5.3 Server is 
reporting 100% utilized, but the "Estimated Capacity" has not reached 
the MAXCAPacity.

Cause
For example, a FILE devclass named FILEDIR has been set to have a MAX 
Capacity of 64MB but each volume has only been filled to 34MB before 
becoming full:

Output from "q devclass":
Device    Device      Storage  Device  Format  Est/Max    Mount
Class     Access      Pool     Type            Capacity   Limit
Name      Strategy    Count                    (MB)
--------- ----------- -------- ------- ------- ---------- -----
FILEDIR   Sequential  1        FILE    DRIVE   64.0       20

After a backup of small objects, the output from "q vol" will show 
something like:
Volume Name                Storage    Device      Estimated  Pct    Volume
                           Pool Name  Class Name  Capacity   Util   Status
-------------------------- ---------- ----------- ---------- ------ ------
C:\TSMDATA\SERVER1\DIRM-   FILEDIR    FILEDIR     34.0       100.0  Full
C\01E6.BFS
C:\TSMDATA\SERVER1\DIRM-   FILEDIR    FILEDIR     34.0       100.0  Full
C\01E7.BFS

There was a design change to the FILE devclass in 5.3 where each I/O to a 
FILE devclass volume will be at least 256 KB, regardless of how much data is 
being written. Under these conditions, if only one object of 500 bytes is 
backed up, it will take 256 KB to store it on the FILE devclass volume. This 
has the biggest impact when backing up small objects (for instance, small 
files or directories).
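The arithmetic behind this can be sketched in a few lines (the 256 KB minimum write is from the technote above; the object counts below are illustrative, not from the technote):

```python
import math

BLOCK = 256 * 1024  # minimum I/O unit on FILE devclass volumes (TSM >= 5.3)

def on_disk_bytes(object_sizes):
    """Bytes consumed on a FILE volume when each object (or aggregate)
    is padded out to whole 256 KB blocks."""
    return sum(math.ceil(size / BLOCK) * BLOCK for size in object_sizes)

# A single 500-byte object still costs one full 256 KB block:
assert on_disk_bytes([500]) == BLOCK

# 1000 tiny files written one per transaction vs. grouped into
# aggregates of 100 files (roughly what raising TXNGROUPMAX achieves):
one_per_txn = on_disk_bytes([500] * 1000)      # 1000 blocks
aggregated  = on_disk_bytes([500 * 100] * 10)  # 10 blocks
print(one_per_txn // aggregated)
```

This is why a "full" volume can report an estimated capacity far below MAXCAPacity: most of each block is padding, not data.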

Solution
Increasing the aggregate size may help reduce this overhead. This can be 
done by modifying the value of TXNGROUPMAX, either on the TSM server or for 
this particular node (register/update node). A larger aggregate size means 
less space is wasted on overhead. The client option TXNBYTELIMIT may also 
need to be adjusted.

NOTE: Increasing the TXNGROUPMAX value will result in increased recovery 
log utilization. Higher recovery log utilization may increase the risk 
of running out of log space. Evaluate each node's performance before 
changing this parameter. For more information on managing the recovery 
log, see the Administrator's Guide.
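As a concrete (hypothetical) illustration of the technote's advice — the node name and values below are placeholders, not recommendations:

```
/* On the TSM server: raise the transaction group size for one node */
update node MYNODE txngroupmax=4096

/* On the client, in dsm.sys (dsm.opt on Windows): raise the matching
   per-transaction byte limit (value is in KB) */
TXNBYTELIMIT 25600
```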

The results from the query volume may vary by environment. Also check 
the OS to see if the filesize of the volume is equal to the MAXCAPacity. 
If it is not, then APAR IC44597 is also affecting the environment.


On 01/03/2012 02:58 PM, Allen S. Rout wrote:
> So, I've got a passel of document-imaging data whose primary storage is
> on FILE. As you might imagine, it's amazingly static, and as a
> consequence gets little attention. But it just got some, and I'm
> confused by it.
>
> I recently set about moving my TSM infrastructure from the Old Storage
> (emc cx320) to the New Storage (emc VNX). I built new volume groups
> and filesystems, updated my management class to only write to the new
> filesystems, and started a long series of MOVE DATA macros.
>
> As I watched these go by, I was interested by the fact that the move
> processes were reporting less than 5G moved, sometimes substantially. I
> figured I'd neglected to set reclaim thresholds on this data, d'oh. So
> I was getting an opportunity to reclaim the whole dang infrastructure.
> how nice! I started counting space savings before it was hatched.
>
>
> So now the move is done, and I've got my 4.mumble TB of data all moved
> to brand new 5G volumes, none of which existed a month ago, all of which
> were populated by
>
> MOVE DATA [foo] reconst=yes
>
> and none of which have any reclaimable space
>
> tsm: VI>select pct_utilized,pct_reclaim,count(*) from volumes where
> devclass_name='MED FILE' group by pct_utilized,pct_reclaim
>
> PCT_UTILIZED PCT_RECLAIM Unnamed[3]
>  --- ---
> 6.1 0.0 1
> 100.0 0.0 848
>
>
> but the ESTCAP of these volumes is pretty far off 5G. Here's a
> histogram-like list of them, counted by GB of ESTCAP.
>
>
> 1 1.3
> 1 1.5
> 2 1.6
> 4 1.7
> 8 1.8
> 16 1.9
> 20 2.0
> 39 2.1
> 37 2.2
> 44 2.3
> 35 2.4
> 53 2.5
> 51 2.6
> 52 2.7
> 40 2.8
> 60 2.9
> 56 3.0
> 44 3.1
> 46 3.2
> 61 3.3
> 152 3.4
> 14 3.5
> 6 3.6
> 3 3.7
> 1 3.9
> 2 4.0
> 1 5.0
>
>
> So, what am I missing? I feel like a bozo, but I can't come up with a
> reason a full 5G volume would have an ESTCAP of 1.3G with no reclaimable
> space.
>
>
> - Allen S. Rout


Node replication DB sizing

2011-12-19 Thread John Monahan
I was wondering if anyone has any experience with the new node replication and 
how it affects the TSM database size, both on the source and target sides. I'm 
unable to find any type of sizing guidance for the DB side of things.

Thanks


Re: Anna Hazare Coming to Freedom Park on 17th Dec 2011(Sat)- To Save Lokayukta and to Deman Jan Lokpal Bill

2011-12-12 Thread john d
Why is this posted?



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Yuvaraja Shivarama
Sent: Monday, December 12, 2011 12:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Anna Hazare Coming to Freedom Park on 17th Dec 2011(Sat)-
To Save Lokayukta and to Deman Jan Lokpal Bill

Hi Friends,

As Anna Hazare coming to Bangalore(FREEDOM PARK- 1:30 PM) on 17th Dec
2011(Sat), we are requesting all your presence to make this movement
successful.

So please bring your family and friends as much as possible to show our
strength to the government, because the central government is now watching each
and every step taken by the Anna Hazare team, and the support it is getting.
This is the right time for all of us to fight against corruption and to
eradicate it from the root.

Rally Starting Points for Campaign:

Starting Point                    Contact Numbers:
Udupi Garden, BTM Layout          9620438368, 8904525457
RV Dental, JP nagar               8880467170, 9980488477
Central Mall, Bellandur           9742580616, 9886498278
KR Park, Basavanagudi             9740700449, 8880467170
MarathaHalli Bridge               9036000152, 9916661152
Rly Station, KR Puram             944870, 9986043315
Gaurda Mall, Magrath Road         9035307966, 9538882692
New Town B. Stand, YLHNKA         9964645444, 8884722172
Rly Station, Yeshwanthpur         9341636022, 8892764215
Shankar Mutt, Basaveshwarnagar    8892721548, 9741908666
Anti Corruption Public rallies will start from nodal points on the 17th Dec.
Call now to participate.

Program Schedule:
1:30 PM: Event Start (FREEDOM PARK, Bangalore)
2:30 PM: Address by Shri Anna Hazare and Team
5:30 PM: Close

please copy and distribute this to your friends.


Thanks
Yuvaraj


Re: Backing up Centera?

2011-12-03 Thread John D. Schneider
Greetings,
No disrespect intended, but you are missing the fundamental
rationale behind the Centera.  It is an archiving platform, not a
primary storage platform.  It is intended to be the last platform the
data ever resides on.  It has replication capabilities, so you can keep
another copy off-site on another Centera for redundancy purposes, but
since the data on it should be archive data, you should not be backing
it up, and you should not need to.  You don't need to back up an
archive, since it is static data that is no longer being used for
production processing.  
That also means that you should not be sending data to it that you
intend to use for primary production applications.  It is a good
platform for email archive, or application data that is no longer being
used on a daily basis, but might be needed at a later date for legal
discovery or other reasons.  
EMC has had the Centera platform around for many years, and they always try 
to steer people away from using the Centera for purposes outside of its 
intended design.

Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721

 
 
 Original Message 
Subject: Re: [ADSM-L] Backing up Centera?
From: john d 
Date: Fri, December 02, 2011 10:42 am
To: ADSM-L@VM.MARIST.EDU

NDMP might be an option. Hopefully, you are running TSM 6.2



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Huebner,Andy,FORT WORTH,IT
Sent: Friday, December 02, 2011 11:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backing up Centera?

Seven10 software, the Altus product.

Andy Huebner


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Sachin Chaudhari
Sent: Thursday, December 01, 2011 10:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backing up Centera?






Hello,

How to take EMC centera backup? One way of doing it through additional
EMC
Module CBRM with TSM. But CBRM was end of life product,

Pl suggest any alternate way of doing Centera backup on TSM-TAPE without
CBRM.?
Regards,
Sachin C.

This e-mail (including any attachments) is confidential and may be
legally
privileged. If you are not an intended recipient or an authorized
representative of an intended recipient, you are prohibited from using,
copying or distributing the information in this e-mail or its
attachments.
If you have received this e-mail in error, please notify the sender
immediately by return e-mail and delete all copies of this message and
any
attachments.

Thank you.



Re: Backing up Centera?

2011-12-02 Thread john d
NDMP  might be an option.   Hopefully, you are running TSM 6.2



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Huebner,Andy,FORT WORTH,IT
Sent: Friday, December 02, 2011 11:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backing up Centera?

Seven10 software, the Altus product.

Andy Huebner


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Sachin Chaudhari
Sent: Thursday, December 01, 2011 10:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backing up Centera?






Hello,

How to take EMC centera backup?  One way of doing it through additional EMC
Module CBRM with TSM. But CBRM was end of life product,

Pl suggest any alternate way of doing Centera backup on TSM-TAPE without
CBRM.?
Regards,
Sachin C.



Re: Ang: multi-threaded Exchange backups

2011-11-28 Thread john d
Me too. I would be interested.

  

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Robert 
Ouzen
Sent: Monday, November 28, 2011 9:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Ang: multi-threaded Exchange backups

Hi Steve

I will be too interested about this script if you pleased

Regards Robert
rou...@univ.haifa.ac.il


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Monday, November 28, 2011 4:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: multi-threaded Exchange backups

Hi

I wouldn't mind getting a copy of that script; I'm not much of a fan of multiple 
nodes / schedules either.

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-"ADSM: Dist Stor Manager"  skrev: -
Till: ADSM-L@VM.MARIST.EDU
Från: "Schaub, Steve" 
Sänt av: "ADSM: Dist Stor Manager" 
Datum: 11/28/2011 15:07
Ärende: multi-threaded Exchange backups

In case anyone is interested, I modified our Powershell script for full backup 
of our Exchange databases so that it now backs up multiple concurrent Storage 
Groups in a single script (because I'm too darn lazy to deal with multiple node 
names, schedules, etc).

In our case, it dropped elapsed times from 36hr to 11hr, running 4 concurrent 
threads.

If anyone would like a copy, just contact me.

Steve Schaub
Systems Engineer II, Windows Backup/Recovery BlueCross BlueShield of Tennessee 
steve_schaub "at" bcbst.com

-
Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: TSM 5.5 actlog size..

2011-11-18 Thread Underdown,John William
i copy records daily to a MySQL database, where i keep a year's worth of 
data. i can then keep the retention period low on the TSM database.

MySQL is running on a salvaged box from the trash pile (it actually took 
several boxes to make up one good one), running CentOS Linux. reporting 
is done using Perl and Apache, with no impact on the TSM server and zero 
cost.
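The post doesn't include the actual job, but the nightly copy could look roughly like this (database, table, credentials, and paths are all placeholders):

```
# Pull the last 24 hours of the activity log as CSV from the TSM server:
dsmadmc -id=reporter -password=secret -commadelimited -dataonly=yes \
  "select DATE_TIME, SEVERITY, MESSAGE from actlog \
   where DATE_TIME > current_timestamp - 24 hours" > /tmp/actlog.csv

# Append it to the long-retention MySQL table:
mysql tsm_history -e "LOAD DATA LOCAL INFILE '/tmp/actlog.csv' \
  INTO TABLE actlog FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'"
```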

On 11/17/2011 03:35 PM, Ajay Patel wrote:
> Hi,
>
> On one of  my tsm server Activity Log Size: 3,742 M  so it is very
> difficult to get information from actlog - response time for 24 hour is
> too high and if we query for more then 24 hr then not sure on output...
>
> Could you please suggest  the solution without touching retention period
> ?.. TSM server version - 5.5 .4.3 on AIX
>
>
> Regards,
> Ajay Patel



Migrating from AIX to Linux (again)

2011-11-16 Thread Dury, John C.
Our current environment looks like this:
We have a production TSM server that all of our clients backup to throughout 
the day. This server has 2 SL500 tape libraries attached via fiber. One is 
local and the other at a remote site which is connected by dark fiber. The 
backup data is sent to the remote SL500 library several times a day in an 
effort to keep them in sync.  The strategy is to bring up the TSM DR server at 
the remote site and have it do backups and recovers from the SL500 at that site 
in case of a DR scenario.

I've done a lot of reading in the past, and some just recently, on the possible 
ways to migrate from an AIX TSM server to a Linux TSM server. I understand that 
earlier versions of the TSM server (we are currently at 5.5.5.2) allowed you to 
back up the DB on one platform (AIX, for instance) and restore it on another 
platform (Linux, for instance), and if you were keeping the same library it 
would just work, but apparently that was removed from the TSM server code, 
presumably to prevent customers from moving to less expensive hardware. (Gee, 
thanks IBM!)
I posted several years ago about any possible ways to migrate the TSM Server 
from AIX to Linux.
The feasible solutions were as follows:

1.   Build a new Linux server with access to the same tape library, then 
export nodes from one server to the other, changing each node as it's exported 
to back up to the new TSM server instead. The old data on the old server can 
then be purged. A lengthy and time-consuming process, depending on the amount 
of data in your tape library.

2.   Build a new TSM Linux server and point all TSM clients to it, but keep 
the old TSM server around for restores for a specified period of time until it 
can be removed.

There may have been more options but those seemed the most reasonable given our 
environment. Our biggest problem with scenario 1 above is exporting the data 
that lives on the remote SL500 tape library would take much longer as the 
connection to that tape library is slower than the local library.  I can 
probably get some of our SLAs adjusted to not have to export all data and only 
the active data but that remains to be seen.

My question: has any of this changed with v6 TSM, or has anyone come up with a 
less painful and time-consuming way to do this? Hacking the DB so the platform 
check doesn't block restoring an AIX TSM DB on a Linux box? Anything?

Thanks again and sorry to revisit all of this again. Just hoping something has 
changed in the last few years.
John


Re: EXPORTING clients

2011-07-12 Thread John Monahan
FYI TSM 6.2.2.0 has been pulled from the FTP site because of so many problems. 
See here: 

ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/server/v6r2/AIX/6.2.2.0/README.txt

You can browse the list of issues fixed in the 6.2.2.X server patch levels here:

https://www-304.ibm.com/support/docview.wss?uid=swg21456937




___
John Monahan
Delivery Consultant
Logicalis, Inc. 
john.mona...@us.logicalis.com
www.us.logicalis.com

___
Business and technology working as one 

This e-mail message is intended only for the confidential and privileged use of 
the addressee (or others authorized to receive for the addressee). If you are 
not an intended recipient, please delete the e-mail and any attachments and 
notify the sender immediately


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Hughes, Timothy
Sent: Sunday, July 10, 2011 11:04 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: EXPORTING clients

AIX 6.1.0
TSM SERVER 6.2.2.0

Regards

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Sunday, July 10, 2011 11:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: EXPORTING clients

What V/R/M/OS are you running?

"Hughes, Timothy"  wrote:

Hi Zoltan,

Thanks... I will take a look at those on the IBM support site, I assume? I 
tried looking for something about this issue before but could not find 
anything. I will try again; I must have missed something.

Regards

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Saturday, July 09, 2011 11:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: EXPORTING clients

IIRC, there are some server patches related to exports causing problems.

"Hughes, Timothy"  wrote:

I think you may be correct; I don't see those input wait times most of the 
time, only once or twice since doing the exports. Still, our main issue seems 
to be that the suspend export command hangs and/or causes the destination TSM 
server to crash. Also, running more than one export sometimes seems to cause a 
hang, so we are currently running only one export (frustrating), because this 
is going to take forever doing these exports.

regards

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Friday, July 08, 2011 2:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: EXPORTING clients

Richard,

Here is something to think about:

What Timothy is doing is essentially tape-to-tape data movement for one node, 
with that node's data likely scattered to some degree across the input tape. 
This means that there will be some tape repositioning needed to read the input 
tape. If the output was to disk, it is likely that the input tape could stay 
pretty much in streaming mode. However, with the output being directed to tape, 
IMHO it is much more likely that between the input tape repositions and not 
being able to keep the output tape in streaming mode, the combination will 
likely result in an inordinate amount of backhitching. I don't think this is a 
hardware error, so much as a tape technology limitation. We get around this by 
using serial disk as an interim step, which allows the whole process to run 
more quickly (because there is less overall backhitching).

..Paul


At 08:34 AM 7/8/2011, Richard Sims wrote:
>On Jul 8, 2011, at 7:57 AM, Hughes, Timothy wrote:
>
>> Yes, We are doing Server-to-tape exports. I did notice a couple times long 
>> input tape mounts on the destination TSM SERVER one was around 6,000 seconds 
>> another around 14,000 seconds and another 17,000 seconds.
>
>Export/Import operations are, per the Admin Guide manual topic "Preemption of 
>client or server operations", high priority, which would pre-empt lower 
>priority operations in order to get a tape mount started. If prompt allocation 
>of tape drives is being reflected in your Activity Log, but then tape mount 
>and positioning is taking an inordinate amount of time, that would suggest 
>hardware issues with tapes or drives. If non-Export operations do not exhibit 
>such delays, then something else is going on, where analysis of the Activity 
>Log and operating system logs may reveal factors. If no cause is evident, 
>contacting TSM Support would be in order.
>
> Richard Sims


--
Paul Zarnowski Ph: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: EMERGENCY QUESTION: RESTORE ONLY PERMISSIONS?

2011-06-17 Thread john d
I do not think you are correct that TSM keeps only one version of permissions -
I believe it would keep them in accordance with the policy settings. Can you
show the link where you are getting this info from? Thanks

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Carlo Zanelli
Sent: Friday, June 17, 2011 3:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] EMERGENCY QUESTION: RESTORE ONLY PERMISSIONS?

Also with archive?

Carlo.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
David E Ehresman
Sent: venerdì 17 giugno 2011 21:03
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] EMERGENCY QUESTION: RESTORE ONLY PERMISSIONS?

And TSM only keeps one version of unix permissions.  So once a backup is
done with the changed permissions, there is no TSM way to restore them back
to what they were before.

David
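There is no TSM-native fix, but if a pre-change backup still exists, one workaround (not from the thread, sketch only) is to restore the tree to an alternate location and copy back just the metadata:

```python
import os
import stat

def copy_tree_metadata(reference_root, live_root):
    """Re-apply mode bits (and, when run as root, ownership) from a
    restored reference tree onto the matching live paths. Symlinks are
    skipped. Sketch only -- try it on a scratch directory first."""
    for dirpath, dirnames, filenames in os.walk(reference_root):
        rel = os.path.relpath(dirpath, reference_root)
        for name in dirnames + filenames:
            ref = os.path.join(dirpath, name)
            live = os.path.normpath(os.path.join(live_root, rel, name))
            if os.path.islink(ref) or not os.path.lexists(live):
                continue
            st = os.stat(ref)
            os.chmod(live, stat.S_IMODE(st.st_mode))
            try:
                os.chown(live, st.st_uid, st.st_gid)  # needs root
            except PermissionError:
                pass  # non-root runs can still fix the mode bits
```

The restore would go to the reference directory (e.g. `dsmc restore "/*" /ref/ -subdir=yes`), and the live filesystem is never overwritten with file data.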

>>> "Moyer, Joni M"  6/17/2011 1:45 PM >>>
Hi Everyone,

I was just contacted and told that something changed group ownership on
every file starting in / (root), so what I'm asking is, is there anyway to
just restore permissions only and not the actual data?  This is for a
Solaris TSM client.

Just let me know.

Thanks so much in advance!

Joni


This e-mail and any attachments to it are confidential and are intended
solely for use of the individual or entity to whom they are addressed.
If you have received this e-mail in error, please notify the sender
immediately and then delete it. If you are not the intended recipient, you
must not keep, use, disclose, copy or distribute this e-mail without the
author's prior permission. The views expressed in this e-mail message do not
necessarily represent the views of Highmark Inc., its subsidiaries, or
affiliates.

-   
Ai sensi del D.Lgs. 196/2003 si precisa che le informazioni contenute nel
messaggio e negli eventuali allegati sono riservate al/ai destinatario/i
indicato/i. Nel caso di erroneo recapito, si chiede cortesemente a chi legge
di dare immediata comunicazione al mittente e di cancellare il presente
messaggio e gli eventuali allegati. Si invita ad astenersi dall'effettuare:
inoltri, copie, distribuzioni e divulgazioni non autorizzate del presente
messaggio e degli eventuali allegati.

--   
According to Italian law (D.Lgs 196/2003) information contained in this
message and any attachment contained therein is addressed exclusively to the
intended recipient. If you have received this message in error would you
please inform immediately the sender and delete the message and its
attachments. You are also requested not to make copies, nor to forward the
message and its attachments or disclose their content unless authorised. 

DD & TSM

2011-06-16 Thread john d
Hi Tim,

We were in class together about five years ago in Middleboro, MA (More Group).

At present, I am an installer (still in training) of DD, integrating it with
TSM. Are you using VTL, NFS, or both? What model are you using?




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tim
Brown
Sent: Thursday, June 16, 2011 3:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] tsm and data domain

Anyone use EMC's Data Domain devices for storage pools and replication?

Would like to hear about positive and negative issues.



Thanks,



Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com <>
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




This message contains confidential information and is only for the intended
recipient. If the reader of this message is not the intended recipient, or
an employee or agent responsible for delivering this message to the intended
recipient, please notify the sender immediately by replying to this note and
deleting all copies and attachments.


Re: Select Help

2011-06-03 Thread john d
define script policy_check desc="check domain, mgmt, bu/ar_copy"

update script policy_check "set sqldisplay wide"

update script policy_check "select cast(DOMAIN_NAME as char(12))as 
dom,cast(SET_NAME as char(8))as setname,cast(DEFMGMTCLASS as char(16))as 
def_MGT,cast(DESCRIPTION as char (33))as desc from policysets where 
set_name='ACTIVE'"

update script policy_check "set sqldisplay wide"

update script policy_check "select cast(DOMAIN_NAME as char(12))as dom, 
cast(ACTIVATE_DATE as date)as act_date,  cast(DEFMGMTCLASS as char(19))as 
DEFMGT, NUM_NODES, cast(BACKRETENTION as char(7))as backret, cast(ARCHRETENTION 
as char(7))as archret, cast(ACTIVESTGPOOLS as char(10))as active_stg, 
cast(DESCRIPTION as char (33))as desc from domains"

update script policy_check "set sqldisplay wide"

update script policy_check "select cast(DOMAIN_NAME as char(12)) as dom, 
cast(SET_NAME as char(12))as set_name, CLASS_NAME, cast(DEFAULTMC as char(6))as 
def_mc,cast(MIGDESTINATION as char(22))as mig_dest,cast(DESCRIPTION as char 
(35))as desc from MGMTCLASSES where SET_NAME='ACTIVE'"

update script policy_check "set sqldisplay wide"

update script policy_check "select cast(DOMAIN_NAME as char(12))as 
dom,cast(SET_NAME as char(6))as setname,cast(CLASS_NAME as char(19))as 
class,cast(COPYGROUP_NAME as char(8))as copyg, cast(VEREXISTS as char(7))as 
vere, cast(VERDELETED as char(7)) as verd, cast(RETEXTRA as char(4))as 
rete,cast(RETONLY as char(4)) as reto, cast(SERIALIZATION as char(12))as 
serial, cast(DESTINATION as char(22))as destination, 'BU' as backup from 
BU_COPYGROUPS where set_name='ACTIVE'"

update script policy_check "set sqldisplay wide"

update script policy_check "select cast(DOMAIN_NAME as char(12))as dom, 
cast(SET_NAME as char(6))as setname,cast(CLASS_NAME as char(19))as 
class,cast(COPYGROUP_NAME as char(8))as copyg, cast(RETVER as char(5))as retv, 
cast(SERIALIZATION as char(12))as serial, cast(DESTINATION as char(22))as 
destination,'ARCHIVE'as archive from AR_COPYGROUPS where set_name='ACTIVE'"

update script policy_check "set sqldisplay wide"

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Mike 
De Gasperis
Sent: Friday, June 03, 2011 4:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Select Help

I've played around with this select before and I've never been able to get it 
perfected.

I'm looking for something that will tell me the information below for the 
active policy set and default management class

associations.node_name

client_schedules.schedule_name
client_schedules.startdate
client_schedules.starttime
client_schedules.period
client_schedules.perunits
client_schedules.dayofweek

bu_copygroups.domain_name
bu_copygroups.class_name
bu_copygroups.verexists
bu_copygroups.verdeleted
bu_copygroups.retextra
bu_copygroups.retonly

This is the closest I've gotten with a select, the problem is it returned bogus 
information for other domains. I know it's wrong I just have no idea how to 
correct it or if it's possible with the TSM SQL engine. If any of you SQL 
wizards have an idea or even an alternate better solution to get this 
information I'd greatly appreciate hearing what you have.

select associations.node_name, client_schedules.schedule_name, 
client_schedules.startdate, client_schedules.starttime, 
client_schedules.period, client_schedules.perunits, client_schedules.dayofweek, 
domains.domain_name, domains.defmgmtclass, bu_copygroups.verexists, 
bu_copygroups.verdeleted, bu_copygroups.retextra, bu_copygroups.retonly from 
client_schedules, domains, bu_copygroups, associations associations where 
client_schedules.schedule_name=associations.schedule_name and 
bu_copygroups.class_name=domains.defmgmtclass
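One hedged guess at the missing joins (untested against a live server, so TSM's SQL engine may still balk): tie each node to its domain through the NODES table, and pin the copy group to the ACTIVE policy set and the domain's default management class, so rows from other domains can no longer cross-match:

```sql
select associations.node_name, client_schedules.schedule_name,
       client_schedules.startdate, client_schedules.starttime,
       client_schedules.period, client_schedules.perunits,
       client_schedules.dayofweek, domains.domain_name,
       bu_copygroups.verexists, bu_copygroups.verdeleted,
       bu_copygroups.retextra, bu_copygroups.retonly
  from associations, client_schedules, nodes, domains, bu_copygroups
 where associations.node_name = nodes.node_name
   and associations.schedule_name = client_schedules.schedule_name
   and associations.domain_name = client_schedules.domain_name
   and nodes.domain_name = domains.domain_name
   and bu_copygroups.domain_name = domains.domain_name
   and bu_copygroups.set_name = 'ACTIVE'
   and bu_copygroups.class_name = domains.defmgmtclass
```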


Re: TSM & client machine renames

2011-06-02 Thread John Underdown
i used dsmadmc as a backend for a CGI script, using the perl modules DBD::TSM 
and DBI, but you can call dsmadmc directly. Access was controlled thru a clear 
text file that was in no way related to TSM access and was deleted after we 
were done. Timing was key but well documented for users to follow. Hope this 
helps.

#!/usr/bin/perl -w
use CGI qw(:standard);
use CGI::Carp qw(warningsToBrowser fatalsToBrowser);
use strict;
use DBI;

print header;
print start_html("Rename TSM Node");
print end_html;

my $nodename = uc(param('nodename'));
my $rename = uc(param('rename'));
my $userid = param('userid');
my $passwd = param('passwd');

# NOTE: the original open() line was mangled by the list archive; the
# path below is a placeholder for the lost credentials-file name.
my $autherr = 1;
open READ, "<", "/rename/passwd" or die $!;
while (<READ>) {
   $_ =~ /(\w+)\s(\w+)/;
   if ($1 eq $userid && $2 eq $passwd) {
      $autherr = 0;
   }
}

close READ;

my ($sec,$min,$hour,$day,$mon,$year,$wday,$yday,$isdst)=localtime;
$year+=1900;
$mon+=1;
my $str = "$mon/$day/$year $hour:$min:$sec $userid";

open OUT, ">/rename/logs/$mon$day$hour$min$sec-$userid.log";

if ($autherr){
   print "Wrong User ID and/or Password!\n";
   print OUT "$str Wrong User ID and/or Password!\n";
   exit;
}

my ($dbh1,$dbh2,$sth1,$sth2,$fs1,$fs2);

print "Renaming $nodename to $rename\n";
print OUT "$str Renaming $nodename to $rename\n";

$dbh1=DBI->connect("DBI:TSM:tsm-1","tsmadmin","tsmpw",
 {RaiseError => 0,
 PrintError => 0}) or die $DBI::errstr;

$dbh2=DBI->connect("DBI:TSM:tsm-1","tsmadmin","tsmpw",
 {RaiseError => 0,
 PrintError => 0}) or die $DBI::errstr;

$sth1=$dbh1->prepare ("q node $nodename")
   or die $dbh1->errstr;

if (!$sth1->execute()){
   print "Node $nodename Not Found!";
   print OUT "$str Node $nodename Not Found!\n";
   exit;
}

$sth1=$dbh1->prepare ("rename node $nodename $rename")
   or die $dbh1->errstr;

if ($sth1->execute()){
   print "Renamed node $nodename to $rename\n";
   print OUT "$str Renamed node $nodename to $rename\n";
}else{
   print "Failed to rename node $nodename to $rename\n";
   print OUT "$str Failed to rename node $nodename to $rename\n";
   exit;
}

$sth1=$dbh1->prepare ("update node $rename synovus1")
   or die $dbh1->errstr;

if ($sth1->execute()){
   print "Changed node $rename password to synovus1\n";
   print OUT "$str Changed node $rename password to synovus1\n";
}else{
   print "Failed to change node $rename password\n";
   print OUT "$str Failed to change node $rename password\n";
}

$sth1=$dbh1->prepare ("select FILESPACE_NAME from filespaces where 
node_name='$rename'")
   or die $dbh1->errstr;

$nodename = lc($nodename);
$rename = lc($rename);

if ($sth1->execute()){
   while (($fs2) = $sth1->fetchrow_array) {
  $fs1 = $fs2;
  if ($fs2 =~ s/$nodename/$rename/){
 $sth2=$dbh2->prepare ("rename filespace $rename $fs1 $fs2 namet=uni")
   or die $dbh2->errstr;
 if ($sth2->execute()){
print "Renamed filespace  $rename $fs1 $fs2\n";
print OUT "$str Renamed filespace  $rename $fs1 $fs2\n";
 }else{
print "Failed to rename filespace  $rename $fs1 $fs2\n";
print OUT "$str Failed to rename filespace  $rename $fs1 $fs2\n";
 }
  }
   }
}

$dbh1->disconnect;
$dbh2->disconnect;
close OUT;

From:   Paul Zarnowski 
To: ADSM-L@VM.MARIST.EDU
Date:   06/02/2011 10:59 AM
Subject:[ADSM-L] TSM & client machine renames
Sent by:"ADSM: Dist Stor Manager" 

Hello all,

We are contemplating a massive Active Directory domain reorganization which would
involve renaming hundreds of Windows machines that we back up into TSM.  We
foresee a few problems with this, and I am looking to see if any other TSM
sites have faced a similar problem and what they did to address it.

The problems:
1. Renaming a Windows system will result in TSM making fresh backups for
the volumes on that system (because the system name is part of the
filespace name).  Renaming the filespace on the TSM server will address
this, but timing is a problem.  If you rename the filespace a day early or
a day late, you will still end up with extra backups.

2. TSM likes to replace DOMAIN C: statements with DOMAIN \\systemname\C$.
If the systemname changes, then the TSM backup will fail, because it won't
be able to find the old systemname (unless and until the DOMAIN statement
is updated).  Again, with so many machines, updating all those DSM.OPT
files will be problematic.

3. If we have a large number of unintended extra backups, TSM server
resources (database size and stgpool capacity) will be stretched.


Having a tool that would allow our customers to rename their TSM
filespaces on-demand would be a big help.  As we do not give out policy
domain privileges, we cannot use dsmadmc to do this.  I am looking for
other solutions that any of you might have developed, or even just thought
about.  If the TSM BA client allowed a user to rename their filespace,
that would be a great solution.  But it's not there.

Thanks for any help (or condolences).

..Paul


--
Paul Zarnowski   

Re: Basic Fastback DR questions .. How is it used? Scenarios?

2011-05-12 Thread John Stephens
David:

Are you using the DR configuration through FTP, or to a TSM server storage pool?

To help with your general questions:

> Now... what can I do with this? I'd like some real-world uses for this
> solution. I'm not seeing any way to transfer the data from the DR
> repository back to the existing repository.

You would not do this.

> Isn't that the point?

Not really. The point is to provide a location to restore your data from in
the event the primary FB server is destroyed. This is snapshot technology,
mostly short-term protection; not losing your data in a disaster is the
goal. The general road map is to bring up a new FB server and start backing
up again. You would have the data generations on your DR server, and once
your new FB server had backed up that amount, all would be "synched up"
again.

> If my Corporate repository crashes I have a copy at a remote site. What
> are some scenarios of recovery?

Rebuild the FB server and begin backing up again. If you have saved the
policy and configuration files, you will not have to recreate anything;
just set up a repository.

You do not really need to run the FB Manager on the DR hub server; it is a
storage location that FB Mounter receives data from in place of the primary
FB server.





John Stephens | CSE |STORServer, Inc

T 719.266.8777 ext 7201| M 813.817.2526

john.steph...@storserver.com

www.STORServer.com

STORServer solves your data backup challenges.

Once and for all



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of dpsmith
Sent: Friday, May 06, 2011 4:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Basic Fastback DR questions .. How is it used? Scenerios?



Hello.



I'm having a conceptual problem with how the Fastback DR system is supposed to 
work, or how it's manageable.

And IBM's documentation is bad on this!



I've been running Fastback 5.5.5 for over a year now. I love it.

While planning to upgrade to the newest 64-bit version, I took a closer look at 
the DR component, because we have not been satisfied with Symantec Backup Exec 
as an off-site solution (tape).

Before upgrading I wanted to figure out how it's supposed to work.



So I setup a test server on my DMZ, installed Titan FTP and then Fastback DR 
server, then modified the ini like instructed.

Connected to it from my existing Fastback Server and successfully sent one of 
my smallest servers to the DR server.

So...everything looks good so far right?



What exactly transferred to the DR server?

I can see the whole snapshot log for the past year from my Corporate FB server 
on the CCS on my FB DR server.

If I use FB Mount on the DR server I can see not just today's backup like I 
expected but the last two weeks like I can on my existing FB server.

So it transferred everything for that particular server that I had been backing 
up?

Ok, that's still good, if so.



Eventually I'll have a 2nd "cold/warm?" site set up with the DR server and a 
repository that matches my main corporate site's repository.

Every night they will sync up.

I'll have a good copy of my Repository at an off site location.



Now.what can I do with this?

I'd like some real world uses for this solution.

I'm not seeing any way to transfer the data from the DR repository back to the 
existing repository.

Isn't that the point?

If my Corporate repository crashes I have a copy at a remote site.

What are some scenarios of recovery?

How do people use this feature?



Some things I can't use.

I can't use the FastBack Manager on my DR server.

It keeps giving me the "Please verify the IP..."

Now it's probably because there is no "Fastback Server" service in Windows 
Services.

Just the Hub and the Mount services.

Is this something I need to install?

Why?

What will it enable me to do?

I just installed the DR server initially.



Is there any use for having the Fastback server installed on the DR server?

Again, this goes back to my not understanding how it's supposed to work.





Let me toss out this scenario I have in my head of how it's supposed to work.

Site 1 and Site 2 are replicating (I know that's probably not the right word).

Site 1 repository is destroyed.

What do I do?

Will I have to transfer the whole 6TB from Site 2's Repository back to Site 1's 
when it's back up and running (say with a blank repository)?



Lots of questions I know but I'd really like to lose Symantec if I can prove 
this is a worthy replacement.



Thanks



David



|This was sent by dpsm...@princesshouse.com via Backup Central.


unsubscribe

2011-04-26 Thread Bortscheller, John
John Bortscheller


Re: nightmares with a STK SL500 tape library,

2011-04-07 Thread Dury, John C.
I'll try to answer your questions, but there were a lot! The drives in both 
libraries are IBM LTO4 drives. The two libraries currently run different library 
code versions, although the faulty library has been upgraded several times as new 
versions became available, and the problem still occurred. Both libraries are 
connected to the same TSM server, each on a different HBA. I've tried many 
different zoning options and they solved nothing. The LTO4 drives have also been 
upgraded to newer firmware levels as they became available, again solving 
nothing. I've looked in the SAN switch and in TSM and see nothing explaining why 
the robot is going offline. I've sent several logs from the robot itself, and 
STK/Sun/Oracle support says they see no errors except that the robot has gone 
offline. When visiting, they said the robot seems to vibrate/shimmy sometimes, 
which is why they think it is going offline. They also told me there was a known 
issue in previous library firmware versions where, if the management Ethernet 
cable was plugged in, surges from the network switch could cause the robot to go 
offline, but this was supposedly fixed in the version we are running. Currently 
we have the Ethernet unplugged (remember, data goes across fiber only) to see if 
this solves anything. Unfortunately it may be a month or longer before we even 
know. Absurd! Hopefully that answered some or all of your questions.
Thanks for responding, btw.


nightmares with a STK SL500 tape library

2011-04-05 Thread Dury, John C.
We purchased an STK SL500 tape library with 4 LTO4 drives a few years ago, and we 
have had nothing but problems with it, almost from the beginning. It is fully 
loaded with LTO4 cartridges (about 160) and seems to randomly crash, taking all 
of the drives offline to TSM. We also have a second SL500 at a remote site, 
connected to the same TSM server, which has no problems at all. The remote SL500 
holds copies (backup stgpool) of the local SL500. We've gone round and round with 
STK/Oracle support; they have actually come onsite and physically replaced the 
entire robot and all of its parts, several times, and they can never find a 
reason why it goes offline. Keep in mind this has been happening about once a 
month or so for over a year.

My question to all of you is not so much what could be wrong (although if you 
have ideas, that would be great too): we are considering a new robot and hope to 
reuse our existing LTO4 tapes. Right now the library has about 80 scratch tapes, 
so if we went to a second library, I should be able to have both defined to TSM, 
put some of the scratches in the new library, label/initialize them, and move the 
data from one library to the other until all data is in the new one; then I can 
light the old one on fire (j/k)!
Like most IT departments we are severely budget constrained, so we would like to 
reuse the tape drives and cartridges and only purchase a robot that can handle 
160 slots or so. Any suggestions whether this is even an option, or which robots 
and/or models to look at? Remember, there is very little budget for this, if I 
could even get it approved at all, but we really don't know what else to do with 
the bad SL500 at this point, and we have a project coming up that will 
significantly increase the amount and flow of data to our TSM system within the 
next few years.
Help!
John


Re: TS3500 Library with TS1120 drives configuration

2011-03-21 Thread John D. Schneider
Mario,
   You don't mention whether you are using the IBMTape device drivers or
not.  Since this is an IBM library and IBM drives, you should be using
the IBMTape device drivers.  That is why the devices say GENERICTAPE
instead of 3592.  Go to www.ibm.com/storage and find the web site for
the IBM TS3500 library, and pull down the drivers it provides.
   Before you install the drivers, take a look at your zoning.  From the
description you gave, the library and the two tape drives are only going
to be visible from one FC path?  If so, then there should be one zone
for each tape drive, and no more.  The 3592 tape drives have two FC
ports each, but there is not much point in zoning separate paths through
each side of the tape drive if they all go to one FC adapter.
   The library can define a library control path for each tape drive if
you want.  In other words, in the TS3500 library itself, you can
configure Tape0 as a library control path, and also Tape1.  When you
configure the library devices, you will see two library devices appear. 
This is normal.  When you define it to TSM, just pick the first one to
be the device for the library manager.  If that device is ever
unavailable, the driver will automatically try to talk to the library
using the alternate path.  That is built into the driver.

   Once you have verified it is zoned correctly, delete the library and
all the drives from the server, then follow the instructions that come
with the IBMTape driver to install it.  It will configure the library
device and all the drives.  It looks like you have used the correct
device names to configure the library path and drives.  Once you get the
driver issue straightened out, it should work. 


Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721



 Original Message 
Subject: [ADSM-L] TS3500 Library with TS1120 drives configuration
From: Mario Behring 
Date: Mon, March 21, 2011 11:12 pm
To: ADSM-L@VM.MARIST.EDU

Hi all,

I have a TS3500 tape Library with 2 drives TS1120...how should I define
it to
TSM? TSM Server is 6.2.2 running on Windows 2008.


Here is what Windows and the TSM Management Console actually "see", and what
tsmdlst reports:

 * 4 \\.\Changer0 devices
 * 2 \\.\Tape0 devices
 * 2 \\.\Tape1 devices
Tivoli Storage Manager -- Device List Utility

Licensed Materials - Property of IBM

5697-TSM (C) Copyright IBM Corporation 2000, 2005. All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

Computer Name: HOSTNAME
OS Version: 6.1
OS Build #: 7601
TSM Device Driver: TSMScsi - Not Running

One HBA was detected.

Manufacturer        Model    Driver   Version     Firmware  Description
------------------  -------  -------  ----------  --------  -----------------------------------------------
Emulex Corporation  42C2069  elxstor  7.2.30.018  2.82A3    IBM 42C2069 4Gb 1-Port PCIe FC HBA for System x

TSM Name   ID  LUN  Bus  Port  SSN           WWN               TSM Type     Driver  Device Identifier
---------  --  ---  ---  ----  ------------  ----------------  -----------  ------  -----------------
mt0.0.0.4   0   0    0    4    07841159      500507630F559506  GENERICTAPE  NATIVE  IBM 03592E05 1EC7
lb0.1.0.4   0   1    0    4    000A18580402  500507630F559506  LIBRARY      IBM     IBM 03584L22 7363
mt1.0.0.4   1   0    0    4    07841088      500507630F559507  GENERICTAPE  NATIVE  IBM 03592E05 1EC7
lb1.1.0.4   1   1    0    4    000A18580402  500507630F559507  LIBRARY      IBM     IBM 03584L22 7363
mt2.0.0.4   2   0    0    4    07841088      500507630F959507  GENERICTAPE  NATIVE  IBM 03592E05 1EC7
lb2.1.0.4   2   1    0    4    000A18580402  500507630F959507  LIBRARY      IBM     IBM 03584L22 7363
mt3.0.0.4   3   0    0    4    07841159      500507630F959506  GENERICTAPE  NATIVE  IBM 03592E05 1EC7
lb3.1.0.4   3   1    0    4    000A18580402  500507630F959506  LIBRARY      IBM     IBM 03584L22 7363

Completed in: 0 days, 0 hours, 0 minutes, 0 seconds.

Why does the tsmdlst utility identify 4 libraries and 4 drives? The WWNs and
Device Identifiers show that there is really only 1 library, but how do I
configure it in TSM?


Here is what I've defined at TSM:

 * Device: Type = 3592 and Format = 3592-2C
 * Library: Type = SCSI, Device=\\.\Changer0
 * Drives: Device=\\.\Tape0 and Device=\\.\Tape1


All created PATHs are online... but I can't manage to check in any tape.

Any help is appreciated.

Mario


Re: undocumented ANR

2011-03-14 Thread John Monahan
DBDIAGLOGSIZE is a server option that defaults to 1024MB.  Try putting it in 
your server options file with a lower size.
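For example, a line like the following in the server options file (dsmserv.opt); the 128 MB value here is purely illustrative, and the minimum your server level accepts is something to verify:

```
* dsmserv.opt -- cap the DB2 diagnostic log (value shown is illustrative)
DBDIAGLOGSIZE 128
```

The server reads this option at startup, so it needs a restart (or, in this thread's scenario, to be in place before dsmserv format) to take effect.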

___
John Monahan
Delivery Consultant
Logicalis, Inc. 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Remco 
Post
Sent: Monday, March 14, 2011 8:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: undocumented ANR

Hi All,

the more time I spend engineering with TSM version 6, the more unpleasant
surprises I get.

To prevent one TSM server from filling up the home file system and crashing all
TSM servers on that box, we had decided to create a separate home filesystem
for each TSM server on a box. The installation guide told us that each DB2
instance requires about 450 MB in the home filesystem, so we allocated space
for that. Turns out that IBM forgot to print a 1, because during dsmserv format
I got:

"ANR1547W The server failed to update the DBDIAGLOGSIZE server option due to
insufficient available space.  Required space: 1024 megabytes;  available
space: 174 megabytes. The current value: 2 megabytes."

I never asked for a diag log of 1 GB! How do I tell TSM that really, 174 MB is
more than enough? I really hope that nobody actually needs 1 GB of diag log...
The log should roll over after a few days, right?


--
Met vriendelijke groeten,

Remco Post, PLCS


Re: 1024 DB2 database connection limit on AIX

2011-03-08 Thread John Monahan
Thanks for the info.  What is/was your maxsessions server setting?

___
John Monahan
Delivery Consultant
Logicalis, Inc. 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
deehr...@louisville.edu
Sent: Tuesday, March 08, 2011 4:58 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 1024 DB2 database connection limit on AIX

I hit this problem the first night of backups after converting to V6.2.2.0 from 
v5.5.  I don't know but I'd guess the determining factor would be number of 
nodes backing up.  We have between 400 -500 nodes backing up.  They back up 
between 3-5 TB a night.  My v6 db is about 140 GB (used).

David Ehresman

>>> John Monahan  3/8/2011 11:11 AM >>>
http://www-01.ibm.com/support/docview.wss?tcss=Newsletter&uid=swg21428557


I'm wondering if anyone on the list has run into this issue or if someone from 
IBM can clarify or give examples to quantify slightly better than "servers with 
large workloads".  How large a DB?  How many clients?  How much backed up 
nightly?  At least give something in the ballpark so TSM admins can make an 
educated decision if they need to consider this problem upfront.  Putting in a 
large server and then crossing your fingers that you don't reach the limit 
isn't really a good customer support strategy.

Thanks



___
John Monahan
Delivery Consultant
Logicalis, Inc.


1024 DB2 database connection limit on AIX

2011-03-08 Thread John Monahan
http://www-01.ibm.com/support/docview.wss?tcss=Newsletter&uid=swg21428557

I'm wondering if anyone on the list has run into this issue or if someone from 
IBM can clarify or give examples to quantify slightly better than "servers with 
large workloads".  How large a DB?  How many clients?  How much backed up 
nightly?  At least give something in the ballpark so TSM admins can make an 
educated decision if they need to consider this problem upfront.  Putting in a 
large server and then crossing your fingers that you don't reach the limit 
isn't really a good customer support strategy.

Thanks



___
John Monahan
Delivery Consultant
Logicalis, Inc.


Re: Fw: Estimating amount of data involved in a restore

2011-01-29 Thread John D. Schneider
Since you said you are going to restore an entire filespace, you can
issue the "query filespace" command to find out how large the filesystem
is, and the percent full, which will tell you the amount of data to be
restored.  
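As a rough worked example (the numbers are hypothetical), the Capacity and Pct Util columns from "query filespace" multiply out to an estimate of the data to be restored:

```python
# Hypothetical "query filespace" output for a Windows D: drive.
capacity_mb = 409_600   # filespace Capacity (MB) as reported by TSM
pct_util = 62.5         # Pct Util: percent of the filesystem in use

# Estimated amount of data a full-filespace restore would bring back, in GB.
estimate_gb = capacity_mb * (pct_util / 100) / 1024
assert round(estimate_gb, 1) == 250.0
```

Note this estimates the live data on the client at backup time; inactive versions kept on the server are not part of a plain (active-version) restore.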

Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721



 Original Message 
Subject: [ADSM-L] Fw: Estimating amount of data involved in a restore
From: Pete Tanenhaus 
Date: Fri, January 28, 2011 6:25 pm
To: ADSM-L@VM.MARIST.EDU

The following command should give you the information you are looking
for:

dsmc query backup d:\*.* -subdir=yes -querydetail

The summary output displayed after the list of backed-up files contains
information pertaining to the aggregate amount of data, the total number
of files and directories, etc.

Hope this helps 



Pete Tanenhaus
Tivoli Storage Manager Client Development
email: tanen...@us.ibm.com
tieline: 320.8778, external: 607.754.4213

"Those who refuse to challenge authority are condemned to conform to it"


-- Forwarded by Pete Tanenhaus/San Jose/IBM on
01/28/2011 07:18 PM ---
Please respond to "ADSM: Dist Stor Manager" 
Sent by: "ADSM: Dist Stor Manager" 
To: ADSM-L@vm.marist.edu
cc:
Subject: Estimating amount of data involved in a restore


Does anyone know how to get information out of TSM, prior to performing a
restore (for instance a restore of an entire filespace on a node, e.g. the
D: drive on a Windows server), that would tell you how much data there is
to be restored?

I've tried the Export Node command with Preview=Yes, and it appears to do
this fairly well, but it can run for a long time, so I'm trying to find a
better or quicker way.



Thanks for any help.


Re: Anyone using ILMT?

2011-01-27 Thread John D. Schneider
Zoltan,
   You don't get specific about what your problem is, but if you can't
find it in the manual, you can call support and get help for it.  
   Or you can ask again and include the details of your configuration
and what you want to know.  Since ILMT is being used frequently by TSM
customers, I don't think asking about it in this forum is completely off
topic.  


Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721



 Original Message 
Subject: [ADSM-L] Anyone using ILMT?
From: Zoltan Forray/AC/VCU 
Date: Thu, January 27, 2011 12:08 pm
To: ADSM-L@VM.MARIST.EDU

This may be the wrong place to ask this (if so... please tell me where
to go, if you know, of course ;)

Anyone have any experience using the IBM License Metric Tool?

I just got it up on a test server and am playing with it, and I have come
across a strange situation I am looking for help with.

Googling ILMT brings up all kinds of PPT files and user guides, but not
what I am looking for.
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Monitoring and Reporting through firewall

2011-01-20 Thread John Monahan
I'm wondering if anyone has experience with the new Monitoring and
Reporting accessing TSM servers that run the monitoring agent through a
firewall?  I'm checking the firewall section of the Tivoli Monitoring
redbook, but I'm not sure that info will translate exactly to TSM
monitoring and reporting.


Re: Configuring IBM 3584 library

2010-12-12 Thread John D. Schneider
I think Mark misread the original post.  Since this is an EDL emulating
an IBM 3584 library, you obviously don't have to update firmware on the
drives.  But you will want to make sure your CE has the latest software
installed on the EDL.

Since you are emulating an IBM library, be sure to install the IBM
drivers (the IBMTape drivers for Windows, or Atape drivers for Unix)
before you attach the tape drives and discover them.

And if I can offer a bit of advice, use the "advanced tape creation"
method on the EDL, and create your virtual tapes to be smaller than real
3592 cartridges.  Real 3592 cartridges hold 300 GB, but on a VTL you can
get better tape utilization by making them smaller, 50-100 GB in size.
That way, you can reclaim them sooner and reuse the space sooner.  And
if you ever get into a situation where you need to recover a lot of
filesystems or servers, you can get more restores going in parallel if
they are on smaller tapes.
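A rough illustration of the reclamation point (sizes hypothetical): reclaiming a volume means moving all of its remaining valid data, so at the same reclamation threshold a smaller virtual volume costs far less to reclaim:

```python
def reclaim_moved_gb(volume_gb, pct_valid):
    """GB of still-valid data a reclamation pass must copy off the volume."""
    return volume_gb * pct_valid / 100

# Same 50%-valid threshold, very different amounts of data to shuffle:
assert reclaim_moved_gb(300, 50) == 150.0  # full-size 3592-style virtual volume
assert reclaim_moved_gb(50, 50) == 25.0    # smaller virtual volume
```

The trade-off is more volumes (and more mounts) for the same capacity, which matters less on a VTL than on physical tape.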

Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721






 Original Message 
Subject: Re: [ADSM-L] Configuring IBM 3584 library
From: "Loy, Mark W" 
Date: Fri, December 10, 2010 12:32 pm
To: ADSM-L@VM.MARIST.EDU

Just a piece of advice, make sure that all of the firmware (both library
and drives) are up to date. We've had a fair amount of issues with our
3584 with 3592-E05 drives that firmware fixed and we were only one or
two versions behind the most recent. The same holds true, of course,
with the drivers as well.

--Mark


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Martha McConaghy
Sent: Friday, December 10, 2010 1:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Configuring IBM 3584 library

I need a little quick help. I'm trying to connect our TSM 5.5 server to
an EMC VTL. The EMC box is emulating an IBM 3584 library with IBM 3592
tape drives. I can't figure out how to define the library. Should it
be SCSI? Anyone have any clue what options I should use?

We've never used either before and I haven't had much luck searching the
manuals for info on it.

Martha McConaghy
Marist IT


DISREGARD: backup storage pool slow - find offending file

2010-11-03 Thread Dury, John C.
Ignore this. I got lucky: querying the contents of the volumes in use shows only 
one file. I was expecting many more, since these are LTO4 tapes and can hold 
lots of data.

From: Dury, John C.
Sent: Wednesday, November 03, 2010 4:37 PM
To: ADSM-L (ADSM-L@VM.MARIST.EDU)
Subject: backup storage pool slow - find offending file

We back up all of our nightly data to an SL500 LTO4 tape library, and then 
during the day that data is also copied to another SL500 LTO4 library at a 
remote location. The backup storage pool is taking a significantly long time to 
run, and it appears there is one file that takes an especially long time to copy 
from one library to the other: the backup storage pool uses 3 drives at once, 
and every single day two of the three processes finish while one runs much, much 
longer. How can I track down the file that is causing the backup storage pool 
process to run for such a long time? I'm hoping to narrow it down to a 
file/server to see if it can possibly be excluded or dealt with a different way.
Thanks.


  1   2   3   4   5   6   7   8   9   10   >