Re: Number of volumes per client

2019-07-11 Thread Ron Delaware
Hi Eric,
These should give you what you are looking for
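If it helps, the three queries below can be dropped into a text file and run as an administrative-client macro; a minimal sketch, where the admin ID, password, output file, and macro file name are all placeholders:

dsmadmc -id=admin -password=secret -outfile=node_volumes.txt macro volreport.mac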


echo ""

echo "* Node Activities - How many volumes each client has data on *"

select count(DISTINCT volume_name) as Num_volumes, node_name, stgpool_name from volumeusage group by node_name, stgpool_name order by 1 desc

echo ""
 


echo ""

echo "* Node Activities - List each volume and how many nodes are on tape *"

select volume_name, stgpool_name, count(distinct node_name) as Nodes from volumeusage group by volume_name, stgpool_name order by 3 desc

echo ""
 



echo ""

echo "* Node Activities - For each node, list avg MB per tape; change %TAPE% to match your specific tape pool *"

select vu.node_name, ao.total_mb, count(distinct vu.volume_name) as tapes, ao.total_mb/count(distinct vu.volume_name) as "AVG MB/tape" from volumeusage vu, auditocc ao where vu.stgpool_name like '%TAPE%' and vu.node_name=ao.node_name group by vu.node_name, ao.total_mb order by 4

echo ""
 



Ronald C. Delaware 
Senior Consultant 
IBM Corporation | System Lab Services
925-822-3977 (Office)
925-457-9221 (cell phone)
mailto:ron.delaw...@us.ibm.com 
  



From:   Marc Lanteigne 
To: ADSM-L@VM.MARIST.EDU
Date:   07/11/2019 07:15 AM
Subject:[EXTERNAL] Re: [ADSM-L] Number of volumes per client
Sent by:"ADSM: Dist Stor Manager" 



Hi Eric,

You could do something like this:

select node_name,count(distinct(volume_name)) from contents group by
node_name

-
Thanks,
Marc...
___
Marc Lanteigne
Spectrum Protect Specialist AVP / SRT
416.478.0233 | marclantei...@ca.ibm.com
Summer Hours:  Monday to Thursday, 6:00 to 16:30 Eastern

Latest Servermon for Spectrum Protect:
http://ibm.biz/IBMSpectrumProtectServermon

Performance Mustgather for Spectrum Protect:
http://ibm.biz/IBMSpectrumProtectPerformanceMustgather

Spectrum Protect Blueprint:
http://ibm.biz/IBMSpectrumProtectBlueprints


Follow me on: Twitter, developerWorks, LinkedIn



-Original Message-
From: Loon, Eric van (ITOP NS) - KLM 
Sent: Thursday, July 11, 2019 10:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXTERNAL] [ADSM-L] Number of volumes per client

Hi guys,

I'm looking for a SQL statement to count the number of volumes used per
client. Can anybody help me out here?
Thank you very much in advance!

Kind regards,
Eric van Loon
Air France/KLM Storage & Backup




Re: Tapes from LTO5 to LTO7

2017-08-07 Thread Ron Delaware
Robert,

You have many options, what is it that you are trying to accomplish? Move 
the data from LTO5 tape to LTO7 tape? Utilize a single library for all 
your tapes? (this option has limitations). If you could be a bit more 
specific on the goal, we could provide you with better advice.



Best Regards,

_

Ronald C. Delaware
Senior Consultant
IBM Corporation | System Lab Services
925-476-5315 (Office)
925-457-9221 (cell phone)
mailto:ron.delaw...@us.ibm.com
   




From:   "rou...@univ.haifa.ac.il" 
To: ADSM-L@VM.MARIST.EDU
Date:   08/07/2017 10:14 AM
Subject:[ADSM-L] Tapes from LTO5 to LTO7
Sent by:"ADSM: Dist Stor Manager" 



Hi to all

I have a TS3200 LTO5 library that has given me a lot of problems, so I acquired a brand new TS3200 LTO7 with 4 drives.

I wonder if I can connect the new library to the server (where the LTO5 library is attached) and do:


1.   Check out all my tapes from the LTO5 library

2.   Check in those tapes to the LTO7 library

3.   Do a MOVE DATA or MOVE NODEDATA, or update the storage pool with a next storage pool (LTO7)

Is this possible? Or are there other solutions?
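A rough sketch of the commands those steps map to, assuming hypothetical library names LTO5LIB/LTO7LIB, pool names LTO5POOL/LTO7POOL, and a placeholder volume name:

checkout libvolume LTO5LIB VOL001L5 checklabel=yes remove=bulk
checkin libvolume LTO7LIB search=bulk checklabel=barcode status=private
update stgpool LTO5POOL nextstgpool=LTO7POOL
move data VOL001L5 stgpool=LTO7POOL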

Best Regards

Robert







Re: Inactive active filesystem for vm in script

2017-06-27 Thread Ron Delaware
Robert,

As you can see, there are more than a few ways to get the info; I will add a couple more for your toolbox.  You can create a macro or a script to do the same thing. You would have to translate the VM wording into TSM terms and use a select statement like Neil did.  The translation would be as such:

vm bodev -ina = node_name='BODEV' state='INACTIVE_VERSION'

The select statement would look like this

select node_name as node, filespace_name as drive, hl_name as directory, 
ll_name as filename, state, deactivate_date, class_name as mgmtclass from 
backups where node_name='BODEV' and type='FILE' and 
state='INACTIVE_VERSION'

the output would look like this

   NODE: BODEV
  DRIVE: \\bodev\c$
  DIRECTORY: \USERS\BODEV\DOWNLOADS\DOCUMENTS\
   FILENAME: 
PERSONALHEALTHRECORD_5241501C-A51F-4D05-8E9F-56A2A90B30B7.PDF
  STATE: INACTIVE_VERSION
DEACTIVATE_DATE:
  MGMTCLASS: DEFAULT

   NODE: BODEV
  DRIVE: \\bodev\n$
  DIRECTORY: \FILEHISTORY\BOB\FASTER\DATA\C\USERS\BOB\VIDEOS\
   FILENAME: BICEPBARBELLS (2016_04_18 03_30_08 UTC).MP4
  STATE: INACTIVE_VERSION
DEACTIVATE_DATE:
  MGMTCLASS: DEFAULT

You would then be able to save that select command to a script with the define script command within TSM; or, if you saved it to a file, you could run it as a macro or use the file to create the script.  You could even use substitution variables in place of hard-coding specific names for the node and state if you wanted to. As I stated, there are numerous ways to get the info you are looking for.
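For example, a minimal sketch of wrapping that query in a server script with a substitution variable for the node name (the script name LIST_INACTIVE is hypothetical):

define script LIST_INACTIVE "select node_name, filespace_name, hl_name, ll_name, state from backups where node_name='$1' and type='FILE' and state='INACTIVE_VERSION'" desc="List inactive file versions for a node"
run LIST_INACTIVE BODEV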

This is probably more information than you are looking for but it gives 
you an idea of what you could do if you wanted to.



Best Regards,

_

Ronald C. Delaware
Senior Consultant
IBM Corporation | System Lab Services
925-476-5315 (Office)
925-457-9221 (cell phone)
mailto:ron.delaw...@us.ibm.com
   




From:   "Schneider, Jim" 
To: ADSM-L@VM.MARIST.EDU
Date:   06/27/2017 11:01 AM
Subject:Re: [ADSM-L] Inactive active filesystem for vm in script
Sent by:"ADSM: Dist Stor Manager" 



An alternative can be run from the backup proxy server.
dsmc restore vm bodev -pick -inactive

Jim Schneider
Essendant

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Schofield, Neil (Storage & Middleware, Backup & Restore)
Sent: Tuesday, June 27, 2017 12:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Inactive active filesystem for vm in script

Classification: Public

Robert

I'm not sure this is the best answer to your question, but in the absence 
of any other replies I thought it might at least prompt somebody to come 
up with something better.

I'm assuming here that your hypervisor is vSphere and not Hyper-V? In 
which case empirical evidence from my systems suggests that for every 
backup of a snapshot of the VM called bodev, I get one bodev.ovf backup 
object.

I would normally shy away from running Selects against the Backups table, 
so use the following at your own risk. However to determine how many 
snapshots you've got stored for the VM called bodev, maybe try something 
like the following?

SELECT COUNT(*) FROM BACKUPS WHERE FILESPACE_NAME='\VMFULL-bodev' AND 
LL_NAME LIKE '%.ovf'

It looks like all versions of bodev.ovf will show as Active versions, but 
perhaps you can just assume the number of inactive backups is 1 less than 
the total?

Regards

Neil Schofield
IBM Spectrum Protect SME
Backup & Recovery | Storage & Middleware | Central Infrastructure Services 
| Infrastructure & Service Delivery | Group IT LLOYDS BANKING GROUP 





Re: Restore NDMP backups from Netapp 7 mode to CIFS shares ?

2017-03-22 Thread Ron Delaware
If you set up the web client to perform the restores, you can restore them to someplace other than the original location, provided that your NDMP backups are using what is called a 3-way setup of the NDMP environment. This setup has the NDMP backups being sent to a TSM/Spectrum Protect controlled storage pool, versus the NAS filer controlling the data and sending it directly to a fibre/SAN-attached tape/disk pool.




Best Regards,

_

Ronald C. Delaware
Senior Consultant
IBM Corporation | System Lab Services
925-457-5315 (Office)
925-457-9221 (cell phone)
mailto:ron.delaw...@us.ibm.com
   




From:   TSM ORT 
To: ADSM-L@VM.MARIST.EDU
Date:   03/21/2017 10:23 PM
Subject:[ADSM-L] Restore NDMP backups from Netapp 7 mode to CIFS 
shares ?
Sent by:"ADSM: Dist Stor Manager" 



We have been doing NDMP backups for several years; the source is NetApp 7-mode CIFS shares, and the purpose of the NDMP backups was long-term data retention. We will be keeping the NDMP backups, however the NetApp will be decommissioned soon. What we are wondering is whether it is possible to restore NDMP backups from the TSM server to non-NetApp targets such as a Windows share or a local disk. Our NDMP backups contain only CIFS share data, there are no LUNs, and they were created with a TOC.
Thanks a lot.







Re: ISP 81 Discontinued functions

2016-12-14 Thread Ron Delaware
To piggyback on Michael's statement, it has been IBM's position for over a decade to use Cristie as its bare metal restore solution and to put IBM's programming efforts towards other areas of the Tivoli Storage Manager (TSM), now IBM Spectrum Protect, product. Having worked with this product for 16+ years now, I believe that it was a good decision. The Cristie product has matured to the point that it integrates almost seamlessly into the IBM Spectrum Protect software while also being a stand-alone product. I have designed and delivered solutions where the TSM client was replaced with a Cristie client, and the results were amazing; customer expectations and satisfaction were exceeded.

Yes, there is the additional cost, but the capabilities and functionality, and the ease of use and administration of the solution, are worth it.



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Ryder, Michael S" 
To: ADSM-L@VM.MARIST.EDU
Date:   12/14/2016 10:35 AM
Subject:Re: [ADSM-L] ISP 81 Discontinued functions
Sent by:"ADSM: Dist Stor Manager" 



Just piping in here to lend support to TBMR - this has come in very
handily, especially for the critical workstations we backup.  Pop in the
boot CD, answer a few prompts, and then off it goes, restoring directly
from TSM/SP back to the local harddrive, along with the functionality to
adjust for differing hardware/chipsets and drive-configuration.

It is *totally awesome* not having to rely on some hand-scripted process,
and like any backup solution, cheap insurance.

When one of those critical workstations fails, we can consistently bring 
it
back to its identical state within 1-2 hours, with only 10-15 minutes
interaction needed on the front and back end of that process.

Cost of ~$500/client is well worth it, in relation to the total TSM/SP
equation and considering the costs of our downtime.  Of course that has to
be evaluated on a case-by-case basis.

Best regards,

Mike Ryder, x7942
RMD IT Client Services

On Wed, Dec 14, 2016 at 12:15 PM, Rick Adamson 
wrote:

> Whenever I have requested approval for Christie BMR the response was 
that
> it was way too expensive.
>
> Every time the debate comes up on TSM/SP versus other solutions someone
> always targets this shortcoming. It makes it difficult to argue TSM/SP 
as a
> world-class enterprise solution when your options to perform a bare 
metal
> recovery are making a sizable purchase from a third-party, or work thru 
an
> unnecessarily complicated process.
>
> Seeing as there's a defined process for the offline SS recovery, how
> difficult would it be for IBM to automate the function, have it prompt 
for
> drivers,  and add the creation to the GUI/CLI ??
>
> Epic fail 
>
> -Rick Adamson
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Tom Alverson
> Sent: Wednesday, December 14, 2016 10:03 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] ISP 81 Discontinued functions
>
> It looks like that product just gives you the features that are
> automatically included in the Symantec boot recovery disk.
>
> On Wed, Dec 14, 2016 at 9:39 AM, Del Hoobler  wrote:
>
> > For those that do need more, take a look at Cristie TBMR. IBM resells 
it:
> >
> >
> > http://www.cristie.com/recover/tbmr/
> >
> > From their doc:
> >
> > "TBMR allows you to perform a bare machine recovery of your system
> > direct from a TSM backup. Your critical systems are protected from the
> > consequences of physical damage, human error, or system failure. Users
> > can recover their protected servers to any point in time provided by
> > TSM as well as schedule simulated recoveries. TBMR is re-sold globally
> > by IBM and its channel partners as the recommended system recovery
> solution for TSM.
> > It’s currently available for Windows, Linux, Solaris and AIX operating
> > systems."
> >
> >
> > Thank you,
> >
> > Del
> >
> > 
> >
> > "ADSM: Dist Stor Manager"  wrote on 12/14/2016
> > 02:18:03 AM:
> >
> > > From: Stefan Folkerts 
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 12/14/2016 02:18 AM
> > > Subject: Re: ISP 81 

Re: TSM Migration Question

2016-09-21 Thread Ron Delaware
Ricky,

could you please send the output of the following commands:
1.   Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way that the stgpool would migrate back to itself would be 
if there was a loop, meaning your disk pool points to the tape pool as the 
next stgpool, and your tape pool points to the disk pool as the next 
stgpool. If your tape pool was to hit the high migration mark, it would 
start a migration to the disk pool



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal, the below is kind of odd but I'm not sure it 
has anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
> Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7

Re: Broken Drive

2016-09-21 Thread Ron Delaware
Fabio,

Your idea is not as crazy as you think.  TSM and Spectrum Protect have an 
option available that allows you to use disk as a reclamation area.  This 
is from the manual:

RECLAIMSTGpool
Specifies another primary storage pool as a target for reclaimed data from 
this storage pool. This parameter is optional. When the server reclaims 
volumes for the storage pool, the server moves unexpired data from the 
volumes that are being reclaimed to the storage pool named with this 
parameter. A reclaim storage pool is most useful for a storage pool that 
has only one drive in its library. When you specify this parameter, the 
server moves all data from reclaimed volumes to the reclaim storage pool 
regardless of the number of drives in the library. To move data from the 
reclaim storage pool back to the original storage pool, use the storage 
pool hierarchy. Specify the original storage pool as the next storage pool 
for the reclaim storage pool. 

You would need to create a new primary storage pool (device class FILE), then update your current tape storage pool's RECLAIMSTGpool parameter to point to that new pool, and you will be able to perform reclamations.  As an FYI, the data that goes into that reclamation stgpool will NOT automatically move back to your tape pool; you will have to do a MOVE DATA command, or set up the tape pool as the next stgpool when you create the new stgpool.
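A minimal command sketch of that setup, assuming hypothetical names (FILECLASS device class, RECLAIMFILE pool, TAPEPOOL as your existing tape pool) and a placeholder directory:

define devclass FILECLASS devtype=file maxcapacity=10G mountlimit=4 directory=/tsm/reclaim
define stgpool RECLAIMFILE FILECLASS maxscratch=50 nextstgpool=TAPEPOOL
update stgpool TAPEPOOL reclaimstgpool=RECLAIMFILE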

Hope this helps, you might think about purchasing more tape drives or 
start using your data domain appliances as a tape library.

Best Regards,


_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   Fábio Chicout 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:07 AM
Subject:[ADSM-L] Broken Drive
Sent by:"ADSM: Dist Stor Manager" 



Hi, there!

I've got a broken drive here and my tape library is working with only 1 drive (an IBM TS3200).
Since it is down to 1 drive, all reclamation is failing.

My team and I had a (very) crazy idea of using a VTL as a second drive to do (occasional, not scheduled) reclamation while we're working on fixing the tape library.

My questions:

- Is it possible?
- If so, what is the best way to achieve?

Att,
--


Re: Move Container question

2016-08-07 Thread Ron Delaware
Robert,

This is a housekeeping-type task; think of it like running a disk defragmentation.  This task takes place so that the admin task of reclamation is not needed.  All of that is performed by the code without configuration, and the same goes for your deduplication. Containers are dedup-enabled out of the box, and as data goes in, the code decides whether the data should be deduped or not; no configuration is needed, it is all done in the background.
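If you want a report limited to these automatic Move Container runs, one rough option is to query the activity log table directly; a sketch (the exact message-text filter is an assumption, adjust it to what your server logs):

select date_time, process, message from actlog where msgno=986 and message like '%Move Container%' and date_time>(current_timestamp - 3 days)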
_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   Robert Ouzen 
To: ADSM-L@VM.MARIST.EDU
Date:   08/06/2016 07:07 PM
Subject:[ADSM-L] Move Container question
Sent by:"ADSM: Dist Stor Manager" 



Hi to all

Can anybody give me more information about the process below for storage pools of type DIRECTORY?

ANR0986I Process 263 for Move Container (Automatic) running in the BACKGROUND processed 73,945 items for a total of 10,705,469,440 bytes with a completion state of SUCCESS at 03:23:06. (PROCESS: 263)

It runs automatically; where is the configuration for it?

Is there a way to create a script to get a report of these runs? When I tried the regular Q ACT begind=-3 s=ANR0986I, I also got Reclamation and Move Container and Identify Duplicates and so on.

Thanks, Robert







Re: Cleversafe onprem + deduplication with TSM

2016-04-29 Thread Ron Delaware
That is because the container (cloud or directory) manages deduplication. As the data is ingested, Spectrum Protect determines if the data is to be deduplicated. Inside the storage pool, you will see two types of containers: one holding deduplicated data and one holding non-deduplicated data.  To answer your question, yes, you can create a deduplicated IBM Cloud Object Storage System (ICOSS) (formerly Cleversafe) storage pool.  Please be aware of the differences between block storage pools and object storage pools. An object storage pool will give you the ability to manage exabytes of data efficiently, but there is a cost for that, which is performance. Object storage pools should be used for unstructured data backups that rarely need to be restored, and for archives of structured data.
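A minimal sketch of defining such a pool (the pool name, cloud type, URL, and credentials are all placeholders; check the DEFINE STGPOOL STGTYPE=CLOUD reference for the exact parameters your level supports):

define stgpool ICOSPOOL stgtype=cloud cloudtype=s3 cloudurl=https://icos.example.com identity=AccessKeyId password=SecretAccessKey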

 
Best Regards,


Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist - Expert
IBM Certification Exam Developer
IBM Certified Solution Advisor - Spectrum Storage Solutions v4




From:   TSM ORT 
To: ADSM-L@VM.MARIST.EDU
Date:   04/29/16 00:23
Subject:[ADSM-L] Cleversafe onprem + deduplication with TSM
Sent by:"ADSM: Dist Stor Manager" 



Hi
Can I create a deduplication enabled storage pool using Cleversafe cloud
using TSM 7.1.5. I can find that there are flags to enable / disable
encryption for on-premises however there are no flags to enable for 
disable
for dedduplication nor compression.
Thanks


Fw: [ADSM-L] Help on Directory Container

2016-03-22 Thread Ron Delaware
 Robert,

You run protect stgpool prior to replicate node. By doing the protect first, you can reduce the amount of traffic needed for the replicate node process. Also, without performing protect stgpool, you lose the ability to use REPAIR STGPOOL, as it depends on Protect Stgpool having been performed.

Protect Stgpool and node replication perform two different functions, but complement each other. Node replication will send stgpool data and catalog metadata.  To ensure the logical consistency of the metadata, it will lock the data it is replicating (not positive at what level).  After doing node replication, you can recover client data.

Protect stgpool moves the data as well, but does not lock as heavily, so it will have less of an impact on other operations.  Once it completes, you can recover stgpool data (which can happen automatically if Protect senses data corruption in the source pool); Protect Stgpool also gives you chunk-level recovery that hooks into REPAIR STGPOOL.
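As a rough sketch of that ordering in a daily maintenance macro (the pool name is a placeholder, and the node list is simplified to a wildcard):

/* Extent-level copy of the directory-container pool to the target server first */
protect stgpool DIRPOOL
/* Then replicate node metadata plus any data not already protected */
replicate node *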


Best Regards.

Ron Delaware





From:   Robert Ouzen <rou...@univ.haifa.ac.il>
To: ADSM-L@VM.MARIST.EDU
Date:   03/20/16 22:52
Subject:[ADSM-L] Help on Directory Container
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hi to all

I need a little bit more information about all the administrative processes in my TSM environment.

I have a TSM server 7.1.5 on Windows Server 2008 R2 64-bit.
I have mixed STGPOOLs, some of type FILE (with dedup on the server side) and some of type DIRECTORY.

For now my order of administrative tasks is:

1.   In the late afternoon, for a few hours, run IDENTIFY DUPLICATES

2.   In the morning at 07:00, run EXPIRE INVENTORY

3.   After it, run BACKUP DB

4.   After it, run RECLAIM STGPOOL

Now, because I created new STGpools of type DIRECTORY (inline dedup), I also created a TARGET server at version 7.1.5,
and updated my stgpools of type DIRECTORY with the option PROTECTstgpool=Targetstgpool (on the Target server).

Now the questions:

What will be the correct order in which to add the new tasks:

PROTECT STGPOOL
REPLICATE NODE

And what will be the correct command to repair a damaged container in the source STG?

For example, AUDIT CONTAINER (with which options?) and REPAIR CONTAINER?

T.I.A  Regards

Robert



Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Zip code: 3498838
Computing and Information Systems Division website: http://computing.haifa.ac.il








Re: Identifying empty nodes

2016-02-29 Thread Ron Delaware
EJ,

Just so we are on the same page about this, my understanding is that you want only the nodes that have ZERO data (backup, archive, or copy). Under normal circumstances, there would be two ways that could happen within TSM; if you are running Spectrum Protect (SP) (version 7.1.3 and above) there would be three ways, and they are:
1. The node never performed a backup
2. All of the node's filespaces have been deleted
3. The node was decommissioned using the SP decommission command

The only time a file within TSM/SP is deleted from the TSM/SP database is if it was modified or deleted; at that time your data retention settings kick in. With the SP decommission command, SP marks all of the files as inactive, thereby applying the assigned data retention settings to all data, which allows all data to be deleted; once that has completed, the decommissioned node is removed.

That said, you can try this:

audit lic (ensures that the latest totals are recorded for each node)
select node_name from auditocc where total_mb=0 

That command will validate there is no data (archive, backup, copy) for 
the node
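Another rough option, if you also want nodes that have never stored anything, is to list registered nodes with no occupancy rows at all (a sketch, not tested on every server level):

select node_name from nodes where node_name not in (select distinct node_name from occupancy)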

Best Regards,

Ronald C Delaware
Master IT Certified Open Group
IBM IT Plus Certified - level 2 Expert
IBM Solution Advisor - Spectrum Products v4
IBM Spectrum Protect Administrator/Implementor
IBM Certified Exam Developer
IBM Spectrum Scale 4.1 Certified
 
 



From:   "Loon, EJ van (ITOPT3) - KLM" 
To: ADSM-L@VM.MARIST.EDU
Date:   02/29/16 01:36
Subject:[ADSM-L] Identifying empty nodes
Sent by:"ADSM: Dist Stor Manager" 



Hi guys!
I'm trying to query the occupancy table to identify nodes with no more
data stored, but I can't find the proper syntax. I thought of this one:
select node_name from occupancy where physical_mb is null
But it returns no nodes, while I know for sure there are several of 
them...
Thanks for any help in advance!
Kind regards,
Eric van Loon
AF/KLM Storage Engineering





Re: All volumes in STGPOOL show as full. Please help.

2016-01-13 Thread Ron Delaware
John,

Do you have enough scratch volumes available in the dedup pool? Could you 
please provide output from these commands: q stg deduppool f=d and  q devc 
 f=d

 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



From:   "Dury, John C." 
To: ADSM-L@VM.MARIST.EDU
Date:   01/13/16 07:02
Subject:[ADSM-L] All volumes in STGPOOL show as full. Please help.
Sent by:"ADSM: Dist Stor Manager" 



We have 2 storage pools. A BACKUPPOOL and a DEDUP pool. All nightly 
backups come into the BACKUPPOOL and then migrate to the DEDUP pool for 
permanent storage. All volumes in the DEDUP pool are showing FULL although 
the pool is only 69% in use. I tried doing a move data on a volume in the 
DEDUP pool to the BACKUPPOOL just to free space so reclamation could run, 
and although it says it ran to successful completion, the volume still 
shows as FULL. So for whatever reason, all volumes in the DEDUP pool are 
never freeing up. I ran an audit on the same volume I tried to the MOVE 
DATA command on and it also ran to successful completion. No idea what is 
going on here but hopefully someone else has an idea. If our BACKUPPOOL 
fills up, we can't back anything up any more and we will have a 
catastrophe. The BACKUPPOOL is roughly 15T  and 15% full and I have no way 
to increase it.
Please reply directly to my email address and list as I am currently 
subscribed as digest only.
TSM Server is 6.3.5.300 on RHEL 5






Re: Question about replication 7.1.3.100

2015-11-16 Thread Ron Delaware
Robert,

I am still pretty much old school when it comes to managing structured vs. unstructured data, in that you keep them separate. But with Spectrum Protect (SP) 7.1.3 we need to start thinking differently.  In my experience with the product so far, it doesn't seem to make a difference, since everything stays in the container. SP containers determine what gets deduped and what doesn't as they perform in-line deduplication, and they do away with those time-consuming tasks of reclamation and migration.
I have multiple customers that are using a single container pool for all of their data, and I have not experienced any problems with backups or restores. I have not read anything that states you should separate the two.

This is my opinion based on my experience with the product, and may not be 
the official best practice of IBM.



 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



From:   Robert Ouzen 
To: ADSM-L@VM.MARIST.EDU
Date:   11/16/15 01:33
Subject:[ADSM-L] Question about replication 7.1.3.100
Sent by:"ADSM: Dist Stor Manager" 



Hi to all

I want to implement a replication server with TSM server 7.1.3.100 and to use the new STGpool feature "Directory Containers".

I want to replicate backups of Oracle DB, TDP for SAP, TDP for Exchange and regular files.

I am wondering whether it is better to create several STG pools on the replication server or only one big one.

Any experience or ideas will be really appreciated.

Source and Target servers are with:


· Windows 2008/2012 R2 64B

· TSM server version 7.1.3.100

· TSM client version   7.1.3.1.
Best Regards


Robert Ouzen
Haifa University






Clarification of statement concerning upgrading to Spectrum Protect 7.1.3

2015-10-20 Thread Ron Delaware
To All,
Yesterday I replied to a question concerning upgrading to the latest version of Spectrum Protect v7.1.3 (formerly known as Tivoli Storage Manager). In my attempt to be brief, my comment may have caused more confusion.
The patch that was applied was an efix for APAR IT11581; it is not a GA patch. In order to receive the efix patch, you need to have an open PMR, and the symptoms that you are experiencing must be present in the APAR.
After the efix is applied, running a 'show banner' command will display this:

The 7.1.3.200 deliverable displays the following banner at startup (it is also echoed using SHOW BANNER):
*
* This is a Limited Availability TEMPORARY fix for  *
* IT11581  REPLICATE NODE TO TARGET DIRECTORY-CONTAINER STORAGE *
*  POOL CAN HANG CAUSING HIGH TARGET SYSTEM CPU MEMORY  *
*  USAGE.   *
* It is based on fixes made generally   *
* available with patch level 7.1.3.000  *

You'll see that APAR IT11581 is the bug mentioned; its symptoms may not occur because it's a timing issue.  It can only occur when replicating from a non-container stgpool on a source replication server to a 7.1.3 target replication server hosting a container pool; the target server may exhibit high CPU. In fact, the high CPU on the target replication server is a non-preemptive loop that exhibits a hang symptom on the target server, preventing replication from completing.

The movement of data that I referred to, was indeed using the method where 
you have your primary (production) server perform a node replication to a 
secondary SP server that has the identical Domain structure as the primary 
(perform export policy), with the copygroup destination pointing to a 
container storage pool. 

Currently there is no 'move in' command available to move non-container data into a 7.1.3 container; our Development Team is working on delivering that functionality sometime in the 1st half of 2016 ("Subject to change at the discretion of IBM").

I hope that this clarifies any confusion I might have caused with my brief 
statement. 



 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



Re: Version 7.1.3

2015-10-19 Thread Ron Delaware
Stefan,

If you do not have the need to move non-container data into a container, 
then I would recommend performing the upgrade to SP v7.1.3.2. We found a 
small bug in the 7.1.3.0 and have patched the code. It only affected the 
movement of non-container stgpool data into a container stgpool.

The decommission function works and is much easier to perform via the Op 
Center than doing it manually.
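For the command-line route, a minimal sketch (the node name is a placeholder); DECOMMISSION NODE marks the node's data inactive so normal retention expires it, and QUERY PROCESS shows the background work:

decommission node DEVVM01
query process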

 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



From:   Stefan Folkerts 
To: ADSM-L@VM.MARIST.EDU
Date:   10/19/15 06:45
Subject:Re: [ADSM-L] Version 7.1.3
Sent by:"ADSM: Dist Stor Manager" 



The relatively new TSM admin at the site I am currently implementing
Spectrum Protect is keen on getting the decommission function (in the OC)
because they decommission at least 5 development VM's every week and don't
like deleting filespaces from the commandline on multiple servers.

I haven't heard of any major issues in 7.1.3.0, the download is still
up...can any of you think of a reason to hold back on the production
upgrade to 7.1.3.0?

Regards,
   Stefan


On Wed, Sep 30, 2015 at 7:05 PM, J. Pohlmann  wrote:

> Karel, your reference is to 6.1 - note the fix in 6.1.3 - it was 
supported
> but in many cases you did not get a great dedup percentage. My test was 
for
> a 7.1.3 environment with directory container storage pools.
>
> Virtual volumes are still very useful for TSM database backups and 
storing
> Recovery Plan Files at another TSM server. For client data protection, I
> will always prefer node replication.
>
> Regards,
> Joerg Pohlmann
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Karel Bos
> Sent: September 30, 2015 00:27
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Version 7.1.3
>
> Hi,
>
> Storing virtual volumes in a dedup pool is not supported. It was not
> restricted but 
>
> http://www-01.ibm.com/support/docview.wss?uid=swg1IC64970
>
> Kind regards,
>
> Karel
>
> 2015-09-23 20:44 GMT+02:00 J. Pohlmann :
>
> > So far only in a lab environment. Found that, when using directory
> > container storage pools, only node type=client data works. For my node
> > type=server, the attempt to store data in the directory container pool
> > failed. Sounds like a restrictions that you can't store virtual volume
> > data (the archive
> > objects) at the target server in a directory container pool. Storing
> > the archive objects for virtual volumes in a deduped file device class
> > storage pool works as before.
> >
> > Regards,
> > Joerg Pohlmann
> >
> > 09/22/2015 13:05:39  ANR0406I Session 111 started for node VVVISTA
> > (Windows)
> >   (Tcp/Ip W2K8-TSM63(49256)). (SESSION: 111)
> > 09/22/2015 13:05:50  ANR2776W Transaction failed for session 111 
for
> > node
> >   VVVISTA (Windows) - A storage pool for the
> target
> >   destination is associated with a container
> > or cloud
> >   storage pool. (SESSION: 111)
> > 09/22/2015 13:06:36  ANR0403I Session 111 ended for node VVVISTA
> > (Windows).
> >   (SESSION: 111)
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Robert Ouzen
> > Sent: September 23, 2015 03:59
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Version 7.1.3
> >
> > Hello All
> >
> > I wonder if people  , already upgrade to version 7.1.3 and have
> > comments , appreciation .. I understand that it's a new big concept as
> > STG container for inline dedup.
> >
> > Anyone use it ? and can share remarks !
> >
> > T.I.A Robert
> >
>






Re: going to all random disk pool for tsm for ve

2015-08-24 Thread Ron Delaware
You treat the filepool as if it were tape. You don't want hundreds of nodes' data on a tape cartridge because it causes contention and a massive number of tape mounts.  You can get the same types of problems with filepool volumes even though it is disk. When a filepool volume is being read it is mounted, just like a tape. If you are using replication between two ProtecTIER VTLs, production and DR, you can actually check out a volume in one VTL and check it in at the other VTL, just like tape.
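A minimal sketch of a FILE-class pool sized with scratch headroom (the names, sizes, and directory are placeholders):

define devclass BIGFILE devtype=file maxcapacity=50G mountlimit=100 directory=/tsm/filepool
define stgpool FILEPOOL BIGFILE maxscratch=500 collocate=no

Keeping MAXSCRATCH well above the expected volume count helps avoid the "pool shows full at 75%" situation Gary describes below.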

 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



From:   Stefan Folkerts stefan.folke...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   08/24/15 12:39
Subject:Re: [ADSM-L] going to all random disk pool for tsm for ve
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



But if enough volumes are in the filling state this is not a problem and
may I ask why use collocation on a filepool?

On Monday, August 24, 2015, Lee, Gary g...@bsu.edu wrote:

 No deduplication.

 However, I have the problem of a file class pool, collocated by filespace,
 showing full and not allowing backups even though the pool shows 75% full;
 it just used its maxscratch allocation.
 This causes an inordinate amount of human intervention to keep a few
 volumes below maxscratch.


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU javascript:
;]
 On Behalf Of Stefan Folkerts
 Sent: Monday, August 24, 2015 6:07 AM
 To: ADSM-L@VM.MARIST.EDU javascript:;
 Subject: Re: [ADSM-L] going to all random disk pool for tsm for ve

 I believe a fileclass storagepool is better suited for this kind of
 permanent storage on disk.
 A diskpool should be used for things such as disk between the backup 
client
 and tape for temporary random I/O storage that can handle an unlimited
 amount of sessions.
 A fileclass storagepool is better suited for long term storage.
 I create a diskpool for VE metadata when using dedup on the filepool and
 use fileclass storagepools for permanent storage on disk with or without
 deduplication.

 Are you planning on using deduplication on disk?


 On Fri, Aug 21, 2015 at 10:12 PM, David Ehresman 
 david.ehres...@louisville.edu javascript:; wrote:

  Gary,
 
  TSMVE 7.1.2 makes file level restores a reality.  You may want to work
  with them a bit before deciding if you want to go to all random disk.
 
  David
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU
 javascript:;] On Behalf Of
  Lee, Gary
  Sent: Friday, August 21, 2015 3:31 PM
  To: ADSM-L@VM.MARIST.EDU javascript:;
  Subject: [ADSM-L] going to all random disk pool for tsm for ve
 
  Hoping to get upgraded to tsm server 7.1.x within the next month.
  At that time, we are considering changing our storage strategy to all
  random disk pool.
 
  This because we cannot do a file level restore from a vmware backup 
from
  sequencial media, (be it tape or disk).
 
  Has anyone done this, or is there a better way?
 
  Looking for guideance and others' experience.
 







Re: Data Protection for IBM Domino

2015-08-21 Thread Ron Delaware
EJ,

You haven't stated what you are attempting to accomplish.

"He now asks me 'What if I run daily incrementals instead of the selectives?' I don't know if that will work, nor can I find the answer in the manuals..."

If you run daily incrementals, you need to ensure that the data retention is set correctly, so that you can still go back to a specific point in time. Why were you performing selectives instead of incrementals to begin with? Be more specific about what you are trying to achieve and we can provide you with better guidance or ideas.
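If the goal is simply to let deleted databases expire, one hedged approach (the wildcard and weekly timing are assumptions to adapt) is to keep the daily selectives and add a weekly incremental:

domdsmc selective *
domdsmc incremental *

The incremental is what marks databases that no longer exist as inactive, so the copygroup retention can expire them.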
 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



From:   Loon, EJ van (ITOPT3) - KLM eric-van.l...@klm.com
To: ADSM-L@VM.MARIST.EDU
Date:   08/21/15 06:02
Subject:[ADSM-L] Data Protection for IBM Domino
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi guys!
One of my customers has used Data Protection for IBM Domino for a few years now. All mail instances are backed up daily with the domdsmc selective command. I recently discovered that old deleted databases were never removed from TSM, which seems to be caused by the fact that we do selectives only. Only an occasional domdsmc incremental seems to remove them.
So I recommended that the customer implement an incremental backup once a week. He now asks me "What if I run daily incrementals instead of the selectives?" I don't know if that will work, nor can I find the answer in the manuals...
The customer uses circular logging only for all Domino instances.
Thank you very much for your help in advance!
Kind regards,
Eric van Loon
AF/KLM Storage Engineering








Re: help with designing a backup system for Teradata

2015-07-30 Thread Ron Delaware
Rick,

What type of storage system are you using? Does it have the necessary I/O
capability to provide the throughput you are going to require?


Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   Rhodes, Richard L. rrho...@firstenergycorp.com
To: ADSM-L@VM.MARIST.EDU
Date:   07/30/15 10:03
Subject:[ADSM-L] help with designing a backup system for Teradata
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



We purchased a Teradata database system.
It currently is in test/dev stage with little data.
We don't really know the ultimate backup requirements.
To get things started we setup a simple backup system:

   Teradata
- to Bar server (Win) with TSM interface sftw
- to TSM server (AIX)
- to filepool on DataDomain (getting ~5x dedup)

   From the Bar server to TSM server is a standard 1GB ethernet.

Now we need to scale up/out!

The consultants are saying we will need to back up 30TB in a 6hr window,
but maybe as high as 50TB in 6hr.
That is (roughly):
 30TB in 6hr = 1,400 MB/sec
 50TB in 6hr = 2,300 MB/sec

So we need to design a TSM backup system to support this.

My thoughts:

1) Put a storage agent on the BAR server (Windows server)
 and feed a VTL via 4x8Gb SAN connections with a bunch of virtual tape drives.

2) Put the TSM server directly on the Bar server for just
 local tape and still feed a VTL as above.
 No library sharing.

3) I'd really like to not use tape (even virtual tape),
 but I can't think of any way to feed file devices
 with that throughput.

I'd appreciate any thought/comments anyone might have!

Thanks

Rick








Re: Moving DB , LOG , ARCHLOG

2015-07-22 Thread Ron Delaware
Robert,

In order to move your TSM database and logs, you will have to perform a
restore db and specify the new directories/file systems where you want the
database and logs to reside.

Moving both the database and recovery log
You can move the database, active log, and archive logs that are on the
same file
system to various directories on different file systems for better
protection.
Procedure
1. Back up the database by issuing the following command:
backup db type=full devclass=files
2. Halt the server.
3. Create directories for the database, active logs, and archive logs. The
directories
must be accessible to the user ID of the database manager. For example:
mkdir l:\tsm\db005
mkdir m:\tsm\db006
mkdir n:\tsm\db007
mkdir o:\tsm\db008
mkdir p:\tsm\activelog
mkdir q:\tsm\archivelog
4. Create a file that lists the locations of the database directories.
This file is used
if the database must be restored. Enter each location on a separate line.
For
example, these are the contents of the dbdirs.txt file:
l:\tsm\db005
m:\tsm\db006
n:\tsm\db007
o:\tsm\db008
5. Remove the database instance by issuing the following command:
dsmserv removedb TSMDB1
6. Issue the DSMSERV RESTORE DB utility to move the database and create
the new
active log. For example:
dsmserv restore db todate=today on=dbdirs.txt
activelogdir=p:\tsm\activelog
7. Restart the server.


Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   07/22/15 09:44
Subject:[ADSM-L] Moving DB , LOG , ARCHLOG
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hello

I am in the process of moving my DB, activelog and archivelog from old storage
to faster storage, with the DB on SSD disks.

I am wondering if I can do it in one step, meaning:


1.   Create new volumes for the DB on the SSD disks, and create a list in dbdir.file

2.   Create a new volume of 150G for the activelog as:  X:\tsmactlog

3.   Create a new volume of 400G for the archivelog as:  Y:\tsmarchlog

4.   In dsmserv.opt update those entries as:  ACTIVELOGDirectory  X:\tsmactlog
                                              ARCHLOGDirectory    Y:\tsmarchlog

5.   Run a full backup DB

6.   Halt the server

7.   Run dsmserv removedb  TSMDB1

8.   Run dsmserv restore DB todate=today on=dbdir.file

9.   Restart the server

10.   Move the archivelog from the old directory to the new directory as:

Xcopy /s  OLDDIRECTORY\*  Y:\tsmarchlog



Or do I need to do it in 3 steps? (meaning 3 halts of the server)

1.   ACTIVELOG: change in dsmserv.opt, halt server and restart server

2.   ARCHIVELOG: change in dsmserv.opt, halt server and restart server

3.   DB: backup db, halt server, restore DB on=dbdir.file, restart server

TSM server version 7.1.1.200 on Windows 2008R2 64B

Best Regards

Robert Ouzen




Re: ANR2033E Command failed - lock conflict

2015-07-07 Thread Ron Delaware
Grant,

To tackle your problem of running multiple backups at the same time, you can carve
up your NAS server (as long as you are using volumes and directories, and
NOT volumes and trees) using the virtualmountpoint option.

Using the virtualmountpoint option to identify a directory within a file
system provides a direct path to the files you want to back up, saving
processing time. It is more efficient to define a virtual mount point
within a file system than it is to define that file system using the
domain option, and then to use the exclude option in your include-exclude
options list to exclude the files that you do not want to back up.

Use the virtualmountpoint option to define virtual mount points for
multiple file systems, for local and remote file systems, and to define
more than one virtual mount point within the same file system. Virtual
mount points cannot be used in a file system handled by automounter.

You can use the virtualmountpoint option to back up unsupported file
systems, with certain limitations.

After you define a virtual mount point, you can specify the path and
directory name with the domain option in either the default client options
file or on the incremental command to include it for incremental backup
services. When you perform a backup or archive using the virtualmountpoint
option, the query filespace command lists the virtual mount point in its
response along with other file systems. Generally, directories that you
define as virtual mount points are treated as actual file systems and
require that the virtualmountpoint option is specified in the dsm.sys file
to restore or retrieve the data.

After the creation of your virtualmountpoints, you would create a NAS
datamover for each virtualmountpoint, and each NAS Datamover would be
assigned to a schedule (could be the same schedule or separate schedules).
This would allow you to have multiple backups of your NAS Server running
at the same time.  I have not used this configuration with the NAS
appliance controlling the data storage pools, but this works fine when TSM
is set up to control the data (normally called a 3-way setup). This allows
the NAS data to be treated the same as any other data that TSM manages,
sending it to disk storage pools, tape storage pools, making copies for
offsite requirements and such.
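A minimal sketch of the client-option side of that (a Unix BA client dsm.sys stanza; the paths are placeholders):

VIRTUALMountpoint /nas/projects
VIRTUALMountpoint /nas/homes
DOMAIN /nas/projects /nas/homes

Each virtual mount point then shows up as its own file space, so separate schedules or sessions can work on them in parallel.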


Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   Grant Street grant.str...@al.com.au
To: ADSM-L@VM.MARIST.EDU
Date:   07/07/15 17:08
Subject:Re: [ADSM-L] ANR2033E Command failed - lock conflict
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi Nick

It is a bit cumbersome to do parallel backups, but it does work.
Essentially you run a backup node process per NAS volume depending on
your NAS's definition of volume.
This is tricky when you have few volumes or volumes that vary greatly in
size. If you only have two volumes you can only ever create two streams.
If you have two volumes, one 100GB and one 1TB and you want to do a
backup storage pool after, you have to wait for the largest to finish.

We never saw it in 6.3 that we ran for 18 months or more. We have seen
it every time in 7.1.1.300. We have to keep retrying until it works.

Grant

On 08/07/15 00:22, Nick Marouf wrote:
 Hi Grant,

 I've been interested in pursing NDMP backups in parallel, How does it
work
 overall?

 Is this lock conflict something that you have experienced specifically
with
 the new version of tsm, and not with version 6.3?

 Thanks,
 -Nick


 On Tue, Jul 7, 2015 at 12:41 AM, Grant Street grant.str...@al.com.au
 wrote:

 Hi All

 We are running some NDMP backups in parallel using the PARALLEL
 functionality in a TSM script.

 Since moving to 7.1.1.300 from 6.3 we have noticed that we are getting

 ANR2033E BACKUP NODE: Command failed - lock conflict.

 Has anyone else seen this? or have some advice?

 We can't change it to a single stream as it will take close to a week
in
 order to do the backup

 Thanks in advance

 Grant





Re: Backup and restore question

2015-06-17 Thread Ron Delaware
Robert,

You cannot perform a normal incremental once you have performed a full
image, unless you have two separate nodes, one doing image backups and one
doing file-by-file backups.  You can do incremental image backups, but not
a file-by-file incremental.

 
Best Regards,
_

email: ron.delaw...@us.ibm.com

 



From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   06/16/15 21:43
Subject:Re: [ADSM-L] Backup and restore question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hello Erwann

I admit I am quite lost .

I run a regular incremental backup on disk F:   (first time , so like full 
backup)

As:   INC  F:  -su=yes

I run a full image backup of F: as:   backup image F:

But if I then run a backup image with -mode=incremental, I get this error:

tsm backup image f:  -mode=incremental
ANS1813E Image Backup processing of '\\nasw\f$' finished with failures.

ANS1229E MODE=INCREMENTAL is not valid on 
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy17. Image backup not 
processed.

I erased everything and made a test without doing a regular incremental
backup of disk F, as shown below:

tsm backup image F:

Image backup of volume 'F:'

Total number of objects inspected:1
Total number of objects backed up:1
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:  0
Total number of objects failed:   0
Total number of objects encrypted:0
Total number of objects grew: 0
Total number of retries:  0
Total number of bytes inspected:  19.99 GB
Total number of bytes transferred:86.41 MB
Data transfer time:4.00 sec
Network data transfer rate:   22,123.01 KB/sec
Aggregate data transfer rate: 13,869.33 KB/sec
Objects compressed by:0%
Total data reduction ratio:   99.58%
Elapsed processing time:   00:00:06
tsm backup image F: -mode=incremental

Total number of objects inspected:   16
Total number of objects backed up:0
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:  0
Total number of objects failed:   0
Total number of objects encrypted:0
Total number of objects grew: 0
Total number of retries:  0
Total number of bytes inspected:  41.56 KB
Total number of bytes transferred:0  B
Data transfer time:0.00 sec
Network data transfer rate:0.00 KB/sec
Aggregate data transfer rate:  0.00 KB/sec
Objects compressed by:0%
Total data reduction ratio:  100.00%
Elapsed processing time:   00:00:06
tsm

Of course NO data in the -mode=incremental but NO error 

What is the correct sequence to have a combination of regular incremental 
backups and image backups?

Thanks again for the advice

Best Regards

Robert



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Erwann SIMON
Sent: Tuesday, June 16, 2015 11:26 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup and restore question

Hello Robert,

The -deletefiles flag won't be an option if you don't run regular 
incremental backups, which are able to mark deleted files as inactive.

The -mode=incremental flag uses the incremental-by-date method, which does not, 
so the TSM DB is not aware of file deletions.

-- 
Best regards / Cordialement / مع تحياتي
Erwann SIMON

- Mail original -
De: Robert Ouzen rou...@univ.haifa.ac.il
À: ADSM-L@VM.MARIST.EDU
Envoyé: Lundi 15 Juin 2015 14:50:58
Objet: Re: [ADSM-L] Backup and restore question

Hello Erwann

I need a little more explanation.

If I understand correctly, I need to do:

 1. backup image E:   (full backup)
 2. backup image E:  -mode=incremental   (only new files are 
backed up; a fast backup)

I want to use (test) the restore with option imagetofile
 
Restore image E: F:\diskE.iso -imagetofile  -incremental -deletefiles

So NO need  for regular  incremental  backup  ???

T.I.A  Best Regards

Robert 
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Erwann SIMON
Sent: Sunday, June 14, 2015 10:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup and restore question

Hello Robert,

Backup image is a full image done at block level. When specifying 
mode=incremental, it is a file-level incremental that is done: only files 
having a modified date later than the previous image backup are backed up.

The -deletefiles flag when restoring an image backup is 

Re: Moving nodes to a new policy

2015-06-06 Thread Ron Delaware
Paul,

Not sure what your retention looks like, but if you move the nodes to a
different domain with a different retention (smaller), then the data will
get bound to the new policy after a backup is performed and the new data
retention will kick in at that time. I believe that your understanding of
version deleted (VERDELETED) is off. VERDELETED is only used once a file has been
deleted from the client.
Example: a file has been deleted from the client node, AND an incremental
backup is run, the TSM client will inform the TSM server of this fact.
This results in following actions on the TSM server:
  a) The TSM server deactivates the active version of the file and no more
versions are inserted into the database.
  b) The VERDELETED parameter is used to limit the number of versions held
by the TSM Server. Any number of versions in excess of the VERDELETED
  setting will be marked for immediate expiration and purged (via the
same mechanism as described above in Case A).
  c) All the inactive versions are tracked in the Expiring.Objects table.
Each version of the file is purged in accordance to the settings of the
RETEXTRA
  parameter, except for the LAST version. Its deactivation date will
be evaluated against the RETONLY parameter of the management class
setting. This is a
  safety feature of TSM to allow the last version (most recent) to be
recoverable for an extended period of time, should a user wish.
So a setting of 30 30 30 365 would keep 30 versions (1 active, 29
inactive). If a file is deleted, TSM would keep 30 inactive versions and,
based on the deactivation date, would keep each version, except the last, for
30 days. The last inactive version would be kept for 365 days; the 365
days start the day it was deactivated, so that file would actually be
around for an additional 335 days after it became the only version of the
file still on the TSM Server.
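
For reference, those four values map onto a backup copy group along the
lines of the sketch below (the domain, policy set, class, and pool names are
examples only, not anything from your environment):

define copygroup NEWDOM STANDARD STANDARD standard type=backup destination=TAPEPOOL verexists=30 verdeleted=30 retextra=30 retonly=365
activate policyset NEWDOM STANDARD

VEREXISTS/RETEXTRA govern files that still exist on the client;
VERDELETED/RETONLY take over once the client reports the file deleted.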



Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   Paul_Dudley pdud...@anl.com.au
To: ADSM-L@VM.MARIST.EDU
Date:   06/04/15 17:35
Subject:[ADSM-L] Moving nodes to a new policy
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



If I have nodes that are currently under a policy that keeps 5 backup
versions, can I move them to another policy which keeps only 2 backup
versions?

Will TSM then start expiring all the older backup versions of files which
are no longer needed?

We have two nodes which are taking up a lot of backup space in TSM and
want to reduce this.





Thanks  Regards

Paul Dudley

pdud...@anl.com.au








ANL DISCLAIMER

This e-mail and any file attached is confidential, and intended solely to
the named addressees. Any unauthorised dissemination or use is strictly
prohibited. If you received this e-mail in error, please immediately notify
the sender by return e-mail from your system. Please do not copy, use or
make reference to it for any purpose, or disclose its contents to any person.




Re: trouble trimming collumns in select

2015-06-03 Thread Ron Delaware
Gary,

Not sure what you are trying to do; this is the command I ran and the
results:
dsmadmc -id=ron -passw=ron -comma -datao=yes SELECT rtrim(node_name),
rtrim(filespace_name), filespace_id, rtrim(filespace_type),
daTE(backup_end) as back_up_DATE FROM filespaces WHERE
DAYS(current_date)-DAYS(backup_end)>10

AMORKITO,\\amorkito\d$,1,NTFS,2015-05-04
AMORKITO,\\amorkito\e$,10,NTFS,2015-05-04
AMORKITO,\\amorkito\h$,9,NTFS,2015-05-04
AMORKITO,\\amorkito\v$,8,NTFS,2015-05-04
AMORKITO,\\amorkito\f$,7,NTFS,2015-05-04
AMORKITO,\\amorkito\i$,6,NTFS,2015-05-04
AMORKITO,\\amorkito\j$,5,NTFS,2015-05-04
AMORKITO,\\amorkito\c$,4,NTFS,2015-05-04
AMORKITO,\\amorkito\g$,3,NTFS,2015-05-04

if you are sending this to a file I would add a command before that. For
this example we are sending the data to a file called Testtrim.csv:
@echo off
@cls
echo Node,filespace,FS_ID,FS_type,last backup > c:\temp\testtrim.csv
dsmadmc -id=ron -passw=ron -comma -datao=yes SELECT rtrim(node_name),
rtrim(filespace_name), filespace_id, rtrim(filespace_type),
daTE(backup_end) as back_up_DATE FROM filespaces WHERE
DAYS(current_date)-DAYS(backup_end)>10 >> c:\temp\testtrim.csv

c:\Program Files\Tivoli\TSM\baclient>type c:\temp\testtrim.csv
Node,filespace,FS_ID,type,last backup
AMORKITO,\\amorkito\d$,1,NTFS,2015-05-04
AMORKITO,\\amorkito\e$,10,NTFS,2015-05-04
AMORKITO,\\amorkito\h$,9,NTFS,2015-05-04
AMORKITO,\\amorkito\v$,8,NTFS,2015-05-04
AMORKITO,\\amorkito\f$,7,NTFS,2015-05-04
AMORKITO,\\amorkito\i$,6,NTFS,2015-05-04
AMORKITO,\\amorkito\j$,5,NTFS,2015-05-04
AMORKITO,\\amorkito\c$,4,NTFS,2015-05-04
AMORKITO,\\amorkito\g$,3,NTFS,2015-05-04

Keep the command line as one line; turn off word wrap in your editor.
Again, not knowing what you are attempting to do, sending the data to a
comma-delimited file and opening it in Excel opens up a lot of different options
for displaying the output.


Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   Lee, Gary g...@bsu.edu
To: ADSM-L@VM.MARIST.EDU
Date:   06/02/15 12:59
Subject:[ADSM-L] trouble trimming collumns in select
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Have the following script, but when I run on a 6.2.5 server using a 6.4.2
command line client no trimming is done.
Lines are thousands of characters long.

Script follows:

SELECT rtrim(node_name), -
rtrim(filespace_name), -
filespace_id, -
rtrim(filespace_type), -
DATE(backup_end) as backup DATE -
FROM filespaces WHERE -
  DAYS(current_date)-DAYS(backup_end) > $1


What have I missed?




Re: NAS Backup

2015-05-18 Thread Ron Delaware
Eric,

When a NAS/NDMP backup starts, there is a query from the datamover to the
TSM Server requesting space for the backup. It doesn't matter if you are
doing fulls or incrementals; the datamover uses the same storage
requirement for both.

Example:
you have 100TB of NAS used space total. To do a full would require 100TB +
10% of space available on the TSM server. That's pretty straightforward.
The muddy area is when you are attempting to perform incremental backups.
When requesting space, the space that is requested is equal to the total
space used, which would be 100TB + 10%; this happens even though an
incremental will take place. This can cause incremental backups to be
mixed with full backup storage.
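
A quick way to see whether the destination pool can satisfy that up-front
request is to compare the pool's scratch allowance with what the library
actually has left (the pool and library names below are examples only):

query stgpool NAS_TAPEPOOL format=detailed
query libvolume YOURLIB

In the first output, compare Maximum Scratch Volumes Allowed against Number
of Scratch Volumes Used; in the second, count how many volumes still show a
status of Scratch.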


Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   McWilliams, Eric emcwilli...@medsynergies.com
To: ADSM-L@VM.MARIST.EDU
Date:   05/18/15 11:48
Subject:[ADSM-L] NAS Backup
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



How does the NAS backup determine if there is enough space to back up the
data to?  I'm currently backing up an EMC Isilon directly to tape (I know,
I know, you don't have to tell me!) and am getting an error that there is
not enough space in the storage pool.

ANR1072E NAS Backup to TSM Storage process 8 terminated - insufficient
space in destination storage pool. (SESSION: 5226, PROCESS: 8)

I'm only backing up around 9TB so there should be more than enough space
in the tape library.  This has worked well up until last week.

Thanks

Eric

**
*** CONFIDENTIALITY NOTICE ***

 This message and any included attachments are from MedSynergies, Inc. and
are intended only for the addressee. The contents of this message contain
confidential information belonging to the sender that is legally
protected. Unauthorized forwarding, printing, copying, distribution, or
use of such information is strictly prohibited and may be unlawful. If you
are not the addressee, please promptly delete this message and notify the
sender of the delivery error by e-mail or contact MedSynergies, Inc. at
postmas...@medsynergies.com.




Re: Share permission changes

2015-05-11 Thread Ron Delaware
Just changing the share permissions would not cause the symptom that he is
experiencing. I agree with Steven: filesystem permissions must have been
changed. As those are propagated from the parent directory, they may have
been changed without him being aware.


Best Regards,
_

email: ron.delaw...@us.ibm.com





From:   Steven Langdale steven.langd...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   05/11/15 14:54
Subject:Re: [ADSM-L] Share permission changes
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Share perms are stored in the registry, so backed up with a system state
backup.  If all the files are getting backed up again he must have changed
the filesystem perms too.

On Mon, 11 May 2015 22:13 Paul Zarnowski p...@cornell.edu wrote:

 This is a problem for NTFS because the amount of metadata associated
with
 an object is more than you can put into the TSM database.  Thus, TSM
puts
 it into the storage pool, along with the object.  What this means is
that
 when the metadata changes, the object has to be backed up again.  This is
not a
 problem for Unix/NFS, because there isn't much metadata and it can all
be
 put into the TSM DB, which means if it changes it's just a DB update and
 not another backup of the object.

 Bad enough for backups, but imagine if you had a PB-scale GPFS
filesystem
 and someone unwittingly makes such a change.  Now you're talking about
 having to recall all of those objects in order to back them up again.
 Ugh.  End of game.

 ..Paul


 At 04:54 PM 5/11/2015, Nick Marouf wrote:
 Hello
 
 From my experience, changing share permissions will force TSM to back up
all
 the data once more. A solution we used in the past was to assign groups
 instead of users to shares.
 
 Changes to group membership is behind the scenes in AD, and is not
picked
 up by TSM at the client level.
 
 
 On Mon, May 11, 2015 at 2:39 PM, Thomas Denier 
 thomas.den...@jefferson.edu
 wrote:
 
  One of our TSM servers is in the process of backing up a large part
of
 the
  contents of a Windows 2008 file server. I contacted the system
  administrator. He told me that he had changed share permissions but
not
  security permissions, and did not expect all the files in the share
to
 be
  backed up. Based on my limited knowledge of share permissions I
wouldn't
  have expected that either. Is it normal for a share permissions
change
 to
  have this effect? How easy is it to make a security permissions
change
  while trying to make a share permissions change?
 
  Thomas Denier,
  Thomas Jefferson University
  The information contained in this transmission contains privileged
and
  confidential information. It is intended only for the use of the
person
  named above. If you are not the intended recipient, you are hereby
 notified
  that any review, dissemination, distribution or duplication of this
  communication is strictly prohibited. If you are not the intended
  recipient, please contact the sender by reply email and destroy all
 copies
  of the original message.
 
  CAUTION: Intended recipients should NOT use email communication for
  emergent or urgent health care matters.
 


 --
 Paul ZarnowskiPh: 607-255-4757
 Assistant Director for Storage Services   Fx: 607-255-8521
 IT at Cornell / InfrastructureEm: p...@cornell.edu
 719 Rhodes Hall, Ithaca, NY 14853-3801





Re: Linux 6.4 client hangs on starting dsmc

2015-04-16 Thread Ron Delaware
I ran into a similar problem, there were special characters hidden in the 
dsm.sys file but we were not able to determine what they were. I would 
recommend:
1. Rename the current dsm.sys to dsm.sys.old
2. Create a new dsm.sys (DO NOT cut and paste) then save it
3. Try the backup again.

In my case, support was able to replicate the problem in their lab using 
the dsm.sys from the customer's TSM Client. They were never able to 
determine what the characters were or where they were located in the 
dsm.sys file.
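
If you want to hunt for them yourself before rebuilding the file, something
along these lines usually exposes non-printing bytes (the path shown is the
usual Linux client location; adjust as needed):

cat -v /opt/tivoli/tsm/client/ba/bin/dsm.sys | less
grep -n '[^[:print:][:space:]]' /opt/tivoli/tsm/client/ba/bin/dsm.sys

With cat -v, control characters show up as ^X and high-bit characters as
M-x; the grep prints the line numbers of any line containing them.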

 
Best Regards,
_

email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Francisco Javier francisco.parri...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   04/16/15 10:12
Subject:Re: [ADSM-L] Linux 6.4 client hangs on starting dsmc
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Try executing dsmc from the command line; perhaps it is a problem with some
configuration in the option files.

Regards


2015-04-16 11:38 GMT-05:00 Arbogast, Warren K warbo...@iu.edu:

 A Linux client has been missing its scheduled backups.  The TSM client 
is
 at version s 6.4.0.0, and our TSM server is running 7.1.1.108 on a 
Redhat 6
 OS,

 The client admin reports that it hangs immediately when dsmc is started,
 but the admin can telnet successfully from the client to the TSM 
server
 over ports 1500 and 1542, so we have crossed ‘firewall problem’ off the
 list of possible causes.

 'ssl yes’ and ‘ssl fipsmode yes’ are specified in dsm.sys, but the admin
 tried commenting out ‘sslfipsmode yes’ and running dsmc —with the same
 result.

 dsmerror.log is empty, and there are no recent messages in dsmwebcl.log.

 The admin reports that selinux is running, but that ’nothing has 
changed’
 in its configuration recently.  Since backups had been runining
 successfully till a week ago, certainly something has changed, but we 
can’t
 find it.

 Where else should we look for the cause of the immediate hang when dsmc 
is
 started?

 With many thanks,
 Keith Arbogast
 Indiana University





Re: script help

2015-03-12 Thread Ron Delaware
Jeanne,

If you were to run those commands at the DB2 level, they would work fine, or
possibly as a shell script run from a TSM macro. There are limitations, as
you found out, when trying to run select statements from within TSM scripts.
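
As a rough sketch of the shell-script route (the admin ID, password, and
process name below are placeholders), you can let dsmadmc do the select and
capture the process number in a shell variable:

#!/bin/sh
# Pull the process number of the running 'Identify Duplicates' process.
PROC=$(dsmadmc -id=admin -password=secret -dataonly=yes \
  "select process_num from processes where upper(process) like 'IDENTIFY%'")
echo "process id=$PROC"

From there the variable can feed another dsmadmc call (a CANCEL PROCESS, for
example) without needing DECLARE or DBMS_OUTPUT at all.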


Best Regards,
_

email: ron.delaw...@us.ibm.com
Storage Services Offerings






From:   Jeanne Bruno jbr...@cenhud.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/12/15 14:36
Subject:[ADSM-L] script help
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hello.  Need some help.  I'm trying to create a new script for myself and
I want to get the PROCESS_NUM from the Processes table in a variable.

def script Processes desc=get Process Number
update script Processes 'declare process processes.PROCESS_NUM%type'
update script Processes 'START:'
update script Processes 'select PROCESS_NUM into process from processes
where PROCESSIdentify Duplicates'
update script Processes 'if (rc_ok) goto ID'
update script Processes 'ID:'
update script Processes DBMS_OUTPUT.PUT_LINE('process id=' process)
update script Processes 'EXIT:'
update script Processes 'exit'

tsm: TSMPOK_SERVER1> update script cancelreps 'declare process
processes.PROCESS_NUM%type'
ANR1469E UPDATE SCRIPT: Command script CANCELREPS, Line 1 is an INVALID
command : declare process processes.PROCESS_NUM%type.
ANS8001I Return code 3.

tsm: TSMPOK_SERVER1> update script cancelreps
DBMS_OUTPUT.PUT_LINE('process id=' process)
ANR2002E Missing closing quote character.
ANR1469E UPDATE SCRIPT: Command script CANCELREPS, Line 20 is an INVALID
command : DBMS_OUTPUT.PUT_LINE('process id=' process).
ANS8001I Return code 3.

And when I put the word 'process' in quotes (just to get around the quote
error above)it's an invalid command anyway.

tsm: TSMPOK_SERVER1> update script cancelreps
DBMS_OUTPUT.PUT_LINE('process')
ANR1469E UPDATE SCRIPT: Command script CANCELREPS, Line 20 is an INVALID
command : DBMS_OUTPUT.PUT_LINE('process').
ANS8001I Return code 3.

Are there DB2 equivalents for the 'declare' and 'DBMS_OUTPUT' commands???
I've googled and it looks like you can use the commands in DB2, but maybe not
for TSM???

Any input is much appreciated.


Jeannie Bruno
Senior Systems Analyst
jbr...@cenhud.commailto:jbr...@cenhud.com
Central Hudson Gas  Electric
(845) 486-5780




Re: Select Statement Help

2015-03-09 Thread Ron Delaware
Bruce,

You could do a group by node_name at the end of your select statement.
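
For example, something along these lines should give one row per TDP node
with its oldest active backup date (the column names come from the BACKUPS
table; on a large server this query can run for quite a while):

select node_name, date(min(backup_date)) as oldest_backup from backups where node_name like '%TDP' and state='ACTIVE_VERSION' group by node_name order by 2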


Best Regards,
_

email: ron.delaw...@us.ibm.com
Storage Services Offerings






From:   Kamp, Bruce (Ext) bruce.k...@alcon.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/09/15 12:46
Subject:[ADSM-L] Select Statement Help
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I found a couple of TDP SQL nodes that aren't inactivating their backups,
so TSM isn't expiring them...
What I am trying to find is the oldest backup date for each server with a
name like _TDP.

I can get this:
Node Name HL_NAME   BACKUP DATE STATE
-
- 
--
XYZ_TDP //  2009-08-17 ACTIVE_VERSION
XYZ_TDP //  2009-09-13 ACTIVE_VERSION
XYZ_TDP //  2009-09-14 ACTIVE_VERSION
XYZ_TDP //  2009-09-15 ACTIVE_VERSION
XYZ_TDP //  2009-09-16 ACTIVE_VERSION

What I really want is something like this:
Node Name HL_NAME   BACKUP DATE STATE
-
- 
--
XYZ_TDP //  2009-08-17 ACTIVE_VERSION
ABC_TDP //  2009-09-13 ACTIVE_VERSION
123_TDP //  2009-09-14 ACTIVE_VERSION

Is this possible ?

Thanks,
Bruce Kamp
TSM Administrator
(817) 568-7331




Re: How to setup dedicated tape drives for storage agent

2014-09-15 Thread Ron Delaware
Saravanan,

You could set up your tape library so that it is partitioned to use one
half for normal backup clients and one half for your storage agent(s), or
you could set up your TSM server as a Library Manager and the storage
agent(s) as Library Clients.  Since you are using a VTL, the Library
Manager configuration would work best for you.

Is there a specific reason you need 1/2 of the total drives just for the
storage agent?
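
In case it helps, the Library Manager route boils down to something like the
sketch below (the library and server names are placeholders, not your actual
configuration):

On the library manager:
  define library VTLLIB libtype=scsi shared=yes

On each library client server:
  define library VTLLIB libtype=shared primarylibmanager=LIBMGR1

The manager then arbitrates drive allocation, so you would not have to
hard-split the 64 drives between the TSM server and the storage agent.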


Best Regards,
_

email: ron.delaw...@us.ibm.com
Storage Services Offerings






From:   Saravanan Palanisamy evergreen.sa...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   09/15/2014 09:47 AM
Subject:[ADSM-L] How to setup dedicated tape drives for storage
agent
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Dear TSM Folks,

Has anybody tried this approach to enhance tape drive availability ?

Create 64 tape drives in data domain and allocate 32 tape drives only for
TSM server backup ( Lan based backup) and remaining 32 tape drives
dedicate
for storage agent

Is this possible to implement this setup ?

I really had concern because storage agent definitely need tape drive
should be defined and path must be online in TSM server before defining
path for storage agent.

Will it work if turn off tape drive path in TSM server and only turn on
corresponding path for storage agent ? I never tried this approach and has
anybody got real time experience on this setup?

--
Thanks  Regards,
Saravanan
Mobile: +65-86966188




Re: TDP for VM

2014-08-08 Thread Ron Delaware
Ricky,

Did you change the asnode= option for the proxy datamover?  If you are
using the same asnode= option, there should be no problem, but if you have
changed it or omitted it, then the TSM Server assumes it's a new node and
wants a full backup performed.
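
In other words, the new proxy's options should point at the same shared
target node as the old one, and the new proxy node needs proxy authority to
it. A rough sketch (all names below are examples, not your actual nodes):

In dsm.opt on the new data mover:
  NODENAME     VMPROXY2
  ASNODENAME   VMCLUSTER_DC

On the TSM server:
  grant proxynode target=VMCLUSTER_DC agent=VMPROXY2

With that in place, the incrementals should pick up from the backups the old
proxy stored under the shared target node.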


Best Regards,
_

email: ron.delaw...@us.ibm.com
Storage Services Offerings






From:   Plair, Ricky rpl...@healthplan.com
To: ADSM-L@vm.marist.edu
Date:   08/08/2014 02:25 PM
Subject:[ADSM-L] TDP for VM
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



All,

I have created a new VM proxy to backup our VM Cluster. This VM Cluster
has already been backed up numerous times by a different VM proxy.

The problem I'm having is,  the new proxy doesn't see any of the backups
that have been completed on the old proxy and therefore wants to perform a
full on the entire VM Cluster.

My question is, how can I get the new VM proxy to perform an incremental
backup not a full backup. The TSM server has not changed, the VM Cluster
nodes have not changed just the proxy.

I really appreciate the help.

Rick

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_
CONFIDENTIALITY NOTICE: This email message, including any attachments, is
for the sole use of the intended recipient(s) and may contain confidential
and privileged information and/or Protected Health Information (PHI)
subject to protection under the law, including the Health Insurance
Portability and Accountability Act of 1996, as amended (HIPAA). If you are
not the intended recipient or the person responsible for delivering the
email to the intended recipient, be advised that you have received this
email in error and that any use, disclosure, distribution, forwarding,
printing, or copying of this email is strictly prohibited. If you have
received this email in error, please notify the sender immediately and
destroy all copies of the original message.




Re: Question about NDMP and TSM.

2014-07-16 Thread Ron Delaware
Wanda,

This document will give a bit of insight into deduped NAS data.

http://www-05.ibm.com/de/events/breakfast/pdf/TSM_Dedup_Best_Practices_v1_0.pdf


 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Prather, Wanda wanda.prat...@icfi.com
To: ADSM-L@vm.marist.edu
Date:   07/16/2014 07:53 PM
Subject:Re: [ADSM-L] Question about NDMP and TSM.
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Interesting point about NDMP and dedup.
Do you have any experience with it?  What kind of dedup ratios did you 
see?


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Steven Harris
Sent: Tuesday, July 15, 2014 9:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question about NDMP and TSM.

And as a bonus, NDMP storage pools cannot be reclaimed, and this means that 
they hold tapes until the last data has expired.  TSM-format storage pools 
can be reclaimed, and if they are file storage pools they can be deduplicated as 
well.


Regards

Steve

Steven Harris
TSM Admin
Canberra Australia



On 16 July 2014 05:38, Ron Delaware ron.delaw...@us.ibm.com wrote:

 Ricky,

 The configuration that you are referring to is what could be 
 considered the 'Traditional' implementation of NDMP.  As you have 
 found for yourself, there are a number of restrictions on how the data 
can be managed though.

 If you configure the NDMP environment so that a Tivoli Storage Manager 
 controls the data flow instead of the NetApp Appliance, you have more 
 options


 This configuration will allow you backup up to TSM storage pools 
 (Disk, VTL, Tape), send copies offsite, because the TSM Server 
 controls the destination.  You have the option to use a traditional 
 TSM Client utilizing the NDMP protocol or have the TSM server perform 
 the backup and restores using the BACKUP NODE and RESTORE NODE 
 commands. With a table of contents storage pool (disk based only, highly 
 recommended) you can perform single file restores.  You can also 
 create virtual filespace pointers to your vfiler that will allow you 
 to run simultaneous backups of the vfiler, that could shorten your 
backup and restore times.





 Best Regards,

 _
 * Ronald C. Delaware*
 IBM Level 2 - IT Plus Certified Specialist – Expert IBM Corporation | 
 Tivoli Software IBM Certified Solutions Advisor - Tivoli Storage IBM 
 Certified Deployment Professional Butterfly Solutions Professional
 916-458-5726 (Office
 925-457-9221 (cell phone)

 email: *ron.delaw...@us.ibm.com* ron.delaw...@us.ibm.com *Storage 
 Services Offerings* 
 http://www-01.ibm.com/software/tivoli/services/consulting/offers-stor
 age-optimization.html





 From:Schneider, Jim jschnei...@ussco.com
 To:ADSM-L@vm.marist.edu
 Date:07/15/2014 12:19 PM
 Subject:Re: [ADSM-L] Question about NDMP and TSM.
 Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
 --



 Ricky,

 The Isilon uses the OneFS file system and TSM views it as one huge 
 file system.  If backing up to disk, TSM will attempt to preallocate 
 enough space to back up the entire allocated space on the Isilon. 
 Defining Virtual File systems will not help because directory quota 
 information is not passed to TSM, and TSM only sees the total allocated 
space.

 We were able to back up the Isilon to disk when we started on a test 
 system with little data on it, around 25 GB.  When we attempted to 
 implement the same backups on a second, well-populated Isilon we ran 
 into the space allocation problem.

 When backing up to tape, TSM assumes you have unlimited storage 
 available and is able to run VFS backups.  We use Virtual File Space 
 Mapping (VFS) and back up to tape.

 Refer to EMC SR#4646, TSM PMR 23808,122,000.

 Jim Schneider
 United Stationers

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU 
 ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, Ricky
 Sent: Tuesday, July 15, 2014 1:21 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Question about NDMP and TSM.

 I have been asked to look into backing up our EMC Isilon using our TSM 
 server.

 Everything I read,  seems to point to backing this NDMP device to tape.

 Problem is, we do not use tape to backup production.

 I have researched and found a few articles about backing the NDMP 
 device to tape but, there seem to be more cons than pros.

 Is there anybody backing up a NDMP device to disk that can give me 
 some pros and,  how they are using disk for this task.


 I appreciate your time

Re: Question about NDMP and TSM.

2014-07-15 Thread Ron Delaware
Ricky,

The configuration that you are referring to is what could be considered 
the 'Traditional' implementation of NDMP.  As you have found for yourself, 
there are a number of restrictions on how the data can be managed, though.

If you configure the NDMP environment so that the Tivoli Storage Manager 
server controls the data flow instead of the NetApp Appliance, you have more 
options.


This configuration will allow you to back up to TSM storage pools (disk, 
VTL, tape) and send copies offsite, because the TSM Server controls the 
destination.  You have the option to use a traditional TSM Client 
utilizing the NDMP protocol, or have the TSM server perform the backups and 
restores using the BACKUP NODE and RESTORE NODE commands. With a table of 
contents storage pool (disk based only, highly recommended) you can perform 
single file restores.  You can also create virtual filespace pointers to 
your vfiler that will allow you to run simultaneous backups of the vfiler, 
which could shorten your backup and restore times.



 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Schneider, Jim jschnei...@ussco.com
To: ADSM-L@vm.marist.edu
Date:   07/15/2014 12:19 PM
Subject:Re: [ADSM-L] Question about NDMP and TSM.
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Ricky,

The Isilon uses the OneFS file system and TSM views it as one huge file 
system.  If backing up to disk, TSM will attempt to preallocate enough 
space to back up the entire allocated space on the Isilon. Defining 
Virtual File systems will not help because directory quota information is 
not passed to TSM, and TSM only sees the total allocated space.

We were able to back up the Isilon to disk when we started on a test 
system with little data on it, around 25 GB.  When we attempted to 
implement the same backups on a second, well-populated Isilon we ran into 
the space allocation problem.

When backing up to tape, TSM assumes you have unlimited storage available 
and is able to run VFS backups.  We use Virtual File Space Mapping (VFS) 
and back up to tape.

Refer to EMC SR#4646, TSM PMR 23808,122,000.

Jim Schneider
United Stationers
 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Plair, Ricky
Sent: Tuesday, July 15, 2014 1:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Question about NDMP and TSM.

I have been asked to look into backing up our EMC Isilon using our TSM 
server.

Everything I read,  seems to point to backing this NDMP device to tape.

Problem is, we do not use tape to backup production.

I have researched and found a few articles about backing the NDMP device 
to tape but, there seem to be more cons than pros.

Is there anybody backing up a NDMP device to disk that can give me some 
pros and,  how they are using disk for this task.


I appreciate your time!



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ CONFIDENTIALITY NOTICE: This email message, including any attachments, 
is for the sole use of the intended recipient(s) and may contain 
confidential and privileged information and/or Protected Health 
Information (PHI) subject to protection under the law, including the 
Health Insurance Portability and Accountability Act of 1996, as amended 
(HIPAA). If you are not the intended recipient or the person responsible 
for delivering the email to the intended recipient, be advised that you 
have received this email in error and that any use, disclosure, 
distribution, forwarding, printing, or copying of this email is strictly 
prohibited. If you have received this email in error, please notify the 
sender immediately and destroy all copies of the original message.

**
Information contained in this e-mail message and in any attachments 
thereto is confidential. If you are not the intended recipient, please 
destroy this message, delete any copies held on your systems, notify the 
sender immediately, and refrain from using or disclosing all or any part 
of its content to any other person.





Re: Lun versus logical volume for DB volumes

2014-07-15 Thread Ron Delaware
Steven,

The logical volumes are not dedicated disks in most cases, which means 
that other applications may be using the same disks at the same time. With 
our new TSM Server Blueprint standards, TSM databases over 1 TB require 
16 LUNs.

You can go to this link to find out more

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/NEW%20-%20Tivoli%20Storage%20Manager%20Blueprint%20-%20%20Improve%20the%20time-to-value%20of%20your%20deployments
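
For what it's worth, on a new build the multiple LUNs simply show up as a
comma-separated list of database directories when the instance is formatted;
a rough sketch (all paths and sizes are made-up examples, check the Blueprint
for the values that match your sizing):

dsmserv format dbdir=/tsmdb01,/tsmdb02,/tsmdb03,/tsmdb04,/tsmdb05,/tsmdb06,/tsmdb07,/tsmdb08 activelogsize=131072 activelogdirectory=/tsmlog archlogdirectory=/tsmarchlog

Whether eight AIX logical volumes carved from the same underlying disks buy
you the same parallelism as eight separate LUNs really comes down to whether
the I/O lands on separate spindles and paths, which is worth confirming with
your AIX and storage teams.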


 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Steven Harris st...@stevenharris.info
To: ADSM-L@vm.marist.edu
Date:   07/15/2014 06:55 PM
Subject:[ADSM-L] Lun versus logical volume for DB volumes
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Hi,

I've specced a design for a new TSM server and as recommended have
specified multiple luns for the database.  The folklore is that DB2 will
start one thread per lun so for a big database you use 8 luns and hence 
get
8 threads.

My AIX guy is asking whether I really need 8 luns or will 8 AIX logical
volumes have the same effect.

Does anyone know or can tell me where to look?

Thanks

Steve.

Steven Harris
TSM Admin
Canberra Australia





Re: ANS1809W-messages using shared memory communication?

2014-07-08 Thread Ron Delaware
From the manual for AIX

The user data limit that is displayed when you issue the ulimit -d command 
is the soft user data limit. It is not necessary to set the hard user data 
limit for DB2. The default soft user data limit is 128 MB. This is 
equivalent to the value of 262,144 512-byte units as set in 
/etc/security/limits file, or 131,072 KB units as displayed by the 
ulimit -d command. This setting limits private memory usage to about one 
half of what is available in the 256 MB private memory segment available 
for a 32-bit process on AIX. 

Note: A DB2 server instance cannot make use of the Large Address Space or 
of very large address space AIX 32-bit memory models due to shared memory 
requirements. On some systems, for example those requiring large amounts 
of sort memory for performance, it is best to increase the user data limit 
to allow DB2 to allocate more than 128 MB of memory in a single process. 

You can set the user data memory limit to unlimited (a value of -1). 
This setting is not recommended for 32-bit DB2 because it allows the data 
region to overwrite the stack, which grows downward from the top of the 
256 MB private memory segment. The result would typically be to cause the 
database to end abnormally. It is, however, an acceptable setting for 
64-bit DB2 because the data region and stack are allocated in separate 
areas of the very large address space available to 64-bit AIX processes.

 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Sims, Richard B r...@bu.edu
To: ADSM-L@vm.marist.edu
Date:   07/08/2014 04:50 AM
Subject:Re: [ADSM-L] ANS1809W-messages using shared memory 
communication?
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



There can be other causes of this message, such as preemption by higher 
priority tasks (particularly restores, retrieves, and recalls). See what 
the server Activity Log contains at the time that the client encountered 
its error, and examine the log prior to that for resource consumption 
(drives, etc.) by other sessions or processes leading up to the problem 
for the affected backup session. If you are using disk storage pools for 
arriving data, insufficient sizing can result in filling and then an 
elevated demand for tape drive resources and thus contention and delays. I 
would also inspect the client logs to see if client processing got mired 
such that even a 60 minute Idletimeout would have been exceeded.

Richard Sims





Re: TSM server upgrade from v5.5 to v6.2

2014-06-26 Thread Ron Delaware
Saravanan,

There are two available options that you might not be aware of.
IBM has two workshop offerings that focus on migrating or upgrading from 
TSM 5.5.x to TSM 7.1.

1. Butterfly Migration Workshop - This consulting engagement focuses on 
leading the migration planning and data discovery, resulting in a detailed 
migration plan and project plan. The migration or upgrade is performed 
using the IBM Butterfly software. It allows you to keep your current 
environment up while the cut-over is being performed

2.  TSM Upgrade to 7.1 Planning Workshop -  This consulting engagement 
provides assistance in performing the TSM upgrade/migration planning, 
configuration and knowledge transfer.  The goal of the workshop is to 
provide you with a  detailed roadmap, based on your environment, of the 
best, most efficient process to get you to TSM 7.1. 

That said, based on your requirements and TSM environment, in order to get 
to TSM 7.1, you would need to upgrade to 6.2.5 first. Then upgrade your 
clients, then your data protection modules, if any.  Any Library Managers 
need to be upgraded first. The good news is, once your TSM server is 
upgraded, you can configure the Administration Center (a requirement, as 
the Web interface isn't supported nor will it work at the TSM 6.x + level) 
to upgrade your TSM clients to 7.1 using the auto deployment tool.

The reason for going to version 6.2.5 first is that there is less downtime 
during the database conversion.  TSM 6.3.4 and above use a different DB2 
version. The version used at 6.2.5 uses DB2 tables that allow for 
a smoother transition from TSM 5.  You will read, and probably hear, that 
you can upgrade directly to TSM 7.1, which is true, but the caveats are 
enormous. A major showstopper is that TSM 7.1 does not support TSM 5 
clients/servers/agents.

If you do this on your own, I agree that the hybrid method is your only 
real option. I believe that this site is open to the public
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/index?tag=tsm-v.6-hybrid-upgrade-migration-method
it should help
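
At a very high level, the hybrid/media upgrade path boils down to the steps
below (the device class and manifest names are placeholders; check the V6
Upgrade Guide for the exact parameters before relying on this):

On the existing V5.5 server:
  dsmupgrd preparedb
  dsmupgrd extractdb devclass=FILEDEV manifest=/upgrade/manifest.txt

On the new V6.2.5 instance:
  dsmserv loadformat
  dsmserv insertdb devclass=FILEDEV manifest=/upgrade/manifest.txt

The extract/insert pair is where most of the elapsed time goes, which is why
the planning around your 8-hour window matters so much.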
 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Saravanan evergreen.sa...@gmail.com
To: ADSM-L@vm.marist.edu
Date:   06/26/2014 10:11 AM
Subject:Re: [ADSM-L] TSM server upgrade from v5.5 to v6.2
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



I have new hardware migration plan but has any one tried this approach


Step1: Build new hardware with same hostname but temp IP address and 
connect all the network cables ( it has backup vlan) 

Same hostname will not impact DB2 database restart 

Step 2: Take flash copy backup and mount TSM 5.5 server in the new 
hardware 

Step 3: upgrade 5.5 to 6.2; it might take 50 hrs, and the new TSM server 
will not have that 50 hrs of data 

Step 4: once the TSM upgrade is completed, swap the production IP address and 
bring it into production 

Old server will be available with temp IP address 

It Will take 8 hrs for this cut over 

Here is my challenge starts 

How to copy 50 hrs difference to my new database 

Is there any way to export 50 hrs of meta data ? 


If it had only 2 weeks retention then I would straight away 
decommission it after 2 weeks, but it has daily long-retention jobs 

Please feel free to comment and your help is really appreciated 


By Sarav
+65-82284384


 On 27 Jun, 2014, at 12:44 am, Saravanan evergreen.sa...@gmail.com 
wrote:
 
 Hi, 
 
 Can any body suggest the best way for TSM server migration ? 
 
 Our TSM 5.5 database is 510 GB and it has lot of archive jobs( long 
retention )
 
 We can't perform in place upgrade because we will not get downtime more 
than 8 hrs 
 
 It has library manager to manage VTL and 3584 tape library 
 
 
 By Sarav
 +65-82284384
 





Re: TSM server upgrade from v5.5 to v6.2

2014-06-26 Thread Ron Delaware
This is the public access site

http://www-01.ibm.com/software/tivoli/services/consulting/it-service-mgmt/

 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Saravanan evergreen.sa...@gmail.com
To: ADSM-L@vm.marist.edu
Date:   06/26/2014 10:11 AM
Subject:Re: [ADSM-L] TSM server upgrade from v5.5 to v6.2
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



I have new hardware migration plan but has any one tried this approach


Step1: Build new hardware with same hostname but temp IP address and 
connect all the network cables ( it has backup vlan) 

Same hostname will not impact DB2 database restart 

Step 2: Take flash copy backup and mount TSM 5.5 server in the new 
hardware 

Step 3: upgrade 5.5 to 6.2 and it might take 50 hrs and it will not have 
50 hrs data in the new TSM server

Step 4 : once TSM upgrade completed then swap production up address and 
bring to production 

Old server will be available with temp IP address 

It Will take 8 hrs for this cut over 

Here is my challenge starts 

How to copy 50 hrs difference to my new database 

Is there any way to export 50 hrs of meta data ? 


If it's having only 2 weeks retention then I will straight away 
decommission hit after 2 weeks but it has daily long retention jobs 

Please feel free to comment and your help is really appreciated 


By Sarav
+65-82284384


 On 27 Jun, 2014, at 12:44 am, Saravanan evergreen.sa...@gmail.com 
wrote:
 
 Hi, 
 
 Can any body suggest the best way for TSM server migration ? 
 
 Our TSM 5.5 database is 510 GB and it has lot of archive jobs( long 
retention )
 
 We can't perform in place upgrade because we will not get downtime more 
than 8 hrs 
 
 It has library manager to manage VTL and 3584 tape library 
 
 
 By Sarav
 +65-82284384
 





Re: include \ exclude

2014-06-11 Thread Ron Delaware
It is always a good practice to put your include statements at the bottom 
of your list and the excludes at the top of the file. Remember that 
exclude.dir is read first, regardless of where it is located in the file 
and trumps any include statement.

To answer your question: yes, having the Exclude 
D:\USR1\Folder1\...\*.* processed before your include will cause TSM not to 
process your include.
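
With the ordering above, your case would look something like this in the
options file (the broad exclude at the top, the more specific include at the
bottom, since the list is evaluated from the bottom up):

exclude D:\USR1\Folder1\...\*
include D:\USR1\Folder1\Folder2\Folder3\Folder4\...\*

Just keep in mind that an exclude.dir against Folder1 would override the
include no matter where it sits.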

 
Best Regards,
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 



From:   Tim Brown tbr...@cenhud.com
To: ADSM-L@vm.marist.edu
Date:   06/11/2014 12:06 PM
Subject:[ADSM-L] include \ exclude
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Attempting to include just one subfolder within a folder that I want to 
exclude.
Can't seem to come up with the right combination.

Is the 2nd overriding the first?

Include D:\USR1\Folder1\Folder2\Folder3\Folder4\...\*.*

Exclude D:\USR1\Folder1\...\*.*

Thanks,

Tim





Re: File retention

2014-03-25 Thread Ron Delaware
Tom,

As with everything, there is a cost; sometimes it's money, sometimes it's 
time, and most of the time it is a combination of both.  Here are my 3 cents:

You can create backup sets for the clients that are listed 
to be deleted or no longer needed.  There are two bonus items of a backup 
set:
1. Once it is created, it becomes self-sufficient, as you no longer need a 
TSM server to restore the data. To restore data from a backup set, you 
need a tape drive for the tape cartridge holding the data, and a TSM client.

2. The HOST whose data you want can be offline or even removed from the 
Enterprise, as long as the data still resides in a primary or offsite copy 
pool, since you create the backup set from the data that is already stored 
within TSM; there is no need for the HOST to be available.

The cost is time. It can take from 1 to 20 hours to create a single backup 
set, since the data is being read from tape and then written to tape.
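
A backup set is generated on the server with a command along these lines
(the node, prefix, and device class names are examples only):

generate backupset RETIRED_NODE1 finalset * devclass=LTO4CLASS retention=nolimit

Later it can be read back with the client's RESTORE BACKUPSET command
straight from the tape, with no TSM server in the picture.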



Best Regards,
  _
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


 
If you haven't learned something new today, then you've wasted your day



From:   Tom Taylor ttay...@jos-a-bank.com
To: ADSM-L@vm.marist.edu, 
Date:   03/25/2014 09:01 AM
Subject:Re: [ADSM-L] File retention
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Thank you for your response; I am not familiar with copy group features. I
have been poring over the TSM admin guide for a few weeks now trying to
learn, but alas, it's a long read. I have been reading about migration,
but some things that migration describes scare me and make it sound like
it's not what I want. If you have some advice and we could discuss it I would
love that, as it's much faster and easier to get info from a person than it
is from a book (PDF, DOC, web page, etc.).








Thank you so much!








Thomas Taylor
System Administrator
Jos. A. Bank Clothiers
Cell (443)-974-5768



From:
George Huebschman george.huebsch...@pnc.com
To:
ADSM-L@VM.MARIST.EDU,
Date:
03/25/2014 11:48 AM
Subject:
Re: [ADSM-L] File retention
Sent by:
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Mr Taylor.
I used to get this question or something like it while we were
transitioning our TSM environment to an outside vendor who had no
experience with TSM.
We had been instructed to modify (shorten) retention policies (Management
Class/Copy Group).
We were also directed to delete/remove a number of clients; not just stop
backing up, but remove.
I warned them...boldly...once the data is gone, it's GONE.
Later the question came, repeatedly, But don't you have a copy on tape,
offsite somewhere?
No...I DID tell you, here is an e-mail.

The copy pool data is a copy of the data in the primary pool.  The Primary
pool is not defined as the media where the data is first recorded.  It is
the first place from which the data would be restored.
The Copy Pool is a disaster recovery resource.  It covers you in the event
of damaged media (corrupted filesystem, dropped tape, failed disk) or
damaged data center (fire, flood, Ravens fans).  It is a copy of what
currently exists in your primary pool.
When something is deleted from the Primary pool, it is deleted from the Copy
pool.

Disk can be your first tier
Tape can be your first tier.  (Sometimes large files will go straight to
tape, bypassing disk, though you can address that.)
Tape can be your second tier...but still be primary pool media.

Are you familiar with how to use Copy Group features?
Active data is never automatically deleted.
Inactive data is retained according to the number of versions you
decide to keep and the length of time you choose to keep each version.
You can decide separately how long to keep the last
version of each piece of inactive  data.
Still if you have both Primary and Copy pools, you will have two (or more)
copies of each object you have backed up.


George Huebschman (George H.)
(301) 699-4013
(301) 875-1227 (Cell)



From:
Richard Rhodes rrho...@firstenergycorp.com
To:
ADSM-L@VM.MARIST.EDU
Date:
03/24/14 03:12 PM
Subject:
Re: [ADSM-L] File retention
Sent by:
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 What I want to accomplish is I want to set the primary pool to keep 10
 versions of files for 30 days, and 10 versions of inactive files also
for
 30 days, and keep the last version for 30 days, but I want the copy pool
 to keep teh last version of an inactive file FOREVER.

I know of no way to accomplish this.
Policies are on files, not pools.  The policy of a file will be in effect
whether it's the primary pool copy or one/several copy pool copies.

Also, if you did have 

Re: How to change db2 instance name?

2014-03-04 Thread Ron Delaware
Keith,

You can find the procedure for changing the host name for your Linux TSM 
servers here:

http://www-01.ibm.com/support/docview.wss?uid=swg21448218

Changing names is not a task to be taken lightly. Changes that are made 
can affect future upgrades, see this link for more info:
http://www-01.ibm.com/support/docview.wss?uid=swg21573421

 
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


If you haven't learned something new today, then you've wasted your day



From:   Keith Arbogast warbo...@iu.edu
To: ADSM-L@vm.marist.edu, 
Date:   03/04/2014 02:16 PM
Subject:[ADSM-L] How to change db2 instance name?
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



We install / and /boot on a LUN on the SAN, and boot from there. We can 
copy a boot LUN with the OS installed on it, and zone the copy to new 
hardware to create another server.

Can we do that with a boot LUN that has the TSM server installed on it 
too, then change the DB2 instance name of the clone to create a new TSM 
server?  If that is a straightforward process it might save us time in a 
deadline-driven hardware upgrade.

If this is well documented somewhere, kindly refer me to that. 

With my thanks,
Keith Arbogast
Indiana University


Re: Recover Archived Data from Tapes without Catalog

2013-12-11 Thread Ron Delaware
Nora,

If you are able to bring up the old TSM server, you could do an export node 
directly to the new TSM server, provided that the two can communicate via 
TCP/IP with each other.  I believe that TSM 7.1 allows you to restore TSM 
database backups from UNIX operating systems into the new server, though to be 
honest, I am not sure Solaris is included.
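
For reference, the server-to-server route is roughly as follows (the server
and node names are placeholders, and the source server still has to be able
to read the archive data from its storage):

On the old server:
  define server NEWTSM hladdress=newtsm.example.com lladdress=1500 serverpassword=xxxxx
  export node ARCHNODE1 filedata=archive toserver=NEWTSM

FILEDATA=ARCHIVE limits the export to the archived data you are concerned
about; a matching server definition is also needed on the receiving side.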

 
_
Ronald C. Delaware
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings


If you haven’t learned something new today, then you’ve wasted your day



From:   Nora tsm-fo...@backupcentral.com
To: ADSM-L@vm.marist.edu, 
Date:   12/11/2013 10:06 AM
Subject:[ADSM-L] Recover Archived Data from Tapes without Catalog
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Hello,
We recently lost a TSM server because of a severe server hardware problem 
that makes the tape library completely inaccessible to the server. We are 
now building a new TSM server and connecting the library to it. However 
the new TSM server we are building is on a different OS (Red Hat linux) 
while the original server was on Solaris 10. The different OS's between 
the 2 servers makes it impossible to restore the existing TSM database 
although we have it. And the old server's inability to use the tape 
library is also stopping us from using the Export Node functionality for 
data we want to save.
We are mainly concerned about 120 GB of archived data that we want to 
retrieve from this tape library with TSM before migrating to the new TSM 
server.
Does anyone know of a way to read TSM data off LTO4 tapes without the 
catalog so we can save this data? Or does IBM provide such a service?

+--
|This was sent by noran...@adma.ae via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: replication question

2013-05-23 Thread Ron Delaware
Jeanne,

You are using a device class of FILE from what I see.  TSM treats the file
like a tape cartridge.  This shows that a scratch volume was created, TSM
started writing to it, the FILE reached the max size specified for that
device class, and then closed the file.

05/23/2013 09:23:23  ANR8340I FILE volume
M:\TDPEXCH_PRIM1\00212CBA.BFS
  mounted. (SESSION: 87355)
05/23/2013 09:23:23  ANR1340I Scratch volume
M:\TDPEXCH_PRIM1\00212CBA.BFS is
  now defined in storage pool TDPEXCHANGE_PRIM.
(SESSION:
  87355)
05/23/2013 09:23:23  ANR0511I Session 87355 opened output volume
  M:\TDPEXCH_PRIM1\00212CBA.BFS. (SESSION: 87355)
05/23/2013 10:40:39  ANR8341I End-of-volume reached for FILE volume
  M:\TDPEXCH_PRIM1\00212CBA.BFS. (SESSION: 87355)
05/23/2013 10:40:39  ANR0514I Session 87355 closed volume
  M:\TDPEXCH_PRIM1\00212CBA.BFS. (SESSION: 87355)


 The FILE size is determined by you when you created the device class.  If
you are getting a lot of these new FILE volumes being created, check your FILE
device class to determine the size. Usually 100 GB would be the norm.
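
Checking and adjusting that is just the following (the device class name
below is an example; use whatever QUERY DEVCLASS shows for the class behind
TDPEXCHANGE_PRIM):

query devclass EXCHFILEDEV format=detailed
update devclass EXCHFILEDEV maxcapacity=100G

A larger MAXCAPACITY simply means fewer, bigger .BFS files and fewer of
those mount/dismount messages in the activity log.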
_
Ronald C. Delaware
IBM IT Plus Certified Specialist- Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings



From:   Jeanne Bruno jbr...@cenhud.com
To: ADSM-L@vm.marist.edu
Date:   05/23/2013 09:08 AM
Subject:[ADSM-L] replication question
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Hello.  Any insight is appreciated.

We are replicating our TDPExchange nodes to an offsite primary pool.
(windows servers, no LTO-tapes involved).
The replication is taking a really long time:

STATUS: Replicating node(s) EXCHANGE_BA1, EXCHANGE_BA2, TDP_EXCH1,
TDP_EXCH2. File
  spaces complete: 41. File spaces identifying and
replicating: 0. File spaces
  replicating: 1. File spaces not started: 0. Files
current: 660,011. Files
  replicated: 12,828 of 12,829. Files updated: 1,454 of
1,454. Files deleted:
  576 of 576. Amount replicated: 16,998 MB of 381 GB.
Amount transferred:
 13,470 MB. Elapsed time: 0 Day(s), 12 Hour(s), 27
Minute(s).

I checked the server on the other end and I'm seeing quite a few messages
like:

05/23/2013 09:23:23  ANR8340I FILE volume
M:\TDPEXCH_PRIM1\00212CBA.BFS
  mounted. (SESSION: 87355)
05/23/2013 09:23:23  ANR1340I Scratch volume
M:\TDPEXCH_PRIM1\00212CBA.BFS is
  now defined in storage pool TDPEXCHANGE_PRIM.
(SESSION:
  87355)
05/23/2013 09:23:23  ANR0511I Session 87355 opened output volume
  M:\TDPEXCH_PRIM1\00212CBA.BFS. (SESSION: 87355)
05/23/2013 10:40:39  ANR8341I End-of-volume reached for FILE volume
  M:\TDPEXCH_PRIM1\00212CBA.BFS. (SESSION: 87355)
05/23/2013 10:40:39  ANR0514I Session 87355 closed volume
  M:\TDPEXCH_PRIM1\00212CBA.BFS. (SESSION: 87355)

   and then another new volume:

05/23/2013 10:40:39  ANR8340I FILE volume
M:\TDPEXCH_PRIM2\00212CBB.BFS
  mounted. (SESSION: 87355)
05/23/2013 10:40:39  ANR1340I Scratch volume
M:\TDPEXCH_PRIM2\00212CBB.BFS is
  now defined in storage pool TDPEXCHANGE_PRIM.
(SESSION:
  87355)
05/23/2013 10:40:39  ANR0511I Session 87355 opened output volume
  M:\TDPEXCH_PRIM2\00212CBB.BFS. (SESSION: 87355)

But the amount replicated (the 16,998MB) is not increasing.   What is it
doing for 12 hours?
Thanks so much!


Jeannie Bruno
Senior Systems Analyst
jbr...@cenhud.commailto:jbr...@cenhud.com
Central Hudson Gas  Electric
(845) 486-5780


Re: include/exclude pattern matching *.* vs *

2013-04-18 Thread Ron Delaware
Zoltan,

Using your example of \\server\folder\*.*, I make the assumption that you
want to back up the folder directory and its subdirectories. The correct
way to do that is:

include \\server\c$\folder\...\*

If the directory you are wanting to back up is on a different drive, change
the c$ to your drive letter.

_
Ronald C. Delaware
IBM IT Plus Certified Specialist- Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings



From:   Zoltan Forray zfor...@vcu.edu
To: ADSM-L@vm.marist.edu
Date:   04/18/2013 09:42 AM
Subject:[ADSM-L] include/exclude pattern matching *.* vs *
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Is there a real difference in an INCLUDE statement using \*.* vs \* ?

We are troubleshooting a backup problem (nothing is being backed up) and
the OBJECTS option on the schedule says INCLUDE \\server\folder\*.*

Is there a really good, definitive document that picks through all these
kinds of wildcard patterns to show what would be included/excluded?  All of
the examples I could root out use a single asterisk (*), but none showed
*.*

--
*Zoltan Forray*
TSM Software  Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: TSM Node Replication Question

2013-04-02 Thread Ron Delaware
Jeff,

you are really leaving yourself exposed (data-wise).  Doing node replication
is NOT the same as performing a backup of your storage pools: with node
replication alone, you cannot replace a damaged volume or volumes.  Your
setup on the target server should be identical to the source server to
ensure you don't 'misplace' data.
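
As a rough sketch of what copy storage pools give you that node replication
alone does not (pool and volume names below are placeholders):

/* create the offsite copy of the primary pool */
backup stgpool BACKUPPOOL COPYPOOL wait=yes
/* later, rebuild a damaged primary volume from that copy pool */
restore volume /tsmfile/00001234.bfs

With node replication as your only second copy there is no copy pool to run
RESTORE VOLUME against, which is the exposure I mean.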

Do you have deduplication being performed by the Client or server?
_
Ronald C. Delaware
IBM IT Plus Certified Specialist- Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings



From:   white jeff jeff.whi...@blueyonder.co.uk
To: ADSM-L@vm.marist.edu
Date:   04/02/2013 12:08 PM
Subject:[ADSM-L] TSM Node Replication Question
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



2 x TSM server v6.3.3
On AIX 7.1

Hi

I have approximately 250 clients for which I need to run 'replicate node'
to a second TSM server. My understanding is that the initial replication
will copy all data and that, thereafter, only data not already on the
target server is sent.

All backups run to the source server - the target server is at a remote
site and exists purely for data subject to 'replicate node' processing.

Everything is configured; running the commands results in successful
processes, and I can see the data in the storage pool on the target server.

On both servers, I have a 2 TB stgpool for the client backups and a tape
library with LTO4 media.
Overall, there is approximately 130 TB of data managed in the source TSM
server.

On the source server, I assume I no longer need to run backup stgpool, as
replicate node is producing my offsite copy of the data.
On the source server, I still run daily migration processing to empty the
pool for the following night's backups.

But what about the target server? Does the stgpool have to be large enough
to hold ALL data, or can I simply run migration processes on that backup
pool and leave it at 2 TB?

Thanks in advance


Re: permanent retention

2012-12-12 Thread Ron Delaware
Gary,

The problem you will run into is that this backup data will be mixed in
with all of your other backup data unless you have a management class that
sends it to a separate storage pool and tape pool.  If you need this data
isolated, archives or a separate storage pool are the only way to go.
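
A minimal sketch of that approach, assuming the default STANDARD domain and
policy set and a separate storage pool you have already defined (PERMPOOL is
a placeholder name):

define mgmtclass STANDARD STANDARD PERMANENT-RETAIN
define copygroup STANDARD STANDARD PERMANENT-RETAIN type=backup destination=PERMPOOL verexists=nolimit verdeleted=nolimit retextra=nolimit retonly=nolimit
validate policyset STANDARD STANDARD
activate policyset STANDARD STANDARD

Client INCLUDE.BACKUP statements like the ones in your note below then bind
the Keep directory to that class, so its data lands in PERMPOOL instead of
the regular backup pools.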
_
Ronald C. Delaware
IBM IT Plus Certified Specialist- Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings



From:   Lee, Gary g...@bsu.edu
To: ADSM-L@vm.marist.edu
Date:   12/12/2012 01:53 PM
Subject:Re: [ADSM-L] permanent retention
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Ron:

I understand that archive is preferable, but there is enough data involved
that archiving it would interfere with daily backups.  Therefore, I want to
do it with backups so that the data does not have to be moved again.

I have a management class permanent-retain with a backup copygroup
defined as follows:

        Copy Group Type: Backup
   Versions Data Exists: 1
  Versions Data Deleted: 1
  Retain Extra Versions: No Limit
    Retain Only Version: No Limit


in dsm.opt I have:

INCLUDE.BACKUP *:\Keep\...\* permanent-retain
INCLUDE.BACKUP *:\Keep\...\*.* permanent-retain

Will this do the job and not retain too much data?

Thanks for the help.

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Ron Delaware
Sent: Wednesday, December 12, 2012 12:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] permanent retention

Gary,

You really don't want to use backups for long-term retention; that is an
archive function.  But if you must, set all four copy group retention
parameters to NOLIMIT so the data will hang around forever.  As I stated,
though, archives are the way to go.
_
Ronald C. Delaware
IBM IT Plus Certified Specialist- Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings



From:   Lee, Gary g...@bsu.edu
To: ADSM-L@vm.marist.edu
Date:   12/12/2012 10:46 AM
Subject:[ADSM-L] permanent retention
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Just received a request to have a few files stored in TSM essentially
forever.

I am trying to figure out the best way to accomplish this with minimal
administrative overhead.

So far,
1. create a management class with an archive group set for  days, then
archive the files.

2. create a management class and use backup, but I am unclear how to
accomplish permanent retention there?

Any ideas will be appreciated.



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310


Re: permanent retention

2012-12-12 Thread Ron Delaware
Gary,

You really don't want to use backups for long-term retention; that is an
archive function.  But if you must, set all four copy group retention
parameters to NOLIMIT so the data will hang around forever.  As I stated,
though, archives are the way to go.
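
For reference, a minimal sketch of the archive route (names are placeholders;
use whatever domain and policy set the node actually belongs to):

define mgmtclass STANDARD STANDARD KEEPFOREVER
define copygroup STANDARD STANDARD KEEPFOREVER type=archive destination=ARCHIVEPOOL retver=nolimit
activate policyset STANDARD STANDARD

and then on the client:

dsmc archive "d:\keep\*" -subdir=yes -archmc=KEEPFOREVER

RETVER=NOLIMIT keeps the archive copies indefinitely.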
_
Ronald C. Delaware
IBM IT Plus Certified Specialist- Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings



From:   Lee, Gary g...@bsu.edu
To: ADSM-L@vm.marist.edu
Date:   12/12/2012 10:46 AM
Subject:[ADSM-L] permanent retention
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Just received a request to have a few files stored in TSM essentially
forever.

I am trying to figure out the best way to accomplish this with minimal
administrative overhead.

So far,
1. create a management class with an archive group set for  days, then
archive the files.

2. create a management class and use backup, but I am unclear how to
accomplish permanent retention there?

Any ideas will be appreciated.



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310


Re: DB2 HADR - Cost?

2012-11-20 Thread Ron Delaware
The HADR function is part of DB2 and can be used without any additional
software cost.
_
Ronald C Delaware
TSM Storage Services Team Lead
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings




From:   Stackwick, Stephen stephen.stackw...@icfi.com
To: ADSM-L@vm.marist.edu
Date:   11/20/2012 10:08 AM
Subject:[ADSM-L] DB2 HADR - Cost?
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



The option to replicate a DB2 database is free on AIX, I believe. Does it
cost extra on other platforms?

Steve

STEPHEN STACKWICK | Senior Consultant | 301.518.6352 (m) |
stephen.stackw...@icfi.com | icfi.com http://www.icfi.com/
ICF INTERNATIONAL | 410 E. Pratt Street Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)