Farewell

2017-07-19 Thread Paul Zarnowski
I seem to be having trouble with my email client sending this email out.
Sorry about any partial sends!

I guess it's time for me to say goodbye to this list, and to this wonderful 
ADSM community.  Thank you to Marist College for hosting this list -- it has 
been a wonderful resource for me over the years.  I started with ADSM v1.2, 
sometime around 1994.  I retired from Cornell a few months ago.  I have enjoyed 
working with many of you, and with many IBM developers.  Thank you.  It is nice 
to see TSM continue to be developed, and I will be following it for some time 
to come.  Good luck to you all.

..Paul

Re: Journaling question

2016-08-12 Thread Paul Zarnowski
Michael,

I don't believe you can use journal-based backup for network-connected file 
systems.  If you want to avoid incremental backups walking a NAS filesystem 
looking for changed files, you can look into NetApp (which TSM supports with 
snapdiff incrementals) and IBM SONAS.  Snapdiff uses a NetApp API that 
compares two snapshots, producing a list of changed files which TSM then backs 
up.  This eliminates the time it takes to "walk" the filesystem looking for 
changes, which can dramatically speed up incremental backups.  We use this, 
but we also do a monthly walk (called a rebase) just to catch any edge cases 
that snapdiff misses.
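
For reference, the client-side invocation is just the -snapdiff option on an 
incremental (the filer and volume names below are hypothetical):

   dsmc incremental -snapdiff \\filer1\vol1

and, if I recall the option correctly, the monthly rebase is the same command 
with -createnewbase=yes, which forces the full walk and establishes new base 
snapshots:

   dsmc incremental -snapdiff -createnewbase=yes \\filer1\vol1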

I believe Isilon also has a similar API, but since it was not supported by 
TSM we opted not to go with Isilon.

..Paul


At 11:42 AM 8/12/2016, Michael P Hizny wrote:
>All,
>
>Does anyone know a way to use journaling in TSM on a networked file system?
>If you go through the wizard, you can select to journal the local file
>systems, but there is no option to journal network connected file systems.
>We can back them up by specifying them in the opt file as:
>
>DOMAIN "\\xxx.xxx.xxx.xxx\folder1"
>DOMAIN "\\xxx.xxx.xxx.xxx\folder2"But can you set up and use journaling on 
>them?
>
>Thanks,
>Mike
>
>Michael Hizny
>Binghamton University
>mhi...@binghamton.edu


--
Paul Zarnowski                            Ph: 607-255-4757
IT at Cornell / Infrastructure            Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: AIX to Linux migration performance

2016-07-29 Thread Paul Zarnowski
Hi Bill,

It's going to depend a lot on your network speed, some on the type of disk you 
are using for your databases, and maybe a little on the CPU speeds of your 
servers.

We have not yet migrated any of our production TSM instances, but we have been 
doing some dry-run performance tests.  On one instance having a 680GB database, 
we ran three tests, as follows:

Test 1:
 source (AIX) database on 15K-rpm disk
 target (Linux) database on SSD
 source server with 10 Gb/s Ethernet
 target server with 1 Gb/s Ethernet
 Elapsed time: 15 hours, 45 minutes

Test 2:
 source database on SSD
 target database on SSD
 source server with 10 Gb/s Ethernet
 target server with 1 Gb/s Ethernet
 Elapsed time: 12 hours, 32 minutes - SSD helped some

Test 3:
 source and target databases both on SSD
 source and target servers both with 10 Gb/s Ethernet
 Elapsed time: 6 hours, 20 minutes - quite a bit better due to the 10 Gb/s 
Ethernet

Statistics & dry-runs were done by David Beardsley, here at Cornell.
..Paul

At 12:57 PM 7/29/2016, Brian G. Kunst wrote:
>We just did this back in May. It took roughly 4 hours to do our 580GB DB.
>
>--
>Brian Kunst
>Storage Administrator
>Large Scale Storage & Systems
>UW Information Technology
>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>> Bill Mansfield
>> Sent: Friday, July 29, 2016 8:43 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: [ADSM-L] AIX to Linux migration performance
>>
>> Has anyone migrated a big TSM server from AIX to Linux using the
>> Extract/Insert capability in 7.1.5?  Have a 2.5TB DB to migrate, would
>> like some idea of how long it might take.
>>
>> Bill Mansfield


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: query auditocc question

2016-07-26 Thread Paul Zarnowski
Paul,

The metrics displayed by Q AUDITOCC are updated by AUDIT LICENSES, not in real 
time.
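
If you want the numbers refreshed before you run your report, a quick sequence 
from an administrative session does it (note that the license audit can take a 
while on a large server):

   audit licenses
   query auditocc pooltype=primary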

..Paul
(sent from my iPhone)

On Jul 26, 2016, at 9:37 PM, Paul_Dudley wrote:

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7 TB.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it came 
to 2.2 TB.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au










Re: AIX rmt devices in a library sharing environment - how do you handle them

2016-07-08 Thread Paul Zarnowski
We rename our rmt devices using a script called tsmchrmt, which uses part of 
the WWN and LUN ID to build the new rmt device name.  That way the same drive 
has the same name on all of the AIX systems that have access to it.

>#!/bin/sh
># tsmchrmt - rename an AIX rmt device after its WWN and LUN ID, so the
># same physical drive gets the same device name on every host.
>
>if [ $# != 1 ]
>then
>  /bin/echo "must specify 1 rmt device name as an argument."
>  exit 4
>fi
>d=$1
>
># Pull the drive's WWN and LUN ID from the device attributes.
>WWN=`/usr/sbin/lsattr -El $d -a ww_name|/bin/cut -f2 -d" "|/bin/cut -c15-`
>LUN=`/usr/sbin/lsattr -El $d -a lun_id|/bin/cut -f2 -d" "|/bin/cut -c3`
>root=`/bin/echo $d|/bin/cut -c1-3`
>new_name=$root.$WWN.$LUN.0
>
># If that name is already taken (e.g., another path to the same drive),
># bump the trailing suffix until we find a free one.
>let "j=0"
>while [[ -e /dev/$new_name ]]
>do
>  let "j=j+1"
>  new_name=$root.$WWN.$LUN.$j
>done
>
># Rename the device in place.
>/usr/sbin/chdev -l $d -a new_name=$new_name
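
To sweep a whole host, we wrap it in something like this (a hypothetical 
invocation; lsdev flags per AIX):

   for d in `/usr/sbin/lsdev -Cc tape -F name | /usr/bin/grep '^rmt'`
   do
     ./tsmchrmt $d
   done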



At 02:31 PM 7/8/2016, Rhodes, Richard L. wrote:
>Hi Everyone,
>
>I'm wondering how others handle lining up tape paths of multiple TSM servers 
>in a library sharing environment.
>
>We have a TSM library sharing environment across our TSM instances for sharing 
>our two 3584 libraries.  One 3584 at each datacenter, with a dedicated TSM 
>instance for the library manager.
>
>Currently I have a script that crawls through all TSM instances, gets the wwn 
>of each rmt device (lsattr -El rmtX), lines up all the wwn/rmt# for a drive, 
>and creates TSM path commands.  Kind of brute force, but it has worked very well 
>over the years.  I can create path commands for everything in about 15 minutes.  
>But this means that a particular drive can have many different rmtX devices across 
>our TSM servers.
>
>A while ago I learned that you can rename a rmtX device (rendev -l rmtX -n 
>rmtY).  I've been thinking about a new system where I rename the rmtX devices 
>on each AIX lpar to a common name.
>
>For example: if a particular lpar has rmt1, which is in our "a" 3584, and the 
>drive is frame 1 drive 1, then I could call it rmta11.  The atape driver is 
>involved with multiple paths, so there would also be -pri and -alt versions of 
>the name somehow.  So this particular drive would have the same AIX device 
>name across all AIX lpars.
>
>So, I'm curious . . . What do you do to line up rmt devices?
>Do you rename rmt devices to a common name, or, line up the many different rmt 
>devices?
>If you rename rmt devices, what issues/problems have you worked through?
>
>
>Thanks
>
>Rick


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: "Compressed data grew" retries?

2016-06-30 Thread Paul Zarnowski
At 09:31 AM 6/30/2016, Tom Alverson wrote:
>I will
>try the compressalways yes setting but hopefully that won't override the
>exclusions I have set up for common compressed files.

No, compressalways will not override exclude.compress.  The option simply says 
not to retry if compression grows the size.

The legacy LZW compression can grow files that are already in a compressed 
format.  The newer LZ4 compression should not grow an already compressed file.  
In that respect, it's similar to the compression that LTO tape drives can do.
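
For the archives, a minimal dsm.opt sketch of how these options combine (the 
exclude patterns are just examples):

   compression          yes
   compressalways       yes
   exclude.compression  "*:\...\*.jpg"
   exclude.compression  "*:\...\*.zip"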

..Paul



--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: "Compressed data grew" retries?

2016-06-30 Thread Paul Zarnowski
You can specify 'compressalways yes', which will prevent the retry.  However, 
you then have to live with the fact that the file is expanded in TSM storage.

I believe this problem does not apply to LZ4 compression, introduced with the 
latest clients when backing up to a container pool and using dedup.

At 08:57 AM 6/30/2016, you wrote:
>I have been watching an initial TSM backup running for a long time now over
>a WAN connection (100mbit/sec) and I keep seeing "compressed data grew"
>followed  by a "Retry".  Does this mean that for every file that
>compression increased the size (even 1%) the whole transfer is discarded
>and it starts over again with the same file (presumably with compression
>disabled)?  If so that is probably hurting my backup speed much more than
>the compression ever improved it.  Is there any way to prevent this short
>of adding one of these options for every type of compressible file?
>
>EXCLUDE.COMPRESSION "*:\...\*.jpg"
>EXCLUDE.COMPRESSION "*:\...\*.zip"


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: Anyone using TSM Snapdiff support (linux and windows) to backup ONTAP CDOT 8.3

2016-03-21 Thread Paul Zarnowski
Thank you Sameer and Erwan.  We got it working.

At 11:36 PM 3/17/2016, Sameer Veer wrote:
>Hello Paul,
>
>Earlier this month, for one of my customers, I configured NetApp 
>snapshot-assisted progressive incremental functionality ('incremental 
>-snapdiff' command) for "cluster mode" on the ONTAP CDOT 8.3 for both Windows 
>and Linux shares successfully.
>
>We used TSM B/A client 7.1.4.2 for this configuration. Refer to this Techdoc 
>for TSM B/A client supported levels:
>http://www-01.ibm.com/support/docview.wss?uid=swg21474217
>
>In summary:
>- Spectrum Protect Client 7.1.2 (or higher) supports ONTAP 8.1.y & 8.2.y where 
>y is 0 or higher (8.2 both 7-mode and c-mode)
>- Spectrum Protect Client 7.1.3 (or higher) supports ONTAP 8.3.y c-mode in 
>addition to the above
>- Spectrum Protect Client 7.1.4 (or higher) similar to 7.1.3
>
>Regards,
>
>Sameer R. Veer
>Senior Managing Consultant, Services Delivery
>IBM Systems | Storage Software Solutions
>Storage Services Offerings:
>http://www-01.ibm.com/software/tivoli/services/consulting/offerings.html
>
>sv...@us.ibm.com
>Cell Phone: +1 607 343 0996
>
>
>
>
>
>
>
>From: Erwann SIMON <erwann.si...@free.fr>
>To: ADSM-L@VM.MARIST.EDU
>Date: 03/17/2016 06:03 PM
>Subject: Re: [ADSM-L] Anyone using TSM Snapdiff support (linux and 
>windows) to backup ONTAP CDOT 8.3
>Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
>
>
>
>Hello Paul,
>
>I've done this setup for a customer last November (TSM 7.1.3) for both Windows 
>and Linux shares.
>We had it working as expected, thanks to the skilled people of this shop.
>
>Indeed, the TSM B/A Client documentation is (was ?) not accurate for CDOT as 
>it refers mainly to 7 Mode.
>
>-- 
>Best regards / Cordialement / مع تحياتي
>Erwann SIMON
>
>- Original Message -
>From: "Paul Zarnowski" <p...@cornell.edu>
>To: ADSM-L@VM.MARIST.EDU
>Sent: Thursday, March 17, 2016 21:41:51
>Subject: [ADSM-L] Anyone using TSM Snapdiff support (linux and windows) to 
>backup ONTAP CDOT 8.3
>
>Has anyone successfully used TSM on Windows (and Linux) to backup a NetApp 
>filer running ONTAP CDOT 8.3?  We're looking for confirmation that this works, 
>as we've been having some problems getting it to run against a simulator 
>running 8.3.
>
>Thanks!
>..Paul
>
>
>--
>Paul Zarnowski                            Ph: 607-255-4757
>Assistant Director for Storage Services   Fx: 607-255-8521
>IT at Cornell / Infrastructure            Em: p...@cornell.edu
>719 Rhodes Hall, Ithaca, NY 14853-3801
>
>
>


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801  


Anyone using TSM Snapdiff support (linux and windows) to backup ONTAP CDOT 8.3

2016-03-19 Thread Paul Zarnowski
Has anyone successfully used TSM on Windows (and Linux) to backup a NetApp 
filer running ONTAP CDOT 8.3?  We're looking for confirmation that this works, 
as we've been having some problems getting it to run against a simulator 
running 8.3.

Thanks!
..Paul


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: amazon storage gateway vtl and tsm

2016-02-04 Thread Paul Zarnowski
I suggest you talk to your IBM rep and ask them about TSM support for Amazon S3 
storage.  IMHO, using a VTL gateway in front of S3 will incur high egress fees 
(i.e., when you do reclamation processing for the VTL volumes).  Storing 
directly into S3 would be (again, IMHO) more cost-effective.

..Paul


At 03:53 PM 2/4/2016, Lee, Gary wrote:
>Ok, we just got the word.  The solution for my unsupported ibm 3494 libraries 
>will be an amazon s3 storage gateway vtl.
>
>Anyone deployed one of these with tsm?
>
>We are running tsm 6.3.4 on RHEL 6.5 servers.
>
>I've been reading, and so far, there seem to be severe limits on libraries 
>using their VTL.
>Also, the setup seems to be, when the library is full, check out the virtual 
>cartridges to a virtual tape shelf.
>
>Any experiences, tips, and / or help is appreciated.


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: Backups

2015-11-02 Thread Paul Zarnowski
We are starting to consider this as well.  Any sharing of ideas would be 
appreciated.

At 12:09 PM 11/2/2015, Lepre, James wrote:
>Hello Everyone -
>
>   I currently have a TSM 7.1.3 server and I am trying to figure out how to 
> back up servers we have in both Azure and AWS clouds.  Does anyone have any 
> suggestions, best practices, or current solutions they could share?
>
>Any help would greatly be appreciated.
>
>Thank you
>
>
>James Lepre
>


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TS3500 library changes to Max Cartridges and Max VIO slots

2015-10-13 Thread Paul Zarnowski
Thanks Shawn.  I did know about refreshstate, but that did not work for us when 
we ran into this problem.  However, it's possible that something else was going 
on beyond just changing the number of slots in the logical library.
..Paul

At 11:44 AM 10/12/2015, Shawn DREW wrote:
>They added "refreshstate=yes" to the audit library command at a certain 
>version so that you will not need to restart/redefine anything when changing 
>slot counts on a library.
>Do a "help audit libr" to check if your version has that option.
>
>
>-Original Message-
>From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU] 
>Sent: Monday, October 12, 2015 8:01 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: [ADSM-L] TS3500 library changes to Max Cartridges and Max VIO slots
>
>Folks,
>
>Our TS3500 is configured as 2-logical libraries with n-slots configured for 
>each.  We just reached the maximum number of cartridges I can load into one of 
>these libraries due to the Max Cartridges value.  I want to adjust the 
>2-logical libraries to shift slots from one to the other.  Also want to reduce 
>the VIO Slots (currently at default/max of 255 for each library).
>
>When I tried to change the Max. Cartridges value for one of the libraries, I 
>got the message "WARNING - Changing Maximum settings for a logical library may 
>require reconfiguration of the host applications for selected logical library"
>
>The TS3500 is solely used for TSM.  2-of my 7-TSM servers are assigned as 
>library managers of the 2-logical libraries (1-onsite and 1-offsite tapes).  
>All TSM servers are RH Linux.
>
>Will I need to restart/reboot the 2-TSM servers that manage the libraries if I 
>make this change?  Will it impact all of the TSM servers?
>
>--
>*Zoltan Forray*
>TSM Software & Hardware Administrator
>Xymon Monitor Administrator
>Virginia Commonwealth University
>UCC/Office of Technology Services
>www.ucc.vcu.edu
>zfor...@vcu.edu - 804-828-4807
>Don't be a phishing victim - VCU and other reputable organizations will never 
>use email to request that you reply with your password, social security number 
>or confidential personal information. For more details visit 
>http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801  


Re: TS3500 library changes to Max Cartridges and Max VIO slots

2015-10-12 Thread Paul Zarnowski
It's possible you might need to delete and redefine the library device in the 
OS.  I've had to do this on AIX in the (distant) past.  In fact, I think I had 
to delete the library from TSM, requiring a checkin of all libvols again after 
the library (and drives and paths) were all redefined.

For this reason, we actually configure maxslots for each logical library to be 
greater than the number of actual tapes.

..Paul

At 09:52 AM 10/12/2015, Thomas Denier wrote:
>I think you will need to quiesce tape activity and execute "audit library" 
>with "refreshstate=yes" on each of the library managers.
>
>Thomas Denier
>Thomas Jefferson University
>
>Zoltan Forray wrote:
>
>Our TS3500 is configured as 2-logical libraries with n-slots configured for 
>each.  We just reached the maximum number of cartridges I can load into one of 
>these libraries due to the Max Cartridges value.  I want to adjust the 
>2-logical libraries to shift slots from one to the other.  Also want to reduce 
>the VIO Slots (currently at default/max of 255 for each library).
>
>When I tried to change the Max. Cartridges value for one of the libraries, I 
>got the message "WARNING - Changing Maximum settings for a logical library may 
>require reconfiguration of the host applications for selected logical library"
>
>The TS3500 is solely used for TSM.  2-of my 7-TSM servers are assigned as 
>library managers of the 2-logical libraries (1-onsite and 1-offsite tapes).  
>All TSM servers are RH Linux.
>
>Will I need to restart/reboot the 2-TSM servers that manage the libraries if I 
>make this change?  Will it impact all of the TSM servers?


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: What am I doing wrong with this include

2015-10-02 Thread Paul Zarnowski
You can see MC's from the server by doing a 'select' on the 'backups' table.  
E.g.,

select * from backups where node_name='NODENAME'
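
If you only want the bindings rather than one row per object, a narrower 
select (column names per the server's BACKUPS table) keeps the output 
manageable:

   select distinct filespace_name, class_name from backups where node_name='NODENAME'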

At 08:19 AM 10/2/2015, Zoltan Forray wrote:
>The MC isn't the issue.  It is the assignment of it.  I went to the client
>and brought up a restore window to see what MC was applied / bound to files
>backed up (as well as rebound when it was changed).
>
>It is properly "rebinding" when I changed it so it is working - just not
>producing the results I want when it comes to what objects it is applying
>to.
>
>I haven't looked at it since the changes yesterday.  Will see if the owner
>can get me onto the box so I can see what MC has been applied to
>what... unless there is a way I can see this info from the TSM server side?
>
>On Thu, Oct 1, 2015 at 4:25 PM, Andrew Raibeck <stor...@us.ibm.com> wrote:
>
>> Hi Zoltan,
>>
>> Some things to check
>>
>> 1. Has the policy set with that management class has been activated?
>>
>> 2. Does the management class have a backup copy group?
>>
>> 3. Is the node doing the backup a member of the policy domain that includes
>> the 15DAYS management class? Note: if this is a proxy node backup, the
>> target node (specified with ASNODENAME option) is the one that has to be a
>> member of the domain with the 15DAYS management class.
>>
>> 4. Is a client option set associated with this node, and might that client
>> option set have overriding INCLUDE statements? And as above, if this is a
>> proxy node backup, it is the target node name specified by ASNODENAME that
>> you need to check.
>>
>> Other things that might be helpful:
>>
>> Run "dsmc query mgmtclass" to make sure the management class 15DAYS shows
>> up.
>>
>> Run "dsmc query inclexcl" to check the run-time include-exclude list, to
>> see if something could be overriding your current specification. Make sure
>> to use the same options file as the production backup, and to use
>> ASNODENAME if the production backup uses it.
>>
>> Check dsmerror.log to see if there are any binding errors.
>>
>> Best regards,
>>
>> - Andy
>>
>>
>> 
>>
>> Andrew Raibeck | Tivoli Storage Manager Level 3 Technical Lead |
>> stor...@us.ibm.com
>>
>> IBM Tivoli Storage Manager links:
>> Product support:
>>
>> http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager
>>
>> Online documentation:
>> http://www.ibm.com/support/knowledgecenter/SSGSG7/welcome
>> Product Wiki:
>>
>> https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager
>>
>> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2015-10-01
>> 13:35:38:
>>
>> > From: Zoltan Forray <zfor...@vcu.edu>
>> > To: ADSM-L@VM.MARIST.EDU
>> > Date: 2015-10-01 13:36
>> > Subject: What am I doing wrong with this include
>> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>> >
>> > I thought I had this figured out but keep getting the wrong results.
>> This
>> > is a W2K8 box with 7.1.2 client.
>> >
>> > I want to apply a different management class to one directory and all of
>> > its sub-directories.
>> >
>> > So I added the following statements:
>> >
>> > include  *  STANDARD
>> > include  *:\Software\ClickCommerce\Backup\*  15DAYS
>> >
>> > Yet STANDARD MC was applied to everything.  Yes I am 100% positive on the
>> > directory name that I want to apply the 15DAYS to.  No spaces in the
>> name.
>> >
>> > I first tried it without the first include MC and ended up with
>> everything
>> > having 15DAYS applied to it.
>> >
>> > So for tonights backup, I have changed it to:
>> >
>> > include  *  STANDARD
>> > include  D:\Software\ClickCommerce\Backup\...\*  15DAYS
>> >
>> > or if I am still wrong, please tell me what I am doing wrong?
>> >
>> > --
>> > *Zoltan Forray*
>> > TSM Software & Hardware Administrator
>> > Xymon Monitor Administrator
>> > Virginia Commonwealth University
>> > UCC/Office of Technology Services
>> > www.ucc.vcu.edu
>> > zfor...@vcu.edu - 804-828-4807
>> > Don't be a phishing victim - VCU and other reputable organizations will
>> > never use email to request that you reply with your passw

Re: What am I doing wrong with this include

2015-10-02 Thread Paul Zarnowski
Some of the documentation appears to not be platform specific, which causes 
ambiguity and problems.  The first include (include * managall) would be 
appropriate for a Unix system.  I'm not sure it is appropriate for a Windows 
system.  The second example is clearly for a Windows system (because it has the 
':' in it).  But note that the second example uses a "?" to wildcard the volume, 
and not a "*".  I suspect that TSM is coded to not allow a "*" wildcard for 
matching a Windows drive letter, since it must be exactly one character.  But 
I've yet to find anything in the manual that states this.

I suppose it would be easy enough to test this theory...

At 08:42 AM 10/2/2015, Zoltan Forray wrote:
>I have seen documentation both ways and sometimes including both. This
>statement from
>http://publib.boulder.ibm.com/tividd/td/TSMC/SH26-4117-01/en_US/HTML/ans60013.htm
>
>To specify a management class named *managall* to use for all files to
>which you do not explicitly assign a management class, you would enter:
>
>   include * managall
>
>
>In another paragraph, it says:
>
>To specify your own default management class for files that are not
>explicitly included, specify:
>
>   include ?:* mgmt_class_name
>
>
>
>On Thu, Oct 1, 2015 at 2:37 PM, Paul Zarnowski <p...@cornell.edu> wrote:
>
>> Hi Zoltan,
>>
>> I think the documentation around include/exclude wildcarding is very
>> confusing, but here is what I think you need to do:
>>
>> include ?:\Software\ClickCommerce\Backup\...\*  15DAYS
>>
>> While I cannot find anything in the manuals that indicate that a "*" won't
>> work for the drive letter, all of their examples use "?" when wildcarding
>> the drive letter.  Also, the "..." is important if you want to catch all
>> files in subdirectories under "Backup", and not just the files in "Backup"
>> itself.  Also note that the directory objects themselves are assigned to a
>> management class via DIRMC, or if none is specified, then the management
>> class having the longest RETONLY in that Policy Domain.
>>
>> ..Paul
>>
>>
>> At 01:35 PM 10/1/2015, Zoltan Forray wrote:
>> >I thought I had this figured out but keep getting the wrong results.  This
>> >is a W2K8 box with 7.1.2 client.
>> >
>> >I want to apply a different management class to one directory and all of
>> >its sub-directories.
>> >
>> >So I added the following statements:
>> >
>> >include  *  STANDARD
>> >include  *:\Software\ClickCommerce\Backup\*  15DAYS
>> >
>> >Yet STANDARD MC was applied to everything.  Yes I am 100% positive on the
>> >directory name that I want to apply the 15DAYS to.  No spaces in the name.
>> >
>> >I first tried it without the first include MC and ended up with everything
>> >having 15DAYS applied to it.
>> >
>> >So for tonights backup, I have changed it to:
>> >
>> >include  *  STANDARD
>> >include  D:\Software\ClickCommerce\Backup\...\*  15DAYS
>> >
>> >or if I am still wrong, please tell me what I am doing wrong?
>> >
>> >--
>> >*Zoltan Forray*
>> >TSM Software & Hardware Administrator
>> >Xymon Monitor Administrator
>> >Virginia Commonwealth University
>> >UCC/Office of Technology Services
>> >www.ucc.vcu.edu
>> >zfor...@vcu.edu - 804-828-4807
>> >Don't be a phishing victim - VCU and other reputable organizations will
>> >never use email to request that you reply with your password, social
>> >security number or confidential personal information. For more details
>> >visit http://infosecurity.vcu.edu/phishing.html
>>
>>
>> --
>> Paul Zarnowski                            Ph: 607-255-4757
>> Assistant Director for Storage Services   Fx: 607-255-8521
>> IT at Cornell / Infrastructure            Em: p...@cornell.edu
>> 719 Rhodes Hall, Ithaca, NY 14853-3801
>>
>
>
>
>--
>*Zoltan Forray*
>TSM Software & Hardware Administrator
>Xymon Monitor Administrator
>Virginia Commonwealth University
>UCC/Office of Technology Services
>www.ucc.vcu.edu
>zfor...@vcu.edu - 804-828-4807
>Don't be a phishing victim - VCU and other reputable organizations will
>never use email to request that you reply with your password, social
>security number or confidential personal information. For more details
>visit http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: What am I doing wrong with this include

2015-10-02 Thread Paul Zarnowski
Yes, good caution Skylar.  Our server is robust enough that we don't have 
issues doing this anymore, but I do recall the days when we used to.

At 12:19 PM 10/2/2015, Skylar Thompson wrote:
>I would caution against doing this if you have a large number of objects
>stored in TSM, where "large" is sort of nebulous depending on your TSM
>server specs and tolerance for database thrashing.
>
>Unfortunately SQL in dsmadmc doesn't seem to understand LIMIT so it would
>be hard just to get a limited preview of the bindings on the server-side.
>
>On Fri, Oct 02, 2015 at 12:11:49PM -0400, Paul Zarnowski wrote:
>> You can see MC's from the server by doing a 'select' on the 'backups' table. 
>>  E.g.,
>>
>> select * from backups where node_name='NODENAME'
>>
>> At 08:19 AM 10/2/2015, Zoltan Forray wrote:
>> >The MC isn't the issue.  It is the assignment of it.  I went to the client
>> >and brought up a restore window to see what MC was applied / bound to files
>> >backed up (as well as rebound when it was changed).
>> >
>> >It is properly "rebinding" when I changed it so it is working - just not
>> >producing the results I want when it comes to what objects it is applying
>> >to.
>> >
>> >I haven't looked at it since the changes yesterday.  Will see if the owner
>> >can get me onto the box so I can see what MC has been applied to
>> >what... unless there is a way I can see this info from the TSM server side?
>> >
>> >On Thu, Oct 1, 2015 at 4:25 PM, Andrew Raibeck <stor...@us.ibm.com> wrote:
>> >
>> >> Hi Zoltan,
>> >>
>> >> Some things to check
>> >>
>> >> 1. Has the policy set with that management class has been activated?
>> >>
>> >> 2. Does the management class have a backup copy group?
>> >>
>> >> 3. Is the node doing the backup a member of the policy domain that 
>> >> includes
>> >> the 15DAYS management class? Note: if this is a proxy node backup, the
>> >> target node (specified with ASNODENAME option) is the one that has to be a
>> >> member of the domain with the 15DAYS management class.
>> >>
>> >> 4. Is a client option set associated with this node, and might that client
>> >> option set have overriding INCLUDE statements? And as above, if this is a
>> >> proxy node backup, it is the target node name specified by ASNODENAME that
>> >> you need to check.
>> >>
>> >> Other things that might be helpful:
>> >>
>> >> Run "dsmc query mgmtclass" to make sure the management class 15DAYS shows
>> >> up.
>> >>
>> >> Run "dsmc query inclexcl" to check the run-time include-exclude list, to
>> >> see if something could be overriding your current specification. Make sure
>> >> to use the same options file as the production backup, and to use
>> >> ASNODENAME if the production backup uses it.
>> >>
>> >> Check dsmerror.log to see if there are any binding errors.
>> >>
>> >> Best regards,
>> >>
>> >> - Andy
>> >>
>> >>
>> >> 
>> >>
>> >> Andrew Raibeck | Tivoli Storage Manager Level 3 Technical Lead |
>> >> stor...@us.ibm.com
>> >>
>> >> IBM Tivoli Storage Manager links:
>> >> Product support:
>> >>
>> >> http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager
>> >>
>> >> Online documentation:
>> >> http://www.ibm.com/support/knowledgecenter/SSGSG7/welcome
>> >> Product Wiki:
>> >>
>> >> https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager
>> >>
>> >> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2015-10-01
>> >> 13:35:38:
>> >>
>> >> > From: Zoltan Forray <zfor...@vcu.edu>
>> >> > To: ADSM-L@VM.MARIST.EDU
>> >> > Date: 2015-10-01 13:36
>> >> > Subject: What am I doing wrong with this include
>> >> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>> >> >
>> >> > I thought I had this figured out but keep getting the wrong results.
>> >> This
>> >> > is a W2K8 box with 7.1.2 client.
>> >> >

Re: What am I doing wrong with this include

2015-10-02 Thread Paul Zarnowski
Andy,

I stand corrected on the "?" vs "*" for the drive letter.  My apologies for 
bringing up bad information.

I did test the include *:* mgmt_class_name, and TSM 7.1 client flags it as an 
error.  It insists that a directory delimiter follow the ":".  So, *:\* is 
valid, or *:\...\* is valid, but not *:*.  The documentation should probably be 
updated to reflect this, if this example is still there...

..Paul

At 10:21 AM 10/2/2015, Andrew Raibeck wrote:
>> In another paragraph, it says:
>>
>> To specify your own default management class for files that are not
>> explicitly included, specify:
>>
>>include ?:* mgmt_class_name


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: What am I doing wrong with this include

2015-10-01 Thread Paul Zarnowski
Hi Zoltan,

I think the documentation around include/exclude wildcarding is very confusing, 
but here is what I think you need to do:

include ?:\Software\ClickCommerce\Backup\...\*  15DAYS

While I cannot find anything in the manuals that indicate that a "*" won't work 
for the drive letter, all of their examples use "?" when wildcarding the drive 
letter.  Also, the "..." is important if you want to catch all files in 
subdirectories under "Backup", and not just the files in "Backup" itself.  Also 
note that the directory objects themselves are assigned to a management class 
via DIRMC, or if none is specified, then the management class having the 
longest RETONLY in that Policy Domain.
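
Putting that together, a dsm.opt sketch (the DIRS management class here is 
hypothetical - use whatever class you have defined for directories):

   include  *  STANDARD
   include  ?:\Software\ClickCommerce\Backup\...\*  15DAYS
   dirmc    DIRS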

..Paul


At 01:35 PM 10/1/2015, Zoltan Forray wrote:
>I thought I had this figured out but keep getting the wrong results.  This
>is a W2K8 box with 7.1.2 client.
>
>I want to apply a different management class to one directory and all of
>its sub-directories.
>
>So I added the following statements:
>
>include  *  STANDARD
>include  *:\Software\ClickCommerce\Backup\*  15DAYS
>
>Yet STANDARD MC was applied to everything.  Yes I am 100% positive on the
>directory name that I want to apply the 15DAYS to.  No spaces in the name.
>
>I first tried it without the first include MC and ended up with everything
>having 15DAYS applied to it.
>
>So for tonights backup, I have changed it to:
>
>include  *  STANDARD
>include  D:\Software\ClickCommerce\Backup\...\*  15DAYS
>
>or if I am still wrong, please tell me what I am doing wrong?
>
>--
>*Zoltan Forray*
>TSM Software & Hardware Administrator
>Xymon Monitor Administrator
>Virginia Commonwealth University
>UCC/Office of Technology Services
>www.ucc.vcu.edu
>zfor...@vcu.edu - 804-828-4807
>Don't be a phishing victim - VCU and other reputable organizations will
>never use email to request that you reply with your password, social
>security number or confidential personal information. For more details
>visit http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM 7.1.3 documention

2015-10-01 Thread Paul Zarnowski
Thanks Clare, I was just wondering where the PDF was for the 7.1.3 Admin Guide. 
Now I know!

At 02:46 PM 10/1/2015, you wrote:
>I'm a member of the information development team that produces the server
>documentation. I'd like to share some information about the omission of
>the Administrator's Guide from V7.1.3 documentation.
>
>For Version 7.1.3 and going forward, the focus of the server information
>development team is on simplification and promotion of best practices for
>server planning, installation, and operation.
>
>With V7.1.3, we are delivering descriptions of four best-practice
>solutions:
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/srv.solutions/t_solutions_main.html
>
>And we are delivering solution guides for two of the four solutions. The
>guides have cookbook-like content to enable implementation and operation
>of a solution through task-oriented information that is easy to use and
>accessible. See the new guides here:
>Single-site disk solution:
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/srv.solutions/c_ssdisk_solution.html
>
>Multisite disk solution:
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/srv.solutions/c_msdisk_solution.html
>
>
>Some pieces of the former Administrator's Guide are available in the
>V7.1.3 collection in IBM Knowledge Center, but not in a prebuilt PDF file.
>See that content here:
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/srv.admin/r_configuring_srv.html
>
>We also included links to some V7.1.1 information for convenience.
>
>The V7.1.1 and earlier editions of the Administrator's Guide remain
>available. For V7.1.1, see
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.srv.common.doc/t_server_main.html
>. For the corresponding PDF files, download the V7.1.1 zip file from
>ftp://public.dhe.ibm.com/software/products/TSM/current/ and look for files
>that start with "b_srv_admin_guide".  (The V7.1.1 PDF files are not in IBM
>Knowledge Center.)
>
>Information about server commands is still being delivered as usual, in
>IBM Knowledge Center (online and in PDF files) and in command-line help.
>
>Delivery of additional solutions and solution guides is under
>consideration.
>
>Clare Byrne
>Information Development
>Tucson, AZ   USA
>---
>IBM Knowledge Center for all supported versions:
>http://www.ibm.com/support/knowledgecenter/SSGSG7
>Product documentation collections for download:
>http://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager/page/Product%20documentation%20collections%20for%20download


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: AW: How can I exclude files from dedupe processing?

2015-08-03 Thread Paul Zarnowski
Michael,

Thank you for pointing out the exclude.dedup option.  But this appears to 
control only client-side deduplication.  Wouldn't the TSM server still try to 
deduplicate these objects?
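
(For the record, the client-side form is a single dsm.opt line; the file 
pattern here is hypothetical:

   exclude.dedup *:\...\*.rman

but again, that only stops the client from deduplicating.)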

..Paul

At 09:08 AM 8/3/2015, Michael malitz wrote:
Hello John, Paul,

for client dedup, have a look at the exclude.dedup option.

And: compressed objects can also be good candidates for deduplication. Give 
it a try and test the reduction(s).

Rgds Michael Malitz


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Paul Zarnowski
Sent: Monday, 3 August 2015 14:29
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How can I exclude files from dedupe processing?

 

Deduplication is an attribute of a storage pool.  If you have data you 
don't wish to deduplicate, you would need a separate storage pool.

..Paul
(sent from my iPhone)

On Aug 3, 2015, at 8:19 AM, Dury, John C. <jd...@duqlight.com> wrote:

Unfortunately the final storage pool is the dedupe pool.


Date: Sun, 2 Aug 2015 16:32:17 +
From: Paul Zarnowski <p...@cornell.edu>
Subject: Re: How can I exclude files from dedupe processing?

Since the files are already using a separate management class, you can just 
change the destination storage pool for that class to go to a non-deduplicated 
storage pool.

..Paul
(sent from my iPhone)

On Aug 2, 2015, at 11:07 AM, Dury, John C. <jd...@duqlight.com> wrote:

I have a 6.3.5.100 Linux server and several TSM 7.1.0.0 Linux clients. Those 
Linux clients are dumping several large Oracle databases using compression, 
and then those files are being backed up to TSM. Because the files are 
compressed when dumped via RMAN, they are not good candidates for dedupe 
processing. Is there any way to have them excluded from dedupe server 
processing? I know I can exclude them from client dedupe processing, which I 
am not doing on this client anyway. I have the SERVERDEDUPTXNLIMIT limit set 
to 200, but these RMAN dumps are smaller than 200 GB. I have our DBAs 
investigating using TDP for Oracle, but until then, I would like to exclude 
these files from dedupe processing as I suspect it is causing issues with 
space reclamation. If it helps, these files are in their own management class 
also.

Ideas?


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801  


Re: How can I exclude files from dedupe processing?

2015-08-03 Thread Paul Zarnowski
Deduplication is an attribute of a storage pool.  If you have data you don't 
wish to deduplicate, you would need a separate storage pool.
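
A rough sketch of the admin-side change (all names here are made up, and it 
assumes a FILE device class named FILEDEV already exists):

   define stgpool nodedup_pool filedev maxscratch=100 deduplicate=no
   update copygroup standard standard rman_mc standard type=backup destination=nodedup_pool
   activate policyset standard standard

i.e., point the backup copy group of that management class at a pool defined 
with deduplicate=no, then re-activate the policy set.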

..Paul
(sent from my iPhone)

On Aug 3, 2015, at 8:19 AM, Dury, John C. <jd...@duqlight.com> wrote:

Unfortunately the final storage pool is the dedupe pool.





Date: Sun, 2 Aug 2015 16:32:17 +

From: Paul Zarnowski <p...@cornell.edu>

Subject: Re: How can I exclude files from dedupe processing?



Since the files are already using a separate management class, you can just 
change the destination storage pool for that class to go to a non-deduplicated 
storage pool.



..Paul

(sent from my iPhone)



On Aug 2, 2015, at 11:07 AM, Dury, John C. <jd...@duqlight.com> wrote:



I have a 6.3.5.100 Linux server and several TSM 7.1.0.0 Linux clients. Those 
Linux clients are dumping several large Oracle databases using compression, 
and then those files are being backed up to TSM. Because the files are 
compressed when dumped via RMAN, they are not good candidates for dedupe 
processing. Is there any way to have them excluded from dedupe server 
processing? I know I can exclude them from client dedupe processing, which I 
am not doing on this client anyway. I have the SERVERDEDUPTXNLIMIT limit set 
to 200, but these RMAN dumps are smaller than 200 GB. I have our DBAs 
investigating using TDP for Oracle, but until then, I would like to exclude 
these files from dedupe processing as I suspect it is causing issues with 
space reclamation. If it helps, these files are in their own management class also.

Ideas?


Re: How can I exclude files from dedupe processing?

2015-08-02 Thread Paul Zarnowski
Since the files are already using a separate management class, you can just 
change the destination storage pool for that class to go to a non-duplicated 
storage pool.

..Paul
(sent from my iPhone)

On Aug 2, 2015, at 11:07 AM, Dury, John C. <jd...@duqlight.com> wrote:

I have a 6.3.5.100 Linux server and several TSM 7.1.0.0 Linux clients. Those 
Linux clients are dumping several large Oracle databases using compression, and 
then those files are being backed up to TSM. Because the files are compressed 
when dumped via RMAN, they are not good candidates for dedupe processing. Is 
there any way to have them excluded from dedupe server processing? I know I 
can exclude them from client dedupe processing, which I am not doing on this 
client anyway. I have the SERVERDEDUPTXNLIMIT limit set to 200, but these RMAN 
dumps are smaller than 200 GB. I have our DBAs investigating using TDP for 
Oracle, but until then, I would like to exclude these files from dedupe 
processing as I suspect it is causing issues with space reclamation. If it 
helps, these files are in their own management class also.
Ideas?


Re: TSM server V7.1.1.300

2015-07-07 Thread Paul Zarnowski
Robert,

We upgraded our servers from 7.1.1.108 to 7.1.1.301 about 2 weeks ago, without 
incident or the side effects that others have seen.  Our expiration runs in the 
same time as it did prior to the upgrade.  I do seem to recall that sometimes 
when upgrading DB2, it can take a while for the indexes to be optimized (or 
whatever), and until that happens performance may be sluggish.  We noticed this 
when we first upgraded from v5 to v6.  We did not notice it for this minor 
upgrade.

Our server is not error free - we are seeing more tape errors recently, but we 
have no reason to suspect they are related to the server software upgrade.  
More likely related to some new drives and/or tapes that we recently purchased.

..Paul

At 03:00 AM 7/7/2015, Robert Ouzen wrote:
Hello all

I am considering upgrading my TSM servers to version 7.1.1.300 and want to know 
if someone already did it and had any issues.

Best Regards

Robert Ouzen


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: support for SnapDiff on NetApp filers running CDOT?

2015-06-03 Thread Paul Zarnowski
Thank you for this update, Del.  That is indeed good news.

At 10:13 AM 6/3/2015, Del Hoobler wrote:
I have an update for this thread:

Spectrum Protect (TSM) support for NetApp snapshot-assisted
progressive incremental backup (some may refer to it as SnapDiff)
for Clustered Data ONTAP (CDOT) is targeted for next month (July 2015).


Del




Del Hoobler/Endicott/IBM wrote on 03/11/2015 12:36:11 PM:

 From: Del Hoobler/Endicott/IBM
 To: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 Date: 03/11/2015 12:36 PM
 Subject: Re: support for SnapDiff on NetApp filers running CDOT?

 Hi Steve,

 That status is not quite correct.

 IBM has the APIs, that's true. IBM also has them implemented
 and working successfully in the lab.

 IBM is working directly with NetApp on some special details
 about how we support SnapDiff (and NDMP) with CDOT.

 And so, at this point, I cannot give a definitive status
 until those discussions are completed.


 Del

 


 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 03/11/2015
 11:52:39 AM:

  From: Schaub, Steve steve_sch...@bcbst.com
  To: ADSM-L@VM.MARIST.EDU
  Date: 03/11/2015 11:53 AM
  Subject: support for SnapDiff on NetApp filers running CDOT?
  Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 
  Has anyone heard when Tivoli is going to release a version of
  SnapDiff that works on Clustered Data Ontap 8 (think latest version
  is 8.2 or 8.3)?  My understanding is that NetApp has given the API's
  to IBM, waiting on IBM to incorporate them.
 
  Thanks,
 
  Steve Schaub
  Systems Engineer II, Backup/Recovery
  Blue Cross Blue Shield of Tennessee
 
 


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: 3584 questions

2015-05-22 Thread Paul Zarnowski
What Thomas said, only I think he meant to say 3584 everywhere he said 3484. 
 AFAIK, there is no such thing as a 3484.

We did a similar thing moving from an ADIC Scalar10K SCSI library to a 3584.  
As long as the new library has the same library name as the old library, TSM 
will be fine.
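
For reference, Thomas's outline below maps to roughly this command sequence 
(library, drive, device class, and device names are all hypothetical):

   update path server1 drive01 srctype=server desttype=drive library=lib3494 online=no
   define library lib3584 libtype=scsi
   define path server1 lib3584 srctype=server desttype=library device=/dev/smc0
   define drive lib3584 drive01
   define path server1 drive01 srctype=server desttype=drive library=lib3584 device=/dev/rmt1
   update devclass 3592class library=lib3584
   checkin libvolume lib3584 search=yes checklabel=barcode status=scratch
   audit library lib3584 checklabel=barcode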

At 03:50 PM 5/22/2015, Thomas Denier wrote:
The 3484 is a SCSI library with a mechanical design similar to that of a 3494. 
In more recent times IBM has marketed the 3484 or a very similar successor as 
the TS3500.

We used to do what amounted to a 3494 to 3584 migration during disaster 
recovery tests; our own system had a 3494, but our hot site vendor provided a 
3584.

I checked our old DR procedure. It does not cover the checkout operation, 
since all the volumes available at the test had been checked out and sent to 
an offsite vault at some point in the past. In outline the process was as 
follows:

Update 3494 tape drive paths to online=no.
Execute define library for the 3484.
Execute the related define drive and define path commands (including 
defining a path to the library).
Update tape device classes to use the new library.
Check volumes into the new library.
Execute an audit library command with checklabel=barcode for the new 
library.

The device for the new server-to-library path will probably follow a 
distinctly different naming convention than its 3494 counterpart.

Most commands that refer to a library name need somewhat different operands 
for a SCSI library  (such as a 3484) than for a 3494. You will need to review 
any such commands executed as part of your automated housekeeping or as part 
of manual procedures such as adding tape volumes.

Thomas Denier
Thomas Jefferson University

-Original Message-

We are looking at replacing our 3494 libraries with 3584s.
Thinkk that is the correct number.

Are they similar enough that I can simply check out the volumes from the old 
libraries and check into the new?

We are keeping our ts1120 drives and transferring them into the new robots.

Anyone with experience doing this move?

Thanks for any help.


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: Share permission changes

2015-05-11 Thread Paul Zarnowski
This is a problem for NTFS because the amount of metadata associated with an 
object is more than you can put into the TSM database.  Thus, TSM puts it into 
the storage pool, along with the object.  What this means is that when the 
metadata changes, the object has to be backed up again.  This is not a problem 
for Unix/NFS, because there isn't much metadata and it can all be put into the 
TSM DB, which means that if it changes it's just a DB update and not another 
backup of the object.

Bad enough for backups, but imagine if you had a PB-scale GPFS filesystem and 
someone unwittingly makes such a change.  Now you're talking about having to 
recall all of those objects in order to back them up again.  Ugh.  End of game.

..Paul


At 04:54 PM 5/11/2015, Nick Marouf wrote:
Hello

From my experience, changing share permissions will force TSM to back up all 
the data once more. A solution we used in the past was to assign groups 
instead of users to shares.

Changes to group membership is behind the scenes in AD, and is not picked
up by TSM at the client level.


On Mon, May 11, 2015 at 2:39 PM, Thomas Denier thomas.den...@jefferson.edu
wrote:

 One of our TSM servers is in the process of backing up a large part of the
 contents of a Windows 2008 file server. I contacted the system
 administrator. He told me that he had changed share permissions but not
 security permissions, and did not expect all the files in the share to be
 backed up. Based on my limited knowledge of share permissions I wouldn't
 have expected that either. Is it normal for a share permissions change to
 have this effect? How easy is it to make a security permissions change
 while trying to make a share permissions change?

 Thomas Denier,
 Thomas Jefferson University



--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director for Storage Services   Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: So long, and thank you...

2015-04-04 Thread Paul Zarnowski
Wanda,

Thank you for all of your contributions over the years.  Enjoy the next chapter 
in your life!

..Paul   

 On Apr 3, 2015, at 5:11 PM, Prather, Wanda wanda.prat...@icfi.com wrote:
 
 This is my last day at ICF, and the first day of my retirement!
 
 I'm moving on to the next non-IT-support chapter in life.
 
 
 I can't speak highly enough of the people who give of their time and 
 expertise on this list.
 
 I've learned most of what I know about TSM here.
 
 
 You all are an amazing group, and it has been a  wonderful experience in 
 world-wide collaboration.
 
 
 Thank you all!
 
 
 Best wishes,
 
 Wanda


Re: DEVCLASS=FILE - what am I missing

2015-02-16 Thread Paul Zarnowski
One thing that hasn't been mentioned yet is how the use of FILE or DISK impacts 
server recovery.  If you have to restore your database, you will have to audit 
all of your DISK volumes.  You can avoid auditing FILE or TAPE volumes through 
the use of REUSEDELAY.
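
For illustration, a minimal sketch (the pool name and day count here are hypothetical; the idea is to set REUSEDELAY at least as long as you keep database backups, so a restored database never points at a volume that has already been overwritten):

   UPDATE STGPOOL FILEPOOL REUSEDELAY=5

Emptied volumes then sit in PENDING status for 5 days before returning to scratch.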

..Paul

At 01:52 PM 2/13/2015, Zoltan Forray wrote:
Thanks for all the replies.  Pretty much confirms that FILE isn't for me.
We don't do dedupe and there are a lot of manual/monitoring processes
involved (I have enough to do with 8-TSM servers I manage - don't need
more).


--
Paul Zarnowski                             Ph: 607-255-4757
Assistant Director for Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure             Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: DEVCLASS=FILE - what am I missing

2015-02-13 Thread Paul Zarnowski
At 12:12 PM 2/13/2015, Zoltan Forray wrote:
Well, last night became a disaster.  Backups failing all over because it
couldn't allocate any more files and also would not automatically shift to
use the nextpool which is defined as a tape pool.

Alas, TSM doesn't automatically roll over when the ingest pool is FILE.  I 
really wish that it did.  Here's the relevant documentation for NEXTSTGPOOL for 
FILE stgpools:

   When there is insufficient space available in the current storage pool,
   the NEXTSTGPOOL parameter for sequential access storage pools does not
   allow data to be stored into the next pool. In this case, the server
   issues a message and the transaction fails.
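
To illustrate (pool and device class names are hypothetical): NEXTSTGPOOL on a FILE pool still drives migration, it just won't catch an ingest overflow, so size MAXSCRATCH generously:

   DEFINE STGPOOL FILEPOOL FILECLASS MAXSCRATCH=500 NEXTSTGPOOL=TAPEPOOL HIGHMIG=70 LOWMIG=30

If the FILE pool runs out of scratch space mid-backup, the transaction fails rather than spilling over to TAPEPOOL.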

..Paul


--
Paul Zarnowski                             Ph: 607-255-4757
Assistant Director for Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure             Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: ANR1890W root-only restriction message

2015-01-20 Thread Paul Zarnowski
Yes, thank you Kurt.  Your post has saved us some trouble as well.

At 01:09 PM 1/19/2015, Prather, Wanda wrote:
Thank you Kurt for that information!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
BEYERS Kurt
Sent: Monday, January 19, 2015 11:21 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ANR1890W root-only restriction message

Hello everybody,

Please be aware of a new option 'BACKUPINITIATIONROOT', which defaults to YES 
if you upgrade to TSM server 7.1.1.100 (which I did) or 6.3.5.100. Here is the 
technote:

http://www-01.ibm.com/support/docview.wss?uid=swg21693960

It resulted in  some failed database backups, which typically start under the 
database owner. Luckily it is a dynamic option that can be altered online in 
TSM.
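
As a sketch, reverting to the old behavior on a running server looks like this (no restart needed, since the option is dynamic):

   SETOPT BACKUPINITIATIONROOT NO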

Regards,
Kurt




--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director of Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM client availability for Mac OS X 10.10 (Yosemite)

2014-10-22 Thread Paul Zarnowski
Hi Ruth,

We are very interested as well, and asked our business partner this same 
question.  This is what I got back from them:

[IBM's] goal is to support new versions of OS within 90 days.  I'm hoping 
that they will announce something in the next couple of weeks, but I'm also not 
holding my breath.

..Paul


At 04:17 PM 10/21/2014, Mitchell, Ruth Slovik wrote:
Hi all,

Does anyone know when a TSM client will be available/approved for Mac OS X 
10.10 Yosemite?
Many thanks.


Ruth Mitchell

U of I, Urbana, IL


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director of Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM client availability for Mac OS X 10.10 (Yosemite)

2014-10-22 Thread Paul Zarnowski
Thanks Del!

At 01:49 PM 10/22/2014, Del Hoobler wrote:
Hi Ruth and Paul,

The test teams just completed testing and the support announcement
was just added to the requirements technote:

The overall page:
http://www-01.ibm.com/support/docview.wss?uid=swg21243309

The specific page for the Apple Macintosh Client Requirements:

http://www-01.ibm.com/support/docview.wss?rs=663&context=SSGSG7&q1=client+requirements+macintosh&uid=swg21053584&loc=en_US&cs=utf-8&lang=en

Note: Mac OS X 10.10 support is with a 7.1.1 or higher client.


Thank you,

Del





ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 10/21/2014
04:17:34 PM:

 From: Mitchell, Ruth Slovik rmi...@illinois.edu
 To: ADSM-L@VM.MARIST.EDU
 Date: 10/21/2014 04:19 PM
 Subject: TSM client availability for Mac OS X 10.10 (Yosemite)
 Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU

 Hi all,

 Does anyone know when a TSM client will be available/approved for
 Mac OS X 10.10 Yosemite?
 Many thanks.


 Ruth Mitchell

 U of I, Urbana, IL



--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director of Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TS3500 and S24/S54 frames with TSM?

2014-04-14 Thread Paul Zarnowski
Hi Wanda,

Yeah, we've got two 3584's with S54 frames.  I love the price point and 
density.  They have worked pretty well.  We've had the usual drive problems 
(they do wear out after awhile), and one picker issue, but nothing specific to 
the S54.  We use ALMS which gets us slot virtualization, which works very 
nicely too.  We've had the S54s since just after they were announced, so we've 
got a long track record with them.  I'd buy them again if I had it to do all 
over again.

..Paul

At 03:30 PM 4/14/2014, Prather, Wanda wrote:
Anybody using a TS3500 library S24/S54 frame with TSM?
Any gotchas?
Good/bad news?



**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  |  wanda.prat...@icfi.com  |
www.icfi.com  |  410-868-4872 (m)
ICF International  | 7125 Thomas Edison Dr., Suite 100, Columbia, Md 
|443-718-4900 (o)


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director of Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM for desktop backups

2014-02-27 Thread Paul Zarnowski
 GUI
 allow a client to only view/restore their own files?

 We are concerned with both Windows and Macs.

 Does anyone know of a feature matrix comparison of TSM and Crashplan?

 Thanks in advance.


 --
 Have a wonderful day,
 David Jelinek




--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                            Ph: 607-255-4757
Assistant Director of Storage Services    Fx: 607-255-8521
IT at Cornell / Infrastructure            Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: Deduplication number of chunks waiting in queue continues to rise?

2013-12-20 Thread Paul Zarnowski
 to try and get this cleared up.
There is no client-side dedup here, only server side.
I've also set deduprequiresbackup to NO for now, although I hate doing
that, to make sure that doesn't' interfere with the reclaim process.

But SHOW DEDUPDELETEINFO shows that the number of chunks waiting in
queue is *still* increasing.
So, WHAT is putting stuff on that dedup delete queue?
And how do I ever gain ground?

W



**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  |
wanda.prat...@icfi.com
|  www.icfi.com
ICF International  | 443-718-4900 (o)




--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM Dedup stgpool target

2013-11-19 Thread Paul Zarnowski
Bill,

My concern was that if the copy storage pool was tape, then with small primary 
stgpool volumes there might be a lot more tape mounts required, especially if 
the stgpools are collocated.  Since your output pools are on disk, then the 
higher mount/unmount activity wouldn't be an issue.

..Paul

At 05:50 PM 11/18/2013, Colwell, William F. wrote:
The BA STG processes on the source move from one primary volume to the next
without ending the session with the VV server.  I don't think this would
be any different if the target server wrote directly on to tape.


--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM Dedup stgpool target

2013-11-18 Thread Paul Zarnowski
One other question, if you don't mind Bill:  Do you have Copy Storage Pools?  
If so, are they on tape or file?  If tape, is the small volume size on the 
primary pool an issue?  I.e., does TSM optimize output tape mounts?

Thanks.
..Paul

At 05:48 PM 11/14/2013, Colwell, William F. wrote:
Paul,

I am using 4 GB volumes on the 15k disks (aka ingest pool).  Since each disk 
is ~576 GiB
and there are 16 disks assigned to this server, that's a lot of volumes!

On the sata based pools I am using 50 GiB volumes.

All volumes are scratch allocated not pre-allocated.

I know scratch volumes are supposed to perform less well, but I haven't heard 
how much less and I did ask.
I couldn't run the way I do and manage pre-allocation.  There are 2 very big 
and very busy instances on the
processor and both share all the filesystems.  And each instance has multiple 
storage hierarchies so
mapping out pre-allocation would be a nightmare.

thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Thursday, November 14, 2013 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Hi Bill,

Can I ask what size volumes you use for the ingest pool (on 15k disks) and 
also on your 4TB sata pool?  I assume you are pre-allocating volumes and not 
using scratch?

Thanks.
..Paul

At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from 
Nexsan (now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances on the processor sharing the filesystems.  The 
OS is Linux rhel 5.

All volumes are scratch allocated.

The backups first land on non raid 15k 600GB disks in an Infortrend device.  
The copypooling is done from there
and also the identify processing.  Then they are migrated to the Nexsan based 
storagepools.

There is also a tape library.  Really big files are excluded from dedup via 
the stgpool MAXSIZE parameter and
land on a separate pool on the Nexsan storage which then migrates to tape.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Sergio O. Fuentes
Sent: Wednesday, November 13, 2013 10:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Dedup stgpool target

In an earlier thread, I polled this group on whether people recommend going 
with an array-based dedup solution or doing a TSM dedup solution.  Well, the 
answers came back mixed, obviously with an 'It depends'-type clause.

So, moving on...  assuming that I'm using TSM dedup, what sort of target 
arrays are people putting behind their TSM servers.   Assume here, also, that 
you'll be having multiple TSM servers, & another backup product, *cough* veeam 
and potentially having to do backup stgpools on the dedup stgpools.  I ask 
because I've been barking up the mid-tier storage array market as our 
potential disk based backup target simply because of the combination of cost, 
performance, and scalability.  I'd prefer something that is dense I.e. more 
capacity less footprint and can scale up to 400TB.  It seems like vendors get 
disappointed when you're asking for a 400TB array with just SATA disk simply 
for backup targets.  None of that fancy array intelligence like auto-tiering, 
large caches, replication, dedup, etc.. is required.

Is there another storage market I should be looking at, I.e. really dumb raid 
arrays, direct attached, NAS, etc...

Any feedback is appreciated, even the 'it depends'-type.

Thanks!
Sergio


--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801




Re: TSM Dedup stgpool target

2013-11-18 Thread Paul Zarnowski
Bill,

Are your virtual volumes purely on tape on the target server, or are they 
fronted by some sort of disk storage pool?  I am trying to understand whether a 
small volume size for the ingest dedup file pool will cause a lot of tape 
mounts on the copy storage pool during a backup storage pool process, or 
whether TSM is smart enough to optimize output tape volume mounts.  If your 
virtual volumes are fronted by some sort of disk, or if you have a plethora of 
tape drives, you might not notice this even if TSM was dumb in this regard.  Do 
you use collocation (in order to collocate volumes in your copy storage pool)?  
If not, that could be another reason why you wouldn't notice it.

One other question, if I may.  Why do you have a BKP_1A and BKP_1B storage 
pool?  They seem to have the same attributes and both funnel into BKP_2.

I'm sure you've put a lot of thought into this, but I'm not sure I'm getting 
everything you did, and why.

..Paul



At 10:24 AM 11/18/2013, Colwell, William F. wrote:
Paul,

I describe my copypool setup in a previous reply, last Friday.
If you lost it somehow, it is on adsm.org.

But quickly, they are on virtual volumes.  I have never seen any issues
related to the primary pool volume size.

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Monday, November 18, 2013 9:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

One other question, if you don't mind Bill:  Do you have Copy Storage Pools?  
If so, are they on tape or file?  If tape, is the small volume size on the 
primary pool an issue?  I.e., does TSM optimize output tape mounts?

Thanks.
..Paul

At 05:48 PM 11/14/2013, Colwell, William F. wrote:
Paul,

I am using 4 GB volumes on the 15k disks (aka ingest pool).  Since each disk 
is ~576 GiB
and there are 16 disks assigned to this server, that's a lot of volumes!

On the sata based pools I am using 50 GiB volumes.

All volumes are scratch allocated not pre-allocated.

I know scratch volumes are supposed to perform less well, but I haven't heard 
how much less and I did ask.
I couldn't run the way I do and manage pre-allocation.  There are 2 very big 
and very busy instances on the
processor and both share all the filesystems.  And each instance has multiple 
storage hierarchies so
mapping out pre-allocation would be a nightmare.

thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Thursday, November 14, 2013 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Hi Bill,

Can I ask what size volumes you use for the ingest pool (on 15k disks) and 
also on your 4TB sata pool?  I assume you are pre-allocating volumes and not 
using scratch?

Thanks.
..Paul

At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from 
Nexsan (now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances on the processor sharing the filesystems.  The 
OS is Linux rhel 5.

All volumes are scratch allocated.

The backups first land on non raid 15k 600GB disks in an Infortrend device.  
The copypooling is done from there
and also the identify processing.  Then they are migrated to the Nexsan 
based storagepools.

There is also a tape library.  Really big files are excluded from dedup via 
the stgpool MAXSIZE parameter and
land on a separate pool on the Nexsan storage which then migrates to tape.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Sergio O. Fuentes
Sent: Wednesday, November 13, 2013 10:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Dedup stgpool target

In an earlier thread, I polled this group on whether people recommend going 
with an array-based dedup solution or doing a TSM dedup solution.  Well, the 
answers came back mixed, obviously with an 'It depends'-type clause.

So, moving on...  assuming that I'm using TSM dedup, what sort of target 
arrays are people putting behind their TSM servers.   Assume here, also, 
that you'll be having multiple TSM servers, & another backup product, 
*cough* veeam and potentially having to do backup stgpools on the dedup 
stgpools.  I ask because I've been barking up the mid-tier storage array 
market as our potential disk based backup target simply because of the 
combination of cost, performance, and scalability.  I'd prefer something 
that is dense I.e. more capacity less

Re: TSM Dedup stgpool target

2013-11-14 Thread Paul Zarnowski
Hi Bill,

Can I ask what size volumes you use for the ingest pool (on 15k disks) and also 
on your 4TB sata pool?  I assume you are pre-allocating volumes and not using 
scratch?

Thanks.
..Paul

At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from Nexsan 
(now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances on the processor sharing the filesystems.  The OS 
is Linux rhel 5.

All volumes are scratch allocated.

The backups first land on non raid 15k 600GB disks in an Infortrend device.  
The copypooling is done from there
and also the identify processing.  Then they are migrated to the Nexsan based 
storagepools.

There is also a tape library.  Really big files are excluded from dedup via 
the stgpool MAXSIZE parameter and
land on a separate pool on the Nexsan storage which then migrates to tape.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Sergio O. Fuentes
Sent: Wednesday, November 13, 2013 10:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Dedup stgpool target

In an earlier thread, I polled this group on whether people recommend going 
with an array-based dedup solution or doing a TSM dedup solution.  Well, the 
answers came back mixed, obviously with an 'It depends'-type clause.

So, moving on...  assuming that I'm using TSM dedup, what sort of target 
arrays are people putting behind their TSM servers.   Assume here, also, that 
you'll be having multiple TSM servers,  another backup product, *coughveeam 
and potentially having to do backup stgpools on the dedup stgpools.  I ask 
because I've been barking up the mid-tier storage array market as our 
potential disk based backup target simply because of the combination of cost, 
performance, and scalability.  I'd prefer something that is dense I.e. more 
capacity less footprint and can scale up to 400TB.  It seems like vendors get 
disappointed when you're asking for a 400TB array with just SATA disk simply 
for backup targets.  None of that fancy array intelligence like auto-tiering, 
large caches, replication, dedup, etc.. is required.

Is there another storage market I should be looking at, I.e. really dumb raid 
arrays, direct attached, NAS, etc...

Any feedback is appreciated, even the 'it depends'-type.

Thanks!
Sergio


--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: how is memoryefficient diskcachemethod supposed to work

2013-10-14 Thread Paul Zarnowski
If I'm not mistaken (and I very well may be!), the diskcachemethod works on a 
directory by directory basis.  If you have a lot of objects in one flat 
directory (without subdirectories), it may not help that much.  Alternatively, 
if your volume is spread well (deeply) across lots of directories, then it will 
be more helpful.

From the User's Guide:
Note that for file systems with large numbers (millions) of directories, the 
client still might not be able to allocate enough memory to perform 
incremental backup with memoryefficientbackup yes.
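
As a sketch, the relevant client options would look something like this (the cache path is hypothetical; point DISKCACHELOCATION at a local disk with free space, ideally not the filesystem being backed up):

   MEMORYEFFICIENTBACKUP DISKCACHEMETHOD
   DISKCACHELOCATION d:\tsmcache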


Another option would be to use -incrbydate, but that has its own caveats.

..Paul



At 07:29 AM 10/14/2013, Hans Christian Riksheim wrote:
We have a memory starved Windows 2003 server and incremental fails with
ANS1030E The operating system refused a TSM request for memory allocation.

So we try memoryefficient diskcachemethod.

What we see is that dsmc reports diskcachemethod is in use for all
filesystems. We also see that the diskcachelocation is populated.

However the dsmc process still increases to 1,3GB and eventually dies with
the error message above. Client is 6.4.0.10 32.bit.

Any ideas before I raise a PMR?

Hans Chr. Riksheim


--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: NFS only supported on AIX.

2013-10-09 Thread Paul Zarnowski
Grant,

Snapdiff support was added in TSM 6.1, only for 64-bit AIX.  But it was added 
to Linux in v6.2.  From TSM for UNIX and Linux B-A Clients v6.4 
(SC23-9791-06), page 463:

Restriction: Incremental backup using snapshot difference is only available 
with
the Tivoli Storage Manager 64 bit AIX client and the Tivoli Storage Manager 
Linux
x86/86_64 client.

This indicates that snapdiff is supported on both AIX and Linux.  This is from 
their manual, not from a RedBook.  It was also prominently stated in slide 
decks and announcements for the 6.4 client.  If someone in IBM Support is 
indicating that SnapDiff is not supported on Linux, direct them to this page as 
they are mistaken.  There is no requirement to use NFS v4 for snapdiff that I 
am aware of, and we have been using it with NFSv3 for some time.
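
For instance, a snapdiff incremental of an NFS-mounted NetApp volume looks something like this (the mount point is illustrative):

   dsmc incremental /ns/netapp1/vol1 -snapdiff

The first run performs a normal progressive incremental to establish the base snapshot; later runs back up only what the two-snapshot comparison reports as changed.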

I don't know what resistance you are running into, or why, but perhaps it is 
because NFS can be problematic because of all of the mount and tuning options 
that surround it.

..Paul


At 07:16 PM 10/8/2013, Grant Street wrote:
Got confirmation this morning. Essentially they will only offer Best Efforts 
with any NFS that is not NFSv4 on AIX. The following are quotes from the 
ticket.

As my colleague Dave pointed out we will give our best effort to resolve any 
problems you may have with NFS, but since it is not supported in most 
environments except AIX, we can not guarentee that we will be able to resolve 
all issues related to NFS.

My understanding is that snapdiff will work in the environments specified in 
the restriction but it is not fully supported, in other words only on a best 
effort basis.

Grant


On 09/10/13 01:53, Paul Zarnowski wrote:
You may be confusing NFSv3 with v4.  I can believe that v4 support is 
limited, but v3 is supported on aix, Solaris, Linux, et al.  Snapdiff 
incrementals are only supported to a NetApp from AIX and Linux for NFS (v3).

..Paul
(excuse my brevity & typos - sent from my phone)

On Oct 8, 2013, at 1:11 AM, Grant Street gra...@al.com.au wrote:

I'm confirming this now ... But it doesn't look good. I'm not talking
will it or won't it work I'm more concerned about technical support. I
had an issue with restoring data to an NFS file system on a mac and was
told that it wasn't supported. That's when I started getting concerned.

The following does not contain NFS except AIX. eg. I would expect it to
list NFS against linux with ACL Support as NO

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.client.doc%2Fc_bac_aclsupt.html

Grant


On 08/10/13 14:38, Alex Paschal wrote:
Hello, Grant.  I'm certain NFS filesystems are supported on clients
other than AIX.  In fact, the URL below links to the UNIX BAClient
manual, which contains the sentence:
Note: On Solaris and HP-UX, the nfstimeout option can fail if the NFS
mount is hard.

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nfshsmounts.html

Perhaps your source confused NFS support with NFS Version 4 ACL support?

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nasfs.html



On 10/7/2013 4:41 PM, Grant Street wrote:
Hello All

Just a heads up to something I found out last week. I have been informed
that backing up an NFS server from a non AIX client is NOT supported.

This could also include using the snapdiff functionality on Netapps.
This is being confirmed now.

This may be old news to you, in which case , sorry, but this is a big
concern for us. I have created an RFE
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=40014


Thanks

Grant



--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: NFS only supported on AIX.

2013-10-08 Thread Paul Zarnowski
You may be confusing NFSv3 with v4.  I can believe that v4 support is limited, 
but v3 is supported on aix, Solaris, Linux, et al.  Snapdiff incrementals are 
only supported to a NetApp from AIX and Linux for NFS (v3). 
 
..Paul   
(excuse my brevity & typos - sent from my phone)

 On Oct 8, 2013, at 1:11 AM, Grant Street gra...@al.com.au wrote:
 
 I'm confirming this now ... But it doesn't look good. I'm not talking
 will it or won't it work I'm more concerned about technical support. I
 had an issue with restoring data to an NFS file system on a mac and was
 told that it wasn't supported. That's when I started getting concerned.
 
 The following does not contain NFS except AIX. eg. I would expect it to
 list NFS against linux with ACL Support as NO
 
 http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.client.doc%2Fc_bac_aclsupt.html
 
 Grant
 
 
 On 08/10/13 14:38, Alex Paschal wrote:
 Hello, Grant.  I'm certain NFS filesystems are supported on clients
 other than AIX.  In fact, the URL below links to the UNIX BAClient
 manual, which contains the sentence:
 Note: On Solaris and HP-UX, the nfstimeout option can fail if the NFS
 mount is hard.
 
 http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nfshsmounts.html
 
 Perhaps your source confused NFS support with NFS Version 4 ACL support?
 
 http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nasfs.html
 
 
 
 On 10/7/2013 4:41 PM, Grant Street wrote:
 Hello All
 
 Just a heads up to something I found out last week. I have been informed
 that backing up an NFS server from a non AIX client is NOT supported.
 
 This could also include using the snapdiff functionality on Netapps.
 This is being confirmed now.
 
 This may be old news to you, in which case , sorry, but this is a big
 concern for us. I have created an RFE
 http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=40014
 
 
 Thanks
 
 Grant
 


Re: NFS only supported on AIX.

2013-10-07 Thread Paul Zarnowski
Snapdiff is also supported from Linux.  

..Paul   
(excuse my brevity & typos - sent from my phone)

 On Oct 7, 2013, at 7:43 PM, Grant Street gra...@al.com.au wrote:
 
 Hello All
 
 Just a heads up to something I found out last week. I have been informed
 that backing up an NFS server from a non AIX client is NOT supported.
 
 This could also include using the snapdiff functionality on Netapps.
 This is being confirmed now.
 
 This may be old news to you, in which case , sorry, but this is a big
 concern for us. I have created an RFE
 http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=40014
 
 Thanks
 
 Grant
 


Re: DISK vs FILE DevClass

2013-09-19 Thread Paul Zarnowski
Just to add a few more thoughts to the discussion...

If you ever have to restore your TSM database, you will need to audit all of 
your DISK volumes.  If you set reusedelay appropriately, you can avoid having 
to audit FILE volumes.  Yes, this requires a bit more space, because you'll 
have volumes in PENDING status for a time.

One reason to limit mountlimit would be to try to avoid head thrashing.  
Generally, backup data sent to FILE pools should get good performance because 
you have a stream of data coming into a sequential volume.  DISK pools are 
random, so more head movement.  If you have a high mountlimit, then you could 
offset the benefits of writing sequentially to disk.
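
To make that concrete, a hedged sketch of a FILE setup (names and sizes are hypothetical; MOUNTLIMIT caps the number of sessions writing to the pool concurrently, which is what keeps the heads from thrashing):

   DEFINE DEVCLASS FILECLASS DEVTYPE=FILE MOUNTLIMIT=8 MAXCAPACITY=50G DIRECTORY=/tsmfile
   DEFINE STGPOOL FILEPOOL FILECLASS MAXSCRATCH=400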

We find that running Backup Stgpool from FILE is faster than from DISK.  Our 
Copy stgpools are on remote tape.

..Paul

At 01:20 PM 9/19/2013, Prather, Wanda wrote:
For file devclass, I generally don't worry about maximum volumes because I 
don't set the volumes up as scratch, I predefine them.
Just something else that can cause issues for the customer, and reports of 
other people seeing the coming and going of scratch file volumes causing 
fragmentation in the filesystem.  Better to define the volumes same as a 
random DISK pool.
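
For illustration, predefining one such volume might look like this (pool name, path, and size are hypothetical; FORMATSIZE is in megabytes):

   DEFINE VOLUME FILEPOOL /tsmfile/vol001.dsm FORMATSIZE=51200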

For mountlimit, it's just the maximum number of client processes you expect to 
be writing to that drive at once. Or set to 999, no reason to restrict it.

For maxcapacity, it just has to be larger than the largest container volume 
you plan to create in that pool.

If you have no plans for dedup, you have no REQUIREment for the file devclass.

And what I HATE about the file devclass, is that you don't get pool failover.  
If the pool fills up before you can migrate out, your backups fail, rather 
than waiting for a tape from the NEXTSTGPOOL.

If the data is going to migrate off to another pool, so the disk pool gets 
emptied frequently  anyway, what benefit to having a filepool?
And if it isn't emptied every day, you will have to run reclamation on it.

So when it's just a buffer diskpool, I prefer to use DISK rather than FILE.






-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Zoltan Forray
Sent: Thursday, September 19, 2013 11:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DISK vs FILE DevClass

We are in a transition of our SAN storage from one EMC box to another.
 Since this requires my relocating 18TB of TSM server storage, I thought I 
 would take this opportunity to revisit FILE devclass vs DISK, which we are 
 using now.

I have been reading through the Linux Server Admin Guide on the pro's and 
con's of both devclasses.  Still not sure if it would be better to go with
FILE.   Here is some info on what this server does.

For the server that would be using this storage, the sole backups are Lotus 
Notes/Domino servers, so the backup data profile is not your usual data mix 
(largely Notes TDP).

No dedupe and no plans to dedupe.
No active storage and no need for it.
4-5TB daily with spikes to 15TB on weekends - 95%+ is TDP

When creating/updating the FILE devclass, how do I calculate/guesstimate the 
values for MOUNTLIMIT and MAXIMUM CAPACITY as well as the MAXIMUM VOLUMES?

Unfortunately, the storage they assigned to me on the VNX5700 is broken up 
into 8-pieces/luns, varying from 2.2TB to 2.4TB, each.

Looking for some feedback on which way we should go and why one is preferable 
than the other.

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: Deduplication/replication options

2013-07-26 Thread Paul Zarnowski



--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: SnapDiff fails after tsm client upgrade to 6.3.1.0 on proxy

2013-05-15 Thread Paul Zarnowski
Try 6.3.3.0.  We had a problem with the first 6.3 client we tried.  Seems to be 
working ok on 6.3.3.0, and there are some nice extra entries added to the 
scheduler log indicating which snapshots are being created and used.

At 02:59 PM 5/15/2013, Schaub, Steve wrote:
Regular incremental command line incremental of the local drives work ok, but 
when I try to execute a snapdiff, after a few seconds I get a popup saying 
IBM Tivoli Storage Manager Backup-Archive Command Line Client has stopped 
working.  Nothing shows up in any of the log files as far as I can see.

Anyone else had this problem?  I ended up having to reinstall 6.2.4.0 to get 
it working.

Thanks,

Steve Schaub
Systems Engineer II, Windows Backup/Recovery
BlueCross BlueShield of Tennessee



--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM RFE regarding Litigation Hold

2013-05-08 Thread Paul Zarnowski
Rick,

Your note reminded me that I left out a couple of steps in my description of 
what we do.  We also change the nodename on the node as it is exported to the 
other server.  We don't suspend expiration while the export is running.  
Instead, we change its domain on the original server while the export is 
running.  That domain has all the same management classes as the original 
domain, but with infinite copies and retention.  Once the export is complete, 
we move the original node back to its original domain, fixing up the schedule 
association.  This allows expiration to continue running for all other nodes on 
the server.

Harold, changing the domain of the node would have the immediate effect that 
you are looking for.

..Paul

At 05:39 PM 5/7/2013, Richard Rhodes wrote:
Our approach has been to export/import the node to another TSM instance
under a different node name with a suffix or prefix that indicated the
hold.  THe mgt class is set to no-expire.We stop expiration until this
copy is made.  This approach has lets the node be processed as usual, and
the copy can sit for as long as needed.

Rick





From:   Vandeventer, Harold [BS] harold.vandeven...@ks.gov
To: ADSM-L@VM.MARIST.EDU
Date:   05/07/2013 03:36 PM
Subject:Re: TSM RFE regarding Litigation Hold
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Great ideas Paul I'm preparing to build the alternate server without
expiration approach as soon as I can scare up some resources.

I'll look at the alternate Domain approach also.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Paul Zarnowski
Sent: Tuesday, May 07, 2013 12:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

We deal with a variety of types of litigation hold here, as well.  What
you can do now, easily, is to setup a parallel policy domain (i.e.,
LITHOLD) that has all the same management classes, but different retention
policy (i.e., retain forever).  Then, to avoid expiration you just have to
do this:

UPDATE NODE nodename DOMAIN=LITHOLD

This works if you have all the same management classes defined in LITHOLD
that you had defined in the original domain.  You can move the node back
and forth between domains as needed.  If LITHOLD is missing a management
class, then retention would be controlled by the grace period
definitions of the domain - something you'll probably want to avoid.

No changes needed on the client side since you're not changing management
class names, just their attributes.

If you have associated a schedule with the node, then you'll need to have
copies of the schedules in LITHOLD and re-associate the node with the
schedule in the LITHOLD domain (which can be defined the same).

We also deal with other types of litigation holds that require us to take
a snapshot of the data.  For this, we simply export (a copy of) the node
to another TSM server instance where expiration does not run or has no
effect.

..Paul


At 05:05 PM 5/3/2013, Vandeventer, Harold [BS] wrote:
To all...
I created an RFE to affect File Spaces and Expiration.  The feature would
cause expiration processing to be skipped for a file space that has been
selected.

It's RFE ID 33395 if you care to review and vote.

Briefly, the idea is to immediately respond to a situation in which we
cannot allow Expiration Processing to delete information that would
otherwise be deleted.  This would be in response to a Litigation Hold
demand from a legal issue at hand.  I've had three LitHold events in the
past 24 months; they're not much fun and I'm not in the court room, just
the TSM Server Admin.

Allowing a LitigationHold=Yes would avoid expiration on the File Space.

When the court case is lifted, simply revert to LitigationHold=No. The
next Expiration process would then begin the delete process as is normal.

The feature would avoid the complexity of assigning a no expire
management class to the node and trying to later revert to a more typical
class.

Please take a look at the RFE, and cast a vote if you feel it's a
valuable feature.

Thanks.

Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services STE 751-S
910 SW Jackson
(785) 296-0631




--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801

Re: TSM RFE regarding Litigation Hold

2013-05-08 Thread Paul Zarnowski
Good points, Wanda.  Just to clarify, this is the definition for our PRESERVE 
domain (what I had been calling LITHOLD earlier):

tsm: ADSM6> q copy preserve

Policy     Policy     Mgmt       Copy       Versions   Versions   Retain     Retain
Domain     Set Name   Class      Group      Data       Data       Extra      Only
Name                  Name       Name       Exists     Deleted    Versions   Version
---------  ---------  ---------  ---------  ---------  ---------  ---------  -------
PRESERVE   ACTIVE     STANDARD   STANDARD   No Limit   No Limit   No Limit   No Limit
And the domain itself:
tsm: ADSM6> q dom preserve f=d

              Policy Domain Name: PRESERVE
            Activated Policy Set: STANDARD
    Activated Default Mgmt Class: STANDARD
      Number of Registered Nodes: 0
                     Description: R/O; Preserve Data
 Backup Retention (Grace Period): 9,999
Archive Retention (Grace Period): 9,999


..Paul


At 11:33 AM 5/8/2013, Prather, Wanda wrote:
Just want to clarify something for people who haven't dealt with this before.
It depends on what you mean when you say stop expiration.

Suppose you have the copy group limit set to 30 versions.
If the client is still backing up, the 31st version will still roll off and be 
not-restorable, no matter whether you run the command EXPIRE INVENTORY or not.

If you have the copy group limit set to 30 days,
the inactive versions will still roll off and be not-restorable, whether you 
run the command EXPIRE INVENTORY or not.

So it depends on what you mean by stop expiration.
The only way to keep versions from expiring, is to change the copy group 
settings to NOLIM, whether in the current domain or a new domain or a new 
server.

The only thing you can do by not running expire inventory, is to prevent 
yourself getting back DB space and scratch tapes, as the tape %utilization 
values won't get updated.  (I've always thought the command EXPIRE INVENTORY 
should be renamed to DBCLEANUP, as it doesn't really affect expiration of 
files.)

Of course a set of data created on external sequential media (via create 
backupset or export) isn't mapped by the DB and therefore isn't subject to 
rolling off.
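
A hedged sketch of cutting such a set (node name, prefix, and device class are hypothetical):

   GENERATE BACKUPSET NODE1 LITHOLD * DEVCLASS=LTOCLASS RETENTION=NOLIMIT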

W


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Tuesday, May 07, 2013 3:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

Great ideas Paul I'm preparing to build the alternate server without 
expiration approach as soon as I can scare up some resources.

I'll look at the alternate Domain approach also.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Tuesday, May 07, 2013 12:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

We deal with a variety of types of litigation hold here, as well.  What you 
can do now, easily, is to setup a parallel policy domain (i.e., LITHOLD) that 
has all the same management classes, but different retention policy (i.e., 
retain forever).  Then, to avoid expiration you just have to do this:

UPDATE NODE nodename DOMAIN=LITHOLD

This works if you have all the same management classes defined in LITHOLD that 
you had defined in the original domain.  You can move the node back and forth 
between domains as needed.  If LITHOLD is missing a management class, then 
retention would be controlled by the grace period definitions of the domain 
- something you'll probably want to avoid.

No changes needed on the client side since you're not changing management 
class names, just their attributes.

If you have associated a schedule with the node, then you'll need to have 
copies of the schedules in LITHOLD and re-associate the node with the schedule 
in the LITHOLD domain (which can be defined the same).

We also deal with other types of litigation holds that require us to take a 
snapshot of the data.  For this, we simply export (a copy of) the node to 
another TSM server instance where expiration does not run or has no effect.

..Paul


At 05:05 PM 5/3/2013, Vandeventer, Harold [BS] wrote:
To all...
I created an RFE to affect File Spaces and Expiration.  The feature would 
cause expiration processing to be skipped for a file space that has been 
selected.

It's RFE ID 33395 if you care to review and vote.

Briefly, the idea is to immediately respond to a situation in which we cannot 
allow Expiration Processing to delete information that would otherwise be 
deleted.  This would be in response to a Litigation Hold demand from a 
legal issue at hand.  I've had three LitHold events in the past 24 months; 
they're not much fun and I'm not in the court room, just the TSM Server Admin.

Allowing a LitigationHold=Yes would avoid expiration on the File Space.

When the court case is lifted, simply revert to LitigationHold=No.  The 
next Expiration process would then begin

Re: TSM RFE regarding Litigation Hold

2013-05-08 Thread Paul Zarnowski
I believe the TSM client enforces the # of versions limit specified in the 
management class, but not the retention attributes (# of days to keep inactive 
versions).  Only the TSM Expiration process will purge inactive objects when 
they reach their Retain limits (retextra, retonly).  If your management class 
is setup to retain 3 versions, and 3 versions already exist when the TSM client 
tries to backup a 4th, I believe the TSM client will roll off the oldest 
version making it unrestorable (and unquery-able).  Whether the database 
entries are actually purged at that point, or if that doesn't happen until 
expiration runs, doesn't really matter - you won't be able to restore that 4th, 
oldest version.

If you were not running Expiration before, then you would not have been purging 
inactive objects that exceeded their retention limits.

..Paul

At 12:33 PM 5/8/2013, Vandeventer, Harold [BS] wrote:
On the point by Wanda regarding when files are lost.

I was just visiting with one of my co-workers that built our original TSM 
environment several years ago;  IBM was here to help.

It was observed that storage capacity was filling quickly, even though policy 
was set to a few versions and some number of days for the last version.

IBM reviewed the setup and observed the Field Engineer had forgotten to have 
us run expiration; so each node had dozens of versions and long ago deleted 
files.

That makes it sound like the backup event actually marks a file to be deleted 
from the DB, and that a later expiration process does the actual removal.

It apparently took several hours to get through the expire inventory processes 
to remove all the old files.

In my case, my need was/is to make sure nothing is lost for selected file 
spaces on 5 nodes.  Expiration of other nodes, or even some file space of 
these 5, should proceed as normal, allowing to recover storage space.

However it happens under the cover, hopefully a simple checkbox would make 
it a very quick and simple task avoiding the management of alternate domains, 
etc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Wednesday, May 08, 2013 10:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

Just want to clarify something for people who haven't dealt with this before.
It depends on what you mean when you say stop expiration.

Suppose you have the copy group limit set to 30 versions.
If the client is still backing up, the 31st version will still roll off and be 
not-restorable, no matter whether you run the command EXPIRE INVENTORY or not.

If you have the copy group limit set to 30 days, the inactive versions will 
still roll off and be not-restorable, whether you run the command EXPIRE 
INVENTORY or not.

So it depends on what you mean by stop expiration.
The only way to keep versions from expiring, is to change the copy group 
settings to NOLIM, whether in the current domain or a new domain or a new 
server.

The only thing you can do by not running expire inventory, is to prevent 
yourself getting back DB space and scratch tapes, as the tape %utilization 
values won't get updated.  (I've always thought the command EXPIRE INVENTORY 
should be renamed to DBCLEANUP, as it doesn't really affect expiration of 
files.)

Of course a set of data created on external sequential media (via create 
backupset or export) isn't mapped by the DB and therefore isn't subject to 
rolling off.

W


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Tuesday, May 07, 2013 3:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

Great ideas Paul I'm preparing to build the alternate server without 
expiration approach as soon as I can scare up some resources.

I'll look at the alternate Domain approach also.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Tuesday, May 07, 2013 12:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM RFE regarding Litigation Hold

We deal with a variety of types of litigation hold here, as well.  What you 
can do now, easily, is to setup a parallel policy domain (i.e., LITHOLD) that 
has all the same management classes, but different retention policy (i.e., 
retain forever).  Then, to avoid expiration you just have to do this:

UPDATE NODE nodename DOMAIN=LITHOLD

This works if you have all the same management classes defined in LITHOLD that 
you had defined in the original domain.  You can move the node back and forth 
between domains as needed.  If LITHOLD is missing a management class, then 
retention would be controlled by the grace period definitions of the domain 
- something you'll probably want to avoid.

No changes needed on the client side since you're not changing management 
class names, just their attributes.

If you

Re: TSM RFE regarding Litigation Hold

2013-05-07 Thread Paul Zarnowski
We deal with a variety of types of litigation hold here, as well.  What you can 
do now, easily, is to setup a parallel policy domain (i.e., LITHOLD) that has 
all the same management classes, but different retention policy (i.e., retain 
forever).  Then, to avoid expiration you just have to do this:

UPDATE NODE nodename DOMAIN=LITHOLD

This works if you have all the same management classes defined in LITHOLD that 
you had defined in the original domain.  You can move the node back and forth 
between domains as needed.  If LITHOLD is missing a management class, then 
retention would be controlled by the grace period definitions of the domain - 
something you'll probably want to avoid.
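
If you don't already have such a domain, a minimal sketch of building it (names are illustrative; repeat the UPDATE COPYGROUP for each management class you carry over):

   COPY DOMAIN STANDARD LITHOLD
   UPDATE COPYGROUP LITHOLD STANDARD STANDARD STANDARD TYPE=BACKUP VEREXISTS=NOLIMIT VERDELETED=NOLIMIT RETEXTRA=NOLIMIT RETONLY=NOLIMIT
   ACTIVATE POLICYSET LITHOLD STANDARD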

No changes needed on the client side since you're not changing management class 
names, just their attributes.

If you have associated a schedule with the node, then you'll need to have 
copies of the schedules in LITHOLD and re-associate the node with the schedule 
in the LITHOLD domain (which can be defined the same).

We also deal with other types of litigation holds that require us to take a 
snapshot of the data.  For this, we simply export (a copy of) the node to 
another TSM server instance where expiration does not run or has no effect.

..Paul


At 05:05 PM 5/3/2013, Vandeventer, Harold [BS] wrote:
To all...
I created an RFE to affect File Spaces and Expiration.  The feature would 
cause expiration processing to be skipped for a file space that has been 
selected.

It's RFE ID 33395 if you care to review and vote.

Briefly, the idea is to immediately respond to a situation in which we cannot 
allow Expiration Processing to delete information that would otherwise be 
deleted.  This would be in response to a Litigation Hold demand from a legal 
issue at hand.  I've had three LitHold events in the past 24 months; they're 
not much fun and I'm not in the court room, just the TSM Server Admin.

Allowing a LitigationHold=Yes would avoid expiration on the File Space.

When the court case is lifted, simply revert to LitigationHold=No.  The 
next Expiration process would then begin the delete process as is normal.

The feature would avoid the complexity of assigning a no expire management 
class to the node and trying to later revert to a more typical class.

Please take a look at the RFE, and cast a vote if you feel it's a valuable 
feature.

Thanks.

Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services
STE 751-S
910 SW Jackson
(785) 296-0631




--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: ? Best TSMv6 DB Config: 4 * 300GB or 6 * 200GB (LUN=PV=LV=FS) ?

2013-04-17 Thread Paul Zarnowski
I would say if you think you might grow it, start with 4 LUNs, not 6.  I think 
I heard somewhere that fewer DB volumes are better and too many can be a 
problem.  I have no experience with this.  Most of our databases are on 2-3 
volumes.

At 02:47 PM 4/17/2013, James R Owen wrote:
Seeking your experienced advice or reasoned opinion:

We have a TSMv5.5.7 service with very fragmented 500GB DB @ 76% utilized
but only 8% reduceable.  As soon as possible we will start TSMv5-v6
migration to new 1200GB DB and 200GB Log.   Another TSMv6 service with
800GB DB @ ~50% utilized performs well using 4 * 200GB LUNs = PVs = LVs
= JFS2 FileSpaces  (with 1 each per 200GB LUN on fast NetApp disk.)

For this TSMv5-v6.3.3 conversion, using same fast disk, which TSMv6 DB
config is better practice?

   a)   6 * 200GB LUNs  (LUN = PV = LV = FileSpace)
   b)   4 * 300GB LUNs  ...

We might eventually need to incrementally grow this TSMv6 DB up to twice
initial size adding similar sized LUNs!

--
jim.o...@yale.edu   (w#203.432.6693, c#203.494.9201, h#203.387.3030)


--
Paul Zarnowski                  Ph: 607-255-4757
Manager of Storage Services     Fx: 607-255-8521
IT at Cornell / Infrastructure  Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: 1 GBit FC sufficient for tape library?

2013-02-12 Thread Paul Zarnowski
Yes, correct that it's rated at that max speed.  But if you don't have a pipe 
big enough to handle the max speed, the other question is how small can the 
pipe be before it becomes problematic and unusable.  I just checked, and 
according to Fuji, an IBM LTO4 drive will speed match down to 30MB/sec, which 
means you will still be able to get some useful work done with a smaller pipe.

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDIQFjAA&url=http%3A%2F%2Fwww.fujifilmusa.com%2Fshared%2Fbin%2FLTO_Overview.pdf&ei=i6IaUYuHGqn-0gHlmIGwDQ&usg=AFQjCNFcZqFcjcsiAWUCkWiXcmVwqbUgJQ&bvm=bv.42261806,d.dmQ&cad=rja

..Paul

At 02:38 PM 2/12/2013, Shawn DREW wrote:
LTO4 is rated at 120MB/s max speed, which comes out to just under 1gbit/sec 
(1gbit/s = 125MB/s).  I normally reserve 1gbit for each LTO4 drive.  (i.e. 4 
drives on a 4gb HBA)
I do see 110+MB/s streaming on those guys.  (using topas -T on AIX, so not 
sure about the accuracy)


Regards,
Shawn

Shawn Drew
 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Tuesday, February 12, 2013 12:04 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] 1 GBit FC sufficient for tape library?

 Another thought:  You really want to avoid the LTO4 drives dropping out of
 streaming mode, even if 1Gb/s is enough bandwidth to theoretically move
 the amount of data you need to move.  I believe LTO4 will operate down to
 30-40MB/s.  Worth checking, since I'm not sure, but I'm pretty sure you can
 drop as low as 40MB/s and still stay in streaming mode.  Once you drop out of
 streaming mode, your throughput will go in the dumper, and you won't get
 anything close to 1Gb/s throughput.

 At 10:15 AM 2/12/2013, Michael Roesch wrote:
 Hi Stefan,
 
 one quick question: what are the numbers you are using for your
 calculation?
 
 Did you convert 240 MB/s into Gigabit/s?
 
 Thanks
 
 Regards,
 Michael
 
 
 
 On Tue, Feb 12, 2013 at 1:01 PM, Stefan Folkerts
 stefan.folke...@gmail.comwrote:
 
  Yes, two LTO4 drives would not be happy at all behind a 1Gb/s HBA,
  even two 1Gb/s HBA's are not enough to fully utilize the drives.
  I would imagine you could also run into driver/firmware issues with
   this combination since it's a very strange one but then again it
  would probably work just slow.
 
 
  On Tue, Feb 12, 2013 at 12:34 PM, Michael Roesch
  michael.roe...@gmail.comwrote:
 
   Hi Stefan,
  
   the drives are LTO4 ones. So the HBA would be the bottleneck
  
  
   On Tue, Feb 12, 2013 at 11:36 AM, Stefan Folkerts 
   stefan.folke...@gmail.com
wrote:
  
If it is LTO1 or LTO2 you would be OK, if it is LTO3 or higher
    you are limiting your drives with your HBA.
   
   
On Tue, Feb 12, 2013 at 10:39 AM, Michael Roesch
michael.roe...@gmail.comwrote:
   
 Hi all,

 we have an old HP MSL 6030 with two drives that share one 1
 GBit FC
   port.
  Would that be enough to use with TSM? I have a feeling that says no,
  but I couldn't find any minimum FC requirements.
 Anyone having more info? Maybe even an IBM technote?

 Thanks

 Regards,
 Michael Roesch

   
  
 


 --
 Paul Zarnowski                         Ph: 607-255-4757
 CIT Infrastructure / Storage Services  Fx: 607-255-8521
 719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu




--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Tape library possible replacement - push/pull

2013-02-04 Thread Paul Zarnowski
  
   Never having dealt with any tape library other than the 3494, I am
   trying to collect as much info as I can on doing a swap-out.
  
   Is it as simple as defining the new library, moving the drives/paths
   to it,  changing the devclass to point to the new library, checkin
   everything (scratch/private)??  Am I missing anything?
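
   That is essentially the sequence, in server commands (a sketch; library,
   drive, path, and devclass names here are hypothetical):

     define library newlib libtype=scsi
     define path server1 newlib srctype=server desttype=library device=/dev/smc0
     define drive newlib drive1
     define path server1 drive1 srctype=server desttype=drive library=newlib device=/dev/rmt0
     update devclass ltoclass library=newlib
     checkin libvolume newlib search=yes checklabel=barcode status=scratch
     checkin libvolume newlib search=yes checklabel=barcode status=private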
  
   --
   *Zoltan Forray*
   TSM Software & Hardware Administrator Virginia Commonwealth
   University UCC/Office of Technology Services zfor...@vcu.edu -
   804-828-4807 Don't be a phishing victim - VCU and other reputable
   organizations will never use email to request that you reply with
   your password, social security number or confidential personal
   information. For more details visit
   http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: TS3500 test environment

2013-01-09 Thread Paul Zarnowski
On TS3500s, I believe the library control interface for a partition is assigned 
to one of the tape drives in that partition.  Thus, with no drives, there can 
be no control of the partition.  I believe the answer is that this is not 
possible - you must have at least one drive assigned to each partition.

..Paul

At 11:25 AM 1/9/2013, Thomas Denier wrote:
I am in the process of setting up a test environment for our Version 6
TSM servers. The servers run under zSeries Linux and are currently at
the 6.2.2.0 level. The main motivation for setting up a test environment
at this time is to support an upgrade in the near future.

Our production environment uses a TS3500 tape library with the ALMS
feature. We will need tape infrastructure available to the test
environment for some tests, but we do not wish to have any tape drives
permanently attached to the test environment. Is it possible to have a
logical library remain in existence with tape volumes assigned but with
no tape drives assigned?

Thomas Denier
Thomas Jefferson University


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: TS3500 test environment

2013-01-09 Thread Paul Zarnowski
There is a dynamic drive sharing capability, but be aware that you cannot use 
it if you are also using drive encryption.

At 05:01 PM 1/9/2013, Remco Post wrote:
You need at least one drive in your alms partition with a control path, but, 
that drive can (I think) also be in another alms partition. Just make sure 
it's only used actively in either PR or in Test at one time.

--

Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl
+31 6 24821622



Op 9 jan. 2013, om 17:25 heeft Thomas Denier 
thomas.den...@jeffersonhospital.org het volgende geschreven:

 I am in the process of setting up a test environment for our Version 6
 TSM servers. The servers run under zSeries Linux and are currently at
 the 6.2.2.0 level. The main motivation for setting up a test environment
 at this time is to support an upgrade in the near future.

 Our production environment uses a TS3500 tape library with the ALMS
 feature. We will need tape infrastructure available to the test
 environment for some tests, but we do not wish to have any tape drives
 permanently attached to the test environment. Is it possible to have a
 logical library remain in existence with tape volumes assigned but with
 no tape drives assigned?

 Thomas Denier
 Thomas Jefferson University


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: 6.3.3.000 server wont HALT

2012-12-04 Thread Paul Zarnowski


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: 6.3.3.000 server wont HALT

2012-12-03 Thread Paul Zarnowski
Can anyone say whether this problem is limited to Linux, or does it exhibit 
itself on other server platforms?  Thanks.

At 09:34 AM 12/3/2012, Andrew Raibeck wrote:
Hi Zoltan,

Looks like we have one or two other customers reporting this. My
recommendation would be for you to open a PMR. Optionally, if you want to
be as pro-active as possible, perform the following steps in advance (this
is the current set of doc being requested for this issue):

1. Stop TSM, kill if necessary.

2. Start dsmserv in the foreground.

3. Wait for TSM to come completely up.

4. In the TSM server console, enable tracing by issuing:

trace enable ADM DB THREAD TM ADDMSG
trace begin /tmp/tsmhalttrace.out

(for the trace begin command, specify whatever directory and trace file
name if you want)

5. Issue the HALT command on the TSM server console

6. Wait 10 minutes.

7. Open a root terminal on the system where TSM is running and issue the
following commands (and save the output):

ps -ef | grep dsmserv
ps -ef | grep db2

8. Locate the process ID (PID) for dsmserv AND db2sysc from the ps output.

9. Issue the pstack command against the dsmserv PID and save the output,
e.g.:

$ procstack 1234

10. Issue the pstack command against the db2sysc PID and save the output,
e.g.:

$ procstack 1235

11. Wait 10 minutes.

12. Repeat steps 7, 8, 9, and 10.  Save the output.

13. Repeat steps 7, 8, 9, and 10 again, saving the output.

14. Copy and save all of the on-screen output from the TSM console.

15. Restart TSM.

16. Login as instance owner, issue db2support -d tsmdb1 -c -g -s, save
the db2support.zip generated
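
(Steps 7 through 13 collapse into a small loop; a sketch, using pstack on
Linux or procstack on AIX, and assuming single dsmserv/db2sysc processes:)

   for pass in 1 2 3; do
     ps -ef | grep [d]smserv > /tmp/ps.dsmserv.$pass
     ps -ef | grep [d]b2     > /tmp/ps.db2.$pass
     pstack $(pgrep -x dsmserv) > /tmp/stack.dsmserv.$pass
     pstack $(pgrep -x db2sysc) > /tmp/stack.db2sysc.$pass
     [ $pass -lt 3 ] && sleep 600   # ten minutes between passes
   done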

Once you have the PMR created, you can send in the doc that you collected:

1.The TSM server console output

2. The TSM server trace file

3. The three iterations of the ps output

4. The three iterations of pstack output against dsmserv

5. The three iterations of pstack output against db2sysc

6. The db2support.zip file

Best regards,

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Product Development
Level 3 Team Lead
Internal Notes e-mail: Andrew Raibeck/Hartford/IBM@IBMUS
Internet e-mail: stor...@us.ibm.com

IBM Tivoli Storage Manager support web page:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2012-12-03
08:45:17:

 From: Zoltan Forray zfor...@vcu.edu
 To: ADSM-L@vm.marist.edu,
 Date: 2012-12-03 08:55
 Subject: Re: 6.3.3.000 server wont HALT
 Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu

 This is now becoming a consistent / persistent problem.  I had to kill -9
 to stop the dsmserv process.  I restarted the server (via service ..
  start) and there didn't seem to be any damage done.

 However, attempting to stop/halt it, again, produced the same result -
 dsmserv using 200% CPU and after 2-hours I had to kill -9.

 So, obviously there are big enough changes in 6.3.3 vs 6.3.2, to cause
 problems like this, since none of my 6.3.x or 6.2.x servers exhibit
 this behavior.

 Any suggestions on how to diagnose this issue before I contact IBM and
 open a PMR?


 On Thu, Nov 29, 2012 at 2:04 PM, Zoltan Forray zfor...@vcu.edu wrote:

  Just did my first install/conversion of a 6.2.3 TEST server to
6.3.3.000
  (RH Linux)
 
  While the install and startup went fine, it won't HALT.
 
  After the install/upgrade, I got in via dsmadmc just fine.  Checked the
  actlog - saw all the schema changes/upgrades.  Updated/registered the
  licenses and then issued HALT.  Got the usually warning and said YES.
 
  Now it has been sitting for 25-minutes since the halt.
 
  Can't get back in via dsmadmc.
 
  Top shows dsmserv using 200% CPU.
 
  I tried standard kills, with no luck.   I hate to do a kill -9 but will
if
  I don't have a choice.
 
  What the heck is it doing?  Should I wait longer or just kill it with
  extreme prejudice?
 
  --
  *Zoltan Forray*
  TSM Software  Hardware Administrator
  Virginia Commonwealth University
  UCC/Office of Technology Services
  zfor...@vcu.edu - 804-828-4807
  Don't be a phishing victim - VCU and other reputable organizations will
  never use email to request that you reply with your password, social
  security number or confidential personal information. For more details
  visit http://infosecurity.vcu.edu/phishing.html
 
 


 --
 *Zoltan Forray*
 TSM Software  Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html



--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Who is performing Client Based Encryption and Compression

2012-11-28 Thread Paul Zarnowski
I am not aware of any way to detect client based encryption via a server-based 
query.  However, you can see this from a client-side CLI query (DSMC Q BACKUP), 
so there must be information somewhere that reflects this.

For anyone considering SUR pricing (capacity-based), you might consider looking 
at TSM-based deduplication instead of VTL-based deduplication, as your TB 
license requirements will be lower.  Or at least, that's what I was told by an 
IBMer describing the SUR pricing model to me.  Granted, you would need a more 
powerful TSM server, but you could offset that cost by purchasing disk less 
expensively than a VTL vendor would charge.

..Paul
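
Pulling the suggestions below together (a sketch; the admin credentials are
placeholders, and the COMPRESSION column values are worth verifying first):

   # nodes whose definition forces or allows client compression
   dsmadmc -id=admin -password=xxx -dataonly=yes "select node_name, compression from nodes where compression in ('CLIENT','YES')"
   # sessions that actually reported compressed bytes (the actlog approach)
   dsmadmc -id=admin -password=xxx -dataonly=yes "query actlog begindate=today-7 msgno=4968"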

At 04:13 PM 11/28/2012, Harris, Chad wrote:
Thank you all for your sage advice.

I am able to find the servers that are using compression, unfortunately 
though, some of my client Admins have let it slip that they are not always 
using compression when they are setting up encryption, despite our best 
efforts to guide them that way.

Anyone else know a way to find nodes only performing client encryption?

Thanks again,
Chad

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@vm.marist.edu] On Behalf Of Bill 
Boyer
Sent: Wednesday, November 28, 2012 12:01 PM
To: ADSM-L@vm.marist.edu
Subject: Re: [ADSM-L] Who is performing Client Based Encryption and Compression

You could start by getting a list of nodes that are set to either 
COMPRESSION=CLIENT or YES and those with DEDUPLICATION=CLIENTORSERVER. The 
default is SERVERONLY. Those would be good candidates to start with. You can't 
do client-side dedup without setting the node attribute.

Bill
When the pin is pulled, Mr. Grenade is NOT your friend! - USMC

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Ehresman,David E.
Sent: Wednesday, November 28, 2012 9:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Who is performing Client Based Encryption and Compression

I think q actlog begindate=today-ndays msgno=4968 is more efficient.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Tuesday, November 27, 2012 5:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Who is performing Client Based Encryption and Compression

For compression:

q actlog begindate=today-ndays search=ane4968

Non-zero values mean the client is compressing.




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Chad
Sent: Tuesday, November 27, 2012 5:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Who is performing Client Based Encryption and Compression

Fellow TSM Admins,

We are in the process of bringing VTLs into our TSM Environment.  In order to 
take full advantage of deduplication features on the VTL we need to go after 
the clients that are performing client based encryption and compression.  With 
that in mind, does anyone know an easy way to tell which clients are using 
these features?

Thanks,
Chad Harris


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: ANR1534I and DEDUPREQUIRESBACKUP ?

2012-09-28 Thread Paul Zarnowski
Wanda,
I understand your dilemma.  We are working on deploying dedup now, and are 
trying to figure this out too.  Like you, we want to have fast and slow pools, 
but we want a safety valve in case the fast pool fills up.  We haven't figured 
out how to do this, short of allocating extra space to try to ensure that a 
safety valve is never needed.

Let us know if you figure something out!

..Paul
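
One ordering that satisfies DEDUPREQUIRESBACKUP before any data movement (a
sketch; the pool names are from the note below, the copy pool is hypothetical):

   backup stgpool fastdedup copypool    # protect the deduplicated extents first
   migrate stgpool fastdedup lowmig=0   # now migration (or a move data) may reduce them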

At 12:18 AM 9/25/2012, Prather, Wanda wrote:
Need some help understanding the mechanics here, TSM 6.3 on Windows server 
with DEDUP.

I have a TSM file pool called FASTDEDUP with deduplication on, and NEXTSTGPOOL 
points to a second dedup filepool called SLOWDEDUP.
DEDUPREQUIRESBACKUP is set to yes.

So the first pool FASTDEDUP ran out of space, and BACKUP STGPOOL had not run, 
so it couldn't run reclamation with dedup or migrate to SLOWDEDUP.  That I 
understand.

I tried a MOVE DATA from FASTDEDUP to SLOWDEDUP, and got ANR1534I (below).  I 
figured out that's WAD, because the doc says move data does cause data 
reduction, which is forbidden by DEDUPREQUIRESBACKUP until after the backup 
stgpool has run.

So then I tried a MOVE DATA from FASTDEDUP to a tapepool, and still got
ANR1534I.  Why is that?  I'm asking to move data that hasn't been deduped to a 
non-dedup pool.   If this is WAD, can somebody explain the rationale?



ANR1534I

Process <process ID> skipped <Num Files> deduplicated files on volume <Volume
name> because copies of the files were not found.


Wanda Prather  |  Senior Technical Specialist  | wanda.prat...@icfi.com  |  
www.icfi.com
ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 | 
410.539.1135 (o)


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: snapshotroot for scheduled backups

2012-08-08 Thread Paul Zarnowski
Thanks Allen.

At 10:05 AM 8/8/2012, Allen S. Rout wrote:
You're in the position that the snapshot you want to use has a stable
name.  There are folks who have snapshots named related to the date of
consistency-point.

As it turns out, NetApp / nSeries snapshots do have predictable/static names.

If you're already preschedcmding, you might use that step to calculate
the command lines to run the per-filespace dsmc incr lines, drop
them in a temporary script, and then run that as a COMMAND schedule
instead of an INCREMENTAL sched.

At that point, I'm not sure what the value would be of using the TSM scheduler, 
as opposed to (e.g.) cron.

Thanks for taking the time to respond.  I'm thinking that if I want to do this, 
I'll probably abandon the TSM Scheduler in favor of home-grown scripting.  But 
I may just wait to see if snapdiff gets supported on vFilers, at which point 
this issue becomes moot for me.

..Paul
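
If it does come to home-grown scripting, the per-volume loop is small (a
sketch; the volume names are hypothetical, nightly.0 being the usual NetApp
snapshot name):

   for vol in vol1 vol2 vol3; do
     dsmc incremental /nas/$vol -snapshotroot=/nas/$vol/.snapshot/nightly.0
   done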

--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: snapshotroot for scheduled backups

2012-08-08 Thread Paul Zarnowski
Allen,

No, I'm sure you could put -snapshotroot=xxx in the options argument to the 
client scheduler.  But your response made me realize that I left out one 
relevant point:  We are backing up several NAS volumes via the scheduler, not 
just one.  We use the pre- and post- scheduler exits to mount and unmount those 
volumes.

So, we would need to specify a snapshotroot 'target' for each volume that we 
want to backup.  It seems this is what 'include.fs' was designed for, to 
specify fs-specific options.

..Paul


At 09:29 AM 8/8/2012, Allen S. Rout wrote:
On 08/07/2012 03:20 PM, Paul Zarnowski wrote:

It seems that the snapshotroot option would be perfect for doing
this, except for the fact that it only seems to work for 'selective'
or 'incremental' backups run from the command line.  I don't see a
way to do this for scheduled backups.

Paul, does this mean that when you put -snapshotroot in the 'Options'
argument to the client schedule, it doesn't work?

I notice in the docs that it seems pedantic about only permitting
snapshotroot when you're specifying one (and only one) filespace
target.  It may be that you can't leave the filespace implicit in
e.g. a DOMAIN statement, and you have to do something like


def sched [domain] [name] [ action=incr ]  OBJ=/nasfs
OPT='-snapshotroot=/nasfs/.snapshot/nightly.0'


- Allen S. Rout


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Node replication vs. copy stgpools...

2012-08-08 Thread Paul Zarnowski
I haven't done node replication yet.

My opinion is that node replication and copy stgpools do different things.  
When I first saw node replication, I was thinking I could get rid of my copy 
stgpools (stored in a remotely attached tape library), but then I realized that 
it wasn't that simple.  copy pools provide protection against media failure.  
node replication provides protection against site failure.  It would be nice if 
you could configure node replication in a way that would make copy stgpools 
superfluous, but I don't think you can (at least, not without making some 
compromises).

I am intrigued by the possibility of using node replication to migrate a TSM 
server from one architecture to another.  We have archive data mixed in with 
our backup data, so just starting with fresh backups is a bit of a problem for 
us.  Node replication may provide a way to do this.

..Paul
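
For anyone experimenting, the 6.3 setup is only a few commands on the source
server (a sketch; server and node names are hypothetical):

   set replserver targetsrv
   update node mynode replstate=enabled
   replicate node mynode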

At 02:35 PM 8/8/2012, Allen S. Rout wrote:
Howdy.

I'd be interested in hearing anyone who's done node replication
discussing their experiences.   I'm approaching the jump from v5 to v6,
and am musing about my configuration to be.

I'm currently doing virtual volume copy pools, which have protected me
adequately, but replicating everything by node sounds way sexier.

But if I do that, do I have something which can function like a copy
pool?I'm guessing that, if I lost a volume at the target location
and DESTROYed it, I could re-replicate to it.   Is there any way to
reverse the streams and repopulate a volume on the source server?

I could see something like locking the 'source' node, reversing the
replication config so you can populate the 'source' from the 'target',
then turning around and re-pointing the rep config and unlocking the
source, but that's a little itchy;  it's also complex to do this process
for every node with occupation on a volume.



I'm moving architecture _and_ version (AIX to Linux, sigh) so I figure
since I'll be recreating my whole infrastructure one step at a time,
it's a GREAT time to rethink everything.

- Allen S. Rout


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


snapshotroot for scheduled backups

2012-08-07 Thread Paul Zarnowski
NetApp / nSeries backup question (non-NDMP backup):

Has anyone else wanted to use the snapshotroot option for scheduled backups run 
from a NAS client?  We run scheduled backups from NAS clients for shares on a 
vFiler.  snapdiff backups are not supported on vFilers, so we would like to do 
the backup against a snapshot, rather than from the active share.  This will 
create a more atomic backup copy which is desirable especially for larger 
shares.  It seems that the snapshotroot option would be perfect for doing this, 
except for the fact that it only seems to work for 'selective' or 'incremental' 
backups run from the command line.  I don't see a way to do this for scheduled 
backups.

It would be nice if you could put something like:

include.fs /nas-filesystem/.snapshot/nightly.0 -snapshotroot=/nas-filesystem/
in the dsm.sys or dsm.opt file.

Has anyone figured a way around this, and/or are there other folks who have 
wished for this?

..Paul


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Mega-Exchange backup

2012-05-15 Thread Paul Zarnowski
 This is what we do to spread the load.  

..Paul
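
Spreading the fulls amounts to one COMMAND schedule per weekday (a sketch;
the domain, schedule, and storage-group names are hypothetical, and tdpexcc
is the TDP for Exchange CLI -- verify its multi-group syntax at your level):

   define schedule expol exch_full_mon action=command objects='tdpexcc backup SG1,SG2 full' dayofweek=monday starttime=20:00
   define schedule expol exch_full_tue action=command objects='tdpexcc backup SG3,SG4 full' dayofweek=tuesday starttime=20:00
   # ...and so on through the week; incrementals cover the other groups nightly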

On May 14, 2012, at 9:55 PM, Prather, Wanda wprat...@icfi.com wrote:

 I have a customer with a 1.2 TB Exchange 2007 DB, expected to grow to 3 TB in 
 the next 4 months.
 Fulls are already a problem (14 hours).
 Is there any Exchange-related reason I can't do the fulls for a couple of 
 storage groups on Monday, the next 2 on Tuesday, next 2 on Weds. etc and 
 spread them out?
 
 Thanks ..


Re: Mega-Exchange backup

2012-05-15 Thread Paul Zarnowski
We enabled jumbo frames just for our Exchange backups, and that helped.  TSM 
server is a p740, 2 10Gb NICs.  Backups used to go direct to LTO4 tape which 
worked ok, but now go to FILE (7200 SAS-NL).  We are seeing network transfer 
rates (ANE4966I) of 66 to 180 MB/s, depending on the data size.  Larger 
transfers are in the 60-90 MB/s range, as Charles suggested.  We have 10 
Exchange servers, all feeding data to our TSM server somewhat in parallel.

HTH
..Paul

At 09:43 AM 5/15/2012, Hart, Charles A wrote:
We also stagger fulls / incrs every other day. If you're backing up 1.2TB
in 14hrs you're only achieving 22MB/s, so you may have a performance
opportunity, such as checking disk I/O on your backup server as well as
the Exchange host for constraints. Multi-stream your backups, one thread
per stggroup, and check your network links; ensure other Exchange
housekeeping tasks aren't running during the backup window. Assuming you
have a gig NIC for backups you should see 60-80MB/s with a solid cfg, as the
Exchange stggrps usually stream well.

Regards,

Charles


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Schaub, Steve
Sent: Tuesday, May 15, 2012 6:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Mega-Exchange backup

Wanda,
We do this now, along with running multiple concurrent SG backups.
Contact me offline and I'll send you a copy of our current Powershell
script.

Steve Schaub
Systems Engineer II, Windows Backup/Recovery
BlueCross BlueShield of Tennessee
steve_schaub AT bcbst.com
423-535-6574 (desk)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, May 14, 2012 9:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Mega-Exchange backup

I have a customer with a 1.2 TB Exchange 2007 DB, expected to grow to 3
TB in the next 4 months.
Fulls are already a problem (14 hours).
Is there any Exchange-related reason I can't do the fulls for a couple
of storage groups on Monday, the next 2 on Tuesday, next 2 on Weds. etc
and spread them out?

Thanks ..


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: AIX vs Linux 2012

2012-04-20 Thread Paul Zarnowski
When comparing systems, I would use benchmarks suitable for transactional DB 
systems.  Power7 systems have 4 VPUs per CPU, which really makes a difference.  
Also look at how many I/O cards you will need, etc.  A big factor is what 
you're most comfortable with.

At 12:34 PM 4/20/2012, Shawn Drew wrote:
I know this has been discussed in various forms over the years, but I'm
specifically wondering about the current state of hardware
I have a long history with TSM on AIX.  It's stable, familiar and an I/O
powerhouse.  Our Unix admins also favor AIX for serious, heavy-duty
workloads.

We are looking at refreshing our largest P570 now.  I discussed this with
our unix admin, who also has a very high opinion of IBM.  He said that
current SandyBridge implementations can really make Linux a contender in
terms of I/O and CPU performance.  And at about 1/7th the cost.

I normally dismiss Linux because I was under the impression that you would
need many inexpensive servers to equal one P-series for I/O.  It wouldn't
be worth it with the added management of dealing with multiple TSM
Servers.  Now with DB2, TSM seems to be going more towards the monolithic
direction if anything.
But If I can get a single 32-core, 128GB ram intel server that can
actually push multiple 10gbe and 8gb FC interfaces I am finding Linux a
little more attractive.

Does anyone have any stories,  gotchas, or opinions with replacing a
P-series host with a modern Intel system 1 for 1?


Regards,
Shawn

Shawn Drew




--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Objects Assigned vs. Your Database.

2012-04-16 Thread Paul Zarnowski
At 05:08 PM 4/16/2012, Prather, Wanda wrote:
But from experience at a customer where we had similar problems (Win2K8 
clients were taking 8 hours for the incremental systemstate backup), the 
long-term solution is to get your clients to a V6.2+ TSM server.


I'll echo Wanda's observations.  We've been at 6.2 for awhile now, and it has 
solved a lot of these kinds of issues.  Expiration and DB backups fly now.  
System State backups (and expirations) are no longer the problem that they 
were.  I should mention that we did more than just upgrade to 6.2 - we also 
upgraded our server at the same time, so that might have helped a bit too.

..Paul



--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: File serving with TSM for Space Management

2012-04-10 Thread Paul Zarnowski
Thanks for everyone's feedback on my question.  Richard, your comments were 
just what I was looking for.  I'm looking at TSM/HSM and also an LTFS-based 
product from Crossroads Systems (Strongbox) that looks interesting.

If anyone is sharing an HSM-based filesystem via Samba, I'd like to hear your 
experiences on that.

..Paul


At 05:02 AM 4/10/2012, Richard Sims wrote:
On Apr 9, 2012, at 2:22 PM, Paul Zarnowski wrote:

 Does anyone who is using TSM for Space Management (aka HSM) know if it can 
 be used to share out filesystems using NFS (or CIFS)?  Or are there timeout 
 problems that make this unworkable?

You can, and we do it via NFS - but it can be a nightmare at times. Both HSM 
and NFS operate as kernel extensions: any problems thus become very severe. We 
had someone who thought it a good idea to have an HSM file system NFS-mounted 
to an FTP server for data feeding. That resulted in a hung FTP server system 
whenever data was being written faster than it could be migrated on the HSM 
server. Over the past weekend we had a situation where a user thought it 
reasonable to copy a movie file larger than the 64 GB HSM file system into 
that area: the file system was wedged, and because it was NFS-served as well, 
NFSd was hung; and the incoming files could not migrate because they were in 
an open state, even after the writing process was killed off: we had to reboot 
the server (which in turn somehow incited the failure of an IBM RAID adapter
card and a marathon recovery effort that I'm just recovering from). A little 
known reality about HSM is that the space for a file must consist of 
contiguous blocks: you can have an HSM file system that is like 85% full as 
incited by a large file being written, and writing can proceed no further, 
because the file system is fragmented. Because of this, for file systems which 
get large files from users, we have an early morning job which forcibly 
migrates everything out of the file system to maximize free space.

But don't let me discourage you from using HSM...  :-)

 Richard Sims   enduring at Boston University 


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


File serving with TSM for Space Management

2012-04-09 Thread Paul Zarnowski
Does anyone who is using TSM for Space Management (aka HSM) know if it can be 
used to share out filesystems using NFS (or CIFS)?  Or are there timeout 
problems that make this unworkable?

Thanks.

..Paul


--
Paul Zarnowski                         Ph: 607-255-4757
CIT Infrastructure / Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: SNAPDIFF with CIFS on Linux supported?

2012-02-28 Thread Paul Zarnowski
I would be happy to be proven wrong on this, but I believe that snapdiff for 
CIFS is only supported on Windows.  snapdiff for NFS is supported on AIX and 
Linux.

BTW, that particular error message seems to be issued for a variety of reasons. 
 We have found it hard to figure out what the problem is at times, and we have 
an open PMR for this message right now.

..Paul

At 03:05 PM 2/28/2012, Stackwick, Stephen wrote:
What's in the can...

Obviously, Linux systems can mount CIFS shares, but does anyone have the 
SNAPDIFF option working? I keep getting:

ANS2831E Incremental by snapshot difference cannot be performed on
/mountpoint as it is not a NetApp NFS or CIFS volume.

It is too! The message kind of implies that CIFS is OK on Linux.

Steve

STEPHEN STACKWICK | Senior Consultant | 301.518.6352 (m) | sstackw...@icfi.com | icfi.com
ICF INTERNATIONAL | 410 E. Pratt Street Suite 2214, Baltimore, MD 21202 | 410.539.1135 (o)


--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Need some support for snapdiff RFE's

2012-02-14 Thread Paul Zarnowski
Grant,

The time stamps are not updated, because a snapdiff incremental does not 
deactivate deleted files.  You need to do periodic 'createnewbase' to do this, 
which walks the file system and updates the time stamps.  

..Paul

On Feb 14, 2012, at 12:32 AM, Grant Street gra...@al.com.au wrote:

 Hello
 
 Just thought I would plug two snapdiff RFE's(Request For Enhancement)
 that I created on the IBM developer works site.
 
 I am trying to garner some support for these in the hope that they will
 be implemented in the future.
 
 I have seen that some of you have had experience with Netapp snapdiff
 backups and you may have similar beliefs.
 
 Define snapdiff and other snap options on a per filesystem basis rather
 than per backup command
 When running an incremental backup be able to define the snapdiff or
 other snap option on a per filesystem basis rather than for the whole
 backup incremental job.
 
 http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=12858
 
 Snapdiff to update last backup fields in filespace data
 Could you please update the TSM snapdiff client to update the lastbackup
 information of filespaces when a query filespace f=d is issued. This
 would be the obvious behaviour and would be inline with the documented
 meaning of the columns
 
 http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=13145
 
 TIA
 
 Grant


Re: Need some support for snapdiff RFE's

2012-02-14 Thread Paul Zarnowski
Pete, et al,
My apologies for disseminating bad information, and thanks for the 
clarification.
..Paul

At 01:08 PM 2/14/2012, Pete Tanenhaus wrote:
Actually this really isn't correct, a snapshot differential backup will
detect deleted files and will expire them on the TSM server.

Periodic full progressive incremental backups are recommended because less
complete backups such as snapdiff,
incremental by date, and JBB can never be as comprehensive as a full
progressive given that a full progressive examines
every file on the local file system and every file in the TSM server
inventory.

Note that less complete implies that changes processed by a full
progressive might be missed by other less complete
backup methods, and this is the reasoning behind recommending periodic full
progressive incrementals.

JBB is somewhat better in that in makes every attempt to detect conditions
which indicate that the change journal is out of
sync with what has previously been backed up and to automatically force the
full progressive incremental backup instead
of requiring the user to manually schedule it.

For reasons that require a very detailed explanation (let me know if you
are interested), the current snapdiff implementation
doesn't have the robustness or resiliency that JBB does and therefore
really requires manually scheduling full progressive
incremental backups via the CreateNewBase option.
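
In practice that reduces to two flavors of the same command (a sketch; the
file system name is hypothetical):

   dsmc incremental /nas/vol1 -snapdiff                      # nightly
   dsmc incremental /nas/vol1 -snapdiff -createnewbase=yes   # periodic rebase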

Hope this helps ...

Pete Tanenhaus
Tivoli Storage Manager Client Development
email: tanen...@us.ibm.com
tieline: 320.8778, external: 607.754.4213

Those who refuse to challenge authority are condemned to conform to it


From: Paul Zarnowski p...@cornell.edu
To: ADSM-L@vm.marist.edu
Date: 02/14/2012 07:12 AM
Subject: Re: Need some support for snapdiff RFE's
Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu





Grant,

The time stamps are not updated, because a snapdiff incremental does not
deactivate deleted files.  You need to do periodic 'createnewbase' to do
this, which walks the file system and updates the time stamps.

..Paul

On Feb 14, 2012, at 12:32 AM, Grant Street gra...@al.com.au wrote:

 Hello

 Just thought I would plug two snapdiff RFE's(Request For Enhancement)
 that I created on the IBM developer works site.

 I am trying to garner some support for these in the hope that they will
 be implemented in the future.

 I have seen that some of you have had experience with Netapp snapdiff
 backups and you may have similar beliefs.

 Define snapdiff and other snap options on a per filesystem basis rather
 than per backup command
 When running an incremental backup be able to define the snapdiff or
 other snap option on a per filesystem basis rather than

Re: Isilon backup

2012-02-14 Thread Paul Zarnowski
Allen,

Thanks for your response.

I did not include your option 0, because I was thinking large file servers.

Your point about option 0 being potentially more viable if metadata is kept on 
SSD is interesting.  Does anyone have any first-hand experience with this that 
they can share?

Option 4 is admittedly expensive and in a different RTO class.  I included it 
because in some situations, it may indeed be the only viable option for offsite 
protection.  And as you say, if the replica can be on slower, cheaper disk, 
that brings the cost down some.

..Paul

At 09:26 AM 2/14/2012, Allen S. Rout wrote:
On 02/13/2012 11:46 AM, Paul Zarnowski wrote:


[...] I see the following major categorizations of how to protect
(large) file servers effectively.  Please feel free to comment on
this, as I'm looking to refine my view.

I should say that our protection goal would be to have a copy of
 data at our medical college campus, which is ~200+ miles away and
 accessible via a 10Gb WAN.

1. Backup using NDMP: breaks down as data size increases, because of
need for periodic full backups and lack of incremental forever.
However, probably fast recovery times from tape.  Most NAS products
support some version of NDMP.

2. Incremental file-level backup based on Journal Scanning, saving a
 walk of the filesystem.  Examples include Windows-based
 fileservers, and (I think) SONAS.

3. Incremental backup based on Snapshot differencing, again saving a
walk of the filesystem.  This would be NetApp, but not with
MultiStore vFilers - a problem for us.

4. Asynchronous replication to a second NAS.  Fast RTO, but also
expensive.

Options 1-3 use TSM.
Options 2 & 3 run faster and are more scalable than option 1, but likely have
longer RTO.
Option 4 is probably most expensive.


I'm going to assume

0. Conventional TSM backup via a proxy node.  Requires keeping shares
quite small (no more than a few TB), if you want anything approaching
daily incrementals.  Bottleneck appears to be metadata scan, walk of
tree.

I mention that because Isilon (and others?) are starting to offer
options that let you keep metadata on SSD, which may change the
maximum size for reasonable conventional incrementals.  We're
expecting to do a POC of Isilon before too long, I intend to examine
that.


-


As for comment, I want to amplify what you said about RTO: Your option
4 is in a completely different universe from the 0-3.  For all of 0-3,
the DR critical path includes an acquisition cycle.  I'm not sure our
hierarchy could get a PO out in a week if their hair were on
fire... yours may be better.

Option 4 has nearly immediate recovery, in degraded performance.  So,
it's more expensive, but it moves you to a totally different recovery
mode.

Our thinking abount isilon backups includes a remote Near-line unit,
that has relatively large, slow, cheap drives.  It would be a
replication target for several other units, giving it good efficiency
of scale.



- Allen S. Rout


--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Isilon backup

2012-02-13 Thread Paul Zarnowski
At 01:48 PM 2/12/2012, Prather, Wanda wrote:
NAS devices are closed operating systems, so you can't install a TSM client on 
them.

Depends on what you define as a NAS device.  I believe (not positive) that 
SoNAS has an integrated TSM client in it.  Some windows-based fileservers can 
use an on-board windows TSM client (and thus take advantage of JBB).

We don't have a lot of first-hand experience with NAS here, other than a netapp 
gateway (IBM N6210), but I've been reviewing a bunch of the technologies, with 
emphasis on how to protect them.  I see the following major categorizations of 
how to protect (large) file servers effectively.  Please feel free to comment 
on this, as I'm looking to refine my view.

I should say that our protection goal would be to have a copy of data at our 
medical college campus, which is ~200+ miles away and accessible via a 10Gb WAN.

1. Backup using NDMP:  breaks down as data size increases, because of need for 
periodic full backups and lack of incremental forever.  However, probably fast 
recovery times from tape.  Most NAS products support some version of NDMP.

2. Incremental file-level backup based on Journal Scanning, saving a walk of 
the filesystem.  Examples include Windows-based fileservers, and (I think) 
SONAS.

3. Incremental backup based on Snapshot differencing, again saving a walk of 
the filesystem.  This would be NetApp, but not with MultiStore vFilers - a 
problem for us.

4. Asynchronous replication to a second NAS.  Fast RTO, but also expensive.

Options 1-3 use TSM.
Options 2 & 3 run faster and are more scalable than option 1, but likely have
longer RTO.
Option 4 is probably most expensive.

..Paul
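
For option 1, the TSM side of an NDMP configuration is roughly (a sketch;
addresses, credentials, and names are hypothetical):

   define datamover nasnode type=nas hladdress=nas1.example.edu lladdress=10000 userid=ndmp password=xxx dataformat=ndmpdump
   backup node nasnode /vol/vol1 mode=full
   backup node nasnode /vol/vol1 mode=differential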



--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Detect client-level encryption from the TSM server?

2012-02-08 Thread Paul Zarnowski
Keith,

This is not something that the TSM admin controls, and it is not enabled by 
node.  The only way I know of to detect encrypted files is from the client-side 
DSMC CLI.  E.g., dsmc query backup  -detail should show you which files
are encrypted and using what encryption algorithm.  I do not think this will 
show you how the encryption keys are managed, however.

Note that if a file is backed up unencrypted, adding an include.encrypt rule 
to encrypt it does not automatically cause that file to be backed up again 
using encryption.  The addition of the encryption include is not recognized by 
TSM as a reason to backup the file.  We have had more than one user surprised 
by this.

Paul Zarnowski
Cornell University
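
A concrete form of the client-side check (the filespec here is hypothetical):

   dsmc query backup "/data/*" -subdir=yes -detail   # per-file detail includes encryption info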

At 03:52 PM 2/8/2012, Keith Arbogast wrote:
Can one detect from the TSM server whether client-level encryption is set on 
or off for each backup node? Inquiring security admins want to know.

With my thanks and best wishes,
Keith Arbogast
Indiana University


--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Excessive number of filling tapes...

2012-02-06 Thread Paul Zarnowski
Allen,

We opened a PMR that sounds like it might match your observations.  We have way 
too many volumes in what I call barely filling status, with no explanation of 
how they got that way.  APAR IC76192 was opened for this:
http://www-01.ibm.com/support/docview.wss?uid=swg1IC76192

..Paul
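
For anyone watching for the same symptom, a quick census from dsmadmc (a
sketch):

   select stgpool_name, count(*) from volumes where status='FILLING' group by stgpool_name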

At 04:17 PM 2/6/2012, Allen S. Rout wrote:
So, I've got an offsite machine which exists to accept remote virtual
volumes.  For years, now, the filling volumes have behaved in a way I
thought I understood.

The tapes are collocated by node.  There are about 20 server nodes which
write to it.

My number of filling volumes has rattled around 50-60 for years;  I
interpret this as basic node collocation, plus occasional additional
tapes allocated when more streams than tapes are writing at a time.  So
some of the servers have just one filling tape, some have two, and the
busiest of them might have as many as 6 (my drive count).

Add a little error for occasionally reclaiming a still-filling volume,
and that gives me a very clear sense of what's going on, and I can just
monitor scratch count.

Right now, I have 190 filling volumes.

None of them has data from more than one client.

I have some volumes RO and filling, and am looking into that, but it's
20 of them, not enough to account for this backlog.  Those are also the
only vols in error state.

I've been rooting through my actlogs looking for warnings or errors, but
I've never had occasion to introspect about how TSM picks which tape to
call for, when it's going to write.  It's always Just Worked.


Does this ring any bells for anyone?Any dumb questions I've
forgotten to ask?  I don't hold much hope for getting a good experience
out of IBM support on this.


- Allen S.Rout


--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Isilon backup

2012-02-02 Thread Paul Zarnowski
We are looking at Isilon too, along with NetApp, BlueArc, and SoNAS.  Backups 
are a real challenge as these get large in scale.  Some observations:

1. NDMP requires periodic full backups.  This generates a lot of backup 
traffic.  If you're going over a long-haul network, this can be an issue.  Note 
there are 3 versions of NDMP (or 4?).  Some go direct to tape, some allow you 
to go to TSM.  When purchasing a NAS with NDMP support, make sure you 
understand this if you're planning to use NDMP.

2. NetApp has snapdiff integration with TSM.  Very nice.  Avoids having to walk 
the filesystem, which is a big win.  Restores can take a long time.  That's an 
advantage of NDMP, I think.  Backups are done from NAS clients.

3. SoNAS has integrated TSM client, which (I think) uses a journal in GPFS 
which again saves TSM from walking the filesystem each time.  I'm pretty sure 
about this, but not positive.  Also allows extension via TSM/HSM I believe.  
Nice on paper - no experience with it yet.

4. Isilon - has backup accelerators.  Yes, an integrated TSM client would be 
nice (to whoever suggested this), but what you'd really want is something to 
save TSM from walking the filesystem.  I don't see this happening for the 
reason stated earlier - there's no partnership between EMC and IBM.

5. BlueArc - no experience or insight with this yet.  I think you'd either use 
NDMP or a NAS client, similar to Isilon.

We're looking at all of these (and have a NetApp), so I'm keenly interested in 
others' experiences.

..Paul


At 04:12 PM 2/2/2012, Huebner,Andy,FORT WORTH,IT wrote:
Anyone backup an Isilon array? Using 3592 tape drives?  The sales guys say it 
is just NDMP.
I am looking for just basic information (good, bad, Oh Smurf!).  One may be in 
our future.


Andy Huebner




--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Testing conversion of V5.5 to V6.3

2012-01-12 Thread Paul Zarnowski
Zoltan,

That's exactly what we did - test on a restored copy of a v5 database.  We used 
restore db, but a snapshot should also work.  If you have the resources, I 
suggest doing this testing as it should give you a better idea of what problems 
you might run into and how long it will take.  Doing these test runs allowed 
our upgrades to go very smoothly.

..Paul
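
The dry-run path is essentially the media method from the upgrade guide (a
sketch; parameter spellings and paths should be checked against your level):

   # on the test box, with the v5 server code plus the DSMUPGRD utilities:
   dsmupgrd preparedb                             # readies the restored v5 DB
   dsmupgrd extractdb manifest=/upg/manifest.txt devclass=...
   # then on the new v6.3 instance:
   dsmserv loadformat ...                         # format the empty v6 DB per your layout
   dsmserv insertdb manifest=/upg/manifest.txt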

At 09:55 AM 1/12/2012, Zoltan Forray/AC/VCU wrote:
I am going through the docs preparing to do my first test of converting a
V5 server to V6.3 on a test server/machine.

I am looking for details on EXACTLY what the DSMUPGRD PREPAREDB does.

Since this is a test/dry run, I don't want to do this against a live
server,  just a copy of it's databases.

Can I just restore the V5 DB to the test machine, install the DSMUPGRD
utility and run it against the restored DB?

How about from a DBSNAPSHOT of this database we do to an offsite TSM
server?


Zoltan Forray
TSM Software  Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


--
Paul Zarnowski                         Ph: 607-255-4757
Manager, Storage Services              Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801 Em: p...@cornell.edu


Re: Windows Batch for Admin Jobs.

2011-12-07 Thread Paul Zarnowski
Dating myself, but Regina Rexx is another option. Very small (~1MB exe),
stable, flexible and very transportable across OS platforms.

..Paul
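
Whatever the wrapper language, the batch body usually reduces to dsmadmc
driving a macro (a sketch; credentials and file names are hypothetical):

   # daily.mac might contain:
   #   backup db devclass=ltoclass type=full
   #   backup stgpool diskpool copypool
   #   expire inventory
   dsmadmc -id=admin -password=xxx -itemcommit macro daily.mac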

On Dec 7, 2011, at 8:25 AM, Schaub, Steve steve_sch...@bcbst.com wrote:

 Your Windows Team might balk at having to install Perl on every Windows 
 server if it is a large environment.  Although you could use the old command 
 scripting, if you are starting from scratch anyway, I would suggest going 
 forward in Powershell for Windows.
 -steve schaub
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Prather, Wanda
 Sent: Tuesday, December 06, 2011 10:02 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Windows Batch for Admin Jobs.
 
 Just a suggestion:
 I think you'll get a lot more function, flexibility, and help if you wrap it 
 in perl.
 Active perl for Windows is free, open source, and safe - doesn't install a 
 single .dll file.
 I started writing perl for batch TSM tasks because all I have to do is change 
 / to \ and I can run it on either platform.
 
 W
 
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Huebner,Andy,FORT WORTH,IT
 Sent: Tuesday, December 06, 2011 5:05 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Windows Batch for Admin Jobs.
 
 I have looked with no luck, does anyone know where I can see examples of TSM
 daily jobs written in Windows batch?  We prefer to run our scheduled jobs 
 through an enterprise scheduler and all of the batch stuff I have is for AIX 
 and I have not done Windows batch in a long time.
 
 
 Andy Huebner
 
 


Re: External Disk Unit and TSM Deduplication

2011-11-16 Thread Paul Zarnowski
I agree with Ian.  

You'll pay extra for dedup storage whether it's an appliance or done via TSM.  
TSM's will require extra resource (CPU, RAM) on the server, but will allow you 
to use cheap, dumb storage.  You'll pay a premium for disk storage associated 
with an appliance, offsetting your savings from data reduction. IMHO, it's 
probably a wash. We're going with TSM dedup, because we feel that TSM is in a 
better location in the data flow, knows more about the data, and can do more 
with it. I think one big reason that appliances are popular is that they were 
available first. 

When we compared the cost of cheap dumb storage to a dedup appliance, it was a
wash, unless you got high dedup ratios. As we encrypt more data, those ratios
become more and more in doubt.

..Paul

On Nov 16, 2011, at 5:40 AM, Ian Smith ian.sm...@oucs.ox.ac.uk wrote:

 Alper,
 
 We have tested TSM server-side deduplication and, yes, it works, but at a 
 cost. Specifically, in our opinion the sum of
 the additional  resource requirements  (CPU power and memory) and the time 
 required for the identification and reclamation processes meant that we 
 couldn't see how server-side dedupe would scale on busy systems such as ours. 
 Another factor that coloured our opinion was that not all our data deduped 
 very well. As a result we are looking at client-side dedupe for those clients 
 that we think will attain good dedupe ratios - however, client-side dedupe 
 means longer client sessions and still impacts the TSM server a bit. I'm 
 guessing that the apparent prevalence of hardware-based dedupe on this list 
 is due to people realising that dedupe on a busy system needs to be offloaded 
 from the TSM server.
 
 HTH
 Ian Smith
 
 On 16/11/11 06:32, Alper DİLEKTAŞLI wrote:
 Hi Neil,
 
 We don't need LAN free backup and we won't that.
 There is no problem about using file device class.
 We won't change TSM license model.
  We wonder whether TSM deduplication is good and safe or not. We tested it in
  the test system, but we haven't got production experience. If it can do
  deduplication well (like the hardware solutions) we will buy external disk
  devices. If not, we will buy a new LTO library and we won't use deduplication.
 
 Thanks
 Alper
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Neil Schofield
 Sent: Tuesday, November 15, 2011 8:17 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] External Disk Unit and TSM Deduplication
 
 Alper
 
 I've been giving a bit of thought to this myself recently, although it's 
 more of an academic exercise since we're not in the market for a new library 
 at the moment.
 
 Some things you may want to consider:
 
 Do you (now or in the future) have a requirement for LAN-free backup? I 
 would expect this to be significantly harder with a FILE device class 
 compared to a physical/virtual tape library solution.
 
 Are you considering switching to a capacity-based TSM license model? If so 
 then I've been told that it's the volume of primary storage after TSM 
 de-dupe but before hardware-level de-dupe/compression that counts, which may 
 incline you more towards the disk-based solution.
 
 Either way I'd be interested to hear what you decide.
 
 Regards
 Neil Schofield
 Technical Leader
 Data Centre Services Engineering Team
 Yorkshire Water Services Ltd.
 
  
 


Re: Migrating from AIX to Linux (again)

2011-11-16 Thread Paul Zarnowski
You might look at the new features introduced with TSM 6.3 for server/node 
replication.
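
Very roughly, the 6.3 setup looks like the sketch below (server name, 
addresses and node name are placeholders; check the 6.3 documentation for the 
exact parameters your level requires):

define server DRSERVER hladdress=dr.example.edu lladdress=1500 serverpassword=secret
set replserver DRSERVER
update node NODE1 replstate=enabled
replicate node NODE1

Note that this replicates node data between two running servers rather than 
moving a server instance across platforms, but it is another way to drain 
nodes off the old AIX server onto a new Linux one.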

At 10:47 AM 11/16/2011, Dury, John C. wrote:
Our current environment looks like this:
We have a production TSM server that all of our clients backup to throughout 
the day. This server has 2 SL500 tape libraries attached via fiber. One is 
local and the other at a remote site which is connected by dark fiber. The 
backup data is sent to the remote SL500 library several times a day in an 
effort to keep them in sync.  The strategy is to bring up the TSM DR server at 
the remote site and have it do backups and recovers from the SL500 at that 
site in case of a DR scenario.

I've done a lot of reading in the past and some just recently on the possible 
ways to migrate from an AIX TSM server to a Linux TSM server. I understand 
that earlier versions of the TSM server (we are currently at 5.5.5.2) allowed 
you to back up the DB on one platform (AIX, for instance) and restore it on 
another (Linux, for instance); if you kept the same library, it would just 
work. Apparently that was removed by IBM in the TSM server code, presumably to 
prevent customers from moving to less expensive hardware. (Gee, thanks IBM! 
sigh).
I posted several years ago about any possible ways to migrate the TSM Server 
from AIX to Linux.
The feasible solutions were as follows:

1.   Build a new Linux server with access to the same tape library, then 
export nodes from one server to the other, changing each node as it is 
exported to back up to the new TSM server instead. The old data on the old 
server can then be purged. A lengthy and time-consuming process, depending on 
the amount of data in your tape library.

2.   Build a new TSM Linux server and point all TSM clients to it, but keep 
the old TSM server around for restores for a specified period of time until it 
can be removed.

There may have been more options, but those seemed the most reasonable given 
our environment. Our biggest problem with scenario 1 above is that exporting the 
data that lives on the remote SL500 tape library would take much longer, as the 
connection to that tape library is slower than the local library's.  I can 
probably get some of our SLAs adjusted so we only have to export active data 
rather than all data, but that remains to be seen.

My question: has any of this changed with v6 TSM, or has anyone come up with a 
way to do this in a less painful and time-consuming way? Hacking the DB so the 
other-platform check doesn't block restoring an AIX TSM DB on a Linux box? 
Anything?

Thanks, and sorry to revisit all of this again. Just hoping something has 
changed in the last few years.
John


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: vtl versus file systems for pirmary pool

2011-10-18 Thread Paul Zarnowski
Allen,

We are running 6.2.3 (just upgraded from 6.2.2 2 days ago) on two p740s, each 
with 8 CPUs (32 vCPUs) and 128GB of RAM.  10 TSM instances total spread across 
these two systems.  One has 3, the other 7.

System 1 DB sizes (Space Used by DB):
1) 258GB
2) 289GB
3) 287GB

System 2 DB sizes:
1) 172GB
2) 173GB
3)  39GB
4)  84GB
5) 317GB
6)  98GB
7)   1GB (dedicated shared library manager server)

- We do not do deduplication (yet).
- We have CPU to spare (will get used when we enable deduplication).
- We ingest ~20TB per day across all instances.  We use copy pools for all data 
(to a remote tape library).

I am not sure whether we will have sufficient CPU available to do all of the 
deduplication that we wish to.  Same for RAM.  The best practices have 
changed a couple of times since we originally started looking at this.

HTH.

..Paul


At 08:54 AM 10/17/2011, Allen S. Rout wrote:
Please elaborate on your experiences w.r.t. memory footprint?

It's my major planning unknown; I'm musing about instance count
vs. size.


- Allen S. Rout


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: Ang: [ADSM-L] Dedup for DB's?

2011-10-18 Thread Paul Zarnowski
I'm also curious about that, as well as CPU impact on client side (Exchange 
server).  Thanks Daniel.

At 02:58 PM 10/18/2011, Prather, Wanda wrote:
Good to hear.
Has it had an effect on the backup elapsed time?  Or just the amount stored?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Daniel Sparrman
Sent: Tuesday, October 18, 2011 2:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: [ADSM-L] Dedup for DB's?

We have customers whose Exchange, DB2 & SQL backups de-dupe just fine. 
Exchange databases are a total of 2TB per machine, with (if I recall 
correctly) 200GB per storage group.

So unless you're talking huge sizes, no, de-dup works fine for databases. And 
the de-dup ratio is really good on databases tbh.

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Switchboard: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda 
Sent by: ADSM: Dist Stor Manager 
Date: 10/18/2011 19:24
Subject: [ADSM-L] Dedup for DB's?


Just asking.
As I recall, the announcements for client-side dedup said it is supported by 
the API, so it should therefore work with TDPs.
Has anybody achieved (or attempted?) significant improvements in throughput 
with backup of large DB's using client-side dedup?

Wanda Prather  |  Senior Technical Specialist  |  wprat...@icfi.com  |  www.jasi.com
ICF Jacob & Sundstrom  |  401 E. Pratt St, Suite 2214, Baltimore, MD 21202  |  410.539.1135


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: Check signals on Power vs. x86...

2011-10-12 Thread Paul Zarnowski
We have kept with Power, largely because we (1) have AIX skills in-house, and 
(2) the effort to migrate to another platform would be great, because we have 
significant archive data.

Some thoughts (take them as you will):

1. You can scale power vertically, quite high.  We have 10 very sizable 
instances on two p750s.  Each p750 has many FC HBAs on it, which are shared by 
all instances on that server.  If we used intel instead, we'd likely have to 
purchase many more HBAs, or divide up our tape/disk resources so that they are 
not all sharable.  It is nice to be able to share everything with everything.  
We can keep adding processors and RAM for some time.  Higher-end p7's scale 
even higher, but we found the 750 to be a nice fit for our needs, giving us 
considerable head room.

2. I have two resource baskets to monitor/plan for instead of 10.  When I add 
RAM to a server, for example, it benefits all of the instances running on it, 
not just 1.

3. Power 7 has 4 SMT threads per core, vs 2 for Power 6, and 1 for Power 5 and 
earlier.  If you look at the SPEC ratings, you will see that this translates to 
greater workload handling per core for Power 7 vs earlier Power, for the same 
GHz.  I am not sure what this looks like on Intel.  I would look at an 
appropriate SPEC benchmark when comparing platforms vs something simplistic 
like processor speed (GHz).  Speeds can be very misleading.

4. I am a fan of AIX's LVM and management suite.  I admit I may be biased, 
because I have lived with it for so long and know it, but as I have considered 
Linux, I have become aware of some things that it does not yet have, or doesn't 
have as nicely as AIX does.

I will be interested in what others have to say, as I share Allen's perspective 
on revisiting assumptions periodically.

..Paul


At 10:05 AM 10/12/2011, Stefan Folkerts wrote:
I would like to see that as well, I find it impossible to believe without
proof...and I love the power platform just not because it's 4x as fast (at
least) per core as x86 for TSM because I don't think it is. :-)

On Mon, Oct 10, 2011 at 10:36 AM, Gregor van den Boogaart 
gregor.booga...@rz.uni-augsburg.de wrote:

 @Howard Coles:
  For every 1 proc or core
  of Power you would need 4 or more of x86 (even at their best level).  I
  have seen the numbers from Intel comparing Newer x86 processors to
  Power6 and they are just below the Power 6 (using 2x's the number of
  cores).
 Do you have a reference, link, pdf, ... for this?

  The problem is, You can get Power7 cheaper than Power6, and get
  twice the performance.
 And for that?

 Thanks!

 Gregor van den Boogaart



--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: vtl versus file systems for pirmary pool

2011-09-26 Thread Paul Zarnowski
One thing I didn't see mentioned:  Using TSM to deduplicate on FILE storage 
pools (serial access) allows you to do source-mode (client-side) deduplication. 
 Can't do that with a VTL appliance or TSM DISK storage pools (random access).

At 04:05 PM 9/26/2011, Tim Brown wrote:
What advantage does VTL emulation on a disk primary storage pool have 
compared to a disk storage pool that is not a VTL?

It appears to me that a non-VTL system would not require the daily reclamation 
process and would also allow more client backups to occur simultaneously.



Thanks,



Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255






--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: snapdiff advice

2011-09-22 Thread Paul Zarnowski
I add my voice to this question.  We would really like to see this.

At 11:31 AM 9/22/2011, Shawn Drew wrote:
I'd like to add a question to the FAQ if possible.  I'll ask it frequently
if it helps getting it added!

The documentation explicitly states that vfiler support (Multistore) is
not supported.  Is support for this somewhere on the roadmap?  Or is there 
something on the NetApp side that prevents it?

Regards,
Shawn

Shawn Drew





From: ra...@us.ibm.com
Sent by: ADSM-L@VM.MARIST.EDU
Date: 09/22/2011 09:07 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] snapdiff advice

Hello Paul and David,

A frequently asked questions website has been created for snapshot
differencing.
We have attempted to answer the questions you have recently raised.

https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+FAQ





--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: snapdiff advice

2011-07-18 Thread Paul Zarnowski
We're running 8.0.1P3 on an n6210 gateway, in front of an SVC.  The error we 
get is:
tsm incr -snapdiff Y:

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server 'x.x.x.x' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.

I'll be opening an ETR on this, but if anyone has any ideas, let me know.  
Thanks!  Note that TSM thinks the ONTAP version is '0.0.0' for some reason.
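
One thing worth double-checking in the meantime is the filer credential 
stored on the client, along the lines of (the user id here is a placeholder; 
it needs the HTTP/API capabilities NetApp documents for snapshot differencing):

dsmc set password -type=filer 10.16.78.101 snapdiffuser filerpassword

An expired or under-privileged filer account would line up with Frank's 
suggestion in the reply quoted below.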

..Paul



At 01:31 PM 7/15/2011, Frank Ramke wrote:
Hi Paul,

8.0.1 should work.  Ensure the NetApp user id has sufficient capabilities 
and that its password is not expired.


Frank Ramke



From:   Paul Zarnowski p...@cornell.edu
To: ADSM-L@vm.marist.edu
Date:   07/15/2011 12:10 PM
Subject:Re: snapdiff advice
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



David,

Did you get this working with 8.0.1?  We're getting this error:

tsm incremental -snapdiff Y: -diffsnapshot=latest

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server '10.16.78.101' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.


At 03:41 PM 6/29/2011, David Bronder wrote:
Clark, Margaret wrote:

 Back in March, I watched a recorded presentation about DB2 reorgs
 within TSM server 6.2.2.0, and discovered that OnTap will only allow
 snapdiff backups to work correctly with releases 7.3.3 and 8.1, NOT
 8.0.  Apparently OnTap 7.3.3 and 8.1 contain the File Access Protocol
 (FAP), but 8.0 does not, so snapdiff would fail.  - Margaret Clark

My understanding is that it's more convoluted than that, but the FAP
and Unicode support for ONTAP 8 came with version 8.0.1, not 8.1.
So I should be good to go on my array running 8.0.1P5 with a 6.2.2.0
or newer TSM client.  (My NFS testing is on an older array at 7.3.2,
but that should still be OK if there are no Unicode filenames.  Even
if there were, it should fail in a different way than I'm seeing.)

--
Hello World.                                         David Bronder - Systems Admin
Segmentation Fault                                   ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.  david-bron...@uiowa.edu


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


NetApp NDMP backups to TSM server?

2011-07-15 Thread Paul Zarnowski
I thought I read somewhere that you could configure NetApp servers to direct 
their NDMP backup data stream to a TSM server (over IP network) instead of to 
an attached tape drive.  I am having trouble finding this documented anywhere, 
however, and am beginning to doubt my memory.  Can anyone tell me definitively 
whether this is possible or not?
Thanks.


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: NetApp NDMP backups to TSM server?

2011-07-15 Thread Paul Zarnowski
Thanks everyone.  Good to know my memory is not completely shot.  IMHO, the 
docs in the Admin Guide and RedBook need some work.  Lots of typos and some 
misleading information.  I saw phrases in more than one place that indicated 
that NDMP _must_ go to tape.
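
For the archives, the skeleton from that chapter amounts to registering the 
filer as a NAS node, defining a datamover for it, and running BACKUP NODE. A 
rough sketch (addresses, credentials and the domain name are placeholders, 
and the policy/storage pool details deserve a careful read of the chapter 
Wanda cites):

register node NASFILER secret domain=NASDOMAIN type=nas
define datamover NASFILER type=nas hladdress=10.16.78.101 lladdress=10000 userid=ndmpuser password=secret dataformat=netappdump
backup node NASFILER /vol/vol1 mode=full toc=yes

With no tape path defined from the filer, the NDMP data stream travels over IP 
into the TSM server's storage hierarchy, which is the filer-to-server case.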

At 09:51 AM 7/15/2011, Prather, Wanda wrote:
Yep.  Doing it.
In the TSM 5.5 Admin Guide for Windows, Chap 7, look for the heading
Performing NDMP Filer to Tivoli Storage Manager Server Backups



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Friday, July 15, 2011 9:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] NetApp NDMP backups to TSM server?

I thought I read somewhere that you could configure NetApp servers to direct 
their NDMP backup data stream to a TSM server (over IP network) instead of to 
an attached tape drive.  I am having trouble finding this documented anywhere, 
however, and am beginning to doubt my memory.  Can anyone tell me definitively 
whether this is possible or not?
Thanks.


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: snapdiff advice

2011-07-15 Thread Paul Zarnowski
David,

Did you get this working with 8.0.1?  We're getting this error:

tsm incremental -snapdiff Y: -diffsnapshot=latest

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server '10.16.78.101' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.


At 03:41 PM 6/29/2011, David Bronder wrote:
Clark, Margaret wrote:

 Back in March, I watched a recorded presentation about DB2 reorgs
 within TSM server 6.2.2.0, and discovered that OnTap will only allow
 snapdiff backups to work correctly with releases 7.3.3 and 8.1, NOT
 8.0.  Apparently OnTap 7.3.3 and 8.1 contain the File Access Protocol
 (FAP), but 8.0 does not, so snapdiff would fail.  - Margaret Clark

My understanding is that it's more convoluted than that, but the FAP
and Unicode support for ONTAP 8 came with version 8.0.1, not 8.1.
So I should be good to go on my array running 8.0.1P5 with a 6.2.2.0
or newer TSM client.  (My NFS testing is on an older array at 7.3.2,
but that should still be OK if there are no Unicode filenames.  Even
if there were, it should fail in a different way than I'm seeing.)

--
Hello World.                                         David Bronder - Systems Admin
Segmentation Fault                                   ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.  david-bron...@uiowa.edu


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: EXPORTING clients

2011-07-08 Thread Paul Zarnowski
It's possible that your tapes are dropping out of streaming mode and 
shoeshining, in which case moving tape to disk, and then exporting disk to tape 
could speed things up.
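
If you want to try the disk detour for that node, it is a two-step affair; a 
sketch, where FILEPOOL is a placeholder for a serial-access (FILE device 
class) pool with enough room:

move nodedata KQDC5 fromstgpool=LTO30POOL tostgpool=FILEPOOL

The move reads the input tapes mostly sequentially; once the node's data is 
staged on disk, the subsequent export no longer has to fight the tape drives 
for positioning.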

At 07:57 AM 7/8/2011, Hughes, Timothy wrote:
Thanks Paul,

Yes, we are doing server-to-tape exports. I did notice a couple of long 
input tape mounts on the destination TSM server: one was around 6,000 seconds, 
another around 14,000 seconds, and another around 17,000 seconds.


Also, I noticed that one client's SYSTEMSTATE filespace has 815,709 files. 
This export is going to take forever.

q occu KQDC5

Node Name  Type  Filespace Name       FSID  Storage    Number of  Physical   Logical
                                            Pool Name  Files      Space      Space
                                                                  Occupied   Occupied
                                                                  (MB)       (MB)
---------  ----  -------------------  ----  ---------  ---------  ---------  ---------
KQDC5      Bkup  kqdc5\SystemState\   1     LTO30POOL  815,709    65,734.62  65,656.25
                 NULL\System State\
                 SystemState
Regards

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Thursday, July 07, 2011 3:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: EXPORTING clients

We have found that moving the data to a serial disk pool first, before 
exporting the data (server-to-server), can really speed things up.  In fact, I 
am suspicious that TSM may not be optimizing tape mounts for node exports, or 
at least under some circumstances.  We observed very iffy behavior in this 
respect but never had time to investigate it enough to open an ETR on it.  
Check your actlog to see if you are seeing a lot of tape mount activity 
associated with the export process.

If you are doing server-to-tape export, the above recommendation may not apply.

..Paul

At 02:04 PM 7/7/2011, Hughes, Timothy wrote:
Hello,

I am currently doing exports, and some of them are large (in my opinion) and 
taking days. Does anyone know how these can be sped up, or is it just a 
waiting game? Also, we can't leave them running overnight during backups 
because at least twice they have caused the server the clients are being 
imported to to hang.


Also, has anyone had an issue where suspending exports caused the destination 
TSM server to hang or crash?



Some Export total files and files remaining

Total files to be transferred: 132,929
   Files remaining: 119,517

Total files to be transferred: 511,889
   Files remaining: 426,470

Total files to be transferred: 356,733
   Files remaining: 191,494


Total files to be transferred: 816,599
   Files remaining: 815,917


Total files to be transferred: 132,929
   Files remaining: 119,517


Total files to be transferred: 511,889
   Files remaining: 426,470


 Total files to be transferred: 356,733
   Files remaining: 191,494

Total files to be transferred: 1,414,575
   Files remaining: 1,229,671



TSM SERVERS VERSION 6.2.2

Thanks in advance for any comments!


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: EXPORTING clients

2011-07-08 Thread Paul Zarnowski
Richard,

Here is something to think about:

What Timothy is doing is essentially tape-to-tape data movement for one node, 
with that node's data likely scattered to some degree across the input tape.  
This means that there will be some tape repositioning needed to read the input 
tape.  If the output was to disk, it is likely that the input tape could stay 
pretty much in streaming mode.  However, with the output being directed to 
tape, IMHO it is much more likely that between the input tape repositions and 
not being able to keep the output tape in streaming mode, the combination will 
likely result in an inordinate amount of backhitching.  I don't think this is a 
hardware error, so much as a tape technology limitation.  We get around this by 
using serial disk as an interim step, which allows the whole process to run 
more quickly (because there is less overall backhitching).

..Paul


At 08:34 AM 7/8/2011, Richard Sims wrote:
On Jul 8, 2011, at 7:57 AM, Hughes, Timothy wrote:

 Yes, we are doing server-to-tape exports. I did notice a couple of long 
 input tape mounts on the destination TSM server: one was around 6,000 seconds, 
 another around 14,000 seconds, and another around 17,000 seconds.

Export/Import operations are, per the Admin Guide manual topic Preemption of 
client or server operations, high priority, which would pre-empt lower 
priority operations in order to get a tape mount started.  If prompt 
allocation of tape drives is being reflected in your Activity Log, but then 
tape mount and positioning is taking an inordinate amount of time, that would 
suggest hardware issues with tapes or drives. If non-Export operations do not 
exhibit such delays, then something else is going on, where analysis of the 
Activity Log and operating system logs may reveal factors.  If no cause is 
evident, contacting TSM Support would be in order.

Richard Sims


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: EXPORTING clients

2011-07-07 Thread Paul Zarnowski
We have found that moving the data to a serial disk pool first, before 
exporting the data (server-to-server), can really speed things up.  In fact, I 
am suspicious that TSM may not be optimizing tape mounts for node exports, or 
at least under some circumstances.  We observed very iffy behavior in this 
respect but never had time to investigate it enough to open an ETR on it.  
Check your actlog to see if you are seeing a lot of tape mount activity 
associated with the export process.

If you are doing server-to-tape export, the above recommendation may not apply.
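
For completeness, the server-to-server form looks like the sketch below (the 
target server name is a placeholder and must already be defined to the source 
server; FILEDATA=BACKUPACTIVE is the variant to reach for if your SLAs only 
require active versions):

export node NODE1 filedata=all toserver=TARGETSRV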

..Paul

At 02:04 PM 7/7/2011, Hughes, Timothy wrote:
Hello,

I am currently doing exports, and some of them are large (in my opinion) and 
taking days. Does anyone know how these can be sped up, or is it just a 
waiting game? Also, we can't leave them running overnight during backups 
because at least twice they have caused the server the clients are being 
imported to to hang.


Also, has anyone had an issue where suspending exports caused the destination 
TSM server to hang or crash?



Some Export total files and files remaining

Total files to be transferred: 132,929
   Files remaining: 119,517

Total files to be transferred: 511,889
   Files remaining: 426,470

Total files to be transferred: 356,733
   Files remaining: 191,494


Total files to be transferred: 816,599
   Files remaining: 815,917


Total files to be transferred: 132,929
   Files remaining: 119,517


Total files to be transferred: 511,889
   Files remaining: 426,470


 Total files to be transferred: 356,733
   Files remaining: 191,494

Total files to be transferred: 1,414,575
   Files remaining: 1,229,671



TSM SERVERS VERSION 6.2.2

Thanks in advance for any comments!


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: Deduplication and Collocation

2011-06-22 Thread Paul Zarnowski
This is my understanding as well. I'm almost certain this is the case, though 
we have not yet used source dedup. 
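
For reference, source dedup is a two-sided switch: the node has to be 
permitted on the server and the client has to ask for it. A minimal sketch 
(the node name is a placeholder). On the server:

update node ROCKFORD1 deduplication=clientorserver

and in the client's dsm.opt:

DEDUPLICATION YES
ENABLEDEDUPCACHE YES

The local dedup cache is what lets the client avoid re-querying the server for 
chunks it has already sent, which matters on thin links like the one Roger 
describes below.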

..Paul


On Jun 22, 2011, at 3:34 AM, Grigori Solonovitch 
grigori.solonovi...@ahliunited.com wrote:

 As far as I know, client-side de-duplication will not work with a primary 
 storage pool of type DISK. It must be FILE, just as for server-side 
 de-duplication.
 Am I right?
 
 Grigori G. Solonovitch
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Roger Deschner
 Sent: Wednesday, June 22, 2011 9:37 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Deduplication and Collocation
 
 Back to client side dedupe, which we're about to deploy for a branch
 campus 90 miles away in Rockford IL.
 
 The data is sent from the clients in Rockford via tin cans and string to
 the TSM server in Chicago already deduped. We're using source dedupe
 because the network bandwidth is somewhat limited. So if it is received
 into a DEVCLASS DISK stgpool, then I assume it is still deduped, because
 that's how it arrived. Then finally when it's migrated to tape, we've
 already established that it gets reinflated, and then you can collocate
 or not as you wish.
 
 But the question is, does this imply that deduped data CAN exist in
 random access DEVCLASS DISK stgpools if client-side dedupe is being
 used? I sure hope so, because that's what we're planning to do.
 
 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
 == You will finish your project ahead of schedule. ===
 = (Best fortune-cookie fortune ever.) ==
 
 
 On Tue, 21 Jun 2011, Paul Zarnowski wrote:
 
 Even if a FILE devclass has dedup turned on, when the data is migrated, 
 reclaimed, or backed up (backup stgpool) to tape, then the files are 
 reconstructed from their pieces.
 
 You cannot dedup on DISK stgpools.
 DISK implies random access disk - e.g., devclass DISK.
 FILE implies serial access disk - e.g., devclass FILE.
 
 But I think there is still an open question about collocation and 
 deduplication.  Deduplication must be done using FILE stgpools, but FILE 
 stgpools CAN use collocation.  I don't know what happens in this case.
 
 ..Paul
 
 At 02:38 PM 6/21/2011, Prather, Wanda wrote:
 If it is a file device class with dedup turned off, yes.
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Mark Mooney
 Sent: Tuesday, June 21, 2011 2:29 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Deduplication and Collocation
 
 So data is deduplicated in a disk storage pool but when it is written to 
 tape the entire reconstructed file is written out?  Is this the same for 
 file device classes?
 
 
 
 --
 Paul Zarnowski                            Ph: 607-255-4757
 Manager, Storage Services                 Fx: 607-255-8521
 719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu
 
 
 


Re: Deduplication and Collocation

2011-06-21 Thread Paul Zarnowski
Even if a FILE devclass has dedup turned on, when the data is migrated, 
reclaimed, or backed up (backup stgpool) to tape, then the files are 
reconstructed from their pieces.

You cannot dedup on DISK stgpools.
DISK implies random access disk - e.g., devclass DISK.
FILE implies serial access disk - e.g., devclass FILE.

But I think there is still an open question about collocation and 
deduplication.  Deduplication must be done using FILE stgpools, but FILE 
stgpools CAN use collocation.  I don't know what happens in this case.
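
Syntactically, at least, nothing stops you from asking for both on the same 
FILE pool; a sketch (the names are placeholders):

define stgpool FILEPOOL FILECLASS maxscratch=200 collocate=node deduplicate=yes

How much collocation still means once chunks are shared across nodes is 
exactly the open question; I would test restore behavior before relying on 
the combination.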

..Paul

At 02:38 PM 6/21/2011, Prather, Wanda wrote:
If it is a file device class with dedup turned off, yes.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Mark 
Mooney
Sent: Tuesday, June 21, 2011 2:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication and Collocation

So data is deduplicated in a disk storage pool but when it is written to tape 
the entire reconstructed file is written out?  Is this the same for file 
device classes?



--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: Performance in 6.2.2.x with Win2k8 64bit?

2011-06-16 Thread Paul Zarnowski
Are you backing up System State?  They changed how this works (I think in 
6.2.?), which puts more processing on the client side and allows server-side 
expiration to run more quickly.  I can confirm that the server-side expiration 
does indeed run much more quickly for System State backups.  We've not been 
able to quantitatively measure the impact on the client side.

At 12:32 AM 6/16/2011, you wrote:
Can anyone who has converted from V5 to V6.2.2.0 or V6.2.2.2 on Win2K8 64b 
confirm for me that you are running OK?

Have a customer who did the conversion from V5 on Win2K3 32b  to V6.2.2.2 on 
Win2K8 64b and new server hardware.  Performance is measurably degraded.  
Background processes are slower by 10-18%; client backups are showing commwait 
increased by an order of magnitude.

I'm suspecting it's in the hardware config somewhere, but can't find it.  If 
anybody can confirm upgrade success on Win2K8 64b I would appreciate it before 
I go upgrade anyone to this level.   :)

Thanks!

Wanda Prather  |  Senior Technical Specialist  |  wprat...@icfi.com  |  www.jasi.com
ICF Jacob & Sundstrom  |  401 E. Pratt St, Suite 2214, Baltimore, MD 21202  |  410.539.1135


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: tsm and data domain

2011-06-16 Thread Paul Zarnowski
At 05:59 PM 6/16/2011, Nick Laflamme wrote:
We need to do a bake-off -- or study someone else's -- between using 
deduplication in a DataDomain box and using both client-side deduplication and 
server-side deduplication in TSM V6 and then writing to relatively 
inexpensive, relatively simple (but replicating) storage arrays. However, we 
keep pushing the limits of stability with our TSM V6 servers, so we haven't 
dared try such a bake-off yet.

Nick,

We are heading down this path.  My analysis is that in a TSM environment, the 
fairly low dedup ratio does not justify the higher price of deduping VTLs.  
Commodity disk arrays have gotten very inexpensive.  We're using DS3500s, which 
are nice building blocks.  We put some behind IBM SVCs for servers, some 
attached to TSM or Exchange servers (without SVC).  Common technology - 
different uses.  We use them for both TSM DB, LOG and FILE (different RPM 
disks, obviously).  Using cheap disk vs VTLs has different pros and cons.  
Using disk allows for source-mode (client-side) dedup, which a VTL will not do. 
 VTLs, on the other hand, allow for global dedup pools and LAN-free virtual 
tape targets.  Deduping VTLs will be more effective in TSM environments where 
you have known duplicate data, such as lots of Oracle or MSSQL full backups, or 
other cases where you have multiple full backups.  For normal progressive 
incremental file backups, however, TSM already does a good job of reducing data, 
so VTL dedup doesn't get you as much; in this case, IMHO, cheap disk is, 
well, cheaper, and gets you source-mode dedup as well.

We are in process of implementing this, but I know a few others are a bit 
further along.

We will continue to use TSM Backup Stgpool to replicate offsite.

..Paul


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: TSM client machine renames

2011-06-07 Thread Paul Zarnowski
Thank you for all of your suggestions.

To answer some of your questions:
- The nodename is usually specified in the dsm.opt, not defaulted from the 
machine name;
- Our nodenames are generally NOT related to the machine name (but this may 
change as a result of our AD reorg);
- We do not have central administration of all of our desktop systems, so we 
cannot reach in to each system from one central location (again, this will 
likely change AFTER the AD reorg, but that doesn't help me now);
- Most of our systems use Scheduler Service, with a smaller percentage using 
CAD;
- We have no standard disk configuration, so cloptsets are probably not a 
solution.

Here is what I think we need to do:

For each machine being renamed:
1. On the TSM server, rename filespaces at the same time as the machine rename 
(before the next daily backup runs); preferably we'd have a tool that allows us to 
trigger this rename remotely, in a secure manner (see the sketch after this list);
2. On client system, edit the dsm.opt file and change any references to machine 
names to have the new machine name (e.g., in DOMAIN, INCLUDE, or EXCLUDE 
options);
3. IF the TSM nodename is being renamed, also on client system modify the 
registry entries to change the old nodename to the new nodename;  Uninstalling 
and re-installing the scheduler will be cumbersome, so we will be looking for a 
script that can do this quickly;  I'm also hoping to avoid a password re-seed.  
I don't know (yet) if we will be renaming TSM nodes as part of this exercise or 
not, but suspect we might be.

..Paul


At 12:22 PM 6/7/2011, Huebschman, George J. wrote:
Paul,
Regarding the characteristic where the TSM Scheduler remembers the nodename.

For Windows:

 The service seems to remember the node name in effect when the service was 
created, even in the absence of a 'nodename' option in dsm.opt. I don't know 
whether there is a way to change the node name the service uses; we advise 
client system administrators to remove the service and create a new one when a 
Windows system is renamed.

** Your way of fixing this is best; uninstalling then reinstalling the 
scheduler. **

There is a registry entry for the TSM Scheduler service that contains 
 the node name.  I used to have a problem with some of the new Windows servers 
 that came into TSM MISSing their backup for authentication reasons.
A Windows admin eventually told me why and their way to fix it.
In our case, the servers were all built from the same image.  They all 
 had the same machine name.  When TSM was installed and the Scheduler set up 
 and tested, it was under the standard build nodename.  Subsequently, before 
 the machine went into production, it was renamed.  Sometimes the nodename in 
 the dsm.opt file was changed, but authentication still failed.

The Windows folks used to edit the Registry entry for the TSM Scheduler 
service...but your method is much safer.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Thomas Denier
Sent: Thursday, June 02, 2011 3:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM client machine renames

-Paul Zarnowski wrote: -



--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: TSM client machine renames

2011-06-07 Thread Paul Zarnowski
You get a second new filespace on the server.  Confusion & waste.

At 03:06 PM 6/7/2011, Hughes, Timothy wrote:
What happens if you don't rename the filespaces at the same time as a 
machine rename?

regards

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Tuesday, June 07, 2011 2:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM client machine renames

Thank you for all of your suggestions.

To answer some of your questions:
- The nodename is usually specified in the dsm.opt, not defaulted from the 
machine name;
- Our nodenames are generally NOT related to the machine name (but this may 
change as a result of our AD reorg);
- We do not have central administration of all of our desktop systems, so we 
cannot reach in to each system from one central location (again, this will 
likely change AFTER the AD reorg, but that doesn't help me now);
- Most of our systems use Scheduler Service, with a smaller percentage using 
CAD;
- We have no standard disk configuration, so cloptsets are probably not a 
solution.

Here is what I think we need to do:

For each machine being renamed:
1. On TSM server, rename filespaces at same time period as machine rename 
(before the next daily backup runs); Preferably have tool that will allow us 
to trigger this rename remotely, in a secure manner;
2. On client system, edit the dsm.opt file and change any references to 
machine names to have the new machine name (e.g., in DOMAIN, INCLUDE, or 
EXCLUDE options);
3. IF the TSM nodename is being renamed, also on client system modify the 
registry entries to change the old nodename to the new nodename;  Uninstalling 
and re-installing the scheduler will be cumbersome, so we will be looking for 
a script that can do this quickly;  I'm also hoping to avoid a password 
re-seed.  I don't know (yet) if we will be renaming TSM nodes as part of this 
exercise or not, but suspect we might be.

..Paul


At 12:22 PM 6/7/2011, Huebschman, George J. wrote:
Paul,
Regarding the characteristic where the TSM Scheduler remembers the nodename.

For Windows:

 The service seems to remember the node name in effect when the service was 
created, even in the absence of a 'nodename' option in dsm.opt. I don't know 
whether there is a way to change the node name the service uses; we advise 
client system administrators to remove the service and create a new one when 
a Windows system is renamed.

** Your way of fixing this is best; uninstalling then reinstalling the 
scheduler. **

There is a registry entry for the TSM Scheduler service that contains 
 the node name.  I used to have a problem with some of the new Windows 
 servers that came into TSM MISSing their backup for authentication reasons.
A Windows admin eventually told me why and their way to fix it.
In our case, the servers were all built from the same image.  They 
 all had the same machine name.  When TSM was installed and the Scheduler set 
 up and tested, it was under the standard build nodename.  Subsequently, 
 before the machine went into production, it was renamed.  Sometimes the 
 nodename in the dsm.opt file was changed, but authentication still failed.

The Windows folks used to edit the Registry entry for the TSM Scheduler 
service...but your method is much safer.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Thomas Denier
Sent: Thursday, June 02, 2011 3:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM client machine renames

-Paul Zarnowski wrote: -



--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


TSM client machine renames

2011-06-02 Thread Paul Zarnowski
Hello all,

We are contemplating a massive Active Directory reorganization which would involve 
renaming hundreds of the Windows machines that we back up into TSM.  We foresee a 
few problems with this, and I am looking to see if any other TSM sites have faced 
a similar problem and what they did to address it.

The problems:
1. Renaming a Windows system will result in TSM making fresh backups of the 
volumes on that system (because the system name is part of the filespace name). 
 Renaming the filespace on the TSM server will address this, but timing is a 
problem.  If you rename the filespace a day early or a day late, you will still 
end up with extra backups.

2. TSM likes to replace DOMAIN C: statements with DOMAIN \\systemname\C$.  If 
the systemname changes, then the TSM backup will fail, because it won't be able 
to find the old systemname (unless and until the DOMAIN statement is updated).  
Again, with so many machines, updating all those DSM.OPT files will be 
problematic.

3. If we have a large number of unintended extra backups, TSM server resources 
(database size and stgpool capacity) will be stretched.


Having a tool that would allow our customers to rename their TSM filespaces 
on-demand would be a big help.  As we do not give out policy domain privileges, 
we cannot use dsmadmc to do this.  I am looking for other solutions that any of 
you might have developed, or even just thought about.  If the TSM BA client 
allowed a user to rename their filespace, that would be a great solution.  But 
it's not there.

Thanks for any help (or condolences).

..Paul


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu

