Re: dsmc won't start/hangs

2016-10-06 Thread Steven Langdale
99 times out of 100 it's a broken NFS mount.  I've also never seen
nfstimeout make any difference whatsoever.
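
A quick way to spot which mount is wedged without hanging your own shell (a
sketch, assuming GNU coreutils timeout is available and the NFS mounts are
listed in /proc/mounts):

  # flag any NFS mount that does not answer a statfs call within 5 seconds
  for m in $(awk '$3 ~ /^nfs/ {print $2}' /proc/mounts); do
      timeout 5 stat -f "$m" > /dev/null 2>&1 || echo "unresponsive: $m"
  done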

On Thu, 6 Oct 2016, 17:19 Zoltan Forray,  wrote:

> It was as suspected, the NFS mount.  Even though we tried dsmc
> -nfstimeout=10, it still hung/never came back.  Once the server owner
> unmounted the NFS, the problem cleared up.
>
> On Thu, Oct 6, 2016 at 11:03 AM, Andrew Raibeck 
> wrote:
>
> > Hi Zoltan,
> >
> > Yes, it could be a stale NFS mount point. Take a look at the NFSTIMEOUT
> > client option which lets you set a non-indefinite value for waiting on
> NFS
> > mount points.
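
For reference, a minimal dsm.sys stanza showing where the option goes (server
name and address are placeholders; the value is in seconds):

  SErvername         TSMPROD1
    COMMMethod       TCPip
    TCPServeraddress tsm.example.edu
    NFSTIMEOUT       10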
> >
> > Regards,
> >
> > Andy
> >
> > 
> > 
> >
> > Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
> >
> > IBM Tivoli Storage Manager links:
> > Product support:
> > https://www.ibm.com/support/entry/portal/product/tivoli/
> > tivoli_storage_manager
> >
> > Online documentation:
> > http://www.ibm.com/support/knowledgecenter/SSGSG7/
> > landing/welcome_ssgsg7.html
> >
> > Product Wiki:
> > https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%
> > 20Storage%20Manager
> >
> > "ADSM: Dist Stor Manager"  wrote on 2016-10-06
> > 10:59:55:
> >
> > > From: Zoltan Forray 
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 2016-10-06 11:01
> > > Subject: dsmc won't start/hangs
> > > Sent by: "ADSM: Dist Stor Manager" 
> > >
> > > An odd situation popped up on a Linux client (7.1.3.1).
> > >
> > > This client had been backing up just fine and suddenly it stopped.
> > > Nothing in dsmerror.log.
> > >
> > > What is strange is you can't get into dsmc.  It simply hangs so hard you
> > > have to kill -9 it.
> > >
> > > The TSM server is still fully ping-able.  Tried renaming dsm.sys/dsm.opt
> > > so it could not find them; it properly reports this, but as soon as they
> > > are available/found again, it hangs.
> > >
> > > The only clue/oddity we see is an NFS mount that is also
> > > non-responsive/non-pingable, and we are talking to the server owner about
> > > rebooting the box.  Can this be causing the dsmc process to hang?
> > >
> > > Any thoughts on how to diagnose this?
> > >
> > > --
> > > *Zoltan Forray*
> > > TSM Software & Hardware Administrator
> > > Xymon Monitor Administrator
> > > VMware Administrator (in training)
> > > Virginia Commonwealth University
> > > UCC/Office of Technology Services
> > > www.ucc.vcu.edu
> > > zfor...@vcu.edu - 804-828-4807
> > > Don't be a phishing victim - VCU and other reputable organizations will
> > > never use email to request that you reply with your password, social
> > > security number or confidential personal information. For more details
> > > visit http://infosecurity.vcu.edu/phishing.html
> > >
> >
>
>
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator (in training)
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html
>


Re: Managed servers out of sync

2016-04-19 Thread Steven Langdale
Don't forget if you do that, you will lose all of your schedule assocs,
so will need to re-assoc them.
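
One way to capture the current associations before moving nodes, so they can
be re-created afterwards (a sketch; the SELECT assumes the standard
ASSOCIATIONS table):

  query association * *
  select domain_name, schedule_name, node_name from associations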

On Tue, 19 Apr 2016, 14:50 Skylar Thompson, 
wrote:

> The problem is that a domain that has any nodes in it and is out-of-sync
> cannot be fixed; I believe that operation is functionally like a
> delete.  It's really no big deal to do the temp domain though:
>
> copy domain dom1 temp-dom1
> update node * wheredom=dom1 dom=temp-dom1
> (sync up)
> update node * wheredom=temp-dom1 dom=dom1
> delete domain temp-dom1
>
> The copy domain operation even copies the active policyset and default
> management classes, so there's no risk to having the wrong retention
> policies applied.
>
> On Tue, Apr 19, 2016 at 01:39:54PM +, Kamp, Bruce (Ext) wrote:
> > I have 4 TSM servers running on AIX.  The library manager/configuration
> > manager is now 7.1.4.100 (upgraded from 7.1.0); the other 3 are 7.1.0.
> > About a month ago the server-to-server communications stopped working
> > because of an authentication failure.  In working with IBM it was decided
> > that I needed to upgrade all my servers to a higher version of TSM.  With
> > all the changes going on at the moment it will take a while for me to
> > upgrade the rest of the servers.
> > I have figured out a temporary workaround to get the communications
> > working until I can upgrade.  What I found out when I "fixed" the first
> > server is that domains have become out of sync.
> >
> > ANR3350W Locally defined domain FS_PROD_DOMAIN_04 contains
> > at least one node and cannot be replaced with a
> > definition from the configuration manager. (SESSION: 4)
> >
> > When I asked IBM how to figure out which nodes are causing this problem,
> > I was told I had to move all nodes to a temp domain, delete the domain, run
> > notify subscribers, and then move the nodes back into the domain.
> >
> > What I am wondering is whether anyone knows how I can identify which nodes
> > are causing this, so I only have to move them out & back?
> >
> >
> > Thanks,
> > Bruce Kamp
> > GIS Backup & Recovery
> > (817) 568-7331
> > e-mail: mailto:bruce.k...@novartis.com
> > Out of Office:
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354
> -- University of Washington School of Medicine
>


Restore Domino database without domino server

2015-09-10 Thread Steven Langdale
Hi all

I have some long-term archives of Lotus Notes/Domino databases, and need
one back.

Is it possible to recover it straight into a filesystem and not need a
working Domino server as the restore target?

Thanks

Steven


Re: How to dedicate a network port for backup

2015-08-26 Thread Steven Langdale
Can you show your changes in dsmsta.opt please?

I'm not aware of a way of telling the storage agent to listen on a specific
address only.  However, as the STA definition on the TSM instance states the
backup address, I'd expect to see data going across that link anyway.

Steven
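
For reference, the backup address the TSM server uses for a storage agent
lives in its server definition; a sketch of checking and updating it (the
storage agent name is a placeholder):

  query server STA_ECCDB format=detailed
  update server STA_ECCDB hladdress=172.16.10.100 lladdress=1500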

On 26 August 2015 at 07:36, Harwansh harwa...@gmail.com wrote:

 Hi Farooq,

 After making the necessary changes in the dsm.sys and dsmsta.opt files, I
 restarted the dsmsta service for them to take effect.  But it is observed that
 the storage agent is running on the service IP address.  I have checked with
 the netstat -an | grep 1500 command in AIX.



 On Tue, Aug 25, 2015 at 7:54 PM, Mohammed Farooq mofar...@sbm.com.sa
 wrote:

  Hi Kumar,
 
  Have you restarted the BA client & schedule & SAN agent services?  If not, you
  should stop and start the SAN agent in order to pick up the changes.
 
  Thanks & Regards
 
  Mohammed Farooq
  Backup & Storage Specialist.
  SBM-Saudi Arabia.
 
 
 
  From:   S Kumar our@gmail.com
  To: ADSM-L@VM.MARIST.EDU,
  Date:   08/24/2015 11:46 AM
  Subject:[ADSM-L] How to dedicate a network port for backup
  Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 
 
 
  Hi Team,
 
  In one of our setups, the customer has given a dedicated network port for
  backup.  We have to enable this IP and port from the TSM server side and the
  client side for backup.
 
  Basic details are:
 
  Server physical host name is : eccdb
  Service ip address for SAP application: 192.168.1.100
 
  /etc/hosts has the first entry against the service IP address, i.e.
   192.168.1.100  eccdb
  ...
  ...
  172.16.10.100   eccbkp
 
  They have dedicated the ip address 172.16.10.100 for backup.
  Here they are also running LAN-free backup, and I have updated the HL
  address in the storage agent config.  But when we start the storage agent
  service, it is picking up 192.168.1.100.  I have checked it from the
  dsmsta process; the dsmsta process is listening on port 1500 against IP
  address 192.168.1.100.
 
  I have also updated the HL address in the TSM server for the storage agent
  server.  My backup is running perfectly, but it is running over the service
  IP address, whereas it should run on the 172.16.10.100 IP address as I have
  mentioned in the config file dsmsta.opt.
 
  Does anyone have any idea on this?
 
  Regards,
 
  SKumar
 



 --


 Regards,
 Harwansh
 09903979774



Re: Share permission changes

2015-05-11 Thread Steven Langdale
Share perms are stored in the registry, so backed up with a system state
backup.  If all the files are getting backed up again he must have changed
the filesystem perms too.

On Mon, 11 May 2015 22:13 Paul Zarnowski p...@cornell.edu wrote:

 This is a problem for NTFS because the amount of metadata associated with
 an object is more than you can put into the TSM database.  Thus, TSM puts
  it into the storage pool, along with the object.  What this means is that
  when the metadata changes, the object has to be backed up again. This is not a
 problem for Unix/NFS, because there isn't much metadata and it can all be
 put into the TSM DB, which means if it changes it's just a DB update and
 not another backup of the object.

 Bad enough for backups, but imagine if you had a PB-scale GPFS filesystem
 and someone unwittingly makes such a change.  Now you're talking about
 having to recall all of those objects in order to back them up again.
 Ugh.  End of game.

 ..Paul


 At 04:54 PM 5/11/2015, Nick Marouf wrote:
 Hello
 
   From my experience, changing share permissions will force TSM to back up all
  the data once more.  A solution we used in the past was to assign groups
  instead of users to shares.
  
  Changes to group membership happen behind the scenes in AD, and are not picked
  up by TSM at the client level.
 
 
 On Mon, May 11, 2015 at 2:39 PM, Thomas Denier 
 thomas.den...@jefferson.edu
 wrote:
 
  One of our TSM servers is in the process of backing up a large part of
 the
  contents of a Windows 2008 file server. I contacted the system
  administrator. He told me that he had changed share permissions but not
  security permissions, and did not expect all the files in the share to
 be
  backed up. Based on my limited knowledge of share permissions I wouldn't
  have expected that either. Is it normal for a share permissions change
 to
  have this effect? How easy is it to make a security permissions change
  while trying to make a share permissions change?
 
  Thomas Denier,
  Thomas Jefferson University
  The information contained in this transmission contains privileged and
  confidential information. It is intended only for the use of the person
  named above. If you are not the intended recipient, you are hereby
 notified
  that any review, dissemination, distribution or duplication of this
  communication is strictly prohibited. If you are not the intended
  recipient, please contact the sender by reply email and destroy all
 copies
  of the original message.
 
  CAUTION: Intended recipients should NOT use email communication for
  emergent or urgent health care matters.
 


 --
 Paul ZarnowskiPh: 607-255-4757
 Assistant Director for Storage Services   Fx: 607-255-8521
 IT at Cornell / InfrastructureEm: p...@cornell.edu
 719 Rhodes Hall, Ithaca, NY 14853-3801



Re: How to restore (already expired) objects from tape?

2015-04-16 Thread Steven Langdale
We didn't have any client encryption, if that's what you mean.  But I doubt
it would work; in essence it's just dumping data straight off the tape.

On Thu, 16 Apr 2015 12:32 Krzysztof Przygoda przy...@gmail.com wrote:

 Hi
 @ Steven
 Is this recovery program available only for systems which don't use any
 sort of encryption, or has something changed in that regard?
 @Gregor
 If you have the possibility (and still have TSM DB backups) I would
 recommend restoring the TSM DB not on production but on some separate machine
 with read-only access to the copy pools (set primary storage pools to
 unavailable) and their devices/libraries.  That way you can carry on with
 production, and if you mess up the copy pool volumes the worst that can happen
 is having to recreate the damaged copy pools.

 Kind regards
 Krzysztof


 2015-04-14 18:14 GMT+02:00 Gregor van den Boogaart 
 gregor.booga...@rz.uni-augsburg.de:

  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  Hi Krzysztof, hi Steven, hi Steven,
 
  thanks for all the hints on restoring the DB and commercial
  alternatives! I had the feeling that this would be a supported
  approach, but was not aware of the details. I am now... Due to unlucky
  timing our DB backups are probably expired, but there would be a
  chance the older DB backup tapes could be identified and have not been
  reused. So it would theoretically be possible.
 
  As only a single - although important - object has been lost, the
  decision over here was to focus on preventing this from happening
   again.  Restoring the DB and all that would possibly have too big an
   impact on normal TSM operations for us this time.
 
  However I still would like to restore the object. Therefore the last
  question. I came across this
 
 
 
 http://www.backupcentral.com/mr-backup-blog-mainmenu-47/13-mr-backup-blog/179-reading-tsm-tapes.html
 
  which mentions
 
  http://sourceforge.net/projects/tsmtape/
 
  as an open source possibility to read TSM written tapes:
 
  TSMtape allows recovery of files found on TSM v5.x (tested against
  5.2) tapes.
 
  I wonder whether anybody has used this successfully (we are still
  running TSM 5.5)?
 
  Gregor
 
 
 
  On 04/14/2015 03:14 PM, Steven Langdale wrote:
   We had a data loss incident a couple of years back.  There is no
   way to re-populate the TSM DB from a tape. You have 2 options: 1.
   Do a DB restore to before the data was deleted from the TSM DB 2.
   Use a data recovery process to re-read the tapes.
  
   We used Option 2, and got most of it back.  IBM supplied the
   program to read the data from tape and put it into a directory
   structure.
  
   Steven
  
   On Tue, 14 Apr 2015 at 14:05 Krzysztof Przygoda przy...@gmail.com
   wrote:
  
    Hi, I guess it's better to disable schedules before you start the
    restored server (by adding disablescheds yes in the dsmserv.opt file)
    than to do it after start (as some jobs can start just after
    startup).
  
  
 
 http://www-01.ibm.com/support/knowledgecenter/SSGSG7_7.1.0/com.ibm.itsm.srv.ref.doc/r_opt_server_disablescheds.html
  
  
  
  Kind Regards
   Krzysztof
  
   2015-04-14 14:48 GMT+02:00 Steven Harris
   st...@stevenharris.info:
   Hi Gregor.
  
   IBM used to offer a special service to get data off tape. Not
   sure if it is still available.
  
   Other than that the only way I know is to restore your DB to a
   point before expiration.  Start up the restored instance with
   NOMIGRECL and immediately stop your daily schedules, in
   particular stop running expiration.  At this point you can
   restore your file.  I would tend to mark the primary volumes as
   unavailable and restore from the copypool. That way you if you
   have restored to another instance you can keep your main
   instance running close to normal (except for copypool stuff)
   for however long the restore takes you.  But it is a lot of
   work and a lot of fiddling about.
  
   Regards and good luck.
  
   Steve
  
   Steven Harris TSM Admin Canberra Australia.
  
  
  
  
   On 14/04/2015 10:10 PM, Gregor van den Boogaart wrote:
   Dear List,
  
    the retention settings in the copygroup were too strict... The
   version of the file we would like to restore has already been
   expired. :-( However it is very likely the desired version of
   the file is still on tape, as no reclamation has taken place.
   So my understanding would be: if the file is still on tape,
   the only thing missing is the database entry. Correct?
  
   So the question would be: How to restore already expired
   objects from a tape cartridge? Or: How to generate database
   entries for all versions of all objects on a tape cartridge?
  
   I guess the first step would be to move the respective node
   to a copygroup, with better retention settings. I looked on
   the documentation for audit volume, but I did not get the
   feeling this would understand checking for inconsistencies
   between database information and a storage pool volume in
   un-expiring previously

Re: How to restore (already expired) objects from tape?

2015-04-14 Thread Steven Langdale
We had a data loss incident a couple of years back.  There is no way to
re-populate the TSM DB from a tape.
You have 2 options:
1. Do a DB restore to before the data was deleted from the TSM DB
2. Use a data recovery process to re-read the tapes.

We used Option 2, and got most of it back.  IBM supplied the program to
read the data from tape and put it into a directory structure.

Steven
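
A rough outline of Option 1, restoring to a point in time on a separate
instance (a sketch only; date, time and pool names are placeholders, and
DISABLESCHEDS goes in dsmserv.opt, as Krzysztof notes below):

  dsmserv restore db todate=04/01/2015 totime=23:00:00
  (add DISABLESCHEDS YES to dsmserv.opt, then start with:)
  dsmserv nomigrecl
  update volume * access=unavailable wherestgpool=PRIMARY_TAPEPOOL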

On Tue, 14 Apr 2015 at 14:05 Krzysztof Przygoda przy...@gmail.com wrote:

 Hi
 I guess it's better to disable schedules before you start the restored
 server (by adding disablescheds yes in the dsmserv.opt file) than to do
 it after start (as some jobs can start just after startup).

 http://www-01.ibm.com/support/knowledgecenter/SSGSG7_7.1.0/com.ibm.itsm.srv.ref.doc/r_opt_server_disablescheds.html

 Kind Regards
 Krzysztof

 2015-04-14 14:48 GMT+02:00 Steven Harris st...@stevenharris.info:
  Hi Gregor.
 
  IBM used to offer a special service to get data off tape. Not sure if it
  is still available.
 
  Other than that the only way I know is to restore your DB to a point
  before expiration.  Start up the restored instance with NOMIGRECL and
  immediately stop your daily schedules, in particular stop running
  expiration.  At this point you can restore your file.  I would tend to
  mark the primary volumes as unavailable and restore from the copypool.
  That way you if you have restored to another instance you can keep your
  main instance running close to normal (except for copypool stuff) for
  however long the restore takes you.  But it is a lot of work and a lot
  of fiddling about.
 
  Regards and good luck.
 
  Steve
 
  Steven Harris
  TSM Admin
  Canberra Australia.
 
 
 
 
  On 14/04/2015 10:10 PM, Gregor van den Boogaart wrote:
  Dear List,
 
   the retention settings in the copygroup were too strict... The version
  of the file we would like to restore has already been expired. :-(
  However it is very likely the desired version of the file is still on
  tape, as no reclamation has taken place. So my understanding would be:
  if the file is still on tape, the only thing missing is the database
  entry. Correct?
 
  So the question would be: How to restore already expired objects from
  a tape cartridge? Or: How to generate database entries for all
  versions of all objects on a tape cartridge?
 
   I guess the first step would be to move the respective node to a
   copygroup with better retention settings.  I looked at the
   documentation for audit volume, but I did not get the feeling this
   would handle checking for inconsistencies between database
   information and a storage pool volume, i.e. un-expiring previously
   expired objects...
 
  Any tips, pointers, ideas?
  Perhaps I am just searching for the wrong key words?
 
  Gregor
 
 
 



Re: So long, and thank you...

2015-04-04 Thread Steven Langdale
As the rest of the gang have said, thanks for the valuable assistance, and
have a great retirement!

Don't miss us too much ;)

Steven

On Sat, 4 Apr 2015 17:47 J. Pohlmann jpohlm...@shaw.ca wrote:

 Hi Wanda. All the best for your retirement.

 Best regards,

 Joerg Pohlmann

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Prather, Wanda
 Sent: April 3, 2015 14:10
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] So long, and thank you...

 This is my last day at ICF, and the first day of my retirement!

 I'm moving on to the next non-IT-support chapter in life.


 I can't speak highly enough of the people who give of their time and
 expertise on this list.

 I've learned most of what I know about TSM here.


 You all are an amazing group, and it has been a  wonderful experience in
 world-wide collaboration.


 Thank you all!


 Best wishes,

 Wanda



Re: DB2/Oracle backup reporting and scheduling

2015-03-07 Thread Steven Langdale
We have the same scenario.  Oracle, DB2 and MSSQL clients all prefer to do
their own backups.

It comes down to accountability.  If they want to run their own backups
then THEY are responsible for reporting on them.

Maybe a harsh stance, but if I can't schedule it I will not report on it or
be accountable for its success or failure.

As long as they, and their management know that, then that's fine.

FYI, Oracle Grid will report on RMAN backups.

Steven
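
On the "can an external scheduler trigger a TSM schedule" question further
down: one option is for the external scheduler to fire a one-off client action
through dsmadmc once its conditions are met (a sketch; node name and script
path are placeholders, and the client scheduler or acceptor must be running on
the node):

  define clientaction ORA_NODE1 action=command objects='/opt/scripts/rman_backup.sh'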

On Sat, 7 Mar 2015 12:15 Rhodes, Richard L. rrho...@firstenergycorp.com
wrote:

 Yea, that's a major problem we struggle with also.

 We've had major problems with the DBAs making sure that the Oracle backups
 are completed/good.  We've had several instances where major systems have
 had failing backups that they didn't know about, to the point where there
 were no backups for them.

 For Oracle/RMAN backups I have a report that crawls through the backups
 table and reports on old files, but that is mostly to catch RMAN not
 expiring items.

 Sorry, but no good solution here.

 Rick



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Rick Adamson
 Sent: Friday, March 06, 2015 1:38 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: DB2/Oracle backup reporting and scheduling

 Rick,
 This all began after a recent audit revealed many systems either had
 missed backup schedules, excessive retention, or no backups at all, which
 led to the question of how we can better account for them on a day-to-day
 basis.  Of course then the usual finger pointing ensued and management
 asked what could be done to address it.

 How do you assure your business, and auditors, that the expected data is
 available in the event a recovery is needed?

 ~Rick


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Rhodes, Richard L.
 Sent: Friday, March 06, 2015 12:37 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] DB2/Oracle backup reporting and scheduling

 Our Oracle backups have three scenarios.

  1)  Home-grown scripts are scheduled via cron on the Oracle server,
  copy/compress the DB to local disk, then push the DB backup to TSM via a
  dsmc backup of the backup disk area.

 2)  RMAN backups are scheduled via cron which push data to TSM via
 LanFree/SAN or Network.

 3)  Some RMAN backups run via cron and write direct to DataDomain via NFS.
 (no TSM involvement)

 Note - archive logs are pushed to TSM via scripts and run around the clock.


 Rick


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Rick Adamson
 Sent: Friday, March 06, 2015 12:12 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: DB2/Oracle backup reporting and scheduling

 I assume someone has dealt with this I would like to hear how they handled
 it.

 The issue:
 DB2 and/or Oracle database backups that are dependent on completion of
 external processes.

 Currently our DBA's utilize a variety of methods to initiate DB2 and
 Oracle database backups (CRON, external schedulers, etc) which presents
 challenges to confirm that they are being completed as expected. As a
 start, I proposed creating a client schedule and using the TSM scheduler to
 trigger these events, which would minimally provide a
 completed/missed/failed status. Complemented by routine reporting of stored
 objects it would give me some assurance that TSM had what it needed to
 assure their recovery.

 The DBA's are pushing back (surprise!) claiming that some backups have
 special requirements, such as not running during other tasks like payroll
 processing, runstats, etc. so they use the external scheduler to set
 conditions that are met before the backup is initiated.

 The question proposed to me is can a TSM schedule be triggered by the
 external scheduler once the conditions have been met?

 I would be grateful to hear how others handle this, or if they use a
 different approach altogether to assure all DP database backups are
 completing on a timely basis.
 TIA

 ~Rick


 -

 The information contained in this message is intended only for the
 personal and confidential use of the recipient(s) named above. If the
 reader of this message is not the intended recipient or an agent
 responsible for delivering it to the intended recipient, you are hereby
 notified that you have received this document in error and that any review,
 dissemination, distribution, or copying of this message is strictly
 prohibited. If you have received this communication in error, please notify
 us immediately, and delete the original message.


 -
 The information contained in this message is intended only for the
 personal and confidential use of the recipient(s) named above. If the
 reader of this message is not the intended recipient or an agent
 responsible for delivering it to the intended recipient, you are hereby
 notified that you have received 

Re: ANS4174E error

2015-03-02 Thread Steven Langdale
Eric

What does q co vmctl active vmctl_mc t=b f=d come back with?


Re: DEVCLASS=FILE - what am I missing

2015-02-13 Thread Steven Langdale
As the number of vols is damn close to max scratch, I'd say you ran out.
Nothing in the actlog?  Bump it up and see what happens.
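
A sketch of the check and the bump (assuming the /tsmpool filesystem actually
has room for the extra 50 GB volumes):

  query stgpool BACKUPPOOL format=detailed
     (compare "Maximum Scratch Volumes Allowed" with "Number of Scratch Volumes Used")
  update stgpool BACKUPPOOL maxscratch=160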

On Fri, 13 Feb 2015 17:15 Zoltan Forray zfor...@vcu.edu wrote:

 Up until recently, I have always used DEVCLASS=DISK for disk storage and
 always preformatted/allocated the disk volumes into multiple chunks to allow
 for multi-I/O benefits.

 When I recently stood-up a new server, I decided to try DEVCLASS=FILE for
 disk-based storage/incoming backups.

 I thought I understood that FILE type storage was basically
 tape/sequential files on disk and would act accordingly, and that things like
 reclamation now applied, so when the file chunks (I defined 50GB file sizes)
 got below the reclaim value it would reclaim such files, create new ones,
 and delete the old ones automagically.

 Well, last night became a disaster.  Backups were failing all over because it
 couldn't allocate any more files, and it also would not automatically shift to
 use the nextpool, which is defined as a tape pool.

 So, what am I doing wrong?  What assumptions are wrong?  Here is the
 devclass values with the empty values left out...:

  Device Class Name: TSMFS
 Device Access Strategy: Sequential
 Storage Pool Count: 1
Device Type: FILE
 Format: DRIVE
  Est/Max Capacity (MB): 51,200.0
Mount Limit: 40
  Directory: /tsmpool

 Here is the lone stgpool that used this devclass:

 12:06:21 PM   GALAXY : q stg backuppool f=d
 Storage Pool Name: BACKUPPOOL
 Storage Pool Type: Primary
 Device Class Name: TSMFS
Estimated Capacity: 7,106 G
Space Trigger Util: 84.5
  Pct Util: 80.9
  Pct Migr: 80.9
   Pct Logical: 99.2
  High Mig Pct: 85
   Low Mig Pct: 75
   Migration Delay: 0
Migration Continue: Yes
   Migration Processes: 1
 Reclamation Processes: 1
 Next Storage Pool: PRIMARY-ONSITE
  Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
   Description:
 Overflow Location:
 Cache Migrated Files?:
Collocate?: No
 Reclamation Threshold: 59
 Offsite Reclamation Limit:
   Maximum Scratch Volumes Allowed: 143
Number of Scratch Volumes Used: 137
 Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
  Amount Migrated (MB): 0.00
  Elapsed Migration Time (seconds): 1,009
  Reclamation in Progress?: No
Last Update by (administrator): ZFORRAY
 Last Update Date/Time: 02/13/2015 11:44:23
  Storage Pool Data Format: Native
  Copy Storage Pool(s):
   Active Data Pool(s):
   Continue Copy on Error?: Yes
  CRC Data: No
  Reclamation Type: Threshold
   Overwrite Data when Deleted:
 Deduplicate Data?: No
  Processes For Identifying Duplicates:
 Duplicate Data Not Stored:
Auto-copy Mode: Client
 Contains Data Deduplicated by Client?: No

 I calculated the Max Scratch Volumes value based on having ~7.6TB
 filesystem so 50GB * 143 = 7.1TB

 This morning when I checked, there were plenty of volumes with 40%
 utilized.  So why didn't reclaim kick in?  Or am I totally off on this
 assumption?   I manually performed move data on them and it freed things
 up.
 --
 *Zoltan Forray*
 TSM Software  Hardware Administrator
 BigBro / Hobbit / Xymon Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html



Re: size of objects in the backups table

2015-02-04 Thread Steven Langdale
An alternative to the export node option (and it works well if you want the
active data sum for everything) is to set up an empty active-data pool and
do a preview of a stgpool copy.

It'll be faster than multiple exports.

Steven
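
A sketch of that approach (pool and device class names are placeholders; the
active-data pool needs a sequential FILE device class, and if I remember right
the nodes' domains must list the pool as an ACTIVEDESTINATION for the copy to
include them):

  define stgpool ACTIVE_SIZER FILECLASS pooltype=activedata maxscratch=0
  copy activedata BACKUPPOOL ACTIVE_SIZER preview=yes

The completion message then reports the number and total bytes of active
objects, much like the EXPORT NODE preview quoted below.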

On 4 February 2015 at 12:08, Rhodes, Richard L. rrho...@firstenergycorp.com
 wrote:

 Yes, that worked great.  An occupancy gives you the totals, and this gives
 just the active.

  Just out of curiosity I created an SQL command to join contents and backups
  for a single node with very little data.  It never returned, as expected.

 Rick



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Jeanne Bruno
 Sent: Tuesday, February 03, 2015 5:00 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: size of objects in the backups table

 Hello.  I tested this and got the output:

 ANR0986I Process 206 for EXPORT NODE running in the BACKGROUND processed
 37,158 items for a total of 10,206,832,635 bytes with a completion state of
 SUCCESS at 16:58:30.

 So I have 37,158 active items for this particular node, correct?

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 TH
 Sent: Tuesday, February 03, 2015 12:21 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] size of objects in the backups table

 Maybe a different way would be suitable for you - try to do EXPORT NODE
 xxx FILEDATA=BACKUPACTIVE PREVIEW=YES

 The end of process will give you a total size of active data for a node.

 Regards,

 Tomasz Hubicki


 -- Wiadomość oryginalna --
 Temat: [ADSM-L] size of objects in the backups table
 Nadawca: Rhodes, Richard L. rrho...@firstenergycorp.com
 Adresat: ADSM-L@VM.MARIST.EDU
 Data: Tue Feb 03 2015 15:25:52 GMT+0100

  We are on TSM v6.2.5.
 
  We keep running into the normal question that seems to come up when we
 start analyzing our backups.  We can tell the number of active/inactive
 files from the backups table, but not the size, which is in the contents
 table.  Does anyone have a way to get the active/inactive objects and their
 size without killing your system with a massive SQL join?  Maybe some kind
 of SQL join for a specific node.
 
  I just can't believe TSM doesn't provide this info easily from the
 server!
  (I suppose this belongs under the Rant thread!)
 
 
  Rick
 
 
  -
 
  The information contained in this message is intended only for the
 personal and confidential use of the recipient(s) named above. If the
 reader of this message is not the intended recipient or an agent
 responsible for delivering it to the intended recipient, you are hereby
 notified that you have received this document in error and that any review,
 dissemination, distribution, or copying of this message is strictly
 prohibited. If you have received this communication in error, please notify
 us immediately, and delete the original message.
 


 -
 The information contained in this message is intended only for the
 personal and confidential use of the recipient(s) named above. If the
 reader of this message is not the intended recipient or an agent
 responsible for delivering it to the intended recipient, you are hereby
 notified that you have received this document in error and that any review,
 dissemination, distribution, or copying of this message is strictly
 prohibited. If you have received this communication in error, please notify
 us immediately, and delete the original message.



Re: basic DR questions

2015-01-28 Thread Steven Langdale
I've done it just like that in the past (a Unix host lost its root disk); it
does work.
On 28 Jan 2015 07:07, Remco Post r.p...@plcs.nl wrote:

  Op 28 jan. 2015, om 00:25 heeft Andrew Ferris afer...@mrl.ubc.ca het
 volgende geschreven:
 
  Thanks for the reply Rick.
 
  Pointing the new TSM server at the old db and log files didn't work so
 Skylar was correct. Got messages saying that they belonged to another TSM
 server. So I will pull back my one DB tape and double check that the server
 can talk to our 3584/TS3500 and IBM drives.
 

  I’m convinced that if you replace your ‘new’ DB files with the old ones and
  remove the one line that doesn’t contain a path to a file from
  dsmserv.dsk, you should be fine.

  Andrew
 
 
  Rick Adamson rickadam...@biloholdings.com 1/27/2015 11:32 AM 
  Andrew,
  Been there, done that.
  Here's how I handled it:
 
  -Get the server operational. Like others have said it is advantageous to
 have several files from the TSM instance directory (volhist, devconfig, and
 optionally dsmserv.opt). On 5.x it is possible to recover without them, but
 the situation gets a bit more complicated.
  - Assure the system has access to the tape library, (real or virtual),
 and update the devconfig file to reflect any changes needed.
  - Install the TSM server software and perform a minimal configuration.
 This can be done via the management console wizards.
  - Place/replace the volhist, devconfig, and dsmserv files in the
 instance directory.
  - Use the dsmserv restore db command to restore the latest data base
 copy. (If the library is physical tape you may have to manually load the
 tapes as requested.)
  - Bring the TSM Server online and inspect for proper operation.
  -Unless you determine it is needed I would forego the volume auditing,
 the time it takes per volume to complete is extensive. Be critically
 selective here.
 
  If you perform a point-in-time database restore (versus a roll forward)
 I strongly recommend that once the server is up you review the original
 volhist file and resolve any potential issues, such as volumes
 created/deleted in between the time of the database backup used for the
 restore and the time the server crashed.
 
 
  Rick Adamson
  Jacksonville,Fl.
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
 Of Andrew Ferris
  Sent: Tuesday, January 27, 2015 12:16 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: [ADSM-L] basic DR questions
 
  Hello ADSM-ers,
 
   Our ancient 5.5 (EOL I know) TSM server on Windows just corrupted its
  C: drive (so OS + Server Program Files) but everything else is fine - the
 diskpools, the logs, the db files, the library, etc. I even have copies of
 dsmserv.opt, devconfig.out, and volhist.out. I have a plan file but I would
 prefer to pull back as few tapes as possible from offsite.
 
  What would be the quickest way to restore TSM given the large amount of
 non-destroyed material I have?
 
  Sorry my DRM skills are so rusty.
 
  thanks,
  Andrew Ferris
   Network & System Management
   UBC Centre for Heart & Lung Innovation
  St. Paul's Hospital, Vancouver
 
  http://www.hli.ubc.ca

 --

  Met vriendelijke groeten/Kind Regards,

 Remco Post
 r.p...@plcs.nl
 +31 6 248 21 622



Re: Printing labels locally for LTO tapes (physically)

2015-01-16 Thread Steven Langdale
I've done it - generally in an emergency though.

I've found all of the IBM libraries to be very forgiving of rather
amateurish-looking labels.  I've only ever done them on a laser printer,
though.

Getting them to stick (and stay) on is always the most challenging bit!

Steven

On 16 January 2015 at 12:01, Nick Laflamme n...@laflamme.us wrote:

 Does anyone have any experience with trying to produce labels in-house to
 physically relabel LTO tapes?

 We’re going to start using different series of barcode labels as we start
 working with outside customers; I want to know just by looking at a tape
 whose data should be on that tape. My manager is worried that if we stock
 up on tapes as we add each customer, we may end up with too many for one
 customer and not enough for another, so he wants to be able to physically
 relabel the tapes.

 I found one article from HP warning against using inkjets or even “office
 quality” laser printers as being insufficiently precise for the job. They
 also warn about alignment issues, and I can imagine issues with labels
 falling off. However, before I say, “No, we shouldn’t even try; we should
 work with our tape vendor if we need to relabel tapes,” I want to make sure
 I’m not running contrary to actual experiences.

 So, have you tried printing your own labels for LTO tapes, and how’d that
 work out?

 Thanks,
 Nick


Re: Update: Frank will work from home to wait for packages

2015-01-02 Thread Steven Langdale
Now I'm interested!  What are the packages Frank? ;)
 On 2 Jan 2015 16:37, Frank Tsao, email is ts...@sce.com 
frank.t...@sce.com wrote:

  Update: Frank will work from home to wait for packages

  01/05/2015 -

     You are no longer required to attend this meeting.

Re: checkin libv command

2014-12-23 Thread Steven Langdale
Where are these tapes you want to label?  In the library or in the I/O
hopper?
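
If they are sitting in the I/O station rather than already in library slots,
the variant that searches the entry/exit ports looks like this (a sketch; the
library name is a placeholder):

  label libvolume MYLIB search=bulk checkin=scratch labelsource=barcode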

On Tue, 23 Dec 2014 16:38 Jeanne Bruno jbr...@cenhud.com wrote:

 Hello.  Hmmm.  I tried status=private.  Results were the same.

 I tried:
 label libv xx search=yes checkin=scratch labels=b
 volr=A00560L5,A00561L5
 same results.

 Also, the maxscr on all our copy pools are fine. (difference of allowed
 and used is plenty)

 I have tried status=bulk in the past, on a tape I had issues with earlier
 in the year, but that tape is just sitting in the library and no backup
 process has ever used it.  So I'm hesitant to use 'bulk'.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Huebner, Andy
 Sent: Tuesday, December 23, 2014 10:37 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] checkin libv command

 Below is the command I use on my virtual L180 libraries which are SCSI
 libraries.
 I think the difference is the volume range.

 I do not know the library you have, but some libraries are partitioned and
 you have to make sure the tapes are assigned to the correct partition.

 label libv vlibTSM_4 checkin=scr labelsource=barcode volr=s4,s4
 search=y

 Andy Huebner

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Jeanne Bruno
 Sent: Tuesday, December 23, 2014 8:39 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] checkin libv command

 Hello.  We are using TSM manager 6.3.3.0.
 Tape Library is a DS3512.
 I'm trying to check in some brand new tapes (LTO Ultrium 1.5TB).
 On the DS3512, I choose the library and insert the tape into the machine.
 On the command line I use this command:

 label libv xx search=yes checkin=scratch labels=b overwrite=y

 My output is this:

 ANR2017I Administrator ADMIN issued command: LABEL LIBVOLUME 
 search=yes checkin=scratch labels=b overwrite=y ANR0984I Process 120 for
 LABEL LIBVOLUME started in the BACKGROUND at 09:29:14.
 ANR8799I LABEL LIBVOLUME: Operation for library XX started as process
 120.
 ANR8801I LABEL LIBVOLUME process 120 for library XX completed; 0
 volume(s) labeled, 0 volume(s) checked-in.
 ANR0985I Process 120 for LABEL LIBVOLUME running in the BACKGROUND
 completed with completion state SUCCESS at 09:29:19.

 0 volumes, but it finished successfully, and when I query the library XX,
 the new tape does not show in the list.

 I have also tried this command:
 CHECKIN LIBVOLUME xx SEARCH=YES CHECKLABEL=YES status=scratch
 WAITTIME=0 VOLLIST=A00L5.
 Results are the same.

 I have also tried putting the tapes in the door of the machine and not
 doing the 'insert media'... again, the results are the same.
 We just received 30 new tapes last week and I've tried about 10 of them so
 far.
 I have used the cleaning tape and ran the clean process, and also rebooted
 the DS3512.  Same results afterwards.

 Any help, suggestions would be greatly appreciated.



 
 Jeannie Bruno
 Senior Systems Analyst
 jbr...@cenhud.commailto:jbr...@cenhud.com
 Central Hudson Gas  Electric
 (845) 486-5780



Re: Restore and mounts

2014-12-17 Thread Steven Langdale
The point of that technote is that you do not need DIRMC anymore.

On Wed, 17 Dec 2014 16:17 Hans Christian Riksheim bull...@gmail.com wrote:

 Thanks, Nick.

 Of course this was the one TSM server where I forgot to create the DIRMC
 diskpool and that explains the restore behavior.

 Regards,

 Hans Chr.

 On Wed, Dec 17, 2014 at 2:52 PM, Nick Marouf mar...@gmail.com wrote:
 
  This could be normal if TSM is trying to recreate all the directory
  structures.  It creates these first, before restoring actual data.
 
 
 
  With the newer versions of TSM, using a directory management class
  (DIRMC)
  shouldn’t be necessary, since ACL information is applied at a later point
  in time. However with that said, I’ve seen fileservers with millions of
  directory structures that could be spread  across many tapes, or even one
  tape.
 
 
 
  You may want to open a ticket with support for
  confirmation, but the symptoms you are reporting are similar to a
 problem I
  had a while back.
 
 
 
  See this technote with a bit more background.
 
 
 
  http://www-01.ibm.com/support/docview.wss?uid=swg21669468
 
 
 
  On Wed, Dec 17, 2014 at 3:37 AM, Hans Christian Riksheim 
  bull...@gmail.com
  wrote:
  
   I am doing a file system restore. The number of volumes for this node
 is
  35
   and is collocated by filespace.
  
   In the last 24 hours there has been 700 tape mounts for this restore
   session. One volume has been mounted 346 times. Total amount restored
 is
   about 200 GB.
  
   q ses f=d tells me that this is a NoQueryRestore.
  
  
   Is this to be expected?
  
  
   Regards
  
   Hans Chr.
  
 



Re: Client node replication

2014-12-15 Thread Steven Langdale
No they don't.  I have Linux to AIX.
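
For reference, a bare-bones sequence on the source server (a sketch; server
and node names are placeholders, and server-to-server communication must
already be defined):

  set replserver DRSERVER
  update node NODE1 replstate=enabled
  replicate node NODE1
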
 On 15 Dec 2014 16:20, Bill Boyer bjdbo...@comcast.net wrote:

 Been RTFM'ing trying to find out whether the source and target TSM Servers
 for node replication have to be the same OS or can be different? Like I
 have
 a customer with AIX as production, but would like to set up a small D/R
 server and use node replication but would like that to be Winders to help
 save on cost. I've found all kinds of versions, but not OS compatibilities
 in the documentation.



 Anyone help out?



 TIA,

 Bill





 Bill Boyer
 DSS, Inc.
 (610) 927-4407
 Enjoy life. It has an expiration date. - ??



Re: TSM 7.1 usage of volumes for dedupe

2014-10-22 Thread Steven Langdale
What's your collocation set to for that stgpool?
On 22 Oct 2014 16:50, Martha M McConaghy martha.mccona...@marist.edu
wrote:

 We just installed TSM 7.1 during the summer and have been working on
 migrating our backups over from our old v5.5 system.  We are using
 deduplication for our main storage pool and it seems to work great.
 However, I'm concerned about how it is using the volumes in the
 storage pool.  Since we never ran v6, I don't know if this is normal
 or if we have stumbled upon a bug in 7.1.  So, I figured I'd ask on the
 list and see if any of you have some insight.

 Our dedupe storage pool is dev class FILE, of course.  It is set up to
 acquire new scratch volumes as it needs over time.  Originally, I had
 the max scratch vols allowed at 999, which seemed reasonable. After
 about a month, though, we hit that max and I had to keep raising it.
 When I query the volumes belonging to this pool, I see many, many of
 them in full status, with pct util=0:
  Volume Name          Storage Pool  Device Class  Estimated Capacity  Pct Util  Volume Status
  -------------------  ------------  ------------  ------------------  --------  -------------
  /data0/0B55.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0B8F.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0BCF.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0BD6.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C16.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C2A.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C63.BFS      DEDUPEPOOL    FILE          49.9 G                 100.0  Full
  /data0/0C72.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C79.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C84.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C8C.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C93.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0C9A.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0CA1.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full
  /data0/0CB3.BFS      DEDUPEPOOL    FILE          50.0 G                   0.0  Full

 Literally, hundreds of them.  I run reclamations, but these volumes
 never get touched nor reclaimed.  Seems to me that they should. I've
 gone over the admin guide several times, but have found nothing touching
 on this.  We just applied the 7.1.1.0 updates, but that has not helped
 either.  If I do a move data on each, they will disappear.  However,
 more will return to take their place.  Anyone seen this before, or have
 any suggestions?

 Martha

 --
 Martha McConaghy
 Marist: System Architect/Technical Lead
 SHARE: Director of Operations
 Marist College IT
 Poughkeepsie, NY  12601



Re: maxscratch

2014-10-09 Thread Steven Langdale
Eric

It's the max number of vols that can be used from the scratch pool by the
particular stgpool.  So in your instance (assuming you have a single
stgpool with all 4000 private tapes in it AND it got them from the scratch
pool), if you set it to 1000 you won't be able to use any more new scratch
tapes until the usage goes below 1000.

You can look at it as a way to stop a single stgpool stealing all of the
scratch tapes.

Steven
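
The two counters to compare show up in the detailed storage pool query
(values below are illustrative):

  query stgpool TAPEPOOL format=detailed
  ...
  Maximum Scratch Volumes Allowed: 1,000
   Number of Scratch Volumes Used: 412
  ...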

On 9 October 2014 08:53, Loon, EJ van (SPLXM) - KLM eric-van.l...@klm.com
wrote:

 Hi guys!
 I never used the maxscratch value for our VTL libraries, but I'm just
 wondering how it works.
 From the TSM Reference manual:

 MAXSCRatch
 Specifies the maximum number of scratch volumes that the server can
 request for this storage pool. This parameter is optional. You can specify
 an integer from 0 to 1. By allowing the server to request scratch
 volumes as needed, you avoid having to define each volume to be used.

 What is meant by scratch volumes here? The total amount of scratch tapes
 when you start with an empty TSM server or the amount of scratches in the
 current situation?
 For instance, I have a server with 4000 private tapes and 1000 scratch
 tapes. Should I set it to 1000 or 5000?
 Thanks for your help in advance!
 Kind regards,
 Eric van Loon
 AF/KLM Storage Engineering
 
 For information, services and offers, please visit our web site:
 http://www.klm.com. This e-mail and any attachment may contain
 confidential and privileged material intended for the addressee only. If
 you are not the addressee, you are notified that no part of the e-mail or
 any attachment may be disclosed, copied or distributed, and that any other
 action related to this e-mail or attachment is strictly prohibited, and may
 be unlawful. If you have received this e-mail by error, please notify the
 sender immediately by return e-mail, and delete this message.

 Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its
 employees shall not be liable for the incorrect or incomplete transmission
 of this e-mail or any attachments, nor responsible for any delay in receipt.
 Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch
 Airlines) is registered in Amstelveen, The Netherlands, with registered
 number 33014286
 




Re: TSM scheduler service issue

2014-07-19 Thread Steven Langdale
Start by excluding TSM: does it run from a cron or at job?

It's most likely a variable that's set when you run it as a user but isn't
when run from TSM.
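
A typical fix is to make the script set up the Oracle environment itself
instead of relying on a login shell (a sketch; the SID, paths and RMAN command
file are placeholders for whatever /opt/tsm/ora.0.sh actually needs):

  #!/bin/sh
  # set the environment explicitly - the TSM scheduler does not run login profiles
  ORACLE_SID=ORCL;  export ORACLE_SID
  ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1;  export ORACLE_HOME
  PATH=$ORACLE_HOME/bin:$PATH;  export PATH
  rman target / cmdfile=/opt/tsm/backup.rman log=/opt/tsm/backup.log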


On 19 July 2014 13:44, Saravanan evergreen.sa...@gmail.com wrote:

 Hi,

 The TSM scheduler service is running as the root user in the background.

 But when we trigger the Oracle job from TSM it doesn't work:

 su - oracle -c /opt/tsm/ora.0.sh

 But when we run the TSM scheduler service in the foreground it works well.

 Has anybody seen this issue?

 By Sarav
 +65-82284384



Re: VTL label unreadable, I'm stuck.

2014-04-22 Thread Steven Langdale
The 1st thing I'd do is get a call logged with your VTL vendor.  You need
to find out if this corruption is more widespread.
On 22 Apr 2014 16:13, Loon, EJ van (SPLXM) - KLM eric-van.l...@klm.com
wrote:

 Hi TSM-ers!
 In one of my virtual libraries I found 4 tapes which were unavailable. I
 updated them to read-only and issued a move data for them. This failed due
 to the following error:

 04/22/2014 14:39:13 ANR8355E I/O error reading label for volume 020408 in
 drive TVTL_TLSP2_1_DR38

 This looks like an unlabeled tape, but it's a private tape used in a
 primary storage pool! So I tried relabeling it with overwrite=yes:

 04/22/2014 14:59:11 ANR8816E LABEL LIBVOLUME: Volume 020408 in library
 TVTL_TLSP2_1 cannot be labeled because it is currently defined in a storage
 pool or in the volume history file.

 Now what... I can't read the data on the tape since the label is corrupt
 and I can't relabel it because there is data on it!
 Thank you very much for your suggestions in advance!
 Kind regards,
 Eric van Loon
 AF/KLM Storage Engineering
 
 For information, services and offers, please visit our web site:
 http://www.klm.com. This e-mail and any attachment may contain
 confidential and privileged material intended for the addressee only. If
 you are not the addressee, you are notified that no part of the e-mail or
 any attachment may be disclosed, copied or distributed, and that any other
 action related to this e-mail or attachment is strictly prohibited, and may
 be unlawful. If you have received this e-mail by error, please notify the
 sender immediately by return e-mail, and delete this message.

 Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its
 employees shall not be liable for the incorrect or incomplete transmission
 of this e-mail or any attachments, nor responsible for any delay in receipt.
 Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch
 Airlines) is registered in Amstelveen, The Netherlands, with registered
 number 33014286
 



Re: V5-V6 conversion failed on INSERTDB with File system full

2014-04-02 Thread Steven Langdale
Bill, what was the outcome?


On 1 April 2014 23:22, Bill Boyer bjdbo...@comcast.net wrote:

 V5.5.7.0 on AIX converting over the network to V6.3.4.0 on Linux RedHat.

 The V5 DB is 260GB at 90%.



 The TSM DB is defined on 4x100GB file systems /tsmdb01 - /tsmdb04.  Active
 log is 32GB and archive log file system is 200GB.

 The extractdb phase seems to complete successfully:

 ANR0409I Session 1 ended for server $UPGRADETARGET$ (Linux/x86_64).

 ANR1382I EXTRACTDB: Process 1, database extract, has completed.

 ANR1383I EXTRACTDB: Found 119 database objects.

 ANR1384I EXTRACTDB: Processed 75 database objects.

 ANR1385I EXTRACTDB: Skipped 44 empty database objects.

 ANR1386I EXTRACTDB: Failed to process 0 database objects.

 ANR1387I EXTRACTDB: Processed 1,178,563,762 database records.

 ANR1388I EXTRACTDB: Read 47,215,261 database pages.

 ANR1389I EXTRACTDB: Wrote 128,524,753,412 bytes.

 ANR1390I EXTRACTDB: Elapsed time was 2:16:21.

 ANR1391I EXTRACTDB: Throughput was 53936.19 megabytes per hour.



 After the ExtractDB phase completes in the InsertDB log:

 ANR1379I INSERTDB: Read 123,388,191,411 bytes and inserted 1,126,658,997

 database entries in 2:10:00 (54310.15 megabytes per hour).

 ANR1526I INSERTDB: Building indices and checking table integrity.

 ANR0409I Session 2 ended for server $UPGRADESOURCE$ (AIX-RS/6000).

 ANR1527I INSERTDB: Checked 69 of 75 database objects in 0:04:03.

 The file system is full.

 ANR0171I tbcli.c(10847): Error detected on 27:2, database in evaluation
 mode.

 ANR0131E tbcli.c(10847): Server DB space exhausted.

 ANR0162W Supplemental database diagnostic information:  -1: :-968

 ([IBM][CLI Driver][DB2/LINUXX8664] SQL0968C  The file system is full.

 ).



 And a bunch more Transaction hash table..



 Lock hash table contents (slots=3002):

 Note: Enabling trace class TMTIMER will provide additional timing info on
 the

 following locks

   *** no locks found ***

 ANR1527I INSERTDB: Checked 70 of 75 database objects in 0:14:03.

 ANR1527I INSERTDB: Checked 70 of 75 database objects in 0:24:03.

 ANR1527I INSERTDB: Checked 70 of 75 database objects in 0:34:03.

 ANR1527I INSERTDB: Checked 70 of 75 database objects in 0:44:03.

 ANR1527I INSERTDB: Checked 70 of 75 database objects in 1:04:03.

 ANR1527I INSERTDB: Checked 70 of 75 database objects in 1:24:03.



 And then nothing. It just quits. But it appears to have run several hours
 after the extract/insert phase had completed. The Linux file systems shows
 /tsmdb01 at 48%, /tsmdb02 at 100%, /tsmdb03 at 100% and /tsmdb04 at 43%. 2
 of the 4 file systems were at 100%, but I should have still had 100GB of
 database space left.



 I have since added another /tsmdb05 at 100GB and restarted the process.



 The other file systems: /home had 4.7GB of free space. /tmp 6.4GB, /var
 4.4GB.



 Any ideas? I've got about 2-3 more hours to see if adding the 100GB of
 database space will resolve the problem.



 Bill Boyer
 DSS, Inc.
 (610) 927-4407
 Enjoy life. It has an expiration date. - ??



Re: Why?

2014-02-24 Thread Steven Langdale
There are quite a few operations that take the VTL offline.  The word
"temporary" is a bit of a stretch too.  Mine takes 25 mins to go offline
and come back, though I understand the time relates to the size of the VTL.

Agreed though, hardly enterprise class.
 On 24 Feb 2014 14:44, Remco Post r.p...@plcs.nl wrote:

 Hi All,

 I just noticed something disturbing in the protectier manual: The
 ProtecTIER system temporarily goes offline to WTF!?!? Did I just read
 that correctly? Imagine a disk system going offline every time you add a
 LUN

 --

 Remco Post
 re...@plcs.nl
 06-248 21 622


Re: TSM

2014-02-04 Thread Steven Langdale
There isn't a straight answer to that :)

Loads of little files DIRECTLY to tape will be slow.
Loads of little files to disk, then migrated to tape, may not be.
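
As a rough sketch of what I mean (pool names, path and sizes below are just
placeholders), a small random-access disk pool that migrates to the tape pool
looks something like:

  define stgpool SMALLFILE_DISK disk
  define volume SMALLFILE_DISK /tsm/smallfile01.dsm formatsize=51200
  update stgpool SMALLFILE_DISK nextstgpool=TAPEPOOL highmig=70 lowmig=30

with the copy group for those filesystems pointing at SMALLFILE_DISK instead
of straight at tape.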

600GB is not that much data, but I think the group would need more
information before giving better guidance.

Steven


On 4 February 2014 10:17, madu...@gmail.com madu...@gmail.com wrote:

 Dears,

 Our local IBM agent, has staed in his offer the following:
 Please note if you have  any application  with large size more than 600GB,
 and contain small files with size less than 16kb, the backup will be slow
 through TSM (Tivoli Storage Manager Suite for Unified Recover 6TB via
 TS3200 LTO6

 Can you please confirm the above?

 BR
 -mad



Oracle rman - specify management class

2014-02-04 Thread Steven Langdale
All

We have quite a few large Oracle nodes that back up via RMAN & TDPO.  They
currently run LAN-free.

Does anyone know if it's possible to do the main DB LAN-free and specify a
different management class for the archive redo logs?  They are ~1GB each,
so a disk pool is a better destination.

The Oracle nodes are AIX if that makes a difference.
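
For reference, the sort of split I have in mind (file and channel names are
placeholders, and I've not proven this works yet) is separate RMAN channels
pointing at different TDPO option files, with the archive-log option file
bound to a management class whose destination is the disk pool:

  run {
    allocate channel t_db device type 'sbt_tape'
      parms 'ENV=(TDPO_OPTFILE=/oracle/tdpo_db.opt)';
    allocate channel t_arch device type 'sbt_tape'
      parms 'ENV=(TDPO_OPTFILE=/oracle/tdpo_arch.opt)';
    backup (database channel t_db) (archivelog all channel t_arch);
  }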

Thanks

Steven


Re: TSM

2014-02-04 Thread Steven Langdale
If your go/no-go decision for TSM rests on just this one server, is it the
only one to back up?  If so, then maybe TSM is not the best product - not for
backing up a single server.

Either way, you've not given us enough detail.  We don't even know your
backup window.
On 4 Feb 2014 20:31, madu...@gmail.com madu...@gmail.com wrote:

 If I can't finish the backup inside the backup window, I might decide
 against TSM
 As stated above, you mean large quantities of small files should not
 go direct to tape; then it will be relatively slow, it should be send to
 disk, then migrate to tape, using journal backup or image backup should do
 the job

 -mad


 On Tue, Feb 4, 2014 at 7:36 PM, Sergio O. Fuentes sfuen...@umd.edu
 wrote:

  Also, if you do image-level backups of 600GB to LTO6 via a large-enough
  network link (1Gbps), you should be able to get a fast enough backup,
 and
  object size doesn't factor.  You can also turn on journaling if on
 Windows
  or Linux or AIX, and you can do incrbydate.  Lots of options.  We'll need
  more info on your backup requirements.  Any backup product with 16kb
 files
  in a 600GB filesystem will perform poorly, if that product catalogs all
  objects in that filesystem.  I suppose your IBM rep is covering his
 bases.
 
  SF
 
  On 2/4/14 5:17 AM, madu...@gmail.com madu...@gmail.com wrote:
 
  Dears,
  
  Our local IBM agent, has staed in his offer the following:
  Please note if you have  any application  with large size more than
  600GB,
  and contain small files with size less than 16kb, the backup will be
 slow
  through TSM (Tivoli Storage Manager Suite for Unified Recover 6TB via
  TS3200 LTO6
  
  Can you please confirm the above?
  
  BR
  -mad
 



Re: POLL: Backing up Windows Systemstate

2014-01-09 Thread Steven Langdale
Hi

In my opinion (and IBM's, if you ask) you should use VSS anyway, regardless
of your systemstate preference.

We back up the systemstate so we can recover the whole system. No
systemstate, and you have to reinstall apps etc.
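
For completeness, if a site does decide to drop it, my understanding (worth
verifying against the client manual for your level - this is an assumption on
my part) is that it's just a domain statement in dsm.opt:

  domain ALL-LOCAL -SYSTEMSTATE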

Just my $2

Steven
On 9 Jan 2014 18:13, Zoltan Forray zfor...@vcu.edu wrote:

 A question about backing up systemstate, which seems to give us numerous
 headaches.

 Do you backup systemsstate on your Windows servers and WHY?

 My Windows folks constantly contact me about errors backup up and getting
 failures related to the systemstate files/process or VSS.

 For example, today's headache is a 2008 box failing with *System Writers
 'system writer'  do not exist * and yes, doing a vssadmin list writers
 does not list system writers.  They have been fighting this since
 November, with no solution to fixing this problem.  All hits/suggested
 solutions from Google searches have been tried, to no avail. I don't thing
 the server owners are willing to do a complete rebuild.

 The simple solution would be to NOT backups systemstate.   For a simple
 2008 server, what do we lose by not backing up systemstate?

 --
 *Zoltan Forray*
 TSM Software  Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html



Re: Recover Archived Data from Tapes without Catalog

2013-12-11 Thread Steven Langdale
As others have said, the best way is to recover the old server (assuming
you can get the hardware).

IBM do offer a recovery service (I've used it), assuming the tapes have not
been overwritten, I'd expect you to get everything back.

It's not cheap though and I'm pretty sure you'll need a server and drive
the same as the original TSM server, along with plenty of disk space.

There is also an open source version of the same thing - but I've never
tried it.

Steven


Backing up Windows scheduled tasks in 2003 & 2008

2013-10-23 Thread Steven Langdale
Hello all

I have a requirement to back up scheduled tasks on a couple of Windows
servers.  I thought I might be able to get away with just C:\Windows\Tasks
(and a list of the relevant passwords), but that dir is an OS exclude on
2008 (the only one I've tried so far).

Has anyone else needed to do this?  If so, how did you go about it?

My current test platform is 2008 + 6.4.0.11 client
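
One workaround I'm weighing up (untested; the folder and file name are just
examples) is a PRESCHEDULECMD that dumps the task definitions to a file that
isn't excluded, and then backing that file up:

  schtasks /query /fo LIST /v > C:\TaskExports\scheduled_tasks.txt

That would at least capture enough to recreate the tasks by hand after a
restore.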

Thanks

Steven


Re: Node Replication Sessions

2013-09-05 Thread Steven Langdale
Every hour?

Do you use a config manager with a refresh time of 60 mins?  The message is
very similar.




On 4 September 2013 21:06, Vandeventer, Harold [BS] 
harold.vandeven...@ks.gov wrote:

 I'm on Windows, running 6.3.3.100.

 Replication runs only in the early morning hours, and I just saw the ANRs
 with mid-afternoon time stamps.

 A detailed search for string started for server name returns ANRs on
 about an hourly basis.
 5:02
 6:04
 7:05
 8:06
 9:08
 10:09
 11:10
 And on-going.

 These are all well after the scripted command ran to start replication of
 a group had finished.


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Steven Langdale
 Sent: Wednesday, September 04, 2013 2:52 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Node Replication Sessions

 Harold

 Just checked one of my node replication source servers, and I'm only
 seeing those messages when it is supposed to be replicating.  None at any
 other time.

 Server is Linux, TSM ver 6.3.2.2

 Steven


 On 4 September 2013 20:40, Vandeventer, Harold [BS] 
 harold.vandeven...@ks.gov wrote:

  I've started replicating a few nodes; works well it appears.
 
  But, I'm seeing Activity Log entries of ANR0408I Session nnn started
  for server Replication Target Server (Windows) (Tcp/Ip) for
 replication.
  Likewise, the target server has similar ANRs referencing the source
 server.
 
  But, these are occurring during times of day when the replication
  activity is not running.
 
  Is it simply raising a heartbeat/ping type session for a testing purpose?
 
  
  Harold Vandeventer
  Systems Programmer
  State of Kansas - Office of Information Technology Services STE 751-S
  910 SW Jackson
  (785) 296-0631
 
 
  [Confidentiality notice:]
  ***
  This e-mail message, including attachments, if any, is intended for the
  person or entity to which it is addressed and may contain confidential
  or privileged information.  Any unauthorized review, use, or disclosure
  is prohibited.  If you are not the intended recipient, please contact
  the sender and destroy the original message, including all copies,
  Thank you.
  ***
 



Re: Node Replication Sessions

2013-09-05 Thread Steven Langdale
easy way to check: q subscription
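
If it does turn out to be a managed server, q status on it should (if I
remember right) show the Configuration manager? and Refresh interval fields,
and the interval itself is changed there with something like:

  set configrefresh 240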




On 5 September 2013 13:51, Vandeventer, Harold [BS] 
harold.vandeven...@ks.gov wrote:

 Not that I know of...l'll do some more research.



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Steven Langdale
 Sent: Thursday, September 05, 2013 3:48 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Node Replication Sessions

 Every hour?

 Do you use a config manager with a refresh time of 60 mins?  The message
 is very similar.




 On 4 September 2013 21:06, Vandeventer, Harold [BS] 
 harold.vandeven...@ks.gov wrote:

  I'm on Windows, running 6.3.3.100.
 
  Replication runs only in the early morning hours, and I just saw the
  ANRs with mid-afternoon time stamps.
 
  A detailed search for string started for server name returns ANRs
  on about an hourly basis.
  5:02
  6:04
  7:05
  8:06
  9:08
  10:09
  11:10
  And on-going.
 
  These are all well after the scripted command ran to start replication
  of a group had finished.
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
  Of Steven Langdale
  Sent: Wednesday, September 04, 2013 2:52 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] Node Replication Sessions
 
  Harold
 
  Just checked one of my node replication source servers, and I'm only
  seeing those messages when it is supposed to be replicating.  None at
  any other time.
 
  Server is Linux, TSM ver 6.3.2.2
 
  Steven
 
 
  On 4 September 2013 20:40, Vandeventer, Harold [BS] 
  harold.vandeven...@ks.gov wrote:
 
   I've started replicating a few nodes; works well it appears.
  
   But, I'm seeing Activity Log entries of ANR0408I Session nnn
   started for server Replication Target Server (Windows) (Tcp/Ip)
   for
  replication.
   Likewise, the target server has similar ANRs referencing the source
  server.
  
   But, these are occurring during times of day when the replication
   activity is not running.
  
   Is it simply raising a heartbeat/ping type session for a testing
 purpose?
  
   
   Harold Vandeventer
   Systems Programmer
   State of Kansas - Office of Information Technology Services STE
   751-S
   910 SW Jackson
   (785) 296-0631
  
  
   [Confidentiality notice:]
   
   *** This e-mail message, including attachments, if any, is intended
   for the person or entity to which it is addressed and may contain
   confidential or privileged information.  Any unauthorized review,
   use, or disclosure is prohibited.  If you are not the intended
   recipient, please contact the sender and destroy the original
   message, including all copies, Thank you.
   
   ***
  
 



Re: Node Replication Sessions

2013-09-04 Thread Steven Langdale
Harold

Just checked one of my node replication source servers, and I'm only seeing
those messages when it is supposed to be replicating.  None at any other
time.

Server is Linux, TSM ver 6.3.2.2

Steven


On 4 September 2013 20:40, Vandeventer, Harold [BS] 
harold.vandeven...@ks.gov wrote:

 I've started replicating a few nodes; works well it appears.

 But, I'm seeing Activity Log entries of ANR0408I Session nnn started for
 server Replication Target Server (Windows) (Tcp/Ip) for replication.
 Likewise, the target server has similar ANRs referencing the source server.

 But, these are occurring during times of day when the replication activity
 is not running.

 Is it simply raising a heartbeat/ping type session for a testing purpose?

 
 Harold Vandeventer
 Systems Programmer
 State of Kansas - Office of Information Technology Services
 STE 751-S
 910 SW Jackson
 (785) 296-0631


 [Confidentiality notice:]
 ***
 This e-mail message, including attachments, if any, is intended for the
 person or entity to which it is addressed and may contain confidential
 or privileged information.  Any unauthorized review, use, or disclosure
 is prohibited.  If you are not the intended recipient, please contact
 the sender and destroy the original message, including all copies,
 Thank you.
 ***



Expanding ProtecTIER Library

2013-08-27 Thread Steven Langdale
Guys.

I'm about to expand an existing virtual library.

The lib manager is running 5.5.5.2

I'm assuming I'll just need to restart the lib manager and define the extra
drives & paths (also adding slots).  Has anyone done this and had to do any
more to get it working, i.e. delete and reconfigure the whole library?
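
For clarity, per new drive I'm expecting it to be no more than the usual pair
of commands (library, drive, server and device names below are placeholders
for my setup):

  define drive VTL_LIB1 DR49
  define path TSMLIBMGR DR49 srctype=server desttype=drive library=VTL_LIB1 device=/dev/rmt48

plus the matching path definitions from the library clients / storage agents.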

Thanks

Steven


Re: ANR8758W message

2013-08-07 Thread Steven Langdale
I think this came up a few days ago...  It's a requirement of libtype VTL
that all clients have all paths to all the drives.
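
So the fix should just be to define the missing paths for that storage agent
until it has one per drive - a sketch only, since the drive names and device
special files have to come from what the agent actually sees:

  define path EXCHSRVAN_SA DRIVE09 srctype=server desttype=drive library=DDVTL2 device=/dev/rmt8

repeated for each of the remaining drives.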


On 7 August 2013 09:27, Robert Ouzen rou...@univ.haifa.ac.il wrote:

 Hi to all

 Today I tried to backup one node connected lanfree  thru Data Domain  as
 VTL backup, when changing the libtype of the library from SCSI to VTL.

 The backup run fine but  got a lot of this warning 

 ANR8758W The number of online drives in the VTL library DDVTL2 does not
 match the number of online drive paths for source EXCHSRVAN_SA.

 I have 25 drives but for the lanfree of this specific node I created only
  8 paths ..

 So if I understand correctly the number of paths need to match the number
 of drives ?

 Or did it a way to get rid of this warning ?

 Tsm server version:  6.3.4
 Lanfrre STA version: 6.3.4
 Data Domain version: 5.1.1

 Best  Regards

 Robert



Re: Long pause for password prompt

2013-07-31 Thread Steven Langdale
If you'd said ssh or telnet, the 1st thing I'd say is DNS.

Check that reverse lookups are working on the server for the client's IP.
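
A quick test from the TSM server (the address below is just a placeholder for
the client's IP) is whether this comes back instantly or sits there:

  nslookup 10.1.2.3

If it hangs for roughly the same 30 seconds, there's your answer.
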
On 31 Jul 2013 20:44, Ehresman,David E. deehr...@louisville.edu wrote:

 One customer is reporting a fairly consistent 30 second or so wait after
 entering the dsmadmc command before receiving the password prompt.  After
 the password is entered, response time is normal.  Any ideas?

 David



Re: Deduplication/replication options

2013-07-26 Thread Steven Langdale
Hello Stefan

Have you got cases of this?  I ask because I have been specifically told by
our rep that any dedupe saving for capacity licensing is TSM dedupe only,
regardless of the backend storage.



On 26 July 2013 09:16, Stefan Folkerts stefan.folke...@gmail.com wrote:

 No, this is correct, IBM does give Protectier (for example) customers an
 advantage with deduplication and factor in the dedup for billing.


 On Wed, Jul 24, 2013 at 10:18 PM, Colwell, William F.
 bcolw...@draper.comwrote:

  Hi Norman,
 
  that is incorrect.  IBM doesn't care what the hardware is when measuring
  used capacity
  in the Suite for Unified Recovery licensing model.
 
  A description of the measurement process and the sql to do it is at
  http://www-01.ibm.com/support/docview.wss?uid=swg21500482
 
  Thanks,
 
  Bill Colwell
  Draper Lab
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
  Gee, Norman
  Sent: Wednesday, July 24, 2013 11:29 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: Deduplication/replication options
 
  This why IBM is pushing their VTL solution.  IBM will only charge for the
  net amount using an all IBM solution.  At least that is what I was told.
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
  Loon, EJ van - SPLXM
  Sent: Tuesday, July 23, 2013 11:59 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: Deduplication/replication options
 
  Hi Sergio!
  Another thing to take into consideration: if you have switched from PVU
  licensing to sub-capacity licensing in the past: TSM sub-capacity
  licensing is based on the amount of data stored in your primary pool. If
  this data is stored on a de-duplicating storage device you will be
  charged for the gross amount of data. If you are using TSM
  de-duplication you will have to pay for the de-duplicated amount. This
  will probably save you a lot of money...
  Kind regards,
  Eric van Loon
  AF/KLM Storage Engineering
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
  Sergio O. Fuentes
  Sent: dinsdag 23 juli 2013 19:20
  To: ADSM-L@VM.MARIST.EDU
  Subject: Deduplication/replication options
 
  Hello all,
 
  We're currently faced with a decision go with a dedupe storage array or
  with TSM dedupe for our backup storage targets.  There are some very
  critical pros and cons going with one or the other.  For example, TSM
  dedupe will reduce overall network throughput both for backups and
  replication (source-side dedupe would be used).  A dedupe storage array
  won't do that for backup, but it would be possible if we replicated to
  an identical array (but TSM replication would be bandwidth intensive).
  TSM dedupe might not scale as well and may neccessitate more TSM servers
  to distribute the load.  Overall, though, I think the cost of additional
  servers is way less than what a native dedupe array would cost so I
  don't think that's a big hit.
 
  Replication is key. We have two datacenters where I would love it if TSM
  replication could be used in order to quickly (still manually, though)
  activate the replication server for production if necessary.  Having a
  dedupe storage array kind of removes that option, unless we want to
  replicate the whole rehydrated backup data via TSM.
 
  I'm going on and on here, but has anybody had to make a decision to go
  one way or the other? Would it make sense to do a hybrid deployment
  (combination of TSM Dedupe and Array dedupe)?  Any thoughts or tales of
  woes and forewarnings are appreciated.
 
  Thanks!
  Sergio
  
  For information, services and offers, please visit our web site:
  http://www.klm.com. This e-mail and any attachment may contain
  confidential and privileged material intended for the addressee only. If
  you are not the addressee, you are notified that no part of the e-mail or
  any attachment may be disclosed, copied or distributed, and that any
 other
  action related to this e-mail or attachment is strictly prohibited, and
 may
  be unlawful. If you have received this e-mail by error, please notify the
  sender immediately by return e-mail, and delete this message.
 
  Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its
  employees shall not be liable for the incorrect or incomplete
 transmission
  of this e-mail or any attachments, nor responsible for any delay in
 receipt.
  Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch
  Airlines) is registered in Amstelveen, The Netherlands, with registered
  number 33014286
  
 
 



Re: TSM VE backup of Orcale Windows server

2013-07-18 Thread Steven Langdale
TSM4VE does get VMware to create a snapshot; it's VMware that then
integrates with the VM to do the VSS stuff.

As you don't have VSS, VMware will use its own driver to do this (SYNC).
It has been known for this quiesce stage to kill a busy server:

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKCdocType=kcdocTypeID=DT_KB_1_1externalId=5962168

Feedback if it makes a difference.

Steven

P.S. loved your comment about old W2K servers!


On 17 July 2013 19:13, Huebner, Andy andy.hueb...@alcon.com wrote:

 In the physical world, we stopped the application, used the SAN to make a
 snap shot of the disks then restarted the application.  The snaps where
 then given to another server where they are backed up.  The virtual version
 would be similar, stop the application, start the backup, start the
 application.

 Our understanding of the TSM VE process is that TSM has VMWare make a snap
 of the disks then TSM backs up the snap.  If that is the case why would VSS
 matter on the guest?  Or do we have it wrong?

 Our problem is we have been given about 30 minutes to backup an
 application that spans 3 servers and dozens of disks.  On the disks we have
 Oracle and millions of files.  All have to be in sync to be able to do a
 complete restoration of the application.
 The physical version of this process has worked for more than 10 years,
 now we need to convert it to virtual.


 Andy Huebner


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Ryder, Michael S
 Sent: Wednesday, July 17, 2013 11:25 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM VE backup of Orcale Windows server

 Andy:

 VE uses Microsoft VSS (Volume Shadow copy Service), which was not
 available with Windows 2000.
 Oracle VSS Writer is only available with Oracle 9i or later.

 On Windows 2003 and newer, and Oracle 9i and alter, we have no trouble
 with hot-backups of Oracle systems where we have the Oracle writer for VSS
 installed.

 I can't explain why your DB would get corrupt, but without VSS and
 VSS-Writer for Oracle, there isn't any integration between Oracle and the
 snapshot process.  That in itself might be enough to explain it.

 On servers where we couldn't get VSS Writer for Oracle installed, we only
 do cold-backups by using VMware tools to execute batch-commands to properly
 shutdown and restart applications and their Oracle databases.

 I think you will only be able to get away with cold-backups in your
 current configuration.

 Mike

 Best regards,

 Mike
 RMD IT, x7942


 On Wed, Jul 17, 2013 at 11:20 AM, Huebner, Andy andy.hueb...@alcon.com
 wrote:

  We ran our first backup of a Oracle server using TSM VE and the Oracle
  DB reported many errors and it caused the Oracle DB to become corrupt.
  I believe Oracle crashed and it was later recovered.
 
  Has anyone had any issues backing up a live Oracle system with TSM VE?
 
  Oracle - v.Old
  Windows - 2000 (laugh it you want, but I bet you have some too) TSM
  agent 6.4.0.0 TSM Server 6.2.3.100
 
  There is far more to the process and well thought out reasons, but
  this is the bit that is having an issue.
 
 
  Andy Huebner
 



Re: Relabaling empty volumes in DataDomain VTL

2013-07-15 Thread Steven Langdale
Shouldn't you check them in the 1st time with label libvol...?
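
i.e. something along the lines of (for volumes that aren't already checked
in):

  label libvolume DAILY_WIN search=yes labelsource=barcode checkin=scratch

For the ones already checked in as private with no data on them, I'd expect a
checkout to be needed first before they can be relabelled.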


On 15 July 2013 11:34, Grigori Solonovitch 
grigori.solonovi...@ahliunited.com wrote:

 Hello Robert,
 Thank you very much for fast response.
 I have a lot of other volumes in the same library and all of them are
 relabeled after reclamation except newly defined.
 By the way:
 Library Name: DAILY_WIN
   Library Type: VTL
   ACS Id:
  Private Category:
  Scratch Category:
  WORM Scratch Category:
External Manager:
 Shared: No
   LanFree:
  ObeyMountRetention:
 Primary Library Manager:
   WWN: 2021884C5C5D
  Serial Number: 3440460009
  AutoLabel: OVERWRITE
 Reset Drives: Yes
Relabel Scratch: Yes
   ZosMedia:
 Last Update by (administrator): SA25577
Last Update Date/Time: 03/25/2013 11:47:31


 Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank
 Kuwait, www.ahliunited.com.kw

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Robert Ouzen
 Sent: 15 07 2013 1:29 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Relabaling empty volumes in DataDomain VTL

 Hi Grigori

 Check your Data Domain VTL library has  the function  Relabel Scratch: Yes

 q libr XX f=d

 Regards

 Robert Ouzen
 Haifa University
 Israel

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Grigori Solonovitch
 Sent: Monday, July 15, 2013 1:15 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Relabaling empty volumes in DataDomain VTL

 Hello everybody,

 I have added additional volumes in Data Doamin VTL and checked in them by
 using next command checkin libvol library search=yes stat=scr
 checkl=barcode After adding volumes I have found all of them from q vol
 in status Scratch Volume?: No
 As a result, all new volumes are not relabeled after reclamation to
 release disk space in Data Domain.
 They are becoming just EMPTY private volumes and reused without relabeling.
 What was done wrongly?
 How to fix problem now?

 Thank you very much in advance.

 Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank
 Kuwait, www.ahliunited.com.kw


 

 CONFIDENTIALITY AND WAIVER: The information contained in this electronic
 mail message and any attachments hereto may be legally privileged and
 confidential. The information is intended only for the recipient(s) named
 in this message. If you are not the intended recipient you are notified
 that any use, disclosure, copying or distribution is prohibited. If you
 have received this in error please contact the sender and delete this
 message and any attachments from your computer system. We do not guarantee
 that this message or any attachment to it is secure or free from errors,
 computer viruses or other conditions that may damage or interfere with
 data, hardware or software.


 Please consider the environment before printing this Email.


 

 CONFIDENTIALITY AND WAIVER: The information contained in this electronic
 mail message and any attachments hereto may be legally privileged and
 confidential. The information is intended only for the recipient(s) named
 in this message. If you are not the intended recipient you are notified
 that any use, disclosure, copying or distribution is prohibited. If you
 have received this in error please contact the sender and delete this
 message and any attachments from your computer system. We do not guarantee
 that this message or any attachment to it is secure or free from errors,
 computer viruses or other conditions that may damage or interfere with
 data, hardware or software.



Re: Node Replication and BA STGP

2013-07-10 Thread Steven Langdale
Yes you can.
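
On the target server it should just be the normal command (pool names below
are placeholders):

  backup stgpool REPL_TAPEPOOL COPYPOOL maxprocess=4
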
On 10 Jul 2013 12:56, Stackwick, Stephen stephen.stackw...@icfi.com
wrote:

 Can I run a BA STGP on the target server, with the primary pool being the
 pool in which the replicated nodes reside? I'd like to replicate nodes,
 then create copypool tapes on the target side. Can't seem to find this
 information on the IBM site or wiki, maybe it's too obvious?

 Steve S.

 STEPHEN STACKWICK | Senior Consultant | 301.518.6352 (m) |
 stephen.stackw...@icfi.commailto:sstackw...@icfi.com | icfi.com
 http://www.icfi.com/
 ICF INTERNATIONAL | 410 E. Pratt Street Suite 2214, Baltimore, MD 21202 |
 410.539.1135 (o)



Poor LAN Free performance VIOS

2013-06-19 Thread Steven Langdale
All

Got a new LAN Free client, nothing out of the ordinary, but LAN Free perf
is slower than I'd hope.

I'm going to a ProtecTier VTL and have 20 sessions going.

Not started doing any fault finding yet, but the one big difference with
this client is that the tape HBA's are virtual ones (NPIV).

Has anyone seen less than stellar performance with this kind of setup, or
should the virtual HBA's be lower down the list of things to check?

For info, the VIOs do have separate real HBAs for tape & disk.

Thoughts/comments welcome

Thanks

Steven


BA Client Ver 6.3.1.0 for AIX

2013-04-10 Thread Steven Langdale
Guys

A quick one..

Had an admin ping me saying he's installed the 6.3.1.0 client on an AIX box
and dsmcad is missing.  Before I spend too much time fault-finding this
one, I assume it SHOULD be there?

Thanks

Steven


Re: BA Client Ver 6.3.1.0 for AIX

2013-04-10 Thread Steven Langdale
Update... Found it, the web fileset wasn't installed, and it's in there :)
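
For anyone else hitting the same thing, a quick way to see which TSM client
filesets actually made it onto the box is:

  lslpp -l | grep -i tsm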

Thanks for the fast feedback folks!

Steven


On 10 April 2013 14:38, Andrew Raibeck stor...@us.ibm.com wrote:

 Yes, dsmcad should be there.

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Product Development
 Level 3 Team Lead
 Internal Notes e-mail: Andrew Raibeck/Hartford/IBM@IBMUS
 Internet e-mail: stor...@us.ibm.com

 IBM Tivoli Storage Manager support pages:

 http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager

 http://www.ibm.com/developerworks/wikis/display/tivolidoccentral/Tivoli
 +Storage+Manager
 https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/Tivoli
 +Storage+Manager/page/Home

 ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 2013-04-10
 08:30:28:

  From: Steven Langdale steven.langd...@gmail.com
  To: ADSM-L@vm.marist.edu,
  Date: 2013-04-10 08:33
  Subject: BA Client Ver 6.3.1.0 for AIX
  Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
 
  Guys
 
  A quick one..
 
  Had an admin ping me saying he's installed the 6.3.1.0 client on an AIX
 box
  and dsmcad is missing.  Before I spend too much time fault finding this
  one, I assume is SHOULD be there?
 
  Thanks
 
  Steven
 



Export to tape, Import on another platform

2013-04-09 Thread Steven Langdale
Hell all

A quick one, If I do an export to tape (LTO3), from a Windows TSM 5.5
instance, can I import it into a 5.5 instance on AIX?

Thanks

Steven


Re: Export to tape, Import on another platform

2013-04-09 Thread Steven Langdale
Thanks Wanda.
James, I can't in this case - it's 3TB+ of data over a slow WAN link.

Thanks

Steven


On 9 April 2013 21:16, James Choate jcho...@chooses1.com wrote:

 Or.
 If you have server-to-server communication between the Windows TSM server
 and the AIX TSM server, you can export/import without going to tape.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Prather, Wanda
 Sent: Tuesday, April 09, 2013 2:13 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Export to tape, Import on another platform

 Yes.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Steven Langdale
 Sent: Tuesday, April 09, 2013 3:02 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Export to tape, Import on another platform

 Hell all

 A quick one, If I do an export to tape (LTO3), from a Windows TSM 5.5
 instance, can I import it into a 5.5 instance on AIX?

 Thanks

 Steven



SAP Migration - sanity check

2013-03-21 Thread Steven Langdale
Hi All

I need to migrate an SAP environment from one country to another - and would
appreciate a sanity check of my thoughts so far...

The current environment is SAP on Oracle on AIX; TSM (5.5) is on Windows.
The target environment is SAP on Oracle on AIX; TSM (5.5) is on AIX.

My bit is the db migration.  The customer wants to do a standard
brbackup/brrestore on their end.

With that in mind, I'm looking to do the brbackup then export to tape.
Ship the tapes, then import into TSM.  (The bandwidth & latency between the
sites will not support an export direct to the target).
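
The rough command flow I have in mind (node, device class and volume names
below are placeholders) is:

  export node SAPPROD filedata=all devclass=LTO3CLASS scratch=yes

then ship the tapes and, on the AIX instance:

  import node SAPPROD filedata=all devclass=LTO3CLASS volumenames=VOL001,VOL002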

I think it is, but is that doable from TSM on Windows to TSM on AIX?
Also, is there an easy way to encrypt the export tape? i.e. without
resorting to tape encryption and key managers etc.

Thanks

Steven


Re: Happy Holiday TSM Server upgrade V5 to V6

2012-12-23 Thread Steven Langdale
Not me Roger, but good luck too you and I hope it goes well!

Keep us informed how you are getting on, I'd be interested in how long it
takes.

All the best, Steven.
On Dec 23, 2012 4:16 AM, Roger Deschner rog...@uic.edu wrote:

 Anybody else out there spending the holiday upgrading a TSM V5 server to
 TSM V6? The one I'm doing has a 150GB database, and I'm using the media
 method, so it's going to take a while. It's going from V5.5.1.0 to
 V6.2.3.0, on AIX 5.3. This was the only time I could get permission to
 bring this server down for several days.

 No huge problems yet; just wondering how many others out there are doing
 the same thing right now?

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
 = While it may be true that a watched pot never boils, the one =
  you don't keep an eye on can make an awful mess of your stove. 
 = -- Edward Stevenson ==



Re: issue with offsite reclamation

2012-12-05 Thread Steven Langdale
David

Absolutely no other messages just before the ANRD entries?

Either way, I think you're looking at logging a call with IBM (if you've not
already).

Steven




On 5 December 2012 13:22, Tyree, David david.ty...@sgmc.org wrote:

 I run reclaim stg offsitecopy thre=60 du=600 to kick off the process.
 Sometime I vary the duration or threshold but I still get this error  every
 time and I get it almost continuously:

 ANRD Thread32 issued message  from:
 ANRD Thread32  07FEF7A56A99 OutDiagToCons()+159
 ANRD Thread32  07FEF7A504EC outDiagfExt()+fc
 ANRD Thread32  07FEF74D63DC AfRclmOnsiteVols()+20fc
 ANRD Thread32  07FEF74D6B5F AfRclmVolumeThread()+1bf
 ANRD Thread32  07FEF72782A4 startThread()+124
 ANRD Thread32  73211D9F endthreadex()+43
 ANRD Thread32  73211E3B endthreadex()+df
 ANRD Thread32  76D5652D BaseThreadInitThunk()+d
 ANRD Thread32  76E8C521 RtlUserThreadStart()+21
 ANRD_1147918115 RclmOffsiteVols(afrclm.c:2975) Thread32: Error 3051
 queuing
 storage pool volumes for offsite reclamation. Offsite reclamation
 processing may be
 incomplete.

 Out of about 175 offsite tapes I have about 30-40 below the 55% mark.




 David Tyree
 Interface Analyst
 South Georgia Medical Center
 229.333.1155


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 James Choate
 Sent: Tuesday, December 04, 2012 10:35 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] issue with offsite reclamation

 Hi David.

 When you run reclamation on your offsite volumes, what is the command you
 run?

 You also said you get a few tapes back, but that you get error.
 What error message are you constantly getting?

 And, last but not least, you mention that a lot of the offsite volumes
 have a percent utilization of less than 25%.
 I usually run the following query to show me  what is reclaimable.

 select volume_name,stgpool_name,pct_utilized,PCT_RECLAIM from volumes
 where pct_reclaim > 55

 I usually reclaim tapes that have min of 55% - 60% reclaimable space.

 ~james

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Tyree, David
 Sent: Tuesday, December 04, 2012 7:52 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: issue with offsite reclamation

 Sorry might be a misunderstanding here.
 I don't have an offsite server. I trying to do reclamation on offsite
 volumes or trying too.

 I get a few tapes back sometimes but I'm constantly getting that error
 message. A lot of the offsite volumes have a percent utilization of less
 than 25% that should be reclaiming.


 David Tyree
 Interface Analyst
 South Georgia Medical Center
 229.333.1155


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Arbogast, Warren K
 Sent: Tuesday, December 04, 2012 9:34 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] issue with offsite reclamation

 David,
 If cartridges aren't coming back, you may need to run 'reconcile volumes'
 . Don't combine 'reconcile volumes' of a virtual-volume based copypool on
 the on-site server with reclamation of the primary target pool on the
 offsite server.

 tsm: reconcile volumes devcl fix=yes

 Keith Arbogast
 Indiana University



Re: Monitor open volumes for a file deviceclass session

2012-12-05 Thread Steven Langdale
How about just picking a big(ish) number and keeping an eye out for mount
waits/failed jobs (I'm assuming you'll get a mount wait if the number isn't
big enough - that may not be correct)?

Not scientific, I know - but much easier :)
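
For what it's worth, I believe the value can also be changed on the fly,
something like (the number is just an example):

  setopt numopenvolsallowed 20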

Steven


On 5 December 2012 19:09, Arbogast, Warren K warbo...@indiana.edu wrote:

 Richard,
 Thank you for that detailed answer.  It sounds difficult to track and map,
 and possibly difficult to correlate increases and decreses of
 numopenvolumesallowed with system throughput. TSM likes no easy answers.

 Best wishes,
 Keith



Deploying 6.3 storage agent on AIX

2012-11-07 Thread Steven Langdale
Fellow TSM'ers

I'd appreciate some feedback on a question.

We are currently rolling out 6.3 to a few facilities, and also need to roll
out the storage agent to a number of AIX boxes.

Previously I'd have the 5.5 STA on an NFS export and the individual admins
would just mount it and install it themselves.  However, the 6.3 one is
bundled into the huge server install as well.

So what do you do?

1.  Give them the whole lot, with instructions on how to install the STA
only.
2.  Give them the whole lot, with a silent install script to install the
STA only.
3.  Pull out the STA package install files (they are simple to find) and
use the traditional method?

Thoughts?

Thanks

Steven


Re: TSM database backup versus db2 backup of TSM database

2012-11-06 Thread Steven Langdale
Ruud

Not sure on advantages, but the only supported way of getting a V6.x TSM
instance back from a DB backup is one done with ba db...
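
i.e. make sure the thing you schedule and test restores against is the normal
server command (device class name below is a placeholder):

  backup db devclass=DBBACKUP_FILE type=full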

Steven

On 6 November 2012 14:11, Meuleman, Ruud ruud.meule...@tatasteel.comwrote:

 All,

 One can make TSM database backups within the TSM application (ba db
 dev=devclass t=f), since version 6 one can make also a db2 backup of it
 (db2 backup db tsminst1 online use tsm).

 Does anybody knows if there are advantages of making db2 backups compared
 to TSM database backups?

 Kind Regards,
 Ruud Meuleman

 **

 This transmission is confidential and must not be used or disclosed by
 anyone other than the intended recipient. Neither Tata Steel Europe Limited
 nor any of its subsidiaries can accept any responsibility for any use or
 misuse of the transmission by anyone.

 For address and company registration details of certain entities within
 the Tata Steel Europe group of companies, please visit
 http://www.tatasteeleurope.com/entities

 **



Re: unaccounted for volumes

2012-10-06 Thread Steven Langdale
I get these from time to time in a completely hands-off environment (so no
opportunity for an incorrect checkin).
Every time so far it's been down to a bad tape - TSM will grab a scratch
tape, encounter an error, and mark it private.
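
Once I've confirmed the volume holds no data (not in q volume and not in the
volume history), putting it back into the scratch pool is just (library and
volume names are examples):

  update libvolume LIB1 VOL001 status=scratch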

Steven


Re: TSM V6.3 with IBM TS3200 I/O stations issue

2012-07-26 Thread Steven Langdale
Hello

What are you trying and what errors/messages are you getting?
 On Jul 26, 2012 1:06 PM, Victor Shum victors...@cadex.com.hk wrote:

 Dear all :



 We were trying to use the TS-3200 I/O station to handle checkin and
 checkout
 of tape.  After we change the TS3200 library to enable I/O station; then
 reboot the tape library and restart the TSM server.  We still cannot
 checkout any tape to the I/O station.



 Anyone has idea how to trace where is the problems or anything I had
 missed.





 Best regards,

 Victor Shum



Re: Moving to TSM Capacity based licensing from PVU - experiences

2012-07-16 Thread Steven Langdale
We moved over to per TB licensing last year, and were told categorically
that the only dedupe they would take into account was TSM's dedupe.  We
also have ProtecTIER, and they would not take that into account.

Steven

On 16 July 2012 19:22, Stackwick, Stephen stephen.stackw...@icfi.comwrote:

 Thanks, Rick. The link I provided does make it appear more rigid now. I'm
 not a salesman, but I guess everything is negotiable if the parties are
 willing to dicker and it would make sense for IBM to push it's solution.

 Steve

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Rick Adamson
 Sent: Monday, July 16, 2012 2:05 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving to TSM Capacity based licensing from PVU -
 experiences

 Steve,
 Perhaps I should have stated YMMV as our negotiation with IBM took place
 when the cap model was in its infancy and from reviewing the link you
 provided it appears some aspects have changed.

 Basically if you use TSM compression, deduplication, and/or ProtecTier, it
 would be reflected in the licensing costs, if you choose another solution
 as we did with Data Domain it is not. In the end we asked IBM to negotiate
 a middle-ground number but were denied.

 I only mention this for those who use Data Domain, or other non-IBM
 solutions for dedupe and compression as it will ultimately affect the
 capacity license model cost.


 Hope that helps


 ~Rick


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Stackwick, Stephen
 Sent: Monday, July 16, 2012 12:42 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving to TSM Capacity based licensing from PVU -
 experiences

 I'm a little surprised by this, as the TSM macros you run to calculate the
 storage don't know (or care) about the storage device, i.e., they just
 report the uncompressed storage amount:

 https://www-304.ibm.com/support/docview.wss?uid=swg21500482wv=1

 That said, if you are running TSM deduplication, that *is* reported with
 the macros, so there would be a cost saving. Was IBM talking about a
 discount for ProtecTier, maybe?

 Steve

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Rick Adamson
 Sent: Monday, July 16, 2012 8:44 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving to TSM Capacity based licensing from PVU -
 experiences

 Ian,
 Our company looked into it and thought it may save some $$ and at the same
 time simplify the OVERLY complex PVU license model used for TSM/IBM.

 I'll start by saying to make sure you understand what TSM products are
 included in the capacity license proposal. From memory I don't remember
 the exact ones but it does not apply to all TSM licenses. This obviously
 means that the capacity license model may be attractive to some and
 unattractive to others. Your IBM rep should be able to clarify this.

 Also, in our environment we use a Data Domain backend which as you may
 know prefers all incoming data to be uncompressed and unencrypted. Since
 the TSM servers have no knowledge of the DD processes it reports the raw
 storage numbers before compression and deduplication which negatively
 affected the capacity licensing pricing.

 We opened discussions on this issue with IBM but they refused to budge or
 negotiate an adjustment for the actual storage used. Needless to say that
 position was not too warmly received and we 86'ed the whole discussion.

 Interestingly, had we used IBM storage/deduplication on the backend they
 would use the actual storage, but no such provision for Data Domain.

 Good luck

 ~Rick


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Ian Smith
 Sent: Monday, July 16, 2012 7:13 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving to TSM Capacity based licensing from PVU -
 experiences

 Hi,

 We are in the midst of discussions on moving to capacity-based licensing
 from the standard PVU-based method for our site. We have a large number of
 clients ( licensed via TSM-EE, TDP agents, and on client-device basis
 ) and around 1PB of primary pool data. As I understand it, there is no
 published metric for the conversion from PVU to per TB licensing so I would
 be really interested and grateful if anyone would like to share their
 experiences of that conversion in a private email to me.

 Many thanks in advance.
 Ian Smith
 Oxford University
 England



Re: Teaching Problem Solving?

2012-06-06 Thread Steven Langdale
Well it's good (sort of) to see this is an issue a few of us have.

Whilst the guys have come up with some good suggestions there, I notice you
said "contractors".  Maybe it's just me after a long day, but I'd be going
back to the agency to get some new contractors.

On 6 June 2012 13:24, Nick Laflamme n...@laflamme.us wrote:

 Slightly OT, but I expect this might resonate with some of us.

 How do you teach someone to solve problems? How do you teach them to look
 past the first symptom (or the user's problem description) to gather all
 the symptoms and determine if a common cause might cause many of the
 symptoms?

 We've got some contractors are are TSM-certified but lack this skill of
 looking past the first symptom. I really need to tune up their problem
 solving skills so they handle more incidents themselves without punting to
 us all the time.

 Help?

 Nick


Re: DATA Corruption using Deduplication in TSM 6.3.1.1 WARNING

2012-05-30 Thread Steven Langdale
Ray, on top of what Remco posted, would you also post your call ref?

I'm about to implement a new 6.3 environment using dedupe, so I would like to
press our IBM technical rep on the issue.  You never know, with a few
people asking about it, it may get fixed quicker.

Best of luck with your issue though!

Steven

On 30 May 2012 07:08, Remco Post r.p...@plcs.nl wrote:

 Hi Ray,

 Thanks for the warning.

 I was wondering if you could tell us a bit more about this TSM server. Is
 it a converted 5.5 server? At which 6.x level did you start using TSM
 version 6 for this server, 6.1, 6.2 or 6.3? Do the errors occur with any
 particular kind of data? Is it 'old' data, or fresh data recently written
 to TSM? Are you able to restore the files from copypool if this error
 occurs?

 On 29 mei 2012, at 20:46, Ray Carlson wrote:

  Or as IBM called it, Orphaned deduplicate references.
 
  We are running TSM 6.3.1.1 on a Windows 2008 Server, and using the
 Identify command to do deduplication on the Server, not the client.
 
  Interestingly, everything seemed to be mostly working.  We had a few
 volumes that would not be reclaimed or moved because it said the
 deduplicated data had not been backed up to the copy pool, but that was jut
 an annoyance.
 
  Then we discovered that we could not do restores of various servers.
  The error we got was:
  05/21/2012 20:52:45 ANRD_2547000324 bfRtrv(bfrtrv.c:1161)
 Thread129: Error  obtaining deduplication information for object
 254560532 in super bitfile 664355697 in pool 7 (SESSION: 8235, PROCESS:
 375).
 
  A Severity 1 trouble ticket was opened with IBM back on 5/21 and various
 information was gathered and provided to IBM.  So far IBM has not been able
 to identify the root cause or provide a fix.  They have transferred the
 ticket to the Development team.
 
  So here I sit, not knowing which servers, if any, I could restore if
 needed.  Unfortunately, most operations appear to be fine and report
 Success.  Only when I try to do a Generate Backupset, or do a Restore, do I
 discover that there is a problem and the job fails.  Also, it doesn't just
 skip the file/files that it can't restore and restore everything else, it
 simply stops the restore and says it failed.
 
  I'm wondering how many other people are in the same situation, but do
 not realize it.
 
  BEWARE Deduplication
 
  Ray Carlson

 --
 Met vriendelijke groeten/Kind Regards,

 Remco Post
 r.p...@plcs.nl
 +31 6 248 21 622



Re: Tape Volume usage

2012-05-26 Thread Steven Langdale
Hello

It sounds, from your post, that you are ejecting PRIMARY volumes to put in
the safe - this is not really how it's supposed to work.

Whilst you can of course take them out of the library, TSM expects to be
able to use them whenever it wants.

You are seeing increased tape usage because TSM cannot append to a tape that
is not available.

You should keep all primary volumes in the library and have a copy storage
pool to keep in the safe; that way you will not use loads of volumes AND
reclamation will also work.

The other benefit is that you will have TWO copies of the data.  Ejecting
primary volumes means only one copy - if you have a volume go bad (or
missing) you have lost data.
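
As a sketch of the copy storage pool approach (pool names and the scratch
limit are placeholders):

  define stgpool ARCHIVE_COPY LTOCLASS pooltype=copy maxscratch=100
  backup stgpool ARCHIVE_TAPE ARCHIVE_COPY maxprocess=2

and it's the ARCHIVE_COPY volumes that go to the safe, driven by a regular
backup stgpool and reclamation of the copy pool.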

Steven

On 25 May 2012 10:10, Botelho, Tiago (External) tiago.bote...@volkswagen.pt
 wrote:

 I have several  Tape Archive Pools defined on TSM.
 The Archive's tapes are removed periodically from TSM library to a safe.
 Note: Collocation are not used in  Storage Pool

 My question is: There are any way to configure storage pool to use the
 minimal number of tapes (use tape maximum capacity on storage pool)?



Re: sending backup to more than one device classes

2012-05-23 Thread Steven Langdale
It is valid but will send data to one or the other drive based on the
management class you use to do the backup.

As I'm sure you know: one device class per storage pool, and one target
storage pool per copy group, per management class.
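
As a sketch of that mapping (domain, policy set, class and pool names below
are placeholders):

  define mgmtclass PRODDOM STANDARD MC_LIB1
  define copygroup PRODDOM STANDARD MC_LIB1 type=backup destination=POOL_LIB1
  define mgmtclass PRODDOM STANDARD MC_LIB2
  define copygroup PRODDOM STANDARD MC_LIB2 type=backup destination=POOL_LIB2
  activate policyset PRODDOM STANDARD

with include statements on the client deciding which files bind to which
class, and therefore which library the data lands in.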

Thanks

Steven

On 23 May 2012 16:24, Mehdi Salehi ezzo...@gmail.com wrote:

 Hi,

 Two drives from two separate libraries are defined in the lanfree
 configuration of a node:
 drive1 from devclass1 and library1
 drive2 from devclass2 and library2

 maxnummp is 2 to use both drives concorrently.

 Is this a valid configuration? I mean does TSM server care to send parts of
 a single backup to more than one library?

 Thanks,
 Mehdi



Re: TSM Operation Reporting Showing False Failed

2012-05-23 Thread Steven Langdale
Charles

I assume you are seeing the "Completed" status when you look at the schedule?

All that means is that the schedule ran, and not that you have a good &
complete backup.

A return code of 8 indicates you had at least one warning - depending on
what the warning is about, it could well be that you do have a failed
backup.

Fix the warning and see what you get.

Steven

On 23 May 2012 17:39, Welton, Charles charles.wel...@mercy.net wrote:

 Hello:

 We are using TSM Operational Reporting for our TSM instance running
 5.5.2.0.  We have one particular client that shows Completed for the last
 three days... all three completing with a return code of 8.  However, the
 TSM Operation Report is reporting this client as Failed.

 How can I adjust the TSM Operational Report to reflect Completed?
  Shouldn't the report show Completed versus Failed in this particular
 case?

 Any help would be greatly appreciated!

 Thank you...


 Charles
 This email contains information which may be PROPRIETARY IN NATURE OR
 OTHERWISE PROTECTED BY LAW FROM DISCLOSURE and is intended only for the use
 of the addresses(s) named above.  If you have received this email in error,
 please contact the sender immediately.



Re: TSM-Client 6.2.2.0 and Server 5.5.6.0 -preschedulecmd

2012-05-17 Thread Steven Langdale
How about putting that command into a small script on the host and just
running that?

That does work 100% (as I use it a lot).
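
As a sketch (the script path is just an example), the wrapper is nothing more
than:

  #!/bin/ksh
  # /usr/local/bin/onbkup_wrap.sh - run the application backup as appuser
  /usr/bin/su - appuser -c "/appdir/admin-tools/backup/bin/onbkup -x" >/dev/null 2>&1
  exit $?

and the schedule option then shrinks to
-preschedulecmd=/usr/local/bin/onbkup_wrap.sh with nothing left for the
scheduler to mis-quote.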

On 17 May 2012 14:13, Guenther Bergmann guenther_bergm...@gbergmann.dewrote:

 Hi TSM'ers,

 I am trying to do backups with the TSM Client, 6.2.2.0, running on AIX 5.3,
 TL11 against a TSM Server 5.5.6.0

 The backup itselfs works ok, but i face difficulties defining and running a
 preschedulecmd.

 Some diagnosis below:

 root@tsm-client:/var/log/tsm dsmc q sched
 IBM Tivoli Storage Manager
 Command Line Backup-Archive Client Interface
  Client Version 6, Release 2, Level 2.0
  Client date/time: 05/11/2012 14:35:17
 (c) Copyright by IBM Corporation and other(s) 1990, 2010. All Rights
 Reserved.

 Node Name: TSM-NODE
 Session established with server TSMSERV: AIX-RS/6000
  Server Version 5, Release 5, Level 6.0
  Server date/time: 05/11/2012 14:35:17  Last access: 05/11/2012 01:30:58

Schedule Name: CSCH_PPZ_SC3_ORA_INC
  Description: Taegliche Sicherung der SC3-Rechner mittels shell-Skript
 f.
 Oracle-Backup
   Schedule Style: Classic
   Action: Incremental
  Options: -preschedulecmd=/usr/bin/su - appuser -c
 /appdir/2/admin-
 tools/backup/bin/onbkup -x 1/dev/null 2/dev/null
  Objects:
 Priority: 5
   Next Execution: 10 Hours and 55 Minutes
 Duration: 1 Hour
   Period: 1 Day
  Day of Week: Any
Month:
 Day of Month:
Week of Month:
   Expire: Never

 ***
 dsmsched.log says:


 05/11/2012 01:30:57
 Executing scheduled command now.
 05/11/2012 01:30:57
 Executing Operating System command or script:
   /usr/bin/su - appuser -c /appdir/admin-tools/backup/bin/onbkup -x
 1/dev/null 2/dev/null
 05/11/2012 01:30:57 Finished command.  Return code is: 127
 05/11/2012 01:30:57 ANS1902E The PRESCHEDULECMD command failed. The
 scheduled
 event will not be executed.
 05/11/2012 01:30:57 ANS1512E Scheduled event 'CSCH_PPZ_SC3_ORA_INC' failed.
 Return code = 12.
 05/11/2012 01:30:57 Sending results for scheduled event
 'CSCH_PPZ_SC3_ORA_INC'.


 I have fiddled a bit with  and ' in the Options definition, but to no
 success.
 It seems that the operating system (or ksh to be exact) always gets the
 command with leading , so it cant' be executed.

 Any solution to this other than upgrading to server 6.x ?

 regards Günther

 --
 Guenther Bergmann, Am Kreuzacker 10, 63150 Heusenstamm, Germany
 Guenther_Bergmann at gbergmann dot de



Re: Antwort: [ADSM-L] Antwort: [ADSM-L] dedup question

2012-04-10 Thread Steven Langdale
 dedupe of pool 1 should not affect pool 2 or 3 and vice versa. so that in
 case of a restore of a node, only one pool is needed. not the others
 because a chunk is stored there.
 so dedup of a chunk within a pool is ok, but not across all pools.

Dedupe is not across pools.  Even if all 3 pools are deduped, it is done
within the storage pool.

Steven


Re: More tsm encryption questions

2012-03-22 Thread Steven Langdale
They restored because the client had an encryption key; delete that, or
possibly the encryptiontype line, and you will be prompted for it.

As for testing to see if they ARE encrypted, I think the client may show it
with a q backup (but I'm not sure).  The test I used was to try a restore
after I had removed the key file.

One aside: if you are using tape technology that compresses, the
compression will go down the drain.

Steven



On 22 March 2012 18:01, Lee, Gary g...@bsu.edu wrote:

 Ok.  Think I have encryption working.

 Tried the following experiment.

 1. Added these lines to dsm.opt

 encryptiontype aes128
 encryptkey generate
 include.encrypt c:\Documents and Settings\glee.BSU\My
 Documents\crypt\...\*

 2. did an incremental backup to pick up the crypt folder just created and
 filled.

 3. deleted all files starting with phon

 4.  restored files starting with phon back to crypt folder, .  Went well.

 5. commented all encryption related lines out of dsm.opt.

 6. removed phone* from crypt folder again.

 7. restored phone* back to crypt folder.

 I thought that with encryption lines removed from dsm.opt, either the
  encrypted files wouldn't restore, or would be restored as garbage.  Not
 so. Restored perfectly.

 What have I missed?
 Also, is there a way to verify that the specified files are truly
 encrypted?

 Thanks again for the assistance.




 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310




Re: Small TSM environment with removable file volumes for offsite

2012-03-22 Thread Steven Langdale
Not really answering your question (sorry), but have you thought about
client side de-dupe and backing up to a TSM instance in your main DC?

For something that small, I'd look to a local TSM server as a last resort.
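
If that route gets explored, the client end of it is (as far as I recall)
mostly two dsm.sys/dsm.opt options, with the server end needing a
dedup-enabled FILE storage pool:

  deduplication    yes
  enablededupcache yes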

Steven

On 22 March 2012 16:29, Bob Levad ble...@winnebagoind.com wrote:

 Greetings!

 I'm looking at building a very small TSM environment to support a remote
 site.

 Since there are only a couple of terabytes of data and the change rate
 should be only a few hundred gig per day, what I'm thinking of is a TSM
 server with about 5TB of internal storage for onsite deduped file pools.

 For offsite, I want to use bare SATA drives as removable file volumes (not
 deduped).

 For the initial proof of concept, I'd use an inexpensive SATA dock with
 3.5 inch 1TB or 2TB drives for the offsite copy volume and small (100GB or
 so) 2.5 inch drives for the data base backups.

 These drives would go offsite in plastic drive sleeves daily and reclaims
 would run from the onsite pools.

 I've found a little about using removable file pool volumes, but I wanted
 to run this by the experts to see what others may have tried.

 For disaster recovery, it might be nice if all the drives could be mounted
 in an enclosure (I don't think there will be over about a dozen offsite
 volumes) -- Maybe a Promise SAN in JBOD mode or something that can be used
 with bare drives.

 Bob
 This electronic transmission and any documents accompanying this
 electronic transmission contain confidential information belonging to the
 sender. This information may be legally privileged. The information is
 intended only for the use of the individual or entity named above. If you
 are not the intended recipient, you are hereby notified that any
 disclosure, copying, distribution, or the taking of any action in reliance
 on or regarding the contents of this electronically transmitted information
 is strictly prohibited.



Re: More tsm encryption questions

2012-03-22 Thread Steven Langdale
Well, there you go - you're spot on there, Bill!

I'm struggling to see what use generate is.  What's the point of encrypting
the data when the key is handed out whenever a restore is performed?

That must be why I've only ever used encryptkey save in the past.
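
For reference, the way I've done it in the past is just (paths hypothetical):

  encryptiontype aes128
  encryptkey save
  include.encrypt c:\data\...\*

With save the key sits in the local password store, so a restore on a box that
doesn't have it prompts for the key rather than silently handing it out.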


On 22 March 2012 19:57, Bill Boyer bjdbo...@comcast.net wrote:

 With the ENCRYPTKEY GENERATE specified the client creates the key at the
 beginning of the backup and that key is kept with the data stream stored on
 the TSM server. When you restore this the key in the data stream is used. I
 believe they also refer to this as transparent encryption.

 The include.encrypt will only affect future backups, not any backups
 already encrypted and stored on the TSM server.


 Bill Boyer
 There are 10 kinds of people in the world. Those that understand binary
 and
 those that don't. - ??




 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Steven Langdale
 Sent: Thursday, March 22, 2012 2:21 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] More tsm encryption questions

 They restored because the client had an encryption key, delete that, or
 possibly the encryptiontype line and you will be prompted for it.

 As for testing to see if they ARE encrypted, I think the client may show it
 with a q backup (but not sure).  The test I used was to try a restore after
 I had removed the key file.

 One aside, if you are using tape technology that compresses, the
 compression will go down the drain.

 Steven



 On 22 March 2012 18:01, Lee, Gary g...@bsu.edu wrote:

  Ok.  Think I have encryption working.
 
  Tried the following experiment.
 
  1. Added these lines to dsm.opt
 
  encryptiontype aes128
  encryptkey generate
  include.encrypt c:\Documents and Settings\glee.BSU\My
  Documents\crypt\...\*
 
  2. did an incremental backup to pick up the crypt folder just created
  and filled.
 
  3. deleted all files starting with phon
 
  4.  restored files starting with phon back to crypt folder, .  Went well.
 
  5. commented all encryption related lines out of dsm.opt.
 
  6. removed phone* from crypt folder again.
 
  7. restored phone* back to crypt folder.
 
  I thought that with encryption lines removed from dsm.opt, either the
  encrypted files wouldn't restore, or would be restored as garbage.
  Not so. Restored perfectly.
 
  What have I missed?
  Also, is there a way to verify that the specified files are truly
  encrypted?
 
  Thanks again for the assistance.
 
 
 
 
  Gary Lee
  Senior System Programmer
  Ball State University
  phone: 765-285-1310
 
 



Re: Multiple restore session fails in TSM5.3 on AIX5.5

2012-03-16 Thread Steven Langdale

 Thanks for the reply.

 But I want to initiate multiple session in parallel (concurrent).

 I mean to say, 4 Active parallel restore session.

 If I cancel the restore session, it will defeat my objective.


This has come up before, but I can't remember the outcome.

Not sure if it makes a difference, but are these no query restores?

Is this a single filesystem you are trying to restore?

I assume your aim is to perform a faster restore?
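
If it is one filesystem, one workaround I've used is to split it by subtree and
run several dsmc commands at once - a rough sketch (directory names hypothetical):

  dsmc restore "/data/dir1/*" -subdir=yes &
  dsmc restore "/data/dir2/*" -subdir=yes &
  dsmc restore "/data/dir3/*" -subdir=yes &
  dsmc restore "/data/dir4/*" -subdir=yes &

Each session will want its own mount point if the data is on tape, so how well
this works depends on collocation and drive availability.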

Steven


Re: Controling FILLING tapes at end of Migration

2012-03-13 Thread Steven Langdale
I'm assuming the number of filling tapes isn't increasing - it's higher than
you want, but stable.  Yes?

As i'm sure you are aware, this is normal behavior.  I've also been there
and tried to fight it.  In that instance I ran a job after the migration
that did a move data on the smallest volumes.
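
Something along these lines (pool name hypothetical) finds the near-empty
filling volumes so they can be folded back in:

  select volume_name, pct_utilized from volumes where stgpool_name='TAPEPOOL' and status='FILLING' and pct_utilized < 5
  move data VOLUME_NAME

with a move data run against each volume the select returns.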

You're fighting against something you're not going to win though; they
will keep coming back.

If you need collocation groups and you are in this situation, the best
approach is to procure extra library capacity.  Easier said than done I
know, but a MUCH easier approach.

Steven

On 12 March 2012 23:32, Roger Deschner rog...@uic.edu wrote:

 I'm having a problem with FILLING tapes multiplying out of control,
 which have very little data on them.

 It appears that this happens at the end of migration, when that one last
 collocation group is being migrated, and all others have finished. TSM
 sees empty tape drives, and less than the defined number of migration
 processes running, and it decides it can use the drives, so it mounts
 fresh scratch tapes to fill up all the drives. This only happens when
 the remaining data to be migrated belongs to more than one node - but
 that's still fairly often. The result is a large number of FILLING tapes
 that contain almost no data. A rough formula for these almost-empty
 wasted filling tapes is:

  (number of migration processes - 1) * number of collocation groups

 Is there a way, short of combining collocation groups, to deal with this
 problem? We've got a very full tape library, and I'm looking for any
 obvious ways to get more data onto the same number of tapes. Ideally,
 I'd like there to be only one FILLING tape per collocation group.

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing  Communications Center ==I have
 not lost my mind -- it is backed up on tape somewhere.=



Re: Expiration performance TSM 5.5 (request)

2012-02-16 Thread Steven Langdale
Hi Eric

Using your selects on my local TSM instance I am getting between 761 and
1057 objects/s; the vast majority are around 800.

The DB is a tiddler @ 92GB and 77% util.  I have 12 DB vols at about 12.2GB
each (don't recall why that size tbh) across TWO filesystems.  Most of the
in use volumes are on the 1st filesystem.  So, in summary, NOT an optimal
design.

What is your server platform?  Mine is AIX.

I know little about VMAX as we are 100% an IBM shop - but the 1st thing I'd
check for in your environment is the queue depth on the DB disks and the HBAs
that service it.  I know that the AIX defaults can be very low.
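
On AIX the sort of thing I'd be looking at is (device names will differ):

  lsattr -El hdisk4 -a queue_depth
  lsattr -El fcs0 -a num_cmd_elems
  chdev -l hdisk4 -a queue_depth=32 -P

The -P on chdev defers the change to the next varyoff/reboot, and the values
themselves are just examples - check what your storage vendor recommends.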

Steven


On 16 February 2012 14:02, Loon, EJ van - SPLXO eric-van.l...@klm.comwrote:

 Hi TSM-ers!
 I'm struggling with the performance of our expiration process. I can't
 get it any faster than 100 object/second max. We tried everything, like
 using more or less database volumes, multiple volumes per filesystem,
 mirroring, unmirroring, but nothing seems to have any positive effect.
 We are using SAN attached enterprise class storage (EMC Vmax) with the
 fastest disks available.
 I have seen other users with similar (or larger) databases with much
 higher figures, like more than 1000 objects/sec, so there must be
 something I can do to achieve this. In 2007 at the Oxford TSM Symposium
 (http://tsm-symposium.oucs.ox.ac.uk/2007/papers/Dave%20Canan%20-%20Disk%
 20Tuning%20and%20TSM.pdf page 25) IBM also stated that 1000 object/sec
 is possible.
 I would really like to know from other TSM 5.5 users how their
 expiration is performing. Could you please let me know by sending me the
 output from the following two SQL queries, along with the platform you
 are using:

 select activity, cast((end_time) as date) as "Date",
 (examined/cast((end_time-start_time) seconds as decimal(18,13))*3600)
 "Objects Examined/Hr" from summary where activity='EXPIRATION' and
 days(end_time)-days(start_time)=0

 select capacity_mb as "Capacity MB", pct_utilized as "Percentage in
 use", cast(capacity_mb*pct_utilized/100 as integer) as "Used MB" from db
 Thank you VERY much for your help in advance
 Kind regards,
 Eric van Loon
 KLM Royal Dutch Airlines



Export node and management classes

2012-02-15 Thread Steven Langdale
Hi All

I can't seem to find a definitive answer on this one, so thought I'd ask
the wider group.

I have some nodes I'd like to export from one instance to another.  The
nodes use 3-4 different management classes and also some archive data with
7 year retention.

If I have a target domain with all the correct management classes, all
named the same, and create the node name on this domain, will everything
get rebound correctly?  i.e. will my archive data still be intact and
maintain the same retention?
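
For context, what I have in mind is a straight server-to-server export along the
lines of (server and node names hypothetical):

  export node NODE1 filedata=all toserver=TARGETSRV mergefilespaces=yes

It's the rebinding of the archive data on the target that I'm unsure about.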

Thanks

Steven


Re: intermittent errors on -snapdiff backups

2012-01-31 Thread Steven Langdale
I had similar problems when using snapdiff.  The fix was to have a pre-exec
map all of the volumes you wanted to backup.  It doesn't matter what the
drive letters are as you carry on backing up the UNC.
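
A rough sketch of what I mean on the Windows client (filer and share names
hypothetical) - in dsm.opt:

  preschedulecmd "net use x: \\filer1\vol_data"
  domain \\filer1\vol_data

and the backup itself then runs against the UNC name:

  dsmc incremental \\filer1\vol_data -snapdiff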

Worth a try if you're not doing it already.

Steven

On 31 January 2012 15:41, Prather, Wanda wprat...@icfi.com wrote:

 No, not a vfiler.
 And we do get some snapdiffs.  It's just intermittent failures.

 Wanda

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Christian Svensson
 Sent: Monday, January 30, 2012 2:43 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] SV: intermittent errors on -snapdiff backups

 Hi Wanda,
 Is your NetApp a vFiler? If yes it will not work until NetApp implement
 the SnapDiff API to the vFilers.
 I have talked to NetApp a couple of times and it seems like they may be
 going to implement the SnapDiff API at the end of Q4.

 Best Regards
 Christian Svensson

 Cell: +46-70-325 1577
 E-mail: christian.svens...@cristie.se
 Supported Platform for CPU2TSM::
 http://www.cristie.se/cpu2tsm-supported-platforms



Re: Saving a particular sql backup

2012-01-26 Thread Steven Langdale
A backupset would be the 1st thing that crossed my mind - though you may be
stuck if it's a backup done with a TDP, as I don't think they are supported
for backupsets.
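
If the data did come in via the B/A client, the backupset would be something
like (names and retention hypothetical):

  generate backupset SQLSRV01 KEEP2012JAN devclass=LTO3 retention=3650

but as above, TDP/API data isn't eligible, which is the likely catch with an
SQL backup.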

Failing that another option would be to rename the node and start again,
though the usability of that can depend on the node occupancy.

Steven

On 26 January 2012 12:16, Lee, Gary g...@bsu.edu wrote:

 My dba wants to save a particular sqlserver full backup from two weeks ago.
  Short of doing a restore to another server, is there a way within tsm to
 save a single full?
 Maybe a backupset or something similar?

 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310




Re: TSM in AIX WPARS

2012-01-25 Thread Steven Langdale
Remco

Am I misreading this technote:
http://www-01.ibm.com/support/docview.wss?uid=swg21461901

That is saying virtual NPIV adapters are OK.

Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Remco Post
 Sent: Wednesday, January 25, 2012 3:20 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM in AIX WPARS

 I don't think there are any major issues, but if you need access to tape
 drives you must have physical HBA's in each WPAR if you want to run an
 officially supported configuration. Virtual HBA's work, but are not
 officially supported, so if you run into any issues you might be on your
 own.




Re: TSM for ERP

2012-01-17 Thread Steven Langdale
Did it ever work?  What messages are you seeing on the TSM server log?

Steven


Re: Memory/CPU requirements for TSM 6.2 Storage Agent on AIX

2012-01-05 Thread Steven Langdale
Not sure about CPU, but I may be able to check that tonight with a server
whilst it's backing up, but currently the memory footprint is small @ approx
60MB

Steven

On 5 January 2012 06:09, Steve Harris st...@stevenharris.info wrote:

 Hi All and a happy 2012 to everyone. May all your DR plans be ready for
 Dec 21 ;-)

 I'm looking at rolling out a number of storage agents across a SAP
 landscape of AIX LPARS over multiple CECs, the plan being to backup
 large databases across the AIX virtual network to a storage agent on the
 same CEC and from there to VTL.  One monster TSM 6.2 instance will be in
 charge.

 Looking at the Support web site, the requirements for the storage agent
 are nowhere spelled out, there is just a circular loop of page
 references.  While I'm waiting for Passport Advantage access to be able
 to raise a support ticket against this customer, I thought I'd ask you
 guys.  How much memory and CPU do I need for a 6.2 storage agent in its
 own AIX lpar?  P6-595s running AIX 6.1

 Thanks.

 Steve.

 Steven Harris
 TSM Admin
 Canberra Australia



Re: Simple include (probably)

2011-12-22 Thread Steven Langdale
How are you currently excluding it? Perhaps you have a domain line in
your dsm.opt file.

If so remove from there, and don't forget to restart the scheduler.
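
For example, if the options file currently has something like (filesystem names
hypothetical):

  domain / /usr /var /opt

then /u03 simply isn't in the backup domain; either append /u03 to that line or
remove the domain statements altogether so the client falls back to the default
of all-local.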

Steven
 On Dec 22, 2011 8:47 AM, Minns, Farren - Chichester fmi...@wiley.com
wrote:

 Morning all



 I'm trying to add a filesystem back in to a Solaris client backup.



 The filesystem is called /u03



 I assumed ...





 include /u03/.../*





 ... would work but it doesn't.



 What am I doing wrong ?







 Farren


 
 John Wiley  Sons Limited is a private limited company registered in
 England with registered number 641132.
 Registered office address: The Atrium, Southern Gate, Chichester, West
 Sussex, United Kingdom. PO19 8SQ.

 



Re: 3494 library questions

2011-12-15 Thread Steven Langdale
Right I get it.

I assume you are wanting to do this so you can offsite your data
electronically rather than manually?

I know this is going a bit OT, but what are your options as far as moving
TSM2 offsite to where the library is?  does it have many/any clients that
need it where it is?

Also, what experience do you have with TSM server-to-server stuff, i.e. using
a remote library manager, AND what experience with 3494 libs?

Thanks

Steven

On 15 December 2011 12:41, Lee, Gary D. g...@bsu.edu wrote:

 Tsm2 server is onsite.
 Connectivity to 1120 drives is fibre channel


 This strange config grew because I have just finished converting all our
 netbackup environment to tsm.  This offsite library, lib2, is the old
 netbackup library, and server tsm2 is an old netbackup server.

 Yep, odd setup, I just inherited it.
 I would eventually like to replace lib1 with a virtual tape, but until
 then, this is what it is.



 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Steven Langdale
 Sent: Wednesday, December 14, 2011 5:10 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] 3494 library questions

 Lee

 OK, so TSM2 and lib2 are both offsite? You're implying that but it's not
 100% clear.

 I'm not particularly familliar with TS1120's are these SCSI or FC attach?

 What connectivity is available between the 2 sites?

 Thanks

 Steven

 On 14 December 2011 17:59, Lee, Gary D. g...@bsu.edu wrote:

  To clarify.
 
  Two tsm servers tsm1 and tsm2.
  Two librarys lib1 and lib2
 
  Tsm1 is  connected to lib1 directly.
  Tsm2 is connected to lib2.
 
  Lib2 is actually in our offsite location, and I would like to create a
  connection from tsm1 to lib2 and get our current offsite data moved into
  it. Then, I can copy storage pool directly to lib2 and eliminate
 carrying
  tapes to a static offsite storage location.
 
  Both libraries are 3494s containing ts1120 drives.
 
  No library sharing is being used at the present time, but has not been
  ruled out.
 
  Hopefully that has clarified things a bit.
 
  Questions are
 
  1. If I define both 3494s to one machine, how to define them both to tsm?
   By serial number?
 
  2. If I can't define both to a single server, will library sharing with
  tsm work?
 
  Some sample definitions would be helpful.
 
 
 
 
  Gary Lee
  Senior System Programmer
  Ball State University
  phone: 765-285-1310
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
  Daniel Sparrman
  Sent: Wednesday, December 14, 2011 10:07 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] 3494 library questions
 
  Hi Steven
 
  A TSM library manager isnt mandatory in a 3494 library environment. If
 the
  libraries are partitioned, he can just connect his two hosts to each
  library and assign access through the 3494 CU.
 
  Which is simplest is just a matter of how the existing environment looks.
  If the libraries are already partitioned for the two hosts, no TSM
 library
  manager is needed, just make an entry for each library in the ibmatl.conf
  and assign access in the 3494 CU.
 
  if the libraries are not partitioned, and there is limited 3494
 competence
  on-site, perhaps TSM library sharing is the way to go. It all depends on
  Gary's requirements (like, can we share volumes between the two TSM
  servers, or do they need to be divided).
 
  Best Regards
 
  Daniel Sparrman
 
 
  Daniel Sparrman
  Exist i Stockholm AB
  Växel: 08-754 98 00
  Fax: 08-754 97 30
  daniel.sparr...@exist.se
  http://www.existgruppen.se
  Posthusgatan 1 761 30 NORRTÄLJE
 
  -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU skrev: -
  Till: ADSM-L@VM.MARIST.EDU
  Från: Steven Langdale
  Sänt av: ADSM: Dist Stor Manager
  Datum: 12/14/2011 15:39
  Ärende: Re: [ADSM-L] 3494 library questions
 
  Gary
 
  A library manager is mandatory to coordinate library access.  Than can
 be a
  standalone instance or the existing instance that has exclusive access
 (the
  latter being the easier option with an environment of this size)
 
  Thanks
 
  Steven
 
  On 14 December 2011 14:22, Lee, Gary D. g...@bsu.edu wrote:
 
   I have two 3494 libraries at different locations.
   I would like to have both defined to both of my tsm servers.
  
   Is it possible to define two libraries in the ibmatl.conf file, and if
  so,
   how to define them to tsm?
  
   If not, am I correct that the other option is to share the libraries as
   necessary defining appropriate tsm servers as library managers?
  
   Thanks for the assistance.
  
  
  
   Gary Lee
   Senior System Programmer
   Ball State University
   phone: 765-285-1310
  
  
 



Re: 3494 library questions

2011-12-14 Thread Steven Langdale
Gary

A library manager is mandatory to coordinate library access.  That can be a
standalone instance or the existing instance that has exclusive access (the
latter being the easier option with an environment of this size)

Thanks

Steven

On 14 December 2011 14:22, Lee, Gary D. g...@bsu.edu wrote:

 I have two 3494 libraries at different locations.
 I would like to have both defined to both of my tsm servers.

 Is it possible to define two libraries in the ibmatl.conf file, and if so,
 how to define them to tsm?

 If not, am I correct that the other option is to share the libraries as
 necessary defining appropriate tsm servers as library managers?

 Thanks for the assistance.



 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310




Re: 3494 library questions

2011-12-14 Thread Steven Langdale
Hello Daniel

Yes, you are 100% right.

I suppose I was coming from the position that Gary already had most of it
set up for a TSM library-manager config, as the existing instances appear
to be managing a library each.

As is often the case, there are multiple ways to achieve the desired
results.  Which is one of the benefits of this list :)
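
For completeness, the ibmatl.conf route Daniel describes looks roughly like this
(symbolic names and addresses hypothetical) - one line per library in
/etc/ibmatl.conf:

  3494lib1  192.0.2.10  tsm1
  3494lib2  192.0.2.20  tsm1

and then on the TSM server something like:

  define library lib2 libtype=349x
  define path tsm1 lib2 srctype=server desttype=library device=3494lib2
  define drive lib2 drv1
  define path tsm1 drv1 srctype=server desttype=drive library=lib2 device=/dev/rmt4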

Thanks

Steven

On 14 December 2011 15:07, Daniel Sparrman daniel.sparr...@exist.se wrote:

 Hi Steven

 A TSM library manager isnt mandatory in a 3494 library environment. If the
 libraries are partitioned, he can just connect his two hosts to each
 library and assign access through the 3494 CU.

 Which is simplest is just a matter of how the existing environment looks.
 If the libraries are already partitioned for the two hosts, no TSM library
 manager is needed, just make an entry for each library in the ibmatl.conf
 and assign access in the 3494 CU.

 if the libraries are not partitioned, and there is limited 3494 competence
 on-site, perhaps TSM library sharing is the way to go. It all depends on
 Gary's requirements (like, can we share volumes between the two TSM
 servers, or do they need to be divided).

 Best Regards

 Daniel Sparrman


 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr...@exist.se
 http://www.existgruppen.se
 Posthusgatan 1 761 30 NORRTÄLJE

 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU skrev: -
 Till: ADSM-L@VM.MARIST.EDU
 Från: Steven Langdale
 Sänt av: ADSM: Dist Stor Manager
 Datum: 12/14/2011 15:39
 Ärende: Re: [ADSM-L] 3494 library questions

 Gary

 A library manager is mandatory to coordinate library access.  Than can be a
 standalone instance or the existing instance that has exclusive access (the
 latter being the easier option with an environment of this size)

 Thanks

 Steven

 On 14 December 2011 14:22, Lee, Gary D. g...@bsu.edu wrote:

  I have two 3494 libraries at different locations.
  I would like to have both defined to both of my tsm servers.
 
  Is it possible to define two libraries in the ibmatl.conf file, and if
 so,
  how to define them to tsm?
 
  If not, am I correct that the other option is to share the libraries as
  necessary defining appropriate tsm servers as library managers?
 
  Thanks for the assistance.
 
 
 
  Gary Lee
  Senior System Programmer
  Ball State University
  phone: 765-285-1310
 
 



Re: 3494 library questions

2011-12-14 Thread Steven Langdale
Lee

OK, so TSM2 and lib2 are both offsite? You're implying that but it's not
100% clear.

I'm not particularly familiar with TS1120s - are these SCSI or FC attach?

What connectivity is available between the 2 sites?

Thanks

Steven

On 14 December 2011 17:59, Lee, Gary D. g...@bsu.edu wrote:

 To clarify.

 Two tsm servers tsm1 and tsm2.
 Two librarys lib1 and lib2

 Tsm1 is  connected to lib1 directly.
 Tsm2 is connected to lib2.

 Lib2 is actually in our offsite location, and I would like to create a
 connection from tsm1 to lib2 and get our current offsite data moved into
 it. Then, I can copy storage pool directly to lib2 and eliminate carrying
 tapes to a static offsite storage location.

 Both libraries are 3494s containing ts1120 drives.

 No library sharing is being used at the present time, but has not been
 ruled out.

 Hopefully that has clarified things a bit.

 Questions are

 1. If I define both 3494s to one machine, how to define them both to tsm?
  By serial number?

 2. If I can't define both to a single server, will library sharing with
 tsm work?

 Some sample definitions would be helpful.




 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Daniel Sparrman
 Sent: Wednesday, December 14, 2011 10:07 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] 3494 library questions

 Hi Steven

 A TSM library manager isnt mandatory in a 3494 library environment. If the
 libraries are partitioned, he can just connect his two hosts to each
 library and assign access through the 3494 CU.

 Which is simplest is just a matter of how the existing environment looks.
 If the libraries are already partitioned for the two hosts, no TSM library
 manager is needed, just make an entry for each library in the ibmatl.conf
 and assign access in the 3494 CU.

 if the libraries are not partitioned, and there is limited 3494 competence
 on-site, perhaps TSM library sharing is the way to go. It all depends on
 Gary's requirements (like, can we share volumes between the two TSM
 servers, or do they need to be divided).

 Best Regards

 Daniel Sparrman


 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr...@exist.se
 http://www.existgruppen.se
 Posthusgatan 1 761 30 NORRTÄLJE

 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU skrev: -
 Till: ADSM-L@VM.MARIST.EDU
 Från: Steven Langdale
 Sänt av: ADSM: Dist Stor Manager
 Datum: 12/14/2011 15:39
 Ärende: Re: [ADSM-L] 3494 library questions

 Gary

 A library manager is mandatory to coordinate library access.  Than can be a
 standalone instance or the existing instance that has exclusive access (the
 latter being the easier option with an environment of this size)

 Thanks

 Steven

 On 14 December 2011 14:22, Lee, Gary D. g...@bsu.edu wrote:

  I have two 3494 libraries at different locations.
  I would like to have both defined to both of my tsm servers.
 
  Is it possible to define two libraries in the ibmatl.conf file, and if
 so,
  how to define them to tsm?
 
  If not, am I correct that the other option is to share the libraries as
  necessary defining appropriate tsm servers as library managers?
 
  Thanks for the assistance.
 
 
 
  Gary Lee
  Senior System Programmer
  Ball State University
  phone: 765-285-1310
 
 



Re: Restoring (DB2) API data using the BA client

2011-12-13 Thread Steven Langdale
Stefan

Anyone feel free to correct me, but I don't think you can.  Have you tried
running up a BA client to see what is actually there?
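
A quick way to at least see whether the filespaces are still registered (node
name hypothetical) might be:

  dsmc query filespace -virtualnodename=OLDDB2NODE

but whether the API-stored objects themselves are visible or restorable from
dsmc is exactly the bit I doubt.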

Steven

On 13 December 2011 08:29, Stefan Folkerts stefan.folke...@gmail.comwrote:

 Hi all,

 I am looking into restoring DB2 (version 7) data of a node that has since
 been physically removed but still has data in TSM.
 Has anybody ever restored API data using a BA client to disk, it doesn't
 have to be restored to DB2, no logs tricks..just the plain data to disk
 restore using the same platform BA client.

 Please advise.

 Regards,
  Stefan



Quick destroyed volume question

2011-11-29 Thread Steven Langdale
Hi all

Can someone confirm my thinking...

Client backs up to a volume (stgpool has NO copypool)
Volume is marked destroyed.

So now restores will fail (obviously) but I'm assuming incremental backups
will NOT re-backup the missing files because TSM still knows about them.
I'm also assuming they will only get picked up in the next incremental
after the volume is deleted.

True / False???
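
For context, the sequence I'm describing (volume name hypothetical):

  update volume VOL001 access=destroyed
  delete volume VOL001 discarddata=yes

i.e. my assumption is that the files only become candidates for re-backup once
the second command has removed them from the database.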

Thanks

Steven


Re: Quick destroyed volume question

2011-11-29 Thread Steven Langdale
Hello Richard

That's just as I'd assumed. Thanks for the quick response.

Steven

On Nov 30, 2011 1:03 AM, Richard Sims r...@bu.edu wrote:

 Steven -

 The TSM storage pool info remains in stasis until the situation is
resolved by a TSM server administrator.  Whereas the database entries
remain intact, incremental backups will not cause new backup copies to be
made.  The Destroyed state is just a flag, which can be removed; and it is
the case that a volume in that (perhaps temporary) state does not mean that
files on it cannot be copied (when that state is un-done for a Move Data,
for example).

 I guess the thing to do is assess what the actual significance is of
destroyed in this volume's case.

   Richard Sims

 On Nov 29, 2011, at 5:42 PM, Steven Langdale wrote:

  Hi all
 
  Can someone confirm my thinking...
 
  Client backs up to a volume (stgpool has NO copypool)
  Volume is marked destroyed.
 
  So now restores will fail (obviously) but i'm assuming incremental
backups
  will NOT re-backup the missing files because TSM still knows about them.
  I'm also assuming they will only get picked up in the next incremental
only
  after the volume is deleted.
 
  True / False???
 
  Thanks
 
  Steven


Re: TSM for VE Demonstration Site

2011-11-11 Thread Steven Langdale
Pam

IBM can do a POT (Proof of Technology), where you can go to an IBM site and
play with it yourself.  Create VMs, delete them, restore them - anything you
want really.

I'm just taking some of our VMWare folks to one in a couple of weeks.

Press your IBM rep, they are not doing their job.

Steven

On 11 November 2011 14:12, Pagnotta, Pam (CONTR) pam.pagno...@hq.doe.govwrote:

 Hello,

 Would someone please tell me if there is a TSM for VE demo site that
 someone on our team could log onto to get a feel for how it works? We are
 looking for a better/easier way to back up the servers in our virtual
 environments, but would like to have a hands-on type session, if possible,
 before making a purchase.

 Our IBM Tivoli Sales Rep has been unable to get this information for us
 and I would very much appreciate a little help.

 Thank you,
 Pam


 Pam Pagnotta
 Sr. Systems Engineer
 Energy Enterprise Solutions (EES), LLC
 Supporting IM-621.1, Enterprise Service Center East
 Contractor to the U.S. Department of Energy
 Office: 301-903-5508
 Email: pam.pagno...@hq.doe.gov
 Location: USA (EST/EDT)



Re: Using TSM to backup Windows System State

2011-11-09 Thread Steven Langdale
IBM hold these things pretty regularly.  The ones I have attended have been
excellent.

I'm pretty sure if you email David Daun (djd...@us.ibm.com) he can add you
to the notification list.  They are also advertised on this list.

Steven


Hope in the future we can get more IBM webcast .


 I've just finished watching a recording of an IBM webcast by Andrew
 Raibeck and his colleagues on 'Using Tivoli Storage Manager to Backup
 Windows System State'. It offers a lot of insights into the System State
 backup process and the issues which can often surround it.

 I wouldn't hesitate to recommend it to anyone on the list who has a
 requirement to backup Windows clients.

 Available to view again here:

 http://www-01.ibm.com/support/docview.wss?uid=swg27023299&myns=swgtiv&mynp=OCSSGSG7&mync=E

 Don't be put off by the duration - a lot of this is made up of Andy
 tirelessly answering questions from participants after the presentation was
 concluded. However I have no regrets about sitting through the full 2 hours
  because even the Q&A threw up some nuggets.

 My sincere thanks go to Andy and his colleagues (and IBM) for putting the
 time into supporting the user community in this way.

 Regards
 Neil Schofield
 Technical Leader
 Data Centre Services Engineering Team
 Yorkshire Water Services Ltd.





Odd (or not) profile subscription issue

2011-11-09 Thread Steven Langdale
Hi all, I'm hoping we have some config manager experts here.

I'm trying to resolve an issue where a distributed domain will not update
after I've added a node to it (which I know should work).

Looking at the profile on the config manager the domain is there, but on
the managed server it is not.  This particular policy domain was
added after the subscription was set up.

The local profile seems not to have been updated since it was initially
subscribed.  Is that normal behavior?  Should I have to re-subscribe if new
stuff is associated with an existing policy?
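
For reference, the sort of commands involved (profile name hypothetical): on the
configuration manager

  notify subscribers profile=STANDARD_PROF

and on the managed server

  query subscription
  set configrefresh 60

which is what I'd have expected to push the change out and pull it in.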

Thanks

Steven


Re: FODC (First Occurrence Data Capture) dumps

2011-10-27 Thread Steven Langdale
Zoltan

These are DB2 dumps.  Assuming you don't need them, and by the dates you
don't, they are OK to remove.
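
If it helps, something like this (age threshold hypothetical) lists the old ones
before you clear them out:

  find /dumps -type d -name 'FODC_*' -mtime +90 -print

and an rm -rf of each directory it returns is all the cleanup amounts to.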

Steven

On 25 October 2011 15:17, Zoltan Forray/AC/VCU zfor...@vcu.edu wrote:

 I have been looking around on our servers to cleanup large/unnecessary
 files and came upon the /dumps/FODC_Panic_ folders with many gigs of
 cores and such.

 Any need to keep these around and can they be deleted?  Some of them date
 back to 2009.


 Zoltan Forray
 TSM Software & Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html



Re: Illegal request. Picker is full!

2011-10-20 Thread Steven Langdale
I assume there really isn't a tape in the picker when it says it's full?

Could be microcode, could be hardware - either way the easiest thing is to
log a call with your hardware vendor.


On 20 October 2011 15:19, Mehdi Salehi ezzo...@gmail.com wrote:

 Hi,
 TSM server grumbles that it cannot mount media, but the actual problem stems
 from the library: When I try to move cartridges manually via the web-based
 tool of this TS3310, I get this message: Illegal request. Picker is full!
 If you re-IPL the library, the problem vanishes temporarily, but
 appears again after a couple of TSM operations. Have you seen this before?
 Can it be a microcode issue?

 Thank you.



Re: First Solaris node

2011-10-11 Thread Steven Langdale
It's not just the bandwidth; if latency is high the backups will be dog
slow.
What's the ping time to the host?

On 11 October 2011 21:08, Vandeventer, Harold [BS] 
harold.vandeven...@da.ks.gov wrote:

 Thanks for the advice

 My only reason for considering compression is based on the fact this node
 is about 60 miles away over a relatively slow link.  I've asked the network
 team what they expect for bandwidth, but don't have an answer yet.

 Otherwise, I'd planned to leave the Solaris option set details blank till
 we figure out more about this node.



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Zoltan Forray/AC/VCU
 Sent: Tuesday, October 11, 2011 2:50 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] First Solaris node

 Welcome to the world of mixed operating systems..

 Don't take this the wrong way but what does the OS have to do with things
 like compressibility?  Compressibility is based on your data.  For
 instance, I have dozens of Notes/Domino servers that are both Solaris AND
 Windows.  Domino databases don't compress well or at all, so I don't.  I
 have hundreds of Windows servers that don't compress well due to the
 application/data and then I have hundreds that compress very well again
 based on the data/application.

 I don't have any CLOPTSET for any Solaris or Linux servers.  They usually
 don't have problems with files being locked/exclusive access.  But you may
 be running an application that does.  As to what you may want to exclude
 from backups, that depends on how the server is setup and the application
 running on it.  Most of my non-Domino Solaris servers run Oracle and
 therefore backup their databases using the TDP so the directories with the
 actual Oracle database are excluded.

 It really is all about knowing your data.



 From:   Vandeventer, Harold [BS] harold.vandeven...@da.ks.gov
 To: ADSM-L@VM.MARIST.EDU
 Date:   10/11/2011 03:33 PM
 Subject:[ADSM-L] First Solaris node
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 I need to prepare my TSM environment (running on Windows) for it's first
 Solaris client.  All 400+ existing clients are Windows-based.

 Anyone have suggestions for setting any specialized Option Set values?

 My first thought is to have a Solaris option set, and leave most of it
 blank.  Probably set COMPRESSION YES as this node is physically about 50
 miles away.

 Definitely don't hook them to an option set for Windows filters.

 Thanks Harold.



Re: Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-04 Thread Steven Langdale
The logical error question has come up before.  With no TSM managed copy
pool you are perhaps at a slightly higher risk.

An option is to still have a copy pool, but on the same DD.  So little real
disk usage, but some protection from a TSM logical error.  That obviously
does not protect you from a DD induced one.
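
A minimal sketch of that option (pool and devclass names hypothetical), assuming
the DD is presented to TSM as a FILE devclass:

  define stgpool DD_COPY DDFILECLASS pooltype=copy maxscratch=999
  backup stgpool DD_PRIMARY DD_COPY

Run daily, it costs next to nothing in real disk because the copy dedupes
against the primary.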

FWIW, when we implement VTLs, and if the bandwidth allows, we use TSM to
create the copy.  For small sites with limited bandwidth, we rely on the
appliance.

Steven



On 4 October 2011 06:41, Daniel Sparrman daniel.sparr...@exist.se wrote:

  If someone puts a high-caliber bullet through my Gainesville DD, then
  I recover it from the replicated offsite DD, perhaps selecting a
 snapshot.
 
  If someone puts a high-caliber bullet through both of them, then I
  have lost my backups of a bunch of important databases.

 And if you have a logical error on your primary box, which is then
 replicated to your 2nd box? Or even worse, a hash conflict?

 I dont consider someone putting a bullet through both the boxes a high
 risk, I do however consider other errors to be more of a high risk.

 Best Regards

 Daniel





 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr...@exist.se
 http://www.existgruppen.se
 Posthusgatan 1 761 30 NORRTÄLJE


 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU skrev: -
 Till: ADSM-L@VM.MARIST.EDU
 Från: Allen S. Rout
 Sänt av: ADSM: Dist Stor Manager
 Datum: 10/03/2011 23:38
 Ärende: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L]
 vtl versus file systems for pirmary pool

 On 09/28/2011 02:16 AM, Daniel Sparrman wrote:

  In this mail, it really sounds like you're using your DD as both
  primary storage and for TSM storage.

 I am, right now, using the DD as a target for direct-written database
 backups, only.  So that's not really primary storage, as I think
 about it.


  If the DD box fails, what are your losses?

 If someone puts a high-caliber bullet through my Gainesville DD, then
 I recover it from the replicated offsite DD, perhaps selecting a snapshot.

 If someone puts a high-caliber bullet through both of them, then I
 have lost my backups of a bunch of important databases.



  Sorry for all the questions, I'm just trying to get an idea how
  you're using this box.

 No problem. Our conversation is fuzzed by the fact that I am also
 talking about how one _might_ use it for TSM storage.  I'm
 contemplating it, but not doing it at the moment.

  [ ... if you lose a DD, then ... ] you have to restore the data from
  somewhere else (tape?).


 In my planning, the DD gets copied / offsited to a remote DD, so
 that's the somewhere else.

 - Allen S. Rout



Re: Getting unix permissions back from TSM

2011-09-15 Thread Steven Langdale
This one has come up before.  I'm pretty sure there isn't, I'm afraid.


 Hi All

 One of my accounts has just had a unix admin tried to run something
 like

 chown -R something:something /home/fred/*

 but he had an extra space in there and ran it from the root directory

 chown -R something:something /home/fred/ *

 This has destroyed the ownership of the operating system binaries and
 trashed the system.  Worse it was done using a distributed tool, so
 quite a number of AIX lpars are affected including the TSM server.

 Once we get the TSM Server back up, is there any way to restore just
 the file permissions without restoring the data?  I can't think of a
 way.  Maybe there is a testflag to do this?  Even a listing of the file
 and permissions for all active files would be enough to be able to fix
 the problem.

 TSM Server 5.5 AIX 5.3


Re: electronic vaulting for archive

2011-09-14 Thread Steven Langdale
Hi

Simple really.  We have a separate primary stgpool for archives (because of
the much lower reclaim requirements), and the offsite copy is on a remote
site with FC connectivity.

On 14 September 2011 05:13, Mehdi Salehi ezzo...@gmail.com wrote:

 Thanks Steven, would you please explain more how you vault archive data?

 Mehdi



Re: electronic vaulting for archive

2011-09-13 Thread Steven Langdale
My definition of Electronic vaulting is using electronic means to transfer
backup/archive data to your offsite repository.  This is really regardless
of what the target is.

I currently electronically vault to both ProtecTIER and TS3500 libraries.

So, in essence, if your primary-to-copy stgpool backup is not done by
physically moving media, you are vaulting electronically.

Steven

On 13 September 2011 13:51, Mehdi Salehi ezzo...@gmail.com wrote:

 Hi,
 Electronic vaulting for backup data is possible by active data pools and
 equipments like TS7650G. The question is how to vault archive data which
 are
 not inherently permitted in active data pools?

 Thank you,
 Mehdi



Backupset encryption - quick question

2011-09-07 Thread Steven Langdale
Hi all

I need to transport a backup set and need at least some basic encryption.

Before I do a load of testing, I thought I'd ask the group

Does anyone know if client side encryption include.encrypt works with
backup sets, or rather, can you restore the stuff!

I'll be restoring via a locally attached drive.

Thanks

Steven


Re: limit backup for big files

2011-08-27 Thread Steven Langdale
 Is there any way to prevent b/a clients from backing up files bigger than
a
 specific value in TSM? Say always files smaller than 10 MB are backed up.


Not as far as I know, unless you wrap some logic around the backup to
list/find the required files and then back them up.
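
A rough sketch of the kind of wrapper I mean (paths and size hypothetical;
find's -size is in 512-byte blocks, so 20480 is roughly 10MB):

  find /data -type f -size -20480 > /tmp/small_files.list
  dsmc selective -filelist=/tmp/small_files.list

A selective with -filelist loses the normal incremental behaviour though, so
it's a fairly blunt instrument.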

Steven


Re: Ang: [ADSM-L] The volume has data but I get this: ANR8941W

2011-08-14 Thread Steven Langdale
That error is saying that there is no label; as you have nothing to lose,
you could always check it out and back in again with a label libvol ...
and see what happens.
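
i.e. something like (library and volume names hypothetical):

  checkout libvolume LIB1 VOL001 checklabel=no remove=bulk
  label libvolume LIB1 VOL001 checkin=private overwrite=yes

bearing in mind TSM may refuse the overwrite while it still thinks the volume
holds valid data, in which case it may need deleting from the pool
(discarddata=yes) first - but as above, there's nothing to lose anyway.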

Steven

On 14 August 2011 09:02, Mehdi Salehi ezzo...@gmail.com wrote:

 Thanks Daniel,
 Yes, what TSM database shows means that the volume contains data. We found
 this problem when a client tried to restore their data, but part of it was
 unavailable! As the data is not critical, we have not implemented backup
 pool. Means backup data loss :(



Re: TSM multi libraries in a single stgpool

2011-07-06 Thread Steven Langdale
 Can TSM support having two libraries in a single stgpool?  Looking to add
 another library but would like to make sure it is used in tandem with our current
 library and not just sit there.

 Thank you,
 Vince


Vince

No you can't, it's one devclass in a stgpool and a devclass can only point
to one library.

There's obviously nothing stopping you creating a new devclass and stgpool
for this new library and pointing some servers at it to spread the workload.
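
i.e. roughly (names hypothetical):

  define devclass LTO_LIB2 devtype=lto library=LIB2 format=drive
  define stgpool TAPEPOOL2 LTO_LIB2 maxscratch=200
  update stgpool DISKPOOL_B nextstgpool=TAPEPOOL2

with whichever disk pools or copy group destinations you want pointed at the
new pool.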

Steven


Re: volumes associated with a node

2011-06-29 Thread Steven Langdale
On 29 June 2011 13:39, Tim Brown tbr...@cenhud.com wrote:

 Is there a select statement that will list all tape volumes that have files
 for

 a given TSM node.


select distinct VOLUME_NAME,STGPOOL_NAME from volumeusage where
NODE_NAME='NODENAME'

where NODENAME is the node name in upper case.

It also returns the stgpool name as well.

Steven


Re: volumes associated with a node

2011-06-29 Thread Steven Langdale
On 29 June 2011 14:09, Saravanan Palanisamy evergreen.sa...@gmail.comwrote:

 select volume_name from contents where node_name like '%nodename%' and
 FILESPACE_NAME like '%Filespacename%' and FILE_NAME like '%Filename%'

 On Wed, Jun 29, 2011 at 3:48 PM, molin gregory gregory.mo...@afnor.org
 wrote:

  Hello,
 
  Try : q nodedata NODENAME stg=
 
  Cordialement,
  Grégory Molin
  Tel : 0141628162
  gregory.mo...@afnor.org
 
  -Message d'origine-
  De : ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] De la part de
  Tim Brown
  Envoyé : mercredi 29 juin 2011 14:40
  À : ADSM-L@VM.MARIST.EDU
  Objet : [ADSM-L] volumes associated with a node
 
  Is there a select statement that will list all tape volumes that have
 files
  for
 
  a given TSM node.
 
 
 
  Thanks,
 
 
 
  Tim Brown
  Systems Specialist - Project Leader
  Central Hudson Gas  Electric
  284 South Ave
  Poughkeepsie, NY 12601
  Email: tbr...@cenhud.com mailto:tbr...@cenhud.com
  Phone: 845-486-5643
  Fax: 845-486-5921
  Cell: 845-235-4255
 
 
 
 
  This message contains confidential information and is only for the
 intended
  recipient. If the reader of this message is not the intended recipient,
 or
  an employee or agent responsible for delivering this message to the
 intended
  recipient, please notify the sender immediately by replying to this note
 and
  deleting all copies and attachments.
 
  ATTENTION.
 
  Ce message et les pièces jointes sont confidentiels et établis à
  l'attention exclusive de leur destinataire (aux adresses spécifiques
  auxquelles il a été adressé). Si vous n'êtes pas le destinataire de ce
  message, vous devez immédiatement en avertir l'expéditeur et supprimer ce
  message et les pièces jointes de votre système.
 
  This message and any attachments are confidential and intended to be
  received only by the addressee. If you are not the intended recipient,
  please notify immediately the sender by reply and delete the message and
 any
  attachments from your system. 
 



 --
 Thanks  Regards,
 Sarav
 +974-3344-1538

 There are no secrets to success. It is the result of preparation, hard work
 and learning from failure - Colin Powell



Re: HSM for database files!!

2011-06-22 Thread Steven Langdale

Is is technically feasible to put the tablespace of a database on an
HSM-managed filesystem? For example, if there are some rarely-accesed huge
tables, is it possible/feasible to define their tablespace(s) on a disk
space that can be migrated to tape by HSM?


Feasible, yes. Advisable, probably not.

The 1st question that would arise is how long the app is prepared to
wait/be blocked for while the data is retrieved.

Steven


Re: tsm and data domain

2011-06-17 Thread Steven Langdale

 I have had DD implemented for about a year now, but I fail to understand
 why anyone would utilize the DD VTL license when using TSM?

 Mine are setup as a simple SAN device with defined directories that
 correspond to my TSM primary storage pools. I have the device calss in
 TSM set as the type file and let TSM manage the virtual volumes as it
 would any other disk storage. There is another DD system that is located
 at our DR facility, and all data including TSM DB backups are replicated
 to that location. This allows me to no longer have copy pools.


Rick

A not uncommon configuration - I have also used DDs over NFS for disk pools
as well.  The only time I've seen them as VTLs is when LAN-free was required.
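
For reference, the non-VTL setup Rick describes is essentially just (paths and
names hypothetical):

  define devclass DDFILE devtype=file directory=/ddmount/tsmpool1 maxcapacity=50g mountlimit=64
  define stgpool DDPOOL DDFILE maxscratch=1000

i.e. plain FILE volumes sitting on the DD mount, with the appliance doing the
dedupe underneath.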

I would however think again about not having a copy pool as you are leaving
yourself open to TSM logically corrupting data and having no backup.


Re: tsm and data domain

2011-06-17 Thread Steven Langdale

We debated having a copy pool when doing our DD setup. We finally decided
that using a daily DD snapshot was an acceptable solution. We take a

Rick

The problem with that is that you'd just be replicating a logical
corruption. So unless you noticed it before your last snapshot expired, you'd
be stuck.
Obviously a simple solution would be to have a copy stgpool on exactly the
same DD - inefficient copying-wise, but it would dedupe down to nothing.

