Unsubscribe

2023-10-23 Thread Burton, Robert (He/Him/His)
SIGN-OFF  ADSM-L

___

If you received this email in error, please advise the sender (by return email 
or otherwise) immediately. You have consented to receive the attached 
electronically at the above-noted email address; please retain a copy of this 
confirmation for future reference.



Re: Ideal storage solution for SP and SP+ ???

2020-06-08 Thread Burton, Robert
We use IBM Cloud Object Storage boxes for our backend Spectrum storage...have 
eliminated all requirements for physical tape devices...

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick 
Adamson
Sent: Thursday, June 4, 2020 5:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Ideal storage solution for SP and SP+ ???

What storage solution is at the top of your list for Spectrum Protect and 
Spectrum Protect Plus?

I am a former Data Domain abuser who (regretfully) moved to EMC Isilon and am 
now looking for the best possible replacement.
Usable storage requirement is somewhere in the neighborhood of 500+ TB.

Operation protects roughly 2,100 clients. About 75% VMware VMs and SQL with a 
scattering of AIX hosting DB2 & Oracle.

Currently the data is protected primarily by Spectrum Protect servers using BA, 
TDP, and SP4VE, but we are looking to migrate all possible operations to 
Spectrum Protect Plus.

At this time the plan would be to retain only the most recent (active) data at 
our production facility on high-performing storage to provide quick recovery, 
then replicate inactive data to our DR facility to be offloaded to Spectrum 
Protect servers and/or cloud.

All feedback and advice welcome. Thanks !

**CONFIDENTIALITY NOTICE** This electronic message contains information from 
Southeastern Grocers, Inc and is intended only for the use of the addressee. 
This message may contain information that is privileged, confidential and/or 
exempt from disclosure under applicable Law. This message may not be read, 
used, distributed, forwarded, reproduced or stored by any other than the 
intended recipient. If you are not the intended recipient, please delete and 
notify the sender.


Re: more sophisticated way to delete nodes that are replicated?

2018-12-17 Thread Robert Talda
This one has been on my wish list for several months - so thanks in advance for 
the idea!

I am curious, though - has anyone tried to use the API to the OC (Operations 
Center) for this? It might not be possible - I've had that little time to 
research the idea - but it can't hurt to ask…


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Dec 17, 2018, at 6:12 AM, Szabi Nagy  wrote:
> 
> Actually you would indeed struggle to pass $1 through to dsmadmc, however you 
> could use macros instead:
> 
> 
> function delnode {
> 
>     if [ -z "$1" ]; then
> 
>         # Empty $1 positional parameter. Show some help message
> 
>         echo "delnode <nodename>"
> 
>     else
> 
>         # Create a dsmadmc.command macro file (truncate, don't append,
>         # so a stale file from an earlier run can't add extra commands)
> 
>         echo "rem repln $1" > dsmadmc.command
> 
>         echo "decom n $1" >> dsmadmc.command
> 
> 
>         # Run the macro on both servers
> 
>         dsmadmc -id=<admin> -se=<server1> "macro dsmadmc.command"
> 
>         dsmadmc -id=<admin> -se=<server2> "macro dsmadmc.command"
> 
> 
>         # Delete the dsmadmc.command file
> 
>         rm dsmadmc.command
> 
>     fi
> 
> }
> 
> 
> ---
> Szabi Nagy
> Systems and Storage Administrator
> HFS Backup & Archive Services
> 
> From: Szabi Nagy
> Sent: 17 December 2018 10:46:26
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] more sophisticated way to delete nodes that are 
> replicated?
> 
> 
> Why not use a function in your .bashrc instead?
> 
> 
> Maybe something like this:
> 
> function delnode {
> 
>     if [ -z "$1" ]; then
> 
>         # Empty $1 positional parameter. Show some help message
> 
>         echo "delnode <nodename>"
> 
>     else
> 
>         # Remove and decommission the node from both primary and
>         # replication servers
> 
>         dsmadmc -id=<admin> -se=<server1> "rem repln $1"   <<<=== Might be tricky to get $1 across to dsmadmc
> 
>         dsmadmc -id=<admin> -se=<server1> "decom n $1"
> 
>         dsmadmc -id=<admin> -se=<server2> "rem repln $1"
> 
>         dsmadmc -id=<admin> -se=<server2> "decom n $1"
> 
>     fi
> 
> }
> 
> 
> You can add some more error checking, etc to it. After all, you are deleting 
> nodes.
> 
> 
> ---
> Szabi Nagy
> Systems and Storage Administrator
> HFS Backup & Archive Services
> 
> From: ADSM: Dist Stor Manager  on behalf of Bjørn 
> Nachtwey 
> Sent: 14 December 2018 12:36:36
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] more sophisticated way to delete nodes that are replicated?
> 
> Dear all,
> 
> I just started to use replication and it looks to be a good approach to
> replace the copy pools I have used until now.
> 
> But one point seems to become much more complicated: deleting nodes.
> The only way I see up to now takes several steps and two separate logins:
> 
> log on to the primary server:
> 1) rem repln <nodename>
> 2) decom n <nodename>
> 
> log on to the repl server:
> 3) rem repln <nodename>
> 4) decom n <nodename>
> 
> But how can this be simplified?
> I tried to put the commands in a script, doing steps 3 and 4 using command
> routing, but it does not work.
> 
> So:
> Does anybody know a more sophisticated approach?
> 
> => ideally it would work with a single command / script call :-)
> 
> thanks in advance
> Bjørn
> 
> 
> 
> --
> --
> Bjørn Nachtwey
> 
> Arbeitsgruppe "IT-Infrastruktur“
> Tel.: +49 551 201-2181, E-Mail: bjoern.nacht...@gwdg.de
> --
> Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
> Am Faßberg 11, 37077 Göttingen, URL: http://www.gwdg.de
> Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail: g...@gwdg.de
> Service-Hotline: Tel.: +49 551 201-1523, E-Mail: supp...@gwdg.de
> Geschäftsführer: Prof. Dr. Ramin Yahyapour
> Aufsichtsratsvorsitzender: Prof. Dr. Norbert Lossau
> Sitz der Gesellschaft: Göttingen
> Registergericht: Göttingen, Handelsregister-Nr. B 598
> --
> Zertifiziert nach ISO 9001
> --
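A slightly hardened variant of the macro approach quoted above could look like the sketch below. It is untested against a live server; the PRIMARY/REPLICA server stanza names and the admin ID are placeholders, and a dry-run hook is included so the calls can be previewed without a real dsmadmc binary.

```shell
#!/bin/sh
# Hypothetical hardened delnode: remove replication state and decommission
# a node on both the source and target server via a temporary dsmadmc macro.
# Assumptions: dsmadmc is on PATH, and dsm.sys defines stanzas named
# PRIMARY and REPLICA; the admin ID "admin" is a placeholder.
delnode() {
    node="$1"
    if [ -z "$node" ]; then
        echo "usage: delnode <nodename>" >&2
        return 1
    fi
    # Private temp macro file (avoids clobbering/appending to a stale file)
    macro=$(mktemp) || return 1
    printf 'remove replnode %s\ndecommission node %s\n' "$node" "$node" > "$macro"
    # Run the same macro against both servers
    for server in PRIMARY REPLICA; do
        ${DSMADMC:-dsmadmc} -id=admin -se="$server" "macro $macro"
    done
    rm -f "$macro"
}

# Dry run: substitute echo for the real dsmadmc binary to preview the calls
DSMADMC="echo dsmadmc" delnode TESTNODE
```

As in the quoted thread, error checking (and perhaps a confirmation prompt) would be worth adding before using anything like this in anger; after all, it deletes nodes.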



Re: Windows ARL change

2018-11-14 Thread Robert Talda
Andy, et al:
  I should have mentioned this - this is exactly the strategy we are pursuing: 
moving backups of large Windows filesystems to container storage pools and 
enabling client-side deduplication.  The SP client will be working hard - but 
little or no data will be transferred to the server.

  Do we get any credit for thinking of this ourselves in advance?

Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Nov 13, 2018, at 7:59 AM, Andrew Raibeck  wrote:
> 
> Yes, technically speaking, it is possible, but certainly not trivial. If
> this is a requirement please consider opening an RFE for it.
> 
> Have you looked at deduplication? It would certainly mitigate the impact
> of security-only changes.
> 
> Best regards,
> 
> Andy
> 
> 
> 
> Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
> 
> "ADSM: Dist Stor Manager"  wrote on 2018-11-13
> 05:16:25:
> 
>> From: Hans Christian Riksheim 
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 2018-11-13 07:30
>> Subject: Re: Windows ARL change
>> Sent by: "ADSM: Dist Stor Manager" 
>> 
>> Is it technically possible to develop a client that extracts the ACLs
>> during backup and stores these as separate metadata? We are having the
>> same challenge, and who knows how much unnecessary backup data our
>> customers are paying for.
>> 
>> Hans Chr.
>> 
>> On Tue, Nov 6, 2018 at 7:25 PM Andrew Raibeck  wrote:
>> 
>>> Hi Bob,
>>> 
>>> If SKIPNTPERMISSIONS YES is in effect, then SKIPNTSECURITYCRC might be
>>> moot, but yes, you have the idea.
>>> 
>>> I would agree that setting SKIPNTSECURITYCRC YES and using the default
>>> SKIPNTPERMISSIONS value can present a challenge since you cannot know
>>> which files are restored with the latest permissions.
>>> 
>>> If security permissions are important to the organization, then best to
>>> just use the default values and back them up. If the organization chooses
>>> to skip the permissions during the regular incremental backup, then best
>>> be sure an alternate permissions backup and recovery plan is in place.
>>> Example of a follow-on complication: What if someone does a restore, but
>>> forgets to apply the separate procedure to put the correct permissions
>>> back?
>>> 
>>> For the general usage case, I can only repeat what I said earlier (lest
>>> someone else coming into the middle of this discussion misses it): Best
>>> practice to ensure the most complete incremental backup is to allow the
>>> data to be backed up, even if only the security has changed. Before doing
>>> otherwise, proceed with caution, thinking about future ramifications of
>>> using non-default values for these options.
>>> 
>>> Regards,
>>> 
>>> Andy
>>> 
>>> 
>>> 
> 
> 
>>> 
>>> Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
>>> 
>>> "ADSM: Dist Stor Manager"  wrote on 2018-11-06
>>> 11:13:21:
>>> 
>>>> From: Robert Talda 
>>>> To: ADSM-L@VM.MARIST.EDU
>>>> Date: 2018-11-06 11:19
>>>> Subject: Re: Windows ARL change
>>>> Sent by: "ADSM: Dist Stor Manager" 
>>>> 
>>>> Andy, et al:
>>>>  Shared the meat of this discussion with our end users.  A question
>>>> came up about restores.
>>>> 
>>>> Scenario:  Large Windows File Server on which the SP client is
>>>> configured with the SKIPNTSECURITYCRC is set to YES.
>>>> 
>>>> End of academic year arrives in June with the usual turnover in
>>>> faculty and grad students, requiring a major permissions change that
>>>> impacts most of the files on the server.
>>>> However, the backups aren’t impacted because the SKIPNTSECURITYCRC
>>>> is set.  So the only files backed up with the new security are those
>>>> that are new or those with content changes.
>>>> This is an annual event
>>>> 
>>>> Middle of November, file server melts down and all the files have to
>>>> be restored.
>>>> 
>>>> Because new backup of copies were NOT made when security changes
>>>> were made, won’t the files be restored with the permissions as of
>>

Re: Windows ARL change

2018-11-06 Thread Robert Talda
Andy, et al:
  Shared the meat of this discussion with our end users.  A question came up 
about restores.

Scenario:  Large Windows File Server on which the SP client is configured with 
the SKIPNTSECURITYCRC is set to YES.

End of academic year arrives in June with the usual turnover in faculty and 
grad students, requiring a major permissions change that impacts most of the 
files on the server.
However, the backups aren’t impacted because the SKIPNTSECURITYCRC is set.  So 
the only files backed up with the new security are those that are new or those 
with content changes.
This is an annual event.

Middle of November, file server melts down and all the files have to be 
restored.

Because new backup copies were NOT made when the security changes were made, 
won't the files be restored with the permissions as of the time of backup?

I believe the answer is “YES” - which results in a potential mess for the end 
users.  Consider the case of a Word document created by a postdoc 3 years ago 
that hasn’t been modified since but is still used on an on-going basis by one 
or more classes.  The postdoc left 2 years ago, so the file permissions on the 
file server were updated accordingly - but no new backup copy created. In 
particular, a new owner was assigned.   But when the file is restored, it is 
restored with the permissions that have the postdoc as the owner.

If that is so, then the end users don’t see much of a win from the 
SKIPNTSECURITYCRC - while it saves a backup copy, it introduces the potential 
for a lot more work finding and fixing all the outdated permissions.

Now, if the SKIPNTPERMISSIONS option were also enabled, then this issue goes 
away, because end users have already accepted that permissions must be 
reapplied after a restore.

So, in our environment, it appears that settings should be either
- SKIPNTPERMISSIONS and SKIPNTSECURITYCRC enabled; OR
- SKIPNTPERMISSIONS and SKIPNTSECURITYCRC disabled
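
In client options file terms, the first combination above would look roughly like the fragment below (a sketch; per the discussion in this thread, NO is the documented default for both options, so the second combination needs no explicit entries):

```
* dsm.opt fragment: skip both security-change detection and permission backup
SKIPNTSECURITYCRC YES
SKIPNTPERMISSIONS YES
```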

Am I on the right track?

Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Nov 1, 2018, at 10:39 AM, Andrew Raibeck  wrote:
> 
> Hi Bob,
> 
>> Upon restore, user must rebuild security permissions.
> 
> Are you perhaps thinking of the SKIPNTPERMISSIONS option, which forgoes
> backup of any permissions? In that case, for sure you have to rebuild
> security permissions after the restore. Outside of a user-specific edge
> case, do not specify that option, just use the implicit default (which
> happens to be NO).
> 
> SKIPNTSECURITYCRC is not the same, as it merely removes the permissions
> from the criteria that go into determining whether files have changed.
> However, when files are backed up, permissions are included with the
> backup, provided you did not set SKIPNTPERMISSIONS YES.
> 
> In the scenario I described previously, if you restore the backup taken on
> Day 1, then the permissions in effect at the time of the backup are
> restored.
> 
> Best regards,
> 
> Andy
> 
> 
> 
> Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
> 
> "ADSM: Dist Stor Manager"  wrote on 2018-11-01
> 09:39:55:
> 
>> From: Robert Talda 
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 2018-11-01 09:41
>> Subject: Re: Windows ARL change
>> Sent by: "ADSM: Dist Stor Manager" 
>> 
>> Andy:
>>  As always, thanks for chiming in.
>> 
>>  I was aware of this option but was hesitant to recommend it
>> because of a side effect that you didn’t list:
>> 
>>  Upon restore, user must rebuild security permissions.
>> 
>>  So, if the fileserver/NAS had a strict permissions scheme that was
>> easy to apply after a restore, then this option is viable.
>> 
>>   This is what bedevils us here - a lot of our units here at
>> Cornell have file servers and NASs with complex permissions, and so
>> the issue comes down to what is the least amount of pain: bumps in
>> cost due to occasional big backups (we are a charge-back service) or
>> trying to figure out a way to save permissions outside of Spectrum
>> Protect that can be used to rebuild those permissions in a disaster
> scenario.
>> 
>>  Needless to say, all our customers have gone the big backup route.
>> Probably attribute additional charge to usual "Cost of working at a
>> university” account
>> 
>> Bob
>> 
>> Robert Talda
>> EZ-Backup Systems Engineer
>> Cornell University
>> +1 607-255-8280
>> r...@cornell.edu
>> 
>> 
>>> On Oct 31, 2018, at 8:46 AM, Andrew Raibeck  wrote:
>>> 
>>> The backup-archive client has an option, SKIPNTSECURITYCRC [NO | YES]
>>> with NO being the default.

Re: Windows ARL change

2018-11-05 Thread Robert Talda
Andy:
  Indeed I was - and now I have another option to explore…thanks for raising my 
(SP) consciousness…

Bob

> On Nov 1, 2018, at 10:39 AM, Andrew Raibeck  wrote:
> 
> Hi Bob,
> 
>> Upon restore, user must rebuild security permissions.
> 
> Are you perhaps thinking of the SKIPNTPERMISSIONS option, which forgoes
> backup of any permissions? In that case, for sure you have to rebuild
> security permissions after the restore. Outside of a user-specific edge
> case, do not specify that option, just use the implicit default (which
> happens to be NO).
> 
> SKIPNTSECURITYCRC is not the same, as it merely removes the permissions
> from the criteria that go into determining whether files have changed.
> However, when files are backed up, permissions are included with the
> backup, provided you did not set SKIPNTPERMISSIONS YES.
> 
> In the scenario I described previously, if you restore the backup taken on
> Day 1, then the permissions in effect at the time of the backup are
> restored.
> 
> Best regards,
> 
> Andy
> 
> 
> 
> Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
> 
> "ADSM: Dist Stor Manager"  wrote on 2018-11-01
> 09:39:55:
> 
>> From: Robert Talda 
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 2018-11-01 09:41
>> Subject: Re: Windows ARL change
>> Sent by: "ADSM: Dist Stor Manager" 
>> 
>> Andy:
>>  As always, thanks for chiming in.
>> 
>>  I was aware of this option but was hesitant to recommend it
>> because of a side effect that you didn’t list:
>> 
>>  Upon restore, user must rebuild security permissions.
>> 
>>  So, if the fileserver/NAS had a strict permissions scheme that was
>> easy to apply after a restore, then this option is viable.
>> 
>>   This is what bedevils us here - a lot of our units here at
>> Cornell have file servers and NASs with complex permissions, and so
>> the issue comes down to what is the least amount of pain: bumps in
>> cost due to occasional big backups (we are a charge-back service) or
>> trying to figure out a way to save permissions outside of Spectrum
>> Protect that can be used to rebuild those permissions in a disaster
> scenario.
>> 
>>  Needless to say, all our customers have gone the big backup route.
>> Probably attribute additional charge to usual "Cost of working at a
>> university” account
>> 
>> Bob
>> 
>> Robert Talda
>> EZ-Backup Systems Engineer
>> Cornell University
>> +1 607-255-8280
>> r...@cornell.edu
>> 
>> 
>>> On Oct 31, 2018, at 8:46 AM, Andrew Raibeck  wrote:
>>> 
>>> The backup-archive client has an option, SKIPNTSECURITYCRC [NO | YES]
> with
>>> NO being the default.
>>> 
>>> Reference:
>>> 
>>> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.6/client/r_opt_skipntsecuritycrc.html
>>> 
>>> When this option is set to YES, security changes are not used to
>>> determine whether the file is changed. That is, if the ONLY change to the
>>> file is to the security attributes, then the file is not considered
>>> changed (but if the file is backed up due to other changes, the security
>>> attributes are included in the backup).
>>> 
>>> Be careful setting this option, though:
>>> 
>>> 1. Practically speaking, this is the kind of change you make once. That
>>> is, do not toggle this option back and forth between YES and NO. Of
>>> course you can certainly do that, but you will defeat the purpose of
>>> changing the value to begin with. It is not a "one-time" skip of security
>>> changes. It is a "lifestyle" for your backups. :-)
>>> 
>>> 2. Files that undergo ONLY security changes are not backed up during
>>> incremental backup. Consider this scenario:
>>> 
>>> - Day 1: File is backed up (for reasons other than security changes)
>>> 
>>> - Day 2: Security is changed on the file. File is not backed up because
>>> no other changes occurred.
>>> 
>>> - Day 3. Security is changed on the file. File is not backed up because
>>> no other changes occurred.
>>> 
>>> - Day 4. User wants to revert the most recent (Day 3) security changes,
>>> and asks you to restore the file with the security changes from day 2.
>>> That will not be possible if you configure SKIPNTSECURITYCRC YES

Re: Windows ARL change

2018-11-01 Thread Robert Talda
Andy:
  As always, thanks for chiming in.
 
  I was aware of this option but was hesitant to recommend it because of a side 
effect that you didn’t list:

  Upon restore, user must rebuild security permissions.
 
  So, if the fileserver/NAS had a strict permissions scheme that was easy to 
apply after a restore, then this option is viable.

   This is what bedevils us here - a lot of our units here at Cornell have file 
servers and NASs with complex permissions, and so the issue comes down to what 
is the least amount of pain: bumps in cost due to occasional big backups (we 
are a charge-back service) or trying to figure out a way to save permissions 
outside of Spectrum Protect that can be used to rebuild those permissions in a 
disaster scenario.

  Needless to say, all our customers have gone the big backup route.  Probably 
attribute additional charge to usual "Cost of working at a university” account

Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Oct 31, 2018, at 8:46 AM, Andrew Raibeck  wrote:
> 
> The backup-archive client has an option, SKIPNTSECURITYCRC [NO | YES] with
> NO being the default.
> 
> Reference:
> 
> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.6/client/r_opt_skipntsecuritycrc.html
> 
> When this option is set to YES, security changes are not used to determine
> whether the file is changed. That is, if the ONLY change to the file is to
> the security attributes, then the file is not considered changed (but if
> the file is backed up due to other changes, the security attributes are
> included in the backup).
> 
> Be careful setting this option, though:
> 
> 1. Practically speaking, this is the kind of change you make once. That is,
> do not toggle this option back and forth between YES and NO. Of course you
> can certainly do that, but you will defeat the purpose of changing the
> value to begin with. It is not a "one-time" skip of security changes. It is
> a "lifestyle" for your backups. :-)
> 
> 2. Files that undergo ONLY security changes are not backed up during
> incremental backup. Consider this scenario:
> 
> - Day 1: File is backed up (for reasons other than security changes)
> 
> - Day 2: Security is changed on the file. File is not backed up because no
> other changes occurred.
> 
> - Day 3. Security is changed on the file. File is not backed up because no
> other changes occurred.
> 
> - Day 4. User wants to revert the most recent (Day 3) security changes, and
> asks you to restore the file with the security changes from day 2. That
> will not be possible if you configure SKIPNTSECURITYCRC YES. But you can
> revert to the Day 1 backup, assuming your copygroup retention settings have
> not caused the older backup to expire.
> 
> 
> 3. If you later decide to put this option back to its default
> (SKIPNTSECURITYCRC NO) then you might see a one-time very big backup,
> because at that time, any files whose change is only to security, will be
> backed up, resulting possibly in a backup workload that is larger than
> usual.
> 
> In sum: Best practice to ensure most complete incremental backup is to
> allow the data to be backed up, even if only the security has changed.
> Proceed with caution, thinking about future ramifications of using this
> option.
> 
> Best regards,
> 
> Andy
> 
> ____
> 
> Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
> 
> "ADSM: Dist Stor Manager"  wrote on 2018-10-30
> 14:57:47:
> 
>> From: Robert Talda 
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 2018-10-30 14:58
>> Subject: Re: Windows ARL change
>> Sent by: "ADSM: Dist Stor Manager" 
>> 
>> Harold:
>>  Have same issue - often several times a year as departments here
>> administer their own servers - and no magic bullet.  Probably
>> doesn’t help other than “misery loves company”
>> 
>>  Keeping fingers crossed someone smarter than I has a workaround,
>> Bob
>> 
>> Robert Talda
>> EZ-Backup Systems Engineer
>> Cornell University
>> +1 607-255-8280
>> r...@cornell.edu
>> 
>> 
>>> On Oct 30, 2018, at 11:20 AM, Vandeventer, Harold [OITS]
>>  wrote:
>>> 
>>> Our "server security" folks are making several changes in Windows
>> Access Rights at the top of a folder structure which then cascades
>> all the way down to all files.  This is being done on several nodes.
>>> 
>>> Is there a way to avoid Spectrum Protect from backing up all those
>> files, most of which haven't changed in content other than the ARL
> property?
>>> 
>>> We charge back to agencies based on the bytes in auditocc table.
>> The ARL change results in a double up of the cost.
>>> 
>>> The nodes are running various 7.x versions.
>>> 
>>> Thanks for any guidance you might have.
>>> 
>> 



Re: Windows ARL change

2018-10-30 Thread Robert Talda
Harold:
  Have same issue - often several times a year as departments here administer 
their own servers - and no magic bullet.  Probably doesn’t help other than 
“misery loves company”

  Keeping fingers crossed someone smarter than I has a workaround,
Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Oct 30, 2018, at 11:20 AM, Vandeventer, Harold [OITS] 
>  wrote:
> 
> Our "server security" folks are making several changes in Windows Access 
> Rights at the top of a folder structure which then cascades all the way down 
> to all files.  This is being done on several nodes.
> 
> Is there a way to avoid Spectrum Protect from backing up all those files, 
> most of which haven't changed in content other than the ARL property?
> 
> We charge back to agencies based on the bytes in auditocc table. The ARL 
> change results in a double up of the cost.
> 
> The nodes are running various 7.x versions.
> 
> Thanks for any guidance you might have.
> 



Re: Netapp snapdiff issues having TSM client on a Linux machine, making use of NFS mounts

2018-09-18 Thread Robert Talda
Brion:
  Here at Cornell, we have a TSM client on a Linux system backing up NFS shares 
via NFS mounts and snapdiff differentials - and have been doing so for years.

  First, the gory details:
- For TSM
TSM Client Version 7, Release 1, Level 6.0 (*red faced, thought I had upgraded 
to SP 8.1.2.0 months ago*)
Running on CentOS Linux release 7.5.1804 (kernel 3.10.0-862.2.3.el7.x86_64)
Accessing TSM Server for Linux Version 7, Release 1, Level 7.200
- For NetApp
Filer Version Information: NetApp Release 9.3P6
 
  We aren’t seeing your issue, and I suspect I know why.  We are telling the 
TSM client to create its own snapshots - and not use the snapshots being 
created hourly (or daily or weekly even) by the team managing the Netapp - said 
snapshots being equivalent to the ones you are trying to take advantage of in 
your backups.

  That is, the relevant part of the daily backup schedule looks like:
 Schedule Name: DAILY.INCR.NFS.SNAPDIFF
   Description: Daily SnapDiff Incremental
Action: Incremental
 Subaction: 
   Options: -snapdiff -diffsnapshot=create -snapdiffhttps
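
On the server side, a schedule like the one shown might be defined along these lines (a sketch; the domain name, start time, and window values are assumptions not shown in the excerpt):

```
def sched <domain> DAILY.INCR.NFS.SNAPDIFF description="Daily SnapDiff Incremental" -
    action=incremental options="-snapdiff -diffsnapshot=create -snapdiffhttps" -
    starttime=00:30 duration=2 durunits=hours
```

The `-diffsnapshot=create` option is what makes the client create and manage its own base/diff snapshots rather than reusing the filer's scheduled ones.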


  During a backup, then, the following type of information is written to the 
dsmsched.log:
09/13/2018 00:38:08 Incremental by snapshot difference of volume 
'/nas-backup/vol/cit-pea1-test'
09/13/2018 00:38:10 ANS2328I Using Snapshot Differential Change Log.
09/13/2018 00:38:10
Connected to NetApp Storage Virtual Machine
Management Filer Host/IP Address : ccc-cdot01.cit.cornell.edu
Filer Version Information: NetApp Release 9.3P6: Thu Jun 14 
20:21:25 UTC 2018
Storage VM Name  : cit-sfshared05
Storage VM Host/IP Address   : cit-sfshared05-backup.files.cornell.edu
Storage VM Volume Style  : Flex
Login User   : tsmbackup
Transport: HTTPS

09/13/2018 00:38:10 Performing a Snapshot Differential Backup of volume 
'/nas-backup/vol/cit-pea1-test'
09/13/2018 00:38:10 Creating Diff Snapshot.
09/13/2018 00:38:10 Using Base Snapshot 
'TSM_SANT5B9759FC1AB6_CIT_PEA1_TEST_VOL' with timestamp 09/11/2018 06:00:28
09/13/2018 00:38:10 Using Diff Snapshot 
'TSM_SANT5B99E9B24F814_CIT_PEA1_TEST_VOL' with timestamp 09/13/2018 04:38:10

Then, on the next backup (48 hours later thanks to a large amount of data to 
backup on 9/13), the Diff Snapshot becomes the base:
09/15/2018 01:29:22 ANS2328I Using Snapshot Differential Change Log.
09/15/2018 01:29:22 
Connected to NetApp Storage Virtual Machine
Management Filer Host/IP Address : ccc-cdot01.cit.cornell.edu
Filer Version Information: NetApp Release 9.3P6: Thu Jun 14 
20:21:25 UTC 2018
Storage VM Name  : cit-sfshared05
Storage VM Host/IP Address   : cit-sfshared05-backup.files.cornell.edu
Storage VM Volume Style  : Flex
Login User   : tsmbackup
Transport: HTTPS

09/15/2018 01:29:22 Performing a Snapshot Differential Backup of volume 
'/nas-backup/vol/cit-pea1-test'
09/15/2018 01:29:22 Creating Diff Snapshot.
09/15/2018 01:29:22 Using Base Snapshot 
'TSM_SANT5B99E9B24F814_CIT_PEA1_TEST_VOL' with timestamp 09/13/2018 04:38:10
09/15/2018 01:29:22 Using Diff Snapshot 
'TSM_SANT5B9C98B1A15B2_CIT_PEA1_TEST_VOL' with timestamp 09/15/2018 05:29:21

I think that consistency is what is missing in your approach - I suspect that 
there are intervening snapshots with information about files that have been 
deleted or changed that are not part of what the TSM client is seeing.

I will add that our NetApp team is very happy with this approach.  The TSM 
client has been very efficient in its management of the snapshots, and in those 
rare cases when it isn't, the snapshots are easy to identify and clean up.

FWIW,
Bob


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Sep 18, 2018, at 8:35 AM, PAC Brion Arnaud  
> wrote:
> 
> Hi Tommaso,
> 
> The backup command I'm using (dsmc i /Airwarder -snapdiff -snapdiffhttps  
> -diffsnapshotname="daily.*" -diffsnapshot=latest -useexistingbase 
> -optfile=dsm_netapp.opt)  includes the "useexistingbase" statement.
> Based on my understanding of the documentation, this means that the S.P. 
> client will not trigger the creation of a new snapshot on the Netapp device, 
> but instead, make use of a snapshot that had been created previously by some 
> internal mechanism of the Netapp. 
> Therefore, it seems very unlikely that any process still has a grip on any of 
> the files being part of this snapshot ...
> My humble opinion only of course, I'm waiting on others fellow members of 
> this list feedback !
> 
> In addition to this, I would like to avoid the use of any exclude statement 
> during the backup: I doubt that any of my customers would be 
Re: Multiple processes opening for one scheduled backup

2018-08-21 Thread Robert Talda
George:
  First thing that comes to mind: how many dsmc (not dsmcad) processes are 
running on the client system when you see this?

  I haven’t seen this specifically, but I have seen on occasion a “rogue” (my 
term) dsmc process running in schedule mode and interacting with the server - 
and thus getting the same “next scheduled backup window” information as the 
legitimate dsmc process invoked by the dsmcad.
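
A quick way to check for stray scheduler processes on the client is a sketch like the one below (assumes a Linux host with a standard ps; the `[d]` bracket trick keeps grep from matching its own command line):

```shell
# Count dsmc processes running in scheduler mode; the [d] bracket trick
# prevents this grep invocation from matching its own command line.
count=$(ps -ef | grep -c '[d]smc sched')
echo "dsmc scheduler processes: ${count}"
# More than one usually means a leftover "rogue" scheduler is also polling
```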

HTH,
Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Aug 16, 2018, at 5:26 PM, George Huebschman  
> wrote:
> 
> I have a handful of Linux baclients that show more than one schedule begin
> and schedule end for the same backup schedule on the same day.
> 
> 08/10/2018 15:00:03 --- SCHEDULEREC OBJECT BEGIN DAILY_LINUX1 08/10/2018
> 15:00
> 08/10/2018 15:56 --- SCHEDULEREC OBJECT BEGIN DAILY_LINUX1 08/10/2018 15:00
> 
> 08/10/2018 16:35 --- SCHEDULEREC OBJECT END DAILY_LINUX1 08/10/2018 15:00
> 08/10/2018 17:47 --- SCHEDULEREC OBJECT END DAILY_LINUX1 08/10/2018 15:00
> 
> These are accompanied by massive numbers of errors once the second
> "instance" of the backup event starts.
> 
> Backup goes to VTL and maxnummp is limited.
> I see ANS0326E (exceeds max number of mount points)
> Tsmdedupdb lock errors and
> ANS 8010E (exceeded max number of retries); along with one ANS1512E
> (Scheduled event Failed. Return Code 12) for each "instance" that day.
> It does not happen every day, but it happens several times a week.
> I've been told that it is normal for a client to open more than one
> session, that this is not a problem.
> 
> Yes, clients open up several, sometimes many "SESSIONS", that is
> normal.  That is not what I feel the is the problem.
> These clients start the same backup event twice.  That isn't supposed to
> happen.  They send a complete backup summary twice with different values
> for each of the metrics.
> 
> Version info:
> Linux baclient V7.1.6.3
> RHEL Enterprise Linux server 5.11
> ISP Server AIX V6.3.5.100 (Yes, I know, it needs an update or two.)
> oslevel 7.1.0.0
> 
> There is only one dsmcad process running, and the dsmcad Service is
> restarted every time there is a backup exception.
> There is just the one schedule for the client, there aren't multiple
> schedules per day.  Each client actually has its own personal schedule.
> The client schedule options dictate the FS to be backed up.
> Cloptset manages includes/excludes.
> 
> These clients are not part of a cluster.
> 
> Some of them are very distant, but other clients just as far off don't do
> this.
> 
> ...what did I leave out?



Re: Proxy/asnodename restore and strange Registry entries?

2018-08-03 Thread Robert Talda
Zoltan:
 Willing to test here - which platform (Windows, Linux x86, etc) are you 
running the client on?

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Aug 2, 2018, at 10:35 AM, Zoltan Forray  wrote:
> 
> We are working through trying to move to using Proxy/asnodename processes
> to replace the html interface for our ISILON backups and are seeing some
> strangeness in the 8.1.0.2 GUI
> 
> When I bring up the GUI and the process to access another node, when I
> expand the "File Level" section, 6 "Registry" appear?  Besides there being
> 6-of them, this makes no sense since the backups are ISILON file level -
> not OS level.  There aren't any systemstate/registry level.
> 
> What gives?
> 


Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-16 Thread Robert Talda
Zoltan:
  I wish I could give you more details about the NAS/storage device 
connections, but either a) I’m not privy to that information; or b) I know it 
only as the SAN fabric.  That is, our largest backups are from systems in our 
server farm that are part of the same SAN fabric as both the system running the 
SP client doing the backups AND the system hosting the TSM server.  There is a 
10 GB pipe connecting the two physical systems but that hasn’t ever been the 
bottleneck.  And the system running the SP client is a VM as well.

  Our bigger challenge was filesystems or shares with lots of files.  This is 
where the proxy node strategy came into play.  We were able to work with the 
system admins to split the backup of the those filesystems into many smaller 
(in terms of number of files) backups that started deeper in the filesystem.  
That is, instead of running a backup against
\\rams\som\TSM\FC\*
We would have one backup running through PROXY.NODE1 for
\\rams\som\TSM\FC\dir1\*
While another was running through PROXY.NODE2 for
\\rams\som\TSM\FC\dir2\*
And so on and so forth.

We did this using a set of client schedules that used the “objects” option to 
specify the directory in question:

Def sched DOMAIN PROXY.NODE1.HOUR01 action=incr options="-subdir=yes
-asnodename=DATANODE" objects='"\\rams\som\TSM\FC\dir1\*"' startt=01:00
dur=1 duru=hour

Where DATANODE is the target for agent PROXY.NODE1.
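With several proxy nodes and 24 hourly slots each, the schedule definitions are easier to generate than to type. A hedged sketch (domain, node, and directory names follow the example above; the dsmadmc invocation details are placeholders):

```shell
# Emit 24 hourly DEFINE SCHEDULE commands for one proxy node into a macro
# file that dsmadmc can run. The per-hour directory names are assumptions.
for h in $(seq 0 23); do
  hh=$(printf '%02d' "$h")
  printf '%s\n' "def sched DOMAIN PROXY.NODE1.HOUR$hh action=incr options='-subdir=yes -asnodename=DATANODE' objects='\"\\\\rams\\som\\TSM\\FC\\dir$hh\\*\"' startt=$hh:00 dur=1 duru=hour"
done > hourly_scheds.mac
# then: dsmadmc -id=ADMIN -password=XXX macro hourly_scheds.mac
```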

Currently, we are running up to 144 backups (6 Proxy nodes, 24 hourly backups) 
for our largest devices.

HTH,
Bob

On Jul 16, 2018, at 8:29 AM, Zoltan Forray <zfor...@vcu.edu> wrote:

Robert,

Thanks for the extensive details.  You backup 5-nodes with as more data
then we do for 90-nodes.  So, my question is - what kind of connections do
you have to your NAS/storage device to process that much data in such a
short period of time?

I am not sure what benefit a proxy-node would do for us, other than to
manage multiple nodes from one connection/GUI - or am I totally off base on
this?

Our current configuration is such:

7-Windows 2016 VM's (adding more to spread out the load)
Each of these 7-VM's handle the backups for 5-30 nodes.  Each node is a
mountpoint for an user/department ISILON DFS mount -
i.e. \\rams\som\TSM\FC\*, \\rams\som\TSM\UR\* etc.  
FWIW, the reason we are
using VM's is the connection is actually faster then when we were using
physical servers since they only had gigabit nics.

Even when we moved the biggest ISILON node (20,000,000+ files) to a new VM
with only 4-other nodes, it still took 4-days to scan and backup 102GB of
32TB.  Below are a recent end-of-session statistics (current backup started
Friday and is still running)

07/09/2018 02:00:06 ANE4952I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects inspected:   20,276,912  (SESSION: 21423)
07/09/2018 02:00:06 ANE4954I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects backed up:   26,787  (SESSION: 21423)
07/09/2018 02:00:06 ANE4958I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects updated: 31  (SESSION: 21423)
07/09/2018 02:00:06 ANE4960I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects rebound:  0  (SESSION: 21423)
07/09/2018 02:00:06 ANE4957I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects deleted:  0  (SESSION: 21423)
07/09/2018 02:00:06 ANE4970I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects expired: 20,630  (SESSION: 21423)
07/09/2018 02:00:06 ANE4959I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects failed:  36  (SESSION: 21423)
07/09/2018 02:00:06 ANE4197I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects encrypted:0  (SESSION: 21423)
07/09/2018 02:00:06 ANE4965I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of subfile objects:  0  (SESSION: 21423)
07/09/2018 02:00:06 ANE4914I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of objects grew: 0  (SESSION: 21423)
07/09/2018 02:00:06 ANE4916I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of retries:124  (SESSION: 21423)
07/09/2018 02:00:06 ANE4977I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of bytes inspected:  31.75 TB  (SESSION: 21423)
07/09/2018 02:00:06 ANE4961I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Total number of bytes transferred:   101.90 GB  (SESSION: 21423)
07/09/2018 02:00:06 ANE4963I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Data transfer time:  115.78 sec  (SESSION: 21423)
07/09/2018 02:00:06 ANE4966I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Network data transfer rate:  922,800.00 KB/sec  (SESSION: 21423)
07/09/2018 02:00:06 ANE4967I (Session: 21423, Node: ISILON-SOM-SOMADFS2)
Aggregate data transfer rate:271.46 KB/sec  (SESSION: 21423)
07/09/2018 02:00:06 ANE4968I (Session:

Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-15 Thread Robert Talda
Zoltan:
 Finally get a chance to answer you.  I :think: I understand what you are 
getting at…

 First, some numbers - recalling that each of these nodes is one storage device:
Node1: 358,000,000+ files totalling 430 TB of primary occupied space
Node2: 302,000,000+ files totaling 82 TB of primary occupied space
Node3: 79,000,000+ files totaling 75 TB of primary occupied space
Node4: 1,000,000+ files totalling 75 TB of primary occupied space
Node5: 17,000,000+ files totalling 42 TB of  primary occupied space
  There are more, but I think this answers your initial question.

 Restore requests are handled by the local system admin or, for lack of a 
better description, data admin.  (Basically, the research area has a person 
dedicated to all the various data issues related to research grants, from 
including proper verbiage in grant requests to making sure the necessary 
protections are in place). 

  We try to make it as simple as we can, because we do concentrate all the data 
in one node per storage device (usually a NAS).  So restores are usually done 
directly from the node - while all backups are done through proxies.  
Generally, the restores are done without permissions so that the appropriate 
permissions can be applied to the restored data.  (Oft times, the data is 
restored so a different user or set of users can work with it, so the original 
permissions aren’t useful)

  There are some exceptions - of course, as we work at universities, there are 
always exceptions - and these we handle as best we can by providing proxy nodes 
with restricted privileges.

  Let me know if I can provide more,
Bob


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Jul 11, 2018, at 3:59 PM, Zoltan Forray  wrote:
> 
> Robert,
> 
> Thanks for the insight/suggestions.  Your scenario is similar to ours but
> on a larger scale when it comes to the amount of data/files to process,
> thus the issue (assuming such since you didn't list numbers).  Currently we
> have 91 ISILON nodes totaling 140M objects and 230TB of data. The largest
> (our troublemaker) has over 21M objects and 26TB of data (this is the one
> that takes 4-5 days).  dsminstr.log from a recently finished run shows it
> only backed up 15K objects.
> 
> We agree that this and other similarly larger nodes need to be broken up
> into smaller/less objects to backup per node.  But the owner of this large
> one is balking since previously this was backed up via a solitary Windows
> server using Journaling so everything finished in a day.
> 
> We have never dealt with proxy nodes but might need to head in that
> direction since our current method of allowing users to perform their own
> restores relies on the now deprecated Web Client.  Our current method is
> numerous Windows VM servers with 20-30 nodes defined to each.
> 
> How do you handle restore requests?
> 
> On Wed, Jul 11, 2018 at 2:56 PM Robert Talda  wrote:
> 



Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-11 Thread Robert Talda
Zoltan, et al:
  :IF: I understand the scenario you outline originally, here at Cornell we are 
using two different approaches in backing up large storage arrays.

1. For backups of CIFS shares in our Shared File Share service hosted on a 
NetApp device,  we rely on a set of Powershell scripts to build a list of 
shares to backup, then invoke up to 5 SP clients at a time, each client backing 
up a share.  As such, we are able to backup some 200+ shares on a daily basis.  
I’m not sure this is a good match to your problem...
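The same "up to 5 clients at a time" idea can be sketched in shell form (the production scripts described above are PowerShell; the share names and the stand-in backup function here are assumptions):

```shell
#!/bin/sh
# Batch-of-five sketch. backup_share stands in for something like:
#   dsmc incremental "\\filer\share" -subdir=yes -optfile=share.opt
MAX_JOBS=5
backup_share() { printf 'backed up %s\n' "$1"; }

i=0
for share in '\\filer\vol\alpha' '\\filer\vol\beta' '\\filer\vol\gamma'; do
  backup_share "$share" &               # one SP client per share
  i=$((i + 1))
  [ $((i % MAX_JOBS)) -eq 0 ] && wait   # cap concurrency at MAX_JOBS
done
wait    # let the final partial batch finish
```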

2. For backups of a large Dell array containing research data that does seem to 
be a good match, I have defined a set of 10 proxy nodes and 240 hourly 
schedules (once each hour for each proxy node) that allows us to divide the 
Dell array up into 240 pieces - pieces that are controlled by the specification 
of the “objects” in the schedule.  That is, in your case, instead of 
associating node  to the schedule ISILON-SOM-SOMDFS1 with 
object " \\rams.adp.vcu.edu\SOM\TSM\SOMADFS1\*”, I would instead have something 
like
Node PROXY1.ISILON associated to PROXY1.ISILON.HOUR1 for object " 
\\rams.adp.vcu.edu\SOM\TSM\SOMADFS1\DIRA\SUBDIRA\*”
Node PROXY2.ISILON associated to PROXY2.ISILON.HOUR1 for object " 
\\rams.adp.vcu.edu\SOM\TSM\SOMADFS1\DIRA\SUBDIRB\*”
…
Node PROXY1.ISILON associated to PROXY1.ISILON.HOUR2 for object " 
\\rams.adp.vcu.edu\SOM\TSM\SOMADFS1\DIRB\SUBDIRA\*”

And so on.   For known large directories, slots of multiple hours are 
allocated, up to the largest directory which is given its own proxy node with 
one schedule, and hence 24 hours to back up.
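Behind each of those schedules sits an ordinary proxy grant on the server. A minimal sketch, with the data node and proxy agent names assumed rather than taken from the thread:

```shell
# Write the GRANT PROXYNODE commands to a macro file for dsmadmc.
# ISILON-DATANODE and the PROXYn.ISILON names are placeholders.
cat <<'EOF' > proxy_grants.mac
grant proxynode target=ISILON-DATANODE agent=PROXY1.ISILON
grant proxynode target=ISILON-DATANODE agent=PROXY2.ISILON
EOF
# then: dsmadmc -id=ADMIN -password=XXX macro proxy_grants.mac
```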

There are pros and cons to both of these, but they do enable us to perform the 
backups.

FWIW,
Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Jul 11, 2018, at 7:49 AM, Zoltan Forray  wrote:
> 
> I will need to translate to English but I gather it is talking about the
> RESOURCEUTILZATION / MAXNUMMP values.  While we have increased MAXNUMMP to
> 5 on the server (will try going higher),  not sure how much good it would
> do since the backup schedule uses OBJECTS to point to a specific/single
> mountpoint/filesystem (see below) but is worth trying to bump the
> RESOURCEUTILIZATION value on the client even higher...
> 
> We have checked the dsminstr.log file and it is spending 92% of the time in
> PROCESS DIRS (no surprise)
> 
> 7:46:25 AM   SUN : q schedule * ISILON-SOM-SOMADFS1 f=d
>Policy Domain Name: DFS
> Schedule Name: ISILON-SOM-SOMADFS1
>   Description: ISILON-SOM-SOMADFS1
>Action: Incremental
> Subaction:
>   Options: -subdir=yes
>   Objects: \\rams.adp.vcu.edu\SOM\TSM\SOMADFS1\*
>  Priority: 5
>   Start Date/Time: 12/05/2017 08:30:00
>  Duration: 1 Hour(s)
>Maximum Run Time (Minutes): 0
>Schedule Style: Enhanced
>Period:
>   Day of Week: Any
> Month: Any
>  Day of Month: Any
> Week of Month: Any
>Expiration:
> Last Update by (administrator): ZFORRAY
> Last Update Date/Time: 01/12/2018 10:30:48
>  Managing profile:
> 
> 
> On Tue, Jul 10, 2018 at 4:06 AM Jansen, Jonas 
> wrote:
> 
>> It is possible to do a parallel backup of file system parts.
>> https://www.gwdg.de/documents/20182/27257/GN_11-2016_www.pdf (german)
>> have a
>> look on page 10.
>> 
>> ---
>> Jonas Jansen
>> 
>> IT Center
>> Gruppe: Server & Storage
>> Abteilung: Systeme & Betrieb
>> RWTH Aachen University
>> Seffenter Weg 23
>> 52074 Aachen
>> Tel: +49 241 80-28784
>> Fax: +49 241 80-22134
>> jan...@itc.rwth-aachen.de
>> www.itc.rwth-aachen.de
>> 
>> -Original Message-
>> From: ADSM: Dist Stor Manager  On Behalf Of Del
>> Hoobler
>> Sent: Monday, July 9, 2018 3:29 PM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: Re: [ADSM-L] Looking for suggestions to deal with large backups
>> not
>> completing in 24-hours
>> 
>> They are a 3rd-party partner that offers an integrated Spectrum Protect
>> solution for large filer backups.
>> 
>> 
>> Del
>> 
>> 
>> 
>> "ADSM: Dist Stor Manager"  wrote on 07/09/2018
>> 09:17:06 AM:
>> 
>>> From: Zoltan Forray 
>>> To: ADSM-L@VM.MARIST.EDU
>>> Date: 07/09/2018 09:17 AM
>>> Subject: Re: Looking for suggestions to deal with large backups not
>>> completing in 

Re: How to set default include/exclude rules on server but allow clients to override?

2018-06-14 Thread Robert Talda
Efim is correct, both about the force and  about processing order.  I got ahead 
of myself.  We built our client option sets to be as inclusive as possible, 
within minimal include and exclude statements, so that clients could override 
the behavior with includes and excludes that got processed because there were 
no conflicting server side options.  Resulted in a blind spot in my thinking - 
we haven’t had a customer need to override the management classes provided.

A possible alternative solution: create a separate domain with the same 
management classes, but with different retention and storage characteristics 
defined by those classes as needed.  Then, assign the nodes as needed to that 
domain.  That is, get to the same destination - system state information 
handled in a different manner - by changing the underlying behavior of the 
assigned management class, not the name.
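A minimal sketch of that alternative, where every name and retention value is an assumption for illustration, not something from the thread:

```shell
# Build a dsmadmc macro that defines a parallel domain whose management
# class keeps the same name but carries different backup retention.
cat <<'EOF' > longkeep.mac
def domain LONGKEEP desc="Same class names, different retention"
def policyset LONGKEEP PS1
def mgmtclass LONGKEEP PS1 STANDARD
def copygroup LONGKEEP PS1 STANDARD type=backup dest=BACKUPPOOL verexists=10 retextra=365
assign defmgmtclass LONGKEEP PS1 STANDARD
validate policyset LONGKEEP PS1
activate policyset LONGKEEP PS1
EOF
# then: dsmadmc -id=ADMIN -password=XXX macro longkeep.mac
#       update node SOMENODE domain=LONGKEEP
```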

FWIW,
Bob 

(And thanks, Efim, for catching my error!)



Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Jun 13, 2018, at 2:26 PM, Efim  wrote:
> 
> Hi,
> parameter Force has no effect on include/exclude in a cloptset.
> use q inclexcl client command to check resulting (cloptset+dsm.opt) 
> include/exclude in normal order (from top to bottom).
> You will find that server include/exclude processed first.
> Efim
> 
> 
>> On Jun 13, 2018, at 16:43, Genet Begashaw wrote:
>> 
>> Thanks for your response, on the server side it was included by Optionset
>> with force=no, i tried to overwrite in the client side in dsm.opt file with
>> different mgmt class like shown below
>> 
>> include.systemstate ALL MC-mgmt-name
>> 
>> still did not work
>> 
>> Thank you
>> 
>> 
>> 
>> 
>> 
>> On Wed, Jun 13, 2018 at 9:37 AM Robert Talda  wrote:
>> 
>>> Genet:
>>> 
>>> For us it is simple: create a client option set with the list of
>>> include/exclude rules with FORCE=NO.  Complexity comes from number of
>>> client options sets needed to support different platforms (Win 7, Win 10,
>>> OS X, macOS, various Linux distros) and determining which client option
>>> set to assign a given node
>>> 
>>> YMMV,
>>> Bob
>>> 
>>> Robert Talda
>>> EZ-Backup Systems Engineer
>>> Cornell University
>>> +1 607-255-8280
>>> r...@cornell.edu
>>> 
>>> 
>>>> On Jun 13, 2018, at 8:56 AM, Genet Begashaw  wrote:
>>>> 
>>>> Thank you
>>>> 
>>>> Genet Begashaw
>>> 



Re: How to set default include/exclude rules on server but allow clients to override?

2018-06-13 Thread Robert Talda
Genet: 

  For us it is simple: create a client option set with the list of 
include/exclude rules with FORCE=NO.  Complexity comes from number of client 
options sets needed to support different platforms (Win 7, Win 10, OS X, macOS, 
various Linux distros) and determining which client option set to assign a 
given node.
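A hedged illustration of such an option set (the set name and the rules are invented for the example; the point is FORCE=NO, which lets the client's own dsm.opt/dsm.sys entries win):

```shell
# Build a dsmadmc macro defining a baseline option set the client may
# override. All names and exclude rules here are assumptions.
cat <<'EOF' > cloptset.mac
def cloptset LINUX_STD desc="Baseline excludes; client may override"
def clientopt LINUX_STD inclexcl "exclude /tmp/.../*" force=no
def clientopt LINUX_STD inclexcl "exclude.dir /proc" force=no
EOF
# then: dsmadmc -id=ADMIN -password=XXX macro cloptset.mac
#       update node MYNODE cloptset=LINUX_STD
```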

YMMV,
Bob

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Jun 13, 2018, at 8:56 AM, Genet Begashaw  wrote:
> 
> Thank you
> 
> Genet Begashaw


Re: Has any one seen this: client restore mounting copy storage pool tape volumes?

2017-11-16 Thread Robert Talda
Thanks, Marc!  Need to improve my tech note search terms…


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Nov 16, 2017, at 12:08 PM, Marc Lanteigne <marclantei...@ca.ibm.com> wrote:
> 
> 
> Hi Robert,
> 
> Yes this can happen.  It's explained here:
> http://www-01.ibm.com/support/docview.wss?uid=swg21380647
> 
> -
> Thanks,
> Marc...
> 
> Marc Lanteigne
> Accelerated Value Specialist for Spectrum Protect
> 416.478.0233 | marclantei...@ca.ibm.com
> Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern
> 
> Follow me on: Twitter, developerWorks, LinkedIn
> 



Has any one seen this: client restore mounting copy storage pool tape volumes?

2017-11-16 Thread Robert Talda
Folks:
  TSM Server 7.1.7.200 running on Linux x86_64 (Red Hat 6.8)
  TSM client 7.1.6.0 running on what appears to be Red Hat 5.7 (kernel 
2.6.18-300.el5)

  Local system admin is performing a BMR on a system that has been backing up 
for years to the same TSM node.  So there are a lot of old files being pulled down.

  What is puzzling me is that the session associated with the restore just 
mounted a volume from our copy storage pool:

tsm: ADSM7>select * from sessions where session_id=179758

   SESSION_ID: 179758
   START_TIME: 2017-11-16 11:00:50.00
   COMMMETHOD: Tcp/Ip
STATE: MediaW
 WAIT_SECONDS: 2933
   BYTES_SENT: 10329945
   BYTES_RECEIVED: 883
 SESSION_TYPE: Node
  CLIENT_PLATFORM: Linux x86-64
  CLIENT_NAME: NASH01.SERVERFARM.CORNELL.EDU
   OWNER_NAME: 
 MOUNT_POINT_WAIT: 
 INPUT_MOUNT_WAIT: 
   INPUT_VOL_WAIT:  ,141013,225
 INPUT_VOL_ACCESS:  ,250152,740: ,/adsm7T20/T037,2678
OUTPUT_MOUNT_WAIT: 
  OUTPUT_VOL_WAIT: 
OUTPUT_VOL_ACCESS: 
LAST_VERB: MediaMount
   VERB_STATE: Sent
   FAIL_OVER_MODE: No

Current mounted volumes includes:
11/16/2017 11:39:39   ANR8337I LTO volume 250152 mounted in drive DRIVE8503  
(/dev/rmt.e203.0.0). (SESSION: 179758)   

Which is a copy storage pool volume.  Fortunately, we had drives available and 
the volume mounted so sys admin isn’t seeing a delay.

Not seen this before; not sure what condition would cause the server to reach 
into a copy storage pool for data.
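One way to narrow it down (a hedged sketch; the admin credentials are placeholders, the volume name is the one from the log above) is to confirm which pool the mounted volume belongs to and whether it is still read/write:

```shell
# Diagnostic macro for dsmadmc: pool membership and access state of the
# volume that was mounted during the restore.
cat <<'EOF' > check_volume.mac
q volume 250152 f=d
select volume_name, stgpool_name, access from volumes where volume_name='250152'
EOF
# then: dsmadmc -id=ADMIN -password=XXX macro check_volume.mac
```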

Has anyone seen/experienced this?

Thanks!



Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu




Re: Huge differences in file count after exporting node to a different server

2017-10-30 Thread Robert Talda
Zoltan:
 For what it is worth, I’ve seen this behavior for years.  I attempted to 
pursue it a couple of times with IBM support but never got anywhere.

  In fact, it just occurred last week:
OS: Red Hat Enterprise Linux Server release 6.8 (Santiago)
TSM Server Version 7, Release 1, Level 7.200


Source node statistics (from BACKUPS table):
FSID  Type  State             Count
1     DIR   INACTIVE_VERSION  353
1     DIR   ACTIVE_VERSION    57355
1     FILE  INACTIVE_VERSION  1055
1     FILE  ACTIVE_VERSION    392870
2     DIR   ACTIVE_VERSION    4161
2     FILE  ACTIVE_VERSION    25751

Target node statistics (from BACKUPS table):
FSID  Type  State             Count
1     DIR   INACTIVE_VERSION  706
1     DIR   ACTIVE_VERSION    114710
1     FILE  INACTIVE_VERSION  1055
1     FILE  ACTIVE_VERSION    393246
2     DIR   ACTIVE_VERSION    4161
2     FILE  ACTIVE_VERSION    25751
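For reference, per-filespace counts like the two tables above can be produced with a SELECT along these lines against the BACKUPS table (the node name is a placeholder):

```shell
# Write the query to a file, then feed it to dsmadmc on the source and
# target servers and compare the results.
cat <<'EOF' > count_backups.sql
select filespace_id, type, state, count(*) as cnt
  from backups
 where node_name='MYNODE'
 group by filespace_id, type, state
EOF
# then: dsmadmc -id=ADMIN -password=XXX "$(cat count_backups.sql)"
```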

Note the duplication of the directories - so obvious, I didn’t pursue.  I did 
look into the additional active files, and many were inconsequential (e.g., 
files named .ds_store; this is a macOS X platform) but there were a number of 
other files with real content.  In this case, the additional files were not a 
problem, so I didn’t pursue further, other than verifying that expiration 
wasn’t running on the source server, so the origin was uncertain.


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


On Oct 25, 2017, at 3:35 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

I am curious if anyone has seen anything like this.

A node was exported (filedata=all) from one server to another (all servers
are 7.1.7.300 RHEL)

After successful completion (took a week due to 6TB+ to process) and
copypool backups on the new server, the Total Occupancy counts are the same
(13.52TB).  However, the file counts are waaay off (original=17,561,816 vs
copy=12,471,862)

There haven't been any backups performed to either the original (since the
export) or new node. Policies are the same on both servers and even if they
weren't, that wouldn't explain the same occupancy size/total.

Neither server runs dedup (DISK based storage volumes).

Any thoughts?

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/



Odd behavior during CIFS file restore

2017-08-10 Thread Robert Talda
Folks:
  Now that Paul Zarnowski and his extensive knowledge and experience have 
retired to greener pastures, we need to reach out to the community for some 
insight.

  We have a NetApp appliance that supports our Shared File Service for campus, 
offering both CIFS and NFS shares.  In line with a recent list discussion about 
the problems with NDMP backups, we use two proxy systems for backup (Windows 
Server 2012 R2 for CIFS shares; RHEL 6 for NFS shares).  While we have our fair 
share of issues with the backups, in general this architecture has worked for 
us.

  Recently, though, we had to perform a restore of a directory on a CIFS share 
for a user.  This is outside our normal boundaries, but it was an emergency.  
In the process, we encountered something odd that I can’t explain.  Actually, 
there were 2 oddities.

So, starting with the facts:

   NetApp Filer Version Information: NetApp Release 9.1P5: Sun Jun 11 
02:26:58 UTC 2017
   SP Client Version 8, Release 1, Level 0.1
   TSM Server Version 7, Release 1, Level 7.200

The restore was not back to the source directory, but rather to a new directory 
created by the user as a target for the restore.  As such, we initially started 
out using UNC names for both the source AND the target directory, namely

  Dsmc restore -optfile="share.opt" -latest -subdir=yes 
"\\share.files.cornell.edu\vol\user\source dir\*" 
"\\share.files.cornell.edu\vol\user\target dir"

Obviously, the actual UNC names differ, but the “ “ in both the source and 
target directory names did exist.

Oddity 1: Issuing the command as written resulted in the SP client attempting 
to restore to the source directory, for we got a prompt about overwriting an 
existing file.  However, SP client wrote no message of any kind about not being 
able to access the target directory to either standard output or the 
dsmerror.log.  It just tried to restore back to the source directory.

Oddity 2: it appears that the “ “ in the target directory was causing the 
problem.  That is, we eventually got the restore to work, via trial and error, 
via the following sequence
Net use Z: "\\share.files.cornell.edu\vol\user\target dir"
Dsmc restore -optfile="share.opt" -latest -subdir=yes 
"\\share.files.cornell.edu\vol\user\source dir\*" "z:\"

However, any attempt to set the drive letter to a higher level directory 
failed, i.e.
Net use Z: "\\share.files.cornell.edu\vol\user"
Dsmc restore -optfile="share.opt" -latest -subdir=yes 
"\\share.files.cornell.edu\vol\user\source dir\*" "z:\target dir\"
failed.

Has any one seen anything like this?

Thanks,
Bob T



Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu<mailto:r...@cornell.edu>




Re: Operations Center at risk setting jumps from 30 to 60 days

2017-06-28 Thread Robert Jose

Hi Stefan
I will run your idea past our UX design team and see what we can do.
Thanks for the feedback. We appreciate the opportunity to make our product
better 


Rob Jose
Spectrum Protect OC UI Developer / L3 Support

-Original Message-
From: Stefan Folkerts [mailto:stefan.folke...@gmail.com]
Sent: Wednesday, June 28, 2017 7:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Operations Center at risk setting jumps from 30 to 60
days

Hi Robert.
I would really like to be able to pick an at-risk setting using a
blank-field where I can just enter the day's for a selection of VM's, since
the CLI supports this I guess the GUI can too right? Correct at-risk
settings for monthly backups are especially painful right now, this
customer has 400+ nodes.

On Wed, Jun 28, 2017 at 12:16 AM, Robert Jose <rj...@us.ibm.com> wrote:

> Hello
> Starting in version 7.1.5 ( I think ), we changed so that you can
> update the default per week. If it isn't too much trouble, you can
> change the default to 7 or 8 weeks. Then you can change the other
> machines to use a custom value that we support. This isn't a good
> option if you have a lot of clients.
>
> The other option is to use the command builder to update the 4
> clients, this way you don't have to leave OC.
>
> Rob Jose
> Spectrum Protect OC UI Developer / L3 Support
>
> -Original Message-
> From: Stefan Folkerts [mailto:stefan.folke...@gmail.com]
> Sent: Wednesday, June 14, 2017 3:54 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Operations Center at risk setting jumps from 30
> to 60 days
>
> @Remco, do you mind sharing? I can only set custom at risk settings
> for the defaults, I can't find where I can set (ie) 31 day's for 4
> servers using the GUI.
>
>
> On Wed, Jun 14, 2017 at 11:15 AM, Remco Post <r.p...@plcs.nl> wrote:
>
> > > On 14 Jun 2017, at 11:08, Stefan Folkerts
> > > <stefan.folke...@gmail.com>
> > wrote:
> > >
> > > Thanks Steven, too bad you can't set this in the GUI.
> >
> > yes you can
> >
> > > Trying to keep this
> > > customer on the OC for as much as possible.
> > >
> > > On Wed, Jun 14, 2017 at 9:40 AM, Harris, Steven <
> > > steven.har...@btfinancialgroup.com> wrote:
> > >
> > >> Hi Stephan
> > >>
> > >> Set vmatriskinterval
> > >>
> > >> Syntax
> > >>
> > >> Set VMATRISKINTERVAL node_name fsid TYPE=DEFAULT|BYPASSED|CUSTOM [Interval=value]
> > >>
> > >> Type is custom and interval is in *Hours* between 6 and 8808 (367
> > >> days)
> > >>
> > >> Oh Heck.  I had plans to set it much higher, as we have to keep
> > monthlies
> > >> for 7 years even after decommission.  I will have to set to
> > >> bypassed instead.
> > >>
> > >>
> > >> Cheers
> > >>
> > >> Steve
> > >>
> > >> Steven Harris
> > >> TSM Admin/Consultant
> > >> Canberra Australia
> > >>
> > >> -Original Message-
> > >> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
> > >> Behalf
> > >> Of
> > >> Stefan Folkerts
> > >> Sent: Wednesday, 14 June 2017 5:05 PM
> > >> To: ADSM-L@VM.MARIST.EDU
> > >> Subject: [ADSM-L] Operations Center at risk setting jumps from 30
> > >> to 60 days
> > >>
> > >> Hi all,
> > >>
> > >> Is there a way to manually set the at risk for a node to, for
> > >> example,
> > >> 31
> > >> days?
> > >> Monthly backups now have to be set to 30 or 60, both isn't what
> > >> we want because 30 will cause false positives and 60 is a bit too
late.
> > >>
> > >> The default can be set to 5 weeks for example, I think the best
> > >> would be if we have the resolution of the default settings per
> > >> node (or
> > selection of
> > >> nodes) in the OC.
> > >>
> > >> Regards,
> > >>   Stefan
> > >>
> > >>
> > >> This message and any attachment is confidential and may be
> > >> privileged or otherwise protected from disclosure. You should
> > >> immediately delete the message if you are not the intended
> > >> recipient. If you have rece

Re: Operations Center at risk setting jumps from 30 to 60 days

2017-06-27 Thread Robert Jose
Hello
Starting in version 7.1.5 ( I think ), we changed so that you can update
the default per week. If it isn't too much trouble, you can change the
default to 7 or 8 weeks. Then you can change the other machines to use a
custom value that we support. This isn't a good option if you have a lot of
clients.

The other option is to use the command builder to update the 4 clients,
this way you don't have to leave OC.

Rob Jose
Spectrum Protect OC UI Developer / L3 Support

-Original Message-
From: Stefan Folkerts [mailto:stefan.folke...@gmail.com]
Sent: Wednesday, June 14, 2017 3:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Operations Center at risk setting jumps from 30 to 60
days

@Remco, do you mind sharing? I can only set custom at risk settings for the
defaults, I can't find where I can set (ie) 31 day's for 4 servers using
the GUI.


On Wed, Jun 14, 2017 at 11:15 AM, Remco Post  wrote:

> > On 14 Jun 2017, at 11:08, Stefan Folkerts
> > 
> wrote:
> >
> > Thanks Steven, too bad you can't set this in the GUI.
>
> yes you can
>
> > Trying to keep this
> > customer on the OC for as much as possible.
> >
> > On Wed, Jun 14, 2017 at 9:40 AM, Harris, Steven <
> > steven.har...@btfinancialgroup.com> wrote:
> >
> >> Hi Stephan
> >>
> >> Set vmatriskinterval
> >>
> >> Syntax
> >>
> >> Set VMATRISKINTERVAL node_name fsid TYPE=DEFAULT|BYPASSED|CUSTOM [Interval=value]
> >>
> >> Type is custom and interval is in *Hours* between 6 and 8808 (367
> >> days)
> >>
> >> Oh Heck.  I had plans to set it much higher, as we have to keep
> monthlies
> >> for 7 years even after decommission.  I will have to set to
> >> bypassed instead.
> >>
> >>
> >> Cheers
> >>
> >> Steve
> >>
> >> Steven Harris
> >> TSM Admin/Consultant
> >> Canberra Australia
> >>
> >> -Original Message-
> >> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
> >> Behalf
> >> Of
> >> Stefan Folkerts
> >> Sent: Wednesday, 14 June 2017 5:05 PM
> >> To: ADSM-L@VM.MARIST.EDU
> >> Subject: [ADSM-L] Operations Center at risk setting jumps from 30
> >> to 60 days
> >>
> >> Hi all,
> >>
> >> Is there a way to manually set the at risk for a node to, for
> >> example,
> >> 31
> >> days?
> >> Monthly backups now have to be set to 30 or 60, both isn't what we
> >> want because 30 will cause false positives and 60 is a bit too late.
> >>
> >> The default can be set to 5 weeks for example, I think the best
> >> would be if we have the resolution of the default settings per node
> >> (or
> >> selection of
> >> nodes) in the OC.
> >>
> >> Regards,
> >>   Stefan
> >>
> >>
> >>
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


Recovering Windows Server 2012 using TSM client 7.1.6.x (was [Re: [ADSM-L] [ADSM-L] ISP 81 Discontinued functions])

2016-12-16 Thread Robert Talda
Folks:

We’ve been aware of these documents for some time, and have used them 
successfully in the past.  In light of the current conversation, though, 
ironically, my Windows System Administrator was working through the Best 
Practices for Recovering Windows Server 2012 and Windows 8 document

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Best%20Practices%20for%20Recovering%20Windows%20Server%202012%20and%20Windows%208

using the TSM client for Windows v7.1.6.4 when he came across this gem.

The instructions state
a. Unpack the Tivoli Storage Manager Backup-Archive client installation files 
into the media subdirectory of the WinPE destination directory you specified in 
step 2b above.

1. Run the installation package executable file.

2. When prompted for the folder name in which to save the installation files, 
specify the media subdirectory of the WinPE destination directory you specified 
in step 2b above. For example, c:\winpe_x64\media.

However, when he ran the installation package executable, he wasn't 
prompted for a folder name - rather, the files were written to a directory 
named TSMClient created in the same location as the downloaded installation 
package (e.g. the package was downloaded to C:\Users\\Downloads - and the 
created directory was thus C:\Users\\Downloads\TSMClient).

Further, there is no mention of the “TSMClient” directory in the document.

Now, it seems pretty obvious what we should do - namely, copy the files in the 
newly created “TSMClient” directory to the “c:\winpe_x64\media” directory.  But 
what we don’t know is how further steps might be impacted.

Has anyone attempted to do a recovery with a 7.1.6.x client for Windows?  We’d 
appreciate any knowledge you have to share.

Bob T

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu<mailto:r...@cornell.edu>


On Dec 15, 2016, at 11:08 AM, Marco Batazzi 
<mbata...@unoinformatica.it<mailto:mbata...@unoinformatica.it>> wrote:

Please look @ 
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli+Storage+Manager/page/Best+Practices+for+Recovering+Windows+Server+2008,+Windows+Server+2008+R2,+Windows+7,+and+Windows+Vista
And 
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Best%20Practices%20for%20Recovering%20Windows%20Server%202012%20and%20Windows%208




POSTSCHEDULECMD command failing after a long running backups with "File not found" message

2016-12-09 Thread Robert Talda
Folks:
  Wondering if anyone has seen this one.  

Environment: 
TSM client for Linux x86_64 v7.1.6.0 running on a Red Hat Enterprise Linux 
Server release 5.11 (Tikanga) platform accessing a TSM server for Linux v 
7.1.5.200.

  We have a NetApp appliance which we back up from a dedicated proxy system.  
The NetApp appliance is offered to campus as a service, so shares are created 
and deleted randomly as new customers subscribe to the service and old 
customers drop off.  The resulting dynamic nature of the shares on the NetApp 
appliance requires a daily redefinition of the backup configuration.  So, 
every evening at 6 pm, a script fires that creates:
- the dsm.opt file defining the domains to backup; and
- a preschedule command script to mount the necessary shares on the proxy 
system before the backup; and
- a postschedule command script to unmount the shares
Then, around 9 pm, a scheduled backup is initiated.

  Most days, this functions flawlessly.  However, on occasion, the post 
schedule command script fails, creating the following message in the 
dsmsched.log:
12/09/2016 07:19:26 ANS1821E Unable to start POSTSCHEDULECMD/ PRESCHEDULECMD 
'/opt/tivoli/tsm/client/ba/bin/vfilers_postsched’

 Now, the file /opt/tivoli/tsm/client/ba/bin/vfilers_postsched exists - and can 
be ran manually without issue.  This has occurred at seeming random intervals 
over the past year, but I’ve realized that this seems to happen when the backup 
runs long.   For example, the backup that generated the error message started 
on 12/07/16 at 21:11:44, so it ran for roughly 34 hours.  This happens from 
time to time, for various reasons.

  What I think is happening:
- 12/07 18:00, the daily backup configuration script ran, creating, among other 
files, the vfilers_postsched command script containing the unmount commands
- 12/07 21:11 the scheduled backup commenced
- 12/08 18:00 the daily backup configuration script ran, creating a new version 
of the vfilers_postsched command script
- 12/09 07:19 the scheduled backup finished - and the TSM client tried to run 
the vfilers_postsched command - but it failed, generating the message above 
instead

  I think the TSM client is looking for the vfilers_postsched file that was 
created on 12/07  but no longer exists, having been overwritten on 12/08.   
This seems a little far fetched, but knowing that the TSM client reads the 
configuration file once on startup, and never again, leads me to suspect that 
the client opens the postschedulecmd file at that time as well - and not when 
the backup is over.  And thus, can’t read it when the backup is over.

  We can work around this a variety of ways, but it would be nice to have a 
root cause..

 Hypothesizing,
Bob T


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu
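One workaround sketch for the suspected cause: if the client latches onto the POSTSCHEDULECMD file when the schedule starts, pointing POSTSCHEDULECMD at a small wrapper that is never rewritten (and having the nightly job regenerate only the script the wrapper resolves at run time) should survive multi-day backups, under either hypothesis about when the client opens the file. All paths and file names below are invented for the demo:

```shell
# The generated script, rewritten nightly by the configuration job
# (stands in for the real unmount commands):
mkdir -p /tmp/postsched_demo
cat > /tmp/postsched_demo/vfilers_postsched.current <<'EOF'
#!/bin/sh
echo "unmounting shares"
EOF
chmod +x /tmp/postsched_demo/vfilers_postsched.current

# The stable wrapper the TSM client would actually invoke; it is never
# rewritten, so it stays valid even after a 34-hour backup, and it defers
# to whatever "current" script exists when the backup finally ends:
cat > /tmp/postsched_demo/vfilers_postsched <<'EOF'
#!/bin/sh
exec "$(dirname "$0")/vfilers_postsched.current"
EOF
chmod +x /tmp/postsched_demo/vfilers_postsched

# Running the wrapper runs the latest generated script:
/tmp/postsched_demo/vfilers_postsched
```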




Re: Help running commands via the TSM REST api

2016-11-22 Thread Robert Jose
Hello Jakob
I do not know the curl application very well, but I did notice that when
using the --data option, it sets the request content type to
"application/x-www-form-urlencoded". Unfortunately, the OC REST API does
not consume that media type. You will need to change the content type to
"text/plain".

Sorry I couldn't be more help.

Rob Jose
Spectrum Protect OC UI Developer / L3 Support

-Original Message-
From: "Jakob Jóhannes Sigurðsson" [mailto:jakob.sigurds...@nyherji.is]
Sent: Friday, November 18, 2016 3:13 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Help running commands via the TSM REST api

Hi all.

I saw a message here regarding running registration of clients against the
REST API.
I am not able to get this working and the user did not include his fix in
his reply.

Has anyone successfully tried this? I keep getting Internal Server error
replies and the OC log says:

[11/18/16 11:08:42:206 GMT] 0001eeaf
org.apache.wink.server.internal.registry.ResourceRegistryI The system
cannot find any method in the com.ibm.tsm.gui.rest.CliResource class that
consumes application/x-www-form-urlencoded media type. Verify that a method
exists that consumes the  media type specified.

Here is my command:
[jakob@centos-minimal ~]$ curl -s -X POST -H "OC-API-Version: 1.0" -H
"Accept: application/json" --insecure --user jakob:foobar
https://192.168.1.10:11090/oc/api/cli/issueCommand --data "query option"
Response:
{"error":{"httpStatus":"Internal Server
Error","errorCode":500,"message":null}}

Regards, Jakob
jak...@nyherji.is
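Per Rob's reply above, curl's --data option defaults the request content type to application/x-www-form-urlencoded, which the OC API rejects; forcing it to text/plain should clear the 500 error. A hedged sketch, using the host, port, and credentials from the original post:

```
curl -s -X POST \
  -H "OC-API-Version: 1.0" \
  -H "Accept: application/json" \
  -H "Content-Type: text/plain" \
  --insecure --user jakob:foobar \
  --data "query option" \
  https://192.168.1.10:11090/oc/api/cli/issueCommand
```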


Re: Can't Restore on CIFS/DFS backup server yet can backup

2016-11-17 Thread Robert Talda
Zoltan:
  Okay, I’m getting mixed messages from your post.  So my answer may be of 
limited value.

  “Signing in via browser” implies to me you are using the Web client - am I 
correct?

  Assuming so,
- BACKUP is looking at the local filesystems - not the information from the TSM 
Server
- RESTORE would be looking at the information from the TSM Server.

  Based on that, I suspect it may be the id you are using to sign on via the 
Web client.  

FWIW,
Bob T


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Nov 16, 2016, at 2:19 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
> 
> I have an interesting situation when attempting to perform a restore on a
> node that performs backups of CIFS/DFS mounts.
> 
> The client is accessed via http / dsmcad since this server is setup to
> backup multiple CIFS shares and now moving to DFS.
> 
> After signing in via browser, I can bring up BACKUP and see lots and lots
> of file shares/directories, etc.
> 
> But when I bring up RESTORE, I don't see any of that.  Just three
> higher-level directories.
> 
> Occupancy shows 4.7-million files backed up and yes I tried toggling
> active/inactive.  Checking the dsmsched log shows it scanning 4.7 million
> files and backed up 1300 just last night.
> 
> Both the client and server are 7.1.6.3 so I can't go any higher.
> 
> Recently rebooted everything.  Nothing out of the ordinary in
> dsmwebcl.log.  Just shows sessions starting and there was a password issue
> for the user who contacted me about this.  Even shows idle-timeouts.
> 
> So what gives?  Why can't I restore/see anything?  Obviously, I am a full
> system admin so that shouldn't be an issue.
> 
> 
> 
> 
> 
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator (in training)
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html



STG type directory

2016-11-10 Thread Robert Ouzen
Hello all

I am now on TSM server 7.1.7.0. In the past I used the MOVE NODEDATA command 
to move a client's data from stg1 to stg2.

Now my storage pools are of type DIRECTORY, and I can't figure out how to move 
a client's data from one directory-type storage pool to another.

Any ideas?

Regards Robert

Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_________________________________
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Postal code: 3498838
Computing and Information Systems Division website: http://computing.haifa.ac.il


Is anyone supporting Spectrum Protect clients running on systems using Microsoft's System Center Endpoint Protection ?

2016-11-04 Thread Robert Talda
Folks:
  Our university just began a major rollout of the Microsoft System Center 
Endpoint Protection (SCEP) product (replacing the Symantec equivalent) and 
we’ve noticed some new behavior from the Spectrum Protect client for Windows.

  In particular, when the client encounters an infected file, it ends the 
backup, writing the summary statistics to the log and exiting with an error 
code 12.  We caught this behavior when a backup of a volume with 2.9 million 
files ended after only 260K files were processed.  

  What we :think: might be happening: the Spectrum Protect client has detected 
the change to file, and attempts to read it to back it up.  This triggers a 
scan by the SCEP that reveals the virus.  The SCEP then blocks the Spectrum 
Protect client request, resulting in the following error message:

ANSE ..\..\common\winnt\ntrc.cpp(771): Received Win32 RC 225 (0x00e1) 
from FileOpen(): CreateFile. Error description: Operation did not complete 
successfully because the file contains a virus or potentially unwanted software.

  It appears that this stops the Spectrum Protect client producer "thread" from 
walking the filesystem; then, when the "consumer" thread catches up, the 
client exits.  For example, in the backup that generated the error message 
above, the last lines written were:

Normal File-->   444 \\\  ** Unsuccessful **

Total number of objects inspected:  267,699
Total number of objects backed up:   26
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired: 98
Total number of objects failed:   0
Total number of objects encrypted:0
Total number of subfile objects:  0
Total number of objects grew: 0
Total number of retries:  0
Total number of bytes inspected: 168.77 GB
Total number of bytes transferred:40.08 MB
Data transfer time:0.10 sec
Network data transfer rate:  375,325.92 KB/sec
Aggregate data transfer rate:  5.19 KB/sec
Objects compressed by:0%
Total data reduction ratio:   99.98%
Subfile objects reduced by:   0%
Elapsed processing time:   02:11:39

  And the backup exits with a return code 12.  

  It :feels: like the Spectrum Protect client is being “nicely” killed by the 
SCEP for accessing the infected file, but it may also be a bug in the Spectrum 
Protect code.  

  I should note that quarantining and deleting the infected file allows the 
next backup to proceed - until it encounters another infected file.  (a.k.a. 
Apply, lather, rinse, repeat)

This output was generated by a Spectrum Protect client for Windows v7.1.4.0 
running on a Windows 2012 Server R2 platform.  The backup was initiated by a 
script, rather than by the TSM scheduler, but I believe the same behavior will 
happen for a scheduled backup.  I also believe this will happen on Apple 
Macintosh OS X systems that are running SCEP as well.  I’m working with our 
security office to set up test scenarios to verify this behavior.

But I thought I’d reach out and see if others have noticed this.

Thanks!
Bob T.


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu




Re: Client issues after upgrade

2016-11-03 Thread Robert Talda
Zoltan:
  Just a hunch, but those files sound like the Microsoft redistributable 
libraries that are required for the client.  So the upgrade consists of more 
than just running msiexec; there are a couple of vcredist_x86.exe calls to make 
as well.  It just feels like those calls were missed.


FWIW,
Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Nov 3, 2016, at 12:54 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
> 
> Thanks for the suggestion.  My OS person with the problem just emailed with
> "*Found solution - the files "msvcr110.dll" and "msvcp110.dll" are missing
> from C:\windows\system32.  Copying the files to the system32 directory
> fixed the problem. **By attempting to launch the TSM Backup BUI and command
> line led to further error messages that pointed to the files missing. *"
> 
> Still puzzling why the upgrade would cause this
> 


Convert stg help

2016-10-29 Thread Robert Ouzen
Hello All

I want to begin the CONVERT STGPOOL process from FILE to DIRECTORY on my TSM 
server, version 7.1.7.0.

Any suggestions or tips on when to run the convert process, and on the 
preparation needed when defining the directory-container storage pool?

Any help will be really appreciated.

T.I.A  Best Regards

Robert
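For reference, the broad shape of the procedure on a 7.1.7 server would be to define the target directory-container pool and its directories first, then run the conversion in bounded windows. Pool names, paths, and option values below are invented placeholders; note that conversion is one-way, so check the CONVERT STGPOOL documentation for your exact level before running:

```
define stgpool dirpool1 stgtype=directory
define stgpooldirectory dirpool1 /tsmdata/dir1,/tsmdata/dir2
convert stgpool filepool1 dirpool1 maxprocess=4 duration=60
```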


Versions

2016-10-04 Thread Robert Ouzen
Hello all

A quick question: if my TSM server version is 7.1.6.0, is it possible to 
upgrade my TSM for VE to 7.1.6.2?

Or do I need to upgrade my TSM server version to 7.1.7.0 first?

T.I.A Regards

Robert


Re: [ADSM-L] Add on tools /software

2016-09-30 Thread Robert Ouzen
Hello
Try tsmmanager program
Robert



Sent from my Samsung Galaxy smartphone.


-------- Original message --------
From: Saravanan Palanisamy <evergreen.sa...@gmail.com>
Date: 30.9.2016 18:39 (GMT+02:00)
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Add on tools /software

Greetings !!

Do we have any tool or software to manage the TSM client scheduler from a 
centralised place?

Something like checking TSM logs and restarting TSM services without logging 
into the client server.

Would really appreciate your inputs

Regards
Sarav
+65 9857 8665


Re: VM restore output

2016-09-25 Thread Robert Ouzen
Hello Fran

First thanks for the script .

What is strange is that for activity BACKUP, SUB_ENTITY contains the name of 
the machine, but for activity RESTORE, SUB_ENTITY is null!

Regards Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Francisco Molero
Sent: Sunday, September 25, 2016 12:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VM restore output

Hello, 

I think you are not choosing the right table ( summary_extended ). 

One example: 

select cast((sub_entity) as char(36)) as "VM MACHINE",  cast((entity) as 
char(36)) as "Data Mover",DATE(start_time) as DATE, TRANSLATE('a bc:de:fg', 
DIGITS(end_time-start_time), '___abcdefgh_',' ') as "ELAPTIME",  
cast((activity) as char(15)) as OPERATION,  CAST((bytes/1024/1024/1024) AS 
decimal(8,1)) AS "GBYTES", successful   from summary_extended   WHERE ( 
activity='RETRIEVE' OR activity='RESTORE') and Sub_entity is not null  ORDER BY 
DATE  DESC 

I also think the VMware operations are only restores and not retrieves.. 

I hope this help you. 

Fran



From: Dmitry Dukhov <dem...@gmail.com>
To: ADSM-L@VM.MARIST.EDU
Sent: Sunday, 25 September 2016 10:34
Subject: Re: VM restore output
nope
it’s impossible

names of vm are inside filespace

Dmitry


> On 25 Sep 2016, at 09:26, Robert Ouzen <rou...@univ.haifa.ac.il> wrote:
> 
> Hello all
> 
> Try to figure how to retrieve information of restore VM from TSM for VE.
> 
> I wrote a script to retrieve , restore information but got for entity the 
> name of the proxy and not the name of the VM machine.
> 
> Here the script:
> 
> SELECT cast((entity) as char(36)) as "Node Postback" , 
> DATE(start_time) as DATE, cast((activity) as char(15)) as OPERATION, 
> cast(float(SUM(bytes))/1024/1024/1024 as DEC(8,1)) as GB FROM summary 
> WHERE  activity='RETRIEVE' OR activity='RESTORE' GROUP BY 
> entity,DATE(start_time),activity ORDER BY DATE  DESC
> 
> 
> T.I.A Regards

   


VM restore output

2016-09-25 Thread Robert Ouzen
Hello all

Try to figure how to retrieve information of restore VM from TSM for VE.

I wrote a script to retrieve , restore information but got for entity the name 
of the proxy and not the name of the VM machine.

Here the script:

SELECT cast((entity) as char(36)) as "Node Postback" , DATE(start_time) as 
DATE, cast((activity) as char(15)) as OPERATION, 
cast(float(SUM(bytes))/1024/1024/1024 as DEC(8,1)) as GB FROM summary WHERE  
activity='RETRIEVE' OR
activity='RESTORE' GROUP BY entity,DATE(start_time),activity ORDER BY DATE  DESC

Output:

tsm: POSTBACK>run restore_list

Node Postback                         DATE         OPERATION       GB
------------------------------------  -----------  ---------  -------
HAIFA_DATACENTER (VMPROXY2)           2016-09-22   RESTORE      100.0

T.I.A Regards


Script help

2016-09-24 Thread Robert Ouzen
Hi  to all


I have this script to receive information about restore and retrieve activity.

SELECT cast((entity) as char(36)) as "Node Adsm2", DATE(start_time) as DATE, 
cast((activity) as char(15)) as OPERATION, 
cast(float(SUM(bytes))/1024/1024/1024 as DEC(8,1)) as GB FROM summary WHERE 
activity='RETRIEVE' OR activity='RESTORE' GROUP BY 
entity,DATE(start_time),activity ORDER BY DATE DESC

I want to add the ADDRESS field from the summary table too.

But when I added it, I got many rows - one per distinct ADDRESS - even when I 
added ADDRESS to the GROUP BY.

Any idea how to get a single ADDRESS per row?

Best Regards

Robert
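One common way around Robert's GROUP BY problem, assuming the SUMMARY table's ADDRESS column is the one he added, is to aggregate the address instead of grouping on it, e.g. taking MIN(address) so each (entity, date, activity) group yields exactly one address. A sketch of his query with that change:

```sql
SELECT cast((entity) as char(36)) as "Node Adsm2",
       DATE(start_time) as DATE,
       cast((activity) as char(15)) as OPERATION,
       cast(float(SUM(bytes))/1024/1024/1024 as DEC(8,1)) as GB,
       -- aggregate ADDRESS so it does not multiply the rows:
       MIN(cast((address) as char(40))) as ADDRESS
FROM summary
WHERE activity='RETRIEVE' OR activity='RESTORE'
GROUP BY entity, DATE(start_time), activity
ORDER BY DATE DESC
```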


Re: IBM reneges on Enhanced support policy for TSM 6.4?

2016-09-17 Thread Robert J Molerio
No wonder TSM customers have been migrating to other solutions.

On Sep 17, 2016 15:09, "Schofield, Neil (Storage & Middleware, Backup &
Restore)"  wrote:

> Classification: Public
>
> I always thought IBM's 'Enhanced' support policy was supposed to guarantee
> customers a *minimum* of 5 years support from the GA date of a product
> version.
>
> Was anyone else surprised to see the recent announcement that TSM 6.4
> customers will be cut adrift after only 4 years and 10 months?
> http://www.ibm.com/software/support/lifecycleapp/PLCDetail.wss?synkey=
> V649976C05962P29-Z752595R47563G59-C915146G66275T24
>
> Regards
>
> Neil Schofield
> IBM Spectrum Protect SME
> Backup & Recovery | Storage & Middleware | Central Infrastructure Services
> | Infrastructure & Service Delivery | Group IT
> LLOYDS BANKING GROUP
> 
>
>
>
>


Backup image question

2016-09-13 Thread Robert Ouzen
Hi to all

I want to make a backup image of a drive that we no longer need to back up.

My question is: is there a size limit on the image created?

Windows drive, backup image of P:
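For context, a hedged sketch of the client-side command in question (drive letter as above; run on the node that owns the drive):

```
dsmc backup image P:
```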

tsm: ADSM2>q occ vod4

Node Name  Type  Filespace  FSID  Storage    Number of  Physical Space  Logical Space
                 Name             Pool Name      Files   Occupied (MB)  Occupied (MB)
---------  ----  ---------  ----  ---------  ---------  --------------  -------------
VOD4       Bkup  \\vod4\p$     4  LTO5          14,185     1,775,444.5    1,774,923.7

TSM  client 7.1.2.2
Tsm Server  7.1.6.0

Best Regards

Robert


Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_________________________________
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Postal code: 3498838
Computing and Information Systems Division website: http://computing.haifa.ac.il


Re: Which process generates ANR1423W messages?

2016-09-01 Thread Robert Talda
Folks:
  Thanks for all the responses!

  In this case, the culprit was my co-worker, who didn’t realize that I was 
working on the issue, and issued a pair of “DELETE VOLUME” commands that 
ultimately resulted in the messages below. 

  Somewhat disturbing, though, that the only record was the ANR2017I messages 
documenting the command issuance…

Bob T

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Sep 1, 2016, at 3:07 PM, Reese, Michael A (Mike) CIV USARMY 93 SIG BDE 
> (US) <michael.a.reese62@mail.mil> wrote:
> 
> My observation has been that it is one of those mysterious threads running in 
> the background that generates the ANR1423W messages so there is no associated 
> session/process and no entry in the activity log.  Typically, these threads 
> run once an hour, measured from the time that TSM was last started.
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Robert Talda
> Sent: Thursday, September 01, 2016 2:23 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [Non-DoD Source] [ADSM-L] Which process generates ANR1423W messages?
> 
> Folks:
>  Does anyone know which process generates ANR1423W messages?  The message 
> itself is somewhat innocuous:
> 
> ANR1423W Scratch volume VV is empty but will not be deleted - volume 
> access mode is “offsite”
> 
>  but the intriguing part is there is no session or process associated with 
> the message, either in the activity log or in the ACTLOG table.  Nor are 
> there any entries in the summary table for these entities - and the only 
> process running at the time was a storage pool backup for a different storage 
> pool.  There were client backups in progress, but this message originated 
> from the server.
> 
>  I had two volumes with errors that I was struggling to get the data off for 
> several days - and suddenly, magically, an ANR1423W appeared for both 
> volumes.  Headache gone, curiosity piqued.
> 
> Thanks in advance,
> Bob T
> 
> 
> Robert Talda
> EZ-Backup Systems Engineer
> Cornell University
> +1 607-255-8280
> r...@cornell.edu
> 
> 



Which process generates ANR1423W messages?

2016-09-01 Thread Robert Talda
Folks:
  Does anyone know which process generates ANR1423W messages?  The message 
itself is somewhat innocuous:

ANR1423W Scratch volume VV is empty but will not be deleted - volume access 
mode is “offsite”

  but the intriguing part is there is no session or process associated with the 
message, either in the activity log or in the ACTLOG table.  Nor are there any 
entries in the summary table for these entities - and the only process running 
at the time was a storage pool backup for a different storage pool.  There were 
client backups in progress, but this message originated from the server.

  I had two volumes with errors that I was struggling to get the data off for 
several days - and suddenly, magically, an ANR1423W appeared for both volumes.  
Headache gone, curiosity piqued.

Thanks in advance,
Bob T


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu




Re: Fine tuning INCLUDE/EXCLUDE and EXCLUDE.DIR

2016-08-26 Thread Robert Talda
Harold:
   For what it's worth: been there, done this.

  Yes, adding the EXCLUDE.DIR statements will improve backup performance - 
AFTER the first backup, during which the client will tell the server to expire 
all the files and folders that are no longer backed up.  If you delete the 
filespace first, you avoid this performance hit - but you lose the older data 
already stored in TSM (which I'm sure you know).  We've let customers choose, 
because they know better than we do the value of that older data.

Hope this helps
Bob 


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Aug 26, 2016, at 2:21 PM, Vandeventer, Harold [OITS] 
> <harold.vandeven...@ks.gov> wrote:
> 
> One of my clients has a server running ImageNow.  There are millions of files 
> and several thousand folders that take over 24 hours for the daily 
> incremental.
> 
> My goal is to backup a single ImageNow folder and reduce the elapsed time to 
> traverse thousands of folders that don't need to be backed up.
> 
> We worked out a setup for a new node in an effort to backup only a given path 
> within the ImageNow structure.
> 
> The following dsm.opt statements are in place:
> DOMAIN J:
> EXCLUDE J:\...\*
> INCLUDE J:\IMAGENOW\SUBOB_OSM\...\*
> 
> That was successful in backing up only the files and subfolders/files in 
> SUBOB_OSM.  But it also backed up thousands of directories on J:\.  Most of 
> them are in the \IMAGENOW path.
> 
> My plan is to add a few EXCLUDE.DIR statements such as:
> EXCLUDE.DIR J:\IMAGENOW\54_OSM
> EXCLUDE.DIR J:\IMAGENOW\DEFAULT_OSM
> 
> The question:  will adding EXCLUDE.DIR now, after the node has backed up all 
> those paths, result in a performance improvement?
> 
> I suspect Spectrum Protect server will send all the directory names to the 
> node, but hopefully the work on the node side will be much faster.
> 
> Or, will we need to delete the filespace, then start over with a fresh 
> "incremental forever" backup to clear out all that directory structure?
> 
> Thanks.
> 
> Harold Vandeventer
> Office of Information Technology Services
> 2800 SW Topeka Blvd
> Topeka, KS 66611
> (785) 296-0631
> harold.vandeven...@ks.gov | www.oits.ks.gov
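Putting the statements from the thread together, the resulting dsm.opt fragment (paths exactly as given above; whether further EXCLUDE.DIR lines are needed depends on which other IMAGENOW subfolders exist) would look like:

```
DOMAIN J:
EXCLUDE J:\...\*
INCLUDE J:\IMAGENOW\SUBOB_OSM\...\*
EXCLUDE.DIR J:\IMAGENOW\54_OSM
EXCLUDE.DIR J:\IMAGENOW\DEFAULT_OSM
```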
> 



Re: Windows Client Upgrades

2016-08-19 Thread Robert Talda
Rick, et al:
  I’m going to wade in cautiously here, as in my experience, every situation is 
different.

  We’ve experienced the reboot issue here at Cornell when installing or 
upgrading the TSM client for Windows, but the reboots always seem to be 
associated with the redistributable library installers provided by Microsoft.  
These appeared in v6.3.1 (for us), and while the reboot issue was documented 
for silent installs:

http://www.ibm.com/support/knowledgecenter/SSGSG7_6.3.0/com.ibm.itsm.client.doc/t_inst_winclient.html

we certainly missed it.  When the issue was highlighted in the next release of 
the client (v.6.4):

http://www.ibm.com/support/knowledgecenter/SSGSG7_6.4.0/com.ibm.itsm.client.doc/c_inst_winclientreboot.html

we figured we weren’t the only ones.  We worked with the various support groups 
on campus to separate the update of the redistributable libraries from the 
install/update of the client itself; most such groups included the 
redistributable libraries in with security patches known to require a reboot, 
so there were no “surprise” reboots.

Flash foward to the current client (v 7.1.6.0) which we are evaluating for 
rollout to our users.  While the main install documentation has remained 
unchanged, cautioning about the possibility of reboot, there is an interesting 
update to the documentation in this tech note:

http://www-01.ibm.com/support/docview.wss?uid=swg27039350

In the section "Per APAR IT14776” there are examples of invocations of the 
redistributable library package (vcredist_x86.exe) with switches that appear to 
quash the restart.  I emphasize the “appear” - it hasn’t happened yet in our 
testing, but I am sharing this in the interest of learning what others 
experience.
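The invocations that tech note appears to describe are the standard quiet, no-restart switches of the Microsoft redistributable installers; a hedged sketch (the exact file names and locations shipped in the client install image may differ):

```
vcredist_x86.exe /install /quiet /norestart
vcredist_x64.exe /install /quiet /norestart
```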

Bob


Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu<mailto:r...@cornell.edu>


On Aug 19, 2016, at 8:22 AM, Rick Adamson 
<rickadam...@segrocers.com<mailto:rickadam...@segrocers.com>> wrote:

Roger,
Agree.  I should clarify the issue I mentioned happens even on new installs.

-Rick Adamson


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Friday, August 19, 2016 12:37 AM
To: ADSM-L@VM.MARIST.EDU<mailto:ADSM-L@vm.marist.edu>
Subject: Re: [ADSM-L] Windows Client Upgrades

My experience has been that you can avoid the reboot IFF you stop all scheduler 
and/or Client Acceptor Daemon processes before you begin installing the new 
version. It also helps to remove schedulers and/or C.A.D., using the wizard in 
the old-version GUI client before upgrading.
Then put them back after the upgrade, so that they are guaranteed to be running 
on the new version, and you shouldn't have to reboot. This actually makes sense.

Roger Deschner  University of Illinois at Chicago 
rog...@uic.edu<mailto:rog...@uic.edu>
==I have not lost my mind -- it is backed up on tape somewhere.=
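On a default Windows setup, Roger's stop/upgrade/start sequence translates roughly to the following (the service names are the defaults created by the client configuration wizard and may differ on your machines):

```
net stop "TSM Client Acceptor"
net stop "TSM Client Scheduler"
rem ...run the new client installer here...
net start "TSM Client Scheduler"
net start "TSM Client Acceptor"
```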


On Thu, 18 Aug 2016, Rick Adamson wrote:

I have experienced random "unprompted" reboots performing manual
installs on Windows clients from 7.1.1.0 to 7.1.6.0 Haven't nailed down exactly 
why yet, still gathering info
On one machine MS updates had been applied and may not have been restarted, on 
another it seemed to happen after installing the first C++ prerequisite.

-Rick Adamson


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
Of Kamp, Bruce (Ext)
Sent: Thursday, August 18, 2016 10:40 AM
To: ADSM-L@VM.MARIST.EDU<mailto:ADSM-L@vm.marist.edu>
Subject: Re: [ADSM-L] Windows Client Upgrades

If all prerequisites are already installed & all TSM processes are stopped you 
shouldn't need to.

From what I have seen even if it asks to reboot basic functionality remains.



Bruce Kamp
GIS Backup & Recovery
(817) 568-7331
e-mail: mailto:bruce.k...@novartis.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
Of David Ehresman
Sent: Thursday, August 18, 2016 9:31 AM
To: ADSM-L@VM.MARIST.EDU<mailto:ADSM-L@vm.marist.edu>
Subject: [ADSM-L] Windows Client Upgrades

Can one upgrade a Windows 7.1.x client to a 7.1.somethinghigher client without 
a reboot or do all Windows 7.1 upgrades require a reboot?

David




Re: Backup VE via Resource Pool

2016-08-10 Thread Robert Ouzen
Hello Marcel and Bas

Thank you for the information

I will try the VMFOLDER= option.

Is it possible to run it from the command line for testing (before adding it to dsm.opt)?

BACKUP VM ..

I could not find the correct syntax.

T.I.A Regards

Robert


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marcel 
Anthonijsz
Sent: Wednesday, August 10, 2016 3:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup VE via Resource Pool

Hi Robert,

There is no option (yet) to directly back up a VMware Resource Pool. If you have 
a look at the dsmc backup command, there are following options for VMware 
backup:
Ref:
https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/client/r_cmd_bkupvm.html
and :
https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/client/r_opt_domainvmfull.html

The domain level parameters are:
all-vm
all-windows
vmhost
vmfolder
vmhostcluster
vmdatastore

No vmresourcepool option here :-(. You could create an RFE for that.

I manage a lot of backups using the VMFOLDER option, i.e. move a VM into a 
dedicated VMware folder and it gets backed up by that data mover.
A workaround we discussed at the last Dutch Spectrum Protect user group is to 
create a script that lists the VMs and generates a backup script from that list.
This requires scheduled (manual or automated) regeneration of the scripts, so I 
opted for the VMFOLDER option as it is easily manageable, also by the VMware 
administrators. We have found that nested VM folders also work, so you can 
have multiple folders with the same name in your folder structure; TDP for 
VE picks up all the VMs in those folders.
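The scripted workaround mentioned above can be very small. As a sketch: given a list of VM names (however you export it from vCenter), emit one dsmc command per VM. The mode and backup-type flags mirror ones used elsewhere in this archive; the VM names and the helper itself are placeholders:

```python
def make_backup_script(vm_names):
    """Emit one 'dsmc backup vm' command per VM name."""
    template = 'dsmc backup vm "{0}" -vmbackuptype=fullvm -mode=ifincremental'
    return "\n".join(template.format(vm) for vm in vm_names)

# Hypothetical inventory export:
print(make_backup_script(["appsrv01", "dbsrv01"]))
```

Regenerating this on a schedule is exactly the maintenance burden that makes the VMFOLDER option the easier choice.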




2016-08-10 3:40 GMT+02:00 Robert Ouzen <rou...@univ.haifa.ac.il>:

> Hello to all
>
> We decided to back up our VMware environment with TSM VE 7.1.6 via
> Resource Pool   !
>
> Did you have example what line I need to add in  adequate dsm.opt  to 
> configure Resource Pool backup???
>
> T.I.A .Best Regards
>
> Robert
>



--
Kind Regards, Groetje,

Marcel Anthonijsz
T: +31(0)299-776768
M:+31(0)6-53421341


Backup VE via Resource Pool

2016-08-09 Thread Robert Ouzen
Hello to all

We decided to back up our VMware environment with TSM VE 7.1.6 via Resource Pool!

Do you have an example of the line I need to add to dsm.opt to configure a Resource Pool backup?

T.I.A .Best Regards

Robert


Move Container question

2016-08-06 Thread Robert Ouzen
Hi to all

Can anybody give me more information about this process for storage pools of type 
DIRECTORY?

ANR0986I Process 263 for Move Container (Automatic)   running in the BACKGROUND 
processed 73,945 items for a
  total of 10,705,469,440 bytes with a completion state 
of  SUCCESS at 03:23:06. (PROCESS: 263)

It runs automatically - where is it configured?

Is there a way to create a script to report on these runs? When I tried the 
regular Q ACTLOG begind=-3 search=ANR0986I,

I also got Reclamation and Move Container and Identify Duplicates and so on.

Thanks, Robert
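One way to narrow the activity-log query is to combine the MSGNO and SEARCH parameters of QUERY ACTLOG; ANR0986I is the generic process-completion message, so the search string does the narrowing. A sketch (verify both parameters exist at your server level):

```
q actlog begind=-3 msgno=986 search="Move Container"
```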


Re: AIX to Linux migration performance

2016-07-29 Thread Robert J Molerio
I thought that functionality was long gone in 5.x?



Thank you,

Systems Administrator

Systems & Network Services
NYU IT/Technology Operations Services

212-998-1110

646-539-8239

On Fri, Jul 29, 2016 at 11:43 AM, Bill Mansfield <
bill.mansfi...@us.logicalis.com> wrote:

> Has anyone migrated a big TSM server from AIX to Linux using the
> Extract/Insert capability in 7.1.5?  Have a 2.5TB DB to migrate, would like
> some idea of how long it might take.
>
> Bill Mansfield
>


Re: Spectrum Protect support for I series

2016-07-29 Thread Robert J Molerio
Start using BRMS?

On Jul 29, 2016 6:22 AM, "Chavdar Cholev"  wrote:

> Yeap,
> very strange strategy I have customers that need this
>
> On Fri, Jul 29, 2016 at 12:52 PM, Saravanan Palanisamy <
> evergreen.sa...@gmail.com> wrote:
>
> > Just noticed that TSM support has been withdrawn for IBM i on TSM 6.2 ,
> Is
> > this confirmed and latest Spectrum Protect version doesn't support IBM i
> ?
> >
> >
> > With the EOS of TSM 6.2, access to TSM Servers through the TSM API on
> IBM i
> > will also reach EOS. TSM 6.2 is the *last *version of TSM that will
> support
> > IBM i. *No further versions of TSM will have API support for IBM i.*
> >
> >
> >
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20Backup%2C%20Recovery%20and%20Media%20Services%20%28BRMS%29%20for%20i/page/Tivoli%20Storage%20Manager%20%28TSM%29
> >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Saravanan
> > Mobile: +65-8228 4384
> >
>


Question about restore db

2016-07-09 Thread Robert Ouzen
Hello All

A quick question about restore db: I made the database backup with TSM server 
version 7.1.5.200.

Can I do a restore db if I have already installed TSM server 7.1.6?

Or do I need to install TSM server version 7.1.5.200 first, do the restore db, 
and then upgrade to 7.1.6?

T.I.A Regards

Robert


Re: FW: v7 upgrade woes

2016-06-21 Thread Robert J Molerio
When they first incorporated DB2 into TSM IBM used to have a dedicated DB2
support group to help with these issues. I'm wondering if they were let go
during the last IBM purge?

Thank you,

Systems Administrator

Systems & Network Services
NYU IT/Technology Operations Services

212-998-1110

646-539-8239

On Tue, Jun 21, 2016 at 9:43 AM, Rhodes, Richard L. <
rrho...@firstenergycorp.com> wrote:

> Ahhh, the joy of IBM support!
>
> We are stuck in between the narrow IBM support silos.
>
> TSM support sees that the db2 database isn't working after an upgrade test
> and sent the case to DB2 support.
> DB2 support says the database got a "already in use" error of some kind
> and wants to know what else is running. DB2 support has no idea what TSM is
> or that a TSM upgrade does.
>
> We need someone who knows TSM, the upgrade process, DB2 and all this on
> AIX.  But IBM support is incapable of having someone who crosses their
> narrow boundaries.  It seems our only way out is to either up this case to
> a Sev 1 or declare a CRITSIT and force IBM to get multiple areas involved.
>
> We are SO frustrated.  Sometimes I wonder if the people inside IBM who use
> TSM have to use the same TSM support we do.
>
> Rick
>
>
>
>
> From: Rhodes, Richard L.
> Sent: Friday, June 10, 2016 10:59 AM
> To: 'ADSM: Dist Stor Manager' 
> Subject: FW: v7 upgrade woes
>
> We've been testing . . . we tried walking all the upgrades:
>
> - brought up a 6.3.5 tsm db
> - tried upgrade to 7.1.4, it failed/hung (yup, responds like all our tests)
> - restored snapshots back to the 6.3.5 setup
> - tried upgrade v7.1- WORKED, TSM comes up fine
> - tried upgrade to v7.1.1.1 - upg utility said there was nothing to
> upgrade!
> - tried upgrade to v7.3 - Upgrade worked, but TSM hangs on startup
>
> The v7.3 upgrade installed just fine and the upgrade utility ended.  But
> when we try to start TSM it just sits there hung.  While TSM is sitting
> there "hung" trying to come up, we do a "ps -ef | grep db2" and found that
> DB2 was NOT UP. (should have checked that sooner!)
>
>   ps -ef | egrep -i "db2|dsm"
> tsmuser 12910806 14221316   0 10:26:23  pts/4  0:00
> /opt/tivoli/tsm/server/bin/dsmserv -i /tsmdata/tsmsap1/config
> tsmuser 17432632 12910806   0 10:26:24  pts/4  0:00 db2set -i tsmuser
> DB2_PMODEL_SETTINGS
>root 199885461   0 09:55:59  -  0:00
> /opt/tivoli/tsm/db2/bin/db2fmcd
>
>
> You can leave the TSM startup "hung" like this for hours and it never goes
> any further.
>
> We keep digging.  Support sent us to the DB2 team, hopefully they can help.
>
> Rick
>
>
>
>
>
> From: Rhodes, Richard L.
> Sent: Tuesday, June 07, 2016 3:32 PM
> To: 'ADSM: Dist Stor Manager'
> Cc: Ake, Elizabeth K.
> Subject: v7 upgrade woes
>
> This is half plea for help and half rant (grr).
>
> For the past two months we've been trying to test a TSM v6.3.5 to
> v7.1.4/v7.1.5 upgrade . . . . and have completely failed!
>
> TSM v6.3.5
> AIX 6100-09
> upgrade to TSM v7.1.4 and/or v7.1.5
>
> When we run the upgrade it runs to this status message:
> Installing: [(99%) com.tivoli.dsm.server.db2.DB2PostInstall ]
> Then it sits, doing nothing.  AIX sees no db2 activity, no java activity,
> no nothing.  It sits for hours!  We've let it sit once for 13 hours
> (overnight) and the above message was still there.  Eventually we kill it.
>
> We've tried:
>   two separate AIX systems
>   two separate TSM's (one 400gb db and one 100gb db)
>   probably tried 8-10 upgrades, all failed
>
> Support had us try a DB2 standalone upgrade and it also never finished.
>
> We've worked with support and got nowhere.  They have no idea what is
> happening, let alone any idea of how to figure out what is happening.  They
> did find one issue - we have to install a base XLC runtime v13.1.0.0 before
> the v7 upgrade.
>
> Any thoughts are welcome.
>
> Rick
>
>
>
>
>
> -
>
> The information contained in this message is intended only for the
> personal and confidential use of the recipient(s) named above. If the
> reader of this message is not the intended recipient or an agent
> responsible for delivering it to the intended recipient, you are hereby
> notified that you have received this document in error and that any review,
> dissemination, distribution, or copying of this message is strictly
> prohibited. If you have received this communication in error, please notify
> us immediately, and delete the original message.
>


VM snapshot question

2016-06-21 Thread Robert Ouzen
HI to all

A quick question before I investigate. We use TSM for VE 7.1.4 to back up our 
VMware environment.

Suppose I have a VM with several snapshots on it, and after a VE backup I 
restore the VM to an alternate location.

What will be the status of this machine - will I see the same snapshots on it 
as on the original machine?

Best Regards

Robert


Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | 3498838
Computing and Information Systems Division: http://computing.haifa.ac.il


Re: selective backup

2016-06-09 Thread Robert Ouzen
Or INCREMENTAL with Absolute mode (no objects parameter needed)

Robert Ouzen
Haifa University
Israel

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Thursday, June 9, 2016 5:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] selective backup

Ok, for whatever reason I am having a bad brain day.

I would like to do a selective backup for an entire client.
Subdir=yes is in the options file.

What should be in the "object" field in the schedule?

"/*" perhaps?
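Putting the replies in this thread together, the schedule could take either of these forms (the domain and schedule names are placeholders):

```
def sched STANDARD SEL_ALL  action=selective   objects="/*" options="-subdir=yes"
def sched STANDARD INCR_ABS action=incremental options="-absolute"
```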


Percent Total Space Saved calculation

2016-04-21 Thread Robert Ouzen
Hi Guys

With TSM server 7.1.5 and the compression feature turned on for a storage pool 
of type DIRECTORY,

I am trying to figure out how the Total Space Saved percentage is calculated - 
in my case 58.26%.

The command Q STG DEDUPCONTAINER F=D shows:

tsm: TSMREP>q stg dedup* f=d

   Storage Pool Name: DEDUPCONTAINER
   Storage Pool Type: Primary
   Device Class Name:
   Storage Type: DIRECTORY
   Estimated Capacity: 10,240 G
  Space Trigger Util:
  Pct Util: 3.5
  Description: TSMREP Replication
Delay Period for Container Reuse: 0
 Deduplicate Data?: Yes
Processes For Identifying Duplicates:
Compressed: Yes
   Deduplication Savings: 166,578 M (18.89%)
  Compression Savings: 339 G (48.54%)
   Total Space Saved: 502 G (58.26%)

If I run a SELECT * against STGPOOLS, I get only SPACE_SAVED_MB and not the 
percentage.

Any ideas ???
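The three figures in the query output are consistent if the compression percentage is measured against the data that remains after deduplication, i.e. the two savings compound rather than add. That is an assumption about how the server derives the numbers, but it matches this output to the decimal:

```python
dedup_pct = 0.1889   # Deduplication Savings: 18.89%
comp_pct  = 0.4854   # Compression Savings:   48.54%

# Savings compound: compression applies to what deduplication left behind.
total_pct = 1 - (1 - dedup_pct) * (1 - comp_pct)
print(f"{total_pct:.2%}")  # 58.26%
```

The byte counts line up the same way: 166,578 MB (about 163 GB) of deduplication savings plus 339 GB of compression savings is roughly the 502 GB total shown.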

Best Regards


Robert


Re: SQL statement

2016-04-12 Thread Robert Talda
Skylar:
  Never thought of this - thanks for sharing!  Gonna be experimenting with some 
of my existing queries

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Apr 12, 2016, at 9:58 AM, Skylar Thompson <skyl...@u.washington.edu> wrote:
> 
> The SQL engine actually does support OUTER JOIN, it's just that the view
> schema exposed to us depends on joining on multiple keys; occupancy and
> filespace are joined with (node_name,filespace_name). You can kind of fake
> it by generating those keys in subqueries:
> 
> SELECT -
>   f.fs_key -
> FROM -
>   (SELECT node_name || ',' || filespace_name AS fs_key FROM filespaces) f -
> WHERE -
>   f.fs_key NOT IN (SELECT node_name || ',' || filespace_name AS occ_key FROM 
> occupancy)
> 
> This obviously is not going to be particularly performant but should get
> the job done.
> 
> On Tue, Apr 12, 2016 at 01:39:03PM +, Robert Talda wrote:
>> EJ:
>> 
>>  Wish I could be as helpful this time.  Sadly, I've not found a way to 
>> generate this output with a single SQL statement - the TSM SQL engine 
>> doesn't seem to support the concept of OUTER JOIN.  I've had to resort 
>> to doing 2 queries - one of the occupancy table and one of the filespaces 
>> table - and then using an external application (MS Excel, join, etc) to 
>> merge the results.  And my employer frowns on running SQL directly against 
>> DB2.
>> 
>>  Perhaps someone more clever than I has found a way that they can share.
>> 
>>  BTW, I've got the subquery to limit occupancy data to PRIMARY storage 
>> memorized...
>> 
>> Robert Talda
>> EZ-Backup Systems Engineer
>> Cornell University
>> +1 607-255-8280
>> r...@cornell.edu
>> 
>> 
>>> On Apr 12, 2016, at 7:10 AM, Loon, EJ van (ITOPT3) - KLM 
>>> <eric-van.l...@klm.com> wrote:
>>> 
>>> Hi Robert!
>>> Thanks for the tip! You're right, all missing filespaces were not in the 
>>> occupancy table. And to make things even more difficult, some filespaces 
>>> were listed twice, because data for the same filespace resides in the 
>>> diskpool and the primary pool... The latter issue I can solve by adding a 
>>> o.stgpoolpool_name parameter and only include the primary (VTL) pool 
>>> entries.
>>> Did you find a way to include the filespaces that were not in the occupancy 
>>> table? I want them listed in the exception report too, even though they are 
>>> not taking up space on the TSM server...
>>> Thanks again for your help!
>>> Kind regards,
>>> Eric van Loon
>>> Air France/KLM Storage Engineering
>>> 
>>> -Original Message-
>>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
>>> Robert Talda
>>> Sent: maandag 11 april 2016 19:34
>>> To: ADSM-L@VM.MARIST.EDU
>>> Subject: Re: SQL statement
>>> 
>>> EJ:
>>> 
>>> Are you sure the missing filespaces have data?  If not, they won't have 
>>> associated occupancy records and thus won't appear in the output.
>>> 
>>> I trip over that from time to time myself
>>> 
>>> 
>>> 
>>> Robert Talda
>>> EZ-Backup Systems Engineer
>>> Cornell University
>>> +1 607-255-8280
>>> r...@cornell.edu
>>> 
>>> 
>>>> On Apr 11, 2016, at 10:07 AM, Loon, EJ van (ITOPT3) - KLM 
>>>> <eric-van.l...@klm.com> wrote:
>>>> 
>>>> Hi guys!
>>>> I'm trying to create a SQL statement which should list all filespaces, 
>>>> along with their occupancy, with a backup date longer than 2 days ago, but 
>>>> only for nodes with an last access date of today or yesterday. If the node 
>>>> hasn't contacted the server for two days or more it's reported in a 
>>>> different report.
>>>> This is what I came up with thus far:
>>>> 
>>>> SELECT f.node_name AS "Node name", f.filespace_name AS "Filespace", 
>>>> to_char(char(f.backup_end),'-MM-DD') AS "Last Backup Date", 
>>>> CAST(ROUND(o.physical_mb/1024) as int) as "GB Stored" FROM nodes n, 
>>>> filespaces f, occupancy o WHERE o.node_name=n.node_name AND 
>>>> n.node_name=f.node_name AND o.filespace_name=f.filespace_name AND 
>>>> days(f.backup_end)<(days(current_date)-2) AND cast(timestampdiff(16, 
>>>> current_timestamp - n.lastacc_time) as decimal(5,1))>= 2 ORDER BY 
>>>>

Re: SQL statement

2016-04-12 Thread Robert Talda
EJ:

  Wish I could be as helpful this time.  Sadly, I’ve not found a way to 
generate this output with a single SQL statement - the TSM SQL engine doesn’t 
seem to support the concept of OUTER JOIN.  I’ve had to resort to doing 2 
queries - one of the occupancy table and one of the filespaces table - and then 
using an external application (MS Excel, join, etc) to merge the results.  And 
my employer frowns on running SQL directly against DB2.

  Perhaps someone more clever than I has found a way that they can share.

  BTW, I’ve got the subquery to limit occupancy data to PRIMARY storage 
memorized…
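The two-query merge described above is a left anti-join once (node_name, filespace_name) pairs are treated as keys. A minimal sketch with made-up rows (real input would come from two dsmadmc exports):

```python
# (node_name, filespace_name) keys, e.g. parsed from two dsmadmc -comma exports.
filespaces = {("NODEA", "/home"), ("NODEA", "/var"), ("NODEB", "/data")}
occupancy  = {("NODEA", "/home"), ("NODEB", "/data")}

# Filespaces the inner join silently drops: registered but with no stored data.
missing = sorted(filespaces - occupancy)
print(missing)  # [('NODEA', '/var')]
```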

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Apr 12, 2016, at 7:10 AM, Loon, EJ van (ITOPT3) - KLM 
> <eric-van.l...@klm.com> wrote:
> 
> Hi Robert!
> Thanks for the tip! You're right, all missing filespaces were not in the 
> occupancy table. And to make things even more difficult, some filespaces were 
> listed twice, because data for the same filespace resides in the diskpool and 
> the primary pool... The latter issue I can solve by adding a 
> o.stgpoolpool_name parameter and only include the primary (VTL) pool entries.
> Did you find a way to include the filespaces that were not in the occupancy 
> table? I want them listed in the exception report too, even though they are 
> not taking up space on the TSM server...
> Thanks again for your help!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Robert Talda
> Sent: maandag 11 april 2016 19:34
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: SQL statement
> 
> EJ:
> 
>  Are you sure the missing filespaces have data?  if not, they won’t have 
> associated occupancy records and thus won’t appear in the output.
> 
>  I trip over that from time to time myself
> 
> 
> 
> Robert Talda
> EZ-Backup Systems Engineer
> Cornell University
> +1 607-255-8280
> r...@cornell.edu
> 
> 
>> On Apr 11, 2016, at 10:07 AM, Loon, EJ van (ITOPT3) - KLM 
>> <eric-van.l...@klm.com> wrote:
>> 
>> Hi guys!
>> I'm trying to create a SQL statement which should list all filespaces, along 
>> with their occupancy, with a backup date longer than 2 days ago, but only 
>> for nodes with an last access date of today or yesterday. If the node hasn't 
>> contacted the server for two days or more it's reported in a different 
>> report.
>> This is what I came up with thus far:
>> 
>> SELECT f.node_name AS "Node name", f.filespace_name AS "Filespace", 
>> to_char(char(f.backup_end),'-MM-DD') AS "Last Backup Date", 
>> CAST(ROUND(o.physical_mb/1024) as int) as "GB Stored" FROM nodes n, 
>> filespaces f, occupancy o WHERE o.node_name=n.node_name AND 
>> n.node_name=f.node_name AND o.filespace_name=f.filespace_name AND 
>> days(f.backup_end)<(days(current_date)-2) AND cast(timestampdiff(16, 
>> current_timestamp - n.lastacc_time) as decimal(5,1))>= 2 ORDER BY 
>> f.node_name DESC
>> 
>> I am however missing several filespaces in the output returned. I must be 
>> doing something wrong but I can't find what.
>> Thanks in advance for any help!
>> Kind regards,
>> Eric van Loon
>> Air France/KLM Storage Engineering
>> 
>> 
>> For information, services and offers, please visit our web site: 
>> http://www.klm.com. This e-mail and any attachment may contain confidential 
>> and privileged material intended for the addressee only. If you are not the 
>> addressee, you are notified that no part of the e-mail or any attachment may 
>> be disclosed, copied or distributed, and that any other action related to 
>> this e-mail or attachment is strictly prohibited, and may be unlawful. If 
>> you have received this e-mail by error, please notify the sender immediately 
>> by return e-mail, and delete this message.
>> 
>> Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
>> employees shall not be liable for the incorrect or incomplete transmission 
>> of this e-mail or any attachments, nor responsible for any delay in receipt.
>> Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
>> Airlines) is registered in Amstelveen, The Netherlands, with registered 
>> number 33014286
>> 
> 
> 
> For information, services and offers, please visit our web site: 
> http://www.klm

Re: SQL statement

2016-04-11 Thread Robert Talda
EJ:

  Are you sure the missing filespaces have data?  if not, they won’t have 
associated occupancy records and thus won’t appear in the output.

  I trip over that from time to time myself



Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Apr 11, 2016, at 10:07 AM, Loon, EJ van (ITOPT3) - KLM 
> <eric-van.l...@klm.com> wrote:
> 
> Hi guys!
> I'm trying to create a SQL statement which should list all filespaces, along 
> with their occupancy, with a backup date longer than 2 days ago, but only for 
> nodes with an last access date of today or yesterday. If the node hasn't 
> contacted the server for two days or more it's reported in a different report.
> This is what I came up with thus far:
> 
> SELECT f.node_name AS "Node name", f.filespace_name AS "Filespace", 
> to_char(char(f.backup_end),'-MM-DD') AS "Last Backup Date", 
> CAST(ROUND(o.physical_mb/1024) as int) as "GB Stored" FROM nodes n, 
> filespaces f, occupancy o WHERE o.node_name=n.node_name AND 
> n.node_name=f.node_name AND o.filespace_name=f.filespace_name AND 
> days(f.backup_end)<(days(current_date)-2) AND cast(timestampdiff(16, 
> current_timestamp - n.lastacc_time) as decimal(5,1))>= 2 ORDER BY f.node_name 
> DESC
> 
> I am however missing several filespaces in the output returned. I must be 
> doing something wrong but I can't find what.
> Thanks in advance for any help!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> 
> For information, services and offers, please visit our web site: 
> http://www.klm.com. This e-mail and any attachment may contain confidential 
> and privileged material intended for the addressee only. If you are not the 
> addressee, you are notified that no part of the e-mail or any attachment may 
> be disclosed, copied or distributed, and that any other action related to 
> this e-mail or attachment is strictly prohibited, and may be unlawful. If you 
> have received this e-mail by error, please notify the sender immediately by 
> return e-mail, and delete this message.
> 
> Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
> employees shall not be liable for the incorrect or incomplete transmission of 
> this e-mail or any attachments, nor responsible for any delay in receipt.
> Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
> Airlines) is registered in Amstelveen, The Netherlands, with registered 
> number 33014286
> 



Re: missing dsmlicense

2016-04-06 Thread Robert Talda
Charles, et al:
  Been dealing with this type of issue on the various TDP software packages for 
years.  Best I can tell, the licensing people had their own sense of 
version/release/level/sub-level, and so we would often have to install older 
versions of the TDP software then upgrade to the version we wanted OR extract 
the license file from the old installer and apply it post-new install.  

  Now that IBM has discarded the version/release/level/sub-level model, I 
expect this incongruity to only get worse.  Until, of course, it gets better 
;-)



Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Apr 6, 2016, at 10:37 AM, Nixon, Charles D. (David) 
> <cdni...@carilionclinic.org> wrote:
> 
> I second this question.  So the answer we got from our storage software sales 
> team is to download 7.1.3, extract the license file.  Then, download the 
> version you want to want to use, extract and install that, then copy the file 
> over.  Seems like way more work than it should be.  Do we only get new 
> license files included once a year?
> 
> 
> 
> 
> From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of Matthew 
> McGeary [matthew.mcge...@potashcorp.com]
> Sent: Wednesday, April 06, 2016 9:23 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] missing dsmlicense
> 
> Hello Rick,
> 
> Last time I checked Passport Advantage, I only had 7.1.3 available for 
> download and the license file from 7.1.3 works fine in 7.1.4 and 7.1.5.  If 
> you check your 7.1.3 files, you'll find the license there.
> 
> Can anyone from IBM tell me why 7.1.4 or 7.1.5 is not available on the 
> Passport site?
> 
> 
> 
> From:"Rhodes, Richard L." <rrho...@firstenergycorp.com>
> To:ADSM-L@VM.MARIST.EDU
> Date:04/06/2016 06:33 AM
> Subject:[ADSM-L] missing dsmlicense
> Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> 
> 
> 
> 
> 
> Hello,
> 
> We did a new install of TSM 7.1.4 just to check it out.
> (ok, we screwed up a upgrade on a test instance and decided to do a new/clean 
> install.)
> 
> After doing this install, it comes up with these messages:
> 
> 03/17/16   08:27:06 ANR9649I An EVALUATION LICENSE for IBM System Storage
> Archive Manager will expire on 04/15/16. (PROCESS: 6)
> 
> When you try and run reg lic, it throws this:
> 
> ANR9613W Error loading /opt/tivoli/tsm/server/bin/dsmlicense for Licensing 
> function.
> 
> That file does NOT exist.  It DOES exist on our existing TSM 6.4 systems.
> 
> IBM support thinks we have a mixed 64bit(dsmserv)/32bit(dsmlicense) problem,
> but the file simply does not exist.
> 
> 
> Q)  Where do you get the dsmlicense file for v7.1.4?
> 
> Thanks
> 
> Rick
> 


Re: TSM for SQL client license

2016-03-25 Thread Robert Talda
Worked for us at Cornell.

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


> On Mar 24, 2016, at 1:44 PM, David Ehresman <david.ehres...@louisville.edu> 
> wrote:
> 
> Does anyone know if a 7.1.0 TSM for SQL license is sufficient for the TSM for 
> SQL 7.1.4.0 update or do I have to pick up a new license somewhere?
> 
> David


Re: Help on Directory Container

2016-03-22 Thread Robert Ouzen
Hello Guys

Thanks a lot for the output - really appreciated.

Best Regards

Robert Ouzen
Haifa University
Israel


From: Ron Delaware [mailto:ron.delaw...@us.ibm.com]
Sent: Tuesday, March 22, 2016 1:30 PM
To: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>
Cc: Robert Ouzen <rou...@univ.haifa.ac.il>
Subject: Fw: [ADSM-L] Help on Directory Container




Robert,

you run protect stgpool prior to replicate node. By doing the protect first, 
you can reduce the amount of traffic needed for the replicate node process. 
Also, without performing protect stgpool, you lose the ability to use REPAIR 
STGPOOL, as it depends on Protect Stgpool having been performed.

Protect Stgpool and Node replication perform two different functions, but 
complement each other. Node replication will send stgpool data and catalog 
metadata.  To ensure the logical consistency of the metadata, it will lock the 
data it is replicating (not positive at what level).  After doing node 
replication, you can recover client data.

Protect stgpool moves the data, as well, but does not lock as badly, so it will 
have less of an impact to other operations.  Once it completes, you can recover 
stgpool data (which can happen automatically if Protect senses a data 
corruption in the source pool), also Protect Stgpool gives you chunk level 
recovery that hooks into REPAIR Stgpool.
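As a nightly maintenance fragment, that ordering looks roughly like the following. The pool name comes from elsewhere in this archive, and the audit/repair parameters should be checked against your server level before use:

```
protect stgpool DEDUPCONTAINER wait=yes
replicate node * wait=yes
/* only when damage is suspected in the source pool: */
audit container stgpool=DEDUPCONTAINER action=scanall
repair stgpool DEDUPCONTAINER
```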


Best Regards.

Ron Delaware





From:    Robert Ouzen 
<rou...@univ.haifa.ac.il<mailto:rou...@univ.haifa.ac.il>>
To:ADSM-L@VM.MARIST.EDU<mailto:ADSM-L@VM.MARIST.EDU>
Date:03/20/16 22:52
Subject:[ADSM-L] Help on Directory Container
Sent by:"ADSM: Dist Stor Manager" 
<ADSM-L@VM.MARIST.EDU<mailto:ADSM-L@VM.MARIST.EDU>>




Hi to all

I need a bit more information about the administrative processes in my TSM 
environment.

I have a TSM server 7.1.5 on an O.S Windows server 2008 R2 64B.
I have mix STGPOOLS , some of type FILE (with DEDUP on server side) and some of 
type DIRECTORY

For now my order process of administrative tasks is:


1.   At late afternoon for few hours , run IDENTIFY DUPLICATES

2.   At morning 07:00 ,   run EXPIRE INVENTORY

3.   After it, run 
BACKUP DB

4.   After it, run 
RECLAIM STG

Now, because I created new storage pools of type DIRECTORY (inline dedup) and 
also set up a TARGET server at version 7.1.5,
I updated my storage pools of type DIRECTORY with the option 
PROTECTstgpool=targetstgpool (on the target server).

Now the questions:

What will be the correct order process to add the new tasks as:

PROTECT STGPOOL
REPLICATE NODE

And what will be the correct command to repair a damaged container in the source 
storage pool?

Such as AUDIT CONTAINER (with which options?) and REPAIR CONTAINER?

T.I.A  Regards

Robert




Help on Directory Container

2016-03-20 Thread Robert Ouzen
Hi to all

I need a bit more information about the administrative processes in my TSM 
environment.

I have a TSM server 7.1.5 on an O.S Windows server 2008 R2 64B.
I have mix STGPOOLS , some of type FILE (with DEDUP on server side) and some of 
type DIRECTORY

For now my order process of administrative tasks is:


1.   At late afternoon for few hours , run IDENTIFY DUPLICATES

2.   At morning 07:00 ,   run EXPIRE INVENTORY

3.   After it, run 
BACKUP DB

4.   After it, run 
RECLAIM STG

Now, because I created new storage pools of type DIRECTORY (inline dedup) and 
also set up a TARGET server at version 7.1.5,
I updated my storage pools of type DIRECTORY with the option 
PROTECTstgpool=targetstgpool (on the target server).

Now the questions:

What is the correct order in which to add the new tasks:

PROTECT STGPOOL
REPLICATE NODE

And what is the correct command to repair a damaged container in the source storage pool?

AUDIT CONTAINER (with which options?) and REPAIR CONTAINER?
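For reference, a hedged sketch of a repair sequence for a directory-container pool (SOURCEPOOL is a placeholder; verify the options against your server level). Note that the repair-side command is REPAIR STGPOOL rather than a REPAIR CONTAINER command, and it can only recover extents that were already copied to the protect target:

```
audit container stgpool=SOURCEPOOL action=scanall wait=yes
query damaged SOURCEPOOL
repair stgpool SOURCEPOOL
```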

T.I.A  Regards

Robert


Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_________________________________
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Postal code: 3498838
Computing and Information Systems Division website: http://computing.haifa.ac.il


Re: VM machine with SQL database

2016-02-16 Thread Robert Ouzen
HI Del



Thanks for the paper ...



I made a test on one of my machines, IBROWSE, with Microsoft SQL.

I added in dsm.opt:   INCLUDE.VMTSMVSS  ibrowse

I also ran:   dsmc set password -type=vmguest ibrowse guest_admin_id guest_admin_pw

Then I ran the backup from the console:

tsm> backup vm ibrowse -vmbackuptype=fullvm -mode=ifincremental

And got:

Full BACKUP VM of virtual machines 'ibrowse'.

Backup VM command started.  Total number of virtual machines to process: 1
<   0  B> [  -]
Starting Full VM backup of VMware Virtual Machine 'ibrowse'
   mode:'Incremental Forever - Incremental'
target node name:'HAIFA_DATACENTER'
data mover node name:'HAIFA_DATACENTER'
application protection type: 'TSM VSS'
application(s) protected:'MS SQL 2008 R2'
ANS9417E IBM Tivoli Storage Manager application protection could not freeze the VSS writers on the virtual machine named 'ibrowse'. See the TSM error log for more details.
ANS2330E Failed to unfreeze the VSS writers because the snapshot time exceeded the 10 second timeout limitation.
ANS4174E Full VM backup of VMware Virtual Machine 'ibrowse' failed with RC=6511 mode=Incremental Forever - Incremental, target node name='HAIFA_DATACENTER', data mover node name='HAIFA_DATACENTER'

In dsmerror.log I got:

02/16/2016 10:50:11 ANS4174E Full VM backup of VMware Virtual Machine 'ibrowse' failed with RC=6511 mode=Incremental Forever - Incremental, target node name='HAIFA_DATACENTER', data mover node name='HAIFA_DATACENTER'
02/16/2016 10:50:11 ANS1228E Sending of object 'ibrowse' failed.
02/16/2016 10:50:11 ANS9431E IBM Tivoli Storage Manager application protection failed to thaw VSS writers on virtual machine 'ibrowse'. See the TSM error log for more details.

Any idea what is wrong?

TDP for VE V7.1.4.0
TSM Client V7.1.4.1
TSM Server V7.4.1.100

Regards Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: Monday, February 15, 2016 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VM machine with SQL database



Hi Robert,







http://www-01.ibm.com/support/knowledgecenter/SS8TDQ_7.1.4/ve.user/t_vesql_configvm.html







Del









"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 02/14/2016 03:54:37 AM:

> From: Robert Ouzen <rou...@univ.haifa.ac.il>
> To: ADSM-L@VM.MARIST.EDU
> Date: 02/14/2016 03:56 AM
> Subject: VM machine with SQL database
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Hi to all
>
> I backup a VM machine name ibrowse with on it SQL database, with
> TDP for VE V7.1.4
>
> I add on the dsm.opt the line:
>
> INCLUDE.VMTSMVSS  ibrowse
>
> I wonder if this backup delete the logs at the end of the backup or I
> need to install a TDP for SQL
>
> T.I.A  Best Regards
>
> Robert


VM machine with SQL database

2016-02-14 Thread Robert Ouzen
Hi to all

I back up a VM machine named ibrowse, with an SQL database on it, using TDP for VE V7.1.4

I add on the dsm.opt the line:

INCLUDE.VMTSMVSS  ibrowse

I wonder whether this backup deletes the logs at the end of the backup, or whether I need to install TDP for SQL

T.I.A  Best Regards

Robert






Delete containers from stg pool directory

2016-02-06 Thread Robert Ouzen
Hi to all

I am trying to figure out whether there is a way to delete all containers from a storage pool of type directory in V7.1.4.

When running q container stg=v3700_D1  I saw:

Container Storage  Containe-
 State
  Pool Namer Type
-  -
 ---
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
a.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Non Dedup
 Available
00010.ncf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
0001e.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
0002f.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
0003a.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
00044.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
00050.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Non Dedup
 Available
0005b.ncf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available
0007e.dcf
c:\mountpoints\V3700_D1\stg10\00\000- V3700_D1 Dedup
 Available

I want to empty all those containers from this storage pool, but did not find any delete command.

Regards Robert


Re: VM backup question

2016-01-30 Thread Robert Ouzen
Hi Ray

Thanks for this input ..

Is there a way to do a general INCLUDE for "Hard Disk 1"? (All VMs need "Hard Disk 1" to be backed up.)

For machines where I need to back up more than "Hard Disk 1", I will add for each of them:

INCLUDE.VMDISK VMACHINE1  "Hard Disk 2"
INCLUDE.VMDISK VMACHINE1  "Hard Disk 3"
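As a sketch, the whole dsm.opt approach might look like this (the VM and disk names are placeholders). Include/exclude statements are read bottom-up, so the general exclude goes at the top; whether wildcards are accepted in the VM name should be verified for your client level:

```
EXCLUDE.VMDISK * "*"
INCLUDE.VMDISK * "Hard Disk 1"
INCLUDE.VMDISK VMACHINE1 "Hard Disk 2"
INCLUDE.VMDISK VMACHINE1 "Hard Disk 3"
```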

Best Regards

Robert



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Storer, Raymond
Sent: Thursday, January 28, 2016 3:16 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VM backup question

Robert, you might use the -vm parameter to exclude certain VMs in your DSM.OPT 
file.
DOMAIN.VMFULL VMHOSTCLUSTER=CLUSTER_NAME;-VM=VM_TO_EXCLUDE1,VM_TO_EXCLUDE2

Also, in your DSM.OPT file you can add INCLUDE.VMDISK options to specifically 
target VM disks you want (which excludes the other disks you do not 
specifically include).
INCLUDE.VMDISK EXACT VM NAME GOES HERE "Hard Disk 1"

Disk(s) not specifically included restore blank at their full size during a 
full VM restore.  I forget if a way exists to prevent the other disks from 
restoring in a full restore.

If you see issues with VM backups or restores, you might also consider adding 
testflag VMBACKUP_UPDATE_UUID to your DSM.OPT file to force the backup to save 
the UUID every time you perform a VM backup.

Good luck!

Ray Storer
NIBCO INC.
574.295.3457

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Robert 
Ouzen
Sent: Wednesday, January 27, 2016 11:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VM backup question

Hi to all

I have a VMware farm with around 300 VM machines and I try to figure the best  
way to configure the backup with TSM for VE V7.1.4 .


1.   Need to backup for all the machines the "Hard disk 1"

2.   Our structure doesn't let us do it with VMFOLDER; think about domain.vmfull vmdatastore=data1,data2, etc.

3.   Need for some VM machines to backup all disks or at least more than 
"Hard disk 1" ,  and some machines not to back up at all

Any help will be appreciated 

Best Regards

Robert



CONFIDENTIALITY NOTICE: This email and any attachments are for the exclusive 
and confidential use of the intended recipient. If you are not the intended 
recipient, please do not read, distribute or take action in reliance upon this 
message. If you have received this in error, please notify us immediately by 
return email and promptly delete this message and its attachments from your 
computer system. We do not waive attorney-client or work product privilege by 
the transmission of this message.


VM backup question

2016-01-27 Thread Robert Ouzen
Hi to all

I have a VMware farm with around 300 VM machines and I try to figure the best  
way to configure the backup with TSM for VE V7.1.4 .


1.   Need to backup for all the machines the “Hard disk 1”

2.   Our structure doesn’t let us do it with VMFOLDER; think about domain.vmfull vmdatastore=data1,data2, etc.

3.   Need for some VM machines to backup all disks or at least more than 
“Hard disk 1” ,  and some machines not to back up at all

Any help will be appreciated ….

Best Regards

Robert


VM dsm.opt configuration

2016-01-23 Thread Robert Ouzen
Hello all

I want to figure out how to back up all of my VM machines (300) in my data center with TSM for VE V7.1.4.

For all of them, only “Hard Disk 1”; and only for specific VMs, more disks.

What will be the correct syntax in my dsm.opt.

T.I.A Regards

Robert


Re: Copy Mode: Absolute misunderstanding

2016-01-23 Thread Robert Ouzen
Hi Keith

I am curious about the result.

I too need to back up to a new storage pool of type directory container (a new feature in V7.1.4). I cannot do a NEXTSTGPOOL or MOVE NODEDATA.

So I want to try your suggestion of COPY MODE ABSOLUTE; for now I did it another way.

I renamed the node to node_OLD (all filespaces get the _OLD extension), recreated the node, and changed the destination to the new directory storage pool.

After a while, I will delete the OLD filespaces and the OLD node.

My backup copy group looks like this:

Policy Domain: CC
Policy Set Name: ACTIVE
Mgmt Class Name: MGCC12
Versions Data Exists: 4
Versions Data Deleted: 1
Retain Extra Versions: No Limit
Retain Only Version: 60
Copy Mode: Modified
Copy Destination: NET_TSM

So I need to change it to:

Policy Domain: CC
Policy Set Name: ACTIVE
Mgmt Class Name: MGCC12
Versions Data Exists: 4
Versions Data Deleted: 1
Retain Extra Versions: 10
Retain Only Version: 10
Copy Mode: Absolute
Copy Destination: V7000_D1

So after 10-11 days all my 3 inactive versions will vanish from STG NET_TSM ... correct?

Best Regards

Robert
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Arbogast, Warren K
Sent: Friday, January 22, 2016 8:22 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Copy Mode: Absolute misunderstanding

Hi Thomas,
Thank you for your insight. RETONLY was 60 days during the absolute backups. We 
just reduced it to 30 days this morning when we saw unexpected files on the 
VTL. Perhaps tomorrow the VTL will look better.

Best wishes,
Keith Arbogast
Indiana University






On 1/22/16, 13:11, "ADSM: Dist Stor Manager on behalf of Thomas Denier" 
<ADSM-L@VM.MARIST.EDU on behalf of thomas.den...@jefferson.edu> wrote:

>Is RETONLY also set to 30 days? If RETONLY is longer than 30 days, backup 
>copies some files deleted before the absolute backups would be retained for 
>more than 30 days after the absolute backups.
>
>Thomas Denier
>Thomas Jefferson University
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
>Arbogast, Warren K
>
>We have implemented directory container pools in TSM version 7.1.4 and are 
>happy with it.  All new backups are written to the DEDUP pool, and old backups 
>are being migrated from the VTL to the DEDUP pool through a process of 
>replicating existing nodes' files to a DEDUP pool on a target server,  then 
>replicating them back to the DEDUP pool on the source server. That process 
>works well.
>
>There is an urgency to empty the storage previously used on the VTL so it can 
>be re-purposed. To hurry the migration to the DEDUP pool along we devised a 
>strategy of running FULL backups (COPY MODE: ABSOLUTE) on certain small 
>servers in policy domains whose destination was the DEDUP pool. That would 
>promote all of a node’s previous backups on the VTL to Inactive status. And, 
>since the pertinent RETAIN EXTRA VERSIONS setting was ’30’, we expected within 
>30 days all inactive versions would be expired and removed from the VTL.
>
>It’s 30 days later, and the strategy did not work perfectly. There are 
>thousands of files remaining on the VTL for nodes which had  a COPY MODE: 
>ABSOLUTE backup.
>
>What am I misunderstanding about COPY MODE: ABSOLUTE?  I had understood it 
>would force a 100% full backup, with no mitigation by include and exclude  
>statements. Apparently that’s not the case. Could someone clarify how it works?
>
>The information contained in this transmission contains privileged and 
>confidential information. It is intended only for the use of the person named 
>above. If you are not the intended recipient, you are hereby notified that any 
>review, dissemination, distribution or duplication of this communication is 
>strictly prohibited. If you are not the intended recipient, please contact the 
>sender by reply email and destroy all copies of the original message.
>
>CAUTION: Intended recipients should NOT use email communication for emergent 
>or urgent health care matters.
>


Re: Copy Mode: Absolute misunderstanding

2016-01-23 Thread Robert Ouzen
Hi Erwann

In client 6.x.x.x I also have the option of copy mode ABSOLUTE, so why do I need client version 7+? Will it not work?

Best Regards, Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Erwann 
SIMON
Sent: Saturday, January 23, 2016 8:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Copy Mode: Absolute misunderstanding

Hello Robert,

If you only have client whose version is 7.1 or later, you can also use the 
absolute option :

Use the absolute option with the incremental command to force a backup of all 
files and directories that match the file specification or domain, even if the 
objects were not changed since the last incremental backup.

This option overrides the management class copy group mode parameter for backup 
copy groups; it does not affect the frequency parameter or any other backup 
copy group parameters.
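With a 7.1 or later client, the option can also be passed on the command line for a one-off full scan, for example:

```
dsmc incremental -absolute
```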


--
Best regards / Cordialement / مع تحياتي
Erwann SIMON


Directory Container STG pool

2016-01-16 Thread Robert Ouzen
Hi to all

In TSM server version 7.1.4.0 there is a new storage pool configuration, Directory Container. But it is still not possible to do a MOVE NODEDATA from an old storage pool configuration to the new directory-container pool.

Does anybody know when this is planned? In which version? For now, one has to begin from scratch.

Best Regards

Robert Ouzen


PROTECT STG feature

2015-12-23 Thread Robert Ouzen
Hello

Today I made a test; on the source server I ran:

protect stgpool V7000_5 purgedata=all

and got the output:

12/23/2015 11:44:49  ANR4980I The protect storage pool process for V7000_5 is completed. Extents protected: 0 of 0. Extents deleted: 940733 of 940733. Amount protected: 0.00 of 0.00. Amount failed: 0.00. Amount transferred: 0.00. Elapsed time: 0 Day(s), 0 Hour(s), 3 Minute(s). (SESSION: 12979)

So far so good …

I then ran EXPIRE INVENTORY on both servers, source and target.

But my target server still has data!!!  (1.3%)

tsm: TSMREP>q stg dedupcontainer

Storage Device Storage   Estimated   Pct
Pool Name   Class Name TypeCapacity  Util

--- -- - -- -
DEDUPCONTA-DIRECTORY   10,231 G   1.3
IINER

And ..

tsm: TSMREP>q container

Container Storage  Containe-
 State
  Pool Namer Type
-  -
 ---
h:\dcd1\00\004e.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\004f.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0051.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0052.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0053.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0054.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0055.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0056.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0057.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0058.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\0059.dcf   DEDUPCONTAI- Dedup
 Available
   NER
h:\dcd1\00\005a.dcf   DEDUPCONTAI- Dedup
 Available
   NER

My reuse is:   Delay Period for Container Reuse: 0

I ran PROTECT STGPOOL again to see whether it would begin from scratch, but no:

12/23/2015 12:12:38  ANR4980I The protect storage pool process for V7000_5 is completed. Extents protected: 0 of 0. Extents deleted: 0 of 0. Amount protected: 0.00 of 0.00. Amount failed: 0.00. Amount transferred: 0.00. Elapsed time: 0 Day(s), 0 Hour(s), 0 Minute(s). (SESSION: 12979)

What did I miss???

Best Regards  Robert




PROTECT STG feature

2015-12-22 Thread Robert Ouzen
Hello all

A question about the new feature PROTECT STG in V7.1.3 , I made a test and it’s 
working fine.

I wonder how to delete the data on the target server if I want to stop protecting my primary storage pool.

I figure I need first to run:

1. protect stgpool sourcepool purgedata=all
   This will delete only the data on the target pool for the specific source pool (correct?)

2. update stgpool sourcepool protectstgpool=""

Do I need other commands?
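A hedged sketch of that sequence (SOURCEPOOL is a placeholder): run the purge while the protect relationship is still defined, then clear it.

```
protect stgpool SOURCEPOOL purgedata=all wait=yes
update stgpool SOURCEPOOL protectstgpool=""
```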

Best Regards

Robert


Re: Protect stgpool question

2015-12-20 Thread Robert Ouzen
Hi Stefan

So it's a mix between:

1. PROTECT STGPOOL for protecting containers
2. REPLICATE NODE for protecting node metadata

So I will create a script with those steps:

protect stg v7000_5 maxsess=4 FORCEREConcile=no  purgedata=no wait=yes
replicate node VIEW_HU_DATACENTER maxsess=4

Now the questions:

When should the parameter FORCERECONCILE=YES be used with PROTECT STGPOOL (once in a while, or in the daily cycle)?
When should the parameter PURGEDATA=DELETED be used with PROTECT STGPOOL (once in a while, or in the daily cycle)?

REPLICATE NODE also has a FORCERECONCILE parameter; when should it be run with =YES?

If I have a damaged container, what is the correct process to repair it: running REPAIR STGPOOL or AUDIT CONTAINER?
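For reference, the two commands above could be wrapped in a server script and scheduled so that PROTECT STGPOOL always completes before REPLICATE NODE starts (the script and schedule names are made up; adjust the start time and session counts to your environment):

```
define script daily_protect "protect stgpool v7000_5 maxsess=4 wait=yes" line=1 desc="Protect, then replicate"
update script daily_protect "replicate node VIEW_HU_DATACENTER maxsess=4 wait=yes" line=2
define schedule daily_protect type=administrative cmd="run daily_protect" active=yes starttime=20:00 period=1 perunits=days
```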

Best Regards

Robert


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Sunday, December 20, 2015 11:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Protect stgpool question

Hi Robert,

The protect stgpool command replicates the data in the pool to the replication target server you defined; you still need to run a replicate node to replicate the metadata.
I believe that automatic recovery of damaged files happens at the end of the replicate node command, but they might have put that in the protect stgpool command as well; it used to be at the end of replicate node, IIRC.

So you don't use the restore command; and since there can be only one replication server it is replicating to, it will know where to get the data from.
Regards,
   Stefan


On Sun, Dec 20, 2015 at 10:35 AM, Robert Ouzen <rou...@univ.haifa.ac.il>
wrote:

> Hello all
>
> In TSM server version 7.1.3 is a news feature called PROTECT STG for 
> storage at type DIRECTORY
>
> I wonder if I update  a storage  with option proctect=target_stgpool  
> , run a PROTECT STG command.
>
> After it will made my primary storage pool at state UNAVAILABLE , run 
> a restore … TSM will know to  restore from the target_stgpool  
>
> T.I.A  Regards
>
> Robert
>


Protect stgpool question

2015-12-20 Thread Robert Ouzen
Hello all

In TSM server version 7.1.3 there is a new feature called PROTECT STGPOOL for storage pools of type DIRECTORY.

I wonder: if I update a storage pool with the option protectstgpool=target_stgpool and run a PROTECT STGPOOL command, and afterwards my primary storage pool is put in the UNAVAILABLE state and I run a restore, will TSM know to restore from the target_stgpool?

T.I.A  Regards

Robert


Re: Node replication information?

2015-12-18 Thread Robert Clark
Thank you for your gracious response.

To demonstrate just how long it's been, I forgot to mention that I'm working with TSM 7.1.1, on some AIX (6.1, I believe) and RedHat on Intel (7.2?) servers.

I think I'll be running some basic network performance tests first. The 
throughput I'm seeing doesn't appear to match people's glowing descriptions of 
how big the pipes are.

Thanks,
[RC]


On Dec 18, 2015, at 6:34 AM, "Vandeventer, Harold [OITS]" 
<harold.vandeven...@ks.gov> wrote:

> Welcome back Robert.  Lots of changes from the TSM 5 days.
> 
> I don't have any links to docs, but my experience has been...
> 
> I'm running Windows, 32 GB RAM, TSM 6.3.4.300.
> 
> I run 40 sessions on each Replication process.
> 
> The process is replicating a group that may have several members, 2 up to 15 
> or 20; semi-random on how I assigned.
> 
> I have a script that contains several REPLICATE NODE  commands.  
> Each command runs one group, with WAIT=YES, then another, with an exception 
> described next.
> 
> I have a group that has nodes with a very large numbers of files for those 
> nodes, thus the time required to determine what has to be migrated is long.  
> The amount of data to replicate isn't huge, just the time required to 
> inventory the status on a zillion files.
> 
> Therefore, that group is first in the script, with WAIT=NO, to immediately 
> start the next group in the script.  Subsequent replicate groups all have 
> WAIT=YES.
> 
> The only performance issue that I've seen as an issue is monitoring bandwidth 
> on the link between the source and target servers.  I've watched Network 
> stats in Windows Task Manager and depending on how many replication processes 
> are running, the impact on the bandwidth goes up.  I haven't seen any hits on 
> CPU or RAM.
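The approach described above might look like the following administrative macro (the group names and session counts are illustrative, not from the original message):

```
replicate node BIG_FILECOUNT_GROUP maxsess=40 wait=no
replicate node GROUP_A maxsess=40 wait=yes
replicate node GROUP_B maxsess=40 wait=yes
```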
> 
> 
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Robert Clark
> Sent: Thursday, December 17, 2015 4:39 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Node replication information?
> 
> After a long hiatus from working with TSM, I am back to working with it 
> again. (I left during the TSM 5 timeframe.)
> 
> 
> 
> Does anyone have pointers to docs useful in figuring out how node replication 
> determines how much to attempt to do in parallel, or general information on 
> how to troubleshoot its performance.
> 
> 
> 
> Thanks,
> 
> [RC]
> 
> [Confidentiality notice:]
> ***
> This e-mail message, including attachments, if any, is intended for the
> person or entity to which it is addressed and may contain confidential
> or privileged information.  Any unauthorized review, use, or disclosure
> is prohibited.  If you are not the intended recipient, please contact
> the sender and destroy the original message, including all copies,
> Thank you.
> ***


Node replication information?

2015-12-17 Thread Robert Clark
After a long hiatus from working with TSM, I am back to working with it
again. (I left during the TSM 5 timeframe.)



Does anyone have pointers to docs useful in figuring out how node
replication determines how much to attempt to do in parallel, or general
information on how to troubleshoot its performance.



Thanks,

[RC]


Re: Question about protectstgpool

2015-12-15 Thread Robert Ouzen
Hi Del

Great, that was my question!

Regards

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: Tuesday, December 15, 2015 2:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question about protectstgpool

Hi Robert,

I am not entirely sure what you are asking here, but yes, PROTECT STGPOOL can 
be run from multiple source pools to the same target.
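A sketch of the setup described in the question (pool names taken from the question; parameters should be verified for your level). Note that, as I understand it, the target pool is only updated when a PROTECT STGPOOL process actually runs, whether manually or on a schedule; setting PROTECTSTGPOOL alone does not move data:

```
update stgpool SOURCE1 protectstgpool=TARGET
update stgpool SOURCE2 protectstgpool=TARGET
protect stgpool SOURCE1 wait=yes
protect stgpool SOURCE2 wait=yes
```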


Thank you,

Del



"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 12/14/2015
12:05:24 PM:

> From: Robert Ouzen <rou...@univ.haifa.ac.il>
> To: ADSM-L@VM.MARIST.EDU
> Date: 12/14/2015 12:06 PM
> Subject: Question about protectstgpool Sent by: "ADSM: Dist Stor 
> Manager" <ADSM-L@VM.MARIST.EDU>
> 
> Hello All
> 
> I create in my source TSM server version 7.1.3.100 ,  2 new STG pools  
> (SOURCE1  SOURCE2) with type= directory (for inbound dedup).
> 
> At my TSM replication  server (target) , I create too an STG pool
> (TARGET) with type=directory
> 
> I wonder if I can in my 2 source STG update them with the sane option  
> protectstgpool=TARGET.
> 
> Meaning the same protectstpool=TARGET for the both STG source.
> 
> And when occur the update of the STG in the target ?
> 
> Regards Robert
> 


Question about protectstgpool

2015-12-14 Thread Robert Ouzen
Hello All

I created on my source TSM server (version 7.1.3.100) 2 new storage pools (SOURCE1 and SOURCE2) with type=directory (for inline dedup).

On my TSM replication (target) server, I also created a storage pool (TARGET) with type=directory.

I wonder if I can update both source storage pools with the same option protectstgpool=TARGET.

Meaning the same protectstgpool=TARGET for both source pools.

And when does the update of the storage pool on the target occur?

Regards Robert


Re: [ADSM-L] AW: Bring back TSM Administrator's Guide

2015-12-12 Thread Robert Ouzen
Me too



Sent from my Samsung device


-------- Original message --------
From: Michael Malitz
Date: 12/12/2015 12:43 (GMT+02:00)
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] AW: Bring back TSM Administrator's Guide

Agree Agree Agree!!!

Michael Malitz

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Roger Deschner
Sent: Saturday, December 12, 2015 02:39
To: ADSM-L@VM.MARIST.EDU
Subject: Bring back TSM Administrator's Guide

A great book is gone. The TSM Administrator's Guide has been obsoleted as of
v7.1.3. Its priceless collection of how-to information has been scattered to
the winds, which basically means it is lost. A pity, because this book had
been a model of complete, well-organized, documentation.

The "Solution Guides" are suitable only for planning a new TSM installation,
and do not address existing TSM sites. Furthermore, they are much too
narrow, and do not cover real-world existing TSM customers.
For instance, we use a blended disk and tape solution (D2D2D2T) to create a
good working compromise between faster restore and storage cost.

Following links to topics in the online Information Center is a haphazard
process at best, which is never repeatable. There is no Index or Table of
Contents for the online doc - so you cannot even see what information is
there. Unless you actually log in, there is no way to even leave a "trail of
breadcrumbs". Browser bookmarks are useless here, due to IBM's habit of
changing URLs frequently. This is an extremely inefficient use of my time in
finding out how to do something in TSM.

Search is not an acceptable replacement for good organization. Search is
necessary, but it cannot stand alone.

Building a "collection" is not an answer to this requirement. It still lacks
a coherent Index or Table of Contents, so once my collection gets sizeable,
it is also unuseable. And with each successive version, I will be required
to rebuild my collection from scratch all over again.

Despite the fact that it had become fairly large, I humbly ask that the
Administrator's Guide be published again, as a single PDF, in v7.1.4.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Fwd: FLASH: Security Bulletin: Vulnerability in Apache Commons affects IBM Tivoli Storage Manager Operations Center (OC) and Client Management Services (CMS) (CVE-2015-7450) (2015.12.11)

2015-12-11 Thread Robert Talda
Wonderful. A security risk in Operations Center 7.1.3.000, which I just installed, and the only resolution? Upgrade to Operations Center 7.1.4.000, which happens to require TSM Server v7.1.4.000. We've just installed TSM Server v7.1.3.000 in our test environment; the upgrade to v7.1.4.000 is months away, and so, therefore, is our use of the Operations Center.

Unless, that is, there will be a patch for Op Center 7.1.3.000 forthcoming?

Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu


Begin forwarded message:

From: IBM My Notifications <mynot...@stg.events.ihost.com>
Subject: FLASH: Security Bulletin: Vulnerability in Apache Commons affects IBM Tivoli Storage Manager Operations Center (OC) and Client Management Services (CMS) (CVE-2015-7450) (2015.12.11)
Date: December 11, 2015 at 1:26:42 PM EST
To: <r...@cornell.edu>


My notifications for  Software - 11 Dec 2015

--
1.  Tivoli Storage Manager Extended Edition: Security bulletin

- TITLE: Security Bulletin: Vulnerability in Apache Commons affects IBM Tivoli 
Storage Manager Operations Center (OC) and Client Management Services (CMS) 
(CVE-2015-7450)
- URL: 
http://www.ibm.com/support/docview.wss?uid=swg21971533=swgtiv=OCSSSQWC=E_sp=swgtiv-_-OCSSSQWC-_-E
- ABSTRACT: An Apache Commons Collections vulnerability for handling Java 
object deserialization was addressed by IBM Tivoli Storage Manager Operations 
Center (IBM Spectrum Protect Operations Center) and IBM Tivoli Storage Manager 
Client Management Services (IBM Spectrum Protect Client Management Services).



Directory Container

2015-12-08 Thread Robert Ouzen
Hello all,

I defined a new storage pool with type=directory container, a new feature in 
7.1.3.

I am now wondering about:

1. The equivalent of the Q MOUNT command (to view volumes mounted during a 
backup / restore). The only thing I found is Q CONTAINER * STG=pool_name, 
which gives me the containers that belong to a specific storage pool.

2. The equivalent of the Q NODEDATA nodename command for a node whose data is 
in a container storage pool.
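A sketch of possible substitutes (assuming the 7.1.3 SQL schema; verify the column names with a plain SELECT on your own server): container pools mount no volumes, so activity has to be watched at the session level, and per-node space in a container pool can be read from the OCCUPANCY table.

```
/* no mounts to query - watch the sessions instead */
query session format=detailed
/* rough equivalent of Q NODEDATA for a container pool */
select node_name, filespace_name, logical_mb from occupancy where stgpool_name='POOL_NAME'
```

POOL_NAME is a placeholder for the directory-container pool name.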


Regards Robert



Re: Operations Center browser requirements

2015-11-25 Thread Robert Jose
Hi Thomas
I believe you should read that without the word "minimum". The higher
versions may work, but we don't go back and test that they do. I will get
the document updated next week. Thanks for letting us know.


Rob Jose
TSM OC UI Developer / L3 Support

"ADSM: Dist Stor Manager"  wrote on 11/24/2015
01:32:45 PM:

> From: Thomas Denier 
> To: ADSM-L@VM.MARIST.EDU
> Date: 11/24/2015 01:35 PM
> Subject: [ADSM-L] Operations Center browser requirements
> Sent by: "ADSM: Dist Stor Manager" 
>
> I am trying to make sense of the Operations Center browser
> requirements shown in http://www-01.ibm.com/support/docview.wss?
> uid=swg21653418.
>
> One of the headings in the browser section reads "TSM Operations
> Center V710 minimum requirement:" and one of the bullet points under
> this heading reads "Microsoft Internet Explorer versions 9 and 10".
> If the heading used the word "minimum" as shown and the bullet point
> mentioned only version 9, it would be clear that versions 9, 10, and
> 11 were supported. If the heading omitted the word "minimum" and the
> bullet point were as shown, it would be clear that versions 9 and 10
> were supported but not version 11. As it is, I see no way of
> determining which reading IBM really intended. Is there any IBM
> document in which the browser requirements are stated unambiguously?
>
> Thomas Denier
> Thomas Jefferson University
> The information contained in this transmission contains privileged
> and confidential information. It is intended only for the use of the
> person named above. If you are not the intended recipient, you are
> hereby notified that any review, dissemination, distribution or
> duplication of this communication is strictly prohibited. If you are
> not the intended recipient, please contact the sender by reply email
> and destroy all copies of the original message.
>
> CAUTION: Intended recipients should NOT use email communication for
> emergent or urgent health care matters.
>


Question about replication 7.1.3.100

2015-11-16 Thread Robert Ouzen
Hi to all

I want to implement a replication server with TSM server 7.1.3.100 and to use the 
new STGpool feature “Directory Containers”.

I want to replicate backups of Oracle DB, TDP for SAP, TDP for Exchange and 
regular files.

I am wondering whether it is better to create several STG pools on the 
replication server or only one big one.

Any experience or ideas will be really appreciated.
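Either way, a sketch of the target-side pool definition (pool name and paths are placeholders; the servers here are Windows, hence the drive letters):

```
define stgpool replpool stgtype=directory maxwriters=nolimit
define stgpooldirectory replpool e:\tsm\repl1,f:\tsm\repl2
```

As far as I understand, replicated data lands in the pool named by the matching management class on the target server, so splitting the data into several pools also means splitting the management classes, while a single pool keeps the deduplication domain as large as possible.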

Source and Target servers are with:


· Windows 2008/2012 R2 64B

· TSM server version 7.1.3.100

· TSM client version   7.1.3.1.
Best Regards


Robert Ouzen
Haifa University


Re: Operation Center Blues!!

2015-10-28 Thread Robert Jose
Hi Vikas
I am so sorry it has taken me so long to respond. I am looking at the
output that you provided, and it looks like all of the values are in hours,
where they used to be in days and weeks. I am pretty sure that what you are
seeing is that the Listener the OC uses to get updates from Spectrum Protect
has crashed. This would result in the OC not updating results. We have fixed
this crash in 7.1.3. The only other workaround is to restart the OC. Sorry.

Rob Jose
TSM OC UI Developer / L3 Support
rj...@us.ibm.com
(971) 238 - 8998

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 10/13/2015
05:52:05 AM:

> From: Vikas Arora <vikar...@gmail.com>
> To: ADSM-L@VM.MARIST.EDU
> Date: 10/13/2015 05:53 AM
> Subject: Re: [ADSM-L] Operation Center Blues!!
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Hello Robert,
>
> Did you find any possibilities for any improvements?
>
> thanks
>
> Vikas
>
>
>
>
>
>
>
>
> Grüße
> Vikas Arora
>
> *e Mail* : vikar...@gmail.com
>
> On 9 October 2015 at 09:20, Vikas Arora <vikar...@gmail.com> wrote:
>
> >
> > On 8 October 2015 at 19:33, Robert Jose <rj...@us.ibm.com> wrote:
> >
> >> name, sec_last_catalog_backup from tsmgui_allsrv_grid
> >
> >
> > Hi Robert,
> >
> > Here is the select output.
> >
> > tsm: TSM197LM>select name, sec_last_catalog_backup from
tsmgui_allsrv_grid
> >
> > NAME
> >  SEC_LAST_CATALOG_BACKUP
> > --
> > 
> > ERR_TAPES
> > 0
> > TSM101
> >19538
> > TSM102D
> > 21248
> > TSM103D
> >  3421
> > TSM104D
> > 32247
> > TSM105
> >21536
> > TSM106D
> > 15261
> > TSM202D
> > 54993
> > TSM203D
> > 38068
> > TSM204D
> >  7104
> > TSM205
> >22205
> > TSM229D
> > 16253
> > TSM232D
> > 30587
> > TSM197LM
> > 6912
> > TSM209D
> >  8198
> > TSM206D
> > 18679
> > TSM201D
> > 40166
> > TSM231D
> > 28114
> > TSM230D
> > 33762
> > TSM210D
> > 23540
> > TSM215
> >35205
> > TSM226
> >35257
> > TSM227
> >45998
> > TSM228
> >14412
> > TSM211
> >20907
> > TSM296LM
> >0
> > TSM291LM
> >0
> > TSM292LM
> >66247
> > TSM293LM
> >43328
> > TSM294LM
> >0
> > TSM191LM
> >43889
> > TSM192LM
> > 4721
> > TSM21
> > 65029
> > TSM207D
> >   3951802
> > TSMNLAMS101
> > 10279
> >
> >
> > So TSM201D was just an example, there are other servers too which shows
> > last data collected 2 days ago in  'sh monservers' output.
> >
> > is the value in the bottom right of the grid updating the last
> > refreshed time?   >>> Yes, see the link
> >
> >
> > https://drive.google.com/file/d/0BwDDdmz6R7PtUmxqblVER3BKRmc/view?
> usp=sharing
> >
> >
> > Also TSM server count sometime is false, or may be due to no data
> > collection, see Yellow marks in the screenshots.
> >
> > Many thanks
> >
> >
> >
> >
> > Grüße
> > Vikas Arora
> >
> > *e Mail* : vikar...@gmail.com
> >
>


Re: Question about O.C V7.1.3

2015-10-26 Thread Robert Ouzen
Hi Erwann

My previous version of server and O.C was V7.1.1.200.

I have two servers, one HUB (Postback) and one spoke (Adsm2); both are now at 
version V7.1.3.100.

O.C works fine (I think), no problem, but as I said in my previous message there 
are a lot of entries in Q SE.

Best Regards,

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Erwann 
Simon
Sent: Monday, October 26, 2015 8:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question about O.C V7.1.3

Hello Robert,

Which were your previous versions of server and oc ?

Do you have a single server, or do you have spoke servers also ? If so, what 
are their respective versions ?

Is OC responsive or is it slow ?

No idea or supposed known defect, just curious. I'll work on a similar 
environment this week: one hub server (with no clients) just upgraded to 
7.1.3, and 3 spoke servers (one at 7.1.1.300 and two at 6.3.5.100, to be 
upgraded to 7.1.1.300).

Regards,
Erwann

Le 26 octobre 2015 06:57:37 CET, Robert Ouzen <rou...@univ.haifa.ac.il> a écrit 
:
>Hi to all
>
>After upgrading my TSM server to version V7.1.3.0 + V7.1.3.100 and my 
>Operations center to V7.1.3.0.
>
>I notice that I have in  Q SE a lot of:  IBM-OC-POSTBACK client name 
>which is the client name for O.C in the Hub server
>
>Sess  Comm.  SessWait   Bytes   Bytes  Sess
>   Platform Client Name
>Number Method StateTimeSent   Recvd
>Type
>-- -- -- -- --- ---
>---  --
>20 Tcp/Ip Run  0 S    8.7 M 145 Admin  
>WinNTROBERT
>31 Tcp/Ip Run  0 S   23.2 M 262.2 K Admin  
>DSMAPI   IBM-OC-POSTBACK
>13,742 Tcp/Ip IdleW3 S   62.2 K  40.2 M
>Admin   Windows  ADSM2
>26,735 Tcp/Ip Run  0 S  351.0 K 163
>Admin   WinNTROBERT
>27,126 Tcp/Ip     Run  0 S   40.5 K 715
>Admin   WinNTROBERT
>27,847 Tcp/Ip IdleW2 S  269.3 K 117.3 K
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,068 Tcp/Ip IdleW  3.4 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,078 Tcp/Ip IdleW  3.3 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,086 Tcp/Ip IdleW  3.2 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,094 Tcp/Ip IdleW  3.1 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,101 Tcp/Ip IdleW  3.1 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,116 Tcp/Ip IdleW  2.9 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,122 Tcp/Ip IdleW  2.8 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,127 Tcp/Ip IdleW  2.7 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,133 Tcp/Ip IdleW  2.6 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,138 Tcp/Ip IdleW  2.6 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,144 Tcp/Ip IdleW  2.5 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,150 Tcp/Ip IdleW  2.4 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,155 Tcp/Ip IdleW  2.3 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,158 Tcp/Ip IdleW  2.3 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,163 Tcp/Ip IdleW  2.2 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,168 Tcp/Ip IdleW  2.1 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,174 Tcp/Ip IdleW  2.1 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,179 Tcp/Ip IdleW  2.0 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,185 Tcp/Ip IdleW  1.9 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,193 Tcp/Ip IdleW  1.8 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,201 Tcp/Ip IdleW  1.7 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,208 Tcp/Ip IdleW  1.6 M  170 198
>Admin   DSMAPI   IBM-OC-POSTBACK
>28,216 Tcp/Ip IdleW  1.6 M  170 198
>Admin   DSMAPI

Question about different MGM in inclexcl file

2015-10-26 Thread Robert Ouzen
Hi to all

I want to be on the safe side: I need to back up the PST files under another 
management class (another storage pool).

Here is what I have in my inclexcl file:

exclude"/.../*"

include"/media/nss/.../*.*" mgcc12
include"/media/nss/.../*.pst"   mgpst

Is the syntax correct?

Here is the output of q copygroup for management classes MGCC12 and MGPST:

tsm: ADSM2>q co cc active mgcc12 f=d

Policy Domain Name:      CC
Policy Set Name:         ACTIVE
Mgmt Class Name:         MGCC12
Copy Group Name:         STANDARD
Copy Group Type:         Backup
Versions Data Exists:    4
Versions Data Deleted:   1
Retain Extra Versions:   No Limit
Retain Only Version:     60
Copy Mode:               Modified
Copy Serialization:      Dynamic
Copy Frequency:          0
Copy Destination:        DEDUPTSM

tsm: ADSM2>q co cc active mgpst f=d

Policy Domain Name:      CC
Policy Set Name:         ACTIVE
Mgmt Class Name:         MGPST
Copy Group Name:         STANDARD
Copy Group Type:         Backup
Versions Data Exists:    4
Versions Data Deleted:   1
Retain Extra Versions:   No Limit
Retain Only Version:     60
Copy Mode:               Modified
Copy Serialization:      Shared Static
Copy Frequency:          0
Copy Destination:        V7000_2

Best Regards

Robert


Question about Define a directory-container storage pool in V7.1.3

2015-10-26 Thread Robert Ouzen
Hello to all

I wonder whether, with the new V7.1.3.100 feature called “Define a 
directory-container storage pool”, which improves inline dedup, I still need 
the same process as in previous versions.

In previous versions, to run deduplication from the client side you needed to 
register or update a node with the option deduplication=ClientorServer,

and to add to the client options file the line:  deduplication yes
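For reference, this is the previous-version setup being asked about (the node name is a placeholder; whether it is still required with directory-container pools is exactly the question):

```
/* server side */
update node MYNODE deduplication=clientorserver
```

plus the line "deduplication yes" in the client options file.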

Best Regards

Robert


Question about O.C V7.1.3

2015-10-25 Thread Robert Ouzen
Hi to all

After upgrading my TSM server to version V7.1.3.0 + V7.1.3.100 and my 
Operations center to V7.1.3.0.

I notice that I have in  Q SE a lot of:  IBM-OC-POSTBACK client name  which is 
the client name for O.C in the Hub server

Sess  Comm.  SessWait   Bytes   Bytes  Sess
Platform Client Name
Number Method StateTimeSent   Recvd Type
-- -- -- -- --- --- --- 
 --
20 Tcp/Ip Run  0 S8.7 M 145 Admin   
WinNTROBERT
31 Tcp/Ip Run  0 S   23.2 M 262.2 K Admin   
DSMAPI   IBM-OC-POSTBACK
13,742 Tcp/Ip IdleW3 S   62.2 K  40.2 M Admin   
Windows  ADSM2
26,735 Tcp/Ip Run  0 S  351.0 K 163 Admin   
WinNTROBERT
27,126 Tcp/Ip Run  0 S   40.5 K 715 Admin   
WinNTROBERT
27,847 Tcp/Ip IdleW2 S  269.3 K 117.3 K Admin   
DSMAPI   IBM-OC-POSTBACK
28,068 Tcp/Ip IdleW  3.4 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,078 Tcp/Ip IdleW  3.3 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,086 Tcp/Ip IdleW  3.2 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,094 Tcp/Ip IdleW  3.1 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,101 Tcp/Ip IdleW  3.1 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,116 Tcp/Ip IdleW  2.9 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,122 Tcp/Ip IdleW  2.8 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,127 Tcp/Ip IdleW  2.7 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,133 Tcp/Ip IdleW  2.6 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,138 Tcp/Ip IdleW  2.6 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,144 Tcp/Ip IdleW  2.5 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,150 Tcp/Ip IdleW  2.4 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,155 Tcp/Ip IdleW  2.3 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,158 Tcp/Ip IdleW  2.3 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,163 Tcp/Ip IdleW  2.2 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,168 Tcp/Ip IdleW  2.1 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,174 Tcp/Ip IdleW  2.1 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,179 Tcp/Ip IdleW  2.0 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,185 Tcp/Ip IdleW  1.9 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,193 Tcp/Ip IdleW  1.8 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,201 Tcp/Ip IdleW  1.7 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,208 Tcp/Ip IdleW  1.6 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,216 Tcp/Ip IdleW  1.6 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,224 Tcp/Ip IdleW  1.5 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,233 Tcp/Ip IdleW  1.4 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,240 Tcp/Ip IdleW  1.3 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,250 Tcp/Ip IdleW  1.2 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,257 Tcp/Ip IdleW  1.1 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,266 Tcp/Ip IdleW  1.1 M  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,273 Tcp/Ip IdleW   58 S  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,281 Tcp/Ip IdleW   53 S  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,293 Tcp/Ip IdleW   43 S  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,303 Tcp/Ip IdleW   33 S  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,309 Tcp/Ip IdleW   28 S  170 198 Admin   
DSMAPI   IBM-OC-POSTBACK
28,319 Tcp/Ip IdleW   17 S  170 198 Admin   
DSMAPI   IBM-OC

Re: Operation Center Blues!!

2015-10-08 Thread Robert Jose
Hi Vikas
Hmmm, I thought that would do the trick.

Ok, can you get me the results of this query from your hub server:  select
name, sec_last_catalog_backup from tsmgui_allsrv_grid

Also, is the value in the bottom right of the grid updating the last
refreshed time?

Thanks


Rob Jose
Storage Protect OC UI Developer / L3 Support

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 10/08/2015
02:58:24 AM:

> From: Vikas Arora <vikar...@gmail.com>
> To: ADSM-L@VM.MARIST.EDU
> Date: 10/08/2015 02:59 AM
> Subject: Re: [ADSM-L] Operation Center Blues!!
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Hi Robert,
>
> Thanks for your reply.
>
> 1. I understand inorder to upgrade my OC to v7.1.3, hub server must be
> first upgraded to v7.1.3 >> in that case we will need to wait a little
> while.
>
> 2. Please see the commands output.
>
> tsm: TSM197LM>sh monservers
> List of monitored server sessions:
>   TSM226:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM201D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM105:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSMNLAMS101:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM232D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM209D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM203D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM229D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM202D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM205:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM206D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM207D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM227:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM106D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM215:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM211:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM228:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM101:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM104D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM231D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM21:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM103D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM192LM:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM102D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM230D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM204D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM191LM:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM210D:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM294LM:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM291LM:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM293LM:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM296LM:
> admin IBM-OC-TSM197LM, running, rc=0.
>   TSM292LM:
> admin IBM-OC-TSM197LM, running, rc=0.
> PUSHSTATUS is ON.
> List of known servers:
>   ERR_TAPES:
> last collection 2015-10-08 11:45:58.742750
> deltaSec=   0, buildGrids=True
>   TSM102D:
> last collection 2015-10-07 06:11:24.956812
> deltaSec=   0, buildGrids=True
>   TSM103D:
> last collection 2015-10-06 19:09:41.844010
> deltaSec=   0, buildGrids=True
>   TSM104D:
> last collection 2015-10-06 11:53:36.380075
> deltaSec=   0, buildGrids=True
>   TSM204D:
> last collection 2015-10-07 04:43:01.985216
> deltaSec=   0, buildGrids=True
>   TSM197LM:
> last collection 2015-10-08 11:45:58.742750
> deltaSec=   0, buildGrids=True
>   TSM231D:
> last collection 2015-10-06 12:07:17.936507
> deltaSec=   0, buildGrids=True
>   TSM230D:
> last collection 2015-10-07 06:14:22.935757
> deltaSec=   0, buildGrids=True
>   TSM210D:
> last collection 2015-10-08 01:51:48.960110
> deltaSec=   0, buildGrids=True
>   TSM296LM:
> last collection 2015-10-08 11:03:19.105700
> deltaSec=  -3, buildGrids=True
>   TSM291LM:
> last collection 2015-10-08 08:32:40.00
> deltaSec=   0, buildGrids=False
>   TSM292LM:
> last collection 2015-10-08 09:46:08.00
> deltaSec=   0, buildGrids=False
>   TSM293LM:
> last collection 2015-10-08 10:49:38.169843
> deltaSec=   0, buildGrids=True
>   TSM294LM:
> last collection 2015-10-08 06:49:36.00
> deltaSec=   0, buildGrids=False
>   TSM191LM:
> last collection 2015-10-07 16:30:26.00
> deltaSec=  -3, buildGrids=False
>   TSM192LM:
> last collection 2015-10-06 19:25:09.00
> deltaSec=  -3, buildGrids=False
>   TSM21:
> l

Re: Operation Center Blues!!

2015-10-07 Thread Robert Jose
Hi Vikas
I am sorry you are having trouble with Operations Center. Let me see if I
can help. The first thing is that although 7.1.0 is a great version, we
have made a lot of updates and improvements in our latest OC (version
7.1.3).

Second, can you run query monitorsettings on both the hub and the spoke?
We want to make sure that the monitor setting is set to On and that the
spoke is in the monitored servers value on the hub (seen in the
monitorsettings output or by running show monservers on the hub).

If the monitor setting is off on the spoke, you can run this command on the
spoke to turn it on:  set statusmonitor on

Now, the other issue is that the Notification Listener could have crashed
(7.1.3 will restart the listener automatically). You can usually tell that the
notification listener has crashed because the last-refresh value in the bottom
right of the grid will be in the hours. The only way to get the notification
listener started again in 7.1.0 is to restart the OC.

I hope this helps

Rob Jose
TSM OC UI Developer / L3 Support
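The checks described above, collected as commands (a sketch; run them from an administrative command line):

```
/* on both the hub and the spoke */
query monitorsettings
/* on the spoke, if monitoring is off */
set statusmonitor on
/* on the hub, to list the monitored spokes */
show monservers
```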

"ADSM: Dist Stor Manager"  wrote on 10/07/2015
12:55:33 AM:

> From: Vikas Arora 
> To: ADSM-L@VM.MARIST.EDU
> Date: 10/07/2015 12:57 AM
> Subject: [ADSM-L] Operation Center Blues!!
> Sent by: "ADSM: Dist Stor Manager" 
>
> Hello All,
>
> I recently installed Operation Center version 7.1.0.100 to see the good
> stuff it has to offer, But I see that server status are not frequently
> updated.
>
> For example.
>
> TSM DB backup was done for spoke server TSM201D at  06:41 am today
morning,
> but as of 09:37 Am, I still see in OPeration center that last DB backup
was
> 12 hours ago.  Any Ideas why.
>
> Although when I choose that server I can see under completed tasks, that
> last DB was 3 hours ago.
>
>
>
> Spoke server is at v6.3.4
>
> I have ADMINCOMMITTIMEOUT 7200
>  ADMINIDLETIMEOUT20
>   PUSHSTATUS  YES
>
> Please see the snapshots below, was unable to send them here.
>
>
>
https://drive.google.com/file/d/0BwDDdmz6R7PteDZUWDVURkdpUm8/view?usp=sharing

>
>
https://drive.google.com/file/d/0BwDDdmz6R7PtOGdmLTljdVJibG8/view?usp=sharing

>
>
> thanks
>
>
>
>
>
>
> Grüße
> Vikas Arora
>
> *e Mail* : vikar...@gmail.com
>


Re: TSM 7.1.3 and Directory-container storage pools

2015-10-07 Thread Robert Ouzen
Del 

Great news ! 

Thanks Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: Wednesday, October 7, 2015 9:37 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 7.1.3 and Directory-container storage pools

For those that want to move existing data into the new directory-container 
storage pools, and don't want to use node replication, you will need to wait 
until 1H16. 
That is when we are targeting "move in" for the new directory-container storage 
pools.


Del
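Until then, the node-replication hop looks roughly like this on the source server (server name, addresses, password, and node name are placeholders; the target server's policy must send the node's data to the directory-container pool):

```
define server TARGETSRV hladdress=target.example.com lladdress=1500 serverpassword=secret
set replserver TARGETSRV
update node MYNODE replstate=enabled
replicate node MYNODE
```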



"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 10/07/2015
10:39:21 AM:

> From: "Ryder, Michael S" <michael_s.ry...@roche.com>
> To: ADSM-L@VM.MARIST.EDU
> Date: 10/07/2015 10:41 AM
> Subject: Re: TSM 7.1.3 and Directory-container storage pools Sent by: 
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> 
> Hello Deirdre --
> 
> Can you comment on when we might see a better way to migrate existing 
> storagepools into directory-container storagepools?  Perhaps an
adjustment
> of "move data" or a new command, that doesn't require multiple steps 
> hopping through a replication server?  It sounds like this is a needed 
> feature that some of us can't get to without this functionality.
> 
> Best regards,
> 
> Mike <http://rbbuswiki.bbg.roche.com/wiki/ryderm_page:start>, x7942 
> RMD IT Client Services <http://rmsit.dia.roche.com/Pages/default.aspx>
> 
> On Tue, Oct 6, 2015 at 5:52 AM, Erwann SIMON <erwann.si...@free.fr>
wrote:
> 
> > Hi,
> >
> > As previously said by Deirdre, see the FAQ :
> >
> > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/
> wiki/Tivoli%20Storage%20Manager/page/Directory-container%20storage%
> 20pools%20FAQs
> >
> > --
> > Best regards / Cordialement / مع تحياتي Erwann SIMON
> >
> > - Mail original -
> > De: "Robert Ouzen" <rou...@univ.haifa.ac.il>
> > À: ADSM-L@VM.MARIST.EDU
> > Envoyé: Mardi 6 Octobre 2015 11:47:12
> > Objet: Re: [ADSM-L] TSM 7.1.3 and Directory-container storage pools
> >
> > Hi to all
> >
> > Today in my V7.1.3 test environment , tried to move data from old 
> > stg fashion (file) to  directory container storage without any success
> > Tried with commands:   move nodedata and move data
> >
> > Here output
> >
> > tsm: TSMTEST>move nodedata test from=tsmstg1 to=stg_dir ANR3385E 
> > MOVE NODEDATA: The operation is not allowed for container
storage
> > pools.
> >
> > tsm: TSMTEST>move data C:\MOUNTPOINTS\STORAGE_1\STG1\0267.BFS
> > stg=stg_dir
> > ANR3385E MOVE DATA: The operation is not allowed for container 
> > storage pools.
> >
> > I tried too , to do a nextstg
> > TSMTEST>upd stg tsmstg1 nextstg=stg_dir
> > ANR2399E UPDATE STGPOOL: Storage pool STG_DIR is not a sequential
pool.
> >
> > I of course define stg with stgtype=directory (stg_dir) and after it 
> > define stgpooldirectory
> >
> > Anybody know a way to move old stg to new stg with stgtype=directory

> >
> > Regards
> >
> > Robert
> >
> >
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> > Behalf
Of
> > Stefan Folkerts
> > Sent: Friday, September 18, 2015 9:49 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: [ADSM-L] TSM 7.1.3 and Directory-container storage 
> > pools
> >
> > I believe you can "move data" data into the container pools, you 
> > just can't get the data out with traditional methods at the moment 
> > but only
via
> > TSM node replication.
> > I like the new pool type but they are only suitable to a specific 
> > type
of
> > setup, the good news is that this setup covers a lot of the new type
of TSM
> > deployments we are doing!
> >
> > On Wed, Sep 16, 2015 at 9:24 PM, Sergio O. Fuentes 
> > <sfuen...@umd.edu>
> > wrote:
> >
> > > This question is relevant.  How do I move from a file devclass
stgpool
> > > to a directory-container pool.  And what's the impact on the DB if 
> > > I
do
> > this?
> > >  I already had multi-site configured for our environment with the 
> > > tools that exist in versions <7.1.3.  I'm not getting another 
> > > 200TB array to move data to new directory-container pools.
> > >
> > > Thanks!
> > >
> > > SF
> > >
> > > On 9/16/15, 10:4

Re: TSM 7.1.3 and Directory-container storage pools

2015-10-06 Thread Robert Ouzen
Hi to all

Today in my V7.1.3 test environment I tried to move data from an old-style 
storage pool (file) to a directory-container storage pool, without any success.
I tried the commands move nodedata and move data.

Here is the output:

tsm: TSMTEST>move nodedata test from=tsmstg1 to=stg_dir
ANR3385E MOVE NODEDATA: The operation is not allowed for container storage 
pools.

tsm: TSMTEST>move data C:\MOUNTPOINTS\STORAGE_1\STG1\0267.BFS stg=stg_dir
ANR3385E MOVE DATA: The operation is not allowed for container storage pools.

I tried too , to do a nextstg
TSMTEST>upd stg tsmstg1 nextstg=stg_dir
ANR2399E UPDATE STGPOOL: Storage pool STG_DIR is not a sequential pool.

I did of course define the stg with stgtype=directory (stg_dir) and afterwards 
define the stgpooldirectory.

Does anybody know a way to move an old stg to a new stg with stgtype=directory?

Regards 

Robert


 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Friday, September 18, 2015 9:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 7.1.3 and Directory-container storage pools

I believe you can "move data" data into the container pools, you just can't get 
the data out with traditional methods at the moment but only via TSM node 
replication.
I like the new pool type but they are only suitable to a specific type of 
setup, the good news is that this setup covers a lot of the new type of TSM 
deployments we are doing!

On Wed, Sep 16, 2015 at 9:24 PM, Sergio O. Fuentes <sfuen...@umd.edu> wrote:

> This question is relevant.  How do I move from a file devclass stgpool 
> to a directory-container pool.  And what's the impact on the DB if I do this?
>  I already had multi-site configured for our environment with the 
> tools that exist in versions <7.1.3.  I'm not getting another 200TB 
> array to move data to new directory-container pools.
>
> Thanks!
>
> SF
>
> On 9/16/15, 10:49 AM, "ADSM: Dist Stor Manager on behalf of Ryder, 
> Michael S" <ADSM-L@VM.MARIST.EDU on behalf of michael_s.ry...@roche.com> 
> wrote:
>
> >I am very interested in directory-container storage pools.
> >
> >But...
> >
> >If Migration or Move Data are not options, then how does one 
> >transition data from existing primary storage pools to a 
> >directory-container storage pool?
> >
> >Mike
> >
> >Best regards,
> >
> >Mike <http://rbbuswiki.bbg.roche.com/wiki/ryderm_page:start>, x7942 
> >RMD IT Client Services 
> ><http://rmsit.dia.roche.com/Pages/default.aspx>
> >
> >On Wed, Sep 16, 2015 at 9:23 AM, Rick Adamson 
> ><rickadam...@segrocers.com>
> >wrote:
> >
> >> I may be wrong but from reading the 7.1.3 doco new container 
> >> approach combined with the inline dedup eliminate the need for some 
> >> of these processes. Also, where traditionally a "copy" storage pool 
> >> was used they now refer to it as a "protect" storage pool which has 
> >> the ability to be replicated to another "onsite" or "offsite" container 
> >> storage pool.
> >>
> >> Remember that with deduplicated data many of the processes you 
> >>mentioned  required that the data be rehydrated to be performed. An 
> >>added benefit is  if in fact these processes are no longer required 
> >>it will free up system  resources and as a result lower storage 
> >>costs and increase scalability.
> >>
> >> Fortunately, there are some good resources for additional information:
> >> IBM you tube channel:
> >> https://www.youtube.com/channel/UCGkjRNkO0AQNyQbWhS1tTzw
> >> IBM knowledge center for 7.1.3:
> >>
> >>
> http://www-01.ibm.com/support/knowledgecenter/SSGSG7_7.1.3/tsm/welcome
> .ht
> >>ml
> >>
> >> I will be installing it on two systems today to begin testing, 
> >> hopefully I'll be able to comment more soon..
> >>
> >>
> >> -Rick Adamson
> >>
> >>
> >> -Original Message-
> >> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> >> Behalf
> Of
> >> James Thorne
> >> Sent: Wednesday, September 16, 2015 5:41 AM
> >> To: ADSM-L@VM.MARIST.EDU
> >> Subject: Re: [ADSM-L] TSM 7.1.3 and Directory-container storage 
> >> pools
> >>
> >> Hi Karel.
> >>
> >> That's the conclusion we came to too.
> >>
> >> James.
> >>
> >> -Original Message-
> >> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> >> Behalf
> Of
> >> Karel Bos
> >>

Version 7.1.3

2015-09-23 Thread Robert Ouzen
Hello All

I wonder whether people have already upgraded to version 7.1.3 and have 
comments or appreciations. I understand that it is a big new concept, with 
container storage pools for inline dedup.

Is anyone using it, and can you share remarks?

T.I.A Robert


very low dedup percent on TDP for SAP archives

2015-09-15 Thread Robert Ouzen
Hi to all

We had to move our clients with TDP for SAP Oracle 6.2.1 from our Data Domain 
to a new V7000 storage unit with RTC.

This storage, V7000_1, is configured with dedup, and only from the server side.

Our policy looks like:

1. Once a week SAP is DOWN/OFFLINE and an ARCHIVE of the file system is taken 
(file level).
2. Every day there is a FULL backup of TDP for SAP.
3. Monthly there is ALSO a full TDP for SAP.

4. All SAP clients (TDP and F.S.) are backed up to the same storage pool, 
V7000_1.

In dsmserv.opt we have the configuration:

numopenvolsallowed 200

DedupTier2FileSize100

DedupTier3FileSize400

ServerDedupTxnLimit   2000



Our devclass looks like:

tsm: ADSM2>q devc fileclassv7000

Device Class Name:       FILECLASSV7000
Device Access Strategy:  Sequential
Storage Pool Count:      2
Device Type:             FILE
Format:                  DRIVE
Est/Max Capacity (MB):   51,200.0
Mount Limit:             4,096



I saw a new parameter on REGISTER NODE (I left it at the default, YES):

Split Large Objects: Yes  (default)

My problem: after a lot of full archives I get a very LOW dedup percentage. 
Here is the output of a script:

tsm: ADSM2> run dedupreport

Stgpool on     REPORTING_MB before  LOGICAL_MB after  Saved space  Saved
Adsm2 server   Dedup in GB          Dedup in GB       in GB        in %
-------------  -------------------  ----------------  -----------  -----
V7000_1        23685.36             22079.84          1605.52      6
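As a sanity check, the last two columns of the report can be re-derived from the first two with a few lines of awk (the numbers are copied from the output above; the whole block is just arithmetic, not a query against the server). The "6" in the last column looks like a truncated 6.8%:

```shell
# Re-derive the dedup savings from the report's two size columns.
awk 'BEGIN {
  before = 23685.36          # REPORTING_MB before dedup (in GB)
  after  = 22079.84          # LOGICAL_MB after dedup (in GB)
  saved  = before - after
  printf "saved %.2f GB, %.1f%% of pre-dedup size\n", saved, saved / before * 100
}'
```

Either way, under 7% savings on weekly full archives of the same database is very low, which matches Robert's complaint.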

Our SAP clients are configured in utl files with:

MULTIPLEXING  1
RC_COMPRESSION   NO

From the TSM server side, all clients show:  Compression: No

Any ideas? Did I miss something?

TSM server version 7.1.1.200
OS: Windows 2008 R2, 64-bit, 128 GB memory

Best Regards

Robert


Re: Script question

2015-09-08 Thread Robert Ouzen
Sorry, I did not mention the operating system: it is a Windows 2008 R2 TSM server.

Regards

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Tuesday, September 08, 2015 2:06 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Script question

You didn't indicate which operating system.
Here is an AIX/KSH script to prompt for and change a node password.

#!/usr/bin/ksh

print ""
print "Dialog to change a NODE password."
print ""
print "Please enter a node name, or partial node name.  I'll do a lookup on it 
for you!"
read Inode
print ""
print "You entered $Inode , performing lookup . . . "
/tsmdata/tsm_scripts/q_node.ksh $Inode
print ""
print "There, it should be one of the above. "
print "Please enter the FULL NODE NAME from above."
Inode=""
read Inode
print ""
print "The node we are going to change is $Inode"
print ""
print "Please enter the TSM instance of the node."
read Itsm
print ""
print "The node is in  $Itsm"
print ""
print "Please enter the new password."
read Ipwd
print ""
print "TODO: chg pwd for  node= $Inode  tsm= $Itsm  newpwd= $Ipwd"
print "type \"ok\" to proceed"
read Iok
if [[ $Iok == 'ok' ]]; then
print ""
print "performing change . . . "
/tsmdata/tsm_scripts/run_cmd.ksh $Itsm "update node $Inode $Ipwd"
print ""
print "all done"
else
print ""
print "terminating"
fi




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Robert 
Ouzen
Sent: Monday, September 07, 2015 9:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Script question

Hi to all

Does anybody have an example of a script that asks for a parameter, waits for 
the input, and then passes the parameter to a command?

For example something like this:

"Please enter the process number   $1 "
Waiting ...
can pr   $1

Best Regards

Robert


-

The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.


Script question

2015-09-07 Thread Robert Ouzen
Hi to all

Does anybody have an example of a script that asks for a parameter, waits for 
the input, and then passes the parameter to a command?

For example something like this:

"Please enter the process number   $1 "
Waiting ...
can pr   $1

Best Regards

Robert
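A minimal POSIX-shell sketch of the pattern being asked for: print a prompt, read the value, substitute it into a command. The echo stands in for the real admin-client call; the commented dsmadmc line is only an assumed illustration, not tested syntax for any particular environment.

```shell
#!/bin/sh
# Prompt for a process number, wait for input, then pass it on to a command.
printf "Please enter the process number: "
read procnum
echo "would run: cancel process $procnum"
# Real version (sketch): dsmadmc -id=admin -password=xxx "cancel process $procnum"
```

Since the follow-up says the server is Windows 2008 R2, the cmd.exe equivalent of the read step is, if memory serves, "set /P procnum=Please enter the process number: " in a .bat file.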


Configuring TDP for SQL Server 2012 in Win Server 2012 R2 systems in a failover cluster

2015-08-28 Thread Robert Talda
Folks:
  We've been asked on short notice to start backing up a high-visibility SQL 
Server 2012 database residing on a pair of Windows Server 2012 R2 systems 
configured as a failover cluster.

  Oh, and the DBA left.

  Our first attempts have not been promising:
* installing and configuring the TDP for SQL on one server, it came up looking 
for FlashCopy Manager
* when attempting to use the TDP for SQL on the second server, we discovered 
the VSS Writer for SQL Server was missing
* adding the recommended options for backing up in a failover cluster 
environment (e.g. "Data Protection for SQL Server in a Windows failover cluster 
environment", http://www-01.ibm.com/support/knowledgecenter/SSTFZR_7.1.2/com.ibm.itsm.db.sql.doc/c_dpfcm_plan_sqlwsfc_sql.html?lang=en-us) 
resulted in the base TSM client skipping the backups of all the drives in the 
system

  So, we would like to talk off-list with someone who has experience 
configuring such an environment.

Thanks in advance!

Bob
Robert Talda
EZ-Backup Service Manager
Cornell University
r...@cornell.edu
607-255-8280


Re: mismatch capacity

2015-08-19 Thread Robert Ouzen
Hello Erwann

Thanks I completely forgot !

Best Regards Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Erwann 
SIMON
Sent: Wednesday, August 19, 2015 1:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] mismatch capacity

Hello Robert,

As your storage pools are configured for deduplication, you have to use the 
reporting_mb column (not logical_mb) in order to get the same statistics from 
Q OCC and SELECT ... FROM OCCUPANCY.
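A toy illustration of why the two columns diverge on a deduplicated pool. The rows below are made-up reporting_mb/logical_mb pairs, not Robert's data; the point is only that summing logical_mb yields a smaller total than the reporting_mb sum that Q OCC reflects.

```shell
# Columns: pool  reporting_mb  logical_mb  (illustrative values only)
printf '%s\n' \
  'V7000_2 76000.00 74920.13' \
  'V7000_2  3100.00  3060.52' |
awk '{ rep += $2; lgc += $3 }
     END { printf "reporting=%.2f GB logical=%.2f GB\n", rep/1024, lgc/1024 }'
```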

-- 
Best regards / Cordialement / مع تحياتي
Erwann SIMON

- Mail original -
De: Robert Ouzen rou...@univ.haifa.ac.il
À: ADSM-L@VM.MARIST.EDU
Envoyé: Mercredi 19 Août 2015 11:19:19
Objet: [ADSM-L] mismatch capacity

Hi to all

I wanted to know for a specific nodename the amount of data backup on a 
specific stgpool

Here the script:

tsm: ADSM2> select sum(logical_mb) / 1024 as "Capacity of Data" from occupancy 
where node_name='MNHL2_CLUSTER' and stgpool_name='V7000_2'

Capacity of Data
----------------
          745.50

But when I run the query "q occ mnhl2_cluster" I get a different summary!

As you can see, there is much more data on storage pool V7000_2.

I also ran an AUDIT LICENSE beforehand.

tsm: ADSM2> q occ mnhl2_cluster

Node Name      Type  Filespace Name    FSID  Storage    Number of    Physical  Logical
                                             Pool Name  Files        Space     Space
                                                                     Occupied  Occupied
                                                                     (MB)      (MB)
-------------  ----  ----------------  ----  ---------  -----------  --------  ------------
MNHL2_CLUSTER  Bkup  /media/nss/APPS      1  DEDUPTSM            49         -          2.04
MNHL2_CLUSTER  Bkup  /media/nss/CSMS1     3  DEDUPTSM       367,258         -    540,014.69
MNHL2_CLUSTER  Bkup  /media/nss/CSMS1     3  V7000_2             44         -     74,920.13
MNHL2_CLUSTER  Bkup  /media/nss/CSMS2     4  DEDUPTSM       221,781         -  1,224,645.88
MNHL2_CLUSTER  Bkup  /media/nss/CSMS3     5  DEDUPTSM       135,968         -  1,324,964.75
MNHL2_CLUSTER  Bkup  /media/nss/CSMS3     5  V7000_2             10         -      3,060.52
MNHL2_CLUSTER  Bkup  /media/nss/CSMS4     2  DEDUPTSM        19,294         -    471,033.91
MNHL2_CLUSTER  Bkup  /media/nss/CSMS4     2  V7000_2              1         -        137.14
MNHL2_CLUSTER  Bkup  /media/nss/DATA      6  DEDUPTSM     1,758,166         -    799,055.63
MNHL2_CLUSTER  Bkup  /media/nss/DATA      6  V7000_2            176         -    441,244.84
MNHL2_CLUSTER  Bkup  /media/nss/DATA2     7  DEDUPTSM       366,650         -    901,211.44
MNHL2_CLUSTER  Bkup  /media/nss/DATA2     7  V7000_2            332         -    733,649.63
MNHL2_CLUSTER  Bkup  /media/nss/NSCI      9  DEDUPTSM     1,286,727         -  2,253,114.25
MNHL2_CLUSTER  Bkup  /media/nss/NSCI      9  V7000_2             46         -     14,892.99


Best Regards

Robert


mismatch capacity

2015-08-19 Thread Robert Ouzen
Hi to all

I wanted to know for a specific nodename the amount of data backup on a 
specific stgpool

Here the script:

tsm: ADSM2> select sum(logical_mb) / 1024 as "Capacity of Data" from occupancy 
where node_name='MNHL2_CLUSTER' and stgpool_name='V7000_2'

Capacity of Data
----------------
          745.50

But when I run the query "q occ mnhl2_cluster" I get a different summary!

As you can see, there is much more data on storage pool V7000_2.

I also ran an AUDIT LICENSE beforehand.

tsm: ADSM2> q occ mnhl2_cluster

Node Name      Type  Filespace Name    FSID  Storage    Number of    Physical  Logical
                                             Pool Name  Files        Space     Space
                                                                     Occupied  Occupied
                                                                     (MB)      (MB)
-------------  ----  ----------------  ----  ---------  -----------  --------  ------------
MNHL2_CLUSTER  Bkup  /media/nss/APPS      1  DEDUPTSM            49         -          2.04
MNHL2_CLUSTER  Bkup  /media/nss/CSMS1     3  DEDUPTSM       367,258         -    540,014.69
MNHL2_CLUSTER  Bkup  /media/nss/CSMS1     3  V7000_2             44         -     74,920.13
MNHL2_CLUSTER  Bkup  /media/nss/CSMS2     4  DEDUPTSM       221,781         -  1,224,645.88
MNHL2_CLUSTER  Bkup  /media/nss/CSMS3     5  DEDUPTSM       135,968         -  1,324,964.75
MNHL2_CLUSTER  Bkup  /media/nss/CSMS3     5  V7000_2             10         -      3,060.52
MNHL2_CLUSTER  Bkup  /media/nss/CSMS4     2  DEDUPTSM        19,294         -    471,033.91
MNHL2_CLUSTER  Bkup  /media/nss/CSMS4     2  V7000_2              1         -        137.14
MNHL2_CLUSTER  Bkup  /media/nss/DATA      6  DEDUPTSM     1,758,166         -    799,055.63
MNHL2_CLUSTER  Bkup  /media/nss/DATA      6  V7000_2            176         -    441,244.84
MNHL2_CLUSTER  Bkup  /media/nss/DATA2     7  DEDUPTSM       366,650         -    901,211.44
MNHL2_CLUSTER  Bkup  /media/nss/DATA2     7  V7000_2            332         -    733,649.63
MNHL2_CLUSTER  Bkup  /media/nss/NSCI      9  DEDUPTSM     1,286,727         -  2,253,114.25
MNHL2_CLUSTER  Bkup  /media/nss/NSCI      9  V7000_2             46         -     14,892.99


Best Regards

Robert


Re: Different management class

2015-08-13 Thread Robert Ouzen
Hi Karel

Is there a way to trigger the move? I think a full backup might do it, but I 
am trying to find another workaround; it is a huge client.

Best Regards 

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Karel 
Bos
Sent: Thursday, August 13, 2015 10:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Different management class

New backup copies will go to new stg, existing backup copies gets rebound to 
new mgmt class but won't be automatically moved from 1 stg pool to another.
Op 13 aug. 2015 08:05 schreef Robert Ouzen rou...@univ.haifa.ac.il:

 Hi to all

 I am trying to set up a scenario in which, for one client, only the pst
 files go to another storage pool. I created a new management class with a
 different copy group destination for pst files, called mgtest1:

 - For all files: management class mgtest (default), destination TSMSTG1 storage
 - For pst files: management class mgtest1, destination TSMSTG2 storage

 Here is the copy group output:

 tsm: TSMTEST> q co test active mgtest* f=d

 Policy  Policy  Mgmt     Copy      Copy    Versions  Versions  Retain    Retain   Copy      Copy           Copy   Copy         TOC
 Domain  Set     Class    Group     Group   Data      Data      Extra     Only     Mode      Serialization  Freq.  Destination  Destination
 Name    Name    Name     Name      Type    Exists    Deleted   Versions  Version
 ------  ------  -------  --------  ------  --------  --------  --------  -------  --------  -------------  -----  -----------  -----------
 TEST    ACTIVE  MGTEST   STANDARD  Backup  3         1         No Limit  90       Modified  Shared Static  0      TSMSTG1      TSMSTG1
 TEST    ACTIVE  MGTEST1  STANDARD  Backup  2         1         30        60       Modified  Shared Static  0      TSMSTG2      TSMSTG2

 In my dsm.opt I added two include statements:

 include c:\...\*.* mgtest
 include c:\...\*.pst mgtest1
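For what these two lines do: the client evaluates the include-exclude list bottom-up, so the more specific *.pst entry is hit first and wins for pst files. A sketch of that selection rule (the paths are hypothetical, and the function is only an illustration, not client code):

```shell
# Bottom-up include matching: the last dsm.opt line is tested first.
bind_class() {
  case "$1" in
    *.pst) echo mgtest1 ;;   # include c:\...\*.pst mgtest1
    *)     echo mgtest  ;;   # include c:\...\*.*   mgtest
  esac
}
bind_class 'c:\data\mail.pst'    # prints mgtest1
bind_class 'c:\data\notes.txt'   # prints mgtest
```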

 Run an incremental backup with the new include pst statement:

 Here the output:

 08/13/2015 08:02:19  ANE4952I (Session: 1629, Node: TEST)  Total number of objects inspected:   44,375
 08/13/2015 08:02:19  ANE4954I (Session: 1629, Node: TEST)  Total number of objects backed up:       24
 08/13/2015 08:02:19  ANE4958I (Session: 1629, Node: TEST)  Total number of objects updated:          0
 08/13/2015 08:02:19  ANE4960I (Session: 1629, Node: TEST)  Total number of objects rebound:          9
 08/13/2015 08:02:19  ANE4957I (Session: 1629, Node: TEST)  Total number of objects deleted:          0
 08/13/2015 08:02:19  ANE4970I (Session: 1629, Node: TEST)  Total number of objects expired:          0
 08/13/2015 08:02:19  ANE4959I (Session: 1629, Node: TEST)  Total number of objects failed:           0
 08/13/2015 08:02:19  ANE4197I (Session: 1629, Node: TEST)  Total number of objects encrypted:        0
 08/13/2015 08:02:19  ANE4965I (Session: 1629, Node: TEST)  Total number of subfile objects:          0
 08/13/2015 08:02:19  ANE4914I (Session: 1629, Node: TEST)  Total number of objects grew:             0
 08/13/2015 08:02:19  ANE4916I (Session: 1629, Node: TEST)  Total number of retries:                  0
 08/13/2015 08:02:19  ANE4977I (Session: 1629, Node: TEST)  Total number of bytes inspected:   12.99 GB
 08/13/2015 08:02:19  ANE4961I (Session: 1629, Node: TEST)  Total number of bytes transferred: 25.16 MB
 08/13/2015 08:02:19  ANE4963I (Session: 1629, Node: TEST)  Data transfer time:                0.21 sec
 08/13/2015 08:02:19  ANE4966I (Session: 1629, Node: TEST)  Network data transfer rate: 117,985.53 KB/sec
 08/13/2015 08:02:19  ANE4967I (Session: 1629, Node: TEST)  Aggregate data

Re: Different management class

2015-08-13 Thread Robert Ouzen
Hi Karel

I think I found the solution: change the copy mode of the backup copy group in 
management class mgtest1 to absolute.

Run an incremental backup (the pst files will be rebound and stored in the 
correct storage pool), then set the copy mode back to modified.
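The workaround can be sketched as an admin-command sequence. Everything below is a sketch: SPTEST is a hypothetical editable policy set name (the ACTIVE set cannot be updated directly, so the change goes into a named set which is then activated), and the run stub only echoes instead of invoking dsmadmc.

```shell
#!/bin/sh
# Sketch only. SPTEST is a hypothetical editable policy set name; run()
# just echoes -- swap in a real dsmadmc invocation for your environment.
run() { echo "dsmadmc would run: $*"; }

run update copygroup TEST SPTEST MGTEST1 STANDARD type=backup mode=absolute
run activate policyset TEST SPTEST
# ... client runs an incremental backup; pst files are resent to TSMSTG2 ...
run update copygroup TEST SPTEST MGTEST1 STANDARD type=backup mode=modified
run activate policyset TEST SPTEST
```

Note the side effect: with mode=absolute, the incremental resends every file bound to mgtest1, not just changed ones, which is exactly what forces the data into the new pool.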

Best Regards

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Karel 
Bos
Sent: Thursday, August 13, 2015 10:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Different management class

New backup copies will go to new stg, existing backup copies gets rebound to 
new mgmt class but won't be automatically moved from 1 stg pool to another.
Op 13 aug. 2015 08:05 schreef Robert Ouzen rou...@univ.haifa.ac.il:

 Hi to all

 I am trying to set up a scenario in which, for one client, only the pst
 files go to another storage pool. I created a new management class with a
 different copy group destination for pst files, called mgtest1:

 - For all files: management class mgtest (default), destination TSMSTG1 storage
 - For pst files: management class mgtest1, destination TSMSTG2 storage

 Here is the copy group output:

 tsm: TSMTEST> q co test active mgtest* f=d

 Policy  Policy  Mgmt     Copy      Copy    Versions  Versions  Retain    Retain   Copy      Copy           Copy   Copy         TOC
 Domain  Set     Class    Group     Group   Data      Data      Extra     Only     Mode      Serialization  Freq.  Destination  Destination
 Name    Name    Name     Name      Type    Exists    Deleted   Versions  Version
 ------  ------  -------  --------  ------  --------  --------  --------  -------  --------  -------------  -----  -----------  -----------
 TEST    ACTIVE  MGTEST   STANDARD  Backup  3         1         No Limit  90       Modified  Shared Static  0      TSMSTG1      TSMSTG1
 TEST    ACTIVE  MGTEST1  STANDARD  Backup  2         1         30        60       Modified  Shared Static  0      TSMSTG2      TSMSTG2

 In my dsm.opt I added two include statements:

 include c:\...\*.* mgtest
 include c:\...\*.pst mgtest1

 Run an incremental backup with the new include pst statement:

 Here the output:

 08/13/2015 08:02:19  ANE4952I (Session: 1629, Node: TEST)  Total number of objects inspected:   44,375
 08/13/2015 08:02:19  ANE4954I (Session: 1629, Node: TEST)  Total number of objects backed up:       24
 08/13/2015 08:02:19  ANE4958I (Session: 1629, Node: TEST)  Total number of objects updated:          0
 08/13/2015 08:02:19  ANE4960I (Session: 1629, Node: TEST)  Total number of objects rebound:          9
 08/13/2015 08:02:19  ANE4957I (Session: 1629, Node: TEST)  Total number of objects deleted:          0
 08/13/2015 08:02:19  ANE4970I (Session: 1629, Node: TEST)  Total number of objects expired:          0
 08/13/2015 08:02:19  ANE4959I (Session: 1629, Node: TEST)  Total number of objects failed:           0
 08/13/2015 08:02:19  ANE4197I (Session: 1629, Node: TEST)  Total number of objects encrypted:        0
 08/13/2015 08:02:19  ANE4965I (Session: 1629, Node: TEST)  Total number of subfile objects:          0
 08/13/2015 08:02:19  ANE4914I (Session: 1629, Node: TEST)  Total number of objects grew:             0
 08/13/2015 08:02:19  ANE4916I (Session: 1629, Node: TEST)  Total number of retries:                  0
 08/13/2015 08:02:19  ANE4977I (Session: 1629, Node: TEST)  Total number of bytes inspected:   12.99 GB
 08/13/2015 08:02:19  ANE4961I (Session: 1629, Node: TEST)  Total number of bytes transferred: 25.16 MB
 08/13/2015 08:02:19  ANE4963I (Session: 1629, Node: TEST)  Data transfer time:                0.21 sec
 08/13/2015 08:02:19  ANE4966I (Session: 1629, Node: TEST)  Network data transfer rate: 117,985.53 KB/sec

Different management class

2015-08-13 Thread Robert Ouzen
: 1629, Node: TEST)  Subfile objects reduced by: 0%   (SESSION: 1629)
08/13/2015 08:02:19  ANE4964I (Session: 1629, Node: TEST)  Elapsed processing time: 00:00:57  (SESSION: 1629)

As you can see, there are rebound files (the pst files); I checked in the 
restore view and the pst files are bound to mgtest1.

But when I did a q occ everything is on TSMSTG1

tsm: TSMTEST> q occ test

Node Name  Type  Filespace Name  FSID  Storage    Number of  Physical  Logical
                                       Pool Name  Files      Space     Space
                                                             Occupied  Occupied
                                                             (MB)      (MB)
---------  ----  --------------  ----  ---------  ---------  --------  ---------
TEST       Bkup  \\nasw\c$          3  TSMSTG1       41,084         -  12,726.42

And also for a q nodedata:

tsm: TSMTEST> q nodedata test

Node Name  Volume Name                              Storage Pool  Physical Space
                                                    Name          Occupied (MB)
---------  ---------------------------------------  ------------  --------------
TEST       C:\MOUNTPOINTS\STORAGE_1\STG3\025C.BFS   TSMSTG1       5,803.03

Why didn't it write the pst files to the TSMSTG2 storage pool, where the 
mgtest1 management class is configured??

Best Regards

Robert

