Re: Vote for feature: Media lifecycle

2019-04-04 Thread Harris, Steven
When I moved over from Mainframe to midrange a very long time ago, I was used 
to the old mainframe TMS/CA-1 tape management software.
Never understood why TSM did not implement a scratch pool as in that product to 
keep track of its scratch tapes, even if just to keep the usage stats.

I too have voted for the RFE

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Skylar 
Thompson
Sent: Friday, 5 April 2019 1:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Vote for feature: Media lifecycle

I voted for it as well. We can get some of this information from ACSLS but it 
would be nice to have something that is not tied to Oracle/STK.

On Tue, Mar 26, 2019 at 10:21:12AM +0100, Hans Christian Riksheim wrote:
> Voted.
>
> A few years ago I made an RFE requesting the ability to do audit
> library without using half a day shutting down all the library clients
> but it was rejected. Hope this goes better.
>
> hc
>
> On Tue, Mar 26, 2019 at 9:28 AM Jansen, Jonas
> 
> wrote:
>
> > Hello,
> >
> > maybe some of you may require automated media lifecycle too. If you
> > would like this feature, please vote for the request:
> >
> > https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=131259
> >
> > Best regards,
> > ---
> > Jonas Jansen
> >
> > IT Center
> > Gruppe: Server & Storage
> > Abteilung: Systeme & Betrieb
> > RWTH Aachen University
> > Seffenter Weg 23
> > 52074 Aachen
> > Tel: +49 241 80-28784
> > Fax: +49 241 80-22134
> > jan...@itc.rwth-aachen.de
> > www.itc.rwth-aachen.de
> >
> >
> >
> >

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

This message and any attachment is confidential and may be privileged or 
otherwise protected from disclosure. You should immediately delete the message 
if you are not the intended recipient. If you have received this email by 
mistake please delete it from your system; you should not copy the message or 
disclose its content to anyone. 

This electronic communication may contain general financial product advice but 
should not be relied upon or construed as a recommendation of any financial 
product. The information has been prepared without taking into account your 
objectives, financial situation or needs. You should consider the Product 
Disclosure Statement relating to the financial product and consult your 
financial adviser before making a decision about whether to acquire, hold or 
dispose of a financial product. 

For further details on the financial product please go to http://www.bt.com.au 

Past performance is not a reliable indicator of future performance.


Re: What client version do you keep for long term

2019-03-14 Thread Harris, Steven
Hi Rick

What I'd like is a yearly refresh of the client to the latest stable level, so 
the .0 release unless there is a particular patch that we need.

What actually happens is that I'm not permitted to use TSM to roll out client 
updates. The windows guys have their tools for that and the unix guys have 
their tools. What this means is that they never get updated.

I have a version built into the standard build, and everything gets built with 
that until the next standard build.  Seems to work.

Cheers

Steve

Steven Harris
TSM Admin/Consultant 
Canberra Australia 

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Rhodes, 
Richard L.
Sent: Friday, 15 March 2019 4:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] What client version do you keep for long term

Hi Everyone!

Curious what others do for holding onto BA client versions.
For the long term, what versions of clients do you hang onto?
  All maintenance levels?   ie:  7.1.8.0
  All patch levels?  ie:  7.1.8.x
  Just highest patch levels?  ie:  7.1.8.4 (just highest last digit)


Rick





--

The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.



Re: dsm.sys question

2019-02-27 Thread Harris, Steven
Rick

The m4 macro processor is a standard unix offering and can do anything from 
simple includes and variable substitutions to lisp-like processing that will 
boggle your mind. An m4 macro with some include files and a makefile with a 
cron job to build your dsm.sys might do the job.
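A minimal sketch of that approach (the server name, port, and file paths are all made up for illustration, not from any real dsm.sys):

```shell
# Build dsm.sys from an m4 template plus a per-host include file.
# All names and paths here are illustrative.
cat > dsm.sys.m4 <<'EOF'
define(`TSM_SERVER', `tsm01.example.com')dnl
SErvername  PROD
   COMMMethod        TCPip
   TCPServeraddress  TSM_SERVER
   InclExcl          /opt/tivoli/tsm/client/ba/bin/inclexcl.local
include(`dsm.sys.local')dnl
EOF

# Host-specific lines live in their own file, so the template stays generic.
echo '   TCPPort           1502' > dsm.sys.local

m4 dsm.sys.m4 > dsm.sys
cat dsm.sys
```

A makefile rule with `dsm.sys.m4` and `dsm.sys.local` as prerequisites, run from cron, would then regenerate the file only when an input changes.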

Cheers

Steve 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Thursday, 28 February 2019 6:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] dsm.sys question

Hi Rick,

I'm not aware of a mechanism that allows one to do that with dsmc/dsm.sys, but 
Puppet does have the ability to include arbitrary lines in a file, either via a 
template or directly in a rule definition.

Another option would be to use server-side client option sets:

https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/t_mgclinod_mkclioptsets.html

These options mirror what can be set in dsm.sys, and can either be overridden 
by the client, or enforced by the server.
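For example (option set and node names are illustrative), one enforced and one overridable option might look like:

```
def cloptset LABOPTS description="Common lab client options"
def clientopt LABOPTS compression yes force=yes
def clientopt LABOPTS resourceutilization 4 force=no
upd node LABNODE1 cloptset=LABOPTS
```

With force=yes the client's own dsm.sys setting is ignored; with force=no the client can still override it locally.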

On Wed, Feb 27, 2019 at 06:58:30PM +, Rhodes, Richard L. wrote:
> Hello,
>
> Our Unix team in implementing a management application named Puppet.
> They are running into a problem using Puppet to setup/maintain the TSM 
> client dsm.sys files.  They create/maintain the dsm.sys as per a 
> template of some kind.  If you change a dsm.sys with a unique option, 
> it gets overwritten by the standard template when Puppet 
> refreshes/checks the file.  The inclexcl option pulls include/excludes 
> from a separate local file so this works fine for local specific needs.
> But some systems need other settings or whole servername stanzas that 
> are unique.  I've looked through the BA client manual and see no way 
> to include arbitrary lines from some other file into dsm.sys.
>
> Q) Is there a way to source options from another file into the dsm.sys, kind 
> of like the inclexcl option does?
>
>
> Thanks
>
> Rick

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine



Re: Bottomless pit

2019-02-26 Thread Harris, Steven
Zoltan

I had something similar.  Prod node of application had a reasonable number of 
systemstate objects. Two nonprod nodes of same application had huge numbers of 
systemstate.  This first came to my attention when daily expiration was taking 
3 or more days instead of the usual 30 minutes.

As far as I can tell the nonprod nodes were set up in a lazy manner and the 
database and application were dumped on the C: drive in a place that is part of 
systemstate.  I excluded the systemstate for these two nodes and deleted the 
filespaces - which took a week or more.  The local ticket is still open with 
the application people, almost a year on.

Hope that helps

Steve.

Steven Harris
TSM Admin/Consultant 
Canberra Australia.
  

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Deschner, Roger Douglas
Sent: Wednesday, 27 February 2019 6:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Bottomless pit

I set up a cron job that does filespace and node deletions in a batch on 
weekends. Then if it takes a long time, I don't care. I set this up back on 
server version 5, when deletions took a REALLY long time, and I kept it in V6 
to deal with exactly this issue with System State. We've been using Client 
Option Sets to prevent System State backup for several years now, but sometimes 
one slips through. Also we now have actual data filespaces with many millions 
of stored objects, and sometimes one of those needs to be deleted, another 
reason to batch deletions on weekends.
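As a sketch, the batch amounts to a crontab entry driving a dsmadmc macro (the admin ID, password placeholder, paths, and node/filespace names are all illustrative):

```
# crontab entry: run the cleanup macro early Saturday morning
0 2 * * 6  dsmadmc -id=deladmin -pa=XXXXXXXX -itemcommit -noconfirm "macro /opt/tsm/weekend_deletes.mac"

# /opt/tsm/weekend_deletes.mac
delete filespace SOMENODE SomeFilespace
```

-itemcommit commits each macro command as it completes, and -noconfirm suppresses the interactive "Do you wish to proceed?" prompt so the job can run unattended.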

Roger Deschner
University of Illinois at Chicago
"I have not lost my mind -- it is backed up on tape somewhere."

From: Sasa Drnjevic 
Sent: Monday, February 25, 2019 15:25
Subject: Re: Bottomless pit

FYI,
same here...but my range/ratio was:

~2 mil occ to 25 mil deleted objects...

Never solved the mystery... gave up :->


--
Sasa Drnjevic
www.srce.unizg.hr/en/




On 2019-02-25 20:05, Zoltan Forray wrote:
> Here is a new one...
>
> We turned off backing up SystemState last week.  Now I am going
> through and deleting the SystemState filespaces.
>
> Since I wanted to see how many objects would be deleted, I did a "Q 
> OCCUPANCY" and preserved the file count numbers for all Windows nodes 
> on this server.
>
> For 4-nodes, the delete of their systemstate filespaces has been 
> running for 5-hours. A "Q PROC" shows:
>
> 2019-02-25 08:52:05 Deleting file space 
> ORION-POLL-WEST\SystemState\NULL\System State\SystemState (fsId=1) 
> (backup
> data) for node ORION-POLL-WEST: *105,511,859 objects deleted*.
>
> Considering the occupancy for this node was *~5-Million objects*, how 
> has it deleted *105-Million* objects (and counting).  The other 
> 3-nodes in question are also up to *>100-Million objects deleted* and 
> none of them had more than *6M objects* in occupancy?
>
> At this rate, the deleting objects count for 4-nodes systemstate will 
> exceed 50% of the total occupancy objects on this server that houses 
> the backups for* 263-nodes*?
>
> I vaguely remember some bug/APAR about systemstate backups being 
> large/slow/causing performance problems with expiration but these 
> nodes client levels are fairly current (8.1.0.2 - staying below the 
> 8.1.2/SSL/TLS enforcement levels) and the ISP server is 7.1.7.400.  
> All of these are Windows 2016, if that matters.
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon 
> Monitor Administrator VMware Administrator Virginia Commonwealth 
> University UCC/Office of Technology Services www.ucc.vcu.edu 
> zfor...@vcu.edu - 804-828-4807 Don't be a phishing victim - VCU and 
> other reputable organizations will never use email to request that you 
> reply with your password, social security number or confidential 
> personal information. For more details visit http://phishing.vcu.edu/
>



TSMWorks and ART product

2019-02-24 Thread Harris, Steven
Hi all

I'm trying to recommend an automated restore testing tool like TSMWorks' ART to 
management.

TSMWorks web site appears to have been hijacked.
Lindsay Morris' Linkedin page seems to be dead. Twitter feed too.
Nothing from TSMWorks by Google search since about 2009

Does anyone know what has happened to Lindsay, the company or the product?  If 
it's dead are there any alternatives?  This company uses Netbackup and 
Networker as well as Spectrum Protect.

While I was looking I see that ADSM.ORG has no adsm-l posts since April 18. Is 
that dead too?

Cheers

Steve

TSM Admin/Consultant
Canberra Australia





Re: how to exclude vm's using sp for ve

2019-02-19 Thread Harris, Steven
Steve

I think you have to specify what you want to back up in the DOMAIN.VMFULL with 
one of the available options before excluding anything.
Then leave off the * in your backup vm command.
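Something like this in BCLAB.opt, for example (the cluster name is illustrative; the guest names are from your exclude list):

```
* Select everything on the lab cluster first, then subtract by name/pattern
DOMAIN.VMFULL "VMHOSTCLUSTER=labcluster;-VM=guest*,local*,nb_diskexpandtest"
```

Then run `dsmc backup vm -optfile=BCLAB.opt ...` without the `*`, so the domain statement rather than the wildcard decides what gets backed up.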

Regards

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Schaub, Steve
Sent: Wednesday, 20 February 2019 7:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] how to exclude vm's using sp for ve

8.1.2.0 client
I have a batch script that backs up all VMs in our lab; now I have a list of 
VMs I need to exclude.

Here is the backup command:
dsmc.exe backup vm * -VmBackupType=FullVm -Mode=IfIncremental -VmMaxParallel=10 
-VmMaxBackupSession=20 -VmLimitPerHost=10 -VmLimitPerDatastore=10 
-optfile=BCLAB.opt  > %DetailLog%

in BCLAB.opt I put this statement:
Domain.VMFull  
-vm=guest*,local*,nb_diskexpandtest,rlt00015,tlpvs001,tlsf001,tlsql003,tlsql012,tlsql200,tlsql201,tlxddc001

But the command is still attempting backups of these.

What am I missing?

Thanks,
-steve

--
Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm



Re: Any body knows how to backup SMB share ?

2019-02-17 Thread Harris, Steven
It's not hard, Genet.

Did you search the archives first?

The trick is to run the scheduler under a Windows domain user ID rather than 
the usual System ID.  You can either permanently map the share, or use a 
preschedule/postschedule command pair to map it as you need it.
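The second approach looks roughly like this in dsm.opt (the share path and drive letter are illustrative):

```
* dsm.opt fragment -- map the share for the backup, drop it afterwards
PRESchedulecmd  "net use Z: \\fileserver\share /persistent:no"
POSTSchedulecmd "net use Z: /delete /yes"
DOMain Z:
```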

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Genet 
Begashaw
Sent: Saturday, 16 February 2019 2:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Any body knows how to backup SMB share ?

Thank you

Genet Begashaw



Re: backing up multiple VMs with standard scheduling

2019-01-08 Thread Harris, Steven
Hi Gary 

We schedule manually here.

We have one VBS for each of the separate backup streams and edit the 
domain.vmfull manually to add VMs.  The problem with this approach is that it 
is manual, and we end up with the same large VMs running backups all day.  We 
also have a mixture: some domain.vmfull statements specify only the VMs to 
back up, while others specify the cluster to back up and exclude what we 
don't want.  Messy and confusing, but I inherited this.

I did recently need to rerun a stack of VMs that were not getting backed up 
because the backup VM process was dying silently every night, and I wanted to 
control the execution order so the small ones ran first.  I did this by 
generating backup vm commands for 10 VMs at a time and putting all of them into 
a macro that I then ran in the conventional way.  Macros work on the client 
too!
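That generation step can be sketched like this (the list file and guest names are illustrative stand-ins):

```shell
# Turn a list of guest names (one per line, smallest first) into a client
# macro of BACKUP VM commands, 10 guests per command. Names are made up.
printf 'vm%02d\n' $(seq 1 23) > vmlist.txt   # stand-in for the real list

split -l 10 vmlist.txt chunk_
for f in chunk_*; do
    printf 'backup vm %s\n' "$(paste -sd, "$f")"
done > backupvms.mac
rm -f chunk_*

cat backupvms.mac
```

Running `dsmc macro backupvms.mac` then processes the commands in order, so if the list is sorted smallest-first, the small VMs go first.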

You would probably be better off in the long run using VM tagging, if you are 
on a vSphere and SP level that supports it.  I'm trying to move that way now 
that the underlying VMWare infrastructure is at 6.0.  You can manipulate the 
tags from the TSM for VE command line with set commands, so that should be 
reasonably reader friendly. VM tagging has the ability to spread the VMDKs for 
a large VM across multiple VBSs so you can deal with those multi-terabyte 
monsters.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra, Australia

PS: I am in awe that you can function at all without vision in such a complex 
environment.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Schneider, Jim
Sent: Wednesday, 9 January 2019 7:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backing up multiple VMs with standard scheduling

My VM backup schedule executes a .bat file on the VM proxy server.  The file 
has one or more 'dsmc backup vm host1,host2,host3.etc' commands.

Jim

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Tuesday, January 8, 2019 2:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backing up multiple VMs with standard scheduling

Tsm client 8.1.4, running on RHEL 6.9 VMs. TSM server 7.1.7.1 on RHEL 6.9.
We do not use the VMWare plugin to manage our TSM for VE backup schedules. 
Schedules backup by host using standard tsm scheduling.
Did this because the vmware client is not very screen reader friendly for a 
blind user.

How can I back up more than one VM, possibly on disparate hosts, in a single 
schedule?
i.e. by specifying multiple VMs in the "object" parm of a standard schedule?

Any help appreciated.

**
Information contained in this e-mail message and in any attachments thereto is 
confidential. If you are not the intended recipient, please destroy this 
message, delete any copies held on your systems, notify the sender immediately, 
and refrain from using or disclosing all or any part of its content to any 
other person.



Re: user maintenance

2018-10-06 Thread Harris, Steven
Not the scripts, the schedules need to be updated.

select schedule_name from admin_schedules where chg_admin='USER_X' and 
active='YES'

To fix just update the schedule by setting active yes again

Select 'upd sched '||schedule_name||' t=a active=yes' from admin_schedules 
where chg_admin='USER_X' and active='YES'

Save the output, edit as necessary and then run it.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Efim
Sent: Saturday, 6 October 2018 4:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] user maintenance

Hi,
There will be problems with the scripts because they run with the rights of the 
administrator who created them.
All affected scripts must be updated by an existing administrator (e.g. by 
adding a description or a comment line).
Efim

> On 5 Oct 2018, at 18:49, Solomon Miler  wrote:
> 
> User_X created various entities on tsm (schedules, associations, scripts  etc 
> ).
> 
> User_X  leaves the company,
> 
> User_X  needs to be removed  from tsm.   (remove admin User_X)
> 
> Is there a concern / any side-effects you can think of.
> 
> Thank you.
> 
> 
> Solomon Miler
> Senior Data Protection Engineer, VP
> --
> Desk: 201.577.313
> Cell  :  917.287.2332
> smi...@jri-america.com
> 
> 
> ***
> 
> This email and any files transmitted with it are confidential and 
> intended solely for the use of the individual or entity to whom they 
> are addressed. If you have received this email in error please notify 
> the sender by replying to this email and then delete it from your 
> system.
> 
> No reliance may be placed upon this email without written confirmation 
> of its contents and any liability arising from such reliance without 
> written confirmation is hereby excluded.
> 
> JRI America
> 
> ***
> 



VM backup file names and retonly

2018-10-03 Thread Harris, Steven
Hi All

I'd like to start using DECOM VM to manage our obsolete VM backups.  Our 
current practice is to rename the filespace to 
\\VMFULL-_DECOM and let it sit for 
a year before deleting manually.
In order to accomplish that with DECOM VM I would need first to alter the 
retentions from the current unlimited,unlimited,32,32 to  
unlimited,unlimited,32,366
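For reference, that change would be something like the following (domain, policy set, and management class names are illustrative; NOLIMIT is the keyword for "unlimited"):

```
upd copygroup VMDOMAIN VMPOLICY VMCLASS standard type=backup verexists=nolimit verdeleted=nolimit retextra=32 retonly=366
activate policyset VMDOMAIN VMPOLICY
```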

Now if I were to do that for a copygroup used by Windows or Unix BA backups, 
the space and database usage would blow out because of all the temporary files 
that those OSes spin off by the hundred, which would then hang about for a 
year.  I'm reasonably sure that VM backups (VMware) name their files in a 
manner that is invariant, and the number of files should normally only ever 
increase, so increasing retonly for VMs will not have this issue.

Am I correct?  Is anyone else using this method successfully?  Any other 
gotchas I might not have thought of?

Thanks

Steve
Steven Harris
TSM Admin/Consultant
Canberra Australia





Re: Proxy/asnodename restore and strange Registry entries?

2018-08-12 Thread Harris, Steven
Zoltan, 

As much as I hate Powershell it does have its uses

One thing it can do is create an encrypted credential token that can be used 
to authenticate.  That token can be applied when you run a command, allowing 
the use of a restricted ID without providing the password in the clear.

Invoking dsmadmc from powershell is a whole other world of pain, but just 
starting dsm for your user may not be so difficult.

Note I have looked into this several times, but never actually implemented it. 
My use-case was to save my password for a dsmadmc invocation.  
https://blog.kloud.com.au/2016/04/21/using-saved-credentials-securely-in-powershell-scripts/
 may be a good place to start.
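A minimal sketch of the idea (untested here, as noted above; the admin ID and file path are illustrative):

```powershell
# One-time: save an encrypted copy of the password (DPAPI, tied to this
# user on this machine)
Read-Host 'TSM admin password' -AsSecureString |
    ConvertFrom-SecureString | Set-Content C:\scripts\tsmadmin.cred

# In the script: rebuild a credential and hand the plain text to dsmadmc
$sec  = Get-Content C:\scripts\tsmadmin.cred | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential('admin1', $sec)
& dsmadmc -id=$($cred.UserName) -password=$($cred.GetNetworkCredential().Password) 'query session'
```

Because the encryption is DPAPI-based, the .cred file is only usable by the same Windows account on the same machine, which is the point.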

Regards

Steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Saturday, 11 August 2018 3:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Proxy/asnodename restore and strange Registry entries?

Thanks for the suggestion Steven.

After much machinations and struggling against the admin lockdowns, we were 
able to get it to work, but in a totally illegal way.

1.  We had to access the AD account/password that is used for backing up
*ALL* of the CIFS/DFS nodes.  We certainly can not give this information out 
and audit/ISO certainly would not allow it.
2.  We had to add the account (#1) to the Backup Operators group on the desktop 
used for the ISP client restore process (very few people are allowed to do this 
and nobody has access to desktop/local administrator
accounts)

As I mentioned, the backups we need to access via proxy are run via a special 
AD account (identified in the scheduler service). So looking for suggestions on 
how to do this a different way, if possible.

I do have a question about the proxy process.  To test this, I created a dummy 
ISP node so the desktop client can sign-in to it to be able to use "Access 
another node".  My question is, since I setup the proxy target (node that has 
the data/backups) and proxy agent (dummy node), on the ISP server, do I still 
need to go to the target node and give the agent access?
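For context, the server-side setup described above amounts to (node names illustrative):

```
grant proxynode target=CIFS-NODE agent=DESKTOP-NODE
query proxynode
```

and on the desktop the restore session is started with `dsmc -asnodename=CIFS-NODE`.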

On Mon, Aug 6, 2018 at 7:27 PM Harris, Steven < 
steven.har...@btfinancialgroup.com> wrote:

> Runas?
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Zoltan Forray
> Sent: Tuesday, 7 August 2018 5:57 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Proxy/asnodename restore and strange Registry 
> entries?
>
> I have another issue with the Proxy/asnodename process I hope someone 
> can direct me to the answer since I am still kinda lost in this Proxy process.
>
> To use the Proxy process, we had to install the standard Windows 
> GUI/client
> (8.1.0.2) on a desktop. I created a new node and use the proxy grant 
> process to give it agent authority over the other nodes we want to 
> restore from/for.  Also made the proxy authority the other way - just in case.
>
> Now, every time we try to restore a file, we get a "Permissions Denied"
authority issue.  We think we know why but don't know how to get
> around it.  In the current setup, the Windows services that perform 
> the backups and restores (via WebClient) use a specific AD account 
> that has the right authority.
>
> So how do you associate a specific AD account with a GUI 
> session/client so it has the proper rights to do restores?
>
> On Sat, Aug 4, 2018 at 7:50 AM Zoltan Forray  wrote:
>
> > I guess I should have mentioned that. Windows 10 Enterprise desktop 
> > is what I am using to access the proxy node of a Windows 2016 Server backup.
> >
> > Zoltan Forray
> > IBM Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator 
> > VMware Administrator Xymon Administrator VCU Computer Center 
> > zfor...@vcu.edu - 804-828-4807 Don't be a phishing victim - VCU and 
> > other reputable organizations will never use email to request that 
> > you reply with your password, social security number or confidential 
> > personal information. For more details visit 
> > https://phishing.vcu.edu
> >
> > On Fri, Aug 3, 2018, 9:36 AM Robert Talda  wrote:
> >
> >> Zoltan:
> >>  Willing to test here - which platform (Windows, Linux x86, etc) 
> >> are you running the client on?
> >>
> >> Robert Talda
> >> EZ-Backup Systems Engineer
> >> Cornell University
> >> +1 607-255-8280
> >> r...@cornell.edu
> >>
> >>
> >> > On Aug 2, 2018, at 10:35 AM, Zoltan Forray  wrote:
> >> >
> >> > We are working through trying to move to using Proxy/asnodename
> >> > processes to replace the html interface for our ISILON backups and
> >> > are seeing some strangeness in the 8.1.0.2 GUI

Re: Proxy/asnodename restore and strange Registry entries?

2018-08-06 Thread Harris, Steven
Runas?
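Steve's one-word suggestion refers to the Windows runas utility: launch the BA
client GUI under the AD account that already has the backup/restore rights. A
hedged sketch (domain, account name, and install path are placeholders):

```
runas /user:MYDOMAIN\backup-svc "C:\Program Files\Tivoli\TSM\baclient\dsm.exe"
```

Windows prompts for that account's password, and the GUI then runs with its
permissions, so restores happen under the same identity the scheduler services
use for backups.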

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Tuesday, 7 August 2018 5:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Proxy/asnodename restore and strange Registry entries?

I have another issue with the Proxy/asnodename process I hope someone can 
direct me to the answer since I am still kinda lost in this Proxy process.

To use the Proxy process, we had to install the standard Windows GUI/client
(8.1.0.2) on a desktop. I created a new node and used the proxy grant process
to give it agent authority over the other nodes we want to restore from/for.  I
also granted proxy authority the other way - just in case.

Now, every time we try to restore a file, we get a "Permissions Denied"
authority issue.  We think we know why but don't know how to get around it.  In
the current setup, the Windows services that perform the backups and restores
(via WebClient) use a specific AD account that has the right authority.

So how do you associate a specific AD account with a GUI session/client so it 
has the proper rights to do restores?

On Sat, Aug 4, 2018 at 7:50 AM Zoltan Forray  wrote:

> I guess I should have mentioned that. Windows 10 Enterprise desktop is 
> what I am using to access the proxy node of a Windows 2016 Server backup.
>
> Zoltan Forray
> IBM Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator 
> VMware Administrator Xymon Administrator VCU Computer Center 
> zfor...@vcu.edu - 804-828-4807 Don't be a phishing victim - VCU and 
> other reputable organizations will never use email to request that you 
> reply with your password, social security number or confidential 
> personal information. For more details visit https://phishing.vcu.edu
>
> On Fri, Aug 3, 2018, 9:36 AM Robert Talda  wrote:
>
>> Zoltan:
>>  Willing to test here - which platform (Windows, Linux x86, etc) are 
>> you running the client on?
>>
>> Robert Talda
>> EZ-Backup Systems Engineer
>> Cornell University
>> +1 607-255-8280
>> r...@cornell.edu
>>
>>
>> > On Aug 2, 2018, at 10:35 AM, Zoltan Forray  wrote:
>> >
>> > We are working through trying to move to using Proxy/asnodename
>> processes
>> > to replace the html interface for our ISILON backups and are seeing 
>> > some strangeness in the 8.1.0.2 GUI
>> >
>> > When I bring up the GUI and use the process to access another node,
>> > expanding the "File Level" section shows six "Registry" entries.
>> > Besides there being six of them, this makes no sense since the backups
>> > are ISILON file level - not OS level.  There aren't any
>> > systemstate/registry level backups.
>> >
>> > What gives?
>> >
>>
>

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be 
a phishing victim - VCU and other reputable organizations will never use email 
to request that you reply with your password, social security number or 
confidential personal information. For more details visit 
http://phishing.vcu.edu/



Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-19 Thread Harris, Steven
Is there no journaling/logging service on these Isilons that could be used to
maintain a list of changed files and hand-roll a dsmc-selective-with-file-list
process similar to what GPFS uses?
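The hand-rolled process Steve sketches would feed a change list straight to the
BA client's -filelist option, skipping the incremental scan entirely. A hedged
sketch (the change-list export step is a placeholder for whatever
changelist/snapshot-diff tooling the filer provides; the node name is
hypothetical):

```
# export the list of files changed since the last backup
# (placeholder - substitute the filer's changelist/snapshot-diff tool)
export_changed_files > /tmp/changed.files

# back up exactly those files, with no filesystem scan
dsmc selective -filelist=/tmp/changed.files -asnodename=ISILON_NODE
```

One caveat, echoed elsewhere in the thread: selective backups never expire
deleted files, so a periodic full incremental is still needed.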

Cheers

Steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Cowen
Sent: Friday, 20 July 2018 6:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for suggestions to deal with large backups not 
completing in 24-hours

Canary! I like it!
Richard

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Skylar 
Thompson
Sent: Thursday, July 19, 2018 10:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for suggestions to deal with large backups not 
completing in 24-hours

There are a couple of ways we've gotten around this problem:

1. For NFS backups, we don't let TSM do partial incremental backups, even if we 
have the filesystem split up. Instead, we mount sub-directories of the 
filesystem root on our proxy nodes. This has the double advantage of letting us 
break up the filesystem into multiple TSM filespaces (giving us directory-level 
backup status reporting, and parallelism in TSM when we have 
COLLOCG=FILESPACE), and also parallelism at the NFS level when there are 
multiple NFS targets we can talk to (as in the case with Isilon).
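In configuration terms, approach 1 amounts to mounting each sub-directory of
the filer separately on the proxy node so each becomes its own TSM filespace. A
hedged sketch (filer name and paths are hypothetical):

```
# /etc/fstab on the proxy node: each sub-directory is a separate NFS mount
filer1:/ifs/data/projA  /backup/projA  nfs  ro,hard  0 0
filer1:/ifs/data/projB  /backup/projB  nfs  ro,hard  0 0
```

```
* dsm.sys stanza: each mount point is backed up as its own filespace
DOMAIN /backup/projA
DOMAIN /backup/projB
```

With COLLOCG=FILESPACE on the server, each filespace can then be collocated and
reported on independently, as described above.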

2. For GPFS backups, in some cases we can setup independent filesets and let 
mmbackup process each as a separate filesystem, though we have some instances 
where the end users want an entire GPFS filesystem to have one inode space so 
they can do atomic moves as renames. In either case, though, mmbackup does its 
own "incremental" backups with filelists passed to "dsmc selective", which 
don't update the last-backup time on the TSM filespace. Our workaround has been 
to run mmbackup via a preschedule command, and have the actual TSM incremental 
backup be of an empty directory (I call them canary directories in our 
documentation) that's set as a virtual mountpoint. dsmc will only run the 
backup portion of its scheduled task if the preschedule command succeeds, so if 
mmbackup fails, the canary never gets backed up, and will raise an alert.
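The canary arrangement in approach 2 might look roughly like this in dsm.sys
(the paths and wrapper script name are hypothetical):

```
* the canary: an empty directory exposed to TSM as its own virtual mount point
VIRTUALMOUNTPOINT /gpfs/fs1/.canary
DOMAIN            /gpfs/fs1/.canary
* mmbackup runs first; dsmc only backs up the canary if this exits zero
PRESCHEDULECMD    "/usr/local/bin/run_mmbackup.sh /gpfs/fs1"
```

If mmbackup fails, the scheduled incremental never runs, the canary's
last-backup date goes stale, and monitoring raises an alert.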

On Wed, Jul 18, 2018 at 03:07:16PM +0200, Lars Henningsen wrote:
> @All
> 
> possibly the biggest issue when backing up massive file systems in parallel
> with multiple dsmc processes is expiration. Once you back up a directory with
> "subdir no", a no longer existing directory object on that level is
> expired properly and becomes inactive. However everything underneath it
> remains active and doesn't expire (ever) unless you run a "full"
> incremental on the level above (with "subdir yes") - and that kind of
> defeats the purpose of parallelisation. Other pitfalls include avoiding
> swapping, keeping log files consistent (dsmc doesn't do thread awareness
> when logging - it assumes being alone), handling the local dedup cache,
> updating backup timestamps for a file space on the server, distributing load
> evenly across multiple nodes on a scale-out filer, backing up from snapshots,
> chunking file systems up into even parts automatically so you don't end up
> with lots of small jobs and one big one, dynamically distributing load across
> multiple "proxies" if one isn't enough, handling exceptions, handling
> directories with characters you can't parse to dsmc via the command line,
> consolidating results in a single, comprehensible overview similar to the
> summary of a regular incremental, being able to do it all in reverse for a
> massively parallel restore... the list is quite long.
> 
> We developed MAGS (as mentioned by Del) to cope with all that - and more. I 
> can only recommend trying it out for free.
> 
> Regards
> 
> Lars Henningsen
> General Storage

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


ProtecTIER volume shortage.

2018-07-10 Thread Harris, Steven
Hi Guys

I'm looking for some help with managing a ProtecTIER VTL.

A bit of background: we have two sites, TSM 7.1.7.300, with a ProtecTIER
gateway at each backed by 1 PB of storage.
There are two main TSM instances at each site, one for VM snaps and one for BA
Client backups.  Offsite replication is by Global Mirror of the disk and
ProtecTIER replication. No TSM node replication. Each instance has its own
virtual library.

We came close to filling up one of the ProtecTIERs a while back, getting down
to 5% free space when the limit is 10%.  The situation has since improved by
moving most of the BA client load onto a Data Domain and unloading long-term
backups to real tape, so we are now about 25% free on the ProtecTIER.

Before the space issues, we were allocating ProtecTIER carts at a 100GB
maximum.  The ProtecTIER will allocate up to that limit but will truncate a
cart sooner if it runs out of space.
When we got short on space we allocated additional volumes as we needed them.
While we still specified 100GB, most are smaller than that, some as small as
30GB.
What has happened now is that even though ProtecTIER free space has improved,
the size of the volumes has not; as TSM reclaims volumes they are being
truncated at a smaller size than they were.  Thus we are running out of
volumes again, and reclamation is making things worse rather than better.

A complicating factor is that if a volume is unavailable when needed by the
VBS datamover, which uses LAN-free to the VTL, the VM backup fails, so running
lean is not really an option.

Questions:
The space saving has come from a different virtual library to the one where
the space is needed.  Is this significant?
Has anyone come across this and developed a management strategy?  All I can
think of is moving empty carts to the "shelf" and deleting them from there.
If I do this from the same library with the space shortage, it will make
things worse in terms of number-of-volumes.
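The shelf manoeuvre Steve describes can be sketched with standard admin
commands: checking a virtual cart out with REMOVE=YES sends it to the
ProtecTIER shelf, after which it can be deleted on the ProtecTIER side
(library and volume names are hypothetical):

```
/* list volumes in the constrained library; scratch status shows in output */
query libvolume VTL_LIB1
/* check an empty cart out to the ProtecTIER shelf; no physical I/O on a VTL */
checkout libvolume VTL_LIB1 A00001L4 remove=yes checklabel=no
```

Whether the freed space then becomes usable by the other virtual library
depends on ProtecTIER's own space management, which is exactly the open
question in the post.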

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia








Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-05 Thread Harris, Steven
Zoltan

I kind of agree with Ung Yi

What is the purpose of your TSM backups?  DR?  Long-term retention for
auditability/Sarbanes-Oxley/other regulation?

It may well be that a daily or even more frequent snapshot regime might be the 
best way to get back that recently lost/deleted/corrupted file.
Use a TSM backup of a weekly point-of-consistency snapshot as your long term 
strategy.

Of course a better option would be an embedded TSM client on the Isilon itself, 
but the commercial realities are that will never happen.

Cheers

Steve
Steven Harris
TSM Admin
Canberra Australia 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Yi, Ung
Sent: Friday, 6 July 2018 6:36 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for suggestions to deal with large backups not 
completing in 24-hours

Hello,
I don't know much about Isilon.
There might be a SAN-level snapshot backup option for Isilon.

For our Data Domain, we replicate from the main site to the DR site, then take
a snap at our DR site every night. Each snap is considered a backup.

Thank you.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, July 05, 2018 2:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Looking for suggestions to deal with large backups not completing in 
24-hours

As I have mentioned in the past, we have gone through large migrations to DFS 
based storage on EMC ISILON hardware.  As you may recall, we backup these DFS 
mounts (about 90 at last count) using multiple Windows servers that run 
multiple ISP nodes (about 30-each) and they access each DFS mount/filesystem 
via -object=\\rams.adp.vcu.edu\departmentname.

This has led to lots of performance issues with backups, and some departments
are now complaining that their backups are running into multiple days in some
cases.

One such case is a department with 2 nodes, each with over 30 million objects.
In the past, their backups finished quicker since they were accessed via
dedicated servers and could use journaling to reduce the scan times.  Unless
things have changed, I believe journaling is not an option due to how the
files are accessed.

FWIW, average backups are usually <50k files and <200GB once it finished 
scanning.

Also, the idea of HSM/SPACEMANAGEMENT has reared its ugly head, since many of
these objects haven't been accessed in many years. But as I understand it,
that won't work either given our current configuration.

Given the current DFS configuration (previously CIFS), what can we do to 
improve backup performance?

So, any-and-all ideas are up for discussion.  There is even discussion on 
replacing ISP/TSM due to these issues/limitations.

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never
use email to request that you reply with your password, social security number
or confidential personal information. For more details visit
http://phishing.vcu.edu/



Re: How to set default include/exclude rules on server but allow clients to override?

2018-06-13 Thread Harris, Steven
Genet

Personally I would like all possible client options to be set from the server.  
The client option set mechanism is limited in that only one can be specified 
and they cannot, for example, include one set in another.
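For reference, the server-side mechanism being discussed is driven by a
handful of admin commands; a hedged sketch (set, node, and management class
names are hypothetical):

```
/* define the option set and a non-forced include rule */
define cloptset WIN_STD description="Standard Windows options"
define clientopt WIN_STD inclexcl "include.systemstate ALL MC-DEFAULT" force=no
/* attach the single allowed set to a node */
update node MYNODE cloptset=WIN_STD
```

With force=no the client's own dsm.opt entries take precedence over the
server-supplied rule; and as noted above, only one set can be attached per
node.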

At one time, I had a set of m4 macros (I'm from a unix background) that 
generated a client option set.  When a new, non-standard box needed something 
different, I created a new macro with the node's name, included the standard 
options and then added any more that were necessary, then generated a tailored 
option set for the node, also with the node's name.

It had the advantage that if a global change needed to be made, that was quite 
easy, and unix build tools could be leveraged. (I could have used make, but 
preferred Ruby's rake) 

Hope that helps... it might if you are unix savvy

Cheers

Steve

Steven Harris 
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Genet 
Begashaw
Sent: Wednesday, 13 June 2018 11:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] How to set default include/exclude rules on server but 
allow clients to override?

Thanks for your response. On the server side it was included via an option set
with force=no. I tried to override it on the client side in the dsm.opt file
with a different management class, as shown below:

include.systemstate ALL MC-mgmt-name

It still did not work.

Thank you





On Wed, Jun 13, 2018 at 9:37 AM Robert Talda  wrote:

> Genet:
>
>   For us it is simple: create a client option set with the list of
> include/exclude rules with FORCE=NO.  Complexity comes from number of
> client option sets needed to support different platforms (Win 7, Win 10,
> OS X, macOS, various Linux distros) and determining which client option
> set to assign to a given node
>
> YMMV,
> Bob
>
> Robert Talda
> EZ-Backup Systems Engineer
> Cornell University
> +1 607-255-8280
> r...@cornell.edu
>
>
> > On Jun 13, 2018, at 8:56 AM, Genet Begashaw  wrote:
> >
> > Thank you
> >
> > Genet Begashaw
>




Re: TSM Instance can't find its instance directory.

2018-05-25 Thread Harris, Steven
Hi Eric and list.

I found the problem.

The home directory of the user is /tsm/wtsmt01.  That is also the mount point 
of a file system

When mounted the permissions were fine 
drwxrwxrwx   11 wtsmt01  tsm   16384 25 May 16:55 wtsmt01

When unmounted however the permissions were not correct.
drwx--7 root system  256 10 Apr 14:37 wtsmt01

During startup, DB2 uses /bin/pwd, which is symlinked to /usr/bin/pwd

I ran your db2idrop successfully but the db2icrt failed.  The log said 

1. Verify the access permissions of the home directory of the instance
   owner by using the command "su".

   Example:

   /usr/bin/su  -c /bin/pwd

2. If the permissions do not include "r" and "x", unmount the drive,
   change the permissions, and remount the drive.

   Example:

   unmount /
   chown  /
   chmod  755 /
   mount /

   Related information:
   File permission requirements for the instance and database
   directories

#1 failed, so I performed #2 and that fixed the issue. 

The symptom I had seen before I started this debugging was that the instance
user got a permissions error executing pwd.

Many thanks

Steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
Eric van (ITOPT3) - KLM
Sent: Wednesday, 23 May 2018 10:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Instance can't find its instance directory.

Hi Steve,
Did you check the content of the .profile file of the instance user on the DR 
host? It should be identical to the one on the prime site.
If that's OK you could try and drop the instance and recatalog the database on 
the DR node:

# /opt/tivoli/tsm/db2/instance/db2idrop 
# /opt/tivoli/tsm/db2/instance/db2icrt -a server -s ese -u  
$ db2 catalog db tsmdb1 on /tsm/wtsmt01

Maybe that helps?
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Steven
Sent: woensdag 23 mei 2018 1:36
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Instance can't find its instance directory.

Hi guys

I have a curly one here. TSM 7.1.1.300 on AIX  7.1

I'm preparing for a DR test.

The DR process is that the TSM instance directory, database, logs and disk/file 
storage pools reside on an IBM V840 flash device.  These are mirrored to the DR 
site using  remote copy.  The old way was to use the remote copy directly, the 
new way is to run a flashcopy backup of the remote-copied volumes and mount 
those. Old process or new I get the same error. It last worked November last 
year.

On my test instance,  when I start at prime site as the instance user, it finds 
the instance directory /tsm/wtsmt01.  It then opens the dsmserv.opt and 
initialization completes as usual.

When I start at the DR site, also as the instance user, with the same 
filesystems  mounted, and in the instance directory, it does not find the 
instance directory, but tries to use the root directory.  Obviously the 
dsmserv.opt file is not found and the process goes down hill from there.

Primary:
ANR0900I Processing options file /tsm/wtsmt01/dsmserv.opt.

DR:
ANR0905W Options file /dsmserv.opt not found.
ANR0010W Unable to open message catalog for language en_AU.8859-15. The default 
language message catalog will be used.
ANR7811I Using instance directory .

I have compared the two sets of environment variables and they are identical, 
apart from obvious things like PID and RANDOM.

According to the doc, the instance directory is supposed to be set from the DB2 
instance directory.  This is checked with db2 get dbm cfg | grep DFTDBPATH and 
is the same on both boxes.  I even explicitly set it again on the DR side to no 
effect.

TSM Support has been no help.

Can anyone point me in the right direction?  Even some idea of how to trace 
what's happening would be useful.

Thanks

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia


Re: TSM Instance can't find its instance directory.

2018-05-23 Thread Harris, Steven
Much appreciated,  Eric.

Yes I did look at the .profile and it is the same on both systems.
Dropping and recreating the instance is the way to go.  I can flashcopy the 
rootvg so its cheap to roll back if it all goes awry.
Will try tomorrow.

Thanks

Steve.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
Eric van (ITOPT3) - KLM
Sent: Wednesday, 23 May 2018 10:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Instance can't find its instance directory.

Hi Steve,
Did you check the content of the .profile file of the instance user on the DR 
host? It should be identical to the one on the prime site.
If that's OK you could try and drop the instance and recatalog the database on 
the DR node:

# /opt/tivoli/tsm/db2/instance/db2idrop 
# /opt/tivoli/tsm/db2/instance/db2icrt -a server -s ese -u  

$ db2 catalog db tsmdb1 on /tsm/wtsmt01

Maybe that helps?
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Steven
Sent: woensdag 23 mei 2018 1:36
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Instance can't find its instance directory.

Hi guys

I have a curly one here. TSM 7.1.1.300 on AIX  7.1

I'm preparing for a DR test.

The DR process is that the TSM instance directory, database, logs and disk/file 
storage pools reside on an IBM V840 flash device.  These are mirrored to the DR 
site using  remote copy.  The old way was to use the remote copy directly, the 
new way is to run a flashcopy backup of the remote-copied volumes and mount 
those. Old process or new I get the same error. It last worked November last 
year.

On my test instance,  when I start at prime site as the instance user, it finds 
the instance directory /tsm/wtsmt01.  It then opens the dsmserv.opt and 
initialization completes as usual.

When I start at the DR site, also as the instance user, with the same 
filesystems  mounted, and in the instance directory, it does not find the 
instance directory, but tries to use the root directory.  Obviously the 
dsmserv.opt file is not found and the process goes down hill from there.

Primary:
ANR0900I Processing options file /tsm/wtsmt01/dsmserv.opt.

DR:
ANR0905W Options file /dsmserv.opt not found.
ANR0010W Unable to open message catalog for language en_AU.8859-15. The default 
language message catalog will be used.
ANR7811I Using instance directory .

I have compared the two sets of environment variables and they are identical, 
apart from obvious things like PID and RANDOM.

According to the doc, the instance directory is supposed to be set from the DB2 
instance directory.  This is checked with db2 get dbm cfg | grep DFTDBPATH and 
is the same on both boxes.  I even explicitly set it again on the DR side to no 
effect.

TSM Support has been no help.

Can anyone point me in the right direction?  Even some idea of how to trace 
what's happening would be useful.

Thanks

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia


For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt. 
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines

TSM Instance can't find its instance directory.

2018-05-22 Thread Harris, Steven
Hi guys

I have a curly one here. TSM 7.1.1.300 on AIX  7.1

I'm preparing for a DR test.

The DR process is that the TSM instance directory, database, logs, and disk/file 
storage pools reside on an IBM V840 flash device.  These are mirrored to the DR 
site using remote copy.  The old way was to use the remote-copied volumes 
directly; the new way is to run a FlashCopy backup of the remote-copied volumes 
and mount those. Old process or new, I get the same error. It last worked in 
November last year.

On my test instance,  when I start at prime site as the instance user, it finds 
the instance directory /tsm/wtsmt01.  It then opens the dsmserv.opt and 
initialization completes as usual.

When I start at the DR site, also as the instance user, with the same 
filesystems mounted, and in the instance directory, it does not find the 
instance directory but tries to use the root directory.  Obviously the 
dsmserv.opt file is not found and the process goes downhill from there.

Primary:
ANR0900I Processing options file /tsm/wtsmt01/dsmserv.opt.

DR:
ANR0905W Options file /dsmserv.opt not found.
ANR0010W Unable to open message catalog for language en_AU.8859-15. The default
language message catalog will be used.
ANR7811I Using instance directory .

I have compared the two sets of environment variables and they are identical, 
apart from obvious things like PID and RANDOM.
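For what it's worth, that comparison can be repeated mechanically by diffing the two `env` dumps while ignoring variables that are expected to differ. A minimal Python sketch (the VOLATILE set is an assumption; extend it with anything site-specific):

```python
# Sketch: diff two `env`-style dumps, ignoring variables expected to differ.
# VOLATILE is an assumption -- add anything else that legitimately varies.
VOLATILE = {"PID", "PPID", "RANDOM", "SECONDS", "OLDPWD"}

def parse_env(text):
    """Parse `env`-style KEY=VALUE lines into a dict."""
    env = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
    return env

def env_diff(a_text, b_text):
    """Return {key: (value_in_a, value_in_b)} for meaningful differences."""
    a, b = parse_env(a_text), parse_env(b_text)
    return {
        key: (a.get(key), b.get(key))
        for key in (set(a) | set(b)) - VOLATILE
        if a.get(key) != b.get(key)
    }

primary = "DSMSERV_DIR=/tsm/wtsmt01\nPID=123\nLANG=en_AU.8859-15"
dr      = "DSMSERV_DIR=/tsm/wtsmt01\nPID=456\nLANG=en_AU.8859-15"
print(env_diff(primary, dr))  # -> {} : nothing meaningful differs
```

(`DSMSERV_DIR` here is just an illustrative variable name, not a claim about what dsmserv actually reads.)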

According to the doc, the instance directory is supposed to be set from the DB2 
instance directory.  This is checked with db2 get dbm cfg | grep DFTDBPATH and 
is the same on both boxes.  I even explicitly set it again on the DR side to no 
effect.

TSM Support has been no help.

Can anyone point me in the right direction?  Even some idea of how to trace 
what's happening would be useful.

Thanks

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia



Re: ISILON storage/FILE DEVCLASS performance issues

2018-05-13 Thread Harris, Steven
Zoltan

I have a similar issue TSM 7.1.1.300 AIX -> Data Domain.  Have dual 10Gb links, 
but can only get ~4000 writes/sec and 120MB/sec throughput. AIX only supports 
NFS3, and as others have pointed out in this forum recently, the stack does not 
have a good reputation.

I'm finding that the heavy NFS load has other knock-on effects; e.g. TSMManager 
keeps reporting the instance offline when it's very busy, as it gets a network 
error on some of its regular queries, though these work OK when load is light.  
I'm also getting a lot of severed/reconnected sessions.  CPU/IO/paging are not a 
problem.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Saturday, 12 May 2018 1:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] ISILON storage/FILE DEVCLASS performance issues

Folks,

ISP 7.1.7.300 on RHEL 6   10G connectivity

We need some guidance on trying to figure out why ISP/TSM write performance to 
ISILON storage (via FILE DEVCLASS) is so horrible.

We recently attached 200TB of ISILON storage to this server so we could empty 
the 36TB of onboard disk drives to move this server to new hardware.

However, per my OS and SAN folks, we are only seeing 1Gb/s levels of data 
movement from the ISP server.  Doing a regular file copy to this same storage 
peaks at 10Gb/s speeds.

So what, if anything, are we doing wrong when it comes to configuring the 
storage for ISP to use?  Are there some secret controls/settings/options to 
tell it to use the storage at max-speeds?

We tried changing the Est/Max capacity thinking larger files would reduce the 
overhead of allocating new pieces constantly.  Changed the Mount Limit to a 
bigger number.  Nothing has helped.

The only thing using the storage right now is migration from the original disk 
stgpool.



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator | VMware Administrator
Virginia Commonwealth University, UCC/Office of Technology Services
www.ucc.vcu.edu | zfor...@vcu.edu | 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://phishing.vcu.edu/




Re: Can we disable client journaling with Server CLOPT ?

2018-05-09 Thread Harris, Steven
Luc,

There is a flag you can set that stops the database from being invalidated by a 
stop of the Journal service.

I have usually set this because when journaling is required, an arbitrary full 
scan is usually not wanted.

Regards

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Michaud, Luc [Analyste principal - environnement AIX]
Sent: Thursday, 10 May 2018 1:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Can we disable client journaling with Server CLOPT ?

Hi Marc,



Thanks for the feedback.



I forgot to state that the Windows admins have historically put in place a 
batch file to take backups.

It's been forked into more than 50 versions since.

None of them forwards any of its command-line arguments to the actual "dsmc i" 
command.



I've since been looking at the "Journal-based backup" section of the BA client 
install and users guide.

It lists about 7 possible ways to invalidate the journal db.

Most are not applicable to steady state (changed node, fs, server)

However, I see 2 ways that would work in my universe :

1.  "A policy change occurs (new policy set activation)"

2.  "The journal service is not running"

3.  The journal service is stopped or started for any reason, even if it is 
restarted because the system is rebooted.

I think I'll focus on the 2nd way.

Looking into using DSMCAD to run "dsmcutil stop /name:"TSM Journal Service"" 
and "dsmcutil start /name:"TSM Journal Service"" as one-shots.



I'll keep you guys posted.



Still open to other approaches.



Regards,



Luc



-Message d'origine-
De : ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] De la part de Marc 
Lanteigne Envoyé : 9 mai 2018 10:29 À : ADSM-L@VM.MARIST.EDU Objet : Re: 
[ADSM-L] Can we disable client journaling with Server CLOPT ?



Hi Luc,



You should be able to define a client schedule with "option=-nojournal"



So you would have two schedules: the current one, plus one that periodically 
does -nojournal.



-

Thanks,

Marc...



Marc Lanteigne

Spectrum Protect Specialist AVP / SRT

416.478.0233 | marclantei...@ca.ibm.com

Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern



Follow me on: Twitter, developerWorks, LinkedIn





-Original Message-

From: Michaud, Luc [Analyste principal - environnement AIX] 
[mailto:luc.micha...@stm.info]

Sent: Wednesday, May 9, 2018 11:17 AM

To: ADSM-L@VM.MARIST.EDU

Subject: [ADSM-L] Can we disable client journaling with Server CLOPT ?



Greetings everyone,



Our shop is TSM 717000 on AIX, with lots of Win BA clients v7164 with 
journaling enabled.



In a restore test of a Windows node, we found that a file that was changed in 
2017 was never backed up.

We thus kept the previous iteration (circa 2016) as active in the actual node 
data.



We were led to journaling FAQ at

http://www-01.ibm.com/support/docview.wss?uid=swg21681523

So, now we are looking to implement the recommended "periodic FULL INCREMENTAL 
BACKUPS".



We have no actual access to the Windows clients, except for DSMCAD...

Thus I thought of doing a periodic rotation of node clopt on the server-side, 
however it seems that the NOJOURNAL option is only valid on the dsmc 
command-line.



How have you guys gone about implementing this ?



Luc



Re: dsmc retrieve syntax for a remote node?

2018-04-12 Thread Harris, Steven
Hi Jim

\\thatnode\c$\*.*

Retrieves only files in the C$ root directory with a period in the name.

If you want everything under c$ I'd use

ret \\thatnode\c$\*  -fromnode=thatnode -subdir=y

You will also have to specify a destination, as without one dsmc will try to put 
the data back into \\thatnode

I find it interesting that you use -fromnode.  I had sort of forgotten it 
exists.  I normally use -virtualn and if necessary change the node password.
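The `*.*` versus `*` distinction is ordinary shell-style globbing: `*.*` only matches names containing a period, while `*` matches everything. A small Python illustration of the same matching behaviour (fnmatch is an analogy here, not the dsmc implementation):

```python
# Illustration: why "*.*" misses extension-less names while "*" does not.
from fnmatch import fnmatch

names = ["pagefile.sys", "boot.ini", "hiberfil", "Windows"]

with_dot   = [n for n in names if fnmatch(n, "*.*")]
everything = [n for n in names if fnmatch(n, "*")]

print(with_dot)    # -> ['pagefile.sys', 'boot.ini']
print(everything)  # -> all four names
```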

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Davis, 
Jim J - (jjdavis)
Sent: Friday, 13 April 2018 8:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] dsmc retrieve syntax for a remote node?

I'm trying to retrieve some old archived data with dsmc from thatnode to 
thisnode, and I seem to have forgotten the strange magic involved in correctly 
specifying the paths.  So on thisnode,

dsmc> q filespace -fromnode=thatnode

shows me things like

\\thatnode\c$
\\thatnode\e$
\\thatnode\f$

but something like

dsmc> ret \\thatnode\c$\*.* -fromnode=thatnode

and various permutations just returns a message about nothing being archived by 
that path name.  What am I missing?  



Re: NTNamedStreamRead() openNamedStream errors

2018-04-11 Thread Harris, Steven
Zoltan

I think the problem is the colon in the file name.

Windows has the capability to write multiple streams to the same file.  The 
streams are differentiated by a colon and the stream name.

At least that’s the gist of it.  See 
https://www.owasp.org/index.php/Windows_::DATA_alternate_data_stream
Often used by hackers to hide stuff apparently.
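For log lines like the one above, a hedged helper that separates the file path from the stream name (this is just string handling on the last path component; real ADS handling is more involved, e.g. streams can also carry a type suffix such as `:$DATA`):

```python
# Sketch: split a Windows path into (file_path, stream_name), where an NTFS
# alternate data stream is indicated by a colon in the LAST path component.
# Drive-letter colons ("S:\...") are untouched: they sit in an earlier
# component, not the final one.
import ntpath

def split_stream(path):
    head, tail = ntpath.split(path)
    if ":" in tail:
        base, _, stream = tail.partition(":")
        return ntpath.join(head, base), stream
    return path, None

print(split_stream(r"S:\data\Current:AFP_AfpInfo"))
# -> ('S:\\data\\Current', 'AFP_AfpInfo')
print(split_stream(r"S:\data\plain.txt"))
# -> ('S:\\data\\plain.txt', None)
```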

Cheers

Steve

Steven Harris
TSM Admin
Canberra Australia


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, 12 April 2018 3:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] NTNamedStreamRead() openNamedStream errors

Has anyone seen these kinds of errors?  A user has been having them for a long 
time (they manage their own box and thus didn't notice these errors for 9+ 
months)

   TSM return code   : 104
   TSM file  : ..\..\common\winnt\psmech.cpp (6583)
04/10/2018 22:58:04 ANS0361I DIAG: NTNamedStreamRead(): openNamedStream 
S:\arts-data\Private 
Directories\jwaltonen\Muse\Muse\Exceptions\AdobeMuse7.0-mul\Install Adobe 
Muse.app\Contents\Resources\Adobe
Muse\Contents\Resources\META-INF\AIR\extensions\com.adobe.oobelib.extensions\META-INF\ANE\MacOS-x86\OOBELibANE.framework\Versions\Current:AFP_AfpInfo:
RC=104
04/10/2018 22:58:04 ANS5250E An unexpected error was encountered.
   TSM function name : openNamedStream
   TSM function  : CreateFile() returned '2' for file ':AFP_AfpInfo'

All Google hits are for a much older client - they are fairly current at
7.1.6.5 on a Windows 2012 server.

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator | VMware Administrator
Virginia Commonwealth University, UCC/Office of Technology Services
www.ucc.vcu.edu | zfor...@vcu.edu | 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://phishing.vcu.edu/




Re: v7.1.8/8.1.2 SSL Upgrade: Rethinking servers first or clients first

2018-03-06 Thread Harris, Steven
Eric

Really old-school...

Schedule a one-time admin schedule to run a script that, as its last step, 
schedules itself again some time in the future.

e.g

def scr reset_fred
upd scr reset_fred 'upd admin fred sessionsecurity=transitional' line=5   <- check the syntax
upd scr reset_fred 'upd sched reset_fred t=a start=+0:05' line=10

def sched reset_fred t=a cmd='run reset_fred' active=yes


Regards

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
Eric van (ITOPT3) - KLM
Sent: Wednesday, 7 March 2018 2:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] v7.1.8/8.1.2 SSL Upgrade: Rethinking servers first or 
clients first

Hi Krzysztof,
Agreed, it will work but it sure ain't pretty. And again, we are trying to find 
a fix for something IBM has broken...
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Krzysztof Przygoda
Sent: dinsdag 6 maart 2018 15:40
To: ADSM-L@VM.MARIST.EDU
Subject: Re: v7.1.8/8.1.2 SSL Upgrade: Rethinking servers first or clients first

Hi Eric
The solution for running an admin schedule more often without crontab is to have 
several schedules starting at different moments within each hour (the STARTT 
value). E.g.:
def sched ADMIN_TRANSITIONAL_1 type=admin active=yes STARTT=15:01 CMD="RUN ADMIN_TRANSITIONAL" duru=min peru=hour
def sched ADMIN_TRANSITIONAL_2 type=admin active=yes STARTT=15:11 CMD="RUN ADMIN_TRANSITIONAL" duru=min peru=hour
etc.
I know, this makes the "fix" even more ridiculous ...but again, it works:-)
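If more than a couple of offsets are needed, the schedule definitions can be generated rather than typed out; a hedged Python sketch that mirrors the names above (the 10-minute step and the 15:xx start hour are assumptions):

```python
# Sketch: emit staggered hourly admin schedules so a command effectively
# runs every `step` minutes. Names mirror the example above; adjust to taste.
def staggered_scheds(script="ADMIN_TRANSITIONAL", start_hour=15, step=10):
    cmds = []
    for i, minute in enumerate(range(1, 60, step), start=1):
        cmds.append(
            f"def sched {script}_{i} type=admin active=yes "
            f"STARTT={start_hour:02d}:{minute:02d} "
            f'CMD="RUN {script}" duru=min peru=hour'
        )
    return cmds

for cmd in staggered_scheds():
    print(cmd)
# First line:
# def sched ADMIN_TRANSITIONAL_1 type=admin active=yes STARTT=15:01 CMD="RUN ADMIN_TRANSITIONAL" duru=min peru=hour
```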

Kind regards
Krzysztof

2018-03-06 15:17 GMT+01:00 Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com>:

> Hi Roger,
> I'm struggling with the exact same issues as you are. I'm running a
> 7.1.8 server and all procedures we have been using for years to deploy new 
> clients fail because of the admins STRICT issue. And migrating 
> existing (< 7.1.8) versions from another server to this 7.1.8 server 
> is only possible after a manual update of the admin to TRANSITIONAL, 
> each and every time. You can't bypass this by installing the 
> certificate first because the dsmcert utility does not exist in pre-7.1.8 
> clients!
> I really think IBM has screwed up here big time. They clearly 
> underestimated the impact of this "small" security "enhancements" they 
> implemented. :-( I too thought about the fix of having the admin 
> account updated to TRANSITIONAL every minute or so, but I haven't been 
> able to find a way through the administrative scheduler to schedule a 
> command more often than once per hour (PERunits=H)... So you have to 
> build your own scripts and schedule it through cron, which isn't 
> allowed in our shop.
> I too have a hard time finding a simple solution. I think the best 
> thing IBM could do is admit that they have underestimated this issue 
> and create a
> 7.1.8.100 patch level with the option to set an admin account to 
> TRANSITIONAL permanently.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Deschner, Roger Douglas
> Sent: vrijdag 2 maart 2018 2:00
> To: ADSM-L@VM.MARIST.EDU
> Subject: v7.1.8/8.1.2 SSL Upgrade: Rethinking servers first or clients 
> first
>
> I've been using our test setup for further testing, and I'm thinking 
> of reversing my strategy. I may want to upgrade clients first, and 
> then servers.
>
> The basic issue is still how to overcome the roadblock of having an 
> Administrator ID automatically switched from TRANSITIONAL to STRICT 
> upon first login from a 7.1.8/8.1.2+ dsmadmc client. IBM seems to 
> think we can upgrade all servers and all clients to 7.1.8/8.1.2+ 
> simultaneously. That is not practical.
>
> In the worst case, this automatic switching could cause the System 
> Administrator's worst nightmare - to lose control over a running system.
>
> I am still considering the (very ugly) bypass of an administrative 
> schedule that sets it back to TRANSITIONAL for all Admin IDs every 5 
> minutes. There will still be some failures.
>
> But I am also considering reversing the strategy I had considered 
> earlier, to a different strategy of upgrading all of the clients 
> involved (about 7 of them, I think, but I'm not sure) to 7.1.8 or
> 8.1.4 first, while the servers are all still running older versions. 
> So far, everything would be working.
>
> Then doublecheck that there are not any left behind by scanning 
> activity logs, the summary file, etc.
>
> Then once the operation of these clients was stabilized, upgrade our 4 
> servers one at a time. As each server is upgraded, the already-updated 
> client would cause certificates to be exchanged and that Admin ID to 
> be switched to STRICT, which would be OK since all of the client nodes 
> where that Admin ID 

Re: Command Routing Gotcha in v7.1.8

2018-02-22 Thread Harris, Steven
So what we need is a server option

SESSIONSECFORCE   TRANSITIONAL

Only able to be set by editing dsmserv.opt, and defaulting to NO.  If it's set, 
then the automatic update to SESSIONSECURITY=STRICT is not permitted.

Update everything you need to then turn the option off.

I understand why session security has been forced on and I understand that we 
don't want it to be easily bypassed from any admin session as that leaves a 
simple back door, but seriously, someone did not think through the implications.

Cheers

Steve

Steven Harris
TSM Admin/Consultant

Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Deschner, Roger Douglas
Sent: Friday, 23 February 2018 1:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Command Routing Gotcha in v7.1.8

There is a known and somewhat documented restriction where an administrative ID 
which connects to a New (7.1.8 or 8.1.2+) server from a New dsmadmc client, 
cannot connect from an Old administrative client anymore, because 
SESSIONSECURITY has been switched to STRICT.

I have now discovered that this affects Command Routing among servers. It makes 
sense, if you think about it, but it bit me. My test setup has two servers, one 
running 6.3.5 and the other 7.1.8. They both have Admin ID roger with the same 
password. Command routing initially worked fine between the two servers using 
Admin ID roger. But then Admin ID roger used a 7.1.8 client dsmadmc to connect 
to the 7.1.8 server, and all that SSL magic happened and SESSIONSECURITY got 
changed to STRICT. As documented, now Admin ID roger cannot use an older client 
dsmadmc to reach the 7.1.8 server. Although roger can still connect to the 
6.3.5 server using any version client dsmadmc, now command routing no longer 
works. It fails with "ANR0454E Session rejected by server ADSM-3, reason: 7 - 
Down level." It does work when Admin ID roger connects to the 7.1.8 server. UPD 
ADMIN ROGER SESSIONSECURITY=TRANSITIONAL is a bypass, and I'm keeping the 
(ugly) suggestion in mind to issue it every 5 minutes from a schedule if this 
becomes an issue.

I have noticed that, if SESSIONSECURITY=TRANSITIONAL is in effect, and you use 
an Old client to connect to an Old server, and you use command routing to route 
a command to a New server, it does NOT change SESSIONSECURITY to STRICT for 
that Admin ID on the New server. That is good. This feature of automatically 
setting SESSIONSECURITY to STRICT on Admin IDs is turning into one of our worst 
stumbling blocks in this major update. I'm the administrator; don't mess with 
my own ID!

This looks like another reason to upgrade ALL servers to 7.1.8/8.1.2+ before 
upgrading ANY clients. We have several admin IDs that are used by a variety of 
cron processes to monitor and control the backup systems. Some of these 
processes use command routing. I am now inventorying them, because the clients 
they connect from must all be upgraded together at the same time to avoid 
failures of these monitoring and control processes.

Roger Deschner
University of Illinois at Chicago
"I have not lost my mind; it is backed up on tape somewhere."



Re: Command to only backup and clear archive logs

2018-02-19 Thread Harris, Steven
Hans Chr.

What are you backing your DB up to?  Could you use some flash disk or tiered 
disk storage?  Create a special device class just for this situation, backup 
your database to it, then immediately start your normal DB backup.  As soon as 
the normal backup is complete you can delete the fast disk backup and return 
the space.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Remco 
Post
Sent: Tuesday, 20 February 2018 2:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Command to only backup and clear archive logs

You need a bigger archive log, or better backup performance (more recent 
versions of TSM allow you to specify the number of parallel sessions in SET 
DBRECOVERY).

> Op 19 feb. 2018, om 10:22 heeft Hans Christian Riksheim  
> het volgende geschreven:
> 
> Does TSM/DB2 have that possibility, like Oracle MSSQL?
> 
> Problem is when the archivelog filesystem goes full we have to wait 
> for the full dbbackup to be finished before archive logs are backed up and 
> cleared.
> Since we have to disable all sessions and other processes(this 
> generates even more logs) we have several hours of outage each time this 
> happens.
> 
> 
> 
> Hans Chr.

-- 

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622



Re: Exchange backup speed

2018-02-18 Thread Harris, Steven
Tom

That rings a bell.  These backups use VSS, so you need to stagger them by 10 
minutes or so to allow VSS to do its thing.  And yes, you need separate logs for 
each process for them to be useful, but that's just flags to the commands.
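The staggered-start pattern is generic; here is a hedged Python sketch of it (the real job here would be a TDP command line per storage group such as `tdpexcc backup SG1 full /logfile=sg1.log`, which is an assumption about the setup, so the demo below uses harmless placeholder commands):

```python
# Sketch: start jobs in parallel, staggered by a delay, each writing to its
# own log file; collect per-job return codes so the worst one can decide
# overall success. In production each argv would be a TDP command such as
# ["tdpexcc", "backup", "SG1", "full", "/logfile=sg1.log"] (hypothetical)
# with delay_seconds=600 to give VSS room between snapshot requests.
import subprocess
import sys
import time

def launch_staggered(jobs, delay_seconds=600):
    """jobs: list of (name, argv). Returns {name: returncode}."""
    procs = []
    for i, (name, argv) in enumerate(jobs):
        if i:
            time.sleep(delay_seconds)          # stagger the starts
        log = open(f"{name}.log", "w")         # separate log per job
        procs.append((name, subprocess.Popen(argv, stdout=log, stderr=log), log))
    results = {}
    for name, proc, log in procs:
        results[name] = proc.wait()
        log.close()
    return results

# Demo with harmless placeholder commands instead of real backups.
rcs = launch_staggered(
    [("job1", [sys.executable, "-c", "print('ok')"]),
     ("job2", [sys.executable, "-c", "raise SystemExit(3)"])],
    delay_seconds=0,
)
print(max(rcs.values()))  # worst return code decides overall success: 3
```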

I did get a sample script from someone internal to IBM who monitors this list; 
he may care to share it with you.  I never implemented it because at that stage 
I did not know PowerShell, and time and projects march on, so I had to go with 
what worked rather than what I would have liked.

Good luck

Steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Alverson
Sent: Monday, 19 February 2018 1:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Exchange backup speed

Couldn't I just set up multiple schedulers that all start up at about the same 
time to make it parallel?  That way I don't need to try to extract the 
errorlevels for each process and combine them somehow.

I tried adding a START command in front of the FULL backup at the beginning of 
my batch file (to make the first FULL backup run in parallel with the rest), 
but I ended up with a 418 error and an unreadable tdpexc.log file (the outputs 
from the parallel backup jobs are all mixed together, often in mid-sentence).  
I will have to ask our storage team what they saw on their side, but in my logs 
I got

5060E A Tivoli Storage Manager API error has occurred.
02/18/2018 17:02:34 ANS1245E (RC122)  The file has an 0n2/18/2018 17:02:34 
ANS1245E (RC122)  The file has an unknown format.

02/18/2018 17:13:20 ANS1236E (RC115)  An unexpected error occurred.
02/18/2018 17:13:20 ACN5060E A Tivoli Storage Manager API error has occurred.

02/18/2018 17:14:05 ACN5918W The mailbox history did not update successfully on 
the TSM Server.
02/18/2018 17:14:05 ACN5060E A Tivoli Storage Manager API error has occurred.

02/18/2018 17:24:30 ANS1236E (RC115)  An unexpected error occurred.
02/18/2018 17:24:30 ACN5060E A Tivoli Storage Manager API error has occurred.



On Sun, Feb 18, 2018 at 4:55 PM, Harris, Steven < 
steven.har...@btfinancialgroup.com> wrote:

> Tom
>
> It is a failing of TSM/SP that a basic function is deemed "good 
> enough" by the people who decide such things within IBM and the 
> real-world implementation is left to users. Your problem is not 
> uncommon and a solution should be a standard part of the marketed offering.
>
> You will need some powershell skills. Use the powershell cmdlets that 
> come with TDP for Exchange and run your processes in parallel. You 
> will need to code some funky error checking to make sure the correct 
> return codes are returned.
>
> Regards
>
> Steve
>
> Steven Harris
> TSM Admin/Consultant
> Canberra ACT
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Tom Alverson
> Sent: Monday, 19 February 2018 6:45 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Exchange backup speed
>
> Remco:
>
> I appreciate all feedback, blunt or not.  I am relatively new to TSM 
> but I only work on windows client issues.  A separate team works on 
> the TSM storage servers and they are very experienced
>
> The servers are loafing, they have 4 cores with 32 processors, and 
> 384GB of RAM, none of which is anywhere near the limit.  The only 
> bottleneck right now is that the 10Gb interfaces in the Exchange server 
> and TSM storage servers must pass through a 1Gb embedded rack switch that 
> I have been urging them to upgrade.  If we could get anywhere near 1Gb 
> network throughput on the Exchange backups that would be good.
>
> I'm sure the storage servers are not under stress based on performance 
> of other backups we have running.
>
> On Sun, Feb 18, 2018 at 1:03 PM, Remco Post <r.p...@plcs.nl> wrote:
>
> > Hoi Tom,
> >
> > this might sound a bit blunt, but from what you’re asking I get the 
> > strong impression that this the first time you’re working with TSM. 
> > So I’m a bit anxious to give you any advise, fearing that it might 
> > lead to
> more problems.
> >
> > In general with performance issues I would look into the generic 
> > performance indicators of the exchange servers first. Secondly, 
> > check for any network bottlenecks between the exchange server and 
> > the TSM
> server.
> > Thirdly you can look into the performance indicators of your TSM server.
> > All with normal tools.
> >
> > > Op 17 feb. 2018, om 00:55 heeft Tom Alverson 
> > > <tom.alver...@gmail.com>
> > het volgende geschreven:
> > >
> > >>
> > >>
> > >> We are trying to speed up our Exchange backups that are currently 
> > >> only
> > > using about 15%

Re: Exchange backup speed

2018-02-18 Thread Harris, Steven
Tom

It is a failing of TSM/SP that a basic function is deemed "good enough" by the 
people who decide such things within IBM and the real-world implementation is 
left to users. Your problem is not uncommon and a solution should be a standard 
part of the marketed offering.

You will need some powershell skills. Use the powershell cmdlets that come with 
TDP for Exchange and run your processes in parallel. You will need to code some 
funky error checking to make sure the correct return codes are returned. 
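To make the fan-out-and-check pattern concrete, here is a minimal sketch in Python (the same structure works in PowerShell with Start-Job). The tdpexcc.exe command lines and DAG groupings are illustrative placeholders, not a tested TDP configuration:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder command lines -- substitute your real tdpexcc.exe invocations.
BACKUP_JOBS = [
    ["tdpexcc.exe", "backup", "dag1", "full"],
    ["tdpexcc.exe", "backup", "dag2,dag3,dag4,dag5", "incr"],
    ["tdpexcc.exe", "backup", "dag6,dag7,dag8,dag9", "incr"],
]

def run_jobs(jobs, max_workers=4):
    """Run each command line concurrently; return (command, returncode) pairs."""
    def run(cmd):
        return (cmd, subprocess.run(cmd).returncode)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run, jobs))

def failed_jobs(results):
    """Collect streams whose exit status is non-zero so the wrapper can fail loudly."""
    return [(cmd, rc) for cmd, rc in results if rc != 0]
```

A wrapper like this would exit non-zero whenever failed_jobs() is non-empty, so the TSM scheduler still flags a missed backup. Whether particular TDP return codes count as warnings or failures should be checked against the TDP for Exchange documentation rather than assumed.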

Regards

Steve

Steven Harris
TSM Admin/Consultant 
Canberra ACT

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Alverson
Sent: Monday, 19 February 2018 6:45 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Exchange backup speed

Remco:

I appreciate all feedback, blunt or not.  I am relatively new to TSM but I only 
work on Windows client issues.  A separate team works on the TSM storage 
servers and they are very experienced.

The servers are loafing: they have 4 cores with 32 processors and 384GB of 
RAM, none of which is anywhere near the limit.  The only bottleneck right now is 
that the 10Gb interfaces in the Exchange server and TSM storage servers must 
pass through a 1Gb embedded rack switch that I have been urging them to upgrade.  
If we could get anywhere near 1Gb network throughput on the Exchange backups 
that would be good.

I'm sure the storage servers are not under stress based on performance of other 
backups we have running.

On Sun, Feb 18, 2018 at 1:03 PM, Remco Post  wrote:

> Hoi Tom,
>
> this might sound a bit blunt, but from what you’re asking I get the 
> strong impression that this is the first time you’re working with TSM. So 
> I’m a bit anxious to give you any advice, fearing that it might lead to more 
> problems.
>
> In general with performance issues I would look into the generic 
> performance indicators of the exchange servers first. Secondly, check 
> for any network bottlenecks between the exchange server and the TSM server.
> Thirdly you can look into the performance indicators of your TSM server.
> All with normal tools.
>
> > Op 17 feb. 2018, om 00:55 heeft Tom Alverson 
> > 
> het volgende geschreven:
> >
> >>
> >>
> >> We are trying to speed up our Exchange backups that are currently 
> >> only
> > using about 15% of the network bandwidth.  Our servers are running
> Windows
> > 2012R2 and Exchange 2013 CU15 with TSM 7.1.0.1 and TDPEXC 7.1.0.1.
> > Currently we are backing up 15 DAGS per Exchange server (we have 
> > multiple exchange servers) and we are only backing up on servers 
> > that are standby replicas.  Currently we are trying a 14 day 
> > schedule where we do a full backup of a different DAG per day, and 
> > incrementals on the rest.  Even doing this we are having trouble 
> > completing them in 24 hours (before the next day's backup is supposed to 
> > start).
> >
> > I saw an old posting from Del saying to increase RESOURCEUTILIZATION 
> > on
> the
> > DSMAGENT.  Does that mean the DSM.OPT in the BACLIENT folder?  It 
> > was set at 2.  Do either the buffers or buffrsize options make any 
> > difference?
> >
> > Also if we want to "parallelize" the backups does that mean separate 
> > scheduler services for each one?  We currently use 14 different batch 
> > files (for the 14 days of the cycle) with something like this:
> >
> > [day1.bat]
> >
> > tdpexcc.exe backup dag1 full
> > tdpexcc.exe backup dag2,dag3,dag4,dag5 incr
> > tdpexcc.exe backup dag6,dag7,dag8,dag9 incr
> > tdpexcc.exe backup dag10,dag11,dag12,dag13 incr
> > tdpexcc.exe backup dag14,dag15 incr
> > exit
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


This message and any attachment is confidential and may be privileged or 
otherwise protected from disclosure. You should immediately delete the message 
if you are not the intended recipient. If you have received this email by 
mistake please delete it from your system; you should not copy the message or 
disclose its content to anyone. 

This electronic communication may contain general financial product advice but 
should not be relied upon or construed as a recommendation of any financial 
product. The information has been prepared without taking into account your 
objectives, financial situation or needs. You should consider the Product 
Disclosure Statement relating to the financial product and consult your 
financial adviser before making a decision about whether to acquire, hold or 
dispose of a financial product. 

For further details on the financial product please go to http://www.bt.com.au 

Past performance is not a reliable indicator of future performance.


Re: Influencing order of VM backup.

2018-02-10 Thread Harris, Steven
Hi Mike and Marc

Thanks for your responses.

Order of presentation of VMs seems to be dependent on the order that vCenter 
provides them to the TSM for VE query.  It is certainly not alphabetical or any 
other obvious ordering (to me at least) and it can change over time.  One  
problem VM was starting at the beginning of the schedule and suddenly moved to 
late in the order with no changes that we could see.

We are running multiple data movers based on class of VM.  There are two 
separate vCenters and each of those have prod and non-prod backup streams 
organized by VMWare cluster with a datamover for each.  Leaving out unnecessary 
detail, there is one VBS server for each vCenter/stream and one datamover on 
each VBS with its own storage agent.  Data transport is SAN.  All this is 
controlled through a single TSM Server.   

VMMAXSESSIONS is set to 10 in prod... we have been experimenting in non-prod 
with making this higher, but so far have found that more allowed sessions does 
not necessarily mean more active sessions: the most we have had in non-prod was 
16 active when VMMAXSESSIONS was 20, and then only briefly.  More sessions also 
seem to push out the elapsed time:  10 to 20 pushed out the end of backup from 
8 to 10 hours.

For non-prod we have therefore now settled at 12 sessions and have started to 
play with VMLIMITPERDATASTORE and VMLIMITPERHOST which have both been 2 until 
now.
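For reference, these knobs live in the datamover's option file. A hypothetical fragment matching the non-prod settings described might look like the following, assuming the "sessions" option referred to is VMMAXPARALLEL; which options are honoured depends on the TSM for VE client level:

```text
* Illustrative datamover tuning -- values from the non-prod experiment above
VMMAXPARALLEL        12
VMLIMITPERHOST        2
VMLIMITPERDATASTORE   2
```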

Yes we have considered multiple datamovers and schedules.  As we control things 
by editing dsm.sys that gets messy very quickly.  The level of TSM for VE code 
we are on means that the automatic exclusion of PRDMs, VMDKs>2TB etc does not 
work so we have to exclude these individually. 

I am really tempted to, but the effort to script an automatic generation 
process for the dsm.sys stanzas is probably not warranted. There are also 
issues: having to generate using PowerShell on Windows and then update 
control files on Linux, the security implications of that in our environment, 
or having to teach myself Python for this one task (and getting Python 3 
installed, the prereqs for the VMware API, etc.). 

Looks like I may be stuck until we can at least get a code upgrade.  The 
Spectre/Meltdown security issue will likely force a VMWare upgrade, at which 
point we can move off our current TSM for VE level and some of the pain will go 
away,  however if anyone has more information on how the TSM for VE VM 
selection algorithm works and how it can be influenced, I'd be interested to 
hear it. 

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Ryder, 
Michael S
Sent: Saturday, 10 February 2018 3:07 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Influencing order of VM backup.

You can take more control by setting up specific schedules to backup some VMs 
first.  You might use VM folders to control that as well.

Do you let VMs back up in parallel with "VMMAXParallel"?  It defaults to 1...

You can also have multiple jobs running simultaneously.  It's not clear to me, 
are you using the "IBM Spectrum Protect for Virtual Environments: Data 
Protection for VMware"?  If so, in addition you could be running multiple jobs 
on multiple datamovers simultaneously.

Setting VMMAXParallel greater than 1 and running multiple datamovers helped us to drastically 
reduce our backup windows.

Best regards,

Mike

On Fri, Feb 9, 2018 at 6:08 AM, Marc Lanteigne 
wrote:

> Hi,
>
>  You could configure a new DataMover to handle that VM.
>
>  In the preview, is the order alphabetical?  If so, can you rename the VM?
>
>  Marc...
>
>  Sent from my iPhone using IBM Verse
>
>  On Feb 8, 2018, 11:17:20 PM, steven.har...@btfinancialgroup.com wrote:
>
>  From: steven.har...@btfinancialgroup.com
>  To: ADSM-L@VM.MARIST.EDU
>  Cc:
>  Date: Feb 8, 2018, 11:17:20 PM
>  Subject: [ADSM-L] Influencing order of VM backup.
>
>
>Hi All
>   Thanks for the input on my recent query about 7 Year VM backups.  
> I'll let you know when I decide something.
>   Moving on..
>   TSM Server  7.1.1.300 AIX,  Datamovers and Storage Agents on Redhat, 
> writing to Protectier VTL, TSM for VE 7.1.1/2 hybrid.
>   We can't use the VMware plugin because of separation of duties 
> concerns, so we edit the DOMAIN.VMFULL lines in the dsm.sys stanzas. 
> VMs have a range of different sizes that all back up on the same 
> schedule and we'd prefer not to split this up.  The execution order of 
> the VM backups is determined by TSM for VE, somehow, and can be seen when a 
> backup vm -preview is run.
>   There are some large VMs that take quite a while to back up, but 
> unfortunately run late in the execution order, so we overrun our 
> backup window.
>   Changing the order of the VMs in the DOMAIN.VMFULL statement does 
> not influence execution order.  Is there any way to make the big ones run 
> first?
>   Thanks
>   Steve
>   

Influencing order of VM backup.

2018-02-08 Thread Harris, Steven
Hi All

Thanks for the input on my recent query about 7 Year VM backups.  I'll let you 
know when I decide something.

Moving on..

TSM Server  7.1.1.300 AIX,  Datamovers and Storage Agents on Redhat, writing to 
Protectier VTL, TSM for VE 7.1.1/2 hybrid.

We can't use the VMware plugin because of separation of duties concerns, so we 
edit the DOMAIN.VMFULL lines in the dsm.sys stanzas. VMs have a range of 
different sizes that all back up on the same schedule and we'd prefer not to 
split this up.  The execution order of the VM backups is determined by TSM for 
VE, somehow, and can be seen when a backup vm -preview is run.

There are some large VMs that take quite a while to back up, but unfortunately 
run late in the execution order, so we overrun our backup window.

Changing the order of the VMs in the DOMAIN.VMFULL statement does not influence 
execution order.  Is there any way to make the big ones run first?

Thanks

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia




VMCTLMC space for a full backup.

2018-02-06 Thread Harris, Steven
Hi Guys

There is a really good explanation of VMCTLMC Sizing for incremental VM backups 
at  http://adsm.se/?p=546
I run a monthly full for compliance reasons on all the prod VMs, and am trying 
to understand the implications for VMCTLMC sizing.  So far I suppose it's 8000 
megablocks as well as 8000 control files per TB, so 8000*(128+73) KB ~= 1.5 
GB/TB.
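Working the arithmetic through, using the counts and per-item sizes assumed in the post:

```python
# VMCTLMC space estimate for one monthly full, per TB of VM data, using
# the post's assumptions: ~8000 megablocks and ~8000 control files per TB,
# at ~128 KB and ~73 KB each respectively.
MEGABLOCKS_PER_TB = 8000
CTL_FILES_PER_TB = 8000
KB_PER_MEGABLOCK = 128
KB_PER_CTL_FILE = 73

kb_per_tb = (MEGABLOCKS_PER_TB * KB_PER_MEGABLOCK
             + CTL_FILES_PER_TB * KB_PER_CTL_FILE)
gb_per_tb = kb_per_tb / 1024 / 1024
print(round(gb_per_tb, 2))  # ~1.53 GB of control data per TB, per monthly full
```

Over a 7-year retention that is 84 retained fulls, i.e. on the order of 130 GB of control data per TB of protected VM, which is consistent with running out of space.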

As there is a 7 year retention for these I see why I'm running out of space.

The only option I can see would be a 1 year retention and a monthly export to 
tape for this data.  What do others do?

Cheers

Steve

TSM Admin/Consultant
Canberra/Australia



Re: Need guidance on how to protect stgpool to cloud

2018-02-01 Thread Harris, Steven
Hello Luc.

I'm not sure that Glacier is a good fit for most TSM data. 

TSM has a lot of churn.  Data expires and then you are left with a choice of 
reclamation or storing excess data.  Glacier retrieval is slow, so you would 
have to mark volumes as offsite to allow reclamation from the primary copy.  
(that’s something that we need BTW if anyone from development is reading this, 
the ability to mark a copypool as offsite so that the offsite reclamation 
strategy is always used).  Glacier storage is cheap but there are access and 
retrieval costs, and there is an early-deletion charge for data stored less than 90 days.

What it would be good for is storing exports or backupsets of data that needs 
to be kept for long periods for regulatory purposes, as once written that never 
changes.

If it were me I'd go with standard S3 until I understood my data access pattern 
and move to Glacier once I was convinced that was a good deal.

I did once set up a system that lived entirely on AWS.  Two TSM Servers in 
different zones, that replicated to one another.  We used S3 there, but I was 
not further involved after set up.  The TSM Servers were, of course, the 
biggest and most expensive resource consumers in the whole fleet.

It would be great if you could report here what you eventually decide

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia






-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Michaud, Luc [Analyste principal - environnement AIX]
Sent: Friday, 2 February 2018 1:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Need guidance on how to protect stgpool to cloud

Greetings to you all,

We've set up a blueprint replicated environment with directory container pools 
on both sides.

We protect the primary node stgpools to tape as well, with offsite movements.

Now we want to get rid of the tapes, most likely by leveraging AWS Glacier.

We have run into a limitation: a container pool can only have one container 
protection target and one tape protection target.

How have you guys done it?  Any tricks or caveats that we should be aware of? 

Regards,

Luc Michaud
Société de Transport de Montréal




Re: Looking for a solution for DFS backups - a.k.a. how do you do it

2018-01-11 Thread Harris, Steven
I feel for you Zoltan

My users are demanding, but at least the corporate management structure means I 
have some measure of control over their demands.

There is a TSM client REST API that came out at 7.1.3.  This can be used to run 
backups and restores although the API guide explicitly states that it is not 
supposed to be used for long-running tasks.  I don’t know if later versions 
have lifted that restriction.

Since you are a university, you might have some smart coders available. I 
envisage a web-based service that at the back end runs this API to select what 
to restore and then run it.  That could all reside on the one box, with it 
effectively being a reverse proxy to your Windows backup servers.

As you aren’t the only one in this particular bind, the result might be 
commercially viable as a paid product.  If it gets that far I claim 5% OK?  

Cheers

Steve

Steven Harris
TSM Admin/Consultant 
Canberra Australia


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Friday, 12 January 2018 7:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Looking for a solution for DFS backups - a.k.a. how do you do 
it

With the demise of the B/A web-client in 7.1.8+, we are in desperate need of an 
alternative solution to handling our DFS/ISILON backups.

Being a university, the big issue is that everyone wants control over backups 
to be able to perform restores by themselves!

Our current (soon to be unusable) solution is 3-dedicated physical Windows 
servers with 25-configurations/services (each) of the B/A client (each with 
unique ports for the web-client).  The backup schedules contain the specific 
filesystem/mount it backs up.  So, department level folks can use a web browser 
to connect to the correct port on the backup servers to manage their restores.

The volume of backups makes it almost impossible to shift everything to one 
node (380TB / 225M objects) and if one server was able to handle it, it would 
shift restore responsibility to some sort of "help desk"!

So, how do you handle this kind of scenario in your institution/organization?

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://phishing.vcu.edu/




Re: Select question

2018-01-10 Thread Harris, Steven
Guys, all of those are far more complex than they need be:

select node_name, lastacc_time from nodes where lastacc_time < (current timestamp - 90 days)

Cheers

Steve

TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc 
Lanteigne
Sent: Wednesday, 10 January 2018 11:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Select question


Hello,

There's that exact query on Thobias:
http://thobias.org/tsm/sql/#toc17

-
Thanks,
Marc...

Marc Lanteigne
Accelerated Value Specialist for Spectrum Protect
416.478.0233 | marclantei...@ca.ibm.com
Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern

Follow me on: Twitter, developerWorks, LinkedIn


-Original Message-
From: rou...@univ.haifa.ac.il [mailto:rou...@univ.haifa.ac.il]
Sent: Wednesday, January 10, 2018 4:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Select question

Hi to all

I'm trying to figure out how to run a select that returns the node names whose 
“Days since last access” is greater than 3 months.

When I run q node ides I get:

Protect: ADSM2>q node ides

Node Name    Platform      Policy Domain   Days Since     Days Since     Locked?
                           Name            Last Access    Password Set
---------    ------------  --------------  -------------  -------------  -------
IDES         Linux x86-64  CC              178            1,497          No

select node_name , LASTACC_TIME from nodes where ???

T.I.A Regards




Re: TSM 8.1.1 on Linux crash

2018-01-09 Thread Harris, Steven
Remco

Can you please explain  what the fall-out is?

I'm using TSM 7.1.0 on AIX and have issues with LTO6 drives emulating LTOs.  
Sometimes I just power cycle the drive and that clears the problem, but that 
does not always work.

Whilst I don't have any linux servers or storage agents in that particular mix, 
I'd like to understand what you are seeing for future reference.

Thanks

Steve

Steven Harris
TSM Admin/Consultant

Canberra Australia





Re: Using TDP to backup NDF files

2017-12-13 Thread Harris, Steven
Bill

There is an option in TDP to back up a file group.  I've never understood what 
it is for.   Looking online for information about NDF files, that same term, 
"file group", popped up. So, I'd start there and see if that fits your use case.

Cheers

Steve

Steven Harris
TSM Admin/Consultant 
Canberra, Australia

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bill 
Boyer
Sent: Thursday, 14 December 2017 2:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Using TDP to backup NDF files

Is there a way to backup NDF files as part of the database backup? I have a 
customer with a database with an NDF file they use for 'archiving'. That doesn't 
get included as part of the TDPSQLC BACKUP * FULL command. How can I get that 
included in the SQL backup of the server? Or do I need to use the BAClient for 
that?



TIA,



Bill

"Enjoy life. It has an expiration date." - ??



Re: Backup of file attributes Windows

2017-11-22 Thread Harris, Steven
I haven’t used it for some years now, but the old Domino client used to store 
two entries for each client, one for the data and one for the permissions.  It 
would probably go against the TSM (Hate the new name!) philosophy to store any 
client data in the database, but this case might be a reasonable exception. 

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Reese, 
Michael A (Mike) CIV USARMY 21 SIG BDE (US)
Sent: Thursday, 23 November 2017 3:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup of file attributes Windows

Hi Hans,

I've been snagged by this on many occasions myself.  The worst time was when 40 
TB of data had to get backed up again because a system admin made a simple 
permission change at the root directory.

I agree with you that since TSM now uses DB2 for its database, it should be 
able to handle storing permissions for Windows clients in the TSM database, 
just like it does for Linux clients.  If you're willing to create an RFE, I'll 
vote for it!

Mike Reese


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Hans 
Christian Riksheim
Sent: Wednesday, November 22, 2017 7:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [Non-DoD Source] [ADSM-L] Backup of file attributes Windows

When our customers changes permissions on their file servers there is total 
chaos with new full backup of everything and no practical method to get rid of 
the extra backup data. I think our customers should be able to do this without 
paying twice as much for their backups.

From what I have heard TSM couldn't just update these files because there is 
too much overhead keeping this information in the database. However in the 
meantime TSM has gone from the proprietary database in v5 to a full DB2 which 
should handle this well.

It would be nice if IBM took a look at this problem and came up with a 
solution. Don't know if other TSM admins think this is an issue though.

Hans Chr. Riksheim




Re: Monthly backups of VMs

2017-11-16 Thread Harris, Steven
Thanks for the reply Richard

Backupsets apply only to BA client data.  Theoretically exports are possible. 
I've had issues with backupsets in the past, and even if they were possible 
here I would be loath to go there again (e.g. a backupset is essentially a 
restore: it would take a drive to start, but then lack the priority to take 
another drive to write its data and would fail, so I didn't get a good 
backupset and whatever it interrupted also failed).

Management of exports is also less than ideal. And they are slow, hmmm, unless 
an active pool was used.

The problem with mixing monthlies and dailies is that they both use the 
same-named snapshots and so if one is running and the other starts it causes 
the existing snapshot to be deleted and the running backup fails.  If there 
were a way to alter the snapshot name for the monthlies, that might help, but 
afaik there is not.  Without that then we need to manipulate the domain.vmfull 
(or any alternatives) on a daily basis to exclude that day's monthlies from 
daily backups and include into that day's monthlies.  Not simple.

Thanks for making me explain this.  Active pool and exports may be the way to 
go.  Define the export volumes explicitly with a name that identifies their 
contents, then back them up with the BA client.

Cheers

Steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Cowen
Sent: Friday, 17 November 2017 9:06 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Monthly backups of VMs

Can you use backupsets or export nodes to real tape (no client impact)? Or full 
restores to a dummy node and then archive those to real tape (once a month), 
again no direct client impact?
Can the "monthlies" be spread over 30 days?

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Steven
Sent: Thursday, November 16, 2017 4:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Monthly backups of VMs

HI All

Environment is
TSM 7.1.1 server on AIX. 7.1.1 Storage agents on Linux,  7.1.1  BA clients, 
7.1.1 VE clients,  VMWare 5.5.  The VMware backups are via the SAN to a 
ProtecTIER VTL.

My Client is an international financial organization so we have lots of 
regulatory requirements including SARBOX.  All of these require a monthly 
backup retained 7 years.  Recent trends in application design have resulted in 
multiple large MSSQL databases - up to 10 TB that never delete their data.  
Never mind the logic, the hard requirement is that these be backed up monthly 
and kept for 7 years, and that no variation will be made to the application 
design.

Standard process has been a daily VE incremental backup to a daily node  and 
monthly full to a separate node.  The fulls are becoming untenable on several 
grounds.  The VBS Servers need to run a scsi rescan on weekdays to pick up any 
changed disk allocations, and this interrupts any running backups.  The 
individual throughput of the Virtual tape drives is limited so sessions run for 
a long time and there is not enough real tape to use that.   Long running 
backups cause issues with the storage on the back end because the snapshots are 
held so long.

Does anyone have any practical alternate approaches for taking a monthly VMware 
backup for long term retention?

Thanks

Steve

Steven Harris

TSM Admin/Consultant
Canberra Australia

This message and any attachment is confidential and may be privileged or 
otherwise protected from disclosure. You should immediately delete the message 
if you are not the intended recipient. If you have received this email by 
mistake please delete it from your system; you should not copy the message or 
disclose its content to anyone. 

This electronic communication may contain general financial product advice but 
should not be relied upon or construed as a recommendation of any financial 
product. The information has been prepared without taking into account your 
objectives, financial situation or needs. You should consider the Product 
Disclosure Statement relating to the financial product and consult your 
financial adviser before making a decision about whether to acquire, hold or 
dispose of a financial product. 

For further details on the financial product please go to http://www.bt.com.au 

Past performance is not a reliable indicator of future performance.


Monthly backups of VMs

2017-11-16 Thread Harris, Steven
Hi All

Environment is
TSM 7.1.1 server on AIX. 7.1.1 Storage agents on Linux,  7.1.1  BA clients, 
7.1.1 VE clients,  VMWare 5.5.  The VMware backups are via the SAN to a 
ProtecTIER VTL.

My Client is an international financial organization so we have lots of 
regulatory requirements including SARBOX.  All of these require a monthly 
backup retained 7 years.  Recent trends in application design have resulted in 
multiple large MSSQL databases - up to 10 TB that never delete their data.  
Never mind the logic, the hard requirement is that these be backed up monthly 
and kept for 7 years, and that no variation will be made to the application 
design.

Standard process has been a daily VE incremental backup to a daily node  and 
monthly full to a separate node.  The fulls are becoming untenable on several 
grounds.  The VBS Servers need to run a scsi rescan on weekdays to pick up any 
changed disk allocations, and this interrupts any running backups.  The 
individual throughput of the Virtual tape drives is limited so sessions run for 
a long time and there is not enough real tape to use that.   Long running 
backups cause issues with the storage on the back end because the snapshots are 
held so long.

Does anyone have any practical alternate approaches for taking a monthly VMware 
backup for long term retention?

Thanks

Steve

Steven Harris

TSM Admin/Consultant
Canberra Australia



Re: Huge differences in file count after exporting node to a different server

2017-10-25 Thread Harris, Steven
Hi Zoltan

Are the  old server numbers from export time or current time?  If current time 
you may have had some 5 million small files expire in the interim.

Cheers

Steve

Steven Harris
TSM Admin/Consultant 
Canberra, Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, 26 October 2017 6:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Huge differences in file count after exporting node to a 
different server

I am curious if anyone has seen anything like this.

A node was exported (filedata=all) from one server to another (all servers
are 7.1.7.300 RHEL)

After successful completion (took a week due to 6TB+ to process) and
copypool backups on the new server, the Total Occupancy counts are the same
(13.52TB).  However, the file counts are waaay off (original=17,561,816 vs
copy=12,471,862)

There haven't been any backups performed to either the original (since the
export) or new node. Policies are the same on both servers and even if they
weren't, that wouldn't explain the same occupancy size/total.

Neither server runs dedup (DISK based storage volumes).

Any thoughts?

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/




Re: TSM for VE - VMCLI

2017-10-20 Thread Harris, Steven
Thanks Ray

The idea of a roll-your-own scripted backup of VMs was the original conception 
of this environment, some years ago now, but it was hard to make it work and 
there were time pressures, so we went with the simple option first.  Only now 
is that becoming unmanageable.

It's still a lot of work and of course we have an extensive test environment to 
play with and a team of crack programmers to do it.  It's strictly a 
side project and we have to be careful to not break anything.

As to job submission, a TSM command schedule should do it. We have Control-M 
if we need it.

Cheers

Steve. 

-Original Message-
From: Storer, Raymond [mailto:stor...@nibco.com] 
Sent: Thursday, 19 October 2017 4:19 AM
To: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>
Cc: Harris, Steven <steven.har...@btfinancialgroup.com>
Subject: RE: TSM for VE - VMCLI

Steven,

Sorry, I cannot help you with the vmcli commands.  However, you can easily use 
PowerShell or another scripting language to get vCenter to provide you a list 
of VMs and then sort, slice, dice, filter, and store them.  You could easily 
"roll your own" VM tagging structure for backups too--you don't need a later 
version of TSM for VE to do it if you plan to parse what vCenter gives you 
anyway.  Then, when you upgrade your TSM for VE at some future date you can 
migrate the tags you created to the ones the new version of TSM for VE uses.  
If running the script on the TSM for VE server, you could execute dsmc commands 
as your script spits them out.  If not, what options do you have for some sort 
of "job submission" to the TSM for VE server for it to execute the dsmc 
commands?
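A minimal sketch of the filter step Ray describes, in Python (VM names and patterns are invented; in practice the list would come from vCenter via PowerCLI or pyVmomi, and the dsmc options should be checked before use):

```python
import fnmatch

def select_vms(vms, includes, excludes):
    """Keep VMs matching any include pattern, then drop any exclude match."""
    picked = [v for v in vms if any(fnmatch.fnmatch(v, p) for p in includes)]
    return [v for v in picked if not any(fnmatch.fnmatch(v, p) for p in excludes)]

vms = ["sqlprod01", "sqlprod02", "webdev01", "appprod07"]
selected = select_vms(vms, includes=["sql*", "app*"], excludes=["*dev*"])

# Spit out one dsmc command for the whole batch, as the script runs.
print('dsmc backup vm "{}" -vmbackuptype=fullvm'.format(",".join(selected)))
```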

Ray

Ray Storer
NIBCO INC. | Intel Systems Specialist
P: 574.295.3457 | F: 574.295.1298 | M: 574.742.0192 stor...@nibco.com | 
www.nibco.com




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Steven
Sent: Tuesday, October 17, 2017 6:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM for VE - VMCLI

Hi Guys

TSM Server 7.1.1 AIX,  TSM VE 7.1.1 linux X64

This shop has issues with us using the vSphere plugin as it requires too many 
permissions. We cannot move to later VE clients with expanded facilities 
because of a dependency on vCenter 5.5

So far, we have been managing by editing the dsm.sys files, but that is getting 
to be unsustainable.  The docs describe a VMCLI interface that may suit when 
scripted.  The idea would be to write Powershell/python scripts to query vmware 
for a list of vms, apply some filters that would allow includes and excludes 
from a database or maybe json files and generate a list of VMs to backup 
invoked by vmcli.

Is anyone using vmcli this way?  Any war stories to tell? Maybe it's working 
well for you.

Please let me know

Thanks

Steve

Steven Harris
TSM Admin/Consultant

Canberra Australia






CONFIDENTIALITY NOTICE: This email and any attachments are for the exclusive 
and confidential use of the intended recipient. If you are not the intended 
recipient, please do not read, distribute or take action in reliance upon this 
message. If you have received this in error, please notify us immediately by 
return email and promptly delete this message and its attachments from your 
computer system. We do not waive attorney-client or work product privilege by 
the transmission of this message.


Invoking backup from powercli

2017-10-17 Thread Harris, Steven
Hello again.

TSM Server 7.1.1 AIX,  TSM VE 7.1.1 linux X64

I have an application that needs to co-ordinate the taking of their TSM for VE 
backups with their housekeeping.  It's a large distributed postgres database 
and they want to shut down the database on all members of the cluster and 
trigger a snapshot backup across them all once a week to get a consistent 
restore point.

vSphere web client is not an option, and neither is running a dsmc command on 
the vbs. Neither do we want to give them access to run a clientaction on the 
TSM Server.

Is there any known way to trigger a TSM for VE backup on demand? I'm thinking 
that VMware  PowerCLI could be the mechanism here, but would be pleased to hear 
of any other.

Thanks

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia



TSM for VE - VMCLI

2017-10-17 Thread Harris, Steven
Hi Guys

TSM Server 7.1.1 AIX,  TSM VE 7.1.1 linux X64

This shop has issues with us using the vSphere plugin as it requires too many 
permissions. We cannot move to later VE clients with expanded facilities 
because of a dependency on vCenter 5.5

So far, we have been managing by editing the dsm.sys files, but that is getting 
to be unsustainable.  The docs describe a VMCLI interface that may suit when 
scripted.  The idea would be to write Powershell/python scripts to query vmware 
for a list of vms, apply some filters that would allow includes and excludes 
from a database or maybe json files and generate a list of VMs to backup 
invoked by vmcli.

Is anyone using vmcli this way?  Any war stories to tell? Maybe it's working 
well for you.

Please let me know

Thanks

Steve

Steven Harris
TSM Admin/Consultant

Canberra Australia





Re: Automatic password change gets lost between server and client

2017-09-19 Thread Harris, Steven
Hi all

This is a big issue for me.

We run Linux datamovers to back up our VMware VMs.  Passexp is set to zero but, 
for some reason, every so often the password tries to change.  It does this 
in the middle of a VE schedule and all subsequent backups for that schedule 
fail with a security error (sorry, I have no detail at the moment).  We reset 
the node password and recycle the dsmcad and it's solved until next time.

I have not submitted a support ticket as it is intermittent and not reproducible 
on demand, and also because the likely fix would be to update the client and we 
are wedged by backlevel vCenter and certain bugs at the current client version. 
Server is 7.1.1, client is 7.1.2, with VE at 7.1.1.

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, 20 September 2017 7:01 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Automatic password change gets lost between server and 
client

We've had this issue intermittently as well, but not often enough to spend time 
tracking it down. We just have an alert that makes a ticket if an ANR0424W 
message is seen in the TSM server activity log.

On Tue, Sep 19, 2017 at 05:06:54PM +, Lee, Gary wrote:
> Had it with a linux client last week.
> Only twice now in two years, and no apparent pattern.
> This time 6.3.0 client with 7.1.7.1 server.
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Sasa Drnjevic
> Sent: Tuesday, September 19, 2017 12:33 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Automatic password change gets lost between 
> server and client
>
> Yes, I've seen that, and if I can remember correctly, it happens 
> sometimes, but only on Windows servers.
>
> Did not find the answer...Updated password on the Win client, and that 
> was it...
>
> Regards.
>
> --
> Sasa Drnjevic
> www.srce.unizg.hr
>
>
>
>
> On 2017-09-19 17:50, Zoltan Forray wrote:
> > Last night, something strange happened and am wondering if anyone 
> > else has seen this.
> >
> > One of my nodes came up for it's every 90-day password automatic 
> > expire-and-generate-new-password.  While this is normal, the 
> > password change got lost between the client and server.  Here are 
> > console messages from that time:
> >
> > *Normal connection:*
> > 09/18/2017 20:47:38  ANR0406I Session 155733 started for node 
> > ISILON-LS
> > (WinNT) (Tcp/Ip 192.168.21.128(58779)). (SESSION: 155733)
> > 09/18/2017 20:47:38  ANR0403I Session 155733 ended for node 
> > ISILON-LS (WinNT). (SESSION: 155733)
> > 09/18/2017 20:47:42  ANR0406I Session 155734 started for node 
> > ISILON-LS
> > (WinNT) (Tcp/Ip 192.168.21.128(58781)). (SESSION: 155734)
> >
> > *Password has expired:*
> > 09/18/2017 20:47:42  ANR0424W Session 155734 for node ISILON-LS  
> > (WinNT) refused - invalid password submitted. (SESSION: 155734)
> > 09/18/2017 20:47:42  ANR0403I Session 155734 ended for node 
> > ISILON-LS (WinNT). (SESSION: 155734)
> > 09/18/2017 20:57:48  ANR0406I Session 155748 started for node 
> > ISILON-LS
> > (WinNT) (Tcp/Ip 192.168.21.128(58835)). (SESSION: 155748)
> >
> > *Everything looks normal:*
> > 09/18/2017 20:57:50  ANR0403I Session 155748 ended for node 
> > ISILON-LS (WinNT). (SESSION: 155748)
> > 09/18/2017 20:57:53  ANR0406I Session 155750 started for node 
> > ISILON-LS
> > (WinNT) (Tcp/Ip 192.168.21.128(58838)). (SESSION: 155750)
> >
> > *Password is Invalid?  Whatwhat?*
> > 09/18/2017 20:57:53  ANR0424W Session 155750 for node ISILON-LS  
> > (WinNT) refused - invalid password submitted. (SESSION: 155750)
> > 09/18/2017 20:57:53  ANR0403I Session 155750 ended for node 
> > ISILON-LS (WinNT). (SESSION: 155750)
> >
> > Repeat 'invalid password message' every 10-minutes for the rest of 
> > the night?
> >
> > Client is 7.1.6.3 and server is 7.1.7.300
> >
> >
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator 
> > Xymon Monitor Administrator VMware Administrator Virginia 
> > Commonwealth University UCC/Office of Technology Services
> > www.ucc.vcu.edu
> > zfor...@vcu.edu - 804-828-4807
> > Don't be a phishing victim - VCU and other reputable organizations 
> > will never use email to request that you reply with your password, 
> > social security number or confidential personal information. For 
> > more details visit 
> > 

Re: LTO7 tape capacity

2017-08-20 Thread Harris, Steven
Hi Robert

Yes, the capacity is worked out in the Device class entry.  You'd need to show 
us that.

Cheers

Steve

Steven Harris
TSM Admin/Consultant

Canberra Australia


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
rou...@univ.haifa.ac.il
Sent: Sunday, 20 August 2017 5:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] LTO7 tape capacity

Hi to all

I think I found the explanation:

ULTRIUM7C - Specifies that Spectrum Protect writes data that uses the ULTRIUM7 
recording format with compression. Only Ultrium 7 media can be written to with 
the ULTRIUM7C format. The average capacity of an Ultrium 7 cartridge when the 
ULTRIUM7C format is used is 15 TB.
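The arithmetic behind those two figures: 15 TB is the average at the rated 2.5:1 compression over the 6 TB native LTO-7 capacity. Pre-compressed audio/video barely compresses, so a full cartridge holding roughly the native figure is expected:

```python
NATIVE_TB = 6.0       # LTO-7 native capacity
RATED_RATIO = 2.5     # ratio behind the "average 15 TB" marketing figure

def effective_capacity_tb(compression_ratio):
    """Capacity actually achieved at a given compression ratio."""
    return NATIVE_TB * compression_ratio

print(effective_capacity_tb(RATED_RATIO))  # 15.0, the advertised average
print(effective_capacity_tb(1.0))          # 6.0, incompressible data
```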

Best Regards

Robert

Hi to all

More information:   Used in DEVCLASS  format=DRIVE

My drivers are at version 6.2.5.7

Best Regards

Robert

Hi to all

I add a new library scsi LTO7 , in documentation capacity between 6TB – 15TB .

I run a backup of a client with audio / video files and I got this output

Protect: ADSM2>q vol stg=lto7_2

Volume Name   Storage Pool   Device Class   Estimated    Pct     Volume
              Name           Name            Capacity    Util    Status
-----------   ------------   ------------   ---------   -----    ------
ESJ407L7      LTO7_2         LTO7CLASS          5.7 T   100.0    Full

I expected to get more capacity! On the client I didn't have compression set to 
CLIENT or YES.

Did I miss something to define?

Here the ouput of q stg LTO7_2 f=d

Protect: ADSM2>q stg lto7_2 f=d

 Storage Pool Name: LTO7_2
 Storage Pool Type: Primary
 Device Class Name: LTO7CLASS
  Storage Type: DEVCLASS
Cloud Type:
 Cloud URL:
Cloud Identity:
Cloud Location:
Estimated Capacity: 29,709 G
Space Trigger Util:
  Pct Util: 20.1
  Pct Migr: 40.0
   Pct Logical: 100.0
  High Mig Pct: 100
   Low Mig Pct: 99
   Migration Delay: 0
Migration Continue: Yes
   Migration Processes: 1
 Reclamation Processes: 1
 Next Storage Pool:
  Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
   Description: VIDEOS
 Overflow Location:
 Cache Migrated Files?:
Collocate?: No
 Reclamation Threshold: 100
 Offsite Reclamation Limit:
   Maximum Scratch Volumes Allowed: 5
Number of Scratch Volumes Used: 2
 Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
  Amount Migrated (MB): 0.00
  Elapsed Migration Time (seconds): 0
  Reclamation in Progress?: No
Last Update by (administrator): ROBERT
 Last Update Date/Time: 08/16/2017 13:32:22
  Storage Pool Data Format: Native
  Copy Storage Pool(s):
   Active Data Pool(s):
   Continue Copy on Error?: Yes
  CRC Data: No
  Reclamation Type: Threshold
   Overwrite Data when Deleted:
 Deduplicate Data?: No
  Processes For Identifying Duplicates:
Compressed:
   Additional space for protected data:
Total Unused Pending Space:
 Deduplication Savings:
   Compression Savings:
 Total Space Saved:
                        Auto-copy Mode: Client
 Contains Data Deduplicated by Client?: No
  Maximum Simultaneous Writers:
 Protect Processes:
   Protection Storage Pool:
 Protect Local Storage Pool(s):
  Reclamation Volume Limit:
Date of Last Protection to Remote Pool:
Date of Last Protection to Local Pool:
  Deduplicate Requires Backup?:
 Encrypted:
 Cloud Space Utilized (MB):
   Bucket Name:
  Local Estimated Capacity:
Local Pct Util:
 Local Pct Logical:

Best Regards

Robert


Robert Ozen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Zip: 3498838
Computing and Information Systems Division website: 

Re: Question adding new scratch at library

2017-08-14 Thread Harris, Steven
Hi Robert

My suggestion is that overwrite=yes won't make much difference.

You still have to mount the tape to write the label, so an extra read and 
rewind is nothing.  Personally I always let it default to overwrite=no unless I 
know I need overwrite=yes.  Less chance of accidents that way - such as 
specifying the wrong volume range and overwriting something you really wanted.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

Steve at stevenharris dot info

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
rou...@univ.haifa.ac.il
Sent: Tuesday, 15 August 2017 3:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Question adding new scratch at library

Hi to all

A quick question about adding new scratch's  on a TS3200 Library (SCSI). What 
will be the correct or more efficient command to run

label libv LTO7LIB checkin=scratch search=bulk labelsource=barcode volrange=$1,$2 overwrite=yes

or

checkin libv LTO7LIB status=scratch search=bulk checklabel=barcode volrange=$1,$2

Best Regards

Robert



AIX Atape driver on IBM drives in Oracle Library

2017-07-12 Thread Harris, Steven
Hi All

I have a bit of a conundrum.

I have an Oracle SL8500 library with 6x IBM LTO-6 drives installed.  The drives 
are dual connected. These are being used by multiple AIX 7.1 TSM 7.1 servers in 
a library sharing arrangement.

AIX shows these drives as using the generic LTO driver, rather than the IBM 
Atape driver.

For example

rmt55rh Available 24-T1-01 LTO Ultrium Tape Drive (FCP)

Rather than the expected

vtl00rg Available 24-T1-01 IBM 3580 Ultrium Tape Drive (FCP)  (this is an 
LTO-3 emulated drive on our ProtecTIER VTL)

LTO 6 Performance is atrocious.

Running instAtape gives one of the following for each drive

Method error (/etc/methods/cfgAtape -l rmt55rh ):
0514-045 Error building a DDS structure.

We have no licence keys applied for Data Path Failover. They come free with 
an IBM library but cost extra for a non-IBM one ... I'm trying to track down 
whether the licences were purchased back when this equipment was new.

So, does anyone know how to apply the Atape driver to these drives?  I can live 
without DPF for now.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra, Australia








Re: tdp for databases support for sqlserver 2016

2017-06-14 Thread Harris, Steven
Gary

I am just starting to use this.  8.1 is good.  Do not use the earlier version, 
I think it's 7.1.6, even if your SQL Server is compatible, as there is a bug 
that causes the backup to be taken and then fail during post-processing 
cleanup.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, 15 June 2017 2:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tdp for databases support for sqlserver 2016

From the "*Data Protection for Microsoft SQL Server updates V8.1.0*" book:

*Microsoft SQL Server 2016 is now supported*.

I don't see why it wouldn't be compatible with 7.1.7.1 servers - IBM usually 
maintains a minimum of 2 levels of backwards compatibility.  All of my TSM 
servers are 7.1.6.3 and many clients are 8.1.0.2.  We just put up two SQL 
Server 2016 servers using the 8.1 TDP and no issues so far.




On Wed, Jun 14, 2017 at 12:22 PM, Lee, Gary  wrote:

> 1.   Is this out yet, if not when?
>
> 2.   Is it compatible with tsm server 7.1.7.1?
> Thanks for the help.
> Looked on the site, but never found.
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html




Re: Operations Center at risk setting jumps from 30 to 60 days

2017-06-14 Thread Harris, Steven
Hi Stephan

SET VMATRISKINTERVAL

Syntax:

  SET VMATRISKINTERVAL node_name fsid TYPE=DEFAULT|BYPASSED|CUSTOM [Interval=value]

With TYPE=CUSTOM, Interval is in *hours*, between 6 and 8808 (367 days).

Oh heck.  I had plans to set it much higher, as we have to keep monthlies for 7 
years even after decommission.  I will have to set it to BYPASSED instead.
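Since the command takes hours rather than days, the conversion is easy to get wrong. A minimal sketch that builds the command string for review before pasting it into dsmadmc; the node name and filespace ID are placeholders, and the 6-8808 hour range is as documented above:

```python
# Sketch: convert a desired at-risk window in days to the hours that
# SET VMATRISKINTERVAL expects, and build the command for review.
# MYVMNODE and fsid 1 are placeholder values, not from a real server.

def at_risk_command(node, fsid, days):
    hours = days * 24
    if not 6 <= hours <= 8808:   # documented CUSTOM range (367 days max)
        raise ValueError(f"{hours}h is outside 6-8808; use TYPE=BYPASSED instead")
    return f"SET VMATRISKINTERVAL {node} {fsid} TYPE=CUSTOM Interval={hours}"

print(at_risk_command("MYVMNODE", 1, 31))   # a 31-day window -> Interval=744
```

Note that a 7-year window fails the range check, which is exactly why a very long retention has to fall back to TYPE=BYPASSED.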


Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Wednesday, 14 June 2017 5:05 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Operations Center at risk setting jumps from 30 to 60 days

Hi all,

Is there a way to manually set the at-risk interval for a node to, for example, 31 days?
Monthly backups currently have to be set to 30 or 60 days; neither is what we 
want, because 30 will cause false positives and 60 is a bit too late.

The default can be set to 5 weeks, for example. I think the best solution would 
be to have the same resolution as the default setting, but per node (or per 
selection of nodes), in the OC.

Regards,
   Stefan




TDP for SQL and UAC

2017-06-01 Thread Harris, Steven
Hi All

I've got a bit of a show-stopper here that I could use some help with.

Previously, SQL backups have mostly been handled by dump to disk and backup 
with the BA client. Database sizes have grown over time and now we cannot get 
through the whole dump/backup cycle in a reasonable window, so we have been 
pushing to get TDP adopted as the standard backup mechanism for SQL.  There are 
a couple of issues with this; I'll be submitting an RFE for one of them 
shortly, but the big one is to do with security.

The TDP for SQL backup is to be run using the Control-M scheduling tool, as the 
backup has to run at a particular point in the processing cycle (also for 
restores, as we can't afford to have a large restore fail because someone's RDP 
session timed out).  It must use a domain account; nothing else is 
permissible.  When doing this we get a Windows UAC pop-up.  Again, we are not 
permitted to turn off UAC.  Don't worry about how reasonable that is; IT 
Security and auditors don't cope well with arguments about 'reasonable' or 'low 
risk', it's all black and white.

So, has anyone else solved this issue: run a TDP backup from a domain account 
and somehow bypass the UAC prompt without turning it off in a blanket fashion?
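One workaround sometimes used for this class of problem (not verified against Control-M or TDP specifically, so treat it as an assumption to test) is to register a Windows scheduled task that runs with highest privileges under the domain account, and have the scheduler trigger that task instead of launching the TDP command directly; tasks registered this way run elevated without an interactive UAC prompt. A sketch that only builds the registration command for inspection; the task name, account, and script path are hypothetical placeholders:

```python
# Sketch: build a schtasks registration command so an on-demand task runs
# elevated (/RL HIGHEST) under a domain account, avoiding the interactive
# UAC prompt. DOMAIN\SVC_TDPSQL and the .cmd path are placeholders.

def register_task(task, user, script):
    return ('schtasks /Create /TN "{0}" /TR "{2}" /SC ONCE /ST 00:00 '
            '/RU "{1}" /RP * /RL HIGHEST /F').format(task, user, script)

cmd = register_task("TDP_SQL_FULL", r"DOMAIN\SVC_TDPSQL",
                    r"C:\tsm\tdpsql_full.cmd")
print(cmd)  # register once from an elevated shell; then: schtasks /Run /TN TDP_SQL_FULL
```

Whether the auditors accept a task marked "run with highest privileges" as distinct from "UAC disabled" is a policy question, not a technical one.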

Any and all suggestions gratefully received.

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia



Re: Linux on Power Support - Available now

2017-06-01 Thread Harris, Steven
Thanks for that Del.

I realize you can't pre-announce anything here, but a Power-Linux LE vBS server 
to handle TDP for VE would be awesome and would avoid many of the architectural 
limitations of Intel boxes.  In my case that would imply TS3500 driver support 
and LAN-free as well.

Can we assume that's in the pipeline?

Cheers

Steve.

Steven Harris
TSM Admin/Consultant
Canberra Australia.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: Friday, 2 June 2017 3:31 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Linux on Power Support - Available now

IBM Spectrum Protect 8.1 now supports Linux on Power - Little Endian.

Details here:

https://www.ibm.com/developerworks/community/blogs/869bac74-5fc2-4b94-81a2-6153890e029a/entry/Feel_the_power_IBM_Spectrum_Protect_now_supports_the_Linux_on_Power_Systems_little_endian_operating_system?lang=en



Del



- Forwarded by Del Hoobler/Endicott/IBM on 06/01/2017 01:27 PM -

Del Hoobler/Endicott/IBM wrote on 04/24/2017 02:47:21 PM:

> From: Del Hoobler/Endicott/IBM
> To: "ADSM: Dist Stor Manager" 
> Date: 04/24/2017 02:47 PM
> Subject: Re: Linux on Power Support
> 
> Hi Jonathan,
> 
> Spectrum Protect 8.1 server dropped Linux on Power - Big Endian (you 
> can still get it for Spectrum Protect 7.1 server).
> 
> Spectrum Protect 8.1 is adding support for Linux on Power - Little 
> Endian. It is targeted to be released for RHEL within the next few
> months.
> 
> 
> Del
> 
> 

> "ADSM: Dist Stor Manager"  wrote on 04/24/2017
> 02:38:28 PM:
> 
> > From: Jonathan Dresang 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 04/24/2017 02:39 PM
> > Subject: Linux on Power Support
> > Sent by: "ADSM: Dist Stor Manager" 
> > 
> > Hello all,
> > 
> > Does IBM no longer support running TSM/Spectrum Protect on Linux systems
> > running on Power with 8.1?  I went to Passport Advantage and couldn't find
> > it there, but found 7.1.7 downloads for the server code for Linux on 
> > Power?
> > 
> > Thanks,
> > 
> > 
> > Jonathan Dresang
> > Madison Gas & Electric
> > 608-252-7296
> > 



Re: Tape mounts and VE backups.

2017-05-22 Thread Harris, Steven
Thanks for your reply David

Yes that is an option, but numbers of virtual tape drives, while large, are not 
infinite, and change control effort is, as always, three times more than the 
technical effort.  So before just doing that I wanted to know if it was a 
restriction or something tuneable. 

Cheers

Steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Tuesday, 23 May 2017 1:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Tape mounts and VE backups.

If it's a VTL, why not just increase the number of "tape" drives?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Harris, Steven
Sent: Sunday, May 21, 2017 9:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tape mounts and VE backups.

Hi all

At my current gig, we are running a TSM VE environment.  The server is AIX; 
storage agents and vBS are on physical Linux x86_64.  Disk is on V840 flash; 
the back end is a ProtecTIER VTL.  TSM server 7.1.1, storage agent 7.1.1, TSM 
for VE 7.1.1; we can't go higher because of an old vCenter version.

Backups run using SAN transport to the vBS servers, through the storage agents, 
direct to the VTL.  Now we are ramping up, adding more VMware clusters.
Everywhere else in TSM, when you run up against the devclass device limits, TSM 
queues nicely and waits for a tape drive.  However, for the VE backups, when 
we have used all the drives (48!) we get a "server media mount not possible" 
error and a failed backup for the VM.

Have others found this?  Is there a work around?

Regards

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia





Tape mounts and VE backups.

2017-05-21 Thread Harris, Steven
Hi all

At my current gig, we are running a TSM VE environment.  The server is AIX; 
storage agents and vBS are on physical Linux x86_64.  Disk is on V840 flash; 
the back end is a ProtecTIER VTL.  TSM server 7.1.1, storage agent 7.1.1, TSM 
for VE 7.1.1; we can't go higher because of an old vCenter version.

Backups run using SAN transport to the vBS servers, through the storage agents, 
direct to the VTL.  Now we are ramping up, adding more VMware clusters.
Everywhere else in TSM, when you run up against the devclass device limits, TSM 
queues nicely and waits for a tape drive.  However, for the VE backups, when 
we have used all the drives (48!) we get a "server media mount not possible" 
error and a failed backup for the VM.

Have others found this?  Is there a work around?

Regards

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia





Power failures

2017-04-30 Thread Harris, Steven
Hi Ricky

To further the conversation, I have been around a long time and the number of 
total outages of supposedly redundant data centers I have seen is astonishing.

1. Small bank.  Tested the generator every month; no one checked the fuel 
level.  When we had a power failure the generator ran for only a few minutes.
2. Same bank, new building.  Had an unexpected power failure just as the switch 
that changed over the power from mains to generator was disassembled for 
maintenance.
3. State government computer centre.  An electrician dropped a spanner, which 
shorted out the inside path of the UPS.
4. Enterprise-level data centre run by a big IT outsourcer.  Big voltage 
fluctuations on the external grid, so the decision was taken to go to 
generator.  The generator was running but delivered no power because the switch 
was in manual mode, not auto.  Ran out of air conditioning, which caused some 
shutdowns, then ran out of UPS just before the problem was addressed.
5. Different enterprise-level data centre run by a big IT outsourcer.  Total 
power outage after a fierce thunderstorm.

Now that's five since 1990, and that is just what has happened to me.  The 
moral is: always expect an unexpected shutdown.

Cheers

Steve.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Saturday, 29 April 2017 1:31 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DR Rebuild please chime in.

Strange; the first problem will only get worse without TSM replication, because 
you lose failover restores and future failover backup functions, and the second 
problem should be a reason for a generator rather than a different DR solution, 
I would think. :-)

Anyway, good luck with the project. :-)

On Fri, 28 Apr 2017 at 15:48, Plair, Ricky <rpl...@healthplan.com> wrote:

> There were a number of problems this year that caused management to 
> rethink the TSM solution.
>
> One,  we have our TSM server on a Windows 2008 R2 Enterprise system 
> running TSM server version 6.3.4.0. and it uses Microsoft clustering.
> Somehow our clustering died and we lost the secondary TSM server and 
> it took almost 2 days to get back the primary TSM server. Then about a 
> month later we had a power outage (complete power outage) and lost the 
> entire data center. This corrupted the data on the TSM server and 
> caused a lot of different problems and basically had to be rebuilt 
> from scratch. Then when we got to the DR exercise approximately 2 
> months later a couple of the DB2 database were corrupt and could not 
> be restored from TSM. So, that meant that TSM was a problem and we 
> needed to change our backup solution around.
>
> And below is what they want to do. Fantastic!
>
> Around here if there is a problem, blame TSM.
>
>
> Ricky M. Plair
> Storage Engineer
> HealthPlan Services
> Office: 813 289 1000 Ext 2273
> Mobile: 813 357 9673
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Stefan Folkerts
> Sent: Friday, April 28, 2017 7:52 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] DR Rebuild please chime in.
>
> That should work. I am wondering why you are stopping with TSM 
> replication, because as mentioned by Matthew, moving away from the 
> application integration has its downsides.
> So can you share the reasons with us?
> On the plus side, you get something close to continuous replication, so 
> things like DB log backups are offsite the moment they are done 
> locally, something that TSM replication does not currently support.
>
> For me, the biggest thing you lose is that you lose the recovery of 
> damaged data on the local server from the replica, an automatic 
> mechanism with TSM replication.
> That would mean you have to have a copypool locally to protect the 
> data in the same way.
> This would be an important point for me, if you are running the 
> directory containerpool the impact on housekeeping might be limited 
> but I find the impact of a large copypool with deduplicating file 
> device type storagepools to be a disaster.
>
>
>
>
> On Fri, Apr 28, 2017 at 1:02 AM, Harris, Steven < 
> steven.har...@btfinancialgroup.com> wrote:
>
> > Ricky
> >
> > I have something similar at my current gig.
> >
> > Database and landing storage pools are on V840 flash and migrate to 
> > Protectier VTL. The V840 data is remote copied and the VTL uses its 
> > own replication mechanism. Recovery is to bring up the instance on 
> > hot AIX LPARs using the replicated database at the  remote site.  
> > This is used for multiple TSM Servers, in both directions.
> >
> > DR has been tested twice by others, and appears to work.  I'll find 
> > out for myself in a week or

Re: DR Rebuild please chime in.

2017-04-27 Thread Harris, Steven
Ricky

I have something similar at my current gig.

Database and landing storage pools are on V840 flash and migrate to Protectier 
VTL. The V840 data is remote copied and the VTL uses its own replication 
mechanism. Recovery is to bring up the instance on hot AIX LPARs using the 
replicated database at the  remote site.  This is used for multiple TSM 
Servers, in both directions.

DR has been tested twice by others, and appears to work.  I'll find out for 
myself in a week or so, change control willing.

Cheers

Steve

Steven Harris
TSM Admin/Consultant 
Canberra Australia.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Matthew McGeary
Sent: Friday, 28 April 2017 6:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DR Rebuild please chime in.

If I'm understanding correctly, your DR site will have a storage-level copy of 
all your TSM storage pools, database, logs, etc.

In that case, yes, what is being proposed should work.  However, you're trading 
a replication that can be monitored and validated to a storage-level model that 
isn't application aware.

AND, if you're not doing anything on the DB2 side during replication (ie: 
quiescing) then the server will do a crash-recovery startup at the DR site.

Crash-recovery has always worked for me in DB2, but it's not as fool-proof as 
DB2 HA/DR replication, recovering from a DB2 backup or using the TSM 
replication that you're ripping out.  There may come a time when you do a DR 
test or actual DR and your TSM database won't recover properly from that 
crash-level snapshot.  Then what do you do?

Why in god's name is this change happening?
__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Thursday, April 27, 2017 1:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DR Rebuild please chime in.

All,

Our last DR was a disaster.

Right now, we do TSM server to TSM server replication and it works fairly 
well; but they have decided we need to fix something that is not broken.

So, the idea is to upgrade to SP 8.1 and install on a zLinux machine.  Our 
storage is on an IBM V7000, and where we were performing TSM replication, we 
are trashing that and going to V7000-to-V7000 replication.

Now,  the big twist in this is,  we will not have a TSM server at our DR 
anymore. The entire primary TSM server will be backed up to the V7000 and 
replicated to our V7000 at the DR site.

There is no TSM server at the DR site, so IBM will build us one when we have 
our DR exercise, and then, according to our trusty DB2 guys, we should just be 
able to break the connection to the primary TSM server, do a little DB2 magic, 
and voila, the TSM server will be up.

This is my question: if the TSM server is built at DR and the primary TSM 
server's database is on the DR V7000, then that database will still have to be 
restored to the TSM server.  You're not going to be able to just bring up the 
DB2 database, point the TSM server at it, and have it work, right?

Please let me know your thoughts.  I know I have left a lot of details out, but 
I'm just trying to get some views.  If you need more information I will be 
happy to provide it.

I appreciate your time.




Ricky M. Plair
Storage Engineer



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
CONFIDENTIALITY NOTICE: This email message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and/or Protected Health Information (PHI) subject to 
protection under the law, including the Health Insurance Portability and 
Accountability Act of 1996, as amended (HIPAA). If you are not the intended 
recipient or the person responsible for delivering the email to the intended 
recipient, be advised that you have received this email in error and that any 
use, disclosure, distribution, forwarding, printing, or copying of this email 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately and destroy all copies of the original message.


Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?

2017-03-27 Thread Harris, Steven
Tom 

It's true that DBAs are the 600-pound gorillas in the room and often get what 
they want out of fear and management ignorance.  A lot of it is a simple need 
for control over their own destiny as they see it, and that is understandable.

Once you have tuned the DB backups as much as you can there are further options.

1.  Use appropriate FlashCopy products to provide point-in-time snapshots.  
This could be VSS snapshots or hardware snapshots, depending on your particular 
mix of hardware and software.
2.  There used to be a way of offloading the backup job to another box: take a 
hardware snapshot, mount it elsewhere, and back that up.  I assume that still 
works with a virtual source and physical target.
3.  Once you have something on physical hardware, you can use a storage agent 
to offload that data direct to tape, bypassing your TSM server.  Note also that 
you can use more exotic combinations of client to storage agent; for example, 
multiple backups can use the same storage agent if they are situated on the 
same network.  E.g. on AIX, multiple LPARs on the same CEC can all talk across 
the internal network to an LPAR with a storage agent.
4.  If that is not possible because of tape restrictions, then you have to go 
to storage agent and shared disk, and that requires GPFS.

Lay these options out, come up with some indicative costings.

Then go back with something more reasonable.  About 5 years ago I had a big DB2 
database that took most of a weekend to back up.  As a result we used a full 
monthly, differential weekly, incremental daily + logs strategy.  This 
particular database was then exported/imported and restored as a user 
acceptance environment.  It all worked without a hitch once set up.  You could 
use a similar full + differential + logs strategy for these MSSQL databases.  
Do not make the mistake of thinking "we will do all the fulls on the weekend"; 
you have no more bandwidth then than you do during the week, and will have more 
disruption due to system changes.
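The rotation above, with each database's monthly full staggered to its own day so the fulls do not all pile up on one weekend, can be sketched as date-driven logic. The database names and the 28-day stagger window are illustrative only, not TDP syntax:

```python
# Sketch of a full-monthly / differential-weekly / incremental-daily
# rotation. Each database's monthly full is staggered to its own day of
# the month (via a stable hash) so fulls do not all land on one weekend;
# log backups would run on their own schedule regardless.
import datetime
import zlib

def backup_type(db, day):
    full_day = zlib.crc32(db.encode()) % 28 + 1   # stable per-DB day, 1..28
    if day.day == full_day:
        return "full"
    if day.weekday() == 6:                        # Sunday
        return "differential"
    return "incremental"

for db in ("SALESDB", "HRDB"):                    # placeholder database names
    print(db, backup_type(db, datetime.date(2017, 3, 12)))  # a Sunday
```

The stable hash matters: a salted hash (like Python's built-in `hash` on strings) would move every database's full day on each run.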

Upper Management will baulk at the cost and the DBAs will be asked to change to 
the cheaper strategy.  

Take a sample application, set up your new strategy and test it forward and 
backward until the DBAs understand how it works and are comfortable that it 
does indeed work.  Then implement more widely.

I'm having the same sort of discussions with my own DBAs in my current role.  
They still rely on disk dumps(!) and that is even less sustainable than your 
case.

Good luck.

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Matthew McGeary
Sent: Tuesday, 28 March 2017 8:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client 
Settings ?

I haven't had to change the buffers or any other settings and it's made a big 
difference but I didn't really experiment with different numbers.  I landed on 
10 based on similar experience using 10 sessions for DB2 API backup/recovery.  
Our largest SQL server backup is probably in the 500-600 GB range, so we aren't 
operating on the same scale as you are.

Del is right that this is only for 'legacy' backups, not for VSS-offloaded.

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Alverson
Sent: Monday, March 27, 2017 2:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client 
Settings ?

I will try that tonight.  Do you change any of the other "performance"
settings from the defaults?

Thanks!

Tom

On Mon, Mar 27, 2017 at 3:24 PM, Matthew McGeary < 
matthew.mcge...@potashcorp.com> wrote:

> If you're using TDP for SQL you can specify how many stripes to use in 
> the tdpo.cfg file.
>
> For our large SQL backups, I use 10 stripes.
>
> __
> Matthew McGeary
> Senior Technical Specialist – Infrastructure Management Services 
> PotashCorp
> T: (306) 933-8921
> www.potashcorp.com
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Tom Alverson
> Sent: Monday, March 27, 2017 1:11 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A 
> Client Settings ?
>
> Our biggest performance issue is with SQL backups of large databases. 
> Our DBA's all want full backups every night (and log backups every
> hour) and for the databases that are around 1TB the backup will start 
> at Midnight and finish 5 to 13 hours later (varies day to day).  When 
> these backups start extending into the daytime hours they complain but 
> I don't know how we could improve the speed.  Our Storage servers all 
> have 10GB interfaces but they are backing up hundreds of clients 

Re: How to backup Billions of files ??

2017-03-20 Thread Harris, Steven
Bo

The problem with small files is that the TSM database entry may well be larger 
than the file you are storing. If your files are less than about 3000 bytes 
that will be the case.  

What is happening is that the file system is being used as a database.  A 
complex file path becomes the key and the file content is the data.
I realize this has probably been dumped on you without consultation, but a 
database is probably a better fit.  It could be something as simple as a 
key/value store (maybe one per day) or as complex as a document DB like 
Couchbase.
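As a concrete (and deliberately minimal) illustration of the key/value idea, here is a sketch using SQLite from the Python standard library; the table layout and path/content names are invented for the example:

```python
# Sketch: store many tiny "files" as rows in a key/value table instead of
# as filesystem objects, so the backup target is one database file per day
# rather than billions of inodes. Names here are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")        # in practice, one file per day on disk
db.execute("CREATE TABLE blobs (path TEXT PRIMARY KEY, content BLOB)")

def put(path, content):
    db.execute("INSERT OR REPLACE INTO blobs VALUES (?, ?)", (path, content))

def get(path):
    row = db.execute("SELECT content FROM blobs WHERE path = ?",
                     (path,)).fetchone()
    return row[0] if row else None

put("2017/03/20/txn-000001.log", b'{"amount": 12.50}')
print(get("2017/03/20/txn-000001.log"))   # b'{"amount": 12.50}'
```

The backup problem then shrinks from billions of TSM database entries to one object per day, at the cost of application changes, which, as noted, tend to be met with hostility.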

A previous customer of mine did something similar.  It was logs of ecommerce 
transactions that averaged about 1500 bytes each and had to be kept for 7 
years.  A million transactions a day and growing.  They killed a TSM 5.5 
database in 2 years, and when I left were well on the way to killing a TSM 6.3 
database as well.  Any requests to alter the application were met with active 
hostility.

Good Luck

Steve
Steven Harris
TSM Admin/Consultant 
Canberra Australia

   



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick 
Adamson
Sent: Tuesday, 21 March 2017 12:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] How to backup Billions of files ??

Bo
I suggest you provide a few more details about the data and your backup 
environment.
For example: what is this data, how frequently will it be accessed on average, 
what are its total space requirements, and what is the source stored on?
What type of backup storage: tape, disk, cloud (specifics)?  What 
bandwidth/network speed is there between the data and the target backup server?

-Rick Adamson
 Jacksonville,Fl.
 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bo 
Nielsen
Sent: Monday, March 20, 2017 7:20 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] How to backup Billions of files ??

Hi TSM's

I have earlier asked for help with archiving 80 billion very small files, but 
now they want the files backed up.  They expect an average change rate of 3 
percent/month.

Does anyone have experience with such an exercise, and would you share it with me?

Regards

Bo


Bo Nielsen


IT Service



Technical University of Denmark

IT Service

Frederiksborgvej 399

Building 109

DK - 4000 Roskilde

Denmark

Mobil +45 2337 0271

boa...@dtu.dk



Re: Shared file systems other than GPFS

2017-02-23 Thread Harris, Steven
Thanks Frank 

I appreciate the input.

Steve 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Frank 
Kraemer
Sent: Thursday, 23 February 2017 10:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Shared file systems other than GPFS

Steve,

Spectrum Scale (aka GPFS) is a scalable parallel filesystem for
high-performance I/O applications:

the most important thing is that Spectrum Scale (GPFS) is a parallel
filesystem, and of course it's a shared filesystem. Each GPFS server node
offers a service called an "NSD" (NSD = network shared disk). A filesystem is
striped across all available NSDs: the more NSDs, the more I/O performance you
get. Each NSD is a 1:1 mapping of a local raw disk (on AIX e.g. /dev/hdisk67,
on Linux e.g. /dev/sdb). GPFS can use all sorts of disks provided by the
underlying OS (SATA, SAS, FC, SVC, SSD, NVMe, RAM disks, etc.).

Network Shared Disk (NSD) is the key factor:

A filesystem is the sum of all the NSDs that you allocate to it. Yes, you can
span a single filesystem over AIX and Linux at the same time. Adding or
removing GPFS nodes while the system is up and running: no problem!
The new GUI helps newbies learn fast. Setting up GPFS from scratch is a
matter of minutes... it takes longer for me to write this email than to set
up a GPFS cluster :-)

It's all about performance:

We have customers that run workloads north of 400 GB/sec from a single
application to the filesystem. A single ESS (Elastic Storage Server, model
GL6) will give you 27,000 MB/sec of file I/O per box, and adding boxes adds a
multiple of this number to your performance. There are customers who run 40
or more ESS GL6 boxes as one single filesystem. TSM servers can take full
benefit of this performance: the TSM DB, log, and storage pools can all be
placed on GPFS filesystems.

Steps to setup GPFS on a number of nodes:

1) Install the OS on the nodes, images, ... (VMware is supported - yes)
2) Install the GPFS LPP on AIX, RPMs on Linux, or MSI on Windows (yes - we
have full Windows support)
3) Define/prepare OS raw disks with OS-level commands (e.g. on AIX mkvg,
mklv, ...)
4) Define/set up a GPFS cluster using the existing IP connection; define and
distribute ssh keys to the nodes; NTP setup
5) Create NSDs from the existing raw disks (it's just a 1:1 mapping)
6) Define GPFS filesystem(s) on the related NSDs
7) Mount the GPFS filesystems on all nodes in the cluster
8) -done-

(For the disk, NSD, and filesystem steps there is just ONE text file that you
can prepare/edit and reuse for all three. It takes about 40 seconds to do all
the steps, even for a big system.)
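For illustration, a minimal NSD stanza file of the kind that one reusable text file refers to; recent GPFS releases accept this format for both mmcrnsd and mmcrfs (all device names, NSD names, and server names below are invented placeholders, and exact attributes vary by release):

```
%nsd: device=/dev/sdb
  nsd=nsd001
  servers=gpfs-node1,gpfs-node2
  usage=dataAndMetadata
  failureGroup=1

%nsd: device=/dev/sdc
  nsd=nsd002
  servers=gpfs-node2,gpfs-node1
  usage=dataAndMetadata
  failureGroup=2
```

The same file is then passed to the cluster tools, e.g. `mmcrnsd -F stanza.txt` followed by `mmcrfs fsname -F stanza.txt`, which is what makes the "one text file, three steps" workflow possible.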

How to start ?

1) Download the IBM GPFS VMware image from the GPFS web site. It's a
full-function image with no limits, for testing & learning.
2) On a laptop, create 3 VMware machines and install the images.
3) Add some VMware-defined disks for testing.
4) Start installation and setup. Perfect for playing with the system.
5) After 2 days you are the GPFS expert :-)

Are there other filesystems?

Yes, there are a large number of filesystems on the market. Every week there
is a new kid on the block, as long as VC money is flooding Silicon Valley
startups :-) If you think GPFS is complex, then please try Lustre and you
will find GPFS a piece of cake. Lustre is an overlay filesystem: you need a
local filesystem on each node, and then Lustre runs "on top" of the multiple
local filesystems to distribute the data. It runs only on Linux, and you need
much more skill and know-how, from my point of view... but it's a good system
if you can deal with the complexity. (Linux only, of course; no AIX support.)

Gluster from Red Hat is easy to set up, but it's very slow and does not scale
well. It's not used in any large installation where performance is relevant.
Red Hat has too many filesystems on the truck, so there is some confusion
about where to go: they have GFS, GFS2, Gluster and Ceph... just too much
choice for me. (Linux only, of course; no AIX support.)

BeeGFS from Kaiserslautern in the Pfalz is another alternative filesystem.
It's a spin-off of the German research community.

Hedvig - a new kid on the block... I have never seen a customer in real life,
but there is strong marketing on the web.

Ceph, from ex-Inktank, is very good for OpenStack block device support, but
the filesystem "plugin" is a poor implementation in my experience.

NFS - not a parallel filesystem at all; it's 1990s technology, with a lack of
security and limited I/O efficiency. It works fine but is painfully slow. Of
course NetApp and EMC Isilon will tell you a different story here.


GPFS is proven for TSM workloads, it's fully supported, and speed is just a
matter of adding more hardware to the floor. It has full support for DB2 and
yes, also Oracle (!). You can perfectly well use GPFS as a target for the
Oracle RMAN tool. SAP HANA runs on GPFS, and so does SAS.

-frank-

P.S. You can get GPFS not only from IBM - Lenovo, NEC, DDN,  and a lot of 
others can help too; this option will help you to 

Shared file systems other than GPFS

2017-02-22 Thread Harris, Steven
Further to my last post.

TSM server 7.1 on AIX, 7.1 TSM for VE on Linux X86_64 with storage agent, 
currently backing up to Protectier VTL.

My customer needs a significant capacity expansion in reasonably short order.
Disk is cheap, and ProtecTIER is out of the picture for non-technical
reasons. The network needs a significant upgrade if traffic is to go from
VMDK -> vBS -> TSM server, so that is to be avoided; traffic from the vBS
should flow over the SAN.

The only supported file sharing is via GPFS, and I don't think I can justify
the complexity and expense of that. However, there are a lot of shared
filesystems out there. Is anyone running Gluster, Lustre, or something
similar and doing storage agent backups on it? It will obviously need to run
on AIX and Linux, and should ideally need minimal setup.

Ideas welcome.

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia







Protectier Tuning/ Linux tape Tuning

2017-02-22 Thread Harris, Steven
Hi all

My customer is running a TSM for VE backup solution involving Linux x86_64 as
the vBS, with a storage agent installed, writing directly to a ProtecTIER
VTL. Throughput to the VTL needs improvement, both at the whole-of-VTL level
and at the individual tape session level, to handle the workload.

I've trawled through searches but can find nothing on either ProtecTIER
tuning or Linux tape tuning (e.g. changing the SCSI block size, or the number
and size of buffers).

If anyone is aware of such resources, could you please post a link to them?

Thanks

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia



Re: DOMAIN.vmfull on multiple lines in dsm.opt

2017-02-21 Thread Harris, Steven
Del

I can't speak for anyone else, but I can't go to 8.1 because of the vCenter 6
dependency.

Regards

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del 
Hoobler
Sent: Wednesday, 22 February 2017 6:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DOMAIN.vmfull on multiple lines in dsm.opt

Hi Hans Chr.,

I am not sure I fully understand. Technically, you no longer need to use 
the options file if you use the new V8.1 features (tagging) with vSphere 
6. 

Why can't you use these new features? What is missing that you need?



Del




"ADSM: Dist Stor Manager"  wrote on 02/21/2017 
08:18:26 AM:

> From: Hans Christian Riksheim 
> To: ADSM-L@VM.MARIST.EDU
> Date: 02/21/2017 08:18 AM
> Subject: Re: DOMAIN.vmfull on multiple lines in dsm.opt
> Sent by: "ADSM: Dist Stor Manager" 
> 
> Hi Del,
> 
> yes I have taken a look and it is a nice feature. We would prefer that the
> exclusion and inclusion of VMs be policy- and rule-based, though.
> 
> Regards,
> 
> Hans Chr.
> 
> On Tue, Feb 21, 2017 at 12:58 PM, Del Hoobler  
wrote:
> 
> > Hi Hans Chr.
> >
> > Have you looked at the tagging support?
> >
> >
> > https://www.ibm.com/support/knowledgecenter/SSERB6_8.1.0/
> > ve.user/t_ve_dpext_cfg_bup_policy.html
> >
> > It was specifically added to enable you to avoid these types of issues.
> >
> >
> > Del
> >
> > 
> >
> >
> > "ADSM: Dist Stor Manager"  wrote on 02/21/2017
> > 03:57:02 AM:
> >
> > > From: Hans Christian Riksheim 
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 02/21/2017 03:57 AM
> > > Subject: DOMAIN.vmfull on multiple lines in dsm.opt
> > > Sent by: "ADSM: Dist Stor Manager" 
> > >
> > > Really two questions.
> > >
> > > We have a lot of exceptions where VMs are not to be backed up, and the
> > > DOMAIN.vmfull statement is becoming impractically long:
> > >
> > > domain.vmfull vmhostcluster=X,Y;-vm=abc*,*def,ghi* (times 100)
> > >
> > > Any possibility to have that statement span separate lines, or some
> > > other measure to increase readability?
> > >
> > > Second question is about case sensitivity. In previous versions
> > > DOMAIN.VMFULL was not case sensitive but now it is. Unfortunately we
> > have
> > > no control of how the VMware guys name their VMs, so how do we
> > > exclude a VM
> > > like
> > >
> > > VMNAME
> > >
> > > and cover all variations Vmname, vmNAME and so on?
> > >
> > > Regards,
> > >
> > > Hans Chr.
> > >
> >
> 



Re: 7.1.7.1

2017-02-14 Thread Harris, Steven
My spin is: don't use any new server feature for two years unless you really
cannot do without it.

New features imply new bugs. Most TSM shops do not have a true test
environment, and even if they do, a lot of these problems only appear when
you have a lot of data.

Example: the dedup corruption bugs that only surfaced after a while. 

My 2cents

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Wednesday, 15 February 2017 6:23 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] 7.1.7.1

I agree.  I did not mean to imply I immediately rush out and put on the patch 
release if it doesn't address something I am specifically looking for.  But 
even that doesn't always work.

For instance, replication is still very problematic. When we first tried 
setting up replication under 6.3.5.100, we had constant problems/failures.
I had one server so screwed up after the initial replication that it refused
to run any replication.

Then 6.3.6.000 came out with tons of fixes addressing replication issues, 
including a few I had experienced and needed the fix for.  Unfortunately, after 
upgrading one server, I was one of the few who started experiencing 
never-ending expiration runs. But replication was better on this server.
Just couldn't run expire inventory.

Now with 7.1.6.3, replication is a lot better.  I can actually run most 
replications on a daily basis.  But I still have replication issues/failures 
on an almost daily basis that require hand-fixing.

So, yes I am looking at 7.1.7.x in hopes it fixes replication issues I am 
seeing, but unless there is a specific fix - I won't rush to install it.

On Tue, Feb 14, 2017 at 10:04 AM, Stefan Folkerts  wrote:

> > Now that the first major patch/update is out, time to play with 
> > 7.1.7 and wait for first patches for 8.1.
> >
>
> I wonder how many people look at the releases like that.
> I always try to stick with a.b.c.0 releases unless the interim release
> (i.e. a.b.c.100) fixes a *specific* issue we are seeing.
> The patch versions are interim fixes and IBM states (in the readme 
> even) that they put these releases thru very limited testing.
> So we keep running 7.1.7.0 unless 7.1.7.100 fixes a specific issue we 
> see at a specific site or we wait for 7.1.8 to be released that has 
> been more extensively tested and contains all the 7.1.7.100 fixes (and more).
> We would never run the a.b.0.0 release for production and always wait 
> for the a.b.1.0 release, so 8.1.1.0 would be the first 8 version we 
> would be running.
>
> This only applies to the server, with the client we tend to be a 
> little more aggressive since features here have a bigger impact on 
> what the customer can or cannot use client wise.
>
>
>
>
>
> >
> > On Mon, Feb 13, 2017 at 7:08 PM, Tom Alverson 
> > 
> > wrote:
> >
> > > We had a phone meeting with some IBM people last week and they highly
> > > recommended we get the 7.1.7.1 update, as we are in the process of
> > > upgrading some older 6.x storage servers that are going EOL (next
> > > month). They did not explain exactly what the fixes were.
> > >
> > > On Mon, Feb 13, 2017 at 9:28 AM, David Ehresman < 
> > > david.ehres...@louisville.edu> wrote:
> > >
> > > > TSM server 7.1.7.1 is available.  Does anyone know what changes/fixes
> > > > it has compared to 7.1.7.0?
> > > >
> > > > David
> > > >
> > >
> >
> >
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator 
> > Xymon Monitor Administrator VMware Administrator Virginia 
> > Commonwealth University UCC/Office of Technology Services 
> > www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be a phishing 
> > victim - VCU and other reputable organizations will never use email 
> > to request that you reply with your password, social security number 
> > or confidential personal information. For more details visit 
> > http://infosecurity.vcu.edu/phishing.html
> >
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be 
a phishing victim - VCU and other reputable organizations will never use email 
to request that you reply with your password, social security number or 
confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html



RFE for powershell interface to dsmc and dsmadmc

2017-02-07 Thread Harris, Steven
Hi Guys and Gals

PowerShell is the go-to scripting language for Windows these days, but if you
have ever tried to script dsmadmc with it, you will find it fraught with
trouble. Just passing the right parameters in is difficult, and getting
output that PowerShell can use also requires jumping through hoops.
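One common workaround, sketched here in Python rather than PowerShell, is to run dsmadmc with `-dataonly=yes -tab` (both are real dsmadmc options) and parse the tab-separated rows. The sample output string below is invented; in a real script it would come from the process's captured stdout:

```python
import csv
import io

# Stand-in for the captured stdout of something like:
#   dsmadmc -se=SERVER -id=ID -pa=PW -dataonly=yes -tab "query node"
# (server name, credentials, and the three columns are invented examples)
raw = (
    "NODE1\tAIX\tSTANDARD\n"
    "NODE2\tWinNT\tSTANDARD\n"
)

# -tab makes each line a tab-separated record, so csv handles the parsing
rows = list(csv.reader(io.StringIO(raw), delimiter="\t"))

for name, platform, domain in rows:
    print(f"{name}: {platform} in {domain}")
```

The same structured records are easy to turn into objects on the PowerShell side; the pain point the RFE addresses is that dsmadmc only ever hands you text, whichever shell does the parsing.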

Everything else these days provides a PowerShell interface, so TSM/ISP should
too.

I have raised an RFE: 
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=100660

Vote if you are so inclined

Steve

Steven Harris

TSM Admin/Consultant
Canberra Australia



Re: DB2 back-up issues on Windows

2017-02-02 Thread Harris, Steven
Karel

In my previous role I had a couple of TPC (now known as "Spectrum Control")
database servers running DB2 under Windows with exactly this problem.

They ran for years without issue, then started being unable to find dsm.opt
ten or so days after a reboot.

I never did get to the bottom of why, but rest assured you are not going mad.

Regards

Steve

Steven Harris
TSM Admin/Consultant 
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Karel 
Bos
Sent: Friday, 3 February 2017 12:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DB2 back-up issues on Windows

Yeah, but don't you think that would then always be the case, rather than
appearing only after some time? Restart DB2 and the backup runs again (with
no change to any config item), and after X amount of time the errors start
again.

By the way, it's a Windows system and DB2 runs under the local admin account,
so default permissions are always sufficient for such an account.
Furthermore, just to be sure, the directory holding the opt and log files has
been given full control for everybody, and the automatically created .lock
files have been confirmed to have the expected and sufficient rights.

-K

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick Adamson
Sent: Thursday, 2 February 2017 14:33
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DB2 back-up issues on Windows

I have found that almost all of the "cannot find" or "cannot read" errors I
run into are ultimately permission-related, especially with DB2.


-Rick


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Karel 
Bos
Sent: Thursday, February 02, 2017 6:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DB2 back-up issues on Windows

Hi,



Every day I have to restart DB2 (sometimes multiple times) because backups
stop working. The DB2 database does log file and normal backups to TSM, and
the backups stop working after the following couple of errors:



31.01.2017 10:48:04 instrStart: Unable to get write file lock or report file 
handler.

31.01.2017 10:48:16 ANS1035S Options file 'D:\TSM\client\dsm.opt' could not be 
found, or it cannot be read.

31.01.2017 10:48:31 instrStop: Unable to get write file lock or report file 
handler.



My idea is that this has nothing to do with the dsm.opt itself, but with the
dsmopt.lock file: the handler stops working. This leads me to believe it has
something to do with:



http://www-01.ibm.com/support/docview.wss?uid=swg21646969



but of course it's all directed at Unix; although IBM supports their products
on Windows, their support and documentation are mostly based on AIX
installations.



Also note that this symptom should be fixed in TSM V7.1.4, but this
environment is running BA client V7.1.6.2 on Windows 2008R2 64-bit. A support
ticket has been opened with IBM, but so far they have not been very helpful.



I wonder if anyone else is running into the same issues, and whether it's
only on Windows or also still on AIX (I have multiple AIX servers running DB2
that are about to be upgraded to this TSM client version, without the option
of restarting them every time the backup stops working).



Kind regards,



Karel



Re: How to prevent leading spaces in SQL output

2017-01-26 Thread Harris, Steven
Hi Eric

There are also the STRIP and TRIM SQL functions - the same effect, but with
slightly different syntax.

SELECT strip(cast(float(used_space_mb)*100/total_space_mb AS DECIMAL(4,2))) 
FROM log

However, when I tested this on a number less than 1, the implicit cast of the
number to a string lost the leading zero. That may or may not suit your use
case.
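A minimal client-side cleanup for that corner case, sketched in Python on the assumption that the script receives the raw padded line from dsmadmc:

```python
def clean_pct(raw: str) -> str:
    """Strip dsmadmc's left padding and restore the leading zero that
    DB2's implicit cast can drop on values below 1 (e.g. '.49')."""
    value = raw.strip()
    if value.startswith("."):
        value = "0" + value
    elif value.startswith("-."):
        value = "-0" + value[1:]
    return value

print(clean_pct("  0.49"))  # 0.49
print(clean_pct("  .49"))   # 0.49
```

Doing the cleanup on the client side keeps the SELECT simple and avoids relying on how a particular server release formats the cast.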

Regards

Steve

Steven Harris
TSM Consultant
Canberra, Australia


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
Eric van (ITOPT3) - KLM
Sent: Wednesday, 25 January 2017 11:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] How to prevent leading spaces in SQL output

Hi Martin!
Again, thank you very much for your help!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Martin 
Janosik
Sent: woensdag 25 januari 2017 11:04
To: ADSM-L@VM.MARIST.EDU
Subject: Re: How to prevent leading spaces in SQL output

Hi Eric,

'-tab' does the magic:
[~]$ dsmadmc -tab -se=TSM -id=admin -pa=admin -dataonly=yes "SELECT CAST 
(float(used_space_mb)*100/total_space_mb AS DECIMAL(4,2)) FROM log"
0.98

[~]$ dsmadmc -se=TSM -id=admin -pa=admin -dataonly=yes "SELECT CAST(float 
(used_space_mb)*100/total_space_mb AS DECIMAL(4,2)) FROM log"
  0.98

Bye

Martin Janosik

"ADSM: Dist Stor Manager"  wrote on 01/25/2017
10:01:51 AM:

> From: "Loon, Eric van (ITOPT3) - KLM" 
> To: ADSM-L@VM.MARIST.EDU
> Date: 01/25/2017 10:06 AM
> Subject: [ADSM-L] How to prevent leading spaces in SQL output Sent by: 
> "ADSM: Dist Stor Manager" 
>
> Hi TSM-ers!
> I'm executing the following command in a script:
>
> /usr/bin/dsmadmc -se=$INSTANCE -id=$ID -password=$PASSWORD - 
> dataonly=yes "SELECT CAST(float(used_space_mb)*100/total_space_mb AS
> DECIMAL(4,2)) FROM log"
>
> The result is this (without quotes):
>
> '  0.49'
>
> I'm looking for a way to remove the leading spaces, so the output is 
> just '0.49'. I know I can do it in the script itself, but I'm just 
> wondering if there is a way to remove all formatting through the SQL 
> statement itself.
> Thanks for any help in advance!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> 
> For information, services and offers, please visit our web site:
> http://www.klm.com. This e-mail and any attachment may contain 
> confidential and privileged material intended for the addressee only.
> If you are not the addressee, you are notified that no part of the 
> e-mail or any attachment may be disclosed, copied or distributed, and 
> that any other action related to this e-mail or attachment is strictly 
> prohibited, and may be unlawful. If you have received this e-mail by 
> error, please notify the sender immediately by return e-mail, and 
> delete this message.
>
> Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/ or 
> its employees shall not be liable for the incorrect or incomplete 
> transmission of this e-mail or any attachments, nor responsible for 
> any delay in receipt.
> Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal 
> Dutch Airlines) is registered in Amstelveen, The Netherlands, with 
> registered number 33014286
> 
>



