Azure blob soft delete

2022-03-24 Thread Matthew McGeary
We are currently using Azure blob storage for Spectrum Protect Plus offloading 
and are looking to start using it for Cloud Tiering on Spectrum Protect but we 
want to use the soft delete function to give the cloud copy a measure of 
immutability.

Are there any pitfalls to enabling the soft delete feature?  It looks to be 
transparent to the client, so my hope is that SP/SPP will not ‘notice’ that 
it’s on and will continue to operate normally.
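In case it helps anyone else weighing the same question: soft delete is a 
storage-account-level setting and is designed to be transparent to normal blob 
operations. A hedged Azure CLI sketch (the account and resource-group names are 
placeholders; pick the retention window to suit your own exposure):

```shell
# Enable blob soft delete with a 14-day retention window on a storage
# account (account/resource-group names are placeholders).
az storage account blob-service-properties update \
    --account-name mystorageacct \
    --resource-group my-rg \
    --enable-delete-retention true \
    --delete-retention-days 14
```

One caveat worth checking in the cost model: soft-deleted data still consumes 
billed capacity until its retention window expires, so a pool with heavy 
reclamation churn will carry extra storage cost.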

Thanks!
For more information on Nutrien's email policy or to unsubscribe, click here: 
https://www.nutrien.com/important-notice
Pour plus de renseignements sur la politique de courrier électronique de 
Nutrien ou pour vous désabonner, cliquez ici: 
https://www.nutrien.com/avis-important


Re: [EXT] Re: [ADSM-L] WORM Azure blob for cloud container pool

2021-03-19 Thread Matthew McGeary
Thanks for the tip Del, I'll look into that.

On 2021-03-01, 12:40 PM, "ADSM: Dist Stor Manager on behalf of Del Hoobler" 
 wrote:

WARNING: This email originated from outside of the organization. Exercise 
caution when viewing attachments, clicking links, or responding to requests.


Hi Matt,

Spectrum Protect does not support object storage object lock yet. It's on
the roadmap.

You can look at Retention Sets "lock" to help with this requirement.


Del




"ADSM: Dist Stor Manager"  wrote on 03/01/2021
11:28:53 AM:

> From: Matthew McGeary 
> To: ADSM-L@VM.MARIST.EDU
> Date: 03/01/2021 11:29 AM
> Subject: [EXTERNAL] WORM Azure blob for cloud container pool
> Sent by: "ADSM: Dist Stor Manager" 
>
> Hey folks,
>
> We’re implementing time-based retention for Azure blobs on our Plus
> deployment for ransomware protection but I don’t see similar
> language in the Spectrum Protect documentation for cloud container
> pools.  Has anyone implemented a cloud container pool on a cloud
> blob with WORM style retention rules? Is it possible or will it
> cause issues with either writing to the objects or cloud container
> reclamation?
>
> Thanks,
>
> -
> Matthew McGeary
> Service Delivery Manager / Solutions Architect
> Data Center & Network Management, Nutrien IT
> T: (306) 933-8921
> www.nutrien.com





WORM Azure blob for cloud container pool

2021-03-01 Thread Matthew McGeary
Hey folks,

We’re implementing time-based retention for Azure blobs on our Plus deployment 
for ransomware protection but I don’t see similar language in the Spectrum 
Protect documentation for cloud container pools.  Has anyone implemented a 
cloud container pool on a cloud blob with WORM style retention rules? Is it 
possible or will it cause issues with either writing to the objects or cloud 
container reclamation?
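For anyone comparing notes later: Azure's time-based WORM retention is a 
container-level immutability policy. A hedged CLI sketch (names are 
placeholders); note that while such a policy is in effect blob deletes are 
blocked, which is exactly the operation cloud-container reclamation needs to 
perform, so test reclamation behaviour before committing:

```shell
# Create a time-based retention (WORM) policy on a container
# (account/container names are placeholders). Blobs in the container
# cannot be modified or deleted for 30 days after creation.
az storage container immutability-policy create \
    --account-name mystorageacct \
    --container-name sp-cloudpool \
    --period 30
```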

Thanks,

-
Matthew McGeary
Service Delivery Manager / Solutions Architect
Data Center & Network Management, Nutrien IT
T: (306) 933-8921
www.nutrien.com



Spectrum Protect Plus deduplication ratios

2020-07-10 Thread Matthew McGeary
It's a new day, so it must be time for more SPP questions. 

For the folks running SPP, what deduplication ratios are you seeing?  So far 
I'm still in the testing phase, with approximately 10 VMs that I'm running 
backup testing on.  Two of the VMs have similar database footprints (one is qa 
and one is dev, but the data is a close approximation of prod and each other.)  
However, even after a full backup of each system plus three incrementals, 
overall data deduplication is essentially 0, my ratio is sitting at 1.05.  I 
would think that, because the databases are so similar (not to mention that 
they're the same Windows version), I'd be seeing much better dedup than 
that.  Compression is doing quite well, hovering at 2.7, but I was hoping for 
at least 1.5 or 2 to 1 dedup.

Thanks and have a good weekend!

__
Matthew McGeary
Service Delivery Manager / Solutions Architect
Data Center & Network Management, Nutrien IT
T: (306) 933-8921
www.nutrien.com


Re: Spectrum Protect Plus Backup Performance

2020-07-10 Thread Matthew McGeary
Thanks Steve,

I've been digging into it a bit more with the help of IBM support and I forgot 
a very important setting.  Currently I have a vsnap combined with vadp proxy, 
but I neglected to set the softcap for the vadp.  Consequently, backups were 
running on all available cores and crowding out the vsnap server.  Setting the 
softcap based on the blueprints greatly improved overall throughput.  Single VM 
performance was up and overall throughput was in excess of 400 MBps.

So that goes to show, RTFM. 

__
Matthew McGeary
Service Delivery Manager / Solutions Architect
Data Center & Network Management, Nutrien IT
T: (306) 933-8921
www.nutrien.com


-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Schaub, Steve
Sent: Wednesday, July 8, 2020 7:45 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXT] Re: [ADSM-L] Spectrum Protect Plus Backup Performance

WARNING: This email originated from outside of the organization. Exercise 
caution when viewing attachments, clicking links, or responding to requests.


Matthew,
Not sure if this is your issue or not, but verify that you are using nbd 
transport.  Hotadd performs better on initial full backups, but the overhead 
involved in setup/teardown makes it a poor choice once you start doing 
incrementals consistently.
Steve Schaub
Senior Platform Engineer II, Backup & Recovery BlueCross BlueShield of Tennessee
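For readers on classic Data Protection for VMware rather than SPP: there the 
transport preference is an explicit client option, VMVSTORTRANSPORT, given as a 
colon-separated preference list tried left to right. A hedged options-file 
fragment (SPP itself handles transport selection differently; this applies to 
the classic TDP client only):

```shell
# dsm.opt fragment for TDP for VMware (not SPP): prefer NBD-based
# transports over hotadd to avoid per-VM disk mount setup/teardown.
VMVSTORTRANSPORT nbdssl:nbd
```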

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Matthew 
McGeary
Sent: Wednesday, July 8, 2020 12:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Spectrum Protect Plus Backup Performance

Hey folks,

Starting a deployment of Spectrum Protect Plus using virtual vsnap/vadp systems 
backed by v5030 RDM storage.  The vsnap system has been sized based on the 
recommendation of the blueprints for a 100TB repo and all of our VMWare 
infrastructure is hosted on Cisco UCS with multiple 10G interfaces per blade.  
Doing backup testing today and single backup performance is atrocious, 
averaging just 22 MB/s.

I've got a large TDP for VMWare deployment and I am familiar with setting 
parallelism with that product and we see much better individual VM backup 
performance with that product.  Is there anything similar I can tweak for the 
Plus deployment?  I can't find anything in the docs and the current performance 
makes me quite nervous as we push this product forward into production in the 
next month.

Any assistance or experience with a similar all-virtual deployment would be 
helpful.

Thanks

______
Matthew McGeary
Service Delivery Manager / Solutions Architect Data Center & Network 
Management, Nutrien IT
T: (306) 933-8921
http://www.nutrien.com




Spectrum Protect Plus Backup Performance

2020-07-07 Thread Matthew McGeary
Hey folks,

Starting a deployment of Spectrum Protect Plus using virtual vsnap/vadp systems 
backed by v5030 RDM storage.  The vsnap system has been sized based on the 
recommendation of the blueprints for a 100TB repo and all of our VMWare 
infrastructure is hosted on Cisco UCS with multiple 10G interfaces per blade.  
Doing backup testing today and single backup performance is atrocious, 
averaging just 22 MB/s.

I've got a large TDP for VMWare deployment and I am familiar with setting 
parallelism with that product and we see much better individual VM backup 
performance with that product.  Is there anything similar I can tweak for the 
Plus deployment?  I can't find anything in the docs and the current performance 
makes me quite nervous as we push this product forward into production in the 
next month.

Any assistance or experience with a similar all-virtual deployment would be 
helpful.

Thanks

__
Matthew McGeary
Service Delivery Manager / Solutions Architect
Data Center & Network Management, Nutrien IT
T: (306) 933-8921
www.nutrien.com



Re: [EXT] Re: [ADSM-L] TDP for ERP (S4 Hana) questions about longer-term retention

2019-07-26 Thread Matthew McGeary
Yeah, you're right.  I dug into OneProtect a bit this week and it only supports 
B/A clients and backup objects.

So yep, only option is to do dumps to disk for the longer term backups.

Thanks

__
Matthew McGeary
Service Delivery Manager / Solutions Architect
Data Center & Network Management, Nutrien IT
T: (306) 933-8921
www.nutrien.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Thursday, July 25, 2019 3:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXT] Re: [ADSM-L] TDP for ERP (S4 Hana) questions about longer-term 
retention

WARNING: This email originated from outside of the organization. Exercise 
caution when viewing attachments, clicking links, or responding to requests.


I think One Protect only works for B/A clients and VMs, not for TDPs, but I 
could be wrong or that information might be outdated (I think this was the case 
for 8.1.7).
So I'm thinking this might require dumps to disk and archives with removal via 
the B/A client for the monthlies and yearlies.


On Mon, Jul 22, 2019 at 6:20 PM Matthew McGeary 
wrote:

> Folks,
>
> We are currently in the process of building out a S4 Hana environment
> as an ERP replacement and I have a requirement from business to retain
> backups of the S4 databases for the following periods:
>
> Daily incrementals - 30 days
> Weekly fulls - 90 days
> Monthly fulls - 1 year
> Yearly fulls - 7 years
>
> As the TDP for Hana client sends data as archive only and appears to
> only have the ability to tweak either:
>
> - retention of archive objects on the SP server
> - versions retained on the config file local to the SAP S4 Database
>
> What is my best strategy for delivering on these requirements?  Is the
> new retention function (One Protect) my best bet for this requirement?
>
> Thanks
>
> __
> Matthew McGeary
> Service Delivery Manager/Solutions Architect, Compute & Storage
> Information Technology
> T: (306) 933-8921
> www.nutrien.com
>


TDP for ERP (S4 Hana) questions about longer-term retention

2019-07-22 Thread Matthew McGeary
Folks,

We are currently in the process of building out a S4 Hana environment as an ERP 
replacement and I have a requirement from business to retain backups of the S4 
databases for the following periods:

Daily incrementals - 30 days
Weekly fulls - 90 days
Monthly fulls - 1 year
Yearly fulls - 7 years

As the TDP for Hana client sends data as archive only and appears to only have 
the ability to tweak either:

- retention of archive objects on the SP server
- versions retained on the config file local to the SAP S4 Database

What is my best strategy for delivering on these requirements?  Is the new 
retention function (One Protect) my best bet for this requirement?
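Since the question turns on the server-side knob, here is a hedged dsmadmc 
sketch of the management-class approach (the domain, policy-set, class, and 
pool names are all placeholders; RETVER is in days, so 7 years is roughly 
2555). The client side then has to bind each dump type to the right class:

```shell
# Server-side sketch (names are placeholders): one management class per
# retention tier, with archive retention controlled by RETVER in days.
define mgmtclass s4dom standard mc_yearly
define copygroup s4dom standard mc_yearly type=archive destination=archpool retver=2555
assign defmgmtclass s4dom standard mc_yearly
activate policyset s4dom standard
```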

Thanks

__
Matthew McGeary
Service Delivery Manager/Solutions Architect, Compute & Storage
Information Technology
T: (306) 933-8921
www.nutrien.com



Re: [EXT] [ADSM-L] VMFolder in VMware backups

2018-07-05 Thread Matthew McGeary
Good morning Hans,

Subfolders have never worked with the -vmfolder option, which has driven me 
crazy for years.  I gave up and went to cluster-level backups.

__
Matthew McGeary
Senior Advisor, Datacenter
Information Technology
T: (306) 933-8921
www.nutrien.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Hans 
Christian Riksheim
Sent: Wednesday, July 4, 2018 12:21 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXT] [ADSM-L] VMFolder in VMware backups

WARNING: This email originated from outside of the organization. Exercise 
caution when viewing attachments, clicking links, or responding to requests.


Any tips on how to include subfolders? I have always believed they were 
included, but now I see they are not. I'm not sure if this behavior has changed 
between versions. It doesn't help that the documentation says nothing about 
this very important matter, turning yet another simple thing into a research 
project.

Hans C. Riksheim


Re: MAXNUMMP and REPLICATION

2018-05-29 Thread Matthew McGeary
Hey Zoltan,

I define all my nodes with a MAXNUMMP of 100.  It's overkill, but replication 
sessions do indeed count as mounts.
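For anyone hitting the same error, the fix is a one-line node update (the node 
name is a placeholder; pick a ceiling that suits your mount capacity rather 
than an arbitrarily large value):

```shell
# Raise the node's mount-point ceiling; replication sessions count
# against MAXNUMMP just like ordinary client mounts.
update node SOME_NODE maxnummp=10
```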

__
Matthew McGeary
Senior Advisor, Datacenter
Information Technology
T: (306) 933-8921
www.nutrien.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Monday, May 21, 2018 10:13 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXT SENDER] [ADSM-L] MAXNUMMP and REPLICATION

I think I know the answer but does active REPLICATION count against MAXNUMMP?  
Now that we have our new big, beefy server running, we are trying to get caught 
up with replication and I am starting to notice quite a few of these errors:

Transaction failed for session 37414 for node MCCDB01P. This node has exceeded 
its maximum number of mount points.

By default all nodes are configured MAXNUMMP=1.  What are you using?

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services 
http://www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://phishing.vcu.edu


Re: Unable to backup SP database after 8.1.5 server/8.1.4 client update

2018-04-02 Thread Matthew McGeary
Anders,

Thanks for the tip.  I had thought I'd gotten the most recent client version 
but I guess I didn't look too closely!

Installed the fixpack and the DB backup is running normally.  Thanks!

__
Matthew McGeary
Senior Technical Specialist
Information Technology
T: (306) 933-8921
www.nutrien.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Anders 
Räntilä
Sent: Monday, April 2, 2018 9:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXT SENDER] Re: [ADSM-L] Unable to backup SP database after 8.1.5 
server/8.1.4 client update

Hi

I had the same problem.  The solution is to use client 8.1.4.1:

ftp://ftp.software.ibm.com/storage/tivoli-storage-management/patches/client/v8r1/Windows/x64/v814/8.1.4.1-TIV-TSMBAC-WinX64.exe

Best Regards
Anders Räntilä

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
RÄNKAB - Räntilä Konsult AB
Klippingvägen 23
SE-196 32 Kungsängen
Sweden

Email: and...@rantila.com
Phone: +46 701 489 431
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Matthew McGeary
Sent: den 2 april 2018 17:18
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Unable to backup SP database after 8.1.5 server/8.1.4 client 
update

Hey folks,

I'm in a bit of a bind: I did an update to 8.1.5 on a pair of remote servers as 
a way of easing into this new security stuff and working out the kinks before 
it rolls out to the larger servers.  Here's what I did:

Server OS: Windows 2012 R2

  1.  Update client to 8.1.4 on the SP server machine itself (I back up some 
network shares that are mounted on the SP server for backup purposes)
  2.  Update server to 8.1.5

After the server update to 8.1.5, the SP database backup process fails.  I get 
an RC=30 in the db2diag and a ANR4588E in the actlog.  Support has pointed me 
to this technote: http://www-01.ibm.com/support/docview.wss?uid=swg22009265 
but the dsmsutil.exe referenced in the note DOES NOT EXIST, so I'm not able to 
run the update command to re-set the node password for the DB backup node.

So I'm pretty stuck, and support hasn't been much help over the past four days. 
 I'm starting to get antsy that these SP servers haven't had DB backups and I'm 
going to have to start deleting archive logs to keep them running, which 
doesn't make me happy.  I've tried uninstalling the client, following the 
technote as closely as I could, and using the dsmcutil.exe provided with the 
client to try and reset the DB backup node, but nothing has worked so far to 
correct this issue.

Anyone else run into this problem?

__
Matthew McGeary
Senior Technical Specialist
Information Technology
T: (306) 933-8921
www.nutrien.com



Unable to backup SP database after 8.1.5 server/8.1.4 client update

2018-04-02 Thread Matthew McGeary
Hey folks,

I'm in a bit of a bind: I did an update to 8.1.5 on a pair of remote servers as 
a way of easing into this new security stuff and working out the kinks before 
it rolls out to the larger servers.  Here's what I did:

Server OS: Windows 2012 R2

  1.  Update client to 8.1.4 on the SP server machine itself (I back up some 
network shares that are mounted on the SP server for backup purposes)
  2.  Update server to 8.1.5

After the server update to 8.1.5, the SP database backup process fails.  I get 
an RC=30 in the db2diag and a ANR4588E in the actlog.  Support has pointed me 
to this technote: http://www-01.ibm.com/support/docview.wss?uid=swg22009265 but 
the dsmsutil.exe referenced in the note DOES NOT EXIST, so I'm not able to run 
the update command to re-set the node password for the DB backup node.

So I'm pretty stuck, and support hasn't been much help over the past four days. 
 I'm starting to get antsy that these SP servers haven't had DB backups and I'm 
going to have to start deleting archive logs to keep them running, which 
doesn't make me happy.  I've tried uninstalling the client, following the 
technote as closely as I could, and using the dsmcutil.exe provided with the 
client to try and reset the DB backup node, but nothing has worked so far to 
correct this issue.

Anyone else run into this problem?

__
Matthew McGeary
Senior Technical Specialist
Information Technology
T: (306) 933-8921
www.nutrien.com



Re: SPP licensing/included with SP Extended Edition/Suite?

2018-03-22 Thread Matthew McGeary
Strange, I was just in a product briefing and was informed that the stored data 
in the Plus repository counted against the TB in our entitled capacity, not 
this 10 VM/TB model.

Based on our existing deployments and stored data, the 10 VM/TB model will 
result in a fair bit of additional required licensing to deploy plus.



Re: Moving archive data to a directory container

2017-11-29 Thread Matthew McGeary
Node replication would be the only real way to go directly to the container 
pool.

You could define the file-class pool on the same storage directories as the new 
container pool, so that once you land everything and convert, you have two 
pools but don't waste any storage.
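A hedged dsmadmc sketch of that layout (the device-class, pool, and directory 
names are placeholders, and the exact CONVERT STGPOOL parameters should be 
checked against your server level before relying on this):

```shell
# Sketch (names/paths are placeholders): land the imported v6 archives
# in a FILE pool whose volumes live on the same directories the future
# container pool will use, then convert the data in place.
define devclass filedev devtype=file directory=/tsm/stg mountlimit=64 maxcapacity=50g
define stgpool archfile filedev maxscratch=500
define stgpool contpool stgtype=directory
define stgpooldirectory contpool /tsm/stg
/* ... land the v6 archive data in ARCHFILE, then ... */
convert stgpool archfile contpool maxprocess=4
```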

__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
Eric van (ITOPT3) - KLM
Sent: Wednesday, November 29, 2017 10:01 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXT SENDER] [ADSM-L] Moving archive data to a directory container

Hi guys!
We are migrating our clients from a version 6 server to a new 7.1.7 server with 
a directory container pool. Switching clients is easy, just updating the option 
files and they will start a new backup cycle on the new TSM server. But a lot 
of clients also store long term archives. I can't think of a way to move the 
archives from the v6 server to the v7 server since import is not supported to a 
directory pool. The only trick I can come up with is defining a file pool on 
the v7 server, moving all archives in here and converting it to a directory 
container afterwards, but I need extra storage for it and I end up with two 
directory pools (at least until all archives are gone) and that is not what I 
want...
Does anybody else know some trick to move these archives?
Thanks for any help in advance!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering





Re: One to many replication in Spectrum Protect 8.1

2017-07-21 Thread Matthew McGeary
I'm very interested about this as well, since we will have requirements for 
one-many replication in the coming months.

Thanks,

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Golbin, Mikhail
Sent: Friday, July 21, 2017 10:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] One to many replication in Spectrum Protect 8.1

Disk based replication is quite expensive, we are not using it now.
Any particular reason IBM does not recommend replication server switch as part 
of the script?
It appears to me that it will work just fine

On Fri, Jul 21, 2017 at 11:15 AM, Del Hoobler <hoob...@us.ibm.com> wrote:

> Hi Mikhail,
>
> IBM does not recommend the switching technique below.
>
> A few other ideas for creating a tertiary copy...
>- use disk-based replication (it's not integrated into Protect 
> replication... but could be done) and create a "consistency group" 
> that covers the database (DB, instance dir, other related 
> parts) and the storage pool
>- HADR (Example:
> https://www.ibm.com/developerworks/community/blogs/storageneers/entry/
> tsmptha?lang=en
> )
>
> Check Youtube... there are a few presentations out there that talk 
> about it as well.
>
>
> Del
>
> 
>
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 07/21/2017
> 10:41:51 AM:
>
> > From: "Golbin, Mikhail" <mikhail.gol...@roche.com>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 07/21/2017 10:42 AM
> > Subject: Re: One to many replication in Spectrum Protect 8.1 Sent 
> > by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > Will switching replication server as part of the maintenance script
> work?
> > Something along these lines:
> > Set REPLSERVer tsm2
> > UPDate STGpool DIR_POOL_TSM1 PROTECTstgpool=DIR_POOL_TSM2 protect 
> > stgpool DIR_POOL_TSM1 maxsessions=2 wait=yes replicate node * 
> > maxsessions=2 wait=yes
> >
> > Set REPLSERVer tsm3
> > UPDate STGpool DIR_POOL_TSM1 PROTECTstgpool=DIR_POOL_TSM3 protect 
> > stgpool DIR_POOL_TSM1 maxsessions=2 wait=yes replicate node * 
> > maxsessions=2 wait=yes
> >
> >
> >
> > On Thu, Jul 20, 2017 at 1:36 PM, Mikhail Golbin
> <mikhail.gol...@roche.com>
> > wrote:
> >
> > > Hi Del,
> > >
> > > We are all disk, no tape, so that is not an option.
> > > Any possible workarounds?
> > >
> > > Thanks,
> > > Mike G
> > >
> > > On Thu, Jul 20, 2017 at 12:48 PM, Del Hoobler <hoob...@us.ibm.com>
> wrote:
> > >
> > >> Hi Mikhail,
> > >>
> > >> One-to-many node replication is not currently supported. It is a
> known
> > >> requirement.
> > >>
> > >> One way to create two copies of your backup data stored in 
> > >> directory-container storage pools is to use PROTECT STGPOOL to a
> second
> > >> Spectrum Protect server and a PROTECT STGPOOL to tape.
> > >>
> > >> Del
> > >>
> > >> 
> > >>
> > >>
> > >> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 
> > >> 07/20/2017
> > >> 11:07:28 AM:
> > >>
> > >> > From: "Golbin, Mikhail" <mikhail.gol...@roche.com>
> > >> > To: ADSM-L@VM.MARIST.EDU
> > >> > Date: 07/20/2017 11:12 AM
> > >> > Subject: One to many replication in Spectrum Protect 8.1 Sent 
> > >> > by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > >> >
> > >> > Hi guys,
> > >> >
> > >> > I was wondering if anyone tried to replicate to more than one
> server -
> > >> we
> > >> > would like to have one copy nearby and one offsite.
> > >> > I was reading IBM presentation from almost couple years ago
> discussing
> > >> node
> > >> > replication and directory pool replication in then brand new
> Spectrum
> > >> > Protect 7.1.3 -
> > >> >
> http://www.empalis.com/fileadmin/templates_empalis/PDFs/Events_2015/
> > >> >
> TSM_Symp_2015_Nachlese/02_TSM_Symp_2015_Nachlese_Node_Replication.pdf
> > >> >
> > >> > Towards the end of the presentation they talk about future
> direction and
> > >> on
> > >> > page 26 about one-to-many replication. We have the latest 
> > >> > Spectrum
> > >> Protect
> > >> > 8.1 and I can't find a way to do it.
> > >> >
> > >> > All suggestions and workarounds are greatly appreciated!
> > >> >
> > >> > Thanks,
> > >> >
> > >> > --
> > >> > Mikhail Golbin
> > >> > bus (908)635-5705
> > >> > cell (908)210-3393
> > >> > RMD IT Client Services
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > > Mikhail Golbin
> > > bus (908)635-5705
> > > cell (908)210-3393
> > > RMD IT Client Services
> > >
> >
> >
> >
> > --
> > Mikhail Golbin
> > bus (908)635-5705
> > cell (908)210-3393
> > RMD IT Client Services
> >
>



--
Mikhail Golbin
bus (908)635-5705
cell (908)210-3393
RMD IT Client Services


Re: DR Rebuild please chime in.

2017-04-27 Thread Matthew McGeary
If I'm understanding correctly, your DR site will have a storage-level copy of 
all your TSM storage pools, database, logs, etc.

In that case, yes, what is being proposed should work.  However, you're trading 
a replication that can be monitored and validated for a storage-level model that 
isn't application-aware.

AND, if you're not doing anything on the DB2 side during replication (ie: 
quiescing) then the server will do a crash-recovery startup at the DR site.

Crash-recovery has always worked for me in DB2, but it's not as fool-proof as 
DB2 HA/DR replication, recovering from a DB2 backup or using the TSM 
replication that you're ripping out.  There may come a time when you do a DR 
test or actual DR and your TSM database won't recover properly from that 
crash-level snapshot.  Then what do you do?

Why in god's name is this change happening?
__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Thursday, April 27, 2017 1:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DR Rebuild please chime in.

All,

Our last DR was a disaster.

Right now, we do TSM server to TSM server replication and it works fairly well, 
but they have decided we need to fix something that is not broken.

So, the idea is to upgrade to SP 8.1 and install on a zLinux machine. Our 
storage is on an IBM V7000, and where we were performing  the TSM replication, 
we are trashing that and going to IBM V7000 replicating to V7000.

Now,  the big twist in this is,  we will not have a TSM server at our DR 
anymore. The entire primary TSM server will be backed up to the V7000 and 
replicated to our V7000 at the DR site.

There is no TSM server at the DR site, so IBM will build us one when we have 
our DR exercise, and then, according to our trusty DB2 guys, we should just be 
able to break the connection to the primary TSM server, do a little DB2 magic, 
and voila, the TSM server will be up.

This is my question: if the TSM server is built at DR and the primary TSM 
server's database is on the DR V7000, then that database will still have to be 
restored to the TSM server. You're not going to be able to just bring it up 
because it's DB2, point it at the TSM server, and have it work, right?

Please let me know your thoughts. I know I have left a lot of details out but 
I'm just trying to get some views. If you need more information I will be happy 
to provide it.

I appreciate your time.




Ricky M. Plair
Storage Engineer



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
CONFIDENTIALITY NOTICE: This email message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and/or Protected Health Information (PHI) subject to 
protection under the law, including the Health Insurance Portability and 
Accountability Act of 1996, as amended (HIPAA). If you are not the intended 
recipient or the person responsible for delivering the email to the intended 
recipient, be advised that you have received this email in error and that any 
use, disclosure, distribution, forwarding, printing, or copying of this email 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately and destroy all copies of the original message.


Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?

2017-03-28 Thread Matthew McGeary
Hello Tom,

Yes, you will need a mountpoint for each stripe.  Unlike resourceutilization, 
stripes represent client sessions that send data, not data and control sessions 
combined.

Since we're totally in the container-class pool world, all my nodes have 
maxnummp=100 because I heavily use multiple sessions to increase throughput.
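For anyone following along, the two pieces fit together roughly like this. Node and database names below are placeholders, and the exact /STRIPES syntax should be verified against your Data Protection for SQL level:

```
* On the Spectrum Protect server: allow one mount point per stripe
UPDATE NODE SQLNODE1 MAXNUMMP=100

* On the SQL client: run a full backup with 10 parallel stripes
tdpsqlc backup MYDATABASE full /STRIPES=10
```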

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Alverson
Sent: Tuesday, March 28, 2017 1:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client 
Settings ?

I tried 10 and the backup failed due to not enough mount points.  I set it to 2 
and that did speed things up.  Do I need one mount point for each stripe?  We 
normally set the mount points to 2.  Does this mean that I need one mount point 
for my conventional TSM backup and 10 more to do 10 stripes?  I notice that 
when I set RESOURCEUTILIZATION to 10 for the conventional backups I get four 
parallel sessions.  Do I need 4 mount points just for that (plus whatever I 
need for SQL)?

Thanks!

On Mon, Mar 27, 2017 at 3:24 PM, Matthew McGeary < 
matthew.mcge...@potashcorp.com> wrote:

> If you're using TDP for SQL you can specify how many stripes to use in 
> the tdpo.cfg file.
>
> For our large SQL backups, I use 10 stripes.
>
> __________
> Matthew McGeary
> Senior Technical Specialist – Infrastructure Management Services 
> PotashCorp
> T: (306) 933-8921
> www.potashcorp.com
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Tom Alverson
> Sent: Monday, March 27, 2017 1:11 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A 
> Client Settings ?
>
> Our biggest performance issue is with SQL backups of large databases. 
> Our DBAs all want full backups every night (and log backups every 
> hour) and for the databases that are around 1TB the backup will start 
> at Midnight and finish 5 to 13 hours later (varies day to day).  When 
> these backups start extending into the daytime hours they complain but 
> I don't know how we could improve the speed.  Our Storage servers all 
> have 10GB interfaces but they are backing up hundreds of clients every 
> night (mostly incremental file level backups).  I am running a test 
> right now to see if RESOURCEUTILIZATION 10 helps one of these database 
> backups but I suspect it will make no difference as 99% of the data is 
> all in one DB and I don't think SQL/TSM will split that into multiple streams 
> (will it?).
>
> On Sun, Mar 26, 2017 at 6:57 AM, Del Hoobler <hoob...@us.ibm.com> wrote:
>
> > Hi Tom,
> >
> > My original posting was an excerpt from best practices for container 
> > pools, and does not necessarily apply to other storage pool types.
> >
> > Yes, client-side deduplication and compression options should be 
> > avoided with a Data Domain storage pool.
> >
> > A fixed resourceutilization setting of 2 may underperform for 
> > clients that have a lot of data to back up and fast network 
> > connections, but this is not a black and white answer. There are 
> > various other conditions that can affect this and trying to narrow 
> > in on them in
> ADSM-L would be difficult.
> > If you want some help with a performance issue, please open a PMR.
> >
> >
> > Del
> >
> > 
> >
> > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 03/25/2017
> > 12:20:43 AM:
> >
> > > From: Tom Alverson <tom.alver...@gmail.com>
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 03/25/2017 05:40 AM
> > > Subject: Re: Best Practices/Best Performance SP/TSM B/A Client 
> > > Settings
> > ?
> > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > >
> > > Del:
> > >
> > > We have been using these settings as our defaults.  Is our 
> > > TCPWINDOWSIZE too large?
> > >
> > > RESOURCEUTILIZATION 2  (we increase this up to 10 for some WAN backups)
> > > TXNBYTELIMIT 2097152
> > > TCPNODELAY YES
> > > TCPBUFFSIZE 512
> > > TCPWINDOWSIZE 2048
> > > LARGECOMMBUFFERS YES
> > >
> > > Also we never use compression because our storage folks believe it 
> > > will foul up the de-duplication that happens on our Data Domains??
> > >
> > > On Mon, Mar 20, 2017 at 9:11 PM, Del Hoobler <hoob...@us.ibm.com>
> 

Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?

2017-03-27 Thread Matthew McGeary
I haven't had to change the buffers or any other settings and it's made a big 
difference but I didn't really experiment with different numbers.  I landed on 
10 based on similar experience using 10 sessions for DB2 API backup/recovery.  
Our largest SQL server backup is probably in the 500-600 GB range, so we aren't 
operating on the same scale as you are.

Del is right that this is only for 'legacy' backups, not for VSS-offloaded.

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Alverson
Sent: Monday, March 27, 2017 2:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client 
Settings ?

I will try that tonight.  Do you change any of the other "performance"
settings from the defaults?

Thanks!

Tom

On Mon, Mar 27, 2017 at 3:24 PM, Matthew McGeary < 
matthew.mcge...@potashcorp.com> wrote:

> If you're using TDP for SQL you can specify how many stripes to use in 
> the tdpo.cfg file.
>
> For our large SQL backups, I use 10 stripes.
>
> ______
> Matthew McGeary
> Senior Technical Specialist – Infrastructure Management Services 
> PotashCorp
> T: (306) 933-8921
> www.potashcorp.com
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Tom Alverson
> Sent: Monday, March 27, 2017 1:11 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A 
> Client Settings ?
>
> Our biggest performance issue is with SQL backups of large databases. 
> Our DBAs all want full backups every night (and log backups every 
> hour) and for the databases that are around 1TB the backup will start 
> at Midnight and finish 5 to 13 hours later (varies day to day).  When 
> these backups start extending into the daytime hours they complain but 
> I don't know how we could improve the speed.  Our Storage servers all 
> have 10GB interfaces but they are backing up hundreds of clients every 
> night (mostly incremental file level backups).  I am running a test 
> right now to see if RESOURCEUTILIZATION 10 helps one of these database 
> backups but I suspect it will make no difference as 99% of the data is 
> all in one DB and I don't think SQL/TSM will split that into multiple streams 
> (will it?).
>
> On Sun, Mar 26, 2017 at 6:57 AM, Del Hoobler <hoob...@us.ibm.com> wrote:
>
> > Hi Tom,
> >
> > My original posting was an excerpt from best practices for container 
> > pools, and does not necessarily apply to other storage pool types.
> >
> > Yes, client-side deduplication and compression options should be 
> > avoided with a Data Domain storage pool.
> >
> > A fixed resourceutilization setting of 2 may underperform for 
> > clients that have a lot of data to back up and fast network 
> > connections, but this is not a black and white answer. There are 
> > various other conditions that can affect this and trying to narrow 
> > in on them in
> ADSM-L would be difficult.
> > If you want some help with a performance issue, please open a PMR.
> >
> >
> > Del
> >
> > 
> >
> > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 03/25/2017
> > 12:20:43 AM:
> >
> > > From: Tom Alverson <tom.alver...@gmail.com>
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 03/25/2017 05:40 AM
> > > Subject: Re: Best Practices/Best Performance SP/TSM B/A Client 
> > > Settings
> > ?
> > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > >
> > > Del:
> > >
> > > We have been using these settings as our defaults.  Is our 
> > > TCPWINDOWSIZE too large?
> > >
> > > RESOURCEUTILIZATION 2  (we increase this up to 10 for some WAN backups)
> > > TXNBYTELIMIT 2097152
> > > TCPNODELAY YES
> > > TCPBUFFSIZE 512
> > > TCPWINDOWSIZE 2048
> > > LARGECOMMBUFFERS YES
> > >
> > > Also we never use compression because our storage folks believe it 
> > > will foul up the de-duplication that happens on our Data Domains??
> > >
> > > On Mon, Mar 20, 2017 at 9:11 PM, Del Hoobler <hoob...@us.ibm.com>
> wrote:
> > >
> > > > Hi Ben,
> > > >
> > > > Here are some items to get you started:
> > > >
> > > >
> > > > Backup-Archive client with limi

Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?

2017-03-27 Thread Matthew McGeary
If you're using TDP for SQL you can specify how many stripes to use in the 
tdpo.cfg file.

For our large SQL backups, I use 10 stripes.

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Alverson
Sent: Monday, March 27, 2017 1:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client 
Settings ?

Our biggest performance issue is with SQL backups of large databases. Our DBAs 
all want full backups every night (and log backups every hour) and for the 
databases that are around 1TB the backup will start at Midnight and finish 5 to 
13 hours later (varies day to day).  When these backups start extending into 
the daytime hours they complain but I don't know how we could improve the 
speed.  Our Storage servers all have 10GB interfaces but they are backing up 
hundreds of clients every night (mostly incremental file level backups).  I am 
running a test right now to see if RESOURCEUTILIZATION 10 helps one of these 
database backups but I suspect it will make no difference as 99% of the data is 
all in one DB and I don't think SQL/TSM will split that into multiple streams 
(will it?).

On Sun, Mar 26, 2017 at 6:57 AM, Del Hoobler <hoob...@us.ibm.com> wrote:

> Hi Tom,
>
> My original posting was an excerpt from best practices for container 
> pools, and does not necessarily apply to other storage pool types.
>
> Yes, client-side deduplication and compression options should be 
> avoided with a Data Domain storage pool.
>
> A fixed resourceutilization setting of 2 may underperform for clients 
> that have a lot of data to back up and fast network connections, but 
> this is not a black and white answer. There are various other 
> conditions that can affect this and trying to narrow in on them in ADSM-L 
> would be difficult.
> If you want some help with a performance issue, please open a PMR.
>
>
> Del
>
> 
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 03/25/2017
> 12:20:43 AM:
>
> > From: Tom Alverson <tom.alver...@gmail.com>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 03/25/2017 05:40 AM
> > Subject: Re: Best Practices/Best Performance SP/TSM B/A Client 
> > Settings
> ?
> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > Del:
> >
> > We have been using these settings as our defaults.  Is our 
> > TCPWINDOWSIZE too large?
> >
> > RESOURCEUTILIZATION 2  (we increase this up to 10 for some WAN backups)
> > TXNBYTELIMIT 2097152
> > TCPNODELAY YES
> > TCPBUFFSIZE 512
> > TCPWINDOWSIZE 2048
> > LARGECOMMBUFFERS YES
> >
> > Also we never use compression because our storage folks believe it 
> > will foul up the de-duplication that happens on our Data Domains??
> >
> > On Mon, Mar 20, 2017 at 9:11 PM, Del Hoobler <hoob...@us.ibm.com> wrote:
> >
> > > Hi Ben,
> > >
> > > Here are some items to get you started:
> > >
> > >
> > > Backup-Archive client with limited, high latency network (WAN
> backups):
> > > ===
> > > TCPWINDOWSIZE   512
> > > RESOURCEUTILIZATION 4
> > > COMPRESSION Yes
> > > DEDUPLICATION   Yes
> > > ENABLEDEDUPCACHEYes
> > >
> > > Tip:  Do not use the client deduplication caching for applications
> that
> > > use the IBM Spectrum Protect API.  Refer to section 1.2.3.2.1 for 
> > > additional details.
> > >
> > >
> > > Backup/Archive client or Client API with limited network (Gigabit 
> > > LAN
> > > backups):
> > > ===
> > > TCPWINDOWSIZE   512
> > > RESOURCEUTILIZATION 10
> > > COMPRESSION Yes
> > > DEDUPLICATION   Yes
> > > ENABLEDEDUPCACHENo
> > >
> > >
> > > Backup/Archive client or Client API with high speed network (10
> Gigabit +
> > > LAN backups)
> > > ===
> > > TCPWINDOWSIZE   512
> > > RESOURCEUTILIZATION 10
> > > COMPRESSION No
> > > DEDUPLICATION   No
> > > ENABLEDEDUPCACHENo
> > >
> > >
> > > Tip:  For optimal data reduction, avoid the following client 
> > > option
> > > combination:
> > >
> > > COMPRESSION Yes
>

Re: Move Data from STG pool directory to another one.

2017-02-06 Thread Matthew McGeary
There is still no method for moving data from a directory based container pool 
to another one.

I think your only option would be to replicate to another server, then 
replicate back to the smaller pool.
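In outline, the round trip looks something like the sketch below. Server and pool names are placeholders, and the real procedure involves switching the replication direction on the target server, so it is worth validating against the IBM documentation before relying on it:

```
* On the source server: send everything to a second server
SET REPLSERVER SERVER2
PROTECT STGPOOL BIGPOOL WAIT=YES
REPLICATE NODE * WAIT=YES

* After redefining the smaller pool as the nodes' destination,
* reverse the replication direction on SERVER2 and replicate back
```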
__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Monday, February 06, 2017 7:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Move Data from STG pool directory to another one.

Wouldn't ordinary migration work for this?

UPDATE STGPOOL stg1 HI=1 LO=0 NEXTSTGPOOL=stg2

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Sun, 5 Feb 2017, rou...@univ.haifa.ac.il wrote:

>Hello all
>Working with TSM server version 7.1.7.0 and Directory stg pools.
>
>I  have a stg pool with a large capacity and I want to create a new stg pool 
>smaller. I wonder what will be  the correct process to move the data from stg1 
>to stg2.
>
>Both of them are with type directory.
>
>To use MOVE CONTAINER ???  or MOVE DATA ??? or another command ?
>
>Any suggestion , tips or commands
>
>Best Regards
>
>Robert
>
>
>
>rou...@univ.haifa.ac.il
>http://computing.haifa.ac.il
>


Re: DECOMMISSION NODE

2017-01-12 Thread Matthew McGeary
Hello Zoltan,

I use it every day, mostly because of changes to our VMware environment (VMs 
seem to breed like rabbits and die like fruit flies.)  It never seems to take 
much time in those cases, but the object count and data stored in those cases 
isn't typically very large.

I've never tried to decomm a node that is TB in size or one that contains 
millions of objects.
__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, January 12, 2017 1:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DECOMMISSION NODE

Anyone out there using the DECOMMISSION NODE command?  I tried it on an old, 
inactive node and after running for 4-days, I had to cancel it due to scheduled 
TSM server maintenance.

My issue is, since it was only 35% finished (based on the number of objects 
processed), will it start from the beginning or remember where it left off?

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator (in training) Virginia Commonwealth 
University UCC/Office of Technology Services www.ucc.vcu.edu zfor...@vcu.edu - 
804-828-4807 Don't be a phishing victim - VCU and other reputable organizations 
will never use email to request that you reply with your password, social 
security number or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html


Re: TSM DB spaces

2017-01-06 Thread Matthew McGeary
I believe that IBM recommends 8 or 16 database filesystems, depending on your 
expected sizing.

We've experimented with 1, 4 and 8 containers/filesystems and I can't say that 
it's made much, if any difference.  Much like Sasa points out, the factor that 
makes the most impact is the IOPS of the backing device.  Your XIV system 
should provide more than adequate IOPS, particularly since the new CONTAINER 
class pools are much less IO intensive on the database and storage pool 
filesystems than the old FILE class devices.

__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Sasa 
Drnjevic
Sent: Friday, January 06, 2017 10:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB spaces

On 2017-01-06 16:54, David Ehresman wrote:
> I am running TSM 7.1.5 on AIX 7.1.  My TSM DB is currently made up of 
> five 120GB filespaces.  I need to dramatically increase the size of 
> the DB as we prepare for dedup.  I know I can have 128 DB filespaces.
> What, if any, are the performance implications of just adding more 
> filespaces vs staying with a smaller number of much larger filespaces?  
> The filespaces are SAN attached XIV with the data spread over 180 
> drives.
>
> David Ehresman
>

From experience, I believe it all comes down to IOPS.

So, it all depends on how your TSM DB spaces are distributed over as many drives 
as possible (SAS or SATA HDDs, or SSDs), the RAID type, and of course the type 
and throughput of the (Fibre Channel?) SAN.

Your case with 180 drives sounds very good to me...

I've never had more than four TSM DB spaces, but I always distributed them well 
over FC SAN, some on HDDs, and some on pure SSDs (Storwize V7000), some only on 
RAID6, some only on RAID10. The sizes at the moment are 400 GB, 200 GB and 800 
GB TSM DBs.

Hope it helps...

--
Sasa Drnjevic


Re: R: [ADSM-L] AW: [ADSM-L] NOSIGN AW: Offsite access mode for container-copy volumes

2016-12-02 Thread Matthew McGeary
Marco,

No, restore from the tape copy is not possible.  That’s because the tape copy 
is of the deduplicated pool itself, not the hydrated node data as with the 
FILE-class dedup.

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marco 
Batazzi
Sent: Friday, December 02, 2016 3:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] R: [ADSM-L] AW: [ADSM-L] NOSIGN AW: Offsite access mode for 
container-copy volumes

Hello,
What about restoring directly from a copy container storage pool? In a disaster 
recovery, is that possible without restoring the entire primary container 
storage pool?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Efim
Sent: Friday, December 2, 2016 10:38
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] AW: [ADSM-L] NOSIGN AW: Offsite access mode for 
container-copy volumes

I agree. Containers do not support the offsite status, but they still support 
the unavailable status.
Probably it can be used as workaround when license type is Basic.
Efim


> On 2 Dec 2016, at 11:19, Tobias Karnat - adcon GmbH <kar...@adcon.de> wrote:
> 
> Hello,
> 
> Yes, you can use tapes for copy container pools starting from 7.1.7. 
> But you cannot update the access status of these tapes to offsite manually 
> (via update vol, only if you use move drmedia).
> 
> This is a problem if a customer has the requirement to store copy pool 
> volumes offsite and only has basic edition.
> 
> Mit freundlichem Gruß / With kind regards
> 
> Tobias Karnat
> - Systemberater -
> 
> Tel:  +49(0)231 946164-29
> Fax:  +49(0)231 946164-14
> Mail: kar...@adcon.de
> Web:  http://www.adcon.de
> 
> adcon Gesellschaft für EDV-Dienstleistungen/Beratung mbH 
> Martin-Schmeißer-Weg 15
> D-44227 Dortmund
> 
> GF: Dipl.-Inf. Norbert Keßlau Amtsgericht Dortmund - HRB 11759
> 
> This e-mail contains confidential or legally protected information. If you 
> are not the intended recipient, please inform the sender immediately and 
> delete this e-mail. Unauthorized copying of this e-mail or unauthorized 
> disclosure of the information it contains is not permitted.
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Efim
> Sent: Friday, December 2, 2016 09:03
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] NOSIGN AW: Offsite access mode for 
> container-copy volumes
> 
> Hi
> Starting from SP 7.1.7 you can use copy pools (only tapes) for protect 
> containers.
> http://www-01.ibm.com/support/docview.wss?uid=swg27048653
> <http://www-01.ibm.com/support/docview.wss?uid=swg27048653>
> Efim
> 
> 
>> On 2 Dec 2016, at 10:43, Tobias Karnat - adcon GmbH <kar...@adcon.de> wrote:
>> 
>> Hello,
>> 
>> The answer of development is not very helpful for small environments with 
>> basic edition (without DRM):
>> 
>> Development has made it clear that there is no plan in the future to 
>> have the UPD VOL command change the status of container volumes to be 
>> updated to offsite. Here is the developer's response:
>> "
>> The legacy technology of copy storage pools, etc. does not apply to 
>> container pools, and will not in the future.  Data in container 
>> storage pools are protected via the PROTECT and REPLICATE NODE 
>> commands. Customers who wish to have legacy behaviour should not 
>> deploy their data to container pools; rather they should keep their 
>> data in non-container storage pools. The legacy technology will 
>> remain supported for future releases of Spectrum Protect server for 
>> non-container storage pools.
>> "
>> In a small environment like this it makes sense for the customer to 
>> stay with the legacy technology.
>> 
>> Mit freundlichem Gruß / With kind regards
>> 
>> Tobias Karnat
>> - Systemberater -
>> 
>> Tel:+49(0)231 946164-29
>> Fax:   +49(0)231 946164-14
>> Mail: kar...@adcon.de<mailto:kar...@adcon.de>
>> Web:http://www.adcon.de<http://www.adcon.de/>
>> 
>> adcon Gesellschaft für EDV-Dienstleistungen/Beratung mbH 
>> Martin-Schmeißer-Weg 15
>> D-44227 Dortmund
>> 
>> GF: Dipl.-Inf. Norbert Keßlau Amtsgericht Dortmund - HRB 11759
>> 

Re: Storage container replication

2016-12-01 Thread Matthew McGeary
David,

We're able to saturate our 1Gbps link to our DR site during stgpool protect 
processes.

The trick is to specify a relatively high number of sessions (50 in this case) 
to stick-handle around the latency.  We've also enabled the sliding TCP window 
on both source and target and raised the buffers high enough for the servers to 
allocate an 8M window size.

Our before-dedup intake is around 15TB a day, which translates to around 1-2TB 
of replicated data after deduplication.
 With two protect processes scheduled to run at 1:30 am and pm, they tend to 
run to completion in 2-3 hours.
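As a rough sketch of the server-side pieces being described here (the pool name is a placeholder, and the option values are examples from this particular setup, not general recommendations):

```
* Run pool protection with a high session count to hide WAN latency
PROTECT STGPOOL CONTPOOL MAXSESSIONS=50 WAIT=YES

* In dsmserv.opt on both source and target:
*   TCPWINDOWSIZE 0 lets the OS auto-tune (slide) the TCP window,
*   provided the OS buffer limits are raised high enough (~8MB here)
TCPWINDOWSIZE 0
```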

HTH
__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Thursday, December 01, 2016 1:03 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Storage container replication

If you are doing directory based storage containers and using "protect stg" to 
replicate data from a primary site to a secondary site, what kind of throughput 
are you getting on the data movement?

David


Re: Backing up Active Directory server using TSM client

2016-11-08 Thread Matthew McGeary
I believe that TSM for VMWare is Active Directory aware and creates consistent 
snaps of virtualized DCs.

I've handed our AD guys VM backups of their DCs for a couple years now and they 
seem to be able to recover without much issue but I'm not an expert in AD or 
what their procedure is.

__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Monday, November 07, 2016 10:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backing up Active Directory server using TSM client

Is there any point in backing up an Active Directory server using the TSM 
client? Would a restore of Active Directory work? I have read conflicting views 
on this - some say it would not work yet others say it has worked for them, 
while others say that they use another tool to backup Active Directory and then 
use TSM to backup those backup files.





Thanks & Regards

Paul





Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au








ANL DISCLAIMER

This e-mail and any file attached is confidential, and intended solely to the 
named addressees. Any unauthorised dissemination or use is strictly prohibited. 
If you received this e-mail in error, please immediately notify the sender by 
return e-mail from your system. Please do not copy, use or make reference to it 
for any purpose, or disclose its contents to any person.


Re: Conversion from File to Storage Container on Linux

2016-09-29 Thread Matthew McGeary
Larry,

Yes, you will definitely need to maintain a FILE pool for data that cannot be 
stored in a container pool (virtual volumes, etc).

When I did my conversion, I defined the container directories to the same 
locations as the FILE devclass without removing anything.  This created a 
shared directory structure that both pools would use.  As the FILE pool emptied 
due to volume conversion, the container pool would gain that space.

I didn’t delete any volumes or worry about removing directories from the FILE 
devclass until the conversion was complete.

The conversion should go quick, probably a couple days tops.
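Put into commands, the shared-directory approach looks roughly like this. Pool names and paths are placeholders, and CONVERT STGPOOL's DURATION is in minutes:

```
* Point the new container pool at the same filesystems as the FILE pool
DEFINE STGPOOL CONTPOOL STGTYPE=DIRECTORY
DEFINE STGPOOLDIRECTORY CONTPOOL /tsmfile01,/tsmfile02,/tsmfile03

* Convert in scheduled windows (6 hours per run here) until the FILE
* pool is empty, then remove the old volumes and device class
CONVERT STGPOOL FILEPOOL CONTPOOL MAXPROCESS=4 DURATION=360
```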

Regards,

Matthew

> On Sep 29, 2016, at 1:18 PM, Larry Bernacki <lawrence.ctr.berna...@faa.gov> 
> wrote:
> 
> Matthew,
> 
> Thank you for the response.  At present I only have 9TB of disk space 
> available, which is allocated to a volume group broken up into 6 x 1.5TB 
> mount points, tsmfile01-06.  My devtype=FILE devclass has the 6 mount points 
> defined to it.
> 
> What I've started doing is deleting the volumes from the tsmfile05 and 06 
> mount points so that I start with 3TB of space to define to a new container 
> pool.  I'll remove the tsmfile05 and tsmfile06 directories from the current 
> volumes devclass and create this new storage container pool using those two 
> mount points.  I've got more disk space ordered that I'll just add to the 
> container pool.  I'll probably leave 4.5 TB of the FILE type volumes for 
> files that cannot be put into the container pool and as a secondary pool for 
> the container pool.
> 
> From your experience, does that sound correct?
> 
> Thank you,
> Larry Bernacki
> Project Manager/Systems Programmer
> Laboratory Technical Services Branch, ANG-E13
> FAA William J. Hughes Technical Center Atlantic City International Airport, 
> NJ 08405
> (609) 485-7193
> 
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Matthew McGeary
> Sent: Thursday, September 29, 2016 1:55 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Conversion from File to Storage Container on Linux
> 
> Larry,
> 
> I recently performed this conversion on all of our TSM servers.
> 
> To simplify the process, I allocated the new container pool directories on 
> the same mountpoints and volume groups that held the FILE class data.  That 
> way, as your file volumes empty into the container class pool, you shouldn't 
> need more than 25% free space to keep things running smoothly.
> 
> The convert stgpool process ran relatively quickly, I converted 250TB FILE 
> data to container in a few weeks. 
> 
> For TSMB, I'd delete the storage pool with devclass file and create a new 
> container class pool to take its place.  Then set up storage pool 
> replication to seed TSMB with TSMA data as the conversion process moves 
> through your data.  The convert process can (and should) be scheduled to run 
> during quiet periods for a set duration.  We ran ours for 6 hours a day.
> 
> Let me know how you make out, but you should have no issues.  It ran smoothly 
> on all of our servers, from the smallest instance with ~10TB to our primary 
> server with ~250TB.
> 
> Regards,
> 
> __
> Matthew McGeary
> Senior Technical Specialist - Infrastructure Management Services PotashCorp
> T: (306) 933-8921
> www.potashcorp.com
> 
> 
> 
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Larry Bernacki
> Sent: Thursday, September 29, 2016 8:02 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Conversion from File to Storage Container on Linux
> 
> Background - Earlier this year we moved our TSM server from z/OS to a Linux 
> x86_64 server and then upgraded 2 TSM servers from 7.1.1 to 7.1.7, each with 
> 9TB of storage divided up across 6 mount points with the storage pool 
> allocated as a device class=FILE.  Each systems sequential FILE pool is 
> comprised of 174 - 50GB volumes. Currently the primary server TSMA has 3.2TB 
> of used space, with nothing yet on TSMB.   Our intention was to have TSMA be 
> the primary backup server, and TSMB be the offsite server, using Node 
> Replication to sync  the servers daily.
> 
> Deduplication is not active at this point, but based on my IBM videos and 
> documentation it should be turned on with storage containers.
> 
> Looking for some assistance. Has anyone converted from using a FILE dev class 
> configuration to Storage Containers?
> All disk drive space has been allocated.  Should I begin removing volumes to 
> free up space so that I can create an LVM for the storage pool container 
> directory?  Then slowly move the currently allocat

Re: Conversion from File to Storage Container on Linux

2016-09-29 Thread Matthew McGeary
Larry,

I recently performed this conversion on all of our TSM servers.

To simplify the process, I allocated the new container pool directories on the 
same mountpoints and volume groups that held the FILE class data.  That way, as 
your file volumes empty into the container class pool, you shouldn't need more 
than 25% free space to keep things running smoothly.

The convert stgpool process ran relatively quickly, I converted 250TB FILE data 
to container in a few weeks. 

For TSMB, I'd delete the storage pool with devclass file and create a new 
container class pool to take its place.  Then set up storage pool replication 
to seed TSMB with TSMA data as the conversion process moves through your data.  
The convert process can (and should) be scheduled to run during quiet periods 
for a set duration.  We ran ours for 6 hours a day.

Let me know how you make out, but you should have no issues.  It ran smoothly 
on all of our servers, from the smallest instance with ~10TB to our primary 
server with ~250TB.

Regards,

__
Matthew McGeary
Senior Technical Specialist - Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Larry 
Bernacki
Sent: Thursday, September 29, 2016 8:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Conversion from File to Storage Container on Linux

Background - Earlier this year we moved our TSM server from z/OS to a Linux 
x86_64 server and then upgraded 2 TSM servers from 7.1.1 to 7.1.7, each with 
9TB of storage divided up across 6 mount points with the storage pool allocated 
as a device class=FILE.  Each systems sequential FILE pool is comprised of 174 
- 50GB volumes. Currently the primary server TSMA has 3.2TB of used space, with 
nothing yet on TSMB.   Our intention was to have TSMA be the primary backup 
server, and TSMB be the offsite server, using Node Replication to sync  the 
servers daily.

Deduplication is not active at this point, but based on my IBM videos and 
documentation it should be turned on with storage containers.

Looking for some assistance. Has anyone converted from using a FILE dev class 
configuration to Storage Containers?
All disk drive space has been allocated.  Should I begin removing volumes to 
free up space so that I can create an LVM for the storage pool container 
directory?  Then slowly move the currently allocated node backup data on the 
volumes to the storage containers?

Any assistance would be greatly appreciated.

Thank you,
Larry Bernacki
Project Manager/Systems Programmer
Laboratory Technical Services Branch, ANG-E13 FAA William J. Hughes Technical 
Center Atlantic City International Airport, NJ 08405


Re: deleing data from a containerpool

2016-08-22 Thread Matthew McGeary
I bled hard and installed 7.1.5 while the paint was wet.

Compression was available on AIX concurrently with all other releases. WAN 
acceleration, on the other hand, is not.  Booo!
__
Matthew McGeary
Senior Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   David Ehresman <david.ehres...@louisville.edu>
To: ADSM-L@VM.MARIST.EDU
Date:   08/22/2016 08:16 AM
Subject:Re: [ADSM-L] deleing data from a containerpool
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



I can assure you that compression was NOT part of the earlier releases of 
7.1.5 on AIX.  I had to painfully tear down a system we were trying to 
convert because compression was not included in the early 7.1.5 
releases and dedup alone was not meeting savings expectations.

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Del Hoobler
Sent: Monday, August 22, 2016 10:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] deleing data from a containerpool

Minor correction.

Inline compression for container pools was added in March in 7.1.5.

IBM Spectrum Protect 7.1.5 - Inline compression:
- Performed in-line after deduplication to provide additional storage savings
- Negligible impact on resources - uses latest and most efficient compression algorithms
- Can potentially double (or more) your storage savings after deduplication

Del

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 08/22/2016 09:53:08 AM:

> From: "Loon, Eric van (ITOPT3) - KLM" <eric-van.l...@klm.com>
> To: ADSM-L@VM.MARIST.EDU
> Date: 08/22/2016 09:53 AM
> Subject: Re: deleing data from a containerpool
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> 
> Indeed, TSM 7.1.0 to 7.1.5 only supported deduplication; additional 
> compression was introduced in 7.1.6.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> Behalf Of David Ehresman
> Sent: maandag 22 augustus 2016 15:12
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: deleing data from a containerpool
> 
> At the most recent levels of TSM, it both dedups and compresses, but 
> make sure you are at a level that does both.  There was a level that 
> only did dedup but not compression.
> 
> David
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> Behalf Of Rhodes, Richard L.
> Sent: Monday, August 22, 2016 9:07 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] deleing data from a containerpool
> 
> >But I totally agree, everyone who is using file device 
> >classes or expensive backend deduplication (like Data Domain or Protectier) 
> >should seriously consider switching to container pools.
> 
> We currently use DataDomains.
> 
> With a DD it dedups what it can, then compresses the rest. 
> 
> Does TSM also try and compress what is leftover after dedup?
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> Behalf Of Loon, Eric van (ITOPT3) - KLM
> Sent: Tuesday, August 16, 2016 3:39 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: deleing data from a containerpool
> 
> Hi Stefan!
> Our database is on SSD in an IBM V3700, but the time needed for a 
> del filespace can be significant though. But I totally agree, 
> everyone who is using file device classes or expensive backend 
> deduplication (like Data Domain or Protectier) should seriously 
> consider switching to container pools. We are working on a design 
> for our next TSM servers and we are able to lower our costs per TB 
> by 75% compared to the old design based on the Data Domain!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> Behalf Of Stefan Folkerts
> Sent: dinsdag 16 augustus 2016 8:33
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: deleing data from a containerpool
> 
> Yes, I too have noticed this and it is something to keep in mind.
> At the same time, I think almost everybody using this pool will be 
> using SSDs for the database, so the impact will be manageable.

Re: Need timeline/version for inline-dedup container stgpool with tape vaulting support

2016-08-02 Thread Matthew McGeary
Michaud,

You should sign up for the beta program.  Even if (like myself) you don't 
have the time or hardware to do direct testing, you will get early insight 
into roadmaps, have input on design choices and generally know and shape 
the future of the product.

In particular, the question you're asking will be answered, along with 
more. 

Regards,
__
Matthew McGeary
Senior Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   "Michaud, Luc [Analyste principal - environnement AIX]" 
<luc.micha...@stm.info>
To: ADSM-L@VM.MARIST.EDU
Date:   08/01/2016 11:22 AM
Subject:[ADSM-L] Need timeline/version for inline-dedup container 
stgpool with tape vaulting support
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



We are looking at upgrading our existing TSM 7.1.1.300 server (on AIX).
We would really like to gain benefits from the reworked inline dedup 
architecture (hopefully it wont pound the db2 as much), but our current DR 
strategy relies on external vaulting of LTO tapes.
Anyone has a roadmap (w/ versions and tentative dates) when the 
inline-dedup container stgpool will support this ?

Regards,
LUC MICHAUD, B.Ing
Analyste Principal Exploitation
Division Technologies de l'information
Sociéte de Transport de Montréal
(514) 280-7416
luc.micha...@stm.info<mailto:luc.micha...@stm.info>



Sharepoint backup strategies

2016-06-29 Thread Matthew McGeary
Folks,

We are in the process of putting Sharepoint in and I'm looking at how
exactly I should be backing the system up.  I see that IBM no longer
bundles the Sharepoint TDP, so we have to go out and purchase licenses
from Avepoint.

Is that TDP client required?  Sharepoint is essentially an SQL system, so
is the TDP for SQL Server sufficient?  For those of you that use the
DocAve product, what advantages does it provide over a straight SQL
backup?

Thanks!
__
Matthew McGeary
Senior Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com


Re: Proctect Stgpool

2016-06-07 Thread Matthew McGeary
David,

I'm using 'protect stgpool' exactly like I used 'backup stgpool'
previously.  Every day, after the backup window is completed, I run a
protect stgpool, then a repl node.  This ensures that the storage pool is
protected from corruption and that all node metadata is synced across my
two TSM servers.  Using protect stgpool is also much more consistent over
fast high-latency networks like our link between the primary datacenter
and our recovery center across the country.  Because protect stgpool runs
on an extent level, running a multi-session protect command will always
fill the pipe until process completion.  Then I cut a db backup to a local
disk and the server maintenance window is complete.
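The daily sequence described above might look like this in a server maintenance script (the pool, device class, and session counts are hypothetical examples, not the poster's actual values):

```
/* Run after the backup window completes */
PROTECT STGPOOL CONTPOOL MAXSESSIONS=10 WAIT=YES
REPLICATE NODE * WAIT=YES
BACKUP DB DEVCLASS=DBBACK TYPE=FULL WAIT=YES
```

Running PROTECT STGPOOL before REPLICATE NODE means the extents are already on the target when node metadata is replicated, which keeps the replication step short.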

I don't use the forcerec command unless I see a replication error or
failed replication process.

Hope that helps,
__
Matthew McGeary
Senior Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   David Ehresman <david.ehres...@louisville.edu>
To: ADSM-L@VM.MARIST.EDU
Date:   06/07/2016 08:33 AM
Subject:[ADSM-L] Proctect Stgpool
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



If you are using container stgpools, how are you using the "protect
stgpool" command?  Are you running it throughout the backup window or
after backups have completed? When and how often do you run the "replicate
node" command?  Are you using the forcereconcile option?  Why or why not?

I'm getting ready to set up container replication and trying to understand
the implications of the various options.

David



Re: TSM Client upgrade on AIX

2016-04-27 Thread Matthew McGeary
Good morning Pam,

We encountered errors backing up filesystems with large numbers of files
until we set the root user ulimits to unlimited.  That fixed the problem
but can have other consequences, obviously.  Do you know if your AIX admin
tried changing the ulimits?
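For reference, a sketch of checking and raising the limits on AIX (the chuser attribute names follow AIX conventions, where -1 means unlimited; verify against your security policy before applying):

```
# Show current limits for the user running the backup
ulimit -a

# Raise root's limits to unlimited (AIX; -1 means unlimited)
chuser data=-1 stack=-1 rss=-1 fsize=-1 root
```

The change takes effect at the next login of the affected user, so the scheduler or backup session must be restarted afterwards.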

Regards,
__
Matthew McGeary
Senior Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   "Pagnotta, Pamela (CONTR)" <pamela.pagno...@hq.doe.gov>
To: ADSM-L@VM.MARIST.EDU
Date:   04/27/2016 07:47 AM
Subject:[ADSM-L] TSM Client upgrade on AIX
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hello,

Recently one of our AIX administrators upgraded the TSM client to 7.1.4.4
on her servers. Many of them started receiving errors like

calloc() failed: Size 31496 File ../mem/mempool.cpp Line 1092

I looked this up and the indication is that the AIX server could not
supply enough memory to TSM to complete the backup. We opened a ticket and
were told to try memoryefficientbackup with diskcachemethod. This did not
fix the issue.

In frustration the administrator reinstalled a TSM client version of
6.4.2.0 and is no longer experiencing the memory problems.

Any thoughts?

Thank you,

Pam

Pam Pagnotta
Sr. System Engineer
Criterion Systems, Inc./ActioNet
Contractor to US. Department of Energy
Office of the CIO/IM-622
Office: 301-903-5508
Mobile: 301-335-8177



Re: missing dsmlicense

2016-04-06 Thread Matthew McGeary
Hello Rick,

Last time I checked Passport Advantage, I only had 7.1.3 available for
download and the license file from 7.1.3 works fine in 7.1.4 and 7.1.5. If
you check your 7.1.3 files, you'll find the license there.

Can anyone from IBM tell me why 7.1.4 or 7.1.5 is not available on the
Passport site?
__
Matthew McGeary
Senior Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   "Rhodes, Richard L." <rrho...@firstenergycorp.com>
To: ADSM-L@VM.MARIST.EDU
Date:   04/06/2016 06:33 AM
Subject:[ADSM-L] missing dsmlicense
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hello,

We did a new install of TSM 7.1.4 just to check it out.
(ok, we screwed up a upgrade on a test instance and decided to do a
new/clean install.)

After doing this install, it comes up with these messages:

  03/17/16   08:27:06 ANR9649I An EVALUATION LICENSE for IBM System Storage
  Archive Manager will expire on 04/15/16. (PROCESS: 6)

When you try and run reg lic, it throws this:

  ANR9613W Error loading /opt/tivoli/tsm/server/bin/dsmlicense for
Licensing function.

That file does NOT exist.  It DOES exist on our existing TSM 6.4 systems.

IBM support thinks we have a mixed 64bit(dsmserv)/32bit(dsmlicense)
problem,
but the file simply does not exist.


Q)  Where do you get the dsmlicense file for v7.1.4?

Thanks

Rick










Re: Deduplication questions, again

2016-03-22 Thread Matthew McGeary
Arnaud,

I too am seeing odd percentages where container pools and dedup are
concerned.  I have a small remote server pair that protects ~23 TB of
pre-dedup data, but my container pools show an occupancy of ~10 TB, which
should be a data reduction of over 50%.  However, a q stg on the
container pool only shows a data reduction ratio of 21%.  Of note, I use
client-side dedup on all the client nodes at this particular site and I
think that's skewing the data reduction numbers on the container pool.
The 21% figure seems to be the reduction AFTER client-side dedup, not the
total data reduction.

It's confusing.
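A quick way to see how both percentages can be "right" at once: the server reports savings relative to the data it received, not the original pre-dedup total. The amount received after client-side dedup below is an assumed value for illustration.

```python
# Overall reduction vs. server-reported reduction after client-side dedup.

def reduction_pct(before: float, after: float) -> float:
    """Percentage saved going from `before` TB to `after` TB."""
    return round(100 * (1 - after / before), 1)

original = 23.0   # TB protected, before any dedup (from the post)
received = 12.7   # TB the server received after client-side dedup (assumed)
stored = 10.0     # TB occupied in the container pool (from the post)

overall = reduction_pct(original, stored)       # what the admin expects
server_side = reduction_pct(received, stored)   # what 'q stg' reports

print(overall, server_side)  # -> 56.5 21.3
```

With these assumed numbers, a true ~56% overall reduction shows up as only ~21% in q stg, matching the confusion described above.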

On the plus side, I just put in the new 7.1.5 code at this site and the
compression is working well and does not appear to add a noticeable amount
of CPU cycles during ingest.  Since the install date on the 18th, I've
backed up around 1 TB pre-dedup and the compression savings are rated at
~400 GB, which is pretty impressive.  I'm going to do a test restore today
and see how it performs but so far so good.
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   PAC Brion Arnaud <arnaud.br...@panalpina.com>
To: ADSM-L@VM.MARIST.EDU
Date:   03/22/2016 03:52 AM
Subject:[ADSM-L] Deduplication questions, again
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hi All,

Another question in regards of TSM container based deduplicated pools ...

Are you experiencing the same behavior as this? Using "q stg f=d"
targeting a deduped container-based storage pool, I observe the following
output:


q stg f=d

Storage Pool Name: CONT_STG
Storage Pool Type: Primary
Device Class Name:
 Storage Type: DIRECTORY
   Cloud Type:
Cloud URL:
   Cloud Identity:
   Cloud Location:
   Estimated Capacity: 5,087 G
   Space Trigger Util:
 Pct Util: 55.8
 Pct Migr:
  Pct Logical: 100.0
 High Mig Pct:

Skipped a few lines ...

   Compressed: No
Deduplication Savings: 0 (0%)
  Compression Savings: 0 (0%)
Total Space Saved: 0 (0%)
   Auto-copy Mode:
Contains Data Deduplicated by Client?:
 Maximum Simultaneous Writers: No Limit
  Protection Storage Pool: CONT_STG
  Date of Last Protection: 03/22/16   05:00:27
 Deduplicate Requires Backup?:
Encrypted:
   Space Utilized(MB):

Note the "deduplication savings" output ( 0 %)

However, using "q dedupstats" on the same stgpool, I get following output
: (just a snippet of it)

Date/Time: 03/17/16   16:31:24
Storage Pool Name: CONT_STG
Node Name: CH1RS901
   Filespace Name: /
 FSID: 3
 Type: Bkup
  Total Saving Percentage: 78.11
Total Data Protected (MB): 170

Date/Time: 03/17/16   16:31:24
Storage Pool Name: CONT_STG
Node Name: CH1RS901
   Filespace Name: /usr
 FSID: 4
 Type: Bkup
  Total Saving Percentage: 62.25
Total Data Protected (MB): 2,260

How can it be that I see dedup savings on one side but not on the
other?

Thanks for any enlightenment!

Cheers.

Arnaud



Re: Reg: disk based LAN free setup for node replication

2016-02-10 Thread Matthew McGeary
There are no SAN-based technologies for node replication currently
available within TSM.  However, I have been able to get relatively good
node replication performance over a WAN link with 30-40 ms delay by
expanding the TCPWINDOWSIZE from the default 63K to 1024K.  That, coupled
with running 30-50 replication sessions concurrently has been able to fill
a 1 Gbps pipe.  The maximum supported value for TCPWINDOWSIZE is 2048K,
but I found that to be of limited utility.
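As a sketch, the option goes in the server options file (dsmserv.opt) on both replication partners, followed by a server restart; the value shown is the one reported to work in the post:

```
* dsmserv.opt - enlarge the TCP window for high-latency WAN replication
TCPWINDOWSIZE 1024
```

Combined with a high replication session count (30-50 sessions, as described above), the larger window keeps more data in flight per session on a high-latency link.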

You should sign up for the TSM beta program, as they are actively
developing the node replication technology and big changes are on the
horizon.

Hope that helps,
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Srikanth Kola23 <srkol...@in.ibm.com>
To: ADSM-L@VM.MARIST.EDU
Date:   02/09/2016 03:49 PM
Subject:[ADSM-L] Reg: disk based LAN free setup for node
replication
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hello,

I have a WAN setup and we are doing node replication (Windows TSM
server 7.1.4 to Linux TSM server 7.1.4) to another TSM server; we use
FILE-type storage pools only. (The LAN-free setup is only for node replication.)

Since it is WAN, the data transfer is very slow even though we have a
robust setup. We are looking for a way to speed up node replication. Is
there any guidance for such a setup, something like SANergy?

Please share your views and suggestions.

Thanks & Regards,

Srikanth kola
Backup & Recovery
IBM India Pvt Ltd, Chennai
Mobile: +91 9885473450



Re: TSM troubles

2015-12-10 Thread Matthew McGeary
Hello Stef,

Since your active log is filling, do you have the SAN capacity to increase
it?  We use the maximum active log size of 512GB, which is overkill for
our intake but might be exactly what you need.  I'd also think that with
that much intake, a few more cores wouldn't hurt.  There's a lot of
processing involved with large dedup workloads and we typically use all 10
of our allocated Power 8 cores just doing server-side dedupe.

Hope that helps,
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Stef Coene <stef.co...@docum.org>
To: ADSM-L@VM.MARIST.EDU
Date:   12/10/2015 04:05 AM
Subject:[ADSM-L] TSM troubles
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hi,

Some time ago I mailed in frustration that using DB2 as TSM backend was a
bad
idea.

Well, here I'm again with the same frustration.

This time I just want to know who is using deduplication successful?
How much data do you process daily? Client or server or mixed?

We are trying to process a daily intake between 10 - 40 TB, almost all
file
level backup. The TSM server is running on AIX, 6 x Power7, 128 GB ram.
Disk
is on SVC with FlashSystem 840. Diskpool is 250 TB on 2 x V7000 with 1 TB
NLSAS disks, SAN attached. We are trying to do client based dedup.

The problem is that the active log fills up (128 GB) in a few hours. And
this
up to 2 times per day! DB2 recovery takes 4 hours because we have to do a
'db2stop force' :(


Stef



Re: dsm.sys

2015-10-23 Thread Matthew McGeary
Hello David,

I believe you can use the DSMI_DIR environment variable to specify the
location of the dsm.sys or opt files.
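A sketch of pointing the client at an alternate options directory (DSM_DIR applies to the backup-archive client, DSMI_DIR to API applications; the path is hypothetical):

```
export DSM_DIR=/usr/local/tsm/config    # dsm.sys / dsm.opt for the BA client
export DSMI_DIR=/usr/local/tsm/config   # dsm.sys for API-based applications
```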

Regards,
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   David Ehresman <david.ehres...@louisville.edu>
To: ADSM-L@VM.MARIST.EDU
Date:   10/23/2015 01:19 PM
Subject:[ADSM-L] dsm.sys
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



On AIX, can I put dsm.sys in a directory other than the ba install
directory?  If so, how do I point to it?

David



Re: A few questions about managing multiple Spectrum Protect servers with the Operations Center

2015-09-09 Thread Matthew McGeary
Hello Stefan,

I've been wanting to do the exact same thing myself but haven't tried it
because I thought that the documentation for the OC stated it was not
possible.  Your email prompted me to look again and now I can't find
anything to indicate it isn't possible.

As I understand it, adding a server into the OC does the following:
- defines the spoke server on the hub server just like doing a
server-server communication setup for config or virtual volumes
- defines an administrative account on the hub server to perform commands
for the OC
   - this account is unique to each hub server, which means you can
potentially have more than one account running on a spoke
- sets server alerts and triggers on the spoke based on the hub config

The issue I can see is that if the OC settings and alerts for a spoke are
stored on the spoke, rather than on the hub, conflicts will arise between
the hub server in your central location and the OC running on the spoke.
Both will attempt to set certain flags and override each other based on
the whichever one did the last operation.  This could get confusing.  If,
however, things like 'At Risk' settings and the like are stored within the
hub server itself, there should be no conflicts.

I decided to give it whirl and added one of my satellite TSM servers to my
head office OC about ten minutes ago.

I'll let you know how it goes.
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Stefan Folkerts <stefan.folke...@gmail.com>
To: ADSM-L@VM.MARIST.EDU
Date:   09/09/2015 06:42 AM
Subject:Re: [ADSM-L] A few questions about managing multiple
Spectrum Protect servers with the Operations Center
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hi David, thanks for your reply.

>I believe that the OC will want to create a named account (default name)
on each TSM server it monitors (it's ok to say TSM, I'm not IBM).  This
password would need to be the same across all the TSM servers.  I think
that's going to be the problem for you but I could be wrong.

But the admin picks the name. Couldn't I create one account for the HQ OC
instance that connects to all the servers and one account per branch
office
that connects only to the local branch office instance?

I don't know why that would not work.






On Wed, Sep 9, 2015 at 2:10 PM, Nixon, Charles D. (David) <
cdni...@carilionclinic.org> wrote:

> I believe that the OC will want to create a named account (default name)
> on each TSM server it monitors (it's ok to say TSM, I'm not IBM).  This
> password would need to be the same across all the TSM servers.  I think
> that's going to be the problem for you but I could be wrong.
>
> ---
> David Nixon
> System Programmer II, Enterprise Storage Team
> Carilion Clinic | 451 Kimball Avenue | Roanoke, VA 24016
> 540.224.3903 (Work)
> 540.525.8000 (Mobile)
>
> 
> From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of Stefan
> Folkerts [stefan.folke...@gmail.com]
> Sent: Wednesday, September 09, 2015 6:03 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] A few questions about managing multiple Spectrum
Protect
> servers with the Operations Center
>
> Hi all,
>
> I am looking into some things regarding the managing/monitoring of
multiple
> TSM (sorry, Spectrum Protect) servers using the Operations Center.
>
> I can build a test setup to look into it but that is not worth the
effort
> if somebody here can tell me it's not going to work. :-)
>
> Let's asume I have a HQ and 5 branch offices.
>
> I configure the branch offices to use Spectrum Protect replication to
the
> HQ location.
>
> I want the branch offices to be able to monitor and administer their
local
> Spectrum Protect server using a local OC setup on the BO server (version
> 7.1.3) but not see the server in the HQ.
>
> Next I want the OC install on the Spectrum Protect server in the HQ to
have
> a connection with the server in HQ but also al 5 of the branch offices.
>
> I can use seperate Spectrum Protect admin accounts.
> VPN access is all configured.
> It's just about being able to connect a Spectrum Protect server to
multiple
> instances of the Operations Center basically and creating something that
> looks like multi tiering this way.
>
> Any feedback is welcome, thanks in advance!
>
> Regards,
>Stefan
>
> 
>

Re: A few questions about managing multiple Spectrum Protect servers with the Operations Center

2015-09-09 Thread Matthew McGeary
A few things that could cause issues:

1) At-Risk status is stored as a node attribute on the server in question.
 So having two OC's manage a node will lead to conflicting At-Risk
assignments.
2) At-Risk intervals are also stored as a server attribute on each server.
 Same issue as above

It all depends on what you're looking to accomplish.  For my own part,
it'd be nice to have all my TSM servers at all the our locations visible
on the 'main' OC.  It's also nice for all my site IT staff to only see
their own TSM servers, which is why they all have their own OC installs.
They typically don't mess with OC settings because they use it as a
dashboard service only.  This means that I'm probably OK to merge all my
site TSM servers into the main OC without much fuss, since while the site
admins could change server settings such as 'At Risk' and alert intervals,
they're exceedingly unlikely to do so.

This is based quite a bit on a trust level that exists in our organization
that is not applicable everywhere.

What IBM really needs to do is institute some level of role-based
authentication and control.  One OC install should be able to service a
main office/branch model, where certain staff only have interest/rights to
see certain TSM workloads.


__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Evan Dunn <ed...@us.ibm.com>
To: ADSM-L@VM.MARIST.EDU
Date:   09/09/2015 02:24 PM
Subject:Re: [ADSM-L] A few questions about managing multiple
Spectrum Protect servers with the Operations Center
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



You guys are right that this is not a supported OC configuration.

But, to get this to work you would want to look at configuring the OC
manually (not using the Hub Configuration built in).

Take a look at serverConnection.properties in
(/opt/tivoli/tsm/ui/Liberty/usr/servers/guiServer/) on the OC you have
setup for the main office (use the OC to configure the main office).

Copy this to your spoke office, and point things to the spoke server
values (note: either set the IBM-OC- admin ID to never expire on all the
servers, or use the undocumented parameter SERVER_ADMIN_PASSWORD_DECRYPTED
with a unique admin)

Don't forget to restart the OC to pick up the changes!   (service
opscenter.rc restart)

By doing it this way, you can use the spoke as if it was a hub, without
making it a true hub.

With that said -- caveats:
1) replication visibility will only work on the main office OC
2) REPL_MODE=RECEIVE nodes won't be visible (not an issue in the
configuration you want), but would be an issues if replication from hub to
spoke (instead of spoke to hub)
3) this is unsupported, untested
4) other stuff may or may not work (I tried it out and didn't see things
explode)

(I do not represent IBM or Spectrum Protect)

Good luck!



Re: RTC on DB SSD disks

2015-07-27 Thread Matthew McGeary
Hello Robert,

The key with the real-time compression on v7000 is to buy the compression 
add-in card that IBM supplies.  That offloads all compression workloads 
from the primary CPU/RAM on the v7000 controllers and makes a big 
difference in IO throughput.

That said, TSM databases tend to be pretty IO intensive.  In our 
organization, the TSM database luns are the busiest in the company by a 
fair margin.  Unless you're really strapped for cash, I'd say you're 
better off to buy the capacity and not compress the TSM DB.  Even some IO 
degradation could make a big impact to your daily operations.

Hope that helps,
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   07/24/2015 01:13 AM
Subject:[ADSM-L] RTC on DB SSD disks
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hello all

We are in process installing a new Storage V7000 from IBM in our 
organization.

It comes with the RTC (Real-time Compression) feature, and I wonder whether
I can enable RTC on the new SSD LUNs that will hold the TSM DB.

Any experience or suggestions would be appreciated.

TSM server version 7.1.1.200 on O.S windows 2008R2 64B

Best Regards

Robert




Re: anr1880w and other db lock conflicts suddenly appearing

2015-07-27 Thread Matthew McGeary
Hello Gary,

Are you running deduplication?  The IBM technote on managing deduplication
and DB growth references this code directly:

http://www-01.ibm.com/support/docview.wss?uid=swg21452146

It can be caused by a long-running online reorg.  You can check db2 reorg
status by using db2pd or db2 list utilities as the TSM instance owner ID.

I've also seen locking messages due to an insufficiently high db2 LOCKLIST
parameter.  This can be adjusted by following this note:

http://www-01.ibm.com/support/docview.wss?uid=swg21430874
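As a rough sketch of the checks above (run as the TSM instance owner; TSMDB1 is the default server database name and an assumption here):

```shell
# Checks described above: in-flight reorgs and the current LOCKLIST value.
# TSMDB1 is the default TSM server database name; adjust if yours differs.
DB=TSMDB1
if command -v db2pd >/dev/null 2>&1; then
    db2pd -db "$DB" -reorgs index                 # table/index reorg status
    db2 list utilities show detail                # long-running online reorgs
    db2 get db cfg for "$DB" | grep -i locklist   # current LOCKLIST setting
else
    echo "DB2 tools not found; run this on the TSM server as the instance owner"
fi
```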

Hope that helps,
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Lee, Gary g...@bsu.edu
To: ADSM-L@VM.MARIST.EDU
Date:   07/27/2015 01:55 PM
Subject:[ADSM-L] anr1880w and other db lock conflicts suddenly
appearing
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Tsm server 6.2.5, RHEL 6.5.

I know, out of support, trying to get to 6.3.4 next week.

Suddenly I am getting anr1880w messages.

Processes being cancelled because of conflicting locks on table
af_bitfiles.

Never seen these before.
Happened while reclamation was running.
Also similar this morning during a delete filespace.

Is there any action I can take to mitigate this?

Message anr1880w is not in my copy of the 6.2 messages manual.



Re: TSM VE 7.1.2 Mount Proxy Pairs

2015-06-19 Thread Matthew McGeary
Hello David,

I reused my existing Windows and Linux datamovers for the proxy mount
nodes.  For the Windows box, I had to install the iSCSI service from the
Windows server manager and then I used the configuration wizard on the
datamover GUI to do the config.

For the Linux box, auto config is not available, so there is a document in
the 7.1.2 infocenter that details what's required to set up the Linux
mount node.  It's not much work.

So I now have two CAD services/daemons running on each of my datamover
nodes.  One for the datamover, one for the proxy node.
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   David Ehresman david.ehres...@louisville.edu
To: ADSM-L@VM.MARIST.EDU
Date:   06/19/2015 09:52 AM
Subject:[ADSM-L] TSM VE 7.1.2 Mount Proxy Pairs
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



The 7.1.2 doc indicates I need Proxy Pairs for VE mount and instant
restore.  But I can't find the doc that tells me how to set them up.  I
appear to need a Windows and a Linux server.  Do they have to be
dedicated?  Can I/should I use a data mover as one of the proxies?  Do I
need to install anything on the proxy servers?  Has anyone set these up
and willing to offer some advice?

David



Re: ARCHLOG question

2015-05-17 Thread Matthew McGeary
Hello Robert,

The archlog space is defined by the filesystem it's hosted upon.  So you
can increase (or decrease) the available archive log space by changing the
size of the filesystem.

TSM will simply report on the additional freespace and continue to cut
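On a Linux server, growing the archive log filesystem online might look like the sketch below (volume and mount point names are assumptions; on a Windows server the equivalent is extending the volume in Disk Management):

```shell
# Assumed LVM volume and mount point; adjust for your environment. The TSM
# server does not need to be halted, only the filesystem grown online.
LV=/dev/tsmvg/archlog
MOUNT=/tsm/archlog
if [ -b "$LV" ]; then
    lvextend -L +200G "$LV"    # grow the logical volume by 200 GB
    xfs_growfs "$MOUNT"        # grow an XFS filesystem online
    # for ext4 use: resize2fs "$LV"
else
    echo "adjust LV=$LV and MOUNT=$MOUNT for your environment"
fi
```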
logs until the filesystem fills.

Regards,
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   05/17/2015 04:31 AM
Subject:[ADSM-L] ARCHLOG question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi to All

Quick question: I want to increase my ARCHLOG from 200 GB to 400 GB. Can I
do it on the fly, i.e. without halting the TSM server?

TSM server Windows 2008R2 64B
TSM server version 7.1.1.100
TSM client 7.1.2.0

Best Regards Robert



Node replication and authorized nodes, proxies, etc

2015-05-05 Thread Matthew McGeary
Hello folks,

We are in the process of implementing node replication and I'm having some
difficulty with the reporting of success/failure on replication tasks. The
way that our environment is structured is that selected critical nodes
will replicate to an offsite server but our non-critical data will not.
That said, we have a number of nodes (DB2 API clients and VMWare in
particular) that are associated in some way to critical production nodes
(such as proxy relationships, backup access for cross-node restores, etc)
that are not themselves critical and should not be replicated.

However,  when a replicate node command is issued, it returns a Failure
code because these associated nodes are not enabled for replication.

This makes reporting on replication a pain and also obscures 'actual'
failures in the replication process that I need to keep on top of.

So my question is this: for those of you that are using node replication
for a subset of your nodes, how are you managing your node relationships
so that the replicate node command returns a Success code?

Thanks!
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com


Re: TSM 4 VE proxy crash

2015-04-13 Thread Matthew McGeary
Hello Remco,

I had this very same problem when we implemented TSM for VE when 6.4 came 
out.  I found that, during the snapshot process, the datamover tried to 
create a snapshot of itself, which caused the process to crash.  By 
design, the datamover will attempt to remove stale snapshot data when 
backup operations are started, which means that it tries to remove all 
mounted disks from the system that were added by the snapshot process. 
Unfortunately, this includes the datamover's own disk, which causes 
another crash.

And so the loop continues.

Snapshot data is stored in the windows\temp\vmware-inc folder.  You'll 
have to reboot the datamover into recovery mode so you can delete this 
folder.  It's locked during regular Windows boots.  Once that folder is 
deleted, you should be able to reboot into standard mode and perform 
scheduled (or GUI-initiated) backups normally.
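From a recovery-mode command prompt on the datamover, the cleanup amounts to removing that folder and rebooting. The sketch below only prints the Windows command to type (path as described above):

```shell
# The Windows command to run from the recovery console; printed rather than
# executed, since this is a sketch and not run on the datamover itself.
STALE='C:\Windows\Temp\vmware-inc'
printf 'rd /s /q "%s"\n' "$STALE"    # then reboot into standard mode
```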

Good luck and let us know how it goes!
__
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Remco Post r.p...@plcs.nl
To: ADSM-L@VM.MARIST.EDU
Date:   04/13/2015 09:30 AM
Subject:[ADSM-L] TSM 4 VE proxy crash
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi All,

I’ve been struggling with a TSM4VE issue for the past few weeks and so far 
IBM has been unable to help me. We’re running TSM4VE on a virtual machine 
and made the mistake of having that VM make a VM backup of itself. Ever 
since, even now the VM is no longer included in the VM backups TSM 
attempts to detach the VM’s own active disks from the VM. Of course this 
is successful, but a windows VM without a system disk crashes within 
seconds. This only happens during scheduled backups that run as ’local 
system’, not when I attempt a VM backup manually using my own user 
account.

My hypothesis is that TSM somehow retains information about which disks it 
has hot-added to the VE proxy, but I don’t know where or how. And 
apparently, nor does my support engineer. How do I, apart from completely 
reinstalling the VE proxy, cleanup this mess?

This all is TSM 6.4.2.0 and TSM4VE 6.4.2.0 on windows 2008 r2.

-- 

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622




Re: So long, and thank you...

2015-04-06 Thread Matthew McGeary
Wanda,

Best wishes to you in your retirement.  You've always been courteous and
helpful and your input into this group's questions will be sorely missed.

Regards,
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Prather, Wanda wanda.prat...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Date:   04/03/2015 03:11 PM
Subject:[ADSM-L] So long, and thank you...
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



This is my last day at ICF, and the first day of my retirement!

I'm moving on to the next non-IT-support chapter in life.


I can't speak highly enough of the people who give of their time and
expertise on this list.

I've learned most of what I know about TSM here.


You all are an amazing group, and it has been a  wonderful experience in
world-wide collaboration.


Thank you all!


Best wishes,

Wanda



Re: Increase performance with SSD disls

2015-03-10 Thread Matthew McGeary
Hello Robert,

We use SSD arrays for both our database and our active log.  That said, 
unless you are using TSM deduplication or node replication, SSD disks 
should not be required for good server performance.  Standard 15K SAS 
drives are more than sufficient for regular server operations when dedup 
and replication are not used. 

Your restore steps appear correct and should function as desired.

Regards,
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   03/10/2015 07:28 AM
Subject:[ADSM-L] Increase performance with SSD disls
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi to all

To increase performance on my TSM 7.1.1.0 servers, I am thinking of 
installing the databases on SSD disks.

I have some questions:


- Would it be more efficient to put the active log on the SSD disks too?

- Can I move the database and the active log in the same step, as follows:

  - In dsmserv.opt, change activelogdir to the new path on the SSD disks
    (before I run the restore db)

  - dsmserv restore db todate=today on=dbdir.file (where dbdir.file contains
    the new paths on the SSD disks)
Best Regards

Robert



Re: TSM EXPORT NODE TO NODE

2015-03-03 Thread Matthew McGeary
Hello Ricky,

You can do an export node with a mergefilespace=yes option, but I've found
that to be problematic when replicating application data.  You could
rename the node on the source server first, before exporting, then the
node will import with a different name and will not have any disruption to
the existing data.
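A minimal sketch of the rename-then-export approach, with hypothetical node and server names (the commands belong in an administrative session on the source server):

```shell
# Hypothetical names: EXCH01 is the node, EXCH01_OLD the renamed copy, and
# NEWTSM an already-defined target server. Admin credentials are placeholders.
CMD_RENAME='rename node EXCH01 EXCH01_OLD'
CMD_EXPORT='export node EXCH01_OLD filedata=all toserver=NEWTSM'
for c in "$CMD_RENAME" "$CMD_EXPORT"; do
    if command -v dsmadmc >/dev/null 2>&1; then
        dsmadmc -id=admin -password=secret "$c"
    else
        printf 'dsmadmc> %s\n' "$c"   # no TSM CLI here; just show the command
    fi
done
```

Because the node arrives under the new name, the existing registration on the target server is left untouched.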

Hope that helps
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Plair, Ricky rpl...@healthplan.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/03/2015 09:57 AM
Subject:[ADSM-L] TSM EXPORT NODE TO NODE
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Good Morning All,

We have exchange data on an old TSM server that we are trying to
decommission. What I need to do is,  export the exchange data that still
exists on the old TSM server to our new TSM server that was just built
approximately 6 months ago. So my question is, can I export the node from
the old TSM server to the new TSM server, even though, the new TSM server
already has that node name registered.

In other words, the node exists on both servers, but the domains are
different. If I start the export command, will it export the data from the
old TSM server to the same node that already exists on the new TSM
server?

I appreciate the help

Ricky M. Plair
Storage Engineer
HealthPlan Services
Office: 813 289 1000 Ext 2273
Mobile: 757 232 7258


_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_
CONFIDENTIALITY NOTICE: This email message, including any attachments, is
for the sole use of the intended recipient(s) and may contain confidential
and privileged information and/or Protected Health Information (PHI)
subject to protection under the law, including the Health Insurance
Portability and Accountability Act of 1996, as amended (HIPAA). If you are
not the intended recipient or the person responsible for delivering the
email to the intended recipient, be advised that you have received this
email in error and that any use, disclosure, distribution, forwarding,
printing, or copying of this email is strictly prohibited. If you have
received this email in error, please notify the sender immediately and
destroy all copies of the original message.



Re: Is TSM for VE 7.1.1.1 pagefile aware? Is it contrained to a single process per datamover? What constitutes a successful backup?

2015-03-02 Thread Matthew McGeary
Hello Steve,

There's no mention of pagefile 'awareness' in the documentation, so it's
safe to say that the backup will consist of all changed blocks, regardless
of what's stored on them.  As for threaded behavior, I've pinned all the
cores on my datamover during heavy parallel backups using deduplication,
so while it may only register as a single process in the Task Manager, it
seems clear that TSM is acting like a multi-threaded application.  As for
your last point, I don't know why IBM changed the status codes for VM
schedules from 'Failed' to 'Completed' but I suspect it has something to
do with VMWare's abysmal ability to create quiesced snapshots on a
consistent basis.

We backup ~500 VMs on a daily basis and usually have 4-5 failures, all due
to quiescing errors.  I guess IBM got tired of customer complaints that
their VM backup schedules 'always failed' and quietly changed the status
code to make the product look a little better.


__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Schaub, Steve steve_sch...@bcbst.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/02/2015 11:24 AM
Subject:[ADSM-L] Is TSM for VE 7.1.1.1 pagefile aware?  Is it
contrained to a single process per datamover?  What constitutes a
successful backup?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



1.   Is VE pagefile aware?  I.E. is it smart enough to know not to
bother backing up changed blocks from within the pagefile.sys file?

2.   It appears to me that a given datamover only uses a single
Windows process, even for multiple concurrent backups.  That would seem to
constrain it to a single core.  Can someone confirm/deny?  If it is, does
it make sense to define multiple datamovers per proxy server in order to
take advantage of multi-core machines?

3.   Why do schedules with failed VM backups show successful?
02/28/2015 02:37:11 Total number of virtual machines failed: 4
02/28/2015 02:37:11 Total number of virtual machines processed: 375
02/28/2015 02:37:11 Scheduled event 'VE_INCR_N01_BCBST_DM02' completed
successfully.

Thanks,

Steve Schaub
Systems Engineer II, Backup/Recovery
Blue Cross Blue Shield of Tennessee

-
Please see the following link for the BlueCross BlueShield of Tennessee
E-mail disclaimer:  http://www.bcbst.com/email_disclaimer.shtm



Re: TSM VE question

2015-02-25 Thread Matthew McGeary
Hello Robert,

That message is nothing to worry about.  Configuring TSM for VE using the 
IBM-supplied wizard only provisions one VMCLI node, even when there are 
multiple data movers in the configuration.

Regards,
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   02/25/2015 04:44 AM
Subject:[ADSM-L] TSM VE question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi to all

I am backing up my VMware environment with TSM for VE 7.1.1.0. I have 
two data movers, VMPROXY and VMPROXY2.

My configuration is as follows:

CLDVCENTER
    VMCLI
        MYSITE_DATACENTER
            LOC_MP_WIN / LOC_MP_LNX
                VMPROXY
                VMPROXY2
Every time I connect through one of my data movers, I see this entry in 
the actlog:

02/25/2015 11:30:03  ANR1639I Attributes changed for node VMCLI: TCP 
Name from   VMPROXY2 to VMPROXY, TCP Address from XXX.XX.XX.XX to
  YYY.YY.YY.YY, GUID from 
xx.xx.xx.xx.xx.xx.xx.xx.xx to yy.yy.yy.yy.yy.yy.yy.yy.yy (SESSION: 109625)

I wonder if this is the correct configuration (by the way, it’s working 
well) or whether I need another VMCLI node.

Best Regards

Robert




Re: Convert to TSM Devices or use the rmt device

2015-01-22 Thread Matthew McGeary
Hello Jason,

I use the rmt devices provided by the IBM tape driver.  If you're using
non-IBM drives, I think you still have to use the TSM device driver.

Regards,
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Barkes, Jason jason.bar...@sse.com
To: ADSM-L@VM.MARIST.EDU
Date:   01/22/2015 08:35 AM
Subject:[ADSM-L] Convert to TSM Devices or use the rmt device
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi,
I am installing new systems and traditionally would have converted
my rmt devices to mt TSM devices.  Is this now a legacy activity to
perform?

We are using LTO Ultrium 6 drives and whilst interrogating the rmt device
I can see a lot more tuneable attributes than when converted to mt device
and using the TSM devices.

lsattr -El mt61
FCPORT_ID   0x64f200           FC Port ID                   True
LUNMAP_ID   0x0                Mapped LUN ID of the device  True
PRODUCT_ID  Ultrium 6-SCSI     Product ID of the device     False
WW_NAME     0x500104f000axxx1  WW Name of the Port          False
block_size  1024               Block Size                   True



lsattr -El rmt61
block_size      0               BLOCK size (0=variable length)             True
delay           45              Set delay after a FAILED command           True
density_set_1   0               DENSITY setting #1                         True
density_set_2   0               DENSITY setting #2                         True
extfm           no              Use EXTENDED file marks                    True
location                        Location Label                             True
lun_id          0x0             Logical Unit Number ID                     False
mode            yes             Use DEVICE BUFFERS during writes           True
node_name       0x500104f00cx   FC Node Name                               False
res_support     yes             RESERVE/RELEASE support                    True
ret_error       no              RETURN error on tape change or reset       True
rwtimeout       1200            Set timeout for the READ or WRITE command  True
scsi_id         0xc8dc00        SCSI ID                                    False
var_block_size  0               BLOCK SIZE for variable length support     True
ww_name         0x500104f00cxx  FC World Wide Name                         False

Many Thanks, Jason.
**
The information in this E-Mail is confidential and may be legally
privileged. It may not represent the views of the SSE Group.
It is intended solely for the addressees. Access to this E-Mail by anyone
else is unauthorised.
If you are not the intended recipient, any disclosure, copying,
distribution or any action taken or omitted to be taken in reliance on it,
is prohibited and may be unlawful.
Any unauthorised recipient should advise the sender immediately of the
error in transmission. Unless specifically stated otherwise, this email
(or any attachments to it) is not an offer capable of acceptance or
acceptance of an offer and it does not form part of a binding contractual
agreement.

SSE plc
Registered Office: Inveralmond House 200 Dunkeld Road Perth PH1 3AQ
Registered in Scotland No. SC117119
Authorised and regulated by the Financial Conduct Authority for certain
consumer credit activities.
www.sse.com

**



Re: tsm for ve, nfs datastores and cbt

2015-01-15 Thread Matthew McGeary
Lee,

The VMWare documentation is a bit confusing, but it appears that while NFS
datastores may support CBT, they don't support it fully.  Any backup
software that leverages the VMWare APIs for backup will have the same
issue as TSM for VE when backup is initiated on a VM that's stored on a
NFS datastore.  In order to perform incremental-style backups on VMs
stored on an NFS datastore, you'd need something like VEEAM, which has its
own changed-block-tracking mechanism and doesn't leverage the VMWare API
for that purpose.

__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Lee, Gary g...@bsu.edu
To: ADSM-L@VM.MARIST.EDU
Date:   01/15/2015 08:14 AM
Subject:[ADSM-L] tsm for ve, nfs datastores and cbt
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



We are running vmware 5.1, using tsm for ve 6.4.2, and using nfs data
stores.
It appears that change block tracking does not work with tsm for ve on nfs
data stores.
This is forcing full backups every night.
Our environment is currently over 14 tB and growing.
With this limitation, we will not be able to continue using tsm for ve.

Have any on the list encountered this situation, and if so, have you found
a work around?

Further information is here.
http://www-01.ibm.com/support/docview.wss?uid=swg21681916



Re: TSM Scheduled SQL Backups

2015-01-15 Thread Matthew McGeary
Hello David,

We use the TDP for SQL client to perform our backups, using the sample
scripts that are shipped with the client.  The scripts are scheduled with
the TSM B/A client, using a separate scheduler that performs a 'command'
action.  For systems with multiple databases that you want to keep
separate nodes for, simply repeat the process of creating a TSM scheduler
using a separate dsm.opt file.
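On Windows, one scheduler service per node is typically installed with dsmcutil from the BA client. A sketch with hypothetical node name and option-file path, printed rather than executed here:

```shell
# Hypothetical node name and option-file path; dsmcutil ships with the
# BA client. Repeat once per SQL database/node, each with its own dsm.opt.
DSMCUTIL='C:\Program Files\Tivoli\TSM\baclient\dsmcutil.exe'
NODE=SQLSRV1_SQL2
OPTFILE='C:\Program Files\Tivoli\TSM\TDPSql\dsm_sql2.opt'
printf '"%s" install scheduler /name:"TSM Sched %s" /node:%s /optfile:"%s" /password:secret /startnow:yes\n' \
    "$DSMCUTIL" "$NODE" "$NODE" "$OPTFILE"
```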

__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   David Ehresman david.ehres...@louisville.edu
To: ADSM-L@VM.MARIST.EDU
Date:   01/14/2015 02:31 PM
Subject:[ADSM-L] TSM Scheduled SQL Backups
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Has anyone done TSM Server Scheduling of SQL backups where there are
multiple SQL databases on a single server, each using a unique TSM node
name for backups?  Any hints?  I'm a TSM server admin but know next to
nothing about SQL or TSM for SQL.

David



Re: DB2 LOCKLIST parameter

2015-01-12 Thread Matthew McGeary
Hello Thomas,

We've had issues with this parameter and I ended up setting it to a
stupidly high number because I had the RAM to spare and was tired of
seeing server hangs due to locklist exhaustion.  I ended up using 4X the
maximum recommended value of 1,220,000 and rounded it to an even
6,000,000.  We use deduplication and will see 4-7 TB ingest, with another
3-5 TB migration and a further 3-6 TB of reclamation during a given day.
If you're not using dedup on wide scale, I think you'd be safe using the
recommended value for 5TB movement.
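The parameter is changed through the DB2 configuration; a sketch assuming the default TSMDB1 database name (LOCKLIST is counted in 4 KB pages):

```shell
# Run as the TSM/DB2 instance owner. 6000000 is the value described above;
# TSMDB1 is the default server database name and an assumption here.
DB=TSMDB1
PAGES=6000000                  # LOCKLIST is measured in 4 KB pages
if command -v db2 >/dev/null 2>&1; then
    db2 connect to "$DB"
    db2 update db cfg for "$DB" using LOCKLIST "$PAGES"
    db2 get db cfg for "$DB" | grep -i locklist   # verify the new value
else
    echo "db2 CLI not found; run on the TSM server as the instance owner"
fi
```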
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Thomas Denier thomas.den...@jefferson.edu
To: ADSM-L@VM.MARIST.EDU
Date:   01/09/2015 02:25 PM
Subject:[ADSM-L] DB2 LOCKLIST parameter
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Inventory expiration processes on one of our TSM servers have been failing
occasionally with no obvious explanation. We were able to get trace data
for the most recent failure. IBM reviewed the trace data and advised us to
increase the DB2 LOCKLIST parameter. They referred us to a technote at
http://www-01.ibm.com/support/docview.wss?uid=swg21430874 for information
on calculating the new value for the parameter.

The instructions in the document are puzzling, to put it politely. The
document notes that deduplication greatly increases the need for row
locks, but recommends the same LOCKLIST setting whether deduplication is
being used or not. The recommended value is based on gigabytes of data
moved rather than number of files moved. This sounds reasonable for
environments with deduplication but is inexplicable in environments
without deduplication. The document makes reference to concurrent data
movement. In one of the examples given, all incoming client data in a
four hour period and all data moved by migration in the same period is
counted as concurrent data movement. The other example treats data
movement spread over eight hours as concurrent. As far as I can see,
this makes sense only if every database transaction triggered by client
sessions and background processes remains uncommitted until all data
movement activity ends.

One of our clients is a 32 bit Windows file server with 18 million files
on rather slow disk drives. The backup for this client starts in the
middle of our intended backup window and almost always runs through the
day and an hour or two into the next backup window. The TSM server can run
for months at a time with at least one client session in progress at all
times. The examples in the technote seem to imply that all data movement
occurring during those months should be counted as concurrent.

Is there any documentation available in which the criteria for selecting a
LOCKLIST setting are explained more clearly?

We are currently using TSM 6.2.5.0 server code running under zSeries
Linux. We are preparing to upgrade to TSM 7.1.1.100. We are not currently
using deduplication. We may use deduplication for backups of databases on
client systems after the upgrade. We don't have the CPU and memory
resources to use deduplication for all client files.

Thomas Denier
Thomas Jefferson University
The information contained in this transmission contains privileged and
confidential information. It is intended only for the use of the person
named above. If you are not the intended recipient, you are hereby
notified that any review, dissemination, distribution or duplication of
this communication is strictly prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

CAUTION: Intended recipients should NOT use email communication for
emergent or urgent health care matters.



Re: 6.3.5.1 TSM server brings new annoyances

2015-01-06 Thread Matthew McGeary
Zoltan,

You can remove the exemption from that table in your dsmserv.opt.  If you
are not running dedupe, I would guess that the online reorg would work ok
but there is a chance that the reorg process will exhaust the active log
and halt the server.

See this for info on the dsmserv option:

https://www-01.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.srv.ref.doc/r_opt_server_disablereorgindex.html
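For illustration only (the exact table list in your dsmserv.opt may differ), the exclusion options look along these lines; deleting BACKUP_OBJECTS from the list re-enables its online reorg after a server restart:

```text
* dsmserv.opt fragment (table names illustrative, not necessarily yours)
DISABLEREORGTABLE BF_AGGREGATED_BITFILES,BACKUP_OBJECTS
DISABLEREORGINDEX BACKUP_OBJECTS,ARCHIVE_OBJECTS
```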
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Zoltan Forray zfor...@vcu.edu
To: ADSM-L@VM.MARIST.EDU
Date:   01/05/2015 02:25 PM
Subject:[ADSM-L] 6.3.5.1 TSM server brings new annoyances
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Recently updated all of my TSM Linux servers to 6.3.5.1 and am seeing
2-issues.

The first is related to a RedHat Kernel issue. Originally thought to be
RHEL 6.6  / 2.6 kernel problem but now seems to go back to 6.5 (per the
IBM
person I am working with).  The doc is here:

http://www-01.ibm.com/support/docview.wss?uid=swg21691823

The second issue I am seeing is these messages popping up on most if not
all servers.

1/5/2015 2:38:09 PM ANR3497W Reorganization is required on excluded table
BACKUP_OBJECTS. The reason code is 2.

What is the recommended way to resolve this?  I have read some docs and
they talk about doing offline reorgs (no, I do not want to do this).  We
are not doing dedup and all servers are beefy with 48GB+ of memory - most
at 96GB so that should not be an issue.

Suggestions?
--
*Zoltan Forray*
TSM Software  Hardware Administrator
BigBro / Hobbit / Xymon Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



Re: 6.3.5.1 TSM server brings new annoyances

2015-01-06 Thread Matthew McGeary
Zoltan,

Yes, the default before 6.3.5.1 / 7.1.1 was to allow reorgs of all tables.
 The new options are an attempt to get around the larger tables causing
issues for shops running dedupe with a large database.  Fun fact, the
maximum log size in TSM 7.1.1 is 512 GB and when I attempted an online
reorg of the 'problem tables,' it exhausted the log even at that size.

__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Zoltan Forray zfor...@vcu.edu
To: ADSM-L@VM.MARIST.EDU
Date:   01/06/2015 08:06 AM
Subject:Re: [ADSM-L] 6.3.5.1 TSM server brings new annoyances
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Matthew,

Thanks for the link.  I assume the default before 6.3.5.1 was NOT
excluding
anything from regular reorg/reindexing so I don't see why I should have a
problem, now.  I will make sure to max the actlog to 128GB to be sure I
don't run out.  I see it automagically added these DISABLEs to the
dsmserv.opt file.




Re: TSM for VE question

2014-11-06 Thread Matthew McGeary
It's easy to find this out from the admin CLI.  Just do a q sched * * 
nodes=<datamover>.  That will show all schedules associated with a given 
node. 

ie:

tsm: TSMSK> q sched * * nodes=stnsvpmover02

Domain   * Schedule NameAction Start Date/Time   
Duration Period Day
 -  --   
 -- ---
VE-PRODDAILY_INTEL_PROD Backup 12/06/2013 20:00:00 
 1 H1 D  Any
 VM
VE-PRODWEEKLY_TEMPLATE- Backup 12/06/2013 16:00:00 
 1 H1 D  Sat
S_NEWVM

Hope that helps.
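For reporting across many datamovers at once, the same information can also be pulled in one pass with a SQL select (select domain_name, schedule_name, node_name from associations) run through dsmadmc -dataonly=yes -commadelimited, and then grouped per node. A minimal sketch, assuming comma-delimited rows in that column order (the sample node name below is taken from the output above):

```python
import csv
from collections import defaultdict

def schedules_by_node(lines):
    """Group 'domain,schedule,node' rows (dsmadmc -commadelimited output)
    into {node: [(domain, schedule), ...]}."""
    result = defaultdict(list)
    for domain, schedule, node in csv.reader(lines):
        result[node].append((domain, schedule))
    return dict(result)

# Sample rows as dsmadmc would emit them with -dataonly=yes -commadelimited
rows = [
    "VE-PROD,DAILY_INTEL_PROD,STNSVPMOVER02",
    "VE-PROD,WEEKLY_TEMPLATES_NEWVM,STNSVPMOVER02",
]
print(schedules_by_node(rows))
```

This avoids running q sched once per datamover node when the environment grows.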
__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Robert Ouzen rou...@univ.haifa.ac.il
To: ADSM-L@VM.MARIST.EDU
Date:   11/06/2014 02:45 AM
Subject:[ADSM-L] TSM for VE question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi to all

Using TSM for VE 7.1.1 to back up our VMware environment, using for now 
two virtual proxy machines, datamover nodes (vmproxy1 and vmproxy2). I 
configured the schedules through the TSM for VMware UI.

Some VMs are scheduled through vmproxy1 and some through vmproxy2. I am 
trying to figure out how I can get a report showing which datamover node 
each VM's schedule runs on; maybe a script?

Best Regards

Robert




Very slow DB RESTORE performance

2014-10-08 Thread Matthew McGeary
Folks,

I'm restoring my TSM DB to a new host and have run into some extremely
slow performance restoring the DB.  It seems to process a few hundred MB
and then pause for 5-10 seconds before starting up again.  The new host is
a Power8 S822 with 20 cores and 256GB of RAM, running AIX.  The tape drives
have two 4Gb HBAs connected and the disk has four 8Gb.  I'm looking at a
2.3 TB database that will take approx 13 hours to restore.

Any ideas on why it's slow and how it could be sped up would be
appreciated.

Thanks

__
Matthew McGeary
Technical Specialist - Operations
PotashCorp
T: (306) 933-8921
www.potashcorp.com
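The 13-hour estimate for a 2.3 TB database implies a sustained average of roughly 50 MB/s, which is low for that hardware and consistent with the stalling behaviour described. A back-of-envelope check (the 50 MB/s figure is my inference from the numbers above, not stated in the thread):

```python
def restore_hours(db_tb, mb_per_s):
    """Hours to restore a database of db_tb TiB at a sustained mb_per_s MiB/s."""
    return db_tb * 1024 * 1024 / mb_per_s / 3600

# 2.3 TB at ~50 MB/s works out to roughly the 13 hours quoted above.
print(round(restore_hours(2.3, 50), 1))
```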



Backup STG expected throughput from 50G FILE devclass to LTO4 tape

2014-09-17 Thread Matthew McGeary
Hello All,

We've been struggling with somewhat anemic backup performance from our
dedup storage pools to LTO4 tape over the past year or so.  Data is
ingested from clients and lands directly into our dedup pool, but doesn't
get deduplicated until the offsite operation is complete.
(deduprequiresbackup=yes)  Our dedup pool is comprised of 50GB volumes,
residing on a V7000 pool of NL-SAS drives.  The drives don't appear taxed
(utilization is consistently in the 30-40% range) but average throughput
from the storage pool to tape is only 100-100 MB/s.  This is starting to
present challenges for meeting the offsite window and I am stumped as to
how I might improve performance.

The TSM server is running on AIX and has four 8Gb paths to the storage,
running sddpcm.  Mountpoints containing the data are mounted rbrw and are
JFS+ volumes.  Our tape drives are running off of two dedicated 4Gb HBAs
and our backup DB throughput is excellent, averaging 350-400 MB/s.

For those of you that are running TSM dedup, how are you managing your
offsiting process?  Are you using a random devclass pool as a 'landing
zone' for backup/offsite operations before migration to the dedup stg? Are
there tuning parameters that you have tweaked that have shown improvement
in FILE devclass stg pools to LTO devices?

Any and all tips would be appreciated.  I've been through the parameters
listed in the perf guide and have allocated large memory pages to the TSM
server but I haven't seen much, if any, improvement.

Thanks!

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921


Re: Backup STG expected throughput from 50G FILE devclass to LTO4 tape

2014-09-17 Thread Matthew McGeary
Thanks for the quick replies Wanda and Sergio!

We're running TSM server 7.1.0.100 at the moment and I'm not sure if the
fixes contained in 6.3.4.300 are also included in 7.1.0.100.

-The dedup storage pool is backed by 5 luns on the V7000 which are
presented to AIX in a volume group.  Then there are 10 striped volumes in
the VG, with a 256K stripe size to align with TSM I/O size.  This has had
a remarkable effect in evening out the IO load on the V7000 and driving
full utilization of all arrays in the extent pool.
-The database is on one lun presented to AIX and split into 4 volumes. The
lun resides in the SSD extent pool and consistently sees IOPS in the
15-20K range when taxed.
-The NUMOPENVOLSALLOWED parameter was set to 25, as specified in the perf
guide.  Based on Wanda's recommendation, I set that value to 500, which is
high enough to cover all the mounted volumes typically seen in a backup
stg command.  I'll report back tomorrow to see if that change made a
difference in throughput.


Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Prather, Wanda wanda.prat...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Date:   09/17/2014 09:48 AM
Subject:Re: [ADSM-L] Backup STG expected throughput from 50G FILE
devclass to LTO4 tape
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Been there done that.
BACKUP STGPOOL for a sequential-dedup-pool-which-hasn't-deduped-yet
performance is a nightmare.
Have been fighting this for a year.

We ingest and dedup 4-6 TB of data per day, in a 45TB sequential pool.
We had to back off and land the data on a random pool, backup stgpool from
there, then migrate to the dedup pool, as you suggested.

There are things you can try before giving up:

1.  Most likely culprit:
Do Q OPT, look for NUMOPENVOLSALLOWED.  Suppose it's set to 100.
Now when your BACKUP STGPOOL is running, do a Q MOUNT.  If you have 100 of
your dedup pool files open, then you need to increase the value of
NUMOPENVOLSALLOWED.  Rinse and repeat until your BACKUP STGPOOL isn't
bumping up against that limit (which means I guess it's waiting to get a
file handle?  I guess that's what these are?)

I haven't found any documentation to explain why the default is set low,
or what the drawbacks are of setting the number too high, or what too
high might be.  All I know is that I discovered the problem and when I
raised the number things got better with no apparent bad side effects.  I
finally set it to a number larger than the number of files in my dedup
pool.

2.  Server code
You don't say what server version you are running; get to 6.3.4.300.
Otherwise you can have things bogged down in the DEDUPDELETE queue.  Which
shouldn't affect the backup stgpool I guess, but seems to eventually
affect everything related to that pool.

3.  DB I/O
Disk storage we are using delivers 1 IOPS/sec; 90% of the I/O is on
the TSM/DB + active log.  So even if you are getting good throughput on
the DB Backup, what you need to be looking at is the DB I/O response time.
 Should be < 5 ms.

And, your DB needs to be split across 8 or more filesystems so that DB2
will start more concurrent I/Os.  We went from 2 luns, to 4, to 8, then
moved those 8 to SSD, got a performance boost each step.   (You'll find
all sorts of discussion about whether they need to be filesystems or LUNS;
that depends on what kind of disk you have.  Point is, DB2 needs to start
more concurrent I/O's, which means the DB needs to be spread across 8
different directories, or more.  And the disk behind those directories
needs to support as much I/O as DB2 wants to do, as fast as it needs to do
it to get 5 ms or less response.)  If you can't get a good feel for that,
I'd open a performance ticket and have them interpret a server performance
trace for you.

That's all I know so far.
Wanda
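The "rinse and repeat" check in point 1 can be scripted: count the mounted FILE volumes (e.g. from q mount output) and compare against the current NUMOPENVOLSALLOWED. A rough sketch; the doubling policy is my own illustration, not an IBM recommendation:

```python
def next_numopenvols(current_limit, mounted_volumes, headroom=0.8):
    """Suggest a new NUMOPENVOLSALLOWED when mounts crowd the limit.

    If the mount count reaches `headroom` of the limit, double the limit;
    otherwise keep it.  Purely illustrative policy.
    """
    if mounted_volumes >= current_limit * headroom:
        return current_limit * 2
    return current_limit

print(next_numopenvols(100, 100))  # bumping against the limit -> raise to 200
print(next_numopenvols(500, 120))  # plenty of headroom -> stay at 500
```

The new value would then be applied with SETOPT NUMOPENVOLSALLOWED and the check repeated on the next BACKUP STGPOOL run.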


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Matthew McGeary
Sent: Wednesday, September 17, 2014 9:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backup STG expected throughput from 50G FILE devclass to
LTO4 tape

Hello All,

We've been struggling with somewhat anemic backup performance from our
dedup storage pools to LTO4 tape over the past year or so.  Data is
ingested from clients and lands directly into our dedup pool, but doesn't
get deduplicated until the offsite operation is complete.
(deduprequiresbackup=yes)  Our dedup pool is comprised of 50GB volumes,
residing on a V7000 pool of NL-SAS drives.  The drives don't appear taxed
(utilization is consistently in the 30-40% range) but average throughput
from the storage pool to tape is only 100-100 MB/s.  This is starting to
present challenges for meeting the offsite window and I am stumped as to
how I might improve performance.

The TSM server is running on AIX and has four 8Gb paths to the storage,
running sddpcm.  Mountpoints containing the data are mounted rbrw and are
JFS+ volumes.  Our tape drives are running off of two

Re: 7.1.1 Announcement; GA for download Sep 12

2014-09-02 Thread Matthew McGeary
I've been looking around but can't find the APAR list for 7.1.1.  In
particular, I'm checking to see if any APARs fixed by 7.1.0.100 are
unfixed in 7.1.1.

Anyone know where I might find this information?

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Prather, Wanda wanda.prat...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Date:   09/02/2014 11:14 AM
Subject:[ADSM-L] 7.1.1 Announcement; GA for download Sep 12
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



http://www.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&appname=gpateam&supplier=897&letternum=ENUS214-340&pdf=yes


Wanda Prather  |  Senior Technical Specialist  |  wanda.prat...@icfi.com  |
www.icfi.com  |  410-868-4872 (m)
ICF International  |  7125 Thomas Edison Dr., Suite 100, Columbia, Md  |
443-718-4900 (o)


Re: tsm for ve DR config question

2014-08-29 Thread Matthew McGeary
We typically see VM restore speeds at 2/3 or 1/2 the rate of standard
file-level incremental restores.  That's from a FILE devclass with a 50GB
volume size.  We're still offsiting to tape, however, so we do monthly
fulls of the production VMs so we have a relatively recent full copy for
DR purposes.  Between the fulls and restoring the VM metadata back to a
disk storage pool before restoring VM's, we should see decent speed.

That said, we haven't done a test with the new backup strategy yet.  So I
could be wrong.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Schneider, Jim jschnei...@ussco.com
To: ADSM-L@VM.MARIST.EDU
Date:   08/29/2014 07:16 AM
Subject:Re: [ADSM-L] tsm for ve DR config question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



David,

I'm still using the defaults.  I've seen occasional spikes in backup
duration and had not considered that it might be caused by a megablock
being backed up again.  Most of our backups run in about 10 minutes, but
occasionally take 1.5 hours.

The only way I've found to get a list of backed-up VMs is query occupancy
for the data mover node, and parsing the file space name.  Has anybody
found a better way?

Jim Schneider
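For what it's worth, the filespace parsing Jim describes can be automated. TSM for VE filespaces are typically named \VMFULL-<vmname>; a small sketch that extracts VM names from a list of filespace names (the prefix is an assumption based on common VE deployments, and the sample names are hypothetical):

```python
def vm_names(filespace_names, prefix="\\VMFULL-"):
    """Extract VM names from TSM for VE filespace names like '\\VMFULL-web01'.

    Non-VE filespaces (anything without the prefix) are ignored.
    """
    return sorted({fs[len(prefix):] for fs in filespace_names
                   if fs.startswith(prefix)})

print(vm_names(["\\VMFULL-web01", "\\VMFULL-db01", "/home"]))
```

The input list would come from query occupancy (or select filespace_name from occupancy) against the datamover node.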


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
David Ehresman
Sent: Friday, August 29, 2014 7:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm for ve DR config question

Jim,

Are you using the default of 50 for mbobjrefreshthresh or have you
adjusted that value?  If so, how did you determine what to change it to?

David Ehresman-

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Schneider, Jim
Sent: Thursday, August 28, 2014 4:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm for ve DR config question

I back up both the VM and CNTL data to separate file device storage pools
on a Data Domain.  I restore from a snapshot of replicated data.

I was under the impression that the mbobjrefreshthresh parameter triggered
additional (megablock) backups and substituted for periodic fulls.  It's
my fervent hope that recovering from file-based storage will not be slower
than a standard incremental restore.

Jim

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Stefan Folkerts
Sent: Thursday, August 28, 2014 12:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm for ve DR config question

The documentation isn't kidding around when it recommends that a
periodic
full
strategy is best for tape copypools.

This, and make sure you restore your VE metadata to disk first; this only
works if you place the metadata in a separate copypool.



On Thu, Aug 28, 2014 at 5:35 PM, Matthew McGeary 
matthew.mcge...@potashcorp.com wrote:

 TSM for VE will connect to any VCenter server you specify.  When we
 did our test, the VMWare team built a new VCenter instance and I built
 a new datamover, connected it to the fresh VCenter and started restoring
VMs.

 A word of caution, however, if your data is on tape and you don't have
 any fulls to restore you should be prepared to wait a very long time.
 We were doing IFincr backups and saw restore speeds in the KB/s.  The
 documentation isn't kidding around when it recommends that a periodic
 full strategy is best for tape copypools.

 Matthew McGeary
 Technical Specialist
 PotashCorp - Saskatoon
 306.933.8921



 From:   Schneider, Jim jschnei...@ussco.com
 To: ADSM-L@VM.MARIST.EDU
 Date:   08/27/2014 11:51 AM
 Subject:[ADSM-L] tsm for ve DR config question
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Hi, folks;

 I have an upcoming disaster recovery test and will be asked to restore
 VE data for the first time.  I can't run tests because the
 hardware/network is not in place.  Will the DR VE backup proxy server
 have to connect to a duplicate of the current production vCenter
 server, or just A vCenter server on the correct network?

 Are there any other quirks recovering VE during a DR?

 Thanks in advance,
 Jim Schneider
 United Stationers

 **
 Information contained in this e-mail message and in any attachments
 thereto is confidential. If you are not the intended recipient, please
 destroy this message, delete any copies held on your systems, notify
 the sender immediately, and refrain from using or disclosing all or
 any part of its content to any other person.




Re: tsm for ve DR config question

2014-08-28 Thread Matthew McGeary
TSM for VE will connect to any VCenter server you specify.  When we did
our test, the VMWare team built a new VCenter instance and I built a new
datamover, connected it to the fresh VCenter and started restoring VMs.

A word of caution, however, if your data is on tape and you don't have any
fulls to restore you should be prepared to wait a very long time.  We were
doing IFincr backups and saw restore speeds in the KB/s.  The
documentation isn't kidding around when it recommends that a periodic full
strategy is best for tape copypools.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Schneider, Jim jschnei...@ussco.com
To: ADSM-L@VM.MARIST.EDU
Date:   08/27/2014 11:51 AM
Subject:[ADSM-L] tsm for ve DR config question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi, folks;

I have an upcoming disaster recovery test and will be asked to restore VE
data for the first time.  I can't run tests because the hardware/network
is not in place.  Will the DR VE backup proxy server have to connect to a
duplicate of the current production vCenter server, or just A vCenter
server on the correct network?

Are there any other quirks recovering VE during a DR?

Thanks in advance,
Jim Schneider
United Stationers



Shared libraries and DR - configuration questions

2014-08-25 Thread Matthew McGeary
Hello all,

We're in the process of setting up a second TSM server at our head office
to split the backup load.  Primary storage is disk, so that's no problem
but we offsite to LTO4 tape.

My original thought was that we'd set up library sharing using either the
current TSM server as the manager or a new TSM server instance that is
solely responsible for library operations.  I'm leaning towards a new,
library-manager-only TSM server, with two TSM library clients.

Where I'm unclear is how DR works with a shared library manager.  Do we go
through the process of restoring the TSM library manager instance first,
then the two (or more) library clients?  Or (if we have two libraries
available at our DR site) can I restore the two library clients
independently, each attached to their own library?

Thanks!

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921


Re: TSM server appears to hang

2014-07-16 Thread Matthew McGeary
We're having the exact same problem, have been for quite a few months now.
 It occurred on 6.3.4.100 and 7.1.  Running on AIX 6.1 TL7 SP6 hosted on a
P740.  It gets so bad on ours that I'll have to halt the dsmserv process,
perform a db2stop force and then restart TSM.  Because it happens at
random times and is totally infrequent, I've written a quick and dirty
script to make sure that TSM is running and to do the shutdown/restart if
the non-responsive behaviour kicks in again.

I don't have a solution for you but we've been all the way up the
developer chain without much success.  What hardware are you running your
server on?

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Rhodes, Richard L. rrho...@firstenergycorp.com
To: ADSM-L@VM.MARIST.EDU
Date:   07/16/2014 09:08 AM
Subject:[ADSM-L] TSM server appears to hang
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi Everyone,

The past couple of days we're had a strange problem with one of our TSM
instances (v6.2.5).  At times it appears to hang.

Last night (and the previous night) it had many servers that got a dozen
or more sessions.  This is really strange!  This morning as I was looking
at this, cmds like q vol and q stgpool hang - no response!  Commands
like q node and q proc work.  The server was doing very little I/O.
All of a sudden the hung cmds all ran through and the server I/O jumped to
200-400MB/s.  Something was locking I/O.  I think the many sessions are
clients that retry because the server is not responding.

In the TSM actlog there are no unusual messages about the time it
un-stuck.  The only strange entry in the actlog is an ANRD message with a
lockwait error early the previous evening.  There are no AIX errors.

Any thought?

Rick






-

The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If the
reader of this message is not the intended recipient or an agent
responsible for delivering it to the intended recipient, you are hereby
notified that you have received this document in error and that any
review, dissemination, distribution, or copying of this message is
strictly prohibited. If you have received this communication in error,
please notify us immediately, and delete the original message.


Re: TSM 7.1 and dedup chunking issues

2014-07-04 Thread Matthew McGeary
Wanda,

From what I've read about DDP, it's not ideal for TSM workloads, as it
carries a performance penalty for large sequential operations compared to
RAID6.  This whitepaper from Dell (they license the same DDP technology)
has some numbers: at a 256K transfer size, DDP shows a 63% read penalty
compared to RAID6.  That would have a big impact on backup performance to
tape.

http://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dynamic_Disk_Pooling_Technical_Report.pdf
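To put that 63% figure in throughput terms (illustrative arithmetic only; the 400 MB/s RAID6 baseline below is a made-up number, not from the whitepaper):

```python
def ddp_read_mb_s(raid6_mb_s, penalty=0.63):
    """Effective DDP sequential-read throughput given a RAID6 baseline and
    the 63% large-transfer read penalty cited in the Dell whitepaper."""
    return raid6_mb_s * (1 - penalty)

# A hypothetical 400 MB/s RAID6 sequential stream drops to ~148 MB/s on DDP.
print(ddp_read_mb_s(400))
```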

Also, why use separate 'landing pools' for data rather than backing up to
the dedup pool directly?  So long as deduprequiresbackup is set to 'yes,'
data in the pool will not be deduplicated until it is offsited to tape.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Prather, Wanda wanda.prat...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Date:   07/03/2014 03:09 PM
Subject:Re: [ADSM-L] TSM 7.1 and dedup chunking issues
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Well.  That was certainly a surprise!
Wonder what causes that - I've seen it happen to other people's posts
before and always assumed they were using an off-brand mail client.  But I
wasn't doing anything differently than usual (MS outlook).

Anyway, I had another question about your setup -

I have a customer I manage with a similar setup, except the 200 clients
are VMs generating about 4 TB of inc blocks daily.
We also use a non-dedup landing pool, then back up to tape copy pool, then
migrate to dedup pool.

We are using devclass of DISK for the landing pool.  It's a DS3500 with
SATA disks, but is a DDP array, but is also in the same array as the dedup
pool and the DB  (We've requested SSD for the DB and should have it soon.
The config is not ideal, but this is a leftover system which was
originally configured to back up something else much less I/O intense.)

Was thinking the backup stgpool might run faster if the landing pool is of
type FILE instead of DISK.  Don't know if it matters since it's a DDP
array.
What format is your landing pool, FILE or DISK?

Wanda

-Original Message-
From: Bent Christensen [mailto:b...@cowi.dk]
Sent: Thursday, July 03, 2014 3:32 AM
To: Prather, Wanda
Subject: RE: [ADSM-L] TSM 7.1 and dedup chunking issues

Hi Wanda,

The below is what got through of your response to TSM 7.1 and dedup
chunking issues on the TSM list server :-)

It is really a pain that you don't receive a copy of what you put up on
the list :-)

 - Bent


Re: TSM 7.1 and dedup chunking issues

2014-07-03 Thread Matthew McGeary
We're running about 200 nodes, another 320-ish (it varies day by day) VMs
with an average daily intake of 4.2 TB over the past 30 days.  We have a
high change-rate, however, and our occupancy is 'only' 270 TB and around
110,000,000 objects stored.

We back up directly to the dedup pool, which resides on a mix of 3 and 4
TB NL-SAS disks hosted on a V7000 and a DCS3700.  Total spindle count is
84 for this pool.  TSM DB and active log is stored on SSD (again hosted on
the V7000) and our TSM server is a P740 with 15 cores allocated to the TSM
LPAR and 125 GB of RAM.  We are in the process of migrating our TSM server
to the new-shiny Power 822 boxes, which will up our core-count to 18
Power8 cores and ~250GB of RAM (depending on how much is lost to VIO
servers)

How are you calculating the 2-300k IOPS?  What's the spindle count on the
Hitachi storage device?

We've been having some troubles with our database, which is ~2.3 TB
(though yours must be huge if you're managing 500 million objects).  Are
you doing offline db reorgs every six months or so?  We've found that
online reorgs are pretty much impossible, they fill the active log and
crash the server instance.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Stefan Folkerts stefan.folke...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   07/03/2014 05:13 AM
Subject:Re: [ADSM-L] TSM 7.1 and dedup chunking issues
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



1.5TB seems like a very reasonable amount for these spec's, if the amount
of managed deduped data isn't above the 400TB you seem well within the
configuration maximums that IBM provides.



On Thu, Jul 3, 2014 at 12:02 AM, Bent Christensen b...@cowi.dk wrote:

 This server manages 200 clients, approx. 500.000.000 files occupying
 approx. 800 TB, a mixture of various databases, Exchange and files, all
 Windows clients. Not all client data end up in a dedup pool.

 Daily backup ingest to the dedup pool is 1.5 TB on average. We aim to
 receive backup data on an internal SAS array, backup to tape copypool
and
 then migrate to the dedup pool. The dedup pool storage is Hitachi AMS
 2000 family with SATA disks, fiber attached, should be able to deliver at
 least 2-300K IOPS in this configuration.

  - Bent

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Stefan Folkerts
 Sent: Wednesday, July 02, 2014 2:13 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM 7.1 and dedup chunking issues

 Also :

 32 cores, 256 GB RAM, DB and activelog on SSD.

 Wow, that's pretty serious.

 Also, if I might ask, what is your daily backup/archive ingest, how much
 data do you manage and what type of disk (system/config) do you use for
 your filepool storage?



 On Wed, Jul 2, 2014 at 12:58 PM, Bent Christensen b...@cowi.dk wrote:

  Hi all,
 
  I remember noticing that there were some dedup housekeeping (removal
  of dereferenced chunks) issues with TSM Server 6.3.4.200 and that a
  fix was released. We used 6.3.4.200 for a while as stepping stone on
  our road from
  5.5 to 7.1 but without the fix.
 
  Now, on 7.1, I am seeing some stuff that makes me worry a bit -
  initiated by a gut feeling that there are more data in my dedup pool
  than there should be.
 
  SHOW DEDUPDELETEINFO shows that I have +30M chunks waiting in queue
  and the number is increasing. It also shows that I right now have 8
  active worker threads with a total of 5.8M  queued, but only approx.
  4000 chunks/hour get deleted.
 
  Any knowing if these numbers make sense?
 
  We use a full-blown TSM server on Windows, 32 cores, 256 GB RAM, DB
  and activelog on SSD.
 
   - Bent
 



Re: TSM 7.1 upgrade aftermath

2014-06-09 Thread Matthew McGeary
I don't know if it's documented anywhere but that hit us as well.  That,
and our datamovers aren't purging the c:\windows\temp directory after a
backup operation.

Ah, the joys of being on the bleeding edge.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Gee, Norman norman@lc.ca.gov
To: ADSM-L@VM.MARIST.EDU
Date:   06/05/2014 04:53 PM
Subject:[ADSM-L] TSM 7.1 upgrade aftermath
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I am not sure if this is documented, but after I upgraded my TSM servers
to 7.1 code, the passexp setting for all of my TDP nodes somehow got
changed from 0 (zero, never expire) to blank, which means they will now
take the server default setting.  Where is this documented?


Other issues are a bunch of new warning messages during migration of file
device class.


Re: TSM for VE going to tape copypool, recommendations?

2014-04-23 Thread Matthew McGeary
Yes, our copypool is on LTO4 tape.

Thanks for the recommendation to restore the VMCTL storage pool.  I'll add
it to my notes for the next test.  Has anyone experimented with changing
the megablock refresh values?  If I lower the threshold such that each
128MB block is completely refreshed more often, I will increase my daily
intake amounts but potentially reduce the number of tapes that are needed
for a DR restore, correct?
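That intake/tape trade-off can be roughed out numerically. As I understand the option, once the changed portion of a 128 MB megablock crosses the refresh threshold, the whole megablock is resent; the sketch below models daily intake for a set of megablocks under two thresholds (the per-block change fractions are hypothetical, and this is a simplified model, not the exact product behaviour):

```python
MEGABLOCK_MB = 128

def daily_intake_mb(changed_fractions, refresh_threshold_pct):
    """Model incremental intake: a megablock whose changed fraction meets
    the threshold is resent whole; otherwise only changed data is sent."""
    total = 0.0
    for frac in changed_fractions:
        if frac * 100 >= refresh_threshold_pct:
            total += MEGABLOCK_MB          # full 128 MB refresh
        else:
            total += frac * MEGABLOCK_MB   # only the changed portion
    return total

# Lowering the threshold from 50 to 20 refreshes more blocks in full,
# raising intake but consolidating restore data onto fewer tapes:
blocks = [0.05, 0.25, 0.60]
print(daily_intake_mb(blocks, 50))
print(daily_intake_mb(blocks, 20))
```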

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Ehresman,David E. deehr...@louisville.edu
To: ADSM-L@VM.MARIST.EDU
Date:   04/22/2014 12:36 PM
Subject:Re: [ADSM-L] TSM for VE going to tape copypool,
recommendations?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I don't know how the VMCTL metadata is used.  But I see multiple files
in the vmctl storage pool for a given VM which leads me to believe that
TSM will need to read more than one of those metadata entries for a given
VM restore.  If that is true, I am guessing it would be a significant
benefit to restore them from tape to disk before beginning the VM
restores.  By the way, are we talking about physical tape?

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Matthew McGeary
Sent: Tuesday, April 22, 2014 11:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM for VE going to tape copypool, recommendations?

Yes, the VMCTL data is stored in its own stgpool with a FILE devc and we
could request additional disk for the test to use for the RESTORE STG
command.  Would that be faster than restoring individual VM's?  Should I
still look at collocating the VMCTL data on a single tape?

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Ehresman,David E. deehr...@louisville.edu
To: ADSM-L@VM.MARIST.EDU
Date:   04/22/2014 09:48 AM
Subject:Re: [ADSM-L] TSM for VE going to tape copypool,
recommendations?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Is your PRIMARY VMCTL storage pool on FILE type disk?  Can you create a
corresponding FILE storage pool at DR and do a RESTORE STGPOOL prior to
starting the VME restores?

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Matthew McGeary
Sent: Tuesday, April 22, 2014 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM for VE going to tape copypool, recommendations?

We've been using TSM for VE for about a year now and while it's been great
for onsite restores, our recent DR test was troubling.  We've been using
the IFINCR strategy and while I was expecting restores from tape to be
slow, I wasn't expecting restores to drag into the 5-6 hour range for a
small VM (40 GB).  I know that the admin guide recommends periodic-full
for VM tape retention, but we use dedup disk onsite and we're already
experiencing pressure on our daily backup intake volumes and disk usage,
so I'd prefer not to have to change our backup strategy due to the
increase caused by periodic-full in both daily traffic and storage.

Is there any other way I could conceivably improve restore times from a
tape copypool while still using IFINCR as my backup method?  Here are a
few things I considered:

1) A separate collocated copypool for the VECTL data.  This would live on
one tape easily and would reduce the number of mounts required for a
restore.  However, if that tape goes bad, I'm totally screwed in a DR
situation because there's no way to restore VM backups without the control
files.  I'm assuming that if I use collocation by node, TSM will generate
a new tape every day with all the CTL files and call back the old one.  Am
I right in that assumption?
2) Could I do an IFFULL backup every few weeks or even every month to
limit tape mounts?
3) ???

If anyone with some practical experience in using TSM for VE with tape
copypools could offer some guidance, I'd appreciate it.

Thanks,

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921


Re: TSM 7.1 install on AIX problems

2014-04-23 Thread Matthew McGeary
Yeah, I had the same issue when I did the AIX system update to TL 7 SP4
last year.  All the recommendations from that article have been
implemented, but there's still about a 50% reduction in backup db
throughput since the TSM update.  Maybe TL 7 SP6 interacts with the
loopback device differently than SP4 does?

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Pagnotta, Pam (CONTR) pam.pagno...@hq.doe.gov
To: ADSM-L@VM.MARIST.EDU
Date:   04/23/2014 07:15 AM
Subject:[ADSM-L] TSM 7.1 install on AIX problems
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Matthew,

I had similar TSM DB backup issues and found this TechNote. I followed it
and it fixed the slow backups.

http://www-01.ibm.com/support/docview.wss?uid=swg21587513

Pam

Pam Pagnotta
AHE Tivoli Storage Manager Administrator/AHE Storage Administrator/ AHE
z890 Administrator
Criterion Systems, Inc
Contractor to U.S. Department of Energy
Data Center  System Services Group
301 903-5508 - Office
=
Date:Tue, 22 Apr 2014 09:18:10 -0600
From:Matthew McGeary matthew.mcge...@potashcorp.com
mailto:matthew.mcge...@potashcorp.com
Subject: Re: TSM 7.1 install on AIX problems

I just did the install last week and didn't see any issues with the
install script or memory dumps.  What version of AIX are you running?  I'm
on 6.1 TL 7 SP 6 (I updated from TL 7 SP 4 in order to do the TSM 7.1
install.)

I am seeing abysmal speeds during backup db commands (about half the
average speed I was getting before,) but that's neither here nor there.
Sigh.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921


Re: TSM 7.1 install on AIX problems

2014-04-22 Thread Matthew McGeary
I just did the install last week and didn't see any issues with the
install script or memory dumps.  What version of AIX are you running?  I'm
on 6.1 TL 7 SP 6 (I updated from TL 7 SP 4 in order to do the TSM 7.1
install.)

I am seeing abysmal speeds during backup db commands (about half the
average speed I was getting before,) but that's neither here nor there.
Sigh.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Kevin Kettner kkett...@doit.wisc.edu
To: ADSM-L@VM.MARIST.EDU
Date:   04/18/2014 12:19 PM
Subject:[ADSM-L] TSM 7.1 install on AIX problems
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I'm attempting to install TSM 7.1 on an AIX box for the first time. Did
anyone else notice that the install.sh script is missing a ! in the
first line? It has #/bin/sh and it should have #!/bin/sh. It's an
easy fix, but makes me wonder about the quality of the rest of the code.
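As a quick illustration of that fix (GNU sed's in-place edit shown; AIX's stock sed lacks -i, so there you would redirect through a temporary file instead):

```shell
# Reproduce the reported defect: the first line is the comment '#/bin/sh'
# rather than the interpreter line '#!/bin/sh'.
printf '#/bin/sh\necho hello\n' > /tmp/install_demo.sh

# Patch only a first line that reads exactly '#/bin/sh' (GNU sed -i).
sed -i '1s|^#/bin/sh$|#!/bin/sh|' /tmp/install_demo.sh

head -1 /tmp/install_demo.sh   # prints: #!/bin/sh
```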

After working around that it's throwing memory errors (see below), which
I have a PMR open on. Has anyone else had this problem and found a
solution? I Googled for people reporting either of these issues and
didn't find anything, which seems odd, especially the install script
issue. I can't be the only one to try to install it on AIX.

I'll post the fix when I figure it out or get it from IBM.

Bonus points for beating IBM support to the punch. ;-)

Thanks!

-Kevin

# ./install.sh
./install.sh[233]: 11993088 Memory fault(coredump)
Unhandled exception
Type=Segmentation error vmState=0x0004
J9Generic_Signal_Number=0004 Signal_Number=000b
Error_Value= Signal_Code=0033
Handler1=F1324208 Handler2=F131BF1C
R0=D336EDCC R1=3012FEF0 R2=F11C269C R3=F11C0450
R4= R5= R6= R7=0003
R8=0043 R9= R10=2FF22ABC R11=34E0
R12=03291A68 R13=30C4C400 R14=32040920 R15=F12891EC
R16=0007 R17= R18=F1326388 R19=30C4C450
R20=32BAE460 R21=32040938 R22= R23=3BC8
R24=10010E04 R25=F131D130 R26=301365A4 R27=007E
R28=CFA71C28 R29=F1325B7C R30=D3390410 R31=F11C0430
IAR=D33853A8 LR=D336EDE8 MSR=D032 CTR=D3631E70
CR=22004284 FPSCR=8200 XER=0005 TID=
MQ=
FPR0 32bc69000110 (f: 272.00, d: 2.697706e-64)
FPR1 41e0 (f: 0.00, d: 2.147484e+09)
FPR2 c1e0 (f: 0.00, d: -2.147484e+09)
FPR3 433001e0 (f: 31457280.00, d: 4.503600e+15)
FPR4 43300800 (f: 0.00, d: 4.512396e+15)
FPR5 41338518 (f: 0.00, d: 1.279256e+06)
FPR6 41338518 (f: 0.00, d: 1.279256e+06)
FPR7 433008138518 (f: 1279256.00, d: 4.512396e+15)
FPR8 0035002e00760032 (f: 7733298.00, d: 1.168203e-307)
FPR9 0030003100330030 (f: 3342384.00, d: 8.900711e-308)
FPR10 003700330030005f (f: 3145823.00, d: 1.279461e-307)
FPR11 0031003700350034 (f: 3473460.00, d: 9.457031e-308)
FPR12 3fe8 (f: 0.00, d: 7.50e-01)
FPR13 4028 (f: 0.00, d: 1.20e+01)
FPR14  (f: 0.00, d: 0.00e+00)
FPR15  (f: 0.00, d: 0.00e+00)
FPR16  (f: 0.00, d: 0.00e+00)
FPR17  (f: 0.00, d: 0.00e+00)
FPR18  (f: 0.00, d: 0.00e+00)
FPR19  (f: 0.00, d: 0.00e+00)
FPR20  (f: 0.00, d: 0.00e+00)
FPR21  (f: 0.00, d: 0.00e+00)
FPR22  (f: 0.00, d: 0.00e+00)
FPR23  (f: 0.00, d: 0.00e+00)
FPR24  (f: 0.00, d: 0.00e+00)
FPR25  (f: 0.00, d: 0.00e+00)
FPR26  (f: 0.00, d: 0.00e+00)
FPR27  (f: 0.00, d: 0.00e+00)
FPR28  (f: 0.00, d: 0.00e+00)
FPR29  (f: 0.00, d: 0.00e+00)
FPR30  (f: 0.00, d: 0.00e+00)
FPR31  (f: 0.00, d: 0.00e+00)
Target=2_40_20110203_074623 (AIX 7.1)
CPU=ppc (16 logical CPUs) (0x7c000 RAM)
--- Stack Backtrace ---
(0xD336E81C)
(0xD436CE48)
(0xD436F698)
(0xD4368D38)
(0xD4368B24)
(0xD369BBA0)
(0xD436A058)
(0xD436A200)
(0xD491BD18)
(0xD492236C)
(0xD4926438)
(0xD499BF48)
(0xD4965780)
(0xD4965A30)
(0xD49E4BAC)
(0xD49658E4)
(0xD4965E24)
(0xD496A6C4)
(0x100013C0)
(0xD04FED08)
---
JVMDUMP006I Processing dump event gpf, detail  - please wait.
JVMDUMP032I JVM requested System dump using '/bl2ata01/core.
20140418.112157.10354696.0001.dmp' in response to an event
Note: Enable full CORE dump in smit is set to FALSE and as a result
there will be limited threading information in core file.
JVMDUMP010I System dump written to /bl2ata01/core.
20140418.112157.10354696.0001.dmp
JVMDUMP032I JVM requested Java dump using '/bl2ata01/javacore.
20140418.112157.10354696.0002.txt' in response to an event
JVMDUMP010I Java dump written to /bl2ata01/javacore.
20140418.112157.10354696.0002.txt

TSM for VE going to tape copypool, recommendations?

2014-04-22 Thread Matthew McGeary
We've been using TSM for VE for about a year now and while it's been great
for onsite restores, our recent DR test was troubling.  We've been using
the IFINCR strategy and while I was expecting restores from tape to be
slow, I wasn't expecting restores to drag into the 5-6 hour range for a
small VM (40 GB.)   I know that the admin guide recommends periodic-full
for VM tape retention, but we use dedup disk onsite and we're already
experiencing pressure on our daily backup intake volumes and disk usage,
so I'd prefer not to have to change our backup strategy due to the
increase caused by periodic-full in both daily traffic and storage.

Is there any other way I could conceivably improve restore times from a
tape copypool while still using IFINCR as my backup method?  Here are a
few things I considered:

1) A separate collocated copypool for the VECTL data.  This would live on
one tape easily and would reduce the number of mounts required for a
restore.  However, if that tape goes bad, I'm totally screwed in a DR
situation because there's no way to restore VM backups without the control
files.  I'm assuming that if I use collocation by node, TSM will generate
a new tape every day with all the CTL files and call back the old one.  Am
I right in that assumption?
2) Could I do an IFFULL backup every few weeks or even every month to
limit tape mounts?
3) ???

If anyone with some practical experience in using TSM for VE with tape
copypools could offer some guidance, I'd appreciate it.

Thanks,

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921
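For what it's worth, option 1 above can be expressed with standard server commands; the pool, device-class, and primary-pool names below are hypothetical placeholders, not anything from this thread:

```
define stgpool vectl_copy lto4class pooltype=copy collocate=node reusedelay=7
backup stgpool vectl_file vectl_copy maxprocess=1
```

With COLLOCATE=NODE the server tries to keep each node's files on as few volumes as possible; whether a fresh tape is cut each day and the old one recalled depends on reclamation thresholds and the reuse delay rather than being guaranteed.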


Re: TSM for VE going to tape copypool, recommendations?

2014-04-22 Thread Matthew McGeary
Yes, the VMCTL data is stored in its own stgpool with a FILE devc and we
could request additional disk for the test to use for the RESTORE STG
command.  Would that be faster than restoring individual VM's?  Should I
still look at collocating the VMCTL data on a single tape?

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Ehresman,David E. deehr...@louisville.edu
To: ADSM-L@VM.MARIST.EDU
Date:   04/22/2014 09:48 AM
Subject:Re: [ADSM-L] TSM for VE going to tape copypool,
recommendations?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Is your PRIMARY VMCTL storage pool on FILE type disk?  Can you create a
corresponding FILE storage pool at DR and do a RESTORE STGPOOL prior to
starting the VME restores?

David
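Staged the way David describes, the DR sequence would look roughly like this (pool name is a placeholder; UPDATE VOLUME marks the lost primary volumes destroyed so that RESTORE STGPOOL pulls the data back from the copy pool):

```
update volume * access=destroyed wherestgpool=vmctl_file
restore stgpool vmctl_file maxprocess=2
```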

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Matthew McGeary
Sent: Tuesday, April 22, 2014 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM for VE going to tape copypool, recommendations?

We've been using TSM for VE for about a year now and while it's been great
for onsite restores, our recent DR test was troubling.  We've been using
the IFINCR strategy and while I was expecting restores from tape to be
slow, I wasn't expecting restores to drag into the 5-6 hour range for a
small VM (40 GB.)   I know that the admin guide recommends periodic-full
for VM tape retention, but we use dedup disk onsite and we're already
experiencing pressure on our daily backup intake volumes and disk usage,
so I'd prefer not to have to change our backup strategy due to the
increase caused by periodic-full in both daily traffic and storage.

Is there any other way I could conceivably improve restore times from a
tape copypool while still using IFINCR as my backup method?  Here are a
few things I considered:

1) A separate collocated copypool for the VECTL data.  This would live on
one tape easily and would reduce the number of mounts required for a
restore.  However, if that tape goes bad, I'm totally screwed in a DR
situation because there's no way to restore VM backups without the control
files.  I'm assuming that if I use collocation by node, TSM will generate
a new tape every day with all the CTL files and call back the old one.  Am
I right in that assumption?
2) Could I do an IFFULL backup every few weeks or even every month to
limit tape mounts?
3) ???

If anyone with some practical experience in using TSM for VE with tape
copypools could offer some guidance, I'd appreciate it.

Thanks,

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921


Re: Using a VM for a TSM Server

2014-04-22 Thread Matthew McGeary
We haven't taken the plunge yet, but I've been looking at this option for
smaller remote sites that only have VMWare.  I'd be leery of doing
deduplication on a VM because of the resources required, but other than
that I don't see a reason why a VM would pose any issues.  This assumes,
of course, that you're using disk or a VTL rather than tape.  Neither IBM
or VMWare support using tape devices for backup from a VM.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Plair, Ricky rpl...@healthplan.com
To: ADSM-L@VM.MARIST.EDU
Date:   04/22/2014 09:51 AM
Subject:[ADSM-L] Using a VM for a TSM Server
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Has anyone every used a VM system for their TSM backup solution?

If so, are there any pros?

I would also like to hear the cons.



Ricky M. Plair
Storage Engineer
HealthPlan Services
Office: 813 289 1000 Ext 2273
Mobile: 757 232 7258




Re: V7 Stable?

2014-04-03 Thread Matthew McGeary
I just did an update to 7.1 from 6.2 on a small Windows-based TSM server.
No drama or issues here.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Ryder, Michael S michael_s.ry...@roche.com
To: ADSM-L@VM.MARIST.EDU
Date:   04/03/2014 10:06 AM
Subject:Re: [ADSM-L] V7 Stable?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



On Thu, Apr 3, 2014 at 12:01 PM, Karel Bos tsm@gmail.com wrote:

 Don't,  just don't do 7.1 just yet.
 Op 3 apr. 2014 15:09 schreef Huebner, Andy
andy.hueb...@novartis.com:


Well THAT'S a cliff-hanger -- could you describe your experiences with 7.1
and why we shouldn't?

Mike


Re: TSM 7.1 upgrade path(s) on Windows

2014-04-03 Thread Matthew McGeary
You're much better off to simply do a dr prepare and do the upgrade
in-place.  If the upgrade fails for whatever reason, reinstall 6.2.3 and
do a db restore.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Francisco Parrilla francisco.parri...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   04/03/2014 10:36 AM
Subject:Re: [ADSM-L] TSM 7.1 upgrade path(s) on Windows
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



It's easier to install TSM V7.1 on a new TSM server and then perform the
upgrade using the tape method.

You never touch the V6.x TSM server to do it: make a db backup to disk,
copy the files, and then attempt the upgrade to TSM 7.x.






2014-04-03 10:24 GMT-06:00 Ryder, Michael S michael_s.ry...@roche.com:

 Hello Folks:

 I'm preparing to upgrade our TSM 6.2.3 server to TSM 7.1 -- both are on
 Windows.

 It's a side-by-side upgrade, and if possible I'd like to leave the
existing
 TSM 6.2.3 environment alone during the upgrade, so that I can easily
halt
 the upgrade if I run into any problems.

 I've been searching TSM documentation and the Internet for a few days
now
 and can't find anything reliable -- is there an upgrade path I can take
to
 let me do that?  I was hoping for something like... this -- copy
database
 files and configuration to brand new server... install 6.2.3, and then
7.1
 over it.

 Is there any documentation in this respect, best practices or good
 community experiences?  Please relate to me anything you can share!

 Best regards,

 Mike Ryder
 RMD IT Client Services



TSM for VE: recommended VSwitch configuration?

2014-03-20 Thread Matthew McGeary
I've searched quite a bit and can't seem to find any guidelines on the
following:

1) VMWare VSwitch configuration to maximize throughput for VM
backups/restores
2) Expected throughput using 10Gb adapters

We are in a situation at the moment where mass backups work quite well and
are very fast: as fast as the TSM server can handle.  However, individual
restores/backups are quite slow.  We've tested on a VMWare cluster with
1Gbps NICs attached to the management network and with 10Gbps attached to
the management network.  Both scenarios are quite a bit slower than I'd
expect.  Average backup/restore transfer rates are in the 25 MB/s range
using 1Gb NICs and around 55 MB/s using 10Gb NICs.

How do I improve the throughput on these single-VM backup and restore
sessions?

Thanks!

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921
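For context on what those rates mean in wall-clock terms, a quick back-of-the-envelope calculation (illustrative only):

```python
# Rough transfer-time arithmetic for the single-stream rates quoted above.
def transfer_hours(size_gb: float, rate_mb_s: float) -> float:
    """Hours to move size_gb at a sustained rate_mb_s (1 GB = 1024 MB)."""
    return size_gb * 1024 / rate_mb_s / 3600

# A 40 GB VM at the observed rates:
print(round(transfer_hours(40, 25), 2))   # 0.46 h over the 1 Gb NICs
print(round(transfer_hours(40, 55), 2))   # 0.21 h over the 10 Gb NICs
```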


Re: TSM for VE: recommended VSwitch configuration?

2014-03-20 Thread Matthew McGeary
Sure thing:

1) The VMWare environment is connected via 8Gb fibre to an EMC VNX 5700
and uses mostly tiered storage pools (SSD-SAS),  The TSM server is
connected to a V7000 via fibre as well.  The TSM database is on SSD and
the storage pools are on 84 NL-SAS disks in a RAID6 extent pool provided
by the V7000.  The TSM server has a LACP nic team consisting of two 10 Gb
adapters.  Each ESXi host has either dual 1Gb NICs or dual 10Gb NICs
assigned to the management VKernel port.
2) The stated rates in my previous post are for IFIncremental traffic, be
it a fresh full or a subsequent CBT incremental backup.
3) The VMware client is 7.1 and the server version is 6.3.4.200.

Thanks!

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Ryder, Michael S michael_s.ry...@roche.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/20/2014 01:55 PM
Subject:Re: [ADSM-L] TSM for VE: recommended VSwitch
configuration?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi Matthew

Can you tell us some more about your infrastructure?  What kind of disk
storage network are you using?  iSCSI?  FC?  block-data?  NAS?

Are your stated rates for image backups, or file-level incremental?

What version of TSM are you using?

Best regards,

Mike, x7942
RMD IT Client Services


On Thu, Mar 20, 2014 at 3:44 PM, Matthew McGeary 
matthew.mcge...@potashcorp.com wrote:

 I've searched quite a bit and can't seem to find any guidelines on the
 following:

 1) VMWare VSwitch configuration to maximize throughput for VM
 backups/restores
 2) Expected throughput using 10Gb adapters

 We are in a situation at the moment where mass backups work quite well
and
 are very fast: as fast as the TSM server can handle.  However,
individual
 restores/backups are quite slow.  We've tested on a VMWare cluster with
 1Gbps NICs attached to the management network and with 10Gbps attached
to
 the management network.  Both scenarios are quite a bit slower than I'd
 expect.  Average backup/restore transfer rates are in the 25 MB/s range
 using 1Gb NICs and around 55 MB/s using 10Gb NICs.

 How do I improve the throughput on these single-VM backup and restore
 sessions?

 Thanks!

 Matthew McGeary
 Technical Specialist
 PotashCorp - Saskatoon
 306.933.8921



Re: TSM for VE: recommended VSwitch configuration?

2014-03-20 Thread Matthew McGeary
The datamover is virtual and uses hotadd or nbd for transport.  All backup
traffic runs over the network as a result.  Image backups to TSM run
anywhere from 150-200 MB/s.  Our nightly backup traffic typically peaks at
350-400 MB/s.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Ryder, Michael S michael_s.ry...@roche.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/20/2014 02:12 PM
Subject:Re: [ADSM-L] TSM for VE: recommended VSwitch
configuration?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Great

SORRY -- I forgot to ask - where is your proxy?  physical or VM?  is your
proxy/datamover reading VMs via FC or ethernet?

Is the TSM server a VM or physical?

Offhand I would say those rates aren't horrible, much depends on the
distribution of file sizes, and number of files you are backing up.

What kind of rates do you get for image backups?  At least then it's
easier
to compare with other configurations, since we expect the proxy to send a
nice solid stream of data to the TSM server.

Best regards,

Mike, x7942
RMD IT Client Services


On Thu, Mar 20, 2014 at 4:01 PM, Matthew McGeary 
matthew.mcge...@potashcorp.com wrote:

 Sure thing:

 1) The VMWare environment is connected via 8Gb fibre to an EMC VNX 5700
 and uses mostly tiered storage pools (SSD-SAS),  The TSM server is
 connected to a V7000 via fibre as well.  The TSM database is on SSD and
 the storage pools are on 84 NL-SAS disks in a RAID6 extent pool provided
 by the V7000.  The TSM server has a LACP nic team consisting of two 10
Gb
 adapters.  Each ESXi host has either dual 1Gb NICs or dual 10Gb NICs
 assigned to the management VKernel port.
 2) The stated rates in my previous post are for IFIncremental traffic,
be
 it a fresh full or a subsequent CBT incremental backup.
 3) The VMware client is 7.1 and the server version is 6.3.4.200.

 Thanks!

 Matthew McGeary
 Technical Specialist
 PotashCorp - Saskatoon
 306.933.8921



 From:   Ryder, Michael S michael_s.ry...@roche.com
 To: ADSM-L@VM.MARIST.EDU
 Date:   03/20/2014 01:55 PM
 Subject:Re: [ADSM-L] TSM for VE: recommended VSwitch
 configuration?
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Hi Matthew

 Can you tell us some more about your infrastructure?  What kind of disk
 storage network are you using?  iSCSI?  FC?  block-data?  NAS?

 Are your stated rates for image backups, or file-level incremental?

 What version of TSM are you using?

 Best regards,

 Mike, x7942
 RMD IT Client Services


 On Thu, Mar 20, 2014 at 3:44 PM, Matthew McGeary 
 matthew.mcge...@potashcorp.com wrote:

  I've searched quite a bit and can't seem to find any guidelines on the
  following:
 
  1) VMWare VSwitch configuration to maximize throughput for VM
  backups/restores
  2) Expected throughput using 10Gb adapters
 
  We are in a situation at the moment where mass backups work quite well
 and
  are very fast: as fast as the TSM server can handle.  However,
 individual
  restores/backups are quite slow.  We've tested on a VMWare cluster
with
  1Gbps NICs attached to the management network and with 10Gbps attached
 to
  the management network.  Both scenarios are quite a bit slower than
I'd
  expect.  Average backup/restore transfer rates are in the 25 MB/s
range
  using 1Gb NICs and around 55 MB/s using 10Gb NICs.
 
  How do I improve the throughput on these single-VM backup and restore
  sessions?
 
  Thanks!
 
  Matthew McGeary
  Technical Specialist
  PotashCorp - Saskatoon
  306.933.8921
 



Re: [Pool] Using Dedup with TSM : What kind of Hardware ?

2014-03-06 Thread Matthew McGeary
Well, we have the deduprequiresbackup flag turned on, so data doesn't get 
reclaimed until after the backup to copypool is complete.  We also stop 
reclamation during our offsite window because locks on bitfiles can and 
will mess with either reclamation or backup processes.

We also used three backup processes at a time, as it appears to improve 
throughput without being too wasteful in terms of sending only partially 
full tapes offsite.  Something to consider would be to split your VMWare 
environment up into multiple nodes.  Ours is one giant node at the moment, 
but the plan is to split it up into nodes that represent the VM's that are 
critical for DR and then others like non-critical production and dev/test 
so that we can (eventually) node-replicate DR VM's only to an offsite 
location.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921
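The two safeguards described above map onto standard server settings; a sketch (the pool name is a placeholder, and setting RECLAIM=100 simply raises the threshold so no volume qualifies during the offsite window):

```
setopt deduprequiresbackup yes
update stgpool dedup_file reclaim=100
```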



From:   Prather, Wanda wanda.prat...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/06/2014 09:21 AM
Subject:Re: [ADSM-L] [Pool] Using Dedup with TSM : What kind of 
Hardware ?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



We have been unable to successfully do backup stgpool from a deduped 
primary pool to tape.
It's so slow, can't get 4 TB backed up in 24 hours.

Haven't been able to figure out whether it's because identify duplicates 
and reclaim stgpool are running at the same time (locks?) or because it's 
all 1 filespace (VMware) so can only do 1 stream, or because of throughput 
issues in the DB.  Can't tell why it's so slow.
 
Solution is currently to backup to a (random) disk pool of 5 TB, then 
backup stgpool, then migrate to the dedup (file) disk pool.

How do you manage the backup stgpool?  Multiple processes?

Thanks!
Wanda

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Matthew McGeary
Sent: Wednesday, March 05, 2014 10:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] [Pool] Using Dedup with TSM : What kind of Hardware 
?

I know this is a bit of a necro-reply, but I just saw this question and 
thought I'd discuss our config

- Total data in dedup pools (Native/Deduped) We have a single deduplicated 
primary storage pool that is currently in the ~95 TB range.  Our copypool 
resides on LTO4 and occupies ~228TB
- Total data ingested daily (Native, server side/Deduped, client side) Our 
90-day daily average intake is 3.3 TB.  It can peak up to 8TB during our 
monthly cold backup of production Oracle data.
- Size of your DB
The DB fluctuates between 1.7 and 2TB presently, depending on reorg state.
- Do you backup your STG to Tape ?
Yes, as mentioned above.
- Disk subsystem for DB ?
V7000 SSD in RAID5. 
- Disk subsystem for Active Log ?
Our active logs are on a tiering pool and archive logs are on NL-SAS
- Opinion on that disk subsystem ? Enough, too few ?

A little more detail on our disk configuration:
1 V7000 with 4 drawers.  One is a mix of SSD and 15k SAS.  Another is pure 
SSD and the remaining two are 3TB NL-SAS.
1 DCS3700 populated with 36 3TB NL-SAS and 24 4TB NL-SAS.
Our primary storage resides in an extent pool that spans the DCS3700 
NL-SAS and the V7000 NL-SAS.  Both use RAID6.
The TSM server itself lives on a P740 and is configured to use all 
available resources on that system, so 16 POWER7 cores at 3.6 GHz and 120 
GB of RAM.  We use VIOS for NPIV storage and networking, which takes up 
the remainder of the RAM and steals CPU cycles when necessary.

Overall this system has managed data intake growth in excess of 50% 
year-over-year and database growth of 400%.  The DCS3700 in particular is a
champ: great value for money.  We originally went with V7000 because it 
was pitched to us with compression for the storage pools in mind. 
Unfortunately, this was a bad fit due to the inherent (but poorly 
documented at the time) performance penalties that the IBM compression 
solution has for sequential workloads.  Other than this shortcoming, the
v7000 has been a good fit for TSM and has handled everything we've thrown 
at it.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Erwann Simon erwann.si...@free.fr
To: ADSM-L@VM.MARIST.EDU
Date:   01/28/2014 11:09 PM
Subject:[ADSM-L] [Pool] Using Dedup with TSM : What kind of 
Hardware ?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi,

I'm wondering what kind of hardware you are using if you're using TSM 
Dedup facilities (client and/or server) ?

Can you fill the following fields :
- Total data in dedup pools (Native/Deduped)
- Total data ingested daily (Native, server side/Deduped, client side)
- Size of your DB
- Do you backup your STG to Tape ?
- Disk subsystem for DB ?
- Disk subsystem for Active Log ?
- Opinion on that disk subsystem ? Enough, too few ?

Here's an example from one of my customers
- Total data in dedup pools (Native/Deduped) : a bit less than 10 TB 
(logical_mb), 21,5 TB (reporting_mb)
- Total data

Re: [Pool] Using Dedup with TSM : What kind of Hardware ?

2014-03-05 Thread Matthew McGeary
I know this is a bit of a necro-reply, but I just saw this question and 
thought I'd discuss our config

- Total data in dedup pools (Native/Deduped)
We have a single deduplicated primary storage pool that is currently in 
the ~95 TB range.  Our copypool resides on LTO4 and occupies ~228TB
- Total data ingested daily (Native, server side/Deduped, client side)
Our 90-day daily average intake is 3.3 TB.  It can peak up to 8TB during 
our monthly cold backup of production Oracle data.
- Size of your DB
The DB fluctuates between 1.7 and 2TB presently, depending on reorg state.
- Do you backup your STG to Tape ?
Yes, as mentioned above.
- Disk subsystem for DB ?
V7000 SSD in RAID5. 
- Disk subsystem for Active Log ?
Our active logs are on a tiering pool and archive logs are on NL-SAS
- Opinion on that disk subsystem ? Enough, too few ?

A little more detail on our disk configuration:
1 V7000 with 4 drawers.  One is a mix of SSD and 15k SAS.  Another is pure 
SSD and the remaining two are 3TB NL-SAS.
1 DCS3700 populated with 36 3TB NL-SAS and 24 4TB NL-SAS.
Our primary storage resides in an extent pool that spans the DCS3700 
NL-SAS and the V7000 NL-SAS.  Both use RAID6.
The TSM server itself lives on a P740 and is configured to use all 
available resources on that system, so 16 POWER7 cores at 3.6 GHz and 120 
GB of RAM.  We use VIOS for NPIV storage and networking, which takes up 
the remainder of the RAM and steals CPU cycles when necessary.

Overall this system has managed data intake growth in excess of 50% 
year-over-year and database growth of 400%.  The DCS3700 in particular is a 
champ: great value for money.  We originally went with V7000 because it 
was pitched to us with compression for the storage pools in mind. 
Unfortunately, this was a bad fit due to the inherent (but poorly 
documented at the time) performance penalties that the IBM compression 
solution has for sequential workloads.  Other than this shortcoming, the 
v7000 has been a good fit for TSM and has handled everything we've thrown 
at it.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Erwann Simon erwann.si...@free.fr
To: ADSM-L@VM.MARIST.EDU
Date:   01/28/2014 11:09 PM
Subject:[ADSM-L] [Pool] Using Dedup with TSM : What kind of 
Hardware ?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi,

I'm wondering what kind of hardware you are using if you're using TSM 
Dedup facilities (client and/or server) ?

Can you fill the following fields :
- Total data in dedup pools (Native/Deduped)
- Total data ingested daily (Native, server side/Deduped, client side)
- Size of your DB
- Do you backup your STG to Tape ?
- Disk subsystem for DB ?
- Disk subsystem for Active Log ?
- Opinion on that disk subsystem ? Enough, too few ?

Here's an example from one of my customers
- Total data in dedup pools (Native/Deduped) : a bit less than 10 TB 
(logical_mb), 21,5 TB (reporting_mb)
- Total data ingested daily (Native, server side/Deduped, client side) : 1 
TB (native), no client side
- Size of your DB : 220 GB
- Do you backup your STG to Tape ? Yes, to LTO4
- Disk subsystem for DB ? 2*RAID1 15krpm (4 internal disks), 2 volumes, 2 
directories
- Disk subsystem for Active Log ? 1*RAID1 15krpm (2 internal disks)
- Opinion on that disk subsystem ? Disk for DB are 100% busy most of the 
time (expire inventory, reclaim stg...) I think adding two more disks (one 
RAID1) to the DB would be really effective.

Your turn now ! 

Thanks a lot.
-- 
Best regards / Cordialement / مع تحياتي
Erwann SIMON




Re: Antwort: Re: [ADSM-L] [Pool] Using Dedup with TSM : What kind of Hardware ?

2014-03-05 Thread Matthew McGeary
Alex,

We are only using server-side at the moment but my intention is to start 
rolling out client-side this year.  The number I quoted you was the 
logical GB stored for backup/archive data.  The space utilization is 
currently 114TB.
As for the 4TB disks, I have no complaints thus far.  With an extent pool 
backed by 84 spindles, even NL-SAS can support a decent level of IOPS.  I 
wouldn't bother with a 'cache' pool for backups as we typically see ingest 
rates in excess of 350 MB/s during busy periods of the backup window.  The 
limiting factor on backup performance is the network link providing the 
data, in our case.
We have far fewer objects stored than you do, with 106,000,000 in the 
primary storage pool.  Object count is the best predictor of database 
size, so you will see a larger database size than we do, I'd think.


Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Alexander Heindl alexander.hei...@generali.at
To: ADSM-L@VM.MARIST.EDU
Date:   03/05/2014 11:52 AM
Subject:[ADSM-L] Antwort: Re: [ADSM-L] [Pool] Using Dedup with TSM 
: What kind of Hardware ?
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi Matthew,

I was quite happy when I read your message, because your system (size) 
seems to be very similar to mine, with just one difference: yours is 
already deduplicated.
A lot of the numbers really fit my system, so yours gives me a clue as to 
what I need for sizing, and it confirms the numbers I calculated with this 
document (which is highly recommended reading!):
https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/Tivoli%20Storage%20Manager/page/Effective%20Planning%20and%20Use%20of%20IBM%20Tivoli%20Storage%20Manager%20V6%20Deduplication


Could you maybe please answer the following questions:
- are you using server side or client side dedup or both (percentage)?
- the 95 TB you stated here: Is this what your TB-lic reports, your 
occupancy (logical_mb) tells you, the storage pool states (in %) or what 
you see on disk? The last one is for sure higher, as it also includes the 
pending volumes for let's say 7 days of reuse dealy. It would be 
interesting, what numbers you have there, because this is the only value 
that counts for sizing.
- how happy are you with the 4TB disks? I also plan to use them, as my 
daily ingest is quite low as yours, so I don't see a problem. Expecially 
with client side dedup I think this won't matter that much...
- do you use client side dedup for DB-(DB2, Oracle, ...) Backups?
- how many objects have you stored in primary- and copypools? (select 
sum(num_files) from occupancy)
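If it helps, here is a small sketch of collecting those counts from the administrative CLI: run the select through `dsmadmc -dataonly=yes -commadelimited` and parse the rows. The per-pool GROUP BY variant of the select and the pool names in the sample are assumptions for illustration:

```python
import csv
import io

def parse_dsmadmc_csv(text):
    """Parse rows captured from 'dsmadmc -dataonly=yes -commadelimited',
    e.g. output of a per-pool variant of the select in this post:
        select stgpool_name, sum(num_files) from occupancy
        group by stgpool_name
    """
    return [row for row in csv.reader(io.StringIO(text)) if row]

# Hypothetical captured output (pool names are made up):
sample = "DEDUPPOOL,106000000\nCOPYPOOL,106000000\n"
counts = {pool: int(n) for pool, n in parse_dsmadmc_csv(sample)}
print(counts["DEDUPPOOL"])  # 106000000
```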

In my case I also plan a cache filepool with smaller 10k disks for 
backups which need higher performance when writing data to TSM. Those 
backups will be server-side deduped and migrated to the big 4TB disk 
filepool.

some information about my system:
~200 TB native data (Occupancy)
daily ingest: ~3,5 TB / day
530.000.000 objects (primary + one copy)

Regards,
Alex




Von:Matthew McGeary matthew.mcge...@potashcorp.com
An: ADSM-L@VM.MARIST.EDU, 
Datum:  05.03.2014 16:21
Betreff:Re: [ADSM-L] [Pool] Using Dedup with TSM : What kind of 
Hardware ?
Gesendet von:   ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I know this is a bit of a necro-reply, but I just saw this question and 
thought I'd discuss our config

- Total data in dedup pools (Native/Deduped)
We have a single deduplicated primary storage pool that is currently in 
the ~95 TB range.  Our copypool resides on LTO4 and occupies ~228TB
- Total data ingested daily (Native, server side/Deduped, client side)
Our 90-day daily average intake is 3.3 TB.  It can peak up to 8TB during 
our monthly cold backup of production Oracle data.
- Size of your DB
The DB fluctuates between 1.7 and 2TB presently, depending on reorg state.
- Do you backup your STG to Tape ?
Yes, as mentioned above.
- Disk subsystem for DB ?
V7000 SSD in RAID5. 
- Disk subsystem for Active Log ?
Our active logs are on a tiering pool and archive logs are on NL-SAS
- Opinion on that disk subsystem ? Enough, too few ?

A little more detail on our disk configuration:
1 V7000 with 4 drawers.  One is a mix of SSD and 15k SAS.  Another is pure 
SSD and the remaining two are 3TB NL-SAS.
1 DCS3700 populated with 36 3TB NL-SAS and 24 4TB NL-SAS.
Our primary storage resides in an extent pool that spans the DCS3700 
NL-SAS and the V7000 NL-SAS.  Both use RAID6.
The TSM server itself lives on a P740 and is configured to use all 
available resources on that system, so 16 POWER7 cores at 3.6 GHz and 120 
GB of RAM.  We use VIOS for NPIV storage and networking, which takes up 
the remainder of the RAM and steals CPU cycles when necessary.

Overall this system has managed data intake growth in excess of 50% 
year-over-year and database growth of 400%.  The DCS3700 in particular is a 
champ: great value for money.  We originally went with V7000 because it 
was pitched to us

Re: TSM 6.3.3

2013-03-28 Thread Matthew McGeary
We did an upgrade of 5.5 to 6.2 in 2010 with a database in excess of 200 
GB and the conversion process was well under 48 hours.  Overall the 
upgrade process was painless and problem-free, so I'll add my voice to the 
chorus and say that if you can swing the outage, it's very worthwhile to 
go down the upgrade path.

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Roger Deschner rog...@uic.edu
To: ADSM-L@VM.MARIST.EDU
Date:   03/27/2013 11:43 PM
Subject:Re: [ADSM-L] TSM 6.3.3
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Having just done an upgrade of a 120 GB TSM 5.5 server to 6.2, IBM's
time estimates were surprisingly accurate. The process was long and
labor intensive, but in the end it worked. You can see my notes in the
archives of this list from the end of December.

We're even considering altering our strategy for our oldest 5.5 server
with its bloated 300 GB database and 1000 clients. Plan A was to put up
the new V6 server alongside it and do exports and new backups, like
you're planning. That process is underway, but it may take a whole year,
and I want to junk the old hardware before then. Plan B is to get the
database of the 5.5 server down from 300 GB to about 150 GB by exporting
or doing new backups of the easy, large clients, and then do an upgrade
for the rest using the new system/media method to another instance on
the new hardware. The largest savings in my Plan B would be in not
having to change anything on most of those 1000 clients, which sit on
the desktops of professors, researchers, and deans, some of whose
offices I'd probably have to visit in person. That terrible prospect of
spending weeks walking around campus switching client nodes one at a
time, makes the long and somewhat excruciating TSM upgrade process look
like a walk in the park. Instead I will just have to change one DNS
definition after the upgrade is done.

In summary, do the upgrade. I think it will be easier for you.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 27 Mar 2013, Alex Paschal wrote:

Hi, Jerome.  I've found IBM's quote of 5GB/hr is pretty accurate across
a variety of hardware architectures, OSs, and disk arrays. Figure your
200GB database would take 40-ish hours to upgrade, possibly less if you
feel a reorg would shrink your database significantly.  That means you'd
probably miss two nights' backups, IF you left the old system down.
Here are some thoughts.
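The arithmetic behind that estimate is just database size divided by upgrade rate; a trivial sketch, assuming IBM's quoted ~5 GB/hr holds on your hardware:

```python
def upgrade_hours(db_gb, rate_gb_per_hr=5.0):
    """Estimated V5 -> V6 upgrade time at IBM's quoted ~5 GB/hr."""
    return db_gb / rate_gb_per_hr

print(upgrade_hours(200))  # 40.0 hours for a 200 GB database
print(upgrade_hours(120))  # 24.0 hours, in the ballpark of Roger's 120 GB upgrade
```

A pre-upgrade reorg that shrinks the database shortens the outage proportionally.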

Migration takes a lot of work and time.  If you can possibly swing the
40-hour upgrade, I highly recommend it.  To help convince management,
figure the time you'd spend migrating and how much that would cost in
consultant time, vs 2-3 days of consultant time for just the upgrade.
Try to convince them your time isn't free and should be factored in
because, while you're migrating, there's other stuff you aren't doing.

If you use the New System/Media method, which is my personal preference,
you can even bring your 5.5 server back up after the extract is
complete, which means you can take backups and be able to do restores
for those two days.  This will not be possible if you use the Network
method.  This covers the "what if someone needs a restore during those
two days" and the "that leaves us unprotected for two days" complaints.
You would want to make sure your reusedelay is set correctly, though,
and don't bother with reclamation.  You would have to manually track
volumes that are sent offsite during those two days, but maybe you could
just skip the offsite process those two days.

If you can convince management to abandon that two days' worth of
backups to the old 5.5 server, e.g. shut down the 5.5 server, switch
production to the 6.x server, resume incrementals with a 2-day gap, this
is just as good because you don't have to do any migration. Again, save
time, effort, and years of premature aging.  Try using business phrases
like "migration cost not commensurate with the benefits," "migration
increases risk," "lost opportunity cost of the migration time," and any
other BS-Bingo terms you can repurpose for the fight against evil.

If you can't convince management to abandon that two days' worth of
backups, see if you can convince them there's only a few nodes that you
can't abandon.  If you can limit it to just a few nodes, you could do an
export node filedata=all fromdate= fromtime= todate= totime= to skim
only the data from the time of the extract and import it into the new
server.  That will drastically reduce the amount of data you have to
migrate.  If you took those backups to new media, like a new library
partition or something, or a bunch of spare disk (if there is such a
thing), you could do it server-to-server.  If not, simply export to
tape, shut down the 5.5 server, and import those tapes into the new 
server.

If all of the above falls through, only then consider

Re: TSM for VE question

2013-03-28 Thread Matthew McGeary
That's what we did.  We're still using the old v5 reporting module and I
built a little report that does a filespace query and shows the age and
backup date of each.  That gives you a rough way to determine if a backup
failed because its age will differ from the rest of the nodes in that
schedule.
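A sketch of the same staleness check outside the old reporting module, assuming you pull NODE_NAME, FILESPACE_NAME, and BACKUP_END rows from a filespace query; the one-day threshold and the node names are assumptions for illustration:

```python
from datetime import datetime, timedelta

def stale_filespaces(rows, as_of, max_age_days=1):
    """Flag filespaces whose last backup finished more than max_age_days
    before as_of -- a rough stand-in for 'this node's backup failed',
    since its age will differ from the rest of the schedule."""
    cutoff = as_of - timedelta(days=max_age_days)
    return [(node, fs) for node, fs, backup_end in rows if backup_end < cutoff]

rows = [
    ("NODE_A", "/", datetime(2013, 3, 28, 2, 0)),
    ("NODE_B", "/", datetime(2013, 3, 25, 2, 0)),  # three days old
]
print(stale_filespaces(rows, as_of=datetime(2013, 3, 28, 8, 0)))
# [('NODE_B', '/')]
```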

Matthew McGeary
Technical Specialist
PotashCorp - Saskatoon
306.933.8921



From:   Stackwick, Stephen stephen.stackw...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Date:   03/18/2013 10:23 AM
Subject:Re: [ADSM-L] TSM for VE question
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



You can query the last backup date for the filespaces in your Datacenter
node, where all the VM backups live. That will at least let you know if
you're current.

Steve

STEPHEN STACKWICK | Senior Consultant | 301.518.6352 (m) |
stephen.stackw...@icfi.com | icfi.com
ICF INTERNATIONAL | 410 E. Pratt Street Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
 Of Robert Ouzen
 Sent: Monday, March 18, 2013 2:57 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] TSM for VE question

 Hi to all

 I'm trying to figure out how to get success information for VM backups
 without checking through the TSM for VE GUI, just from the administrator
command line.

 The only thing I can think of is to create a script such as: q act
begind=-$1
 s=ANE4173I  ($1 = number of days)

 03/15/2013 18:01:47  ANE4173I (Session: 1021, Node:
HAIFA_DATACENTER)
   Successful Full VM backup of VMware Virtual
Machine
   'virtqsia'mode: 'Incremental Forever -
   Incremental'  target node name:
'HAIFA_DATACENTER'
 data mover node name: 'VMPROXY'
(SESSION: 1021)
 03/15/2013 18:04:02  ANE4173I (Session: 1024, Node:
HAIFA_DATACENTER)
   Successful Full VM backup of VMware Virtual
Machine
   'cldVCenter'  mode: 'Incremental Forever
   - Incremental'target node name:
'HAIFA_DATACENTER'
 data mover node name: 'VMPROXY'
(SESSION: 1024)
 03/15/2013 18:12:22  ANE4173I (Session: 1027, Node:
HAIFA_DATACENTER)
   Successful Full VM backup of VMware Virtual
Machine
   'cldSQL'  mode: 'Incremental Forever -
   Incremental'  target node name:
'HAIFA_DATACENTER'
 data mover node name: 'VMPROXY'
(SESSION: 1027)
 03/16/2013 18:02:52  ANE4173I (Session: 3489, Node:
HAIFA_DATACENTER)
   Successful Full VM backup of VMware Virtual
Machine
   'shiltest'mode: 'Incremental Forever -
   Incremental'  target node name:
'HAIFA_DATACENTER'
 data mover node name: 'VMPROXY'
(SESSION: 3489)

 It's working, but I get output for all my VM backups and don't know how
 to point to a specific VM.

 Any ideas ?

 Best Regards

 Robert
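One way to answer Robert's question from the command line: capture the same q act output as text and filter it per VM. This is a sketch, not a supported interface; it assumes the wrapped ANE4173I message format shown above, and the regex tolerates the line wrapping dsmadmc inserts:

```python
import re

# Matches the ANE4173I "Successful Full VM backup" message and captures the
# quoted VM name; re.S lets the pattern span dsmadmc's wrapped output lines.
ANE4173I_VM = re.compile(
    r"ANE4173I.*?Successful Full VM backup of VMware Virtual\s+Machine\s+'([^']+)'",
    re.S,
)

def successful_vm_backups(actlog_text):
    """Return VM names with a successful full backup in captured q act output."""
    return ANE4173I_VM.findall(actlog_text)

# Condensed sample modeled on the output quoted in the thread:
sample = (
    "03/15/2013 18:01:47  ANE4173I (Session: 1021, Node: HAIFA_DATACENTER)\n"
    "  Successful Full VM backup of VMware Virtual Machine\n"
    "  'virtqsia'  mode: 'Incremental Forever - Incremental'\n"
    "03/15/2013 18:04:02  ANE4173I (Session: 1024, Node: HAIFA_DATACENTER)\n"
    "  Successful Full VM backup of VMware Virtual Machine\n"
    "  'cldVCenter'  mode: 'Incremental Forever - Incremental'\n"
)
print("virtqsia" in successful_vm_backups(sample))  # True
```

Checking membership for one name answers the "specific VM" question without scanning the full report by eye.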