Re: Expanding ProtecTIER Library

2013-08-27 Thread Hart, Charles A
If you're expanding the library dimensions (drives and slots), you'll want to do:

1) An audit of the library once you bring the library manager back up, of course making
sure there's no library activity.  (I like to leave the library manager down during the
process.)

2) Define the new drives and their paths.

That should do it - the library audit lets TSM know about the new element numbers,
which identify the slots and drives.
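For reference, the two steps above map to administrative commands along these lines. This is only a sketch: the library, drive, server, and device names below are placeholders, and the exact device special file depends on your platform.

```shell
# Sketch only - VTLLIB1, DRIVE09, TSMSRV1 and /dev/rmt9 are placeholder names.
# 1) Re-inventory the library so TSM learns the new slot/drive element numbers:
dsmadmc -id=admin -password=xxxxx "audit library VTLLIB1 checklabel=barcode"
# 2) Define each new drive and its path:
dsmadmc -id=admin -password=xxxxx "define drive VTLLIB1 DRIVE09"
dsmadmc -id=admin -password=xxxxx "define path TSMSRV1 DRIVE09 srctype=server desttype=drive library=VTLLIB1 device=/dev/rmt9"
```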

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Steven 
Langdale
Sent: Tuesday, August 27, 2013 2:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Expanding ProtecTIER Library

Guys.

I'm about to expand an existing virtual library.

The lib manager is running 5.5.5.2

I'm assuming I'll just need to restart the lib manager and define the extra
drives & paths (also adding slots).  Has anyone done this and had to do anything more to
get it working, e.g. delete and reconfigure the whole library?

Thanks

Steven

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: determining wwpn for a fibre channel adapter

2013-04-29 Thread Hart, Charles A
You may need to load the Atape driver.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Monday, April 29, 2013 2:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] determining wwpn for a fibre channel adapter

RHEL v6, HP DL585 server.

I need to get some rezoning done to allow our new intel based tsm servers to 
see the ts1120 drives in our computer room.

How do I determine the WWPNs for the HBAs connected to the drives?

Looked through udev and its associated things, and just haven't found what I 
need.
Thanks for any assistance.
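For the record, on RHEL the kernel exposes each FC HBA port's WWPN through sysfs once the HBA driver is loaded, so no extra tooling is strictly needed for this part. A minimal sketch (it prints nothing if no FC HBAs are present):

```shell
#!/bin/sh
# Print the WWPN of each Fibre Channel HBA port known to the kernel.
# /sys/class/fc_host only exists when an FC HBA driver is loaded.
for host in /sys/class/fc_host/host*; do
    [ -e "$host/port_name" ] || continue
    printf '%s WWPN: %s\n' "$(basename "$host")" "$(cat "$host/port_name")"
done
```

On a host with a QLogic or Emulex HBA this typically prints something like `host3 WWPN: 0x21000024ff123456`, which is the value to hand to the SAN team for zoning.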



Re: Restore symantec backup with TSM???

2013-01-11 Thread Hart, Charles A
You can convert it with a product known as Butterfly
http://www.theregister.co.uk/2011/03/22/butterfly_software/   Looks cool


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Rick Adamson
Sent: Friday, January 11, 2013 12:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Restore symantec backup with TSM???

Most likely the only option is to restore the data somewhere and re-back
it up with TSM.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Stef Coene
Sent: Friday, January 11, 2013 1:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Restore symantec backup with TSM???

On Friday 11 January 2013 03:46:21 you wrote:
> Hello,
>
> I have several media with a full backup previously made with Symantec
> NetBackup Enterprise Server v7.1.0.4... is it possible to import,
> catalogue and restore that backup with IBM TSM?
No.


Stef



Re: Deduplication candidates

2013-01-11 Thread Hart, Charles A
Also, the Files Per Set parameter in Oracle RMAN will really get you -
ProtecTier recommends a setting of no more than 4.  We have seen 10, and
dedup went from 10:1 to 2.5:1.
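For anyone tuning this: the parameter in question is RMAN's FILESPERSET, which controls how many datafiles are multiplexed into one backup set; interleaving blocks from many files is what defeats the dedup engine. A hedged sketch (the connection string is illustrative, and the right limit is site-specific - the post above cites 4 as the ProtecTier recommendation):

```shell
# Sketch: cap RMAN multiplexing so the dedup appliance sees repeatable streams.
# "target /" assumes OS authentication; FILESPERSET 4 is the value discussed above.
rman target / <<'EOF'
BACKUP DATABASE FILESPERSET 4;
EOF
```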

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Friday, January 11, 2013 9:22 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication candidates

Yep.

Oracle DB's, getting great dedup rates on the DB's (except the ones
where they have turned on Oracle compression to start with - that is,
the DB itself is compressed).

Poor dedup on the Oracle logs either way.

W 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Rick Adamson
Sent: Friday, January 11, 2013 8:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication candidates

Though our TSM systems (6.3 and 5.5) use back-end de-dup (Data Domain), I
also notice that log files for DBs such as pre-2010 Exchange using
legacy backups, and DB2 log files, de-dup very poorly.
Originally I thought that our DBAs or Exchange admins were either
compressing this data or storing it on compressed volumes, but I found no
evidence of it. After seeing this conversation and giving it further
thought, I wonder if others experience poor de-dup rates on these data
types?
Thanks
~Rick

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
bkupmstr
Sent: Friday, January 11, 2013 7:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Deduplication candidates

Thomas,
First off, with all the great enhancements and current high
stability levels, I would recommend going straight to version 6.4. As you
have already stated, there are certain data types that are good candidates
for data deduplication; your database backup data definitely is, and
image files definitely aren't.

From my experience, Oracle export files are traditionally good dedupe
candidates too.
From what you describe, the SQL backup data minus the compression would
also be a good candidate.

The one thing you do not mention is how many versions of this backup
data you are keeping.

From my experience, unless you are keeping a minimum of 4 backup
versions, the dedupe ratios will suffer. Too many times I see folks
keeping only 2 backup versions, and they can't understand why they get
very poor dedup rates.

Also be aware that with TSM deduplication you will have to ensure that
you write the backup data to a target disk pool with good
enough performance to not negatively impact backup speed.

+--
|This was sent by bkupm...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



Re: Battles with TSM for VE

2013-01-08 Thread Hart, Charles A
Interesting ... we have experienced a very similar issue where our DB
(Oracle etc.) restores were coming off the copy pool.  We opened a PMR
and found out this was due to the way TSM was designed to take the path
of least resistance for restores, meaning if a restore requires fewer copy
pool volumes than primary volumes, TSM will choose the pool with fewer
volumes.  This is a real pain with virtual tape, as we make those volumes
smaller by design, unlike physical tapes at 1+TB.  IBM did do
a "fix" with an option to put in the server option file, but it didn't
work for us.  So if we have a critical restore and we see this happen,
we'll mark the offsite tapes as unavailable.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Neil Schofield
Sent: Tuesday, January 08, 2013 1:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Battles with TSM for VE

We've been using TSM for VE in production for about 6 months now and
although it generally works well, there have been a number of minor
issues which remain unresolved. However one problem stands out above the
others and despite extensive discussions with IBM support, we've been
unable to achieve a resolution. I just wanted to run it past the ADSM-L
community to gain their perspective.

There are about 500 VMs (predominantly Windows Server) on ESX 4.1 that
we back up on a nightly basis, with each VM getting one full backup a
week and incrementals (using change block tracking)  on the other
nights.

Consider the following scenario:
- A backup proxy server accessing the ESX disk LUNs over a SAN running
TSM for VE and with LAN-free (Storage Agent) access to tape library A
- A dedicated primary disk storage pool (VMCTLDISK) on the TSM server
for storing the CTL data
- A dedicated primary tape storage pool (VMDATATAPE) using a device
class in library A for the VM backup data
- A dedicated copy tape storage pool (VMCTLTAPE) using a device class in
library  A to provide a backup for the CTL data on disk
- VMDATATAPE is collocated by filespace so each VM's backup data is on
the smallest number of tapes
- VMCTLTAPE contains only one or two tape volumes to hold the copy of
all the CTL data for all VMs (no collocation)
- A daily admin schedule backs up CTL data in the primary storage pool
to the copy storage pool after the backup window for the TSM for VE
clients

Now a full backup of a VM works fine. The backup proxy server sends CTL
data over the LAN to disk on the TSM server and VM backup data over the
SAN to library A.

Incremental VM backups work less well. Under the covers, incremental
backups involve a significant amount of restore processing by the client
as it restores previously backed up CTL data. In the scenario above, we
naively expected the process for restoring the CTL data to be the
reverse of the backup process - ie the CTL data would be accessed over
the LAN from the primary disk storage pool on the TSM servers.

However it quickly became evident that the TSM for VE client was
favouring the far slower tape volume in the copy storage pool when it
came to restoring CTL data for every incremental backup (presumably on
the basis that the tape volume could be mounted LAN-free while the disk
volume couldn't). For the relatively small amount of data involved when
restoring the CTL files (compared to the size of the backup data), the
overhead of mounting the tape was significant. Even worse though, those
one or two copy storage pool volumes became a massive source of
contention when running multiple concurrent incremental VM backups.

I can't find an easy way of inhibiting LAN-free access to the copy
storage pool volumes by the backup proxy server without affecting its
ability to store (and restore) the VM backup data using LAN-free.

When we discovered this behaviour the only relief I could find was to
put in place a spectacularly ugly work-around which involved running for
99% of the day with the volumes in the copy storage pool holding the CTL
data updated to have an access mode of unavailable. This forces the TSM
for VE client to restore the CTL files from the primary disk storage
pool during incremental VM backups. The script which performs the
storage pool backup first updates the copy storage pool volumes to
read/write and then changes them back to unavailable upon completion.
This doesn't sit well with me because volumes which aren't read/write
typically indicate trouble and it means mods to our monitoring to ignore
these volumes.
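The work-around described above can be scripted along these lines. This is only a sketch of the approach: the pool name comes from the scenario earlier in the post, but the credentials are placeholders, and it assumes UPDATE VOLUME's wherestgpool filter to avoid listing volumes individually.

```shell
# Sketch of the kludge: open the copy pool volumes only for the nightly
# BACKUP STGPOOL, then lock them again so the TSM for VE client restores
# CTL data from the primary disk pool. Credentials are placeholders.
DSMADMC="dsmadmc -id=admin -password=xxxxx"
$DSMADMC "update volume * access=readwrite wherestgpool=VMCTLTAPE"
$DSMADMC "backup stgpool VMCTLDISK VMCTLTAPE wait=yes"
$DSMADMC "update volume * access=unavailable wherestgpool=VMCTLTAPE"
```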

At the same time as we introduced this, we logged a PMR with IBM. This
has now been relegated to the status of a documentation APAR to describe
the behaviour, so it looks like the kludge we implemented could become
permanent.

Am I being unreasonable in thinking the way IBM have implemented this is
flawed?

For info, TSM for VE is v6.2 and TSM Server is v5.5.6

Regards
Neil Schofield
Technical Leader
Yorkshire Water


 


Re: NetApp for Primary Disk Pool

2012-08-27 Thread Hart, Charles A
This would be interesting to hear about, as we have started to look at
other NAS devices to use as a TSM backup target. We all know Data Domain
(and soon ProtecTier) does the NAS mount + dedupe, but would / could you
get better pricing or performance from a non-backup appliance?  NetApp
dedupes, as does HDS BluArc - so how well do they do, and would they be
cheaper than an appliance developed and sold for the purpose of backups?



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Mayhew, James
Sent: Monday, August 27, 2012 3:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] NetApp for Primary Disk Pool

BUMP... Does anyone have thoughts on this?

From: Mayhew, James
Sent: Thursday, August 23, 2012 6:42 PM
To: 'ADSM-L@vm.marist.edu'
Subject: NetApp for Primary Disk Pool

Hello All,

We are considering using a NetApp V6210 with some attached shelves as a
block storage TSM primary disk pool. Do any of you have any experience
using NetApp storage as a TSM primary disk pool? If so, how was your
experience with this solution? Did you have any performance issues? How
was it with sequential workloads? Any insight that you all can provide
is greatly appreciated.

Best Regards,

James Mayhew
Storage Engineer
HealthPlan Services
E-mail: jmay...@healthplan.com


_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ CONFIDENTIALITY NOTICE: If you have received this email in error,
please immediately notify the sender by e-mail at the address shown.This
email transmission may contain confidential information.This information
is intended only for the use of the individual(s) or entity to whom it
is intended even if addressed incorrectly. Please delete it from your
files if you are not the intended recipient. Thank you for your
compliance.

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: migrating tape storage pools

2012-08-09 Thread Hart, Charles A
You can only do MOVE DATA within a copy pool, not from copy pool A to
copy pool B, which would be nice. But I suppose for a media change you
can mark the original copy pool volumes read-only, then put your new
media in and copy within the same pool, onto the new media, which is set
to read/write.

We've had a situation where our copy pool vtape library hit its max and
couldn't expand; we couldn't do MOVE DATA to another VTL as the device
class had to be different and therefore a different copy pool ... arggg.

Please correct any mis-statements I may have made, as this is only from
one person's experience.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Huebner,Andy,FORT WORTH,IT
Sent: Thursday, August 09, 2012 9:33 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] migrating tape storage pools

I stand corrected.

Andy Huebner


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
BEYERS Kurt
Sent: Thursday, August 09, 2012 9:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] migrating tape storage pools

Hi Andy,

A 'move data' on a copy stg pool volume works, the primary stg pool
volume(s) is/are used for the operation. It just does not work for tapes
that contain NDMP backups (primary or copy).

regards,
Kurt

From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of
Huebner,Andy,FORT WORTH,IT [andy.hueb...@alconlabs.com]
Sent: Thursday, August 09, 2012 16:00
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] migrating tape storage pools

You cannot use move data on a copy tape.  I have tried.
I am very interested if you find a good solution.  We are moving some of
our copies to a different drive type.


Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
BEYERS Kurt
Sent: Thursday, August 09, 2012 3:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migrating tape storage pools

Good morning,

We are in the process of migrating several tape storage pools, both
primary and copy, from LTO generation x to LTO generation y.

It is easy for primary storage pools, since the incremental backup
mechanism is taking all the primary storage pools in scope:

* Redirect the backups to an LTO_Y storage pool

* Migrate in the background  the LTO_X storage pool to the LTO_Y
with a duration of x minutes

However, this does not work for copy storage pools, since there is a
valid reason why a backup would be kept in multiple copy storage pool
volumes. But this implies that the copy storage pool on generation
LTO_Y needs to be rebuilt from scratch, which is time consuming and
expensive (more tape volumes, more slots, more offsite volumes). Are
there really no other workarounds available?

An option might be the following, given that we use dedicated device classes
for each sequential storage pool and that multiple libraries will be or
are defined for each LTO generation:


* A DRM volume is linked to a copy storage pool

* The copy storage pool is linked to a device class

* Hence change the library in the device class from LTO_X to
LTO_Y for the copy storage pool

Would this workaround work? Then I could perform a daily move data in
the background to get rid from the LTO_X copy storage pool volumes. Will
test it myself of course.
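If the device-class swap is allowed (worth verifying on a test server first, as Kurt intends), the sequence he describes would look roughly like this. The device class, library, and volume names here are invented for illustration; MOVE DATA without a target pool moves the data to other volumes within the same pool.

```shell
# Hypothetical sketch of the proposed workaround: point the copy pool's
# device class at the LTO_Y library, then drain old LTO_X volumes in place.
# All names below are placeholders, not from a real configuration.
DSMADMC="dsmadmc -id=admin -password=xxxxx"
$DSMADMC "update devclass COPY_LTO library=LIB_LTO_Y"
# Repeat for each remaining LTO_X volume in the copy storage pool:
$DSMADMC "move data LX0001 wait=yes"
```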

It would be great too if IBM could consider introducing the concept of
a 'copy storage pool group' consisting of multiple copy storage pools
that together contain only one backup of each item.  Perhaps I should
raise an RFC for it if other TSM users also find it a good feature. So
please provide me some feedback. Thanks in advance!

Regards,
Kurt





*** Disclaimer ***
Vlaamse Radio- en Televisieomroeporganisatie Auguste Reyerslaan 52, 1043
Brussel

nv van publiek recht
BTW BE 0244.142.664
RPR Brussel
http://www.vrt.be/gebruiksvoorwaarden

This e-mail (including any attachments) is confidential and may be
legally privileged. If you are not an intended recipient or an
authorized representative of an intended recipient, you are prohibited
from using, copying or distributing the information in this e-mail or
its attachments. If you have received this e-mail in error, please
notify the sender immediately by return e-mail and delete all copies of
this message and any attachments.

Thank you.

Re: Passport Advantage failure?

2012-07-18 Thread Hart, Charles A
Looks fine on this link
https://www-304.ibm.com/software/howtobuy/softwareandservices/authentica
te/Registration?caller=PAC

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Allen S. Rout
Sent: Wednesday, July 18, 2012 10:06 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Passport Advantage failure?

I'm getting a remarkably official-looking "This page is no longer
available" from the passport advantage customer login links scattered
all over IBM web space.

You folks seeing the same?  The page tastes like a 'removed this
feature' sort of page, not like a "D'oh, app busted!" page.

- Allen S. Rout



Re: RMAN direct to NFS

2012-07-11 Thread Hart, Charles A
Hi Nick,

1) We only carve out 1 vtape library per 7650 - we then share that library
with 2-4 TSM instances on one AIX server; some installations will have
two 7650's attached to an AIX p6 650.

2) 128 drives was just a number - we get close to 60-80 drives, as our DB
backups larger than 2 GB go to VTL, so we get quite a few mounts.

3) We share one and sometimes two 7650's, with one VTL defined per
physical host, then have 2-4 instances share the 1-2 VTLs, so one
instance is a library manager too.   We haven't done storage agents in a
while; we can't seem to scale a TSM instance big enough (v5.5 - just
starting to get into v6.x, where we will go down to one TSM instance per
host OS).  It seems that after 15TB of ingestion per instance we need to
turn up a new instance - we can't get all the work done in a day.

I'm hoping we can move toward more of an NFS model. I know EMC's DD has
it, but not being able to get a DD in a gateway version doesn't allow us
to leverage our primary disk vendor for a good disk price.
I imagine 6PB of DD would cost a lot more than 6PB of our current disk
technology.

The whole tape path management can be a pain. We do have a script that
will delete, define, etc., but even then it would just seem cleaner to go
with a sequential FILE device class on NFS.  Maybe when the IBM 7650 goes
NFS we can leverage that, but we'll see.

We almost got our DBAs onto an NFS target even without TSM, but they
too were afraid of NFS and of having to manage their own backup disk
space. arrggg


Richard,

I like your layout of pros and cons. We've seen NFS issues on our AIX
backup servers for second-copy scripts etc., and it can be painful.  We
were in a spot where we lost a 7650 and slid a Sun 7410 NAS device in for
a week or two; it only had a couple of 1-gig interfaces, so it was slow,
but it worked well.  We were even thinking that we'd test some other NAS
devices as an NFS target instead of a "backup appliance", but that POC is
on hold due to too much day-to-day work.


Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Nick Laflamme
Sent: Tuesday, July 10, 2012 7:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] RMAN direct to NFS

This is more about VTLs than TSM, but I have a couple of questions,
influenced by my shop's experience with VTLs.

1) When you say "40 VTLs," I presume that's on far fewer frames, that
you're using several virtual libraries on each ProtecTier or whatever
you're using?
2) I see that as 128 tape drives per library. Do you ever use that many,
or is this just a "because we could" situation? (We use 48 per library,
and that may be overkill, but we're on EMC hardware, not IBM, so the
performance curves may be different.)
3) Do I read 1) and 4) to mean that you're sharing VTLs among TSM
servers? Why, man, why? Can't you give each TSM server its own VTL and
be done with it? Or are you counting storage agents as TSM instances? 

I don't know if we'd have gone with VTLs if we were architecting this
from scratch, but as we went from tape-based to virtual technology, the
VTL interfaces made the transition logically simpler, and it appeased
the one team member who has an irrational hatred of NFS. We're now under
pressure to adopt a new reference architecture that is NFS based, not
VTL based. I'm skeptical about that will work, but because we're
changing everything except the fact that we're still a TSM shop, if it
doesn't go well, everyone will have a chance to blame someone else for
any problems. 

Now that I think about it, I have no idea how many paths we have defined
to all of our VTLs on all of our DataDomains. It might be 10,000 paths
ultimately, but when you define them a few hundred at a time, or fewer,
it's not so overwhelming! 

Nick


On Jul 10, 2012, at 12:29 PM, Hart, Charles A wrote:

> The IBM one, the reason I said overhead and complexity 
> 
> 1) We have 40 VTL's for 
> 2) 5120 Configured Vtape Drives 
> 3) More than 10,000 TSM Tape Drive Paths 
> 4) 100 TSM Instances that share all the above
> 
> It would "seem" that if we used a VTL that has NFS we would still have
> 40 devices but not the 15K objects to manage (tape drives and paths)
> 
> Regards, 
> 
> Charles 



Re: RMAN direct to NFS

2012-07-10 Thread Hart, Charles A
It's a size thing ... We average 190TB per night for incrementals, and
480TB on the weekends (DB fulls), across 4 primary DCs.  We also have 4
fully populated STK SL8500s and two TS3500s for offsite copies (vaulted
in-house using FCIP).  We have a team of 8 that manages it now.

It can be exciting and many times overwhelming, but we often long for
the simple days of 4 TSM servers and 2 libraries.

Regards, 

Charles 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Alex Paschal
Sent: Tuesday, July 10, 2012 2:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] RMAN direct to NFS

Wow.  Charles, do you mind if we grill you about your environment? Is it
simply size that pushes you to scale to that many VTLs and TSM
instances, or is it some other consideration?


On 7/10/2012 10:29 AM, Hart, Charles A wrote:
> The IBM one, the reason I said overhead and complexity
>
> 1) We have 40 VTL's for
> 2) 5120 Configured Vtape Drives
> 3) More than 10,000 TSM Tape Drive Paths
> 4) 100 TSM Instances that share all the above
>
> It would "seem" that if we used a VTL that has NFS we would still have
> 40 Devices but not the 15K objects to manage (tape Drives and paths)
>
> Regards,
>
> Charles
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> bkupmstr
> Sent: Tuesday, July 10, 2012 11:19 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] RMAN direct to NFS
>
> Chart2: what VTL are you currently using, and why does it cause a lot of
> overhead and complexity?


Re: RMAN direct to NFS

2012-07-10 Thread Hart, Charles A
The IBM one, the reason I said overhead and complexity 

1) We have 40 VTL's for 
2) 5120 Configured Vtape Drives 
3) More than 10,000 TSM Tape Drive Paths 
4) 100 TSM Instances that share all the above

It would "seem" that if we used a VTL that has NFS we would still have
40 devices but not the 15K objects to manage (tape drives and paths)

Regards, 

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
bkupmstr
Sent: Tuesday, July 10, 2012 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] RMAN direct to NFS

Chart2: what VTL are you currently using, and why does it cause a lot of
overhead and complexity?

+--
|This was sent by bkupm...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



Re: RMAN direct to NFS

2012-07-09 Thread Hart, Charles A
This would be interesting to hear ... we were looking at it to reduce the
VTL overhead and complexity.  Our DBAs didn't want to do it because
they'd have to manage their own backup space.  argh

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
bkupmstr
Sent: Monday, July 09, 2012 3:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] RMAN direct to NFS

Anyone out here thinking of sending oracle backup data direct to an NFS
target?
Care to elaborate?

What is driving the process?
TSM licensing ?
DBA control?

+--
|This was sent by bkupm...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



Re: VTL's and D2D solutions

2012-07-02 Thread Hart, Charles A
To Gary's point below: using the PT gateway you can slide encrypted disk
in underneath, where the other appliances don't support encryption.  By
using your own disk you can also leverage your volume purchasing power
with your existing disk vendor.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary
Sent: Monday, July 02, 2012 10:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL's and D2D solutions

We are looking at vtls as well.
You might check into the ibm protectier 7650.

This is a head end that you attach to whatever storage you have.
Emulates a lot of libraries, including the 3484 and 3494 I believe.

Should know more in a couple of weeks after we review the answers to our
RFPs.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Kevin Boatright
Sent: Monday, July 02, 2012 10:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VTL's and D2D solutions

We are currently looking at adding a Disk to Disk backup solution.  Our
current solution has a 3584 tape library with LTO-5 drives using TKLM.
 
We have looked at Exagrid and Data Domain.  Also, I believe HP has a
solution.
 
We will need to have encryption on the device and the ability to
replicate between the two disk units.
 
Anyone have any comments or recommendations?  
 
Thanks,
Kevin
 



Re: Application called Teradata

2012-06-15 Thread Hart, Charles A
We have one, and it has NetBackup wrapped up inside with its own tape
library. We don't manage the backups; Teradata does.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Richard Rhodes
Sent: Friday, June 15, 2012 10:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Application called Teradata

We had some meetings a long time ago about acquiring one (which we
didn't).  At that time it was a specialized database appliance.  I seem
to
remember a discussion that it did have a TSM API interface client of
some
kind.

Rick





From:   Remco Post 
To: ADSM-L@VM.MARIST.EDU
Date:   06/15/2012 11:17 AM
Subject:Re: Application called Teradata
Sent by:"ADSM: Dist Stor Manager" 



On 15 jun. 2012, at 15:41, Geoff Gill wrote:

> I was wondering if anyone could tell me if this application is on
Oracle, Microsoft SQL, some other DB, or possibly all of the above.
>

I've been told it's none of the above. All (1) Teradata installations
that
I know of have a dedicated tape library.

>
> Thank You
> Geoff Gill
>

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622




-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.



Re: Mega-Exchange backup

2012-05-15 Thread Hart, Charles A
We also stagger fulls/incrementals every other day. If you're backing up
1.2 TB in 14 hours you're only achieving ~22 MB/s, so you may have a
performance opportunity: check disk I/O on your backup server as well as
on the Exchange host for constraints, multi-stream your backups (one
thread per storage group), and check your network links. Also ensure
other Exchange housekeeping tasks aren't running during the backup
window. Assuming you have a gig NIC for backups, you should see 60-80
MB/s with a solid configuration, as Exchange storage groups usually
stream well.

Regards, 

Charles
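The throughput figure above can be reproduced with simple arithmetic; a
minimal sketch (assuming decimal units, with the post rounding the
result down to ~22 MB/s):

```python
# Rough backup-throughput check for the numbers quoted above:
# 1.2 TB moved in a 14-hour window.
data_mb = 1.2 * 1000 * 1000      # 1.2 TB expressed in MB (decimal units)
window_s = 14 * 3600             # 14-hour backup window in seconds

throughput = data_mb / window_s  # close to the ~22 MB/s quoted above
print(f"{throughput:.1f} MB/s")

# A gigabit NIC tops out near 125 MB/s raw, so the 60-80 MB/s cited is a
# plausible effective rate after protocol and client overhead.
```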


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Schaub, Steve
Sent: Tuesday, May 15, 2012 6:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Mega-Exchange backup

Wanda,
We do this now, along with running multiple concurrent SG backups.
Contact me offline and I'll send you a copy of our current Powershell
script.

Steve Schaub
Systems Engineer II, Windows Backup/Recovery
BlueCross BlueShield of Tennessee
steve_schaub AT bcbst.com
423-535-6574 (desk)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, May 14, 2012 9:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Mega-Exchange backup

I have a customer with a 1.2 TB Exchange 2007 DB, expected to grow to 3
TB in the next 4 months.
Fulls are already a problem (14 hours).
Is there any Exchange-related reason I can't do the fulls for a couple
of storage groups on Monday, the next 2 on Tuesday, next 2 on Weds. etc
and spread them out?

Thanks ..
-
Please see the following link for the BlueCross BlueShield of Tennessee
E-mail disclaimer:  http://www.bcbst.com/email_disclaimer.shtm

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: LTO4 Tape Volumes with different Estimated Capacity

2012-01-18 Thread Hart, Charles A
It's all based on data types and compression: a SQL DB backup will
compress more data onto a tape than Windows binaries or image-type
files.
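The spread in estimated capacity follows directly from compressibility;
a small sketch (assuming LTO-4's 800 GB native capacity; the per-type
compression ratios below are illustrative guesses, not measurements):

```python
# Effective LTO-4 cartridge capacity = native capacity * compression ratio.
NATIVE_GB = 800  # LTO-4 native cartridge capacity

# Hypothetical compression ratios by data type (illustrative only):
ratios = {"sql_db_dump": 1.4, "windows_binaries": 1.1, "jpeg_images": 1.0}

for data_type, ratio in ratios.items():
    print(f"{data_type}: ~{NATIVE_GB * ratio:.0f} GB per cartridge")

# Highly compressible data yields the ~1.1 TB volumes in the listing;
# already-compressed data stays near the 800 GB native figure.
```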

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Botelho, Tiago (External)
Sent: Wednesday, January 18, 2012 9:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] LTO4 Tape Volumes with different Estimated Capacity

 

Hello

 

I have several tape volumes (LTO4) with different Estimated Capacity.
What can cause this situation?  

 

Ex:

A00035L4  WIN_SRV_T  TS3310  629.5 G  95.4  Full
A00039L4  WIN_SRV_T  TS3310    1.1 T  98.2  Full
A00101L4  WIN_SRV_T  TS3310  884.8 G  84.4  Full
A00113L4  WIN_SRV_T  TS3310    1.0 T  93.7  Full
A00118L4  WIN_SRV_T  TS3310    1.0 T  73.6  Full
A00128L4  WIN_SRV_T  TS3310    1.0 T  99.2  Full
A00129L4  WIN_SRV_T  TS3310  969.1 G  93.7  Full
A00133L4  WIN_SRV_T  TS3310  934.8 G  96.1  Full

 

 

All tapes and Drives are LTO4, TSM Server 553 for AIX.

 

The devc are configured as:

             Device Class Name: TS3310
        Device Access Strategy: Sequential
            Storage Pool Count: 16
                   Device Type: LTO
                        Format: ULTRIUM4C
         Est/Max Capacity (MB):
                   Mount Limit: DRIVES
              Mount Wait (min): 60
         Mount Retention (min): 60
                  Label Prefix: ADSM
                       Library: TS3310
                     Directory:
                   Server Name:
                  Retry Period:
                Retry Interval:
                        Shared:
            High-level Address:
              Minimum Capacity:
                          WORM: No
              Drive Encryption: Off
               Scaled Capacity:
Last Update by (administrator): ADM
         Last Update Date/Time: 07/27/09 17:32:05

 

Cumprimentos / Best regards 

Tiago Botelho



 



Re: Can a TSM server admin purloin client backups?

2011-10-25 Thread Hart, Charles A
Nothing; it's a policy challenge if they have TSM sys admin rights. Kind
of like a cop that sells evidence or takes a bribe: at some point you
have to trust your admin or fire them. In my experience a node password
can be overridden with a sys admin user ID and password.

Maybe I oversimplified the situation.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Keith Arbogast
Sent: Tuesday, October 25, 2011 3:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Can a TSM server admin purloin client backups?

This question came up again here. If a TSM admin with system
authorization knows the client password for a certain TSM node, what
keeps him from restoring files from that node to another server of his
choosing?

Sorry to resuscitate this old horse.

With many thanks,
Keith  



Re: TSM client for Exchange performance question

2011-10-05 Thread Hart, Charles A
To multi-stream legacy (non-VSS) Exchange backups, set up one TDP backup
command per Exchange storage group. So if you have 4 Exchange storage
groups you could have 4 streams.
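A sketch of what that can look like from a Windows scheduler script
(storage-group names and the option file are placeholders; `tdpexcc` is
the Data Protection for Exchange command-line client, and `start`
launches each backup in its own process so the streams run in parallel):

```
rem One DP/Exchange legacy full backup per storage group, in parallel:
start tdpexcc backup "SG1" full /tsmoptfile=dsm.opt
start tdpexcc backup "SG2" full /tsmoptfile=dsm.opt
start tdpexcc backup "SG3" full /tsmoptfile=dsm.opt
start tdpexcc backup "SG4" full /tsmoptfile=dsm.opt
```

Each invocation opens its own TSM session, giving one stream per storage
group as described above.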

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Del Hoobler
Sent: Wednesday, October 05, 2011 1:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Tsm client for exchange perfornce question

Hi Gary,

For Legacy backups, RESOURCEUTILIZATION will not do anything.
BUFFERS and BUFFERSIZE default to the optimum value as tested in the
lab.
There is some discussion in the book about BUFFER and BUFFERSIZE, but as
with any "tuning" option, your mileage may vary.

For VSS backups, RESOURCEUTILIZATION may help. It depends on how your
physical data is spread across your volumes.
For VSS backups, RESOURCEUTILIZATION needs to be placed into the DSM.OPT
file for the DSMAGENT (not the DP/Exchange one).

Thanks,

Del



"ADSM: Dist Stor Manager"  wrote on 10/05/2011
01:22:33 PM:

>> From: "Lee, Gary D." 
>> To: ADSM-L@vm.marist.edu
>> Date: 10/05/2011 01:30 PM
>> Subject: TSM client for Exchange performance question Sent by: "ADSM: 
>> Dist Stor Manager" 
>>
>> Is the resourceutilization parameter in dsm.opt valid for the 
>> exchange client?  Also, are there any general guidelines for setting 
>> the number of buffers and buffer size parameters?
>>
>> I haven't found anything in the tdp for exchange manual, so thought I

>> would try here.
>>
>>
>> Gary Lee
>> Senior System Programmer
>> Ball State University
>> phone: 765-285-1310
>>
>>



Re: Defining Multiple filesystems to one FILEDEVCLASS in TSM v5.5

2011-10-05 Thread Hart, Charles A
We did it; see below. We also share it across multiple instances on a
physical host, and so far so good!


define devclass sql-file devtype=file
directory=/tsmstgsql-lv1,/tsmstgsql-lv2,/tsmstgsql-lv3,/tsmstgsql-lv4
maxcapacity=50G MOUNTLimit=128 
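For completeness, a sequential-access pool then points at that device
class; a hedged sketch (pool names, scratch limit, and the next-pool
wiring are placeholders, not from the original post):

```
/* FILE pool on the devclass above, then chain a disk pool in front: */
define stgpool sql-filepool sql-file maxscratch=200 reclaim=60
update stgpool diskpool nextstgpool=sql-filepool
```

With MAXSCRATCH set, TSM creates volume files in the devclass
directories on demand up to that count.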

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
David W Daniels/AC/VCU
Sent: Wednesday, October 05, 2011 9:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Defining Multiple filesystems to one FILEDEVCLASS in
TSM v5.5

We have a  (red hat linux)  server running TSM server 5.5  The question
is,  can you define multiple file systems to a FILEDEVCLASS? If so, what
are pros/cons.

** Don't be a phishing victim - VCU and other reputable organizations
will never use email to request that you reply with your password,
social security number or confidential personal information. For more
details visit http://infosecurity.vcu.edu/phishing.html



Re: vtl versus file systems for primary pool

2011-09-26 Thread Hart, Charles A
Take into consideration that VTLs have limited bandwidth regardless of
Fibre connectivity. Say you have a VTL head with 4 x 4 Gb Fibre ports:
that's 1600 MB/s of port bandwidth, but the VTL head itself maxes out at
500-1000 MB/s. Then, in a LAN-free environment, one VTL head provides
the LAN-free mount points, so one DB host zoned to the VTL with 2 x 4 Gb
ports can potentially push 400 MB/s into a head that supports 500-1000
MB/s; you would be dedicating an expensive VTL to the backups of one or
two DB hosts. It may be cheaper to do disk-based snaps for large DBs.

Also take into consideration the cost of Fibre ports on your switch,
separate FC HBAs for your hosts, etc.
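The port-versus-head arithmetic above, as a sketch (the 500-1000 MB/s
head ceiling and the ~400 MB/s per-host figure are the numbers from the
post, not vendor specs):

```python
# Nominal FC port bandwidth vs. what the VTL head can actually sustain.
PORT_MBS = 400                  # ~usable MB/s per 4 Gb FC port

head_ports_mbs = 4 * PORT_MBS   # 1600 MB/s of front-end ports...
head_sustained = (500, 1000)    # ...but the head sustains only this range

# One LAN-free DB host zoned in with 2 x 4 Gb HBAs, realistically
# pushing ~400 MB/s (about half its nominal port bandwidth):
host_push_mbs = 400

fraction_consumed = host_push_mbs / head_sustained[0]
print(f"one host can consume {fraction_consumed:.0%} of a 500 MB/s head")
```

So a single busy LAN-free client can eat most of a low-end head's
sustained throughput, which is the sizing point being made.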

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Huebner,Andy,FORT WORTH,IT
Sent: Monday, September 26, 2011 4:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

Another consideration is that FC is faster than Ethernet: 8 Gb FC > 10
Gb FCoE, and Ethernet overhead is much greater than FC overhead.

Also, virtual tape libraries will fit nicely with your companies
virtualization strategy, where file device class storage does not.

We currently use VTL, our next iteration of TSM (3 years) will most
likely be File device class storage.

I would say there is no real advantage either way.  Both types of
storage have unique "features".  For us the Ethernet network was not
built to handle the data load to where our storage is and we had
existing FC.


Andy Huebner


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, September 26, 2011 3:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

The one drop-dead difference is that if you want to do LAN-Free, the
target has to be a tape (or tape-emulating) device.
Can't use TSM filepools.

If you replicate between VTL's, it's transparent to TSM, but you have to
have the same vendor hardware at both sites.
Dedup may also be faster with a VTL, if your VTL does it in-line.
Also a VTL is often quicker to set up (which doesn't necessarily mean it
is easier to maintain, if you consider firmware updates, multiple
maintenance contracts, etc.)


TSM 6.2 has to dedup as part of the reclaim process,
post-data-landing-there. That isn't necessarily a bad thing, it's just a
different way.
To get replication of the deduped data, you have to set up a second/copy
pool yourself; but it does not have to be the same hardware on both
ends.

There is nothing about having a VTL/VTL gateway that inherently means
you can do fewer concurrent backups than with a TSM filepool,  depends
on your hardware.  You can have a fast VTL or a slow crappy VTL, just
depends on what you pay for...





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Tim Brown
Sent: Monday, September 26, 2011 4:05 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] vtl versus file systems for primary pool

What advantage does VTL emulation on a disk primary storage pool have
compared to a disk storage pool that is non-VTL?

It appears to me that a non-VTL system would not require the daily
reclamation process and would also allow more client backups to occur
simultaneously.



Thanks,



Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com <>
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




This message contains confidential information and is only for the
intended recipient. If the reader of this message is not the intended
recipient, or an employee or agent responsible for delivering this
message to the intended recipient, please notify the sender immediately
by replying to this note and deleting all copies and attachments.

This e-mail (including any attachments) is confidential and may be
legally privileged. If you are not an intended recipient or an
authorized representative of an intended recipient, you are prohibited
from using, copying or distributing the information in this e-mail or
its attachments. If you have received this e-mail in error, please
notify the sender immediately by return e-mail and delete all copies of
this message and any attachments.

Thank you.



Re: TSM using vSCSI adapters

2011-08-31 Thread Hart, Charles A
I'm not the most AIX-savvy, but we did recently build a POWER7 750 using
NPIV/VIO, still separating the disk and tape data paths.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Bob Molerio
Sent: Wednesday, August 31, 2011 10:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM using vSCSI adapters

Hi guys,

I'm trying to find out if TSM 6.x will work in POWERVM (VIOS) AIX setup.
I'm not sure if this is supported as I believe TSM usually requires FC
adapters or SCSI adapters to connect to a tape libraries and storage.

--
Sincerely,

Bob Molerio.



Re: tsm and data domain

2011-06-17 Thread Hart, Charles A
Has anyone used the DD NFS mount point for a devclass of type FILE?

If so, to what degree?

How hard could you push it: 1, 2 or 3 TB/hr? (10 GigE = approx 2.25
TB/hr.)

We have a fairly robust backup server platform.

I ask as we like the idea of having no tape/robot devices from the OS to
the app layer (TSM).

This thread is a great read!
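The parenthetical conversion can be checked; a sketch (the ~2.25 TB/hr
figure implies roughly 50% effective utilization of the link, which is
an assumption here, not something the post states):

```python
# 10 GigE line rate converted to TB per hour.
line_rate_gbps = 10
bytes_per_s = line_rate_gbps * 1e9 / 8        # 1.25 GB/s raw line rate
tb_per_hour_raw = bytes_per_s * 3600 / 1e12   # 4.5 TB/hr at 100% utilization

efficiency = 0.5                              # assumed effective utilization
print(f"~{tb_per_hour_raw * efficiency:.2f} TB/hr")  # matches the ~2.25 quoted
```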
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Cowen, Richard
Sent: Friday, June 17, 2011 8:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm and data domain

Brian,

The DD reports dedup by volume. If you look up the contents of a volume
in TSM, you can get a feeling for how dedup varies.
The resulting information is "approximate", because some of the data on
a DD volume has "expired" from TSM's point of view and will not show up
in the volumeusage table (I don't think). Also, the first copy of any
"chunk" will have a dedup ratio of 1.
 

filesys show compression /backup/vtc/*/*

/backup/vtc/Default/N00022L3:
 mtime: 1302728324652916847,
 bytes: 47,868,905,726,
 g_comp: 47,521,219,818,
 l_comp: 8,403,079,871,
 meta-data: 154,714,124,
 bytes/storage_used: 5.6

Richard
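The `bytes/storage_used` figure in that output is reproducible from the
other fields; a sketch (assuming the ratio is pre-dedup bytes divided by
locally-compressed bytes plus metadata, which the numbers above bear
out):

```python
# Fields from the `filesys show compression` output above (N00022L3):
bytes_in = 47_868_905_726   # logical bytes written to the volume
l_comp   = 8_403_079_871    # bytes after local compression (post-dedup)
meta     = 154_714_124      # metadata overhead

ratio = bytes_in / (l_comp + meta)
print(f"bytes/storage_used: {ratio:.1f}")  # 5.6, matching the DD report
```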

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
PAC Brion Arnaud
Sent: Friday, June 17, 2011 3:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm and data domain

Tim,

We are currently conducting a POC with a pair of DD860s, and are
relatively satisfied with them. The machines are mostly used as VTLs,
except for a small NFS partition which is used for TSM DB backups.
Deduplication rate is OK : around 10 so far, with a good mix of
Exchange, Oracle, DB2 as well as Win and AIX data.
Ingestion rate is corresponding to our needs, slightly more than  1 GB/s
, and the possibility to define plenty drives.

Negative or "not so impressive" points so far: the deduplication rate is
a "global" factor; it is impossible to know what type of data dedupes
better than another, which lowers the granularity of deduplication
monitoring.
We also experienced pretty long initialization times for the TSM server
attached to a virtual library having 100 drives : at restart  TSM server
needs at least 15 minutes to recognize all the drives, thus generating
numerous client errors, as  they try to get a drive which is defined but
not available. We still have to experiment if defining more smaller
libraries would solve that issue ... 
Lastly, replication: it is still unclear to me what would happen if we
rely on replication to replace copy storage pools and our TSM server
crashes on the primary site while replication is not complete. To my
mind the virtual volume contents would not match the TSM DB content; I'm
not sure so far how to handle such a situation ...

Hope this helped !

Cheers

Arnaud


**
Corporate IT Systems & Datacenter Responsible Panalpina Management Ltd.,
Basle, Switzerland, CIT Department Viadukstrasse 42, P.O. Box 4002
Basel/CH
Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: arnaud.br...@panalpina.com

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Tim Brown
Sent: Thursday, 16 June, 2011 21:50
To: ADSM-L@VM.MARIST.EDU
Subject: tsm and data domain

Any one use emc's data domain devices for storage pools and replication

Would like to here positive and negative issues.



Thanks,



Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com <>
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




This message contains confidential information and is only for the
intended recipient. If the reader of this message is not the intended
recipient, or an employee or agent responsible for delivering this
message to the intended recipient, please notify the sender immediately
by replying to this note and deleting all copies and attachments.



The information contained in this transmission may contain privileged
and confidential information. 
It is intended only for the use of the person(s) named above. If you are
not the intended recipient, you are hereby notified that any review,
dissemination, distribution or duplication of this communication is
strictly prohibited. If you are not the intended recipient, please
contact the sender by reply email and destroy all copies of the original
message. 
To reply to our email administrator directly, please send an email to
postmas...@sbsplanet.com.


Re: LTO5 performance

2011-05-03 Thread Hart, Charles A
Few things to consider 

1) Are the HBA's for the LTO5 Drives dedicated or do they have other
devices. 

2) How many LTO drives per HBA? 

3) Speed of Fibre 2/4/8Gb



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Gary Bowers
Sent: Monday, May 02, 2011 9:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] LTO5 performance

Just a quick poll of what people are seeing for LTO-5 performance.  Is
it anywhere close to the rated 140 MB/s native, or much lower?  I ask,
because I have a customer that is only getting about 40-50 MB/s from HP
LTO5 drives in a Quantum library.  This number seems really low to me,
but before I jump to making recommendations, I would like to have some
real world numbers to go off of.  You'd think Google would find some,
but my searches turned up nill.

As an aside, has anyone had performance problems with Quantum libraries
and their built-in Fibre Switches?

OS is AIX 6, having the same performance from both TSM server, AIX and
Solaris 10 LanFree.

Thanks for sharing :)

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: Stopping access to a filepool

2011-04-28 Thread Hart, Charles A
A scratch tape will only be requested from the devclass that's in the
path: the storage pool next in line after the disk pool.

There's always more than one way to solve your challenges with TSM.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew 
Carlson
Sent: Thursday, April 28, 2011 9:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Stopping access to a filepool

This would work for existing volumes, but if it's uses a scratch pool, it could 
still try to create volumes.
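Beyond marking existing volumes read-only, the pool itself can be closed
off, which should also stop new scratch allocation; a hedged sketch (the
pool name is a placeholder):

```
/* Close the FILE pool before DataDomain maintenance: */
update stgpool filepoolname access=readonly

/* ... perform the maintenance ... */

/* Reopen it afterwards: */
update stgpool filepoolname access=readwrite
```

With the pool read-only, new sessions cannot store data there, so no new
scratch volumes get created in it either.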

On Thu, Apr 28, 2011 at 09:05, Prather, Wanda  wrote:
> Well if nothing else, this will surely do it:
>
> update vol * wherestgpool=filepoolname access=readonly
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Richard Rhodes
> Sent: Thursday, April 28, 2011 9:17 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Stopping access to a filepool
>
> How do you offline/stop_access to a filepool?
>
> We have been implementing a DataDomain which is accessed as a filepool.
> Prior to the DD, most backups went to the diskpool with some very large 
> backups going straight to tape.  When we would want to take the library down, 
> we would kill any processes/sessions writing to tape and then take the tape 
> drives offline.  This prevented any new sessions from trying to access tape 
> until the library work was completed.
>
> Now were trying to figure out that the equivalent process is for the DD and 
> it's filepool.
>
> Most backups still go into the diskpool, and the very large backups go 
> straight into the DD filepool.  If we want to take the DD down for 
> some reason, we would kill any processes/sessions writing directly to the 
> filepool, but we can't think of any way to prevent new sessions from trying 
> to access the filepool.   I guess I'm looking for some way to offline the 
> filepool.
>
> Any thought/comments are most welcome!
>
> Rick
>
>
>
>
>
> -
> The information contained in this message is intended only for the personal 
> and confidential use of the recipient(s) named above. If the reader of this 
> message is not the intended recipient or an agent responsible for delivering 
> it to the intended recipient, you are hereby notified that you have received 
> this document in error and that any review, dissemination, distribution, or 
> copying of this message is strictly prohibited. If you have received this 
> communication in error, please notify us immediately, and delete the original 
> message.
>



--
Andy Carlson
---
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month, The 
feeling of seeing the red box with the item you want in it:Priceless.



Re: Stopping access to a filepool

2011-04-28 Thread Hart, Charles A
Maybe take a different approach...

Current:
Disk pool, next stgpool = Data Domain

DD under maintenance:
Disk pool >> next stgpool (other media: tape, disk, etc.), e.g. tapepool

We do this often. If we are upgrading a vtape appliance, we always have
our backup and archive copy groups set to a destination of a disk pool,
even if it has no disk associated, and then use the NEXTSTGPOOL option
to point to whatever other media we may use. For example: we have
disk-pool1 with a next stgpool of vtape1; when we need to upgrade code
on vtape1 we just update disk-pool1 with a NEXTSTGPOOL of vtape2 (the
second vtape head) or a tape pool. This way your clients are not writing
to the pool under maintenance, though you may get a media wait if a
client tries to do a restore.

Charles


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Rick Adamson
Sent: Thursday, April 28, 2011 9:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Stopping access to a filepool

Taking the storage offline would work too but during a scheduled backup
window the schedule may fail with an error of no storage available.


~Rick Adamson


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Richard Rhodes
Sent: Thursday, April 28, 2011 10:04 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Stopping access to a filepool

Would like to keep clients accessing the diskpool to still keep running.
That's the beauty of a diskpool . . .you can take your library offline (
or DataDomain) for some time and not effect backups that hit the
diskpool.

Rick




From:   Rick Adamson 
To: ADSM-L@VM.MARIST.EDU
Date:   04/28/2011 09:48 AM
Subject:Re: Stopping access to a filepool
Sent by:"ADSM: Dist Stor Manager" 



What about "disable sessions client"?


~Rick Adamson
Jacksonville, FL


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Richard Rhodes
Sent: Thursday, April 28, 2011 9:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Stopping access to a filepool

How do you offline/stop_access to a filepool?

We have been implementing a DataDomain which is accessed as a filepool.
Prior to the DD, most backups went to the diskpool with some very large
backups going straight to tape.  When we would want to take the library
down, we would kill any processes/sessions writing to tape and then take
the tape drives offline.  This prevented any new sessions from trying to
access tape until the library work was completed.

Now were trying to figure out that the equivalent process is for the DD
and it's filepool.

Most backups still go into the diskpool, and the very large backups go
straight into the DD filepool.  If we want to take the DD down for some
reason, we would kill any processes/sessions writing directly to the
filepool, but we can't think of any way to prevent new sessions
from trying to access the filepool.   I guess I'm looking for some way
to offline the filepool.

Any thought/comments are most welcome!

Rick





-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If the
reader of this message is not the intended recipient or an agent
responsible for delivering it to the intended recipient, you are hereby
notified that you have received this document in error and that any
review, dissemination, distribution, or copying of this message is
strictly prohibited. If you have received this communication in error,
please notify us immediately, and delete the original message.

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: How do you allocate costs for deduplicated data?

2011-04-18 Thread Hart, Charles A
It appears, by doing a select on the occupancy table, that the full
amount of data is reported in the LOGICAL_MB column, while REPORTING_MB
shows the deduplicated amount.

tsm: TSMLAB1>select NODE_NAME, STGPOOL_NAME, NUM_FILES, PHYSICAL_MB,
LOGICAL_MB, REPORTING_MB from occupancy where STGPOOL_NAME='DISK-DEDUPE'

NODE_NAME   STGPOOL_NAME   NUM_FILES   PHYSICAL_MB   LOGICAL_MB   REPORTING_MB
---------   ------------   ---------   -----------   ----------   ------------
LABW9236    DISK-DEDUPE        -2620          0.00      1852.71           5.10
LABW9236    DISK-DEDUPE            0          0.00         0.01           0.00
LABW9236    DISK-DEDUPE            0          0.00         0.04           0.00
LABW9236    DISK-DEDUPE            0          0.00         0.10           0.00
LABW9236    DISK-DEDUPE            0          0.00         0.13           0.00
LABW9236    DISK-DEDUPE            0          0.00         0.23           0.00
LABW9236    DISK-DEDUPE            0          0.00         0.87           0.00
LABW9236    DISK-DEDUPE            0          0.00         0.94           0.00
LABW9236    DISK-DEDUPE            0          0.00         1.19           0.00
LABW9236    DISK-DEDUPE            0          0.00        14.51           0.00
LABW9236    DISK-DEDUPE            0          0.00        17.57           0.00
LABW9236    DISK-DEDUPE            0          0.00        31.43           0.00
LABW9236    DISK-DEDUPE            0          0.00        24.59           0.01
LABW9236    DISK-DEDUPE            0          0.00        45.27           0.01
LABW9236    DISK-DEDUPE            0          0.00        82.46           0.01
LABW9236    DISK-DEDUPE            0          0.00       229.62           0.03
LABW9236    DISK-DEDUPE            0          0.00       300.47           0.04
LABW9236    DISK-DEDUPE            0          0.00       418.44           0.05
LABW9236    DISK-DEDUPE            0          0.00       470.74           0.06
LABW9236    DISK-DEDUPE            0          0.00       925.03           0.12
LABW9236    DISK-DEDUPE            0          0.00       939.91           0.12
LABW9236    DISK-DEDUPE            0          0.00      1257.26           0.16
LABW9236    DISK-DEDUPE            0          0.00      1572.13           0.19
LABW9236    DISK-DEDUPE            0          0.00      2152.09           0.28
LABW9236    DISK-DEDUPE            0          0.00      3131.19           0.39
LABW9236    DISK-DEDUPE            0          0.00      3191.17           0.39
LABW9236    DISK-DEDUPE            0          0.00      3322.11           0.41
LABW9236    DISK-DEDUPE            0          0.00      5474.88           0.67
LABW9236    DISK-DEDUPE            0          0.00      6981.77           0.87
LABW9236    DISK-DEDUPE            0          0.00      7247.60           0.89
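Given occupancy figures like these, one fair-division approach (a sketch only; the node names and the proportional scheme are illustrative assumptions, not TSM behavior) is to split the pool's physical, post-dedup usage across nodes in proportion to each node's logical occupancy, so a department that backs up more data pays more even when its bytes deduplicate well:

```python
def allocate_costs(logical_mb_by_node, total_physical_mb):
    """Split the pool's physical (post-dedup) usage across nodes in
    proportion to each node's LOGICAL_MB occupancy."""
    total_logical = sum(logical_mb_by_node.values())
    return {node: total_physical_mb * mb / total_logical
            for node, mb in logical_mb_by_node.items()}

# Two departments with 80% of their data in common report the same
# logical occupancy, so each pays half of the deduped storage cost.
shares = allocate_costs({"dept_a": 100.0, "dept_b": 100.0}, 120.0)
print(shares)  # {'dept_a': 60.0, 'dept_b': 60.0}
```

This avoids the "first node pays for the shared chunks" problem in the question, because billing follows logical occupancy rather than who happened to store a chunk first.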

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Jim Neal
Sent: Monday, April 18, 2011 1:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] How do you allocate costs for deduplicated data?
Importance: High

Hi All,





We are in the process of testing TSM's Deduplication
feature and I have a question about how the cost of the deduplicated
data gets allocated.  For example:



  Two systems that have 80% of their data in common and that belong to
two different departments get backed up.  The first system gets
deduplicated and is billed for 100% of the deduplicated data.  The
second system only has 20% of unique data and so is only billed for that
20% after deduplication?  Is this correct? If so, is there a way to get
around this so that the cost is fairly divided?



Any information or advice on this would be greatly appreciated.  Thanks
much!



Jim Neal

Sr. TSM Administrator

U.C. Berkeley

Storage and Backup Group


Re: draining a diskpool when using a DD and no copypool

2011-03-22 Thread Hart, Charles A
Can someone expound on that limit? Is it 150 concurrent streams, e.g. a
stream per processor core? Do the queued requests time-slice? Has it
become an Achilles heel for anyone?

Thank you for your insight! 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Tuesday, March 22, 2011 11:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] draining a diskpool when using a DD and no
copypool

Interesting - didn't know there was that session limit on the DD!~
Thanks for the info!
W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Richard Rhodes
Sent: Tuesday, March 22, 2011 11:25 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] draining a diskpool when using a DD and no
copypool

"ADSM: Dist Stor Manager"  wrote on 03/22/2011
10:25:00 AM:

> From: "Prather, Wanda" 
> To: ADSM-L@VM.MARIST.EDU
> Date: 03/22/2011 10:25 AM
> Subject: Re: draining a diskpool when using a DD and no copypool Sent
> by: "ADSM: Dist Stor Manager" 
>
> Curious as to why you are using the disk pool with migration to DD 
> instead of having your backups write directly to the DD.
>
> Wanda

1) We're still trying to make this puppy work (right now having problems
getting 10g direct connects to the TSM servers working).

2) For initial implementation we want to make as few changes as
possible.

3) But the real reason is that we really like having backups buffered
before the next pool.  This is especially true for Oracle archive
logging.
 We're thinking this is especially true for the DD which only has a
single head - nonredundant.

Now, we think we will send more directly to DD, just not at the start.
And then it will probably be via a limit on file size of the diskpool.
That way it's less playing with nodes and management classes. There is a
limit to the number of sessions the DD can handle concurrently (150),
which includes backup, restore, and replication sessions, and this limit
is shared between two TSM instances.  We want to wait and see this all
looks like before we jump too deep into other changes.  It seems best at
the start to just change out tapepool for filepool (with the required
migration), see how it works, then see what else to change to make
things better.

 . . . at least that's the plan . . .

Rick





Re: draining a diskpool when using a DD and no copypool

2011-03-22 Thread Hart, Charles A
"There is a limit to the number of sessions the DD can handle
concurrently (150)"  

Is this limit based on a particular model of DD? It seems a bit
restrictive for anything larger than an SMB.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Richard Rhodes
Sent: Tuesday, March 22, 2011 10:25 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] draining a diskpool when using a DD and no
copypool

"ADSM: Dist Stor Manager"  wrote on 03/22/2011
10:25:00 AM:

> From: "Prather, Wanda" 
> To: ADSM-L@VM.MARIST.EDU
> Date: 03/22/2011 10:25 AM
> Subject: Re: draining a diskpool when using a DD and no copypool Sent 
> by: "ADSM: Dist Stor Manager" 
>
> Curious as to why you are using the disk pool with migration to DD 
> instead of having your backups write directly to the DD.
>
> Wanda

1) We're still trying to make this puppy work (right now having problems
getting 10g direct connects to the TSM servers working).

2) For initial implementation we want to make as few changes as
possible.

3) But the real reason is that we really like having backups buffered
before the next pool.  This is especially true for Oracle archive
logging.
 We're thinking this is especially true for the DD which only has a
single head - nonredundant.

Now, we think we will send more directly to DD, just not at the start.
And then it will probably be via a limit on file size of the diskpool.
That way it's less playing with nodes and management classes. There is a
limit to the number of sessions the DD can handle concurrently (150),
which includes backup, restore, and replication sessions, and this limit
is shared between two TSM instances.  We want to wait and see this all
looks like before we jump too deep into other changes.  It seems best at
the start to just change out tapepool for filepool (with the required
migration), see how it works, then see what else to change to make
things better.

 . . . at least that's the plan . . .

Rick





Re: draining a diskpool when using a DD and no copypool

2011-03-22 Thread Hart, Charles A
We've run into the same conundrum: do you let the device replicate the
deduped data and create the situation described below, or do you let TSM
handle the offsite copy and end up transferring all of the prior night's
backup data (assuming you have adequate bandwidth to your remote site)?
We chose to let TSM do the copy, since we have the bandwidth available
and we like the granular control over the VTape and Seq File volumes.

Regards, 

Charles Hart

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Rick Adamson
Sent: Tuesday, March 22, 2011 8:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] draining a diskpool when using a DD and no
copypool

Rick,
  About a year ago we implemented the Data Domain system just as you
described and after having some time to study its operation and design I
would urge you to think long and hard about it and debate the design
with DD and TSM Support. My concern is this, as recommended by Data
Domain we removed all copy pools and replicate a duplicate copy of our
entire primary sequential (file) storage to our DR site. The problem is:
what do you do if you incur corruption or some other issue in the
primary pool that would normally be resolved by marking the volume as
destroyed and recovering the data from the copy pool (which is now
non-existent). While I have not begun the discussion yet I think a
better design approach may be to either use the DDR as a copy pool
resource or incorporate the resources at the DR site to add copy pools.
Think about it in terms of "what if" something happened to the data at
the primary facility and it got replicated to the DR site, what would be
your option to resolve? I don't see any.

Feedback anyone?

~Rick Adamson
Jacksonville, Fl.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Richard Rhodes
Sent: Tuesday, March 22, 2011 9:13 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] draining a diskpool when using a DD and no copypool

Hello everyone,

An interesting problem . . . we are implementing a Data Domain system.
It
will be configured as a file device via NFS mount.  DD replication of
the primary file device pool to the DR site will be use so there will be
no
copypool.   We will still use a diskpool with migration into the file
device pool, and this brings up a problem.  The majority of backups come
in over night into diskpool and get migrated.  But some backups (Oracle
archive logs, long running backups from remote sites and some others)
come in at any time around the clock.  Since DR relies on DD replication
of the primary file device pool, we MUST make sure that at some point
every file
in the diskpool gets migrated. With ongoing backups coming into the
diskpool, migration to 0% may never complete. The one thought we had was
to run MOVE DATA on each diskpool volume once a day (after migration
completes).

We've asked DD this question, but so far they haven't provided an
answer.


Thanks!

Rick
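That once-a-day MOVE DATA sweep could be scripted; a minimal sketch, assuming the volume list comes from a dsmadmc select (the admin credentials, pool name, and volume names below are placeholders, not from the original post):

```shell
# In practice the list would come from something like:
#   dsmadmc -id=admin -password=secret -dataonly=yes \
#     "select volume_name from volumes where stgpool_name='DISKPOOL'"
# A static sample list keeps the transformation itself visible here.
vols="/tsm/disk/vol01.dsm
/tsm/disk/vol02.dsm"

# Emit one MOVE DATA command per diskpool volume; the output could be
# piped back into dsmadmc or saved as a server macro.
for v in $vols; do
  echo "move data $v wait=yes"
done
```

WAIT=YES serializes the moves so only one volume is being drained at a time.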





Re: Blocking Archive from TSM Client

2011-03-16 Thread Hart, Charles A
It may not be the prettiest way, but have the archive copy groups point
to a storage pool that's empty or doesn't exist, or don't create an
archive copy group at all. Clients will get an out-of-storage-space
message or a no-copy-group error, but they won't be archiving, and it's
OS independent.
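A hedged sketch of the no-copy-group variant (the domain, policy set, and management class names are placeholders; with the archive copy group removed, archive attempts fail with a policy error rather than an out-of-space error):

```
delete copygroup MYDOMAIN MYPOLICY STANDARD type=archive
validate policyset MYDOMAIN MYPOLICY
activate policyset MYDOMAIN MYPOLICY
```

Backup copy groups in the same management class are unaffected, so incremental backups keep working.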

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Jerry Michalak
Sent: Wednesday, March 16, 2011 8:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Blocking Archive from TSM Client

Hi everyone,

Is there a way to disable or prevent someone from running a TSM Archive
from the client side?

No matter what the base O/S is ?

I appreciate any ideas on this.




 Jerry Michalak
jerry_...@yahoo.com



Re: Does Move Data recover space on virtual volumes?

2011-03-10 Thread Hart, Charles A
"The number of cartridges in use by the pool goes up and up"

Do you have collocation on? Check the stgpool with f=d. If you don't
have COLLOCATE set to NO, it will default to GROUP; then, if no
collocation group exists, it falls back to NODE. So every client
filespace gets a tape.
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Keith Arbogast
Sent: Thursday, March 10, 2011 2:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Does Move Data recover space on virtual volumes?

My specific problem is with a tape pool that is a target for virtual
volumes created by a remote TSM server. The number of cartridges in use
by the pool goes up and up, at a faster rate than the data growth on the
remote server. I suspect there are many virtual volumes in the target
pool with not many active files left on them. 

My understanding is that even one active file in a virtual volume keeps
all the space used by it on a tape from being reclaimed. So far, my only
resort seems to be to do MOVE DATA on virtual volumes with log
pct_utilization, using 'reconstruct=yes'. That won't be efficient or
fast. 

If I am missing something, please let me know.

My thanks to all who have offered suggestions so far, Keith



Re: Does Move Data recover space on virtual volumes?

2011-03-10 Thread Hart, Charles A
Also, if you add the parameter RECONSTRUCT=YES, it should move the data
and not the white space.
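A sketch of how candidate virtual volumes might be picked out before running the moves (the pool name and utilization threshold are placeholders, not from the original post):

```
select volume_name,pct_utilized from volumes where stgpool_name='VIRTPOOL' and pct_utilized<25
move data VOLNAME reconstruct=yes wait=yes
```

Running MOVE DATA only against the worst offenders keeps the number of slow reconstruct passes manageable.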

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Keith Arbogast
Sent: Thursday, March 10, 2011 10:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Does Move Data recover space on virtual volumes?

Sorry.  Move data does send volumes to 'scratch'. In the case of virtual
volumes they are not reused.
Keith



Re: 1024 DB2 database connection limit on AIX

2011-03-09 Thread Hart, Charles A
How many mount points / streams per client are you allowing for 750
Clients?

Regards, 

Charles  

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
deehr...@louisville.edu
Sent: Wednesday, March 09, 2011 8:06 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] 1024 DB2 database connection limit on AIX

Our maxsessions was 650.   It is now 750.

David

>>> John Monahan  3/8/2011 11:39 PM >>>
Thanks for the info.  What is/was your maxsessions server setting?

___
John Monahan
Delivery Consultant
Logicalis, Inc.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
deehr...@louisville.edu
Sent: Tuesday, March 08, 2011 4:58 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: 1024 DB2 database connection limit on AIX

I hit this problem the first night of backups after converting to
V6.2.2.0 from v5.5.  I don't know but I'd guess the determining factor
would be number of nodes backing up.  We have between 400 -500 nodes
backing up.  They back up between 3-5 TB a night.  My v6 db is about 140
GB (used).

David Ehresman

>>> John Monahan  3/8/2011 11:11 AM >>>
http://www-01.ibm.com/support/docview.wss?tcss=Newsletter&uid=swg2142855
7


I'm wondering if anyone on the list has run into this issue or if
someone from IBM can clarify or give examples to quantify slightly
better than "servers with large workloads".  How large a DB?  How many
clients?  How much backed up nightly?  At least give something in the
ballpark so TSM admins can make an educated decision if they need to
consider this problem upfront.  Putting in a large server and then
crossing your fingers that you don't reach the limit isn't really a good
customer support strategy.

Thanks



___
John Monahan
Delivery Consultant
Logicalis, Inc.



Deletion Backup Client Object with a FileList

2011-03-04 Thread Hart, Charles A
We are trying to script the deletion of old RMAN backup objects (we'd
use TDPOSync if it weren't version 2.x and didn't core dump), so we were
going to delete the RMAN objects from the dsmc command line; RMAN would
then do a crosscheck to drop the orphans from the RMAN catalog.

When you display the RMAN objects with q backup they look like
/adsmorc//ar.dDIATRNPA.t703456537.s4095.p1, which seems OK. But when you
add those files to a list for dsmc delete -filelist=tst, dsmc returns a
"can't find object" message: dsmc is converting the "//" to "/".


 ** Unsuccessful **
ANS1345E No objects on server match
'/adsmorc/ar.dDIATRNPA.t701642117.s3939.p1'

The difference is that the object listed in the error is missing a "/"
and a space.

In My File List
/adsmorc//ar.dDIATRNPA.t703456537.s4095.p1

What's displayed as not available:
/adsmorc/ar.dDIATRNPA.t701642117.s3939.p1


Has anyone dealt with this? I tried encapsulating with quotes, but no
luck.

Thank you

Charles Hart
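One workaround worth testing (an assumption based on the client manual's filespace syntax, not something confirmed in this thread): wrap the filespace portion of each filelist entry in braces so dsmc treats the remainder literally instead of collapsing the doubled slash:

```
Instead of:   /adsmorc//ar.dDIATRNPA.t703456537.s4095.p1
Write:        {/adsmorc}/ar.dDIATRNPA.t703456537.s4095.p1
```

The braces tell the client where the filespace name ends, so the leading "/" of the low-level object name is preserved.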



Re: Reduce DB in 6.2.2.2

2011-02-03 Thread Hart, Charles A
Does DB2 do online reorgs? Or was the intent to clean up a 5.x DB for a
migration to 6.2?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Grigori Solonovitch
Sent: Thursday, February 03, 2011 5:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Reduce DB in 6.2.2.2

To release DB space I am using for TSM 5.5:
1) estimate dbreorgstats
2) dsmserv UNLOADDB devclass= volumenames=
3) dsmserv LOADFORMAT ...
4) dsmserv LOADDB devclass= volumenames=
5) dsmserv AUDITDB fix=yes, if required

It takes time, but works fine.



Grigori G. Solonovitch


From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Alexander Heindl
Sent: Thursday, February 03, 2011 2:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Reduce DB in 6.2.2.2

Hi,

I have a TSM-Server instance with a 188 GB Database:
Space Used by Database(MB): 188,032

Although I have deleted several (big) filespaces which are not needed
anymore, the database is not shrinking. I thought that with 6.1 (and
6.2) this was done automatically?

Best Regards,
Alex

__
Ing. Alexander Heindl

Generali IT-Solutions GmbH
WIs

Kratochwjlestraße 4, 1220 Wien
Telefon: +43 (0)1 53401-13160
Fax: +43 1 532 09 49 3160
E-Mail: alexander.hei...@generali.at

http://www.generali.at

Generali IT-Solutions GmbH, Sitz in Wien registriert beim Handelsgericht Wien 
unter FN 215738 m
DVR-Nr.: 266.
Die Gesellschaft gehört zur Unternehmensgruppe der Assicurazioni Generali 
S.p.A., Triest, eingetragen im Versicherungsgruppenregister der ISVAP unter der 
Nummer 026.
__


CONFIDENTIALITY AND WAIVER: The information contained in this electronic mail 
message and any attachments hereto may be legally privileged and confidential. 
The information is intended only for the recipient(s) named in this message. If 
you are not the intended recipient you are notified that any use, disclosure, 
copying or distribution is prohibited. If you have received this in error 
please contact the sender and delete this message and any attachments from your 
computer system. We do not guarantee that this message or any attachment to it 
is secure or free from errors, computer viruses or other conditions that may 
damage or interfere with data, hardware or software.

Please consider the environment before printing this Email.



Re: Exchange 2010 and TDP / VSS

2011-01-26 Thread Hart, Charles A
Thank you, Del. Yes, I meant quiesced; I just couldn't type. Really
appreciate the insight.

Regards, 

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Del Hoobler
Sent: Tuesday, January 25, 2011 6:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Exchange 2010 and TDP / VSS

>> I browsed the fine manual but was hoping to gain a little insight 
>> from other experiences.  I understand that it's no longer a streaming
backup,
>> it's a VSS software snap.  Does anyone know how long the Exchange 
>> DB's are in a guessed state?

Do you mean quiesced state?
Writes to the disk are suspended for less than 10 seconds (Microsoft
requirement), but Exchange continues to be able to service requests.


>> Does it depend on the DB size or does VSS get a snap and then tsm 
>> backs the snap?

Once a snapshot is created, there is logically a "static" copy of the
volumes that the database and logs are on. At that point, TSM will
perform the integrity check (Microsoft requirement) and then back the
files up to the TSM Server. The integrity check and backup is performed
on the snapshot volume.
The VSS Provider that you use will determine what the physical
representation of that logical volume snapshot is.
If you do not install a VSS Hardware Provider from the disk vendor that
you are using, the Microsoft "in box" VSS System Provider is used. The
the Microsoft "in box" VSS System Provider is a software-based,
"copy-on-write" implementation.


>> If the DB is guessed for the duration of the backup then maybe it 
>> makes sense to backup just the passive
dags.

Again, I assume you mean "quiesced" state...
Microsoft and IBM recommend that you perform the backup from the passive
DAG copies to help "offload" the hit to the production servers.


>>
>>
>> Any thoughts or insight would be great!

Take a look at this. It is a blog that I wrote that gives you some links
that help explain what VSS is and how TSM works with VSS.

https://www-950.ibm.com/blogs/tivolistorage/entry/ibm_tivoli_storage_fla
shcopy_manager_and_windows_snapshots8?lang=en_us


>>
>> Regards
>>
>> Charles


Thanks,

Del





Exchange 2010 and TDP / VSS

2011-01-25 Thread Hart, Charles A
I browsed the fine manual but was hoping to gain a little insight from
other experiences.  I understand that it's no longer a streaming backup,
it's a VSS software snap.  Does anyone know how long the Exchange DB's
are in a guessed state?  Does it depend on the DB size or does VSS get a
snap and then tsm backs the snap?  If the DB is guessed for the duration
of the backup then maybe it makes sense to backup just the passive dags.


Any thoughts or insight would be great!

Regards 

Charles 



Re: TSM OR Replacement

2011-01-07 Thread Hart, Charles A
It's unfortunate that Operational Reporting is gone. Prior to Op Reporting
there used to be a package called Tivoli Data Warehouse that would report on
Tivoli, again a monstrous footprint (DB2, WebSphere, Cognos). We eventually
ended up buying Bocada's backup reporting product, since the alternative was
hiring an FTE to develop reports from scratch. We have been trying to use
Tivoli Common Reporting for FastBack, but it continually fails and IBM can't
seem to resolve the issue; we've had a PMR open for 6+ months.


 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Harris
Sent: Friday, January 07, 2011 4:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM OR Replacement

yes Tuncel you are correct.  I  looked into BIRT because Tivoli Common 
Reporting was using it.

However IBM's product has a considerable resource requirements and footprint 
and because it is the common reporting component for all of the Tivoli line it 
has to be all things to all men.

My goals are much less ambitious, basically a light weight reporting tool for 
current TSM data, and will hopefully run on the same class of hardware as TSM 
OR does now.

Regards

Steve.

On 7/01/2011 9:53 PM, tuncel.mu...@akbank.com wrote:
> IBM is quicker than you. From the Wiki page - "In 2007 IBM's Tivoli Division 
> adopted BIRT as the infrastructure for its Tivoli Common Reporting (TCR) 
> product. TCR produces historical reports on Tivoli-managed IT resources and 
> processes."
>
> Regards,
>
> Tuncel
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
> Of Steven Harris
> Sent: Friday, January 07, 2011 6:13 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: TSM OR Replacement
>
> Hi All
>
> TSM Operational Reporting has been useful for a long time, but is now 
> something of an unwanted orphan.  It has significant limitations, 
> particularly in that it can only support a limited number of reports, 
> the ability to customize is limited and it presents a server-by-server 
> view of things.
>
> For quite a while I have been toying with replacements, most of which 
> have been dead ends.  Eventually I have settled upon BIRT as a 
> reporting engine, and believe this has some merit.  I knocked together 
> a basic but still impressive Client Schedule report with just a couple 
> of hours effort having never used the tool before.
>
> I can't justify developing this in company time, but I am very keen to 
> continue so I am starting an open-source project to take this further.
>
> It may come to nothing, but, if anyone is interested in joining-in in 
> any capacity with this, or even just to express an interest in any 
> eventual outcome, please email me at tsmrpt...@gmail.com
>
> Regards
>
> Steve
>
> Steven Harris
> TSM Admin
> Paraparaumu New Zealand.
>
>
>
>
>
>
>
> The information in this e-mail and any attachments is private and confidential 
> and is intended only for the recipient named in the message. If it has reached 
> you in error, please inform the sender, delete the message, do not disclose its 
> contents to anyone and do not copy it to any medium. Unless otherwise agreed by 
> contract, this message is not intended to constitute a banking transaction such 
> as an offer, purchase or sale of any financial instrument or confirmation of 
> any transfer. No guarantee is given as to the accuracy or completeness of the 
> information provided, which may be changed without prior notice. As the content 
> of this message may not reflect the Bank's official views, Akbank T.A.Ş. 
> accepts no legal responsibility.

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: Why won't objects show up as expected for Restore?

2011-01-05 Thread Hart, Charles A
Also make sure the user performing the restore is the same user that backed
the files up, or a user with admin authority on the TSM server. For example,
if root backs up file abc, only root (or a TSM admin) can restore it.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Sims
Sent: Tuesday, January 04, 2011 4:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Why won't objects show up as expected for Restore?

This may be a case where the TSM client is unclear as to what portion of
the path spec constitutes the filespace name. You may have to put braces
around the filespace name to help it out, as illustrated in the client
manual.  Doing 'dsmc q fi' may be helpful in identifying the filespaces
precisely.

   Richard Sims
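
For illustration, a hedged sketch of the braces syntax from the client manual
(the filespace name /home and the paths are hypothetical):

```
dsmc query filespace
dsmc restore "{/home}/user1/data/file.txt" /tmp/restored/file.txt
```

The braces tell the client exactly where the filespace name ends and where the
path within that filespace begins.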



Re: Query for Disk Volume Mounts

2010-12-21 Thread Hart, Charles A
You might have to do a select from the actlog. The summary table records "Tape
Mount", but I didn't see anything that represented a FILE device-class mount
other than the activity log.

12/21/10   03:16:26  ANR8340I FILE volume
/tsmstgsqllv16/00032D63.BFS mounted. 
  (SESSION: 841673, PROCESS: 3435)
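
A hedged sketch of such a select, run from dsmadmc (the message number is taken
from the ANR8340I example above; the timestamp arithmetic is the usual TSM
select idiom but may vary by server level):

```
select count(*) as file_mounts from actlog
  where message like 'ANR8340I%'
    and date_time > current_timestamp - 24 hours
```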


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Baughman, Ray
Sent: Tuesday, December 21, 2010 8:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Query for Disk Volume Mounts

I have a need for a query to find the total number of mounts for a
device class with a device type of file.  I can't figure it out, any
help would be appreciated.

 

 Ray Baughman
TSM & Engineering Systems Administrator
National Machinery LLC
Phone 419-443-2257
Fax 419-443-2376
Email rbaugh...@nationalmachinery.com

 



Re: Slow client backup

2010-12-14 Thread Hart, Charles A
You can also try a journal-based backup:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com
.ibm.itsm.client.doc/c_bac_jbbwin.html

Maybe use / try the virtual mount point concept to break it up into smaller
chunks, assuming it's one big volume:
http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com
.ibm.itsm.client.doc/c_cmd_restore_virtual.html
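
If it is one huge file system, a hedged dsm.sys sketch (the file system name
/bigfs and its subdirectories are hypothetical) that makes the client treat
subtrees as separate file spaces, and therefore separate incremental streams:

```
VIRTUALMountpoint /bigfs/part1
VIRTUALMountpoint /bigfs/part2
VIRTUALMountpoint /bigfs/part3
```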



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Abbott, Joseph
Sent: Tuesday, December 14, 2010 1:56 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Slow client backup

The only thing that has worked for us in these types of cases is to run
a selective backup rather than an incremental.
This way TSM just backs up every file rather than going through the
examination process.
This may or may not help you as I don't know the specifics of your
systems.

JoeA

Joseph A Abbott MCSE 2003/2000, MCSA2003 Tivoli Storage Manager
Architect jabb...@partners.org
Cell-617-633-8471
Desk-617-724-4929
Page-# (617) 362-6341
6173391...@usamobility.net

"Be who you are and say what you feel because those who mind don't
matter and those who matter don't mind."


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Moyer, Joni M
Sent: Tuesday, December 14, 2010 2:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Slow client backup

Hi Everyone,

I am looking for suggestions on how to improve the backup of an AIX TSM
client:


 Node Name: CHRS141

  Platform: AIX

   Client OS Level: 5.3

Client Version: Version 5, release 5, level 2.0

Policy Domain Name: AIX

To a TSM AIX server at the 5.5.5.0 release.

It appears that it is examining over 15 million files which to me is
most likely the root cause of the backup taking so long.  Is there
anything that can be done to make this backup run faster/more
efficiently?

Please let me know.  Any suggestions are greatly appreciated!  Thanks!

12/14/10 14:33:37 ANE4952I (Session: 12522, Node: CHRS141)  Total
number of
   objects inspected: 15,779,764(SESSION: 12522)
12/14/10 14:33:37 ANE4954I (Session: 12522, Node: CHRS141)  Total
number of
   objects backed up:  365,295(SESSION: 12522)
12/14/10 14:33:37 ANE4958I (Session: 12522, Node: CHRS141)  Total
number of
   objects updated:  0(SESSION: 12522)
12/14/10 14:33:37 ANE4960I (Session: 12522, Node: CHRS141)  Total
number of
   objects rebound:  0(SESSION: 12522)
12/14/10 14:33:37 ANE4957I (Session: 12522, Node: CHRS141)  Total
number of
   objects deleted:  0(SESSION: 12522)
12/14/10 14:33:37 ANE4970I (Session: 12522, Node: CHRS141)  Total
number of
   objects expired: 68,914(SESSION: 12522)
12/14/10 14:33:37 ANE4959I (Session: 12522, Node: CHRS141)  Total
number of
   objects failed:   0(SESSION: 12522)
12/14/10 14:33:37 ANE4961I (Session: 12522, Node: CHRS141)  Total
number of
   bytes transferred: 24.77 GB(SESSION: 12522)
12/14/10 14:33:37 ANE4963I (Session: 12522, Node: CHRS141)  Data
transfer
   time:16,311.37 sec(SESSION:
12522)
12/14/10 14:33:37 ANE4966I (Session: 12522, Node: CHRS141)  Network
data
   transfer rate:1,592.35 KB/sec(SESSION:
12522)
12/14/10 14:33:37 ANE4967I (Session: 12522, Node: CHRS141)
Aggregate data
   transfer rate:415.37 KB/sec(SESSION:
12522)
12/14/10 14:33:37 ANE4968I (Session: 12522, Node: CHRS141)  Objects
   compressed by:0%(SESSION:
12522)
12/14/10 14:33:37 ANE4964I (Session: 12522, Node: CHRS141)  Elapsed
   processing time:17:22:09(SESSION:
12522)
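
The two rates in that summary are worth decoding: the network rate divides the
bytes moved by data-transfer time only, while the aggregate rate divides by
total elapsed time, so a session that spends most of its 17+ hours scanning
files shows a low aggregate figure. A quick check against the numbers above:

```python
# Reproduce the client's two throughput figures from the summary log.
bytes_kb = 24.77 * 1024 * 1024          # ANE4961I: 24.77 GB, in KB
xfer_secs = 16311.37                    # ANE4963I: data transfer time
elapsed_secs = 17 * 3600 + 22 * 60 + 9  # ANE4964I: elapsed 17:22:09

network_rate = bytes_kb / xfer_secs      # ~1,592 KB/sec (ANE4966I)
aggregate_rate = bytes_kb / elapsed_secs  # ~415 KB/sec (ANE4967I)
print(f"network ~{network_rate:.0f} KB/sec, aggregate ~{aggregate_rate:.0f} KB/sec")
```

The gap between the two rates is the scan overhead, which is why journal-based
or selective backups help far more here than network tuning would.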



This e-mail and any attachments to it are confidential and are intended
solely for use of the individual or entity to whom they are addressed.
If you have received this e-mail in error, please notify the sender
immediately and then delete it. If you are not the intended recipient,
you must not keep, use, disclose, copy or distribute this e-mail without
the author's prior permission.
The views expressed in this e-mail message do not necessarily represent
the views of Highmark Inc., its subsidiaries, or affiliates.


The information in this e-mail is intended only for the person to whom
it is addressed. If you believe this e-mail was sent to you in error and
the e-mail contains patient information, please contact the Partners
Compliance HelpLine at http://www.partners.org/complianceline . If the
e-mail was sent to you in error but does not contain patient
information, please contact the sender and properly dispose of the
e-mail.


Re: Configuring IBM 3584 library

2010-12-13 Thread Hart, Charles A
If it's an "IBM" library type with LTO / 3592 drives, all you need is the
Atape driver, not the TSM device drivers.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nick Laflamme
Sent: Monday, December 13, 2010 6:31 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Configuring IBM 3584 library

On Dec 13, 2010, at 6:09 AM, Richard Rhodes wrote:

> If you have a VTL from a non-ibm vendor and choose to emulate a IBM 
> lib, do you have the right to use  the IBM drivers?  There might be a 
> vtl in our near future and I've wondered about this.
> 
> Rick

Yes, you do. The tape drivers are part of TSM, not part of the 3584, as
far as I've ever heard. 



Re: Configuring IBM 3584 library

2010-12-10 Thread Hart, Charles A
The 3584 is considered a SCSI library, so just use the DEFINE LIBRARY command
with the parameters for a SCSI library. You can do 'help define library' on
the TSM command line for the options.



>>-DEFine LIBRary--library_name->

   .-LIBType--=--MANUAL---.   
>--+--+->
   '-LIBType--=--+-MANUAL---+-'   
 +-SCSI--| A |--+ 
 +-349X--| B |--+ 
 +-EXTernal-+ 
 '-ACSLS--| C |-' 

   .-RESETDrives--=--Yes-. (1)
>--+-+-->
   '-RESETDrives--=--+-Yes-+-'
 '-No--'  

   .-AUTOLabel--=--Yes---. (2)
>--+-+-><
   '-AUTOLabel--=--+-No+-'
   +-Yes---+  
   '-OVERWRITE-'  

A (SCSI):


   .-AUTOLabel--=--No.   
|--+-+--|
   '-AUTOLabel--=--+-No+-'   
   +-Yes---+ 
   '-OVERWRITE-' 
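
A concrete sketch of the definitions (the library name, drive name, server
name, and device special files are all hypothetical; adjust to what the OS
discovered):

```
define library emcvtl libtype=scsi
define path tsmsrv1 emcvtl srctype=server desttype=library device=/dev/smc0
define drive emcvtl drv01
define path tsmsrv1 drv01 srctype=server desttype=drive library=emcvtl device=/dev/rmt0
```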


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Martha McConaghy
Sent: Friday, December 10, 2010 12:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Configuring IBM 3584 library

I need a little quick help.  I'm trying to connect our TSM 5.5 server to
an EMC VTL.  The EMC box is emulating an IBM 3584 library with IBM 3592
tape drives.  I can't figure out how to define the library.  Should it
be SCSI?  Anyone have any clue what options I should use?

We've never used either before and I haven't had much luck searching the
manuals for info on it.

Martha McConaghy
Marist IT



Re: Antwort: [ADSM-L] LanFree very low performance

2010-11-23 Thread Hart, Charles A
A couple questions 

1) How is this 5TB laid out - 1, 2, 3, 4 or more file systems?

2) How many streams do you see when the backup is running? (Resource
utilization / multi-streaming is by file system for flat-file backups.)

3) Are the tape drives on dedicated HBAs?

4) Are you sure the data is going over the fibre? If not configured properly,
TSM will automatically direct the backups over the LAN (70 MB/s sounds about
right for LAN).

Charles 




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of David 
McClelland
Sent: Tuesday, November 23, 2010 8:14 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Antwort: [ADSM-L] LanFree very low performance

Peppix,

It may be helpful if you are able to send to the list the output of a backup 
session summary report from the client for a backup operation which exhibited 
this poor performance.

Rgds,
__
David Mc,
London, UK


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Ullrich Mänz
Sent: 23 November 2010 14:09
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Antwort: [ADSM-L] LanFree very low performance

Hi peppix, 

you are using incremental backup for an offline database backup. I wonder how
you deal with 5 tape drives: normally, incremental backup uses one session per
mount point. Set the maximum number of mount points in the node definition for
parallelism.

The throughput reported is OK for a single-session backup via LAN. It is slow
for a large-file backup via LAN-free on a completely fibre-channel-based RAID
system, but acceptable for SATA storage.
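
A hedged sketch of the two settings involved (the node name is hypothetical;
RESOURCEUTILIZATION is the client-side half of the same tuning, and the poster
already has it at 10):

```
* server side: allow the node up to 5 concurrent mount points
update node oranode maxnummp=5
* client side (dsm.sys): allow more producer/consumer sessions
RESOURCEUTILIZATION 10
```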

Have a look at your storage environment and node definitions with respect to 
the directory layout of the database environment.

regards
Ulli

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - -
Ullrich Mänz
System-Integration

FRITZ & MACZIOL Software und Computervertrieb GmbH Ludwig Str. 180D, 63067 
Offenbach, Germany

Mobil: +49 170 7678434
Fax: +49 69 38013500 10
Web: http://www.fum.de

Amtsgericht Ulm, Handelsregister-Nummer: HRB 1936
Geschäftsführer: Heribert Fritz, Oliver Schallhorn, Frank Haines
Inhaber: Imtech N.V., Gouda, Niederlande Referenzen finden Sie auf unserer 
Website, Rubrik 'News'.



From:   peppix 
To:     ADSM-L@VM.MARIST.EDU
Date:   23.11.2010 00:25
Subject:[ADSM-L] LanFree very low performance
Sent by:"ADSM: Dist Stor Manager" 



Hi all,
I'm trying to back up an Oracle DB (5 TB) LAN-free, but the transfer speed is 
70 MB/s (over 5 drives)!! The DB is offline, so I use the incremental backup.

The client configuration (v.5.5.2.2):

SErvername  X
   COMMMethod    TCPip
   TCPPort   1500
   TCPServeraddress  XXX
   DISKBUFFSIZE   1023
   TCPWindowsize  1024
   TCPBUffsize   512
   passwordaccess generate
   nodename XXX
   RESOURCEUTILIZATION   10
   TCPNOdelay  YES
   TXNByteLimit 2097152
   LargeCommBuffer   NO
   CommRestartDuration 5
   CommRestartInterval  15
   schedmode   prompted
   managedservices   schedule webclient
   ERRORlogname   /usr/tivoli/tsm/client/ba/bin/dsmerror.log
   ERRORlogrete   7 D
   SCHEDlogname  /usr/tivoli/tsm/client/ba/bin/dsmsched.log
   schedlogrete 7 D
   inclexcl /usr/tivoli/tsm/client/ba/bin/inclexcl.lst
   enablelanfree    yes
   LANFREECommmethod TCPIP
   LANFREETCPPort1500


Server configuration (v.5.5.4.0):
   COMMmethod TCPIP
   TCPPort1500
   TCPWindowsize  1024
   TCPBufsize32
   TCPNODELAY YES
   COMMmethod HTTP
   HTTPPort  1580
   COMMmethod SHaredmem
   SHMPort   1510
   IDLETimeout  2880
   LANGuage en_US
   DATEformat   2
   TIMEformat   1
   NUMberformat5
   EXPInterval   0
   MIRRORREAD LOG   NORMAL
   MIRRORREAD DB NORMAL
   MIRRORWRITE LOG PARALLEL
   MIRRORWRITE DB   SEQUENTIAL
   VOLUMEHistory  /tsmdblog/config/volhistory.txt
   VOLUMEHistory  /usr/tivoli/tsm/server/bin/volhistory.txt
   DEVCONFig   /tsmdblog/config/devconfig.txt
   DEVCONFig   /usr/tivoli/tsm/server/bin/devconfig.txt
   BUFPoolsize  32768
   LOGPoolsize  4096
   TXNGroupmax   256
   MOVEBatchsize 1000
   MOVESizethresh1024
   tcpnodelay  YES
   DNSLOOKUP NO
   COMMTIMEOUT14400
   MAXSESSIONS 80
   SANDISCOVERYON

Considerations:
- 12 drives installed in the library
- SAN and TSM work good (with the same tsm server in the same SAN I backup a 
lot of SAP by TDP at 300MB/s)
- all servers backed up incrementally have the same problem (same configur

Re: GE Centricity Cardiology INW backups

2010-11-17 Thread Hart, Charles A
Interesting 

1) TSM 5.2.2 - no IBM client support / bug fixes. Usually TSM clients do
fine; we used to run the OS/2 client at 3.x against a TSM 5 server. It's a
support risk, though - I'd ask them what issues there were with more current
TSM clients.

2) 'Tivoli doe not require a client on the server'. What does GE mean by
this? The app or the backup server? Ask them to clarify.

3) Will a waveform file not be updated after it is created, and hence only
need to be backed up once?
	If using standard TSM incremental backups, then it will only be
backed up once.

4) How does the size of a database backup compare to the overall disk
space usage?
	Depends on the flat-file backup; not sure if the SQL / NTBackup dump
removes white space when it backs it up?


5) The instructions for using TSM focus on backing up three folders on
the D drive. Is there any major downside to letting TSM back up the
entire system (as long as we exclude *.ldf and *.mdf files)? 

No - assuming a low rate of file change other than the DB, it shouldn't be an
issue. You'll ensure that if new data or a new naming convention appears
you'll get a copy, as opposed to selectively backing files up and missing a
change the app person puts in place.
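
A hedged dsm.opt sketch of the exclude approach (the drive letter comes from
the thread; `...` is TSM's match-any-directories wildcard):

```
DOMAIN d:
EXCLUDE.BACKUP d:\...\*.mdf
EXCLUDE.BACKUP d:\...\*.ldf
```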

Hope this helps 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Thomas Denier
Sent: Wednesday, November 17, 2010 3:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] GE Centricity Cardiology INW backups

I have gotten a request to arrange for TSM backup coverage of a GE
Centricity Cardiology INW system. After reading GE's documentation I
have a number of questions for people who are already backing up INW
systems.

Our INW system runs under Windows 2003. We normally use TSM 5.5.1 client
code for this level of Windows, but GE recommends TSM 5.2.2.
What are the pros and cons of accepting GE's recommendation? This
recommendation is accompanied by the puzzling statement that 'Tivoli doe
not require a client on the server'. What does GE mean by this?

The INW system stores report files and waveform files, and has a MS SQL
database containing references to the report and waveform files. Am I
correct in suspecting that a report file or waveform file will not be
updated after it is created, and hence will only need to be backed up
once? The system dumps the MS SQL database contents to flat files and
lets TSM back up the flat files. How does the size of a database backup
compare to the overall disk space usage?

The instructions for using TSM focus on backing up three folders on the
D drive. Is there any major downside to letting TSM back up the entire
system (as long as we exclude *.ldf and *.mdf files)? 



TSM 6.x / DB2 Best Practive Layout

2010-10-22 Thread Hart, Charles A
We are looking for some best practices for the DB2 layout for TSM.
We've asked IBM Premium Support and their answer was to read the DB2 manual.
That's fine, and we were also told no, you don't need to be a DB2 DBA...
OK, I'm past that now and am looking for advice on a good DB2 layout.
Other than the basics (logs / data files on different file systems) we'd like
to know PP size, raw vs. cooked, jfs2? We plan on running up to 4 TSM
instances on p550 POWER / POWER7 hosts.

Or would the IBM Redbook below suffice... 

http://download.boulder.ibm.com/ibmdl/pub/software/dw/dm/db2/bestpractic
es/DB2BP_Storage_1009I.pdf

Have a good day!
Charles 



Re: Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage

2010-10-20 Thread Hart, Charles A
Dumb question, but isn't the whole idea of the FILE devclass that it is 
sequential? Can one volume be more sequential than another? If it's not 
sequential, then it's random.
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Wednesday, October 20, 2010 1:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with 
SAN/FILEDEVCLASS storage

How you connect to the disk storage (i.e., SCSI or SAN) doesn't matter.  This 
goes more to the issue of how blocks within the volumes are laid out on the 
spindles.  formatting them one at a time will cause the blocks to be laid out 
in a more sequential fashion, so that when TSM references the blocks, they will 
be referenced in a more sequential fashion (assuming you are doing mostly 
sequential I/O).

..Paul
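
A hedged sketch of pre-defining fixed-size volumes one at a time (the device
class, pool, directory, and sizes are hypothetical):

```
define devclass bigfile devtype=file maxcapacity=50G directory=/tsmfile mountlimit=64
define stgpool filepool bigfile
define volume filepool /tsmfile/vol001.dsm formatsize=51200 wait=yes
define volume filepool /tsmfile/vol002.dsm formatsize=51200 wait=yes
```

Issuing the DEFINE VOLUME commands with wait=yes from a script serializes the
formatting, which is what keeps each volume's blocks laid out contiguously on
the spindles.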


At 02:02 PM 10/20/2010, Zoltan Forray/AC/VCU wrote:
>Thanks for the affirmation.  This is what I have been seeing/experiencing. 
> As soon as I can empty the stgpool (5TB), I will define fixed volumes and 
>see how much difference that makes.   I am aware of the issue of 
>single-threading the define/formats to not fragment them, however I 
>wonder how much that really matters in a SAN?
>Zoltan Forray
>TSM Software & Hardware Administrator
>Virginia Commonwealth University
>UCC/Office of Technology Services
>zfor...@vcu.edu - 804-828-4807
>Don't be a phishing victim - VCU and other reputable organizations will 
>never use email to request that you reply with your password, social 
>security number or confidential personal information. For more details 
>visit http://infosecurity.vcu.edu/phishing.html
>
>
>
>From:
>Markus Engelhard 
>To:
>ADSM-L@VM.MARIST.EDU
>Date:
>10/20/2010 09:20 AM
>Subject:
>[ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS 
>storage Sent by:
>"ADSM: Dist Stor Manager" 
>
>
>
>Hi Zoltan,
>
>my experience has been: use fixed size preformatted volumes, and be 
>sure to format them sequentially, even if it seems to take a hell of a 
>time. But then, it´s a one-time action and highly automated, so just 
>don´t try to boost "performance" here. Make sure no one else is bogging 
>perfs, SAN guys sometimes tend to put all kinds of unassorted loads on 
>one storage array producing massive hot-spots during TSM activities.
>
>Kind regards,
>
>Markus
>
>
>--
>
>This e-mail may contain confidential and/or privileged information. If 
>you are not the intended recipient (or have received this e-mail in 
>error) please notify the sender immediately and destroy this e-mail. 
>Any unauthorised copying, disclosure or distribution of  the material 
>in this e-mail or of parts hereof is strictly forbidden.
>
>We have taken precautions to minimize the risk of transmitting software 
>viruses but nevertheless advise you to carry out your own virus checks 
>on any attachment of this message. We accept no liability for loss or 
>damage caused by software viruses except in case of gross negligence or 
>willful behaviour.
>
>Any e-mail messages from the Bank are sent in good faith, but shall not 
>be binding or construed as constituting any kind of obligation on the 
>part of the Bank.


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu  



Re: Looking for SAN/tape experts assistance

2010-10-19 Thread Hart, Charles A
Thanks for the clarification - it's been a while (we use AIX, so the ODM takes
care of it all). To the earlier point, FC devices changing order can be
remediated fairly easily.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Strand, Neil B.
Sent: Tuesday, October 19, 2010 8:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for SAN/tape experts assistance

The zoning process simply associates a server HBA port on the server
with the HBA port on the disk device.

Persistent binding is a function of the OS and HBA drivers on the
server.  Within the server configuration, the HBA must be told that a
device with a particular ID (i.e. /dev/rmt1) is always to be associated
with a physical device with a specific ID (i.e. WWPN).  This is
typically performed by a configuration file that manages the HBA
configuration.

On Solaris with an Emulex HBA, the file /kernel/drv/lpfc.conf will allow
you to manage persistent bindings by associating a specific WWPN or WWNN
to a specific scsi ID:
e.g. fcp-bind-WWPN="500a098386f7d4f3:lpfc0t0";
You also need to ensure that automatic reconfiguration is NOT set.

Automatic reconfiguration can be particularly vexing in a fibre channel
loop environment, where device contention may cause indeterminate delays
with multiple target devices (tape drives) attached to a single
initiator (server HBA).

Cheers,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Hart, Charles A
Sent: Monday, October 18, 2010 4:58 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for SAN/tape experts assistance

This may be a dumb response, but this behavior is similar in Windows and/or
Solaris. I thought that if the person who zoned the device enabled persistent
binding, these devices would not re-order on boot as the OS scans the FC.
the FC.

Did I completely miss it?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
giblackwood
Sent: Monday, October 18, 2010 1:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Looking for SAN/tape experts assistance

Mr Forray,

I know a lot about this problem you are dealing with.  My name is George
Blackwood.  I was a Systems Engineer with IBM for 30 years.  Among other
things, I was a SAN, tape, and TSM specialist.  I have been retired for
2 years, 1 month.  I have my own consulting business doing what I did
when I was an IBMer.

When Linux is rebooted (RedHat, SLES, whatever), it will scan and
re-discover its SCSI and FCP (Fibre Channel Protocol) tape resources
without regard of what it knew about those same devices before the
reboot (this is not the case with some UNIX systems).

So, unless you have one changer and one tape drive, you have no
guarantee that the Linux device numbers will be the same after reboot.
So, chances are IBMtape0 will be IBMtape20 the next time you reboot.

IBM's answer is to set "SANDISCOVERY ON".  This works sometimes for a
small number of drives (under 20), and will sometimes work for more.
But after 18 months of being in and out of IBM PMRs and "CritSits", I
have given up on sandiscovery to fix this issue.

I wrote a BASH script to fix this issue.  A current customer of mine has
8 RedHat Linux servers sharing 12 TSM instances (we can move them around
as need be).  Two instances are Library Managers.  All instances have
access to 4 EMC EDLs.  Each EDL has 80 drives.  So that comes to 3890
drives paths, plus 4 Library paths to maintain.

The script I wrote discovers what TSM instances (Library Servers and
Clients) are running on a given Linux server that has just been
rebooted.  It compensates for any drives that may be mounted, or any
Libraries that are in use, and re-defines all the Library and drive
paths for any TSM instance on a given Linux server.

So if one of the 8 servers needs to be rebooted, the script is run on
that server after reboot.  There is no need to unmount and quiesce
Libraries.  The only requirement is the Library Managers must be up.
The script will also find what drives are in a SCSI reserve "lock out".
And, it is safe to be run during full production time.

I can give you a few pointers to write a similar script (for free), or
for a fee, write it for you.  I guarantee my work.

George Blackwood
Blackwood Data Protection Consulting, LLC
785-218-9961
georgeblackw...@sunflower.com

+--
|This was sent by georgeblackw...@sunflower.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


Re: Looking for SAN/tape experts assistance

2010-10-18 Thread Hart, Charles A
This may be a dumb response, but this behavior is similar in Windows and/or
Solaris. I thought that if the person who zoned the device enabled persistent
binding, these devices would not re-order on boot as the OS scans the FC.

Did I completely miss it? 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
giblackwood
Sent: Monday, October 18, 2010 1:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Looking for SAN/tape experts assistance

Mr Forray,

I know a lot about this problem you are dealing with.  My name is George
Blackwood.  I was a Systems Engineer with IBM for 30 years.  Among other
things, I was a SAN, tape, and TSM specialist.  I have been retired for
2 years, 1 month.  I have my own consulting business doing what I did
when I was an IBMer.

When Linux is rebooted (RedHat, SLES, whatever), it will scan and
re-discover its SCSI and FCP (Fibre Channel Protocol) tape resources
without regard of what it knew about those same devices before the
reboot (this is not the case with some UNIX systems).

So, unless you have one changer and one tape drive, you have no
guarantee that the Linux device numbers will be the same after reboot.
So, chances are IBMtape0 will be IBMtape20 the next time you reboot.

IBM's answer is to set "SANDISCOVERY ON".  This works sometimes for a
small number of drives (under 20), and will sometimes work for more.
But after 18 months of being in and out of IBM PMRs and "CritSits", I
have given up on sandiscovery to fix this issue.

I wrote a BASH script to fix this issue.  A current customer of mine has
8 RedHat Linux servers sharing 12 TSM instances (we can move them around
as need be).  Two instances are Library Managers.  All instances have
access to 4 EMC EDLs.  Each EDL has 80 drives.  So that comes to 3890
drive paths, plus 4 library paths, to maintain.

The script I wrote discovers what TSM instances (Library Servers and
Clients) are running on a given Linux server that has just been
rebooted.  It compensates for any drives that may be mounted, or any
Libraries that are in use, and re-defines all the Library and drive
paths for any TSM instance on a given Linux server.

So if one of the 8 servers needs to be rebooted, the script is run on
that server after reboot.  There is no need to unmount and quiesce
Libraries.  The only requirement is the Library Managers must be up.
The script will also find what drives are in a SCSI reserve "lock out".
And, it is safe to be run during full production time.
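George's approach can be sketched roughly like this. This is a minimal illustration, not his actual script: it assumes you already have a mapping from drive serial numbers (as known to the TSM library manager) to the device special files Linux assigned after the reboot, obtainable for example from the IBM tape driver's /proc entries, and it simply emits the dsmadmc commands that would re-point each TSM drive path. The server, library, drive names, and the serial-to-device mapping are all hypothetical.

```python
# Sketch only: emit "update path" commands that re-point TSM drive paths
# after a Linux reboot has reshuffled the /dev/IBMtape* device numbers.
# The serial -> device mapping below is hard-coded for illustration; a
# real script would read it from the tape driver (e.g. /proc/scsi/IBMtape).

# Drive serials as known to the TSM library manager (hypothetical names).
tsm_drives = {"DRIVE01": "0007852901", "DRIVE02": "0007852902"}

# What Linux discovered after reboot: serial -> device file (hypothetical).
discovered = {"0007852902": "/dev/IBMtape0", "0007852901": "/dev/IBMtape1"}

def path_commands(server, library, tsm_drives, discovered):
    """Build dsmadmc 'update path' commands matching serials to devices."""
    cmds = []
    for drive, serial in sorted(tsm_drives.items()):
        dev = discovered.get(serial)
        if dev is None:
            continue  # drive not seen after reboot; needs investigation
        cmds.append(
            f"update path {server} {drive} srctype=server desttype=drive "
            f"library={library} device={dev} online=yes"
        )
    return cmds

for cmd in path_commands("TSMSRV1", "EDL1", tsm_drives, discovered):
    print(cmd)
```

The real work in a production script is of course building the `discovered` mapping safely and skipping drives that are mounted or SCSI-reserved, which is exactly what George describes handling.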

I can give you a few pointers to write a similar script (for free), or
for a fee, write it for you.  I guarantee my work.

George Blackwood
Blackwood Data Protection Consulting, LLC
785-218-9961
georgeblackw...@sunflower.com

+--
|This was sent by georgeblackw...@sunflower.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--

This e-mail, including attachments, may include confidential and/or
proprietary information, and may be used only by the person or entity
to which it is addressed. If the reader of this e-mail is not the intended
recipient or his or her authorized agent, the reader is hereby notified
that any dissemination, distribution or copying of this e-mail is
prohibited. If you have received this e-mail in error, please notify the
sender by replying to this message and delete this e-mail immediately.


Re: Using copy pool (device class FILE)

2010-09-15 Thread Hart, Charles A
Why do all the import/export stuff? Just have a TSM instance at the
disaster site, back up your DB to the offsite devclass FILE, start TSM
at the offsite with that DB... mark the primary vols unavailable and
you're up!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Grigori Solonovitch
Sent: Wednesday, September 15, 2010 7:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Using copy pool (device class FILE)

I have:



1) defined target server AUBKDR on source server BKME by "def ser AUBKDR
password=xxx hla=yyy lla=zzz" - for virtual volumes;

2) defined source server BKME on target AUBDR by "def ser BKME
password=xxx hla=yyy lla=zzz" - for virtual volumes;

3) pinged servers by "ping server NAME" successfully;

4) exported policy from source server BKME to target server AUBDR by
"exp pol AIX tos=AUBKDR";

5) exported admins from source server BKME to target server AUBDR by
"exp admin XXX tos=AUBKDR";

6) exported all AIX nodes from source server BKME to target server AUBDR
by "exp node XXX filedata=none tos=AUBKDR";

7) create FILE device class on server AUBKDR by "define devclass DCP3_DR
devtype=FILE mountlimit=32 maxcapacity=64G directory=/DCP3_DR
shared=yes" - shared=y is OK or not?

8) created FILE copy pool on server AUBKDR by "define stgpool DCP3_DR
DCP3_DR pooltype=copy maxscratch=0 reclaim=50";

9) created volumes for stgpool DCP3_DR on server AUBKDR by "define
volume DCP3_DR DCP3 formatsize=65536";

10) created device class on server BKME by "DEFINE DEVCLASS DCP3_DR
DEVTYPE=Server SERVERNAME=AUBKDR MAXCAPACITY=64G PREFIX=DCP3";

11) created stgpool on server BKME by "DEFINE STGPOOL DCP3_DR  DCP3_DR
POOLTYPE= Copy  RECLAIM=50 RECLAIMPROCESS=1 RECLAMATIONTYPE=THRESHOLD
COLLOCATE='No'.



I do not know, what is the next step.



Grigori G. Solonovitch
Senior Technical Architect
Information Technology, Ahli United Bank Kuwait
http://www.ahliunited.com.kw
Phone: (+965) 2231-2274  Mobile: (+965) 99798073
E-Mail: grigori.solonovi...@ahliunited.com

Please consider the environment before printing this Email

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Hart, Charles A
Sent: Wednesday, September 15, 2010 3:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Using copy pool (device class FILE)

I may be oversimplifying, but all you need to do is create a copy pool
using the devclass FILE... Of course, you have to have an FCIP connection
or configure TSM server-to-server with virtual volumes.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Grigori Solonovitch
Sent: Wednesday, September 15, 2010 3:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Using copy pool (device class FILE)

Hello Everybody,
I am trying to avoid tape movement between Head Office and Disaster Site
by using an additional copy pool (device class FILE) at the Disaster Site:
1) main instance of TSM Server 5.5.4.1 is running at Head Office;
2) additional instance of TSM Server 5.5.4.1 is running at Disaster Site;
3) main and additional TSM servers are cross-defined;
4) policy and nodes are imported to the disaster TSM Server;
5) main instance of TSM Server is mirrored online to the Disaster Site by
PPRC (files, database, logs) and can be activated in case of disaster to
restore data from the tape copy pool;
6) there is enough disk space to create an additional copy pool (FILE) on
the TSM Server at the Disaster Site.

Is it possible to back up primary pools at Head Office directly to a FILE
copy pool at the Disaster Site, to be able to restore data from disk in
case of disaster (avoiding the tape copy pool)?

If yes, I need help configuring the remote FILE copy pool (any
suggestions, documents, links, redbooks, etc.).
I will deeply appreciate your help.
Kindest regards,

Grigori G. Solonovitch
Senior Technical Architect
Information Technology, Ahli United Bank Kuwait
http://www.ahliunited.com.kw
Phone: (+965) 2231-2274  Mobile: (+965) 99798073
E-Mail: grigori.solonovi...@ahliunited.com

Please consider the environment before printing this Email

Re: Using copy pool (device class FILE)

2010-09-15 Thread Hart, Charles A
I may be oversimplifying, but all you need to do is create a copy pool
using the devclass FILE... Of course, you have to have an FCIP connection
or configure TSM server-to-server with virtual volumes.
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Grigori Solonovitch
Sent: Wednesday, September 15, 2010 3:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Using copy pool (device class FILE)

Hello Everybody,
I am trying to avoid tape movement between Head Office and Disaster Site
by using additional copy pool (device class FILE) at Disaster Site:
1) main instance of TSM Server 5.5.4.1 is running at Head Office;
2) additional instance of TSM Server 5.5.4.1 is running at Disaster
Site;
3) main and additional TSM servers are cross-defined;
4) policy and nodes are imported to disaster TSM Server;
5) main instance of TSM Server is mirrored online to Disaster Site by
PPRC (files, database, logs) and can be activated in case of disaster to
restore data from tape copy pool;
6) there is enough disk space to create additional copy pool (FILE) on
TSM Server at Disaster Site.

Is it possible to back up primary pools at Head Office directly to a FILE
copy pool at the Disaster Site, to be able to restore data from disk in
case of disaster (avoiding the tape copy pool)?

If yes, I need help configuring the remote FILE copy pool (any
suggestions, documents, links, redbooks, etc.).
I will deeply appreciate your help.
Kindest regards,


Grigori G. Solonovitch

Senior Technical Architect

Information Technology  Ahli United Bank Kuwait
http://www.ahliunited.com.kw

Phone: (+965) 2231-2274  Mobile: (+965) 99798073  E-Mail:
grigori.solonovi...@ahliunited.com

Please consider the environment before printing this Email



CONFIDENTIALITY AND WAIVER: The information contained in this electronic
mail message and any attachments hereto may be legally privileged and
confidential. The information is intended only for the recipient(s)
named in this message. If you are not the intended recipient you are
notified that any use, disclosure, copying or distribution is
prohibited. If you have received this in error please contact the sender
and delete this message and any attachments from your computer system.
We do not guarantee that this message or any attachment to it is secure
or free from errors, computer viruses or other conditions that may
damage or interfere with data, hardware or software.

Please consider the environment before printing this Email.



Re: Data Domain: Data Domain, and SQL-Backtrack with Sybase databases

2010-08-27 Thread Hart, Charles A
Trade-offs... TDP license and TDP config vs. LAN-free license and
LAN-free config... It's just a mount point... Let the DBAs be masters of
their own environment.
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Rhodes
Sent: Friday, August 27, 2010 10:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Domain: Data Domain, and SQL-Backtrack with
Sybase databases

We don't have DD or any other dedup box... but we think about them a lot.

This has been one of our ongoing discussions: our big database servers
use rman/tdpo/lanfree. If we could get enough throughput with straight
ethernet or 10G ethernet, then we could ditch tdpo/lanfree and use
straight RMAN to disk over NFS to the dedup box. The only value in
tdpo/lanfree/VTL seems to be SAN speed. I wonder what the trade-off is
between tdpo/lanfree licensing and 10G ethernet adapters and switch
ports... hmmm.

Rick






From: "Hart, Charles A"
Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
To: ADSM-L@VM.MARIST.EDU
Date: 08/27/2010 09:32 AM
Subject: Re: Data Domain: Data Domain, and SQL-Backtrack with Sybase databases

I could be shot for saying the following, but using an NFS share would
provide the opportunity to remove the backup application and its
associated hardware from the data protection stack altogether for things
such as databases, which in some shops are 80% of the load.

We are avid TSM users and fans, but if RMAN backups directly to NFS
mounts work well, it appears there would be an opportunity to reduce
complexity and costs.

Charles

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nobody
Sent: Thursday, August 26, 2010 5:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Domain: Data Domain, and SQL-Backtrack with
Sybase databases

The last time I checked <10% of DD customers use the VTL option.  I'm
willing to bet that 60-80% of those are TSM customers.   The TSM folks
I've
talked to seem to prefer using VTL over file-type devices, which may
explain that.  The rest of the world (except large enterprise customers)
tends to prefer NAS devices.

On Thu, Aug 26, 2010 at 1:38 PM, ADSM-L  wrote:

> Curtis,
>
> >> Data Domain has good market share, but very few DD customers use
VTL.
>
> Really? That surprises me a little (i.e., the marginalised VTL usage) 
> and isn't necessarily representative of the TSM customers I've spoken 
> to or worked with using DDRs, many of whom still use the VTL 
> functionality. I can't comment on whether this is so much the case 
> with shops that use other backup software though (e.g., I know NBU has
its own OST interface).
>
> In any case, whether through VTL or NFS/CIFS the principle is the same

> and of course you're right, pre-appliance compression or even 
> encryption can be catastrophic to data de-dupe ratios. Without knowing

> any more, it sounds like you may have to make a compromise somewhere 
> without changing your current config Nancy, either in terms of backup 
> data storage footprint or RPO.
>
> Cheers,
> __
> David McClelland
> London, UK
>
> On 26 Aug 2010, at 19:10, Nobody  wrote:
>
> > Data Domain has good market share, but very few DD customers use 
> > VTL.
>




-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If the
reader of this message is not the intended recipient or an agent
responsible for delivering it to the intended recipient, you are hereby
notified that you have received this document in error and that any
review, dissemination, distribution, or copying of this message is
strictly prohibited. If you have received this communication in error,
please notify us immediately, and delete the original message.


Re: Data Domain: Data Domain, and SQL-Backtrack with Sybase databases

2010-08-27 Thread Hart, Charles A
I could be shot for saying the following, but using an NFS share would
provide the opportunity to remove the backup application and its
associated hardware from the data protection stack altogether for things
such as databases, which in some shops are 80% of the load.

We are avid TSM users and fans, but if RMAN backups directly to NFS
mounts work well, it appears there would be an opportunity to reduce
complexity and costs.

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nobody
Sent: Thursday, August 26, 2010 5:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Domain: Data Domain, and SQL-Backtrack with
Sybase databases

The last time I checked <10% of DD customers use the VTL option.  I'm
willing to bet that 60-80% of those are TSM customers.   The TSM folks
I've
talked to seem to prefer using VTL over file-type devices, which may
explain that.  The rest of the world (except large enterprise customers)
tends to prefer NAS devices.

On Thu, Aug 26, 2010 at 1:38 PM, ADSM-L  wrote:

> Curtis,
>
> >> Data Domain has good market share, but very few DD customers use
VTL.
>
> Really? That surprises me a little (i.e., the marginalised VTL usage) 
> and isn't necessarily representative of the TSM customers I've spoken 
> to or worked with using DDRs, many of whom still use the VTL 
> functionality. I can't comment on whether this is so much the case 
> with shops that use other backup software though (e.g., I know NBU has
its own OST interface).
>
> In any case, whether through VTL or NFS/CIFS the principle is the same

> and of course you're right, pre-appliance compression or even 
> encryption can be catastrophic to data de-dupe ratios. Without knowing

> any more, it sounds like you may have to make a compromise somewhere 
> without changing your current config Nancy, either in terms of backup 
> data storage footprint or RPO.
>
> Cheers,
> __
> David McClelland
> London, UK
>
> On 26 Aug 2010, at 19:10, Nobody  wrote:
>
> > Data Domain has good market share, but very few DD customers use 
> > VTL.
>



Re: TDP Compression

2010-08-02 Thread Hart, Charles A
1) "q node <nodename> f=d" - see if Compression is set to Yes or Client;
if Client, then check the dsm.sys options.

2) Your actlog's TDP summary entry will let you know - it says
"Compressed" at the end:

08/02/10   14:03:08  ANE4991I (Session: 8215330, Node: DBSS-ORACLUSTER)
  TDP Oracle SUN ANU0599  TDP for Oracle: (2645): =>()
  ANU2526I Backup details for backup piece
  /ora_db//c-611266183-20100802-1c (database "ancprr").
  Total bytes sent: 53739520. Total processing time: 00:00:01.
  Throughput rate: 52480.00Kb/Sec. Compressed: No
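Since the only evidence is buried in free text like the ANE4991I message above, gathering the "Compressed" flag for many backups is easiest by running a small parser over saved actlog output. A minimal sketch, assuming the field layout matches the sample quoted above (the sample text below is that same message):

```python
import re

# Sample actlog text, as in the ANE4991I / ANU2526I message quoted above.
actlog = """ANU2526I Backup details for backup piece
/ora_db//c-611266183-20100802-1c (database "ancprr").
Total bytes sent: 53739520. Total processing time:
00:00:01. Throughput rate: 52480.00Kb/Sec. Compressed: No"""

def parse_backup_details(text):
    """Extract bytes sent, throughput, and the Compressed flag from an
    ANU2526I-style message. Field layout assumed from the sample above."""
    flat = " ".join(text.split())  # undo the line wrapping
    m = re.search(
        r"Total bytes sent: (\d+).*?"
        r"Throughput rate: ([\d.]+)Kb/Sec\. Compressed: (\w+)",
        flat,
    )
    if not m:
        return None
    return {
        "bytes_sent": int(m.group(1)),
        "throughput_kb_sec": float(m.group(2)),
        "compressed": m.group(3) == "Yes",
    }

details = parse_backup_details(actlog)
print(details)
```

In practice you would feed this the output of something like `dsmadmc ... "q actlog begindate=today-1 search=ANU2526I"` and tally the flag per node; that command usage is an assumption to verify against your server level.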


 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Huebner,Andy,FORT WORTH,IT
Sent: Monday, August 02, 2010 2:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TDP Compression

Is there an easy way to see which TDP backups (SQL, Oracle) are using
client compression?
These jobs are all client initiated jobs through a batch process.
I need to gather this information from the TSM side.
I have searched the activity log and cannot find anything for TDP
compression.

Andy Huebner



This e-mail (including any attachments) is confidential and may be
legally privileged. If you are not an intended recipient or an
authorized representative of an intended recipient, you are prohibited
from using, copying or distributing the information in this e-mail or
its attachments. If you have received this e-mail in error, please
notify the sender immediately by return e-mail and delete all copies of
this message and any attachments.

Thank you.



Re: Could someone explain how TSM Licenses work

2010-04-01 Thread Hart, Charles A
It's in the books, but you'll need the base software package, which has
the license keys, then upgrade the code to the level you were at. Or, on
AIX, use a SysBack image (assuming your TSM code is in a path SysBack
captures), or another OS's bare-metal backup/restore.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
yoda woya
Sent: Thursday, April 01, 2010 3:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Could someone explain how TSM Licenses work

Say, for example, that I need to rebuild a TSM server.

I put all the software on a new server and restore my DB and storage
pool. In terms of licenses, what do I need to do? Is there a physical key
file, is there a process, could I get the key again from IBM, or is it
based on an honor system?

Any insight would be great.



Re: create new Filespace

2010-01-29 Thread Hart, Charles A
You can do a delete filespace ("del fi <nodename> /adsmorc") on the TSM
server; it will be removed immediately. You can also rename the filespace
("ren fi <nodename> /adsmorc /adsmorc.org"); then, when he takes his next
backup, it will come back under the /adsmorc filespace as defined in his
tdpo.opt.

Hope this helps. I'm not too sure how the rename impacts his ability to
recover using the control files as opposed to a catalog, but if he
doesn't need the backups based on the control file, I suppose it's a moot
point.

Take Care, 

Charles 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Avy Wong
Sent: Friday, January 29, 2010 3:03 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] create new Filespace

Thank you, Charles, for your reply. This is my challenge here:

I found out the DBA did not use a catalog on the Oracle database side;
instead he uses control files for the DB backups. Because of this, we
cannot sync the Tivoli-side backups, and removing data outside the
retention time will have to be done manually. While he is creating a new
database backup using the catalog this time, I need to get rid of all the
unneeded backup data. I can either delete it manually (not sure if this
is possible on the TSM server side; I almost think it is not), or
redirect the new backups to a different FSID, give it some time, then
just delete the obsolete FSID. Not sure if this is possible at all. Thank
you for your help in advance.



Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860)862-8164
(cell) (860)961-6976




From: "Hart, Charles A"
Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
To: ADSM-L@VM.MARIST.EDU
Date: 01/29/2010 03:39 PM
Subject: Re: [ADSM-L] create new Filespace

Two ways:

1) Rename the existing filespace on the TSM server

2) Change the client's tdpo.opt filespace name

This may create recovery challenges for the pre-existing name with RMAN
...

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Avy Wong
Sent: Friday, January 29, 2010 2:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] create new Filespace

Hello,

   I would like to assign the new backups into a different FSID. How
do you go about doing that? Thanks.


  Node Name: TURIC
 Filespace Name: /adsmorc
 Hexadecimal Filespace Name:
   FSID: 1
   Platform: TDPO LinuxAMD64
 Filespace Type: API:ORACLE
  Is Filespace Unicode?: No
  Capacity (MB): 0.0
   Pct Util: 0.0
Last Backup Start Date/Time:
 Days Since Last Backup Started:
   Last Backup Completion Date/Time: 01/29/2010 13:41:24
   Days Since Last Backup Completed: <1
Last Full NAS Image Backup Completion Date/Time:
Days Since Last Full NAS Image Backup Completed:




Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860)862-8164
(cell) (860)961-6976


The information contained in this message may be privileged and
confidential and protected from disclosure. If the reader of this
message is not the intended recipient, or an employee or agent
responsible for delivering this message to the intended recipient, you
are hereby notified that any dissemination, distribution, or copy of
this communication is strictly prohibited. If you have received this
communication in error, please notify us immediately by replying to the
message and deleting it from your computer.






Re: create new Filespace

2010-01-29 Thread Hart, Charles A
Two ways:

1) Rename the existing filespace on the TSM server

2) Change the client's tdpo.opt filespace name

This may create recovery challenges for the pre-existing name with RMAN
...

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Avy Wong
Sent: Friday, January 29, 2010 2:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] create new Filespace

Hello,

   I would like to assign the new backups into a different FSID. How
do you go about doing that? Thanks.


  Node Name: TURIC
 Filespace Name: /adsmorc
 Hexadecimal Filespace Name:
   FSID: 1
   Platform: TDPO LinuxAMD64
 Filespace Type: API:ORACLE
  Is Filespace Unicode?: No
  Capacity (MB): 0.0
   Pct Util: 0.0
Last Backup Start Date/Time:
 Days Since Last Backup Started:
   Last Backup Completion Date/Time: 01/29/2010 13:41:24
   Days Since Last Backup Completed: <1
Last Full NAS Image Backup Completion Date/Time:
Days Since Last Full NAS Image Backup Completed:




Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860)862-8164
(cell) (860)961-6976





Am I Shoe Shining My LTO4's?

2010-01-28 Thread Hart, Charles A
We push our 2nd copy over individual FCIP links with write acceleration
enabled. We used to trunk 6 GigE links together, but ran into latency
issues without write acceleration enabled, and the technology did not
allow write acceleration and trunking at the same time, so we ended up
with 3-4 LTO4 drives down one 1 GigE link with write acceleration.

That said, when I run a select against the summary table to get timings
on backup stgpools, the more processes we kick off at once, the slower
the speed (makes sense: only so much per GigE link). (TSM env: AIX p550 /
IBM LTO4.)

LTO4's minimum matching drive speed is 30 MB/s, so once we push 4
processes down one link we are pushing less than 30 MB/s each, thereby in
theory running below the minimum rated drive speed; we should be
shoe-shining the drives pretty good. Does anyone know of a way to
validate that we are indeed shoe-shining?
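The arithmetic behind that worry can be made explicit. A back-of-envelope sketch: the 30 MB/s floor is the LTO4 figure cited above, but the ~100 MB/s usable-link estimate for 1 GigE is an assumption, and real FCIP throughput with write acceleration will differ.

```python
# Back-of-envelope check: do N concurrent copy processes over one 1 GigE
# FCIP link push each LTO4 drive below its minimum streaming speed?

LINK_MB_S = 100.0      # assumed usable payload on a 1 Gb/s link (~80% of wire)
LTO4_MIN_MB_S = 30.0   # minimum matching drive speed cited above for LTO4

def per_drive_rate(processes, link_mb_s=LINK_MB_S):
    """Best-case MB/s available to each drive when the link is shared."""
    return link_mb_s / processes

for n in (1, 2, 3, 4):
    rate = per_drive_rate(n)
    shining = rate < LTO4_MIN_MB_S
    print(f"{n} processes: {rate:.1f} MB/s per drive -> "
          f"{'likely shoe-shining' if shining else 'streaming ok'}")
```

On these assumptions, three processes per link stay just above the floor and four drop below it, which matches the concern in the question; comparing actual per-process MB/s from the summary table against the 30 MB/s floor is the same calculation with measured numbers.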

Regards, 

Charles 




Re: DataDomain VTL

2010-01-13 Thread Hart, Charles A
Any compressed or encrypted data will not de-dupe well, if at all
(compressed or encrypted data has a unique signature every time).
As for client-side compression, that feature tends to be of value only at
a small remote site or if you have a 10 Mb LAN. You can force client
compression off from the server side using client option sets.

Regards, 

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nick Laflamme
Sent: Tuesday, January 12, 2010 7:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DataDomain VTL

On Jan 12, 2010, at 11:27 AM, Kelly Lipp wrote:

> We have a customer that insisted on buying one of these for his TSM
environment. Promised 20:1 dedup.  He saw about five to one.  He was in
our Level 2 class telling the story.  At the end he said he wouldn't buy
it again.  I made him repeat that part of the story...

DataDomain's "Best Practices" guide for TSM tells customers not to let
nodes compress data. I have to wonder how much compressing the data or
not alters the dedup ratio. I'm not at all sure that we're going to
enforce a "no compression" policy for our clients.



Re: Reclamation Processing won't start

2009-11-23 Thread Hart, Charles A
Seen this too: adjust the reclaim percent and # of processes (TSM 5.5.2),
yet no reclaim runs. I did get it to work, and it seems the only way is
to make sure there are no other processes running against those pools,
like a backup stgpool or migration (depending on whether you're
reclaiming primary or offsite vols).

Also make sure the MAXSCRATCH on your stgpool is set appropriately (i.e.,
there's headroom between MAXSCRATCH and the number of scratch volumes
already used).
Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Howard Coles
Sent: Monday, November 23, 2009 1:57 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Reclamation Processing won't start

If you are trying to get TSM to move the data to a new Tape Format
(increased density and/or tape type) you would be better served to run a
move data command.  I'm assuming you have created a new storage pool if
you've gone to a higher capacity drive type.  However, 99% means that
the tapes have to be 99% reclaimable, or only have 1% utilization.  

Have you tried bumping the reclaim threshold down to say 90% and see
what happens?  Also, I'm assuming that TSM uses the new drives just fine
for other types of processes (such as migration, storage pool backups,
etc.)?

Just so there's clarity you might want to give a general idea of how the
old and new storage pool situation is laid out, and what you mean by
"upgraded tape drives"  (could just mean newer drives of the same device
type).

See Ya'
Howard


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
> Of Maura Adams
> Sent: Monday, November 23, 2009 1:36 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Reclamation Processing won't start
> 
> We recently upgraded our tape drives.  I have verified that the new
> device class has a sufficient # of mount points, and we have plenty of
> scratch volumes assigned to the tape pool.  I run a query and see where
> we have 36 possible tape returns running reclamation at 99%.  I also
> verified volumes were in readw status.
> I manually adjust the stg pool to the desired reclamation value and
> nothing happens.  No error message, no mention of reclamation
> in the actlog ... zippo.  Tried both an onsite tape pool and an offsite
> copypool.  Frustrating.   What am I missing?
> 
> thanks,
> M



Re: TSM Disaster Recovery with VTL

2009-11-11 Thread Hart, Charles A
I'm pretty sure that once you do a VTL-to-VTL replication, TSM won't know about
the copy you created, because it wasn't copied by TSM (i.e. it's not in the TSM
DB).  The only way to access it is if you are replicating the TSM DB and logs to
the offsite as well, so TSM will be aware of those VTL vols.

We thought of doing that but elected to use the offsite VTL as an active
offsite copy pool so TSM is always aware.  We replicate the DB and logs, so it's
just a matter of bringing up an instance on the other side and updating the
primary volumes to destroyed ... and we are ready to go!
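At failover time, the "update primary to destroyed" step described here could be scripted roughly like this (the pool name is a placeholder; verify the syntax against your server level):

```shell
# Mark every volume in the lost primary pool as destroyed so restores
# are satisfied from the active offsite copy pool instead.
# PRIMARYPOOL is a hypothetical name.
cat > dr_failover.mac <<'EOF'
update volume * access=destroyed wherestgpool=PRIMARYPOOL
/* sanity check: list what we just flagged */
query volume access=destroyed
EOF
```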



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Langdale
Sent: Wednesday, November 11, 2009 10:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Disaster Recovery with VTL

Hello

I'm assuming you are using the VTL's replication process to do this?  If so, 
I'd assume it would be expired (though that is TSM terminology rather than VTL 
terminology). 

Probably best to ask your VTL vendor.

Steven 

Steven Langdale
Global Information Services
EAME SAN/Storage Planning and Implementation
Phone: +44 (0)1733 584175  Mob: +44 (0)7876 216782
Conference: +44 (0)208 609 7400 Code: 331817
Email: steven.langd...@cat.com

 



From: Mehdi Salehi
Sent by: "ADSM: Dist Stor Manager"
Date: 11/11/2009 14:38
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM Disaster Recovery with VTL



Hi,
A VTL in primary site copies backup data to remote site asynchronously. If
some backups expire in TSM, what happens to the copy in remote site? Will 
it
be expired or retained?

Thanks



Re: Replacing tape drives (or "there has to be a better way")

2009-07-09 Thread Hart, Charles A
Dumb question - but I was under the impression that the TS3500 (3584)'s drive
serial numbers were tied to the drive cage (rail), so the SN and WWN were
static... Maybe I'm thinking of the 3494 w/ 3592 drives.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Baker, Jane
Sent: Thursday, July 09, 2009 2:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Replacing tape drives (or "there has to be a
better way")

 

We use LTO2 & LTO3 in a 3584 and the CE always sets the serial number to
match the old one so that we don't have this problem, same as Sean.

Regards,
Jane.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Len Boyle
Sent: 08 July 2009 18:30
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Replacing tape drives (or "there has to be a
better way")

In fact, we found out that for LTO-3 and LTO-4 tape drives in an IBM 3584
library, it is required that they change the serial number to match the old
tape drive, because IBM tracks the drives by serial number for maintenance
contracts.  We found this out when the serial numbers we sent in for a
maintenance contract renewal were kicked out, as field engineering had not
been updating the serial numbers.  This does not apply to LTO-2 tape drives,
though.

len

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sean English
Sent: Wednesday, July 08, 2009 11:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Replacing tape drives (or "there has to be a
better way")

Zoltan,

The majority of our TSM servers are AIX and we do have a setup where we
share multiple library clients with one library.  When we have IBM CEs
come out and replace drives, they just change the serial number on the
new drive to match the old drive they are replacing.  Apparently there
is a way to do that on the drive itself.

Thanks,
Sean






From: Zoltan Forray/AC/VCU
Sent by: "ADSM: Dist Stor Manager"
Date: 07/08/2009 11:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Replacing tape drives (or "there has to be a better way")






I need thoughts/suggestions/help on how to deal with SAN attached tape
drive replacements when a library is shared amongst 5-servers.

We just has a drive replaced, therefore giving us a new serial number
(3494ATL - TS1130).  All servers that use these drives/libraries are
RedHat Linux and use very current lin_tape drivers.

Currently, the method we use is to bounce each server so the system
rescans the SAN and gets the new serial number.

In the past, just stopping the TSM server and then restarting the
lin_tape driver would often be enough. Now with the latest lin_tape
drivers, I don't see the lin_taped daemon running any more.

Yes, I have tried updating the paths on the library manager server and
telling it to autodetect but that didn't help.

There has to be a better way!  If you have a similar configuration, how
do you handle this scenario?
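For reference, the update-path-with-autodetect attempt mentioned above would look something like this on the library manager (the server, drive, and library names are made up; it evidently wasn't sufficient in this case):

```shell
# Re-point the library manager's path at the swapped drive and let TSM
# re-read the serial number. SERVER1, DRIVE01, and LIB3494 are placeholders.
cat > redetect.mac <<'EOF'
update path SERVER1 DRIVE01 srctype=server desttype=drive library=LIB3494 autodetect=yes
/* verify the library manager now reports the new serial */
query drive LIB3494 DRIVE01 format=detailed
EOF
```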


Please check that this email is addressed to you. If not, you should
delete it immediately as its contents may be confidential and its
disclosure, copying or distribution unlawful.

C. & J. Clark International Limited takes steps to prevent the
transmission of electronic viruses but responsibility for screening
incoming messages and the risk of such transmission lies with the
recipient.

C. & J. Clark International Limited Trading as Clarks Registered in
England number 141015.
Registered office 40 High Street, Street, Somerset. BA16 0EQ. England.

This message has been scanned for viruses by BlackSpider MailControl -
www.blackspider.com



Re: Separating the Oracle TDP Cfg - from the OS cfg ..

2009-06-24 Thread Hart, Charles A
Thanks Neil, yes, that's what I'm looking for.  Yes, the BA client / OS backup
stanza only would be managed by the Unix team, and the Oracle team would manage
its own dsm.sys with its own stanzas ... I know it seems a bit silly really ...


Regards, 

Charles

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Neil Rasmussen
Sent: Tuesday, June 23, 2009 3:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Separating the Oracle TDP Cfg - from the OS cfg ..

Hello Charles,

DP Oracle relies on the dsm.sys file in the TSM API bin directory
(/usr/tivoli/tsm/client/api/bin64 on AIX, for instance). So the answer
to your question really depends on what you mean by "base dsm.sys"if
by "base dsm.sys" you mean the BA Client, then yes, DP Oracle can use a
different dsm.sys file.
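To illustrate Neil's point, a minimal API-side dsm.sys for the Oracle team might look like the fragment below (the server name, address, and port are examples only; the path is the AIX default Neil mentions):

```
* /usr/tivoli/tsm/client/api/bin64/dsm.sys -- read by DP for Oracle via the API
SErvername  tsm_oracle
   COMMMethod         TCPip
   TCPServeraddress   tsmserver.example.com
   TCPPort            1500
   PASSWORDAccess     generate
```

The BA client's dsm.sys in the BA client directory can then stay under the Unix team's control with only the OS backup stanza in it.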


Regards,

Neil Rasmussen
Tivoli Storage Manager Client Development IBM Corporation Almaden
Research Center 650 Harry Road San Jose, CA 95120-6099
rasmu...@us.ibm.com



From: "Hart, Charles A"
To: ADSM-L@VM.MARIST.EDU
Date: 06/23/2009 12:47 PM
Subject: Separating the Oracle TDP Cfg - from the OS cfg ..
Sent by: "ADSM: Dist Stor Manager"




We are wondering if it's possible to separate the TDP Oracle cfg so it
uses its own dsm.sys instead of the base dsm.sys.  The reason we ask is that
our Oracle DBAs use TSM to clone DBs, and not all of our Unix team members
know how to cfg the TDP client and all the stanzas associated with DB
cloning (backup/restore).

I looked in the TDP Redbook and didn't really see a way.  Does anyone have
a good way to have one dsm.sys just for OS backup and another dsm.sys for
the Oracle folks?


Thank you all for being so helpful!



Separating the Oracle TDP Cfg - from the OS cfg ..

2009-06-23 Thread Hart, Charles A
 
We are wondering if it's possible to separate the TDP Oracle cfg so it
uses its own dsm.sys instead of the base dsm.sys.  The reason we ask is that
our Oracle DBAs use TSM to clone DBs, and not all of our Unix team members
know how to cfg the TDP client and all the stanzas associated with DB
cloning (backup/restore).

I looked in the TDP Redbook and didn't really see a way.  Does anyone have
a good way to have one dsm.sys just for OS backup and another dsm.sys for
the Oracle folks?


Thank you all for being so helpful!



Re: optionset

2009-05-27 Thread Hart, Charles A
Upd node blahhh clopt=" "   (space between the quotes; sometimes no space is
needed)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Avy Wong
Sent: Wednesday, May 27, 2009 2:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] optionset

Hello forum,
  What is the correct syntax to clear the Optionset for a node? I
put
in 'update node nodename Optionset="  " ' ,that does not work.
Right now the node's Optionset: SQLSET, I want this parameter to be
blank.
Thank you for your help in advance.



Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860)862-8164
(cell) (860)961-6976



Re: TDP FOR SQL

2009-05-20 Thread Hart, Charles A
Your IBM rep?   

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Lepre, 
James
Sent: Wednesday, May 20, 2009 1:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP FOR SQL

Hello Everyone,

 

  Does anyone know where I can get the trial license for the TDP for SQL?

 

Thank you 

 

James 

 


  
  
---
Confidentiality Notice: The information in this e-mail and any attachments 
thereto is intended for the named recipient(s) only.  This e-mail, including 
any attachments, may contain information that is privileged and confidential  
and subject to legal restrictions and penalties regarding its unauthorized 
disclosure or other use.  If you are not the intended recipient, you are hereby 
notified that any disclosure, copying, distribution, or the taking of any 
action or inaction in reliance on the contents of this e-mail and any of its 
attachments is STRICTLY PROHIBITED.  If you have received this e-mail in error, 
please immediately notify the sender via return e-mail; delete this e-mail and 
all attachments from your e-mail  system and your computer system and network; 
and destroy any paper copies you may have in your possession. Thank you for 
your cooperation.



Re: SV: VTL and Dedup ( TS7569G)

2009-04-30 Thread Hart, Charles A
It will depend on the ProtecTIER version installed on the gateway appliance.
v2.1 uses a TSM SMIT pseudo device for the changer and Atape-friendly LTO
devices for the drives (so you still have to use smitty to scan for the
changer device down the FCS adapter).

v2.2.1 supports an Atape-based medium changer (robot) and LTO tape devices.


Your IBM rep "should know this."  Hope this helps.

Regards, 

Charles 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Pawlos 
Gizaw
Sent: Thursday, April 30, 2009 11:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] SV: VTL and Dedup ( TS7569G)

Does anyone use the TS7569G with TSM 5.4 running on AIX 5.2?  This device works
fine on HP-UX 11 v3 with TSM 5.5 by emulating an ATL P300.  But when we try to
test on an AIX server, cfgmgr creates all the device files for the tape drives,
yet we are not able to get the device file for the media changer.  We tried
from smitty using Tivoli Storage Manager Devices and still were not able to
define it; the TSM define methods didn't/couldn't define any devices.


Thanks
Pawlos

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Hart, 
Charles A
Sent: Thursday, March 19, 2009 10:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] SV: VTL and Dedup ( TS7569G)

As many say, your mileage may vary.  You can positively impact the dedupe
factor by ...

1) No client-side compression (compressed files always have unique signatures).

2) No client-side encryption (TSM or OS) - encrypted data is always unique,
like compressed data.

3) If you have more than one VTL, each with its own repository, do your best to
keep like data together (i.e. all Unix / Oracle data to one VTL, all
Windows-based data on a 2nd VTL, etc.).  If you have 200+ Windows OS's you'll
see some deduping...

4) Of course the dedupe ratio for full DB backups gets much higher (with 21
versions of fulls we avg 15:1).  Our challenge has become I/O, not space.

5) According to our VTL vendor, the RMAN FILESPERSET option should be set to
1 (FILESPERSET > 1 allows Oracle to mix backup data streams, thereby creating
unique data)...

Good luck on your venture; it looks like the TS7569G (IBM ProtecTIER?) is the
same one we are using... The most challenging aspect of any VTL is capacity
planning, assuming you have solid retention policies that are followed ...
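Point 5, spelled out as the RMAN fragment the vendor guidance implies (channel and database specifics are site-dependent and omitted here):

```shell
# RMAN run block keeping FILESPERSET at 1 so backup pieces stay
# dedupe-friendly. File name and scope are illustrative only.
cat > backup_db.rcv <<'EOF'
run {
  backup database filesperset 1;
  backup archivelog all filesperset 1;
}
EOF
```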


Regards 

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Christian Svensson
Sent: Thursday, March 19, 2009 7:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: VTL and Dedup ( TS7569G)

Hi Martin,
The 1st backup will probably not give you any major savings.  It always depends
on what kind of data you back up.
What we normally see is savings on archive data and TDP data.  Everything else
is unique data most of the time, and you will not see any savings.

Dedup is a major saving for other backup software such as Legato and NBU, not
for TSM, because you are using incremental-forever.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson

From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] on behalf of Sabar Martin
Hasiholan Panggabean [sabar.hasiho...@metrodata.co.id]
Sent: 19 March 2009 13:09
To: ADSM-L@VM.MARIST.EDU
Subject: VTL and Dedup ( TS7569G)

Hi,


Has anyone here implemented, or does anyone know, how dedup works in TSM using
the TS7569G?  Let's say I have 100 TB of data and back it up to this VTL.  On
the 1st backup attempt / full backup, will this data size decrease on the VTL?

BR,

Martin P



Re: TSM TDP Oracle - Expiration Opinions

2009-04-14 Thread Hart, Charles A
kdel is enabled on all the nodes.

It would be great to see what you guys find on your systems.  In the
mean time, I'm opening up that ETR.






Richard Rhodes wrote:
> We have the same setup.  TDPO backups go to separate nodes that have 
> use their own pool.  We have ongoing problems with RMAN deletes not 
> changing the file in TSM (rman backup pieces) to inactive status, 
> which are then removed during expiration.  We don't know if it's RMAN,

> TDPO, or us with the problem.  Our DBA's run TDPOSYNC fairly often to
fix things up.
> Something is wrong and we just haven't had time to track it down.  I
> have a script that queries the TSM backups table for files (rman backup
> pieces) that are older than our retention period.  I run it once a week.
>
> Rick
>
>
>
>
>
>
>
>
> From: "Gee, Norman"
> Sent by: "ADSM: Dist Stor Manager"
> Date: 03/20/2009 01:27 PM
> To: ADSM-L < at > VM.MARIST.EDU
> Subject: Re: TSM TDP Oracle - Expiration Opinions
>
>
>
>
>
>
> You will have to let RMAN do its job.  Every RMAN backup piece and 
> sets have unique file names and will never place a prior backup into 
> an inactive status.
>
> How would you expire a RMAN backup since every backup piece is still 
> active? Short of mass delete on filespace.
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L < at > VM.MARIST.EDU] On 
> Behalf Of Hart, Charles A
> Sent: Friday, March 20, 2009 9:30 AM
> To: ADSM-L < at > VM.MARIST.EDU
> Subject: TSM TDP Oracle - Expiration Opinions
>
> We currently have our Oracle TDP clients set up as a separate node in
> a separate domain, down to the storage pool hierarchy.  That said, we
> are having challenges with DBAs and their RMAN delete scripts for
> various reasons.  According to the TDP for DB manual it's recommended
> to have the RMAN catalog maintain retention, which I would agree with,
> but we have had little success, and end up filling up virtual tape
> subsystems, orphaning data, etc.
>
> The thought now is to have TSM maintain the RMAN retention, and the
> DBAs would just clean their RMAN catalog with a crosscheck and delete
> process.
>
> What do you do?  Do you let RMAN or TSM maintain retention, and what
> are the pitfalls of either?
>
>
> Best Regards,
>
> Charles Hart
>
> This e-mail, including attachments, may include confidential and/or 
> proprietary information, and may be used only by the person or entity 
> to which it is addressed. If the reader of this e-mail is not the 
> intended recipient or his or her authorized agent, the reader is 
> hereby notified that any dissemination, distribution or copying of 
> this e-mail is prohibited. If you have received this e-mail in error, 
> please notify the sender by replying to this message and delete this 
> e-mail immediately.
>
>
>
> -
> The information contained in this message is intended only for the 
> personal and confidential use of the recipient(s) named above. If the 
> reader of this message is not the intended recipient or an agent 
> responsible for delivering it to the intended recipient, you are 
> hereby notified that you have received this document in error and that

> any review, dissemination, distribution, or copying of this message is

> strictly prohibited. If you have received this communication in error,

> please notify us immediately, and delete the original message.


+--
|This was sent by sam.wozn...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
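The crosscheck-and-delete housekeeping discussed in this thread is usually a short RMAN script; a hedged sketch follows (the retention policy itself would live in the RMAN configuration or in TSM, which is exactly the question at hand):

```shell
# Sketch of the DBA-side RMAN catalog cleanup. File name is arbitrary;
# channel allocation and connect strings are site-specific and omitted.
cat > rman_cleanup.rcv <<'EOF'
run {
  crosscheck backup;
  delete noprompt expired backup;
  delete noprompt obsolete;
}
EOF
```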



Re: TSM TDP Oracle - Expiration Opinions

2009-04-14 Thread Hart, Charles A
>
>
>
>
> From: "Gee, Norman"
> Sent by: "ADSM: Dist Stor Manager"
> Date: 03/20/2009 01:27 PM
> To: ADSM-L < at > VM.MARIST.EDU
> Subject: Re: TSM TDP Oracle - Expiration Opinions
>
>
>
>
>
>
> You will have to let RMAN do its job.  Every RMAN backup piece and 
> sets have unique file names and will never place a prior backup into 
> an inactive status.
>
> How would you expire a RMAN backup since every backup piece is still 
> active? Short of mass delete on filespace.
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L < at > VM.MARIST.EDU] On 
> Behalf Of Hart, Charles A
> Sent: Friday, March 20, 2009 9:30 AM
> To: ADSM-L < at > VM.MARIST.EDU
> Subject: TSM TDP Oracle - Expiration Opinions
>
> We currently have our Oracle TDP clients set up as a separate node in
> a separate domain, down to the storage pool hierarchy.  That said, we
> are having challenges with DBAs and their RMAN delete scripts for
> various reasons.  According to the TDP for DB manual it's recommended
> to have the RMAN catalog maintain retention, which I would agree with,
> but we have had little success, and end up filling up virtual tape
> subsystems, orphaning data, etc.
>
> The thought now is to have TSM maintain the RMAN retention, and the
> DBAs would just clean their RMAN catalog with a crosscheck and delete
> process.
>
> What do you do?  Do you let RMAN or TSM maintain retention, and what
> are the pitfalls of either?
>
>
> Best Regards,
>
> Charles Hart
>
> This e-mail, including attachments, may include confidential and/or 
> proprietary information, and may be used only by the person or entity 
> to which it is addressed. If the reader of this e-mail is not the 
> intended recipient or his or her authorized agent, the reader is 
> hereby notified that any dissemination, distribution or copying of 
> this e-mail is prohibited. If you have received this e-mail in error, 
> please notify the sender by replying to this message and delete this 
> e-mail immediately.
>
>
>
> -
> The information contained in this message is intended only for the 
> personal and confidential use of the recipient(s) named above. If the 
> reader of this message is not the intended recipient or an agent 
> responsible for delivering it to the intended recipient, you are 
> hereby notified that you have received this document in error and that

> any review, dissemination, distribution, or copying of this message is

> strictly prohibited. If you have received this communication in error,

> please notify us immediately, and delete the original message.



Please consider the environment before printing this Email.

"This email message and any attachments transmitted with it may contain
confidential and proprietary information, intended only for the named
recipient(s). If you have received this message in error, or if you are
not the named recipient(s), please delete this email after notifying the
sender immediately. BKME cannot guarantee the integrity of this
communication and accepts no liability for any damage caused by this
email or its attachments due to viruses, any other defects, interception
or unauthorized modification. The information, views, opinions and
comments of this message are those of the individual and not necessarily
endorsed by BKME."



Re: TSM 6.1 : Database replication Feature

2009-04-03 Thread Hart, Charles A
Mmm, federated ... See, now if you could have a master / federated DB, you
could really get that de-dupe into the enterprise, as opposed to just de-dupe
for smaller TSM environments ...



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Allen S. Rout
Sent: Friday, April 03, 2009 2:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 6.1 : Database replication Feature

>> On Fri, 3 Apr 2009 11:24:31 -0700, Colin Dawson 
said:

> If changes are being considered or made to the server database that 
> changes the schema (DDL) or other operational characteristics then 
> we're going to try to steer folks away from these.

Oy.  Or go pop popcorn to watch the explosion.

The customizations I think are rational tend to be meta-schema:
outside of it.  For example, I've already done a bunch of tinkering with
Federated DB configurations.

I've been building a meta-DB which holds references to all the tables in
my TSM server DBs.  Then some sort of virtual tables which union the
contents of, say, all the SESSION tables.  Add a column to the virtual
tables to denote the source server, and all of a sudden you can do
cross-instance monitoring and math which was impossible before.


- Allen S. Rout



Re: VTL and Dedup ( TS7569G)

2009-03-26 Thread Hart, Charles A
Hitachi 9990x front end with FC disk for the ProtecTIER metadata FS and
SATA drives for the repository FS.  The FC infrastructure from host to
storage is 4 Gb.  We are using the non-clustered cfg and achieving up to
500 MB/s writes while doing 200 MB/s reads, with DB and Exchange clients
going direct to the VTL.

Regards, 

Charles

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
km
Sent: Wednesday, March 25, 2009 3:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

On 21/03, Hart, Charles A wrote:
> It works well if you understand your data and how hard you can push it,
> within reason, before you deploy another.  The IBM product likes more
> CPU cores ... understand these are x86 boxes... We see up to 500 MB/s
> writes to one of our VTLs that ingests Exchange backups via the TSM
> TDP.
>

What kind of backend storage do you have to get that performance?

And is the performance good when restoring as well?

-David



Re: VTL and Dedup ( TS7569G)

2009-03-23 Thread Hart, Charles A
I imagine the 1400 MB/s figure is for the clustered version?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Sunday, March 22, 2009 12:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

The white paper I listed in my response to this thread was written by
ESG.  They tested the TS7560 and obtained on the order of 1400MB/sec.
And Charles is correct: it is an x86 box, actually the IBM x3850 which
has, perhaps, the best architecture in the class.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Hart, Charles A
Sent: Saturday, March 21, 2009 8:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

It works well if you understand your data and how hard you can push it, within
reason, before you deploy another.  The IBM product likes more CPU cores ...
understand these are x86 boxes... We see up to 500 MB/s writes to one of our
VTLs that ingests Exchange backups via the TSM TDP.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
W. Curtis Preston
Sent: Saturday, March 21, 2009 12:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Funny.

I would say that BAD dedupe is the enemy of throughput.

There IS good dedupe.  I've seen it.  It hurts neither backup, nor
restore performance.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Clark, Robert A
Sent: Friday, March 20, 2009 4:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Dedupe is an errand boy, sent by the storage industry, to collect a
bill.

Dedupe is the enemy of throughput.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
W. Curtis Preston
Sent: Friday, March 20, 2009 4:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Why do you hate all things dedupe?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Friday, March 20, 2009 10:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Funny, but I was researching the TS7650 yesterday and found this article
on the IBM website.  Pretty good detail about the product in a non-TSM
environment.

ftp://service.boulder.ibm.com/storage/tape/ts7650g_esg_validation.pdf

And then this one in the TSM environment. I think this one might have
been written by somebody somewhat less familiar with TSM than we would
be.
Seemed a little heavy-handed about TSM.

ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/tsw03043usen/TSW03043USEN.
PDF

My overall impression, and I hate all things de-dup, was this is a
pretty good product offering.  I'm sure it's way expensive but
understand there are some follow on products coming that will address
the lower end of this market.

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Alex Paschal
Sent: Friday, March 20, 2009 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Hi, Sabar.

I couldn't find a TS7569G via Google, but on the TS7650G, also a
deduping VTL, after data goes through the factoring (dedup) algorithm it
is run through a compression algorithm.  You probably won't see much
deduplication, but on the first backup you should see a decrease in size
similar to the decrease you would see from the compression on a tape
drive.

Regards,
Alex

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sabar Martin Hasiholan Panggabean
Sent: Thursday, March 19, 2009 5:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VTL and Dedup ( TS7569G)

Hi,


Has anyone here implemented, or does anyone know, how dedup works in TSM
using the TS7569G? Let's say I have 100 TB of data and back it up to this
VTL. On the first attempt of backup / full backup, will this data size
decrease on the VTL?

BR,

Martin P


This message (including any attachments) is intended only for the use of
the individual or entity to which it is addressed and may contain
information that is non-public, proprietary, privileged, confidential,
and exempt from disclosure under applicable law or may constitute as
attorney work product.
If you are not the intended recipient, you are hereby notified that any
use, dissemination, distribution, or copying of this communication is
strictly prohibited. If you have received this communication in error,
notify us immediately by telephone and
(i) destroy this message if a facsimile or (ii) delete this message
immediately if this 

Recall: [ADSM-L] VTL and Dedup ( TS7569G)

2009-03-23 Thread Hart, Charles A
Hart, Charles A would like to recall the message, "[ADSM-L] VTL and Dedup ( 
TS7569G)".



Re: VTL and Dedup ( TS7569G)

2009-03-21 Thread Hart, Charles A
It works well if you understand your data and how hard you can push it,
within reason, before you deploy another.  The IBM product likes more
CPU cores ... understand these are x86 boxes... We see up to 500 MB/s
writes to one of our VTLs that ingests Exchange backups via the TSM
TDP.




TSM TDP Oracle - Expiration Opinions

2009-03-20 Thread Hart, Charles A
We currently have our Oracle TDP clients set up as a separate node in a
separate domain, down through the storage pool hierarchy.  That said, we
are having challenges with the DBAs and their RMAN delete scripts for
various reasons.  According to the TDP for Databases manual it's
recommended to have the RMAN catalog maintain retention, which I would
agree with, but we have had little success, and end up filling up virtual
tape subsystems, orphaning data, etc. 

The thought now is to have TSM maintain the RMAN retention and have the
DBAs just clean their RMAN catalog with a crosscheck and delete
process.   

What do you do?  Do you let RMAN maintain retention or TSM, and what are
the pitfalls of either?


Best Regards, 

Charles Hart



Re: SV: VTL and Dedup ( TS7569G)

2009-03-19 Thread Hart, Charles A
As many say, your mileage may vary. You can positively impact the dedupe 
factor by:

1) No client-side compression (compressed files always have a unique signature).

2) No client-side encryption (TSM or OS) - encrypted data is always unique, like 
compressed data.

3) If you have more than one VTL, each with its own repository, do your best to 
keep like data together (i.e. all Unix / Oracle data on one VTL, all Windows-based 
data on a second VTL, etc.).  If you have 200+ Windows OSs you'll see some 
deduping... 

4) Of course the dedup for full DB backups gets much higher (on 21 versions of 
fulls we avg 15:1).  Our challenge has become I/O, not space.

5) According to our VTL vendor, the RMAN filesperset option should be at 
1 (filesperset > 1 allows Oracle to mix backup data streams, thereby creating 
unique data...)

Good luck on your venture; it looks like the TS7569G (IBM ProtecTIER?) is the same 
one we are using... The most challenging aspect of any VTL is capacity planning, 
assuming you have solid retention policies that are followed ... 


Regards 

Charles 
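The arithmetic behind item 4 is worth making concrete. A back-of-envelope sketch (the 15:1 ratio on 21 versions of fulls is from this post; the `stored_size_tb` helper and the 5 TB database size are illustrative assumptions, not a vendor formula):

```python
def stored_size_tb(logical_tb, dedup_ratio, compression_ratio=1.0):
    """Estimate physical space consumed in a deduping VTL repository.

    logical_tb: total backup data written (front-end TB)
    dedup_ratio: e.g. 15.0 for the 15:1 seen on 21 versions of fulls
    compression_ratio: any extra compression applied after factoring
    """
    return logical_tb / (dedup_ratio * compression_ratio)

# 21 full DB backups of a hypothetical 5 TB database at the 15:1 ratio:
logical = 21 * 5.0                              # 105 TB front-end
physical = stored_size_tb(logical, dedup_ratio=15.0)
print(round(physical, 1))                       # 7.0 TB of repository space
```

Which is why, as the post says, the bottleneck shifts from space to the I/O needed to ingest and reassemble all that front-end data.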

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Christian Svensson
Sent: Thursday, March 19, 2009 7:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: VTL and Dedup ( TS7569G)

Hi Martin,
The first backup will probably not give you any major savings. It always depends 
on what kind of data you back up.
What we normally see is savings on archive data and TDP data. Everything else is 
most of the time unique data and you will not see much saving.

Dedup is a major saving for other backup software such as Legato and NBU, not for 
TSM, because you are using incremental forever.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson

Från: ADSM: Dist Stor Manager [ads...@vm.marist.edu] för Sabar Martin 
Hasiholan Panggabean [sabar.hasiho...@metrodata.co.id]
Skickat: den 19 mars 2009 13:09
Till: ADSM-L@VM.MARIST.EDU
Ämne: VTL and Dedup ( TS7569G)

Hi,


Has anyone here implemented, or does anyone know, how dedup works in TSM using 
the TS7569G? Let's say I have 100 TB of data and back it up to this VTL. On the 
first attempt of backup / full backup, will this data size decrease on the VTL?

BR,

Martin P



Re: TSM v6 -announcement

2009-02-13 Thread Hart, Charles A
That's great!  Better than the 5-6 GB/hr advertised.  Could you let us know what 
type of infrastructure?  pSeries / disk type, etc.?

Thanks! 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Abbott, Joseph
Sent: Friday, February 13, 2009 10:04 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM v6 -announcement

Personal experience with an 80GB database was 33 minutes when going from 5.5.1.6 
to 6.1 Beta.

JoeA

Joseph A Abbott MCSE 2003/2000, MCSA2003 Partners Healthcare Systems 
Development Team Tivoli Storage Manager jabb...@partners.org
Cell-617-633-8471
Desk-617-724-4929
Page-# (617) 362-6341
jabb...@usamobility.net 

"I can't tell you how much I wish we could just shut up and smile."


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Rainer 
Tammer
Sent: Friday, February 13, 2009 10:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM v6 -announcement

Hello,

Hart, Charles A wrote:
> Couple folks just came back from pulse and one of the Labs was the TSM 
> Upgrade, apparently some Beta testers are seeing a 40GB upgrade DB 
> taking 28Hrs on avg hdw
>
>
If this is true then this would be a nightmare.
Can anyone comment on that?

Bye
  Rainer

>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
> Of Andrew Raibeck
> Sent: Friday, February 13, 2009 7:19 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] TSM v6 -announcement
>
> Actually 6.1 has not been released yet; it has only been *announced*.
>
> Electronic availability date is March 27, 2009, and media availability 
> date is April 24, 2009.
>
> Best regards,
>
> Andy
>
> Andy Raibeck
> IBM Software Group
> Tivoli Storage Manager Client Product Development Level 3 Team Lead 
> Internal Notes e-mail: Andrew Raibeck/Tucson/i...@ibmus Internet e-mail:
> stor...@us.ibm.com
>
> IBM Tivoli Storage Manager support web page:
> http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageM
> an
> ager.html
>
>
> The only dumb question is the one that goes unasked.
> The command line is your friend.
> "Good enough" is the enemy of excellence.
>
> "ADSM: Dist Stor Manager"  wrote on 02/12/2009 
> 10:02:20 PM:
>
>
>> [image removed]
>>
>> Re: TSM v6 -announcement
>>
>> Zoltan Forray/AC/VCU
>>
>> to:
>>
>> ADSM-L
>>
>> 02/12/2009 10:07 PM
>>
>> Sent by:
>>
>> "ADSM: Dist Stor Manager" 
>>
>> Please respond to "ADSM: Dist Stor Manager"
>>
>> Having been a V6.1 beta tester (now that it has been released I guess 
>> I can speak about it), I can confirm that as of the last update I 
>> have
>>
>
>
>> not seen anything for zOS other than s390 Linux.
>>
>> Also, they will not be supporting Linux NON-x64, which causes me 
>> issues since 2 of my RH Linux server are not x64 capable and will 
>> require replacing/upgrading/merging with another server.
>>
>>
>>
>> "Gee, Norman" 
>> Sent by: "ADSM: Dist Stor Manager" 
>> 02/12/2009 07:23 PM
>> Please respond to
>> "ADSM: Dist Stor Manager" 
>>
>>
>> To
>> ADSM-L@VM.MARIST.EDU
>> cc
>>
>> Subject
>> Re: [ADSM-L] TSM v6 -announcement
>>
>>
>>
>>
>>
>>
>> I have read announcement letter 209-004 multiple times and was 
>> looking
>>
>
>
>> at the server documentation that was available and there were no 
>> mention of TSM version 6 for z/OS server.  There were documentation 
>> for z/OS BA client and API.  Is this a subtle hint that TSM version 6 
>> for z/OS server is not available or is it being drop?  Is it being
>>
> delay?
>
> This e-mail, including attachments, may include confidential and/or 
> proprietary information, and may be used only by the person or entity 
> to which it is addressed. If the reader of this e-mail is not the 
> intended recipient or his or her authorized agent, the reader is 
> hereby notified that any dissemination, distribution or copying of 
> this e-mail is prohibited. If you have received this e-mail in error, 
> please notify the sender by replying to this message and delete this e-mail 
> immediately.
>
>
>


The information in this e-mail is intended only for the person to whom it is 
addressed. If you believe this e-mail was sent to you in error and the e-mail 
contains patient information, please contact the Partners Compliance HelpLine 
at http://www.partners.org/complianceline . If the e-mail was sent to 

Re: TSM v6 -announcement

2009-02-13 Thread Hart, Charles A
A couple of folks just came back from Pulse, and one of the labs was the TSM
upgrade. Apparently some beta testers are seeing a 40GB DB upgrade
taking 28 hrs on average hardware.
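The two data points in this thread imply wildly different upgrade rates, so the arithmetic is worth spelling out (the 40GB/28hr and 80GB/33min figures are from the posts; the helper function is illustrative):

```python
def upgrade_rate_gb_per_hr(db_gb, hours):
    """DB-upgrade throughput in GB/hr."""
    return db_gb / hours

beta_lab  = upgrade_rate_gb_per_hr(40, 28)       # Pulse lab report: ~1.4 GB/hr
joea_site = upgrade_rate_gb_per_hr(80, 33 / 60)  # JoeA's result: ~145 GB/hr
print(round(beta_lab, 1), round(joea_site, 1))   # 1.4 145.5
```

A two-orders-of-magnitude spread like that usually points at the underlying disk subsystem rather than the upgrade utility itself, which is why the hardware question matters.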

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Andrew Raibeck
Sent: Friday, February 13, 2009 7:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM v6 -announcement

Actually 6.1 has not been released yet; it has only been *announced*.

Electronic availability date is March 27, 2009, and media availability
date is April 24, 2009.

Best regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Product Development Level 3 Team Lead
Internal Notes e-mail: Andrew Raibeck/Tucson/i...@ibmus Internet e-mail:
stor...@us.ibm.com

IBM Tivoli Storage Manager support web page:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageMan
ager.html


The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.

"ADSM: Dist Stor Manager"  wrote on 02/12/2009
10:02:20 PM:

> [image removed]
>
> Re: TSM v6 -announcement
>
> Zoltan Forray/AC/VCU
>
> to:
>
> ADSM-L
>
> 02/12/2009 10:07 PM
>
> Sent by:
>
> "ADSM: Dist Stor Manager" 
>
> Please respond to "ADSM: Dist Stor Manager"
>
> Having been a V6.1 beta tester (now that it has been released I guess 
> I can speak about it), I can confirm that as of the last update I have

> not seen anything for zOS other than s390 Linux.
>
> Also, they will not be supporting Linux NON-x64, which causes me 
> issues since 2 of my RH Linux server are not x64 capable and will 
> require replacing/upgrading/merging with another server.
>
>
>
> "Gee, Norman" 
> Sent by: "ADSM: Dist Stor Manager" 
> 02/12/2009 07:23 PM
> Please respond to
> "ADSM: Dist Stor Manager" 
>
>
> To
> ADSM-L@VM.MARIST.EDU
> cc
>
> Subject
> Re: [ADSM-L] TSM v6 -announcement
>
>
>
>
>
>
> I have read announcement letter 209-004 multiple times and was looking

> at the server documentation that was available and there were no 
> mention of TSM version 6 for z/OS server.  There were documentation 
> for z/OS BA client and API.  Is this a subtle hint that TSM version 6 
> for z/OS server is not available or is it being drop?  Is it being
delay?



Re: TDPO problem - can't delete old tdpo backups

2009-02-10 Thread Hart, Charles A
Depending on how long you need to keep the history of the old backups, if
they're not visible in the RMAN catalog, an option is to rename the TDPO
filespace on the backup server - then the next RMAN backup will create a
new one based on the name in tdpo.opt ... You then wait x days for your
retention, then go back to the TSM server and delete the TDP filespace
you renamed ... A little less dangerous than del obj, not to mention that
by doing the del obj you are only deleting one copy (assuming you make an
offsite copy). 

Charles  
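The rename-then-wait-then-delete approach lends itself to a small helper that builds the two admin commands. A hedged sketch (the node and filespace names are placeholders; `rename filespace` and `delete filespace` are standard TSM administrative commands, but verify the exact syntax against your server level, and only run the delete after your retention window has passed):

```python
def tdpo_cleanup_commands(node, filespace, suffix=".orphaned"):
    """Build the two admin commands for the rename-then-delete approach.

    Step 1 runs now; step 2 runs only after the retention window has
    passed and nothing references the renamed filespace anymore.
    """
    renamed = filespace + suffix
    return [
        f"rename filespace {node} {filespace} {renamed}",
        f"delete filespace {node} {renamed}",  # run this x days later
    ]

# Hypothetical TDP node and filespace names:
for cmd in tdpo_cleanup_commands("DBSS0001-ORA", "/tdpo_fs"):
    print(cmd)
```

The payoff of the rename over `delete object` is exactly what the post describes: the data ages out through normal retention on both primary and copy pools instead of being surgically removed from only one.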

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Mochnaczewski
Sent: Tuesday, February 10, 2009 2:22 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDPO problem - can't delete old tdpo backups

At this point, delete object is most likely your only option. We have
had to deal with orphaned backups in the past. It is not that difficult
to delete them. I have a step by step procedure to do this. Has not
failed me yet.

Rich

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu]on Behalf Of
Richard Rhodes
Sent: Tuesday, February 10, 2009 3:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TDPO problem - can't delete old tdpo backups


We've been using rman/tdpo backups for over a year now, and are still
trying to get our heads fully around it.  I've been working with TSM
support on a problem and not getting very far, so I thought I'd ask the
list.

We have several TDPO nodes in a TSM server  that have old backups we
cannot get rid of.  In all other ways TDP/TSM is working great.  When we
run TDPOSYNC it says the RMAN catalog and TDPO is INSYNC - nothing to
cleanup.  But, a list of objects form the backups table in TSM for the
node shows old backups.  we keep backups for 50 days - these old files
are well back into last year.

What we think happened:

Back in Oct/Nov/Dec time frame, we did RESTORES on several databases.
According to the DBA's, part of these restore was Oracle creating a new
DBID (Database ID).
In essence, after the restore we had a completely NEW DATABASE.
From RMAN's perspective, these were NEW DATABASES.
The old objects in TSM that we can't get rid of appear to be from PRIOR
TO THE RESTORE.  It's as if both RMAN and TDPO have no knowledge of
these objects. The DBA who handles rman/tdpo (who is very good!) says
that these old objects are NOT in RMAN.  RMAN/TDPO/TSM is CORRECTLY
backing up and deleting backups SINCE the restore.  Also, we setup
rman/tdpo so that every Oracle db has it's own TSM node.

IBM suggested

1)  delete them with the BA client. It doesn't work.  It seems the files
in TSM have no owner, and the ba client won't/can't see them.

2)  rename the node and start over.  Yes, I can do this, but it seems to
be running around the problem.

or 3)  User error (the dba and myself) - we don't understand something
about RMAN/TDPO/TSM.

It seems to me that I have either a RMAN bug, or a TDPOSYNC bug.
 what I wish is that tdposync would list out the full set of files that
it is comparing from rman and tsm.

Yes, I'm aware of the "delete object" command . . . I really don't want
to go there!

=

Here is an example of a the files in one of the problem nodes:


==> tsmsap2  TDPO-ONSITE  SAPEQ1D1_DB
==> q node:  SAPEQ1D1_DBAIX TDPO-ONSITE 1   1
No



i0jo7odu_1_1ACTIVE_VERSION  2008-08-17
i8jo7tuc_1_1ACTIVE_VERSION  2008-08-17

u7jrej66_1_1ACTIVE_VERSION  2008-09-25
u8jrejb3_1_1ACTIVE_VERSION  2008-09-25

ucjrejk0_1_1ACTIVE_VERSION  2008-09-25
udjrejlo_1_1ACTIVE_VERSION  2008-09-25



4mk2gkvp_1_1ACTIVE_VERSION  2008-12-17
4tk2gv5l_1_1ACTIVE_VERSION  2008-12-17

o0k3bpge_1_1ACTIVE_VERSION  2008-12-28
o5k3bvvt_1_1ACTIVE_VERSION  2008-12-28

6pk410jh_1_1ACTIVE_VERSION  2009-01-05
6uk4173b_1_1ACTIVE_VERSION  2009-01-05

btk4iavp_1_1ACTIVE_VERSION  2009-01-11
buk4ib01_1_1ACTIVE_VERSION  2009-01-11









Thanks

Rick



-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If the
reader of this message is not the intended recipient or an agent
responsible for delivering it to the intended recipient, you are hereby
notified that you have received this document in error and that any
review, dissemination, distribution, or copying of this message is
strictly prohibited. If you have received this communication in error,
please notify us immediately, and delete the original message.


Re: SQL Select Statement Subtracting Time from Summary Table

2009-01-16 Thread Hart, Charles A
Thanks Richard for pointing out the ADSM FAQ for subtracting time ...
Our issue is we have OS backups backing up DB file systems via the OS
backup schedule. (We have a separate node dbss0001 for the OS backup and
a dbss0001-ORA for the RMAN backups.)  So we are going to use this query
to auto-ticket the Unix team for include/exclude challenges:

SELECT START_TIME, END_TIME, SCHEDULE_NAME, ENTITY, BYTES FROM SUMMARY
WHERE ACTIVITY='BACKUP' AND (CURRENT_TIMESTAMP-END_TIME)HOURS <= 16
HOURS and (SCHEDULE_NAME Is Not Null) and (BYTES >= 600)
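The 16-hour window in that WHERE clause is just a cutoff-timestamp comparison, which is easy to sanity-check outside the server. A small illustration (the node names, times, and the `within_last_hours` helper are invented for the example):

```python
from datetime import datetime, timedelta

def within_last_hours(end_time, hours=16, now=None):
    """Mimic (CURRENT_TIMESTAMP - END_TIME)HOURS <= 16 from the query."""
    now = now or datetime.now()
    return now - end_time <= timedelta(hours=hours)

now = datetime(2009, 1, 16, 18, 0)
rows = [
    ("dbss0001", datetime(2009, 1, 16, 4, 0)),   # 14h ago -> in window
    ("dbss0002", datetime(2009, 1, 15, 12, 0)),  # 30h ago -> filtered out
]
recent = [name for name, end in rows if within_last_hours(end, 16, now)]
print(recent)  # ['dbss0001']
```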



-Original Message-----
From: Hart, Charles A 
Sent: Friday, January 16, 2009 10:04 AM
To: ADSM-L@VM.MARIST.EDU
Subject: SQL Select Statement Subtracting Time from Summary Table


Is there a way to have a select statement perform much like the function
of the q act begint=-8?  I'm writing a SQL query to determine whether OS
backups are backing up DB file systems by doing a select from the
summary table, with nodename, sched name, bytes recvd > 50GB, etc., but
I need to find a way to say only look back x # of hours ... I've even
checked my SQL in 21 Minutes book and can't find anything...


Hope all is well with everyone and good luck in this new challenging
year!

Regards, 

Charles 





SQL Select Statement Subtracting Time from Summary Table

2009-01-16 Thread Hart, Charles A
Is there a way to have a select statement perform much like the function
of the q act begint=-8?  I'm writing a SQL query to determine whether OS
backups are backing up DB file systems by doing a select from the
summary table, with nodename, sched name, bytes recvd > 50GB, etc., but
I need to find a way to say only look back x # of hours ... I've even
checked my SQL in 21 Minutes book and can't find anything...


Hope all is well with everyone and good luck in this new challenging
year!

Regards, 

Charles 





Re: Question on DB2 expiration

2009-01-02 Thread Hart, Charles A
As I remember, going from DB2 version 7 to 8, when db2adutl passes a delete it's
like Oracle in that it's gone ... You may still see the occupancy if
expiration and audit licenses haven't run since the delete was passed...

Charles  

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Mochnaczewski
Sent: Friday, January 02, 2009 2:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on DB2 expiration

When they invoke db2adutl, they see no backups in TSM for the databases
they expired, or more appropriately deleted.

Rich

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu]on Behalf Of
Richard Sims
Sent: Friday, January 02, 2009 3:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on DB2 expiration


On Jan 2, 2009, at 2:13 PM, Richard Mochnaczewski wrote:

> Hi *,
>
> Our DBA group recently expired several terabytes of data. When they 
> check with their db2util utility, they see no backups. When I run a 
> select statement on the "virtual" filesystem type API:DB2/AIX64  for 
> the respective databases I see 100% usage ( some range from 2Tb to 
> 10Tb but they have all been cleaned up ). The deletion of the backups 
> have been done on Wednesday and expiration runs daily at 8:00 AM to 
> completion but the filespaces still show as being at 100%. Is this a 
> bug ? Running TSM 5.4.30 on AIX 5.3 .

But what are the retention policies according to the Copy Group within
the Management Class that was used for the backups?  What do they see
when they invoke the more appropriate utility, db2adutl?  (Keep in mind
that DB2 utilizes both Backup and Archive.) The DB2 TSM redbook gives a
good overview of the whole arrangement.

Richard Sims



RMAN - Oracle Deletes

2008-11-21 Thread Hart, Charles A
Hope everyone is well in these trying times.

Does anyone know if there's a way, other than tracking occupancy, to see
if RMAN is passing deletes to TSM for Oracle TDP clients?  We've had
many challenges with our Oracle group and their delete backup scripts
not working.

Thank you, and have a great day!

Charles 
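One low-tech answer, if occupancy really is all you have: snapshot occupancy for each -ORA node daily and flag nodes whose footprint never shrinks after the RMAN delete script runs. A sketch of the comparison (node names and MB figures are invented; the snapshots would come from `q occupancy` or a select against the OCCUPANCY table):

```python
def deletes_reached_tsm(yesterday_mb, today_mb, tolerance_mb=0.0):
    """True if occupancy shrank, i.e. the TDP node actually released space."""
    return today_mb < yesterday_mb - tolerance_mb

# Occupancy (MB) per TDP node from two hypothetical daily snapshots:
yesterday = {"DBSS0001-ORA": 512_000, "DBSS0002-ORA": 750_000}
today     = {"DBSS0001-ORA": 498_000, "DBSS0002-ORA": 805_000}

suspects = [n for n in today if not deletes_reached_tsm(yesterday[n], today[n])]
print(suspects)  # ['DBSS0002-ORA'] - occupancy only grows: deletes not landing
```

New backups written the same day can mask a successful delete, so in practice you would compare over a window longer than the backup cycle, or watch for monotonic growth across a week.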




Re: Cacti

2008-11-17 Thread Hart, Charles A
 
Cool .. We are using Cacti for things like FCIP and other FC links.
What mechanisms are you using to track TSM-related info? Is it a select
statement that dumps data to a file that Cacti graphs?  Getting the
session count would be really cool, among other things.

Appreciate your insight!

Regards, 

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Strand, Neil B.
Sent: Monday, November 17, 2008 7:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Cacti

Cacti works like a champ for trending.
We use it to trend db, log, storage pool, session count & drive mounts.


Cheers,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Mad Unix
Sent: Saturday, November 15, 2008 1:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Cacti

Anyone using Cacti to report TSM utilization, i.e. to graph Tivoli
Storage Manager utilization stats?


Thanks

IMPORTANT: E-mail sent through the Internet is not secure and timely
delivery of Internet mail is not guaranteed. Legg Mason therefore,
recommends that you do not send any  action-oriented or time-sensitive
information to us via electronic mail, or any confidential or sensitive
information including:  social security numbers, account numbers, or
personal identification numbers.

This message is intended for the addressee only and may contain
privileged or confidential information. Unless you are the intended
recipient, you may not use, copy or disclose to anyone any information
contained in this message. If you have received this message in error,
please notify the author by replying to this message and then kindly
delete the message. Thank you.



Oracle TDP and Server Side Retention ...

2008-11-05 Thread Hart, Charles A
We are having a challenge with our Oracle DBA's in that they are not
passing RMAN deletes per our standards.  That said, what are the
implications if we set the copy group to a retention other than the
recommended 1-0-1-0 ?  I'm guessing it will create a disconnect in the
DBA's RMAN catalog.  

Has anyone had to deal with a similar situation where they were forced
to put in a different retention set that the recommended?

Hope all is well with everyone and thank you for your input!

Regards, 

Charles 




Re: Several simultaneous backups of an AIX client

2008-10-23 Thread Hart, Charles A
If there are multiple file systems, you can make sure there's a stream
for each. You can also break the backup into smaller chunks with more
streams using the virtualmountpoint option, then increase
resourceutilization and MAXNUMMP for the client. Otherwise, tar up all
the files and send the tarball to TSM, but you may end up backing up
data that hasn't changed.


Example options file entries:
virtualmountpoint /afs/xyzcorp.com/home/ellen
virtualmountpoint /afs/xyzcorp.com/home/ellen/test/data

The virtualmountpoint option defines a virtual mount point for a file
system if you want to consider files for backup that begin with a
specific directory within that file system. Using the virtualmountpoint
option to identify a directory within a file system provides a direct
path to the files you want to back up, saving processing time.

It is more efficient to define a virtual mount point within a file
system than it is to define that file system using the domain option and
then use the exclude option in your include-exclude options list to
exclude the files that you do not want to back up.

Use the virtualmountpoint option to define virtual mount points for
multiple file systems, for local and remote file systems, and to define
more than one virtual mount point within the same file system. Virtual
mount points cannot be used in a file system handled by automounter.
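Putting those pieces together, a dsm.sys stanza for splitting one large file system into parallel streams might look like this (the server name and paths are invented for illustration; MAXNUMMP is the matching server-side setting via UPDATE NODE):

```
SErvername  tsmprod1
   ResourceUtilization  6
   VirtualMountpoint    /bigfs/data1
   VirtualMountpoint    /bigfs/data2
   VirtualMountpoint    /bigfs/data3
```

Each virtual mount point then shows up as its own filespace, so incrementals can scan and send them concurrently.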

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Joni Moyer
Sent: Thursday, October 23, 2008 10:20 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Several simultaneous backups of an AIX client

Hello everyone,

I have a situation with an AIX TSM client which is at the 5.5.0.6
release on a TSM server which is on AIX and is at 5.5.1.1.  The client
has 18 million files to inspect when doing a daily incremental backup
that begins at 9PM and as you can see below took almost 11 hours.  As
you can guess this just isn't acceptable to run for this long period of
time.  Does anyone have any suggestions on what could be done in order
to get this server backed up incrementally in a quicker fashion?  Any
suggestions are greatly appreciated!

10/23/08 07:59:24 ANE4952I (Session: 40206, Node: CHRS141)  Total number of objects inspected: 18,540,895
10/23/08 07:59:24 ANE4954I (Session: 40206, Node: CHRS141)  Total number of objects backed up: 225,018
10/23/08 07:59:24 ANE4958I (Session: 40206, Node: CHRS141)  Total number of objects updated: 0
10/23/08 07:59:24 ANE4960I (Session: 40206, Node: CHRS141)  Total number of objects rebound: 0
10/23/08 07:59:24 ANE4957I (Session: 40206, Node: CHRS141)  Total number of objects deleted: 0
10/23/08 07:59:24 ANE4970I (Session: 40206, Node: CHRS141)  Total number of objects expired: 3,376
10/23/08 07:59:24 ANE4959I (Session: 40206, Node: CHRS141)  Total number of objects failed: 0
10/23/08 07:59:24 ANE4961I (Session: 40206, Node: CHRS141)  Total number of bytes transferred: 25.57 GB
10/23/08 07:59:24 ANE4963I (Session: 40206, Node: CHRS141)  Data transfer time: 31,313.02 sec
10/23/08 07:59:24 ANE4966I (Session: 40206, Node: CHRS141)  Network data transfer rate: 856.53 KB/sec
10/23/08 07:59:24 ANE4967I (Session: 40206, Node: CHRS141)  Aggregate data transfer rate: 685.82 KB/sec
10/23/08 07:59:24 ANE4968I (Session: 40206, Node: CHRS141)  Objects compressed by: 0%
10/23/08 07:59:24 ANE4964I (Session: 40206, Node: CHRS141)  Elapsed processing time: 10:51:47

                     Node Name: CHRS141
                      Platform: AIX
               Client OS Level: 5.3
                Client Version: Version 5, release 5, level 0.6
            Policy Domain Name: AIX
         Last Access Date/Time: 10/23/08 10:55:10
        Days Since Last Access: <1
        Password Set Date/Time: 08/24/04 12:55:29
       Days Since Password Set: 1,521
         Invalid Sign-on Count: 0
                       Locked?: No
                       Contact: Open Systems
                   Compression: No
       Archive Delete Allowed?: Yes
        Backup Delete Allowed?: Yes
        Registration Date/Time: 08/24/04 12:55:29
     Registering Administrator: LIDZR8V
Last Communication Method Used: Tcp/Ip
   Bytes Received Last Session: 241.55 M
       Bytes Sent Last Session: 3,157
      Duration of Last Session: 10.16
   Pct. Idle Wait Last Session: 5.16
  Pct. Comm. Wait Last Session: 47.67
                  Pct. Media W

Re: Configure Shared Library in TSM...

2008-10-22 Thread Hart, Charles A
Yes, as long as you have more than one tape drive.
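In outline, the definitions look something like this (a sketch; the server and library names are invented, and server-to-server communication between the two instances must already be set up):

```
/* On the library manager instance */
define library lto1 libtype=scsi shared=yes
define drive lto1 drv1
define path libmgr1 drv1 srctype=server desttype=drive library=lto1 device=/dev/rmt0

/* On the library client instance */
define library lto1 libtype=shared primarylibmanager=libmgr1
```

Both instances can then run backup/archive and restore/retrieve concurrently; the manager arbitrates drive and volume ownership, so simultaneous operations are limited only by the number of physical drives.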

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Kiran
Sent: Wednesday, October 22, 2008 6:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Configure Shared Library in TSM...

Hi,



How do I configure a shared library in TSM? I need to share an LTO
library between two TSM server instances.



Can I perform Backup/Archive, Restore/Retrieve operations from both
instances simultaneously?







Regards,

Kiran M



Disclaimer:
This email message (including attachments if any) may contain
privileged, proprietary, confidential information, which may be exempt
from any kind of disclosure whatsoever and is intended solely for the
use of addressee (s). If you are not the intended recipient, kindly
inform us by return e-mail and also kindly disregard the contents of the
e-mail, delete the original message and destroy any copies thereof
immediately. You are notified that any dissemination, distribution or
copying of this communication is strictly prohibited unless approved by
the sender.

DQ Entertainment (DQE) has taken every reasonable precaution to minimize
the risk of transmission of computer viruses with this e-mail; DQE is
not liable for any damage you may sustain as a result of any virus in
this e-mail. DQE shall not be liable for the views expressed in the
e-mail. DQE reserves the right to monitor and review the content of all
messages sent to or from this e-mail address




Re: Backup VMS on IA64?

2008-09-23 Thread Hart, Charles A
There's a third Party TSM client ... 

http://www.storserver.com/main.cfm?menu=3&submenu=3&detail=include/abcoverview.cfm

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Christian Svensson
Sent: Tuesday, September 23, 2008 8:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backup VMS on IA64?

Hi Everyone,
Just a quick question to gauge how popular this will be.
Yesterday I came across a customer that just implemented two brand new
HP Blade servers based on Itanium technology, and they run VMS on them.
Are any other people looking at VMS on IA64, and how are you going to
back up those servers?


Best Regards
Christian Svensson

WW Support: +44-1453-84 7009
U.S. Support: +1-866-832-2267
Cell: +46-70-325 1577
E-mail: [EMAIL PROTECTED]
Skype: cristie.christian.svensson




Migration Process Question

2008-08-25 Thread Hart, Charles A
We noticed today that a migration for a virtual tape pool was migrating
to itself rather than to its "Next Stgpool". Has anyone seen such a thing?

Thanks, 

Charles 




Re: Fantasy TSM

2008-05-05 Thread Hart, Charles A
There's a new NDMP option in TSM 5.4 where you can perform an IP-based
backup to a TSM storage pool, which should then allow you to process it
like any other TSM data hitting a stgpool...



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Curtis Preston
Sent: Monday, May 05, 2008 3:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Fantasy TSM

> I'm wondering if TSM could take the NDMP dump and post-process it, to 
> carve out the changed files only and handle them according to regular
mgmt
> classes.

It is possible to do that, but I know of only one product that does it.
Avamar processes (inline, actually) the dump stream, figures out which
blocks are new and keeps them.  That means it's totally possible.

The question is whether or not IBM sees enough revenue from the NDMP
agent to do that much work on it.





This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you have received this email in error please notify the
system manager. This message contains confidential information and is
intended only for the individual named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail.




Re: Retain All Existing TSM Backup / Archive Data for 10Yrs

2008-04-29 Thread Hart, Charles A
Thanks for the input; we've done the same thing for the Exchange
environment, but I was under the impression that once data is archived
using the TSM B/A client archive function, you cannot extend the
retention on existing archives. Example: Client A archives data to a
2-year management class (archive copy group); then, in the process below
(a new domain with backup/archive copy groups set to 10 years), the
archives that were processed with the 2-year archive copy group would
still drop off in 2 years rather than follow the new 10-year copy group
setting in the new domain structure.

Thanks


Charles Hart


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Dwight Cook
Sent: Tuesday, April 29, 2008 6:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Retain All Existing TSM Backup / Archive Data for
10Yrs

If you only have to keep what is there for 10 years and can allow normal
processing of new data going forward, set up a new domain with grace
periods to fit your needs; for 10 years, 3,666 days will work. My
example here is one where I had to keep everything for an unlimited
period of time.
(Put in base backup and archive management classes that fit your needs;
again, my example was unlimited.) Move whatever nodes need the extended
retention over into this domain and rename them something like
_10YR. Then register the existing name back into their
initial domain.
This will basically freeze all of the existing data.
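As an admin macro, the sequence above might be sketched like this (domain, node, and password names are placeholders, not Dwight's actual values):

```
/* new domain whose grace periods cover ten years of retention */
define domain retain10yr backretention=3666 archretention=3666

/* freeze the node's existing data under the new domain */
rename node nodea nodea_10yr
update node nodea_10yr domain=retain10yr

/* re-register the original name for ongoing daily backups */
register node nodea newpassword domain=production
```

A policy set with base backup/archive copy groups still needs to be defined and activated in the new domain before the grace periods matter.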

tsm: >q domain unlimited f=d

            Policy Domain Name: UNLIMITED
          Activated Policy Set: STANDARD
          Activation Date/Time: 12/15/2006 13:59:50
         Days Since Activation: 501
  Activated Default Mgmt Class: UNLIMITED
    Number of Registered Nodes: 3
                   Description:
Backup Retention (Grace Period): 9,999
Archive Retention (Grace Period): 30,000
Last Update by (administrator): X
         Last Update Date/Time: 12/15/2006 13:59:50
              Managing profile:
               Changes Pending: No
         Active Data Pool List:


tsm: >q copy unlimited active t=a

Policy      Policy      Mgmt        Copy        Retain
Domain      Set Name    Class       Group       Version
Name                    Name        Name
---------   ---------   ---------   ---------   --------
UNLIMITED   ACTIVE      UNLIMITED   STANDARD    No Limit

tsm: >q copy unlimited active t=b

Policy     Policy     Mgmt       Copy       Versions  Versions  Retain    Retain
Domain     Set Name   Class      Group      Data      Data      Extra     Only
Name                  Name       Name       Exists    Deleted   Versions  Version
---------  ---------  ---------  ---------  --------  --------  --------  -------
UNLIMITED  ACTIVE     UNLIMITED  STANDARD   No Limit  No Limit  No Limit  No Limit

tsm: >


Dwight E. Cook
Systems Management Integration Professional, Advanced Integrated Storage
Management TSM Administration
(405) 253-4397


Ok, we have a requirement to retain all Backup / Archive data that
exists in our TSM environment today for 10 years.  There are a few "ugly"
options, but would it be possible to do the following?

Export node from Existing TSM servers to New TSM Server with Archive And
Backup Copy Groups set to 10 years retention.  I know that just
increasing the Archive Copy group on the existing TSM Server's Archives
to 10YR will not re-bind those Archive objects, but what if you export a
node to a new TSM server with Longer retention Archive Copy Groups will
the Archives bind to the new Archive Copy groups on the new TSM server?
(I'm hoping so)

Appreciate the input!




Re: Retain All Existing TSM Backup / Archive Data for 10Yrs

2008-04-28 Thread Hart, Charles A
Thank you Wanda and TSTLTDTD [EMAIL PROTECTED],

I forgot to mention the following. We completely understand that this
data will most likely be unrecoverable because of application and OS
version drift. This is essentially a stop-gap until a "real"
application-level archiving solution can be put in place... I know it's
just ugly!

Thanks again for your replies; hope you don't end up having to do the
same...

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Wanda Prather
Sent: Monday, April 28, 2008 2:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Retain All Existing TSM Backup / Archive Data for
10Yrs

Not necessary.

You can't rebind a file that has been archived to a different management
class, but if you change the RULE the file is bound to (i.e., the
archive copy group retention setting), that change does indeed take
effect, and will do what you ask.
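In other words, the change is a copy-group update plus a policy-set activation. A minimal sketch (the domain, policy set, and class names are placeholders):

```
/* raise archive retention to ~10 years for already-archived objects */
update copygroup mydomain standard mymc standard type=archive retver=3650
validate policyset mydomain standard
activate policyset mydomain standard
```

Because archived objects keep their binding to the management class, the new RETVER applies to the existing archives as soon as the policy set is activated.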

That said, here's my plug for content management:

Will anybody be able to FIND anything in these archives 10 years from
now?
With TSM archiving, all you have to look at is file names.  If your mgmt
wants to be able to locate any INFORMATION 10 years from now, some sort
of content management application is usually more appropriate.  You get
some indexing and some organization and some search capabilities for the
data, not just the host name and filenames for client nodes that no
longer exist...

W



On Mon, Apr 28, 2008 at 2:01 PM, Hart, Charles A <[EMAIL PROTECTED]>
wrote:

> Ok, we have a requirements to retain all Backup / Archive data that 
> exists in our TSM environment today for 10 years.  There's a few
"ugly"
> options, but would it be possible to do the following?
>
> Export node from Existing TSM servers to New TSM Server with Archive 
> And Backup Copy Groups set to 10 years retention.  I know that just 
> increasing the Archive Copy group on the existing TSM Server's 
> Archives to 10YR will not re-bind those Archive objects, but what if 
> you export a node to a new TSM server with Longer retention Archive 
> Copy Groups will the Archives bind to the new Archive Copy groups on
the new TSM server?
> (I'm hoping so)
>
> Appreciate the input!
>
>
> This e-mail, including attachments, may include confidential and/or 
> proprietary information, and may be used only by the person or entity 
> to which it is addressed. If the reader of this e-mail is not the 
> intended recipient or his or her authorized agent, the reader is 
> hereby notified that any dissemination, distribution or copying of 
> this e-mail is prohibited. If you have received this e-mail in error, 
> please notify the sender by replying to this message and delete this
e-mail immediately.
>




Retain All Existing TSM Backup / Archive Data for 10Yrs

2008-04-28 Thread Hart, Charles A
Ok, we have a requirement to retain all Backup / Archive data that
exists in our TSM environment today for 10 years.  There are a few "ugly"
options, but would it be possible to do the following?

Export node from Existing TSM servers to New TSM Server with Archive And
Backup Copy Groups set to 10 years retention.  I know that just
increasing the Archive Copy group on the existing TSM Server's Archives
to 10YR will not re-bind those Archive objects, but what if you export a
node to a new TSM server with Longer retention Archive Copy Groups will
the Archives bind to the new Archive Copy groups on the new TSM server?
(I'm hoping so)

Appreciate the input!




Re: TSM being abandoned?

2008-04-20 Thread Hart, Charles A
1) With the acquisition of Diligent, will TSM dedupe still be part of 6.x?
With the Diligent device you can have multiple TSM instances to get a better
overall dedupe. For example, we have one p570 with two TSM instances (Unix prod
with Oracle, and non-prod with Oracle) going to the same Diligent device, so
the factoring runs from 10:1 to 18:1 (of course, our DBAs do daily fulls). The
other thought is to make TSM-level dedupe an option that can be enabled or
disabled.

2) It will be interesting to see the DB2 performance.

4) VTLs are actually a good thing for TSM if you have good daily operational
backup retentions, so things can live and die in the VTL. TSM should be
managing the volumes regardless of a VTL being in the mix... The only concept
I've heard of where the VTL manages volumes is when you have a VTL replicating
to another VTL without TSM's involvement.





-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Francisco 
Molero
Sent: Sunday, April 20, 2008 6:33 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM being abandoned?

Hello colleagues,
 
TSM being abandoned ?
 
I don't know, but TSM 6.1 will have much more functionality than previous
versions. IBM has bought two companies, FilesX and Diligent. FilesX will be
added to TSM in Windows environments, and Diligent improves the dedupe
technology.
 
First question: deduplication is very interesting for all backup software, but
TSM has bigger needs than dedupe, although it will be available in version 6.1.
TSM doesn't need to back up the whole environment each weekend, so dedupe is
more important for other backup software.
 
Second question: DB2 or the TSM DB? I am not an expert in databases, but I
think we will have better availability and performance with DB2. We will be
able to run an audit of the DB online, and HADR will be available with DB2 (I
hope). Nowadays I have several customers with DB sizes around 200 GB; this is a
problem, and I suppose DB2 can manage these TSM DBs better than the native TSM
DB does.
 
Fourth question: in my opinion VTLs don't offer that many possibilities; in
general VTLs aren't mature, but they are ideal for LAN-free backup. I don't
much like the VTL managing all the volumes; I prefer TSM to do that. TSM has
been working for many years and more or less works properly. But all VTL
vendors want to sell a new solution, and in many cases the VTL is an appliance
with a very high cost.
 
Regards,
 
    Fran

 

-Original Message-
From: Paul Zarnowski <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Sent: Saturday, April 19, 2008 21:24:46
Subject: Re: TSM being abandoned?

Timothy,

There was a discussion session about this at the Oxford TSM Symposium last 
fall, facilitated by some folks in TSM Development.  I cannot say what will be 
in the next version of TSM, but I can say that IBM appears to be well aware of 
this issue, and they have taken steps to listen to user feedback.  Time will 
tell what happens, of course, but at least they are listening.

..Paul


At 01:49 PM 4/18/2008, Timothy Hughes wrote:
>Remco Post wrote:
>
>>Timothy Hughes wrote:
>>
>>>Well, on that note  I have a "possible stupid question" does anyone 
>>>think it's will be possible for a customer to have a choice of 
>>>staying with the current TSM database if they liked they way it is 
>>>and still upgrade to TSM V6?  Sorry I was just curious
>>>
>>No, but you will have the option to stay with tsm v5.5 at least until 
>>tsm 6.2 has been released. My bet is that since 6.1 is a really big 
>>step (redesign, reimplementation of major parts of tsm), 6.2 will not 
>>be here for quite some time.
>>
>>--
>>
>>Met vriendelijke groeten,
>>
>>Remco Post
>
>
>Thanks Remco!
>
>Also,  Is there going to be a built in license calculator  in TSM V6? 
>Anyone


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: [EMAIL PROTECTED]







Re: AW: [ADSM-L] Reset platform value in nodes table

2008-04-19 Thread Hart, Charles A
Best approach is to register a separate TSM node for your TDP. You then have
two node registrations; the OS client node will always reflect the OS version,
and vice versa for the TDP client node name...

TDP Backup - dbsp0001-ora 
OS Backups - dbsp0001 

This allows you the flexibility to break out your TSM domain / storage pool by
data type, and gives more flexibility for management classes, etc.

Domain = Platform - that points to a disk-platform stgpool
Domain = Oracle - that points to a disk-oracle stgpool
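The corresponding registrations might look like this (node names follow the example above; the passwords and domain names are placeholders):

```
/* OS file-system backups land in the platform domain */
register node dbsp0001 secret1 domain=platform

/* TDP for Oracle backups land in the oracle domain */
register node dbsp0001-ora secret2 domain=oracle maxnummp=2
```

With the data split this way, each domain's copy groups and storage pool hierarchy can be tuned independently.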



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Thomas Rupp
Sent: Saturday, April 19, 2008 6:46 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] AW: [ADSM-L] Reset platform value in nodes table

I think it depends on the version of the TDP client.
We once had some older TDP clients that didn't set the platform properly.
Try

select platform_name as Platform, -
client_os_level as OS_VER, -
node_name as Node, -
cast(cast(client_version as char(2)) || '.' -
|| cast(client_release as char(2)) || '.' -
|| cast(client_level as char(2)) || '.' -
|| cast(client_sublevel as char(2)) -
as char(15)) as "Client" -
from nodes -
order by platform_name, "Client", Node

to see what clients don't return a platform.

HTH
Thomas Rupp


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Remco Post
Sent: Saturday, April 19, 2008 13:06
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Reset platform value in nodes table


[EMAIL PROTECTED] wrote:
> Hello,
>
> we have some TSM clients with TDP installed. Some of them haven't set 
> the "real" platform (e.g. Linux) but some TDP-... string. How does TSM 
> set this value and how to reset this value for a specific node(s).
>

This value is set every time a client makes a backup (or maybe even when the
client connects). How to reset it: make sure you don't use one nodename to run
multiple clients.

> Thanks,
>

--

Met vriendelijke groeten,

Remco Post




Re: TSM being abandoned?

2008-04-16 Thread Hart, Charles A
To the point below, Twin Cities shops have been going from NBU to TSM...
There's a shortage of TSM talent in the state of MN.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Remeta, Mark
Sent: Wednesday, April 16, 2008 9:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM being abandoned?

Maybe it has something to do with IBM's recent audit of TSM licensees.
I know our management is seriously thinking of abandoning TSM due to
cost.

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Howard Coles
Sent: Wednesday, April 16, 2008 10:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM being abandoned?

I'd say businesses are migrating in all kinds of directions.  I'd think
that you'd see some migrating from other products to TSM as well,
however, it would be interesting to see current numbers.

See Ya'
Howard

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf 
> Of Orin Rehorst
> Sent: Wednesday, April 16, 2008 9:13 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] TSM being abandoned?
> 
> A VTL vendor said he is seeing a number of mid-sized businesses 
> migrating from TSM to NBU (Symantec). Do you think this is true? My 
> concern is that the pool of support techs will shrink and put us in a 
> bind.
> 
> Regards,
> Orin
> 
> Orin Rehorst
> Port of Houston



Confidentiality Note: The information transmitted is intended only for
the person or entity to whom or which it is addressed and may contain
confidential and/or privileged material. Any review, retransmission,
dissemination or other use of this information by persons or entities
other than the intended recipient is prohibited. If you receive this in
error, please delete this material immediately. 
Please be advised that someone other than the intended recipients,
including a third-party in the Seligman organization and government
agencies, may review all electronic communications to and from this
address.




Re: Re-Creating a ClientOption Set

2008-04-13 Thread Hart, Charles A
Thank you for the confirmation!  Kinda thought so...

Have a great Weekend! 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Steven Harris
Sent: Friday, April 11, 2008 8:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Re-Creating a ClientOption Set

I've struck this before, Charles.  I think it's a referential integrity
thing.  You get a similar issue when you move a node between domains and
the schedule associations are lost.

The way to do this is to DEL CLIENTOPT whatever INCLEXCL SEQ=ALL and
then run DEF CLIENTOPT statements to recreate them as you wish.  You
could fiddle with deleting and defining individual sequence numbers, but I
find that error-prone and unsatisfactory.  I write macros to do this and
implement a backout for all my significant changes to option sets.
TSMMANAGER has a client option set editor that is good for small changes,
but you can export/import using a file as well, and I use the export as
the basis of my macros to satisfy the change people.
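A minimal sketch of such a macro (the option set name is from the thread; the include/exclude values, backout step, and SEQ=ALL form are illustrative and follow the post, so check them against your server level):

```
/* intel_inclexcl.mac - run via: dsmadmc ... macro intel_inclexcl.mac  */
/* Backout first: q clientopt intel > intel_backout.txt (illustrative) */
/* Wipe every inclexcl entry in the set, then redefine them in order.  */
/* The set itself is never deleted or renamed, so node associations    */
/* to the option set are untouched.                                    */
delete clientopt intel inclexcl seqnumber=all
define clientopt intel inclexcl "exclude c:\...\pagefile.sys"
define clientopt intel inclexcl "include c:\data\...\*"
query clientopt intel
```

Because every entry is redefined in one pass, running the same macro against each instance leaves the option sets identical everywhere.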

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia






From:              "Hart, Charles A" <[EMAIL PROTECTED].COM>
Sent by:           "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
To:                ADSM-L@VM.MARIST.EDU
Subject:           [ADSM-L] Re-Creating a ClientOption Set
Date:              12/04/2008 03:01 AM
Please respond to: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>






We are doing server-side incl/excls using client option sets (CLOPTs).
There were inconsistencies from TSM instance to instance, so to make
them consistent we renamed the existing CLOPT "Intel" to "Intel.Save",
then deleted the original "Intel" CLOPT, and then re-created the "Intel"
CLOPT on the instances, and noticed that the previous client
associations to the original Intel CLOPT disappeared.  Is it possible
that renaming, deleting, and re-creating a CLOPT will disassociate nodes
from that CLOPT, even though it was re-created with the original CLOPT
name?  I know that the clients were associated with the Intel CLOPT, but
now they are not...

Always finding new things about TSM even after 7yrs!

Thx!




Re-Creating a ClientOption Set

2008-04-11 Thread Hart, Charles A
We are doing server-side incl/excls using client option sets (CLOPTs).
There were inconsistencies from TSM instance to instance, so to make
them consistent we renamed the existing CLOPT "Intel" to "Intel.Save",
then deleted the original "Intel" CLOPT, and then re-created the "Intel"
CLOPT on the instances, and noticed that the previous client
associations to the original Intel CLOPT disappeared.  Is it possible
that renaming, deleting, and re-creating a CLOPT will disassociate nodes
from that CLOPT, even though it was re-created with the original CLOPT
name?  I know that the clients were associated with the Intel CLOPT, but
now they are not...

Always finding new things about TSM even after 7yrs!

Thx!




Re: Multiple Backup Streams with exchange.

2008-04-04 Thread Hart, Charles A
You can run a TDP backup stream per message store... you just have to
run a tdpexcc backup command for each.  Works well.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Schaub, Steve
Sent: Friday, April 04, 2008 4:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Multiple Backup Streams with exchange.

There is no method of creating parallel backup streams in the TDP for
Exchange product (unlike its SQL counterpart).  The only way to
accomplish something similar is to break your backup script into
multiple jobs, each backing up a specific storage group.  Of course,
that also requires multiple node names, schedulers, etc.
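A sketch of that split (storage group names, option files, and log names are hypothetical; tdpexcc is the Data Protection for Exchange command-line client):

```
:: backup_sg1.cmd - one scheduled job per storage group; each job runs
:: under its own node name / option file so the schedulers stay independent.
tdpexcc backup "Storage Group 1" full /tsmoptfile=dsm_sg1.opt /logfile=sg1.log

:: backup_sg2.cmd - a sibling job, scheduled to overlap with the first.
tdpexcc backup "Storage Group 2" full /tsmoptfile=dsm_sg2.opt /logfile=sg2.log
```

Letting the two schedules overlap is what produces the parallel streams.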

Steve Schaub
Systems Engineer, Windows
Blue Cross Blue Shield of Tennessee

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Steven Harris
Sent: Friday, April 04, 2008 1:21 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Multiple Backup Streams with exchange.

I continue to find new corners of this product to explore - or maybe it's
more like groping blindly in the dark!

I have a customer with an Exchange cluster, TDP 5.3.3.1, backing up to a
Win2k3 server running TSM 5.3.4.  Storage agents are installed on both
sides of the cluster at 5.3.4.

Exchange backups work fine, but the mail store has grown and they are
spilling into the online day - there are 4 drives available, but backups
only use one. Maxnummp for the node is set to 4.

I've been through the TDP for Exchange and Storage Agent manuals and can
see nothing that addresses a number of parallel streams.  I've tried
searching but obviously haven't come up with the right set of keywords.

Can someone point me in the right direction?

Thanks

Steve

Steven Harris
TSM Admin, Sydney Australia.
Please see the following link for the BlueCross BlueShield of Tennessee
E-mail disclaimer:  http://www.bcbst.com/email_disclaimer.shtm




Re: TSM dream setup

2008-02-28 Thread Hart, Charles A
Interesting, thank you for sharing.  Our challenge was that the
bottleneck was the Data Domain device's Ethernet connection (I'm not
sure whether the device can trunk multiple GigE links).  I thought I
heard that Data Domain now offers an FC-based device (i.e., FC to disk).

As far as compression ratios: yes, you will always have data types that
are unique every time and won't de-dupe, and virtual tape of any flavor
is no place for long-term data.  ("Long term" means different things to
different people; for us it's 21 days, due to the volume of daily
inbound data.)

If you have the opportunity to get to know the data that won't compress
well, sometimes you'll find that it's already being compressed or
encrypted; if it's encrypted, then you probably have to live with it.

We learned a lot about data types and how they come at us.  For example,
if Oracle RMAN uses FILESPERSET greater than 1, the RMAN streams are
intermixed and differ every time, reducing their dedupability...
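A minimal RMAN sketch of that tuning (channel name and backup scope are illustrative; FILESPERSET is a standard option of the RMAN BACKUP command):

```
RUN {
  ALLOCATE CHANNEL t1 DEVICE TYPE 'SBT_TAPE';
  -- FILESPERSET 1 puts each datafile in its own backup set, so an
  -- unchanged file produces the same stream on every backup and
  -- deduplicates well; higher values interleave files and defeat dedupe.
  BACKUP DATABASE FILESPERSET 1;
}
```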

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ben Bullock
Sent: Tuesday, February 26, 2008 4:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM dream setup

Ok, I thought I would reply back here about our experience in
implementing a DataDomain 580 appliance into our TSM environment.

Setup - Easy. Put it on the network and NFS mounted it to our AIX/TSM
server.

TSM config - Easy. Created some "FILE" device classes and pointed them
to the NFS mount points.

Migration of data from tape to DataDomain appliance - Easy. "Move data",
"move nodedata", etc. work great.

Performance - We are getting a consistent 90MB/Sec in writing to the
device and a little better on reads. This is pretty much the limit of
the 1GB adapter we are running the data through. That equates to about
8TB of data movement a day, acceptable for our environment. NICs could
be combined for better throughput.

Dedupe/Compression - Here is where the answer from the vendor is always,
"It depends on the data". And indeed it does, but here is what we are
getting:

DB dumps - full SQL, Sybase, and Exchange server DB dumps:
   Original bytes:        49,945,140,504,962  (TSM says there is this much data)
   Globally compressed:    5,956,953,849,746  (size after deduplication)
   Locally compressed:     2,792,002,425,204  (size after LZ compression)
About an 18 to 1 compression ratio.

Filesystems - OS files, document repositories, image scans, Windows
fileservers, etc.:
   Original bytes:        27,051,578,287,711  (TSM says there is this much data)
   Globally compressed:    7,907,366,156,093  (size after deduplication)
   Locally compressed:     4,499,161,648,844  (size after LZ compression)
About a 6 to 1 compression ratio.
  
Overall deduplication/compression on our TSM backups: ~ 10 to 1
compression.

It's kinda like night and day between the fileserver and database
compression rates. We have found that some server's data is very
un-deduplicatable (is that a made-up word, or what?). Here are some
examples:

- A 6TB document repository with TIFF and PDF documents is only getting
about 5 to 1 compression.
- The VMware esxRanger backups are compressed, so we get virtually NO
dedupe when the data goes to the appliance.  We are in the process of
re-working this.
- A large application in our environment puts out data files that are
also non-deduplicatable.  Who knew?  There's no way to tell until you
shovel it to the appliance, see that it sucked, and then shovel it back
out to tape for the time being.

We were well aware that some data isn't really fit for this expensive
appliance, so we are looking into other ways to put that TSM data on
disk and replicate it for DR (perhaps a NAS appliance). 

Overall, we are pleased with the appliance. The ability to replace a
whole tape library with a 6U appliance frees up a lot of computer room
space. And using 1/10th of the power to keep disks spinning (we are
fitting about 100TB of data onto a 10TB DataDomain), feels very "green"
and saves money in HVAC and power. 

Oh ya, and restores are almost instantaneous for individual files, and I
can restore whole filesystems now in a reasonable amount of time.  YMMV,
of course; it still depends on the number and size of the files.  But it
is even faster than before, when we were using collocated tape pools on
LTO2.

Neat new technology 

Ben


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Zarnowski
Sent: Friday, February 15, 2008 6:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM dream setup

>About deduplication, Mark Stapleton said:
>
> > It's highly overrated with TSM, since TSM doesn't do absolute (full)

> > backups unless such are forced.

>At 12:04 AM 2/15/2008, Curtis Preston wrote:
>Depending on your mix of databases and other application backup data, 
>you can actually get quite a bit of commonality in a TSM datas
