Re: Raid 1 vs Raid 5

2010-08-09 Thread Kelly Lipp
I'll amplify what Skylar said: if your goal for this disk pool is short term
storage then I probably wouldn't use any RAID protection as the data will be
backed up to tape and then migrated to tape again.  And as Skylar said,
worst case, the client will send it again if it somehow escapes.

Conserve space: don't RAID...

Kelly J. Lipp
O: 719-531-5574 C: 719-238-5239
kellyjl...@yahoo.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Skylar Thompson
Sent: Monday, August 09, 2010 9:33 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Raid 1 vs Raid 5

Do you have tape in your primary storage hierarchy? If so, remember that
even if part of your disk pool fails, you only lose access to the data
that are on the failed volumes. You can then regenerate that data by
either running another backup from the nodes that had backed up to that
volume (if the backup to the copy pool hasn't happened yet) or from the
copy pool. New backups can continue against the disk pool volumes that
are still available, or can be cut through directly to tape if the
entire pool is unavailable.
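The recovery path described above can be sketched as TSM admin commands (the volume name is a placeholder; this assumes a copy-pool copy of the data already exists):

```
tsm> update volume /tsmdisk/vol01.dsm access=destroyed
tsm> restore volume /tsmdisk/vol01.dsm preview=yes    (check what would be restored)
tsm> restore volume /tsmdisk/vol01.dsm
```

Files with no copy-pool copy yet would simply come in again on the nodes' next backups.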

On 08/09/10 08:23, Dana Holland wrote:
 Does anyone have opinions about setting up storage pools as Raid 1 as
 opposed to Raid 5? We have a very limited amount of disk space at the
 moment and don't know when we'll get approval to buy more. At the time
 we first started planning to implement TSM, we purchased what we thought
 would be plenty of storage. But, that was 4 years ago - and our usage
 has grown. Now, if I choose Raid 1, I barely have enough to create a
 primary and copy storage pool for one of our servers. And that isn't
 allowing for any growth at all. And I'm not sure how much additional
 space incremental backups would take. I know Raid 5 would give me more
 storage space, but I've also read that it's harder to recover from if
 there's a disk failure (read this on a TSM site somewhere). So, I'm
 wondering what some of you are using?



--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine


Re: PA-RISC system shelf life

2010-07-27 Thread Kelly Lipp
To amplify what Wanda has already said about archiving products, I've been 
researching one from FileTek that has a ton of functionality.  One bit relevant 
to this discussion is the ability to actually do archiving directly from 
Oracle.  I don't know all the details, but this could start a pursuit for you.

One other thing about long term database archival is to export the required 
data periodically to flat CSV (or CSV like) files and save those away.  Worst 
case one can open these in a program like Excel and find the information.  
Brute force, very close to microfiche but possibly effective.
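The periodic flat-file export suggested above might look like this as a SQL*Plus spool (credentials, connect string, table, and columns are all hypothetical):

```
sqlplus -s archiver/secret@proddb <<EOF
set pagesize 0 feedback off heading off trimspool on
spool claims_2010q2.csv
select claim_id || ',' || customer || ',' || filed_date from claims;
spool off
EOF
```

The resulting CSV can then be archived to TSM like any other flat file, readable decades later without the database.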

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Prather, Wanda
Sent: Tuesday, July 27, 2010 7:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] PA-RISC system shelf life

The TDP for Oracle is just a connector/driver; it's actually Oracle RMAN that 
is selecting the data out of the data base on a backup, and writing it back to 
the DB on restore.  So if you have issues restoring from PA-RISC to Itanium, 
that is actually a question for Oracle rather than TSM.

But the answer to your question is NO.  Don't plan on any server working for 20 
years.
TSM will happily store the data, because you can keep migrating it from one 
type of storage media to the next.
But besides the question of keeping the server physically working, you will be 
reliant on the version of Oracle you have installed on it.
If your restore fails (and again, it's RMAN handling the DB, not TSM), you 
aren't going to be able to get assistance from Oracle (or anybody else).

I would also point out, if you have a requirement to keep the data for 20 
years, it's probably not just 1 copy of the data- it's probably many copies 
over the years.  If you do need to recover something 10 years from now, how are 
you going to FIND it?  Restore every copy sequentially until you find what 
you're looking for?

The way to handle legal retention of that sort of stuff is with one of the 
archiving products.  (Tivoli has one, and there are others that use TSM as the 
backstore).  They not only put the data out in a machine-independent form, they 
index it so you have some hope of finding it in 10 years.  

Barring implementation of an archiving system (which is not without cost), you 
need to have your DBA's set up some sort of periodic dump to a non-DB dependent 
format (e.g. an ASCII text flat file) and archive that. 

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Thomas 
Denier
Sent: Tuesday, July 27, 2010 4:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] PA-RISC system shelf life

-Duane Ochs wrote: -

Thomas,
Did you perform a TDP backup of the Oracle database to TSM?
Or use a native Oracle dump and back up the dump?

The backups were done with the software Tivoli used to
market as TDP for Oracle, and now markets as part of
TSM for Databases.


Re: why create a 12TB LUN

2010-05-28 Thread Kelly Lipp
DEC Rainbow: 5MB.  Upgraded to 10MB for about $500.

Resides in Pueblo Reservoir as a boat anchor.

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Langdale
Sent: Friday, May 28, 2010 4:46 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] why create a 12TB LUN

Well

If we're taking a trip down memory lane, I had an original IBM AT, built 
like a tank!

I used it up until a few years ago in the garage as a big step to get in 
the loft space.

Steven Langdale
Global Information Services
EAME Storage Planning and Implementation
CITA Backup & Recovery Architect
Phone: +44 (0)1733 584175
Mob: +44 (0)7876 216782
Conference: +44 (0)208 609 7400 Code: 331817
Email: steven.langd...@cat.com

 



Jacques Van Den Berg jvandenb...@pnp.co.za 
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
28/05/2010 09:40
Please respond to
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] why create a 12TB LUN







Had an original IBM 4.77MHz in 1991. 640KB Main memory. 360K Floppy drive 
& a 10MB Hard drive.

Kind Regards,
 
Jacques van den Berg
TSM / Storage / SAP Basis Administrator
Pick 'n Pay IT
Email   : jvandenb...@pnp.co.za
Tel  : +2721 - 658 1711
Fax : +2721 - 658 1676
Mobile  : +2782 - 653 8164 
 
Dis altyd lente in die hart van die mens wat God 
en sy medemens liefhet (John Vianney).


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Skylar Thompson
Sent: 27 May 2010 10:16 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] why create a 12TB LUN

I'm around there too. 20MB Seagate MFM drive in an Epson QX-16. This was 
actually a dual-processor system (8088 for DOS and Z80 for Epson's CP/M 
clone TPM). I had fired it up just for the heck of it a few years ago 
and it came up without problems. They don't make 'em like they used to.

On 05/27/10 13:04, David McClelland wrote:
 I can beat that: I have a 20MB 'Winchester' HDD inside a working 
original Compaq Deskpro 8086 from c. 1985. Fired her up last week for some 
photos, still works a treat. (No TSM client for it though...)

 /DMc
 Sent from my BlackBerry® wireless device

 -Original Message-
 From: Strand, Neil B.nbstr...@leggmason.com
 Date: Thu, 27 May 2010 14:45:51
 To:ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] why create a 12TB LUN

 Gill,
 This sounds like an interesting environment.  Could you share some 
of
 the particulars such as what storage device is providing the LUN, what
 server OS is using the LUN and what the general reason was for choosing
 the LUN?
 Historical note - My first hard disk in my home PC was 20GB

 Thank you,
 Neil Strand
 Storage Engineer - Legg Mason
 Baltimore, MD.
 (410) 580-7491
 Whatever you can do or believe you can, begin it.
 Boldness has genius, power and magic.


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
 Gill, Geoffrey L.
 Sent: Wednesday, May 26, 2010 7:04 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] why create a 12TB LUN

 I'm guessing many of you will find this quite odd; I know I did. Someone
 came to me and said they were going to ask for a 12TB LUN and wanted to
 back it up. Without even mentioning the product they want to use (obviously
 not TSM, though I'm not sure it would make a difference), how would you
 manage to get a 12TB LUN backed up daily? I would expect it to be at least
 75% full if not more, and even without knowing what percentage of the data
 changes, the request seems strange. They're thinking of getting a VTL and
 backing up direct over fiber, not across the network, but have no idea
 which one or what sort of throughput to expect.



 Have any of you been approached with this sort of request and if so what
 was your response? I'm sort of dumbfounded at this point since I've not
 heard or seen this anywhere.

 Thanks,



 Geoff Gill
 TSM/PeopleSoft Administrator

 SAIC M/S-B1P

 4224 Campus Pt. Ct.

 San Diego, CA  92121
 (858)826-4062 (office)

 (858)412-9883 (blackberry)




Re: T950 Library experiences

2010-05-19 Thread Kelly Lipp
I have about 10 years of experience with Spectra libraries and we are a 
reseller of them, so take my comments with the appropriate grains of salt.

They are highly engineered libraries with tons of features.  In general they 
are very reliable and Spectra support is quite good.  I think Nick's comments 
reflect that as well.

We have a couple of T950s in the field and they have been good to us and our 
customers.  I have seen their brand new library (in fact I'm going to training 
tomorrow) and it is very nice.  I think that's the T-Finity line.  Very similar 
to the T950 but updated technology.

These guys are very committed to tape.

Steve, if you need more we can take this offline. I'm actually pretty objective 
when it comes to these guys.  Over Sun?  Take the Spectra every time.

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Nick 
Laflamme
Sent: Wednesday, May 19, 2010 5:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] T950 Library experiences

On May 19, 2010, at 1:21 AM, Steven Harris wrote:

 Hi All
 
 I'm looking at a Spectralogic T950 tape library instead of another Sun
 one to replace our aging Sun L700s
 
 Has anyone good or bad stories to tell about this?  How does the
 user-replaceable spares offering work in practice? Are the tetrapaks
 robust or do they break easily?
 
 Is configuration and partitioning simple and straightforward?

It's been a couple of years since I worked with one, but we used one as a 
remote tape library locked away in a wiring closet across campus. 

We didn't do any of our own servicing, and I don't think we partitioned the 
library at all, but the tetrapaks were just fine. 

We had a period when, as I recall, the library would start to experience 
intermittent hangs. Shortly before I left that employer, SpectraLogic flew 
someone out who determined that a good realignment would fix the problem; she 
was right. (That may have been related to us physically moving the library on 
our own from one wiring closet to another a couple of hundred yards away. I'm 
not saying it was, but I have to wonder.)

SpectraLogic was moving customers from SuperDAT drives to IBM LTO drives as I 
moved away from that shop. That's been more than two years; I presume they're 
stable on their drive choice now. 

Six months ago, when my current employer was getting ready to get off a 
particular virtual tape library technology, I would have been thrilled if we'd 
gone to T950s as our library type. I have good memories of the one I used. 
(Instead, we simply changed virtual tape library families, but that's for 
another day.)

It's hard to beat the density of a T950.

Good luck,
Nick


Re: two libraries in a storage pool?

2010-04-07 Thread Kelly Lipp
Your answer is correct.  You can't have a single pool in two libraries.
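The one-to-one chain behind that answer can be sketched as admin commands (names are placeholders): a storage pool points at exactly one device class, and a device class at exactly one library.

```
tsm> define library LTOLIB1 libtype=scsi
tsm> define devclass LTOCLASS1 devtype=lto library=LTOLIB1    (one library per device class)
tsm> define stgpool TAPEPOOL1 LTOCLASS1 maxscratch=100        (one device class per pool)
```

To use a second library you would need a second device class and a second pool, chained with NEXTSTGPOOL if desired.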

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Mehdi 
Salehi
Sent: Wednesday, April 07, 2010 3:22 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] two libraries in a storage pool?

Hi all,
Is there any way to assign tape drives in two separate LTO libraries to a
single storage pool? I see in the TSM reference that each storage pool has one
device class, and to each device class only one library can be assigned. My
answer to the above question is no. Am I mistaken?

Thanks.


Re: Virtual TSM server - using disk only

2010-03-11 Thread Kelly Lipp
Duane,

Works, but isn't supported.  So if/when it doesn't work...

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Ochs, 
Duane
Sent: Thursday, March 11, 2010 12:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Virtual TSM server - using disk only

Good day everyone,
Has anyone explored using TSM server (Windows) on a VM using iSCSI storage? No 
library requirement at this time.
I have multiple European sites within close proximity of each other and they 
have outgrown the WAN coming back to the States.
The only storage available there is iSCSI, and they have a substantial VMware 
implementation which would allow us to ride on a VM if feasible/functional.

Thoughts ?

Thanks,
Duane


Re: Virtual TSM server - using disk only

2010-03-11 Thread Kelly Lipp
My comment concerned the VM portion of the question, not the iSCSI portion.  I 
concur with Gary on that.

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Gary 
Bowers
Sent: Thursday, March 11, 2010 12:37 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Virtual TSM server - using disk only

My experience with direct connected iSCSI storage on a TSM server is
that it gets abysmal performance unless you turn off Direct IO in
TSM.  See other posts for that.  It is technically possible, but with
the iSCSI limitation you might not want to use RDM (Raw Device
Mapping) in VMware.  I am not sure on this, but it makes sense given
what I have seen and read about here.  By the way, NFS and CIFS were
equally bad performers for disk pools with DirectIO turned on.  They
seem to really need the filesystem caching.  I'm guessing that
putting the disks in a VMFS would help buffer the writes, and give you
decent performance.

It is something that would need to be tested first.  I'm confident
that it would be much faster than WAN connection back to the States.
Yuck.

Good luck,

Gary Bowers
Itrus Technologies

On Mar 11, 2010, at 1:18 PM, Ochs, Duane wrote:

 Good day everyone,
 Has anyone explored using TSM server (windows) on a VM using Iscsi
 storage ? No library requirement at this time.
 I have multiple European sites within close proximity of each other
 and they have outgrown the WAN coming back to the states.
 Only storage available there is Iscsi and they have a substantial
 VMware implementation which would allow us to ride on a VM if
 feasible/functional.

 Thoughts ?

 Thanks,
 Duane


Re: Data retention period 60days

2010-01-21 Thread Kelly Lipp
Figure out which management class is governing your Oracle backups.  Q node 
nodename where nodename is the name the client uses to identify itself.  Check 
to see which policy domain the node is in.

Q copy domainname f=d

You'll probably see verexists=2, verdel=1 retextra=30 retonly=60.

The most accurate settings for a 60 day retention (which is a long time for a 
database IMHO) should be 

verexists=60, verdel=60 retextra=60 retonly=60

So,

Update copy domainname policysetname managementclassname STANDARD verexists=60 
verdel=60 retextra=60 retonly=60

Then activate the policyset:

Activate policyset domainname policysetname

And you're done.

It is likely that the policyset and managementclassname is STANDARD as well.  
If I had set up your system the commands would look like

Update copy ORACLE STANDARD STANDARD STANDARD verexists=60 verdel=60 
retextra=60 retonly=60
Activate policyset ORACLE STANDARD
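Before and after activating, it can be worth checking the change; a hedged sketch using the example names above:

```
tsm> validate policyset ORACLE STANDARD           (warns about missing defaults before activation)
tsm> query copygroup ORACLE ACTIVE f=d            (confirm the active set now shows 60/60/60/60)
```

Note that retention changes only apply from activation forward; existing backups expire under the versions/days in effect when inventory expiration next evaluates them.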

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Sujatha chk
Sent: Thursday, January 21, 2010 12:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Data retention period 60days

Hi Gurus,

We back up our Oracle database using RMAN to tape via TSM media management. 
Currently the backup retention policy is set to 30 days. Please guide me on 
how to configure a backup retention period of 60 days.

-Sujatha



Re: DataDomain VTL

2010-01-12 Thread Kelly Lipp
We have a customer that insisted on buying one of these for his TSM 
environment. Promised 20:1 dedup.  He saw about five to one.  He was in our 
Level 2 class telling the story.  At the end he said he wouldn't buy it again.  
I made him repeat that part of the story...

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Howard 
Coles
Sent: Monday, January 11, 2010 2:58 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DataDomain VTL

Is anyone out there using a DataDomain VTL?  I'm getting some pressure
to look at this, and I'd like to find some honest opinions of them.  I
know that some time back there were some conversations around this, but
some tech has been updated and DD has been bought by EMC, etc.  So, if
you have one, and would like to share your opinion I'd appreciate it.

 

See Ya'

Howard Coles Jr.

Sr. Systems Engineer

(615) 296-3416

John 3:16!

 

 


Re: More than 1 EXPORT NODE per tape

2010-01-07 Thread Kelly Lipp
Stacked.

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Drew
Sent: Thursday, January 07, 2010 12:34 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] More than 1 EXPORT NODE per tape

Are they stacked on the same tape?  I thought they were put on separate 
tapes?

Regards, 
Shawn

Shawn Drew





Internet
boatr...@memorialhealth.com

Sent by: ADSM-L@VM.MARIST.EDU
01/07/2010 02:25 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] More than 1 EXPORT NODE per tape






Export node node1,node2,node3

_
From: Mario Behring [mailto:mariobehr...@yahoo.com] 
Sent: Thursday, January 07, 2010 2:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] More than 1 EXPORT NODE per tape


Hi list,

Is it possible to put more than 1 export on the same tape? I executed 
one export but there is room for much more data on the tape. I think it 
is not possible... didn't see anything that made me believe 
otherwise... so I'm asking...

Mario





Re: alternatives to TSM due to license costs

2009-12-30 Thread Kelly Lipp
At least let IBM know you are thinking about jumping ship and that they need to 
help with the pricing on the new licenses or you are gone!  You have the 
leverage at this point so you might as well use it to your advantage.

Based on our many competitive situations along the way with our appliance we do 
this all the time. For the low end, TSM is not a good fit generally.  However, 
if you have more than 50 clients then TSM is a good fit for the reasons Remco 
states.

Taking a giant step back to the 20th century with your backups (which is what 
you'll be doing with any other product) is generally a bad idea.

Oh, IBM!  Are you listening?  The current licensing scheme is just killing us!  
Stop the madness before we lose more of our installed base. Stop the madness 
before I lose another Appliance deal!

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Remco 
Post
Sent: Wednesday, December 30, 2009 3:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] alternatives to TSM due to license costs

On 30 dec 2009, at 20:11, woodbm wrote:

 Hello all,

 I have been tasked with looking at alternatives to TSM due to a recent audit 
 from IBM and the amount of money we just shelled out for the TSM license.  I 
 am sure most of you have partaken in this wonderful experience.  I don't 
 really want to move away from TSM, but have to provide alternatives to 
 management.  Could anyone direct me to where I can begin my search?  I have 
 only used TSM my entire time here.  What is the industry leader?  Any info or 
 documents or links doing a comparison would be helpful.  Also, is the license 
 structure the same for other environments as well, or is IBM way out of line?  
 I am going to start with EMC's Avamar/Networker.


there are a few products that people try to compare TSM to; Networker, 
Netbackup and CommVault come to mind.

Keep in mind that all of these require full backups at some interval, occupying 
much more tape than TSM does. So when presenting alternatives, do not only 
consider licensing cost but also the cost of media, servers, network etc. Also 
keep in mind that some products are disk to tape only, requiring you to have at 
least one tape drive for each concurrent backup/restore session, or implement 
expensive VTL solutions while TSM doesn't need one. (The IBM VTS for the 
mainframe was running ADSM for a reason!).

Licensing schemes for these products are completely different. While IBM 
charges for cpu's, competitors also charge for functionality, such as 
archiving? drm? The number of volumes you can manage in a library? The number 
of drives? etc. Look into what you need and make sure you have the total 
picture.

As a word of advice, make sure that the audit is correct. I've seen IBM try to 
charge customers for TDP for databases for DB2, while everybody knows that 
there is no such product! If in doubt, challenge the audit results, this might 
be worth the effort.

 Thanks much,
 Bryan


--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: IBM bureaucracy strikes again

2009-12-22 Thread Kelly Lipp
I particularly enjoy the ten product limit.

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Tuesday, December 22, 2009 7:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] IBM bureaucracy strikes again

Yet another change.  Every time they improve things, they make it harder for 
us to find the same information we used to be able to get at.

I'll give this a try before passing judgement.  It looks like they are going to 
a portal model where you can customize your own experience.

At 08:30 AM 12/22/2009, Richard Sims wrote:
If you haven't been to the TSM Support Page 
(http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html)
 very recently, you'll be dismayed to learn that this helpfully 
product-specific page is going away - being replaced by a generalized facility 
which you get to from the current TSM Support Page by scrolling down a very 
long list of products to find TSM, then select one category of things you want 
to see, then wait for their website to grind and produce a page with just that 
information.  You can navigate to other sub-areas from the left pane, but it 
can entail considerable delays.  And because it's generalized, there's no page 
title to quickly let you know you're looking at TSM stuff.

Someone at IBM will probably get a bonus this year for thinking up this method 
of improving IBM's internal web pages organization ... by obliterating the 
tailored product page that we all found so useful over the years, where we 
could quickly get at information we needed.  IBM management's focus seems to 
be on satisfying organizational directions more than best meeting customer 
needs.

Richard Sims


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: IBM bureaucracy strikes again

2009-12-22 Thread Kelly Lipp
I'm fed up and I won't take it anymore!  Oh, shoot.  Where will I go?

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Wanda 
Prather
Sent: Tuesday, December 22, 2009 9:36 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] IBM bureaucracy strikes again

OK.  Can someone on the list, or someone from Tivoli who monitors this list,
give us a contact point within IBM where we can take these complaints?

We should be bombarding them directly...


On Tue, Dec 22, 2009 at 11:26 AM, Kelly Lipp l...@storserver.com wrote:

 I particularly enjoy the ten product limit.

 Kelly Lipp
 Chief Technology Officer
 www.storserver.com
 719-266-8777 x7105
 STORServer solves your data backup challenges.
 Once and for all.





Re: Move Windows filespaces

2009-12-14 Thread Kelly Lipp
I think it will certainly be easier to re-backup the data.  Faster?  Not sure, 
but probably as fast as an export would go.

I guess I wouldn't over think this: set up new client after the data is moved 
and do the backup.  Probably be done before you know it.  You will obviously 
lose the previous versions of files, but you could keep the old filespaces 
around until the data expires.  If this is mostly on tape I'd opt for this 
approach.

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of David 
Longo
Sent: Monday, December 14, 2009 1:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Move Windows filespaces

I have TSM sever 5.5.2.0.  Also have a W2K3 client with
version 5.5.0.0 BA client, doing backups only.  This has
about 2 dozen drives and therefore filespaces.  There is
many TB of data involved in total and these are all SAN disks.
(Also many TB of data that is left, so not easy to say redo that
either.)

For several technical reasons, they need to move about half the
SAN disks to another server and leave half on existing server.
They will just rezone those disks to the new server, no copying
involved.  I certainly don't want to backup all this moved data
again.  If was ALL being moved to new server, I could just
RENAME NODE and FILESPACES and be done with it, have done that
before.  But is not the case.

As I see it, my only option is to EXPORT NODE for the filespaces that
are being moved and then import them again into the new node.  I have
never used Export Node, but have almost done so a few times over the years.

Am I correct that this is my only option?  Other than backing up in new
location and deleting in old - or would that be faster and easier?  NO
idea on the speed this works.  I have 3584 library with LTO2 and 3
drives/tapes.
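For what it's worth, EXPORT NODE does take a filespace list, so only the moved filespaces need to go to tape (node, filespace, and device class names below are hypothetical; the import side needs planning, since the data comes back under the original node name):

```
tsm> export node WINSRV1 filespace=\\winsrv1\g$,\\winsrv1\h$ filedata=all devclass=LTO2CLASS scratch=yes preview=yes
tsm> export node WINSRV1 filespace=\\winsrv1\g$,\\winsrv1\h$ filedata=all devclass=LTO2CLASS scratch=yes
```

The PREVIEW=YES pass reports how much data and how many tapes the export would need, which helps estimate the LTO2/3 runtime before committing.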

Thanks,
David Longo

Health First, Inc.


#
This message is for the named person's use only.  It may
contain private, proprietary, or legally privileged information.
No privilege is waived or lost by any mistransmission.  If you
receive this message in error, please immediately delete it and
all copies of it from your system, destroy any hard copies of it,
and notify the sender.  You must not, directly or indirectly, use,
disclose, distribute, print, or copy any part of this message if you
are not the intended recipient.  Health First reserves the right to
monitor all e-mail communications through its networks.  Any views
or opinions expressed in this message are solely those of the
individual sender, except (1) where the message states such views
or opinions are on behalf of a particular entity;  and (2) the sender
is authorized by the entity to give such views or opinions.
#


Backing up Oracle Hyperion Essbase

2009-12-08 Thread Kelly Lipp
Folks,

I've researched and mostly determined that backing this guy up is a matter of 
shutting it down (or putting it into read-only mode) and backing up the files.  I 
saw a couple of entries from folks doing this several years ago.

Our competition for this is EMC Avamar/Data Domain.  We're getting hurt by 
source-side deduplication, which we'll have down the road.

If any of you are still out there listening and can provide me with a bit of 
information, I would appreciate it.  Private email or the phone number below.

My indebtedness will be immeasurable!

Kelly Lipp
Chief Technology Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.


Re: TSM 6.1 De-dupe + NDMP

2009-12-03 Thread Kelly Lipp
If it's true that the Celerra compresses, then the data probably won't look like the 
previous version of the file, so it won't deduplicate.  Then the question 
becomes: maybe don't compress, but rather send all the bits and let them get 
deduplicated - will that result in better storage utilization?


From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] On Behalf Of Wanda Prather 
[wanda.prat...@jasi.com]
Sent: Thursday, December 03, 2009 10:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 6.1 De-dupe + NDMP

Is the NDMP backup going via TCP/IP and into the same storage pool as your
other backup data?
How can you tell it is not getting deduped?


On Thu, Dec 3, 2009 at 11:54 AM, Brian P. Boyd boydb...@duke.edu wrote:

 Hello,
 Currently we are using TSM 6.1 to do NDMP backups of our EMC Celerra.
 We are getting no de-duplication results from these backups.  I'm
 wondering if it is because the Celerra is already sending a compressed
 image snapshot file to TSM and TSM just doesn't de-dupe a file like
 that.  We're thinking about going to a more traditional backup method
 for NDMP backups (going with weekly incrementals and monthly
 fulls, etc.); however, it would be good to understand why this is
 happening.  I find it a bit strange because we can still do file-level
 restores just fine...

 If someone could educate me a bit more, that would be great for peace
 of mind!

 Thanks!
 Brian P. Boyd
 Sr. SAN Admin
 DUKE - OIT
 boydbria (at) duke (dot) edu


Re: Copy Tape pool for a selected set of tapes

2009-11-25 Thread Kelly Lipp
I'll echo what Richard said and amplify it: TSM requires planning.  As it is 
about 1,000 times more powerful and flexible than anything else you might 
work with, you must think about what your business requirements are before 
implementing it.  Then do the right thing.  The next coolest thing about the 
product is the fact that if you don't do the right thing, with a little thought 
you can change it!  Usually no harm, no foul.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Richard Sims
Sent: Wednesday, November 25, 2009 7:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Copy Tape pool for a selected set of tapes

This is a plan-ahead issue.  Nodes which need special handling like
that need to be assigned their own storage pool, where your site
should have distinct management classes for data having differing
requirements.  You can define such a storage pool, create a management
class  copy group pointing to it, and define a management class for
the node to use henceforth.  The MOVe NODEdata command can be employed
to shift existing data to that separate storage pool, whereafter the
desired Backup Stgpool operation can be performed.

If the data is static (no new incoming files), another approach could
be to define a new stgpool at the bottom of the current storage pool
hierarchy, with no migration into it, then MOVe NODEdata into that
pool, and do a Backup Stgpool on it.  This would avoid the need for a
new management class (though you should still pursue that).
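Richard's first approach, sketched as server commands (all pool, class, and node names are placeholders; assumes the default STANDARD policy domain - adjust to your own, and check syntax against your server's Administrator's Reference):

```
/* Separate pool plus a management class pointing at it: */
DEFINE STGPOOL specialpool ltoclass MAXSCRATCH=20
DEFINE MGMTCLASS standard standard special_mc
DEFINE COPYGROUP standard standard special_mc TYPE=BACKUP DESTINATION=specialpool
VALIDATE POLICYSET standard standard
ACTIVATE POLICYSET standard standard
/* Shift the node's existing data, then back up just that pool: */
MOVE NODEDATA mynode FROMSTGPOOL=tapepool TOSTGPOOL=specialpool
BACKUP STGPOOL specialpool copypool
```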

Richard Sims


Re: tape library for TSM

2009-11-15 Thread Kelly Lipp
Brands?

IBM 3584 for the big boy!  Great library, but somewhat expensive.  You can work 
your dealer on price.

I like the very small IBM libraries for folks that can get by with very little 
library.

Qualstar XLS for a more cost-effective big boy.  Very reliable and very good 
value.  The TLS family has been around a very long time and is very reliable as well.  If 
footprint isn't an issue, the TLS-88132 or 264 is probably the least expensive, 
highest-capacity library available.

Spectralogic T950.  Great technology, not a cost leader but perhaps a bit less 
than the IBM.  The other T380 family of libraries is more cost effective and 
flexible enough for most of us.  T120 is still a workhorse and if all you'll 
need is 120 slots and six drives, perhaps the best rack mount library in the 
class.  And as Wanda suggests: it has a 10 slot I/O port.

Overland?  Not wild about that brand.

And the rest are just variants of the same OEM in many cases.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Wanda 
Prather
Sent: Sunday, November 15, 2009 3:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tape library for TSM

And in most cases, look for a library with more than 1 I/O slot.



On Sat, Nov 14, 2009 at 2:59 PM, madunix madu...@gmail.com wrote:

 On Sat, Nov 14, 2009 at 8:02 AM, Mehdi Salehi
 iranian.aix.supp...@gmail.com wrote:
  What are the essentials of a tape library to fully work with TSM?
  - enough tape drives
  - barcode reader
  - cartridge barcode labels
  - FC connection for LAN-free
  ...?
 

 All above!



Re: tape library for TSM

2009-11-14 Thread Kelly Lipp
On the supported device list...  Perhaps that's obvious, but I would start 
there.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of madunix
Sent: Saturday, November 14, 2009 12:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tape library for TSM

On Sat, Nov 14, 2009 at 8:02 AM, Mehdi Salehi
iranian.aix.supp...@gmail.com wrote:
 What are the essentials of a tape library to fully work with TSM?
 - enough tape drives
 - barcode reader
 - cartridge barcode labels
 - FC connection for LAN-free
 ...?


All above!


Re: TSM DB Size

2009-11-10 Thread Kelly Lipp
The architectural limit is 500GB.  Practically, one should be a good bit 
smaller.  It really boils down to how long do you want a restore of the backed 
up DB to take?  Figure about 150% of the backup time for a restore.  Can you 
live with that?
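That rule of thumb is easy to sanity-check against your own numbers; a small sketch (the 1.5 factor and the throughput figure are assumptions to replace with your own measurements):

```python
def estimate_restore_hours(db_gb, backup_gb_per_hour, restore_factor=1.5):
    """Estimate DB restore time as ~150% of the observed backup time."""
    return (db_gb / backup_gb_per_hour) * restore_factor

# e.g. a 150 GB database that backs up at 100 GB/hour:
print(estimate_restore_hours(150, 100))  # 2.25 hours
```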

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Huebschman, George J.
Sent: Tuesday, November 10, 2009 9:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB Size

We have DB's over 190 Gb in 5.5, on AIX servers:

 TSM_Server   CAP_GB MAX_EXT_GB PCT_UTIL MAX_UTIL
--- --- --  
AIXPRODXYZ   191.95   0.00 89.4 89.4

AIXPRODDMZ   219.76   4.17 90.3 91.1

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Mochnaczewski
Sent: Tuesday, November 10, 2009 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM DB Size

Hi *,

Does anyone know what the maximum size database is for TSM 5.4 and TSM
5.5 ? We were told by IBM when we were at TSM 5.3 that 120Gb was the
limit and was wondering what the limit is for 5.4 and 5.5.

Currently running TSM 5.4.3.0 on AIX 5.3 TL 9.

Rich






IMPORTANT:  E-mail sent through the Internet is not secure. Legg Mason 
therefore recommends that you do not send any confidential or sensitive 
information to us via electronic mail, including social security numbers, 
account numbers, or personal identification numbers. Delivery, and or timely 
delivery of Internet mail is not guaranteed. Legg Mason therefore recommends 
that you do not send time sensitive 
or action-oriented messages to us via electronic mail.

This message is intended for the addressee only and may contain privileged or 
confidential information. Unless you are the intended recipient, you may not 
use, copy or disclose to anyone any information contained in this message. If 
you have received this message in error, please notify the author by replying 
to this message and then kindly delete the message. Thank you.


Re: TSM DB Size

2009-11-10 Thread Kelly Lipp
I stand corrected, as usual, by Richard!  500, 530, big diff.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Richard Sims
Sent: Tuesday, November 10, 2009 9:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB Size

The Admin Guide manual says that the maximum size for the db is 530 GB.


Re: TSM DB Size

2009-11-10 Thread Kelly Lipp
And most importantly, if you get a room full of TSM gurus, IBM types and folks 
like us, nobody will really agree on a hard number.

What I have seen during 10 years and hundreds of TSM environments is that the 
number has gradually increased.  When I first got into this business, 50GB was 
huge.  Now that's nothing.  More normal is 100-150GB, and things seem to be 
working just fine.  I recall a period of time, soon after Dave Cannon arrived, 
when IBM's engineering focus was on quality.  They did a ton of great work 
fixing what ailed the product.  We could see the results afterwards.  This work 
set the base for what we're seeing today (up to 5.5.x anyway).  Couple that 
with huge improvements in hardware performance and our favorite product has 
grown up nicely.

Our problems are so different from everybody else's: they're still so worried 
about getting the weekly full backup done that they can't think about 
anything else.  That, or adding the next freaking band-aid to their already 
half-assed (and declining) solution.

You can sure tell when I'm working on a customer presentation can't you?

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Drew
Sent: Tuesday, November 10, 2009 10:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB Size

100-120GB was the IBM-recommended limit, based on their tests on a baseline
AIX system (the specs are lost to history).
I talked to the Watson Research guys at a symposium quite a while ago about
this.
(The guy was from one of their east coast research sites, and I'm
assuming it was Watson.  He was with IBM Global Services and described his
job as basically playing with all the hardware that came out and writing
reports and guidelines for it.)

The idea was that if you have a faster system than their baseline, you can
go higher.  It was a general rule-of-thumb type of thing.

You can decide for yourself on your own hardware with your own data based
on how long your expirations/db backups take and if that's acceptable to
you.


Regards,
Shawn

Shawn Drew





From: richard.mochnaczew...@standardlife.ca
Sent by: ADSM-L@VM.MARIST.EDU, 11/10/2009 11:51 AM
To: ADSM-L
Subject: Re: [ADSM-L] TSM DB Size






We had a health check done by IBM when we were at TSM 5.3 and were told
they don't recommend a DB size higher than 120Gb for performance and
restore purposes.

Rich

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Tuesday, November 10, 2009 11:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB Size

The architectural limit is 500GB.  Practically, one should be a good bit
smaller.  It really boils down to how long do you want a restore of the
backed up DB to take?  Figure about 150% of the backup time for a
restore.  Can you live with that?

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Huebschman, George J.
Sent: Tuesday, November 10, 2009 9:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM DB Size

We have DB's over 190 Gb in 5.5, on AIX servers:

 TSM_Server   CAP_GB MAX_EXT_GB PCT_UTIL MAX_UTIL
--- --- --  
AIXPRODXYZ   191.95   0.00 89.4 89.4

AIXPRODDMZ   219.76   4.17 90.3 91.1

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Mochnaczewski
Sent: Tuesday, November 10, 2009 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM DB Size

Hi *,

Does anyone know what the maximum size database is for TSM 5.4 and TSM
5.5 ? We were told by IBM when we were at TSM 5.3 that 120Gb was the
limit and was wondering what the limit is for 5.4 and 5.5.

Currently running TSM 5.4.3.0 on AIX 5.3 TL 9.

Rich







Re: de-duplicating compressed data

2009-11-08 Thread Kelly Lipp
That assumes that the compression occurs file by file.  Is that true, or is it on 
the transaction?  I suppose it is on the files themselves, and all clients would 
compress the file into the same set of bits.  If it doesn't do that, though, 
then your high dedup rates won't be realized.
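For reference, the client compression under discussion is controlled by client options; a hedged dsm.opt/dsm.sys fragment (COMPRESSALWAYS shown as an optional companion - confirm both against your client's option reference):

```
* Client options file (dsm.opt on Windows, dsm.sys stanza on Unix)
COMPRESSION    YES
* Optionally re-send uncompressed any file that grows when compressed:
COMPRESSALWAYS NO
```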

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Grigori Solonovitch
Sent: Saturday, November 07, 2009 9:16 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] de-duplicating compressed data

What is the effect of compression on de-duplication?  Does it help to reach a 
higher de-duplication level?

This is my opinion (please correct, if something is wrong):

1) note we are talking about client compression (compression=yes for the node or in 
dsm.opt).  Hardware compression at the drive level is totally independent of the 
dedup process;

2) client compression can be used for any primary storage pool (device type 
DISK, FILE or any tapes).  In this case, compressed data comes to the copy 
pools as well, and you need fewer tapes in the copy pools;

3) client compression takes time during backups (backups are much longer), but 
the amount of data sent to the TSM server via the network is much less (the 
average compression rate is 2-4 times);

4) deduplication works only with a primary sequential disk storage pool 
(device class FILE) and can give a compression rate of 10-20 and more.  The 
deduplication process works with data from all nodes (not only one) and compares 
ALL to ALL.  So just imagine what compression rate you can reach in some cases, 
when there are a lot of similar Windows servers (like a server in each bank 
branch) with the same level of Windows and the same applications.  For 50 
branches you could see a compression rate of 40;

5) I see only one reason why deduplication works only with FILE and not with 
DISK - after software deduplication you need to run reclamation to release 
space, and reclamation is not applicable to DISK with random access.  By the 
way, this question is still open and only IBM can answer what the real 
reason is;

6) there is special protection for data on the TSM server.  Deduplication does 
not touch data unless there are at least 2 copies on tape.  So the sequence of 
actions is: back up data to DISK, make at least 2 copies of the data to tapes 
(without deduplication!!), start deduplication and start reclamation. 
Deduplication will never reduce data in the copy pools;

7) deduplication and compression work together, but the overall compression 
rate will be more than with compression only and much less than with 
deduplication only.  For example, you will have compression rate N for compression 
only (backups and all copies), M for deduplication only (only backups; copies 
have full size) and K for compression/deduplication (K for backups and N for 
copies).

In general, N is much less than M, and K is more than N and less than M.  Real 
values for N, M and K depend on the type of data;
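Grigori's N/M/K comparison can be made concrete with a toy model (the ratios below are illustrative guesses, not TSM-guaranteed numbers - real values depend entirely on the data):

```python
def footprints_tb(raw_tb, compress=1.0, dedup=1.0):
    """Rough primary- and copy-pool footprints in TB.

    Assumes dedup applies only to the FILE primary pool, while client
    compression carries through to copy pools too (points 2 and 6 above).
    """
    primary = raw_tb / (compress * dedup)
    copy = raw_tb / compress  # copy pools hold the non-deduped size
    return primary, copy

# Illustrative only:
print(footprints_tb(10, compress=3))           # N: compression only
print(footprints_tb(10, dedup=15))             # M: dedup only; copies stay full size
print(footprints_tb(10, compress=3, dedup=2))  # K: both; dedup finds fewer matches
```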

Regards,

Grigori

Please consider the environment before printing this Email.


This email message and any attachments transmitted with it may contain 
confidential and proprietary information, intended only for the named 
recipient(s). If you have received this message in error, or if you are not the 
named recipient(s), please delete this email after notifying the sender 
immediately. BKME cannot guarantee the integrity of this communication and 
accepts no liability for any damage caused by this email or its attachments due 
to viruses, any other defects, interception or unauthorized modification. The 
information, views, opinions and comments of this message are those of the 
individual and not necessarily endorsed by BKME.


Re: Performance and migration: AIX vs Linux

2009-10-23 Thread Kelly Lipp
I studied this in some detail as it obviously comes up often in my practice.  
If one reads the IBM documentation on the latest x3850/x3950 M2 one will 
observe that the data paths within that architecture are actually faster than 
in the latest pSeries hardware.  That said, Windows or Linux inefficiencies 
probably just balance out.

Shawn's comment about multiple $25K x3850s vs. one $100K p is valid.  I think 
that clearly multiples of those will outperform one p.

I think this argument should morph to: which OS is your shop using predominantly?  
You will have to support the OS while fooling with TSM, and perhaps even 
more so with V6.  The difference in relative performance is probably in the single 
digits and thus undetectable by most of us. 

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Drew
Sent: Friday, October 23, 2009 8:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Performance and migration: AIX vs Linux

# IMNSHO, IBM pSeries hardware is the best there is for large I/O
# workloads. I've seen AIX do things that Linux wouldn't survive.

I've always wondered about this.  We have p570s and we can throw anything
at them, and they won't even breath hard.

But if you spent $100K on p-series, and $100K on multiple Intel machines,
which solution will get the better, cumulative I/O-Dollar ratio?


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: Performance and migration: AIX vs Linux

2009-10-23 Thread Kelly Lipp
Paul,

Yes, the bus structures are similar, and in the x M2 variants faster.  I just 
attempted to lay my hands on the technical paper I read about this, to no 
avail, and I'm struggling to recreate the search I used to find it originally.  
When I went after it, my intent was to confirm that the overall bus 
structures were indeed similar, and that is what I found.

I'll try to find that document again and post it when I do.

Thanks,

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Friday, October 23, 2009 12:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Performance and migration: AIX vs Linux

Kelly,

At 11:27 AM 10/23/2009, Kelly Lipp wrote:
If one reads the IBM documentation on the latest x3850/x3950 M2 one will 
observe that the data paths within that architecture are actually faster than 
in the latest pSeries hardware.

I'm curious - does your analysis include the number of busses available?  I 
know that the pSeries has quite a lot of busses to drive all of its adapters.  
I'm not as familiar with the xSeries, so I don't know if there is a model 
available that has a comparable number of busses.  It's not just the speed, 
it's the overall throughput that most concerns me.  Sure, I could have more 
smaller xSeries servers, but then I would have more resource baskets to manage.

Thanks for sharing.
..Paul



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: Performance and migration: AIX vs Linux

2009-10-23 Thread Kelly Lipp
I found it!

http://www.redbooks.ibm.com/redpapers/pdfs/redp4362.pdf

Actually a fairly fun read.  Way cool technology.  I'm sure it will show up in 
the pSeries soon anyway so the hardware issues become further blurred.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Friday, October 23, 2009 12:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Performance and migration: AIX vs Linux

Kelly,

At 11:27 AM 10/23/2009, Kelly Lipp wrote:
If one reads the IBM documentation on the latest x3850/x3950 M2 one will 
observe that the data paths within that architecture are actually faster than 
in the latest pSeries hardware.

I'm curious - does your analysis include the number of busses available?  I 
know that the pSeries has quite a lot of busses to drive all of its adapters.  
I'm not as familiar with the xSeries, so I don't know if there is a model 
available that has a comparable number of busses.  It's not just the speed, 
it's the overall throughput that most concerns me.  Sure, I could have more 
smaller xSeries servers, but then I would have more resource baskets to manage.

Thanks for sharing.
..Paul



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: Performance and migration: AIX vs Linux

2009-10-23 Thread Kelly Lipp
Seven slots, but you can stack up to four servers to get to 28 slots.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Friday, October 23, 2009 2:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Performance and migration: AIX vs Linux

I didn't read the whole thing, but it looks like there are only 7 I/O slots?  
Does it have RIO drawers similar to the higher end pSeries?

At 03:46 PM 10/23/2009, Kelly Lipp wrote:
I found it!

http://www.redbooks.ibm.com/redpapers/pdfs/redp4362.pdf

Actually a fairly fun read.  Way cool technology.  I'm sure it will show up in 
the pSeries soon anyway so the hardware issues become further blurred.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Friday, October 23, 2009 12:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Performance and migration: AIX vs Linux

Kelly,

At 11:27 AM 10/23/2009, Kelly Lipp wrote:
If one reads the IBM documentation on the latest x3850/x3950 M2 one will 
observe that the data paths within that architecture are actually faster than 
in the latest pSeries hardware.

I'm curious - does your analysis include the number of busses available?  I 
know that the pSeries has quite a lot of busses to drive all of its adapters.  
I'm not as familiar with the xSeries, so I don't know if there is a model 
available that has a comparable number of busses.  It's not just the speed, 
it's the overall throughput that most concerns me.  Sure, I could have more 
smaller xSeries servers, but then I would have more resource baskets to manage.

Thanks for sharing.
..Paul



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: Create holes in data

2009-10-16 Thread Kelly Lipp
That would be the basic idea.  Explore the command syntax to see if you can do 
it for more than one client at a time.  500TB is a bunch of data to wade through, 
but you knew that!

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Strand, Neil B.
Sent: Thursday, October 15, 2009 1:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Create holes in data

I have a question - please do not ask for an explanation - it is what
the customer wants.

Environment:
-TSM Server 5.4 on AIX
-TSM BA client on Linux, Solaris and Windows (various flavors)
-TS3500 tape library with TS1120 drives (+1TB on each tape)

The customer wants me to convert the last couple of years of backup
data - all of which has 3-year retention - to look like periodic full
backups:
- Data between 1 and 6 months old - only the end-of-month backup
- Data older than 6 months - only one backup at the end of the year
Similar to a traditional backup scheme where fulls are taken periodically.

The customer's data from their 160 clients occupies nearly 500TB of tape
space (primary only) and is looking to cut down on the number of tapes
that they would be required to purchase.

This will be a one time effort since the customer is now performing
their own backups and I would like the customer to take custody of this
legacy data.

One thought is to create backup sets of each client at the specified
points in time.

Your ideas are appreciated.
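The backup-set idea can be sketched like this (one point-in-time set per client; node names, set names, dates, and the device class are placeholders - verify against the 5.4 command reference):

```
/* End-of-year set, kept 3 years, then end-of-month sets: */
GENERATE BACKUPSET client01 eoy2008 * DEVCLASS=ts1120c PITDATE=12/31/2008 RETENTION=1095
GENERATE BACKUPSET client01 eom0109 * DEVCLASS=ts1120c PITDATE=01/31/2009 RETENTION=1095
/* Repeat per client and month; QUERY BACKUPSET to verify. */
```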

Thank you
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.





Re: Migration for Windows-based installation

2009-10-12 Thread Kelly Lipp
Start from scratch.  Install TSM at the level you would like, move and 
configure the library. Point the clients at the new server and go.  That first 
backup will necessarily be a full.  Will take longer than usual, but easy.

This method allows you to clean up your database.

Keep the old server around until the data expires.

Clearly, there are more details, especially since you are going to re-use the 
library on the new STORServer (oops, I mean TSM Server).  The overall concept 
is sound. For sites of your size I like this method as it is simple and gets me 
a brand new, pristine database.  This will become important next year when you 
migrate to TSM 6.2.  Besides, a clean start is always a good thing.

This topic has been covered earlier and in much more detail.  You might peruse 
the archives to see what you can find.  My name will show up along with others 
like Wanda who have been through this many times.

Thanks,

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kurt 
Buff
Sent: Monday, October 12, 2009 1:02 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration for Windows-based installation

I'd prefer to be able to migrate server data as much as possible,
including the diskpool, but if it would be an incredibly difficult
maneuver (or take inordinate amounts of time, say on the order of more
than a 4-day weekend) and/or the downside of losing the diskpool is
relatively minor, I could potentially live without it.

I would also contemplate keeping the current TSM server version on the
new machine and upgrading immediately after implementation if that
would be of benefit.

We back up fewer than 10 servers, but one is our Exchange 2003 server
at roughly 200gb and is a full backup every night, and the other is
our file server at over 2tb, though on a nightly  basis it normally
does around 25-50gb.

Does that answer your question?

Kurt

On Mon, Oct 12, 2009 at 11:18, David McClelland t...@networkc.co.uk wrote:
 Are you looking purely at a lift and shift hardware change here, or at a 
 clean installation of TSM Server (at 5.5.3 for example) and migrating clients 
 into the new instance?

 Cheers,

 /David Mc
 London, UK

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kurt 
 Buff
 Sent: 12 October 2009 18:54
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Migration for Windows-based installation

 All,

 We have a TSM server that's running out of steam, and nearing the end
 of its expected reliable life. We have Version 5, Release 3, Level 4.6
 installed, with various levels of clients installed on our servers.
 The server has 1gb RAM, roughly 2tb of disk storage, but it's old/slow
 PATA, and the OS is Win2k Pro. Definitely not ideal.

 The tape robot is a Spectralogic T50, with two LTO3 drives and 25
 slots, out of which we expect to get much more life.

 We expect to replace the server with a new Dell server with 3TB of
 SATA disk, 3GB RAM, Win2k3 (32-bit), and use the current tape robot.

 I'd like to get to the newest in the 5.x series on the new server,
 since the talk on this list about moving to 6.x indicates to me that
 we'd be better off staying at 5.x for now.

 I've been casting about, and can't seem to find documentation on how
 to migrate the setup to the new machine.

 Does anyone have a pointer to good documentation on doing this?

 Frankly, we've been given a quote by a VAR, and though the number of
 hours they are quoting seems reasonable, the price they're asking to
 work with us on this is beyond our budget.

 Thanks

 Kurt




Re: AW: TSM 6.1 and the ever expanding DB

2009-10-02 Thread Kelly Lipp
That last paragraph made my head hurt!  I had the opportunity to take a 
database class in college.  Didn't want to know it then, don't want to know it 
now.

I recall one of the design centers for the DB2 thing was to ensure that a TSM 
admin didn't need to become a DB2 admin.  I don't even know the lingo!

I'll echo Rick's comments: you pioneers, you go!  Those arrows don't hurt that 
much.  That which doesn't kill you makes you and all of us stronger.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Richard Rhodes
Sent: Friday, October 02, 2009 9:26 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] AW: TSM 6.1 and the ever expanding DB

I've been watching this discussion with great interest, and more than a
little fear.

We are going to implement the v6 ISC/SC shortly on a standalone Win server,
but we aren't planning to upgrade the TSM servers until next year.  A BIG
thanks to all you bleeding edge types out there

IBM has an interesting/hard problem: TSM is used to back up TSM.  I assume
the requirement for multiple backups before an archive log is deleted is to
ensure that multiple backups occur for each archive log.  They are
effectively throwing disk space at the archive logs to ensure they have
good overlapping backups of them.

I wonder if IBM isn't eventually going to have to implement some process
that will periodically back up the archive logs, make a second copy of them
on different media, generate a volume history file and a prepare(?), and
then delete the archive logs.  In other words, on some trigger, make a good
backup of all archive logs (multiple copies on separate media) and
everything needed for a recovery: restore the last full, roll forward
through the archive logs, then through the active log.

Rick







   
From: Zoltan Forray/AC/VCU <zfor...@vcu.edu>
Sent by: ADSM: Dist Stor Manager <ads...@vm.marist.edu>
To: ADSM-L@VM.MARIST.EDU
Date: 10/02/2009 10:52 AM
Subject: Re: AW: TSM 6.1 and the ever expanding DB
Reply-To: ADSM: Dist Stor Manager <ads...@vm.marist.edu>




Good luck with the PMR.  Let us know how it works out.

As I understand it, this is currently WAD (working as designed): requiring
2+ full backups and corresponding volhist backups before clearing the
archive log.

It is also my understanding (from my last conversation with an L2 tech at
IBM) that they know this is a problem and hope to reduce the requirements,
in future (V6.2?) releases.  They still have numerous DB2/TSM interaction
bugs to squash, first!



From: Stefan Holzwarth <stefan.holzwa...@adac.de>
To: ADSM-L@VM.MARIST.EDU
Date: 10/02/2009 10:16 AM
Subject: [ADSM-L] AW: TSM 6.1 and the ever expanding DB
Sent by: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>



We also want to go into production with 6.1.2; all setup is finished.
But with only about 20 nodes (all export/import) we continuously have
trouble with a full active log and a full archive log.
Our active log size is 16GB, which seems to be enough for this small
setup, but sometimes log usage explodes and rapidly climbs to the limit.
I have opened my second PMR.

Regards
Stefan Holzwarth


 -----Original Message-----
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
 Of Zoltan Forray/AC/VCU
 Sent: Friday, October 2, 2009 15:15
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM 6.1 and the ever expanding DB

 Join the club.  I am beginning to wonder if anyone is
 successfully using
 V6.1, trouble-free.

 Monday I

Open Letter to TSM Product Management. Was Per terabyte licensing

2009-09-29 Thread Kelly Lipp
This has been a good discussion.  I would like to change the tone a bit in 
order to help IBM product management as they ponder this issue.

STORServer is an OEM of IBM TSM code, and TSM is an integral part of our 
appliance.  We compete in the marketplace against just about everyone else in 
the backup space.  The greatest difficulty we encounter is with our licensing, 
which is necessarily identical to IBM's.

I have thought long and hard about how to decouple client licensing from our 
product and stay in compliance with our OEM agreement.  I have not come up with 
an idea.

I postulate the following: a TSM client derives value from the TSM environment 
in two ways:

1. simply by having the ability to store and restore data on the TSM server, and
2. from the intrinsic features the server uses to store and maintain that data.
Some clients use server features relatively less, while others use them 
relatively more.  The features used in the server are relevant to the overall 
business requirements rather than to a single client.

At STORServer, we assess this value by determining how much it costs us to 
support an environment.  We can expect to field a certain number of support 
calls per customer with client-side issues and a certain number with 
server-side issues.  The more clients a customer has, the more calls we'll 
get, and the more sophisticated the server side is (larger library, more disk, 
server-to-server, etc.), the more server-side calls we'll get.  Accounting for 
the client-side calls is fairly simple, since we pay IBM an annual support fee 
for the clients we've licensed from them; we uplift this slightly to cover our 
costs of support.  On the server side, we've taken the approach of basing the 
initial cost of our solution and ongoing support costs on the overall size (in 
terabytes) of the server storage.  We have four tiers: micro (up to 40TB of 
storage), small (40-80TB), medium (80-120TB), and large (over 120TB).  The 
levels are somewhat arbitrary but reasonably reflect the STORServers in the 
field, and they correlate nicely with what our support numbers are telling us.

I go into this because I think it would behoove IBM to consider a similar 
model.  A client doesn't necessarily benefit more or less based on the number 
of cores it has.  It does benefit, generally, from having the ability to back 
up and restore data.  The overall environment benefits from the presence of 
the TSM server, as it is that environment that allows for the secure 
maintenance of critical corporate data.  It also provides services to recover 
after a disaster, and finally, it provides a support organization to help a 
customer when it all goes wrong.

The value of the solution is thus spread, and a licensing scheme that spreads 
this value is appropriate.  A client has a license no matter how big or small 
it is: essentially, a connection fee.  The more clients you have, the more you 
pay.  The server is sized according to how much data is processed and stored; 
more data arriving each day and more data stored necessarily result in a 
larger server environment, and thus more value.

It is very easy to count how much or how many of each.  It is also easy to 
sell increments of licensing to accommodate growth.  I would not be inclined 
to sell a per-GB/month scheme, as that is too difficult for customers to 
budget.  There must be a fixed component to licensing, with a periodic 
"true up" to keep the scheme fair to IBM.

Today, the licensing scheme is not fair to either party.  Value as perceived 
by the customer is not tied to the number of cores in the processor, and IBM 
cannot accurately determine whether a customer is in compliance.  This is not 
acceptable to either party.

As I write this, I recall an earlier version of the licensing model: clients 
were free and we paid for the server stuff.  It was priced by function.  For 
instance, we paid for DRM and its support.  That model wasn't correct as it 
rewarded the sites with large numbers of clients.

One of you said it correctly: it's time to get this right once and for all.  We 
need a fair licensing model that ensures TSM continues to be a viable product 
in the marketplace.  That means one that rewards IBM for the hard work it does 
to provide the code and its support and one that provides real value to its 
customers.

Subtract out the IBM bureaucracy and this is simple, right?

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges.
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of John 
D. Schneider
Sent: Tuesday, September 29, 2009 8:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Per terabyte licensing

Kelly,
 You are right.  IBM's pricing model also has in mind IBM customers
that have dozens of Tivoli titles, Websphere, etc., which all use the
PVU model.
 I think that IBM should build the license

Re: Per terabyte licensing

2009-09-28 Thread Kelly Lipp
Really.  How much does a TB of storage cost?

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Langdale
Sent: Monday, September 28, 2009 11:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Per terabyte licensing

My Tivoli S/W rep here in the UK is happy to sell by PVU or per TB. 

It sounds like it's not quite made it over the water yet.


Steven Langdale
Global Information Services
EAME SAN/Storage Planning and Implementation
Phone: +44 (0)1733 584175
Mobile: +44 (0)7876 216782
Conference: +44 (0)208 609 7400  Code: 331817
Email: steven.langd...@cat.com

 



From: John D. Schneider <john.schnei...@computercoachingcommunity.com>
Sent by: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>
Date: 28/09/2009 15:38
Reply-To: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] Per terabyte licensing




Caterpillar: Confidential Green Retain Until: 28/10/2009 



Duane,
I asked our TSM rep this question, and he asked Ron Broucek, the 
North America Tivoli Storage Software Sales Leader.  His response was:
 
"...just a rumor at this time as we occasionally evaluate pricing
strategies to make sure we're delivering the right value in the
marketplace."
Ron Broucek
North America Tivoli Storage Software Sales Leader

So if he says it is just a rumor, then how do you know IBM is offering
both?  Do you have this from a reliable source within IBM?
 
Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721

 
 
 Original Message 
Subject: Re: [ADSM-L] Per terabyte licensing
From: Ochs, Duane duane.o...@qg.com
Date: Mon, September 28, 2009 9:07 am
To: ADSM-L@VM.MARIST.EDU

We are actually looking into the cost difference.
From what I understand, IBM is offering both.  However, per-terabyte
licensing eliminates sub-capacity licensing, and it covers your entire
site, not just where it works out best.

We are in the midst of Passport renewals and found an increase due to
core-type upgrades.

Previously we had older Xeons rated at 50 PVUs per core, and the new
machines replacing them either keep the same core count but use Xeon 5540
cores, which are now 70 PVUs, or double the number of cores.
They brought up per-TB licensing.  Since then sales has sent me two
e-mails inquiring about the total number of hosts, total TSM sites, and
total library capacity at each.
I was hesitant, to say the least.

It's been about a week and I haven't heard back yet. When I hear more
I'll drop a line.
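The PVU arithmetic behind that kind of increase is just multiplication; a small sketch (the 50 and 70 PVU/core ratings are the ones mentioned above, but the four-core count is a hypothetical example):

```python
def pvu_total(cores: int, pvu_per_core: int) -> int:
    """Total PVUs for one machine: core count times the per-core rating."""
    return cores * pvu_per_core

# Hypothetical like-for-like replacement: same four cores, new rating.
old = pvu_total(4, 50)  # older Xeon rated at 50 PVU/core
new = pvu_total(4, 70)  # Xeon 5540 rated at 70 PVU/core
print(f"{old} -> {new} PVUs ({(new - old) / old:.0%} increase)")
```

So even with identical hardware capacity, the licensing bill rises 40% on the rating change alone.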



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Skylar Thompson
Sent: Saturday, September 26, 2009 11:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Per terabyte licensing

We're in that boat too. We have a GPFS cluster we expect to grow into
the petabyte range, so unless IBM sets the per-byte cost *really* low
we'll get hammered with that licensing scheme.

Zoltan Forray/AC/VCU wrote:
 Or more costly.  We have test VM servers with quad-core processors running
 15 VM guests.  If I started counting by terabytes backed up, it would cost
 a lot more than 4 CPUs!



 From:
 David Longo david.lo...@health-first.org
 To:
 ADSM-L@VM.MARIST.EDU
 Date:
 09/25/2009 03:22 PM
 Subject:
 Re: [ADSM-L] Per terabyte licensing
 Sent by:
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



 Haven't heard that.
 My first thought is that it would make licensing
 a LOT easier to figure out!

 David Longo

 Thomas Denier thomas.den...@jeffersonhospital.org 9/25/2009 3:09 PM


 Within the last few months there was a series of messages on counting
 processor cores. A couple of the messages stated that TSM is moving to
 licensing based on terabytes of stored data rather than processor
 cores. Where can I find more information on this?




--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Systems

Re: Per terabyte licensing

2009-09-28 Thread Kelly Lipp
And the key to that would be to add the phrase "in some cases"...

No matter what IBM does there will be happy people and unhappy people.  While a 
core based model doesn't make sense to many of us, a per TB model may turn out 
to make even less sense.

To argue on their side, they must find a model that is compatible with the 
industry and that does not diminish their own cash flow.  We need for IBM to 
continue to enhance the product.  They do that by keeping us as customers and 
by attracting new customers.  That balance is a lot harder than one may think.

I was fairly vocal about this at a previous Oxford.  While we're the loudest of 
the constituent parties, we also matter the least from a cash flow perspective: 
new customers actually spend more money (they've already gotten ours).  The 
dance is tricky and sometimes comes down to a "they won't really leave (where 
would they go?), so let's worry about them, but not too much."

As I own my own business I can understand the complexity they face.  It's 
really hard, though, not to simply say it's their problem.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Langdale
Sent: Monday, September 28, 2009 12:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Per terabyte licensing

He was a bit cagey about the actual cost, but said we should expect roughly a 
20% reduction in overall cost.  I've not pursued it as yet.


Steven Langdale
Global Information Services
EAME SAN/Storage Planning and Implementation
Phone: +44 (0)1733 584175
Mobile: +44 (0)7876 216782
Conference: +44 (0)208 609 7400  Code: 331817
Email: steven.langd...@cat.com

 



From: Kelly Lipp <l...@storserver.com>
Sent by: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>
Date: 28/09/2009 19:00
Reply-To: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] Per terabyte licensing







Really.  How much does a TB of storage cost?

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Steven Langdale
Sent: Monday, September 28, 2009 11:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Per terabyte licensing

My Tivoli S/W rep here in the UK is happy to sell by PVU or per TB. 

It sounds like it's not quite made it over the water yet.


Steven Langdale
Global Information Services
EAME SAN/Storage Planning and Implementation
Phone: +44 (0)1733 584175
Mobile: +44 (0)7876 216782
Conference: +44 (0)208 609 7400  Code: 331817
Email: steven.langd...@cat.com

 



From: John D. Schneider <john.schnei...@computercoachingcommunity.com>
Sent by: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>
Date: 28/09/2009 15:38
Reply-To: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] Per terabyte licensing







Duane,
I asked our TSM rep this question, and he asked Ron Broucek, the 
North America Tivoli Storage Software Sales Leader.  His response was:
 
"...just a rumor at this time as we occasionally evaluate pricing
strategies to make sure we're delivering the right value in the
marketplace."
Ron Broucek
North America Tivoli Storage Software Sales Leader

So if he says it is just a rumor, then how do you know IBM is offering
both?  Do you have this from a reliable source within IBM?
 
Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721

 
 
 Original Message 
Subject: Re: [ADSM-L] Per terabyte licensing
From: Ochs, Duane duane.o...@qg.com
Date: Mon, September 28, 2009 9:07 am
To: ADSM-L@VM.MARIST.EDU

We are actually looking into the cost difference. 
From what I understand, IBM is offering both. However, per terabyte
licensing eliminates sub-capacity licensing.
And it is your entire site. Not just where it works out best.

We are in the midst of passport renewals and found an increase due to
core type upgrades.

Previously we had older xeons using 50 PVUs per core. And the new
machines replacing the older ones are either same cores but at xeon 5540
cores which are now 70 PVUs or double the cores. 
They brought up per TB licensing. Since then sales has sent me two
E-mails inquiring total number of hosts, total TSM sites and total
library capacity at each. 
I was hesitant to say the least. 

It's been about a week and I haven't heard back yet. When I hear more
I'll drop a line.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Skylar Thompson
Sent: Saturday, September 26, 2009 11:02 AM

Re: Per terabyte licensing

2009-09-28 Thread Kelly Lipp
And remember, too, that the PVU thing contemplated something like a DB2 
license.  Perhaps you had two or three systems that would run DB2.  It did not 
contemplate something like TSM where EVERY system in the environment would have 
the software running.  Keeping track of a couple of systems and their various 
processor/core/PVU stuff is relatively simple.  Keeping track of that same 
thing across several hundred (never mind your case!) is very difficult.

The "one size fits all" mentality of Tivoli software clearly missed the mark 
with TSM.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of John 
D. Schneider
Sent: Monday, September 28, 2009 4:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Per terabyte licensing

Kelly,
You are right, IBM must build their license model to ensure the
profit they expect.  We can't blame them for doing this as a business. 
They can't give their product away for free. 
But the PVU based licensing model is a huge problem for an
environment like ours that has over 2000 clients of all different shapes
and kinds.  Lots of separate servers, but also VMWare partitions, and
AIX LPARs, and NDMP clients, etc.  Keeping up with the PVU rules is a
huge effort, especially the way IBM did it.  In Windows, the OS might
tell you that you have two processors.  But is that a single dual-core
processor or two separate single-core processors?  The OS can't tell, but
IBM insists there is a difference, because it counts PVUs differently in
each case.  That is too nit-picky if you ask me, and places too
difficult a burden on the customer.  There are freeware utilities that
will correctly count processors IBM's way, but to run them on 2000
servers is a pain, too.  We ended up writing our own scripts to call a
freeware tool IBM recommended, then parse the resulting answer to get
the details into a summarized format.  As if that wasn't enough, the
freeware tool crashed about 20 of our servers before we realized it. 
Boy, was that hard to explain to management!
It is also very objectionable to us that they don't have
sub-processor licensing for large servers like pSeries 595s.  We have a
128 processor p595, with a 2-processor LPAR carved out of it running
Oracle.  Even if we aren't running Oracle on any of the other LPARs, we
have to pay for a 128 processor Oracle license.  That is insane, and bad
for everybody, including IBM. We also have to pay for 128 processors of
regular TSM client licenses, even if we have only allocated half the
processors in the p595.  These are unfair licensing practices, and just
make IBM look greedy.
To simplify the license counting problem, we are looking at IBM
License Metric Tool, but it is a big software product to install and
deploy on 2000 servers, too, just to count TSM licenses.  ILMT 7.1 was
deeply flawed, and 7.2 just came out, so we are going to take a look at
that.
From my perspective, a total-TB-under-management model would be very
easy on the customer, as long as it was reasonably fair.  It would be
easy to run 'q occ' on all our TSM servers and pull together the results.
You could find out your whole TSM license footprint in 10 minutes.  The
first time we had to do it, counting PVUs, it took us two months.
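Aggregating 'q occ' output really is a quick script; a minimal sketch, assuming each server's report was captured in tab-separated form with dsmadmc (the sample rows, node names, and column order below are illustrative assumptions, not real server output):

```python
# Sum occupancy across servers from 'q occ' output captured with
# something like 'dsmadmc -dataonly=yes -tab "query occupancy"'.
# The sample below is invented; the last column is occupancy in MB.
sample = """\
NODE_A\tBACKUP\tDISKPOOL\t512000.5
NODE_A\tBACKUP\tTAPEPOOL\t1048576.0
NODE_B\tARCHIVE\tTAPEPOOL\t204800.0
"""

def total_mb(report: str) -> float:
    """Sum the last (occupancy, MB) column of tab-separated rows."""
    return sum(float(line.split("\t")[-1])
               for line in report.splitlines() if line.strip())

print(f"{total_mb(sample) / 1024 / 1024:.2f} TB under management")
```

Concatenate the reports from every TSM server and the same function gives the site-wide footprint.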
 
Best Regards,

John D. Schneider
The Computer Coaching Community, LLC
Office: (314) 635-5424 / Toll Free: (866) 796-9226
Cell: (314) 750-8721

 
 
 Original Message 
Subject: Re: [ADSM-L] Per terabyte licensing
From: Kelly Lipp l...@storserver.com
Date: Mon, September 28, 2009 3:05 pm
To: ADSM-L@VM.MARIST.EDU

And the key to that would be to add the phrase in some cases...

No matter what IBM does there will be happy people and unhappy people.
While a core based model doesn't make sense to many of us, a per TB
model may turn out to make even less sense.

To argue on their side, they must find a model that is compatible with
the industry and that does not diminish their own cash flow. We need for
IBM to continue to enhance the product. They do that by keeping us as
customers and by attracting new customers. That balance is a lot harder
than one may think.

I was fairly vocal about this at a previous Oxford. While we're the
loudest of the constituent parties, we also matter the least from a cash
flow perspective: new customers actually spend more money (they've
already gotten ours). The dance is tricky and sometimes comes down to a
they won't really leave (where would they go?) so let's worry about
them but not too much.

As I own my own business I can understand the complexity they face. It's
really hard, though, not to simply say it's their problem.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor

Re: DR restore of Client Data to new hardware w/ new IP

2009-09-23 Thread Kelly Lipp
Eric,

Should have been update stg access=destroyed rather than unavailable.  That's 
one thing.

Why isn't the client seeing the TSM server at the new IP?  Hmmm.  The client 
is pointing at the new source server, not the target, correct?  The new 
source/target combination may require updating as well, since the target knew 
the source at a different address.  So the whole HLA/LLA address setup needs 
to be fixed up on the new source and target.  Are sessions enabled on the new 
source?  Can the TSM client on the source talk to the source TSM server?  
These are all troubleshooting things I would try.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Eric 
Vaughn
Sent: Wednesday, September 23, 2009 12:01 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DR restore of Client Data to new hardware w/ new IP

Hello,
Our environment consists of a source server and a target server.  I have 
recreated and restored the database onto a new piece of hardware that will be 
used as the replacement source server.  My issue is that its IP address is 
new, not the address that was assigned to the original source server.  That 
led me to change the opt file on a client to point at the new server address.  
I restarted the services, tried to open the GUI, and received a TCP/IP 
communication failure.  Can anyone tell me what I am missing?  I would like 
to restore from the copy pool.  I issued the command 'update stgpool 
secondarypool access=unav' so the client would retrieve data from the archive 
pool, but no luck.

Eric Vaughn
Technical Administrator
Stevenson University
Office (443)334-2301
evau...@stevenson.edu
 


Re: the purpose of file device class

2009-09-22 Thread Kelly Lipp
I don't think it's an either/or decision.  I believe that tape will always 
have a place, but that using a FILE pool will offer very nice RTO/RPO 
combinations for some data structures.  The allure of very inexpensive tape 
storage should always be there (and perhaps increase again with LTO5), while 
the allure of instant restore will intrigue us, especially as 2TB SATA 
becomes mainstream.

Powering and cooling disk is expensive and not green (I'm not green either, 
but thought I should at least try to be politically correct), while tape is 
still green (except for the oil needed to make the darn things).

How about some hardware compression on the shelf-based controllers?  Perhaps 
target those products specifically at the backup space.  To me, that's the 
biggest problem.

Kelly Lipp
Chief Technical Officer
www.storserver.com
719-266-8777 x7105
STORServer solves your data backup challenges. 
Once and for all.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Richard Rhodes
Sent: Tuesday, September 22, 2009 5:41 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] the purpose of file device class

 Hi TSM-ers!
 At this moment we are using a diskpool with a VTS-like (DL4106 by EMC)
 storage pool as nextpool.
 I too am looking at a FILE pool to replace this in the future, just to
 prevent a vendor lock-in for our TSM environment and of course the
 possibility to use de-dup.
 The only problem I see for using large FILE (100 Tb +) pools is the size
 of the filesystems on the host running the TSM server. Now it's AIX, but
 in the future we are likely to migrate to Linux.

It would be interesting to know whether the VTL vendors (which are basically
implementing FILE-type volumes inside the VTL) use a filesystem, raw
logical volumes, raw disk, or something else entirely.

 Does anyone have experience with 100+ Tb FILE storagepools?
 Thanks for any reply in advance!

We keep looking at big pool of FILE devices vs VTL for our next
big purchase down the road - whether for the initial pool where
backups go directly, or are later migrated to.

We keep hitting our heads against a wall in trying to come up
with a way to make widespread use of a BIG FILE device pool:

- dedup: IBM has solved this one with v6.1!!!

- compression: To replace tape-drive compression we need TSM server-side
compression (or should I say FILE device pool compression).  We can't
just push the job onto the clients.  VTLs support hardware compression
via compression cards.  I wonder if IBM will ever support server-side
hardware compression via add-on cards for FILE devices
(http://www.aha.com).

- lanfree: Provide good LAN-free backup with FILE devices that does not
require SANergy.

We've got a couple of years yet on our tape systems, but as of today we
really don't see much of a way to effectively implement a large-scale
FILE pool to replace our current tape drives.  FILE pools may be IBM's
strategic direction for TSM, but I think they have a lot more work to do
to make them a true reality.
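The compression gap Rick describes is easy to quantify in principle; a tiny stdlib sketch of the ratio at stake if compression were done server-side in software (the sample data is synthetic and far more compressible than typical client data, so real ratios would be much lower):

```python
import zlib

# Without tape-drive hardware compression, uncompressed client data
# consumes its full byte count in a FILE pool on disk.
data = b"backup backup backup " * 1000   # highly repetitive sample
compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.1f}:1)")
```

The open question in the thread is where those CPU cycles should burn: on the clients, on the server, or on dedicated hardware such as compression cards.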

Rick




Re: the purpose of file device class

2009-09-21 Thread Kelly Lipp
During Oxford 2005, IBM stated that the FILE device class is the strategic 
data structure.  One can see this in action with the dedup functionality in 
V6.

Our products use DISK for a cache pool, usually deployed on JBOD-arranged 
fast disk (15K SAS or FC), as the initial target for client backups, 
particularly for large numbers of clients with small data movement.

The FILE device class is used for large amounts of storage (since we can run 
reclamation there).  Those pools are targeted by clients moving a lot of data 
daily, or are the target of migration from the cache pool.

I think both are still valid for the reasons already described, but in the 
long run DISK may go the way of the dinosaur.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of David 
McClelland
Sent: Monday, September 21, 2009 9:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] the purpose of file device class

Just to be uber-picky, FILE volumes now do allow multiple sessions/processes
to read/write concurrently to a single FILE volume from TSM 5.5 onwards
(http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmmsmunn.doc/anrsgd5515.htm#wq28).

The big picture as I've read it is that IBM are perhaps angling the
user-base towards using FILE volumes, and that new developments would be
implemented against FILE technology rather than DISK. That being said, it's
fair to say that many people are simply more comfortable with the ease,
simplicity and habit of implementing and managing DISK volumes.

/DMc

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Shawn Drew
Sent: 21 September 2009 15:58
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] the purpose of file device class

DISK (random access) lets several sessions send data to it concurrently,
which removes the need to queue backups the way competing products do.
But it will fragment as data expires, and there is no defragmentation for
random-access volumes.

FILE devices (sequential access) also fragment over time, just like tape,
but you can run reclamation on them.  Sequential access, however, can cause
queuing.
Each has different pros and cons that suit it to different tasks
(long-term vs. short-term storage).

Regards,
Shawn

Shawn Drew


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that
certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC,
Inc.



Re: backup encryption

2009-08-19 Thread Kelly Lipp
Trust, my friend, trust...

You'll know it's encrypted when you try to restore it without the key.

Sorry folks, couldn't resist.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Geil
Sent: Wednesday, August 19, 2009 9:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backup encryption

I just turned on backup encryption on one of my clients using the
include.encrypt option.  Is there a way to tell if it is actually
encrypting the data?


Re: backup encryption

2009-08-19 Thread Kelly Lipp
On a more serious note, perhaps something in the schedule log will indicate the 
include statement.  I've not done this so I don't know.
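One low-effort check (a sketch; I haven't verified this against every client level): the client-side QUERY INCLEXCL command lists the include/exclude statements the client is actually processing, so the include.encrypt entry should show up there if the option file is being read:

```text
dsmc query inclexcl
```

Beyond that, the client will prompt for or generate an encryption key (depending on the encryptkey option) the first time an encrypted backup runs, which is another sign the option is in effect.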

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Geil
Sent: Wednesday, August 19, 2009 9:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backup encryption

I just turned on backup encryption on one of my clients using the
include.encrypt option.  Is there a way to tell if it is actually
encrypting the data?


Re: I'm getting new disk storage.

2009-08-06 Thread Kelly Lipp
It seems the perfect environment for the JBOD I suggested as you do not want 
all those sessions banging a RAID5 array.

Of course work flow is an issue, but moving a TB disk to disk or disk to 
tape is a couple-of-hours process if you work it correctly into your daily 
processing, and probably isn't an issue for you.  Ideally you wouldn't need to 
move it again, but you really can't have that many sessions banging RAID5 so 
what else do you do?

I've arrived at this approach pragmatically and through trial and error.  I 
know that I can easily run fifty or sixty (and perhaps more) sessions to 12 SAS 
15K drives (use two disk pool volumes per drive to saturate each drive).

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Michael Green
Sent: Thursday, August 06, 2009 4:04 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] I'm getting new disk storage.

On Tue, Aug 4, 2009 at 7:41 PM, Kelly Lippl...@storserver.com wrote:
 You are exactly correct: modeling what will be rather than what is can be 
 tricky.  The problem really boils down to not having enough data to really 
 play with it adequately.

 I can tell you from experience on probably 200 TSM servers (Windows 2003 
 based, which is just fine all of you AIX heads!) that your overall scheme is 
 good.  One additional, I like to run initial backups to SAS drives using a 
 storage pool of device class disk.  Don't protect those drives simply use 
 JBOD as you will migrate the data out of them fairly soon either to tape or 
 file device class disk.  Use RAID5/6 for your SATA drives (We generally use 6 
 drive RAID5 sets in 12 bay shelves and 8 drive in 16 bay shelves) and keep a 
 smallish number of simultaneous backups to pools there as large numbers of 
 backups thrash the RAID set something awful.


Your suggestion of using JBOD instead of RAID for DISKCLASS data is
intriguing... I accept the reasoning behind not protecting these
drives by RAID. But how does it affect the workflow performance-wise?
What is the rationale behind this? Economy (no wasted drives for
parity), performance gains?



 How much data do you backup today?  How many clients simultaneously? If you 
 want to take this private, give me a call.  I have a pretty good idea how to 
 size this if I have some more information.  Perhaps can save you a testing 
 step (testing costs money that you could spend on additional storage...).

That particular server backs up 700-1000GB (~400K+ affected objects)
coming from just under 100 nodes nightly.
Thanks for offering me a phone consultation :) I won't bother you for
the time being, but maybe I will at a later time :)


 Thanks,

 Kelly Lipp
 CTO
 STORServer, Inc.


Re: I'm getting new disk storage.

2009-08-06 Thread Kelly Lipp
Andy,

In your case, you probably have a good bit of cache in the controller and your 
RAID sets are relatively small and you're using FC disks which are more 
reliable, tolerant and faster than SATA so all is good.  My data comes from the 
SATA 7.2K world with six or eight drives per RAID5.  Remember, our goal was to 
maximize capacity with these drives rather than performance.  In your case, you 
chose performance as most of your data is being stored long term somewhere else 
(cheaper disk or tape).

Failure in the disk pool is the question.  Since the data is copied relatively 
soon after arrival (both to the copy pool and to the migration pool) it is not 
at risk for very long.  If you do have a failure of a drive you probably have 
the data in the other pools, or still on the client. My rationale is as 
follows: if you lose a cachepool disk and the client loses data then you are in 
trouble. That's two bad things happening to one good person, and what are the 
chances?  If you lose the cachepool then the next time the client backs up the 
data will move again.

With the more reliable SAS/SCSI/FC drives, failures happen very infrequently.  

Are there holes in my rationale?  Of course.  Have I been bitten by them? No.

Long term storage is different.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Huebner,Andy,FORT WORTH,IT
Sent: Thursday, August 06, 2009 9:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] I'm getting new disk storage.

I must be missing something?  I cannot saturate my 15 FC 10k disks (3x 4+1 
RAID5) with 2GB Ethernet and 110 concurrent clients (330+ sessions).  The 
Ethernet on the other hand is saturated.
Out of curiosity, what happens with a disk failure?

Andy Huebner

Re: I'm getting new disk storage.

2009-08-06 Thread Kelly Lipp
That is correct: you can lose a couple of storage pool volumes (on a failed 
disk) and still go on.  The operation in progress writing to those volumes will 
stall/fail (I don't know which, but I'm guessing retries probably save your 
butt).

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Huebner,Andy,FORT WORTH,IT
Sent: Thursday, August 06, 2009 3:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] I'm getting new disk storage.

Thank you for the reply; I should have been more precise with the disk failure 
question. 

Doesn't a disk failure affect the backup run, or are your pools spread across 
many physical disks so the loss of one is not fatal to the process?

I understand the data loss part; you have a short window of time for a low 
probability set of disk failures.


Thanks again.

Andy Huebner


Re: I'm getting new disk storage.

2009-08-04 Thread Kelly Lipp
You are exactly correct: modeling what will be rather than what is can be 
tricky.  The problem really boils down to not having enough data to really play 
with it adequately.

I can tell you from experience on probably 200 TSM servers (Windows 2003 based, 
which is just fine all of you AIX heads!) that your overall scheme is good.  
One additional thing: I like to run initial backups to SAS drives using a storage 
pool of device class DISK.  Don't protect those drives; simply use JBOD, as you 
will migrate the data out of them fairly soon either to tape or file device 
class disk.  Use RAID5/6 for your SATA drives (We generally use 6 drive RAID5 
sets in 12 bay shelves and 8 drive in 16 bay shelves) and keep a smallish 
number of simultaneous backups to pools there as large numbers of backups 
thrash the RAID set something awful.

Our STORServers almost always have this mix of storage pool options for our 
customers to use: SAS 15K JBOD, SATA 7.2K RAID and some tape.  DB/LOG on SAS 
using best practices.

Next thing to consider is your V6.2 upgrade next spring.  You gotta count on 
needing more space than you have allocated currently to the DB/LOG volumes.  
Probably 3-6x is a good guess but we're all still learning.  And probably more 
spindles than you might consider for V5.  The shelves we offer have the ability 
to mix and match SAS and SATA but I'm thinking for you a whole shelf of SAS at 
least and then SATA. I don't think you really need too much intelligence or 
features in the shelves themselves as you have plenty of options with TSM to do 
what you need to do.
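As a concrete sketch of that layout (all names and sizes hypothetical), a SAS JBOD pool lands the backups and migrates to a SATA FILE pool:

```text
/* Fast SAS JBOD landing pool -- unprotected, data migrates off quickly */
DEFINE STGPOOL saspool DISK NEXTSTGPOOL=satapool HIGHMIG=50 LOWMIG=10
DEFINE VOLUME saspool /sas/vol01.dsm FORMATSIZE=102400
DEFINE VOLUME saspool /sas/vol02.dsm FORMATSIZE=102400  /* two volumes per drive */

/* SATA RAID5/6 FILE pool for longer residence */
DEFINE DEVCLASS satafile DEVTYPE=FILE DIRECTORY=/sata/file MAXCAPACITY=100G MOUNTLIMIT=16
DEFINE STGPOOL satapool satafile MAXSCRATCH=500
```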

How much data do you backup today?  How many clients simultaneously? If you 
want to take this private, give me a call.  I have a pretty good idea how to 
size this if I have some more information.  Perhaps can save you a testing step 
(testing costs money that you could spend on additional storage...).

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Michael Green
Sent: Tuesday, August 04, 2009 10:06 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] I'm getting new disk storage.

I know performance benchmarking is an art, and asking questions like how
can I test this new storage? may bring a smile to the faces of some of
you. But still...

The time has come and I need to replace the disk storage underlying
the DB/DISKCLASS/FILECLASS files. I'm going to test two storages from
Infortrend and DDN. Both storages allow mix and match of SAS (for
DB/LOG) and SATA (for DISK/FILECLASS) drives.
Both will be FC 4Gb attached.
I've been thinking about replicating my biggest TSM server (Linux
based) on a server that has been put aside for this purpose.  I'll
load DB/LOG onto SAS drives and some FILECLASS volumes onto SATA
drives and time the following workloads:
- expiration
- db defrag
- Reclamation

To be honest I am not sure I know how to properly set up such a testing
environment. I'm concerned that my tests will not be representative
enough and will not indicate the actual performance that I'll observe
after the storage system goes into production.

Your thoughts on this topic are much appreciated...
--
Warm regards,
Michael Green


Re: Change TSM Platform

2009-08-04 Thread Kelly Lipp
What is the concern with keeping a Windows TSM platform (I sat in the weeds on 
this as long as I could)?

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Drew
Sent: Tuesday, August 04, 2009 8:46 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Change TSM Platform

You will have to move the NAS clients over to the new TSM server,
and wait for the old backups to expire before you retire the old backup
server.

That's what I figured, but I'm not keeping that thing around for 7 years.
I guess we'll have to stick with Windows :(


Regards,
Shawn

Shawn Drew





Internet
john.schnei...@computercoachingcommunity.com

Sent by: ADSM-L@VM.MARIST.EDU
08/03/2009 06:38 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Change TSM Platform






Shawn,
From my understanding, you are going to have to set up a new TSM server,
and migrate the clients over to it.  You can use the export commands to
export policies and client data from one TSM server to another directly
across the LAN.  That will make it somewhat less painful, but depending
on how many clients you have, this could take a few weeks.  You will
have to have enough storage capacity on the new system to absorb all
this data.  If you only have one tape library, you will have to set up
library sharing, and have enough tapes to have two copies of some
clients' data as you migrate clients over.  If you want more detailed
instructions, please ask. Many of us have been through such migrations
before.
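The server-to-server export described above might look like this (server names, addresses, passwords, and node names are placeholders; a server-to-server definition is needed on both sides first):

```text
DEFINE SERVER newtsm SERVERPASSWORD=secret HLADDRESS=10.0.0.2 LLADDRESS=1500
EXPORT NODE node1 FILEDATA=ALL TOSERVER=newtsm
```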

According to the help on EXPORT NODE, you can't use it on nodes of type
NAS.  You will have to move the NAS clients over to the new TSM server,
and wait for the old backups to expire before you retire the old backup
server.

Best Regards,

 John D. Schneider
 The Computer Coaching Community, LLC
 Office: (314) 635-5424 / Toll Free: (866) 796-9226
 Cell: (314) 750-8721


   Original Message 
Subject: Re: [ADSM-L] Change TSM Platform
From: Michael Green mishagr...@gmail.com
Date: Mon, August 03, 2009 1:32 pm
To: ADSM-L@VM.MARIST.EDU

You will be moving from x86/64 architecture to Power. They are binary
incompatible. You cannot load a DB from x86 onto Power (this is what
backup/restore essentially does). Your only option is to export the DB.

Don't know about the NDMP.
--
Warm regards,
Michael Green



On Mon, Aug 3, 2009 at 9:06 PM, Shawn
Drewshawn.d...@americas.bnpparibas.com wrote:
 We are looking at the possibility of changing a branch's TSM 5.4 server
 from Windows to AIX.
 As far as I know, you can NOT backup a DB on Windows and restore it to
an
 AIX platform. Is this still the case?

 If not, can anyone think of a way to move NDMP toc/backups from one TSM
 server to another?






Re: Change TSM Platform

2009-08-04 Thread Kelly Lipp
Perfectly rational!  That one I can get behind...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Drew
Sent: Tuesday, August 04, 2009 11:21 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Change TSM Platform

Just a standard.
All of our TSM servers are based on AIX in our main data centers.  We
recently took control of a branch location who has been running on
Windows.  We are spending a disproportionate amount of time to get our
automation, and everything else, to work with Windows just for this little
branch.
Little things like having no grep or mail cause headaches.  I know there are
ways to get this to work; it just requires time to work on it, and I'd
rather just spend the extra money for an AIX box.

Regards,
Shawn

Shawn Drew







Re: Another perspective on ridiculous retention

2009-08-04 Thread Kelly Lipp
There is a huge difference between backup and archive, especially with TSM.  
One can actually make them different in TSM.

The keys to effective archiving are three fold:

1. Only archive that which you are legally (or for business purposes) required 
to archive and then for only the time required. Rarely is anyone required to 
keep everything forever!  The requirements are much more specific than that.  
Having data around that you are not required to have can be as bad as, or worse 
than, not having something!

2. Archive in some commonly readable format.  Remember, if you are going to 
need to retrieve something it is likely going to be sometime down the road.  
How you read that something is critical.  Does the application that created the 
data still exist?  Perhaps not.  Thus a binary representation of the data is 
probably not going to be easily, if at all, readable. This is especially true 
for databases.  Consider archiving an export of the data rather than the 
database files themselves.  This seems obvious, but it is often not considered.

3. IT does not own archiving.  The business units that rely on IT own archiving 
and its requirements.  To expect the IT staff to somehow magically 
know/understand the business and legal requirements is absurd.  We must be 
provided with concise requirements for archiving for us to do a good job.  We 
must understand exactly what data, how long and what the retrieve time 
requirements are.  Everything forever is STEWPID!
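In TSM terms, point 1 boils down to archive copy groups with explicit RETVER values instead of NOLIMIT. A sketch with hypothetical domain, policy set, and pool names, using seven years (about 2555 days) as an example retention:

```text
DEFINE MGMTCLASS standard archset legal7yr
DEFINE COPYGROUP standard archset legal7yr TYPE=ARCHIVE DESTINATION=archivepool RETVER=2555
ACTIVATE POLICYSET standard archset
```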

I wrote a paper on this some time ago.  If anyone is interested, email me 
privately and I'll be happy to send it to you.



Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of David 
McClelland
Sent: Tuesday, August 04, 2009 11:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Another perspective on ridiculous retention

Questions flood into my head along the lines of 'what's the difference
between a backup and an archive' (obviously not in a TSM sense) and if/
how they should be treated differently in practice with TSM (e.g. a
separate TSM Server instance for archival purposes, as some places do,
etc.).

/David Mc

Sent from my iPhone

On 4 Aug 2009, at 18:45, Troy Barnhart tbarnh...@rcrh.org wrote:

 Only 7 years?  We're in Healthcare.  Seven is usually the minimum.

 If you're talking Minors, then it is 18+7= 25 years.  Digital
 Mammography and Research-related electronic medical records are
 FOREVER.  There are lots of numbers on time-retention floating
 around out there - it just depends on the governing authority.  We
 haven't completed our Retention Policy, so we have tapes from
 various Operating Systems and Applications from the 1990's.

 Regards,

 Troy Barnhart, Sr. Systems Programmer
 tbarnh...@rcrh.org
 Regional Health, Inc.
 353 Fairmont Boulevard
 Rapid City, South Dakota 57701
 PH: 605-716-8352 / FAX: 605-716-8302

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
 Behalf Of Shawn Drew
 Sent: Tuesday, August 04, 2009 11:15 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Another perspective on ridiculous retention

 Do I understand you to say you have to keep your NDMP backups
 around for
 7 years?  The tape media isn't even meant to last for 7 years.   Do
 you
 have customers that actually think they will need 7 year old copies
 of you
 NAS data?  That's a tough requirement.

 I thought I'd change this to a new topic.  I hear this type of comment
 a lot on backup forums.  From an engineering perspective, it completely
 makes sense.   It also makes sense that people in backup forums
 think like
 engineers!

 Just another perspective.  When I started with TSM, I was working
 for a
 software development company named Tivoli who obviously cared about
 their backup data.  The mantra of the backup guys was Restores are
 more
 important than backups!  I.e., do periodic test restores, and if a
 restore request comes in and conflicts with a backup, cancel the
 backup
 in favor of the restore.

 Several years later, I start working for a bank.  After working here
 for a
 few years, I realize the mantra is now the reverse: Backups are more
 important than restores.  Meaning: the main reason we perform
 backups
 and retain them for 7 years, is so we can show an auditor our
 settings and
 say we've done it.
 We very rarely have to restore anything that old, but we very often
 have
 to show records of these backups.

 One last note: I have been involved in legal discovery projects where
 we actually did have to restore 7-year-old data off of old DLT IV
 tapes.  We found tapes with dried-up BBQ sauce on them and all sorts
 of damage.  Luckily, between the multiple storage pools, we were able
 to rebuild all the data.  The DLTs never actually failed due to age
 (only by a tomato-based attack!).

 Regards,
 Shawn

Re: Backup Stgpool using 2 drives taking too long

2009-08-03 Thread Kelly Lipp
And on Windows you can use the task manager to look at bytes read or written 
per process.  Find the TSM Server process and verify that it is indeed moving 
data.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of John 
D. Schneider
Sent: Monday, August 03, 2009 2:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Backup Stgpool using 2 drives taking too long

Mario,
So, if I understand this correctly, the primary storage pool is LTO4,
and the copy storage pool is LTO4, and so the whole backup stgpool is
straight tape to tape.  Is that correct?

When you say the process becomes idle for 20 minutes, how can you tell
it is idle?  Do you mean the number of bytes copied doesn't get any
bigger?  The total byte count only gets updated at the end of each
aggregated file.  That is, if the copy stream hits a 300GB single file,
then the copy will proceed to copy that file as fast as tape performance
permits, but the total byte count won't increase until that huge file
completes.  And a multi-hundred GB file could easily take 20 minutes or
more.

Next time you catch it in this state, look at the size of the current
file, and do the arithmetic.  Is it possible that is what you are
seeing?
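A quick sanity check on that arithmetic (a sketch; the 120 MB/s figure is an assumed LTO4 sustained rate, not something quoted in this thread):

```python
# Estimate how long one large aggregated file ties up the copy stream.
# The byte counter only advances when the file completes, so the process
# can look "idle" for this entire interval.

def copy_minutes(file_gb: float, drive_mb_per_s: float) -> float:
    """Minutes to stream one file at a sustained drive data rate."""
    return (file_gb * 1000) / drive_mb_per_s / 60

# A 300 GB file at an assumed 120 MB/s sustained rate:
print(round(copy_minutes(300, 120), 1))  # 41.7 -- well past 20 minutes
```

So a single multi-hundred-GB file can easily explain a 20-minute-plus stretch with no visible progress.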

If you can look at the tape drives themselves, you can tell if the tape
drives themselves are reading and writing, or sitting idle, can't you? 
I suspect that the tape drives will be buzzing away, even though it
seems like you aren't making progress.

If I have misunderstood, and you can tell that the drives are really
idle, then I would look at drive firmware next, then at IBMtape drivers.
 Make sure you are up-to-date.  We have 6 TSM Windows servers with IBM
libraries and drives, and we are not seeing the symptom you describe,
that is, periods of actual stalled backup stgpools.

Feel free to contact me offline if you have not done the firmware and
driver updates before, and need any assistance.


Best Regards,
 
 John D. Schneider
 The Computer Coaching Community, LLC
 Office: (314) 635-5424 / Toll Free: (866) 796-9226
 Cell: (314) 750-8721
 
 
   Original Message 
Subject: [ADSM-L] Backup Stgpool using 2 drives taking too long
From: Mario Behring mariobehr...@yahoo.com
Date: Mon, August 03, 2009 1:34 pm
To: ADSM-L@VM.MARIST.EDU

Hi list,

The environment is TSM Server 5.5 running on a Windows 2003 box with a
IBM ULT3580-TD4 SCSI (2 drives) connected.

There is a backup stgpool process running on background for more than 12
hours (several schedules were missed during this time).

The process is using both drives, one reading the data and the other
writing, as expected. The problem is that, at some point, both drives
stay IDLE for more than 20 minutes...then start reading/writing again
(for a short time I might add).

The process would be long over by now if this was not happening...what
could be wrong and how can I put the drives to work non-stop, if
possible...??

Any process that uses both drives in this fashion, like an EXPORT for
example, is behaving the same way...

Thanks

Mario


Re: Tivoli disk only backup with data duplicated at two locations?

2009-07-29 Thread Kelly Lipp
So are you thinking FCoE for the connection to the storage at the remote site?  
That way the storage is essentially locally mounted.  Remco brings up an 
interesting point about recovery: how do you rationalize the copy pool volume 
locations from the original to the DR server?  But I think that can be worked out 
in advance by ensuring that the system at the DR site that takes over the 
mountpoints has the same drive letters (or, if Unix, device names) as the old 
system.  Restore the database and voila.

I liked, too, the notion of keeping the config files, or perhaps even using OS 
duplication/mirroring of the DB and log files.  One could then fire up the DR TSM 
server, mount the correct storage devices, and go.

Planning is everything!

Remember, too, that TSM has server-to-server functionality.  It isn't quite 
slick enough for me, but it does work and for many sites is completely 
adequate.  It is an intrinsic part of the product, well documented, and it 
requires a bit less tomfoolery than anything else.

To me this always boils down to: is it really cheap enough?

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Davis, 
Adrian
Sent: Wednesday, July 29, 2009 9:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tivoli disk only backup with data duplicated at two 
locations?

We are looking at a number of possibilities for a new backup system.

 

Until now we have used a traditional tape-based backup strategy -
but with falling disk and network bandwidth prices - we are looking at
the possibility of using a disk-only backup solution.

 

One option is to use Tivoli.

 

We need to be able to have two copies of all data each copy at a
different physical location.

 

One idea that comes to mind is to backup to a disk storage pool, then
BACKUP STGPOOL this storage pool to another (copy) disk storage pool -
The second disk pool being a volume on a separate disk array at a remote
location.

 

We would also BACKUP DB the TSM database to the remote location.

 

In the event of the loss of the primary site - I'm thinking we could
build a TSM server, using the database backup up to the remote site,
mount the remote copy disk storage pool volume on the new server and
(as at that time the primary storage pool is unavailable) access the
data on the disk copy storage pool directly in order to perform our DR
restores.

 

This seems like a (relatively) simple solution. However, as I have not
seen this documented in any of the manuals, I'm guessing that there is a
problem with this approach?

 

Does anybody have any comments -or- better ideas for a Tivoli disk only
backup with data duplicated at two sites?

 

Many Thanks,

   =Adrian=

DISCLAIMER

This message is confidential and intended solely for the use of the
individual or entity it is addressed to. If you have received it in
error, please contact the sender and delete the e-mail. Please note 
that we may monitor and check emails to safeguard the Council network
from viruses, hoax messages or other abuse of the Council’s systems.
To see the full version of this disclaimer please visit the following 
address: http://www.lewisham.gov.uk/AboutThisSite/EmailDisclaimer.htm

For advice and assistance about online security and protection from
internet threats visit the Get Safe Online website at
http://www.getsafeonline.org



Re: Used 3494 / 3590 equip

2009-07-24 Thread Kelly Lipp
I'll take it!

Just kidding: I'm thinking this has some rather significant value.  Try eBay...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Druckenmiller, David
Sent: Friday, July 24, 2009 12:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Used 3494 / 3590 equip

Just wondering if anyone knows what the market might be for a 3-frame 3494 with 
8 3590-H1A drives.  Also, is there a market for about 1800 used 3590E cartridges? 
 We just finished migrating off and my boss just wants to trash the whole 
thing.  I'd be curious to know just how much money they're throwing away.  
Thanks.

-
CONFIDENTIALITY NOTICE: This email and any attachments may contain
confidential information that is protected by law and is for the
sole use of the individuals or entities to which it is addressed.
If you are not the intended recipient, please notify the sender by
replying to this email and destroying all copies of the
communication and attachments. Further use, disclosure, copying,
distribution of, or reliance upon the contents of this email and
attachments is strictly prohibited. To contact Albany Medical
Center, or for a copy of our privacy practices, please visit us on
the Internet at www.amc.edu.


Re: Data Domain - questions from a meeting with DD

2009-06-15 Thread Kelly Lipp
Ben,

How much data do you have in DD and what sort of ratios are you seeing? What 
about performance during restores?

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Ben 
Bullock
Sent: Monday, June 15, 2009 9:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Domain - questions from a meeting with DD

We use a DD580 on our TSM servers. We use a device type of FILE and set it to 
a 50GB file size. Our TSM server is on AIX, so we use an NFS mount for the 
storage pools on the DD. It works well, we can get very good throughput over 
the 1GB NIC to the DD. If your environment is larger and you need more 
throughput, you could get a DD with the VTL option, and then I think you can 
attach it to your TSM server through fibre. 

We try not to compress anything we send to TSM. Yes, it takes longer to send 
over the network, but it then dedupes well. In some cases we do have to receive 
pre-compressed data that will not dedupe/compress. In those cases, we don't 
send that data to the DD storagepool, we keep it on the tape drives.

DD has to do its own CLEAN process once a week, which is kinda like 
reclamation on the TSM server. So we just let the TSM server do its normal 
reclamation on the DD storagepools, knowing that the space will not actually 
re-appear on the DD until after it does its clean.

Ben


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Loon, 
EJ van - SPLXM
Sent: Monday, June 15, 2009 9:31 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Domain - questions from a meeting with DD

Hi Rick!
Quite a coincidence: I just spoke to DataDomain too!
1) The Dutch guys also confirmed that compressed data is dedupable, but it 
depends on how compression is done. If it's file-level compression, you can expect 
that a copy of that file is similar and thus dedupable. A compressed 
ntkernel.dll from system A is similar to a compressed ntkernel.dll from system 
B.
2) and 3) The Dutch DD guys stated that most TSM users with a DataDomain 
appliance use a storagepool type=disk!
Normally you don't want to store your data in a diskpool, because it becomes 
heavily fragmented over time, but since the DD appliance does backend 
defragmentation, front-end fragmentation on your diskpool will have no 
negative effect on performance.
I found it a very interesting option, since the appliance does in-band and 
on-the-fly deduplication, compression and defragmentation and has the option to 
duplicate the de-duplicated data to a remote location.
Kind regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Richard Rhodes
Sent: maandag 15 juni 2009 15:29
To: ADSM-L@VM.MARIST.EDU
Subject: Data Domain - questions from a meeting with DD


Hello,

The other day we had a meeting with Data Domain - just the normal vendor update 
about their products.  While informative, several comments were made by the DD 
systems engineers that I thought I'd like your opinions on.  (Note: we are an 
all-tape shop, and probably will be until the next hardware refresh in several 
years.)

Interesting comments DD made:

1)  de-dup of compressed data.

We told them that much of our backup load comes from Oracle backups that are 
already compressed via the Unix compress utility and then pushed to TSM.  We asked what 
effect this would have on de-dup.  They said we could expect a 4x-5x de-dup 
ratio on this compressed data.  I had always thought that compressed data would 
not de-dup.


2)  NFS mounted storage pools.

They said that the vast majority of their installations, including TSM 
installations, use NFS (some OpenStorage for NetBackup, and a little CIFS) for 
backups to disk.  In other words, VTL emulation is a very small percentage of 
their installations.  If you have DD hdwr, are you using NFS or file devices, 
or, a vtl interface?  Is anyone using NFS for a storage pool?


3)  DD reclaim of scratch tapes

I asked about when/how DD will reclaim no-longer-used space.
In other words, when a scratch tape passes reuse-delay.  He said that DD had 
some kind of interface to backup software such that it knows when data is no 
longer needed and reclaims that space.  I know TSM v5.5 has a feature to clear 
a volume when no longer used with a VTL, but DD was saying they had a special 
interface to accomplish this.  Does DD have a special interface to TSM?


Thanks

Rick


-
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error

Re: Internal compression?

2009-06-11 Thread Kelly Lipp
No, this isn't internal compression.  If you look at the sched log for the 
clients, do you see any failed messages?  For instance, let's say you have copy 
serialization set to one of the shared options (shrdynamic, shrstatic).  If 
the client starts to move the file, the file then opens and closes, and the 
client starts again, all the data that is moved (whether successful or not) is 
counted toward the GB-transferred number.  I've seen this account for the 
difference between what you think was backed up based on session roll-up stats 
and what's migrated or copied during daily processing.

Just an idea.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Shawn 
Drew
Sent: Thursday, June 11, 2009 12:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Internal compression?

I noticed some strange data.  I never paid attention to it before, but now
that I noticed it, I find it is the same across all of our TSM instances.
Is there any reason that STGPOOL backups and Migrations would only process
about half the quantity of data that is reported to be backed up?

They are all AIX/TSM 5.4.4 and ALL nodes have Compression = no at the
server level


- Quantity of data backed up last night on one small system was 140 GB. (I
manually confirmed this by adding up all the "Total number of bytes
transferred" numbers from the actlog.)
select sum(bytes) from summary where activity='BACKUP' and
cast((current_timestamp-END_TIME)days as integer)<1

- Storage pool backups, which run after the client backups only
transferred 68GB (confirmed from adding up the ANR0986I messages from the
actlog)
select sum(bytes) from summary where activity='STGPOOL BACKUP' and
cast((current_timestamp-END_TIME)days as integer)<1

(And yes, I did verify we are backing up all the primary pools that the
backup data could have possibly landed in.)

- Migration also only transferred 68GB
select sum(bytes) from summary where activity='MIGRATION' and
cast((current_timestamp-END_TIME)days as integer)<1


Run these select statements on your server and see if you see the same
thing.
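For illustration only (made-up totals matching the figures above; this is not the TSM API, just the shape of the comparison), the check amounts to grouping the summary rows by activity and taking a ratio:

```python
# Toy model of the SUMMARY-table comparison: sum bytes per activity,
# then compare what was backed up against what was copied/migrated.
from collections import defaultdict

def sum_by_activity(rows):
    totals = defaultdict(int)
    for activity, gigabytes in rows:
        totals[activity] += gigabytes
    return dict(totals)

# Figures from the post: 140 GB backed up, 68 GB copied and migrated.
rows = [("BACKUP", 140), ("STGPOOL BACKUP", 68), ("MIGRATION", 68)]
totals = sum_by_activity(rows)
print(f"{totals['STGPOOL BACKUP'] / totals['BACKUP']:.0%}")  # 49%
```

That 49% is the "about half" being asked about.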


Regards,
Shawn

Shawn Drew


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: Internal compression?

2009-06-11 Thread Kelly Lipp
Yoo hoo! Me and Richard agree!  I love when that happens.  And I got there 
first.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Richard Sims
Sent: Thursday, June 11, 2009 12:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Internal compression?

On Jun 11, 2009, at 2:14 PM, Shawn Drew wrote:

 Is there any reason that STGPOOL backups and Migrations would only
 process
 about half the quantity of data that is reported to be backed up?

One commonly encountered reason is retries, evidenced in the full
client log.  (It's a product deficiency that the summary statistics
don't report the retries in any way, so you have to pore over the full
log.)

Richard Sims


Re: Performance question

2009-05-27 Thread Kelly Lipp
I would guess that it won't go any faster via iSCSI.  Perhaps it might be 
slower due to one more protocol conversion on each end.

You can move about one quarter TB/hour over a GigE. So that's 1200 hours to 
move all that data...  That's a long time!
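The back-of-the-envelope math behind that number (the 0.25 TB/hour figure is the rule of thumb above; real GigE throughput varies with protocol overhead):

```python
# Rough wall-clock estimate for moving a data set over a 1 GbE link.

def transfer_hours(data_tb: float, rate_tb_per_hour: float = 0.25) -> float:
    """Hours to move data_tb terabytes at a sustained link rate."""
    return data_tb / rate_tb_per_hour

hours = transfer_hours(300)        # the 300 TB quarterly data drop
print(hours, hours / 24)           # 1200.0 hours, i.e. 50 days
```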

How about deploying another TSM server local to that data and doing the backup 
there and removing the tapes for DR?

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Christian Svensson
Sent: Wednesday, May 27, 2009 5:26 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Performance question

Hi all *SMers
I've got a new challenge in front of me.
Every quarter I will get a large package with 10,000 new files where the total 
size is 300 TB of data (each file averages about 30 GB). This is static 
data that will not replace any other files or be modified.
The problem is that I cannot do any LAN-free backup over the SAN, because the 
data and TSM are in different locations (20 miles between the sites). The link 
between the data and my TSM server is dark fiber, but I only have 1 Gbit of 
speed on this link. Why, I don't know...
I don't have any time limit on how long it takes to backup/restore these 
files, even if it takes 1 hour or 30 days. The customer doesn't care.

But as TSM Admin I want to do this backup/restore as fast as possible.

I was thinking of doing a LAN-free backup via iSCSI, so I would send larger 
blocks between server and tape, but I don't know if that will give me any 
performance benefit over backing up as a normal incremental backup over a 
normal TCP/IP network.

My question is: will it go faster to do a backup via iSCSI than a regular LAN 
backup?

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson


TSM Sales Engineer needed

2009-05-27 Thread Kelly Lipp
Folks,

http://www.storserver.com/main.cfm?menu=1&detail=jobs/job_CSE.htm

Describes positions.  I need someone in Chicago, Southern California and 
Oregon/Washington.

Resumes to me if you have or know of somebody with interest.

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


Re: Changing volume status with active data to SCRATCH

2009-05-12 Thread Kelly Lipp
DELETE VOLUME <volname> DISCARDDATA=YES will do it.  A pretty big hammer, and you will not be 
able to get the data back without restoring the TSM database.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Mario 
Behring
Sent: Tuesday, May 12, 2009 3:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Changing volume status with active data to SCRATCH

Hi list,

How can I change the status of a particular tape in the LTO Library to SCRATCH 
considering that there is still not expired data on it?

Is there any way to expire this data or tell TSM to ignore it and change the 
tape to SCRATCH?

Thanks

Mario


Re: LTO for long term archiving

2009-05-06 Thread Kelly Lipp
Hear, hear.  I'll reiterate: the data must be stored in the lowest common 
denominator format.  No databases, no strange and wondrous binary formats.  
Simple text is the best.  That's why microfiche is still used!

I would be very concerned about medical data: it is in so many different 
formats and one has to wonder if our elected officials and their bureaucratic 
minions in the various regulatory agencies are aware of these facts and the 
problem they are going to have in the future.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Kauffman, Tom
Sent: Wednesday, May 06, 2009 7:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Soapbox time.

The media is not important - any sane retention policy will require that the 
information be copied at least annually, with enough copies for redundancy.

BUT -

to be able to PROCESS the data 25 years from now imposes additional 
requirements.

First and foremost -- the data will NOT be in any format except unloaded 
flat-file: ASCII, UTF-8, or UTF-16 encoding. You will NOT be able to process 
proprietary data formats 25 years from now.

Look at the Domesday Book - the version written on parchment or vellum in 1086 
is still readable. The BBC digitized it in 1986 and found it almost impossible 
to find systems to read the digitized version in 2002 - 16 years later. You're 
trying for 25 years.

Tom Kauffman
NIBCO, Inc

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
 Of Evans, Bill
 Sent: Tuesday, May 05, 2009 5:34 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: LTO for long term archiving

 I truly doubt that archiving drives, servers and tapes for 25 years
 each time the technology updates will let you read the tapes, because
 the drive and server will probably not even boot up and run.

 You will have to update the data every two LTO cycles or so.  LTO will
 read two generations back and 25 years from now we will be on LTO14 or
 15, so LTO4 is toast.   Or, more probably, we will be storing into some
 kind of flash drive at Petabyte capacities.

 I think that Blu-Ray DVD will meet your 25 year mark without having to
 retrieve and update to new media.  I know that my 1986 CDs (those not
 seriously scratched or warped from lying on the dash) still work on
 today's systems.  Properly stored DVDs would need to have players
 stored also, but these are mechanically simpler than LTO drives and
 servers and would most likely still run.  They are also much cheaper,
 so having a new DVD in storage every 5 years is no big expense.

 The bigger issue is where and how do you keep track of all of this?  I
 think TSM's HSM is probably capable, however, I'm not real comfortable
 recommending it.  We have had several years of problems running HSM on
 Solaris and have finally turned it off.

 What is needed is a good archiving tool that can keep an updated DB of
 content and storage location that users can browse.

 We recently restored a PowerPoint file written by an unknown Office
 version on an OS 9 Mac.  This could NOT be read by Office XP, 2003,
 2007 (PC) or Office 2004 or 2008 (Mac).  We had to find an old OS 9
 Mac that still had a copy of Office 2000, read it, and write it back
 to a 2000-version .ppt file before any 2004-2009 software could read
 it.  If it had been 18 years instead of 9 years, we never would have
 been able to read it at all; that old OS 9 Mac would never have been
 saved.

 This will happen more and more as our programs become more complex and
 require significant changes in the file formats.  So the real problem
 is
 not just how to archive the data for 25 years, it's how to archive the
 applications for 25 years so we can access that data!

 Actually, stone tablets are, so far, the best archive media...


 Thanks,

 Bill Evans
 Research Computing Support
 FRED HUTCHINSON CANCER RESEARCH CENTER
 206.667.4194  ~  bev...@fhcrc.org


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
 Of
 Kelly Lipp
 Sent: Tuesday, May 05, 2009 1:42 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] LTO for long term archiving

 I like the implication, but I'm pretty sure somebody actually thought
 being able to read the information would have been a good idea.

 Kelly Lipp
 CTO
 STORServer, Inc.
 485-B Elkton Drive
 Colorado Springs, CO 80907
 719-266-8777 x7105
 www.storserver.com


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
 Of
 Remco Post
 Sent: Tuesday, May 05, 2009 2:39 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] LTO for long term archiving

 I do agree, having the tapedrives around _could_ be important. I know
 of at least one environment that was able to produce the media that
 stores the data

Re: LTO for long term archiving

2009-05-05 Thread Kelly Lipp
To me the problem is having the drives around and, more importantly, the 
interfaces to the drives.  I think that probably the best bet is to plan on 
archiving a TSM server with a drive along with the media periodically.  Snap 
off the last database backup, restore it on the to-be-archived server (a good 
test in itself), and store the whole kit together.  If one needs to retrieve an 
archive, fire up the archived server, query the database to determine what tape 
is required, get it, retrieve the data, and put the whole mess away.

The other way to do this would be to migrate the archived data to new tape 
media as you march through time.  I like this approach as it has the 
double advantage of refreshing and verifying the data on those tapes.  One 
could shelve the media in the archive pools and do this on a very controlled 
basis when the media changes.  Lots of data movement potentially, but it would 
become a nicely verified process that your auditors could look at to help 
ensure compliance.  It's one thing to say we're doing it and quite another to 
show we're doing it.  Having the archive data more readily retrievable has 
obvious benefits as well.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Huebschman, George J.
Sent: Tuesday, May 05, 2009 2:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Does anyone have 25-year-old tape media or tape drives around?
Will you still be able to use LTOx media in 25 years?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Thomas Denier
Sent: Tuesday, May 05, 2009 4:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] LTO for long term archiving

I work for a large hospital. I have been asked to investigate possible
configurations for archiving something between a few hundred terabytes
and a petabyte of data for 25 years. This would be clinical records that
we need to keep in case of a malpractice suit. The retention period is
25 years because there are two ways we can get sued for alleged
malpractice involving a pediatric patient. The parents or guardians have
a seven year window of opportunity to file suit, starting at the time of
the alleged malpractice. The patient has a seven year window of
opportunity, starting at his or her 18th birthday. In principle, the
retention period should vary depending on patient age, but nobody I have
talked to so far thinks it is practical to sort records in this way;
they want a uniform retention period that covers the worst case scenario
(a patient allegedly harmed as a newborn suing just before the end of
his or her seven year window).
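The worst-case reasoning above can be written out as a small sketch (policy logic only; nothing TSM itself enforces):

```python
# Years of retention needed for a record, given the patient's age at the
# time of the alleged malpractice. Guardians may sue within 7 years of
# the incident; the patient may sue within 7 years of turning 18.

def retention_years(age_at_incident: int) -> int:
    guardian_window = 7
    patient_window = max(0, 18 - age_at_incident) + 7
    return max(guardian_window, patient_window)

print(retention_years(0))   # 25 -- the newborn worst case driving the policy
print(retention_years(20))  # 7  -- adult patients need only the 7-year window
```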

As far as I can tell, the most expensive part of such a configuration is
the media, and LTO media will cost about a third as much as the most
economical MagStar media (extended length 3592 volumes read and written
with TS1130 drives). With the sort of workload described above I don't
expect any difficulty staying within the recommended limit on the number
of times an LTO volume passes over the tape heads. Are there any other
reasons to be nervous about using LTO for long term archives?

IMPORTANT:  E-mail sent through the Internet is not secure. Legg Mason 
therefore recommends that you do not send any confidential or sensitive 
information to us via electronic mail, including social security numbers, 
account numbers, or personal identification numbers. Delivery, and or timely 
delivery of Internet mail is not guaranteed. Legg Mason therefore recommends 
that you do not send time sensitive 
or action-oriented messages to us via electronic mail.

This message is intended for the addressee only and may contain privileged or 
confidential information. Unless you are the intended recipient, you may not 
use, copy or disclose to anyone any information contained in this message. If 
you have received this message in error, please notify the author by replying 
to this message and then kindly delete the message. Thank you.


Re: LTO for long term archiving

2009-05-05 Thread Kelly Lipp
I like the implication, but I'm pretty sure somebody actually thought being 
able to read the information would have been a good idea.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Remco 
Post
Sent: Tuesday, May 05, 2009 2:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

I do agree, having the tapedrives around _could_ be important. I know
of at least one environment that was able to produce the media that
stores the data, but no drives. But then again, they only had to
retain the data, not the infra to access it.

On May 5, 2009, at 22:35 , Kelly Lipp wrote:

 To me the problem is having the drives around and more importantly,
 the interfaces to the drives.  I think that probably the best bet is
 to plan on archiving a TSM server with a drive along with the
 media periodically.  Snap off the last database backup, restore it
 on the to be archived server (a good test in itself), and store the
 whole kit together.  If one needs to retrieve an archive, fire up
 the archived server, query the database to determine what tape is
 required, get it, retrieve the data and put the whole mess away.

 The other way to do this would be to migrate the archived data to
 new tape media as you march through time.  I like this approach as
 that will have the double advantage of refreshing and verifying the
 data on those tapes.  One could shelve the media in the archive
 pools and do this on a very controlled basis when the media
 changes.  Lots of data movement potentially, but it would become a
 nicely verified process that your auditors could look at to help
 ensure compliance.  It's one thing to say we're doing it and quite
 another to show we're doing it.  Having the archive data more
 readily retrievable has obvious benefits as well.
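The refresh-and-verify pass described here would rest on MOVE DATA, which rewrites every file on a volume onto other media in the same pool; the read pass doubles as verification of the old tape. The volume name below is a placeholder.

```
* Rewrite an aging archive volume onto fresh media in the same pool
move data ARC0001

* Or verify a volume's contents without moving anything
audit volume ARC0001 fix=no
```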

 Kelly Lipp
 CTO
 STORServer, Inc.
 485-B Elkton Drive
 Colorado Springs, CO 80907
 719-266-8777 x7105
 www.storserver.com


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
 Behalf Of Huebschman, George J.
 Sent: Tuesday, May 05, 2009 2:25 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] LTO for long term archiving

 Does anyone have 25 year old tape media or tape drives around?
 Will you still be able to use LTOx media in 25 years?

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
 Behalf Of
 Thomas Denier
 Sent: Tuesday, May 05, 2009 4:11 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] LTO for long term archiving

 I work for a large hospital. I have been asked to investigate possible
 configurations for archiving something between a few hundred terabytes
 and a petabyte of data for 25 years. This would be clinical records
 that
 we need to keep in case of a malpractice suit. The retention period is
 25 years because there are two ways we can get sued for alleged
 malpractice involving a pediatric patient. The parents or guardians
 have
 a seven year window of opportunity to file suit, starting at the
 time of
 the alleged malpractice. The patient has a seven year window of
 opportunity, starting at his or her 18th birthday. In principle, the
 retention period should vary depending on patient age, but nobody I
 have
 talked to so far thinks it is practical to sort records in this way;
 they want a uniform retention period that covers the worst case
 scenario
 (a patient allegedly harmed as a newborn suing just before the end of
 his or her seven year window).

 As far as I can tell, the most expensive part of such a
 configuration is
 the media, and LTO media will cost about a third as much as the most
 economical MagStar media (extended length 3592 volumes read and
 written
 with TS1130 drives). With the sort of workload described above I don't
 expect any difficulty staying within the recommended limit on the
 number
 of times an LTO volume passes over the tape heads. Are there any other
 reasons to be nervous about using LTO for long term archives?

 IMPORTANT:  E-mail sent through the Internet is not secure. Legg
 Mason therefore recommends that you do not send any confidential or
 sensitive information to us via electronic mail, including social
 security numbers, account numbers, or personal identification
 numbers. Delivery, and or timely delivery of Internet mail is not
 guaranteed. Legg Mason therefore recommends that you do not send
 time sensitive
 or action-oriented messages to us via electronic mail.

 This message is intended for the addressee only and may contain
 privileged or confidential information. Unless you are the intended
 recipient, you may not use, copy or disclose to anyone any
 information contained in this message. If you have received this
 message in error, please notify the author by replying to this
 message and then kindly delete the message. Thank you.


Re: Copypool using more tapes than primary tapepool

2009-04-17 Thread Kelly Lipp
Do you manually kick reclamation on the copy pool? I noticed that the 
reclamation threshold on that pool is set to 100%.

It isn't unusual for reclamation processing to be somewhat off among pools.  
Generally, the copy pool will be slightly larger since reclamation isn't as 
aggressive and you will always have more partially empty tapes in that pool (if 
they are being removed from the library) than you will have in the primary pool.

6-9 tapes I wouldn't worry.  Hundreds?  Then something is probably wrong.
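Kicking reclamation on a copy pool whose threshold sits at 100% comes down to lowering the threshold, or running a bounded one-off pass on 5.3 and later. TAPEPOOL7 is the copy pool from the configuration below; the threshold and duration values are illustrative, so check the exact parameters against your server level.

```
* Temporarily lower the threshold so reclamation actually runs
update stgpool TAPEPOOL7 reclaim=60

* On TSM 5.3+, a bounded one-off pass (duration in minutes)
reclaim stgpool TAPEPOOL7 threshold=60 duration=240

* Put the standing threshold back afterwards
update stgpool TAPEPOOL7 reclaim=100
```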

That said, you clearly need something better to do!  Isn't it amazing how easy 
it is to run a TSM environment?  The stuff that used to be hard is easy so you 
can start focusing on other things!

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Larry 
Peifer
Sent: Friday, April 17, 2009 2:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Copypool using more tapes than primary tapepool

Why are we using more tapes in the copypool library vs the primary tape
library?

There is a 6 - 9 tape difference between the copypool and the primary tape
pool.  We average ~500 GB per tape so that's 1.5 - 4.5 TB of data.  It
doesn't seem like there should be that much of a discrepancy.  There is
both backup data and archive data mixed on the tapes and the DbBackups are
taken into account.

We have 2 identically configured IBM 3584 tape libraries.

On a daily basis our disk pools are migrated (migrate stgpool diskpool
lo=0) to the primary tape pool.

Then a daily schedule (backup stgpool tapepool6 tapepool7 maxprocess=4) is
run to keep everything equal between the 2 tape libraries.

Daily expiration and reclamation processes finish fine.

Schedules report successful completion daily.

Running TSM Server 5.4 with AIX 5.3 on a p520 server.  LTO2 tapes with HW
compression.

Storage Pool configurations:

Storage Pool Name: DISKPOOL
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity: 2,400 G
Space Trigger Util: 0.4
Pct Util: 0.4
Pct Migr: 0.4
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 4
Reclamation Processes:
Next Storage Pool: TAPEPOOL6
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Main Disk Storage Pool
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
Migration in Progress?: No
Amount Migrated (MB): 1,235,496.70
Elapsed Migration Time (seconds): 9,284
Reclamation in Progress?:
Last Update by (administrator): admin
Last Update Date/Time: 08/24/07   09:50:37
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type:
Overwrite Data when Deleted:

Storage Pool Name: TAPEPOOL6
Storage Pool Type: Primary
Device Class Name: LTOCLASS6
Estimated Capacity: 121,841 G
Space Trigger Util:
Pct Util: 32.9
Pct Migr: 47.0
Pct Logical: 99.3
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 2
Reclamation Processes: 2
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Primary Sequential Tape
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 300
Number of Scratch Volumes Used: 152
Delay Period for Volume Reuse: 3 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): admin
Last Update Date/Time: 04/07/09   14:06:34
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: Yes
Reclamation Type: Threshold
Overwrite Data when Deleted:

Storage Pool Name: TAPEPOOL7
Storage Pool Type: Copy
Device Class Name: LTOCLASS7
Estimated Capacity: 120,330 G
Space Trigger Util:
Pct Util: 32.3
Pct Migr:
Pct Logical: 99.3
High Mig Pct:
Low Mig Pct:
Migration Delay:
Migration Continue: Yes
Migration Processes:
Reclamation Processes: 2
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold:
Access: Read/Write
Description: Copy Pool
Overflow Location:
Cache Migrated Files?:
Collocate?: No
Reclamation Threshold: 100
Offsite Reclamation Limit: No Limit
Maximum Scratch Volumes Allowed: 300
Number of Scratch Volumes Used: 157
Delay Period for Volume Reuse: 3 Day(s)
Migration in Progress?:
Amount Migrated (MB):
Elapsed Migration Time (seconds):
Reclamation in Progress?: Yes
Last Update by (administrator): admin
Last Update Date/Time: 12/14/07   13:56:37
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?:
CRC Data: No
Reclamation Type

Re: TSM database information

2009-04-07 Thread Kelly Lipp
I would create a second TSM instance on your server and restore the appropriate 
database backup to that instance.  Then you can issue the queries you need to 
determine if you have the tapes or if they were overwritten since the backup.
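With the old database restored to the second instance, a query along these lines would list the candidate tapes. The node name is a placeholder; VOLUMEUSAGE is the standard TSM table that maps nodes to the volumes holding their data.

```
* From an administrative session against the restored instance
select distinct volume_name from volumeusage where node_name='AIXNODE1'

* Then check each candidate against the tape management report to see
* whether it was reused since the backup
query volume VOL123 format=detailed
```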

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
RAYMOND J RAMIREZ RAMIREZ
Sent: Tuesday, April 07, 2009 12:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM database information

Hello to all,

I have a special situation.

In February I moved an AIX client from our old TSM 5.2 server to add it to our 
newer TSM 5.4 server, then I deleted the file spaces that belonged to this 
client and removed the client from the old server. Now the users want to 
recover old files that was on the AIX client before I moved it and deleted the 
file spaces. I know I can restore the database to the point before the 
deletion, but I need to be sure that the files can be recovered before 
attempting this. Everything was backed up on IBM 3590 cartridges in a large IBM 
library, and I have a tape management report that identifies all the TSM 
database tapes as available, as all of the data tapes, too.

But I also need to know which of the 4,000+ data tapes have the files I need.  I 
need a method to read the TSM database tape file without restoring it, and in 
reading it, I wish to know which are the data tapes with the files I need. If 
the tapes are available (since reclamation and reuse could have destroyed the 
original files), then I will restore the database tape, and run TSM to recover 
the requested files. But if most or all of the tapes were reused (the tape 
management system can verify this), then I can be sure that the data is lost 
and I would not have to do the TSM database restore.

It may sound confusing, but it is like knowing whether there are fish in a lake 
before travelling to the lake to catch them. I am open to any and all 
suggestions and recommendations.

Raymond J. Ramirez, P.E.
Distributed Systems Supervisor
ITS Operations and Infrastructure


Re: Backup files to empty tape

2009-04-01 Thread Kelly Lipp
I assume you are using scratch tapes in your library for the pool where this 
data is heading.  By default, TSM will choose one of the filling scratch tapes 
first.  To make sure it uses a brand new tape, set all the filling tapes in 
that pool in that library to ReadOnly.  Then start the backup to that pool.  
TSM will be forced to take another tape from scratch.  Note the volser of the 
tape being used.  When the backup is done, check that tape out and set the 
readonly tapes back to readwrite.

There are variations on the theme but this will work.  Make sure nothing else 
is going on during this or you might get other data on that tape.
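Sketched as commands, with LIB1, TAPEPOOL, and VOL042 as placeholder names (and with the caveat that the wildcard form of UPDATE VOLUME should be double-checked against your server level):

```
* Fence off the filling tapes so the backup is forced onto fresh scratch
update volume * access=readonly wherestgpool=TAPEPOOL wherestatus=filling

* ... run the backup; note the volser it mounts (say VOL042) ...

* Eject the new tape, then put the filling tapes back to read/write
checkout libvolume LIB1 VOL042 remove=yes
update volume * access=readwrite wherestgpool=TAPEPOOL wherestatus=filling
```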

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Mario 
Behring
Sent: Wednesday, April 01, 2009 12:23 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Backup files to empty tape

Hi list,

I have to backup some files from a TSM node to an empty tape and send this tape 
away (check out from the library). What is the best approach to perform this 
task?

Thanks

Mario


Re: Tape Drive SN issue

2009-03-30 Thread Kelly Lipp
I assume these message occur when you try to define the path?

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Huebner,Andy,FORT WORTH,IT
Sent: Monday, March 30, 2009 9:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tape Drive SN issue

We have three drives that generate these messages, and we are unable to use 
the drives because the path will not stay on-line.  We have not been able to 
find a solution to this problem.

03/30/2009 04:30:41  ANR8963E Unable to find path to match the serial number
  defined for drive L1R3DR1 in library IBM3494A . (SESSION:
  139365)
03/30/2009 04:30:41  ANR8873E The path from source TSMSERVER1 to destination
  L1R3DR1 (/dev/rmt13) is taken offline. (SESSION: 139365)

These are 3590-E drives; the other 21 do not have this problem.

The 24 drives are in one 3494 library, the library is shared by 2 TSM servers, 
1 TSM server is the library manager.  There are also 8 AS/400 LPARs that use 
the library.  There have not been any problems reported by the AS/400.
The problem started a few months ago with no known changes.
The AIX error logs do not show any problems, the fiber switch also does not 
show any errors.  IBM has checked the drives and they do not show any errors.
These drives are on different HBA's and different FC switches, both of which 
have working drives.
AIX has been restarted, due to other reasons, and that did not fix the problem. 
 The drives have been reset and power cycled.
We are running AIX 5.3  TSM 5.4.2.0.

What have we missed, what could be looked at next?  Does ANR8963E have a hidden 
meaning?
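One hedged avenue, not raised in the thread: ANR8963E is a serial-number mismatch, so letting TSM re-detect the serial when the path is redefined can sometimes clear it. The names below come from the messages above; verify the exact syntax for your server level before trying this.

```
delete path TSMSERVER1 L1R3DR1 srctype=server desttype=drive library=IBM3494A
update drive IBM3494A L1R3DR1 serial=autodetect
define path TSMSERVER1 L1R3DR1 srctype=server desttype=drive
  library=IBM3494A device=/dev/rmt13 autodetect=yes
```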


Andy Huebner


This e-mail (including any attachments) is confidential and may be legally 
privileged. If you are not an intended recipient or an authorized 
representative of an intended recipient, you are prohibited from using, copying 
or distributing the information in this e-mail or its attachments. If you have 
received this e-mail in error, please notify the sender immediately by return 
e-mail and delete all copies of this message and any attachments.
Thank you.


Re: define path problems

2009-03-26 Thread Kelly Lipp
You use the IBM TSM driver for HP drives, not the IBM drivers or the HP 
drivers...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Alexander Lazarevich
Sent: Thursday, March 26, 2009 1:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] define path problems

Well, we have HP drives that came with the Overland Neo 4100 library. Any
idea where to get the HP drivers for LTO2 drives? I've looked on HP's
website and can only find the Windows drivers, not TSM.

It also seems odd, since these are identical drives to the ones that were
replaced; shouldn't the same TSM driver we've been using for the last 5
years still work for the replacement drives?

Thanks for the help!

Alex

On Thu, 26 Mar 2009, Dennis, Melburn (IT Solutions US) wrote:

 It looks like you need to reinstall the drivers for the new drives.
 Your OS is seeing the drives, but TSM needs specific drivers for them to
 be associated as LTO class devices.  If they are IBM drives, you can go
 the IBM ftp site and download the latest drivers
 (ftp://service.boulder.ibm.com/storage/devdrvr/Windows/Win2000/Latest/).
 Once you have updated the drivers, redefine the drives (delete paths,
 drives first), and it should show up as LTO.


 Mel Dennis
 Systems Engineer II

 Siemens IT Solutions and Services, Inc.
 Energy Data Center Operations
 4400 Alafaya Trail
 MC Q1-108
 Orlando, FL 32826
 Tel.: 407-736-2360
 Mob: 321-356-9366
 Fax: 407-243-0260
 mailto:melburn.den...@siemens.com
 www.usa.siemens.com/it-solutions


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
 Alexander Lazarevich
 Sent: Thursday, March 26, 2009 12:33 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] define path problems

 I'm having another problem and could use more help.

 I've got lots of tapes in the library with lots of data on them, and
 they
 do seem to be defined and ready to go:

 tsm: ITG-TSM> q libv

 Library Name  Volume Name  Status   Owner    Last Use  Home Element  Device Type
 ------------  -----------  -------  -------  --------  ------------  -----------
 LB6.0.0.5     ITG002L2     Private  ITG-TSM  Data      45
 LB6.0.0.5     ITG003L2     Private  ITG-TSM  Data      48
 LB6.0.0.5     ITG004L2     Private  ITG-TSM  Data      53

 but for some reason they are just unavailable to the server and I can't
 figure out why.

 One clue is that since I replaced the drives (delete drives, update
 library path, define drives, define drive paths) the new drives are
 showing up as a new device type GENERICTAPE1:

 tsm: ITG-TSM> q drive

 Library Name  Drive Name  Device Type  On-Line
 ------------  ----------  -----------  -------
 LB6.0.0.5     MT1.0.0.4   GENERICTAPE  Yes
 LB6.0.0.5     MT2.0.0.4   GENERICTAPE  Yes

 Before the drives were replaced, these showed up as LTOCLASS1 device
 types. Here's the device type I have:

 tsm: ITG-TSM> q dev

 Device     Device      Storage  Device  Format  Est/Max   Mount
 Class      Access      Pool     Type            Capacity  Limit
 Name       Strategy    Count            (MB)
 ---------  ----------  -------  ------  ------  --------  ------
 DISK       Random      3
 LTOCLASS1  Sequential  1        LTO     DRIVE             DRIVES


 So my guess is I need to reassociate those tapes with device type
 LTO1CLASS, but I don't see any way to do this. Any idea if I'm on the
 right track?

 I appreciate the help,

 Alex

 On Thu, 26 Mar 2009, Alexander Lazarevich wrote:

 Thank you Mel.

 Alex

 On Thu, 26 Mar 2009, Dennis, Melburn (IT Solutions US) wrote:

 ANR8466E  command: Invalid update request for drive drive name in
  library library name.

 Explanation: An invalid update request has been made for the given
 drive. This can occur if a new device name is given and the
 characteristics of the device do not match the characteristics of the
 original device.

 System action: The server does not process the command.

 User response: If a different type of drive has been installed, the
 old
 drive definition must be deleted with a DELETE DRIVE operation, and a
 new drive must be defined. The UPDATE DRIVE command cannot be used in
 this case.


 As it says in the error description, you must not only delete the
 drive
 paths, but the drive definitions themselves. Deleting the drive path
 does not remove the previous drive's information, and when you try to
 redefine the path, it's expecting to connect to that old drive, which
 obviously no longer exists.


 Mel Dennis
 Systems Engineer II

 Siemens IT Solutions and Services, Inc.
 Energy Data Center Operations
 4400 Alafaya Trail
 MC Q1-108
 Orlando, FL 32826
 Tel.: 407-736-2360
 Mob: 321-356-9366
 Fax: 407

Re: define path problems

2009-03-26 Thread Kelly Lipp
When you put the new drives in the library, Device Manager found them.  You 
will need to right click on each drive and update the driver to use the TSM 
driver.  The HP driver probably bound to the drive automatically after the 
reboot.  You must have the IBM TSM Device Driver controlling these drives.

Once you have that correctly done, you will have an mt device in the TSM Device 
Driver view.  Update the drive and path definitions in TSM and you should be 
fine.  Call me at the number below if you struggle and I'll help you.
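The delete-and-redefine pass, using the names from earlier in the thread (paths are removed before drives; the device special file for the TSM driver is an assumption, so take the real one from the TSM Device Driver view):

```
* Remove the stale definitions: paths first, then drives
delete path ITG-TSM MT1.0.0.4 srctype=server desttype=drive library=LB6.0.0.5
delete drive LB6.0.0.5 MT1.0.0.4

* Redefine against the TSM device driver's mt device
define drive LB6.0.0.5 MT1.0.0.4
define path ITG-TSM MT1.0.0.4 srctype=server desttype=drive
  library=LB6.0.0.5 device=mt1.0.0.4
```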

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Alexander Lazarevich
Sent: Thursday, March 26, 2009 1:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] define path problems

Well, we have HP drives that came with the Overland Neo 4100 library. Any
idea where to get the HP drivers for LTO2 drives? I've looked on HP's
website and can only find the Windows drivers, not TSM.

It also seems odd, since these are identical drives to the ones that were
replaced; shouldn't the same TSM driver we've been using for the last 5
years still work for the replacement drives?

Thanks for the help!

Alex

On Thu, 26 Mar 2009, Dennis, Melburn (IT Solutions US) wrote:

 It looks like you need to reinstall the drivers for the new drives.
 Your OS is seeing the drives, but TSM needs specific drivers for them to
 be associated as LTO class devices.  If they are IBM drives, you can go
 the IBM ftp site and download the latest drivers
 (ftp://service.boulder.ibm.com/storage/devdrvr/Windows/Win2000/Latest/).
 Once you have updated the drivers, redefine the drives (delete paths,
 drives first), and it should show up as LTO.


 Mel Dennis
 Systems Engineer II

 Siemens IT Solutions and Services, Inc.
 Energy Data Center Operations
 4400 Alafaya Trail
 MC Q1-108
 Orlando, FL 32826
 Tel.: 407-736-2360
 Mob: 321-356-9366
 Fax: 407-243-0260
 mailto:melburn.den...@siemens.com
 www.usa.siemens.com/it-solutions


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
 Alexander Lazarevich
 Sent: Thursday, March 26, 2009 12:33 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] define path problems

 I'm having another problem and could use more help.

 I've got lots of tapes in the library with lots of data on them, and
 they
 do seem to be defined and ready to go:

 tsm: ITG-TSM> q libv

 Library Name  Volume Name  Status   Owner    Last Use  Home Element  Device Type
 ------------  -----------  -------  -------  --------  ------------  -----------
 LB6.0.0.5     ITG002L2     Private  ITG-TSM  Data      45
 LB6.0.0.5     ITG003L2     Private  ITG-TSM  Data      48
 LB6.0.0.5     ITG004L2     Private  ITG-TSM  Data      53

 but for some reason they are just unavailable to the server and I can't
 figure out why.

 One clue is that since I replaced the drives (delete drives, update
 library path, define drives, define drive paths) the new drives are
 showing up as a new device type GENERICTAPE1:

 tsm: ITG-TSM> q drive

 Library Name  Drive Name  Device Type  On-Line
 ------------  ----------  -----------  -------
 LB6.0.0.5     MT1.0.0.4   GENERICTAPE  Yes
 LB6.0.0.5     MT2.0.0.4   GENERICTAPE  Yes

 Before the drives were replaced, these showed up as LTOCLASS1 device
 types. Here's the device type I have:

 tsm: ITG-TSM> q dev

 Device     Device      Storage  Device  Format  Est/Max   Mount
 Class      Access      Pool     Type            Capacity  Limit
 Name       Strategy    Count            (MB)
 ---------  ----------  -------  ------  ------  --------  ------
 DISK       Random      3
 LTOCLASS1  Sequential  1        LTO     DRIVE             DRIVES


 So my guess is I need to reassociate those tapes with device type
 LTO1CLASS, but I don't see any way to do this. Any idea if I'm on the
 right track?

 I appreciate the help,

 Alex

 On Thu, 26 Mar 2009, Alexander Lazarevich wrote:

 Thank you Mel.

 Alex

 On Thu, 26 Mar 2009, Dennis, Melburn (IT Solutions US) wrote:

 ANR8466E  command: Invalid update request for drive drive name in
  library library name.

 Explanation: An invalid update request has been made for the given
 drive. This can occur if a new device name is given and the
 characteristics of the device do not match the characteristics of the
 original device.

 System action: The server does not process the command.

 User response: If a different type of drive has been installed, the
 old
 drive definition must be deleted with a DELETE DRIVE operation, and a
 new drive must be defined. The UPDATE DRIVE command cannot be used in
 this case.


 As it says in the error description, you must not only delete

Re: VTL and Dedup ( TS7569G)

2009-03-23 Thread Kelly Lipp
Yes, that is correct.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Hart, 
Charles A
Sent: Monday, March 23, 2009 10:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

I imagine the 1400MB/s is for the clustered version? 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Sunday, March 22, 2009 12:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

The white paper I listed in my response to this thread was written by
ESG.  They tested the TS7560 and obtained on the order of 1400MB/sec.
And Charles is correct: it is an x86 box, actually the IBM x3850 which
has, perhaps, the best architecture in the class.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Hart, Charles A
Sent: Saturday, March 21, 2009 8:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

It works well if you understand your data and how hard you can push it,
within reason, before you deploy another.  The IBM product likes more
CPU cores ... understand these are x86 boxes.  We see up to 500MB/s
writes to one of our VTLs that ingests Exchange backups via the TSM
TDP.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
W. Curtis Preston
Sent: Saturday, March 21, 2009 12:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Funny.

I would say that BAD dedupe is the enemy of throughput.

There IS good dedupe.  I've seen it.  It hurts neither backup nor
restore performance.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Clark, Robert A
Sent: Friday, March 20, 2009 4:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Dedupe is an errand boy, sent by the storage industry, to collect a
bill.

Dedupe is the enemy of throughput.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
W. Curtis Preston
Sent: Friday, March 20, 2009 4:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Why do you hate all things dedupe?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Friday, March 20, 2009 10:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Funny, but I was researching the TS7650 yesterday and found this article
on the IBM website.  Pretty good detail about the product in a non-TSM
environment.

ftp://service.boulder.ibm.com/storage/tape/ts7650g_esg_validation.pdf

And then this one in the TSM environment. I think this one might have
been written by somebody somewhat less familiar with TSM than we would
be.  Seemed a little heavy-handed about TSM.

ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/tsw03043usen/TSW03043USEN.
PDF

My overall impression, and I hate all things de-dup, was that this is a
pretty good product offering.  I'm sure it's way expensive, but I
understand there are some follow-on products coming that will address
the lower end of this market.

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Alex Paschal
Sent: Friday, March 20, 2009 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Hi, Sabar.

I couldn't find a TS7569G via Google, but on the TS7650G, also a
deduping VTL, after data goes through the factoring (dedup) algorithm it
is run through a compression algorithm.  You probably won't see much
deduplication, but on the first backup you should see a decrease in size
similar to the decrease you would see from the compression on a tape
drive.

Regards,
Alex

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sabar Martin Hasiholan Panggabean
Sent: Thursday, March 19, 2009 5:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VTL and Dedup ( TS7569G)

Hi,


Has anyone here implemented, or does anyone know how dedup works in TSM
using the TS7569G?  Let's say I have 100 TB of data and back it up to this VTL. On
the first attempt at backup (a full backup), will this data size decrease on
the VTL?

BR,

Martin P


This message (including any attachments) is intended only for the use of
the individual or entity to which it is addressed and may contain
information that is non-public, proprietary, privileged, confidential,
and exempt from disclosure under applicable law or may constitute as
attorney work product

Re: VTL and Dedup ( TS7569G)

2009-03-21 Thread Kelly Lipp
What Robert said.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of W. 
Curtis Preston
Sent: Saturday, March 21, 2009 11:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Funny.

I would say that BAD dedupe is the enemy of throughput.

There IS good dedupe.  I've seen it.  It hurts neither backup nor restore
performance.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Clark, Robert A
Sent: Friday, March 20, 2009 4:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Dedupe is an errand boy, sent by the storage industry, to collect a
bill.

Dedupe is the enemy of throughput.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
W. Curtis Preston
Sent: Friday, March 20, 2009 4:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Why do you hate all things dedupe?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Friday, March 20, 2009 10:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Funny, but I was researching the TS7650 yesterday and found this article
on the IBM website.  Pretty good detail about the product in a non-TSM
environment.

ftp://service.boulder.ibm.com/storage/tape/ts7650g_esg_validation.pdf

And then this one in the TSM environment. I think this one might have
been written by somebody somewhat less familiar with TSM than we would
be.  Seemed a little heavy-handed about TSM.

ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/tsw03043usen/TSW03043USEN.
PDF

My overall impression, and I hate all things de-dup, was that this is a
pretty good product offering.  I'm sure it's way expensive, but I
understand there are some follow-on products coming that will address
the lower end of this market.

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Alex Paschal
Sent: Friday, March 20, 2009 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Hi, Sabar.

I couldn't find a TS7569G via Google, but on the TS7650G, also a
deduping VTL, after data goes through the factoring (dedup) algorithm it
is run through a compression algorithm.  You probably won't see much
deduplication, but on the first backup you should see a decrease in size
similar to the decrease you would see from the compression on a tape
drive.

Regards,
Alex

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sabar Martin Hasiholan Panggabean
Sent: Thursday, March 19, 2009 5:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VTL and Dedup ( TS7569G)

Hi,


Has anyone here implemented, or does anyone know how dedup works in TSM
using the TS7569G?  Let's say I have 100 TB of data and back it up to this VTL. On
the first attempt at backup (a full backup), will this data size decrease on
the VTL?

BR,

Martin P




DISCLAIMER:
This message is intended for the sole use of the addressee, and may contain
information that is privileged, confidential and exempt from disclosure
under applicable law. If you are not the addressee you are hereby notified
that you may not use, copy, disclose, or distribute to anyone the message or
any information contained in the message. If you have received this message
in error, please immediately advise the sender by reply email and delete
this message.


Re: VTL and Dedup ( TS7569G)

2009-03-20 Thread Kelly Lipp
Funny, but I was researching the TS7650 yesterday and found this article on the 
IBM website.  Pretty good detail about the product in a non-TSM environment.

ftp://service.boulder.ibm.com/storage/tape/ts7650g_esg_validation.pdf

And then this one in the TSM environment. I think this one might have been 
written by somebody somewhat less familiar with TSM than we would be; it seemed 
a little heavy-handed about TSM.

ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/tsw03043usen/TSW03043USEN.PDF

My overall impression, and I hate all things de-dup, was that this is a pretty 
good product offering.  I'm sure it's very expensive, but I understand there are 
some follow-on products coming that will address the lower end of this market.

Thanks,

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Alex 
Paschal
Sent: Friday, March 20, 2009 11:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] VTL and Dedup ( TS7569G)

Hi, Sabar.

I couldn't find a TS7569G via Google, but on the TS7650G, also a
deduping VTL, after data goes through the factoring (dedup) algorithm it
is run through a compression algorithm.  You probably won't see much
deduplication, but on the first backup you should see a decrease in size
similar to the decrease you would see from the compression on a tape
drive.

Regards,
Alex

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sabar Martin Hasiholan Panggabean
Sent: Thursday, March 19, 2009 5:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VTL and Dedup ( TS7569G)

Hi,


Has anyone here implemented, or does anyone know, how dedup works in TSM
using the TS7569G? Let's say I have 100 TB of data and back it up to this
VTL. On the first (full) backup, will the data size decrease on the VTL?

BR,

Martin P




Re: Server platform comparison

2009-03-20 Thread Kelly Lipp
This has been hashed over many times in the past several months; you might check 
the archives.  The choice should be based on the knowledge within your organization.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Mark 
Devine
Sent: Friday, March 20, 2009 3:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Server platform comparison

I haven't googled it just yet, but I thought you folks may have references as
to the best TSM server platform.

Currently targeted for AIX, but the server consolidation crew is throwing
around HP and Linux.

Any URLs would be greatly appreciated.

TIA


Re: network data transfer rate &amp; aggregate data transfer rate

2009-03-19 Thread Kelly Lipp
Network data transfer rate is the rate while the client is actually sending
data; it is usually very close to the wire speed.  Aggregate data transfer rate
is the amount of data transferred divided by the backup's elapsed time, not
just the time the client is actually moving data on the wire.  If you don't see
a very high network rate (in the xx,xxx KB/sec range), it can indicate a
network problem.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Arthur 
Poon
Sent: Thursday, March 19, 2009 10:04 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] network data transfer rate &amp; aggregate data transfer rate

I have a question about the aggregate data transfer rate. Why is it always a
lot slower than the network data transfer rate?

For example:

ANE4961I (Session: 18137, Node: MFDCITRIX) Total number of bytes
transferred: 13.61 GB (SESSION: 18137) 

03/18/09 15:37:37 ANE4963I (Session: 18137, Node: MFDCITRIX) Data
transfer time: 914.17 sec (SESSION: 18137) 

03/18/09 15:37:37 ANE4966I (Session: 18137, Node: MFDCITRIX) Network
data transfer rate: 15,618.11 KB/sec (SESSION: 18137) 

03/18/09 15:37:37 ANE4967I (Session: 18137, Node: MFDCITRIX) Aggregate
data transfer rate: 2,220.91 KB/sec (SESSION: 18137) 

03/18/09 15:37:37 ANE4968I (Session: 18137, Node: MFDCITRIX) Objects
compressed by: 14% (SESSION: 18137) 

 

AP 
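Kelly's two definitions can be sanity-checked against the numbers in the quoted
log. A minimal sketch follows; the elapsed-time value is an assumption
back-derived from the reported aggregate figure (roughly 107 minutes), since
the log excerpt does not show the session's total elapsed time:

```python
# Recompute the two rates from the ANE49xx summary quoted above.
KB_PER_GB = 1024 * 1024  # TSM reports GB and KB in binary units

total_kb = 13.61 * KB_PER_GB   # ANE4961I: total bytes transferred
wire_secs = 914.17             # ANE4963I: data transfer time

# ANE4966I: bytes divided by the time actually spent on the wire
network_rate = total_kb / wire_secs
print(f"network rate:   {network_rate:,.0f} KB/sec")   # ~15,600 KB/sec

# ANE4967I: the same bytes divided by the whole session's elapsed time,
# which also covers file scanning, compression, and waits.
elapsed_secs = 6426            # assumed: implied by the 2,220.91 KB/sec figure
aggregate_rate = total_kb / elapsed_secs
print(f"aggregate rate: {aggregate_rate:,.0f} KB/sec")  # ~2,221 KB/sec
```

The gap between the two numbers is therefore the fraction of the session the
client spent doing anything other than moving data.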


Re: Mixing LTO2 and LTO4 drives and media in the same library

2009-03-19 Thread Kelly Lipp
LTO4 drives will read, but not write, LTO2 media.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Howard 
Coles
Sent: Thursday, March 19, 2009 1:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Mixing LTO2 and LTO4 drives and media in the same library

We do this with mixed LTO3 and LTO2 drives.  We did not partition the 
library.  However, I would highly suggest putting the smaller format higher in 
the drive number list: DRIVE01 = LTO2 . . . DRIVE10 = LTO4.  That will keep the 
LTO4 drives available for LTO4 tapes, even though the LTO4 drives can also read 
LTO2 tapes.

See Ya'
Howard


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
 Of David McClelland
 Sent: Thursday, March 19, 2009 9:13 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Mixing LTO2 and LTO4 drives and media in the same
 library
 
 Hi Team,
 
 I've a question about mixing LTO drives and media in the same library -
 LTOn
 and LTOn+2.
 
 * Windows TSM 5.4.3.2 (soon to be migrated to a 5.5.1.1 server running
 on
 AIX)
 * IBM 3584 single frame library, presently with 4 x IBM LTO2 drives
 * Currently no library clients or storage agents.
 
 My client is looking to migrate from 4 x LTO2 drives to 8 x LTO4 drives
 in
 their 3584 library (as well as performing some other TSM Server
 platform
 migration activities).
 
 The LTO drive generation-spanning capabilities are such that LTO4
 drives can
 read from but not write to LTO2 media; that LTO2 drives can read/write
 to
 LTO2 media, but will have nothing to do with LTO4 media.
 
 Now, for an interim period, it has been suggested that the library be
 configured with 5 x LTO4 drives and 3 x LTO2 drives. TSM Server and the
 IBM
 3584 Tape Library are both stated to support mixed media and tape
 drives
 between these LTO generations.
 
 However, the idea behind this is that during this interim period, new
 incoming backup data should be written to LTO4 media by the LTO4 drives
 (as
 well as some existing LTO2 data being moved/consolidated onto LTO4
 media),
 and that offsite copies only continue to be generated to LTO2 media
 using
 the LTO2 drives (various reasons for this, including maintaining media
 compatibility for DR purposes).  If possible, largely I think for
 flexibility, the preference is to achieve this within a single library
 partition.
 
 It's this last bit that I'm having trouble getting my head around.
 
 Obviously, a new device class (FORMAT=ULTRIUM4C) will be added to the
 TSM
 Server to make use of the LTO4 drives (making sure that the current
 LTO2
 device class is locked down to FORMAT=ULTRIUM2C), as will a new set of
 LTO4
 storage pools pointing to this new device class. However, with both of
 these
 drive types being located in a single library partition I don't quite
 see
 how they'll be able to make sure that only LTO2 scratch tapes get
 mounted
 into LTO2 drives and that LTO4 drives don't go trying to write to LTO2
 media. Is it simply a case of manually allocating the LTO2 scratch
 media to
 the LTO2 copy storagepool (and setting its MAXSCRATCH to 0), and
 ensuring
 that LTO4 is the only media in the general scratch pool? Will TSM
 ensure
 that, given we're explicit in our device class definitions, when a
 backup
 stgpool operation requests a mount of an LTO2 scratch volume (or indeed
 an
 LTO2 filling volume) to perform a write it'll only mount one into one
 of the
 LTO2 drives, and similar for LTO4 media and drives? I suspect not...
 
 Is anyone running with a similar configuration to this (if it's
 possible)?
 
 Of course, the other blindingly obvious option is to partition the
 3584,
 creating one library partition for the 3 x LTO2 drives and enough slots
 to
 manage the offsite requirement, and another for the 5 x LTO4 drives
 with the
 remainder of the carts. The client doesn't have ALMS for quick flexible
 library partitioning (I don't know about the cost of this feature, but
 I
 expect that they'd be unwilling to consider it for only an interim
 period)
 so we'd have to use more rigid non-ALMS partitioning method. This
 option
 appeals to me in that it's pretty simple to get my head around, but
 perhaps
 requires a little more work to perform the library repartitioning.
 
 Any thoughts gratefully received!
 
 Cheers,
 
 David Mc,
 London, UK
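For the single-partition approach described above, the device class and pool
split might be sketched roughly as in the macro below. All names are
placeholders and this is untested; confining LTO2 media to the copy pool relies
on MAXSCRATCH=0 plus pre-assigned volumes, as suggested in the post:

```
/* Sketch only: library, device class, pool, and volume names are made up. */
define devclass lto4class devtype=lto format=ultrium4c library=lib3584

/* pin the existing class to the LTO2 format */
update devclass lto2class format=ultrium2c

/* new primary pool draws LTO4 scratch from the general scratch pool */
define stgpool tapepool_lto4 lto4class maxscratch=200

/* offsite copy pool stays on LTO2: no scratch, only assigned volumes */
define stgpool copypool_lto2 lto2class pooltype=copy maxscratch=0
define volume copypool_lto2 L20001
```

Whether the library manager will reliably keep LTO2 volumes out of LTO4 drives
on mount is the open question in the post; the above only constrains which
media each pool can consume.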
 
 


Re: Calculating Change in Backup Environment

2009-03-18 Thread Kelly Lipp
First, you aren't really doing what you think you're doing: providing the 
ability to restore data back to any point in the previous 90 days.  You get 
seven days for a file that changes every day.

Steve is right: you should probably do something like verexists=nolimit and 
retextra=21.  That way, if you do a backup more than once a day, you can still 
get 21 days' worth of restores.

This boils down to how many files really change 21 times in 21 days.  Most 
files won't do that.  Large things, like SQL and Exchange databases will.  But 
you wouldn't really want to restore your SQL database back 21 days in any 
event.  How would you get back to scratch?  You couldn't.  So keeping these 
types of things for shorter periods makes more sense.

Anecdotally, my experience has been that increasing/reducing the numbers does 
not generally result in a ton more storage consumed/saved as most files are 
created and then never changed (or deleted).  They get backed up the one time 
and then live until they are deleted and retonly is reached.

I would definitely get to the 21/21 soon, though, as your users' expectations 
are probably different from what you're providing!

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com
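For reference, the 21/21 change suggested above would look something like the
macro below, with placeholder domain/policy-set/class names and TSM's actual
keyword spelling (VEREXISTS=NOLIMIT rather than "unlimited"); validate against
your own policy structure before activating:

```
/* Placeholder names; substitute your own domain and policy set. */
update copygroup standard standard standard standard type=backup -
       verexists=nolimit verdeleted=nolimit retextra=21 retonly=21
validate policyset standard standard
activate policyset standard standard
```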


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Harris
Sent: Wednesday, March 18, 2009 3:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Calculating Change in Backup Environment

Dennis,

FWIW, I don't like an N days/N versions strategy.  It can have unexpected
side effects.

N Days/unlimited versions has more predictable behaviour and also is more
efficient at expiry time.

Regards

Steve.

Steven Harris
TSM Admin, Sydney Australia




 From: Dennis, Melburn (IT Solutions US) melburn.den...@siemens.com
 Sent by: ADSM: Dist Stor Manager ads...@vm.marist.edu
 To: ADSM-L@VM.MARIST.EDU
 Date: 18/03/2009 11:46 PM
 Reply to: ADSM: Dist Stor Manager ads...@vm.marist.edu
 Subject: [ADSM-L] Calculating Change in Backup Environment






Currently, our backup environment employs a 90-day / 7-revision backup
policy.  My customer has come to me to find out how much our backup data
would grow or shrink if we went to a 21-day / 21-revision backup policy.
Have any of you out there experienced requests like this before, and if
so, how were you able to get a reasonable guesstimate?

Mel Dennis
Systems Engineer II

Siemens IT Solutions and Services, Inc.
Energy Data Center Operations
4400 Alafaya Trail
MC Q1-108
Orlando, FL 32826
Tel.: 407-736-2360
Mob: 321-356-9366
Fax: 407-243-0260
mailto:melburn.den...@siemens.com
www.usa.siemens.com/it-solutions


Re: Two different retention policies for the same node

2009-03-17 Thread Kelly Lipp
The more important issue regarding DR is the prioritization of application 
restore based on the business itself.  As it turns out, after a disaster about 
90% of the stuff that's backed up isn't necessary to run the business.  And the 
ability to get the previous seven days doesn't help either.  More important to 
have two different DR pools: one for important data and one for the rest.  Then 
optimize the important data DR pool to ensure it can do what you need it to do: 

I must have application XYZ back up and available to users within 24 hours of 
the disaster to a point no further back than 48 hours.  And it turns out that 
most businesses' ability to use recovered data within 24 hours is sketchy at 
best.  Where are the people that use the data going to be?  How will customers 
interact with them?  All of these issues are actually about 100 times more 
difficult than restoring data.  Yet few think of them!

If you worry about DR application by application and think about all aspects of 
using the data, the problem actually becomes simpler: there is less data to 
worry about, and the time frame to recover it is probably longer than you think.

It's all about RPO and RTO.  In our sales practice, I'm spending a lot of time 
consulting (during pre-sales so it's free) about DR issues.  Bottom line: you 
need to have a very good plan, but since you will probably never execute 
(beyond testing), you probably shouldn't spend too much time/money on it.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Howard 
Coles
Sent: Tuesday, March 17, 2009 2:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Two different retention policies for the same node

There could be some serious issues with this.  If you have an onsite
volume that has 170 day old data, with 5 day old data and 80 day old
data (due to reclamation), and the volume goes bad, all you'll be able
to restore is the 5 day old data.

However, this is a real challenge.  I'd like to see the solution.

It appears on the surface that this is a result of the "we want to keep it
forever but can't afford the cost" mentality.  Champagne taste on a beer
budget.

See Ya'
Howard


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
 Of Michael Green
 Sent: Tuesday, March 17, 2009 2:57 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Two different retention policies for the same node
 
 I've been asked to provide a DR/BAckup solution that seems to
 contradict TSM methodology, but I've decided I'll throw this in here
 anyway.
 
 Given the following retention policy:
 RETE=180
 RETO=180
 VERE=NOL
 VERD=NOL
 (180 days, no version limit)
 
 I've been asked to find a way to keep offsite only 7 days worth of
 data (on deduped disk or somthng like that), both active and inactive.
 So that it would allow us to restore complete system image from any
 day within last week.
 
 Doable (without resorting to double backups under different MCs)?
 --
 Warm regards,
 Michael Green


Re: TSM Library Manager

2009-03-10 Thread Kelly Lipp
I've seen something similar in the non sharing environment if the element 
number to drive relationship is whacked.  For instance, you suggest that the 
drive at /rmt0 is element 256 when in reality it's 257.  The library is 
instructed to mount a tape in element 256 and then TSM looks for tape in /rmt0.

Verify that the paths and element numbers actually line-up correctly.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com
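A quick way to do that verification is from a dsmadmc session on the library
manager; the detailed output includes each drive's element number, serial
number, and the device special file for each path (no arguments needed, so
this sketch should apply as-is):

```
/* Element number and serial appear in the detailed drive output. */
query drive format=detailed

/* Device special file (e.g. /dev/rmt0) appears per path. */
query path format=detailed
```

Compare those element numbers against the library's own drive inventory (from
the operator panel or web interface) to confirm they line up.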


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Morris.Marshael
Sent: Tuesday, March 10, 2009 10:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM Library Manager

I have setup a library manager in TSM.

This server is TSM 5.5.2

I am setting up a library client and it also has TSM 5.5.2.

The TSM library manager is using the drives with no problem.

I have setup the paths for the library client and I'm having problems
with the client.

The server, when requested by the client, will mount a tape and change its
ownership to the client name.

Within the client activity log I get:

ANR0408I Session 15 started for server LIBRARYSERVER (AIX-RS/6000)
(Tcp/Ip) for

library sharing. 

ANR0409I Session 15 ended for server LIBRARYSERVER (AIX-RS/6000).

ANR8779E Unable to open drive rmt2, error number=2.

ANR0409I Session 3 ended for server LIBRARYSERVER (AIX-RS/6000).

ANR1404W Scratch volume mount request denied - mount failed.

 

Librarymanager tapes from AIX:

lsdev -Cc tape  

rmt0  Available 07-08-02 IBM 3580 Ultrium Tape Drive (FCP)

rmt1  Available 07-08-02 IBM 3580 Ultrium Tape Drive (FCP)

rmt2  Available 07-08-02 IBM 3580 Ultrium Tape Drive (FCP)

rmt3  Available 07-08-02 IBM 3580 Ultrium Tape Drive (FCP)

rmt4  Available 07-08-02 IBM 3580 Ultrium Tape Drive (FCP)

rmt5  Available 07-08-02 IBM 3580 Ultrium Tape Drive (FCP)

smc0  Available 07-08-02 IBM 3584 Library Medium Changer (FCP)

 

client tapes from AIX:

lsdev -Cc tape  

rmt0  Available 07-08-01 IBM 3580 Ultrium Tape Drive (FCP)

rmt1  Available 07-08-01 IBM 3580 Ultrium Tape Drive (FCP)

rmt2  Available 07-08-01 IBM 3580 Ultrium Tape Drive (FCP)

rmt3  Available 07-08-01 IBM 3580 Ultrium Tape Drive (FCP)

rmt4  Available 07-08-01 IBM 3580 Ultrium Tape Drive (FCP)

rmt5  Available 07-08-01 IBM 3580 Ultrium Tape Drive (FCP)

smc0  Available 07-08-01 IBM 3584 Library Medium Changer (FCP)

 

I have checked the serial numbers of the drives on the server against those on
the client, and I believe that they are correct.

 

Can someone help me in figuring out what I have missed or what might be
the problem?

 

Thanks,

Marshael 

 




Re: How do I override compression setting?

2009-03-10 Thread Kelly Lipp
How about exclude.compression on that filespace?  

exclude.compression 
Excludes files from compression processing if the compression option is set to 
yes. This option applies to backups and archives.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com
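On the client side, that option goes in the include-exclude section of the
options file. A sketch, with a placeholder server stanza name and the
/mnt/mksysb filesystem from the original post:

```
* dsm.sys fragment - stanza/server name is a placeholder
SErvername tsmserver
   compression          yes
   exclude.compression  /mnt/mksysb/.../*
```

Compression stays on for every other filesystem; only objects matching the
exclude.compression pattern are sent uncompressed.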


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Schneider, John
Sent: Tuesday, March 10, 2009 10:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] How do I override compression setting?

Greetings,
I have a specific NFS filesystem on a TSM AIX client that I need to
back up without compression, because the files are very large and already
compressed, so it is pointless to compress them.  But compression is set to
yes in the dsm.sys file because I want to compress the other
filesystems.
The TSM client definition for compression is set to client, so
presumably the client can choose to either compress or not.  Here is the
dsmsched.log of the schedule.  Note that I turn off compression in the
schedule:
 
03/10/09   10:00:38

03/10/09   10:00:38 Schedule Name: @1002
03/10/09   10:00:38 Action:Incremental
03/10/09   10:00:38 Objects:   /mnt/mksysb
03/10/09   10:00:38 Options:   -subdir=y -compression=n
03/10/09   10:00:38 Server Window Start:   09:57:56 on 03/10/09
03/10/09   10:00:38

03/10/09   10:00:38
Executing scheduled command now.
03/10/09   10:00:38 --- SCHEDULEREC OBJECT BEGIN @1002 03/10/09
09:57:56
03/10/09   10:00:38 Incremental backup of volume '/mnt/mksysb'
03/10/09   10:00:45 ANS1898I * Processed   500 files *
03/10/09   10:00:50 ANS1898I * Processed 1,000 files *

 Buncha lines skipped...
 
03/10/09   10:02:12 ANS1898I * Processed 5,500 files *
03/10/09   10:07:02 Normal File-- 2,489,241,600 /mnt/mksysb/osbackup/eeyore/eeyore-03082009 ANS1360I Compressed Data Grew
03/10/09   10:13:05 Normal File-- 2,489,241,600 /mnt/mksysb/osbackup/eeyore/eeyore-03082009 [Sent]
03/10/09   10:13:05 Successful incremental backup of '/mnt/mksysb'
 
03/10/09   10:13:05 --- SCHEDULEREC STATUS BEGIN
03/10/09   10:13:05 Total number of objects inspected:5,624
03/10/09   10:13:05 Total number of objects backed up:1
03/10/09   10:13:05 Total number of objects updated:  0
03/10/09   10:13:05 Total number of objects rebound:  0
03/10/09   10:13:05 Total number of objects deleted:  0
03/10/09   10:13:05 Total number of objects expired:  0
03/10/09   10:13:05 Total number of objects failed:   0
03/10/09   10:13:05 Total number of bytes transferred: 3.93 GB
03/10/09   10:13:05 Data transfer time:8.86 sec

Why am I still getting ANS1360I Compressed Data Grew when the options
for the schedule are -compression=n?  It shouldn't be trying to
compress, should it?  
 
We are running TSM client 5.4.2.0, and TSM server 5.4.3.0 on AIX, in
case that matters.

Best Regards,

John D. Schneider 
Lead Systems Administrator - Storage 
Sisters of Mercy Health Systems 
3637 South Geyer Road 
St. Louis, MO  63127 
Phone: 314-364-3150 
Cell: 314-750-8721 
Email:  john.schnei...@mercy.net

 


Re: TSM Library Manager

2009-03-10 Thread Kelly Lipp
Can you see that the library is indeed hanging a tape in some drive?  And if 
so, is it indeed the correct drive?  Visual inspection of what you think is 
happening is what I'm suggesting.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Morris.Marshael
Sent: Tuesday, March 10, 2009 11:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Library Manager

I checked the drives to see what element numbers they were pointing to
and checked the library to see what element they were showing for the
drives, all are in sync.

Thanks,
Marshael

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Tuesday, March 10, 2009 1:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Library Manager

I've seen something similar in the non sharing environment if the
element number to drive relationship is whacked.  For instance, you
suggest that the drive at /rmt0 is element 256 when in reality it's 257.
The library is instructed to mount a tape in element 256 and then TSM
looks for tape in /rmt0.

Verify that the paths and element numbers actually line-up correctly.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com





Re: How do I override compression setting?

2009-03-10 Thread Kelly Lipp
That is the question: does -compression=no on the command line override the 
compression yes in dsm.sys?  Apparently not.  The docs state, though, that 
exclude.compression will override the dsm.sys setting.  That might be your only 
option.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Schneider, John
Sent: Tuesday, March 10, 2009 11:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] How do I override compression setting?

Kelly,
It is good to know an option like that exists, thanks.  But that
is not really my question.  What is wrong with what I am doing? I don't
see any mistakes in my configuration. When I create a schedule and put
-compression=n in the options for the schedule, the TSM client should
turn compression off for that schedule, right?  I shouldn't have to put
a special exclude in, should I?


Best Regards,

John D. Schneider 
Phone: 314-364-3150 
Cell: 314-750-8721
Email:  john.schnei...@mercy.net 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Tuesday, March 10, 2009 12:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] How do I override compression setting?

How about exclude.compression on that filespace?  

exclude.compression 
Excludes files from compression processing if the compression option is
set to yes. This option applies to backups and archives.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Schneider, John
Sent: Tuesday, March 10, 2009 10:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] How do I override compression setting?

Greetings,
I have a specific NFS filesystem on a TSM AIX client that I need to
backup without compression because the files are very large and already
compressed, so it is pointless to compress them.  But compression is
yes in the dsm.sys file because I want to compress the other
filesystems.
The TSM client definition for compression is set to client, so
presumably the client can choose to either compress or not.  Here is the
dsmsched.log of the schedule.  Note that I turn off compression in the
schedule:
 
03/10/09   10:00:38

03/10/09   10:00:38 Schedule Name: @1002
03/10/09   10:00:38 Action:Incremental
03/10/09   10:00:38 Objects:   /mnt/mksysb
03/10/09   10:00:38 Options:   -subdir=y -compression=n
03/10/09   10:00:38 Server Window Start:   09:57:56 on 03/10/09
03/10/09   10:00:38

03/10/09   10:00:38
Executing scheduled command now.
03/10/09   10:00:38 --- SCHEDULEREC OBJECT BEGIN @1002 03/10/09
09:57:56
03/10/09   10:00:38 Incremental backup of volume '/mnt/mksysb'
03/10/09   10:00:45 ANS1898I * Processed   500 files *
03/10/09   10:00:50 ANS1898I * Processed 1,000 files *

 Buncha lines skipped...
 
03/10/09   10:02:12 ANS1898I * Processed 5,500 files *
03/10/09   10:07:02 Normal File-- 2,489,241,600 /mnt/mksysb/osba
ckup/eeyore/eeyore-03082009 ANS1360I Compressed Data Grew
03/10/09   10:13:05 Normal File-- 2,489,241,600 /mnt/mksysb/osba
ckup/eeyore/eeyore-03082009 [Sent]
03/10/09   10:13:05 Successful incremental backup of '/mnt/mksysb'
 
03/10/09   10:13:05 --- SCHEDULEREC STATUS BEGIN
03/10/09   10:13:05 Total number of objects inspected:5,624
03/10/09   10:13:05 Total number of objects backed up:1
03/10/09   10:13:05 Total number of objects updated:  0
03/10/09   10:13:05 Total number of objects rebound:  0
03/10/09   10:13:05 Total number of objects deleted:  0
03/10/09   10:13:05 Total number of objects expired:  0
03/10/09   10:13:05 Total number of objects failed:   0
03/10/09   10:13:05 Total number of bytes transferred: 3.93 GB
03/10/09   10:13:05 Data transfer time:8.86 sec

Why am I still getting ANS1360I Compressed Data Grew when the options
for the schedule are -compression=n?  It shouldn't be trying to
compress, should it?  
 
We are running TSM client 5.4.2.0, and TSM server 5.4.3.0 on AIX, in
case that matters.
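
[A common way to exempt a single filesystem from compression while leaving it
on globally is the exclude.compression include-exclude option rather than a
schedule option.  A sketch only, using the mount point from above; verify the
syntax against your client level:]

```
* dsm.sys / include-exclude sketch: keep compression on globally,
* but exempt the already-compressed mksysb images under /mnt/mksysb.
COMPRESSIon           Yes
EXCLUDE.COMPRESSION   /mnt/mksysb/.../*
```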

Best Regards,

John D. Schneider 
Lead Systems Administrator - Storage 
Sisters of Mercy Health Systems 
3637 South Geyer Road 
St. Louis, MO  63127 
Phone: 314-364-3150 
Cell: 314-750-8721 
Email:  john.schnei...@mercy.net

 
This e-mail contains information which (a) may be PROPRIETARY IN NATURE OR
OTHERWISE PROTECTED BY LAW FROM DISCLOSURE, and (b) is intended only for the
use of the addressee(s) named above. If you are not the addressee, or the
person responsible for delivering this to the addressee(s), you

Re: TSM Library Manager

2009-03-10 Thread Kelly Lipp
And always consider what the truly smart guys have to say first when reading 
the list!  Thanks Paul.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Tuesday, March 10, 2009 1:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Library Manager

At 03:17 PM 3/10/2009, Schneider, John wrote:
> Greetings,
>  I have a solution to the problem of library sharing and
> different rmt names.


We have a similar, but different solution.  We have a script we use to
rename AIX tape devices to a predictable name, based on the last 4
characters of the WWN.  After running it on each AIX system, the device
names will be the same on each system.  After running tsmchrmt rmt0, the
device will have a name similar to:
rmt.f0c6.0.0
where f0c6 are the last 4 chars of the device's WWN.

I don't recall where we got the seed for this script, but here is our
version, which we call tsmchrmt:


#!/bin/sh
# Rename an AIX rmt device to a predictable name based on its WWN and LUN.
if [ $# != 1 ]
then
   echo "must specify 1 rmt device name as an argument."
   exit 4
fi
d=$1

WWN=`/usr/sbin/lsattr -El $d -a ww_name|cut -f2 -d' '|cut -c15-`
LUN=`/usr/sbin/lsattr -El $d -a lun_id|cut -f2 -d' '|cut -c3`
root=`echo $d|cut -c1-3`
new_name=$root.$WWN.$LUN.0
let j=0
while [[ -e /dev/$new_name ]]
do
 let j=j+1
 new_name=$root.$WWN.$LUN.$j
done
/usr/sbin/chdev -l $d -a new_name=$new_name
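
As a quick sanity check, the cut pipelines can be exercised on canned
lsattr-style output without an AIX box (the WWN and LUN values below are
made up for illustration):

```shell
# Simulate tsmchrmt's parsing on made-up lsattr output for a
# hypothetical device rmt0 (values are illustrative, not from a real box).
lsattr_ww_name='ww_name 0x5005076300c0f0c6 FC World Wide Name False'
lsattr_lun_id='lun_id 0x0 Logical Unit Number False'
d=rmt0

# Same pipelines as the script: field 2 is the attribute value.
WWN=$(echo "$lsattr_ww_name" | cut -f2 -d' ' | cut -c15-)  # last 4 hex chars
LUN=$(echo "$lsattr_lun_id"  | cut -f2 -d' ' | cut -c3)
root=$(echo "$d" | cut -c1-3)

new_name=$root.$WWN.$LUN.0
echo "$new_name"   # rmt.f0c6.0.0
```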


..Paul



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: Preferred TSM Platform

2009-02-26 Thread Kelly Lipp
I have to disagree with that.  We routinely run multiple (up to six, and the 
only reason it's only six is we don't have any more to test, so perhaps more 
would run) LTO4 drives as fast as they want to run using an IBM x3850 
server.  We run four at a time using an x3650.  The buses are PCI-E, drives 
are either SAS or FC.  Would that box run six or eight of the IBM fancy pants 
drives? I don't know; I haven't ever seen it tried.

For most sites, Windows and the crummy little hardware it runs on will be just 
fine.  For you big fellas, not so much. If you are in the "gotta push 
3-6TB/day" camp, Windows will work.  For you 10TB/day folks, maybe not.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Kauffman, Tom
Sent: Thursday, February 26, 2009 7:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

Yup.

It boils down to Wanda's statement: "I/O, I/O, it's all about I/O" --- Wanda 
Prather

If you can do the work with LTO-1 or -2 drives, or DLT-7000, or similar 
speed/capacity, then Windows will work. When you get into 
high-speed/high-capacity drives, the Intel/AMD architecture comes unglued. A 
single LTO-4 drive will use ALL the I/O bandwidth of a PCI or PCI-X bus, and a 
significant chunk of a PCIe(1) bus. The challenge becomes one of finding a 
suitable x86 server with multiple PCIe busses in the design.

IBM has the GX series I/O modules, with two PCIe busses each, for the P6 
architecture, which allows for significant I/O bandwidth expansion -- for a cost, 
of course.

Tom Kauffman
NIBCO, Inc


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kelly 
Lipp
Sent: Wednesday, February 25, 2009 4:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Preferred TSM Platform

I love it when somebody quotes me!  Somebody is listening.

I had this discussion with one of our customers yesterday.  It really 
does/should boil down to the OS experience you have on hand.  Does the AIX 
platform have more capacity/performance than the best Windows platform?  I'm 
guessing it probably does.  But at what cost? And then more importantly: do 
either of the platforms have enough for you?  If both do, then pick the one 
that makes more sense based on your OS experience.  And remember: one can 
always divide and conquer.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Allen 
S. Rout
Sent: Wednesday, February 25, 2009 1:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

>>>>> On Wed, 25 Feb 2009 15:57:37 +0100, Henrik Vahlstedt s...@statoilhydro.com said:

> Time to quote Kelly...

> "So to me it's either AIX or Windows (yes, you can do a lot of TSM
> on Windows once you get past the bigotry!).  Choose whichever one
> you have the most experience with."

<gollum> Ac!  It BURNS usss, naty windowsss </gollum>


- Allen S. Rout
- Prefers AIX for this. *koff* :)


CONFIDENTIALITY NOTICE:  This email and any attachments are for the
exclusive and confidential use of the intended recipient.  If you are not
the intended recipient, please do not read, distribute or take action in
reliance upon this message. If you have received this in error, please
notify us immediately by return email and promptly delete this message
and its attachments from your computer system. We do not waive
attorney-client or work product privilege by the transmission of this
message.


Re: Preferred TSM Platform

2009-02-26 Thread Kelly Lipp
Sure, but if you don't have any AIX expertise and have to buy/rent that, the 
cost goes up significantly. I'm no huge fan of Windows, but almost everybody 
has some of that expertise while AIX expertise is not available in most shops.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Orville Lantto
Sent: Thursday, February 26, 2009 9:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

Those high end Windows boxes are priced similarly to equivalent (or better) AIX 
boxes.  Check the benchmarks before deciding.

Orville L. Lantto



From: Kauffman, Tom
Sent: Thu 2/26/2009 08:38
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform


Yup.

It boils down to Wanda's statement: "I/O, I/O, it's all about I/O" --- Wanda 
Prather

If you can do the work with LTO-1 or -2 drives, or DLT-7000, or similar 
speed/capacity, then Windows will work. When you get into 
high-speed/high-capacity drives, the Intel/AMD architecture comes unglued. A 
single LTO-4 drive will use ALL the I/O bandwidth of a PCI or PCI-X bus, and a 
significant chunk of a PCIe(1) bus. The challenge becomes one of finding a 
suitable x86 server with multiple PCIe busses in the design.

IBM has the GX series I/O modules, with two PCIe busses each, for the P6 
architecture, which allows for significant I/O bandwidth expansion -- for a cost, 
of course.

Tom Kauffman
NIBCO, Inc


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kelly 
Lipp
Sent: Wednesday, February 25, 2009 4:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Preferred TSM Platform

I love it when somebody quotes me!  Somebody is listening.

I had this discussion with one of our customers yesterday.  It really 
does/should boil down to the OS experience you have on hand.  Does the AIX 
platform have more capacity/performance than the best Windows platform?  I'm 
guessing it probably does.  But at what cost? And then more importantly: do 
either of the platforms have enough for you?  If both do, then pick the one 
that makes more sense based on your OS experience.  And remember: one can 
always divide and conquer.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Allen 
S. Rout
Sent: Wednesday, February 25, 2009 1:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

>>>>> On Wed, 25 Feb 2009 15:57:37 +0100, Henrik Vahlstedt s...@statoilhydro.com said:

> Time to quote Kelly...

> "So to me it's either AIX or Windows (yes, you can do a lot of TSM
> on Windows once you get past the bigotry!).  Choose whichever one
> you have the most experience with."

<gollum> Ac!  It BURNS usss, naty windowsss </gollum>


- Allen S. Rout
- Prefers AIX for this. *koff* :)







This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the system manager. This 
message contains confidential information and is intended only for the 
individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail.


Re: Preferred TSM Platform

2009-02-26 Thread Kelly Lipp
We have had a couple of customers over the years running TSM on Solaris.  I 
must echo Mike's comments. As Solaris would optimistically finish third in IBM's 
race for resources, there will necessarily be fewer resources on both the 
development and support side.  If/when there are problems, they will be solved 
more slowly than on Windows or AIX. I guess I would enter the TSM-on-Solaris 
world with caution. That said, I have found that if you are a very good 
Solaris person, the issues are much easier to solve, as you can often walk the 
IBM resource through the problem. But it will take more of your time if there 
is a problem.

The most prevalent issues we have seen are integrating with libraries and 
drives, as you would expect. Perhaps if you stay with IBM tape products these 
problems would be fewer?  Who knows.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of De 
Gasperis, Mike
Sent: Thursday, February 26, 2009 9:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

We're primarily a Solaris-based TSM shop; our backup server platforms
are currently T2000s and T5220s, which seem to be very good at handling
the I/O of the newer T1A & B drives along with the speeds of LTOs
and whatnot.  Most of our servers are loaded up with dual-port 4Gb
Emulex cards, usually eight total HBA ports per server.  Network-wise we
use the onboard 4 Gb ports and usually a dual-port Gb card, with
Etherchannel/trunking to give us a large pipe for backup traffic.  Speed-wise
the machines are great for an enterprise solution; price-wise I
think they're fantastic as well.  The only issue we seem to run into
is IBM & Solaris pointing fingers at each other when there are
complicated bugs encountered that can't be resolved via simple queries to
get to the root of the problem.  We're primarily using SAN-based storage
(Sun/EMC arrays) along with EDLs; disk suite management is usually done
with Veritas for us, though MPxIO is always an option.  For any disk we
use in TSM we typically use raw volumes and not formatted file systems.

I think the preferred platform is still AIX, as TSM just seems to perform
better on it, with fewer of the odd bugs we see from time to time.  The
new Sun servers, though, are a great buy performance-wise and really do
handle these newer tape drive speeds well.  On most of these new servers
we can't even get the CPU usage to go above 40%, even with
400-500 clients on them.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sergio Fuentes
Sent: Thursday, February 26, 2009 10:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

We're actually considering a new platform for future TSM servers simply
because
we're not an AIX shop anymore (TSM being the lone holdout).  We're not a
very
good windows shop either, and our strength is really in Solaris and/or
Linux,
technically speaking.

When I compare the hardware and LVM features for Solaris with those of
Linux, I
can see the benefits of Solaris.  But this listserv group has me
second-guessing
myself since I have yet to hear from someone with a Solaris-based TSM
infrastructure.  (I would stick with AIX if I could, but you know...
politics).

Solaris 10 and the built-in features of ZFS alone have kind of swayed me
towards
Solaris.  It's the only native LVM-based filesystem that I think can
compete
with what I'm used to, namely JFS2.  As for hardware, Sun offers some
pretty
hefty I/O-centric boxes, with a hefty pricetag.  But the pricey p650
that we're
on now has lasted almost 7 years, is still very stable and not breaking
much of
a sweat.  Still, the range of servers that Sun offers (which I don't see
in the
Dell world) is another advantage.

Any thoughts from anyone running a TSM server on Solaris?  We could use
the
insight since I believe we'll be rolling out a development environment
on
Solaris as a proof-of-concept.  Anyone familiar with DB2 performance on
Solaris?

Thanks!
SF

Jim Zajkowski wrote:
 On Feb 25, 2009, at 10:09 AM, Strand, Neil B. wrote:

 consider Solaris

 Actually I'm considering replacing my Linux TSM server with Solaris -
 either SPARC or x86 - predominately because Solaris has a fast TCP/IP
 stack, ZFS, and fewer driver issues than on Linux.  Has anyone also
 moved from Linux to Solaris?

 --Jim


Re: Preferred TSM Platform

2009-02-26 Thread Kelly Lipp
IBM x3850: dual quad-core processors, 16GB, 7 PCI-E slots, with four 2.5" 15K 
SAS 73GB drives (you have to use external storage on this guy), around $14K.  It can 
have up to four quad-core procs and 256GB of memory.

This is one screaming dude when used with TSM. And it's IBM...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Orville Lantto
Sent: Thursday, February 26, 2009 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

This level of performance is pretty near the bottom of the pSeries range; a 
comparable box would be a pSeries 520, which could be had for a similar price.  
I just checked, and a basic 520 is list priced below $12,000.



Orville Lantto  |  Consultant  
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Zoltan 
Forray/AC/VCU
Sent: Thursday, February 26, 2009 11:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

I haven't checked AIX prices lately, but my last Dell 2900 with 8-1TB
drives (for the LZ) and 2-500GB mirrored drives (OS and DB) with 8GB RAM
cost under $11K.  As I mentioned, the RH Linux license is around $50.  I
think it has 2-PCIe slots for the HBA's and 2-GIGe NICS.




Orville Lantto orville.lan...@glasshouse.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
02/26/2009 11:17 AM
Please respond to
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] Preferred TSM Platform






Those high end Windows boxes are priced similarly to equivalent (or
better) AIX boxes.  Check the benchmarks before deciding.

Orville L. Lantto



From: Kauffman, Tom
Sent: Thu 2/26/2009 08:38
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform


Yup.

It boils down to Wanda's statement: "I/O, I/O, it's all about I/O" ---
Wanda Prather

If you can do the work with LTO-1 or -2 drives, or DLT-7000, or similar
speed/capacity, then Windows will work. When you get into
high-speed/high-capacity drives, the Intel/AMD architecture comes unglued.
A single LTO-4 drive will use ALL the I/O bandwidth of a PCI or PCI-X bus,
and a significant chunk of a PCIe(1) bus. The challenge becomes one of
finding a suitable x86 server with multiple PCIe busses in the design.

IBM has the GX series I/O modules, with two PCIe busses each, for the P6
architecture, which allows for significant I/O bandwidth expansion -- for a
cost, of course.

Tom Kauffman
NIBCO, Inc


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Wednesday, February 25, 2009 4:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Preferred TSM Platform

I love it when somebody quotes me!  Somebody is listening.

I had this discussion with one of our customers yesterday.  It really
does/should boil down to the OS experience you have on hand.  Does the AIX
platform have more capacity/performance than the best Windows platform?
I'm guessing it probably does.  But at what cost? And then more
importantly: do either of the platforms have enough for you?  If both do,
then pick the one that makes more sense based on your OS experience.  And
remember: one can always divide and conquer.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Allen S. Rout
Sent: Wednesday, February 25, 2009 1:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

>>>>> On Wed, 25 Feb 2009 15:57:37 +0100, Henrik Vahlstedt s...@statoilhydro.com said:

> Time to quote Kelly...

> "So to me it's either AIX or Windows (yes, you can do a lot of TSM
> on Windows once you get past the bigotry!).  Choose whichever one
> you have the most experience with."

<gollum> Ac!  It BURNS usss, naty windowsss </gollum>


- Allen S. Rout
- Prefers AIX for this. *koff* :)








Re: Changing Storage Pool Status

2009-02-26 Thread Kelly Lipp
Nothing, until reclamation happens on the volumes; then they will be reclaimed 
according to the collocation type selected.  You can force the issue by doing 
MOVE DATA operations on the volumes.  Newly arriving data from clients will be 
placed onto tape according to the collocation type.
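
[In admin-command terms, that corresponds to something like the following
sketch; the pool and volume names are placeholders:]

```
update stgpool tapepool collocate=group   /* new data now collocates by group  */
move data vol0001                         /* optionally regroup an existing volume */
```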

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Lepre, 
James
Sent: Thursday, February 26, 2009 1:06 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Changing Storage Pool Status

Hello Everyone,

 

  My question is: if I have a pool that is collocated, and I change it to
collocation by group, what happens to the data already in the pool?

 

Thank you 

 

James Lepre

Senior Server Specialist

Solix Inc

100 South Jefferson Road

Whippany NJ 07981

Phone 1-973-581-5362

Cell 1-973-223-1921

 

 


  
  
---
Confidentiality Notice: The information in this e-mail and any attachments 
thereto is intended for the named recipient(s) only.  This e-mail, including 
any attachments, may contain information that is privileged and confidential  
and subject to legal restrictions and penalties regarding its unauthorized 
disclosure or other use.  If you are not the intended recipient, you are hereby 
notified that any disclosure, copying, distribution, or the taking of any 
action or inaction in reliance on the contents of this e-mail and any of its 
attachments is STRICTLY PROHIBITED.  If you have received this e-mail in error, 
please immediately notify the sender via return e-mail; delete this e-mail and 
all attachments from your e-mail  system and your computer system and network; 
and destroy any paper copies you may have in your possession. Thank you for 
your cooperation.


Re: Preferred TSM Platform

2009-02-25 Thread Kelly Lipp
I love it when somebody quotes me!  Somebody is listening.

I had this discussion with one of our customers yesterday.  It really 
does/should boil down to the OS experience you have on hand.  Does the AIX 
platform have more capacity/performance than the best Windows platform?  I'm 
guessing it probably does.  But at what cost? And then more importantly: do 
either of the platforms have enough for you?  If both do, then pick the one 
that makes more sense based on your OS experience.  And remember: one can 
always divide and conquer.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Allen 
S. Rout
Sent: Wednesday, February 25, 2009 1:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Preferred TSM Platform

>>>>> On Wed, 25 Feb 2009 15:57:37 +0100, Henrik Vahlstedt s...@statoilhydro.com said:

> Time to quote Kelly...

> "So to me it's either AIX or Windows (yes, you can do a lot of TSM
> on Windows once you get past the bigotry!).  Choose whichever one
> you have the most experience with."

<gollum> Ac!  It BURNS usss, naty windowsss </gollum>


- Allen S. Rout
- Prefers AIX for this. *koff* :)


Re: backups direct to tape

2009-02-24 Thread Kelly Lipp
Interesting.  I wonder why that is?

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Fred 
Johanson
Sent: Tuesday, February 24, 2009 7:22 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

In our experience, when we turned on simultaneous writes for the Lanfree 
stgpool, the backups went to the network.

Fred Johanson
TSM Administrator
University of Chicago

773-702-8464

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kelly 
Lipp
Sent: Monday, February 23, 2009 5:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

I don't think the comment that simultaneous writes won't work for LAN-free is 
correct.  The simultaneous write happens at the storage pool level (the 
COPYSTGpools parameter).

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Monday, February 23, 2009 4:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

Also, are these RMAN backups?  If so, they use the API.  I'm not sure if
simultaneous write works for the API.  Anyone know?  Also, RMAN supports
multiple tape streams.  We are setup to use 4 parallel streams.  We used to
use physical tape, but now the backups just go to serial disk volumes
(devclass=file).  The DBAs here are happy with that.   ..Paul

At 06:14 PM 2/23/2009, Fred Johanson wrote:
> The big gotcha here is that you cannot use the simultaneous write feature
> if the backup is going lanfree.


From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] On Behalf Of Kelly
Lipp [l...@storserver.com]
Sent: Monday, February 23, 2009 1:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

Gill,

I will start by asking: What do you think happens?  The cool thing about
TSM is it will usually do what you think it will do.  If a tape error
occurs, the current transaction (whatever it happens to be) will fail.
What that means will vary.  In the case of a client/agent backup, a write
error on a tape will cause TSM to mount another tape and restart the
failed transaction. Data integrity is the hallmark of the product.

Direct to tape backups are actually cool. In fact, you can write two tapes
simultaneously: the onlinepool and the drpool volumes.  This avoids having
to migrate data from disk to tape and having to back that data up.  In the
case of a 500GB database that will reduce the amount of data flowing
through your server from 1.5TB to 500GB.  Huge savings.  And the write to
two tapes happens at the same speed as to one.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Gill, Geoffrey L.
Sent: Monday, February 23, 2009 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backups direct to tape

I have a question that I hope those using TSM to back up direct to tape
can answer. We don't, and never have, done this here, and I'm not looking
to implement it either. I am really looking for information as to what
happens to the backup if a tape error occurs. (I just know I'm going to
get this question from the DBAs, who have convinced everyone NetBackup
and direct to tape is so great.) I am interested too in whether the TSM agents
have the ability to compensate for this compared to a standard client
backup. Will the agent take this into consideration and continue the
backup with a different tape, or will it die? The same question applies
to the standard backup.



Thanks for your help.





Geoff Gill
TSM Administrator
PeopleSoft Sr. Systems Administrator
SAIC M/S-G1b
(858)826-4062 (office)

(858)412-9883 (blackberry)
Email: geoffrey.l.g...@saic.com


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: backups direct to tape

2009-02-23 Thread Kelly Lipp
Gill,

I will start by asking: What do you think happens?  The cool thing about TSM is 
it will usually do what you think it will do.  If a tape error occurs, the 
current transaction (whatever it happens to be) will fail. What that means will 
vary.  In the case of a client/agent backup, a write error on a tape will cause 
TSM to mount another tape and restart the failed transaction. Data integrity is 
the hallmark of the product.

Direct to tape backups are actually cool. In fact, you can write two tapes 
simultaneously: the onlinepool and the drpool volumes.  This avoids having to 
migrate data from disk to tape and having to back that data up.  In the case of 
a 500GB database that will reduce the amount of data flowing through your 
server from 1.5TB to 500GB.  Huge savings.  And the write to two tapes happens 
at the same speed as to one.
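
[The two-tape write described above is TSM's simultaneous-write feature,
configured on the primary pool.  A sketch only; the onlinepool/drpool names
come from the text above, and the device class and scratch counts are
placeholders:]

```
define stgpool drpool ltoclass pooltype=copy maxscratch=50      /* DR copy pool      */
update stgpool onlinepool copystgpools=drpool copycontinue=yes  /* write both at once */
```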

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Gill, 
Geoffrey L.
Sent: Monday, February 23, 2009 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backups direct to tape

I have a question that I hope those using TSM to back up direct to tape
can answer. We don't, and never have, done this here, and I'm not looking
to implement it either. I am really looking for information as to what
happens to the backup if a tape error occurs. (I just know I'm going to
get this question from the DBAs, who have convinced everyone NetBackup
and direct to tape is so great.) I am interested too in whether the TSM agents
have the ability to compensate for this compared to a standard client
backup. Will the agent take this into consideration and continue the
backup with a different tape, or will it die? The same question applies
to the standard backup.

 

Thanks for your help.

 

 

Geoff Gill 
TSM Administrator 
PeopleSoft Sr. Systems Administrator 
SAIC M/S-G1b 
(858)826-4062 (office)

(858)412-9883 (blackberry)
Email: geoffrey.l.g...@saic.com 

 


Re: backups direct to tape

2009-02-23 Thread Kelly Lipp
And moving it from the old system to TSM is virtually impossible. It would require a 
restore of the data and then an archive of it into TSM. As you state, figuring out 
what data that is is the issue, and it's not easily (if at all) resolvable.  Write 
off the 3000 tapes, keep the last of the catalogs around, and cross your fingers 
that nobody will ask for data from them. I think undergoing a project to move 
that old data is a complete waste of time.

But once on TSM, you'll never have that stupid problem again! Simply continue 
to move your archive data to the latest tape technology as your implementation 
moves.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Gill, 
Geoffrey L.
Sent: Monday, February 23, 2009 1:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

Thanks for the response Kelly. 
I actually thought it worked the way you explained but I did not want to
say something without confirmation since I wasn't exactly sure. I
didn't, however, know that you could write both pools at the same time. 

So this is just another nail I hope to put in the netbackup coffin. What
I have seen with netbackup here is if a tape error occurs the backup
stops, period. What I have to do now is put together a short slide show
on how we will deal with moving 125 nodes to TSM and how to deal with
the data in netbackup we need moved in to TSM. At this point
unfortunately I doubt anyone can tell me anything about the data in
netbackup. Oh sure, I can see the expiration date for tapes but people
around here want to keep 'everything forever' so finding the real data
we need to keep from those 3000 tapes I think is going to be
challenging.

The short term, 2 week and 6 month tapes, in my opinion could expire on
their own since it's going to be some time before everything is moved.
Having both clients on the same machine is not an issue in my opinion.
It's the long term data that we need to find, the REAL long term data,
and move it in to TSM. Finding it is the hard part, moving it is the
easy part.

Geoff Gill 
TSM Administrator 
PeopleSoft Sr. Systems Administrator 
SAIC M/S-G1b 
(858)826-4062 (office)
(858)412-9883 (blackberry)
Email: geoffrey.l.g...@saic.com 

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
Of
 Kelly Lipp
 Sent: Monday, February 23, 2009 11:28 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: backups direct to tape
 
 Gill,
 
 I will start by asking: What do you think happens?  The cool thing about
 TSM is it will usually do what you think it will do.  If a tape error
 occurs, the current transaction (whatever it happens to be) will fail.
 What that means will vary.  In the case of a client/agent backup, a write
 error on a tape will cause TSM to mount another tape and restart the
 failed transaction. Data integrity is the hallmark of the product.
 
 Direct to tape backups are actually cool. In fact, you can write two tapes
 simultaneously: the onlinepool and the drpool volumes.  This avoids having
 to migrate data from disk to tape and having to back that data up.  In the
 case of a 500GB database that will reduce the amount of data flowing
 through your server from 1.5TB to 500GB.  Huge savings.  And the write to
 two tapes happens at the same speed as to one.
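The 1.5TB vs. 500GB figure above is simple arithmetic: staging through a disk pool makes the server handle the data three times (client write to the disk pool, migration to tape, storage pool backup), while simultaneous write to two tapes handles it once. A quick sketch (assumed flow counts, not measured numbers):

```python
# Back-of-envelope: total data through the TSM server for a 500 GB backup.
# Assumes the disk-staging path touches the data three times:
# client -> disk pool, migration disk -> tape, backup stgpool to the copy pool.

def server_data_flow_gb(db_gb, staged):
    """Total GB moving through the server for one client backup."""
    return db_gb * 3 if staged else db_gb

staged = server_data_flow_gb(500, staged=True)    # stage on disk, migrate, copy
direct = server_data_flow_gb(500, staged=False)   # simultaneous write to 2 tapes
print(staged, direct)  # prints: 1500 500
```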
 
 Kelly Lipp
 CTO
 STORServer, Inc.
 485-B Elkton Drive
 Colorado Springs, CO 80907
 719-266-8777 x7105
 www.storserver.com
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
Of
 Gill, Geoffrey L.
 Sent: Monday, February 23, 2009 11:28 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] backups direct to tape
 
 I have a question that I hope those using TSM to back up direct to tape
 can answer. We don't, and never have, done this here and I'm not looking
 to implement it either. I am really looking for information as to what
 happens to the backup if a tape error occurs. (I just know I'm going to
 get this question from the DBAs who have convinced everyone netbackup
 and direct to tape are so great.) I am interested too if the TSM agents
 have the ability to compensate for this compared to a standard client
 backup. Will the agent take this into consideration and continue the
 
 
 
 Thanks for your help.
 
 
 
 
 
 Geoff Gill
 TSM Administrator
 PeopleSoft Sr. Systems Administrator
 SAIC M/S-G1b
 (858)826-4062 (office)
 
 (858)412-9883 (blackberry)
 Email: geoffrey.l.g...@saic.com
 
 


Re: backups direct to tape

2009-02-23 Thread Kelly Lipp
I don't think the comment that simultaneous writes won't work for LAN Free is 
correct.  The simultaneous write happens at the storage pool level (the 
COPYSTGPOOLS parameter).

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Paul 
Zarnowski
Sent: Monday, February 23, 2009 4:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

Also, are these RMAN backups?  If so, they use the API.  I'm not sure if
simultaneous write works for the API.  Anyone know?  Also, RMAN supports
multiple tape streams.  We are setup to use 4 parallel streams.  We used to
use physical tape, but now the backups just go to serial disk volumes
(devclass=file).  The DBAs here are happy with that.   ..Paul

At 06:14 PM 2/23/2009, Fred Johanson wrote:
The big gotcha here is that you cannot use the simultaneous write feature
if the backup is going lanfree.


From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] On Behalf Of Kelly
Lipp [l...@storserver.com]
Sent: Monday, February 23, 2009 1:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backups direct to tape

Gill,

I will start by asking: What do you think happens?  The cool thing about
TSM is it will usually do what you think it will do.  If a tape error
occurs, the current transaction (whatever it happens to be) will fail.
What that means will vary.  In the case of a client/agent backup, a write
error on a tape will cause TSM to mount another tape and restart the
failed transaction. Data integrity is the hallmark of the product.

Direct to tape backups are actually cool. In fact, you can write two tapes
simultaneously: the onlinepool and the drpool volumes.  This avoids having
to migrate data from disk to tape and having to back that data up.  In the
case of a 500GB database that will reduce the amount of data flowing
through your server from 1.5TB to 500GB.  Huge savings.  And the write to
two tapes happens at the same speed as to one.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Gill, Geoffrey L.
Sent: Monday, February 23, 2009 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backups direct to tape

I have a question that I hope those using TSM to back up direct to tape
can answer. We don't, and never have, done this here and I'm not looking
to implement it either. I am really looking for information as to what
happens to the backup if a tape error occurs. (I just know I'm going to
get this question from the DBAs who have convinced everyone netbackup
and direct to tape are so great.) I am interested too if the TSM agents
have the ability to compensate for this compared to a standard client
backup. Will the agent take this into consideration and continue the
backup with a different tape or will it die? The same question applies
to the standard backup.



Thanks for your help.





Geoff Gill
TSM Administrator
PeopleSoft Sr. Systems Administrator
SAIC M/S-G1b
(858)826-4062 (office)

(858)412-9883 (blackberry)
Email: geoffrey.l.g...@saic.com


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: Defining FILE Device class

2009-02-11 Thread Kelly Lipp
Get rid of the double quote before /filedev1 and the space after the comma, and 
you should be fine.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Abdullah, Md-Zaini B BSP-IMI/231
Sent: Wednesday, February 11, 2009 8:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Defining FILE Device class

Hi TSM expertise,

I have an issue when creating a FILE device class with multiple directories 
located in different filesystems.
Please advise.

from AIX
---
r...@bspsap11 # df -k|grep /file
/dev/fstlv_filedev1   103546880 103126320    1%        4     1% /filedev1
/dev/fstlv_filedev2   103546880 103126320    1%        4     1% /filedev2


From TSM server
-

TSM> define devclass FILE_DEV_HR devtype=file directory="/filedev1, /filedev2" 
SHAREd=yes MOUNTLimit=2 MAXCAPacity=40G

ANR8366E DEFINE DEVCLASS: Invalid value for DIRECTORY parameter.
ANS8001I Return code 3.
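Applying Kelly's fix, the DIRECTORY value becomes a single comma-separated token with no embedded space or quote (a sketch of the corrected command, reusing the names from the post):

```
define devclass FILE_DEV_HR devtype=file directory=/filedev1,/filedev2 SHAREd=yes MOUNTLimit=2 MAXCAPacity=40G
```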





Md.Zaini.Abdullah
Seria Head Office
Jalan Utara, Panaga, Seria KB3534, Brunei Darussalam

Tel: +673-3-37 3538 
Email: md-zaini.abdul...@shell.com
Internet: http://www.shell.com


Re: Data Retention Settings: Unintended Consequences

2009-01-28 Thread Kelly Lipp
Boy, it sure looks like you'll have three versions forever on this one.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Nick 
Laflamme
Sent: Wednesday, January 28, 2009 12:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Data Retention Settings: Unintended Consequences

A client's TSM server has the following copygroup settings:

tsm: SERVER1> q copygroup * active f=d

 Policy Domain Name: OPEN_SYSTEM_ENVIRONMENT
Policy Set Name: ACTIVE
Mgmt Class Name: BACKUP_SHARK
Copy Group Name: STANDARD
Copy Group Type: Backup
   Versions Data Exists: 3
  Versions Data Deleted: 3
  Retain Extra Versions: No Limit
Retain Only Version: 91
snip

Because "Versions Data Deleted" is more than 1, and because "Retain
Extra Versions" is set to "No Limit", am I correct in deducing that
TSM will keep three copies of deleted files forever, because "Retain
Only Version" will never become relevant?

Or does TSM use Retain Only Version for all versions (copies) once
there isn't an active file? (That's not what the doc says, of course.)

Thanks,
Nick
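Kelly's reading can be illustrated with a toy model of the copy-group rules for a deleted file. This is a deliberate simplification of TSM expiration (assumed semantics: VERDELETED caps the version count, RETEXTRA ages out the non-newest inactive versions, RETONLY applies only once a single version remains, and No Limit means never):

```python
# Toy expiration model for backup versions of a *deleted* file.
NOLIMIT = float("inf")

def surviving_versions(n_versions, days_since_delete,
                       verdeleted=3, retextra=NOLIMIT, retonly=91):
    versions = min(n_versions, verdeleted)        # VERDELETED caps the count
    if versions > 1 and days_since_delete > retextra:
        versions = 1                              # extra versions aged out
    if versions == 1 and days_since_delete > retonly:
        versions = 0                              # last version aged out
    return versions

# With RETEXTRA=NOLIMIT the extras never expire, so RETONLY never applies:
print(surviving_versions(5, days_since_delete=10_000))  # prints: 3
```

With a finite retextra (say 30 days) the same call eventually drops to zero, which is the behavior the question expected RETONLY to provide.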


Linux GFS File system backup using SAN Agent

2009-01-12 Thread Kelly Lipp
Folks,

Anybody tried this?

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


Re: I'm missing something somewhere -- part 2

2009-01-12 Thread Kelly Lipp
Is cache=yes on that pool?  Are you talking about percent utilized or percent 
migratable?  I'm reasonably sure you know the difference but thought I'd 
confirm...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Kauffman, Tom
Sent: Monday, January 12, 2009 1:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] I'm missing something somewhere -- part 2

I'm looking at 'first causes' for my off-site copy imbalance in my primary 
archive pool - and I've run into something 'interesting'.

Some background -

The archive pool (called ARCHIVEPOOL) has 5 disk volumes, all at 8 GB. Max file 
size is 5 GB. Migration threshold is 60%. Maxproc is 2.

So when we reach 60% or above in the pool, two processes kick off. And we 
migrate all the way down to 0%.

But what I see in the log looks like this:

ANR0984I Process 4758 for MIGRATION started in the BACKGROUND at 02:26:20.
ANR1000I Migration process 4758 started for storage pool ARCHIVEPOOL
automatically, highMig=60, lowMig=0, duration=No.
ANR0513I Process 4758 opened output volume 444019L4.
ANR1001I Migration process 4758 ended for storage pool ARCHIVEPOOL.
ANR0986I Process 4758 for MIGRATION running in the BACKGROUND processed 256
items for a total of 7,094,272 bytes with a completion state of SUCCESS at
02:26:22.
ANR0984I Process 4759 for MIGRATION started in the BACKGROUND at 02:26:23.
ANR1000I Migration process 4759 started for storage pool ARCHIVEPOOL
automatically, highMig=60, lowMig=0, duration=No.
ANR0513I Process 4759 opened output volume 444019L4.
ANR1001I Migration process 4759 ended for storage pool ARCHIVEPOOL.
ANR0986I Process 4759 for MIGRATION running in the BACKGROUND processed 256
items for a total of 8,732,672 bytes with a completion state of SUCCESS at
02:26:25.
ANR0984I Process 4760 for MIGRATION started in the BACKGROUND at 02:26:27.
ANR1000I Migration process 4760 started for storage pool ARCHIVEPOOL
automatically, highMig=60, lowMig=0, duration=No.
ANR0513I Process 4760 opened output volume 444019L4.
ANR1001I Migration process 4760 ended for storage pool ARCHIVEPOOL.

For several hours. For some reason the pool is over 60% full; migration kicks 
in and 15.7 MB migrates; the migration ends; and the pool is STILL over 60% 
full. Why?? More importantly, how do I fix this?
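One reason Kelly's reply asks about cache and about percent utilized vs. percent migratable: with CACHE=YES, files that have already migrated remain on disk as cached copies, so the two percentages diverge. A toy sketch of that distinction (simplified accounting, not the server's actual bookkeeping), using this pool's 5 x 8 GB = 40 GB capacity:

```python
# Toy model: percent utilized counts cached copies, percent migratable does not.
def pool_stats(capacity_gb, migratable_gb, cached_gb):
    pct_util = 100 * (migratable_gb + cached_gb) / capacity_gb
    pct_migr = 100 * migratable_gb / capacity_gb
    return round(pct_util), round(pct_migr)

# Almost everything already migrated but still cached on disk:
print(pool_stats(40, migratable_gb=0.5, cached_gb=27))  # prints: (69, 1)
```

A pool in that state looks well over 60% full while each migration finds only a few MB to move, which matches the log pattern above.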

Thanks -

Tom

CONFIDENTIALITY NOTICE:  This email and any attachments are for the
exclusive and confidential use of the intended recipient.  If you are not
the intended recipient, please do not read, distribute or take action in
reliance upon this message. If you have received this in error, please
notify us immediately by return email and promptly delete this message
and its attachments from your computer system. We do not waive
attorney-client or work product privilege by the transmission of this
message.


Re: Best way to use TSM to move 2Tb of data

2008-12-12 Thread Kelly Lipp
If you assume a file create rate of about 100,000/hour then you are looking at 
a 20 hour restore if all else goes well.  You might squeeze more file creates 
out of your new server, but who really knows?  If you assume a 200 GB/hour 
transfer rate and use image instead, you can cut the restore time in half. You 
can't improve the file create rate by using multiple streams.  In fact, that 
actually reduces the rate.

I'm still advocating the image route.  

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Nicholas Rodolfich
Sent: Thursday, December 11, 2008 3:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

It is ~2,000,000 individual files after hours.
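Kelly's estimates are straight rate arithmetic; a sketch with the numbers from this thread (the rates are the assumed figures quoted above, not measurements):

```python
# Back-of-envelope restore times for ~2,000,000 files / ~2 TB.
files, data_gb = 2_000_000, 2_000
file_create_rate = 100_000   # files/hour: file-level restore bottleneck (assumed)
image_rate_gb = 200          # GB/hour: image restore transfer rate (assumed)

file_level_hours = files / file_create_rate
image_hours = data_gb / image_rate_gb
print(file_level_hours, image_hours)  # prints: 20.0 10.0
```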


Re: Best way to use TSM to move 2Tb of data

2008-12-12 Thread Kelly Lipp
Amen to Dwight's comment!  And can you imagine a filespace with 10M files?  I 
shudder...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Dwight 
Cook
Sent: Friday, December 12, 2008 9:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

I've seen restores of 1+M files take days due to the delays associated with
general system overhead (creating directory entries, etc...) and by days I
mean 5-7+.

And so again, I'll mention...

Just because you CAN put a million or more files on a single drive doesn't
mean it's a good idea!

Dwight

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Friday, December 12, 2008 10:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

If you assume a file create rate of about 100,000/hour then you are looking
at a 20 hour restore if all else goes well.  You might squeeze more file
creates out of your new server, but who really knows?  If you assume a 200
GB/hour transfer rate and use image instead, you can cut the restore time in
half. You can't improve the file create rate by using multiple streams.  In
fact, that actually reduces the rate.

I'm still advocating the image route.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, December 11, 2008 3:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

It is ~2,000,000 individual files after hours.


Re: Best way to use TSM to move 2Tb of data

2008-12-12 Thread Kelly Lipp
AH.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Fred 
Johanson
Sent: Friday, December 12, 2008 10:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

You mean like this?

CRONUSX  Bkup  /mnt/ide0  2  OFFSITEPOOL1  10,222,45  3,529,279.74  3,526,778.09

Or its departmental companion?

ATHENSX  Bkup  /mnt/ide0  1  OFFSITEPOOL   5,011,898  3,380,065.04  3,355,847.95

Or another department that has 10 boxes like this?


OIA      Bkup  \\oia\d$   4  TAPEPOOL      3,947,878  19,719,658.00  19,714,644.36


Or the user who did a network mount of a Time Server to her desktop?  By the 
time I got back from vacation she had created a filespace with 36M files.

Fred Johanson
TSM Administrator
University of Chicago

773-702-8464

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kelly 
Lipp
Sent: Friday, December 12, 2008 11:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

Amen to Dwight's comment!  And can you imagine a filespace with 10M files?  I 
shudder...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Dwight 
Cook
Sent: Friday, December 12, 2008 9:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

I've seen restores of 1+M files take days due to the delays associated with
general system overhead (creating directory entries, etc...) and by days I
mean 5-7+.

And so again, I'll mention...

Just because you CAN put a million or more files on a single drive doesn't
mean it's a good idea!

Dwight

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Friday, December 12, 2008 10:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

If you assume a file create rate of about 100,000/hour then you are looking
at a 20 hour restore if all else goes well.  You might squeeze more file
creates out of your new server, but who really knows?  If you assume a 200
GB/hour transfer rate and use image instead, you can cut the restore time in
half. You can't improve the file create rate by using multiple streams.  In
fact, that actually reduces the rate.

I'm still advocating the image route.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, December 11, 2008 3:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

It is ~2,000,000 individual files after hours.


Re: Best way to use TSM to move 2Tb of data

2008-12-11 Thread Kelly Lipp
How about an image backup?  Eliminate the small file issues on backup and 
restore...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Dwight 
Cook
Sent: Thursday, December 11, 2008 1:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

Is it lots of little files (I know, silly question with it being a windows
file server).
Also, how long is over night?
Is that compressed client data or is it file space data?
Is that a backup or archive?
What is your network?  100 Mb/sec fast Ethernet?  Gig Ethernet?  Teamed
NIC's?
I'd run archives...if you have multiple high level directories that will let
you get away with it, I'd run one archive command against each high level
directory to try to have between 4 and 10 client sessions established with the
TSM server.
Run client compression if you have enough processing power on your windows
box.  (whatever your network/NIC max throughput is... it will typically
move 3 times more data in the same period of time if you have enough power
to compress down the data without slowing down your entire process)

Anyway, as you can see... a few variables involved that you didn't
mention...

Dwight

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, December 11, 2008 1:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Best way to use TSM to move 2Tb of data

Hello All,

I have a client that needs to move a 2Tb volume on their Windows file
server. Their TSM sever is also on Windows w/TSM v5.3.4. What is the
fastest/best  way to accomplish this. We tested an archive but it only got
340Gb overnight. They don't have enough disk based pool space to land it
there so we will have to use tape. I suggest kicking up the
resourceutilization parameter on the client to enable multiple sessions but
I am hoping you guys/gals can teach me something.

Thanks for all your help!!



Nicholas=


Re: Best way to use TSM to move 2Tb of data

2008-12-11 Thread Kelly Lipp
I think the advantage is on the restore: you won't have to create a gazillion 
little files which is actually the bottleneck (typically) in Windows.  The 
backup will be limited to one stream, but that will be faster too, on the order 
of what a GiGE network can optimally do: 200-300GB/hour.  I think end-to-end, 
using image will be quicker.  Perhaps a little slower on the backup, but 
remarkably quicker on the restore.  Now, if the number of files is less than a 
million, say, that might not be so.


Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Dwight 
Cook
Sent: Thursday, December 11, 2008 1:56 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

That will limit you to a single session performing an image backup (won't
it??? I don't use image backup)
Windows clients have become better at pumping data lately, but a single
session still won't come near maxing out a NIC.
We have some multiple-TB SAP databases on windows servers (I know, just
kill me now, please) and, with teamed NICs that are 100 Mb/sec fast
Ethernet (so the team can push 200 Mb/sec), we can shove 55 GB/hr of
compressed client data with about 4 or 5 sessions so that is around 150 GB
of file space per hour.  (we can only get windows clients running at 35
GB/hr best with a single 100 Mb/sec fast Ethernet nic)


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Kelly Lipp
Sent: Thursday, December 11, 2008 2:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

How about an image backup?  Eliminate the small file issues on backup and
restore...

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Dwight Cook
Sent: Thursday, December 11, 2008 1:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best way to use TSM to move 2Tb of data

Is it lots of little files (I know, silly question with it being a windows
file server).
Also, how long is over night?
Is that compressed client data or is it file space data?
Is that a backup or archive?
What is your network?  100 Mb/sec fast Ethernet?  Gig Ethernet?  Teamed
NIC's?
I'd run archives...if you have multiple high level directories that will let
you get away with it, I'd run one archive command against each high level
directory to try to have between 4 and 10 client sessions established with the
TSM server.
Run client compression if you have enough processing power on your windows
box.  (whatever your network/NIC max throughput is... it will typically
move 3 times more data in the same period of time if you have enough power
to compress down the data without slowing down your entire process)

Anyway, as you can see... a few variables involved that you didn't
mention...

Dwight

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, December 11, 2008 1:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Best way to use TSM to move 2Tb of data

Hello All,

I have a client that needs to move a 2Tb volume on their Windows file
server. Their TSM sever is also on Windows w/TSM v5.3.4. What is the
fastest/best  way to accomplish this. We tested an archive but it only got
340Gb overnight. They don't have enough disk based pool space to land it
there so we will have to use tape. I suggest kicking up the
resourceutilization parameter on the client to enable multiple sessions but
I am hoping you guys/gals can teach me something.

Thanks for all your help!!



Nicholas=


Re: Server Platform Upgrade

2008-12-04 Thread Kelly Lipp
If I were going to be in the x86 family, I would move into the larger platforms 
with more processors and more PCI-E slots.  So the HP DL580 I think would be 
your best bet.  That will reduce the 5x cost benefit somewhat but provide you 
with more flexibility.  Consider the IBM x3850 M2 server.  These are great 
boxes and IBM will continue to love and support you.  And perhaps a bit cheaper 
than HP.  Similar to my comment about Windows vs. Unix below, if you have a ton 
of HP stuff in your site now, go with HP.

Linux or Windows then.  Hmmm.  I'm not a Linux guy so I would choose windows.  
If most of your clients are windows and most of your internal IT knowledge is 
Windows, stay Windows.  If you have the Linux expertise, then use that.  Won't 
make much difference to TSM (I know, I know, Windows for I/O sucks and all 
that, which BTW is contrary to what I know and believe...).

You can probably buy two x86 and split the load and still save money over the 
mongo AIX box.  I'd rather do that.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Sam 
Sheppard
Sent: Thursday, December 04, 2008 11:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Server Platform Upgrade

We are currently running 3 TSM servers at Version 5.5, two on z/OS and
the third on a Solaris 10 box.  We have been tasked to combine these
into one.

The current Solaris system is on a Sun V240 w/8GB memory:

   Around 400 clients (combined) including a 3TB Exchange system, and a
  fairly large SAP implementation in development on MS SQL Server,
  several Oracle boxes, and a large number of Windows servers.

   Total database size (Solaris) is around 80GB, expiration runs about
  two hours.  z/OS databases total 50GB.

   8 TS1120 tape drives in a 3494 ATL.

   1.2TB array for storage pools and database on the Solaris server
  which we are currently trying to separate.

   Total daily backup volume is around 1TB with additional weekly
  backups of 9TB.

I am the TSM guy and the z/OS systems programmer and as such don't
really have a feel for hardware sizing or configuration on the Unix side
of things and so have to rely on our Unix guys.  I suggested that AIX
would be the preferred platform for this implementation with another
Solaris box having the advantage of not requiring conversion of the
existing one.  They came up with the following options with their favorite
being the x86(HP) with Linux because it is much cheaper and they claim
would be more powerful. The AIX and Sun configurations are similar in
price at around 5 times the x86.

Thoughts, comments, considerations? What are people using for disk
storage pools and would the internal drives on these boxes be adequate?
I'm also in the process of freeing one array on our ESS800 (Shark) for
fiber connection to this configuration.

Here are the proposed options:

IBM Power 550 Express                 HP DL380 G5
Up to 8 cores and 256GB RAM           Up to 8 cores and 64GB RAM
Six 300GB internal SAS 15k drives     Eight 146GB internal SAS 15k drives
Three PCIe and two PCI-X slots        Four PCIe slots
Dual port 10 GB Ethernet card
4 GB fiber channel cards - x2

Thanks
Sam Sheppard
San Diego Data Processing Corp.
(858)-581-9668


Re: Waiting for access to input volume

2008-11-25 Thread Kelly Lipp
I don't think that's true. The purpose of the new code was to eliminate the 
problem with sequential volumes.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Petrullo, 
Michael G.
Sent: Tuesday, November 25, 2008 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Waiting for access to input volume

Kelly,

The multi-access read only applies to random access volumes, i.e. disk.

Shawn,

This is a flaw of TSM. If a process is using/holding a volume, another
process will not be able to use it until it completes. I have run into
this issue when running an export that was using a volume and then a
migration tried to access the same volume. The migration wasn't able to
get access to the volume until the export finished. I've even gone to
the extent of calling IBM about this issue, and they informed me that a
fix is not even in the works for future versions of TSM. Hope this clears
things up a little.

Regards,
Mike

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Kelly Lipp
Sent: Wednesday, November 19, 2008 5:46 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Waiting for access to input volume

I haven't been following the entire thread, but multi-access read is now
available on certain TSM volumes.  It's a V5.5 feature.  May be time to
upgrade...  May be time to read the release notes at any rate to see if
your problem is addressed by this new feature.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Shawn Drew
Sent: Wednesday, November 19, 2008 3:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Waiting for access to input volume

It's intermittent slow backups that can sometimes occupy the virtual
tape like this.
I would like to make sure that a single slow backup doesn't hold up the
whole Life Cycle.   I would prefer to have the single Backup Storage
pool process fail and move to the next step in the housekeeping script
rather than having to cancel the backup manually when I happen to notice
this happening.  If I can't find a time-out, then I will set up a monitor
script to cancel the process if the wait time gets too large.

Any ideas?
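For the monitor-script fallback, one approach is to scrape the wait time out of QUERY PROCESS output and cancel any process that has been waiting too long. A rough sketch (the message format is taken from the output quoted in this thread; the admin ID, password, and threshold are placeholders, and the dsmadmc option spelling should be checked against your client level):

```python
import re
import subprocess

WAIT_RE = re.compile(r"Waiting for access to input volume\s+(\S+)\s*\((\d+) seconds\)")
MAX_WAIT_SECONDS = 3600  # site policy: cancel after an hour of waiting

def dsmadmc(command):
    """Run one administrative command and return its stdout."""
    result = subprocess.run(
        ["dsmadmc", "-id=admin", "-password=secret", "-dataonly=yes", command],
        capture_output=True, text=True)
    return result.stdout

def cancel_stuck_processes():
    # Process entries are separated by blank lines in the command output.
    for block in re.split(r"\n\s*\n", dsmadmc("query process")):
        flat = " ".join(block.split())        # undo line wrapping
        proc = re.match(r"(\d+)\s", flat)     # leading process number
        wait = WAIT_RE.search(flat)
        if proc and wait and int(wait.group(2)) > MAX_WAIT_SECONDS:
            dsmadmc("cancel process " + proc.group(1))
```

Scheduled every few minutes, this keeps a single slow backup from holding the storage-pool backup step indefinitely.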

Regards,
Shawn

Shawn Drew





Internet
[EMAIL PROTECTED]

Sent by: ADSM-L@VM.MARIST.EDU
11/19/2008 04:01 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Waiting for access to input volume





Perhaps a stuck restore has the volume?   Check Q RESTORE




Shawn Drew [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
11/19/2008 03:45 PM
Please respond to
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


To
ADSM-L@VM.MARIST.EDU
cc

Subject
[ADSM-L] Waiting for access to input volume






Does anyone know how to set the time-out for this?  The device class
Mount Wait doesn't seem to apply


14 Backup Storage Pool  Primary Pool VTL_C1, Copy Pool VTL_C2, Files
                        Backed Up: 7, Bytes Backed Up: 17,871,357,954,
                        Unreadable Files: 0, Unreadable Bytes: 0.
                        Current Physical File (bytes): 10,866,872
                        Waiting for access to input volume K00459L3
                        (27271 seconds). Current output volume: W00472L3.


Regards,
Shawn

Shawn Drew


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in
error, please delete it and immediately notify the sender. Any use not
in accord with its purpose, any dissemination or disclosure, either
whole or partial, is prohibited except formal approval. The internet can
not guarantee the integrity of this message. BNP PARIBAS (and its
subsidiaries) shall (will) not therefore be liable for the message if
modified. Please note that certain functions and services for BNP
Paribas may be performed by BNP Paribas RCC, Inc.



Re: IBM 3200 Tape Library SAS adapters

2008-11-20 Thread Kelly Lipp
SAS is very different from parallel SCSI so all bets are off when comparing the 
two.  An x4 (four-lane) SAS adapter will support 4 SAS drives.  You will need an 
interposer 
plugged into the SAS card which splits the four lanes onto four cables.  Then 
plug each drive's cable into the interposer and you are completely fanned out.  
One lane per drive (each lane is 3Gb/sec transfer rate, well fast enough for 
one LTO4 tape drive).
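Kelly's one-lane-per-drive math can be sanity-checked with a quick back-of-envelope calculation. The figures below are assumed nominal specs (LTO4 native rate of ~120 MB/s; a 3 Gb/s SAS lane with 8b/10b encoding carrying ~300 MB/s of payload), not measurements from the thread:

```python
# Rough throughput check: one 3 Gb/s SAS lane vs. one LTO4 tape drive.
# Assumption: 8b/10b encoding uses 10 line bits per data byte, so a
# 3 Gb/s lane moves roughly 300 MB/s of payload.
SAS_LANE_MBS = 3_000 / 10            # ~300 MB/s usable per lane
LTO4_NATIVE_MBS = 120                # LTO4 native transfer rate, MB/s
LTO4_2TO1_MBS = 2 * LTO4_NATIVE_MBS  # ~240 MB/s with 2:1 compressible data

for label, rate in [("native", LTO4_NATIVE_MBS), ("2:1 compressed", LTO4_2TO1_MBS)]:
    print(f"{label}: drive needs {rate} MB/s, lane offers {SAS_LANE_MBS:.0f} MB/s, "
          f"headroom {SAS_LANE_MBS - rate:.0f} MB/s")
```

Even with fully compressible data the lane keeps about 60 MB/s of headroom, which is why dedicating one lane per LTO4 drive is comfortable.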

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark 
Stapleton
Sent: Thursday, November 20, 2008 10:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] IBM 3200 Tape Library SAS adapters

No docs, but you want as many ports as possible for maximum throughput.
Officially, you can put three drives per HBA port (through a switch, not
Y cables). I tell customers that a maximum of two drives per HBA port
will make them a LOT happier; three LTO4 drives could saturate the
connection, and that's a bad thing with tape.
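Mark's two-drives-per-port rule of thumb can be illustrated with rough numbers. These are assumed nominal specs (a 4 Gb/s Fibre Channel HBA port delivering ~400 MB/s usable; LTO4 at ~120 MB/s native and ~240 MB/s with 2:1 compression), not figures from the thread:

```python
# Rough saturation check: N LTO4 drives sharing one ~400 MB/s HBA port.
PORT_MBS = 400          # assumed usable bandwidth of a 4 Gb/s FC port
LTO4_NATIVE_MBS = 120   # MB/s native
LTO4_2TO1_MBS = 240     # MB/s with 2:1 compressible data

for n in (2, 3):
    native, compressed = n * LTO4_NATIVE_MBS, n * LTO4_2TO1_MBS
    print(f"{n} drives: {native} MB/s native, {compressed} MB/s compressed "
          f"(port limit {PORT_MBS} MB/s)")
# Three drives at native rate already sit near the limit (360 of 400 MB/s);
# any compression pushes demand past it, and a starved tape drive stops
# streaming and starts shoe-shining, hurting both throughput and media life.
```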

--
Mark Stapleton ([EMAIL PROTECTED])
CDW Berbee
System engineer
7145 Boone Avenue North, Suite 140
Brooklyn Park MN 55428-1511
763-592-5963
www.berbee.com


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
 Of Larry Peifer
 Sent: Thursday, November 20, 2008 12:49 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] IBM 3200 Tape Library SAS adapters

 We are in the process of purchasing an IBM 3200 Tape Library with 4
 LTO4 half-height SAS drives.  Our IBM sales rep and tech support rep
 are telling us that we need four 5912 HBAs, one for each drive, to
 support this configuration in our IBM 9117-MMA (p570) host.

 From our review of the tech specs it looks like the 5912 Adapter
 (PCI-X DDR Dual x4 SAS Adapter) should be able to support all four
 SAS tape drives from one card using 2 YO cables, in the same way it
 can support 4 disk drives.  Can anyone point us to someone or some
 docs that can give us more information?

 Larry Peifer
 San Onofre Nuclear Generating Station
 AIX System Admin
 TSM system Admin


Re: IBM 3200 Tape Library SAS adapters

2008-11-20 Thread Kelly Lipp
I'm going with yes, you can, and it should perform just fine.  I don't, however, 
have any firsthand experience with that config.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Larry 
Peifer
Sent: Thursday, November 20, 2008 11:28 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] IBM 3200 Tape Library SAS adapters

So if I understand Kelly correctly, I can have one 5912 going to 4 LTO4 HH
drives running at 3 Gb/sec per drive and not oversaturate the bus or adapter
when all four drives are active.  Do you know anyone actually running this
configuration with a p570?




Kelly Lipp wrote via ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU> on
11/20/2008 09:57 AM, Subject: Re: [ADSM-L] IBM 3200 Tape Library SAS adapters:

SAS is very different from SCSI, so all bets are off when comparing the two.
An x4 (four-lane) SAS adapter will support 4 SAS drives.  You will need an
interposer plugged into the SAS card which splits the four lanes onto four
cables.  Then plug each drive's cable into the interposer and you are
completely fanned out.  One lane per drive (each lane is a 3 Gb/sec transfer
rate, easily fast enough for one LTO4 tape drive).

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Mark Stapleton
Sent: Thursday, November 20, 2008 10:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] IBM 3200 Tape Library SAS adapters

No docs, but you want as many ports as possible for maximum throughput.
Officially, you can put three drives per HBA port (through a switch, not
Y cables). I tell customers that a maximum of two drives per HBA port
will make them a LOT happier; three LTO4 drives could saturate the
connection, and that's a bad thing with tape.

--
Mark Stapleton ([EMAIL PROTECTED])
CDW Berbee
System engineer
7145 Boone Avenue North, Suite 140
Brooklyn Park MN 55428-1511
763-592-5963
www.berbee.com


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
 Of Larry Peifer
 Sent: Thursday, November 20, 2008 12:49 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] IBM 3200 Tape Library SAS adapters

 We are in the process of purchasing an IBM 3200 Tape Library with 4
 LTO4 half-height SAS drives.  Our IBM sales rep and tech support rep
 are telling us that we need four 5912 HBAs, one for each drive, to
 support this configuration in our IBM 9117-MMA (p570) host.

 From our review of the tech specs it looks like the 5912 Adapter
 (PCI-X DDR Dual x4 SAS Adapter) should be able to support all four
 SAS tape drives from one card using 2 YO cables, in the same way it
 can support 4 disk drives.  Can anyone point us to someone or some
 docs that can give us more information?

 Larry Peifer
 San Onofre Nuclear Generating Station
 AIX System Admin
 TSM system Admin

