IBM TSM Server v5.1.5 for Linux

2002-08-17 Thread Zlatko Krastev

Hi all,

for those of you who missed it - TSM is finally available on Linux. I
am waiting to get my hands on it.
http://www.tivoli.com/news/press/pressreleases/en/2002/0813-linux-tsm.html

Zlatko Krastev
IT Consultant



Re: Eternal Data retention brainstorming.....

2002-08-17 Thread Zlatko Krastev/ACIT

Very well written !!!
May I slightly improve it:
1. Mark the diskpool volumes read-only. Migrate the disk pools and mark all
primary volumes read-only (or only the ones which do not have copypool
backups). Check out all read-only tapes.
2. Like your step 1, but take only one DB backup. Keep the rest as you
described, in several copies. Do not forget the OS and TSM documentation CDs
(five years later the investigators may have changed several times and none
of them will know what TSM is or how to beat it - let them RTFM).
3. Mark the diskpool volumes back to read-write.
4. Set up a test server. Disable expiration, restore the (only) DB backup,
modify all mgmtclasses/copygroups/stgpools. Mark the primary tapes (of
backed-up "important" stgpools) unavailable. Back up the DB.

If you cannot afford additional cartridges, or the investigation is supposed
to be short-term, make a few more DB backups and set the reuse delay on the
production server's pools high enough to cover the investigation period. The
"non-important" primary volumes and the "important" copypool volumes then hold
the data for both the production and "investigation" servers. Sit and wait for
it to finish, hopefully sooner rather than later. Skip steps 5-10, ensure step
11 is done and do not dream about step 12. (A rough command sketch follows the
full list below.)
If forced to keep the data long-term regardless of the price:
5. Back up even the "non-important" (non-backed-up) storage pools to
copy pool(s). Check the (read-only) primary volumes back into the production
server's library and mark them read-write again (in the production DB) for
normal day-to-day operations.
6. Mark all primary volumes unavailable and restore the primary pools on the
test server (it has now become a DR test server :).
7. Return the (production) copypool volumes to the vault for normal DR. Delete
the volumes from the DR test server and back up all affected pools to make
copies for "investigation" DR.
8. Make additional copypool copies if necessary for very-long-term data
retention.
9. Backup the DB several times.
10. As your step 2.
11. Get a signature from Legal, as Tab pointed out, stating that they
understand how TSM works and will not bother you to downgrade the production
server.
12. Sit back and relax - you've done the task and covered your aZZ :)
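
For reference, a hedged sketch of the commands behind steps 1-3 and the
reuse-delay variant (pool, device class, library and volume names are all
hypothetical):

   /* step 1: empty the disk pool onto primary tape, then freeze the tapes */
   update volume * access=readonly wherestgpool=DISKPOOL
   update stgpool DISKPOOL highmig=0 lowmig=0
   update volume * access=readonly wherestgpool=TAPEPOOL
   checkout libvolume LTOLIB VOL001 checklabel=no remove=yes
   /* step 2: one DB backup, kept in several copies */
   backup db devclass=LTOCLASS type=full
   /* step 3: put the disk pool back to normal */
   update volume * access=readwrite wherestgpool=DISKPOOL
   /* cheaper variant: just keep emptied tapes from being overwritten */
   update stgpool TAPEPOOL reusedelay=120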

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: Eternal Data retention brainstorming.

What kind of shelf life are you expecting for your media?  Since they've
discovered that optical has data decay, it's not even "forever"!  I've
seen
some others on the list point at it, but how will you restore this data in
10 years?

Here's what I'd look at doing:

1.  Take about 5 dbbackups (or snapshots) and send them to the vault. Take
a few OS level backups of your server (bootable tape images).  Send them,
too.  Send volhist, devconfig, a prepare file, TSM server and client code,
OS install code, everything else that makes your environment your
environment, including the clients, offsite.  This is DR - in the most
extreme sense of the term.  (A rough command sketch follows this list.)
2.  Box up the vault.  Seal the boxes, leave them there.
3.  Start marking offsite volumes as destroyed (or just delete them) in
TSM, and run more Backup Stgpools.  They'll run longer, as you're
recreating the old tapes.
4.  Go back to step 1 and repeat once.  If this data is really that
important to have forever, make sure you can get it back!
5.  Start sleeping - you're going to be WAY behind on that!
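
For reference, a hedged sketch of the commands behind step 1 (device class
name and output paths are hypothetical; PREPARE assumes DRM is licensed):

   backup db devclass=LTOCLASS type=full
   backup volhistory filenames=/tsm/config/volhist.out
   backup devconfig filenames=/tsm/config/devconfig.out
   prepare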

Now, for the people making the requirement - they need to get a contract
guaranteeing access to like hardware to do the restores to.  Not just the TSM
server and tape library, but the clients, too.

Having the data available is one thing, being able to restore it is
another.

Nick Cassimatis
[EMAIL PROTECTED]

Today is the tomorrow of yesterday.



Re: TSM and protocol converters

2002-08-17 Thread Zlatko Krastev/ACIT

The IBM SAN Data Gateway supports the IBM 3575. You can attach both 3575s to
the same SDG.
For the 3583 there is an SDG module with 4 LVD ports which is field installable
in the library (as an upgrade). It is cheaper than the standalone product and
does not occupy additional space.
Both the standalone and module versions are supported from the hardware and
software (TSM) point of view. And I personally have had no problems with them.
For TSM 4.1 you will have to replace the SCSI TSM driver with the FCP+SCSI one
(which is well documented). For v4.2 and v5.1 there is no SCSI-only driver
fileset.
I do not know of a "TSM supported" FC/SCSI converter for the HP DLT 4/40, but
any which works with the library ought to work with TSM (statements on the
Tivoli site are somewhat fuzzy on this topic).
Keep in mind two issues:
- not every SAN component is compatible with every other SAN component. For
example the IBM SDG (Pathlight) will for sure work with IBM 2109 (Brocade
Silkworm) switches but AFAIK will not work with McDATA or Inrange
Directors.
- the number of necessary FC adapters: 4 LTO + 2x4 3570 + 4 DLT ~= 288 MB/s
with compression, so you will need at least 3 FC adapters.
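
From TSM's point of view little changes; assuming hypothetical AIX device
special files, the 4.x-style definitions look the same whether the drives sit
on a local SCSI adapter or behind the converter on the fabric - only the
special files differ:

   /* library and drives seen through the FC/SCSI converter */
   define library dltlib libtype=scsi device=/dev/lb0
   define drive dltlib drive0 device=/dev/mt0
   define drive dltlib drive1 device=/dev/mt1

(At 5.1 the device names move onto DEFINE PATH, but the principle is the same.)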

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:TSM and protocol converters

I'm spec'ing a new TSM server.  To avoid having to spend a fortune on a
huge server just to get the I/O slots I need to support HV Diff SCSI, I'm
thinking of using protocol converters to reduce the number of I/O slots in
the new server.

In other words, the server would provide Fibre Channel or Gigabit Ethernet or
even LV Diff SCSI, but I would connect my existing libraries via their HV
Diff SCSI interfaces, with the protocol converter in the middle.

Assuming this is technically feasible (and I know there are FC to SCSI
converters) would TSM work in such an arrangement?  What would change
relative to simply plugging in a SCSI cable into the server as I have now?

Server is "pSeries" or RS/6000; TSM is currently 4.1 but the new server
would (very soon) enter service at 5.1.

Thanks in Advance,

Tab Trepagnier
TSM Administrator
Laitram Corporation



Re: Eternal Data retention brainstorming.....

2002-08-17 Thread Seay, Paul

This is why I am asking for two new reclamation requirements: reclaim
based on the number of days since the tape was written, and reclaim if the
tape has been mounted more than n times.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 10:58 AM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


That will work for the active data in the old (renamed) filespaces, but I
believe the inactive data in the renamed filespaces will continue to expire
according to the limits set in the management class/copygroup.

Anybody got evidence to the contrary?


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" - Scott
Adams/Dilbert





-Original Message-
From: Slag, Jerry B. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 4:30 PM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


If they tell you the hosts/filespaces, just do a rename of the existing
filespaces.
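
For reference, a hedged sketch of such a rename (node and filespace names are
hypothetical):

   rename filespace FILESERVER1 \\fileserver1\d$ \\fileserver1\d$_legalhold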

-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 2:31 PM
To: [EMAIL PROTECTED]
Subject: Eternal Data retention brainstorming.


Folks,
I have a theoretical question about retaining TSM data in an unusual
way. Let me explain.

Let's say legal comes to you and says that we need to keep all TSM
data backed up to a certain date, because of some legal investigation
(NAFTA, FBI, NSA, MIB, insert your favorite govt. entity here). They want a
snapshot saved of the data in TSM on that date.

Anybody out there ever encounter that yet?

On other backup products that are not as sophisticated as TSM, you
just pull the tapes, set them aside and use new tapes. With TSM and its
database, it's not that simple. Pulling the tapes will do nothing, as the
data will still expire from the database.

The most obvious way to do this would be to:

1. Export the data to tapes & store them in a safe location till some day.
This looks like the best way on the surface, but with over 400TB of data in
our TSM environment, it would take a long time to get done and cost a lot if
they could not come up with a list of hosts/filespaces they are interested
in.

Assuming #1 is unfeasible, I'm exploring other more complex ideas.
These are rough and perhaps not thought through all the way, so feel free to
pick them apart.

2. Turn off "expire inventory" until the investigation is complete. This one
is really scary as who knows how long an investigation will take, and the
TSM databases and tape usage would grow very rapidly.

3. Run some 'as-yet-unknown' "expire inventory" option that will only expire
data backed up ~since~ the date in question.

4. Make a copy of the TSM database and save it. Set the "reuse delay" on all
the storage pools to "999", so that old data on tapes will not be
overwritten.
In this case, the volume of tapes would still grow (and perhaps need to
be stored outside of the tape libraries), but the database would
remain stable because data is still expiring on the "real" TSM database.
To restore the data from one of those old tapes would be complex, as
I would need to restore the database to a test host, connect it to a drive
and "pretend" to be the real TSM server and restore the older data.

5. Create new domains on the TSM server (duplicates of the current domains).
Move all the nodes to the new domains (using 'update node ... domain=...').
Change all the retentions for data in the old domains to never expire. I'm
kind of unclear on how the data would react to this. Would it be re-bound to
the new management classes in the new domain? If the management classes were
called the same, would the data expire anyway?
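
For reference, hedged sketches of the commands behind options 4 and 5 (pool,
device class, node and domain names are hypothetical):

   /* option 4: keep a DB copy and stop tape reuse */
   backup db devclass=LTOCLASS type=full
   update stgpool TAPEPOOL reusedelay=999
   update stgpool COPYPOOL reusedelay=999
   /* option 5: move a node into a duplicate, never-expire domain */
   update node FILESERVER1 domain=LEGALHOLD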

Any other great ideas out there on how to accomplish this?

Thanks,
Ben



GRR - GUIS -etc

2002-08-17 Thread Fred Johanson

In fairness to the WebAdmin, it is useful for accessing my three systems from
my Brother-in-law's house, or my son's dorm room, or a laptop anywhere.  I
usually use the CLI, but, damn, I miss the flexibility of the GUI.


Quoting "Prather, Wanda" <[EMAIL PROTECTED]>:

> Look at all the responses on this thread, and what you see is that we are
> all discussing ways we GET AROUND the admin web interface.
> That should be indication enough to the developers that the product has a
> problem.
>
> Telling prospective TSM customers that "oh yeah don't worry, there are
> plenty of ways AROUND the limitations in the admin interface", isn't
> exactly
> a prime selling point.
>
>
> -Original Message-
> From: Kai Hintze [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, August 14, 2002 12:51 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Antwort: Web Admin Interface - grrr
>
>
> But it is precisely because I _do_ have boatloads of clients that I prefer
> the CLI. I can type "2 or 3 lines of parameters" as someone said a few
> messages back much more quickly than I can click through the web admin
> screens.
>
> More, I seldom type long parameters. I have samples of anything I have
> defined/updated/deleted that I can quickly copy to a text editor, change the
> few characters that change, and dump into a CLI session. I specify
> everything so that if the defaults change they don't take me by surprise,
> and I seldom have to look back at the manuals to find a syntax. The only trick
> is when I change versions: I do have to go through the list and compare to
> the new Admin Reference.
>
> - Kai.
>
> > -Original Message-
> > From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
> > Sent: Tuesday, 13 August 2002 8:38 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Antwort: Web Admin Interface - grrr
> >
> >
> > I'm afraid I'm another vote for the CLI.
>
>  . . .
> >
> > But I don't have boatloads of clients to administer, so I'm
> > probably not
> > seeing the issues the rest of you are fighting. :-)
> >
> > Tom Kauffman
> > NIBCO, Inc
> >
>
Fred Johanson



Re: Multiple logical libraries for 3584

2002-08-17 Thread Zlatko Krastev

You will need multiple control paths to the library only if you want to
partition it. Each TSM server will see its partition as standalone library
with number of drives and slots as assigned to that partition. Detailed
description is available in 3584 Planning and Operator Guide, Chapter 4
"Advanced Operating Procedures, Configuring the Library with Partitions".
The other approach is to use TSM library sharing as Don suggested. This
will need some TSM configuration but will allow you to have single scratch
pool for both servers instead of separate set of scratches in each logical
library (partition).
All roads go to Rome (but there are many of them).

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: Multiple logical libraries for 3584

Don,
I have 2 TSM servers accessing a single 3584 library and I would like to
set
up multiple control paths to the library so that both servers can access
it
equally.  Have you or anyone else done this successfully and do you have
any
information on setting it up?
Thanks,
Gentille

-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 13, 2002 4:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


Don France wrote:

>Yep (to the last question);  you cannot span multiple physical libraries
to
>make a single logical.  You can define multiple logicals within a single
>physical;  that is a common thing I've done, for various reasons.
>
>
>Don France
>Technical Architect -- Tivoli Certified Consultant
>Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
>San Jose, Ca
>(408) 257-3037
>mailto:[EMAIL PROTECTED]
>
>Professional Association of Contract Employees
>(P.A.C.E. -- www.pacepros.com)
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
>Dan Foster
>Sent: Tuesday, August 13, 2002 2:14 PM
>To: [EMAIL PROTECTED]
>Subject: Multiple logical libraries for 3584
>
>
>I've got a question.
>
>One 3584-L32 with 6 drives and one 3584-D32 with 6 drives.
>
>Is it possible to have a logical library that covers 4 drives in
>the L32, and a second logical library that covers last 2 drives
>in the L32 and all 6 drives in the D32?
>
>Or is that not a valid configuration -- ie, do I need to keep
>logical libraries from spanning multiple drives in multiple 3584
>units?
>
>-Dan
>
>
Hi Dan,

Judging by your configuration I believe that you have just one physical
library.  I believe the L32 is the base unit and the D32 is an expansion
cabinet, i.e. the 2 of them are physically attached to one another and
share the same robotics.  Is this correct?  If so then yes, you can
partition the library as you described.

But the real question is what are you trying to accomplish?  If this is
to be connected to 2 different servers there are a few more things that
have to be in place.  If both of these logical libraries are to be used
by the same TSM server I am not sure I understand the rationale for doing
so.  You could just as easily manage the drive utilization through other
TSM server options.  Please explain a little bit more what you are
trying to accomplish.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===



Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)

2002-08-17 Thread Bill Boyer

Until you get to 4.2 or higher AND the APAR is fixed, you could use the
NTBACKUP command to save the system state (as Microsoft calls it) to a flat
file. Run this as a preschedulecmd. We had success with recoveries of
systems that were pre-4.2 at one site. Then your recovery becomes: load W2K,
restore the C: drive, then use NTBACKUP to restore the system state from the
restored file.
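
A hedged sketch of the dsm.opt line (the job name and output path are
hypothetical; the .bkf file is then picked up by the normal incremental of
the C: drive):

   preschedulecmd "ntbackup backup systemstate /j TSMpre /f c:\tsmtemp\systemstate.bkf"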

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Don France
Sent: Friday, August 16, 2002 6:28 PM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


If your backup data was done using 4.2 or later, Wanda's recovery document
(for a basic server) should work fine;  you do a basic/minimal operating
system install, including the device-specific drivers, then fully restore the
boot-system drive (C:, right?); do not re-boot until the System Objects are
restored.

BUT NOTE: There is an APAR for this issue as well, IC34015, which has to do
with restoring (or not restoring) device-specific keys (such as in the case
you describe)...
"This affects 4.2 and 5.1 TSM Clients on Windows 2000 and XP."

Hope this helps.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Michael Swinhoe
Sent: Friday, August 16, 2002 7:50 AM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


I have gone down the windows repair route but this takes ages due to all
the re-boots to load the drivers.  I only have a 72 hour SLA to recover our
EDC so the timescales are tight.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



Rob Schroeder
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
16/08/2002 15:06
Please respond to "ADSM: Dist Stor Manager"
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)





If this is a Win2k machine, have you already done a windows repair after
the restore was complete?

Rob Schroeder
Famous Footwear




Michael Swinhoe
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
08/16/2002 05:51 AM
Please respond to "ADSM: Dist Stor Manager"
Subject: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)






I have hit a brick wall and I need some help.

I am currently trying to restore some Compaq servers running W2K with
different Raid Controllers (3200 & 5300).  I have successfully managed to
recover a Compaq server with a 5300 raid controller onto a Compaq server
with a 3200 raid controller.  However when I try to do the opposite the
server blue screens.  Has anyone tried to do the same and if so which
registry keys need to be changed, or is the solution simpler than this?  Or is
there a piece of software out there that would make the process run more
smoothly without much manual intervention.

Thanks as always,

Mike.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



___

The information contained in this message is confidential and may be
legally privileged. If you are not the intended recipient, please do not
read, copy or otherwise use it and do not disclose it to anyone else.
Please notify the sender of the delivery error and then delete the
message from your system.

Any views or opinions expressed in this email are those of the author only.

Communications will be monitored regularly to improve our service and for
security and regulatory purposes.

Thank you for your assistance.

___



Re: 2 drives are required for LTO?

2002-08-17 Thread Zlatko Krastev

Mark,

I fully agree with your opinion. TSM *can* work with a single drive but it
would be ugly. It is the same waste of resources as assigning a project
manager with a 300k salary to a 5k project (and you can always send a 1 kg
parcel with a truck).
I said it is possible but I will never say I recommend it.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: 2 drives are required for LTO?

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
> The requirement for 2 drives is not mandatory, it ought to be just a
> suggestion. LTO can be used as a standalone drive, TSM can use single
> drive or small autoloader with single drive. So it is NOT required but
> recommended.

Yes, technically a single drive is sufficient to do backups. But then I
could use a pair of nail scissors to mow my lawn...

> - single drive reclamation - define reclamation storage pool of type
FILE.
> On reclamation remaining data is moved to files and later written to new
> tape volume. Drawback: data is not read when written (sequential
> read+write vs. parallel) thus takes more time. Calculate time budget
> around the clock.

FILE storage pool-based reclamation is dog slow, and expensive of disk
space, particularly if you are backing up database-type data of any size.
I've got a customer trying to do this very thing, and reclamation is
extremely slow.

> - single drive copypools - define following hierarchy DISK -> FILE ->
LTO
> (file pool would be also lto reclamation pool). Prevent file->lto
> migration during backups (highmig=100). Perform backups to copypool
after
> node backups finish. Allow migration after backup to copypool finishes.
> Drawbacks: filepool must be large enough to hold all backups data.
Backups
> should not happen during migration because some object(s) may migrate
> without being copied to the copypool. Again time - data have to be
written
> twice through the one-and-only drive. And on the end with one drive
there
> is no way to perform copypool reclamation.

Bingo. A single tape drive, because of the lack of reclamation, means no
usable copy pool, no way to use move data to consolidate primary tape
volumes, and no way to use a restore volume command to rebuild bad primary
pool media from copy pool media--in short, a badly crippled TSM backup
system.

> Conclusion: for a small installation data might be not too much, time
> might be enough for all activities (node backups, copypool backup,
primary
> pool raclamation, migration, DB backup). Thus neither LTO technology nor
> TSM dictate number of drives to be used but only the business
requirements
> you have.

Don't let anyone tell you that a single tape drive is adequate for
anything
resembling a real backup system. If you can't afford a real library, you
can't afford a real backup system. My experience with multiple
environments
calls for a minimum of three drives--two drives for multitape operations,
and a spare in case one drive breaks down.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: LTO question - IBM 3600-109 (was Re: Restore backupset take very long)

2002-08-17 Thread Zlatko Krastev

Paul,

try the HP site. This is an OEM-ed HP 1/9 autoloader. Or use the adsmscsi
driver.
BTW: write your own subjects, because your question nearly slipped by
undetected as an answer to a previous thread.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: LTO question

Hi Paul,
Have you tried disabling the W2K driver?

(Why don't you post a new question to the list?)

Rafael
---
to: Paul Wright <[EMAIL PROTECTED]>
cc:
date: 8/13/2002 1:19:11 PM
subject: Re: Restore backupset take very long



> Hoping someone might have an answer/guidance for me.  We are evaluating
> Tivoli Storage Manager (TSM) v5.1.1.  It seems that the LTO drive/autoloader
> we have though is not recognized by WIN2K/TSM.  Presently we use the tape
> drive on a Netware 5.1/Backup Exec combo.  The IBM LTO drive/autoloader is a
> model 3600-109.  Tivoli's site says that TSM supports the 3600-109.  But I
> can't get Windows 2000 to load a correct driver for it.  IBM doesn't support
> WIN2K drivers for the 3600-109 like it does for the 3580, 3581, 3583, 3584 LTO
> units.  I installed all the updated device drivers for TSM and still a no go
> with it.  Does anyone here have any experience with the 3600-109?  Any help
> would be appreciated.
>
> Thanks,
> Paul





Re: question on configuring large NT client for optimum restore proce ssing

2002-08-17 Thread Zlatko Krastev

Tom,

try to emulate virtualmountpoint through separate node names:
-   for each huge directory (acting as a "virtualmountpoint") define a
node. In its dsm.opt file define
--  exclude X:\...\*
--  include X:\DIRn\...\*           (DIRn = the directory this node covers)
--  exclude.dir X:\DIRa
--  exclude.dir X:\DIRb
--  exclude.dir X:\DIRc             (one exclude.dir per other huge directory)
-   define an other_dirs node with exclude.dir entries for all
"virtualmountpoints" and without the first exclude and the include.
Thus only that directory is included, the existing known directories are
exclude.dir-ed and not traversed. If a new directory is created and someone
forgets to exclude it, it will be traversed, but only the structure will be
backed up and not the files. The last node will back up everything but the
"virtualmountpoint" directories.
You can create several scheduler services and add them to the MSCS resource
group for that (one-and-only) drive.
Collocation will follow by itself.
Disclaimer: I have never solved such a puzzle nor tested it, but it ought to work.
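
A minimal dsm.opt sketch for one such node, assuming hypothetical huge
directories X:\GrpData, X:\UsrData and X:\Archive (this node covers X:\GrpData):

   nodename      FILESRV_GRPDATA
   domain        X:
   exclude       X:\...\*
   include       X:\GrpData\...\*
   exclude.dir   X:\UsrData
   exclude.dir   X:\Archive

The other_dirs node would drop the exclude/include pair and carry an
exclude.dir for every huge directory, X:\GrpData included.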

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:question on configuring large NT client for optimum restore proce ssing

What's the preferred method of configuring a large NT fileserver for
optimum
data recovery speed?

Can I do something with co-location at the filesystem (what IS a
filesystem
in NT/2000?) level?

We're bringing in an IBM NAS to replace four existing NT servers and our
recovery time for the existing environment stinks. The main server
currently
has something over 800,000 files using 167 GB (current box actually uses
NT
file compression, so it's showing as 80 GB on NT). We had to do a recovery
last year (raid array died) and it ran to 40+ hours; I'm getting the
feedback that over 20 hours will be unacceptable.

The TSM server and the client code are relatively recent 4.2 versions and
will be staying at 4.2 for the rest of this year (so any neat features of
TSM 5 would be nice to know about but are otherwise unusable :-)

To add to the fun and games, this will be an MS cluster environment. With
1.2 TB of disk on it. We do have a couple of weeks to play around and try
things out before getting serious. One advantage to the MSCS is that disk
compression is not allowed, so that should speed things up a bit on the
restore.

Tom Kauffman
NIBCO, Inc



Re: 2 drives are required for LTO?

2002-08-17 Thread Don France

Workable copy pools are *possible*, provided the primary storage pool is all
on disk;  that is, all backups go to a disk pool which is sufficiently large
to hold all the versions for retention... and never does migration!

I've had to do this (and more) for a capital equipment-constrained customer;
12 locations around the world, getting the "same" equipment (from Dell) for
service replacement purposes.  Most sites had two drives (one in each 120T),
one site had two drives in a single library (130T), but TWO SITES WITH
SINGLE DRIVE LIBRARY needing onsite backups got 105 GB disk pool (sufficient
to provide 14-day point-in-time restore for approx. 55-65 GB, depends on
daily turnover)... so, with six usable slots, we configured 2-slots for
onsite copypool, 2-slots for offsite copypool, 2 slots for db-backups.

Reclamation was done as if both copy pools were offsite, so fresh tapes
would be cut from the primary disk pool -- so, single drive reclamation was
not required. For the 1-drive, 2-library sites, single drive reclamation was
done AND it worked just fine (using 100 GB disk pool, scheduled for weekends
after disk migration)... all these sites ran fine for over two years, the
only true "glitches" were due to onsite tape management needing occasional
assistance with their DRM actions.  Also, turns out, we rarely had tape
drive failure after the initial install/burn-in period -- even then, we had
less than 5 drives fail across all 12 sites.

NOT the *best* answers, but these configurations did allow us a lights-out
environment --- biggest caveat is when (not if) the drive goes down, no
backups to tape (slightly mitigated by using LAN to store db-backups off to
another server).

Yes, running with less than 3 drives is a challenge, but a lights-out setup
*can* be done with only 1 drive PLUS a large disk pool for the primary
storage pool!
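
For reference, a hedged sketch of that layout (pool, device class and volume
names are hypothetical): the primary pool lives entirely on disk with no
nextstgpool, so it never migrates, and the single drive is used only for copy
pools and DB backups.

   define stgpool DISKDATA disk maxsize=nolimit
   /* disk volume pre-formatted with dsmfmt */
   define volume DISKDATA /tsm/diskdata01.dsm
   define stgpool ONSITECOPY LTOCLASS pooltype=copy maxscratch=2
   define stgpool OFFSITECOPY LTOCLASS pooltype=copy maxscratch=2
   /* nightly, after client backups finish */
   backup stgpool DISKDATA ONSITECOPY
   backup stgpool DISKDATA OFFSITECOPY
   backup db devclass=LTOCLASS type=full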

(My motto, had to be: "If you bring money, we can solve"!!!)


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Saturday, August 17, 2002 8:09 AM
To: [EMAIL PROTECTED]
Subject: Re: 2 drives are required for LTO?


Mark,

I fully agree with your opinion. TSM *can* work with single drive but it
would be ugly. Same waste of resources as assignment to 5k project a
project manager with 300k salary (and you can always send 1 kg parcel with
a truck).
I said it is possible but will never say I recommend it.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: 2 drives are required for LTO?

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
> The requirement for 2 drives is not mandatory, it ought to be just a
> suggestion. LTO can be used as a standalone drive, TSM can use single
> drive or small autoloader with single drive. So it is NOT required but
> recommended.

Yes, technically a single drive is sufficient to do backups. But then I
could use a pair of nail scissors to mow my lawn...

> - single drive reclamation - define reclamation storage pool of type
FILE.
> On reclamation remaining data is moved to files and later written to new
> tape volume. Drawback: data is not read when written (sequential
> read+write vs. parallel) thus takes more time. Calculate time budget
> around the clock.

FILE storage pool-based reclamation is dog slow, and expensive of disk
space, particularly if you are backing up database-type data of any size.
I've got a customer trying to do this very thing, and reclamation is
extremely slow.

> - single drive copypools - define following hierarchy DISK -> FILE ->
LTO
> (file pool would be also lto reclamation pool). Prevent file->lto
> migration during backups (highmig=100). Perform backups to copypool
after
> node backups finish. Allow migration after backup to copypool finishes.
> Drawbacks: filepool must be large enough to hold all backups data.
Backups
> should not happen during migration because some object(s) may migrate
> without being copied to the copypool. Again time - data have to be
written
> twice through the one-and-only drive. And on the end with one drive
there
> is no way to perform copypool reclamation.

Bingo. A single tape drive, because of the lack of reclamation, means no
usable copy pool, no way to use move data to consolidate primary tape
volumes, and no way to use a restore volume command to rebuild bad primary
pool media from copy pool media--in short, a badly crippled TSM backup
system.

> Conclusion: for a small installation data might be not too much, time
> might be enough for all activities (node backups, copypool backup,
primary
> 

Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)

2002-08-17 Thread Don France

Outstanding suggestion, Bill;  that's exactly what we did for a customer
with all Win2K servers at 12 locations around the world.  We even validated
both (a) domain controller (with Active Directory) and (b) Exchange 5.5
server recovery using the NTbackup "trick" for System State backups -- I
think they're still doing it that way, which totally avoids the limitations
of the TSM client!

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Bill Boyer
Sent: Saturday, August 17, 2002 10:07 AM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


Until you get to 4.2 or higher AND the apar is fixed, you could use the
NTBACKUP command to save the system state (as Microsoft calls it) to a flat
file. Run this as a preschedulecmd. We had success with recoveries of
systems that were pre 4.2 at one site. Then your recovery becomes, Load 2K,
Restore the C: drive, then use NTBACKUP to restore the system state from the
restored file.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Don France
Sent: Friday, August 16, 2002 6:28 PM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


If your backup data was done using 4.2 or later, Wanda's recovery document
(for a basic server) should work fine;  you do a basic/minimal operating
system install, including the device specific drivers, then fully restore
boot-system drive (C:, right?), do not re-boot until System Objects are
restored.

BUT NOTE: There is an APAR for this issue, as well; IC34015 which has to do
with restoring/not-restoring device specific keys (such as the case you
describe)...
"This affects 4.2 and 5.1 TSM Clients on Windows 2000 and XP."

Hope this helps.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Michael Swinhoe
Sent: Friday, August 16, 2002 7:50 AM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


I have gone down the windows repair route but this takes ages due to all
the re-boots to load the drivers.  I only have a 72 hour SLA to recover our
EDC so the timescales are tight.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



Rob Schroeder
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
16/08/2002 15:06
Please respond to "ADSM: Dist Stor Manager"
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)





If this is a Win2k machine, have you already done a windows repair after
the restore was complete?

Rob Schroeder
Famous Footwear




Michael Swinhoe
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
08/16/2002 05:51 AM
Please respond to "ADSM: Dist Stor Manager"
Subject: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)






I have hit a brick wall and I need some help.

I am currently trying to restore some Compaq servers running W2K with
different Raid Controllers (3200 & 5300).  I have successfully managed to
recover a Compaq server with a 5300 raid controller onto a Compaq server
with a 3200 raid controller.  However when I try to do the opposite the
server blue screens.  Has anyone tried to do the same and if so which
registry keys need to be changed, or is the solution simpler than this?  Or is
there a piece of software out there that would make the process run more
smoothly without much manual intervention.

Thanks as always,

Mike.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



___

The information contained in this message is confidential and may be
legally privileged. If you are not the intended recipient, please do not
re

Re: Multiple logical libraries for 3584

2002-08-17 Thread Don France

Gentille,

No big deal on 4.1 or later;  you set up one TSM server as the library
manager, so all tape mounts go through that server.  Then the shared drives
are connected to both servers, so when a drive is allocated to a given
server, the other one won't access it... see the Admin Guide (and the 3584
docs) for more details.
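
A hedged sketch of the definitions involved (server names, addresses and
device special files are hypothetical; 4.x-style syntax, and the drives are
defined on both servers with their local device names):

   /* on the library manager */
   define library LIB3584 libtype=scsi shared=yes device=/dev/smc0
   /* on the library client */
   define server LIBMGR serverpassword=secret hladdress=10.1.1.10 lladdress=1500
   define library LIB3584 libtype=shared primarylibmanager=LIBMGR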

Don


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Saturday, August 17, 2002 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


You will need multiple control paths to the library only if you want to
partition it. Each TSM server will see its partition as standalone library
with number of drives and slots as assigned to that partition. Detailed
description is available in 3584 Planning and Operator Guide, Chapter 4
"Advanced Operating Procedures, Configuring the Library with Partitions".
The other approach is to use TSM library sharing as Don suggested. This
will need some TSM configuration but will allow you to have single scratch
pool for both servers instead of separate set of scratches in each logical
library (partition).
All roads go to Rome (but there are many of them).

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: Multiple logical libraries for 3584

Don,
I have 2 TSM servers accessing a single 3584 library and I would like to
set
up multiple control paths to the library so that both servers can access
it
equally.  Have you or anyone else done this successfully and do you have
any
information on setting it up?
Thanks,
Gentille

-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 13, 2002 4:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


Don France wrote:

>Yep (to the last question);  you cannot span multiple physical libraries
to
>make a single logical.  You can define multiple logicals within a single
>physical;  that is a common thing I've done, for various reasons.
>
>
>Don France
>Technical Architect -- Tivoli Certified Consultant
>Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
>San Jose, Ca
>(408) 257-3037
>mailto:[EMAIL PROTECTED]
>
>Professional Association of Contract Employees
>(P.A.C.E. -- www.pacepros.com)
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
>Dan Foster
>Sent: Tuesday, August 13, 2002 2:14 PM
>To: [EMAIL PROTECTED]
>Subject: Multiple logical libraries for 3584
>
>
>I've got a question.
>
>One 3584-L32 with 6 drives and one 3584-D32 with 6 drives.
>
>Is it possible to have a logical library that covers 4 drives in
>the L32, and a second logical library that covers last 2 drives
>in the L32 and all 6 drives in the D32?
>
>Or is that not a valid configuration -- ie, do I need to keep
>logical libraries from spanning multiple drives in multiple 3584
>units?
>
>-Dan
>
>
Hi Dan,

Judging by your configuration I beleive that you have just one physical
library.  I beleive the L32 is the base unit and the D32 is an expansion
cabinet, i.e. the 2 of them are physicaly attached to one another and
share the same robotics.  Is this correct?  If so then yes you can
partition the library as you described.

But the real question is whatare you trying to accomplish?  If this is
to be connected to 2 different servers there are a few more things that
have to be in place.  If both of these logical libraries are to be used
by the same TSM server I am not sure I understand the rational for doing
so.  You could just as easily manage the drive utilization through other
TSM server options.  Please explain a little bit more what you are
trying to accomplish.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===



Re: Backupset of HSM data

2002-08-17 Thread Zlatko Krastev

Look at the management classes. There is an option, MIGREQUIRESBkup, which
defaults to Yes. Thus before migration a normal backup takes place and there
ought to be no problem creating a backupset.
If you want only the migrated data, then EXPort Node FILEData=SPacemanaged is
an answer, as already pointed out.
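
For reference, a hedged sketch of such an export and the matching import on
the new server (node, device class and volume names are hypothetical):

   export node HSMNODE filedata=spacemanaged devclass=LTOCLASS scratch=yes
   import node HSMNODE filedata=spacemanaged devclass=LTOCLASS volumenames=VOL001,VOL002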

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Backupset of HSM data

Hi guys and girls,

is it possible to generate backupsets from migrated HSM data??
We are going to rebuild a TSM environment from scratch but want to retain
the HSM data residing on an optical library. Let's suppose we destroy the
TSM DB and start with a new clean one. TSM will have forgotten all about
the optical volumes as well as the data on them. Backupsets would give me the
opportunity to restore stand-alone into a newly created HSM filesystem.
The first option would be copying the data into a disk filesystem large
enough to hold all the recalled data (130GB in our case) and later on
migrate it back. Apart from that option, what do I do if I haven't got the
necessary disk space available?
I could store on tape and the only thing that came to my mind was
backupsets. I tried to create one using the HSM filespace but it only
wrote
the directories to tape, not the files. Would this be a case for an
export/import?

Thanks

Lars



Re: question on configuring large NT client for optimum restore proce ssing

2002-08-17 Thread Don France

Interesting approach, Zlatko;  I agree, this should work -- if one is very
serious about restore SLA (and simulating Unix filespace-level
collocation -- using node-level collocation with your suggestion); a simpler
approach could accomplish the same effects:  cap drive-letter size at 100 or
250 GB, or maybe even 500 GB, then start a new drive letter. (The
drive-letter is a filespace on Windows platforms... until/unless you get
into the DFS or NTFS virtual volume game and then it's not that much
different).

If the customer restore SLA can "tolerate" 10-20 GB/Hr, a 300 GB cap with
DIRMC "tricks" will be sufficient;  just use high-level directories for
"GrpData" and "UsrData" with multiple restore threads; using classic (rather
than no-query) restore causes tape mounts to be sorted (more think time on
the server), it's another performance trade-off (lots of tapes, collocation,
lots of versions, large number of files being restored -- all interact to
affect restore speed).

I've supported several emergency server recovery situations, recent customer
had DLT, no collocation, DIRMC (worked great), 316 GB was restored in 30
elapsed hours -- that would have been more like 20 hours if they hadn't
over-committed the silo, requiring tape mounts for over 40 tapes in a 29
slot silo.

Tim ==> BTW, if you are going to use NAS filer from IBM, you can run the
backups (from the filer); so, with weekly (or monthly) full-image, in
concert with daily (or weekly) differential-image, plus normal daily
progressive-incremental (for file-level granularity), you'd get the fastest
file restore *and* server recovery possible... probably saturate the
network, getting 30 or 300 GB/hr (100Mbps vs. 1 Gbps).

Also, have you looked at Snapshot and/or SnapMirror support?!?  IBM NAS
comes with TSM Agent at 4.2 level;  IBMSnap, PSM & DoubleTake components
allow you to protect the NAS-based data on the NAS (so you could mirror each
drive-letter or network share) and run backups (image and file-level)
directly from the NAS. This kind of online/nearline recovery could totally
mitigate your restore SLA concerns;  if your SLA states 99% of the time,
recovery must be done in less than 4 hours, you are covered -- you only need
tape restores for site-level or drive-level disaster, which becomes less
than 1% of the failure instances, over time (after the first year).  See the
latest RedBook info about Snapshots & Replication with PSM & Double-Take --
built-in components of the IBM NAS, along with TSM client.

Typical NAS customers get a bunch of snap/mirror capacity (sufficient for a
number of days worth of Snapshots plus some RAID-1 protection of critical
data, RAID-like striping for faster performance, etc.)... so, you probably
won't need TSM tapes as much as in the old days where all the backup data
resides (exclusively) on tapes.  JBOD devices have gotten so cheap, it's
not terribly expensive to keep 7 or 14 days of incremental snapshots online
(in the ~snapshot directory, if using NetApp Snapshot, for example);  once
the end user is told where his snapshot data is stored, he stops calling in
trouble tickets for restores less than the snapshot retention period!

Check out the RedBooks on IBM NAS... up and coming, cheap (JBOD) solutions
to large file servers.  See this one, to start
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246831.html

Looks like you are in for some *actual* fun, with this project!!!


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Saturday, August 17, 2002 9:03 AM
To: [EMAIL PROTECTED]
Subject: Re: question on configuring large NT client for optimum restore
proce ssing


Tom,

try to emulate virtualmountpoint through separate node names:
-   for each huge directory (acting as "virtualmountpoint") define a
node. In dsm.opt file define
--  exclude X:\...\*
--  include X:\DIRn\...\*           (DIRn = the directory this node covers)
--  exclude.dir X:\DIRa
--  exclude.dir X:\DIRb
--  exclude.dir X:\DIRc             (one exclude.dir per other huge directory)
-   define other_dirs node with excludes for all "virtualmountpoints"
and without first exclude and the include.
Thus only the directory is included, existing known directories are
exclude.dir-ed and not traversed. If new directory is created and
forgotten to be excluded it will be traversed but only structure will be
backed up and not files. Last node will backup all but "virtualmountpoint"
directories.
You can create several schedule services and add them to MSCS resource
group for that (one-and-only) drive.
Collocation will come from itself.
Disclaimer: never solved such a puzzle nor tested it but ought to work.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[E

Re: IBM vs. Storage Tek

2002-08-17 Thread Survoy, Bernard J

Another technology to consider is STK 9940B, GA in the September timeframe,
extremely high capacity with enterprise class reliability and performance
characteristics.

200GB native capacity
30MB/sec native transfer rate
2GB FC and fabric capable
Enterprise class duty cycle characteristics
Significantly lower effective media cost vs. LTO (uses same media as 9940A)
Comparable search/positioning characteristics to 9940A

Thanks,
Bernie Survoy
Consulting Systems Engineer
Phone: 216 615-9324
Cell:  330 321-3787
StorageTek
Information made Powerful


-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: August 16, 2002 10:43 PM
Subject: Re: IBM vs. Storage Tek


My personal recommendation is to evaluate an upgrade to 3590H. I am surprised
this is not IBM's first proposal. Arguments:

LTO is definitely a downgrade from what you have now, both in terms of
performance and reliability, as others have already answered. Higher capacity
per volume and a much lower price are its main advantages. The former will
soon change while the latter will never change.
For large data streams native rate of LTO (15MB/s) is higher than for 3590B
(9MB/s) and slightly above 3590E/3590H (14MB/s). Utilizing different
compression algorithm on compressed data the rate of 3590B (27MB/s) is
comparable to LTO (30MB/s) while 3590E/H definitely outperforms it (42MB/s).
But for smaller files you also have to count mount time, and there the 3590s
are more than two times faster than LTO. With LTO it is much
harder to get some benefit from filespace collocation, while this can help
in many cases with high-end tape drives.
Reliability of LTO is not bad but cannot compare to 3590/9x40.
Bigger volumes ought not to be such a problem for reclamation - it should be
nearly the same whether you reclaim 3 volumes of 100 GB each or 30 volumes of
10 GB each at the same average percentage.
If LTO gets damaged you for sure will have to restore much more. But this
is true whenever you go to *any* higher capacity technology - just the
same for 3590E/H or 9940.
Same remark for collocation - it is not related to the technology but only to
the capacity of the cartridges. There are two ways to prevent it - stay with
(older) lower-capacity technology, or limit maxscratch, forcing more than
one node's data to be put on a volume. And you can always define several
pools with different maxscratch settings, so it ought not to be a big problem.

An upgrade to 3590H would be more expensive than brand-new LTO but you will
get a much better product. All the remarks about performance and reliability
ought to be enough to justify the higher price. You will not be forced to get
rid of those 5500 cartridges and buy new LTO ones. You will get 60 GB per
tape, as from the 9940A. SAN attachment also is not a problem with fibre
drives, and dual ports (vs. single-ported LTO) add to reliability.

StorageTek silos with 9840/9940 drives are very good products. If we want
to compare apples with apples we have to compare them with an IBM 3494 +
3590B/E/H drives. Comparison with DLT or LTO would be apples vs. oranges. So
the statement "it costs twice as much" tells you nothing useful. Count again
the need for new 9x40 cartridges and plan what you will do with the old 3590
ones. But for me this would be replacing a Mercedes with a Cadillac - both
are very good, both are very expensive and both give the same results with a
different vendor's approach.


Zlatko Krastev
IT Consultant





Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:IBM vs. Storage Tek

Hello,

I know I've asked about this before, but now I have more information so I
hope someone out there has done this.  Here is my environment for TSM.
Right now it is on the mainframe and we are using 3590 Magstars.  We have
a
production and a test TSM server and each has about 13 drives and a total
of 5,500 tapes used for onsite and offsite tape pools between the 2
systems.  Two scenarios are being considered (either way TSM is being
moved
onto our SAN environment with approximately 20 SAN backup clients and 250
LAN backup clients and will be on SUN servers instead of the mainframe)
Here is what I estimated I would need for tapes:

              3590     9840     9940A    LTO
              10 GB    20 GB    60 GB    100 GB
Production
  Onsite      1375     689      231      140
  Offsite     1600     800      268      161
  Total       2975     1489     499      301

Test
  Onsite      963      483      163      101
  Offsite     1324     664      223      135
  Total       2287     1147     386      236

Grand
  Total       5262     2636     885      537

1. IBM's solution is to give us a 3584 library with 3 frames and use LTO
tape drives.  This only holds 880 tapes and from my calculations I will
need about 600 tapes plus enough tapes for a scratch pool.  My concern is
that LTO cannot handle what our environment needs.  LTO holds 100 GB
(native), but when a tape is damaged or needs to be reclaimed the time it
takes to do either process would take quite some time in my estimation.
Also, I was told tha

RE: Réf. : Re: Backup of Win2K Fileserver

2002-08-17 Thread Mark Stapleton

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Gill, Geoffrey L.
> I just went through this with a Solaris box. Some of the guys here sent me
> info to help me out. Now it works great. You can't believe the greif I got
> for 2 months. Call IBM and ask them what's wrong, do this do that, TSM
> sucks. In the end it was just as I told him on day one.

*Thank* you.

Here's a list, generated from the school of hard knocks, of the effects and
my "99% of the causes" of problems in TSM:

Problem: Slow throughput of all clients to the TSM server.
99% of the causes: misconfigured networks/NICs.

Problem: Manual backups succeed, scheduled backups fail.
99% of the causes: file and logon permission problems (almost always Windows
clients).

Problem: Problems with NetWare client backups and restores.
99% of the causes: old/bad version of the NetWare TSA and SMDR modules.

Problem: "We're always running out of scratch tapes!"
99% of the causes: No one's checking to see if reclamation runs, or no one's
feeding new tapes in, or a lot more data than planned is added to the load.

Problem: "We can't restore a file that was deleted a month ago."
100% of the causes: Retention policies only hold a file for 14 days.

Problem: A file can't be retrieved from a backupset.
99% of the causes: No one can remember what the directory tree looks like,
and no one read the manual where it states that you can't browse through the
contents of a backupset.

I could go on. And probably will.

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: TSM backing up in a DMZ zone.

2002-08-17 Thread Mark Stapleton

> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> Seay, Paul
>
> See my responses inline.
>
>
> From: William Rosette [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 31, 2002 10:01 AM
> To: [EMAIL PROTECTED]
> Subject: Re: TSM backing up in a DMZ zone.
>
>
> HI TSMr's,
>
>   I have a DMZ Zone going in this Tuesday and they are asking me (TSM
> admin) to see if TSM can backup servers/clients in the DMZ zone.  I have
> heard some talk on this ADSM user group about that very thing.
> We are going
> to be using a Cisco Pix Firewall and eventually use a Nokia Checkpoint.  I
> gave them some options but I want to know if there are any more
> options that
> y'all might have.  Here are the ones I suggested.
>
> 1. Put a TSM remote server in the DMZ and share the library
> (3494) with the
> other server.
> This one requires port 3494 to be opened through the firewall so that the
> TSM server can talk to the library.  This one to me has some serious risks
> if the TSM server is broken into.  The reason is there is no
> security in the
> library to block the mtlib and lmcpd interfaces from being used to mount
> tapes belonging to other systems from being mounted in the drives of this
> remote TSM server.
>
> 2. Since most clients (NT & Linux servers) backup in 5 to 15 minutes and
> will not need to be backed up maybe once a week, open an obscure
> port once a
> week for 30 minutes for all backups.
> The port on the TSM server side has to be set for all clients.  But, you
> could create a small second TSM server processs on the machine inside the
> firewall or locate the remote one inside the firewall that uses this
> specific port and only allows connections from the NT & LINIX servers.
> Then, set your firewall up so that only port and connection works
> to the TSM
> server.  This is probably the most secure.
>
> The big negative is that the backup will be slow depending on
> your firewall
> and network.
>
> 3. Port access through Cisco script when backup happens.
> I am not familiar with this but it looks like 2 with some more security.
>
> 4. Direct connect to TSM server.
> Not sure what you mean by Direct Connect.
>
>
> I understand that probably each one has its security leaks and some more
> than others.  Is there someone who can share a good DMZ SLA?

There's another way.

1. Install a second NIC in each client in the DMZ.
2. Install a second NIC on the TSM server.
3. Create a private network for the DMZ clients and the TSM server to use.
4. Designate a TCP port for the server and clients to communicate through.
5. Set client backups to prompted instead of polling.
6. Turn on the second TSM server NIC
7. Run the backup
8. Close the server NIC.

(Steps 6-8 should run as a client schedule event with a PRESCHEDULECMD.)

This obviates the security risks in having a TSM server in the DMZ.
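
A hedged dsm.opt sketch for one of the DMZ clients (addresses and port are
hypothetical; the server's dsmserv.opt would need the same port, and the
firewall/NIC scripting is wrapped around the schedule as described above):

   commmethod        tcpip
   * TSM server's private-network NIC
   tcpserveraddress  192.168.200.10
   tcpport           1510
   * this client's private-network NIC
   tcpclientaddress  192.168.200.21
   schedmode         prompted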

[I'd suggest using IPX only (instead of IP) for the private network comm
protocol (for additional security), but there seem to be some issues with
using IPX only on the TSM 5.1 server.]


--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE