Re: TSM 6.1 - can I do a X server platform database restore?

2009-04-29 Thread Don France
 
Clearly, this is not exactly the case;  LAN-free data is written to tape in a 
format that *any* hosting TSM server can read/restore/migrate/etc.  It's the 
DATABASE data (and metadata) that is stored in formats specific to the 
hosting TSM server's OS platform --- the main issue has been 
big-endian vs. little-endian byte order (though there are some other issues, too 
-- such as LBAs and other OS/filesystem-dependent things).  There were 
rumors that some x86-based OS's might be able to handle "some" items in a 
cross-platform kind of way, but the degree of success has not been reported.

-Don


 -- Original message from Paul Zarnowski : 
--


> Tapes are written in different formats on different platforms, so the
> answer to your question is "no".  You would have to export all of the data
> from Windows and import to AIX, not just the database.
> 
> No, I'm not from development, but I did sleep at a Holiday Inn Express last
> night!
> 
> ..Paul
> 
> At 05:41 PM 3/11/2009, Joerg Pohlmann wrote:
> >Could someone from development please comment - since the database for TSM
> >6.1 is DB2, can I move from a TSM server on for example, Windows to a TSM
> >server on for example, AIX by restoring the database and then updating any
> >tape device names in the path definitions?
> >
> >Joerg Pohlmann
> >250-245-9863
> 
> 
> --
> Paul Zarnowski                          Ph: 607-255-4757
> Manager, Storage Services               Fx: 607-255-8521
> 719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


Re: Is there any way within TSM to terminate a process on excessive read errors?

2009-01-01 Thread Don France
Yep... you will probably want to do just that;  you would normally hope the 
process would end after the first error, but (alas) it's an imperfect world --- 
I'd advise you to open a PMR; this smells like a problem that could/should be 
fixed by ending the process.  

The caveat is that some processes will restart (repeatedly) due to your other 
settings (reclamation thresholds, etc.), which could cause this to recur 
-- though drive-cleaning should have resolved it.
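If you want to keep reclamation from re-triggering while the drive problem is sorted out, the usual trick is to raise the reclamation threshold to 100 and lower it again later (a sketch only; the pool name is made up):

```
/* temporarily disable reclamation on the suspect pool */
update stgpool TAPEPOOL reclaim=100
/* later, restore your normal threshold */
update stgpool TAPEPOOL reclaim=60
```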

-Don

---

Don France
Technical Architect - Tivoli Certified Consultant
Tivoli Storage Manager - Win2K/2003, AIX/Unix, OS/390

Professional Association of Contract Employees (P.A.C.E.) - www.pacepros.com
San Jose, CA
Phone - Voice/Mobile: (408) 348-8926
email: don_fra...@att.net 

-- Original message from "Kauffman, Tom" : 
-- 


> I get frustrated when I see something like this: 
> 
> ANR8944E Hardware or media error on drive DRIVE_02 (/dev/rmt0) with volume 
> 444035L4(OP=LOCATE, Error Number= 110, CC=0, KEY=03, ASC=09, ASCQ=00, 
> SENSE=70- 
> .00.03.00.00.00.00.58.00.00.00.00.09.00.36.00.78.B5.78.B5.00.01.34.34.34.30.33-
>  
> .35.4C.19.00.00.15.04.CB.00.00.00.00.00.80.2B.60.00.00.00.20.DD.20.00.00.00.00-
>  
> .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00-
>  
> .00.00.00.00.00.00.00.37.41.33.31.00.00.00.00.00.00, 
> Description=An undetermin- 
> ed error has occurred). Refer to Appendix C in the 'Messages' manual for 
> recommended action. 
> ANR8359E Media fault detected on LTO volume 444035L4 in drive DRIVE_02 
> (/dev/rmt0) of library GOBI. 
> ANR1080W Space reclamation is ended for volume 444035L4. The process is 
> canceled. 
> ANR1163W Offsite volume 333114L2 still contains files which could not be 
> moved. 
> ANR0986I Process 1714 for SPACE RECLAMATION running in the BACKGROUND 
> processed 
> 80934 items for a total of 13,531,260,308 bytes with a completion state of 
> FAILURE at 10:54:00. 
> 
> At the time I cancelled this a query process showed something in excess of 
> 26,000 files unreadable and I had several hundred entries in the AIX error 
> log. 
> (I have 28,896 occurrences of the ANR8944E error message today, so I presume 
> that's the accurate count). 
> 
> I cancelled the process, the input tape dismounted, the library cleaned the 
> drive - and I processed 717,095 files with no errors. 
> 
> Do I have to come up with a script of my own to catch and kill processes like 
> this? 
> 
> TIA - 
> 
> Tom Kauffman 
> NIBCO, Inc 
> 
>  
> 


Re: SL500 LAN-free without ACSLS???

2007-12-20 Thread Don France
Thanks, Kurt & Gerald... this is excellent --- AND it avoids creating a
single-point-of-failure (with a single ACSLS)... will pass this along to the 
customer.


-- Original message --
From: Gerald Michalak <[EMAIL PROTECTED]>
>
> If you are mounting/unmounting tapes from the current TSM server without
> ACSLS then you don't need it for the Storage Agent/LAN-Free backup.
>
> Just let the TSM manage the library.
>
> I'm doing this currently with an STK-L700 library, 2 TSM servers and 10
> STA's.
>
>
> Gerald Michalak
> IBM TSM Certified Administrator


SL500 LAN-free without ACSLS???

2007-12-12 Thread Don France
Working on a new configuration with a SAN-connected Sun-STK SL500; it's working 
fine, except one client has a huge (2.5 TB) database and needs a smaller backup 
window (ie, LAN-Free might help).

Do I still need ACSLS in order to share library and drives with storage agents 
(or, can the TSM server function as the library manager)???

I've been reading the admin books, did define the library as shared, all is 
working fine;  I notice the TSM server now lists an "Owner" column in the "q 
libv" response, as if I am sharing with another TSM server (though I have only 
one server, so all tapes are owned by it!).
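For reference, the TSM server can indeed act as the library manager for storage agents without ACSLS; a rough sketch of the pieces involved (all names, passwords, and addresses below are made up for illustration):

```
/* On the TSM server (library manager): */
define library SL500 libtype=scsi shared=yes
define server STA_CLIENT1 serverpassword=secret hladdress=client1.example.com lladdress=1500

/* On the client host, point the storage agent at that server: */
dsmsta setstorageserver myname=STA_CLIENT1 mypassword=secret servername=TSMSRV1 serverpassword=secret hladdress=tsmsrv1.example.com lladdress=1500
```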




--
Don France
Technical Architect - Tivoli Certified Consultant
Tivoli Storage Manager - Win2K/2003, AIX/Unix, OS/390
San Jose, CA

Phone - Voice/Cell: (408) 257-3332


Re: backup destination change possible?

2006-12-04 Thread Don France
You should know that many MCs can be defined in a single PD;  read
about this in the online help for the DEFINE MGMTCLASS command.  See, also,
the client option definition for INCLUDE -- the format used to
associate a different MC with various filename/filetype patterns.
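As a sketch, defining an extra MC and binding files to it looks roughly like this (domain, policy set, MC, pool names, and the include pattern are all hypothetical):

```
/* server side: a second MC in the existing domain, pointing at a disk pool */
define mgmtclass MYDOMAIN STANDARD DISK_MC
define copygroup MYDOMAIN STANDARD DISK_MC type=backup destination=DISKPOOL
validate policyset MYDOMAIN STANDARD
activate policyset MYDOMAIN STANDARD

/* client side, in the node's option file: bind matching files to DISK_MC */
include /db/logs/.../* DISK_MC
```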

Regards,
Don

-- Original message from "RODOLFICH, NICHOLAS" <[EMAIL PROTECTED]>: 
-- 


> How is this possible? Only 2 copy groups can be present in a MC. Only 
> one policy set within a policy domain can be active at a time. I don't 
> see how it can be done in the same policy domain. Could you be more 
> specific? 
> 
> Regards, 
> 
> Nicholas 
> 
> 
> -Original Message- 
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of 
> Don France 
> Sent: Friday, December 01, 2006 11:16 PM 
> To: ADSM-L@VM.MARIST.EDU 
> Subject: Re: [ADSM-L] backup destination change possible? 
> 
> You could use management class within the same PD; the same 
> way you'd use a different MC for incrementals or differentials (ie, 
> the logs), just add the appropriate INCLUDE statement in their opt file. 
> 
> Also, consider sending the disk_mc data to a storage pool that migrates 
> to the same tapepool as the other, direct-to-tape TDP nodes (to keep 
> all the TDP data in the same tape pool, for expiration/reclamation 
> efficiency). 
> 
> Don France 


Re: backup destination change possible?

2006-12-02 Thread Don France
You could use a different management class within the same PD;  the same
way you'd use a different MC for incrementals or differentials (ie,
the logs), just add the appropriate INCLUDE statement in their opt file.

Also, consider sending the disk_mc data to a storage pool that migrates 
to the same tapepool as the other, direct-to-tape TDP nodes (to keep 
all the TDP data in the same tape pool, for expiration/reclamation efficiency).
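The disk-to-tape migration chain mentioned above can be sketched like this (pool names and thresholds are made up; the point is NEXTSTGPOOL funneling the disk pool into the same tape pool the direct-to-tape nodes use):

```
/* disk pool for the TDP nodes, migrating into the shared tape pool */
define stgpool TDP_DISK disk nextstgpool=TDP_TAPE highmig=70 lowmig=30
```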

Don France
-- Original message from "RODOLFICH, NICHOLAS" <[EMAIL PROTECTED]>: 
-- 


> Hi everyone, 
> 
> Any assistance will be greatly appreciated. 
> 
> I have 4 TDP Exchange client nodes that are currently 
> backing up directly to tape. Our data outgrew our disk pool so we had to 
> point these directly to tape. Someone on our team wants to split these 
> up so 2 nodes will back up to a disk pool and 2 nodes will go straight 
> to tape. 
> I am thinking that the only way to do this is to generate a new 
> policy domain to do this since a node can only belong to one domain and 
> only one policy set can be active at a time. Am I correct or is there 
> any easy way to do this? 
> 
> Current: 
> -- 
> client1 --> tape 
> client2 --> tape 
> client3 --> tape 
> client4 --> tape 
> 
> Proposed: 
>  
> client1 --> disk pool 
> client2 --> disk pool 
> client3 --> tape 
> client4 --> tape 
> 
> THANK YOU!!! 
> 
> 
> Regards, 
> Nicholas 
> 
> 
> 
> 
> 
> 


Re: TSM 5.3 web gui

2006-03-05 Thread Don France
Hey Allen,

I have your phone (from Oxford);  still OWE you , big time;  let me know where 
to send you (at least) the donera that was on it, and if you want the phone 
back!  Contact me off the list (please), with your addr, eh!?!

The ISC-AC still sucks (at its latest 5.3.2.0 release).  I had some difficulty 
with health-monitor, but fixed it -- the part that sets me "off" now is the 
maint. plan;  it forced and re-forced the "parallel" copy-pools when what I 
wanted was to merge two primaries into a single, offsite copy-pool --  sigh:).  

Maybe you're right;  TSM is too diverse in its installed environments and the 
admins that support it.  But, I gotta say, the old GUI was fine (for some 
tasks), just needed some minor improvements --- like quit collapsing the whole 
tree of policy constructs, so I can change more than one MC without 7 
mouse-clicks.  The IDEA is good, to get a single interface to multiple TSM 
servers, but it sure loses something in the translation to implementation... 
not to mention the Websphere issues you mention!

Best regards,
Don

Don France
email:  don_france at att_dot_net





-- Original message from "Allen S. Rout" <[EMAIL PROTECTED]>: 
-- 


> >> On Fri, 3 Mar 2006 14:28:06 -0500, Richard Mochnaczewski said: 
> 
> 
> 
> > I had some problems with the setup of the Admin Console. I placed a 
> > call with IBM, [...] 
> 
> 
> The ranting about the ISC was legion in Oxford, and clearly a source 
> of frustration for the IBMers there; there were many questions or 
> "I-want" type statements which were answered with "We're doing that in 
> the Admin Console". It's clear that they've placed a lot of effort 
> and thought into the AC design. 
> 
> I'm starting to think that we, TSM admins, are just too varied a bunch 
> to have our needs met within the constraints of one such system and 
> the ideology that must be imposed with it. Maybe IBM can just ditch 
> the GUI idea entirely, and leave the market to the 3rd party tools. 
> Or maybe they can ditch the idea that the GUI is 'full featured', and 
> deploy something intended to coddle folks who are never going to make 
> the effort, and omit the hard bits. 
> 
> 
> 
> I'm in sympathy with the desire to web-ify many administrative aspects 
> of many IBM tools under a unified umbrella. But the One Ring to Rule 
> Them All attitude has well-documented failure modes, and nobody wants 
> to be Sauron at the end. 
> 
> It gets worse when the One Ring is as (pardon me) shaky and 
> unmaintainable as Websphere. We've had deep, deep _DEEP_ problems 
> with that product. A low point was when a level 2 tech in all 
> seriousness told us he wasn't sure the product supported HTTP. 
> 
> No, really. I can't make that up. Our tech replied that maybe they 
> should change the product name to just "Sphere". 
> 
> I've been through the AIX install of the ISC and AC on a disposable 
> LPAR several times now; even with a fresh clean box and support on the 
> line, we've not been able to get a working console up, which I find 
> more amusing than irritating, any more. 
> 
> 
> 
> - Allen S. Rout 


Re: 5.1.6.2 Upgrade

2003-02-28 Thread Don France (TSMnews)
Hi Gretchen (and Gerhard),

Hope this finds you doing well. We've sure missed you at SHARE!

Question: Have you done anything new to "protect" yourself from a "bad" maintenance
level? Was there something special in 5.1.5 or 5.1.6 that attracted you to upgrade so
soon?

I have a customer wanting v5, and I am not anxious (yet) to go beyond 5.1.1.6 -- the
latest level I like from a server-stability perspective -- but they do not have the
5.1.0.0 CD; they have 5.1.5.0, so I am looking for hints/tips to identify a good
level beyond 5.1.5.

Thanks,

Don

==

My only complaint is the speed of the expiration - it's never fast
enough for me.

Gretchen Thiele
Princeton University


Re: Move from 3590 to LTO or 9840/9940

2002-09-06 Thread Don France

Yep... I am working with just such a customer;  getting rid of 3590E's (9
drives, 7 SCSI, 2 Fibre) in their 3494 silo, replacing them with an STK SN6000 (SAN)
virtualizing the PowderHorn silo, 14 drives (9940A's, maybe upgraded to B's
later).  LTO is not the industrial-strength quality of 3590 or 9840/9940,
period... but even with the best tape technology, you should still consider
copy pools (for offsite and/or protection against media failure during
restore of production data).  Also, we are installing Gresham EDT
single-server (to start), in preparation for LAN-free and library sharing
(with a 2nd TSM server).

Absolutely... always mirror (with TSM) your db & log; also, search the
archives for more "best practices" kinds of things, like copy pools,
roll-forward mode, striping considerations, etc.  To move nodes from the
mainframe to Sun, get on 5.1 first;  this version has a one-step
export/import feature (rather than the old two-step process with
server-to-server via virtual volumes)... it's the only way to get your nodes'
data into a TSM server on a dissimilar platform.
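The 5.1 one-step export mentioned above works roughly like this, assuming a server-to-server definition to the target already exists (node and server names are made up):

```
/* on the source server: send the node's definitions and data straight
   to the target server, no intermediate export media needed */
export node NODE_A filedata=all toserver=SUN_TSM
```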

This customer currently does about 1 TB of backups per night, is doing okay
backup-window- and reclamation-wise, and has now embarked on a server
consolidation plan that is adding about 800 to 1,000 NT servers to the TSM
environment.  I have been researching (via dsmaccnt.log and the summary table)
their data volume, file space occupancy, and retention policies to come up
with a data migration plan.  We are keeping the existing AIX box (S80 on SP
switch/complex), so it's a simple matter of (a) putting the 9940's into production
(using a new storage pool), (b) waiting a couple of weeks, so normal attrition
will eliminate all but the current client disk occupancy's worth of backup
data, then (c) letting storage pool migration run (define the new pool as the
nextstg of the old stgpool) on one or more weekends (we figured about 10 TB
per 72-hour period -- that's with a conservative 5 MB/sec throughput per
3590 drive).

For more complete answers to your laundry list of questions, I suggest you
try using ServerGraph (that's what we're planning to do) -- else, just hack
your way thru the myriad of data to answer the Q's for your environment.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Remco Post
Sent: Wednesday, September 04, 2002 9:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Move from 3590 to LTO or 9840/9940


On Wednesday, September 4, 2002, at 03:13 , Joni Moyer wrote:

> Hi everyone!
>
> I know that I have asked questions about his before, but I am now
> looking
> for individuals that have done this conversion and what
> experiences/opinions you may have on this topic.  I was wondering if
> anyone
> has gone from 3590 Magstar tape cartridges to LTO or 9840B or 9940A?  If
> so,
> could you answer the following questions for me so that I can get an
> idea
> of what your environment is like and if it is similar to mine?  Thanks
> for
> any input  If you have gone from any tape device to LTO, 9840B or
> 9940A
> please let me know of you experiences also.
>
> We have 2 TSM servers (1 production, 1 test) that we will be moving from
> the mainframe to SUN servers on the SAN.  We will be doing 8 LAN-free
> backups.  We back up approximately 1 TB per night with backups and
> archives.  We also have about 200 clients.  We currently have 3590
> Magstars
> and it takes about 1/2 - 1 hour to reclaim a tape that is 40%
> utilized.  We
> don't mirror our DB/log volumes.  If you have any suggestions on a move
> from the mainframe to a SUN server I would also appreciate this a
> lot
> Thanks again!

Mirror the db volumes. That is one thing you'll need to do for sure. In case
you lose a disk, you're screwed, and with the amount of data you're
moving each night, that means guaranteed loss of data that is
unrecoverable.

We use both 3590E and 9840 on our TSM server; both work great. About
comparable in speed. I think the 9840's compression is just a touch
better than the 3590E's, but not by much.

>
> 1.  What are the sizes of the files that you back up?  Are they all
> small
> files or do you also have large databases being backed up or archived?
>
> 2.  How much data are you backing up per night?  And also archiving?
>
> 3.  What is you maximum tape mounts per hour?  How many tape drives do
> you
> use concurrently?
>
> 4.  How long does it take you to reclaim a tape that is about 50% full?
>
> 5.  Do you consider your type of media reliable?  How many bad tapes
> have

Re: BareMetalRestore

2002-09-06 Thread Don France

For those who haven't learned (yet), Microsoft (in Win2K) and TSM (in 5.1) have taken
steps to address this;  Win2K allows restore directly to the existing C-drive,
whereby the backup product brings down system objects in a way that gets
them restored at the next reboot (except for the hardware-specific pieces).
This can be (and has been) scripted (by at least a few customers) in such a
way that server recovery is possible -- the only limitation is that TSM can only
handle non-authoritative restore of Active Directory (and other "shared"
registry/system objects).

Solaris has JumpStart, HP has Ignite-UX, AIX has mksysb -- they all have
their limitations, but that just makes our life a bit more interesting.

I helped one customer deploy the old ADSM-pipe program to create a
mksysb-like package for Solaris' equivalent of rootvg (and he liked it
a lot!)  Another customer engaged Sun professional services to script
together the configs for their E10K nodes and E450's (about 40 machines) so
they could periodically update a "standard image" JumpStart CD (for each of
the two machine types); last I heard, they were happy with that solution
for their DR "server recovery" needs.

At SHARE, IBM/Tivoli acknowledged they were working on a BMR solution for
Win2K;  no commitment to ever delivering said solution, but (at least)
they're looking at it.  Win2K is (essentially) the only platform that
doesn't contemplate bare-metal restore in a mksysb-like fashion.

For my customers, I still advise them to use NTbackup.exe for the System
State backups (to a file that gets picked up by c$ incremental) -- in order
to allow point-in-time, authoritative restore of system objects.  We
developed that approach (and discussed it, on this list) almost two years
ago -- we certified it for point-in-time recovery of AD, DC's and Exchange
Server in December, 2000.
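The NTbackup approach described above amounts to something like the following, scheduled to run before the nightly incremental (the job name and output path are examples only):

```
rem Write System State to a .bkf file on C:, so the regular TSM
rem incremental of c$ picks it up as an ordinary file:
ntbackup backup systemstate /J "SysState" /F "C:\adsm\sysstate.bkf"
```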

This stuff is a royal pain, but not exactly rocket-science... just takes
proper funding and decisive commitment by platform-specific customer
personnel.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Wednesday, September 04, 2002 10:46 PM
To: [EMAIL PROTECTED]
Subject: Re: BareMetalRestore


The issue is actually more complex for IBM/Tivoli.  They typically do not
implement solutions that have potential integrity issues and they are
targeted toward enterprise recoveries in case of a disaster.

Unfortunately, none of the Disaster Recovery providers recommend BMR because
of HAL issues in the windows world and it being much easier to guarantee a
successful recovery for UNIX systems by restoring to an alternate drive and
booting.  What the DR providers do like is the mksysb of AIX.  That is a
great tool.

Tivoli does recognize the fact that customers must have a bare metal
recovery to identical hardware for windows.  I do not know when, but they
will eventually have a solution for this.  I believe this is the convenience
case.

If you are a business partner, you should be making the business lost case
to Tivoli because of the missing BMR capability.  I encourage all business
partners to make that plea with supporting documentation to Tivoli.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 04, 2002 11:16 PM
To: [EMAIL PROTECTED]
Subject: Re: BareMetalRestore


There is a problem: other software, like BrightStor and NetBackup,
supports this function. Since the Windows users are getting more numerous, we cannot
ignore the requests from them. I lost about 4 cases to BrightStor and
NetBackup because of the lack of this function. TSM is a
very good solution, but is still not a complete solution. People do not think
about disasters and disaster recovery; they just think of convenience,
especially when they spend lots of money to build a solution.

Mephi Liu

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 5:07 AM
To: [EMAIL PROTECTED]
Subject: Re: BareMetalRestore


IBM is the only company that provides this capability for its operating
systems.  Standalone restore on the mainframe, mksysb on AIX.  These are
included free with the OS.  This is a OS vendor issue.  They simply do not
recognize the benefits of the capability and are not focused on SAR because
they do not see it as a problem.

The place where Tivoli needs to step in is making things like mksysb integrated
as a special backup capability, managing the associated media, and
providing a bootstrap wrapper to invoke the native system's capability if they
are ever creat

Re: New and probably a simple question....

2002-09-06 Thread Don France

CIFS and NFS won't suffice if you care about your ACL's.

The best example I've seen is a customer who bought enough NetApp capacity to
hold (a) 7 days of \UsrData and (b) 14 days of \GrpData;  check out the
SnapShot feature (it stores files that become inactive during the NetApp
"daily incrementals" in a read-only directory, accessible under the
"~snapshot" directory in the user's home directory, in the case of a
person's \UsrData stuff).

This same customer is now planning for the new R100 as a remote-site,
read-only mirror -- which enhances his site DR recoverability, and has a
cool procedure for "seeding" the initial "full" using tapes (rather than
sending it across the network).

So, in Bay Area, at least, it seems the NetApp sales folks have learned to
sell double (or more) the needed capacity -- so, recent-term backup data is
immediately available without a tape mount.

Now, this same customer is (also) looking to replace all their 3575
libraries (8 or 10) with LTO (IBM) drives inside an STK-L700E, 700-slot
silo.  Somewhat off-topic, this is my segue into pointing out the need to ALSO
re-think your solution when considering the huge capacities of the latest media
(LTO at 100 GB, 9940A at 60 GB, 3590-K at 40 GB -- and it continues).  NetApp
for day-to-day restores (from ~snapshot), LTO for removable media (onsite +
offsite copies, for DR and protection from media failures).

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Thursday, September 05, 2002 11:16 PM
To: [EMAIL PROTECTED]
Subject: Re: New and probably a simple question


In addition to backing up using the NDMP interface you can backup the
standard client backup way through a share.  However, the functionality to
be able to restore files from the image is coming.  I think within a year.
This was discussed in San Francisco at the SHARE meeting a couple of weeks
ago.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Wheelock, Michael D [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 04, 2002 11:18 PM
To: [EMAIL PROTECTED]
Subject: New and probably a simple question


Hi,

We are looking at a TSM solution here at our facility.  We are also looking
at reorganizing our file shares onto a Network Appliance platform.  From a
thorough reading of the TSM 5.1 manuals, it seems that TDP for NDMP only
supports image backups.  Needless to say on a busy fileserver that isn't
going to fly.  While it might be a good disaster recovery solution, it is
not the right one for day to day operations.

My question is, how do most people back these things up?  Do you use a CIFS
or NFS share and backup that way?  Or is there something I am missing?
Thanks in advance.

Michael Wheelock
Integris Health of Oklahoma



Re: test for DRM

2002-09-03 Thread Don France

The recommended way to do a DR test is to take your copy pool (or a subset,
or a special set) to another machine (NOT YOUR PRODUCTION box); whether or
not you decide to use DRM, follow the instructions found in the Admin
Guide -- this is about the most well-written piece of info across all the
books; even if you are not using the DRM component, it will teach you the
essential parts of performing a successful DR exercise.

Using a backup of your production TSM db, along with the other essential
config files, you install TSM and load the db on an alternate (DR-test)
machine;  if you're platform- and TSM-savvy, you could just install it on
the same system and even use the same library, to do an informal, in-house
test.  It's this alternate system that gets the "DR" treatment of marking
volumes destroyed, etc.;  but proceeding down this path without first
ensuring your backups are being done successfully will only lead to
disappointment... like putting the cart before the horse.
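The core of the "DR treatment" on the alternate server is roughly (pool and date values are illustrative):

```
/* bring the production db backup up on the DR-test server */
dsmserv restore db todate=today

/* then, inside that server instance, mark the primary tape volumes
   destroyed so restores are forced to go to the copy pool */
update volume * access=destroyed wherestgpool=TAPEPOOL
```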

Meanwhile, Sun (their website, doc, or PS folks) can help you with the specs
needed to configure your solution;  specifically, total system configuration
can be thrown off balance by inadequate distribution of the component
loads -- especially, NICs need healthy sizings of CPU power (as with
Gigabit Ethernet cards).  JBOD is great, but in some cases you just need
protection -- consider RAID 0+1 (or go straight to tape for data that's
mission-critical and cannot simply be re-backed up the next night).

Hope this helps.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Chetan H. Ravnikar
Sent: Saturday, August 31, 2002 9:09 AM
To: [EMAIL PROTECTED]
Subject: test for DRM


Hi there and thanks in advance for all your tips and recommendations


We have a huge, newly distributed TSM setup, with servers spread across the
campuses. We recently moved from 3 ADSM 3.1 servers to 9 TSM 4.2.2
servers, all direct-attached Sun 280r (Solaris 2.8), Sun T3, and Spectralogic 64K
libs.

I have a few questions

1. We have TSM working on Solaris 2.8 with Sun T3 storage for mirrored DB
   and storage pools. Our performance is nowhere close to Sun's
   recommended T3 sustained write rate, which is 80MB. Recovery logs are on
   external D130 disk-packs.

   Has anyone seen a setup with Sun, and is this normal? My writes to
diskpools are at 20 to 30 MB and that is slow. I have a raid5 setup for
the storage pools;
   Tivoli suggests JBOD for storage pools rather than raid5!? But how do I
protect myself from a disk failure on a critical quarterly financial backup,
since the source gets overwritten as soon as they throw the data onto my
storage pool T3 (primary)?

2. One such setup has a StorageTek L7000 lib, and my customer wanted me to
prove that the tapes from offsite do work.

Tivoli suggests that I not test DRM on a production system. But I had
no choice but to at least test for bad media on the primary tape pool, if any;
so I went ahead and picked *a* node.

With a select statement, I marked all the tapes destroyed in the primary
tape pool (for that node), and started a restore of a filesystem. Prior to
that, I had a bunch of tapes recalled from the off-site pertinent to the same
node.
I had them checked in as private and waited to see whether TSM would pick those
tapes since the onsite ones were marked destroyed. This process has been rather
lengthy and tedious, and unsuccessful.

Has anyone done a simpler test for bad media, to prove that the
off-site tapes do work? Needless to say, the test I performed came back with
data integrity errors, and my customers are not happy; even with all traces
set up, Tivoli was unclear how that happened.

(Tivoli claimed, there could be a flaw in my DRM process)

3. The last question: during a copy storage pool process, if I *cancel*
the process (since it took days), the next time I start it (manually or via a
script) does it pick up from where it stopped?


Thanks for all your responses; forgive me, my knowledge is pretty limited
and I started learning Tivoli when I started this project.

Cheers..
Chetan



Re: How to list files backed up

2002-09-03 Thread Don France

If you use the GUI, you can click on the column heading "Backup Date";  it
will sort the list.  BTW, if your system is setup properly, you can also
just "grep" the dsmsched.log file to find messages for a given file --
combined with tail, you can zero in on the specific date in question.
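The grep-the-schedule-log idea looks something like this (the log lines, file names, and dates below are fabricated samples of the kind of entries dsmsched.log contains; real logs will differ):

```shell
# Fake sample of dsmsched.log entries, for illustration only:
cat > /tmp/dsmsched.log <<'EOF'
08/30/2002 21:05:11 Normal File-->        4,096 /home/rikk/report.doc [Sent]
08/30/2002 21:05:12 Normal File-->       10,240 /home/rikk/data.xls [Sent]
08/31/2002 21:05:09 Normal File-->        4,096 /home/rikk/report.doc [Sent]
EOF

# All files sent on a given date:
grep '^08/31/2002' /tmp/dsmsched.log | grep 'File-->'

# Recent history of one specific file:
grep 'report.doc' /tmp/dsmsched.log | tail -n 2
```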

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cahill, Ricky
Sent: Friday, August 30, 2002 12:53 AM
To: [EMAIL PROTECTED]
Subject: How to list files backed up


I must be missing something really obvious, as all I want to do is list all
the files that were backed up by a node in its last backup, but I can't seem
to find any simple way to do this.

Heelp

Thanks in advance

  ..Rikk









Re: TSM upgrade

2002-09-03 Thread Don France

I suggest you install the TSM-4.1 (not 5.1), restore the db from "old" TSM
server as part of the migration;  that way, (a) you preserve all the old
backups (without need to export/import), and (b) you get the added
space/performance on Win2K that you currently lack on the old, NT box.
After successful "migration" to the new TSM server box, upgrade to 5.1.1.5
(or later)... read the README docs (server first, then clients later is only
*one* way to do it) about all the migration considerations.  In this case,
you *may* be required to audit the db before upgrade will be allowed to
proceed.
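A rough outline of that migrate-by-db-restore sequence (device class and file names here are examples only -- follow the Quick Start and README for your exact levels):

```
rem On the old NT server (TSM 4.1):
backup db devclass=3590class type=full
backup volhistory filenames=volhist.out
backup devconfig filenames=devcnfg.out

rem Copy volhist.out and devcnfg.out to the new Win2K box, install the
rem SAME 4.1 server level there, then from the server directory:
dsmserv restore db

rem Only after a clean restore and a test client restore, upgrade to 5.1.1.x.
```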


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mario Behring
Sent: Monday, August 19, 2002 7:49 AM
To: [EMAIL PROTECTED]
Subject: TSM upgrade


Hi list,

I have to upgrade my TSM Server and clients from 4.1 to 5.1. The new
server, TSM 5.1, will be running on a different machine under Windows 2000
Server. The old one is now running on a Windows NT Server system. My
storage unit is a IBM 3590 tape library (SCSI connected). The TSM 4.1
database is 17GB in size and the Recovery Log is 2GB.

Do you guys have any tips on how I should do this?  I mean, which is the
best and most secure way to do it.  I've heard that I cannot simply backup
the TSM 4.1 database and restore it on the TSM 5.1 server.  And I can't
install TSM 5.1 over 4.1 because the old server is... well... old, and
there is no space left on the disks.

Any help will be appreciated.

Thanks.

Mario Behring




Re: Eternal Data retention brainstorming.....

2002-09-02 Thread Don France

I like Wanda's export solution the best.  It's not the ideal answer, and
(it's been my experience that) many customers don't care about the inactive
versions, just the archives (snapshots of the db's) and the active versions
of file served data... so, you could export only the active versions.
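For the export-only-active-versions variant, the relevant option is FILEDATA=BACKUPACTIVE on EXPORT NODE; the node and device-class names below are invented examples:

```
export node FILESRV1 filedata=backupactive devclass=3590class scratch=yes
```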

I worked with a customer that had a bit of this issue -- the recommended
solution was to start a new TSM server, with a new db;  the old server just
stops getting used, and all mgmtclass retention is set to the length of time
requested, so it becomes a stand-by, restore-only instance of the TSM db --
it doesn't need to be started unless a restore request comes in.  With this
solution, the db stops growing, the data continues to be available, and the
policy domain/mgmt-class update produces the desired retention -- with no
new data, the expiration process does not need to run at all... hence
preserving visibility of the inactive versions (even if you fail to change
retention!).

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
bbullock
Sent: Thursday, August 15, 2002 1:45 PM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


True, that would keep the last active versions, but in this case
they want everything that was backed up to TSM as of a certain date, even
the inactive versions. If I rename the filesystem, the inactive versions
will still drop off as the expire inventory progresses. :-(

Ben

-Original Message-
From: Doug Thorneycroft [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 2:30 PM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


how about renaming the filespace, This will keep your active versions.

-Original Message-
From:   bbullock [SMTP:[EMAIL PROTECTED]]
Sent:   Thursday, August 15, 2002 12:31 PM
To: [EMAIL PROTECTED]
Subject:Eternal Data retention brainstorming.

Folks,
I have a theoretical question about retaining TSM data in an unusual
way. Let me explain.

Let's say legal comes to you and says that we need to keep all TSM
data backed up to a certain date, because of some legal investigation
(NAFTA, FBI, NSA, MIB, insert your favorite govt. entity here). They want a
snapshot saved of the data in TSM on that date.

Anybody out there ever encounter that yet?

On other backup products that are not as sophisticated as TSM, you
just pull the tapes, set them aside and use new tapes. With TSM and its
database, it's not that simple. Pulling the tapes will do nothing, as the
data will still expire from the database.

The most obvious way to do this would be to:

1. Export the data to tapes & store them in a safe location till some day.
This looks like the best way on the surface, but with over 400TB of data in
our TSM environment, it would take a long time to get done and cost a lot if
they could not come up with a list of hosts/filespaces they are interested
in.

Assuming #1 is unfeasible, I'm exploring other more complex ideas.
These are rough and perhaps not thought through all the way, so feel free to
pick them apart.

2. Turn off "expire inventory" until the investigation is complete. This one
is really scary as who knows how long an investigation will take, and the
TSM databases and tape usage would grow very rapidly.

3. Run some 'as-yet-unknown' "expire inventory" option that will only expire
data backed up ~since~ the date in question.

4. Make a copy of the TSM database and save it. Set the "reuse delay" on all
the storage pools to "999", so that old data on tapes will not be
overwritten.
In this case, the volume of tapes would still grow (and need to
perhaps be stored outside of the tape libraries), but the database would
remain stable because data is still expiring on the "real" TSM database.
To restore the data from one of those old tapes would be complex, as
I would need to restore the database to a test host, connect it to a drive
and "pretend" to be the real TSM server and restore the older data.
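The reuse-delay piece of option 4 is a one-line change per storage pool (pool names invented here; REUSEDELAY is in days):

```
update stgpool TAPEPOOL reusedelay=999
update stgpool OFFSITEPOOL reusedelay=999
```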

5. Create new domains on the TSM server (duplicates of the current domains).
Move all the nodes to the new domains (using the 'update node ...
-domain=..' ). Change all the retentions for data in the old domains to
never expire. I'm kind of unclear on how the data would react to this. Would
it be re-bound to the new management classes in the new domain? If the
management classes were called the same, would the data expire anyways?

Any other great ideas out there on how to accomplish this?

Thanks,
Ben



Re: Exchange bases ...

2002-08-30 Thread Don France

Well... "storage groups" is a term specific to Exchange 2000 or later;
Exchange 5.5 has only the directory store (DS) and information store (IS).
The backups (and restores) on 5.5 are restricted to either/both of those two
components... for good reasons -- performance and integrity.  In Exchange
2000, there is a reserved storage group (0);  if you need to recover a given
user's mailbox, you restore the group he's in, using the group-0, then
extract to .PST.  In Exchange 5.5, you needed to have a separate, warm
stand-by machine to restore the entire IS.

NEW NEWS:  There is now (or soon to be) a special, adjunct product from
Microsoft to support mailbox restores, using a "scan" of the storage group
containing the user in question.  It was discussed at SHARE in SF, will come
out in the proceedings (for members) in a few weeks.

Watch for MS ExMerge, a tool to extract mailboxes from Exchange server to
.PST files, then use b/a client to backup.  Alternatively, check out the
combination of TSM for Mail, ExMerge and IBM's CommonStore product (usually
used for long-term archiving).  Check out these MS TechNet articles...
Q174197 - XADM: Microsoft Exchange Mailbox Merge Program (Exmerge.exe) Information
Q265441 - XADM: Some Questions and Answers About the Exmerge ...


Hope this helps.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Christian Astuni
Sent: Thursday, August 29, 2002 1:41 PM
To: [EMAIL PROTECTED]
Subject: Exchange bases ...


Hi people ...
Last Monday I installed the Tivoli Data Protection for Microsoft Exchange
2.2 in a Microsoft Exchange Server 5.5
I configured all the files and I can see the "Directory Information" and
the "Storage Groups".  My question is: is it possible to back up only one
Exchange mailbox for one person -- that is, a selective mailbox backup?  If
the answer is "yes", how do I do this?  I can only see groups of all
mailboxes.

Thank you very much for your help; I appreciate it very much.
Best Regards.


Christian Astuni
IBM Global Services
[EMAIL PROTECTED]
Tel. 4898-4621
Hipolito Yrigoyen 2149  - Martmnez (1640) Bs. As. - Argentina
=



Re: TSM Server Upgrade to 5.1

2002-08-19 Thread Don France

The fix for SumTab bytes transferred is in 5.1.1.1 (not .5 -- I got .5 on
the brain, for some reason).  5.1.1.1 was released around 01-July, so would
be my minimum level recommendation for a 5.1  shop.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 19, 2002 1:11 PM
To: 'ADSM: Dist Stor Manager'
Subject: RE: TSM Server Upgrade to 5.1


1.  Never run from the virgin CD-release -- always check this list for
issues not documented in the README files;
2.  Read (completely) and follow instructions in the README files and
QUICK-Start manual -- it's all laid out there, just gotta read (and know
what you are reading!).

For 5.1.x, 5.1.1.5 is the current patch level I recommend (else, summary
tables don't have all the data I need to do my job!).

5.1.5 was recently announced, ETA around October.  I'd stay with your 4.2,
maybe upgrade to 4.2.2.5 (I like that patch level on 4.2, especially with
STK SN6000, PowderHorn 9940's and ACSLS in the environment).

Hope this helps.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Crawford, Lindy
Sent: Monday, August 19, 2002 8:13 AM
To: [EMAIL PROTECTED]
Subject: TSM Server Upgrade to 5.1


Hi TSMers,

Please can you assist me. How can I go about upgrading my tsm server from
4.2.1 to 5.1 without any glitches ?

Our config is as follows:-

TSM server  : 4.2.1
O/S : Windows NT4 SP6a
Devices : Magstar 3570 library, IBM 3583 (L18) library

Would I also have to upgrade my clients at the same time ?

Thank you for your assistance.

> Lindy Crawford
> Business Solutions: IT
> BoE Corporate
>
> *  +27-31-3642185
 +27-31-3642946
 [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>





WARNING:
Any unauthorised use or interception of this email is illegal. If this email
is not intended for you, you may not copy, distribute nor disclose the
contents to anyone. Save for bona fide company matters, BoE Ltd does not
accept any responsibility for the opinions expressed in this email.
For further details please see: http://www.boe.co.za/emaildisclaimer.htm



Re: ANS1075E : Program Memory Exhausted

2002-08-19 Thread Don France

Nice call, Andy;  thanx, for the update.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Andy Raibeck
Sent: Monday, August 19, 2002 3:18 PM
To: [EMAIL PROTECTED]
Subject: Re: ANS1075E : Program Memory Exhausted


Rob didn't mention the client version he is using. But the problem you are
referring to (IC32797) was fixed in 4.2.2 and 5.1.1 (it still exists in
5.1.0).

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED] (change eye to i to reply)

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




Don France <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
08/19/2002 13:16
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: ANS1075E : Program Memory Exhausted



There's been some history of the TSM scheduler allocating then not freeing
memory;  maybe try using managed services (with polling mode), so the
scheduler is periodically launched (and exits) from dsmcad... see the
Using
Clients book.

Also, check your virtual memory settings;  you may have so many
dir-objects
in your filesystem that you're exhausting available virtual memory.

If all that fails, call SupportLine and/or collaborate with your NT server
admins... they may need some HotFix (or have other ideas).


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Rob Hefty
Sent: Monday, August 19, 2002 10:20 AM
To: [EMAIL PROTECTED]
Subject: ANS1075E : Program Memory Exhausted


Hello all,
We have a win2k file server here running 4.2 that we have been doing
incrementals on for months with journaling enabled and had no problems
until
recently.  The error listed above outputs almost immediately after the
initial run of it.  We then rerun it (through a 3rd party scheduler) and
it
completes normally, backing up the normal amount.  I tried the Tivoli
website to no avail since this message is not documented and am waiting in
a
call back queue on it but have not heard back anything.  Any help is
appreciated.

Thanks,

Rob Hefty
IS Operations
Lab Safety Supply



Re: ANS1075E : Program Memory Exhausted

2002-08-19 Thread Don France

There's been some history of the TSM scheduler allocating then not freeing
memory;  maybe try using managed services (with polling mode), so the
scheduler is periodically launched (and exits) from dsmcad... see the Using
Clients book.
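A minimal dsm.opt fragment for that (option names per the Using Clients book; check the exact spelling for your client level):

```
* Run the scheduler under the client acceptor daemon (dsmcad), so it is
* launched per-schedule and exits, freeing its memory between runs.
MANAGEDSERVICES SCHEDULE
SCHEDMODE POLLING
```

With this in place, start the dsmcad service instead of the stand-alone scheduler service.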

Also, check your virtual memory settings;  you may have so many dir-objects
in your filesystem that you're exhausting available virtual memory.

If all that fails, call SupportLine and/or collaborate with your NT server
admins... they may need some HotFix (or have other ideas).


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Rob Hefty
Sent: Monday, August 19, 2002 10:20 AM
To: [EMAIL PROTECTED]
Subject: ANS1075E : Program Memory Exhausted


Hello all,
We have a win2k file server here running 4.2 that we have been doing
incrementals on for months with journaling enabled and had no problems until
recently.  The error listed above outputs almost immediately after the
initial run of it.  We then rerun it (through a 3rd party scheduler) and it
completes normally, backing up the normal amount.  I tried the Tivoli
website to no avail since this message is not documented and am waiting in a
call back queue on it but have not heard back anything.  Any help is
appreciated.

Thanks,

Rob Hefty
IS Operations
Lab Safety Supply



Re: TSM Server Upgrade to 5.1

2002-08-19 Thread Don France

1.  Never run from the virgin CD-release -- always check this list for
issues not documented in the README files;
2.  Read (completely) and follow instructions in the README files and
QUICK-Start manual -- it's all laid out there, just gotta read (and know
what you are reading!).

For 5.1.x, 5.1.1.5 is the current patch level I recommend (else, summary
tables don't have all the data I need to do my job!).

5.1.5 was recently announced, ETA around October.  I'd stay with your 4.2,
maybe upgrade to 4.2.2.5 (I like that patch level on 4.2, especially with
STK SN6000, PowderHorn 9940's and ACSLS in the environment).

Hope this helps.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Crawford, Lindy
Sent: Monday, August 19, 2002 8:13 AM
To: [EMAIL PROTECTED]
Subject: TSM Server Upgrade to 5.1


Hi TSMers,

Please can you assist me. How can I go about upgrading my tsm server from
4.2.1 to 5.1 without any glitches ?

Our config is as follows:-

TSM server  : 4.2.1
O/S : Windows NT4 SP6a
Devices : Magstar 3570 library, IBM 3583 (L18) library

Would I also have to upgrade my clients at the same time ?

Thank you for your assistance.

> Lindy Crawford
> Business Solutions: IT
> BoE Corporate
>
> *  +27-31-3642185
 +27-31-3642946
 [EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>








Re: TSM backing up in a DMZ zone.

2002-08-18 Thread Don France

Excellent suggestion, Mark;  we recommend a private, backup-only network for
all our customers... most had been moving toward the "cheap" side, so if a
switch goes down, the network & server teams go into major fire-drill mode.
Our suggestion is for all production servers to be dual-homed, which gives (a)
separation of backup/restore traffic, and (b) an alternative path with nominal
network admin work if they lose a network segment or switch.

Re. IPX, the stated TSM direction (see v5.1 Windows) is fewer protocols;  IP
and FC will be about all that's left, with IPX and NetBIOS being dropped.
So, security (as in this DMZ scenario) is best handled by network definitions
in the switches -- and by isolating the TSM server to just the DMZ segments
for DMZ clients.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mark Stapleton
Sent: Saturday, August 17, 2002 9:37 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM backing up in a DMZ zone.


> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> Seay, Paul
>
> See my responses inline.
>
>
> From: William Rosette [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 31, 2002 10:01 AM
> To: [EMAIL PROTECTED]
> Subject: Re: TSM backing up in a DMZ zone.
>
>
> HI TSMr's,
>
>   I have a DMZ Zone going in this Tuesday and they are asking me (TSM
> admin) to see if TSM can backup servers/clients in the DMZ zone.  I have
> heard some talk on this ADSM user group about that very thing.
> We are going
> to be using a Cisco Pix Firewall and eventually use a Nokia Checkpoint.  I
> gave them some options but I want to know if there are any more
> options that
> y'all might have.  Here are the ones I suggested.
>
> 1. Put a TSM remote server in the DMZ and share the library
> (3494) with the
> other server.
> This one requires port 3494 to be opened through the firewall so that the
> TSM server can talk to the library.  This one to me has some serious risks
> if the TSM server is broken into.  The reason is there is no
> security in the
> library to block the mtlib and lmcpd interfaces from being used to mount
> tapes belonging to other systems from being mounted in the drives of this
> remote TSM server.
>
> 2. Since most clients (NT & Linux servers) backup in 5 to 15 minutes and
> will not need to be backed up maybe once a week, open an obscure
> port once a
> week for 30 minutes for all backups.
> The port on the TSM server side has to be set for all clients.  But, you
> could create a small second TSM server processs on the machine inside the
> firewall or locate the remote one inside the firewall that uses this
> specific port and only allows connections from the NT & LINIX servers.
> Then, set your firewall up so that only port and connection works
> to the TSM
> server.  This is probably the most secure.
>
> The big negative is that the backup will be slow depending on
> your firewall
> and network.
>
> 3. Port access through Cisco script when backup happens.
> I am not familiar with this but it looks like 2 with some more security.
>
> 4. Direct connect to TSM server.
> Not sure what you meen by Direct Connect.
>
>
> I understand that probably each one has its security leaks and some more
> than others.  Is there someone who can share a good DMZ SLA?

There's another way.

1. Install a second NIC in each client in the DMZ.
2. Install a second NIC on the TSM server.
3. Create a private network for the DMZ clients and the TSM server to use.
4. Designate a TCP port for the server and clients to communicate through.
5. Set client backups to prompted instead of polling.
6. Turn on the second TSM server NIC
7. Run the backup
8. Close the server NIC.

(Steps 6-8 should run as a client schedule event with a PRESCHEDULECMD.)
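The client side of steps 4-5 can be sketched in dsm.opt as follows (the address and port are invented; the NIC up/down commands for steps 6-8 are platform-specific and omitted):

```
* dsm.opt on each DMZ client -- point at the TSM server's private NIC
TCPSERVERADDRESS  10.10.10.1      * second NIC on the TSM server (step 3)
TCPPORT           1501            * the designated port (step 4)
SCHEDMODE         PROMPTED        * step 5: prompted, not polling
```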

This obviates the security risks in having a TSM server in the DMZ.

[I'd suggest using IPX only (instead of IP) for the private network comm
protocol (for additional security), but there seem to be some issues with
using IPX only on the TSM 5.1 server.]


--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE



Re: question on configuring large NT client for optimum restore proce ssing

2002-08-17 Thread Don France

Interesting approach, Zlatko;  I agree, this should work -- if one is very
serious about restore SLA (and simulating Unix filespace-level
collocation -- using node-level collocation with your suggestion); a simpler
approach could accomplish the same effects:  cap drive-letter size at 100 or
250 GB, or maybe even 500 GB, then start a new drive letter. (The
drive-letter is a filespace on Windows platforms... until/unless you get
into the DFS or NTFS virtual volume game and then it's not that much
different).

If the customer restore SLA can "tolerate" 10-20 GB/Hr, a 300 GB cap with
DIRMC "tricks" will be sufficient;  just use high-level directories for
"GrpData" and "UsrData" with multiple restore threads; using classic (rather
than no-query) restore causes tape mounts to be sorted (more think time on
the server), it's another performance trade-off (lots of tapes, collocation,
lots of versions, large number of files being restored -- all interact to
affect restore speed).

I've supported several emergency server recovery situations, recent customer
had DLT, no collocation, DIRMC (worked great), 316 GB was restored in 30
elapsed hours -- that would have been more like 20 hours if they hadn't
over-committed the silo, requiring tape mounts for over 40 tapes in a 29
slot silo.

Tim ==> BTW, if you are going to use NAS filer from IBM, you can run the
backups (from the filer); so, with weekly (or monthly) full-image, in
concert with daily (or weekly) differential-image, plus normal daily
progressive-incremental (for file-level granularity), you'd get the fastest
file restore *and* server recovery possible... probably saturate the
network, getting 30 or 300 GB/hr (100Mbps vs. 1 Gbps).

Also, have you looked at Snapshot and/or SnapMirror support?!?  IBM NAS
comes with TSM Agent at 4.2 level;  IBMSnap, PSM & DoubleTake components
allow you to protect the NAS-based data on the NAS (so you could mirror each
drive-letter or network share) and run backups (image and file-level)
directly from the NAS. This kind of online/nearline recovery could totally
mitigate your restore SLA concerns;  if your SLA states 99% of the time,
recovery must be done in less than 4 hours, you are covered -- you only need
tape restores for site-level or drive-level disaster, which becomes less
than 1% of the failure instances, over time (after the first year).  See the
latest RedBook info about Snapshots & Replication with PSM & Double-Take --
built-in components of the IBM NAS, along with TSM client.

Typical NAS customers get a bunch of snap/mirror capacity (sufficient for a
number of days worth of Snapshots plus some RAID-1 protection of critical
data, RAID-like striping for faster performance, etc.)... so, you probably
won't need TSM tapes as much as in the old days where all the backup data
resides (exclusively) on tapes.  JBOD devices have gotten so cheap, it's
not terribly expensive to keep 7 or 14 days of incremental snapshots online
(in the ~snapshot directory, if using NetApp Snapshot, for example);  once
the end user is told where his snapshot data is stored, he stops calling in
trouble tickets for restores less than the snapshot retention period!

Check out the RedBooks on IBM NAS... up and coming, cheap (JBOD) solutions
to large file servers.  See this one, to start
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246831.html

Looks like you are in for some *actual* fun, with this project!!!


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Saturday, August 17, 2002 9:03 AM
To: [EMAIL PROTECTED]
Subject: Re: question on configuring large NT client for optimum restore
proce ssing


Tom,

try to emulate virtualmountpoint through separate node names:
-   for each huge directory (acting as "virtualmountpoint") define a
node. In dsm.opt file define
--  exclude X:\...\*
--  include X:\\...\*
--  exclude.dir X:\
--  exclude.dir X:\
--  exclude.dir X:\
-   define other_dirs node with excludes for all "virtualmountpoints"
and without first exclude and the include.
Thus only the one directory is included; the existing known directories are
exclude.dir-ed and not traversed. If a new directory is created and is
forgotten to be excluded, it will be traversed, but only its structure will
be backed up, not its files. The last node will back up everything but the
"virtualmountpoint" directories.
You can create several scheduler services and add them to the MSCS resource
group for that (one-and-only) drive.
Collocation will then follow by itself.
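The per-directory node scheme above (whose directory names were lost in the archive formatting) would look roughly like this in each node's dsm.opt -- `D:\BigDir1` etc. are invented examples, and TSM evaluates the include-exclude list bottom-up, so the INCLUDE must come after the blanket EXCLUDE:

```
* dsm.opt for node FILESRV_BIGDIR1, backing up only D:\BigDir1
* (node and directory names invented for illustration)
NODENAME      FILESRV_BIGDIR1
EXCLUDE       D:\...\*
INCLUDE       D:\BigDir1\...\*
EXCLUDE.DIR   D:\BigDir2
EXCLUDE.DIR   D:\BigDir3
```

The catch-all "other_dirs" node is the mirror image: no blanket EXCLUDE, with EXCLUDE.DIR statements for each of the big directories.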
Disclaimer: never solved suc

Re: Multiple logical libraries for 3584

2002-08-17 Thread Don France

Gentille,

No big deal, on 4.1 or later;  you set up one TSM server as the library
manager, so all tape mounts go thru that server.  Then, the shared drives
are connected to both servers, so when a drive is allocated to a given
server, the other one won't access... see the Admin. Guide (and 3584 doc.s)
for more details.
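The sharing setup Don describes boils down to a pair of DEFINE LIBRARY statements (library and server names here are examples; check the Administrator's Reference syntax for your level):

```
* On the library manager server:
define library 3584LIB libtype=scsi shared=yes

* On each library client server:
define library 3584LIB libtype=shared primarylibmanager=TSMSRV_A
```

Drive definitions follow the same manager/client split.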

Don


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Saturday, August 17, 2002 7:47 AM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


You will need multiple control paths to the library only if you want to
partition it. Each TSM server will see its partition as standalone library
with number of drives and slots as assigned to that partition. Detailed
description is available in 3584 Planning and Operator Guide, Chapter 4
"Advanced Operating Procedures, Configuring the Library with Partitions".
The other approach is to use TSM library sharing as Don suggested. This
will need some TSM configuration but will allow you to have single scratch
pool for both servers instead of separate set of scratches in each logical
library (partition).
All roads go to Rome (but there are many of them).

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: Multiple logical libraries for 3584

Don,
I have 2 TSM servers accessing a single 3584 library and I would like to
set
up multiple control paths to the library so that both servers can access
it
equally.  Have you or anyone else done this successfully and do you have
any
information on setting it up?
Thanks,
Gentille

-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 13, 2002 4:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


Don France wrote:

>Yep (to the last question);  you cannot span multiple physical libraries
to
>make a single logical.  You can define multiple logicals within a single
>physical;  that is a common thing I've done, for various reasons.
>
>
>Don France
>Technical Architect -- Tivoli Certified Consultant
>Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
>San Jose, Ca
>(408) 257-3037
>mailto:[EMAIL PROTECTED]
>
>Professional Association of Contract Employees
>(P.A.C.E. -- www.pacepros.com)
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
>Dan Foster
>Sent: Tuesday, August 13, 2002 2:14 PM
>To: [EMAIL PROTECTED]
>Subject: Multiple logical libraries for 3584
>
>
>I've got a question.
>
>One 3584-L32 with 6 drives and one 3584-D32 with 6 drives.
>
>Is it possible to have a logical library that covers 4 drives in
>the L32, and a second logical library that covers last 2 drives
>in the L32 and all 6 drives in the D32?
>
>Or is that not a valid configuration -- ie, do I need to keep
>logical libraries from spanning multiple drives in multiple 3584
>units?
>
>-Dan
>
>
Hi Dan,

Judging by your configuration I believe that you have just one physical
library.  I believe the L32 is the base unit and the D32 is an expansion
cabinet, i.e. the two of them are physically attached to one another and
share the same robotics.  Is this correct?  If so, then yes, you can
partition the library as you described.

But the real question is what are you trying to accomplish?  If this is
to be connected to 2 different servers, there are a few more things that
have to be in place.  If both of these logical libraries are to be used
by the same TSM server, I am not sure I understand the rationale for doing
so.  You could just as easily manage the drive utilization through other
TSM server options.  Please explain a little bit more what you are
trying to accomplish.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===



Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)

2002-08-17 Thread Don France

Outstanding suggestion, Bill;  that's exactly what we did for a customer
with all Win2K servers at 12 locations around the world.  We even
validated both (a) domain controller (with Active Directory) and (b)
Exchange 5.5 server recovery using the NTbackup "trick" for System State
backups -- I think they're still doing it that way, which totally avoids the
limitations of the TSM client!

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Bill Boyer
Sent: Saturday, August 17, 2002 10:07 AM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


Until you get to 4.2 or higher AND the apar is fixed, you could use the
NTBACKUP command to save the system state (as Microsoft calls it) to a flat
file. Run this as a preschedulecmd. We had success with recoveries of
systems that were pre 4.2 at one site. Then your recovery becomes, Load 2K,
Restore the C: drive, then use NTBACKUP to restore the system state from the
restored file.

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Don France
Sent: Friday, August 16, 2002 6:28 PM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


If your backup data was done using 4.2 or later, Wanda's recovery document
(for a basic server) should work fine;  you do a basic/minimal operating
system install, including the device specific drivers, then fully restore
boot-system drive (C:, right?), do not re-boot until System Objects are
restored.

BUT NOTE: There is an APAR for this issue, as well; IC34015 which has to do
with restoring/not-restoring device specific keys (such as the case you
describe)...
"This affects 4.2 and 5.1 TSM Clients on Windows 2000 and XP."

Hope this helps.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Michael Swinhoe
Sent: Friday, August 16, 2002 7:50 AM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


I have gone down the windows repair route but this takes ages due to all
the re-boots to load the drivers.  I only have a 72 hour SLA to recover our
EDC so the timescales are tight.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



From: Rob Schroeder
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 16/08/2002 15:06
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)





If this is a Win2k machine, have you already done a windows repair after
the restore was complete?

Rob Schroeder
Famous Footwear




From: Michael Swinhoe
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 08/16/2002 05:51 AM
Subject: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)






I have hit a brick wall and I need some help.

I am currently trying to restore some Compaq servers running W2K with
different RAID controllers (3200 & 5300).  I have successfully managed to
recover a Compaq server with a 5300 RAID controller onto a Compaq server
with a 3200 RAID controller.  However, when I try to do the opposite the
server blue-screens.  Has anyone tried to do the same, and if so, which
registry keys need to be changed, or is the solution simpler than this?  Or is
there a piece of software out there that would make the process run more
smoothly without much manual intervention?

Thanks as always,

Mike.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



___

The information contained in this message is confi

Re: 2 drives are required for LTO?

2002-08-17 Thread Don France

Workable copy pools are *possible*, provided the entire primary storage pool is
on disk;  that is, all backups go to a disk pool that is sufficiently large to
hold all the versions for retention... and never does migration!

I've had to do this (and more) for a capital equipment-constrained customer;
12 locations around the world, getting the "same" equipment (from Dell) for
service replacement purposes.  Most sites had two drives (one in each 120T),
one site had two drives in a single library (130T), but TWO SITES WITH
SINGLE DRIVE LIBRARY needing onsite backups got 105 GB disk pool (sufficient
to provide 14-day point-in-time restore for approx. 55-65 GB, depends on
daily turnover)... so, with six usable slots, we configured 2-slots for
onsite copypool, 2-slots for offsite copypool, 2 slots for db-backups.

Reclamation was done as if both copy pools were offsite, so fresh tapes
would be cut from the primary disk pool -- so, single drive reclamation was
not required. For the 1-drive, 2-library sites, single drive reclamation was
done AND it worked just fine (using 100 GB disk pool, scheduled for weekends
after disk migration)... all these sites ran fine for over two years, the
only true "glitches" were due to onsite tape management needing occasional
assistance with their DRM actions.  Also, turns out, we rarely had tape
drive failure after the initial install/burn-in period -- even then, we had
less than 5 drives fail across all 12 sites.

NOT the *best* answers, but these configurations did allow us a lights-out
environment --- biggest caveat is when (not if) the drive goes down, no
backups to tape (slightly mitigated by using LAN to store db-backups off to
another server).

Yes, running with less than 3 drives is a challenge, but a lights-out setup
*can* be done with only 1 drive PLUS a large disk pool for the primary
storage pool!
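As a rough sketch of that layout (pool and volume names here are illustrative,
not taken from those sites, and the disk volume file must already be formatted):
the primary pool is a DISK pool with no NEXTSTGPOOL defined, so migration simply
never runs, and both copy pools are cut straight from it.

    define stgpool BIGDISKPOOL disk description="Primary pool; no next pool, so no migration"
    define volume BIGDISKPOOL /tsm/disk/vol01.dsm
    backup stgpool BIGDISKPOOL ONSITECOPY
    backup stgpool BIGDISKPOOL OFFSITECOPY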

(My motto, had to be: "If you bring money, we can solve"!!!)


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Saturday, August 17, 2002 8:09 AM
To: [EMAIL PROTECTED]
Subject: Re: 2 drives are required for LTO?


Mark,

I fully agree with your opinion. TSM *can* work with a single drive but it
would be ugly. The same waste of resources as assigning a project manager
with a 300k salary to a 5k project (and you can always send a 1 kg parcel
with a truck).
I said it is possible but will never say I recommend it.

Zlatko Krastev
IT Consultant




Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Re: 2 drives are required for LTO?

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
> The requirement for 2 drives is not mandatory, it ought to be just a
> suggestion. LTO can be used as a standalone drive, TSM can use single
> drive or small autoloader with single drive. So it is NOT required but
> recommended.

Yes, technically a single drive is sufficient to do backups. But then I
could use a pair of nail scissors to mow my lawn...

> - single drive reclamation - define reclamation storage pool of type
FILE.
> On reclamation remaining data is moved to files and later written to new
> tape volume. Drawback: data is not read when written (sequential
> read+write vs. parallel) thus takes more time. Calculate time budget
> around the clock.

FILE storage pool-based reclamation is dog slow, and expensive of disk
space, particularly if you are backing up database-type data of any size.
I've got a customer trying to do this very thing, and reclamation is
extremely slow.
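For readers following along, the FILE-pool reclamation arrangement being
debated looks roughly like this (all names illustrative; the RECLAIMSTGPOOL
parameter is only available on server levels that support single-drive
reclamation):

    define devclass FILECLASS devtype=file maxcapacity=2048M directory=/tsm/filepool
    define stgpool FILEPOOL FILECLASS maxscratch=50
    update stgpool TAPEPOOL reclaimstgpool=FILEPOOL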

> - single drive copypools - define following hierarchy DISK -> FILE ->
LTO
> (file pool would be also lto reclamation pool). Prevent file->lto
> migration during backups (highmig=100). Perform backups to copypool
after
> node backups finish. Allow migration after backup to copypool finishes.
> Drawbacks: filepool must be large enough to hold all backups data.
Backups
> should not happen during migration because some object(s) may migrate
> without being copied to the copypool. Again time - data have to be
written
> twice through the one-and-only drive. And on the end with one drive
there
> is no way to perform copypool reclamation.

Bingo. A single tape drive, because of the lack of reclamation, means no
usable copy pool, no way to use move data to consolidate primary tape
volumes, and no way to use a restore volume command to rebuild bad primary
pool media from copy pool media--in short, a badly crippled TSM backup
system.

> Conclusi

Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)

2002-08-16 Thread Don France

If your backup data was done using 4.2 or later, Wanda's recovery document
(for a basic server) should work fine;  you do a basic/minimal operating
system install, including the device specific drivers, then fully restore
boot-system drive (C:, right?), do not re-boot until System Objects are
restored.

BUT NOTE: There is an APAR for this issue, as well; IC34015 which has to do
with restoring/not-restoring device specific keys (such as the case you
describe)...
"This affects 4.2 and 5.1 TSM Clients on Windows 2000 and XP."

Hope this helps.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Michael Swinhoe
Sent: Friday, August 16, 2002 7:50 AM
To: [EMAIL PROTECTED]
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)


I have gone down the windows repair route but this takes ages due to all
the re-boots to load the drivers.  I only have a 72 hour SLA to recover our
EDC so the timescales are tight.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



From: Rob Schroeder
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 16/08/2002 15:06
Subject: Re: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)





If this is a Win2k machine, have you already done a windows repair after
the restore was complete?

Rob Schroeder
Famous Footwear




From: Michael Swinhoe
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 08/16/2002 05:51 AM
Subject: RESTORING TO FOREIGN HARDWARE (DIFFERENT RAID CONTROLLERS)






I have hit a brick wall and I need some help.

I am currently trying to restore some Compaq servers running W2K with
different RAID controllers (3200 & 5300).  I have successfully managed to
recover a Compaq server with a 5300 RAID controller onto a Compaq server
with a 3200 RAID controller.  However, when I try to do the opposite the
server blue-screens.  Has anyone tried to do the same, and if so, which
registry keys need to be changed, or is the solution simpler than this?  Or is
there a piece of software out there that would make the process run more
smoothly without much manual intervention?

Thanks as always,

Mike.

Regards,
Michael Swinhoe
Storage Management Group
Zurich Financial Services (UKISA) Ltd.
E-mail:   mailto:[EMAIL PROTECTED]



___

The information contained in this message is confidential and may be
legally privileged. If you are not the intended recipient, please do not
read, copy or otherwise use it and do not disclose it to anyone else.
Please notify the sender of the delivery error and then delete the
message from your system.

Any views or opinions expressed in this email are those of the author only.

Communications will be monitored regularly to improve our service and for
security and regulatory purposes.

Thank you for your assistance.

___



Re: Database backup retentions question

2002-08-16 Thread Don France

Beware there are several other considerations to be made...
1.  If using copy pools, and/or offsite storage, you will want to ensure
your re-use delay matches the db retention (else you may have one without
the other, which could make the whole of DR recovery impossible);

2.  DRM has several parameters;  if using DRM, it (MOVE DRM) does the
expiration of db backup tapes -- and RPfiles retention becomes an added
retention parm, with this feature.

Typically, you'd want to keep DB backups long enough to get through a
long weekend (4 to 14 days seems to be the range most folks settle on);  hence,
you'd want to keep all the other data/media that the oldest DB backup
references -- so they still work!
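A sketch of keeping those two retentions in step, assuming a 7-day window
(the value and pool name are illustrative):

    update stgpool OFFSITECOPY reusedelay=7
    set drmdbbackupexpiredays 7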


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Tony Morgan
Sent: Friday, August 16, 2002 8:21 AM
To: [EMAIL PROTECTED]
Subject: Re: Database backup retentions question


Mark,

The backups are normally deleted by a scheduled task.  eg.

Administrative command schedules : DELETEDBBACKUP

Schedule Name DELETEDBBACKUP
Description Delete old database backups
Command delete volhist type=dbb todate=today-3
Priority 5
Start date 2000-12-14
Start time 15:00:00
Duration 1
Duration units HOURS
Period 1
Period units DAYS
Day of Week ANY
Expiration -
Active? YES
Last Update Date/Time 2001-05-29 09:24:28.00

Type "help delete volhist" for further details.
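If the schedule does not exist yet, it can be created in one administrative
command (name and start time illustrative):

    define schedule DELETEDBBACKUP type=administrative cmd="delete volhist type=dbb todate=today-3" active=yes starttime=15:00 period=1 perunits=days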

Rgds
Tony

-Original Message-
From: Mark Bertrand [mailto:[EMAIL PROTECTED]]
Sent: 16 August 2002 15:48
To: [EMAIL PROTECTED]
Subject: Database backup retentions question


We are backing up our database with the following command setup in an admin
script.
backup db devclass=ltoclass type=full
Is there a setting that tells how many copies of the database to keep?

My goal is to free or reclaim some tape volumes.



This e-mail and any files transmitted with it, are confidential and
intended solely for the use of the addressee. The content of this
e-mail may have been changed without the consent of the originator.
The information supplied must be viewed in this context. If you have
received this e-mail in error please notify our Helpdesk by
telephone on +44 (0) 20-7444-8444. Any use, dissemination,
forwarding, printing, or copying of this e-mail or its attachments is
strictly prohibited.



Re: Easiest way to do a command line Full Backup.

2002-08-15 Thread Don France

This method is equivalent to the SELective command and is off topic;  he wanted
to use the command line.  Also, Paul's point is a good one -- selective does not
update the last-incremental date on the filespace... so, we're back to
ABSOLUTE in the management class.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
William Rosette
Sent: Wednesday, August 14, 2002 4:21 AM
To: [EMAIL PROTECTED]
Subject: Re: Easiest way to do a command line Full Backup.


We use a fourth, the manual way with the GUI and using the "Always Backup"
button.  Our 3rd shift operators key it off usually after a automatic
incremental and before some special project on that particular client.  The
client requests a "full" and we will do a manual "Always Backup."



Thank You,
Bill Rosette
Data Center/IS/Papa Johns International
WWJD


From: Don France
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 08/13/02 06:51 PM
To: [EMAIL PROTECTED]
Subject: Re: Easiest way to do a command line Full Backup.




There are THREE "sure" ways to trigger a (traditional) FULL backup:

1 - run "dsmc i" (incremental command) but use a management class which
maps
to a copygroup which specifies "absolute" (rather than modified) for
"mode",
AND frequency=0 (to allow on demand);  this is the "easiest" way to force
FULL (for data mapped to the appropriate management class, via INCLUDE
specs.);

2 - use the archive command (but you must specify each file system);

3 - selective will also work, but you must specify each file system.

That should do it.
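For method 1, the server-side policy change might look like this (domain,
policy set, and class names are illustrative), after which an ordinary
incremental backs up everything bound to that class:

    update copygroup MYDOMAIN MYSET MYCLASS type=backup mode=absolute frequency=0
    activate policyset MYDOMAIN MYSET

then, on the client:

    dsmc incremental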


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)




From: "Gent, Chad E." <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 2002-08-13 13:37


To: [EMAIL PROTECTED]
cc:
Subject:Easiest way to do a command line Full Backup.


Hello,

Is there an easy way to do a full command line backup.   I have an NT
client
version 4.2.15 and I would like to do full backups instead of incremental.

Thanks

Chad
IMPORTANT:  The security of electronic mail  sent through the Internet
is not guaranteed.  Legg Mason therefore recommends that you do not
send confidential information to us via electronic mail, including social
security numbers, account numbers, and personal identification numbers.

Delivery, and timely delivery, of electronic mail is also not
guaranteed.  Legg Mason therefore recommends that you do not send
time-sensitive
or action-oriented messages to us via electronic mail, including
authorization to  "buy" or "sell" a security or instructions to conduct
any
other financial transaction.  Such requests, orders or instructions will
not be processed until Legg Mason can confirm your instructions or
obtain appropriate written documentation where necessary.



Re: IBM vs. Storage Tek

2002-08-15 Thread Don France

I am still trying to collect contacts for some of the STK sites;  I am aware
of about six customers switching "TO" 9940 (with SN6000) from something
else - one is replacing their 3590 with 9940's (not LTO).

Changing from 3590 to LTO is definitely a step "backward" in reliability;
performance is also 8-10 times slower... the basic idea behind LTO was to
provide a cost-effective competing product against DLT -- which is exactly
what it is.

LTO is 8-10 times faster (and more reliable) than DLT;  all this at a price point
competitive with DLT.  3590 is 8-10 times faster and worlds more reliable
than LTO (as is 9840/9940, which is comparable with 3590).  I think of
3590/9940 as the industrial-strength answer for large data centers -- large
to me means moving over 1.5 TB per day, storing over 20 TB of data in their
silo;  smaller sites, more price conscious, willing to "tolerate" slower
restore speeds are the ideal target for LTO.  Most shops know that DLT is
slow;  if we can approximate 10-15 GB/hour in restoring a file server, they
know that is a good number -- and I have demo'ed that with Dell PowerVault
130 using DLT7000.  Large db's can be restored at 20-36 GB per hour;  DLT is
at the low end of that, LTO is at the high end -- you just about saturate
the 100 Mbps wire at 30 GB/hr.

LTO, like DLT, does not like a lot of "back-hitch" operations;  so, I would
minimize the amount of collocation on LTO.  Every account I know that has
LTO is ecstatic about the performance;  the LTO design was based on 3590
technology, but (to cut costs) some reliability factors were sacrificed...
this is not a drive you want to run without a service contract, especially
if you're gonna beat it with the "back-hitch" action that's required to
back-space from EOT, find the last tape mark, start writing the next data
block from the end of the inter-record gap of the last one, etc.
One person's opinion, stretched over a dozen or more customer accounts.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Joni Moyer
Sent: Thursday, August 15, 2002 6:30 AM
To: [EMAIL PROTECTED]
Subject: IBM vs. Storage Tek


Hello,

I know I've asked about this before, but now I have more information so I
hope someone out there has done this.  Here is my environment for TSM.
Right now it is on the mainframe and we are using 3590 Magstars.  We have a
production and a test TSM server and each has about 13 drives and a total
of 5,500 tapes used for onsite and offsite tape pools between the 2
systems.  Two scenarios are being considered (either way TSM is being moved
onto our SAN environment with approximately 20 SAN backup clients and 250
LAN backup clients and will be on SUN servers instead of the mainframe)
Here is what I estimated I would need for tapes:

             3590    9840   9940A     LTO
            10 GB   20 GB   60 GB  100 GB
Production
  Onsite     1375     689     231     140
  Offsite    1600     800     268     161
  Total      2975    1489     499     301

Test
  Onsite      963     483     163     101
  Offsite    1324     664     223     135
  Total      2287    1147     386     236

Grand
Total        5262    2636     885     537

1. IBM's solution is to give us a 3584 library with 3 frames and use LTO
tape drives.  This only holds 880 tapes and from my calculations I will
need about 600 tapes plus enough tapes for a scratch pool.  My concern is
that LTO cannot handle what our environment needs.  LTO holds 100 GB
(native), but when a tape is damaged or needs to be reclaimed the time it
takes to do either process would take quite some time in my estimation.
Also, I was told that LTO is good for full volume backups and restores, but
that it has a decrease in performance when doing file restores, archives
and starting and stopping of sessions, which is a majority of what our
company does with TSM.  Has anyone gone from a 3590 tape to LTO?  Isn't
this going backwards in performance and reliability?  Also, with
collocation, isn't a lot of tape space wasted because you can only put one
server  per volume?

2. STK 9840B midpoint load(20 GB) or 9940A(60 GB) in our Powderhorn silo
that would be directly attached to the SAN.  From what I gather, these
tapes are very robust like the 3590's, but the cost for this solution is
double IBM's LTO.  We would also need Gresham licenses for all of the SAN
backed up clients(20).

Does anyone know of any sites/contacts that could tell me the
advantages/disadvantages of either solution?  Any opinions would be greatly
appreciated.
Thanks


Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: drive online=no versus path offline=no

2002-08-15 Thread Don France

Yep... the only change (for simple, single-server access environments) is
that you must DEFINE PATH in order to convey the device special-file address
to the TSM server... the DEFine LIBRary and DEFine DRive commands no longer
accept the device parameter.
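As a sketch of the 5.1 sequence (server, library, drive, and device names are
illustrative; the device special files are AIX-style):

    define library MYLIB libtype=scsi
    define path MYSERVER MYLIB srctype=server desttype=library device=/dev/smc0
    define drive MYLIB DRIVE0
    define path MYSERVER DRIVE0 srctype=server desttype=drive library=MYLIB device=/dev/rmt0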

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gerhard Rentschler
Sent: Thursday, August 15, 2002 8:43 AM
To: [EMAIL PROTECTED]
Subject: drive online=no versus path offline=no


Hello,
with TSM 5.1 one has to define a path for each drive. Both, update drive and
update path have an online option. Is it sufficient to set a drive offline
in a single server environment?
Regards
Gerhard
---
Gerhard Rentschleremail:[EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany



Re: SQL timestamp not working when upgraded to 4.2.2 for summary tabl e

2002-08-15 Thread Don France

Yep... this issue had a lllooonnnggg discussion thread back when it first
occurred -- search on the APAR number for the gory details, and there are
caveats about which 5.1.? level has the fix (last I saw it was 5.1.1.5, I
think)... it's in the APAR.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
L'Huillier, Denis
Sent: Thursday, August 15, 2002 10:48 AM
To: [EMAIL PROTECTED]
Subject: Re: SQL timestamp not working when upgraded to 4.2.2 for
summary tabl e


Hey look what I found...

*
$$4225 Interim fixes delivered by patch 4.2.2.5
$$Patches are cumulative, just like PTFs.  So Interim fixes
$$delivered as "4.2.2.5" include those delivered in previous patches
*
<@>
IC33455 SUMMARY TABLE NOT BEING FULLY UPDATED


And I'm querying the summary table..

Thanks..

-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 11:34 AM
To: [EMAIL PROTECTED]
Subject: Re: SQL timestamp not working when upgraded to 4.2.2 for summary
tabl e


Hi Denis,

Just for information : I tested your query on my system (4.2.2.15) and
it worked like a charm (except I had to modify "Node Name" to
"Node_Name")

Did you apply the latest PTF's to get 4.2.2.15 ? Maybe it could help ...
Good luck anyway !

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: L'Huillier, Denis [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 15 August, 2002 15:39
To: [EMAIL PROTECTED]
Subject: SQL timestamp not working when upgraded to 4.2.2 for summary
tabl e


Hello -
Since I upgraded our 4.1 server to 4.2.2 my sql query against the
summary table no longer works. Has anyone run into this problem before?
Here's the query...

/* --- Query Summary Table  */
/* ---   Run as a macro   - */
select cast(entity as varchar(12)) as "Node Name", \
  cast(activity as varchar(10)) as Type, \
  cast(number as varchar(8)) "Filespace", \
  cast(failed as varchar(3)) "Stg", \
  cast(affected as decimal(7,0)) as files, \
  cast(bytes/1024/1024 as decimal(12,4)) as "Phy_MB", \
  cast(bytes/1024/1024 as decimal(12,4)) as "Log_MB" \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
  and end_time<=timestamp(current_date, '08:59:59') \
  and (activity='BACKUP' or activity='ARCHIVE') \
order by "Node Name"

Here's the output from a 4.1 server:

Node Name TYPEFilespace  Stg  FILES  Phy_MB
Log_MB
  --  -  ---  -  --
--
CENKRSBACKUP  6490   26  0.0222
0.0222
CENNTNFS  BACKUP  6480   15  0.0072
0.0072
RSCH-AS1-PBACKUP  6150   90  7.3412
7.3412
RSCH-DB2-PBACKUP  6140   43  5.6337
5.6337
RSCH-DB3-PBACKUP  60810  0.
0.
RSCH-DB3-PBACKUP  6160  114   1477.5513
1477.5513
RSCH-FS1-PBACKUP  61110  0.
0.
RSCH-FS1-PBACKUP  6180   97 10.3834
10.3834
RSCH-WS5-PBACKUP  6670   29  2.5706
2.5706
RSCH-WS6-PBACKUP  6660   35  5.4812
5.4812
TPRSCHHOME01  BACKUP  62420  0.
0.
TPRSCHHOME01  BACKUP  6270 2467  16412.1675
16412.1675
TPRSCHHOME02  BACKUP  63410  0.
0.
TPRSCHHOME02  BACKUP  6370 3552  19135.1409
19135.1409

Here's the output from a 4.2 server:

Node Name TYPEFilespace  Stg  FILES  Phy_MB
Log_MB
  --  -  ---  -  --
--
REMEDY2W  BACKUP  3896   0   64  0.
0.

I only get one line back.. There should be one for each node (about 100
nodes on this server)

Now, for any of you who are wondering:  'Filespace' and 'Stg' are
columns put in just as placeholders. We were using 'q occu' to
generate chargeback info.  I needed to generate an SQL query that would look
just like the q occu output (same columns) so the data could be fed into an
existing program which handled chargeback to the clients.

Regards,

Denis L. L'Huillier
212-647-2168



Re: Deactivate ALL admin schedules? (how-to)

2002-08-15 Thread Don France

The DRM script includes a change to dsmserv.opt ---
disablescheds yes

This ensures that ALL schedules are disabled when preparing the DR site
configuration.  You just add it to the bottom of the file before starting the
TSM server.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Taylor, David
Sent: Thursday, August 15, 2002 9:09 AM
To: [EMAIL PROTECTED]
Subject: Deactivate ALL admin schedules? (how-to)


Hi all,

Is there anyway that I can globally set all of my admin schedules to
"ACTIVE=NO"?

I was looking for something like a standard SQL UPDATE-SET type statement,
but couldn't find anything.

I want to include this in a disaster recovery script (Korn).  The problem
I've run into is that some of my schedule names are longer than 16
characters (and therefore wrap).   I could probably write something (ugly and
bulky) to work around this issue, but would prefer something
cleaner.

Additional info:

TSM server 4.2.1.15
AIX 4.3.3 ML6

TIA

David


**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the system manager.

This footnote also confirms that this email message has been swept by
MIMEsweeper for the presence of computer viruses.

www.mimesweeper.com
**



Re: Multiple logical libraries for 3584

2002-08-15 Thread Don France

Gentille,

Sure;  shared library support in 4.2/5.1 is what you would want to
exploit... full details are in the Admin. Guide.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Barkhordar,Gentille,GLENDALE,IS
Sent: Wednesday, August 14, 2002 10:21 AM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


Don,
I have 2 TSM servers accessing a single 3584 library and I would like to set
up multiple control paths to the library so that both servers can access it
equally.  Have you or anyone else done this successfully and do you have any
information on setting it up?
Thanks,
Gentille

-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 13, 2002 4:19 PM
To: [EMAIL PROTECTED]
Subject: Re: Multiple logical libraries for 3584


Don France wrote:

>Yep (to the last question);  you cannot span multiple physical libraries to
>make a single logical.  You can define multiple logicals within a single
>physical;  that is a common thing I've done, for various reasons.
>
>
>Don France
>Technical Architect -- Tivoli Certified Consultant
>Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
>San Jose, Ca
>(408) 257-3037
>mailto:[EMAIL PROTECTED]
>
>Professional Association of Contract Employees
>(P.A.C.E. -- www.pacepros.com)
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
>Dan Foster
>Sent: Tuesday, August 13, 2002 2:14 PM
>To: [EMAIL PROTECTED]
>Subject: Multiple logical libraries for 3584
>
>
>I've got a question.
>
>One 3584-L32 with 6 drives and one 3584-D32 with 6 drives.
>
>Is it possible to have a logical library that covers 4 drives in
>the L32, and a second logical library that covers last 2 drives
>in the L32 and all 6 drives in the D32?
>
>Or is that not a valid configuration -- ie, do I need to keep
>logical libraries from spanning multiple drives in multiple 3584
>units?
>
>-Dan
>
>
Hi Dan,

Judging by your configuration I believe that you have just one physical
library.  I believe the L32 is the base unit and the D32 is an expansion
cabinet, i.e. the two of them are physically attached to one another and
share the same robotics.  Is this correct?  If so, then yes, you can
partition the library as you described.

But the real question is what are you trying to accomplish?  If this is
to be connected to 2 different servers there are a few more things that
have to be in place.  If both of these logical libraries are to be used
by the same TSM server I am not sure I understand the rational for doing
so.  You could just as easily manage the drive utilization through other
TSM server options.  Please explain a little bit more what you are
trying to accomplish.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===



Re: DSM.OPT FILE

2002-08-13 Thread Don France

There is a sample dsm.opt.smp file (might be called dsm.opt) in your install
directory -- even if the installer didn't use it.

You should realize there have been changes (with how include/exclude for
system files are handled);  see the README file for some interesting
reading... they now use the registry and Microsoft recommendations for
excluding files that will get rolled into system object & registry backups,
so we no longer need exclude's for them in the dsm.opt.

Also, most shops use centralized configuration data, client option sets, to
leverage their options among various client platforms... you didn't say what
you were trying to accomplish;  did you have a specific question?

Hope this helps.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)
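As a starting point, a minimal Windows dsm.opt along these lines might look like the sketch below. The server address, node name, and port are placeholders; per the README note above, system-object excludes are no longer needed in the options file:

```
* Minimal dsm.opt sketch for a Windows client -- values are examples only
COMMMETHOD         TCPIP
TCPSERVERADDRESS   tsm.example.com
TCPPORT            1500
NODENAME           MYNODE
PASSWORDACCESS     GENERATE
COMPRESSION        YES
DOMAIN             C:
* Site-wide include/exclude rules are better delivered centrally from
* the server via a client option set (DEFINE CLOPTSET / DEFINE CLIENTOPT)
```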



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Laura Booth
Sent: Tuesday, August 13, 2002 11:26 AM
To: [EMAIL PROTECTED]
Subject: DSM.OPT FILE


Does anyone have an example of a dsm.opt file for the new 2000 5.1
client they could send me?

Thanks,
Laura Booth



Re: TSM Restore help

2002-08-13 Thread Don France

Nope... I haven't personally seen this error -- BUT I've seen similar types
of problems when the nodename was used by a later client version, then the
user tries to restore using the older client version.  Once a later version
is used (to backup or archive data), going back to the older version should
generate a more gracious message -- sometimes they just get the wrong message
index number!

Try running from a higher-level client (preferably 4.2.x); then, if it still
fails, call SupportLine... you may need to be at 4.2 or later to get
support, as your client level went out of service a long time ago.  Also, you
could bypass the file in question, using the web client, to see if other data
is restorable -- probably not.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Peppers, Holly
Sent: Tuesday, August 13, 2002 12:21 PM
To: [EMAIL PROTECTED]
Subject: TSM Restore help


I am trying to do a restore, and am receiving the following error message:
ANS4032E, file is not compressed. This is on an HP-UX client, restoring from
an old machine, using the virtualnodename parm.
I'm trying to do a restore from 3.1.08 to 3.1.08 HPUX. The TSM Server is at
V4.2.2.0.
Has anyone seen this message?  Help please!!!  Thanks

Holly L. Peppers
BCBSFL
Technical Services




Blue Cross Blue Shield of Florida, Inc., and its subsidiary and
affiliate companies are not responsible for errors or omissions in this
e-mail message. Any personal comments made in this e-mail do not reflect the
views of Blue Cross Blue Shield of Florida, Inc.



Re: Multiple logical libraries for 3584

2002-08-13 Thread Don France

Yep (to the last question);  you cannot span multiple physical libraries to
make a single logical.  You can define multiple logicals within a single
physical;  that is a common thing I've done, for various reasons.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Dan Foster
Sent: Tuesday, August 13, 2002 2:14 PM
To: [EMAIL PROTECTED]
Subject: Multiple logical libraries for 3584


I've got a question.

One 3584-L32 with 6 drives and one 3584-D32 with 6 drives.

Is it possible to have a logical library that covers 4 drives in
the L32, and a second logical library that covers last 2 drives
in the L32 and all 6 drives in the D32?

Or is that not a valid configuration -- ie, do I need to keep
logical libraries from spanning multiple drives in multiple 3584
units?

-Dan



Re: Mount point with backup stg cmd

2002-08-13 Thread Don France

Since you are using the same destination copypool, even if your mount limit
is 3, it could be waiting for the same output tape (not a drive).  TSM
will try to stack copypool data unless collocation is activated.

Also, you normally would not want to occupy all the drives;  most shops want
to keep a drive available for (a) restore requests, (b) daytime backup jobs,
(c) unanticipated drive failure or other maintenance chores (like
checkout/checkin and triggered db backups).
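In command terms, the behavior above can be checked and constrained explicitly. The pool and device-class names here are placeholders:

```
/* Confirm how many concurrent mounts the device class allows */
QUERY DEVCLASS 3575class FORMAT=DETAILED

/* Run each copy with a single process, leaving a drive free for
   restores and triggered database backups */
BACKUP STGPOOL diskpool copypool MAXPROCESS=1
BACKUP STGPOOL tapepool copypool MAXPROCESS=1
```

MAXPROCESS caps the number of processes (and therefore output mounts) the copy will claim; note that each process still needs its own output volume in the copypool.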

BTW, if you really wanted to find additional drives, they can be found on
the used market;  many customers are dumping their old 3575's to move up to
LTO or 3590... have your local IBM rep. refer you (else check the used
marketers out there).

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
David Longo
Sent: Tuesday, August 13, 2002 10:31 AM
To: [EMAIL PROTECTED]
Subject: Re: Mount point with backup stg cmd


It should work, only need one drive for that command.
However, if you do a "q devclass blah f=d" for the devclass
that is your tapepool, what is the "Mount Limit" defined
as.  It maybe less than 3.
Most people have it set to "DRIVES", therefore
making it the same as your number of tape drives.

David Longo

>>> [EMAIL PROTECTED] 08/13/02 11:45AM >>>
Hi, Experts,

I have a little 3575 L18 tape library attached to TSM on AIX. As I have
only 3 drives on it (strange..), I tried to issue 'backup stg tapepool
copypool' and it gets 2 drives. Now that I have only 1 drive left, I
issued 'backup stg diskpool copypool' thinking that copying from
diskpool to copypool (tapes) would need only 1 drive. But, it still says
'waiting for mount point'. Does it mean that anytime I issue the command
'backup stg', it will use 2 drives at a time?

I tried to buy more drives, but IBM withdrew 3575 library from market.
Any tips for searching for used market? Thanks for your help as usual.




Jin Bae Chi (Gus)
Data Center
614-287-2496
614-287-5488 Fax
e-mail: [EMAIL PROTECTED]






Re: Easiest way to do a command line Full Backup.

2002-08-13 Thread Don France

There are THREE "sure" ways to trigger a (traditional) FULL backup:

1 - run "dsmc i" (incremental command) but use a management class which maps
to a copygroup which specifies "absolute" (rather than modified) for "mode",
AND frequency=0 (to allow on demand);  this is the "easiest" way to force
FULL (for data mapped to the appropriate management class, via INCLUDE
specs.);

2 - use the archive command (but you must specify each file system);

3 - selective will also work, but you must specify each file system.

That should do it.
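Option 1 can be sketched end-to-end as follows. The domain, policy set, class, and pool names are placeholders, not values from this thread:

```
/* Server side: a management class whose backup copy group forces FULL */
DEFINE MGMTCLASS mydomain myset fullmc
DEFINE COPYGROUP mydomain myset fullmc TYPE=BACKUP MODE=ABSOLUTE FREQUENCY=0 DESTINATION=diskpool
VALIDATE POLICYSET mydomain myset
ACTIVATE POLICYSET mydomain myset

/* Client side: bind the data to that class in dsm.opt, then run */
*  INCLUDE *:\...\* fullmc
dsmc incremental
```

With MODE=ABSOLUTE, every incremental sends all bound files whether or not they changed; FREQUENCY=0 allows it on demand.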


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)




"Gent, Chad E." <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2002-08-13 13:37
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Easiest way to do a command line Full Backup.


Hello,

Is there an easy way to do a full command line backup.   I have an NT
client
version 4.2.15 and I would like to do full backups instead of incremental.

Thanks

Chad
IMPORTANT:  The security of electronic mail  sent through the Internet
is not guaranteed.  Legg Mason therefore recommends that you do not
send confidential information to us via electronic mail, including social
security numbers, account numbers, and personal identification numbers.

Delivery, and timely delivery, of electronic mail is also not
guaranteed.  Legg Mason therefore recommends that you do not send
time-sensitive
or action-oriented messages to us via electronic mail, including
authorization to  "buy" or "sell" a security or instructions to conduct
any
other financial transaction.  Such requests, orders or instructions will
not be processed until Legg Mason can confirm your instructions or
obtain appropriate written documentation where necessary.



Re: Delete a specific backup

2002-08-13 Thread Don France

It sounds like you are trying to manage your backup tapes (as with traditional
backup software -- like every other product out there);  you should just let
TSM manage the tapes -- if you have a failure on a daily incremental, you
simply re-run it the next night (or sooner, if you are so inclined)... TSM
will still only back up the new/changed files (assuming you are configured
in the standard, typical way).  There are cases where a customer wants "full"
backups more often than just the first time a full incremental is run -- but
that is not typical;  it's usually because they are trying to consolidate
mission-critical backups for the fastest restore, and there are better ways
to accomplish that goal than re-sending the whole file system across the
network (such as weekly image backups, and/or collocation of the file-level
incrementals).

Your question suggests that you might greatly benefit from a better
understanding of the progressive-incremental technology that TSM employs to
back up file-served data;  this product will use far fewer tapes to handle
storing several versions of any given file, and (when configured properly)
will manage the tapes and restore the data faster than its competitors.  For
example, your 200 GB file system probably has thousands of files;  using the
"Getting Started" book's rule of thumb, you'd need only about 1.7 times the
file space occupancy -- one full copy plus 5% for each additional day in
your point-in-time restore criteria (say, 14 days of point-in-time restore
capability, assuming less than 5% of the data changes per day across that
most recent 14-day period).
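That rule of thumb is just arithmetic, and can be checked in a few lines. The 5%/day change rate and the 14-day window are the assumptions from this posting, not TSM parameters:

```python
# Back-of-envelope tape estimate for TSM progressive-incremental backups,
# per the "Getting Started" rule of thumb: one full copy, plus an assumed
# daily change rate for each additional day of point-in-time coverage.

def occupancy_factor(extra_days, daily_change=0.05):
    """Multiplier on file space size: one full copy + change-rate per extra day."""
    return 1.0 + extra_days * daily_change

def tape_needed_gb(filespace_gb, extra_days, daily_change=0.05):
    """Estimated tape occupancy in GB for the given restore window."""
    return filespace_gb * occupancy_factor(extra_days, daily_change)

# The example from the posting: a 200 GB file system with a 14-day
# point-in-time window needs roughly 1.7x its size, i.e. about 340 GB.
factor = occupancy_factor(14)
estimate = tape_needed_gb(200, 14)
```

Raise `daily_change` for volatile data (databases, logs) and the factor grows quickly, which is one reason the thread steers large databases toward the TDP products instead.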

With TSM, there is no need to delete a specific day's backup of flat-file
data, plain and simple;  if that statement makes no sense to you, then you
need a better understanding of the product -- your email is from IGS, so
that would be my highest recommendation.  Now, all of this gets more "involved"
when discussing backup strategies for large data base volumes;  see the
applicable TDP books and discuss with the dba's supporting the application
to determine what's needed.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Christian Astuni
Sent: Tuesday, August 13, 2002 2:02 PM
To: [EMAIL PROTECTED]
Subject: Re: Delete a specific backup


Thank you for your response Daniel.
But what is the difference if I do full backups in between ??? If I see a
filespace of incremental backups as one.

I try to explain my question better. For example, I run an incremental
backup every day of about 200 GB of files. If for any reason this
backup fails, I want to delete this version and reuse the tapes.
Is this possible? ... yes? ... how?

Thank very much !!!
Best Regards


Christian Astuni
IBM Global Services
[EMAIL PROTECTED]
Tel. 4898-4621
Hipolito Yrigoyen 2149  - Martínez (1640) Bs. As. - Argentina




From: Daniel Sparrman
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
Date: 13/08/2002 12:42 p.m.
Subject: Re: Delete a specific backup
Please respond to "ADSM: Dist Stor Manager"





Hi

1. No, you cannot delete a specific day. However, you can set up your
management classes to handle this, or do fulls between incrementals.

2. You will have to do a backup stgpool "stgpoolname" "copypoolname". You
cannot set volumes from a primary stgpool offsite, only copypool volumes.

Tivoli handles off-site copies using something called a copypool. This
storagepool is normally only used for off-site and media protection
purposes. If you haven't got a copypool, you will need to define one, to
be able to handle off-site copies.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 Hägernäs
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51




Christian Astuni <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2002-08-13 11:40
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Delete a specific backup


Hi ... people !!! I have two questions 
1.- Is there any way to delete a specific backup of one day or one week?
I am running incremental backups Monday through Thursday, and an archive on
Friday. I want to delete all backups of a specific week. Do any people
kn

Re: Minimizing Database Utilization

2002-08-02 Thread Don France

Actually, you need to consider (a) 600 bytes per primary pool object, plus
(b) 200 bytes per copypool object... pretty simple, and "it works"!
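A quick sanity check of those per-object figures; 600 and 200 bytes are the rule-of-thumb values from this thread, and real databases vary with directory counts, archive data, and so on:

```python
# TSM database sizing rule of thumb: ~600 bytes of database space per
# primary-pool object, plus ~200 bytes per copy-pool object.

def db_bytes(objects, copypool_copies=1):
    """Estimated database space for the given stored-object count."""
    return objects * (600 + 200 * copypool_copies)

# 4.8 million files with one offsite copy pool comes to ~3.84 GB --
# comfortably inside the 10 GB database mentioned later in this thread.
estimate_gb = db_bytes(4_800_000) / 1e9
```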

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Todd Lundstedt
Sent: Tuesday, July 30, 2002 8:42 AM
To: [EMAIL PROTECTED]
Subject: Re: Minimizing Database Utilization


Well, well.. I totally read my book the wrong way.  I will go recalculate.
Thanks for pointing out this huge error on my part.  Now I have to go
figure out where the rest of my database utilization is going, too.


From: "Thomas Denier"
Date: 07/30/02 10:37 AM
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Re: Minimizing Database Utilization




> I based the increase in DB size on the "600k
> of database space per object stored by TSM" rule.

I believe the rule of thumb historically given in TSM documentation
is 600 bytes per object, not 600 kilobytes. I have a single client
with 4.8 million backup files in one of its file systems, and several
others with substantial fractions of that number. I have offsite copy
pools for all backups. All of this fits in a ten gigabyte database.



Re: TSM Administration with Microsoft Management Console?

2002-07-30 Thread Don France

The MMC snap-in is a Microsoft thing;  you'll only find it on Windoze... and
then, only if the TSM server resides on Win2K.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gerhard Rentschler
Sent: Tuesday, July 30, 2002 4:36 AM
To: [EMAIL PROTECTED]
Subject: TSM Administration with Microsoft Management Console?


Hi,
the TSM 4.2 Technical Guide mentions a snap-in for the Microsoft Management
Console which can be used for administering a TSM server. I just installed
TSM 5.1.1.1 on AIX and can't find anything like a snap-in. Where can I find
it?
Best regards
Gerhard

---
Gerhard Rentschler         email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany



Re: move drmedia

2002-07-30 Thread Don France

There is a checklabel parameter with DRM;  you can use
SET DRMCHECKLABEL NO
to suppress label reading during checkout part of MOVE DRM.
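A typical sequence with that setting applied looks like the sketch below; the state names are the DRM defaults:

```
/* Suppress label reading during the checkout phase of MOVE DRMEDIA */
SET DRMCHECKLABEL NO

/* Eject eligible copy-pool and DB-backup volumes and mark them
   as headed for the vault */
MOVE DRMEDIA * WHERESTATE=MOUNTABLE TOSTATE=VAULT

/* Confirm what DRM now considers offsite */
QUERY DRMEDIA * WHERESTATE=VAULT
```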

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Miles Purdy
Sent: Monday, July 29, 2002 7:05 AM
To: [EMAIL PROTECTED]
Subject: Re: move drmedia


'move drm' should not actually look at any tapes. This command updates the
database only.

Miles



--
Miles Purdy
System Manager
Farm Income Programs Directorate
Winnipeg, MB, CA
[EMAIL PROTECTED]
ph: (204) 984-1602 fax: (204) 983-7557

"If you hold a UNIX shell up to your ear, can you hear the C?"

-

>>> [EMAIL PROTECTED] 29-Jul-02 12:25:57 AM >>>
Hi,
How can I prevent my 3583 library to check barcode labels when using "move
drmedia" command.
I could not find checklabel option in the command help.
Regards,
Burak



Re: New Policy Domain

2002-07-26 Thread Don France

Changing domain assignment will affect retention parameters -- not data
location.  If you want the data moved, using 5.1.1.1 or later (the only
level I would consider using on a 5.1 server), use the new MOVE NODEDATA
command to get the old data into the new storage pool.  The alternative,
under 4.x, is to await expiration.
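The MOVE NODEDATA form looks like this; the node and pool names are placeholders:

```
/* 5.1.1+ only: relocate a node's stored data to the new domain's pool */
MOVE NODEDATA mynode FROMSTGPOOL=old_diskpool TOSTGPOOL=new_diskpool

/* Verify where the node's data now lives */
QUERY OCCUPANCY mynode
```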

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Argeropoulos, Bonnie
Sent: Wednesday, July 24, 2002 1:39 PM
To: [EMAIL PROTECTED]
Subject: New Policy Domain


Hello,

Running TSM 4.1.1 on AIX, and 4.1.1 on the NT clients.
I recently created a new domain with separate disk storage space from the
existing domain and moved 4 nodes to this domain.  Backups seem to be
running okay... but there is still occupancy for these nodes in the old
domain's disk space... this should definitely have migrated by now.

Does anyone know why I still have disk space in the old gen_disk...
shouldn't there only be space in the new gen_disk2?

Any help would be appreciated... thanks

Bonnie Argeropoulos
[EMAIL PROTECTED]

Node Name  Type  Filespace  Storage      Number of   Physical    Logical
                 Name       Pool Name    Files       Space       Space
                                                     Occupied    Occupied
                                                     (MB)        (MB)
---------  ----  ---------  -----------  ---------  ----------  ----------
HBO        Bkup  $1$DRA0:   GEN_DISK           366      383.84      237.17
HBO        Bkup  $1$DRA0:   GEN_DISK2          933    8,616.59    8,616.59
HBO        Bkup  $1$DRA0:   GEN_OFFSITE     11,956   36,774.52   36,475.88
HBO        Bkup  $1$DRA0:   GEN_TAPE        10,657   27,982.64   27,622.38



RE: Réf. : LTO Tape OR 9840

2002-07-25 Thread Don France

Well put, Wanda;  in a nutshell, I would add...
- consider LTO, in lieu of DLT (ie, NO COLLOCATION) -- okay for diskpool
migration and non-collocated copypool data;
- use 3590/9840/9940 for the heavy duty cycle workloads... onsite copies,
collocated data, any workload generating lots of start/stop actions.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Prather, Wanda
Sent: Thursday, July 25, 2002 8:19 AM
To: [EMAIL PROTECTED]
Subject: RE: Réf. : LTO Tape OR 9840


I think you also need to consider duty cycles - how much data does TSM push
per day, how hard will you be pushing the tape drives.

In my discussions with IBM & STK, they BOTH say that the LTO drives are not
designed to replace "enterprise class" drives, meaning the 3590's or 9840's.
The construction is just not designed to take the beating that the 3590's or
9840's are.   Which is why the drives are more expensive than LTO (DUH).
The 3590 & 9840 drives also have some performance characteristics that make
them faster than LTO for some operations (like dealing with a lot of
start/stop activity).

This is neither good or bad.  It's a matter of matching your hardware to
your environment.

We have a TSM server here with some 9840 drives that run OVER 10 HOURS per
drive per day.  That is quite a beating.  Nothing short of a 9840 or 3590
will do.

We have two other TSM environments that are much more sedate.   Those would
do fine with LTO.

Something else to consider.


Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

"Intelligence has much less practical application than you'd think" -
Scott Adams/Dilbert





-Original Message-
From: Guillaume Gilbert [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 24, 2002 4:38 PM
To: [EMAIL PROTECTED]
Subject: Réf. : LTO Tape OR 9840


Hi Joni

I personally prefer 9840's, but the bean counters love the LTO because the
drives are much lower cost and the cost per GB is lower. The 9840 is
probably the best drive/tape on the market today. The native throughput of
9840B's is 20 MB/sec while the LTO is 16 MB/sec. The start/stop on the 9840
is considerably faster than the LTO. Reclaiming a 40-50% full LTO took me 4
to 5 hours (client-compressed data), and I can move data off a full 9840
(client-compressed data) in less than 30 minutes. I reclaim
my 9840 at 40% and do about 20 to 30 tapes a day easy. Is your STK silo a
9310 or a 5500? Or is it one of the smaller ones like a L180 or L700? With
the big silos the
mount time is not much of an issue. The seek time will be faster with the
9840.

As for tape life, I've been working with 9840s for a little over a year
without any tape failure.

I look at it this way: if I lose a 9840, I lose a max of 20 GB of data.
With an LTO, I lose 5 times that.

Guillaume Gilbert
CGI Canada




Joni Moyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-07-24 15:17:32

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: LTO Tape OR 9840

Hello everyone!

The environment here is going to be changing soon... We will be moving off
of the mainframe and onto an AIX server that will be on our SAN.  We will
have 1 STK silo for the tapes for 2 TSM servers.  Right now we are
considering IBM's LTO or STK's 9840.  I was just wondering if anyone out
there has had experience with either one and if so,  what are the pro's and
con's of them?  Has anyone that has worked with LTO know how long it takes
to recover a bad tape?  Considering that they are 100GB tapes, it was
assumed that it would take 5 times as long as it does to recover a Magstar
3590(20 GB).  And also, do the tapes get damaged easily or is that all a
matter of handling them to take them offsite to vaults?  Thank you

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



RE: Réf. : LTO Tape OR 9840

2002-07-25 Thread Don France

The Gresham EDT software is needed for LAN-free and/or a SHARED library, and
then it's only needed on the LAN-free clients and TSM server --- not all
clients.

Also, I believe this middleware is not needed for IBM library;  only STK
w/ACSLS.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Joni Moyer
Sent: Thursday, July 25, 2002 4:22 AM
To: [EMAIL PROTECTED]
Subject: Re: Réf. : LTO Tape OR 9840


Our problem is that with our 9310 silo, if we go the IBM way we will have
to purchase third-party software, Gresham, that will manage the library.
From what I understand this is not a cheap solution because we have to pay
for each license we have out there, which is approximately 250.  What is
your environment?  Do you have any problems/concerns with either one?
Thanks!

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



From: Guillaume Gilbert
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 07/24/2002 04:38 PM
Subject: Réf. : LTO Tape OR 9840
Please respond to "ADSM: Dist Stor Manager"






Hi Joni

I personally prefer 9840's, but the bean counters love the LTO because the
drives are much lower cost and the cost per GB is lower. The 9840 is
probably the best drive/tape on the market today. The native throughput of
9840B's is 20 MB/sec while the LTO is 16 MB/sec. The start/stop on the 9840
is considerably faster than the LTO. Reclaiming a 40-50% full LTO took me 4
to 5 hours (client-compressed data), and I can move data off a full 9840
(client-compressed data) in less than 30 minutes. I reclaim
my 9840 at 40% and do about 20 to 30 tapes a day easy. Is your STK silo a
9310 or a 5500? Or is it one of the smaller ones like a L180 or L700? With
the big silos the
mount time is not much of an issue. The seek time will be faster with the
9840.

As for tape life, I've been working with 9840s for a little over a year
without any tape failure.

I look at it this way: if I lose a 9840, I lose a max of 20 GB of data.
With an LTO, I lose 5 times that.

Guillaume Gilbert
CGI Canada




Joni Moyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-07-24 15:17:32

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: LTO Tape OR 9840

Hello everyone!

The environment here is going to be changing soon... We will be moving off
of the mainframe and onto an AIX server that will be on our SAN.  We will
have 1 STK silo for the tapes for 2 TSM servers.  Right now we are
considering IBM's LTO or STK's 9840.  I was just wondering if anyone out
there has had experience with either one and if so,  what are the pro's and
con's of them?  Has anyone that has worked with LTO know how long it takes
to recover a bad tape?  Considering that they are 100GB tapes, it was
assumed that it would take 5 times as long as it does to recover a Magstar
3590(20 GB).  And also, do the tapes get damaged easily or is that all a
matter of handling them to take them offsite to vaults?  Thank you

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: Swappin Silos

2002-07-24 Thread Don France

Forgot to mention: in a true tape technology migration, use "nextstgpool" to
migrate a storage pool of data from one technology to another -- plus
appropriate controls on the "old" library so no new data gets created
there... all is okay, the data transfer is done during off-peak usage times,
and you can start/stop migration as desired (including the number of
processes/drives running concurrently).

This is not related to your specific scenario, but is applicable to many
others.
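For that general tape-migration case, the control knobs are the storage pool definitions. The pool names below are placeholders:

```
/* Chain the old pool to the new technology and force migration */
UPDATE STGPOOL old_tapepool NEXTSTGPOOL=new_ltopool
UPDATE STGPOOL old_tapepool HIGHMIG=0 LOWMIG=0

/* Keep new data out of the old pool while it drains */
UPDATE STGPOOL old_tapepool MAXSCRATCH=0
```

Raising HIGHMIG/LOWMIG back up pauses the migration, so the drain can be scheduled around peak hours.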


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
william derksen
Sent: Wednesday, July 24, 2002 3:43 PM
To: [EMAIL PROTECTED]
Subject: Swappin Silos


We are running a TSM server attached to an iceberg using Gresham's EDT.   I
would like to know the best way to move operations to a newer silo on the
same TSM server.  The tape media for the current silo and the replacement
silo is to be the same format.

It would be nice if clients could be moved over to the new silo incrementally,
i.e., both libraries operational at the same time.

Move data, checkout checkin, export import?

I've searched through the docs on this and have yet to find much help.

Advice and suggestions greatly appreciated.

TIA,

biru




biru_2000 the biru of the new millennium.


-
Do You Yahoo!?
Yahoo! Health - Feel better, live better



Re: Swappin Silos

2002-07-24 Thread Don France

Why not just update the library definition?  Remove all the tapes from
Lib-1 and insert them in Lib-2;  audit Lib-1 after removing all the tapes,
change the library definition to Lib-2, make the physical connections, have
the dev-class use the same library, and all is copasetic (zero data
movement -- since the drive type/format is the same, you're doing the
equivalent of a Unix "mv").

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
william derksen
Sent: Wednesday, July 24, 2002 3:43 PM
To: [EMAIL PROTECTED]
Subject: Swappin Silos


We are running a TSM server attached to an iceberg using Gresham's EDT.   I
would like to know the best way to move operations to a newer silo on the
same TSM server.  The tape media for the current silo and the replacement
silo is to be the same format.

It would be nice if clients could be moved over to the new silo incrementally,
i.e., both libraries operational at the same time.

Move data, checkout checkin, export import?

I've searched through the docs on this and have yet to find much help.

Advice and suggestions greatly appreciated.

TIA,

biru




biru_2000 the biru of the new millennium.





Re: LTO Tape OR 9840

2002-07-24 Thread Don France

I will share what I know from users across about a dozen customer accounts:

- 3590 Magstar drives are the Cadillac/Mercedes of the tape subsystems;
though you did not ask about this, I felt compelled to include it!

- LTO technology is based on IBM's Magstar, but shared in collaboration with
HP & Seagate;  most customers have migrated up from DLT, so are
(essentially) ecstatic about the speed and reliability of IBM's LTO!  (note,
IBM's LTO is what is considered to be the "Cadillac" of the LTO vendors,
although I've heard good things about Dell's and HP's.)  Price:capacity is
attractive - for a reason;  it's intended to compete with DLT (not high-end
tape transports like 3590 or 9840/9940).

- 9840's are the "old" (ie, outgoing) technology from STK;  I would advise
you to consider 9940A's, and maybe migrate to 9940B's -- that is the
emerging, latest tape technology from STK -- while the jury is still out,
initial experiences have been mostly positive (using SN6000 as a SAN-based
conduit to a shared library, looks pretty nice, clean... so far).  STK is
highly motivated to make this work!

Hope this helps!

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Joni Moyer
Sent: Wednesday, July 24, 2002 12:18 PM
To: [EMAIL PROTECTED]
Subject: LTO Tape OR 9840


Hello everyone!

The environment here is going to be changing soon... We will be moving off
of the mainframe and onto an AIX server that will be on our SAN.  We will
have 1 STK silo for the tapes for 2 TSM servers.  Right now we are
considering IBM's LTO or STK's 9840.  I was just wondering if anyone out
there has had experience with either one and if so,  what are the pro's and
con's of them?  Has anyone that has worked with LTO know how long it takes
to recover a bad tape?  Considering that they are 100GB tapes, it was
assumed that it would take 5 times as long as it does to recover a Magstar
3590(20 GB).  And also, do the tapes get damaged easily or is that all a
matter of handling them to take them offsite to vaults?  Thank you

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Re: Tape questions

2002-07-22 Thread Don France

Yes, but

You may want to ensure the copy pool version gets restored before you need
it;  "restore volume" is designed with this in mind -- refer to the admin
guide or reference, as it's dependent on the database info to find the copy
pool data.

If you truly wiped out all references to the data, the copypool references
might have (also) been deleted -- depends on how you deleted the volume's
data.  If you need old versions of data stored on that tape, you may need to
restore the TSM db to a point in time where it contains the copy pool info,
then do the "restore vol" (possibly, run 2nd TSM instance on current
server).

Alternatively, it may be simpler/easier to regenerate the data from the next
backup cycle --- or start backups now, to expedite backup of current
versions for the data that got "destroyed".
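For contrast, the normal case (volume still defined, just unreadable) is
handled entirely by "restore volume" from an admin session; VOL123 below is a
placeholder, and preview=yes is worth running first to confirm the copy pool
can supply every file:

```
update volume VOL123 access=destroyed
restore volume VOL123 preview=yes
restore volume VOL123
```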


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Rob Hefty
Sent: Monday, July 22, 2002 1:32 PM
To: [EMAIL PROTECTED]
Subject: Tape questions


Hello,

We recently removed a  damaged 3584 library tape from our primary tape pool.
We were unable to complete the move data command and the data was
unavailable.  We removed the tape and deleted it from the database.  Luckily
we still have the copy pool tape.  What if we need to do a restore from this
dataset, does TSM know to ask for the copy pool instead of the primary?

Thanks,
Rob Hefty
IS Operations
Lab Safety Supply



Re: command file execution

2002-07-22 Thread Don France

Joe,

Are you using Win2K's RSM (rather than TSM driver) for library manager?
(How did you "connect" the BackupExec service to TSM library manager?)

The output from the show session looks strange -- it indicates the last
event was restore, and that it ended, but Platform ID should be WinNT;
rather it looks like (maybe) the service-id?

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Monday, July 22, 2002 7:19 AM
To: [EMAIL PROTECTED]
Subject: Re: command file execution


Don,

I agree with all that you state below... and that is how I thought it worked
as well.
Here's what's really happening in my case though.
I execute a command schedule to recycle Backup Exec services on NT servers
(we use Backup Exec to backup 100s of Exchange servers to TSM).  We have
Backup Exec set up to use TSM as its robotic library
(virtual device).  Once the Backup Exec services come back up, Backup Exec
creates a session with TSM to "confirm" the robotic library that it's
connecting to.  This connection hangs out on the TSM
server until the idletimout parm in TSM kicks it out.

NOTE:  Backup Exec is a Veritas backup product that we use to backup only
Exchange data.

Below is the output of a show session command of 1 of the sessions that I'm
referring to.

THE QUESTION: Does "Last Verb ( EndTxn ), Last Verb State ( Sent )" mean
that TSM sent a message back to the client to end the session?  Is this a
problem with my Veritas Backup Exec software?  Why
does this session stay in the system?

Session 24806:Type=Node,   Id=LA4701S001BE
   Platform=LA4701S001, NodeId=119, Owner=LA4701S001
   SessType=4, Index=1, TermReason=0
   RecvWaitTime=0.000  (samples=0)
   Backup  Objects ( bytes )  Inserted: 0 ( 0.0 )
   Backup  Objects ( bytes )  Restored: 1 ( 0.1035 )
   Archive Objects ( bytes )  Inserted: 0 ( 0.0 )
   Archive Objects ( bytes ) Retrieved: 0 ( 0.0 )
   Last Verb ( EndTxn ), Last Verb State ( Sent )

Any help would be greatly appreciated.

Regards, Joe


-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Saturday, July 13, 2002 3:25 AM
To: [EMAIL PROTECTED]
Subject: Re: command file execution


The command file is launched, that's all;  there is no "connection" to
break, per se -- all that happens is the "dsmc schedule" daemon gets the
command (from the server's schedule arguments, of course), closes the
session (if prompted scheduling is used) and executes the command.   (The
client already has the command args, in polling-type schedules, so there
would be no session at all, unless/until the command script initiates a dsmc
command.)

The connection between server and client for launching a command file ends
as soon as the data gets passed to the client-scheduler daemon... BEFORE the
command file even runs (essentially).  To see this in action, match the
actlog entries with the dsmsched.log info;  unless you are using
server-prompted scheduling, there is no session/connection between TSM
client & TSM server until/unless the command script contains a
session-creating command (like dsmc).

Most folks will likely have a "dsmc args" (or similar) in the command file,
which creates a session with TSM server to run whatever the args say.  Upon
completion of dsmc command, that completion terminates associated sessions.

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Wednesday, July 10, 2002 6:39 AM
To: [EMAIL PROTECTED]
Subject: command file execution


An easy one...

When initiating the execution of a command file from the server to the
client via schedule (the command file resides on client), what breaks the
connection between the server and the client after the
command file completes execution?  Is it the Idletimeout parm?  Is there
another way to break the connection after the cmd sched executes?

Regards, Joe



Re: Checkin libvol V5.1.1 server

2002-07-17 Thread Don France

Checkin libvol
was, traditionally, a single-threaded component -- regardless of how many
processes you started;  this will probably not be changed, since most
customers use a single library per TSM server (and the libvol must be
serialized during the checkin process).

BTW, checkin will fail if there are no drives available (even if you request
it to just read the barcodes, without loading a drive with tapes).

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gill, Geoffrey L.
Sent: Tuesday, July 16, 2002 11:08 AM
To: [EMAIL PROTECTED]
Subject: Checkin libvol V5.1.1 server


I just upgraded my test server to TSM 5.1.1. I can't say I ever noticed this
on the V4 server so I don't know if this is the same or different. I checked
in 9 scratch volumes separately so I could watch all 9 processes finish. The
first 6 I purposefully used checkl=yes to force mounts on drives to see if
they were ok. The last 3 used checkl=no. The last 3 processes with checkl=no
sat there till the last process of checkl=yes finished before they ran. Not
sure why a process not checking a label couldn't finish sooner than one that
is. Anyone else seen this?

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:<mailto:[EMAIL PROTECTED]> [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: command file execution

2002-07-13 Thread Don France

The command file is launched, that's all;  there is no "connection" to
break, per se -- all that happens is the "dsmc schedule" daemon gets the
command (from the server's schedule arguments, of course), closes the
session (if prompted scheduling is used) and executes the command.   (The
client already has the command args, in polling-type schedules, so there
would be no session at all, unless/until the command script initiates a dsmc
command.)

The connection between server and client for launching a command file ends
as soon as the data gets passed to the client-scheduler daemon... BEFORE the
command file even runs (essentially).  To see this in action, match the
actlog entries with the dsmsched.log info;  unless you are using
server-prompted scheduling, there is no session/connection between TSM
client & TSM server until/unless the command script contains a
session-creating command (like dsmc).

Most folks will likely have a "dsmc args" (or similar) in the command file,
which creates a session with TSM server to run whatever the args say.  Upon
completion of dsmc command, that completion terminates associated sessions.
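For reference, a server-side command schedule of this kind might be defined as
below; the domain, schedule, node, and script path are all invented names:

```
def schedule STANDARD NIGHTLY_SCRIPT action=command -
    objects="/usr/local/bin/nightly_backup.sh" starttime=21:00
def association STANDARD NIGHTLY_SCRIPT NODEA
```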

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Wholey, Joseph (TGA\MLOL)
Sent: Wednesday, July 10, 2002 6:39 AM
To: [EMAIL PROTECTED]
Subject: command file execution


An easy one...

When initiating the execution of a command file from the server to the
client via schedule (the command file resides on client), what breaks the
connection between the server and the client after the
command file completes execution?  Is it the Idletimeout parm?  Is there
another way to break the connection after the cmd sched executes?

Regards, Joe



Re: backups of mac files not working

2002-07-13 Thread Don France

The *best* (and latest) server you should consider is either 4.2.2.6 or
5.1.1.1... these are BOTH patch levels, but they fix some serious damage.
Check the README's to see if there is an APAR of value to your problem.

Alternatively, get the PMR opened, so you get fixed (someday) and drop the
server back to the level that worked (probably too late, since you'll need
to restore an old database -- but, maybe you prepared for this possibility).
If it was my shop, I'd consider building a separate instance (on another
machine) for the Mac clients, until this gets resolved -- possibly using
a bit of "magic" with server-2-server to (cheaply) resolve the library access
issue;  then, after all gets fixed, export/import the nodes, etc.


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Rob Schroeder
Sent: Friday, July 12, 2002 12:29 PM
To: [EMAIL PROTECTED]
Subject: backups of mac files not working


Here is the error I am getting.

07/12/2002 14:23:49 fioScanDirEntry(): Can't map object 'C:
\temp\test\2-template M?W Size Ranges 11X7' into the local ANSI codepage,
skipping ...

Backups of this file worked fine with Win2k client 4.2.1.20 and Win2k
server 4.1.6.  I upgraded to client 4.2.1.32 and this is the error I get.
I originally sent an e-mail to this group and opened a ticket with Tivoli
support and both said to upgrade the server.  So here I am with Win2k
server
4.2.2.3 and I still have the problem.  I also have the problem with clients
4.2.2 and 5.1.1.  While I wait impatiently for Tivoli support to call back
I was hoping someone here
could help me.

Rob Schroeder
Famous Footwear



Re: Monthly shapshots/ Configuration management.

2002-07-02 Thread Don France

A coupla comments...

Have you checked the performance of monthly backupsets using v5???  (There
are supposed to be some performance improvements in there, somewhere.)

You mention that monthly archive is "driven by the server..., so less
fiddling at the client" -- how is that so?  Aren't you just going to script
the thing to choose the "monthly" or "yearly" -archmc?!?

Also, to relieve the TSM-db consumption, have you considered either of (a)
export the node (then later reincarnate the node) or (b) use a separate TSM
server, and start a new db once each year?  Using export node, you could be
using "incremental" for the monthly/yearly -- as a "poor man's" archive of
the yearly data that requires indefinite storage;  Re-starting the TSM-db
once every 2-5 years, to "reclaim" db space used by the annual snapshot is
another alternative.
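As a rough sketch of the two alternatives (the management-class, node, and
device-class names here are invented): option (a) is a client-side archive
bound to a long-retention class, option (b) an admin-side export:

```
/* (a) on the client: archive yearly data to a long-retention class */
dsmc archive "/home/*" -subdir=yes -archmc=YEARLY

/* (b) on the server: spill the node's data out of the online db    */
export node NODEA filedata=all devclass=LTOCLASS scratch=yes
```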

Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Steve Harris
Sent: Monday, July 01, 2002 4:39 PM
To: [EMAIL PROTECTED]
Subject: Re: Monthly shapshots/ Configuration management.


Bill and Paul thanks for the input.

The full requirement is for a full monthly backup kept 13 months and a
yearly snapshot that is to be kept indefinitely.

I did an analysis of 4 options using my current production servers and data
volumes

Monthly archive,
Monthly incremental to "monthly" node
Tape backupset
Backupset to sequential disk files of 1 GB size  which are then migrated
using HSM

One key problem with backupsets is the time that they take, and you're not
supposed to perform any backups or expiry whilst they are being created.
Given the tape usage profile on my system that means new tape drives, and
costs slots in the 3494. The backupset with migration option was
surprisingly even more expensive - again bigger drive requirements in order
to migrate some of the backupset data files whilst the backupset is still
being created.

The problem with the monthly incremental is that the yearly snapshot has to
be taken by a different method. Although this option did cost out cheaper,
it wasn't by much.

The Monthly archive costed out to be second least expensive, and the same
method can be used for monthly and yearly snapshots. Its also driven from
the server side and so requires less fiddling on each client. Using the KISS
principle, this is the winner.  The big downside is, of course, the database
size increase.  I can cope with the archive network  load provided I split
the work over the 8 days of 4 weekends.

So, any input on configuration management?

Steve.

>>> [EMAIL PROTECTED] 29/06/2002 0:08:35 >>>
Why not use backupsets? I know, you need 1 tape per server for each
backupset as opposed to putting all the new monthly backup data in its own
pool and filling the tapes.

We thought about a couple other ways around this...virtual volumes to
another server and having a FILE device class for the backupsets and then
archiving the backupset files to TSM tape.

For the virtual volumes, you need another TSM server license and full
support for backupsets as virtual volumes is not there. The RECONCILE
VOLUMES command does not recognize a backupset as a valid volume type and
deletes the virtual volume, but not the related backupset entries. It works,
but you have to remember to NOT run the command.

For the FILE devclass, you need enough disk space to hold a backupset. Plus
to restore from the backupset, you would need to retrieve the archive that
contains that backupset's files first.

Just some $.02 worth...

Bill Boyer
DSS, Inc.

>>> [EMAIL PROTECTED] 29/06/2002 11:59:36 >>>
You said backup, did you mean a full incremental once a month?
If so, the easiest way to do that is create a second node and second stanza
in your dsm.sys for the server with a different policy domain/management
class and use the -se option with a private dsm.opt for your include/exclude
list.  Mark the copygroup with mode=absolute.  Use a separate storage pool
if you would like to send just those offsite.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180






**
This e-mail, including any attachments sent with it, is confidential
and for the sole use of the intended recipient(s). This confidentiality
is not waived or lost if you receive it and you are not the intended
recipient(s), or if it is transmitted/ received in error.

Any unauthorised use, alteration, disclosure, distribution or review
of this e-mail is prohibited.  It may be subject to a statutory duty of
confidentiality if it relates to health service matter

Re: URGENT : ANS1503E Valid password not available for server 'TSMA'

2002-07-01 Thread Don France

Sounds like you are using passwordaccess=generate;  prove this by running
"dsmc -pa=<newpassword>"
where <newpassword> is the new password you set from the server.

To correct this, locate the password file (depends on client level) and
erase it; then run dsmc with no arguments to "set" the password file using
the value you used for .  See the "Using Clients" book for Unix --
"set password" command (from command line), etc.

 excerpt from latest "Install and User's Guide" ---
Use the passworddir option in your client system options file dsm.sys to
specify the directory location in which to store the encrypted password
file. The default directory location depends on how the client was
installed. When the passwordaccess option is set to generate and you specify
the password
option, the password option is ignored.
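A minimal dsm.sys stanza using these options might look like the following
(server name, address, and directory are examples only); after erasing the
stale password file, running a simple command such as "dsmc query session" as
root prompts for the password once and re-stores it:

```
SErvername        TSMA
   COMMMethod        TCPip
   TCPServeraddress  tsma.example.com
   PASSWORDAccess    generate
   PASSWORDDir       /etc/security/adsm
```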



Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
rachida elouaraini
Sent: Monday, July 01, 2002 1:30 PM
To: [EMAIL PROTECTED]
Subject: URGENT : ANS1503E Valid password not available for server
'TSMA'


Hi all,
My problem began when I changed the passwords of Three of my
clients TSM (by the commande update node). Since my
schedules are MISSED, and when I want to do an incremental
or selective backup from one of the three nodes (dsmc i
file) , I can't and the output of the command is :

Node Name: MICHLEFEN
ANS1503E Valid password not available for server 'TSMA'.
The administrator for your system must run TSM and enter the
password to store it locally.

How can I do this, I have already changed the password of
each node from the server interface (dsmadmc).
All the platforms are AIX 4.3.3.
TSM version is 4.1.
Thank you in advance



Re: Shortfalls in tsm/adsm

2002-06-27 Thread Don France

If you look at the results of "q proc f=d" or "q se ### f=d", you might come
close to knowing how much has transpired -- but you need something (like
ServerGraph?) to track progress so you can see what the instantaneous
performance level of a given task... kinda like vmtune/vmstat every 5 sec's,
then compare the delta and run a continuing, smooth curve graph connecting
the dots across the intervals.

I agree with Mark's post;  while this is "nice to have", and there IS more
instrumentation being incorporated in latest versions, I would put the
recent rash of relentless, recurring, regression bugs at the very HIGHEST
priority -- new features aren't worth the effort when they come at the
expense of serious breakage (eg, the recent/ongoing saga with expiration &
conflict-lock!).

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Thursday, June 27, 2002 7:28 AM
To: [EMAIL PROTECTED]
Subject: Re: Shortfalls in tsm/adsm


I cannot answer to the question but am afraid the answer is negative.
You can even find an APAR caused by TSM-driver for AIX (!!! not Solaris or
HP-UX) not being written according to AIX rules. Result: the devices being
deleted/unconfigured after restart (I learned this the hard way).
So lets not laugh too much on the others but try to be *always* better
than them.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Shortfalls in tsm/adsm

Hi.

reading a recent post on why people love tsm vs networker brought back a
lot of memories. Most of them great, but one that I just can't forgive.
There STILL appears (6 years after I broached the subject with the adsm
developers) to be no way to monitor in real-time what each of the tape
drives in the library is actually doing...

For example on Networker (And I know it's awful, I hate it as much as
the next geek who isn't enamoured by fancy GUI's etc) you can see
exactly what each drive is doing, AND HOW FAST IT'S DOING IT! Writing at
3.5MB/sec! Great. It's working fine. On tsm? Well there's manual mental
arithmetic and query proc if you feel brave...

Any chance I'm mistaken & the developers have actually fixed this
shortfall? Or at leats implemented a kernel table in their device
drivers so you can see the device stats like you can for hdisks?

TIA

Hamish.

--

I don't suffer from Insanity... | Linux User #16396
I enjoy every minute of it...   |
|
http://www.travellingkiwi.com/  |



Re: Shortfalls in tsm/adsm

2002-06-27 Thread Don France

Well... you are starting a NEW thread without saying as much in your subject
line;;; but, I'll answer anyway --- BUT the details will be left as an
exercise for the student.

Just write a menu program that (a) prevents escape to command-line, and (b)
invokes dsmc under the user's ID with the menu prompt for filespec -- as in
"dsmc restore %1"... the user can restore only files he has sufficient
rights to write/create.  Lots of shops have done this;  some do it to
prevent the TSM admin from doing anything else under root authority (sigh:()

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Dirk Kastens
Sent: Thursday, June 27, 2002 5:58 AM
To: [EMAIL PROTECTED]
Subject: Re: Shortfalls in tsm/adsm


Hi,

I just upgraded our server to 5.1. I'm still missing
quotas for client backup and archive space.
And I'm still looking for a possibility to allow
clients to restore their data but not to do backups.
We backup our /home filesystem at night, but we want
to allow our users to restore the files in their
homedirectories themselves without doing backups or
creating archives. Networker always had different client
programs for backup and restore.

Regards,

Dirk Kastens    Tel.: +49 541 969-2347
Universitaet Osnabrueck, Rechenzentrum (Computer Center)
Albrechtstr. 28, 49069 Osnabrueck, Germany



Re: Expiring Data...in an unconventional manner

2002-06-25 Thread Don France

Probably the simplest technique is to just "DEL FIlespace <node> <filespacename>"
from a TSM admin. session... but this does inspire the question --> "Do you
really want to have a period where no data can be restored, while awaiting
the 7th daily cycle?"

Another technique we've used is to change the copygroup (for this file
system's destination management class) to "absolute" every  7th day... this
tends to be more useful when the entire population or node needs a fresh
"Full" backup.
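Both techniques are one-liners from an admin session; the node, filespace, and
policy names below are placeholders, and the copy-group change only takes
effect once the policy set is re-activated:

```
/* expire one file system's backups outright */
delete filespace NODEA /scratch

/* or: force a fresh full on the next incremental */
update copygroup STANDARD ACTIVE MC7DAY STANDARD type=backup mode=absolute
activate policyset STANDARD ACTIVE
```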


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Hagopian, George
Sent: Tuesday, June 25, 2002 8:27 AM
To: [EMAIL PROTECTED]
Subject: Expiring Data...in an unconventional manner


Greetings all...

Interesting dilemma...I have an AIX box (unix server) that I am backing up
via TSM 4.2...and I need to expire the data on a particular filesystem every
7 days but (knew that was coming) I need to keep the rest of the filesystems
on the AIX box forever...is there a way to do this?...ok I know there has to
be but I just don't see the light...someone show me the light please

Thank You Very Much
George Hagopian
ICT Group Inc
AIX/TSM Admin



Re: DB backups

2002-06-22 Thread Don France

Great(?) news, FYI... the new "SHOW LOGPINNED" command is included in a
4.2.2 patch level (4.2.2.1) -- so now we can see who's gottit, and maybe
free things up before the server crashes!

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Wednesday, June 19, 2002 5:47 PM
To: [EMAIL PROTECTED]
Subject: Re: DB backups


Roll forward will even work in highly active environments, BUT, if you have
one constipated turtle you are screwed.  That can cause a log pin.
Typically, it is the 286 PC that is running on a 1200 baud line with a 50%
error rate from Afghanistan.  You get the point.

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Remeta, Mark [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 19, 2002 12:05 PM
To: [EMAIL PROTECTED]
Subject: Re: DB backups


Hi Dan,

Roll forward mode would work unless you have as much activity as Wanda does.
In that case you cannot use it because your log fills up too fast.

Mark


-Original Message-
From: Daniel Sparrman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 19, 2002 6:58 AM
To: [EMAIL PROTECTED]
Subject: Re: DB backups


Hi

You don't have to do full backups every hour to be able to do
point-in-time restore of the database.

Having your recovery log in roll-forward mode, means that you don't have
to backup your database. In case of a corrupt database, you just restore
you're database that you have backed up earlier, and then tell TSM to do a
point-in-time restore. This means that after TSM has restored the database
from the tape, it will inspect the log, to see what transactions have been
made after the database backup.

This was what I was trying to explain in my last message.

Using this scenario, you don't have to backup your database twice a day.
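Sketched as commands (the device class, date, and time values are invented):
roll-forward mode is set once, a periodic full backup establishes the base,
and a point-in-time recovery is then run from the halted server:

```
/* once, plus a base full backup */
set logmode rollforward
backup db devclass=LTOCLASS type=full

/* after a failure, with the server halted */
dsmserv restore db todate=06/19/2002 totime=23:59:59
```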

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Propellervägen 6B
183 62 HÄGERNÄS
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51




Remco Post <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 2002-06-18 11:35
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: DB backups


Hi,

The question is more, do you really want to be able to do a point-in-time
restore of your database. In case of a real disaster, you'll probably be
very happy if you get the server back to a state it was in 24 hours before
the disaster. In case of database corruption, You'll probably cannot
afford
to restore 10 times just to find an approx point in time for the
corrupting
transaction, which is still in you log... I guess that maybe a few
incremental db backups during the day would be good enough for most
people.
We just do two fulls every day, but then again, we have two separate
robots
for primary and copy pools...

On Mon, 17 Jun 2002 15:04:12 +0200
"Daniel Sparrman" <[EMAIL PROTECTED]> wrote:

> Hi
>
> You have to have your recovery log in "roll-forward" mode to be able
> to
do
>  a point-in-time restore of the database(up to the minute).
>
> This mean that the recovery log isn't purged at every write to the
> database. Instead, the log is purged when a database backup occurs. This

> means that you can restore your database "up to the second" it went
down.
>
> Doing backups of the database every minute seems, how am I going to
> put
> it... Not the best solution. This is not a way to be able to restore the

> database using point-in-time.
>
> Best Regards
>
> Daniel Sparrman
> ---
> Daniel Sparrman
> Exist i Stockholm AB
> Propellervägen 6B
> 183 62 HÄGERNÄS
> Växel: 08 - 754 98 00
> Mobil: 070 - 399 27 51
>
>
>
>
> William Rosette <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 2002-06-17
> 14:52 Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: DB backups
>
>
> You can get "up to the minute" if you do enough database backups, and
> I still don't understand the "up to the minute" idea.  If you are
> doing backups every minute that seems logical but do not most people
> backup
once
> a night? and if there is a database backup is not this database backup
and
> "up to the day" same as "up to the minute?"

Re: TSM v4.2.2.5 to v5.1.0.0 upgrade failed (AIX) - resolution

2002-06-22 Thread Don France

Gretchen,

Did you get an APAR number for that problem?  I have a customer wanting to
install 5.1, and will want to forewarn them if they need to run an audit
before the upgrade.

I saw IC28431, but that was for DLT devclass only, resolved at the 4.1.1.0
code level (and it only applied to NT platform).

Thanx,
Don


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gretchen L. Thiele
Sent: Friday, June 21, 2002 11:17 AM
To: [EMAIL PROTECTED]
Subject: TSM v4.2.2.5 to v5.1.0.0 upgrade failed (AIX) - resolution


I now have an answer from Level 2 - it's a known problem and
requires a database audit to resolve. This is an 86% utilized
100 GB database and should run two, if not three days (I hope!).

Again the error message was:

ANRD iminit.c(820): ThreadId<0> Error 8 converting Group.Members to
Backup.Groups.

Getting clearance from those affected now and will try to run this
over the weekend and upgrade next week. Will post audit stats at
the end of the process.

Gretchen Thiele
Princeton University



Re: dsmc sched as another user

2002-06-22 Thread Don France

You are right, a ksh script won't work -- BUT a compiled C program does work,
with SUID.

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gerald Wichmann
Sent: Thursday, May 16, 2002 10:15 AM
To: [EMAIL PROTECTED]
Subject: Re: dsmc sched as another user


Ya, good point, and I thought of that. Fortunately it's not a big issue here.
The latter suggestion about creating a program and setting SUID doesn't work.
At least not as a ksh script... that was the first thing I tried. So far only
sudo works...

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Thomas Denier [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 16, 2002 8:34 AM
To: [EMAIL PROTECTED]
Subject: Re: dsmc sched as another user

> Try using sudo.
> You can allow your non-root user execute only the dsmc command as root.

I think this would allow the non-root user to execute dsmc as root with
any operands, not just the 'sched' operand. This would be a serious
security exposure. The non-root user could replace any file on the system
with a copy of a different file or with an older version of the same file.
If the non-root user had root permission on any other Unix client system
the user could back up an arbitrary file there and restore it on the
system where he or she was a non-root user.

As far as I know, the only really safe way to do this is to write a
program specifically to start the scheduler and make that program
root owned, SUID, and executable by the user who needs to start the
scheduler. Many Unix systems even today have a bug that makes SUID
scripts dangerous. Unless you are certain that this bug is fixed on
your system you will need to write the program in C or some other
compiled language.



Re: Performance again!!!

2002-06-22 Thread Don France

Also, backup requires a lot of db interaction (insert, commit,
calculate/build file aggregates, etc.), whereas migration just moves data
from disk pool to tape... so, migration of 350 MB should be much faster --
notwithstanding tape mount and positioning (which, for DLT, can take several
minutes).
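
To time migration by itself, you can force it by dropping the thresholds
(the pool name here is just an example):

   update stgpool backuppool himig=0 lomig=0
   ...watch it with "query process", then put the thresholds back:
   update stgpool backuppool himig=90 lomig=70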

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev
Sent: Friday, May 17, 2002 1:11 AM
To: [EMAIL PROTECTED]
Subject: Re: Performance again!!!


Backup direct to tape involves network communications. Migration is purely a
server process, so checking it can eliminate some TCP bottlenecks (if any).
For a local client there should be no big difference.

Zlatko Krastev
IT Consultant



Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: Performance again!!!

Hello,

we didn't try that one ...
Is there any reason for the migration from disk to
tape to be faster than backup from disk to tape?

thx
Sandra

--- Zlatko Krastev <[EMAIL PROTECTED]> wrote:
> How long does the migration to tape take after
> backup to disk?
>
> Zlatko Krastev
> IT Consultant
>
>
>
>
> Please respond to "ADSM: Dist Stor Manager"
> <[EMAIL PROTECTED]>
> Sent by:"ADSM: Dist Stor Manager"
> <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> cc:
>
> Subject:Performance again!!!
>
> Hello everybody,
>
> It seems like TSM performance problems will never
> end!!!
>
> Here is the new problem:
>
> The customer is running TSM 4.2.1.0 on a Windows
> 2000
> server machine . An IBM rack case 82XX  which
> contains
> a Quantum DLT8000 tape drive is connected to the
> server.
> The driver version for the Quantum DLT drive is 1.5
> and
> is installed on the W2K machine. We tried a backup
> of
> 350MB on the local server with the Windows 2000
> Backup
>
> utility and it took us approximately 75 seconds .
>
> Next , we tried the same Backup from TSM using its
> Device Driver and it took us about 9 minutes . We
> tried switching TSM to use the Native device driver
> but still we got the same performance result .
>
> So we upgraded to 4.2.2 ; In the Device Manager for
> TSM,we can see that TSMSCSI.exe is upgraded to
> 4.2.2.25 and the ADSMSCSI.sys is 4.2.2.3 .  The
> server
> has a version of 4.2.2.25 .  Still , we obtained
> poor
> backup performance .
>
> We suspected that maybe it was a database bottleneck
> (even though it is still empty); so we tried the same
> backup using TSM, but with the destination on the
> hard disk.
> The performance was good and the backup finished
> within 75 seconds.  So, we can eliminate the database
> problem.
> Also, we noticed with version 4.2.2.0 that it is
> crashing frequently.  It was exiting abnormally.
>
> On the site of tivoli, the latest version of TSM
> server is 4.2.2 . We do not have the 5.1 release .
>
> does anyone have a suggestion?
>
> thx a lot
> Sandra
>
> __
> Do You Yahoo!?
> LAUNCH - Your Yahoo! Music Experience
> http://launch.yahoo.com


__
Do You Yahoo!?
LAUNCH - Your Yahoo! Music Experience
http://launch.yahoo.com



Re: retain extra versions question!!

2002-06-21 Thread Don France

Based on what your msg shows -- ve=14, vd=2, re=nolimit, ro=180...
the most recent 14 versions (ve=14) of a file that exists on the client
will be kept for "nolimit" number of days (after each run of expiration).

When the file is deleted on the client, the next full incremental will
mark all but the last 2 (vd=2) for deletion; those last 2 will be kept for
ro=180 days.
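
Those settings live in the backup copy group; the equivalent definition would
look something like this (assuming an editable policy set named STANDARD that
gets activated into the ACTIVE set shown below):

   update copygroup bkup14d standard bkup14m standard type=backup -
     verexists=14 verdeleted=2 retextra=nolimit retonly=180
   validate policyset bkup14d standard
   activate policyset bkup14d standard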


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Joseph Dawes
Sent: Friday, June 21, 2002 10:58 AM
To: [EMAIL PROTECTED]
Subject: retain extra versions question!!


If my co looks as follows:

Policy    Policy    Mgmt      Copy      Versions  Versions  Retain    Retain
Domain    Set Name  Class     Group     Data      Data      Extra     Only
Name                Name      Name      Exists    Deleted   Versions  Version
--------  --------  --------  --------  --------  --------  --------  -------
BKUP14D   ACTIVE    BKUP14M   STANDARD  14        2         No Limit  180



will retain extra versions set to no limit mean files will never expire???


Any help would be great


Joe



Re: Small files V's Large files

2002-06-21 Thread Don France

You might consider (a) image backup (in concert with incremental), (b)
journaled incremental or -INCRBYDATE during the week (in concert with image
and/or full progressive-incremental on the weekend).  Some folks like doing
monthly image (on a weekend) for mission critical file servers, then daily
journaled-incremental and weekly full-progressive-incremental.

You should get 5-10 GB/hr on a large file server with lots of files;  I've done
12 GB/hr on a benchmark-configured system (that was on NT, before Win2K --
which some report should be faster)... the key issues are (a) TSM server
speed in handling large quantities of files -- set your aggregate larger
(they recently increased the max. transaction size to 2 GB), and (b) file
server capability in processing through its directories (Unix is generally
faster than Win2K); limiting each file system to under 1 million
files/directories (and under 200 GB total size) helps... smaller becomes faster.
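
For example, the weekly mix might look like this from the client side (the
filespace name is illustrative):

   dsmc backup image /fs1               (monthly or weekend image)
   dsmc incremental -incrbydate /fs1    (fast weekday incrementals)
   dsmc incremental /fs1                (weekend full progressive incremental)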

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Dallas Gill
Sent: Thursday, June 20, 2002 9:27 PM
To: [EMAIL PROTECTED]
Subject: Small files V's Large files


Can anybody share with me the secret to getting good performance with small
files like I get with big files?  I know that I will not get the same
performance, but I would like to think that I would be getting at least half
the throughput that I get with large files.  I am getting approx 1 GB per
minute for large files (20 MB and bigger) and about 1 GB per 10 min for small
files.  Can anyone help?  Thanks, Dallas



Re: NSM Upgrade Experience

2002-06-21 Thread Don France

Thanks, Dale... That sounds okay... the conflict lock APAR appeared at
4.2.2.0 -- so there's no greater exposure (to it) by installing 4.2.2.5 --
another nail to persuade customers to serialize their
expiration-migration-reclamation schedules... actually convinced a couple to
run reclamation only on weekends, after appropriate tweaking of the storage
pools.

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Jolliff, Dale
Sent: Friday, June 21, 2002 3:48 AM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


For us, the lock conflict exhibits itself during expiration, when it hits
some corruption in our DB.  Systems that have time to run expiration,
reclamation and migration without other processes running may never see the
symptoms.   So I have been told.



-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 7:53 PM
To: 'ADSM: Dist Stor Manager'
Cc: Jolliff, Dale
Subject: RE: NSM Upgrade Experience


Ooops... we just advised a customer to install 4.2.2.5 --- for SumTab fix,
didn't know there is a (yet) new bug for expiration (introduced by
4.2.2.4???)

Are you sure that 4.2.2.6 is needed for EXPIRE INVENTORY to work, again ?!?
(There is nothing about expiration in the .4 or .5 APAR abstracts... Were
you/they referring to the "conflict lock" issue?  Geesh, .5 just came
out...sigh:()


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Jolliff, Dale
Sent: Thursday, June 20, 2002 2:57 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


I got my 5100-02 patch when someone at 800-IBM-SERV was playing phone tag
between myself and the CE when he was out investigating our PMH on duplicate
IP addresses.

I was told by level 2 this afternoon to avoid expiration on 4.2.2.4 until
4.2.2.6 is available
Real Soon Now.








-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 2:25 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


As I understand, support for TSM 4.1 expires June 30, 2002. So if you want
to be supported, you have no choice but to upgrade to TSM 4.2.

We too have been trying to implement EC017 on our NSM, and have had two
failed attempts at the upgrade. The upgrade to AIX 5 and TSM 4.2.1.9
requires an AIX load from 4mm tape. We got a bad 4mm tape. You know, WORN
technology (Write Once, Read Never). About 10% of the way into the upgrade
to AIX 5 we couldn't read the tape. Tried and tried and tried. But it was
bad. We had to recover from the MKSYSB we did that morning.  That's why we
take MKSYSB's.

The second time we tried, the EC script failed because \dbaa1dk31\db was on
HDISK2. Huh? How did that happen? It turned out that when the recovery was
done from the MKSYSB, all the HDISKs were renumbered by AIX. It took us a
couple of days with Level II support to get that straightened around. Took
Level II about 4 hours dialed in running AIX commands to re-do things and
put \dbaa1dk31\db back on HDISK31.

We have scheduled our third attempt at EC017 for Sunday, June 30. I've also
been told by Level II that anyone preparing to install EC017 should open a
PMR prior to the upgrade so support can track the activity. I was also told
that not very many NSM sites have done this upgrade, so many will be out of
support.

And what's this about OS patch 5100-02. Where do I get it and where do I
find these kinds of patches? Is there some piece of documentation I've
missed or did I just not catch something?

Good luck to all!!!
John G. Talafous
Information Systems Technical Principal
Global Software Support - Data Management
telephone:  (330)-471-3390
e-mail: [EMAIL PROTECTED]
http://www.ctnvm.inside.tkr/~talafous/
http://www.cis.corp.inside.tkr/networkstorage/



Re: NSM Upgrade Experience

2002-06-20 Thread Don France

And, if that isn't SAD enough, at 16:58 today, the AIX tar file for 4.2.2.5
got updated!  (Who knows what they just changed, if anything!?!)


-Original Message-
From: Don France [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 5:53 PM
To: 'ADSM: Dist Stor Manager'
Cc: '[EMAIL PROTECTED]'
Subject: RE: NSM Upgrade Experience


Ooops... we just advised a customer to install 4.2.2.5 --- for SumTab fix,
didn't know there is a (yet) new bug for expiration (introduced by
4.2.2.4???)

Are you sure that 4.2.2.6 is needed for EXPIRE INVENTORY to work, again ?!?
(There is nothing about expiration in the .4 or .5 APAR abstracts... Were
you/they referring to the "conflict lock" issue?  Geesh, .5 just came
out...sigh:()


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Jolliff, Dale
Sent: Thursday, June 20, 2002 2:57 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


I got my 5100-02 patch when someone at 800-IBM-SERV was playing phone tag
between myself and the CE when he was out investigating our PMH on duplicate
IP addresses.

I was told by level 2 this afternoon to avoid expiration on 4.2.2.4 until
4.2.2.6 is available
Real Soon Now.








-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 2:25 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


As I understand, support for TSM 4.1 expires June 30, 2002. So if you want
to be supported, you have no choice but to upgrade to TSM 4.2.

We too have been trying to implement EC017 on our NSM, and have had two
failed attempts at the upgrade. The upgrade to AIX 5 and TSM 4.2.1.9
requires an AIX load from 4mm tape. We got a bad 4mm tape. You know, WORN
technology (Write Once, Read Never). About 10% of the way into the upgrade
to AIX 5 we couldn't read the tape. Tried and tried and tried. But it was
bad. We had to recover from the MKSYSB we did that morning.  That's why we
take MKSYSB's.

The second time we tried, the EC script failed because \dbaa1dk31\db was on
HDISK2. Huh? How did that happen? It turned out that when the recovery was
done from the MKSYSB, all the HDISKs were renumbered by AIX. It took us a
couple of days with Level II support to get that straightened around. Took
Level II about 4 hours dialed in running AIX commands to re-do things and
put \dbaa1dk31\db back on HDISK31.

We have scheduled our third attempt at EC017 for Sunday, June 30. I've also
been told by Level II that anyone preparing to install EC017 should open a
PMR prior to the upgrade so support can track the activity. I was also told
that not very many NSM sites have done this upgrade, so many will be out of
support.

And what's this about OS patch 5100-02. Where do I get it and where do I
find these kinds of patches? Is there some piece of documentation I've
missed or did I just not catch something?

Good luck to all!!!
John G. Talafous
Information Systems Technical Principal
Global Software Support - Data Management
telephone:  (330)-471-3390
e-mail: [EMAIL PROTECTED]
http://www.ctnvm.inside.tkr/~talafous/
http://www.cis.corp.inside.tkr/networkstorage/



Re: solved my case... RE: How to flush DRM references, anybody know ?

2002-06-20 Thread Don France

Thanx for sharing, Dwight!!!


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cook, Dwight E
Sent: Thursday, June 20, 2002 1:59 PM
To: [EMAIL PROTECTED]
Subject: solved my case... RE: How to flush DRM references, anybody know
?


Well, just got off the phone with IBM.
They had me doing all sorts of queries looking for DR plans to delete,
but absolutely nothing was there
UNTIL I did a "q machine" and another guy's desktop AIX box was listed...
I did a "delete machine blah" and now I'm no longer using DRM !

Just thought I'd pass this along...

Dwight


-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 12:03 PM
To: [EMAIL PROTECTED]
Subject: Re: How to flush DRM references, anybody know ?


Dwight, if you find out, would you please pass on the info?

We don't need DRM anymore, but I can't figure out how to get rid of it,
either.



-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 12:36 PM
To: [EMAIL PROTECTED]
Subject: How to flush DRM references, anybody know ?


I've already opened a problem with Tivoli but anyone know how to flush all
references to DRM  ?

At one point in time in an attempt to issue a "q pr" the "q" was left off
and the "pr" initiated a prepare.
Now my server shows I'm using DRM !

In the past I just stuck on the license so it would report valid BUT I just
upgraded to 4.2.0.0, then 4.2.2.0 and the drm.lic file doesn't exist ! ! !
and my server reports license as "FAILED" so I'm looking for a way to flush
all internal references to DRM !

anyone know ?

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: NSM Upgrade Experience

2002-06-20 Thread Don France

Ooops... we just advised a customer to install 4.2.2.5 --- for SumTab fix,
didn't know there is a (yet) new bug for expiration (introduced by
4.2.2.4???)

Are you sure that 4.2.2.6 is needed for EXPIRE INVENTORY to work, again ?!?
(There is nothing about expiration in the .4 or .5 APAR abstracts... Were
you/they referring to the "conflict lock" issue?  Geesh, .5 just came
out...sigh:()


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Jolliff, Dale
Sent: Thursday, June 20, 2002 2:57 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


I got my 5100-02 patch when someone at 800-IBM-SERV was playing phone tag
between myself and the CE when he was out investigating our PMH on duplicate
IP addresses.

I was told by level 2 this afternoon to avoid expiration on 4.2.2.4 until
4.2.2.6 is available
Real Soon Now.








-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 2:25 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


As I understand, support for TSM 4.1 expires June 30, 2002. So if you want
to be supported, you have no choice but to upgrade to TSM 4.2.

We too have been trying to implement EC017 on our NSM, and have had two
failed attempts at the upgrade. The upgrade to AIX 5 and TSM 4.2.1.9
requires an AIX load from 4mm tape. We got a bad 4mm tape. You know, WORN
technology (Write Once, Read Never). About 10% of the way into the upgrade
to AIX 5 we couldn't read the tape. Tried and tried and tried. But it was
bad. We had to recover from the MKSYSB we did that morning.  That's why we
take MKSYSB's.

The second time we tried, the EC script failed because \dbaa1dk31\db was on
HDISK2. Huh? How did that happen? It turned out that when the recovery was
done from the MKSYSB, all the HDISKs were renumbered by AIX. It took us a
couple of days with Level II support to get that straightened around. Took
Level II about 4 hours dialed in running AIX commands to re-do things and
put \dbaa1dk31\db back on HDISK31.

We have scheduled our third attempt at EC017 for Sunday, June 30. I've also
been told by Level II that anyone preparing to install EC017 should open a
PMR prior to the upgrade so support can track the activity. I was also told
that not very many NSM sites have done this upgrade, so many will be out of
support.

And what's this about OS patch 5100-02. Where do I get it and where do I
find these kinds of patches? Is there some piece of documentation I've
missed or did I just not catch something?

Good luck to all!!!
John G. Talafous
Information Systems Technical Principal
Global Software Support - Data Management
telephone:  (330)-471-3390
e-mail: [EMAIL PROTECTED]
http://www.ctnvm.inside.tkr/~talafous/
http://www.cis.corp.inside.tkr/networkstorage/



Re: Slow Reclamation - And a need for speed...

2002-06-12 Thread Don France

1.  The lowest (in general) "rec" value I recommend is 60 -- meaning at least
60% of the volume is "reclaim-able".

Your rec=20 and rec=10 are way too low!  Some sites (to get the best bang for
the buck, in the past) stair-stepped every couple of hours from rec=95, then
rec=90, etc., until reaching the end of their window.

2.  If you're supporting WinNT or Win2K clients without the infamous DIRMC
trick (i.e., a special management class directing directories to a disk pool
that migrates to a FILE pool on disk, backed up to the copy pool as well),
your restores *AND* reclamation will run many times longer than necessary
(e.g., we restored 1.6 million directory objects to a Win2K disk in about 2
hours).
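
Both points can be scripted;  a rough sketch (all schedule, pool, and class
names here are examples):

   * stair-step reclamation inside the window:
   define schedule rec95  type=administrative starttime=08:00 -
     cmd="update stgpool copypool reclaim=95"
   define schedule rec85  type=administrative starttime=11:00 -
     cmd="update stgpool copypool reclaim=85"
   define schedule recoff type=administrative starttime=20:00 -
     cmd="update stgpool copypool reclaim=100"

   * the DIRMC trick: bind directories to a disk-based management class,
   define mgmtclass  mydomain myset dirmc
   define copygroup  mydomain myset dirmc type=backup destination=dirdisk
   activate policyset mydomain myset
   ...and in each Windows client's dsm.opt:
   dirmc dirmc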

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Coats, Jack
Sent: Tuesday, June 11, 2002 12:34 PM
To: [EMAIL PROTECTED]
Subject: Slow Reclamation - And a need for speed...


TSM 4.1.3 on NT 4, 2 LTO drives in IBM 18 tape library.  Single SCSI chain.

We run reclamation what seems like all day every day, but tapes don't
seem to free up in a timely manner.  And our offsite tapes are 'growing'.
We have been using TSM for about a year with 60 day retention, and have
only added two (small) clients in the last 2 months.

The pools are the onsite TAPEPOOL (that stays in the library) and the COPYPOOL
that goes offsite.  The number of tapes in the COPYPOOL (offsite) seems to be
growing faster than what would seem reasonable given the size of the
TAPEPOOL.

Lately I have stopped reclamation and done a 'move data' of some of the
lowest use volumes that are offsite, and now they are marked 'pending'.

Our current schedule is:
  Weekdays 1300 till 2000 - update stgpool COPYPOOL rec=20
  Weekdays 0900 till 1300 - update stgpool TAPEPOOL rec=10
  Saturday 0800 till 2000 - update stgpool TAPEPOOL rec=10
  Sunday 0800 till 2200 - update stgpool COPYPOOL rec=20

Would it be a good idea to up the reclamation percent?

Should I possibly do copypool on MWF from 0900-2000 then
tapedata on TTh instead of some on each day?

Suggestions?

TIA .. JC



Re: TSM v4.1 clients NOT compatible with V5.1 server?

2002-06-06 Thread Don France

Check the README files, to confirm your planned migration matches your
expectations... in general, old clients are supported (ie, continue to work
just fine!), within the following constraints:
- old client on new server, only old features can be used;
- new client on old server, usually okay provided new server features are
not exploited;
- new client to new server -- often, cannot go back to old client level...
see README for migration strategy in case you need to allow for this (highly
recommended for "early adopters").

So, your Win9x clients can continue to run "forever";  just need to know
that (eventually) they need to upgrade the OS (and TSM client) for continued
vendor support via standard service contracts.



Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
MC Matt Cooper (2838)
Sent: Thursday, June 06, 2002 11:10 AM
To: [EMAIL PROTECTED]
Subject: TSM v4.1 clients NOT compatible with V5.1 server?


Hello all,
 If I am reading things correctly, all my v4.1 clients are NOT
compatible with a V5.1 server.  Nor is a V5.1 client (where supported)
compatible with a V4.1.5 server.  Is this supposed to be a multi-step
upgrade?  Am I therefore supposed to tell management that the presence of
WIN95 laptops will stop our ability to upgrade to V5.1 (because they do not
have a v4.2 client)?
http://www.tivoli.com/support/storage_mgr/compatibility.html

I am running v4.1.5 server on z/OS 1.1.   I have WINxx (all of them )
clients, AIX, SUN, Novell, and MAC clients mostly v4.1.  Has anyone gone
this route already or can tell me what migration path I can take?
Matt



Re: node filespace cleanup, any ideas ? ? ?

2002-05-31 Thread Don France

Dwight,

The last completed incremental is what controls the update, per each
filespace;  hence, if you exclude.fs (or use DOMAIN to exclude), then
the date in the filespaces table should reflect that -- it's also what
is shown from the ba-client "q fi", or admin.client "q fi f=d".

So... you probably want to generate some tier-1 & tier-2 warning lists;
tier-1 to show how many fs backups are older than a week, how many are older
than a month --- then, send msg to owner to notify of planned purge from
backup storage (assume you have an SLA for doing this)... this could be
semi-automated, so at least some review (and notification) occurs before the
delete action is performed.
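
The tier lists are easy to pull from the server;  e.g., filespaces not backed
up in over 30 days (date-arithmetic syntax may vary slightly by server level):

   select node_name, filespace_name, backup_end from adsm.filespaces -
     where days(current_date) - days(backup_end) > 30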

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cook, Dwight E
Sent: Friday, May 31, 2002 6:53 AM
To: [EMAIL PROTECTED]
Subject: node filespace cleanup, any ideas ? ? ?


As filespaces come and go on a client, does anyone know of a solid way to
clean them up ? ? ?

Since directories are "backed up" even when they are "excluded" would the
"Last Backup Start Date/Time:" reflect the last time the file system was
mounted ? ? ?

here is an example of why I would like to clean up things...
We have an SAP disaster recovery box that has also been and will continue to
be used for misc other things...
So this has had QA TS BX PF etc... instances on it and filesystems have come
and gone over the last year because they are generally of the form
/oracle/XXX/sapdata#
where XXX is some instance like PF1 and # is a sequence number
Well, I did a
select node_name,sum(capacity) as "filespace capacity
MB",sum(capacity*pct_util/100) as "filespace occupied MB" from
adsm.filespaces group by node_name

and I really don't think this box has 17 exabytes of filespaces on it ;-)

Filespace MB            Occupied MB
----------------------  ----------------------
17,592,369,800,454      17,592,367,574,528

memory refresh:
terabyte is 1024 gigabytes
petabyte is 1024 terabytes
exabyte is 1024 petabytes
zettabyte is 1024 exabytes
yottabyte is 1024 zettabytes



Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: allocating disk volumes on RAID5 array

2002-05-29 Thread Don France

Zlatko,

I can appreciate your imaginary situation;  my comments were directed at the
typical scenario for incremental "backup" data, which is easily recovered on
the next night's cycle.  This is another reason there is no single, correct
answer to the question.

Every situation requires one to apply their analytical skills in concert
with TSM capabilities;  for my customers, the large db backups (and
mission-critical, dual-copied archive/redo logs) go straight to tape... as
for archive storage pools, either go straight to tape, or never use the
delete option (I would not advise using the delete option -- there are just
too many ways the process can fail, then you are without recourse!)  This is
a good point for considering RAID-0 (or just simple, non-RAID disk) for
backups;  for mission-critical data like redo logs, consider using RAID-1
(vs. RAID-5) for performance *and* protection.

Your point about how many logical volumes -- using just ONE logical volume
can cause major performance bottlenecks in TSM;  the disk queue for parallel
writes is done on a logical volume basis (hence my reference to "mount
point" wait -- it's really a delay);  it's been a long-time ROT (Rule of
Thumb) to allocate as many logical volumes as one wants parallel sessions...
per Wanda's original comments.  (Unless things have changed, which has not
been indicated in the latest performance info shared by developers.)
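
In practice that means carving the pool into several logical volumes spread
across the physical disks, rather than one big volume -- e.g. (paths and
sizes are just examples):

   define volume backuppool /tsm/disk1/bp01.dsm formatsize=4096
   define volume backuppool /tsm/disk2/bp02.dsm formatsize=4096
   define volume backuppool /tsm/disk3/bp03.dsm formatsize=4096
   define volume backuppool /tsm/disk4/bp04.dsm formatsize=4096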

Regards,
Don


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Zlatko Krastev/ACIT
Sent: Wednesday, May 29, 2002 3:56 AM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array


> ... I rather NOT use RAID-5 for TSM disk pool volumes... Why pay the
penalty
for calc & writing parity when you don't expect to read it more than
once?!? This data is so transient (in general, it exists for less than 48
hours)
it's not worth sacrificing ANY performance -- we really want to get the
backups AND migration done as quickly as possible, right?  These days, I
recommend RAID-0, striping without parity, if any at all; ...

Don,
on this list several people have had problems when something happened
before they completed daily BACKUP STGPOOL. If you define volumes over
RAID 0 array and only one disk fails you will lose many hundred GBs of
client backups. For some sites this might not be important and backups can
be restarted or wait for the next day's backup. But for others it can be
not possible. It is all about SLA you have.
And let's just imagine a situation - quarterly results are archived with the
-deletefiles option and your RAID-0 drops a disk before primary pool backup.

This is an imaginary situation, but you can find a more realistic one.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Re: allocating disk volumes on RAID5 array

Wanda has identified the critical choices;  there is no single, right
answer
because the performance factors intersect at different points for nearly
every situation -- if only because most sites are at different points
along
the evolutionary development of IT infrastructure, different sized disks
for
RAID, different RAID technologies (ESS vs. EMC vs. native/local drives,
etc.), and different performance ranges for a given TSM server box.

And it does matter how you decide to carve up your disk pool resources.
Just
as different tape capacities & technology will influence your choices for
how tape pools are configured, variety in the type and capacity AND number
of channels accessing a set of disks will dictate the kind of performance
you can achieve.  So, if you think your TSM server system can handle 25
(50
in today's terms) sessions concurrently, then you measure the disk I/O
performance choices against that level of concurrency.  Most folks (these
days) disregard the effect of distributing many logical (TSM) volumes over
some number of physical volumes;  increasing the number of logical volumes
per physical gets that many concurrent writes queued to the device.  The
more of each, the more sessions can be sustained without waiting for
"mount-point".

I rather NOT use RAID-5 for TSM disk pool volumes... Why pay the penalty
for
calc & writing parity when you don't expect to read it more than once?!?
This data is so transient (in general, it exists for less than 48 hours)
it's not worth sacrificing ANY performance -- we really want to get the
backups AND migration done as quickly as possible, right?  These days, I
recommend RAID-0, striping without parity, if any at all ...

Re: allocating disk volumes on RAID5 array

2002-05-28 Thread Don France

Wanda has identified the critical choices;  there is no single, right answer
because the performance factors intersect at different points for nearly
every situation -- if only because most sites are at different points along
the evolutionary development of IT infrastructure, different sized disks for
RAID, different RAID technologies (ESS vs. EMC vs. native/local drives,
etc.), and different performance ranges for a given TSM server box.

And it does matter how you decide to carve up your disk pool resources. Just
as different tape capacities & technology will influence your choices for
how tape pools are configured, variety in the type and capacity AND number
of channels accessing a set of disks will dictate the kind of performance
you can achieve.  So, if you think your TSM server system can handle 25 (50
in today's terms) sessions concurrently, then you measure the disk I/O
performance choices against that level of concurrency.  Most folks (these
days) disregard the effect of distributing many logical (TSM) volumes over
some number of physical volumes;  increasing the number of logical volumes
per physical gets that many concurrent writes queued to the device.  The
more of each, the more sessions can be sustained without waiting for
"mount-point".

I rather NOT use RAID-5 for TSM disk pool volumes... Why pay the penalty for
calc & writing parity when you don't expect to read it more than once?!?
This data is so transient (in general, it exists for less than 48 hours)
it's not worth sacrificing ANY performance -- we really want to get the
backups AND migration done as quickly as possible, right?  These days, I
recommend RAID-0, striping without parity, if any at all;  on most moderate
sized Unix machines, no striping at all, and as many smaller drives as can
be handled -- these days that translates to 36 or 72 GB drives, fill the
drive bays, get as much SCSI separation as possible... do the math on
dividing each physical drive into some 7-10 logical drives.  For RAID
anything, Gianluca's 7 or 8 physical drives per RAID config is a good
high-end count.
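
The "do the math" step above can be sketched as a quick calculation; the drive counts and split factor below are hypothetical examples, not recommendations:

```python
# A quick sketch of the "do the math" step: carving each physical drive
# into 7-10 logical volumes multiplies the parallel write streams the
# disk pool can sustain. Drive counts here are hypothetical examples.

def diskpool_volumes(physical_drives, volumes_per_drive):
    """Total logical volumes, i.e. potential concurrent backup sessions."""
    return physical_drives * volumes_per_drive

# e.g. ten 36 GB drives, each split into 8 logical volumes:
print(diskpool_volumes(10, 8))
```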


Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gerald Wichmann
Sent: Tuesday, May 28, 2002 3:30 PM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array


That's more along the lines of what kind of info I'm digging for - but
doesn't quite address how one goes about coming up with a number or size.
There may not be a "right" or "single" answer however you must admit there
is a generalized "right" answer. Consider for example that while RAID-5
arrays can vary in the # of disks assigned and size of them, there is a
generalized rule of thumb as Cordiali pointed out. Too few or too many disks
in your RAID-5 array and you can have performance implications. There's a
sort of sweet spot for creating RAID-5 arrays and keeping that in mind there
should also be a similar sweet spot in how many volumes one might want to
assign. All in all though I still wonder if it really matters a whole lot..
Whether you have 1 write going on or 10, you're still striping across the
array and I question whether you'd really see much difference one way or
another. Might be an interesting thing to try various variants of..

Regards,

Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 28, 2002 2:38 PM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array

Well yes, it does matter.
Trouble is, there is no "right" answer.
It's a performance thing.

Assuming you have multiple clients backing up concurrently to your disk
pool, TSM will start as many I/Os as there are clients sending data, up to
the number of  TSM "volumes" in your disk pool.

If you have more "volumes", then you get more I/O's in flight concurrently.
That's a good thing and will improve performance, until you get "too many"
in flight, then the effect of yanking the heads around degrades performance.

It's even harder to figure out what is optimal in a RAID situation, since
you don't have a 1-to-1 correspondendence between your TSM "volumes" and
physical disks.  And most RAID setups have some cache that acts as a buffer,
and that helps improve performance but further disassociates the number of
concurrent writes from the number of physical disks.

So think about it this way:  How many concurrent WRITES do you want to occur
in that RAID pool?  Pick a number, and create that many TSM "volumes".



-Original Message-
From: Gerald 

Re: changeing bytes to gigabytes

2002-05-26 Thread Don France

BUT WAIT... have you not seen 4.2.2 and 5.1.x -- both have "broken" summary
table info, specifically the BYTES column is (mostly, not always) ZERO!  I
am still researching the other columns, they may be FUBAR'ed also;  I am
told there is an APAR open for this -- IC33455 -- anyone know when it will
get fixed?!?  (For capacity planning & workload monitoring, this is the
single BEST resource we've used in a long time, since the old SMF days!)

Don France
Technical Architect -- Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Seay, Paul
Sent: Wednesday, May 22, 2002 7:12 PM
To: [EMAIL PROTECTED]
Subject: Re: changeing bytes to gigabytes


Select entity, cast(bytes/1024/1024/1024 as decimal(8,2)) as "Gigabytes  "
from summary
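
The same conversion can be done outside SQL when post-processing dsmadmc output in a script; rounding to two places here merely stands in for the decimal(8,2) cast above, and exact SQL truncation behaviour may differ:

```python
# Bytes-to-gigabytes conversion done outside SQL, e.g. when
# post-processing dsmadmc output in a script; rounding to two places
# stands in for the decimal(8,2) cast (exact SQL truncation may differ).

def to_gigabytes(nbytes):
    return round(nbytes / 1024 / 1024 / 1024, 2)

print(to_gigabytes(5_368_709_120))  # input is exactly 5 GiB of bytes
```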

Paul D. Seay, Jr.
Technical Specialist
Naptheon, INC
757-688-8180


-Original Message-
From: Blair, Georgia [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 21, 2002 11:52 AM
To: [EMAIL PROTECTED]
Subject: changeing bytes to gigabytes


I am using the summary table to monitor how much data migrates from disk on
a daily basis. What is the easiest way to change the amount to
gigabytes versus bytes? Or does someone have a particular select statement
for this.

Thanks in advance
[EMAIL PROTECTED]



Re: opinion on AIT vs LTO and 3570 tape technology?

2002-05-26 Thread Don France

Nicely put, Gianluca!  I agree that LTO is (nearly) a no-brainer here.

The part about using HSM or not -- depends on your customer's perspective
(and wallet).  I've been working with a couple clients who have similar (or
worse!) retention needs for some of their data;  we've about resolved to use
multiple TSM servers, once a TSM db gets "so large" that it's going to
hinder server recovery or expiration/migration/reclamation processing -- so,
after 1-year's worth of data is accumulated, export/import the node (and its
data) to a "restore/retrieve only" TSM server, maybe even on the same box.
The argument for HSM depends on whether they really wanna spend for 2-year's
worth of online disk;  if they really want the data as fast as "always
spinning rotating memory" can provide, then all the points about how fast
can data be gotten back from LTO are moot!  Notwithstanding this
round-a-bout argument for HSM, LTO is *the* emerging, cost-effective way to
store large volumes of data;  it's performance is between DLT and 3590,
capacity is much greater than both, is available from HP, Dell, etc. (though
I like IBM's the best, at least until it's more mature.)

Good luck,

Don France
Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E. --
www.pacepros.com)
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gianluca Mariani1
Sent: Friday, May 24, 2002 11:15 AM
To: [EMAIL PROTECTED]
Subject: Re: opinion on AIT vs LTO and 3570 tape technology?


Hi Lisa,
the main points in the AIT vs LTO contest to me are:

1. AIT is a proprietary format developed for the then-niche market of
digital media. It is true it has faster access times than LTO. This is,
though, just about the only advantage it can count over LTO. AIT-2
cartridges have 50GB native capacity compared to 100GB for native LTO;
AIT-2 can go up to 130GB for compressed capacity while LTO can reach 200GB.
AIT-2 has faster access times because the cartridge is smaller, so, on
average, the head has to go through a shorter tape length than LTO to get
to the first byte of data; but from then on contest is over, as LTO can
sustain transfer rates of 15MB/s in native mode and 30MB/s for compressed
data while AIT-2 runs, respectively, at 6 and 15.6MB/s.
what this means is that when you are transferring big sequential files, as
seems to be your case, LTO will "beat the pants off AIT" for overall
throughput; an analogy could be 3570 vs 3590. 3570 will get to data before
3590, on average, and then lose out on transfer speed. if you're talking
about start/stop and small file transfer then access times are important,
otherwise access time is much less of an issue. even in a situation like
this last one, LTO has a performance advantage that is quite impressive.
anyhow, no one beats 3570's capabilities for start/stop access situations.
Generation 2 LTO is, at the moment, under test and will be out in a few
months with 200GB native media and 30MB/s, or around that mark, native
transfer rate.

2. I don't know of any AIT automated library that can be compared to
3584LTO as to capacity and footprint; you have up to 248TB of native
capacity for the 3584, and you can start out with a base frame with up to
12 drives and up to 28TB of native capacity. AIT libraries, if I remember
correctly, cannot go further than a few TB (4, I think) and a few drives. If
money is a major consideration and you have a homogeneous environment, 3583
would still outpace AIT and cost a lot less than 3584.

3. LTO is an open standard, AIT is proprietary. What this means is that no
one company can control LTO's roadmap and force customers' choices. LTO has
a set roadmap for the next 4/5 years, and if you don't like IBM tape you
just go out and buy HP or STK or whatever and keep using your media. with
AIT you do what Sony tells you to.

4. LTO is SAN ready. LTO drives and libraries have Fiber Channel attachment
and can be put straight into a Storage Area Network. maybe with a GB
Ethernet, as seems to be your case, this is not an issue, but in general it's an
important point. TSM can drive these libraries and move data over the SAN
with a benefit for LAN traffic (ok, not always as we all know... :-)) . AIT
and 3570 are out of the picture here.
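
The streaming rates quoted in point 1 (AIT-2 at 6 MB/s native, LTO at 15 MB/s) can be sanity-checked with back-of-envelope arithmetic; seek and load overheads are ignored, so these are best-case sequential numbers only:

```python
# Back-of-envelope transfer times at the native streaming rates quoted in
# point 1 (AIT-2: 6 MB/s, LTO: 15 MB/s). Seek and load overheads are
# ignored, so these are best-case sequential numbers only.

def transfer_hours(data_gb, rate_mb_per_s):
    """Hours to stream data_gb at rate_mb_per_s (1 GB = 1024 MB here)."""
    return data_gb * 1024 / rate_mb_per_s / 3600

for name, rate in [("AIT-2", 6), ("LTO", 15)]:
    print(f"{name}: 100 GB in {transfer_hours(100, rate):.1f} h")
```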


hope this helps.

Cordiali saluti
Gianluca Mariani
Tivoli TSM GRT EMEA
Via Sciangai 53, Roma
 phones : +39(0)659664598
   +393351270554 (mobile)
[EMAIL PROTECTED]



From: Lisa Cabanas <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
To: [EMAIL PROTECTED]
Subject: opinion on AIT vs LTO and 3570 tape technology?

Re: TDP for Exchange and TSM sizing

2002-05-24 Thread Don France

The only "sanctioned" method of backups requires either full db or
transaction logs (diff or incr).  The very best explanation of the choices
involved with Exchange are in the Install & User's Guide;  also, see the
Redbooks for backing up databases with TSM -- and the many posts on this
forum from Del Hoobler.

Most dba's like daily FULL (online) backups -- regardless of the flavor;
that goes for Exch, SQL-server, Oracle (including SAP/R3 & Peoplesoft).  The
larger the db gets, the more important it becomes to minimize the data loss
(to under 24-hours) between FULLs;  mission-critical db's were on HA
machines, redo/transaction logs get dumped and copy-pooled every 3-4 hours
(depending on volume/time -- using filesystem monitor to watch for 30% of
capacity).

RMAN's not-so-great block-level incrementals have been slow on restore;  the
incr speed is wonderful, getting good restore speed has been very
difficult/frustrating... if you really want block-level incrementals (for
ANY db, consider anybody's snapshot -- like Veritas, EMC-TimeFinder, etc.)

All things considered, you must evaluate the db-size and tape media involved
in context with your restore SLA;  mostly, the dozen or so customers I've
had on Exchange go for daily or weekly fulls, plus (sometimes) incrementals
in between... but that was with Exch-5.5, smaller IS standards (always less
than 100 GB, strive to keep IS under 50GB, standard was 35 GB then put in
change control to create an additional server).

Now, with Exch-2K, the reliability & availability being much greater(?),
your IS will be best capped around 100GB, again depends on SLA for restore.
As your mileage will vary, per network, tape devices, and TSM server
capability, measure your environment for desired SLA --- on a shared LAN, we
readily achieved 18 GB/Hr restore speeds (old 3575, 100 Mbps production LAN
during the business day at 3pm).  Differential plus FULL versus several
incrementals after restoring FULL;  BUT, wait... with Exch-2K, you don't need
to restore the entire IS, just the group that holds the mail objects you are
after ... so, given 80-95% of restores are for a mailbox or individual items,
you're passing more tape than actually restoring.
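
The SLA arithmetic above is simple enough to sketch; the 18 GB/Hr rate is the figure observed in this post, and the IS sizes are the examples cited, not recommendations:

```python
# Rough restore-SLA arithmetic using the 18 GB/Hr restore rate observed
# in the post. IS sizes are the examples cited, not recommendations.

def restore_hours(size_gb, rate_gb_per_hr=18):
    """Hours to restore size_gb at a sustained rate_gb_per_hr."""
    return size_gb / rate_gb_per_hr

print(f"100 GB IS: {restore_hours(100):.1f} h")  # Exch-2K cap suggested
print(f" 35 GB IS: {restore_hours(35):.1f} h")   # Exch-5.5 standard cited
```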

Logs are 5MB, number depends on how much activity you have;  for my money's
worth, I'd advise (a) daily FULL (perhaps periodic INCR/DIFF during the day),
or (b) weekly FULL with daily DIFF (to consolidate logs for a given week --
in case you *must* do a restore, provided the number is "manageable" as
determined by your restore SLA!)

In our case, we typically saw less than 100 log files per day; for a week,
we'd accumulate maybe 500 or so. We used a separate storage pool, so each
day's data was separated only by other Exch. nodes on the same TSM server.

Good luck;  you're gonna need to do your own homework -- there are no
shortcuts to due diligence (on the customer's expectations and your
infrastructure to support any given SLA), especially for database apps!

Don France
Technical Architect - Tivoli Certified Consultant
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]
(P.A.C.E. -- www.pacepros.com)


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
DEAN LUNDGREN
Sent: Friday, May 24, 2002 12:42 PM
To: [EMAIL PROTECTED]
Subject: TDP for Exchange and TSM sizing


We are planning to convert our mail from Groupwise to Exchange 2000 and are
trying to estimate the load to TSM.   We have 2000 users on Groupwise and a
database of about 70 GB.   We are told to expect the Exchange to be three
times larger, or about 200GB.

From what I've read about TDP for Exchange we can choose to do fulls, then
either perform differentials or incremental on the log until the next full.
How big is the log compared to the database?

Some input we have received is to expect the whole database to be backed up,
which makes no sense to me.  Otherwise the TDP Exchange agent is worthless.
We have a 60 day retention policy for our mail and the implications to our
tape usage would be too much if we have to backup the whole thing.

Can anyone share the size of the Exchange database and how much log space
changes.  Please let me know whether you've chosen to use differential or
incremental and how often you do fulls.

Thanks,





Dean Lundgren
Sr. System Engineer
(740)322-5479



Re: Problems after unloaddb/loaddb

2002-05-23 Thread Don France

For item #1, traverse the Registry under
CurrentControlSet\...\adsm\server... there are separate entries for server1,
server2, server3, and server4.  Fix yours so server1 references the
"...\server1" directory.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E. --
www.pacepros.com)
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED]

-Original Message-
From: Rajesh Oak [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 23, 2002 10:23 AM
To: [EMAIL PROTECTED]
Subject: Problems after unloaddb/loaddb


Everyone,
I tried out the unloaddb/loaddb on one of my NT TSM Servers. The Server is
up and running but am facing a few problems:
1. I formatted space on my z drive for db and log(as I had before the
unloaddb)
dsmserv loadformat 1 z:\tsmdata\server1\log1.dsm 2048 1
z:\tsmdata\server1\db1.dsm 5120

It was looking for the devcnfg.out file under z:\tsmdata for the loaddb
command so I put it there. Then after it finished it was looking for the
volhist.out file in the same location. It finished without the file.
After the loaddb operation when I started the server it now looks for all
the files dsmserv.opt, volhist.out, devcnfg.out under
c:\program files\tivoli\tsm\server instead of c:\program
files\tivoli\tsm\server1.
1. How do I make it look for the dsmserv.opt file under c:\program
files\tivoli\tsm\server1 instead of c:\program files\tivoli\tsm\server
2. Now it will not delete any volhist files when I give the command
DELETE VOLHISTORY TODATE=05/22/2002 TOTIME="10:47:22" TYPE=DBBACKUP
 What is the problem? and is there a solution?

Rajesh Oak








Re: scheduled session vs non-scheduled client session

2002-05-23 Thread Don France

Len,

The session query is what you want, for a script to ascertain which
sessions to cancel... "q se f=d" might indicate session_type;  if not,
use the SQL query...
select session_id, session_type from sessions

You'll need to use a shell script which runs this as batch dsmadmc, trap
the output, then parse for the session_type (4, I think) for scheduled
session.

This info is reported in accounting records, SUMMARY table and SESSIONS
table.
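
The shell-script idea above can be sketched roughly as follows; note the session_type marker value "4" is only the guess from the post, and the dsmadmc flags and credentials are illustrative, so verify both against your own server before relying on this:

```python
# Sketch of the batch-dsmadmc approach: run the select in comma-delimited
# mode, parse out session_type, and cancel the scheduled sessions. The
# marker value "4" is only the guess from the post, and the dsmadmc
# flags/credentials are illustrative -- verify both on your server.
import subprocess

SCHEDULED = "4"  # assumed session_type marker for scheduled sessions

def scheduled_session_ids(comma_delimited_output):
    """Return session ids whose session_type matches the scheduled marker."""
    ids = []
    for line in comma_delimited_output.splitlines():
        parts = [p.strip().strip('"') for p in line.split(",")]
        if len(parts) == 2 and parts[1] == SCHEDULED:
            ids.append(parts[0])
    return ids

def cancel_scheduled(admin="admin", password="secret"):
    out = subprocess.run(
        ["dsmadmc", f"-id={admin}", f"-password={password}", "-comma",
         "select session_id, session_type from sessions"],
        capture_output=True, text=True).stdout
    for sid in scheduled_session_ids(out):
        subprocess.run(["dsmadmc", f"-id={admin}", f"-password={password}",
                        f"cancel session {sid}"])
```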

Hope this helps.

Don France
Technical Architect   Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
mailto:[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Len Boyle
Sent: Tuesday, May 14, 2002 4:04 PM
To: [EMAIL PROTECTED]
Subject: scheduled session vs non-scheduled client session


Good Day

Is there a method with a program to tell which sessions were started with the
TSM scheduler vs those that are started by a person?

The reason that I ask, is that we need to be able to cancel session if they
run into the prime shift hours, but I would like to not cancel any sessions that
are being run by someone.

Thanks len boyle



Re: TSM on a win2k cluster

2002-05-22 Thread Don France (TSMnews)

Cannot comment on the drive "view" or schedule-mode problems -- sounds like
a TSM-cluster service definition discrepancy (else, a bug; need to check the
cluster services setups and client code level, then contact Support Line or
install latest client code -- there's been a bunch of client-code activity
in cluster support this year, both for Win2K and AIX).

Regarding your speed/performance of incremental {If the EMC disks appear as
"local" drives, you should consider the NTFS journaling-incremental
feature.} -- alternatively, you may want to consider -incrbydate for your
weekday backups;  the speed of progressive-incremental is largely due to the
client traversing the entire file system structure to identify which files
to process -- file systems with large numbers of files (anything over
half-million) seem to be cause for performance concerns.  I had a client
that decided to address this issue by limiting their file systems to 100 GB;
starting a new drive-letter when reaching that size greatly helped mitigate
the daily incrementals (AND full file system restores, their main concern).
We recently did 1.6 million file/object restore for a 320 GB file system,
achieved nearly 10 GB/Hr with parallel restore sessions, DIRMC (very
important), DIRSonly and FILESonly options, to minimize NTFS re-org
thrashing, and CLASSIC (vs. no-query) restore path, to ensure minimal tape
mounts.

Result was backups of 12-20 GB, restores 6-15 GB/Hr, depending on
collocation standards.  High priority (ie, mission critical or
high-visibility) servers get collocated.

Hope this helps.  See, also, a dozen other posts this past couple months,
search for "cluster" in the subject field.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Firl Debra K" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, April 04, 2002 2:07 PM
Subject: TSM on a win2k cluster


> How is everyone elses experience using TSM on a win 2000 cluster?
>
> Here is ours.
>
>
> We have 2 win2k clusters that are just used for file/print.  All the disks
> are EMC DASD connected via fibre.One (clusterA has 11 group-disk
> resources, the other 9 (clusterB).  The clusters are used in any mode,
> active-active any drive combination, active-passive.
>
> ClusterA (308g used of 572g), about 4 million files
> ClusterB (642g used of 719g), about 4 million files.
>
> Both clusters are just used by users to shares for file and print access.
>
> Are there others out there that have clusters of this capacity and using
TSM
> to backup them?
>
>
> First Attempt.
> I looked at the redbook, sg24-5742-00, Using Tivoli Storage Management in
a
> clustered Windows NT environment.
> I first configured it using the common names method.  Initial complete
> backups of volumes took a while.  We averaged 4-6g/ hour.
> Worked somewhat ok, till I tried to kick off the scheduler.  When the
> cluster is in active passive mode, I could get the scheduler to work using
> sched mode prompted ,and refer to one dsm.opt file that included all the
> cluster domains, e:-o:., It would grab the last scheduler that was started
> and look at the dsm.opt file and run from there.  So one schedule ran to
> include all the drives NOW when the cluster is in active\passive mode,
I
> could only get it to work with sched polling and not all the schedulers
> would kick off.  2 to maybe 4 would kick off of the 11, so when using a
> separate dsm.opt file per disk cluster group only.
> Dealt with tech support, never got more scheduler to kick off then the 4.
> NOTICE that I had to have different configurations depending on how the
> cluster was in, whether active-active or active-passive.  So that was not
a
> solution.
>
> Tech support suggested using the unique names method which is the only
> method now referenced in their newer documentation.
>
> Ok, I registered with TSM a separate node per cluster group.  Created a
> separate scheduler per cluster group.  So I have 11 plus the quorum.  The
> initial complete backup speed about 4/6 g... .. Now something interesting
is
> happening.  On one node, 2 of the cluster group disks can't be seen in TSM.
> The users are using it fine on the server and it is available in the
> operating system.   I tried enabling the scheduler for those 2 disks, TSM
> does not back them up because it does not see them.  I swap to the other
> node the disk cluster groups.  TSM can now see it and manual backups and
> scheduled backups works.  Weird!
> On the node that does not recognize the disks, TSM thinks they are not
> clustered disks for some reason. Noticed the error when doing a command line
> backup.
>
> The que

Re: Windows 2000 Server Spec for TSM 5.1

2002-05-22 Thread Don France (TSMnews)

Your backup sizing is quite small;  are you sure it's that small?  There are
Redbooks on sizing for AIX;  also, there is much material from SHARE
proceedings on performance and tuning.

A more typical arrangement (I've seen) for a single-client+server situation
might be 50 GB to start, up to 100 GB total backup occupancy, on a
file/print server that is also a TSM server (and client) -- which I had at
two sites;  we configured with 4-way Dell processors, 512MB RAM, external
RAID for the file-served data, internal drives for the TSM db, log & disk
pool (102 GB).  This was a software development & marketing site, this HW
config was more than sufficient to handle both TSM and file-server loads
with sub-second response time for normal business day users... the main
deficiency was only one tape drive, so we insisted that primary storage
pool )for backups) stay on disk, two copy pools to tape (one for onsite, one
for offsite) to fully protect their data -- which we limited to 60 GB of
file server storage.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Dallas Gill" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, May 20, 2002 5:19 PM
Subject: Windows 2000 Server Spec for TSM 5.1


> Can someone please tell me or point me in the right direction to find some
> documentation on how I should spec my TSM Server, I need to find out how
> many CPU's I need also how much RAM I should have. I am looking to backup
> approx. 1Gb of data first up then approx 600Mb of Data for the incremental
> backups & this data all resides on the TSM server it self (no other
clients)
> Could someone please help. I am going to be running TSM Server 5.1
>
> Thanks.
>
> DJG



Re: Select Stmt? or Query

2002-05-21 Thread Don France (TSMnews)

Also, there *may* be other messages of interest (if b/a-client, 4961, 4959,
and others in close numeric proximity are the session statistics messages).


Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Steve Harris" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 07, 2002 7:26 PM
Subject: Re: Select Stmt? or Query


Try something like

select date_time, message
from actlog
where nodename='MYNODE'
and date_time > current timestamp - 12 hours
and msgno in (1234, 5678, 9876)

Just tune the where clause to select the messages that you want.  Run this
in commadelimited mode and pipe the output through your scripting tool of
choice to give you a pretty report.
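
As a rough sketch of the "pipe it through your scripting tool" step, the comma-delimited rows (date_time, message) from the select above could be collapsed into a terse two-column report like this; only the column order from that select is assumed:

```python
# Collapse comma-delimited actlog rows (date_time, message) into a terse
# two-column report. Only the column order from the select is assumed;
# csv handles the commas that appear inside quoted message text.
import csv
import io

def actlog_report(comma_delimited_rows):
    lines = []
    for row in csv.reader(io.StringIO(comma_delimited_rows)):
        if len(row) >= 2:
            lines.append(f"{row[0]}  {row[1]}")
    return "\n".join(lines)

sample = '2002-05-08 03:35:57,"ANE4961I Bytes transferred: 1,234"\n'
print(actlog_report(sample))
```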

Steve Harris
AIX and TSM Admin
Queensland Health, Brisbane Australia

>>> [EMAIL PROTECTED] 08/05/2002 3:35:57 >>>
Hello All:

I have a unix client that kicks off incremental backups thru Tivoli
using an internal dump process for a sql database.  I want to provide a
report to the group that monitors its backup.  The only information they
care about is whether a filespace backed up successfully, what files
failed, and how much data was transferred.  For the time being I am
running 'q actlog begind=-1 begint=18:00:00 sea='nodename', because we
try to schedule our backups between 6 p.m. and 6 a.m.  This is too much
information to run thru on a daily basis for them and they were hoping I
could trim down a report.  Does anybody have a select statement or query
that comes close to the needs I have?

Thanks,

Bud Brown
Information Services
Systems Administrator






Re: Tuning TSM

2002-05-17 Thread Don France (TSMnews)

Reading thru this thread, no one has mentioned that backup will be slower
than archive -- for TWO significant reasons:
1. The "standard" progressive-incremental requires a lot of work in comparing
the attributes of all files in the affected file systems, especially for a
LARGE number of files/directories (whereas archive has minimal database
overhead -- it just moves data).
2. Writes to disk are NOT as fast as tape IF the data can be delivered to
the tape device at "streaming" speed;  this is especially true if using
no-RAID or RAID-5 for disk pool striping (with parity)... RAID-0 might
compete if multiple paths & controllers are configured.  The big advantage
to disk pools is more concurrent backup/archive operations, then
disk-migration can stream the data off to tape.

So, firstly, debug fundamentals using tar and archive commands (to eliminate
db overhead comparing file system attributes to identify "changed"
files/objects);  once you are satisfied with the thruput for archive, allow
20-50% overhead for daily incremental. If your best "incremental" experience
is not satisfactory, (but archive is okay) consider other options discussed
in the performance-tuning papers -- such as, reducing the number of files
per file system, use incrbydate during the week, increase horsepower on the
client machine and/or TSM server (depending on where the incr. bottlenecks
are).

The SHARE archives do not yet have the Nashville proceedings posted; when
they do show up, they are in the members-only area  (I was just there,
searching for other sessions).


- Original Message -
From: "Ignacio Vidal" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 17, 2002 6:30 AM
Subject: Re: Tuning TSM


Zlatko:
OK: tcpwindowsize is in KBytes (actually my setting is 1280).

I have the same question (why client is idling?).
We have users working on that server from 9:00 until 22:00 (every
day...), backup jobs start about 00:15/01:15

I've monitored the nodes in disk i/o operations and network transfers,
in different moments of the day.
About cpu load/memory usage/pagination: the values are all OK, for
example:
- cpu load (usr) has an average of 5~7 during all day
- memory usage (have 7GB RAM) is not a problem (about 30~40% for
computational the same for noncomp)
- pagination: max use may be 10~12% (mostly of the day 0.5%, peaks
during user's work time)

Viewing your results (500GB in 3:10hs), and trying to compare: we're
backing up 250GB starting 00:15 and ending 07:00... 250GB in 6:45hs
(it's not good)

Yesterday I set resource utilization = 10 too, and performance was the
same (really a bit worst).

I think there are multiple problems (and our provider -IBM- cannot help
us):

First of all: disk performance (IBM should tell us how to get the best
from FastT500), we have from 15 to 25 MB/sec in the storage...

Then: all TSM nodes in our installation have not the same file
configuration.
I explain a bit more this: we have nodes merging a lot of files
(>25) with an average size of 40KB each and a few files (<1000) with
an average size of 50MB (it's Oracle Financials: the database server
keeps datafiles and files belonging to application and DB motor).

We have 4 nodes such as just described, with a total of 35~40GB for each
(average and growing...)

Well, here was a brief description.
I'm listening for new ideas.
Thanks

Ignacio


> -Mensaje original-
> De: Zlatko Krastev [mailto:[EMAIL PROTECTED]]
> Enviado el: viernes, 17 de mayo de 2002 5:29
> Para: [EMAIL PROTECTED]
> Asunto: Re: Tuning TSM
>
>
> Just a point
> TCPWindowsize parameter is measured in kilobytes not bytes.
> And according
> to Administrator's reference it must be between 0 and 2048. If not in
> range on client complains with ANS1036S for invalid value. On server
> values out of range mean 0, i.e. OS default.
> However this are side remarks. The main question is why
> client is idling.
> Have you monitored the node during to disk and to tape operation? Is
> migration starting during backup? Are you using DIRMC.
> You wrote client compression - what is the processor usage
> (user)? What is
> the disk load - is the processor I/O wait high? Is the paging
> space used -
> check with svmon -P.
> You should get much better results. For example recently
> we've achieved
> 500 GB in 3h10m - fairly good. It was similar to your config - AIX
> node&server, client compression, disk pool, GB ether. Ether
> was driven
> 10-25 MB/s depending on achieved compression. The bottleneck was EMC
> Symmetrix the node was reading from but another company was
> dealing with
> it and we were unable to get more than 60-70 MB/s read.
> Resourceutilization was set to 10.
>
> Zlatko Krastev
> IT Consultant
>
>
>
>
> Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> cc:
>
> Subject:Re: Tuning TSM
>
> Zlatko:
> Here are the answers:
>
> > Have you tested w

Re: Reclaming offsite tapes

2002-05-11 Thread Don France

The "Pending" state simply means the reuse-delay is in
effect!  That is, after expiration occurs, *all* offsite
tapes go thru the period defined for "re-use delay"...
which is intended to protect from over-writing tapes
that might be needed during the period immediately
following expiration -- usually about 4 days.

You should read about it in the Admin. Guide, for a
thorough understanding (of how offsite tapes are expired
then actually made "scratch" again thru DRM).

The main thrust is to protect offsite tapes released
thru reclamation from being over-written while there are
still unexpired db backup tapes that might be needed
(using db restore) to recover data from such tapes.

Hope this helps.

-Don
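For reference, a hedged sketch of the admin commands involved (the pool name
OFFSITE_COPY is a placeholder):

```
/* Check the delay currently in force on the copy pool */
q stgpool OFFSITE_COPY f=d   /* look for "Delay Period for Volume Reuse" */

/* List offsite volumes still held in the pending state */
q volume access=offsite status=pending

/* Adjust the window, e.g. the 4 days mentioned above */
update stgpool OFFSITE_COPY reusedelay=4
```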
> If it is "Pending", it means that, even though the tape has no valid
> contents for
> your current production database could recover, you still have a valid
> database
> backup offsite that has not expired that does know what is on that tape, and
> if
> in a disaster situation you need to use that old database backup, it would
> need
> the tape that is currently "Pending".
>
> Once it is shown as Free or Empty rather than Pending, you can re-use that
> tape.
>
> > -Original Message-
> > From: Adamson, Matt [SMTP:[EMAIL PROTECTED]]
> > Sent: Friday, May 10, 2002 4:50 PM
> > To:   [EMAIL PROTECTED]
> > Subject:  Re: Reclaming offsite tapes
> >
> > Sorry if I sound a little ignorant.  I took our backup environment over a
> > few months ago and have been trying to learn how it was set up.  It was
> > explained to me that I could not get tapes back from offsite, just in case
> > we need to do a Point in time restore with DRM. I have asked if we could
> > bring tapes back that are in a Pending state and was told NO.  Example: If
> > we pulled a DBSnap from lets say August 2001, it was explained to me that
> > some of these tapes that are marked Pending now, As of August 2001 they
> > could have data on them at that point in time.
> > Am I missing something or am I to assume that I can pull these(Pending)
> > tapes back.
> >
> > ={
> >
> > -Original Message-
> > From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
> > Sent: Friday, May 10, 2002 11:34 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Reclaming offsite tapes
> >
> >
> > When you start reclaims on a COPY pool where the tapes are OFFSITE, TSM
> > knows that the tapes aren't available (they are marked OFFSITE, yes?).  So
> > TSM does the reclaim using only the ONSITE tapes.
> >
> > If you have 3 tapes offsite that are only 10% good, TSM will mount a
> > scratch
> > tape, find the onsite copies of all those files, and create a new tape in
> > the OFFSITE tape pool that is 30% full.  Then it marks the OFFSITE tapes
> > as
> > EMPTY.  So you send the new tape offsite, and then you can bring the EMPTY
> > tapes back on site and reuse them.
> >
> > We do it constantly.
> >
> >
> > -Original Message-
> > From: Adamson, Matt [mailto:[EMAIL PROTECTED]]
> > Sent: Friday, May 10, 2002 2:17 PM
> > To: [EMAIL PROTECTED]
> > Subject: Reclaming offsite tapes
> >
> >
> > Here is my scenario...
> >
> > We currently send all of our tapes offsite for 7 years.  In the library we
> > collocate by filespace.  But, when we send the tapes offsite they are
> > uncollocated.  We have retired a number of servers in the past couple
> > years,
> > meaning we no longer need that data.  Being that we send the tapes offsite
> > Uncollocated, data from a retired server could be on the same tape of a
> > server that we still have in production.  Is there a way for me find out
> > if
> > this is so?  Is there a way I can call the tapes back from offsite and
> > perform some sort of reclamation?
> >
> > Any ideas would be great, but if I'm stuck I can deal with it.  Tape costs
> > are making look into all different scenarios.
> >
> > Thanks,
> >
> > Matt



Re: mksysb or sysback to library volume

2002-05-06 Thread Don France (TSMnews)

Nice catch, Bill -- that is another essential action, or maybe, mark it
private *before* using the tape...

- Original Message -
From: "Bill Mansfield" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, May 05, 2002 1:08 AM
Subject: Re: mksysb or sysback to library volume


> The only thing I would add to Don's excellent description is
> (1a) checkout the tape with option REMOVE=NO
> This will keep TSM from deciding to try to use the tape in the midde of
> the mksysb.
>
>
>
> _
> William Mansfield
> Senior Consultant
> Solution Technology, Inc
> 630 718 4238
>
>
>
>
> "Don France (TSMnews)" <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 05/03/2002 05:00 PM
> Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: mksysb or sysback to library volume
>
>
> Try using tapeutil (or some 3575 library tool, like mtlib on 3494) to
> mount
> a tape in a given drive -- you must choose a scratch tape and ensure TSM
> is
> not using the drive;  then, issue (1) make drive "offline" to TSM, (2) the
> cmd to mount the tape, (3) the cmd to mksysb to that device, then (4) the
> cmd to dismount the tape... then, manually remove the tape from the
> library
> (else it's still in TSM's inventory and could get used unless you also
> mark
> it "private".)
>
> Since sys. admin. time is more expensive than operator time, we usually
> specify the AIX box to have internal 4mm tape drive, so we script the
> mksysb
> to that drive, daily;  operators rotate the tapes, report any visual
> problems (if the script fails, the tape is not ejected), the script logs
> are
> used for admin. verification.
>
> Regards,
> Don
>
> Don France
> Technical Architect - Tivoli Certified Consultant
>
> Professional Association of Contract Employees (P.A.C.E.)
> San Jose, CA
> (408) 257-3037
> [EMAIL PROTECTED]
>
> - Original Message -
> From: "Martin, Jon R." <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, May 03, 2002 11:17 AM
> Subject: mksysb or sysback to library volume
>
>
> > Hello,
> >
> > Possibly I am imaging this whole thing.  However,  I believe it
> is
> > possible to backup the AIX operating system directly to a tape in the
> > Library, not through TSM necessarily.
> >
> > What AIX device would I choose to backup to?
> > rmt0 IBM Magstar MP 3570 Library Tape Drive
> > smc0 IBM Magstar MP 3575 Library Medium Changer
> > rmt1 IBM Magstar MP 3570 Library Tape Drive
> > rmt2 IBM Magstar MP 3570 Library Tape Drive
> > rmt3 IBM Magstar MP 3570 Library Tape Drive
> >
> > What if anything needs to be done to specify the tape cartridge(s) that
> will
> > be used?
> >
> > If anyone is doing this, a little insight to get me started would be
> greatly
> > appreciated.
> >
> > TSM version 3.7.2
> > Operating System: AIX 4.3.3 ML9
> > Server: IBM 7026-H50
> > Library: 3575-L32
> >
> > Thank You,
> > Jon Martin
> > AIX/Tivoli/TSM/SAN Administrator



Re: Management Class for Directorys on Linux

2002-05-05 Thread Don France (TSMnews)

You must realize the Linux directories (likely) fit nicely within the
available space in the TSM server... how do you know Linux clients ignore
your DIRMC?  (The only simple way I know to test is to create a
path greater than 160 bytes -- I think the TSM db only has room for about
150 bytes or so.)

- Original Message -
From: "Andreas Wöhrle" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, April 11, 2002 1:34 PM
Subject: Management Class for Directorys on Linux


Hi all,

I have a TSM Server Version 4.2 on WIN2K.
I have policies for WINNT, NOVELL and LINUX, with a management class for data
and directories.
For WINNT and NOVELL all is OK, but the Linux clients
ignore the management class for the directories. When I look in the GUI
client all seems good.
Has someone the same problem?

Thanks
Andreas Woehrle



Re: mksysb or sysback to library volume

2002-05-03 Thread Don France (TSMnews)

Try using tapeutil (or some 3575 library tool, like mtlib on 3494) to mount
a tape in a given drive -- you must choose a scratch tape and ensure TSM is
not using the drive;  then, issue (1) make drive "offline" to TSM, (2) the
cmd to mount the tape, (3) the cmd to mksysb to that device, then (4) the
cmd to dismount the tape... then, manually remove the tape from the library
(else it's still in TSM's inventory and could get used unless you also mark
it "private".)

Since sys. admin. time is more expensive than operator time, we usually
specify the AIX box to have internal 4mm tape drive, so we script the mksysb
to that drive, daily;  operators rotate the tapes, report any visual
problems (if the script fails, the tape is not ejected), the script logs are
used for admin. verification.
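The four steps above might look like the following on AIX (the slot and drive
element addresses are made-up examples -- verify the element addresses for
your own library before use):

```
# (1) take the drive offline to TSM
dsmadmc -id=admin -pa=xxxxx "update drive 3575LIB drive2 online=no"

# (2) mount a scratch tape: slot element 32 -> drive element 257
tapeutil -f /dev/smc0 move 32 257

# (3) write the system backup to that drive
mksysb -i /dev/rmt1

# (4) dismount the tape back to its slot, then remove it from the library
tapeutil -f /dev/smc0 move 257 32
```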

Regards,
Don

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: "Martin, Jon R." <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 03, 2002 11:17 AM
Subject: mksysb or sysback to library volume


> Hello,
>
> Possibly I am imaging this whole thing.  However,  I believe it is
> possible to backup the AIX operating system directly to a tape in the
> Library, not through TSM necessarily.
>
> What AIX device would I choose to backup to?
> rmt0 IBM Magstar MP 3570 Library Tape Drive
> smc0 IBM Magstar MP 3575 Library Medium Changer
> rmt1 IBM Magstar MP 3570 Library Tape Drive
> rmt2 IBM Magstar MP 3570 Library Tape Drive
> rmt3 IBM Magstar MP 3570 Library Tape Drive
>
> What if anything needs to be done to specify the tape cartridge(s) that
will
> be used?
>
> If anyone is doing this, a little insight to get me started would be
greatly
> appreciated.
>
> TSM version 3.7.2
> Operating System: AIX 4.3.3 ML9
> Server: IBM 7026-H50
> Library: 3575-L32
>
> Thank You,
> Jon Martin
> AIX/Tivoli/TSM/SAN Administrator



Re: Management Class problem

2002-05-03 Thread Don France (TSMnews)

This is documented in the Admin Guide about directories stored in backup
storage (and numerous APAR's to "fix" over the years since v1).

When directory information is stored for backups, TSM attempts to keep all
the info in the TSM database;  if there is more data than the database can
hold (due to long path names and/or ACLs), it uses backup storage and so must
choose a management class -- TSM chooses the management class with the longest
retention (maybe the default, maybe not), to ensure any data that must be
restored can also recover its associated directory.  Ed stated the rule much
more succinctly.
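A minimal client-option sketch of the DIRMC override mentioned above (the
class name DIRS is a placeholder; it must exist in the active policy set):

```
* dsm.opt / dsm.sys: bind directory objects to a specific management
* class instead of letting TSM pick the longest-retention class
DIRMC DIRS
```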

Hope this helps.

Don

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: "Wholey, Joseph (TGA\MLOL)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 03, 2002 6:46 AM
Subject: Re: Management Class problem


> Can someone clarify Edgardo's response.  Particularly, "when you set up
"NTWCLASS" with a higher retention number of versions".  Higher than what?
Is this by design?  I also have ONLY directory
> structures going to mgmtclasses that I would not suspect.  thx.  -joe-
>
> -Original Message-
> From: Edgardo Moso [mailto:[EMAIL PROTECTED]]
> Sent: Friday, April 26, 2002 9:37 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Management Class problem
>
>
> That happens when you set up "NTWCLASS" with higher retention number of
> days or versions.   The directory backup goes to the
> mgt classs with the highest retention.   Ours,  we specified the directory
> backup by using DIRMC "mgt classs".
>
>
>
>
>
> From: David Longo <[EMAIL PROTECTED]> on 04/26/2002 11:59 AM
>
> Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>
> To:   [EMAIL PROTECTED]
> cc:
> Subject:   Management Class problem
>
> I have TSM server 4.2.1.10 on AIX 4.3.3 ML09.  I have AIX clients TSM
> 4.2.1.23 and NT clients 4.2.1.20.  This is a new setup, has been running
> a few months.  I just noticed that "some" of the data from these clients
> is being bound to mgt class "NTWCLASS" and not to the default  Class.
>
> I double checked the ACTIVE management class and backup copy groups.
> The "DEFAULTCLASS" is the default and NTWCLASS is not.  (I have
> setup NTWCLASS, but not using it yet - or I thought not!!).  I do not have
> ANY
> CLIENTOPSETS defined.  I do not have these copygroups using each
> other as "NEXT".  I checked the dsm.opt and dsm.sys and backup.excl
> files and I am not using this class.  Using default or other special
> classes.
>
> Notice I said "some" of the data is going to wrong class, some of it is
> going
> to correct class. It is not clear on the data as to the pattern of what's
> going
> to wrong place.
>
> This data should all be bound to "default".  Whats's the deal?
>
>
>
> David B. Longo
> System Administrator
> Health First, Inc.
> 3300 Fiske Blvd.
> Rockledge, FL 32955-4305
> PH  321.434.5536
> Pager  321.634.8230
> Fax:321.434.5525
> [EMAIL PROTECTED]
>



Re: TSM 4.2 differences

2002-05-03 Thread Don France (TSMnews)

Nice catch, Bill!

- Original Message -
From: "Bill Mansfield" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 03, 2002 6:03 AM
Subject: Re: TSM 4.2 differences


> How about the TSM 4.2 Technical Guide Redbook SG24-6277?  Not powerpoint,
I know...
>
>
>
> _
> William Mansfield
> Senior Consultant
> Solution Technology, Inc
>
>
>
>
>
> "Don France (TSMnews)" <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 05/02/2002 02:54 PM
> Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: TSM 4.2 differences
>
>
> Nope... I do have hardcopy (from SHARE);  you might find what you want in
> the books --- there's a good "summary of changes" in the preface of the
> Admin. Guide, Using xxx Clients, and Admin. Ref.
>
> Don France
> Technical Architect - Tivoli Certified Consultant
>
> Professional Association of Contract Employees (P.A.C.E.)
> San Jose, CA
> (408) 257-3037
> [EMAIL PROTECTED]
>
>
> - Original Message -
> From: "Gerald Wichmann" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Thursday, May 02, 2002 10:39 AM
> Subject: TSM 4.2 differences
>
>
> > Does anyone have the TSM 4.2 differences powerpoint presentation on what
> > changed from 4.1 to 4.2? Or could point me in the proper place to look
> that
> > up. Thanks
> >
> > Gerald Wichmann
> > Sr. Systems Development Engineer
> > Zantaz, Inc.
> > 925.598.3099 w
> > 408.836.9062 c
> >



Re: Help Understanding Mgmt classes

2002-05-03 Thread Don France (TSMnews)

Michael,

The rules I described apply universally to backup objects with the same name
owned by a given node in backup storage.

With TDP products, in some cases, the backup objects are given different,
unique names every time a backup occurs -- you must review the Install &
User's Guide for the TDP you are using.  For example, TDP v1 for Exchange
creates unique names for every backup, so retention/expiration must be done
manually;  in v2 of this TDP, objects stored in backup storage are given the
same name, so their retention/expiration becomes "automated" like for files
backed up by the b/a-client -- and the management class rules, as I
described, will apply.

Hope this helps.

- Original Message -
From: "Regelin Michael (CHA)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 03, 2002 5:17 AM
Subject: Re: Help Understanding Mgmt classes


Hi Don,

I'm not sure I understand your answer.

By the way, thanks for your answer. I'm interested in this mailing and was
not the originator.


Our Backup solution is based on tdp for domino 1.1.2. Our tsm client is
4.2.1.20 based on Windows Nt4 server.

here is our strategy:
 a full backup a week - keeping 5 versions (management class=week)
 a full backup a month - keeping 13 versions (management class=month)
 an incremental backup a day - keeping 5 versions (management class=daily)

after reading your mail, I understand that having 3 MCs for the same file
will cause the retention to change after every backup when the MC is used.
So for example:
when the weekly backup finishes, it will apply its MC to the file, and when
the monthly backup finishes, it will change the retention on every version
based on the new MC?

thanks

Mike

>

___
> Michael REGELIN
> Ingénieur Informatique - O.S.O.
> Groupware & Messagerie
> Centre des Technologies de l'Information (CTI)
> Route des Acacias 82
> CP149 - 1211 Genève 8
> Tél. + 41 22 3274322  -  Fax + 41 22 3275499
> Mailto:[EMAIL PROTECTED]
> http://intraCTI.etat-ge.ch/services_complementaires/messagerie.html
> __________
>


-Original Message-
From: Don France (TSMnews) [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 3, 2002 03:39
To: [EMAIL PROTECTED]
Subject: Re: Help Understanding Mgmt classes


You are a bit confused.  The *ONLY* way to have TWO policies applicable to a
given file is to use TWO node-names for your backups;  swapping policy sets
*may* work for your situation, if what you want (and set) is 30 versions of
a given file... that piece will work.

Files can be bound only to one management class at a time; if you try
changing MC for the file, it will change ALL versions to that MC, not just
the next backup.  The policyset-swap trick is useful when changing from
modified to absolute and back;  that's about the only use I've ever seen for
multiple policy sets.  Hope this helps.

Regards,
Don

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: "Diana Noble" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 02, 2002 10:13 AM
Subject: Help Understanding Mgmt classes


> Hi All -
>
> I believe I have my management classes all defined with a major flaw.  We
> do scheduled modified backups during the week and scheduled absolute
> backups on Sundays.  I have two management classes defined.  Both have the
> same retentions coded but one has "absolute" for the copy mode and one has
> "modified" coded.  I have a script that swaps the default management class
> on Sundays.  After rereading the manual and looking at the archives of
this
> list, it seems there's no guarantee that the backup will use the default
> Management class.  Also, if I've specified to keep 30 versions of the data
> in both management classes, does that mean I'm going to retain 30 versions
> from the "absolute" and 30 versions of the "modified"?  I really want 30
> versions all together.
>
> My thought is to create multiple policy sets, and activate the policy set
> that contains only the management class I want.  I would then specify a
> retention of 4 versions for my policy set that contains the management
> class for "absolute".  This won't delete any of my 30 versions that were
> saved using the policy set that contains the "modified" management class,
> will it?  Does this make sense, or am I still way off here?
>
> Diana



Re: dsadmc -consolemode

2002-05-03 Thread Don France (TSMnews)

Probably not... however, if you are using AIX, you could run the dsmulog
daemon to capture the console log to a file (much like the old SYSLOG
feature on MVS), then you could "monitor" that file with "more" or "tail".
See the AIX Admin Guide for details.
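One hedged workaround, assuming a Unix admin client: prefix each console line
with a wall-clock time yourself by piping through awk. The sample ANR messages
below are illustrative; in practice you would pipe the output of
`dsmadmc -consolemode` into the awk stage instead of printf.

```shell
# Simulate two console messages, timestamping each line as it arrives:
printf 'ANR0406I Session 1 started\nANR2017I Administrator issued command\n' |
  awk '{ cmd = "date +%H:%M:%S"; cmd | getline ts; close(cmd); print ts, $0 }'
```

The close() call matters: without it awk reuses the first date result for
every subsequent line.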

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Gerald Wichmann" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 02, 2002 3:26 PM
Subject: dsadmc -consolemode


> Normally running dsmadmc -consolemode doesn't display any date/time stamp
> with each message. Is it possible to make it do so such as what gets
> displayed when you do a "q act"? I don't see anything in the guide so as
far
> as I can tell no..
>
> Gerald Wichmann
> Sr. Systems Development Engineer
> Zantaz, Inc.
> 925.598.3099 w
> 408.836.9062 c
>



Re: Help Understanding Mgmt classes

2002-05-03 Thread Don France (TSMnews)

Diana,

I guess I added to your confusion... I will try to clarify.  You CAN use
the policy set "trick" to flip between modified and absolute;  that's about
the only option that will help for a single-node-name solution.  Any other
attributes that are changed in the copy-group could adversely affect the
desired version count subject to expiration.

So, in your example, you could (once a week, when you have the cycles to
handle) activate a policy set that sets "absolute" for all nodes in that
domain.  Then on Monday, re-activate the normal policy set for "modified"
incrementals.  Assuming you have identical ve/vd/re/ro parameters, with
ve/vd both = 30, you will have 30 versions (max) of any given file, for up
to re/ro number of days.
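A sketch of that weekly swap, using hypothetical domain and policy-set names
(both sets must carry identical ve/vd/re/ro values, as noted above):

```
/* Sunday: activate the set whose backup copy group uses MODE=ABSOLUTE */
activate policyset OURDOMAIN FULLSET

/* Monday: switch back to the MODE=MODIFIED set for daily incrementals */
activate policyset OURDOMAIN DAILYSET
```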

I hesitate to recommend this approach, because the granularity of control is
at the policy domain level.  I would (firstly) question why your customer
needs to run TSM as if it were Veritas or Legato;  full backups this often
are unnecessary under TSM, due to its progressive incremental technology.
If you must run periodic full backups, I would do it using an alternative
node-name... so you don't get hurt trying to complete the backup in a given
24-hour cycle for ALL nodes in the domain (you'd have node-level
granularity).

Hope this helps.

Don
- Original Message -
From: "Diana Noble" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 03, 2002 6:22 AM
Subject: Re: Help Understanding Mgmt classes


> Don -
>
> I think I'm more than a bit confused.  So, according to your first
paragraph, I
> cannot activate a new (different) policy set within a domain and expect
that my
> files will then be backed up according to the mgmtclass specifications in
in
> the new policy set?
>
> So what is the best way to swap back and forth between absolute and
modified
> backups, keeping a retention of 30 versions combined.  Would it be best to
> modifiy my existing management class backup copygroup to absolute or
modified
> depending on what should be done that day, leaving the version count the
same?
> If I change it to absolute, what does that do the modified backups already
> taken, anything?
>
> Thank you for your help.
>
> Diana
>
>
>
> Quoting "Don France (TSMnews)" <[EMAIL PROTECTED]>:
>
> > You are a bit confused.  The *ONLY* way to have TWO policies applicable
to a
> > given file is to use TWO node-names for your backups;  swapping policy
sets
> > *may* work for your situation, if what you want (and set) is 30 versions
of
> > a given file... that piece will work.
> >
> > Files can be bound only to one management class at a time; if you try
> > changing MC for the file, it will change ALL versions to that MC, not
just
> > the next backup.  The policyset-swap trick is useful when changing from
> > modified to absolute and back;  that's about the only use I've ever seen
> > for
> > multiple policy sets.  Hope this helps.
> >
> > Regards,
> > Don
> >
> > Don France
> > Technical Architect - Tivoli Certified Consultant
> > Professional Association of Contract Employees (P.A.C.E.)
> > San Jose, CA
> > (408) 257-3037
> > [EMAIL PROTECTED]
> >
> > - Original Message -
> > From: "Diana Noble" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Thursday, May 02, 2002 10:13 AM
> > Subject: Help Understanding Mgmt classes
> >
> >
> > > Hi All -
> > >
> > > I believe I have my management classes all defined with a major flaw.
We
> > > do scheduled modified backups during the week and scheduled absolute
> > > backups on Sundays.  I have two management classes defined.  Both have
> > the
> > > same retentions coded but one has "absolute" for the copy mode and one
> > has
> > > "modified" coded.  I have a script that swaps the default management
> > class
> > > on Sundays.  After rereading the manual and looking at the archives of
> > this
> > > list, it seems there's no guarantee that the backup will use the
default
> > > Management class.  Also, if I've specified to keep 30 versions of the
> > data
> > > in both management classes, does that mean I'm going to retain 30
> > versions
> > > from the "absolute" and 30 versions of the "modified"?  I really want
30
> > > versions all together.
> > >
> > > My thought is to create multiple policy sets, and activate the policy
set
> > > that contains only the management class I want.  I would then specify
a
> > > retention of 4 versions for my policy set that contains the management
> > > class for "absolute".  This won't delete any of my 30 versions that
were
> > > saved using the policy set that contains the "modified" management
class,
> > > will it?  Does this make sense, or am I still way off here?
> > >
> > > Diana
> >



Re: Help Understanding Mgmt classes

2002-05-02 Thread Don France (TSMnews)

You are a bit confused.  The *ONLY* way to have TWO policies applicable to a
given file is to use TWO node-names for your backups;  swapping policy sets
*may* work for your situation, if what you want (and set) is 30 versions of
a given file... that piece will work.

Files can be bound only to one management class at a time; if you try
changing MC for the file, it will change ALL versions to that MC, not just
the next backup.  The policyset-swap trick is useful when changing from
modified to absolute and back;  that's about the only use I've ever seen for
multiple policy sets.  Hope this helps.

Regards,
Don

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

- Original Message -
From: "Diana Noble" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 02, 2002 10:13 AM
Subject: Help Understanding Mgmt classes


> Hi All -
>
> I believe I have my management classes all defined with a major flaw.  We
> do scheduled modified backups during the week and scheduled absolute
> backups on Sundays.  I have two management classes defined.  Both have the
> same retentions coded but one has "absolute" for the copy mode and one has
> "modified" coded.  I have a script that swaps the default management class
> on Sundays.  After rereading the manual and looking at the archives of
this
> list, it seems there's no guarantee that the backup will use the default
> Management class.  Also, if I've specified to keep 30 versions of the data
> in both management classes, does that mean I'm going to retain 30 versions
> from the "absolute" and 30 versions of the "modified"?  I really want 30
> versions all together.
>
> My thought is to create multiple policy sets, and activate the policy set
> that contains only the management class I want.  I would then specify a
> retention of 4 versions for my policy set that contains the management
> class for "absolute".  This won't delete any of my 30 versions that were
> saved using the policy set that contains the "modified" management class,
> will it?  Does this make sense, or am I still way off here?
>
> Diana



SELECT from SUMMARY -- BYTES column mostly zeroes on 4.2.2 and 5.1

2002-05-02 Thread Don France (TSMnews)

F Y I --- if you try using the SUMMARY table in the latest TWO maintenance drops of the
server, you get unreliable, mostly-zero values in the bytes-transferred column!

Don't know if the other columns are correct, but at least they are (mostly) non-zero;
also, not sure if all the data that belongs in this table is getting there.

There goes our monitoring for capacity & workload statistics!!!
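For anyone checking their own server, a typical query against the affected
column might look like this (the 24-hour window and BACKUP activity are
arbitrary examples):

```
select start_time, activity, entity, bytes
  from summary
 where activity='BACKUP'
   and start_time > current_timestamp - 24 hours
```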

Don France 
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.) 
San Jose, CA 
(408) 257-3037 
[EMAIL PROTECTED] 



Re: TSM 4.2 differences

2002-05-02 Thread Don France (TSMnews)

Nope... I do have hardcopy (from SHARE);  you might find what you want in
the books --- there's a good "summary of changes" in the preface of the
Admin. Guide, Using xxx Clients, and Admin. Ref.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Gerald Wichmann" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 02, 2002 10:39 AM
Subject: TSM 4.2 differences


> Does anyone have the TSM 4.2 differences powerpoint presentation on what
> changed from 4.1 to 4.2? Or could point me in the proper place to look
that
> up. Thanks
>
> Gerald Wichmann
> Sr. Systems Development Engineer
> Zantaz, Inc.
> 925.598.3099 w
> 408.836.9062 c
>



Re: copy storage pools

2002-04-30 Thread Don France (TSMnews)

I may have missed a large part of this thread;  seems that normal backup stg
works just fine (notwithstanding the courier damaging media in transit --
maybe need a "closed container with padding" contract, like Paul Seay is
doing).  Your concern becomes (1) the recovery plan (DRM solves this) and
(2) the time it takes to complete 50 servers;  most folks will tell you, the
business will survive if you can just identify the mission-critical servers,
and recover them first.

The *real* solution here, as anywhere, depends on how much it's worth (X
dollars) to get data back (in Y hours).  Most managers just need to
understand the cost associated with faster recovery times -- so, you
calculate the cost of filespace vs. node-based collocation for a given
example server;  use your best guess about which server situation the
business depends on the most --- OR, get the customer to classify the
service for their apps & servers, using just 3 categories (mission critical,
production, non-production).  For the mission-critical, calculate the cost
of the varying collocation settings... if you can winnow the list down to
just one or two file-servers that need collocation, you'll be okay (all the
other data can be restored, it will just take longer for some than others).

For most offsite DR's, imho, you may get away with no collocation for the
offsite tapes;  mission-critical data base servers are (generally) backed up
daily (full-online or full-snapshot or BCV's) so the data is already clumped
(no need for collocation).  It's the file servers that will bite you on a
DR;  configure them carefully with high-level directories to allow for
multi-session restore, and properly identify/isolate the key server or
two that need offsite collocation -- this also means a separate onsite
storage pool, to minimize the amount of data getting collocation treatment.
And, there are (other) varying choices to be made about collocation (ie,
onsite vs. offsite, controlling number of tapes in the pool, etc.).

The question of separating active from inactive data is (essentially)
answered with backupsets and export (filedata=active);  implementing this
for the new MOVE NODEDATA got a "concerned" response --- to do it requires
the aggregates be re-built, which becomes very time-consuming.  Seems like
an offsite reclamation "feature" would be nice... try to articulate a way of
getting just the active versions reclaimed, then submit to development for
review (via SHARE it would get a good peer review and visibility with
developers).
Hey, I like the way Gerald said it:
backup stgpool   filetype=active
This has its drawbacks, but would seem to come closer to what's desired than
the speed of backupset or export.  Alternatively, there IS the point about
most customers end up using point-in-time parameters when doing filesystem
restores.
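The "active data only" routes mentioned above can be sketched as admin commands (echoed here, not executed).  The node, backupset, and device-class names are placeholders; note that the real EXPORT NODE keyword for active backup data is FILEDATA=BACKUPACTIVE.

```shell
# Sketch of the backupset/export alternatives for active-only data.
# FILESRV01, WEEKLYSET and 3590DEV are made-up names -- substitute your own.
active_data_cmds() {
  # Export only the active backup versions of one node:
  echo export node FILESRV01 filedata=backupactive devclass=3590DEV scratch=yes
  # Or cut a backupset of the node's active files, kept for 30 days:
  echo generate backupset FILESRV01 WEEKLYSET '/*' devclass=3590DEV retention=30
}
active_data_cmds
```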

Hope this helps.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Rob Schroeder" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, April 30, 2002 12:16 PM
Subject: copy storage pools


> Here is my dilemma.  I have 50 Win2k servers.  Our auditors demand a
> complete disaster recovery plan, and I only have one data center.  I have
> about 2 terabytes of data active.  There are a couple oracle servers, sql
> servers, data servers and a whole bunch of application servers.  I cannot
> duplicate 60 3590E tapes everyday with a backup storage pool command.  I
> also cannot specify 50 generate backup sets and expect my operators to do
> it right, much less promptly.  Yet, I still need to have offsite copies of
> my data.  You may say that's the cost of doing infinite incrementals, but
> tell that to the companies using TSM that worked in the WTC, or had their
> building ruined by a tornado last week, or the one that will burn to the
> ground next week from arson.  Am I supposed to gamble my billion dollar
> business on that?
>
> Rob Schroeder
> Famous Footwear
> [EMAIL PROTECTED]



Re: Unix directory exclude question

2002-04-29 Thread Don France (TSMnews)

Sounds like a bug -- yes, there has been a level (or three) that incorrectly
caused the exclude list to be processed by the "ar" (archive) cmd... it's the
client code that controls this -- try running the latest (4.2.x) client, or, if
you're hot to use 5.1, get the latest 5.1 download patch.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Mattice, David" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, April 29, 2002 4:44 PM
Subject: Unix directory exclude question


> We are running a scheduled "incremental" on an AIX 4.3.3 client.  There is a
> need to exclude a specific directory tree, which needs to be "archived" via
> another, shell script based, scheduled "command".
>
> The initial idea was to add an "exclude.dir" in the client dsm.sys file.
> This caused the incremental to exclude that directory tree but, when
> performing the command line (dsmc) archive, the log indicates that this tree
> is excluded.
>
> Any assistance would be appreciated.
>
> Thanks,
> Dave
>
> ADT Security Services



Re: TDP R3 keeping monthly and yearly for different retentions?

2002-04-29 Thread Don France (TSMnews)

The customers I've worked with used a shell script to determine the -archmc value
for daily/weekly/monthly;  without TDP, the script manipulates the parameter
passed in for -archmc on the "dsmc ar" cmd... you could use a presched command to
do the same (or flip the profile name, causing TDP to use varying -archmc
values).
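A minimal sketch of that kind of wrapper: choose a management class from the calendar, then hand it to "dsmc ar".  The management-class names (MC_DAILY/MC_MONTHLY/MC_YEARLY) and the archive filespec are assumptions -- substitute whatever your policy domain actually defines.

```shell
# Pick an -archmc value based on the date: yearly on Jan 1st, monthly on
# the 1st of each month, daily otherwise.
pick_archmc() {
  if [ "$(date +%j)" = "001" ]; then
    echo "MC_YEARLY"          # Jan 1st: long-retention class
  elif [ "$(date +%d)" = "01" ]; then
    echo "MC_MONTHLY"         # 1st of the month: medium-retention class
  else
    echo "MC_DAILY"           # everything else: short-retention class
  fi
}

# Example invocation (echoed here, not executed; filespec is a placeholder):
echo dsmc archive "/oracle/sapdata/*" -archmc="$(pick_archmc)"
```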

- Original Message -
From: "Paul Fielding" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, April 29, 2002 10:22 AM
Subject: TDP R3 keeping monthly and yearly for different retentions?


Hi all,

I did some poking around the list and didn't see anything on the subject.

Does anybody have a good method for doing Monthly and Yearly backups of an
R3 (oracle) database using the TDP for R3? I have a requirement to maintain
daily backups for 2 weeks, monthly backups for 3 months and yearly backups
for 7 years.   Superficially, it appears to be straightforward to set up
different server stanzas within the TDP profile for different days of the
week, but that's it.

I suspect that I could get extra fancy and write a script to do a flip of
the profile to an alternate profile file on the appropriate days, and have
it flip back when it's done, but that seems like a bit of a band-aid to me
and I'm wondering if anyone's come up with something better?

regards,

Paul



Re: Big Restores?

2002-04-25 Thread Don France (TSMnews)

1. Turn OFF client and admin schedules;
2. Turn OFF any real-time virus scan (on the destination client);
3. If you're restoring to a Windows platform, use the -DIRSONLY option to
restore just the directories first -- after that first pass, restore with
-FILESONLY -- using PIT (point-in-time) restore options in both cases;
4. Use the command-line client, AND consider using CLASSIC restore (eg,
specify the -PICK option) so the server will sort & consolidate tape mounts;
5. Run multiple restore sessions from separate high-level directories, up to
the number of tape drives available for the task.

Monitor the network pipe on both ends, and ensure it's full of data (remove any
bottlenecks observed, such as other apps like lfcep.exe).  Expect to get
5-10 GB per hour with a large file server;  best case, maybe up to 15 GB/hr.
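Steps 3-5 above can be sketched as the commands they would generate.  The node name, filespaces, and point-in-time values below are placeholders (nothing from the original post); echo is used so the sketch runs without a TSM server.

```shell
# Two-pass Windows restore: directories first, then files, with
# point-in-time options on both passes.
PIT='-pitdate=04/25/2002 -pittime=23:59:59'

restore_cmds() {
  # Pass 1: directories only, so the tree and its attributes exist first.
  echo dsmc restore '\\bigserver\d$\*' -subdir=yes -dirsonly $PIT
  # Pass 2: files only; run one such session per high-level directory,
  # up to the number of tape drives available.
  echo dsmc restore '\\bigserver\d$\users\*' -subdir=yes -filesonly $PIT
  echo dsmc restore '\\bigserver\d$\apps\*' -subdir=yes -filesonly $PIT
}
restore_cmds
```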

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]


- Original Message -
From: "Schreiber, Roland" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 19, 2002 1:55 AM
Subject: Big Restores?


Hello,

how can I generally perform big restores (e.g., we have DIRMC)???

Any suggestions??


Regards,

Roland



Re: Technical comparisons

2002-04-25 Thread Don France (TSMnews)

There are a couple of (new) white papers on the Tivoli site... one's pretty
good ("Achieving Cost Savings..."), and has no "do not duplicate" notices;
the other is pretty weaselly -- it was commissioned from a consulting group
*and* has "do not duplicate" notices on it --- AND it's not all that good,
except for high-level management that are clueless about storage management
ROI details!!!

http://www.tivoli.com/products/solutions/storage/storage_related.html#white

I have been sharing copies of the good one with IT director types (and the
NT-platform admin types that think BrightStor is a good solution).  There's
another one I liked, also;  the "Disk to Disk Data File Backup and
Restore..." targets SANergy, but has an outstanding build-up from local tape
drive to network-based (and, ultimately, SAN-based) backup/restore
strategies.  It's listed with the Technical Briefs at the above link.

Don France
Technical Architect - Tivoli Certified Consultant

Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]



- Original Message -
From: "Jolliff, Dale" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, April 03, 2002 6:12 AM
Subject: Technical comparisons


> Does anyone have a link to some detailed "white paper" sort of comparisons
> between TSM and the leading competitors in storage management?
>
> I have a customer specifically asking for comparisons between Veritas and
> Tivoli - and the most recent google search turned up several marketing
> pieces from Veritas and one Gartner comparison on old versions of ADSM/TSM
> (version 3.x)..
>
> Surely someone else has already invented this wheel?



Re: Logical volume Snapshot, to enable 'online' image backups.

2002-04-23 Thread Don France (TSMnews)

Petur,

You will need the v5 server, as well as the v5 client;  there are other
limitations -- see the post from Anthony Wong, and go RTFM... the manuals are
now posted at

http://www.tivoli.com/support/public/Prodman/public_manuals/td/StorageManagerforWindows5.1.html

Have fun!


- Original Message -
From: "Petur Eythorsson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 12, 2002 6:52 AM
Subject: Logical volume Snapshot, to enable 'online' image backups.


> Hi Fellow comrades.
>
> I am searching for information on how to take Online Image Backups in W2k
> with TSM Client 5.1.  I know I haven't read the manual (so don't RTFM me :/ ).
>
> how do I do it.
>
> Do I need the TSM Server 5 for it, or can I do it with TSM Server 4?
>
> thanks in advance
>
> Kvedja/Regards
> Petur Eythorsson
> Taeknimadur/Technician
> IBM Certified Specialist - AIX
> Tivoli Storage Manager Certified Professional
> Microsoft Certified System Engineer
>
> [EMAIL PROTECTED]
>
>  Nyherji Hf  Simi TEL: +354-569-7700
>  Borgartun 37105 Iceland
>  URL:http://www.nyherji.is


