Re: Sum of inactive versions

2003-04-03 Thread bbullock
Thanks Nick, that's what I was looking for.
I'm not sure why I couldn't figure out that SQL query on my own...

I'll now start to make the wild guesses management is looking for... sigh

Ben

-Original Message-
From: Nicholas Cassimatis [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 02, 2003 12:43 PM
To: [EMAIL PROTECTED]
Subject: Re: Sum of inactive versions


This request is a statistical nightmare - don't promise results to be
identical to what you come up with...

You're missing single quotes around 'INACTIVE_VERSION', so the statement
looks like:

select * from backups where node_name='TSMHOST6' and
filespace_name='/export/home' and state='INACTIVE_VERSION'

(Not sure why you were using 'like' comparisons instead of '='.)

But the number of objects won't help you for how many tapes you'll save -
you need the average size of an object, too.  I can think of two ways to
guesstimate that value:

1.  For the average size of an object on a tape:

select avg(filesize) from contents where volume_name='XXX'

And do a random selection of volumes.

2.  Average size of an object for a particular node:

select sum(physical_mb)/sum(num_files) from occupancy where
node_name='NODENAME' and type='Bkup'

That, times the number of objects you think you can get rid of, is the
approximate amount of data space you'll get back.
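
To string those together, something like this (a rough sketch only - the ID,
password and node name are placeholders, and dsmadmc's banner lines will need
filtering before you trust the numbers):

#!/bin/ksh
# Rough "what if" estimate for one node:
# inactive objects x average object size.
NODE=NODENAME

# count of inactive backup objects (could also be split by type FILE/DIR)
dsmadmc -id=## -pass=## -tab \
  "select count(*) from backups where node_name='$NODE' \
   and state='INACTIVE_VERSION'"

# average object size in MB from the occupancy table
dsmadmc -id=## -pass=## -tab \
  "select sum(physical_mb)/sum(num_files) from occupancy \
   where node_name='$NODE' and type='Bkup'"

# multiply the two numbers (by hand or with bc) for the rough MB you'd get back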

And some more things to think about:

Not all objects will have the same number of inactive versions - some will
have 0, some will have your retain_extra +1 (depending on if expiration has
run or not).
TDP nodes won't be affected - the application on the client controls the
versions, not TSM.
Do you have archives?  They don't play by versions, either.

Have fun - I tend to cringe when I get projects like this one.

Nick Cassimatis
[EMAIL PROTECTED]

Think twice, type once.


Sum of inactive versions

2003-04-02 Thread bbullock
Folks,
Management would like to know what kind of impact we would have on the volume
of data we have stored in TSM if we were to lower the retention periods. They are
expecting something like "if we lower the 'retain only version' from 180 days to 60
days, we will free up X GB of tapes and X GB of database space." Unfortunately no
'what if' tools exist, so I have to build one to accomplish this.

Anybody gone through this exercise? Have any good queries they'd like to share?

It looks like I can get a count of all the objects from the BACKUPS table, but 
then equating that to volume of data is going to take another table, although I don't 
see how to get that at this point.

To take the first whack at this and get a ballpark figure, I thought I'd at 
least get a count of all the inactive objects in the database. Do you think I should
count DIRs and FILEs, or just FILEs?

This simple query is not working because I'm not getting the state correct. Anybody
know what an ENUMERATED(BACKUPSTATE) value should be? I tried 0 and 1 and that wasn't
it. I've tried various quotes and values, but can't seem to get it.


TSMSERV1A> select * from backups where node_name like 'TSMHOST6' and filespace_name
like '/export/home' and state like INACTIVE_VERSION
ANR2921E The SQL data type of expression 'STATE' is ENUMERATED(BACKUPSTATE); expecting 
a character string expression.

(The error cursor pointed into the statement at: ...espace_name like '/export/home' and state like INACTIVE_VERSION)

Anybody with more SQL experience want to help?

Thanks,
Ben Bullock
Unix Admin
Micron Technology Inc.
Boise ID


Re: Prompted mode backups did not start

2003-03-19 Thread bbullock
I had this happen once on one of my TSM servers running 5.1.5.2. It just 
looked like the scheduler process on the TSM server gave up and didn't even try to 
contact any of the clients.

I restarted the TSM services on the server and have not seen it since (that
was about 2 months ago). My intention was to open up a case if it happened again, but,
knock on wood, it has not happened again.

Ben

-Original Message-
From: DFrance [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 19, 2003 12:47 PM
To: [EMAIL PROTECTED]
Subject: Re: Prompted mode backups did not start


Hi David,

Haven't seen this on v5.1.1.6 -- would sure like to know how it gets resolved... have 
a customer running 5.1.6.2 since last week, and wants to start using server-prompted 
mode (for more precise scheduling).  Keep us posted, eh?!!

Thanks,
Don


Don France
Technical Architect -- Tivoli Certified Consultant
Tivoli Storage Manager, WinNT/2K, AIX/Unix, OS/390
San Jose, Ca
(408) 257-3037
mailto:[EMAIL PROTECTED] (change aye to a for replies)

Professional Association of Contract Employees
(P.A.C.E. -- www.pacepros.com)



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
David E Ehresman
Sent: Wednesday, March 19, 2003 7:32 AM
To: [EMAIL PROTECTED]
Subject: Prompted mode backups did not start


We've been running prompted mode backups for a few months and have been
running the AIX TSM Server 5.1.6.2 for about a month and life has been
good.  Last night, NONE of our prompted mode backups started.  There
were no server side "ANR2561I Schedule prompter contacting VHOST3
(session 23462) to start a scheduled operation." messages that we
normally get at the start of the backup window.  Other than backups not
being started, the server appears to be responding normally and completes
the daily administrative tasks without error.  There are no entries in
the client dsmsched or dsmerror logs for the clients that did not run.
Our lone polling mode backup did run normally.

Any ideas what might have caused this or things to look for?  I've sent
logs to TSM support and if no one comes up with things to check, we'll
restart the server before tonight's backups and hope that clears things.

David Ehresman


Re: overland tape library?

2003-03-19 Thread bbullock
Um... come on now. You can't just shoot it down without some explanation. Are 
you a VAR that gets a commission for selling a different tape library or do you have 
an actual reason for shooting down his idea?

Ben

-Original Message-
From: Paul Bergh [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 19, 2003 2:26 PM
To: [EMAIL PROTECTED]
Subject: Re: overland tape library?


Big mistake!!!

Alexander Lazarevich wrote:

 Hey,

 Does anyone use the Overland Storage Neo 4100 or 4200 LTO-1 tape library
 with TSM 5.1 server?

 We are upgrading our tape library and our ADSM server (3.1 -> 5.1), and
 are trying to decide what library to get. We've been looking at the IBM
 3583-L36 Tape Library, which uses 36 LTO-1 tapes, up to 7TB compressed
 capacity, with two LTO drives, which is gonna cost about 36K.

 But I recently found out about Overland storage which sells a product
 called NEO4100, with 3 LTO-1 drives, 60 tape slots, up to 12TB capacity
 compressed, which sells for about 30K.

 So it seems like for 6K less we can get almost twice the capacity, with an
 extra drive! That extra drive would totally kick butt. But if Overland
 hardware sucks and breaks and doesn't work well with TSM 5.1, then screw
 it.

 Any comments?

 Thanks in advance!

 Alex
 ---   ---
Alex Lazarevich | Systems Administrator | Imaging Technology Group
 Beckman Institute - University of Illinois
[EMAIL PROTECTED] | (217)244-1565 | www.itg.uiuc.edu
 ---   ---


Re: sunOS 5.7 problem

2003-03-17 Thread bbullock
It's a simple set of 'ndd' settings to get it hard-coded to 100 full. Here is 
an example of both an hme and a qfe card:

#Settings for hme0
ndd -set /dev/hme instance 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_10fdx_cap 0
ndd -set /dev/hme adv_10hdx_cap 0
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_autoneg_cap 0

#Settings for qfe1
ndd -set /dev/qfe instance 1
ndd -set /dev/qfe adv_100fdx_cap 1
ndd -set /dev/qfe adv_10fdx_cap 0
ndd -set /dev/qfe adv_10hdx_cap 0
ndd -set /dev/qfe adv_100hdx_cap 0
ndd -set /dev/qfe adv_autoneg_cap 0

These will be lost upon reboot, so you will want to put it in an rc script to 
be run at boot.
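
For example, something like this dropped into an init script (the file name
and run-level link are just one common Solaris convention; adjust for your box):

#!/sbin/sh
# /etc/init.d/nddconfig -- force hme0 to 100/full at boot (sketch)
# activate with:  ln -s /etc/init.d/nddconfig /etc/rc2.d/S99nddconfig
case "$1" in
start)
        ndd -set /dev/hme instance 0
        ndd -set /dev/hme adv_100fdx_cap 1
        ndd -set /dev/hme adv_10fdx_cap 0
        ndd -set /dev/hme adv_10hdx_cap 0
        ndd -set /dev/hme adv_100hdx_cap 0
        ndd -set /dev/hme adv_autoneg_cap 0
        ;;
esac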

Ben
Unix admin
Micron Technology Inc.
Boise, Id

-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]
Sent: Monday, March 17, 2003 1:46 PM
To: [EMAIL PROTECTED]
Subject: sunOS 5.7 problem


Extremely slow backups from a sunOS 5.7 client with lots of collisions.
Does anyone know how to hardcode the interface to 100 Full Duplex? Sparc 20
model.

Steven A. Conko
Senior Unix Systems Administrator
ADT Security Services, Inc.


Re: sunOS 5.7 problem

2003-03-17 Thread bbullock
True, but on some interfaces you might want to leave auto-negotiate on or have
a different setting (depending on the brand of the switch or the settings that seem to
work best on that subnet; e.g. one interface is on a monitoring network where we use
older dime-a-dozen 10Mb switches because speed and throughput are not needed, just
connectivity).

Both methods will work depending on what he wants to achieve.

Ben

-Original Message-
From: Berning, Tom [mailto:[EMAIL PROTECTED]
Sent: Monday, March 17, 2003 1:58 PM
To: [EMAIL PROTECTED]
Subject: Re: sunOS 5.7 problem


If you put these in the /etc/system file without the 'ndd -set' parameters, it
will set them on all qfe or hme interfaces, even if you have a quad ethernet card in the
box.

example

#Settings for qfe1
set /dev/qfe instance 1
set /dev/qfe adv_100fdx_cap 1
set /dev/qfe adv_10fdx_cap 0
set /dev/qfe adv_10hdx_cap 0
set /dev/qfe adv_100hdx_cap 0
set /dev/qfe adv_autoneg_cap 0

Thomas R. Berning
8485 Broadwell Road
Cincinnati, OH 45244
Phone: 513-388-2857
Fax: 513-388-

-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]
Sent: Monday, March 17, 2003 3:53 PM
To: [EMAIL PROTECTED]
Subject: Re: sunOS 5.7 problem


It's a simple set of 'ndd' settings to get it hard-coded to 100
full. Here is an example of both an hme and a qfe card:

#Settings for hme0
ndd -set /dev/hme instance 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_10fdx_cap 0
ndd -set /dev/hme adv_10hdx_cap 0
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_autoneg_cap 0

#Settings for qfe1
ndd -set /dev/qfe instance 1
ndd -set /dev/qfe adv_100fdx_cap 1
ndd -set /dev/qfe adv_10fdx_cap 0
ndd -set /dev/qfe adv_10hdx_cap 0
ndd -set /dev/qfe adv_100hdx_cap 0
ndd -set /dev/qfe adv_autoneg_cap 0

These will be lost upon reboot, so you will want to put it in an rc
script to be run at boot.

Ben
Unix admin
Micron Technology Inc.
Boise, Id

-Original Message-
From: Conko, Steven [mailto:[EMAIL PROTECTED]
Sent: Monday, March 17, 2003 1:46 PM
To: [EMAIL PROTECTED]
Subject: sunOS 5.7 problem


Extremely slow backups from a sunOS 5.7 client with lots of collisions.
Does anyone know how to hardcode the interface to 100 Full Duplex? Sparc 20
model.

Steven A. Conko
Senior Unix Systems Administrator
ADT Security Services, Inc.


Re: Partial page writes - TSM mirroring vs OS/hardware mirroring

2003-03-12 Thread bbullock
My 2-cents

I've always read notes about OS mirroring vs. TSM mirroring with interest. I
understand the arguments, and they sure seem logical and scare me into thinking I need
to do TSM mirroring on my systems.

However, we have been running 8 TSM servers on AIX hosts for over 7 years. We 
mirror the DB and recovery logs at the OS level and have never had the problem 
described in this scenario. We've had the systems crash because of power outages, 
crash in the middle of DB backups, crash in the middle of expirations, crash in the 
middle of a busy backup window, crash during filespace deletes, crash during a delete 
dbvolume. You name it, we've crashed one of our servers during that activity. None of 
the disaster scenarios have appeared.

Sure, you might say "well, you've just been lucky." Maybe so, but with luck
like this, perhaps I need to go to Vegas ;-). Perhaps credit is due to our
environment: AIX's LVM (logical volume manager), SSA disks, fast-write cache with
battery backup.

Why have I resisted mirroring using TSM? Main reason is that I find it 
cumbersome. At our site, we have many people on the oncall rotation, some with very 
little TSM experience, but all with AIX experience. Since we use OS mirroring on all 
other hosts (Sybase, Oracle, etc.), replacing a failed mirror on the TSM servers is 
much more familiar and straightforward when using OS mirrors than with TSM
mirroring.

Also, with OS mirroring, when I want to move DB volumes around (for load
balancing across SSA adapters, upgrades to larger disks, replacing failed disks, etc.), I
run one 'delete dbvol' command and it all moves to the new disk (previously defined). If
I use TSM mirroring, it takes 3 or more steps and more than twice as long to accomplish
the same task (delete dbcopy, define dbcopy).

There has also been discussion about better performance with one type of mirroring
over the other. Although I have no data to substantiate it, my gut feeling (right 
after I switched from TSM mirroring back to OS mirroring) was that TSM mirroring was 
slightly slower than OS mirroring.

In my case, I trust AIX mirroring, it works better with our oncall support 
model, and it's simpler. If it works well and it's not broke, I won't mess with it. Your
mileage may vary...

Ben Bullock
Unix system admin
Micron Technology Inc.
Boise, Id.

-Original Message-
From: Jurjen Oskam [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 12, 2003 2:05 AM
To: [EMAIL PROTECTED]
Subject: Partial page writes - TSM mirroring vs OS/hardware mirroring


Hi everybody,

I have read several discussions in the archive about the TSM mirroring
versus OS/Hardware mirroring of the database and/or log.

Both those discussions and the Administrator's Guide mention partial
page writes. To see if I understand correctly:

- When writing a database or log page to disk, there is a point
  in time when the on-disk structure of a volume is invalid. If the
  process of writing that page is interrupted (e.g. power outage)
  at the wrong time, the on-disk structure remains invalid.

- The TSM server can be configured to create a mirrored database or
  log, and, when updating a page on disk, to first update the page on the
  first mirrored copy and then update the page on the second mirrored copy.
  This way, a partial page write can still occur, but by sequentially
  updating the mirrored copies there is at most one mirrored copy that
  is invalid due to the partial page write; the other copy is valid
  (see the sketch after this list).

- When starting the TSM server, it cannot use an invalid copy of
  a database volume. If no valid mirror is available, the TSM server
  cannot start and a database restore is necessary.

- A partial page write is a shortcoming of TSM; the on-disk structure
  should always be valid. Page writes should happen atomically. (Of course,
  the responsibility of TSM doesn't need to go further than the sync
  procedures of the OS. If the OS says the data is synced to disk, TSM
  can assume it *is* synced to disk. Otherwise, the OS/drivers/hardware
  should be fixed.)
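
To make the ordering concrete, a toy sketch in shell terms (this is just the
idea, not TSM's actual implementation; the file names, page size and page
number are made up):

# write the new page to the first mirror and force it out...
dd if=page.bin of=mirror1.dsm bs=4096 seek=$PAGENO conv=notrunc
sync
# ...then, and only then, write the same page to the second mirror
dd if=page.bin of=mirror2.dsm bs=4096 seek=$PAGENO conv=notrunc
sync
# a crash between the two writes can tear the page on mirror1,
# but mirror2 still holds the previous, valid copy of that page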


My question is: in recent versions of TSM, do page writes happen atomically
or not? I would like to use the mirroring in our Symmetrix, but if TSM is
still vulnerable to the problem of partial page writes invalidating volumes
I would have to use TSM mirroring.

Thanks,
--
Jurjen Oskam

PGP Key available at http://www.stupendous.org/


Re: 3494 cleaning (2 nd. try)

2003-03-10 Thread bbullock
On the 3494, you set it up so that certain labels on the cartridges are seen 
as cleaning tapes. (i.e. CLN*). If your new cleaning tapes match the pattern, then 
there is nothing else to do. Perhaps your new cleaning tapes do not match the 
cleaning tape mask and you need to adjust/add to it.

I'm not near my library, so I don't recall which menu it's under, but it ~is~ 
there... :-)

Ben


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Monday, March 10, 2003 9:18 AM
To: [EMAIL PROTECTED]
Subject: Re: 3494 cleaning (2 nd. try)


Hi Richard!
Thank you very much for not ignoring me this time :-)
The output from  /usr/bin/mtlib -l $LMCP -vqK -s fffd:

Performing Query Inventory Volume Count Data using /dev/lmcp0
Inventory Volume Count Data:
   sequence number.............10143
   number of volumes...........0
   category....................FFFD

It looks like my library doesn't see the cleaning tape.
About two weeks ago I saw the first cleaning errors in the AIX error log. I
went to the library and I saw that the cleaning tape was ejected to the bulk
I/O area. So I removed it from the library and I placed a new one in the
bulk I/O area. The library picked it from there and placed it in an empty
cell. I thought that was enough, but apparently not. Is there a special
procedure for checking in a cleaner cartridge?
Thanks again!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Monday, March 10, 2003 16:10
To: [EMAIL PROTECTED]
Subject: Re: 3494 cleaning (2 nd. try)


I haven't received an answer yet, so I'll give it another try:

We've simply been ignoring you.  ;-)

In the IBM Redbook "IBM Magstar Tape Products Family: A Practical Guide" I
read the following line:

"For 3590 Magstar drives, use a value of 999 mounts to perform cleaning based
on drive request rather than library initiated."

So I changed the value to 999, but nothing happens. Both drives are still
displaying *CLEAN. The library doesn't seem to pick up the drives cleaning
request. How can I make the library clean the drives?
Thanks in advance for any reply!!!

Well, the first thing I would check is whether you have cleaning tapes in your
library... that they have cycles left... that they have prefixes (like CLN)
matching the spec defined in the library, etc.

 # Get number of tapes:
 /usr/bin/mtlib -l $LMCP -vqK -s fffd

 # Get available cleaner cycles number:
 /usr/bin/mtlib -l $LMCP -qL

No cleaning tapes = no cleaning.  Available cycles is something we have to watch
for, as it can deplete rather quietly.  (Exhausted cleaning tapes may auto-eject,
but operators may send them offsite.  ;-)
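
Something cron-able for that watching, as a sketch - it reuses the same mtlib
calls; the sed pattern, threshold and mail target are guesses/placeholders to
adjust for your output:

#!/bin/ksh
# 3494 cleaner check (sketch). FFFD is the cleaner-tape category.
LMCP=/dev/lmcp0

COUNT=$(/usr/bin/mtlib -l $LMCP -vqK -s fffd | sed -n 's/.*number of volumes[. ]*//p')
# note: an empty COUNT (parse miss) also alerts; good enough for a sketch
if [ "${COUNT:-0}" -eq 0 ]; then
        echo "No cleaner tapes in the 3494!" | mail -s "3494 cleaning" tsmadmin
fi

# remaining cleaner cycles: eyeball (or grep) the library query output
/usr/bin/mtlib -l $LMCP -qL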

  Richard Sims, BU




Re: Summary table not updated since 5.1.6.2 upgrade.

2003-03-06 Thread bbullock
Just yesterday, I was exploring the problems within the summary table. I
actually called support about certain client levels that report 0 bytes, and I
ended up wading through other summary table issues that I was not aware of. Below
are the ones I looked at before finding a match for the issue I was having, but there
might be more. I'd suggest calling support or perhaps exploring:
 http://www-3.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html

Ben
_

APAR IC34207
ERROR DESCRIPTION:
Backup/Archive stats are wrong in the TSM Server Summary Table.
This is on all platforms of the TSM Server V5.1.X.X.

The stats are being truncated by the 1000...

... Fixed in 4.2.2.10 or higher
... Fixed in 5.1.1.4 or higher

___

APAR IC33455
When backing up to a V4.2.2.0 or V5.1.0 TSM server the
summary table is not being updated to show the correct amount
of bytes received.  In the older versions of the server the
bytes would display the amount of data received; however, with
versions 5.1.0 and 4.2.2.0 the bytes received is
reflecting 0 bytes.

... fixed in 4.2.2.5 or higher.
... fixed in 5.1.1.1 or higher.

__

APAR IC34462
Kind of like the one above, but concerning disconnected sessions.
... not sure which level it was fixed
_


-Original Message-
From: Coolen, IG (Ilja) [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 06, 2003 2:54 AM
To: [EMAIL PROTECTED]
Subject: Summary table not updated since 5.1.6.2 upgrade.


Hi there,

Last weekend we upgraded our TSM Server from 4.1.2.9 to 5.1.6.2 by doing an
unloaddb, fresh install and a loaddb.
Yes, we had the time to do it (it took 12 hours in total).
We use a script which gathers info from the summary table on amounts of data
exchanged between client and server for our own convenience.
But since the server upgrade, no client information is stored in the summary
table at all.  Maybe this problem existed in earlier 5.1 releases, but it's
the first time I've seen it.
Has anyone else noticed this?

For what it's worth,
I'd like to have the summary information fixed.




kindest regards,

Ilja G. Coolen





ABP / CIS / Servers / BS / TB / Storage Management
Telefoon: +31(0)45  579 7938
Fax: +31(0)45  579 3990
Email: [EMAIL PROTECTED]
Centrale Mailbox: Centrale Mailbox - BS Storage (eumbx05)



- Everybody has a photographic memory, some just don't have film. -




Re: Poor TSM Performances

2003-03-06 Thread bbullock
At my site, a few of us call the TSM application our "canary in a coal mine":
it's typically the first indication of other possible problems on the host or network.
Sure, most of the time it is actually only the TSM client having issues, but in those 
cases where we can't figure out why TSM is having a problem, it has turned out to be a 
non-TSM issue.

We have had many situations where the TSM backups fail on a client and we 
start poking around only to find that TSM failures were only a symptom, not the cause. 
Most common are changes in network topology or settings that were incorrect, but TSM 
has also lead us to find filesystem corruption, incorrect system settings, botched 
patches, and other issues before any OS tools found it.

Good luck finding the problem

Ben

-Original Message-
From: Gianluca Mariani1 [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 06, 2003 8:56 AM
To: [EMAIL PROTECTED]
Subject: Re: Poor TSM Performances


I tend to think of TSM as a troubleshooter by now.
In my experience, in what is probably the majority of real-world cases I've
been in, anything that happens in the environment in which TSM sits
automatically reflects back to TSM, which is the first application to
suffer... before anything else.
We range from faulty fibre channel cables to DB2 unsupported patched
drivers to tape drive microcode, just to mention the very last engagements
we have been involved in.
Most of the time it's difficult to convince people that it might actually
be a problem outside of TSM.
My 2 eurocents of knowledge tell me that before shouting "TSM sucks", it
could be beneficial to take a deep breath and wonder about possible other
causes.
FWIW.

Cordiali saluti
Gianluca Mariani
Tivoli TSM Global Response Team, Roma
Via Sciangai 53, Roma
 phones : +39(0)659664598
   +393351270554 (mobile)
[EMAIL PROTECTED]


The Hitch Hiker's Guide to the Galaxy says of the Sirius Cybernetics
Corporation product that "it is very easy to be blinded to the essential
uselessness of them by the sense of achievement you get from getting them
to work at all. In other words - and this is the rock solid principle on
which the whole of the Corporation's Galaxy-wide success is founded -
their fundamental design flaws are completely hidden by their
superficial design flaws..."



From: Shannon Bach [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
Date: 06/03/2003 16.26
To: [EMAIL PROTECTED]
Subject: Re: Poor TSM Performances
(Please respond to ADSM: Dist Stor Manager)






Three times our TSM Server has suffered from major performance problems.
All three times it turned out to be an undetected problem outside of our
TSM server.  One was a TCP/IP patch that was missed in an upgrade.  The
second time was with just one of our bigger clients with what seemed to be
the NIC settings, which in turn lead to the discovery of a faulty mother
board.  Once a new one was put in, the performance problem disappeared.
The last one was much harder to detect but turned out to be a problem with
a switch in a gateway (what the network guys told me : ) .  Our TSM
server is on our MVS OS/390 mainframe though so maybe the same things won't
apply for you.  I will tell you this though, each time the network guys
were positive it was within TSM and insisted that it could not be a
problem on their side.  They now think of TSM as one of their testing tools
whenever there are any changes implemented on their side as it seems to
detect problems no other tests do.

Shannon Bach
Madison Gas & Electric Co.
Operations Analyst - Data Center Services
Office 608-252-7260
Fax 608-252-7098
e-mail [EMAIL PROTECTED]


Re: Netware Memory question running the scheduler

2003-03-06 Thread bbullock
Interesting, I had not heard of the 'managedservices' option and thought it
was only a NetWare thing. Surprise, it's in all client versions and looks like it's
been around since V4.2. Once again I learn something new from the listserv...

I have a few hosts with very large filesystems where the schedule daemon is 
not releasing memory. I think I may try this option out.

Just as a question to the group: Is there a reason you would not want to use 
this on all the clients? (just have it restart the scheduler at the time of the backup)
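
For the curious, here's roughly what I'm planning to drop into dsm.sys on a
test Unix client (a sketch; the server name and address are placeholders for
ours):

* dsm.sys stanza (sketch) - scheduler managed by the client acceptor
SErvername         TSMSERV1
   COMMMethod          TCPip
   TCPPort             1500
   TCPServeraddress    tsmserv1.example.com
   managedservices     schedule

* then start dsmcad at boot instead of 'dsmc sched':
*   /usr/tivoli/tsm/client/ba/bin/dsmcad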

Thanks,
Ben


-Original Message-
From: Matt Zufelt [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 05, 2003 2:56 PM
To: [EMAIL PROTECTED]
Subject: Re: Netware Memory question running the scheduler


I second Jim's advice.  Especially on our servers with large numbers of files, using
the managedservices option saves TONS of memory.  We had one server where, after a few
weeks, DSMC.NLM was using nearly 3/4 of a gig of RAM.  Using managedservices and
dsmcad.nlm solved the problem.

Matt Zufelt
Southern Utah University


 [EMAIL PROTECTED] 03/05/03 02:31PM 
Do it!

Run 'managedservices webclient schedule'. I'd advise upgrading to the 5.1.5.6
client to avoid a potential problem with the client trying to grab an
invalid port; I believe it was an nwipcport error.

Other than that it's been good; it helped with the memory release issue a lot.

Mark Hayden wrote:

 I have been reading about an option called 'managedservices' that you
 put in the dsm.opt file to help resolve memory problems that the TSM
 scheduler causes by not releasing the memory back to the NetWare
 server. We have had ongoing problems with this forever. I would just
 like to know if anyone is using this option on any NetWare clients.
 Thanks.

 I would like to try this with the option 'managedservices schedule' to
 see if this helps. What do you think?

 Thanks, Mark Hayden
 Informations Systems Analyst
 E-Mail:  [EMAIL PROTECTED]

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884


Re: Backup Strategy and Performance using a PACS Digital Imaging System

2003-03-04 Thread bbullock
Hmmm, an issue we all deal with in varying degrees...

Answer #1:
Yes, it will take longer to do your expire inventory. TSM will scan through 
all the files looking for candidates, and performance is typically measured in "X
thousand files examined in X seconds" - the more files, the more seconds... no way
around the math.

Answers #2, 3 & 4:
If the data is fairly static once it's written and will be on an NT host, it
sounds like the journaling option for backups would be the best solution to improve
backup performance, IMHO.

I have a few monster hosts as you do, with millions of files & even more
directories. They reside on Unix hosts, so journaling backups are not available.

On some, the option of multiple TSM instances works well as the data is on 
separate mount points. You can create a TSM instance for each mount point and have 
multiple backups running on the host at the same time. The downside is that each will 
compete for CPU, memory, and I/O, so performance may suffer.
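
As a sketch of what one of those multi-instance setups looks like (all the
paths and names here are invented for the example, and on Unix the node name
itself really lives in a dsm.sys stanza; this just shows the shape):

# dsm_img1.opt points the client at mount point /images1,
# dsm_img2.opt at /images2; each registered as its own node.
#   dsm_img1.opt:  servername tsmserv1_img1
#                  domain     /images1
#   dsm_img2.opt:  servername tsmserv1_img2
#                  domain     /images2

# one scheduler (or manual incremental) per instance:
dsmc sched -optfile=/usr/tivoli/tsm/client/ba/bin/dsm_img1.opt &
dsmc sched -optfile=/usr/tivoli/tsm/client/ba/bin/dsm_img2.opt &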

An additional issue you will eventually encounter...

 While you are concerned with the backing up of the data, my concern/problem 
has always been with restoring the data in a timely manner. The more files in a 
filesystem, the longer it takes to do restores on it. I won't bore you with the 
details, but you may want to look in the ADSM archives for 2001 at
http://search.adsm.org/ for a discussion called "Performance Large Files vs. Small
Files" that went on about the type of data you have (static, small files, long term).

Thanks,
Ben

-Original Message-
From: Todd Lundstedt [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 04, 2003 11:38 AM
To: [EMAIL PROTECTED]
Subject: Backup Strategy and Performance using a PACS Digital Imaging
System


Heya *SMers,
This is a long one, grab a beverage.

I am running TSM 4.2.1.7 on AIX 4.3.3 on a dual processor B80 backing up
around 112 nodes with 5.2TB capacity 2.3TB used, storing 18.6 million
objects (3.2TB) in the primary and copy storage pools.  The TSM database is
8.7GB running around 80% utilization.  All but a few of the nodes are
server class machines.  We backup about 250GB a night, and on weekends we
do full TDP SQL database backups to the tune of an additional 450 GB (and
growing).  Expiration processing occurs daily, and completes in
approximately 70 minutes.  The four Fuji PACS servers we have are included
in the above numbers, but only the application and OS, not the images and
clinical notes (less than 1k text files).

FYI... where TSM and disk management are concerned, Fuji is the DEVIL!
Each image, and each 1k note file with text referencing an image, is stored
in its own directory: image_directory/imagefile and
text_directory/textfile - a one-to-one relationship.  To back up the
directories/textfiles now takes the backup process over 12 hours to
complete, incrementally backing up very little.  The backup has to scan the
files to see what needs to be backed up (this is not TSM yet, but some
other backup software).

The powers that be are asking what it would take to move all of the data
stored on DVDs in the DVD jukebox (images) to magnetic media disk based
storage.  Then, start backing all of that up to TSM.  I have some numbers
from the PACS administrator.  On the four PACS servers, the additional data
they would like TSM to backup tallies up to...
1.5+ million folders
1.0+ million files (yes... more folders than files...)
2.2+ TB storage (images and text files)

All of this data will not change.  Once it backs up, it will very likely
never need to be backed up again.  Because of that, I am recommending three
tape storage pools at a minimum: one primary, one on-site copy, and one
off-site copy.  I would actually like to have two off-site copy storage
pools.  Since this data doesn't change, and no additional backups will
occur for the files, there will be no need for reclamation.  The extra copy
storage pools are a safety net in case we have bad media spots/tapes.
Without reclamation, we will never know if we have bad media.

So, at a minimum, 3 storage pools containing a total of 7.5+ million
objects ((directory+files)*3) will use up 4.3GB of a TSM database (7.5
million * 600 bytes).  The amount of growth per year is being estimated at
about 4+ GB of TSM database, so, approximately another 2.3+ million
files/folders each year.  It will very likely be more.  (Daily estimates
are 6500 additional files/folders).
Keep in mind.  This data will NOT be changed or deleted in the foreseeable
future.  New data incoming daily.  NO data expires.  I don't know if Fuji
will ever change the way they store their images/text files.

So, here is what I am trying to figure out.
1.  Will adding the additional objects from the PACS servers significantly
increase my expiration processing run time?  Will TSM have to scan all of
those database objects during expiration processing?
2.  I have heard it is possible to run another instance 

library support question

2003-02-26 Thread bbullock
Folks,

We are looking at perhaps getting this small HP/AIT library to use for backups 
at a remote site.
http://h18004.www1.hp.com/products/storageworks/ssl2020/index.html

I'm not finding it on the list of supported devices on this web page or in the 
README notes in the latest releases.

http://www-3.ibm.com/software/sysmgmt/products/support/IBM_TSM_Supported_Devices.html

It looks like it's a Compaq library that has just been rebranded as HP, but I 
can't find a cross-reference list.

Anybody know if TSM supports this library? If not, how do I request future 
support?

Thanks,
Ben


Re: Cannot remove tapes from copy storage pool

2003-02-26 Thread bbullock
This has been discussed a few times if you look in the archives, but the 
answer for copypool tapes that are offsite:

update vol <volname> access=readw
audit vol <volname> fix=yes
update vol <volname> access=offsite

No need to actually return the tape back to the library.
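
If you have a pile of them, a throwaway wrapper around those three commands
(a sketch; the ID/password are placeholders, and note the audit runs as a
server process, so you may have to wait for it before flipping the access
mode back):

#!/bin/ksh
# fix "empty but undeletable" offsite copypool tapes, one volume per argument
for V in "$@"; do
        dsmadmc -id=## -pass=## "update vol $V access=readwrite"
        dsmadmc -id=## -pass=## "audit vol $V fix=yes"
        # wait for the audit process to finish before this step
        dsmadmc -id=## -pass=## "update vol $V access=offsite"
done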

Ben

-Original Message-
From: David Lin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 25, 2003 6:54 PM
To: [EMAIL PROTECTED]
Subject: Cannot remove tapes from copy storage pool


Hi all

I have 2 tapes in the copy storage pool that I cannot get rid of.
When I try to do a move data command, it says that the tape has no data on
it.
When I try to do a delete volume command, it says that the tape still
contain data.
(What a contradiction!!!)

Does anyone have any suggestions on how to remove these 2 tapes from the copy
storage pool?

Yours sincerely
David Lin

Desktop/Laptop Support
Aventis Pharma
Phone: +61 2 9422 4859  (BNE: 820859)
Mobile: 0402 146 606  (BNE: 826859)
Fax: +61 2 9428 6959  (BNE: 823859)

Email: [EMAIL PROTECTED]


(Veritas) BMR security issue

2003-02-25 Thread bbullock
Interesting note about BMR. This might be old news, as the report is almost a 
week old, but I don't recall seeing it sent out to the listserv.

http://seer.support.veritas.com/docs/252933.htm

Ben


Archive with delete question

2003-02-06 Thread bbullock
Quick question:

Sometimes when we run the command "archive /somedir/somefile*
-delete=yes", we have seen 2 different behaviors:
- We see that it archives 1 file, then deletes it, archives the next
file, deletes it, etc.
- We see that it archives a bunch of files before it starts deleting
files. Kind of in batches of files.

I know I've seen both behaviors, but I'm not sure why. Is it ..
- interactive dsmc command as opposed to a cron job?
- different versions of TSM?
- different OSes?
- a TSM client setting I'm not familiar with?

I have looked through the manuals and found nothing, so I thought
I'd bounce it off the group before I started to delve into testing to see
where and when I see the different behaviors.

We are looking for more consistent behavior from the command to keep
busy filesystems from filling up.

Thanks,

Ben Bullock
Unix administrator
Micron Technology Inc.




Re: export AIXTHREAD_MNRATIO=1:1 : needed or not ?

2003-01-29 Thread bbullock
We had to apply those changes to our TSM servers years ago. It
improved performance dramatically at the time.

After seeing that message though, I'm not sure it's necessary any
more...
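
For reference, when we had them in, they sat in rc.adsmserv just ahead of the
server start, something like this (a sketch; the default AIX install path is
assumed):

# excerpt from /etc/rc.adsmserv (sketch)
export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S
cd /usr/tivoli/tsm/server/bin
./dsmserv quiet &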



-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 29, 2003 6:09 AM
To: [EMAIL PROTECTED]
Subject: export AIXTHREAD_MNRATIO=1:1 : needed or not ?


Hi *SM'ers,

A quick one: we're actually facing severe performance problems, log
filling at the speed of light etc., so we opened a PMR at IBM, and one of
the first pieces of advice I got was to add the following lines in rc.adsmserv:
export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S

We are actually under AIX 4.3.3.0, using TSM server 4.2.3.1. My question
is: do I really have to implement those lines, as the header of rc.adsmserv
that was installed automatically specifies:

# 04/11/2001 rmf:No_MNRATIO:export AIXTHREAD* no longer required.
#break links with tsm410s.lanfree and tsm420s.beta

So, what do you think ?
Thanks in advance.

Arnaud
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



Re: Need help with TDP for Informix

2003-01-10 Thread bbullock
Hmm, we've seen a somewhat similar message on some of our Solaris
hosts from time to time:

01/09/03   15:08:12 dlopen() of
/opt/tivoli/tsm/client/ba/bin/plugins/libPiIMG.so failed.
01/09/03   15:08:12 ld.so.1: dsmc: fatal: libApiDS.so: open failed: No such
file or directory.

OK, it's really not all that related other than it's a lib problem.
:-)

It seems to be non-fatal but annoying, and an uninstall and
reinstall of the TSM software usually clears it up.

Ben

-Original Message-
From: David Gratton [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 10, 2003 9:56 AM
To: [EMAIL PROTECTED]
Subject: Need help with TDP for Informix


Hi, I'm using the following

Informix 9.2
TDP for Informix 4.1.3
TSM API 4.2.1.11
TSM Server 4.2.1.0

I'm getting errors with library /usr/lib/ibsad001.o. My question is:
what file should this library be linked to? Mine is linked to
/usr/tivoli/tsm/client/api/bin/libXApi.a and I am thinking that is
incorrect... Can anyone help me here? Thanks in advance.


Dave Gratton



Re: 32 bit TSM Client for for AIX5.1

2003-01-08 Thread bbullock
You can install the 32-bit version on AIX 5.1 and it will work. I had
to do this recently because the 64-bit client would core dump about every
other day. i.e. it would get part way through the incremental and then
core-dump.

Since I downgraded to the 32-bit version it seems to be more
reliable...(crossing my fingers)

aixXXXa[1]/ # oslevel
5.1.0.0

aixXXXa[2]/ # lslpp -l tivoli.tsm.client*
  Fileset  Level  State  Description


Path: /usr/lib/objrepos
  tivoli.tsm.client.api.aix43.32bit
 5.1.1.0  COMMITTED  TSM Client - Application
 Programming Interface
  tivoli.tsm.client.ba.aix43.32bit.base
 5.1.1.0  COMMITTED  TSM Client - Backup/Archive
 Base Files
  tivoli.tsm.client.ba.aix43.32bit.common
 5.1.1.0  COMMITTED  TSM Client - Backup/Archive
 Common Files
  tivoli.tsm.client.ba.aix43.32bit.nas
 5.1.1.0  COMMITTED  TSM Client - NAS Backup
Client
  tivoli.tsm.client.image.aix43.32bit
 5.1.1.0  COMMITTED  TSM Client - IMAGE Backup
 Client
  tivoli.tsm.client.web.aix43.32bit
 5.1.1.0  COMMITTED  TSM Client - Backup/Archive
 WEB Client

Ben
-Original Message-
From: Rajesh Oak [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 08, 2003 9:37 AM
To: [EMAIL PROTECTED]
Subject: 32 bit TSM Client for for AIX5.1


Is there a 32-bit TSM Client for the AIX 5.1 OS?


Rajesh Oak





Re: What does this message mean?

2002-12-30 Thread bbullock
FYI... we just started getting the message on one of our servers
that's at 5.1.5.2. Yet another server at that level is not logging the
message... I guess I will log a call to let Tivoli know. It's not a fatal
error, just annoying...

Ben


-Original Message-
From: Thach, Kevin [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 26, 2002 1:43 PM
To: [EMAIL PROTECTED]
Subject: Re: What does this message mean?


Yep, I'm on 4.2.3 as well.  I'll give support a call tomorrow and see what
they say.  I'll post my findings here tomorrow.  Thanks guys.

-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 26, 2002 3:41 PM
To: [EMAIL PROTECTED]
Subject: Re: What does this message mean?


Still happens here in 4.2.3

I have two servers where it happens, one AIX and the other on Solaris.

Creating archives from an older client (3.1 in both cases) triggered the
messages.

I removed the node and data from the AIX server, and eventually the messages
slowed to an occasional occurrence.



-Original Message-
From: Robert L. Rippy [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 26, 2002 2:29 PM
To: [EMAIL PROTECTED]
Subject: Re: What does this message mean?


Here is the APAR for something similar. This was fixed in Version 4,
Release 2, Modification 3.

IC34219 ANR9999D SMNODE.C(6136): THREADID<42> ERROR VALIDATING INSERTS

  ### Sysroutes for APAR  ###
  PQ63701 5698TSMVS 421
  IC34268 5698TSMHP 420
  IC34269 5698TSMSU 420
  IC34270 5698TSMNT 420
  IC34219 5698TSMAX 420

Thanks,
Robert Rippy





From: Thach, Kevin [EMAIL PROTECTED] on 12/26/2002 03:13 PM

Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]

To:   [EMAIL PROTECTED]
cc:
Subject:  What does this message mean?

I started receiving this error message recently for certain nodes after
their backup sessions end normally.  Any idea what the cause is?  Nothing
has changed with our setup for quite some time.


Date/TimeMessage


--

12/26/2002 15:00:25  ANR0403I Session 107304 ended for node ESTARFIN
(AIX).

12/26/2002 15:00:26  ANR9999D smnode.c(18972): ThreadId<95> Session exiting
  has no affinityId cluster





Re: The tape that would not be scratched...

2002-12-27 Thread bbullock
Ahh, this is an old problem that you can find in the archives. But
to save you the trouble, here is the fix.
The simple answer is an 'audit vol fix=yes', but for a copypool tape, you need
a couple extra steps:

update vol $1 access=readw
audit vol $1 fix=yes
update vol $1 access=offsite

(note, you do NOT need to insert the tape, it will never ask for it).

It will now be empty and should be returned by your normal DR processes.

Ben

-Original Message-
From: Lawson, Jerry W (ETSD, IT) [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 27, 2002 10:56 AM
To: [EMAIL PROTECTED]
Subject: The tape that would not be scratched...


Date:   December 27, 2002   Time: 12:40 PM
From:   Jerry Lawson
The Hartford Insurance Group
860 547-2960[EMAIL PROTECTED]

-
Hello to all of my friends out there, its been a while since I've had the
pleasure of spending time with this forum.  Unfortunately, I have a problem
that brings me here - I'm filling in for some folks while they enjoy the
holidays away from work.

I have a series of 5 tapes (unrelated to each other, I believe) that fall
into the same set of circumstances.  They are copypool tapes, and as such,
of course, are offsite.  Each tape shows as being 100% reclaimable, and a Q
Content confirms this.  I even tried a Move Data on the tapes, and the
response was that they contained no data.  The problem is that they keep
coming for reclamation, and they can't be reclaimed!  About every 5 minutes,
a process kicks off, ends with a completion code of Success, but nothing
happens, and the loop begins again.

Now the obvious solution would be to delete the tapes, but when I try this
the response is a condition code 13 (ANS8001I message), and the associated
message says that the tapes still contain data.  So, throwing all caution to
the winds, I do a Delete Vol, discard=yes, and I still get the condition
code 13.

It is indeed interesting that the Q content shows nothing, the Move Data
confirms this, yet the Q Volume shows 100% reclaimable, with a status of
FULL, and not EMPTY, and the Delete Volume will not work, even with a Force
command.

The only good thing about all of this is that when a legitimate tape
qualifies for reclamation, it seems to jump to the head of the queue, and
gets reclaimed normally.

Has anyone seem anything like this before?  Any suggestions on how to get
rid of these tapes?



-
 Jerry






Re: ADSM v3.1 under AIX 5?

2002-12-06 Thread bbullock
When I upgraded my server to 5.1.5.2, I had some Solaris clients at
a 3.1.0.7 version that would connect to the server, send data, but then not
clean up; i.e., the session stays in a 'Run' state with 0 in the 'Wait Time'
column and would never disconnect. The only way to get rid of them was to
cancel the session.

Once we upgraded the clients to a 4.1 version the problem
disappeared.

All the other clients we have are at a 4.1 version or higher and
they are having no problems.

Ben

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 06, 2002 7:12 AM
To: [EMAIL PROTECTED]
Subject: ADSM v3.1 under AIX 5?


I have been asked to pose the question:

Has anyone tried running the old, now-unsupported ADSM v3.1 under AIX
version 5?
If so, have you found it to run satisfactorily?

  thanks,  Richard Sims, BU



Re: ABC for VMS

2002-11-21 Thread bbullock
Not that we have seen. We have to use another scheduling tool on the
client to have the jobs kick off.

Ben


-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 21, 2002 8:31 AM
To: [EMAIL PROTECTED]
Subject: ABC for VMS


Does this client software have the ability to run a scheduler so the TSM
server can start the backup instead of the client kicking it off?

Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: Where did the online manuals go?

2002-11-12 Thread bbullock
Hmm, that link still works for me...

http://www.tivoli.com/support/public/Prodman/public_manuals/td/TD_PROD_LIST.html

Perhaps a hiccup in a proxy server or some other internet anomaly?

Ben

-Original Message-
From: Alexander Verkooijen [mailto:alexander;sara.nl]
Sent: Tuesday, November 12, 2002 8:40 AM
To: [EMAIL PROTECTED]
Subject: Where did the online manuals go?


Hello,

Does anybody know where the online manuals are?

They used to be at

http://www.tivoli.com/support/public/Prodman/public_manuals/td/TD_PROD_LIST.html#S

but that link is dead now.

There is a notice on the Tivoli site telling
me that the manuals have been moved:

http://www.tivoli.com/support/public/Prodman/public_manuals/info_center/redirect_info.html

And after a few seconds it re-directs me to the
dead link above!

I've tried the new (IBM) support site,
but it seems that registration is required
to access it.

We used to give our customers the URL to the
manuals so they could solve most of their own
problems. Since we can't expect our customers
to register on the IBM site our only
alternative would be to maintain a repository
of manuals on our own website. Keeping such
a repository up to date would be very
time consuming.

Regards,

Alexander
---
Alexander Verkooijen([EMAIL PROTECTED])
Senior Systems Programmer
SARA High Performance Computing



Re: TSM version 5

2002-11-06 Thread bbullock
We are in the process of moving our 8 TSM servers from the
very-stable, I-never-wanna-change Version 4.1.4.0, to the
it-scares-me-to-death-to-change Version 5.

FYI, we have 8 TSM servers running on AIX 4.3.3 ML10, ~800 clients
(NT, AIX, Solaris, Linux & VMS) running with TSM client levels anywhere from
2.1 to 5.1, with the majority of them at version 4.1 and 4.2.

We are being motivated to move to a newer version so we're on a
supported version and because there are new features that our NT admins want
to exploit.

So, my logic went like this:
4.2.* - Way too buggy, not ~even~ going to go there.
5.1.0.0 - First version, too buggy.
5.1.5.0 - Seems to re-introduce previously fixed bugs.
5.1.1.6 - I don't like to install patches (as opposed to maintenance
levels) unless I encounter a bug.

So, that led me to install 5.1.1.0 on my TSM servers.

I'm rolling out the 4.1.4.0 to 5.1.1.0 upgrade gradually, just in
case I encounter a big gotcha. As of today, I have upgraded 5 of the 8
servers,(doing about 1/week).

Things seem to be good so far. I'm keeping an eye on the SYSTEM
OBJECTS on the NT hosts, but they seem to be stable and not growing any
larger, (but I'm going to keep an eye on that one). Also, the clients don't
seem to be reporting any more errors than usual. No errors on the expire
inventory that I can see.

I'm kinda thinking that I dodged a bullet and it won't be as painful
as I thought this was going to be. It's quiet... perhaps a little ~too~
quiet... (roll scary music... cue scary bug...) ;-)

Once we are comfortable with the TSM servers at 5.1.1.0, we will
start to upgrade the client software.

Ben



-Original Message-
From: Luciano Ariceto [mailto:Luciano.Ariceto;IPAPERBR.COM]
Sent: Wednesday, November 06, 2002 4:28 AM
To: [EMAIL PROTECTED]
Subject: TSM version 5


Hi

I would like to know about TSM version 5 (good or bad things). Nowadays we
are running version 4.1 and we are planning move to v5. If you have any
comments or experince about it I will appreciate.

Thanks a advanced

Luciano Ariceto
Technical Support
International Paper do Brasil Ltda.



Re: Script problems

2002-10-14 Thread bbullock

Thanks for all the replies. It looks like the '-outfile' option is
the one that will work.

Thanks,
Ben

-Original Message-
From: Frost, Dave [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 09, 2002 4:43 AM
To: [EMAIL PROTECTED]
Subject: Re: Script problems


try adding the parameter -outfile to the dsmadmc command.  From unix:

dsmadmc -id=xx -pa=yy -outfile -tab select volume_name from volumes \
where devclass_name like \'3590DEV\'


  regards,

-=Dave=-
--
+44 (0)20 7608 7140

A Bugless Program is an Abstract Theoretical Concept.



-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 09, 2002 2:20 AM
To: [EMAIL PROTECTED]
Subject: Re: Script problems


I do not know why yours stopped working, but this will work:

dsmadmc -id=## -pass=##  select volume_name from volumes where
devclass_name like '3590DEV'  |cat

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 08, 2002 7:28 PM
To: [EMAIL PROTECTED]
Subject: Script problems


Folks,

Just today, I upgraded from TSM 4.1 to TSM 5.1. This is an AIX host running
4.3.3 ML 10. Everything went smoothly. It even rebuilt the new path
configurations for the tapes and drives by itself. I didn't have the
cleanup backupset issue because this TSM server only backs up some VMS
clusters, so I had no SYSTEM OBJECTS. I most likely will not be so lucky
in 2 weeks when I have to upgrade some TSM servers that backup NT hosts)

I've only encountered 1 problem this far:

I gather some stats off of the TSM server with various scripts. When I run
these scripts, I typically put a '< /dev/null' on the end, otherwise it gets
to the first (ENTER to continue, 'C' to cancel) prompt and just sits
there. As a simple example:

 dsmadmc -id=## -pass=##  select volume_name from volumes where
devclass_name like '3590DEV' < /dev/null

(to get a listing of tape numbers of a certain device class).

This has always worked in the past but now that I'm at 5.1, it gets to the
first (ENTER to continue, 'C' to cancel) prompt and then perpetually
spews:
...
The character '#' stands for any decimal integer. The only valid
responses are characters from this set: [Enter, C]
The character '#' stands for any decimal integer. The only valid
responses are characters from this set: [Enter, C]
The character '#' stands for any decimal integer. The only valid
responses are characters from this set: [Enter, C]
...

It seems to dislike the '< /dev/null' very much.

Anybody seen this? Is there a better way to do this? I'm guessing
there is, otherwise it would be in the archives. Something like a
-youdonthavetohittheenterkey option.

Somebody enlighten me. ;-)

Thanks,
Ben






Script problems

2002-10-08 Thread bbullock

Folks,

Just today, I upgraded from TSM 4.1 to TSM 5.1. This is an AIX host running
4.3.3 ML 10. Everything went smoothly. It even rebuilt the new path
configurations for the tapes and drives by itself. I didn't have the
cleanup backupset issue because this TSM server only backs up some VMS
clusters, so I had no SYSTEM OBJECTS. (I most likely will not be so lucky
in 2 weeks when I have to upgrade some TSM servers that back up NT hosts.)

I've only encountered 1 problem this far:

I gather some stats off of the TSM server with various scripts. When I run
these scripts, I typically put a '< /dev/null' on the end, otherwise it gets
to the first (ENTER to continue, 'C' to cancel) prompt and just sits
there. As a simple example:

 dsmadmc -id=## -pass=##  select volume_name from volumes where
devclass_name like '3590DEV' < /dev/null

(to get a listing of tape numbers of a certain device class).

This has always worked in the past but now that I'm at 5.1, it gets to the
first (ENTER to continue, 'C' to cancel) prompt and then perpetually
spews:
...
The character '#' stands for any decimal integer. The only valid
responses are characters from this set: [Enter, C]
The character '#' stands for any decimal integer. The only valid
responses are characters from this set: [Enter, C]
The character '#' stands for any decimal integer. The only valid
responses are characters from this set: [Enter, C]
...

It seems to dislike the '< /dev/null' very much.

Anybody seen this? Is there a better way to do this? I'm guessing
there is, otherwise it would be in the archives. Something like a
-youdonthavetohittheenterkey option.

Somebody enlighten me. ;-)

Thanks,
Ben



Re: Calculating amount of data being backed up every 24 hours.

2002-10-03 Thread bbullock

Great script for hunting down problem children backing up the
wrong stuff! At our site we have a lot of archives being pushed from hosts,
so I changed 'BACKUP' to 'ARCHIVE' and I get some great summary stats.

Kudos to Miles for a very helpful script. The one I was using to get
this information out of the server was U-G-L-Y because I am not all that
good with SQL queries, but I am good with kornshell scripting. This is
~much~ better.

Thanks,
Ben
Micron Technology



-Original Message-
From: Miles Purdy [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 27, 2002 6:55 AM
To: [EMAIL PROTECTED]
Subject: Re: Calculating amount of data being backed up every 24 hours.


This reports on MB / Node:

select summary.entity as "NODE NAME", nodes.domain_name as "DOMAIN", \
nodes.platform_name as "PLATFORM", \
cast((cast(sum(summary.bytes) as float) / 1024 / 1024) as decimal(10,2)) as "MBYTES", \
count(*) as "CONNECTIONS" from summary, nodes where \
summary.entity=nodes.node_name and \
summary.activity='BACKUP' and start_time>current_timestamp - 1 day group by \
entity, domain_name, platform_name order by MBYTES desc

Ex:

NODE NAME  DOMAIN            PLATFORM      MBYTES  CONNECTIONS
---------  ----------------  --------  ----------  -----------
UNXR       NISA_DOM          AIX         72963.48            8
UNXP       NISA_DOM          AIX         34052.88            9
UNXM       NISA_DOM          AIX         10923.21            6
CITRIX01   NT_CLIENTS        WinNT         734.06            2
WAG        NT_CLIENTS        WinNT         454.40            2
CWS        NISA_DOM          AIX           178.12           15
NISAMTA    NT_CLIENTS        WinNT          48.38            2
UNXD       NISA_DOM          AIX            42.87            3
KOSTELNJ   NT_CLIENTS        WinNT          30.72            1
MILES      NT_CLIENTS        WinNT          26.45            2
99         NOVELL_CLIENTS    NetWare         4.32            2
MAX        NOVELL_CLIENTS    NetWare         4.32            2
UNXA       AIX_TEST_SERVERS  AIX             0.04           17
UNXB       AIX_TEST_SERVERS  AIX             0.02            1

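(A hedged sketch of the ARCHIVE variant mentioned above, wrapped so it can
run unattended, e.g. from cron. The server stanza, credentials, and output
path are placeholders, and the -tab option is assumed to be available on
the admin client.)

#!/bin/ksh
# Sketch: Miles' summary select with activity='ARCHIVE', run
# non-interactively and dumped to a file for a daily report.
dsmadmc -se=xxx -id=xxx -password=xxx -tab \
 "select summary.entity, \
  cast((cast(sum(summary.bytes) as float) / 1024 / 1024) as decimal(10,2)) \
  as MBYTES, count(*) as CONNECTIONS \
  from summary, nodes \
  where summary.entity=nodes.node_name and summary.activity='ARCHIVE' \
  and start_time>current_timestamp - 1 day \
  group by entity order by 2 desc" > /tmp/archive_mb_report.$$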


--
Miles Purdy
System Manager
Agriculture and Agri-Food Canada,
Information Systems Team,
Farm Income Programs Directorate
Winnipeg, MB, CA
[EMAIL PROTECTED]
ph: (204) 984-1602 fax: (204) 983-7557

If you hold a UNIX shell up to your ear, can you hear the C?

-

>>> [EMAIL PROTECTED] 27-Sep-02 6:46:55 AM >>>
Hi everyone.

Does anyone have any suggestions regarding how to calculate how much data
is being handled by TSM every day?

Ideally I would like to write a script which tells me how much data is
being backed up by TSM every day (in total), how much is going to disk
storage pools and how much is going to tape storage pools. The trouble is I
can't seem to think up a simple way to measure these statistics without
resorting to complicated select statements and mathematics.


Thanks in advance for your help.

Robert Dowsett



Re: All Time Record for Amount of data on one 3590 K Cartridge

2002-09-06 Thread bbullock

you win, ~sigh~... ;-(

The closest I came was this under-achiever ;-)


Volume Name   Storage Pool   Device Class   Estimated Capacity (MB)   Pct Util   Volume Status
-----------   ------------   ------------   -----------------------   --------   -------------
E02104        OFFST_TAPE     3590DEV                        272,034.2     100.0   Full


Ben

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 06, 2002 9:54 AM
To: [EMAIL PROTECTED]
Subject: All Time Record for Amount of data on one 3590 K Cartridge


Volume Name   Storage Pool   Device Class       Estimated Capacity (MB)   Pct Util   Volume Status
-----------   ------------   ----------------   -----------------------   --------   -------------
422189        TAPE_NNSORA    DC_ATL-3590E1A                   410,443.7       50.9   Full

Has anyone else gotten 410GB on a cartridge?
That is 10:1 Compression.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180



Re: finding old filespaces from unix clients...

2002-09-03 Thread bbullock

As with most TSM commands, there are a million ways to skin this cat. Here
is a SQL query I use:

select node_name, filespace_name, (current_timestamp-backup_end)days as
days_since_backup from filespaces where
cast((current_timestamp-backup_end) days as decimal) >= 180 order by
days_since_backup

In my case, I only start to clean up the orphaned filesystems after
they have not had a backup in the last 180 days (thus the 180 in the
command). It gives me a listing of the node, the filespace, and the number of
days since a backup on that filespace completed.
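
(A hedged sketch of how that select could feed a periodic report;
credentials and paths are placeholders, and the output still deserves a
human eyeball before anyone types 'delete filespace'.)

#!/bin/ksh
# Sketch: report filespaces with no completed backup in DAYS days.
DAYS=180
OUT=/tmp/orphaned_filespaces.$$
dsmadmc -se=xxx -id=xxx -password=xxx -tab \
 "select node_name, filespace_name, \
  cast((current_timestamp-backup_end)days as decimal) as days_since_backup \
  from filespaces \
  where cast((current_timestamp-backup_end)days as decimal)>=$DAYS \
  order by 3" > $OUT
echo "Review $OUT before deleting anything - delete filespace is permanent."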

Thanks,
Ben


-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 10:48 AM
To: [EMAIL PROTECTED]
Subject: finding old filespaces from unix clients...


I've noticed that if an entire filesystem is removed from a unix TSM client,
the environment (as a whole) will keep that data until manually purged.
I think this is OK/FINE/WHAT I WANT because you never know why an entire
filesystem has gone away (might just be varied offline, etc...)

but I still see obsolete data hanging on out there from 1998, 1999-ish so
what I've done is

select node_name, filespace_name as filespace_name, cast(backup_start as
varchar(10)) as bkupst, cast(backup_end as varchar(10)) as bkupend from
adsm.filespaces where cast(backup_end as varchar(7)) < '2002-09' > old_FS_out


or use: where cast(backup_end as varchar(4)) < '2002'

this gives me a list of filesystems that I might manually purge after
further investigation.

just thought I'd pass this along.

Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave.  Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109



Re: multiple reclaim processes

2002-09-03 Thread bbullock

Yes, I believe that's right: reclamations run one per
storage pool.

The only way I know to get around it is to write a script/select
statement to look for the tapes that are the least utilized and do a move
data on those tapes. I have not automated the process, but have a simple
select that gives me a list of all the tapes and the pct_reclaim from the
volumes table. The pct_reclaim is the opposite of
% utilized, so it's kind of a % empty value.

select volume_name, pct_reclaim, stgpool_name from  volumes where
status='FULL' order by 3,2

I run this script when I want to predict how changing the
reclamation threshold on a storage pool will affect processing, i.e. it will
show me how many tapes are above a given threshold and would be reclaimed.
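
(A hedged sketch of automating that; the pool name, threshold, and
credentials are placeholders, and the grep filter is the same rough
screen-scraping used elsewhere in this thread rather than anything
bulletproof.)

#!/bin/ksh
# Sketch: start a MOVE DATA on each FULL tape above a reclaim threshold,
# to get more than one reclamation-like stream going in a pool.
POOL=TAPEPOOL
PCT=60
ADM="dsmadmc -se=xxx -id=xxx -password=xxx"
for TAPE in $($ADM -tab "select volume_name from volumes \
    where stgpool_name='$POOL' and status='FULL' \
    and pct_reclaim>$PCT" | grep '^[0-9A-Z][0-9A-Z]*$')
do
    $ADM "move data $TAPE"    # each MOVE DATA runs as a server process
done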

Thanks,
Ben

-Original Message-
From: Malbrough, Demetrius [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 12:58 PM
To: [EMAIL PROTECTED]
Subject: Re: multiple reclaim processes


Donald,

As far as I know, only one reclamation process per stgpool!

Regards,

Demetrius Malbrough
AIX Storage Specialist

-Original Message-
From: Levinson, Donald A. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 03, 2002 11:00 AM
To: [EMAIL PROTECTED]
Subject: multiple reclaim processes


Is there a way to have multiple reclaim processes for the same storage pool?
It seems silly to have an 8 spindle library and only use two of them to try
to keep up with my reclaim processing.

TSM 5.1.1
AIX 4.3.3






Re: Eternal Data retention brainstorming.....

2002-08-16 Thread bbullock

You are correct, the data in the primary pool and the copypool would
be expired on the production database. Since the copypool tapes are not
being reclaimed, the data will still be intact on the copypool tapes; it's
just that the production database no longer knows where it is. With the
DBBackup we take and set aside, ~that~ DB will have pointers to the data on
the un-reclaimed copypool tapes.
So in the case of a restore from of that older data, I would have to
restore the old database (to a test host) attach it to a drive, mark the
primary tapes as unavailable and feed the old copypool tapes into the
library during the restore.

That's my theory at this point.

Ben

-Original Message-
From: Ramnarayan, Sean A [EDS] [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 4:09 AM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


Hi

I have just a question:
When you use the reuse delay option and set it to 999 on your copypools,
what happens to the data if you have a management class which expires the
data from your primary pool? The data on your copy pool will automatically
be expired, so which ADSM/TSM database backup would you use to restore your
data?
I just need to know as I need to do set up a policy similar to the one you
require.

Thks
Sean Ramnarayan
EDS (South Africa)

-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 1:54 AM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


Dang, my mails are long-winded. Forgive me if I'm boring those of
you who are not interested. Feel free to use the delete button. :-)

There's been a lot of good suggestions sent to me. The real kicker
is that with the volume of data involved (460TB as of today), some of the
solutions do not scale well, in that they will take months to complete, or
cost $100,000s in tapes to execute.

Like most of you out there, we gradually grew to our current size,
and management can take the constant nickel-and-diming we do to keep ahead
of the capacity curve. A big $$ expenditure to CYA for a "what if" will not
sit well with management, and they will look to me for creative
alternatives.

I actually called Tivoli and discussed it with a level 1 & 2, and
here's what I/we came up with.

We have 2 kinds of data: mission-critical data that we keep in a
copypool, and non-critical/old data we do not back up to a copypool.
Unfortunately, ALL of the data needs to be retained, and the non-critical
(unduplicated) portion is rather substantial in size.

Here's a 1000-ft view of what we are thinking about:

Take a backup of the TSM DB. Make it so that copypool tapes will
never be reused (reusedelay=999). As production rolls on, all copypool
tapes will be taken from scratch, so the old data we are trying to retain
should remain on the copypool tapes, although at some point the production
DB may remove its DB pointers through the 'expire inventory' process. Our
growth in copypool tapes is much more reasonable (10-20 tapes a week), and
not costly enough to freak management out.
In the case of the old data being needed, the TSM database can be
restored on a test host & the primary tape volumes marked as inactive. When
a restore is needed, the TSM server will request the copypool tapes, and the
data will still be intact because they have not been re-used. This portion
is pretty much like a disaster recovery test.

The non-critical data is a little harder, as there is only 1 copy of
the data. We could try to push it to a copypool, but there is a LOT and it
will take quite a bit of time & money. The good thing is that this data is
~only~ being retained for the investigation and no other purpose. For this
reason, daily operations and file restores will ~never~ need these tapes.
We will check all these primary tapes out of the library at the same
time we take the DB snapshot, box them up with BIG RED letters, and put them
behind a barbed-wire fence guarded by vicious dogs. We will then mark the
tapes as DESTROYED on our production database. We might even delete the
filespaces, as they are just that useless; however, we are going through all
this trouble only because ALL BACKUPS/ARCHIVES are required to be retained.
If we need to restore data off of one of these tapes, we will
restore the TSM DB backup to our test host. These tapes should all still be
marked as readw on the restored DB, so we should just be able to restore
the data by feeding it the primary tapes it requests.

Benefits:
- No need to duplicate the data with an 'export data' or by making a 3rd
copypool copy.
- Very little extra $$.
Problems:
- More work and time up front to make sure it works as planned.
- Care needs to be taken, and procedures put in place, so that copypool or
primary useless-data tapes are not accidentally reclaimed.
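
(Boiled down to commands, the freeze amounts to something like the sketch
below. The pool and device class names are invented; the snapshot volume
would then be checked out and boxed up with the rest.)

 update stgpool OFFSITE_COPY reusedelay=999
 backup db devclass=3590DEV type=dbsnapshot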

Re: Eternal Data retention brainstorming.....

2002-08-16 Thread bbullock

Good write-up. In fact, it is what we are considering doing at this
point. The only question I have is about step 4. Why would you go through
and create another copy of everything? That would give me 2 copypool copies
of all the data. I guess you're going for an extra bit of CYA for a possible
failed tape?
I'm thinking that the expire inventory would need to be turned off
during the second round of the instructions, and that would be a problem. I
think I'll get buyoff that the 1 copypool copy is all we've got. If a tape
goes bad, we did the best we could given the scope of the request and the
volume of data involved.

Thanks,
Ben

-Original Message-
From: Nicholas Cassimatis [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 8:46 AM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


What kind of shelf life are you expecting for your media?  Since they've
discovered that optical has data decay, it's not even forever!  I've seen
some others on the list point at it, but how will you restore this data in
10 years?

Here's what I'd look at doing:

1.  Take about 5 dbbackups (or snapshots) and send them to the vault.  Take
a few OS level backups of your server (bootable tape images).  Send them,
too.  Send volhist, devconfig, a prepare file, TSM server and client code,
OS install code, everything else that makes your environment your
environment, including the clients, offsite.  This is DR - in the most
extreme sense of the term.
2.  Box up the vault.  Seal the boxes, leave them there.
3.  Start marking offsite volumes as destroyed (or just delete them) in
TSM, and run more Backup Stgpools.  They'll run longer, as you're
recreating the old tapes.
4.  Go back to step 1 and repeat once.  If this data is really that
important to have forever, make sure you can get it back!
5.  Start sleeping - you're going to be WAY behind on that!

Now, for the people making the requirement - they need to get a contract to
have accessible like hardware to do the restores to.  Not just the TSM
server and tape library, but the clients, too.

Having the data available is one thing, being able to restore it is
another.

Nick Cassimatis
[EMAIL PROTECTED]

Today is the tomorrow of yesterday.



Re: TSM and protocol converters

2002-08-16 Thread bbullock

Hmm, interesting idea. I too would be interested to find out if it's
possible. I wonder if there might be some translation problems on the
TSM/AIX side. Through the FC connection to the RS6000, it's going to need to
define multiple rmt devices. Would a standard 'cfgmgr' see the converter and
be able to create the devices, or would they need to be manually created?

Ben

-Original Message-
From: Tab Trepagnier [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 16, 2002 1:01 PM
To: [EMAIL PROTECTED]
Subject: TSM and protocol converters


I'm spec'ing a new TSM server.  To avoid having to spend a fortune on a
huge server just to get the I/O slots I need to support HV Diff SCSI, I'm
thinking of using protocol converters to reduce the number of I/O slots in
the new server.

In other words, the server would provide Fibre Channel or Gigabit E or
even LV Diff SCSI, but I would connect my existing libraries via their HV
Diff SCSI interfaces, with the protocol converter in the middle.

Assuming this is technically feasible (and I know there are FC to SCSI
converters) would TSM work in such an arrangement?  What would change
relative to simply plugging in a SCSI cable into the server as I have now?

Server is pSeries or RS/6000; TSM is currently 4.1 but the new server
would (very soon) enter service at 5.1.

Thanks in Advance,

Tab Trepagnier
TSM Administrator
Laitram Corporation



Eternal Data retention brainstorming.....

2002-08-15 Thread bbullock

Folks,
I have a theoretical question about retaining TSM data in an unusual
way. Let me explain.

Let's say legal comes to you and says that we need to keep all TSM
data backed up to a certain date, because of some legal investigation
(NAFTA, FBI, NSA, MIB, insert your favorite govt. entity here). They want a
snapshot saved of the data in TSM on that date.

Anybody out there ever encounter that yet?

On other backup products that are not as sophisticated as TSM, you
just pull the tapes, set them aside, and use new tapes. With TSM and its
database, it's not that simple. Pulling the tapes will do nothing, as the
data will still expire from the database.

The most obvious way to do this would be to:

1. Export the data to tapes & store them in a safe location till some day.
This looks like the best way on the surface, but with over 400TB of data in
our TSM environment, it would take a long time to get done and cost a lot if
they could not come up with a list of hosts/filespaces they are interested
in.

Assuming #1 is unfeasible, I'm exploring other more complex ideas.
These are rough and perhaps not thought through all the way, so feel free to
pick them apart.

2. Turn off expire inventory until the investigation is complete. This one
is really scary as who knows how long an investigation will take, and the
TSM databases and tape usage would grow very rapidly.

3. Run some 'as-yet-unknown' expire inventory option that will only expire
data backed up ~since~ the date in question.

4. Make a copy of the TSM database and save it. Set the reuse delay on all
the storage pools to 999, so that old data on tapes will not be
overwritten.
In this case, the volume of tapes would still grow (and need to
perhaps be stored out side of the tape libraries), but the database would
remain stable because data is still expiring on the real TSM database.
To restore the data from one of those old tapes would be complex, as
I would need to restore the database to a test host, connect it to a drive
and pretend to be the real TSM server and restore the older data.

5. Create new domains on the TSM server (duplicates of the current domains).
Move all the nodes to the new domains (using the 'update node ...
-domain=..' ). Change all the retentions for data in the old domains to
never expire. I'm kind of unclear on how the data would react to this. Would
it be re-bound to the new management classes in the new domain? If the
management classes were called the same, would the data expire anyways?

Any other great ideas out there on how to accomplish this?

Thanks,
Ben



Re: Eternal Data retention brainstorming.....

2002-08-15 Thread bbullock

Wanda,
Thanks for the input. The last suggestion of starting over had
crossed my mind, but I forgot to include it in my list. I guess the reason I
left it out was that with 800 clients, (some of them large) and with HSM in
place, starting from scratch would be difficult. Then, the hardware (8
high-end RS6000s would be hard to afford).

I have since looked at option 4 a little closer and refined it
further. All the data is duplicated in copypools, right? So, what if I took
a snapshot of the database and kept it aside. I then could change the
reusedelay to 999 on the tapes in the copypool so that they will not be
overwritten. That way I won't have to make an additional copy of the data as
I already have one. Sure, tape usage will still go up as the copypool tapes
are never reclaimed, but it would not instantly double as they do in the
other scenarios.

It kind of sounds like a disaster recovery test if I were to need to
restore the old data. I'd restore the DB to a test host, gather up the
copypool tapes and try to restore something. I could hit 2 birds with one
stone (meet the legal requirements and do a DR test).

Thanks,
Ben

-Original Message-
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 2:09 PM
To: [EMAIL PROTECTED]
Subject: Re: Eternal Data retention brainstorming.


Well, this is an interesting what-if scenario for discussion!
I'll take a crack at it...

1) Painful, but may be the best solution overall.

2) I don't think that will work.  Turning off EXPIRE INVENTORY will prevent
your tapes from reclaiming, but if you have a mgmt class set to 5 versions,
I think the 6th backup will still make the 1st version invisible to the
client.  Test it and see.

3) Doesn't exist.

4) I think that would work.

5) I guess that would work, but what would be accomplished by using the new
domains?
Result is the same (I think) as just setting the existing management classes
in the existing domains to never expire/unlimited versions.
And, if you don't have the same management classes in the new domains as the
old, when you move a client to a new domain you get a lot of errors, and the
files get managed by the grace period numbers, anyway.  Nothing good will
happen.

Export may be the cheapest solution, overall, although it's gonna get
expensive fast since you will quickly double your tape requirements.
1) Change all your management classes to never expire/unlimited versions
2) Make sure NONE of your clients has the delete-backup-data privilege
3) Start your exports, take your time doing them.
When done, you have a complete set of data that is external to TSM.
You can set up a test server with a small TSM, and do IMPORTS of specific
clients as needed, if the investigating agencies ever figure out what they
want to look at.  (Careful to have your mgmt classes set to never
expire/unlimited versions when doing the imports.)
THEN you can reset your production system back to normal retention periods,
and TSM will purge itself of the built-up extra stuff and move on.

Besides that, the best solution I can think of, change all the management
classes to never expire/unlimited versions,
Copy the DB to your test server, lock all the client nodes, put your tapes
on a shelf.
Save the last DB backup, just in case.
Start your production server over with a clean DB, back up everything new
and move on.
If anybody needs their old stuff, get a copy via export (from test) and
import(back to production).

That would keep you from (immediately) doubling your tape requirements, will
cost you some hardware to make tape available for your test system..



Wanda Prather
The Johns Hopkins Applied Physics Lab
443-778-8769
[EMAIL PROTECTED]

Intelligence has much less practical application than you'd think -
Scott Adams/Dilbert









-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 3:31 PM
To: [EMAIL PROTECTED]
Subject: Eternal Data retention brainstorming.


Folks,
I have a theoretical question about retaining TSM data in an unusual
way. Let me explain.

Let's say legal comes to you and says that we need to keep all TSM
data backed up to a certain date, because of some legal investigation
(NAFTA, FBI, NSA, MIB, insert your favorite govt. entity here). They want a
snapshot saved of the data in TSM on that date.

Anybody out there ever encounter that yet?

On other backup products that are not as sophisticated as TSM, you
just pull the tapes, set them aside, and use new tapes. With TSM and its
database, it's not that simple. Pulling the tapes will do nothing, as the
data will still expire from the database.

The most obvious way to do this would be to:

1. Export the data to tapes & store them in a safe location till some day

Re: LTO Tape Status Full

2002-08-15 Thread bbullock

Nope, that's not going to work; it will still error out.

This has been discussed recently in this forum, and you should be
able to find a solution in the archives located at http://adsm.org

To give you a head start, it's an 'audit vol ... fix=yes' command.

Ben

-Original Message-
From: Jason Liang [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 2:52 PM
To: [EMAIL PROTECTED]
Subject: Re: LTO Tape Status Full


Camilo,

You can delete this volume by using

Move data A00060

Jason Liang

- Original Message -
From: Camilo A. Marrugo [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, August 15, 2002 11:48 AM
Subject: LTO Tape Status Full


 Has anyone ever seen a tape go to status full with 0.0% used? And
 every time that you try to move the data out of it, it gives you an:

 ANR2209W Volume A00060 contains no data.

 And if you try to delete the volume it gives you:

 ANR2406E DELETE VOLUME: Volume A00060 still contains data.

 Any help will be appreciated.

 Thanks,
 Camilo Marrugo

 --
 Camilo Marrugo
 Backup Solutions Administrator/Sr. Systems Administrator
 Dialtone Inc. an Interland Company
 [EMAIL PROTECTED] - http://www.dialtone.com
 Voice: (954) 581-0097 ext. 130 - Fax: (954) 581-7629
 We are Interland... One Team, One Vision.




Re: adsm-l remove

2002-08-14 Thread bbullock

LOL ;-)

-Original Message-
From: Matt Steinhoff [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 14, 2002 10:19 AM
To: [EMAIL PROTECTED]
Subject: Re: adsm-l remove


 -Original Message-
 From: Pace, David K [mailto:[EMAIL PROTECTED]]

 How do I remove myself

If you don't mind online documentation, here
is a fairly good site...

http://www.kevork.org/

If you're more of paper-in-your-hands learner,
try out this former best-seller...

http://www.amazon.com/exec/obidos/ASIN/0440507855

Good luck!

Matt



Re: Backup of Win2`K Fileserver

2002-08-02 Thread bbullock

You say the backup has been running for 2 weeks. I guess the
question I would ask is, 'Are you getting decent throughput?' I.e., if you
do a 'q occ' 1 hour apart, are you getting a reasonable increase or are you
getting very little? If you get very little data, I'd pursue analyzing the
network and NIC settings. If you are getting decent throughput, but it never
completes, then your situation might be similar to what we had last week.

Lemme tell you what happened. Our NT admins moved a large (300GB)
fileserver onto a NetApp. They back it up through an NFS mount, and of
course, the drive letter changed, so it had to do a full backup. (We are not
using NDMP at this time, but I'm willing to listen to any reports of the
good/bad points of NDMP on TSM. I believe that using NDMP, I will have to
move all 300GB once a week for the full backup, and that sounds just
painful).

Anyways, we let it run for a couple of days, and it finally gets to
about 300GB of data pushed to the TSM server. It should be about done, yet
it keeps on going. What? How can the initial backup of a 300GB filesystem
back up 350 GB? We get the NT admins to look at it and, surprise! The NetApp
is doing a nightly snapshot, and the inclexcl on the client is not excluding
any of the '.snapshot' directories, which are all renamed/moved nightly, so
TSM is trying to back up each file on the host 7 times, regardless of
whether it has changed or not. Ouch!

We got the inclexcl cleaned up to only back up 1 copy of the data, and
it should be OK now.
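
(For anyone chasing the same thing, the exclude would look something like
this sketch in the client options/include-exclude list; the drive letter is
made up.)

* keep TSM out of the NetApp snapshot tree, wherever it appears
exclude.dir "x:\.snapshot"
exclude.dir "x:\...\.snapshot"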


Thanks,
Ben


-Original Message-
From: Guillaume Gilbert [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 31, 2002 2:31 PM
To: [EMAIL PROTECTED]
Subject: Backup of Win2`K Fileserver


Hello

I have a BIG performance problem with the backup of a Win2K fileserver. It
used to be pretty long before, but it was manageable. But now the sysadmins
have put it on a Compaq StorageWorks SAN. By doing that they of course
changed the drive letter, so now it has to do a full backup of that drive.
The old drive had 1,173,414 files and 120 GB of data according to 'q occ'.
We compress at the client. We have backup retention set to 2-1-NL-30. The
backup had been running for 2 weeks!!! when we cancelled it to try to tweak
certain options in dsm.opt. The client is at 4.2.1.21 and the server is at
4.1.3 (4.2.2.7 in a few weeks). The network is 100 Mb. I know that journal
backups will help, but as long as I don't get a full incremental in, it
doesn't do me any good. Some of the settings in dsm.opt:

TCPWindowsize 63
TxnByteLimit 256000
TCPWindowsize 63
compressalways yes
RESOURceutilization 10
CHAngingretries 2

The network card is set to full duplex. I wonder if an FTP test will show
some gremlins in the network...? Will try it.

I'm certain the server is OK. It's an F80 with 4 processors and 1.5 GB of
RAM, though I can't seem to get the cache hit % above 98. My bufpoolsize is
524288. The DB is 22 GB, 73% utilized.

I'm really stumped and I would appreciate any help

Thanks

Guillaume Gilbert



Re: Gigabit Ethernet Performance

2002-07-24 Thread bbullock

Just to throw in another data point.

Here is that FTP test done between an IBM S80 and an M80, both with Gb
interfaces:

ftp> put "|dd if=/dev/zero bs=32k count=10000" /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
10000+0 records in.
10000+0 records out.
226 Transfer complete.
327680000 bytes sent in 13.25 seconds (2.414e+04 Kbytes/s)
local: |dd if=/dev/zero bs=32k count=10000 remote: /dev/null


Ben

-Original Message-
From: Christoph Pilgram
[mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 24, 2002 5:43 AM
To: [EMAIL PROTECTED]
Subject: AW: Gigabit Ethernet Performance


Hi Thomas,

I tried it on my machines (AIX 4.3.3 H80)
this is what I got :
ftp> put "|dd if=/dev/zero bs=32k count=10000" /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
10000+0 records in.
10000+0 records out.
226 Transfer complete.
327680000 bytes sent in 16,33 seconds (1,959e+04 Kbytes/s)
local: |dd if=/dev/zero bs=32k count=10000 remote: /dev/null

Best wishes 
Chris

 -Original Message-
 From:  Rupp Thomas (Illwerke) [SMTP:[EMAIL PROTECTED]]
 Sent:  Tuesday, July 23, 2002 6:51 PM
 To:   [EMAIL PROTECTED]
 Subject:  Gigabit Ethernet Performance
 
 Hi TSM-ers,
 
 our newly installed Gigabit Ethernet seems to have some performance
 problems.
 When I do the following FTP command between two UNIX (AIX) machines I only
 get about 8Megabyte per second.
 
 # ftp 10.135.11.55
 Name: adminrt
 Password:
 ftp> put "|dd if=/dev/zero bs=32k count=10000" /dev/null
 200 PORT command successful.
 150 Opening data connection for /dev/null.
 10000+0 records in.
 10000+0 records out.
 226 Transfer complete.
 327680000 bytes sent in 39.5 seconds (8102 Kbytes/s)
 local: |dd if=/dev/zero bs=32k count=10000 remote: /dev/null
 
 This command transfers data from nowhere to nowhere so no disk
 is involved. The performance numbers only include network, network
 adapter,
 TCP/IP and a bit of UNIX.
 Do you think 8MB/s is OK for Gigabit Ethernet?
 Would anyone be so kind and test this command in their own UNIX
 environment
 and tell me the numbers?
 
 Thanks in advance and greetings from Austria
 Thomas Rupp
 Vorarlberger Illwerke AG
 MAIL:   [EMAIL PROTECTED]
 TEL:++43/5574/4991-251
 FAX:++43/5574/4991-820-8251
 
 
 
 



Re: Problem with restore a large File

2002-07-15 Thread bbullock

We bumped into this also last week, although it was a 5 GB file. We
found that the user's limits were set OK, but the root account somehow had
the fsize set to 4GB. When you run a TSM restore as the root user and
monitor it, you will see that it writes out the restored file as root and
then changes it to the owner of the file once it has been successfully
written out.

I'd also check the defaults paragraph in the /etc/security/limits file
to see if that is set OK.

Ben

-Original Message-
From: Juan R. Obando [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 15, 2002 12:45 PM
To: [EMAIL PROTECTED]
Subject: Problem with restore a large File


Hi Tsm'ers*

I have a problem with a restore, the file to restore is about 1 Gb
large, but the client returns message :

15-07-2002 12:51:12  ANE4025E (Session: 4987, Node: MHSAFI1)  Error
processing '/backup/historico/aficen/db_sitep/histing_ingres.unl':
file exceeds user or system file limit

The client is an AIX 4.3.3 machine, I try to modify the
/etc/security/limits file without results, any ideas ?

Thanks in advance.

Ing. Juan Reynaldo Obando Carballo



Re: offsite tape will not reclaim, delete or move

2002-07-08 Thread bbullock

We had this happen on some early version of TSM (4.1.2?). Tivoli had
us upgrade to 4.1.4 and then use this procedure to get the tapes back into
the proper state:


There have been several instances where volumes in a certain storage pool
are empty but the tape utilization does not get updated as empty. The
following steps will update the volume's utilization to empty and return
it back to the scratch pool:

Check to make sure there is no data on the tape.
 q content TAPE# count=10

If no files are displayed, the tape needs to have this fix.
If files are listed, try a move data TAPE# to move the data to another
tape.


Change the tape status to readwrite so you can fix it (if it is outside of
the library)
 update vol TAPE# access=readw

There is no need to actually retrieve the tape and put it in the library.


Audit the volume to fix it, it should take about 3 seconds.
 audit vol TAPE# fix=yes


Tape should now be Empty.
 q vol TAPE#


Change tape status back to offsite.
 update vol TAPE# access=offsite


The next day's Tape return list should include the fixed tape to be
checked back in as scratch.
-
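
(If you have more than a handful of stuck volumes, the steps wrap up into a
loop easily enough. A hedged sketch; credentials and the input list of
volsers, one per line, are placeholders.)

#!/bin/ksh
# Sketch: apply the update/audit/update sequence above to a list of
# stuck offsite volumes.
LIST=/tmp/stuck_tapes.list
ADM="dsmadmc -se=xxx -id=xxx -password=xxx"
for TAPE in $(cat $LIST)
do
    $ADM "update vol $TAPE access=readwrite"
    $ADM "audit vol $TAPE fix=yes"
    $ADM "update vol $TAPE access=offsite"
done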

Your procedures may need to be different, as you are using DRM, but
the 'audit vol ... fix=yes' was the part that fixed it.

Ben
Micron Technology Inc.
Boise, Idaho.

-Original Message-
From: Steve Bennett [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 08, 2002 2:19 PM
To: [EMAIL PROTECTED]
Subject: offsite tape will not reclaim, delete or move


I have an offsite tape that will not go pending; it stays in the filling
state. It will not reclaim, move, or delete. Relevant queries, etc.
below. Anyone have any ideas as to what to try next?

q vol 000987 f=d
   Volume Name: 000987
 Storage Pool Name: DR_3590_JNU
 Device Class Name: 3590
   Estimated Capacity (MB): 35,840.0
  Pct Util: 0.0
 Volume Status: Filling
Access: Offsite
Pct. Reclaimable Space: 100.0
   Scratch Volume?: Yes
   In Error State?: No
  Number of Writable Sides: 1
   Number of Times Mounted: 5
 Write Pass Number: 1
 Approx. Date Last Written: 03/27/2002 01:31:59
Approx. Date Last Read: 03/26/2002 13:17:01
   Date Became Pending:
Number of Write Errors: 0
 Number of Read Errors: 0
   Volume Location: VAULT


q drm 000987 f=d
   Volume Name: 000987
 State: Vault
 Last Update Date/Time: 03/27/2002 07:54:11
  Location: VAULT
   Volume Type: CopyStgPool
Copy Storage Pool Name: DR_3590_JNU
 Automated LibName:



07/08/2002 11:22:46   ANR0984I Process 227 for SPACE RECLAMATION started
                      in the BACKGROUND at 11:22:46.
07/08/2002 11:22:46   ANR1040I Space reclamation started for volume 000987,
                      storage pool DR_3590_JNU (process number 227).
07/08/2002 11:22:46   ANR0985I Process 227 for SPACE RECLAMATION running
                      in the BACKGROUND completed with completion state
                      SUCCESS at 11:22:46.
07/08/2002 11:22:46   ANR1041I Space reclamation ended for volume 000987.

07/08/2002 12:03:05   ANR2017I Administrator XTSCSMB issued command:
                      DELETE VOLUME 000987 discard=y
07/08/2002 12:03:05   ANR2406E DELETE VOLUME: Volume 000987 still
                      contains data.

07/08/2002 12:02:28   ANR2017I Administrator XTSCSMB issued command:
                      MOVE DATA 000987 recons=y
07/08/2002 12:02:28   ANR2209W Volume 000987 contains no data.

07/08/2002 12:19:00   ANR2017I Administrator XTSCSMB issued command:
                      QUERY CONTENT 000987
07/08/2002 12:19:00   ANR2034E QUERY CONTENT: No match found using this
                      criteria.


--

Steve Bennett, (907) 465-5783
State of Alaska, Information Technology Group, Technical Services
Section



Re: allocating disk volumes on RAID5 array

2002-05-30 Thread bbullock

Hmm...
All of our disk storage pools are larger than 2 GB and have large
files enabled, because we do see files larger than 2GB come in from
clients...

I'm quite sure I'm not going to want to slice up a 36GB disk into 18
2GB filesystems just to use direct I/O. I think the drive-head contention
would eat me alive.

I'm not sure where they expect this functionality to be used, given
those 2 limitations on direct I/O...

Ben


-Original Message-
From: Paul Zarnowski [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 29, 2002 6:54 AM
To: [EMAIL PROTECTED]
Subject: Re: allocating disk volumes on RAID5 array


I'm curious if anyone feels that the new AIXDIRECTIO option in TSM 5.1
affects their opinions about how to utilize disk for storage pools.  TSM
5.1 also has an Async I/O option (AIX).  For more info, refer to
http://www.redbooks.ibm.com/redpieces/pdfs/sg246554.pdf, starting at
about page 107.  The AIXDIRECTIO option does not work with
large-file-enabled filesystems, so the filesystem must be 2GB or
smaller.  This option bypasses VMM file cache.



Re: Archive retention question

2002-05-03 Thread bbullock

Wait a second.

So, lets say I set up a management class with the archive copypool
set to 5 years and I archive a bunch of files. A couple of years later,
legal comes to me and says, laws have changed, we now need to keep those
files for 10 years. I have been under the false impression that I could
change the archive copygroup to 10 years, and the currently archived files
would hang around for an extra 5 years. This is not so?

Hmm, just when I thought I had it figured out... So that means I
must retrieve all 3TB of data to somewhere and re-archive it to keep it
around for the extra 5 years? Not a pleasant thought.

I've always wished that I could change the management class an
archive is bound to ~after~ it has been archived. For example, just last
week, a DBA archived a dump of a 900GB database, to a 90-day management
class instead of a 2 year management. Sure, I can delete the bad archive and
have them re-archive it, but tape cycles and bandwidth don't grow on
trees

Thanks,
Ben Bullock
Micron Technology
Boise, ID

-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 03, 2002 1:13 PM
To: [EMAIL PROTECTED]
Subject: Archive retention question


I found this gem on the ADSM site:

Unlike backups, once a file is archived, its
retention is set in stone (you can delete archive files however).


Scenario:
Client creates a management class called 3_Month and sets retention at 90
days.
Later they change their mind, and create another management class called
1_Month
and set it to 30 days.  All current archives go to 1_Month.

They want to ditch as much of the 3_Month as possible, without getting rid
of anything newer than 30 days...

Is there any way to accomplish this without deleting individual archives?



Re: Archive retention question

2002-05-03 Thread bbullock

Thanks Mark, you are right, I kinda misread the scenario. He was
saying that he would create a NEW management class.

Whew... I could have ~sworn~ I knew how it worked, but this email had
me questioning it.

You are right: in his situation, it would be easier to change the
retention on the existing copygroup, but then the name of the management
class would be misleading, and he would impact all the data backed up to
that management class. For those 2 reasons, we abandoned management classes
with retention periods in the name and got more granular about the type of
data (i.e. tax_data_mc or rnd_mc); then when tax laws change, we can update
the copygroup and it affects all of the correct data.


Thanks,
Ben

-Original Message-
From: Remeta, Mark [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 03, 2002 1:57 PM
To: [EMAIL PROTECTED]
Subject: Re: Archive retention question


I'm pretty sure as long as you just CHANGE the archive group it will change
the retention settings. Dale was talking about creating a NEW archive group.
If Dale wanted to change the retention value for 3_Month to actually 1 month,
he would be better off changing the retention value for 3_Month to 1 month
rather than creating a new group called 1_Month. Creating a new group would do
nothing for the existing 3_Month group. I know I changed one of my archive
groups from infinite to 1 year and saw tapes start to reclaim...

Mark


-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 03, 2002 3:25 PM
To: [EMAIL PROTECTED]
Subject: Re: Archive retention question


Wait a second.

So, let's say I set up a management class with the archive copygroup
set to 5 years and I archive a bunch of files. A couple of years later,
legal comes to me and says laws have changed; we now need to keep those
files for 10 years. I have been under the false impression that I could
change the archive copygroup to 10 years, and the currently archived files
would hang around for an extra 5 years. This is not so?

Hmm, just when I thought I had it figured out... So that means I
must retrieve all 3TB of data to somewhere and re-archive it to keep it
around for the extra 5 years? Not a pleasant thought.

I've always wished that I could change the management class an
archive is bound to ~after~ it has been archived. For example, just last
week a DBA archived a dump of a 900GB database to a 90-day management
class instead of a 2-year management class. Sure, I can delete the bad
archive and have them re-archive it, but tape cycles and bandwidth don't
grow on trees...

Thanks,
Ben Bullock
Micron Technology
Boise, ID

-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 03, 2002 1:13 PM
To: [EMAIL PROTECTED]
Subject: Archive retention question


I found this gem on the ADSM site:

Unlike backups, once a file is archived, its
retention is set in stone (you can delete archive files however).


Scenario:
Client creates a management class called 3_Month and sets retention at 90
days.
Later they change their mind, and create another management class called
1_Month
and set it to 30 days.  All current archives go to 1_Month.

They want to ditch as much of the 3_Month as possible, without getting rid
of anything newer than 30 days...

Is there any way to accomplish this without deleting individual archives?




Re: Migration from DLT to LTO Tape Drive

2002-05-01 Thread bbullock

Um... Am I missing something here... This seems to have nothing to
do with TSM...

Why is it being posted here?

Ben

-Original Message-
From: Monit Kapoor [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 01, 2002 8:10 AM
To: [EMAIL PROTECTED]
Subject: Migration from DLT to LTO Tape Drive


Hello all,

We are in the process of retiring a couple of our Backup servers running
on Sun Solaris with Networker 6.1 and DLT tapes and migrating to Legato
6.1 with LTO Tapes.

However, I would like to be able to recover data saved via
any of these retired servers in the future.

In order to do so, I would need to migrate all the media databases and
client indexes onto one backup server which would be used as a recovery
machine.

Could someone help in explaining to me how I should go about this?

Any help in this regards would be great.

Thanks,

--
Monit Kapoor
Sr.Systems Engineer (UNIX Support)
Off: +91-118-4562842 (extn: 4215)
Cell: +919811203816
www.cadence.com



Re: TSM on SUN

2002-04-18 Thread bbullock

Hey, careful, some of us love the IBM LVM and find Veritas
cumbersome and overly confusing. :-)

While we are on the subject: we recently brought up our first TSM
server on Solaris (our other 10 are on AIX) and had a question as to the
disk setup. VxFS and VxVM are available to be used on the disks, but should
we use raw volumes (i.e. /dev/vx/rdsk/stgdg/stg10) or mounted filesystems
(i.e. using dsmfmt to create the TSM volumes)?

We started using filesystems, but found the performance very poor.
We tried tweaking some Veritas settings but could never get it to match the
speed of the raw volumes, so we are now just using the raw volumes.

We are using TSM mirroring for the DB and logs, so is there any
added risk in using the VxVM raw volumes (as opposed to a logged
filesystem)?

Thanks,
Ben



-Original Message-
From: Kelly J. Lipp [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 18, 2002 4:10 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM on SUN


Our experience with TSM on Solaris has been very good.  Installation is very
easy, upgrades are very easy and the code has good support from IBM/Tivoli.
And the added benefit of not having to learn the iLogical Volume Manager!

If your customer has a large Solaris site already, let them use TSM on
Solaris.  No point adding another Unix platform in this case.  Now, if you
asked about HP the story would be completely different.

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs, CO 80949
[EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com or www.storserver.com
(719)531-5926
Fax: (240)539-7175


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Petur Eythorsson
Sent: Thursday, April 18, 2002 9:57 AM
To: [EMAIL PROTECTED]
Subject: TSM on SUN


Hi guys

A big company here in Iceland is thinking about installing TSM on SUN, they
will be the first one to have TSM on SUN here.


My question is,

Is there anything I should know about (bugs, problems), anything Sun-related,
that is different from the other systems?
Or should I recommend just using AIX?


Kvedja/Regards
Petur Eythorsson
Taeknimadur/Technician
IBM Certified Specialist - AIX
Tivoli Storage Manager Certified Professional
Microsoft Certified System Engineer

[EMAIL PROTECTED]

 Nyherji Hf  Simi TEL: +354-569-7700
 Borgartun 37105 Iceland
 URL:http://www.nyherji.is



Re: 1 Gb. Ethernet adapter

2002-02-22 Thread bbullock

We have 1Gb adapters in all 7 of our RS/6000 TSM servers and have
them set at Auto_Negotiation, as that's the only choice. We have had no
problems with the throughput or duplex mismatches that we see on the 10/100
adapters. Your mileage may vary, as it may depend on the network equipment
you plug it into and whether you are using the Ethernet or fibre version of
the Gb card. (We have the fibre interfaces and plug them into Extreme
network equipment.)

Ben
Micron Technology Inc.
Boise, Id

-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]]
Sent: Friday, February 22, 2002 4:51 AM
To: [EMAIL PROTECTED]
Subject: 1 Gb. Ethernet adapter


Hi *SM-ers!
We have a H70 server with a 1 Gb. Ethernet adapter. I checked the setting
and it's set to Auto_Negotiation. I know that this is not the correct value
for optimal performance, but I see only the following choices:
10_Half_Duplex
10_Full_Duplex
100_Half_Duplex
100_Full_Duplex
Auto_Negotiation
There are no 1000_Half_Duplex and 1000_Full_Duplex options available.
What should I select here?
Thanks in advance for any reply!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines





IBM 3583 Library question.

2002-02-06 Thread bbullock

We are hooking a new IBM 3583 tape library up to a Solaris
host to be a remote site TSM server. The Ultrium drives have various
/dev/?st...  devices through which we can access them. The no-brainer was to
use the driver that tells the drive to compress the data, but I'm not sure
between these 2 options:

/dev/rmt/*stc   - compression and Rewind on Close
/dev/rmt/*stcn  - compression but No Rewind on Close

I can't find anything in the TSM or library documents on which to
use. Does TSM get confused if a tape were to be mounted and it was not at
the beginning? It kind of seems like we would reduce tape and drive wear by
using the no-rewind device files, and writes might happen quicker if the
tape is not rewound...

Any suggestions would be helpful. (the listserv archive site
is currently inaccessible.)

Thanks,

Ben
Micron Technology Inc.



Re: VM TSM migration options: Veritas vs Netbackup

2002-01-23 Thread bbullock

Wow, that's quite a sad tale. You have my condolences.

I've had very little experience with NetBackup, but it's periodically
brought up in meetings. The claim of some folks is that on functions where
the TSM server software is weak (i.e. long restore times on filesystems that
are very large, with a large number of files [1 million+]), NetBackup could
do it faster and better.

Recently one of our foreign offices set up a small Netbackup
environment. I'll be interested in seeing how it scales for them.

Thanks for the insight,
Ben

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 21, 2002 12:28 AM
To:
Subject: Re: VM TSM migration options: Veritas vs Netbackup


There has been a lot of recent discussion on the list about the subject area
of Veritas on Intel versus TSM.  The comments here are for everyone, not the
author of the question, nothing is personally meant by any comments here.
Bottomline, NetBackup doesn't scale at all.  We are ripping it out of the
Windows environment right now.  We worked for 18 months with Veritas
Engineering to try to fix the product.  They simply gave up.  The word
compete should not even be put in the same sentence when speaking of
Netbackup vs TSM on Intel.  If you have less than 20 clients to backup and
none with over 50GB of data, Netbackup will be OK.  That is if you never
need to create duplicate offsite copies or need a deleted file policy.
Duplication on Windows is an impossibility in the Netbackup world unless you
buy 4 times the hardware you have in comparison to TSM and a 24 to 72 hour
window to create those duplicates.  Deleted file policy, Netbackup, asks
what is a policy?  No such animal, so you get stuck when you do not catch
that a file has been deleted before your tapes expire in Netbackup.  The key
word here is tape expiration, not backup object expiration.  NetBackup has
no such thing as storage management.  I refer to it as NetBackup,
GrossNoRestore.  In other words, NetBackup backs up some of your stuff, but
you will never be able to restore it all.

Yeah, UNIX is next.  After the debacles of implementing 3.4 of Netbackup,
Veritas really dug the grave deep.  Oh, I forgot to mention that our Windows
Netbackup 3.4 migration lead to a down (backups lost) situation for weeks
and we ended up figuring out what the problems were.

Because we lost half the performance from 3.2 to 3.4 on Windows, we were
faced with needing to change.  More or better hardware would not fix the
problem, hell, we are using ESS disk and Magstar FC tape with high-end
servers.  Before the migration we were getting 4.5MB/sec and up to 10 in
certain situations.  Veritas could not figure out how we were getting these
levels of performance.  They could not reproduce them with our own server
and identical hardware in their labs.  Simply, Netbackup cannot scale in the
Windows environment.

I consider myself an expert on Netbackup and a knowledgeable person on TSM.
I believed the Netbackup hype, thought the product was the best because it
had the features that I thought were needed.  When actually, implementing
you find out the features differences with TSM are gimmicks to get you to
buy and really never scale making them unusable.  These gimmicks cause you
to overlook the real issue of being able to restore your business, which
implies having control and the ability to direct what is backed up.
NetBackup's GUI is impressive; it is the registry hacker's dream.  Wait till
all the timeout crap hits the fan and you start tweaking registry entries,
creating undocumented touch files and finding out there is poor to
non-existent Windows support at Veritas for Netbackup when you have a
critical problems.  When you are paying 23% maintenance from a large account
you would think that having half a dozen critical down situation open calls
would get someone from Development engaged to work with your account.  We
finally surmised these people did not exist anymore.

Yes, TSM has its quirks and customers have lost data over the years, but
probably mostly of their own doing and not really learning the TSM product.
After 911, everyone should be taking backup and recovery at a different
seriousness.  If not, you are in the wrong business.  That means if you are
not an expert in the backup product you are using and doing regular disaster
recovery tests, then shame on you, get to be an expert.  If you are not
capable, choose a vendor that has support; Tivoli is one of them.  The shame
of it is we automatically set the support expectation bar 2 notches higher
when it is an IBM company, but we will pay more to a fly-by-night
organization and make excuses for them when they do not answer the phone.

This all said, make your NetBackup/TSM decision on facts, not likes or
dislikes.  Your business depends on you getting this right and ultimately
your job and reputation.

Consider one final note.  Your understanding of TSM is an irreplaceable
asset.  You 

Re: Bad performance after upgrading from 3.7.2.0 to 4.1.1.0

2002-01-15 Thread bbullock

The upgrade of the OS caused us grief a couple of times. Here's
the note from the readme:

**
* Possible performance degradation due to threading  *
**

On some systems Tivoli Storage Manager for AIX may exhibit significant
performance degradation due to TSM using user threads instead of
kernel threads.

This may be an AIX problem however, to avoid the performance degradation
you should set the following environment variables before you start
the server.

export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S


Run the command env |grep AIXTHREAD. If it does not come back with
the 2 settings above, it may be your problem.
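
(In practice that means exporting the variables in whatever wraps the
server start. A hedged sketch; the install path assumes a stock TSM-on-AIX
layout.)

#!/bin/ksh
# Sketch: set the thread tuning before dsmserv comes up.
export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S
cd /usr/tivoli/tsm/server/bin
nohup ./dsmserv quiet &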

Ben
Micron Technology Inc.


-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 11:46 AM
To: [EMAIL PROTECTED]
Subject: Bad performance after upgrading from 3.7.2.0 to 4.1.1.0


Hi TSM-ers,

we upgraded a few days ago from TSM 3.7.2.0 to 4.1.1.0. This upgrade was
done by our
IBM CE (it's a NSM with an 3494 ATL).
After the upgrade the performance dropped dramatically. I think it's a
problem with the
database because the cache hit rate dropped below 95% - it was in the 99%
area before.
Nothing else was changed. Only AIX was upgraded to 4.3.3 and TSM was brought
to 4.1.1.0.

Are there any recommendations what settings I should change to bring
performance back
to normal?

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG
MAIL:   [EMAIL PROTECTED]
TEL:++43/5574/4991-251
FAX:++43/5574/4991-820-8251







Re: Product life cycle?

2001-11-26 Thread bbullock

That's what I needed. I'll bookmark it this time.

Thanks,
Ben

-Original Message-
From: Malbrough, Demetrius [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 26, 2001 2:26 PM
To: [EMAIL PROTECTED]
Subject: Re: Product life cycle?


http://www.tivoli.com/support/storage_mgr/tivolieoc.html

-Original Message-
From: bbullock [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 26, 2001 3:26 PM
To: [EMAIL PROTECTED]
Subject: Product life cycle?


Quick question:

Where is that little matrix that shows the end-of-life cycle for all
the older TSM versions? I always have a hard time finding it, and am trying
to figure out when my TSM 4.1 servers will have to be upgraded to 4.2.

Thanks,
Ben



Product life cycle?

2001-11-26 Thread bbullock

Quick question:

Where is that little matrix that shows the end-of-life cycle for all
the older TSM versions? I always have a hard time finding it, and am trying
to figure out when my TSM 4.1 servers will have to be upgraded to 4.2.

Thanks,
Ben



Re: transmitting the result of an sql query to a script ? + loop structure ?

2001-11-26 Thread bbullock

As yet another example of the numerous ways to do the same task in
Unix, here's a simple script I wrote a while back to do the same task:


for i in $( dsmadmc -se=xxx -id=xxx -password=xxx "select PROCESS_NUM from
processes where PROCESS like 'Space Reclamation'" | grep '[0-9]' | grep -v
'[A-z]')
do
 dsmadmc -se=xxx -id=xxx -password=xxx "can proc $i"
done
__

As for the original question: I don't know of a way to do it within
the TSM server itself (with either a script or a macro). I'm not saying that
it can't be done, only that I don't know of a way.

Thanks,
Ben


-Original Message-
From: Baines, Paul [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 22, 2001 4:01 AM
To: [EMAIL PROTECTED]
Subject: Re: transmitting the result of an sql query to a script ? +
loop structure ?


I can't think how you'd do it without a shell script. Try this (not tested!)

for proc in `dsmadmc -se=xxx -id=xxx -password=xxx -tab "select '#!#!',
process_num from processes where process='Space Reclamation'" | awk '/^#!#!/
{printf("%d\n", $2)}'`
do
   dsmadmc -se=xxx -id=xxx -password=xxx -tab "cancel proc $proc"
done


Mit freundlichen Grüßen - With best regards
Serdeczne pozdrowienia - Slan agus beannacht
Paul Baines
TSM/ADSM Consultant


-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 22, 2001 11:01 AM
To: [EMAIL PROTECTED]
Subject: transmitting the result of an sql query to a script ? + loop
structure ?


Hi TSM'ers !

I already saw this in a thread, but can't find it again: I'm looking
for a convenient way to stop some reclamation processes in an automated
way (script).
As those reclamation processes run on an offsite stg pool, there's no way
to stop them by raising the reclamation threshold to 100, so I thought of
something like:
select process_num from processes where process='Space Reclamation' ,
and then transmit this process_num to a cancel proc command.
Is there a convenient way of doing that, without calling an external AIX
script?
Better yet: if I have several of those reclamation processes, is there
a way to build a loop in the script, to cancel them all, while running
the script once?
TIA.
Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78/Fax: +41 61 226 17 01 | 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



Re: Daylight saving issue on backup

2001-10-26 Thread bbullock

How about this question: We have some NT clients still running the V
3.1.0.7 client (I know, no longer supported, but you know how customers can
be). They are below the versions mentioned, but the 3.1 versions are not
explicitly discussed.

Will they do a full backup or not?

Ben


-Original Message-
From: Jim Kirkman [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 26, 2001 1:05 PM
To: [EMAIL PROTECTED]
Subject: Re: Daylight saving issue on backup


As Andrew noted earlier, 3.7.2.18/19, 4.1.1.16 and 4.1.2 and above are free
of
this 'defect'

Chan, Kien wrote:

 Hi all TSMers and gurus. Our site still has a problem with this daylight
 saving issue on backup. The majority of our clients are NT boxes running a
 b/a client version below 3.7.2.18, which is the fix version for this daylight
 saving issue. My question is: if we unchecked the box for automatically
 adjusting the daylight saving changes on Saturday night and checked that box
 on Sunday morning, would all these clients do a full backup on Sunday when
 the backup schedule starts on Sunday evening? Besides this fix, would anyone
 have a better solution for a quick fix on this issue? Also, if the client is
 on a version above 3.7.2.18, do I still have to worry about this daylight
 saving, and whether that box for automatically adjusting daylight saving
 changes is checked or unchecked? What about the TSM server, running version
 4.2 for both (b/a and server)?

 Any help or idea will be greatly appreciated. Thank you for all in
advance.

 Best regards,
 Kien

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884



Re: manually ejecting a tape from a 3494 library

2001-10-24 Thread bbullock

To find the tapes that are free-agents:

mtlib -l /dev/lmcp0 -q I |grep FF00

To eject the tapes from the library:

mtlib -l /dev/lmcp0 -C -V TAPE# -t FF10
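
Putting the two together - a rough ksh loop, untested, and check which field
of your 'mtlib -q I' output actually holds the volser before trusting the awk:

for v in $(mtlib -l /dev/lmcp0 -q I | grep FF00 | awk '{print $1}')
do
    mtlib -l /dev/lmcp0 -C -V $v -t FF10
done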

Ben

-Original Message-
From: Shawn Bierman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 24, 2001 11:49 AM
To: [EMAIL PROTECTED]
Subject: manually ejecting a tape from a 3494 library


I have some tapes inside our robot that are possibly mislabeled.

I would like to have the robot eject these tapes and I thought the mtlib
command would be the answer.  I cannot figure out how to do it though.

Any suggestions?

thanks,
-shawn

Shawn L. Bierman
Unix Technical Support Analyst II
Methodist Healthcare
Information Systems
(901) 516-0143 (office)
(901) 516-0043 (fax)



Re: How do I install multiple TSM instances on AIX?

2001-09-25 Thread bbullock

Currently, 2 of our hosts have 2 TSM servers on them. We did it this
way to keep database growth down to a reasonable level.

Here are some of my personal notes from the last time we did it.
Note that it was back in the ADSM days, so the paths will need changing, but
the variable names have remained the same.

In our case, we duplicated the configuration of the first server
over to the second instance and then tweaked it. This way we had the same
management classes. In our configuration, we also don't use dynamic drive
sharing; each of the TSM servers has a fixed number of drives that are its
own. Of course, we can change the distribution with some simple
'define/delete drive' commands.

Use at your own risk



Create the new directory structure for the new server.
 mkdir /usr/lpp/adsmserv/bin2

Copy over the current config file.
 cp /usr/lpp/adsmserv/bin/dsmserv.opt /usr/lpp/adsmserv/bin2/dsmserv.opt

Set up the scripts & aliases in .kshrc to set environment variables
depending on which server you want to start or connect to.
 export DSMSERV_CONFIG=/usr/lpp/adsmserv/bin2/dsmserv.opt
 export DSMSERV_DIR=/usr/lpp/adsmserv/bin
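
For example, a pair of ksh aliases along these lines (alias names are just
examples) keeps the two environments straight:
 alias tsm1='export DSMSERV_CONFIG=/usr/lpp/adsmserv/bin/dsmserv.opt DSMSERV_DIR=/usr/lpp/adsmserv/bin'
 alias tsm2='export DSMSERV_CONFIG=/usr/lpp/adsmserv/bin2/dsmserv.opt DSMSERV_DIR=/usr/lpp/adsmserv/bin'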

Change MOTD to notify users of 2 servers and how to get into them.

Change these dsmserv.opt attributes to be different from the first server
TCPPort 1500
httpport 1580
VOLUMEHistory  /usr/lpp/adsmserv/bin/vol.hist1
DEVCONFig  /usr/lpp/adsmserv/bin/dev.config1

Copy the current stanza in the client dsm.sys file to a new stanza.
Change the TCPPORT and SERVERNAME to connect to the correct TCP port for the
new instance.

Create new filesystems.

Format the new database and logs for ADSM.
 dsmfmt -db /ADSM2/dbvol1/adsm2dbvol1
 dsmfmt -log /ADSM2/logvol1/adsm2logvol1

initialize an adsm server on the new devices
 cd to the new server directory
 dsmserv format 1 logfile 1 dbfile
 this will create the dsmserv.dsk file.

bring up the server
dsmserv

register licenses

define new logical library, device and tape drives.
* make sure private and scratch categories are different, and
that you don't define the drives to the second instance until you delete them
from the first.*

Create storage pools, diskpools and copypools
 create jfs filesystems
 dsmfmt the devices
 dsmfmt -data /ADSM2/stgpool1/adsm2stgpool1

define the storage pools to ADSM
 define stg...
 define vol...

make a copy of the server definitions on the old server.
export server dev=3590dev scratch=yes

checkout libvol tapelib2 B00013

import the server definitions to the new server.
 checkin libvol tapelib2b B00013 stat=private devt=3590
 import server vol=B00013 dev=3590dev

change server settings
 set servername ADSMSERV5B
 SET LICENSEAUDITPERIOD 8
  set accounting on
 SET WEBAUTHTIMEOUT 60

Delete all unneeded schedules

Update all management classes to point to new storagepools.

change the tape checkin and other scripts to new tape series.

Delete all nodes

checkin tapes for new server

___

Ben



-Original Message-
From: Mark T. Petersen [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 25, 2001 3:18 PM
To: [EMAIL PROTECTED]
Subject: How do I install multiple TSM instances on AIX?


All,
How do I install a second TSM instance on AIX?  We are running version
4.1.4.  I
have already created the directory for the second server
and formatted the log and db volumes but can't get the server to connect.
How
do I determine the HTTPORT and TCPPORT numbers
for the second instance?  Do I need to copy the dsm.sys and dsm.opt files
into
the directory of the second TSM server?

Thanks for any help you can offer!

Mark Petersen
McLeodUSA



TSM licenses needed....?

2001-09-05 Thread bbullock

Up till now, we have only bought IBM 3494 tape libraries for our TSM
environment. Well, we are looking at putting in a smaller installation at a
remote site and are considering purchasing an HP SureStor 2/20 or an IBM
Ultrium L18.

First of all, any comments on which would be the better purchase? I
lean towards the IBM library just because we've had such great success with
the 3494 library and the 3590 tape drives.

Second, when we purchased our 3494s, we needed to purchase the
"Extended device support" & the "Managed Library" licenses to use them. We
are getting conflicting bids in the host country as to whether we need to
purchase those licenses for these libraries. Do I need one, both or neither
of these to use this smaller library?

Thanks,
Ben
Unix Manager
Micron Technology Inc.



Re: password caches on shared disk?

2001-08-02 Thread bbullock

Your experience is the same as ours. We encounter the same thing on
some HACMP pairs when we roll services between the hosts. It took us a while
to figure out why some would fail and others not, but we eventually found
that the encrypted password would not work on a host whenever we would
change the hostname (such as we have to do on our Oracle hosts when we do an
HA roll).

We too deduced that the TSM password is encrypted using the hostname
as a key (or something along those lines).

Looking at your situation, I don't think you can get away with a
shared/encrypted password.

Here's the only plan I can think of: Because the hosts mounting the
GPFS filesystem never change hostnames, why don't you point the passworddir
to a local directory on each GPFS client (as it is by default). You will
need to log into each host that mounts the GPFS filesystem initially to
set/encrypt the password, but never again because your hostname will not
change. The only issue is that you have to do this on all the hosts, and it
adds an extra step when you add new hosts into the configuration.
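
In dsm.sys terms that just means each GPFS client keeps a local password
directory - a sketch (the path is only an example):

 server GPFS
    * passworddir points at a local directory on each host
    nodename        gpfs_node
    passwordaccess  generate
    passworddir     /etc/adsm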

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, August 02, 2001 2:58 PM
 To: [EMAIL PROTECTED]
 Subject: password caches on shared disk?


 We've got some shared disk (GPFS, on a SP) which we are
 trying to back up in
 such a way that any of the sharing machines can perform the backup.

 Most pieces of this appear to be working well, but the cached password
 (passwordaccess generate) does not.

 Here's what I've done:

 All clients use identical copies of a dsm.sys that includes a
 stanza like:

 server GPFS
nodename gpfs_node
commethod...
[]
passwordaccess generate
passworddir/shared/disk/directory/


 And then I type

 dsmc q sched -se=GPFS

 and everything works.  So long as I stay on the same machine.
  If I move to
 another machine, then I am re-prompted for a password.

 What I think is happening is that the passwordaccess generate
 transaction is
 NOT really a password for a _node_.  It is, instead, a sort of _host_
 authenticator, and thus is calculated (or whatever)
 differently on different
 hardware.

 Can anyone else shed some light on this?  Am I just missing a
 trick somehwere?
 Has anyone ever set up a password cache file that is visible
 to multiple
 boxes?



 - Allen S. Rout




Re: Backing up lots of small files

2001-07-31 Thread bbullock

This was discussed a while back and you should be able to search it
in the archives. But, in a nutshell, your suggestion is what we did in our
case. We had our developers write scripts that tar up the many little files
in some logical fashion (by date, by directory, by part type, etc.) and then
have TSM back up those files and ignore all the little files. When a restore
is needed, they have a script that restores the tar and uncompresses it to
its original location.

The backups take much less time now, and the restoration of 20,000
files now takes only 1 tar file and 3 minutes to restore, whereas it used
to take hours. Sure, the untar then takes a while to put the
files out there, but it is still much faster than a standard TSM restore.
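
As a rough sketch of the nightly job (assuming a date-named directory layout
like /data/YYYYMMDD and a staging area - the real script will differ):

day=$(date +%Y%m%d)
tar -cf /staging/parts_$day.tar /data/$day &&
    dsmc archive "/staging/parts_$day.tar" -description="part data $day"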

In our case, these hosts have millions of files that are written
once and never changed, and then deleted a few days later, so it works well
in our situation. If the existing files are constantly changing though, it
may be more complicated to implement a scheme like this. Versioning of the
files would be next to impossible with a scheme like this. Like, when a user
wants a file restored but has no idea what day it was created, backed up or
deleted,(that never happens, right? ;-), you want to have a way to know
which tar file that file is going to be found in. If you can figure out how
to tar up the files logically, it can work.

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: Gerald Wichmann [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, July 31, 2001 10:38 AM
 To: [EMAIL PROTECTED]
 Subject: Backing up lots of small files


 Has anyone come up with any good strategies on backing up a
 file server with
 millions of small (1KB-10KB) files? Currently we have a
 system with 50GB of
 them that takes about 40 hours to back up. Many of the files change..

 I'm wondering if anyone has anything in place that works well
 for this sort
 of situation..

 Something like this, or perhaps something I haven't thought of:

 develop a script that will tar/compress the files into a single
 file and back
 that single file up daily. Perhaps even search the file
 system by date and
 only back up files that have changed since the last backup
 (this seems like
 it wouldn't be any faster than simply backing them up though)




Re: Offsite volume in strange state

2001-07-13 Thread bbullock

Hmm, we have about a dozen tapes in this state that are empty but not empty.
We have even brought the tapes back on site and tried the 'audit vol...
fix=yes' command on them and they still are not getting cleared up. We have
opened a call with Tivoli but they have yet to be able to fix it. We are on
AIX 4.3.3 at TSM version 4.1.1, so I'm not sure if our problems are related,
but if they find a fix for us, I'll let you know what it is.

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: Thomas Denier [mailto:[EMAIL PROTECTED]]
 Sent: Friday, July 13, 2001 2:37 PM
 To: [EMAIL PROTECTED]
 Subject: Offsite volume in strange state


 Our TSM server log for yesterday afternoon shows many pairs of
 messages like the following:

 14:00:21 ANRD AFMIGR(2967): Space reclamation terminated
 for volume
  600206 - unexpected result code (1).
 14:00:21 ANR1092W Space reclamation terminated for volume 600206 -
  internal server error detected.

 The output from 'q v 600206 f=d' is as follows:

Volume Name: 600206
  Storage Pool Name: VSN_MAIL
  Device Class Name: VAULT3590
Estimated Capacity (MB): 9,216.0
   Pct Util: 0.0
  Volume Status: Filling
 Access: Offsite
 Pct. Reclaimable Space: 100.0
Scratch Volume?: Yes
In Error State?: No
   Number of Writable Sides: 1
Number of Times Mounted: 4
  Write Pass Number: 1
  Approx. Date Last Written: 07/11/2001 09:19:39
 Approx. Date Last Read: 07/10/2001 17:43:16
Date Became Pending:
 Number of Write Errors: 0
  Number of Read Errors: 0
Volume Location: VAULT
 Last Update by (administrator): BATSYS
  Last Update Date/Time: 07/11/2001 10:13:48

 A 'query content 600206' command reports 'No match found using
 this criteria.' Our server runs under OS/390 and is at level
 3.7.4.0. The 'delete volume' command fails with a return code
 of 13 with or without the 'discard=yes' option. The volume is
 in fact off-site, which makes 'audit volume' a significant
 undertaking. Is there any way to get rid of a volume like this?




Re: TSM server performance help

2001-07-11 Thread bbullock

I've seen all the replies and they are all good, but I don't think
they will solve your problem. I think the most telling problem is that when
you run backups individually, they work great, but when you have multiple
sessions and processes running at the same time, they all seem to bog down.
We are running 2 of our TSM servers on M80s and they just scream.

It sounds like what bit us when we upgraded to AIX 4.3.3. Because of
the way TSM threads its jobs, if you don't have a certain setting in AIX,
your TSM server will become a single-threaded beast and really bog down.
From the TSM 4.1.1 readme:

___
**
* Possible performance degradation due to threading  *
**

On some systems Tivoli Storage Manager for AIX may exhibit significant
performance degradation due to TSM using user threads instead of
kernel threads.

This may be an AIX problem however, to avoid the performance degradation
you should set the following environment variables before you start
the server.

export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S
___


Making these changes doesn't require a reboot, so it's worth a shot to
add these in, make sure they get sourced into your current environment (by
logging out & back in, or just running these 2 lines) and then stop and
start the ADSM service.
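
In other words, whatever rc script launches the server should end up doing
something like this (the install path is just an example - adjust to your
layout):

export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S
cd /usr/tivoli/tsm/server/bin && nohup ./dsmserv quiet &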


Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: Chuck Lam [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, July 10, 2001 5:47 PM
 To: [EMAIL PROTECTED]
 Subject: TSM server performance help


 Server:  M80 running AIX 4.3.3, 4GB RAM, 768MB paging space
  DiskStoragePool, Rlog, & Database are all on a
   SAN via fibre channel connection.
 TSM version: 4.1

 This is a newly configured system.  We are conducting
 tests on it. When we tried to do a few backups by
 themselves, they seemed to get done pretty fast.
 However, when we had 4 migration processes and a
 couple of backups going at the same time, everything
 slowed down to a halt.  According to my client, his
 backup transfer speed was about 100MB per minute. Can
 anyone give me some suggestions to improve the
 performance?  I have attached the 'topas' reading of
 this server.

 TIA






Re: ADSM 3.1.2.90 on AIX 4.3.3-06

2001-07-02 Thread bbullock

I'll take a stab at a possible solution if you haven't found a
solution yet.

As we upgraded our ADSM servers to latter versions of the OS and the
TSM server software, we experienced significant slowdowns on the system
especially when there were multiple sessions or processes going on. We
eventually found our answer in the readme file for the TSM server. I believe
this bit us when we upgraded the OS as you did.

Here are some notes from the readme file for version 4.1.1.

___
**
* Possible performance degradation due to threading  *
**

On some systems Tivoli Storage Manager for AIX may exhibit significant
performance degradation due to TSM using user threads instead of
kernel threads.

This may be an AIX problem however, to avoid the performance degradation
you should set the following environment variables before you start
the server.

export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S
___

Not having these 2 settings basically turns your multi-threading
machine into a single-threaded process and performance really gets bad.

Making these changes doesn't require a reboot, so it's worth a shot to
add these in, make sure they get sourced into your current environment (by
logging out & back in, or just running these 2 lines) and then stop and
start the ADSM service.


Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: irene [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, June 27, 2001 9:54 AM
 To: [EMAIL PROTECTED]
 Subject: ADSM 3.1.2.90 on AIX 4.3.3-06


 We were running ADSM 3.1.2.90 on AIX 4.2.1 and it behaved
 normally.  When we migrated AIX to 4.3.3-06, ADSM slowed
 down to unacceptable response times.  For example, a backup
 that used to take 5 minutes now takes 50 minutes.  The operating
 system seems to be performing O.K.  All of this is being done on
 a test machine at the moment.  Does anyone have any ideas
 why this is happening?  Should we have done a complete overwrite
 rather than a migrate when moving to AIX 4.3.3-06 from AIX 4.2.1?
 Thank-you for any insight provided.
   Irene Braun
   EMAIL:  [EMAIL PROTECTED]




Re: More on 3590K tape quality issues

2001-05-11 Thread bbullock

I second this issue. These puppies are a real problem to get
labeled. We had our IBM CE check that we had the latest and greatest
drivers installed (tape drive microcode, Atape drivers, atldd software), but
the problem persists.
In our case, we also seem to have a higher incidence of the 'K'
tapes logging an error and being marked as readonly. We change the tape back
to readwrite, and it continues to write and read the tape with no problems.

The new tapes just seem to be very sensitive, and throw up the panic
flag more often.

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: Richard Sims [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 11, 2001 9:51 AM
 To: [EMAIL PROTECTED]
 Subject: More on 3590K tape quality issues


 Postings from customers using 3590K tapes have, over the past months,
 noted problems which most conspicuously turn up when trying to label
 the tapes.  I had the same problem of I/O errors on several tapes
 two months ago, in the labeling process.  I persisted in performing
 the label operation on the problematic ones, and finally got them all
 labeled.  Once labeled, no problems.

 Today I received a fresh shipment of 3590K tapes (IBM part
 number 05H3188)
 and labeled 70 of them.  Three exhibited labeling problems,
 on different
 drives. I repeated the Label Libvolume on those three: two of
 them then
 successful, one failed.  Repeated on the last tough one and
 it finally went
 through.

 Such is the current state of 3590K tape and drive harmony, at least at
 this site.

 Richard Sims, BU




Re: More on 3590K tape quality issues

2001-05-11 Thread bbullock

Yes, the 'overwrite=yes' option was used every time. It just seems
to stubbornly fail until it decides to work a few tries later... These tapes
are possessed I tell you, pure evil.   ;-)

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: David Longo [mailto:[EMAIL PROTECTED]]
 Sent: Friday, May 11, 2001 10:32 AM
 To: [EMAIL PROTECTED]
 Subject: Re: More on 3590K tape quality issues


 When you all label these tapes, do you use overwrite=yes?  I
 had occasional problems with new tapes in a 3575 library, and
 when I started using overwrite=yes, I had no further problems
 with labeling.  I suspect that even with new tapes there
 may be something on the tape.


 David B. Longo
 System Administrator
 Health First, Inc.
 3300 Fiske Blvd.
 Rockledge, FL 32955-4305
 PH  321.434.5536
 Pager  321.634.8230
 Fax:321.434.5525
 [EMAIL PROTECTED]


  [EMAIL PROTECTED] 05/11/01 12:08PM 
 I second this issue. These puppies are a real problem to get
 labeled. We have our IBM CE check that we had the latest and greatest
 drivers installed (tape drive microcode, Atape drivers, atldd
 software), but
 the problem persists.
 In our case, we also seem to have a higher incidence
 of the 'K'
 tapes logging an error and being marked as readonly. We
 change the tape back
 to readw, and it continues to write and read the tape with no
 problems.

 The new tapes just seem to be very sensitive, and
 throw up the panic
 flag more often.

 Ben Bullock
 Unix system manager
 Micron Technology Inc.


  -Original Message-
  From: Richard Sims [mailto:[EMAIL PROTECTED]]
  Sent: Friday, May 11, 2001 9:51 AM
  To: [EMAIL PROTECTED]
  Subject: More on 3590K tape quality issues
 
 
  Postings from customers using 3590K tapes have, over the
 past months,
  noted problems which most conspicuously turn up when trying to label
  the tapes.  I had the same problem of I/O errors on several tapes
  two months ago, in the labeling process.  I persisted in performing
  the label operation on the problematic ones, and finally
 got them all
  labeled.  Once labeled, no problems.
 
  Today I received a fresh shipment of 3590K tapes (IBM part
  number 05H3188)
  and labeled 70 of them.  Three exhibited labeling problems,
  on different
  drives. I repeated the Label Libvolume on those three: two of
  them then
  successful, one failed.  Repeated on the last tough one and
  it finally went
  through.
 
  Such is the current state of 3590K tape and drive harmony,
 at least at
  this site.
 
  Richard Sims, BU
 







Re: Multiple TSM* Servers On Same Machine

2001-05-09 Thread bbullock

I agree with all the points in the list except one. On #5, we have
been required by Tivoli to purchase one server license (and the other
required licenses - network enabler, etc.) for each instance of the server software we
are running on a host. Yes it is using the same binaries, hardware, network,
etc., but the word we got from Tivoli was that we were out of license
compliance unless we purchased another license.

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: Jeff Bach [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, May 09, 2001 10:22 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Multiple TSM* Servers On Same Machine


 1.  Less hardware to manage
 2.  SCSI tape resources can be shared
 3.  DISK resources can be moved between instances without
 an outage
 4.  Multiple interfaces can be shared
 5.  One TSM server license
 6.  Can be implemented in a few hours
 7.  Works around application bottlenecks
 8.  Cheaper

 Disadvantages:
 1.  Harder to upgrade
 2.  Memory allocation can be an issue


 -Original Message-
 From:
 [EMAIL PROTECTED]
 [SMTP:[EMAIL PROTECTED]]
 Sent:   Wednesday, May 09, 2001 10:42 AM
 To: [EMAIL PROTECTED]
 Subject:Multiple TSM* Servers On Same Machine

 Hi all,

 Why would one put multiple TSM servers on a single machine?
 What are the advantages and disadvantages? What are the
 scenarios?

 History:
 We have one AIX system with one TSM server with two libararies
 attached.

 Robert Miller
 Armstrong






Re: Dr. Watson in NT after installing TSM4.1

2001-04-18 Thread bbullock

Are you getting this pkthread error on NT servers?

Ben Bullock
Unix system manager
Micron Technology Inc.


 -Original Message-
 From: LeBlanc, Patricia [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, April 18, 2001 1:04 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Dr. Watson in NT after installing TSM4.1


 Has anyone seen a pkthread error?
 We've upgraded from 3.1 to TSM 4.1.1.0 and keep getting this
 error that
 hangs the server.  It's not running and it's not down
 completely.  If it
 were down, our monitoring would pick that up.

 7 servers have been upgraded, but only a couple are having
 this problem on a
 regular basis.
 Any thoughts would be greatly appreciated.

  -Original Message-
  From: Miller, Ryan [SMTP:[EMAIL PROTECTED]]
  Sent: Thursday, April 12, 2001 3:33 PM
  To:   [EMAIL PROTECTED]
  Subject:  Re: Dr. Watson in NT after installing TSM4.1
 
  The one I can remember off the top of my head is the Fstypes option; it is
  no longer supported, and if it is in the OPT file it will cause a Dr Watson.
  I believe we had another unsupported option cause the same problem, but
  I can't remember it.  Sorry.  I would review any OPT file you are going to
  copy and question any options that seem out of place.  That's how we found
  ours.
 
  -Original Message-
  From: LeBlanc, Patricia [mailto:[EMAIL PROTECTED]]
  Sent: Thursday, April 12, 2001 2:21 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Dr. Watson in NT after installing TSM4.1
 
 
  What are the options you are talking about that are no
 longer supported??
 
   -Original Message-
   From: Miller, Ryan [SMTP:[EMAIL PROTECTED]]
   Sent: Thursday, April 12, 2001 3:13 PM
   To:   [EMAIL PROTECTED]
   Subject:  Re: Dr. Watson in NT after installing TSM4.1
  
   Did you use the original OPT file?  If so, there are some
 options that
  can
   cause Dr Watson errors.  They are options that are no
 longer supported
  by
   the newer client, we have experienced this issue also and
 found this
   cause.
  
   -Original Message-
   From: Gottfried Scheckenbach
   [mailto:[EMAIL PROTECTED]]
   Sent: Thursday, April 12, 2001 11:24 AM
   To: [EMAIL PROTECTED]
   Subject: Re: Dr. Watson in NT after installing TSM4.1
  
  
    Because there were no answers:

    Today I had the same problem - after upgrading an old ADSM 3.1 to TSM 4.1
    installation on NT with SP 3. Just after installation of SP 6 all went
    fine.
  
   Regards,
   Gottfried
  
  
   Shekhar Dhotre wrote:
   
Hi all ,
   
I installed TSM server 4.1 on WinNT 4.0 service pack 4, RAM 256 MB.
Whenever I open the configuration wizard to configure any of the parameters
in TSM, Dr. Watson shows his face. What should I check to correct this
problem?
   
Thanks
shekhar




Re: SAP and 3494 performance.

2001-03-20 Thread bbullock

OK, I'll bite. We don't use the TDP client, we use home-grown
scripts that use the standard TSM archive client, but I believe the
bottleneck in your case was the same as mine.

The bottleneck in your scenario is the 100Mb Ethernet interface. In
our testing, we find that a TSM session can sustain a throughput of about 11
MB/sec (which equates to about 88 Mb/sec). That's roughly 88% of the wire
going to data, with the rest being TCP/IP overhead. That's about as good as
it gets, from my understanding.

Doing the math on that figure, (11 MB/sec x 60 sec x 60 min) = about
40 GB/hour. It looks like a 350GB database would be able to move in about
8.75 hours. Obviously, with the compression on the client turned on, you are
actually moving less data over the wire and are achieving a better
throughput at 6 hours for your backup.

So, if you have one 100Mb interface on your SAP client, you can
start multiple client sessions and you probably still won't see any
improvement because the card is maxed out.

We were up against the same wall and ended up putting GB Ethernet
cards (and switches, etc) on both the TSM server and select clients. The
improvement in speeds were anywhere from 2X to 4X depending on the client.
On some overtaxed clients, it moved the bottleneck to their slower disks or
CPU. In a few cases, the 3590E drives became the bottleneck, but we just put
the 'resourceutilization' to a higher value to multi-thread the client to
multiple tape drives. Not sure how you multi-thread the TSM for SAP agent,
but I believe it is possible.

Hope that helps.

Ben Bullock
UNIX Systems Manager
Micron Technology Inc

 -Original Message-
 From: Francisco Molero [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, March 20, 2001 8:35 AM
 To: [EMAIL PROTECTED]
 Subject: SAP and 3494 performance.


 Hello,
 I have a query about performance between a TDP for SAP
 R/3 under HP-UX and TSM Server with a 3494 library.
 The size of database is 350 GB and the connection is
 via tcpip (ethernet 100). I used the brbackup -c -t
 offline command and the bacukp is sent to tape pool in
 the library 3494. I have a private network and also I
 am using the compression parameter to yes in the
 client in order to improve the performance in the
 network. The backup takes 6 hours in backing up the db
 and this time is very high. I need to reduce it. the
 drives are 3590, and the format device class is DRIVE.
 Can someone help me about the parameters which can
 improve this performance ???

 Regards,
 Fran

 __
 Do You Yahoo!?
 Get email at your own domain with Yahoo! Mail.
 http://personal.mail.yahoo.com/




Re: AIX client dumping core during backup

2001-03-15 Thread bbullock

Alright,
One of my intrepid coworkers opened a case with Tivoli about this
error. I'll attach the e-mail from them, but in a nutshell:
- This is an error only being seen on AIX clients.
- "This problem has been observed at 4.1.1 and 3.7.2.X levels including
patch level 3.7.2.15."
- There is an open APAR IC28528, since Oct 25th.
- Their solution is to "Run backup manually or use cron to schedule the
backup."
- They are pointing the finger at "NIS+" - and indeed, the hosts we see this
error on are NIS+ clients.
- They recommend certain NIS+ filesets to be at certain levels. We are a
little behind on "bos.net.nisplus 4.3.3.27" and "bos.rte.libc 4.3.3.27", so
we will update those.
- If this does not resolve the issue, we are to call them back and get on a
list of customers that are still up the creek.

Here is the text of the message if desire to read it in full.

-Original Message-

APAR= IC28528  SER=IN INCORROUT
SCHEDULED CLIENT BACKUPS FAIL WITH B/A TXN CONSUMER THREAD,
FATAL ERROR, SIGNAL 11
STAT= OPEN FESN0907344- CTID= SJ0291 ISEV= 2
SB00/10/25  RC00/10/25  CL  PD   SEV= 2

ERROR DESCRIPTION:
Scheduled backups fail with B/A Txn Consumer thread, fatal error
, signal 11 - manual backups work. Dsmerror.log may have entries
similar to the following:
08/15/00   01:58:04 B/A Txn Consumer thread, fatal error, signal
08/15/00   01:58:04   0xD01D04E8 shadow_pass_r
08/15/00   01:58:04   0xD01D0210 shadow_chk_r

08/15/00   01:58:04   0xD01D1B60 _getpwuid_shadow_r
08/15/00   01:58:04   0xD01D4A7C getpwuid
08/15/00   01:58:04   0x1003277C *UNKNOWN*
Trace may show entries similar to:
signal.pthread_kill(??, ??) at 0xd0088f9c
signal._p_raise(??) at 0xd008846c
raise.raise(??) at 0xd01785ec
abort.abort() at 0xd0171d30
psunxthr.psAbort() at 0x100162a8
psunxthr.psTrapHandler() at 0x100163b4
getpwent.shadow_pass_r(??, ??) at 0xd01d04e8
getpwent.shadow_chk_r(??, ??) at 0xd01d020c
getpwent._getpwuid_shadow_r(??, ??, ??, ??, ??) at 0xd01d1b5c
getpwent.getpwuid(??) at 0xd01d4a78
pssec.GetIDFromOS() at 0x10032778
pssec.GetId() at 0x100329a8
pssec.idObjGetName() at 0x10032a78
senddata.sdSendObj() at 0x100dc8e0
txncon.sendIt() at 0x100d79fc
txncon.PFtxnListLoop() at 0x100d7fa8

txncon.PrivFlush2() at 0x100d8968
txncon.PrivFlush() at 0x100d958c
txncon.tlSend() at 0x100d9c64
bacontrl.HandleQueue__14DccTxnConsumerFv() at 0x100d66fc
bacontrl.Run__14DccTxnConsumerFPv() at 0x100d62a0
bacontrl.DoThread__14DccTxnConsumerFPv() at 0x100d5d24
thrdmgr.startThread() at 0x10018af4
pthread._pthread_body(??) at 0xd007c358
This problem has been observed at 4.1.1 and 3.7.2.X levels
including patch level 3.7.2.15.
LOCAL FIX:
Run backup manually or use cron to schedule the backup.

 We need to inform Jason that
APAR IC28528 is an AIX problem and the Patches for this
APAR are as follows:
~
bos.net.nisplus 4.3.3.27
bos.net.nis.client 4.3.3.25
bos.net.nis.server 4.3.3.25
bos.rte.libc 4.3.3.27

After first investigation, it could be due to the use of TSM in conjunction
with NIS or could be related to Apar IC26906.
 First try to apply the fix IP22085 (- install TSM client
 code 4.1.1), which correct this issue.
 This code can be downloaded at
 ftp://index.storsys.ibm.com/tivoli-storage-management/maintenance/client
 /v4r1/AIX/v411/.
 Here is the Apar abstract:
  ERROR DESCRIPTION:
  When using TSM 3.7.0, 3.7.1 or 3.7.2 client on a UNIX platform
  and backing up a large amount of data and directories, the
  client may terminate processing and produce an application
  core dump.
 .
  Screen output may include a (Signal/ 6) error.
 The dsmerror.log will show:
 B/A Txn Consumer thread, fatal error, signal 11
 and subsequent  hexidecimal error locations.
 Dsmsched.log will not provide any additional info.   Problem
 only occurs on backups and not archives.
 Problem occurs more frequently and earlier in processing when
 higher RESOURCEUTILZATION values are used and/or
 MEMORYEFFICIENT is turned on.
 .
 If it doesn't resolve your problem, this could be related to NIS
 I guess that NIS in implemented in your environment ?
 From the provided info, a signal 11 (segmentation violation / coredump)
 probably triggered when TSM tries to read invalid memory addresses. It
 starts when TSM makes an operating system call, getpwuid().
 In a normal AIX host (non NIS), the
 username/password of the users are stored in the /etc/passwd file on
 the local machine. But in an NIS client, this is not the case (This user
 information is stored on the NIS master / slave).
 In any case, getpwuid() is an AIX operating system function, that
 accesses the basic user information in the user database and returns a
 "passwd" structure.  The AIX documentation for this function gives the
 following entries for the "passwd" structure, and includes the following
 note/s, both of which seem appropriate in 

Re: AIX client dumping core during backup

2001-03-14 Thread bbullock

Interesting,
We too have over 70 AIX clients on our TSM server, but I have 1
client that coughs the same error about once a week:

03/11/01   16:00:11 B/A Txn Consumer thread, fatal error, signal 11
03/11/01   16:00:11   0xD01D0BA0 shadow_pass_r
03/11/01   16:00:11   0xD01D08C8 shadow_chk_r
03/11/01   16:00:11   0xD01D2218 _getpwuid_shadow_r
03/11/01   16:00:11   0xD01D5134 getpwuid
03/11/01   16:00:11   0x10021148 *UNKNOWN*
03/11/01   16:00:11   0x100210BC *UNKNOWN*
03/11/01   16:00:11   0x10020E5C *UNKNOWN*
03/11/01   16:00:11   0x100AE0D8 *UNKNOWN*
03/11/01   16:00:11   0x100ACB64 *UNKNOWN*
03/11/01   16:00:11   0x100AC644 *UNKNOWN*
03/11/01   16:00:11   0x100AB310 *UNKNOWN*
03/11/01   16:00:11   0x100AB020 *UNKNOWN*
03/11/01   16:00:11   0x100AA954 *UNKNOWN*
03/11/01   16:00:11   0x100A9B2C *UNKNOWN*
03/11/01   16:00:11   0x100A968C *UNKNOWN*
03/11/01   16:00:11   0x100A9110 *UNKNOWN*
03/11/01   16:00:11   0x100988AC *UNKNOWN*
03/11/01   16:00:11   0x100988FC *UNKNOWN*
03/11/01   16:00:11   0xD00081FC _pthread_body
03/11/01   16:00:11   0x *UNKNOWN*

The client is running AIX 4.3.3, and running TSM client version
4.1.2.0. The server in my case is also an AIX host running AIX 4.3.3 and TSM
server version 4.1.1.0. Our library is also a 3494 with 3590E drives. We
have tried uninstalling and reinstalling the client but it did not resolve
the problem.

The odd thing is that during the day, many archive sessions are
pushed to TSM with no problem, but the scheduled incremental backup causes
this error once in a while. Stopping and starting the daemon on the client
fixes it for a few days, but it inevitably comes back. I have not opened a
case with Tivoli on this because I have other priorities, but I too find it
perplexing...

Ben Bullock
UNIX Systems Manager
Micron Technology Inc.


 -Original Message-
 From: Andrea Campi [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 14, 2001 10:09 AM
 To: [EMAIL PROTECTED]
 Subject: AIX client dumping core during backup


 I have an AIX client which dumps core every time I try to back up
 a particular 3.5GB fs. The log shows:

 03/14/01   17:02:39 B/A Txn Consumer thread, fatal error, signal 11
 03/14/01   17:02:39   0xD01C8C08 shadow_pass_r
 03/14/01   17:02:39   0xD01C8930 shadow_chk_r
 03/14/01   17:02:39   0xD01CA268 _getpwuid_shadow_r
 03/14/01   17:02:39   0xD01CD16C getpwuid
 03/14/01   17:02:39   0x10021148 *UNKNOWN*
 03/14/01   17:02:39   0x100210BC *UNKNOWN*
 03/14/01   17:02:39   0x10020E5C *UNKNOWN*
 03/14/01   17:02:39   0x100AE0D8 *UNKNOWN*
 03/14/01   17:02:39   0x100ACB64 *UNKNOWN*
 03/14/01   17:02:39   0x100AC644 *UNKNOWN*
 03/14/01   17:02:39   0x100AB310 *UNKNOWN*
 03/14/01   17:02:39   0x100AB020 *UNKNOWN*
 03/14/01   17:02:39   0x100AA954 *UNKNOWN*
 03/14/01   17:02:39   0x100A9B2C *UNKNOWN*
 03/14/01   17:02:39   0x100A968C *UNKNOWN*
 03/14/01   17:02:39   0x100A9110 *UNKNOWN*
 03/14/01   17:02:39   0x100988AC *UNKNOWN*
 03/14/01   17:02:39   0x100988FC *UNKNOWN*
 03/14/01   17:02:39   0xD00081FC _pthread_body

 This is a 4.1.2.0 client, but 3.7 showed the exact same behavior.

 Can anybody help me understand what's up? Other AIX machines are ok.
 Servers are also AIX with TSM 3.7.3.8 (and by the way, should I try
 upgrading them? We are already planning to go to 4.1 sometime before
 summer), 3494 libraries with 3590E.

 Thanks in advance, bye,
 Andrea


 Bye,
 Andrea

 --
 Andrea Campi  mailto:[EMAIL PROTECTED]
 I.NET S.p.A.  http://www.inet.it
 Direzione Tecnica - RD   phone: +39 02 32863 ext 1
 v. Darwin, 85 - I-20019   fax: +39 02 32863 ext 7701
 Settimo Milanese (MI), Italy




Re: Backupsets

2001-03-06 Thread bbullock

Good questions Gil,
I too have a similar TSM environment and have similar questions.

I have fiddled around a little with backupsets to see how they might
be used in our environment. One of the first things I noticed was that to
make a backupset, it took about an hour to just do a simple 30,000 file '/'
filesystem on an AIX host. It kept stopping for long periods of time with no
indication as to what it was doing. I suspect that it was
mounting/dismounting tapes, but the process gives no indication as to what
tape/tapes it's mounting (like you typically see on processes that use
tapes, i.e. reclamations, migrations, backup storagepools). In general, it
felt like a piece of the TSM server that was written by a different group
and did not behave as the typical processes do.

Let me see what I can contribute to your questions:

1. My testing of backupsets leads me to believe that there is no way to get
an individual listing of files because it is saved in the TSM server
database as 1 object. You can get a "stream of thought" type of output of
what is on a backupset with a 'q backupsetcontents...' command on the
server, but it is rather useless. You are able to restore an individual file
or group of files with various '*'s on a 'restore backupset...' command on
the client, but it is not very practical if you don't know the exact file or
directory you need (typical with an end-user restore where they are not sure
of the filename or where they put it).
In our testing, the only practical way to hunt through a backupset
is to restore it somewhere else and search through it. It's really not
set up for the restoration of individual files, but to restore a whole
filesystem or box to a point in time.

2. Yes, you should be able to define the dat tape drive to the TSM server as
another device and then use it just to generate backupsets. Should just be
'define library...', 'define devclass...' & 'define drive...' commands on the
server - see the sketch below.
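
A sketch of what that might look like for the internal dat drive at rmt2 (all
of the names are made up, and I haven't tried this exact sequence):

define library datlib libtype=manual
define drive datlib dat0 device=/dev/rmt2
define devclass datclass devtype=4mm library=datlib
generate backupset NODENAME mybackupset /home devclass=datclass retention=5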

3. I too am curious about a CD burner on an AIX host. I haven't looked into
it very much, but I haven't heard of one.

Ben Bullock
UNIX Systems Manager


 -Original Message-
 From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, March 06, 2001 8:56 AM
 To: [EMAIL PROTECTED]
 Subject: Backupsets


 Hi All,

 Backupsets is the subject today. Questions and problems to be
 a bit more
 specific. A few weeks ago I created a backupset that was to
 be around for 5
 days. To date the backupset looks like it's still around. All
 attempts to
 query it or delete the volume have failed and I'm wondering
 if anyone has a
clue as to what the problem is.

3494LIB   U00866   Private   BackupSet

 The second issue(s) is still backupsets.

 1. If I create a backup set on dat and bring it to a node I
 am supposed to
 be able to read it and restore it. The question is can I just
 restore a
 single file or some semblance of multiple files. I haven't
 been able to view
 anything with the GUI except the backupset info itself. How
 can I view the
 directory structure to pick and choose the files to restore?

 2. My environment is AIX 4.3.3 with TSM 4.1.2. The TSM server
 has a dat
 drive internal to the server which is rmt2. The 3494 has 4
 3590 drives;
 rmt0,1,3,4. I'd like to know if I can define this dat drive
 within TSM and
 use it to create a backupsets only. I'd also like to know if
 it is possible
 to connect an external autoloader dat unit for the same process. If
 backupsets turn out to be an option to implement I wouldn't
 want to use the
 3590's since nodes don't have them.

 3. I noticed the post recently that talks about putting
 backupsets on CD. I
 have never heard of a cd-writer/rewriter on an AIX box, which
 doesn't mean
 it's not possible. I'm wondering if anyone has a solution for creating
 backupsets on an environment similar to mine, or any other
 for that matter,
 that can create backupsets on cd's.

 Thanks all in advance for the help.

 Geoff Gill
 NT Systems Support Engineer
 SAIC
 Computer Systems Group
 E-Mail:   [EMAIL PROTECTED]
 Phone:  (858) 826-4062
 Pager:   (888) 997-9614




Re: Performance Large Files vs. Small Files

2001-02-26 Thread bbullock

EEK. I'm sure this is not the answer, because if I rename the
filesystem every day, then I have to do a full backup of the filesystem
every day. I don't think there are enough hours in the day to do a full
backup, export it, and delete the filespace. Thanks for the suggestion
though.

Ben Bullock
UNIX Systems Manager
(208) 368-4287

 -Original Message-
 From: Lambelet,Rene,VEVEY,FC-SIL/INF. 
 [mailto:[EMAIL PROTECTED]]
 Sent: Monday, February 26, 2001 1:13 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Performance Large Files vs. Small Files
 
 
 Hello,
 
 
 you might think of renaming the node every day, then doing an export
 followed by a delete of the filespace (this will free the DB).
 
 If a restore is needed, import the node.
 
 René Lambelet
 Nestec S.A. / Informatique du Centre
 55, av. Nestlé, CH-1800 Vevey (Switzerland)
 Tel +41 21 924 35 43 / Fax +41 21 924 28 88 / K4-117
 email [EMAIL PROTECTED]
 Visit our site: http://www.nestle.com
 
 
 
  -Original Message-
  From: bbullock [SMTP:[EMAIL PROTECTED]]
  Sent: Tuesday, February 20, 2001 11:22 PM
  To:   [EMAIL PROTECTED]
  Subject:  Re: Performance Large Files vs. Small Files
  
  Jeff,
  You hit the nail on the head of what is the biggest 
 problem I face
  with TSM today. Excuse me for being long winded, but let me 
 explain the
  boat
  I'm in, and how it relates to many small files.
  
  We have been using TSM for about 5 years at our 
 company and have
  finally got everyone on our band wagon  and away from the variety of
  backup
  solutions and media we had in the past. We now have 8 TSM 
 servers running
  on
  AIX hosts (S80s) attached to 4 libraries with a total of 44 
 3590E tape
  drives. A nice beefy environment.
  
  The problem that keeps me awake at night now is 
 that we now have
  manufacturing machines wanting to use TSM for their 
 backups. In the past
  they have used small DLT libraries locally attached to the host, but
  that's
  labor intensive and they want to take advantage of our 
 "enterprise backup
  solution". A great coup for my job security and TSM, as 
 they now see the
  benefit of TSM.
  
  The problem with these hosts is that they generate 
 many, many
  small
  files every day. Without going into any detail, each file 
 is a test on a
  part that they may need to look at if the part ever fails. 
 Each part gets
  many tests done to it through the manufacturing process, so 
 many files are
  generated for each part.
  
  How many files? Well, I have one Solaris-based host 
 that generates
  500,000 new files a day in a deeply nested directory 
 structure (about 10
  levels deep with only about 5 files per directory). Before 
 I am asked,
  "no,
  they are not able to change the directory of file structure 
 on the host.
  It
  runs proprietary applications that can't be altered". They 
 are currently
  keeping these files on the host for about 30 days and then 
 deleting them.
  
  I have no problem moving the files to TSM on a 
 nightly basis, we
  have a nice big network pipe and the files are small. The 
 problem is with
  the TSM database growth, and the number of files per 
 filesystem (stored in
  TSM). Unfortunately, the directories are not shown when you 
 do a 'q occ'
  on
  a node, so there is actually a "hidden" number of database 
 entries that
  are
  taking up space in my TSM database that are not readily 
 apparent when
  looking at the output of "q node".
  
  One of my TSM databases is growing by about 1.5 GB 
 a week, with no
  end in sight. We currently are keeping those files for 180 
 days, but they
  are now requesting that them be kept for 5 years (in case a 
 part gets
  returned by a customer).
  
  This one nightmare host now has over 20 million 
 files (and an
  unknown number of directories) across 10 filesystems. We 
 have found from
  experience, that any more than about 500,000 files in any 
 filesystem means
  a
  full filesystem restore would take many hours. Just to restore the
  directory
  structure seems to take a few hours at least. I have told 
 the admins of
  this
  host that it is very much unrecoverable in it's current 
 state, and would
  take on the order of days to restore the whole box.
  
  They are disappointed that an "enterprise backup 
 solution" can't
  handle this number of files any better. They are willing to 
 work with us
  to
  get a solution that will both cover the daily "disaster 
 recovery" backup
  need for the host and the long term retentions they desire.
  
  I am pushing back and telling them that their 
 desire to keep it
  all
  for 5 years is unreasonable, but thought I'd bounce it off 
 you folks to
  see
  if there was some TSM so

Re: TSM-HSM experiences?

2001-02-22 Thread bbullock

Our experience with it was good and bad. Our platform was an AIX
ADSM server and AIX HSM clients. When it worked, it worked pretty well, but
when it broke, it broke bad. In all fairness, we were running in a
non-recommended environment: on hosts with many millions of files in an HA
configuration.

We had some problems with reconciliation on the hosts once there was
an HA roll because the HSM logging/tracking files (Sorry, I don't remember
the file names, but they were in the ".SpaceMan" directory, I do remember
that), resided on a local filesystem that did not roll with the HSM
filesystems. Probably a misconfiguration from the beginning on our end.

Eventually, it came down to the fact that disks had become so cheap
that, instead of the overhead and administration of HSM, we would just keep
all the data local on disk. On the bright side, when we eventually
restored all the HSM files to another host, only 60 of the 17,000,000 files
came up missing. Sure, it was 60 files, but as a percentage of total files,
we thought it was pretty good.

I think the product has a place on clients with a smaller number of
files that are not HA'ed. Just my 2 cents.

Ben Bullock
UNIX Systems Manager


 -Original Message-
 From: Per Ekman [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, February 22, 2001 3:44 AM
 To: [EMAIL PROTECTED]
 Subject: TSM-HSM experiences?


 Hello,

 We've been evaluating and trying to implement a HSM service with the
 TSM-HSM client at our site. We use the 4.1 HSM client for AIX and run
 the 3.7.2 TSM server (also on AIX). Our experience so far is not very
 favourable. Compared to our current DMF HSM system TSM is an
 administrative nightmare, the HSM functionality in particular feels
 like something someone hacked together for fun and was suddenly turned
 into a product.

 So I'm wondering what experiences others have with TSM-HSM, good or
 bad.  I'm particularily interested in how much data is stored, what
 the usage patterns look like, how things are set up (backups for
 instance) and how much administration is needed.

 I know there are issues with dsmreconcile for filesystems with many
 files. Apart from that there are some bugs (filesystems filling up to
 the extent that migration is prevented, dsm* commands not following
 symlinks) some worrying omissions (serious contortions appear to be
 necessary to get good meta-data backups) and a generally strange and
 overly-complex design are my main concerns at this point.

 /Per




Re: Guidance

2001-02-21 Thread bbullock

Whew, that's a lot of questions. I'll chime in on how we handle
them.

Ben Bullock
UNIX Systems Manager

 -Original Message-
 From: Krishna Shastry [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 21, 2001 8:41 AM
 To: [EMAIL PROTECTED]
 Subject: Guidance


 Hi Gurus,

 Our datacenter (not yet in production) has ADSM Server
 (3.1.2.0) and Client (3.1.20.7) versions installed on
 an IBM RS6000 SP. None of our staff are experienced in

 handling ADSM.

 I have a set of general queries, the answers to which
 might give further insight in planning our
 infrastructure and scheduling our activities. Your
 expertise is solicited.

 How are NFS mount points backed up with ADSM?
We only back up the data on the host that is serving out the NFS
filesystems.
We typically have the line "domain all-local" in the dsm.opt on all
clients.
No need to back up the same data 100 times.
 Does ADSM provide data encryption capability ?
I have some marketing blurbs that say version 4.1 of the client,
under "Mobile client support",
has "data encryption", but I have no idea if it is out yet.
 How many client backups can run concurrently ?
That is a setting on the TSM server that you can set as high as you
want, depending on the limitations
of your server. See the MAXSESSIONS server option.
 Can ADSM back up open files ?
Yes, there are some settings on the copygroup that determine how TSM
will handle open files.
Look at the help for "update copy" and see what the 4 serialization
settings can be set to.
 Does ADSM support data compression ? If so, What is
 the compression ratio ?
TSM can do data compression, although I don't know what the ratio
is. At our site we use
the compression on the 3590E tape drives and get about a 3 to 1
compression rate.
 Can more than one tape be written to at the same time
 ? ie. Can full dumps and
 incremental be writing to two different tapes on
 different drive, simultaneously ?
Yes, each session into the TSM server can write to a tape.
If 2 sessions are running from the same client and they both need a
tape, they can both get one.
With the "resourceutilization" setting on the client, you can have
the client
start multiple sessions to the TSM server and therefore use multiple
tapes simultaniously.
You will also want to look when you register a node to the TSM
server, because there is a
"maximum mount points alowed" setting that can limit your client to
one tape at a time.
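
A sketch of both knobs (the node name and values are just examples):
   update node NODENAME maxnummp=2       (on the server: allow 2 mount points)
   resourceutilization 4                 (in the client option file)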
 How are SYMBOLIC LINKs handled?
By default the link is backed up but not the actual file. You can
change this behaviour
with the "archisymaslink" setting on the clients.
 How does ADSM handle media I/O errors during backup
 processing ?
Depends on how bad the error is. If it is bad enough the session may
be aborted. We find that the
client is able to continue on with all but the worst tape or tape
drive errors.
 How is security handled? Can users just recover their
 own files ?

 Can labelling be done at the same time that backups
 are running ?
Yes, as long as you have 1 tape drive available and unmounted.
 Is there a procedure for making duplicate backups for
 offsite storage ?
Yes, the " backup stgpool..." command makes the copies and then you
can get them
out of the library using DRM or scripting it yourself.
 Does UNIX Servers require Root or can ADSM be
 configured to be fully functional from a non- root UID
 ?
Hmm, we always have it installed and run the schedule daemon as
root. I have not tried it any other way. Anyone?

 Any help and further insight would be highly
 appreciated.



 Rgds

 Krishna





Re: Performance Large Files vs. Small Files

2001-02-21 Thread bbullock

That is also an option that we have not considered. It actually
sounds like a good one. I'll bring it up to management.
The only problem I see in this is the $$$ needed to purchase a TSM
server license for every manufacturing host we try to back up.
I don't know about your shops, but with the newest "Point based"
pricing structure that Tivoli implemented back in November (I believe it was
about then), they are now wanting to charge you more $$$ to run the TSM
server on a multi-CPU unix host than on a single-host NT box. In our shop
where we run TSM on beefy S80s, that means a price change that is
exponentially larger than what we have paid in the past for the same
functionality.


Ben Bullock
UNIX Systems Manager


 -Original Message-
 From: Suad Musovich [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 21, 2001 4:37 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Performance Large Files vs. Small Files


 On Tue, Feb 20, 2001 at 03:21:34PM -0700, bbullock wrote:
 ...
   How many files? Well, I have one Solaris-based host that generates
  500,000 new files a day in a deeply nested directory structure (about 10
  levels deep with only about 5 files per directory). Before I am asked, "no,
  they are not able to change the directory or file structure on the host. It
  runs proprietary applications that can't be altered". They are currently
  keeping these files on the host for about 30 days and then deleting them.
 
   I have no problem moving the files to TSM on a nightly basis; we
  have a nice big network pipe and the files are small. The problem is with
  the TSM database growth, and the number of files per filesystem (stored in
  TSM). Unfortunately, the directories are not shown when you do a 'q occ' on
  a node, so there is actually a "hidden" number of database entries that are
  taking up space in my TSM database that are not readily apparent when
  looking at the output of "q node".

 Why not put a TSM server on the Solaris box and back it up to
 one of the other servers as virtual volumes?
 It would redistribute the database to the Solaris host, while the data
 is kept as large objects on the tape-attached TSM server.

 I also remember reading about grouping files together as a
 single object. I can't
 remember if it did selective groups of files or just whole
 filesystems.

 Cheers, Suad
 --
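For anyone curious, the server-to-server setup Suad describes would look roughly like this on the source (Solaris) server; the server name, password, addresses, and pool names are all placeholders:

    define server TAPESRV serverpassword=secret hladdress=tapesrv.example.com lladdress=1500
    define devclass remoteclass devtype=server servername=TAPESRV
    define stgpool remotepool remoteclass maxscratch=50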




Re: Performance Large Files vs. Small Files

2001-02-21 Thread bbullock

I'll take a look into that option again to see if it will work for
me.

Thanks,
Ben Bullock
UNIX Systems Manager

 -Original Message-
 From: Petr Prerost [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, February 21, 2001 2:43 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Performance Large Files vs. Small Files
 
 
 Hello ,
 did you check -fromdate and -fromtime ( and -totime and 
 -todate ) restore
 parameters ?
 
 Regards
   Petr
 
 - Original Message -
 From: "bbullock" [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: February 21, 2001 1:16
 Subject: Re: Performance Large Files vs. Small Files
 
 
  Point well taken Steve. Your classification of the nature of the
  data is basically correct except for a twist. On the day the data is
  written, it is extracted by other programs that analyze the data to spot
  flaws and trends in the manufacturing process. Depending on what it finds,
  they may then need to delve deeper into the data to analyze and fix
  production flaws. So their argument is that without that data online, they
  have no idea if the chips we manufactured a couple of hours ago are good
  or not.
 
  True, after a couple of days the data is infrequently accessed,
  and after about a week the data is rarely accessed; that's why they
  delete it after 30 days. But restoring just the files that were newly
  backed up the night before is not possible... is it? I don't think a
  point-in-time restore will do that...
 
  I like the idea of renegotiating the SLA (service level agreement)
  with the customer so that their expectations are set and my butt is
  covered. Thanks for the advice.
 
  Thanks,
  Ben Bullock
  UNIX Systems Manager
 
 
   -Original Message-
   From: Steve Harris [mailto:[EMAIL PROTECTED]]
   Sent: Tuesday, February 20, 2001 4:43 PM
   To: [EMAIL PROTECTED]
   Subject: Re: Performance Large Files vs. Small Files
  
  
   Ben
    bbullock [EMAIL PROTECTED] 21/02/2001 8:21:34 
   Big Snip
   This one nightmare host now has over 20 million files (and an
   unknown number of directories) across 10 filesystems. We have found from
   experience that any more than about 500,000 files in any filesystem means
   a full filesystem restore would take many hours. Just to restore the
   directory structure seems to take a few hours at least. I have told the
   admins of this host that it is very much unrecoverable in its current
   state, and would take on the order of days to restore the whole box.
  
   They are disappointed that an "enterprise backup solution" can't
   handle this number of files any better. They are willing to work with us
   to get a solution that will both cover the daily "disaster recovery"
   backup need for the host and the long term retentions they desire.
 remainder snipped
  
   I would debate whether the host is "unrecoverable".  It seems
   that this data is write once/read seldom in nature.  In that
   case, in the event of a disaster the priorities are
   1. get the server back to the state where it can write more files
   2. get back any individual required file in reasonable time
   TSM can provide both of those objectives.  If a full restore is
   required it *can* be spread over days because most of the
   data will never be needed.  You will of course need to
   negotiate with your users exactly what is urgent and needs to
   be restored immediately and maybe have some canned macro to
   do this in the heat of an emergency.
  
   Regards
  
  
   Steve Harris
   AIX and ADSM Admin
   Queensland Health, Brisbane Australia
  
 



Re: Performance Large Files vs. Small Files

2001-02-20 Thread bbullock

Jeff,
You hit the nail on the head of what is the biggest problem I face
with TSM today. Excuse me for being long winded, but let me explain the boat
I'm in, and how it relates to many small files.

We have been using TSM for about 5 years at our company and have
finally got everyone on our bandwagon and away from the variety of backup
solutions and media we had in the past. We now have 8 TSM servers running on
AIX hosts (S80s) attached to 4 libraries with a total of 44 3590E tape
drives. A nice beefy environment.

The problem that keeps me awake at night is that we now have
manufacturing machines wanting to use TSM for their backups. In the past
they have used small DLT libraries locally attached to the host, but that's
labor intensive and they want to take advantage of our "enterprise backup
solution". A great coup for my job security and TSM, as they now see the
benefit of TSM.

The problem with these hosts is that they generate many, many small
files every day. Without going into any detail, each file is a test on a
part that they may need to look at if the part ever fails. Each part gets
many tests done to it through the manufacturing process, so many files are
generated for each part.

How many files? Well, I have one Solaris-based host that generates
500,000 new files a day in a deeply nested directory structure (about 10
levels deep with only about 5 files per directory). Before I am asked, "no,
they are not able to change the directory or file structure on the host. It
runs proprietary applications that can't be altered". They are currently
keeping these files on the host for about 30 days and then deleting them.

I have no problem moving the files to TSM on a nightly basis; we
have a nice big network pipe and the files are small. The problem is with
the TSM database growth, and the number of files per filesystem (stored in
TSM). Unfortunately, the directories are not shown when you do a 'q occ' on
a node, so there is actually a "hidden" number of database entries that are
taking up space in my TSM database that are not readily apparent when
looking at the output of "q node".

One of my TSM databases is growing by about 1.5 GB a week, with no
end in sight. We currently are keeping those files for 180 days, but they
are now requesting that they be kept for 5 years (in case a part gets
returned by a customer).

This one nightmare host now has over 20 million files (and an
unknown number of directories) across 10 filesystems. We have found from
experience that any more than about 500,000 files in any filesystem means a
full filesystem restore would take many hours. Just to restore the directory
structure seems to take a few hours at least. I have told the admins of this
host that it is very much unrecoverable in its current state, and would
take on the order of days to restore the whole box.

They are disappointed that an "enterprise backup solution" can't
handle this number of files any better. They are willing to work with us to
get a solution that will both cover the daily "disaster recovery" backup
need for the host and the long term retentions they desire.

I am pushing back and telling them that their desire to keep it all
for 5 years is unreasonable, but thought I'd bounce it off you folks to see
if there was some TSM solution that I was overlooking.

There are 2 ways to control database growth: reduce the number of
database entries, or reduce the retention time.

Here is what I've looked into so far.

1. Cut the incremental backup retention down to 30 days and then generate a
backup set every 30 days for long term retention (see the command sketch
after this list).
On paper it looks good: you don't have to move the data over the net
again, and there is only 1 database entry. Well, I'm not sure how many of you
have tried this on a filesystem with many files, but I tried it twice on a
filesystem with only 20,000 files and it took over 1 hour to complete. Doing
the math, it would take over 100 hours to do each of these 2 million-file
filesystems. Doesn't seem really feasible.

2. Cut the incremental backup retention down to 30 days and run an archive
every 30 days to the 5 year management class.
This would cut down the number of files we are tracking with the
incrementals, so a full filesystem restore from the latest backup would have
less garbage to sort through and hopefully run quicker. Yet with the
archives, we would have to move the 600 GB over the net every 30 days and
would still end up tracking the millions of individual files for the next 5
years.

3. Use TSM as a disaster recovery solution with a short 30 day retention,
and use some other solution (like a local CD/DVD burner) to get the 5 year
retention they desire. Still looking into this one, but they don't like it
because it once again becomes a manual process to swap out CDs.

4. Use TSM as a disaster recovery solution (with a short 30 day retention)
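For concreteness, options 1 and 2 above map to commands roughly like these (node, filespace, management class, and device class names are all invented). Option 1, on the server, a backup set kept 5 years:

    generate backupset BIGNODE MONTHLYSET /fs1 devclass=3590class retention=1825

Option 2, on the client, an archive bound to a 5-year management class:

    dsmc archive "/fs1/*" -subdir=yes -archmc=KEEP5YR -description="monthly sweep"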

Re: Performance Large Files vs. Small Files

2001-02-20 Thread bbullock

Point well taken Steve. Your classification of the nature of the
data is basically correct except for a twist. On the day the data is
written, it is extracted by other programs that analyze the data to spot
flaws and trends in the manufacturing process. Depending on what it finds,
they may then need to delve deeper into the data to analyze and fix
production flaws. So their argument is that without that data online, they
have no idea if the chips we manufactured a couple of hours ago are good or
not.

True, after a couple of days the data is infrequently accessed,
and after about a week the data is rarely accessed; that's why they delete
it after 30 days. But restoring just the files that were newly backed up the
night before is not possible... is it? I don't think a point-in-time restore
will do that...

I like the idea of renegotiating the SLA (service level agreement)
with the customer so that their expectations are set and my butt is covered.
Thanks for the advice.

Thanks,
Ben Bullock
UNIX Systems Manager


 -Original Message-
 From: Steve Harris [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, February 20, 2001 4:43 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Performance Large Files vs. Small Files


 Ben
  bbullock [EMAIL PROTECTED] 21/02/2001 8:21:34 
 Big Snip
  This one nightmare host now has over 20 million files (and an
  unknown number of directories) across 10 filesystems. We have found from
  experience that any more than about 500,000 files in any filesystem means
  a full filesystem restore would take many hours. Just to restore the
  directory structure seems to take a few hours at least. I have told the
  admins of this host that it is very much unrecoverable in its current
  state, and would take on the order of days to restore the whole box.
 
  They are disappointed that an "enterprise backup solution" can't
  handle this number of files any better. They are willing to work with us
  to get a solution that will both cover the daily "disaster recovery"
  backup need for the host and the long term retentions they desire.
   remainder snipped
 
  I would debate whether the host is "unrecoverable".  It seems
  that this data is write once/read seldom in nature.  In that
  case, in the event of a disaster the priorities are
  1. get the server back to the state where it can write more files
  2. get back any individual required file in reasonable time
  TSM can provide both of those objectives.  If a full restore is
  required it *can* be spread over days because most of the
  data will never be needed.  You will of course need to
  negotiate with your users exactly what is urgent and needs to
  be restored immediately and maybe have some canned macro to
  do this in the heat of an emergency.

 Regards


 Steve Harris
 AIX and ADSM Admin
 Queensland Health, Brisbane Australia




Re: upgrade

2000-12-14 Thread bbullock

First, our environment: various models of RS/6000s running AIX
4.3.3, with IBM 3494 libraries and 3590 drives.

We just did it on our 8 adsm servers. The process was pretty much as
described in the manual. Make sure your OS, Atape and atldd drivers are at
the required levels. You must install 4.1.0.0 and then apply 4.1.1.0. In our
environment the 2 quirks were that the old ADSM server software was not
uninstalled like the support folks SWEAR it should be, and the 4.1.1.0
upgrade removed all the rmt devices from the OS, so a 'cfgmgr' had to be
run to get them back and usable.
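Roughly, the AIX-side sanity checks around the upgrade were (the exact required levels are in the README, so verify there):

    oslevel                              # confirm the AIX level
    lslpp -l | egrep -i "atape|atldd"    # check the driver fileset levels
    cfgmgr                               # rebuild the rmt devices the upgrade removed
    lsdev -Cc tape                       # confirm the drives came back Available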

We have had them running on our hosts for about 2 weeks now with no
noticeable problems (knock on wood).

Ben Bullock
Micron Technology

 -Original Message-
 From: victor isart [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, December 14, 2000 12:37 PM
 To: [EMAIL PROTECTED]
 Subject: upgrade


 Does anybody know of any problems of upgrading
 ADSM 3.1 to TSM 4.1 directly?

 Thanks




Re: Moving from single length to extended length 3590 cartridge t apes.

2000-08-29 Thread bbullock

Whoa, whoa, whoa, I think you are talking about 2 different things here.

I believe the question being asked is "Can the single length 3590
tapes (denoted by a 'J' on them) co-exist in a library with the new extended
length tapes (denoted by a 'K' on them)?" I believe that the answer is yes,
as long as your tape drives are the 3590E drives. A 3590E tape drive will
know the length of the tape when it mounts it (because of the little colored
tabs below the label) and write it correctly. The 3590E writes out 256
tracks regardless of the tape length.

I believe that the question you have answered is, "How do I go about
upgrading my 3590B or 3590Ultra drives to the 3590E drives and use the same
tapes?" In this case, there is a difference in the number of tracks it is
writing and the procedure that is described below is valid.

We have upgraded to the 3590E drives but have yet to put in any of
the extended length tapes to test this theory, so I'm not 100% sure of this,
but that is the way IBM has explained it to us.

Thanks,
Ben
Micron Technology

 -Original Message-
 From: Ilja G. Coolen [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, August 29, 2000 1:22 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Moving from single length to extended length
 3590 cartridge
 tapes.


  Yes you can, but you need to upgrade to at least 3.1.2.50. We needed
  to do so; lower levels will not be able to use the extended length feature.
  But when you use both tape lengths at the same time, you have to do the
  following first, or else you will end up having many read/write errors.
  This is because of the difference in the number of tracks.
 
  1. update all single length tapes to access=readonly
  2. move all data off all single length tapes to scratch tapes.
  3. update all tapes back to access=readwrite
 
  The tape drives will not be able to write 256 tracks on a 128-track tape.
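In admin-command terms, Ilja's three steps are roughly the following (the storage pool name is invented; repeat the move for each single-length volume):

    update volume * access=readonly wherestgpool=TAPEPOOL
    move data VOL001
    update volume * access=readwrite wherestgpool=TAPEPOOL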


  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of Irene
  Braun
  Sent: Tuesday, August 29, 2000 20:00
  To: [EMAIL PROTECTED]
  Subject: Moving from single length to extended length 3590 cartridge
  tapes.


 We are currently using single length 3590 cartridge tapes and
 would like
 to move to using extended length tapes.  We are running ADSM
 3.1.2.40 on
 AIX.
 Can the two different tape length types co-exist in ADSM and
 if so how can
 this be accomplished?  Does it entail defining a second
 virtual library with
 a separate device class etc.  Would it be simple to manage?  Is there
 any
 online documentation discussing this procedure?
 Thanks,  Irene
 EMAIL:  [EMAIL PROTECTED]




Re: multiple adsm servers

2000-07-27 Thread bbullock

You will have to enter the licenses on the new server so it can
build its own license file. Just use the "register lic file=/usr/lpp/"
command. (Of course, after you have made sure you have paid Tivoli the $$$
required for 2 ADSM server licenses, etc... :-))
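To see what the server thinks it is licensed for, and to force a compliance recheck after registering:

    query license
    audit licenses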

Ben

 -Original Message-
 From: Davidson, Becky [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, July 27, 2000 12:56 PM
 To: [EMAIL PROTECTED]
 Subject: Re: multiple adsm servers


 Thank you all.  I have the server running, but I am getting
 "Server is not in compliance with license terms" messages.  Any suggestion
 on how to resolve that?  I ended up using port 1708.  Does anyone have
 three adsm servers running
 on the same machine?  If so what ports did you use?  That is
 what we will
 have eventually.
 Thanks in advance and now!

 -Original Message-
 From: David M. Hendrix [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, July 27, 2000 9:23 AM
 To: [EMAIL PROTECTED]
 Subject: Re: multiple adsm servers


 Becky,

 Is the second server running?
 Are you invoking dsmadmc with the -se parameter set to the secondary
 server?
 It appears the port assignments are different in dsm.sys.  Are they
 different in dsmserv.opt?

 Just make sure that your environment is set for the second
 server...check
 every small item.

 David
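As a sketch, the second instance needs its own port on both ends (port number and server name invented). In the second server's dsmserv.opt:

    tcpport   1501

In dsm.sys on the admin client, a matching stanza:

    servername         server2
      commmethod       tcpip
      tcpserveraddress localhost
      tcpport          1501

then connect explicitly with 'dsmadmc -se=server2'.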





 "Davidson, Becky" [EMAIL PROTECTED]@VM.MARIST.EDU on
 07/26/2000 03:18:47
 PM

 Please respond to "ADSM: Dist Stor Manager" [EMAIL PROTECTED]

 Sent by:  "ADSM: Dist Stor Manager" [EMAIL PROTECTED]


 To:   [EMAIL PROTECTED]
 cc:

 Subject:  multiple adsm servers


 Has anyone ever set up multiple adsm servers on one machine?
 I am running adsm 3.1.2.40. I have managed to get one server running and
 operational, and I can connect to it, but I can't seem to connect to the
 second one. I can get the second one up and running, and it has its own
 database, but I can't connect to it. I have tried setting the tcpport to
 1501 and then, in dsm.sys, setting up the server stanza to connect to port
 1501 rather than through loopback, but I just keep connecting via dsmadmc
 to the first server and not the new one.

 Becky Davidson
 Data Manager/AIX Administrator
 EDS/Earthgrains
 voice: 314-259-7589
 fax: 314-877-8589
 email: [EMAIL PROTECTED]




Re: All Time Record for Amount of data on one 3590 K Cartridge

-- Thread --

Re: All Time Record for Amount of data on one 3590 K Cartridge, bbullock
All Time Record for Amount of data on one 3590 K Cartridge, Seay, Paul
Re: All Time Record for Amount of data on one 3590 K Cartridge, Paul van Dongen