Look into performing an image backup... then it will just be a single TSM
db entry
BUT you only have an all-or-nothing restore possibility; you won't be able
to just restore a single file out of the file system.
Dwight E. Cook
Software Application Engineer III
Science Applications International
from a client point of view, look at resourceutilization (1-10)
Things like a base dsmc incr will fire off a thread for each
~mount point~ to be processed...
you still only see one task on the client but will see multiple sessions on
the server.
I believe the max sessions will be 8 if you
I use
select node_name,platform_name,client_os_level,client_version,client_release,client_level,client_sublevel from adsm.nodes
and you can add in a:
where client_version<>4 AND client_release<>2 AND client_level<>2
if you want to see nodes not at a specific vrlm...
Dwight E. Cook
Software
What processor OS is the TSM server ???
Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave. Suite 220
Tulsa, Oklahoma 74103-4606
Office (918) 732-7109
-Original Message-
From: Halvorsen Geirr Gulbrand [mailto:[EMAIL
No you don't !
Many ways around this...
For each drive that isn't currently active (and one at a time) simply either
make it unavailable at the library manager OR delete it from TSM... then
perform the firmware upgrade to the drive, then bring it back (either make
it available again at the
OK, all the new ~poop~ is multi-threaded.
Anyone know the scoop on LWP's (light weight processes), specifically under
Solaris ?
We noticed a slow down in some environments upon upgrading to latest TSM
4.2.2.1 tdp/r3 3.2.0.11
now that I'm bald from pulling out all my hair we notice that on a
remember it might not be that at all...
do you have any other tape activity by that box on the tsm server ???
look at your maxnummp for your client... if you have a maxnummp of 1 and
try to initiate two restores that require different tapes (yadayadayada)
your second restore request will fail.
Nope !
Migration uses drives based on the smallest number of :
A) max number of migration processes set for stgpool to be migrated
B) number of tape drives in the devclass of the stgpool being
migrated to
C) unique node's data in the stgpool to be migrated
OK, so if you
Geoff,
don't go down the patches subdirectory...
you want to go down maintenance, server, v5r1, AIX, LATEST
where you will find...
-rw-rw-r-- 1 18125700 20010491 May 31 15:03
TSMDEVAIX5110.README.DEV
-rw-rw-r-- 1 18125700 200 18809353 May 31 15:03
TSMSRVAIX05_01_01.tar.gz
NOPE ! (well, sort of, maybe, depends on which one you want)
If a file changes daily, you only keep 7 versions, so you will have at
most 7 copies...
on day 30 you only have days 23, 24, 25, 26, 27, 28, 29 to restore from...
days 1-22 have expired due to versions data exists AND say the person
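The day-30 arithmetic above can be sketched in shell (the day numbers and the verexists=7 setting are the example's assumptions, not anything from a real server):

```shell
#!/bin/sh
# Sketch: with daily backups and only 7 versions kept (verexists=7),
# compute which backup days are still restorable on day 30.
TODAY=30      # assumption: we are on day 30
VERSIONS=7    # "versions data exists" setting
FIRST=$((TODAY - VERSIONS))   # oldest surviving version: day 23
LAST=$((TODAY - 1))           # newest version: yesterday, day 29
echo "restorable days: $(seq -s ' ' $FIRST $LAST)"
# prints: restorable days: 23 24 25 26 27 28 29
```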
Huh
dsmserv restore db dev=3590devc vol=ABC123 commit=yes
has always worked just dandy for me with a 3494 & 3590's
I'm up to 4.2.2.1 and it has worked OK since ADSM 2.x
Dwight
-Original Message-
From: Kent Monthei [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 09,
We've run into that... even a process check for dsmserv oftentimes isn't
enough, which is why I run (on an hourly basis) a little script like
#!/bin/ksh
ONCALLPERSON=$(head -1 $HOME/bin/oncallperson)
for ADSMSERV in $(cat $HOME/bin/tsm_server_list)
do
dsmadmc -id=blah -pass=blah
Problem there is the tape media on which all the client data is stored.
Is the same media available both on your ~mainframe~ and on your Sun box ?
probably not so you're out of luck in general
(and I like IBM/AIX boxes more than Sun/Solaris, just me...)
I took two environments off MVS a long time
Uhmmm can you do this in a server script
I thought only TSM admin session commands were allowed ???
I might be very wrong though...
Dwight
-Original Message-
From: Joni Moyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 10, 2002 10:14 AM
To: [EMAIL PROTECTED]
Subject:
You have to go down a ways in the help to find this but here it is...
Physical Space Occupied (MB)
The amount of physical space occupied by the file space. Physical space
includes empty space within aggregate files, from which files may have
been deleted or expired.
Logical Space
Good question, never actually thought about it...
I would think it would be the sum of the differences between mount & dismount
times for each drive...
OH THANKS. now I won't be able to sleep until I code some select
statement to do this :-(
if I figure it out, I'll pass it along
Dwight
This is a real catch-22
BIG virtual volumes means less tsm data base space tied up tracking tons of
little virtual volumes (and files on the remote server) BUT big virtual
volumes also means lots of data transferred over the network during
reclamation (sigh)
Also remember, a virtual volume is
Running a Domino incremental we see TONS of long waits
Waiting for TSM
server
???
when a selective (full) runs we see very few and those are very very
short...
I saw Del's reply but this is way beyond any sort of normal waiting...
Generally when your specification is a directory, you will want to end it
with a / so
dsmc archive -subdir=yes /oracle/dev/ /oracle/tst/
would get you there...
Dwight
-Original Message-
From: Gordillo, Silvio [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 04, 2002
Sure...
what you'll find is that for each ~operation~ in a server to server
environment, once that operation finishes, it closes out a virtual volume.
So say you have an Est Cap of 10 GB but your backup stgpool to the remote
server /virtual volume only pushes 2 GB. Then that virtual volume will
Tapes generally end up this way due to some error, such as not
being able to read the volume's internal label when mounted to fill a
scratch request.
There is some TSM message that is along the lines of
... making volume private to prevent further access...
Now this doesn't mean
Oh, could be a lot of things...
remember one server's virtual volume is just a file on the other server...
maybe that file is on a volume that is itself being reclaimed on the other
server...
or maybe on a volume that is listed as unavailable (etc...)
check the remote server for any sort of
-Original Message-
From: Cook, Dwight E
Sent: Tuesday, August 27, 2002 6:20 AM
To: 'ADSM: Dist Stor Manager'
Subject: RE: 3494 on MVS vs AIX
On a 3494 attached to AIX what you can do is...
mtlib -l/dev/lmcp# -qL
Now, if the I/O station has any
Piece of cake ! but depends...
1) need big client box
2) need data such that multiple concurrent sessions may run
3) go to a diskpool...
We can back up a 3.4 TB data base in 12 hours...
E10K with 64 processors, 25 get dedicated to a TSM TDP/R3 backup (25
concurrent sessions), run client
OUCH ! that kind'a hurts...
...I strongly recommend you to consider SAN, that is the professional way.
I like to think of myself as a ~professional~ but 6 years ago when this was
all put together here, ~SAN's~ weren't really available PLUS we wanted our
backups done to servers across town at a
try a combination of both
select * from adsm.backups where ll_name='yourfile.name'
and
select * from adsm.archives where ll_name='yourfile.name'
if it finds it, you'll have the node name, drive/filesystem, and directory
path to it...
this will take a while !
Dwight E. Cook
Tell the legal department you want a complete duplicate of you TSM
environment !!!
People make so many unrealistic requests with absolutely no thought as to
what they are asking for.
The only way to do that is to make a handful of tsm db backups (to protect
against loss due to media
BRMS Backup Recovery and Media Services (for OS/400)
Looks like a cut down TSM... actually states TSM server may be used for data
storage by BRMS
Any TSM'ers have any AS/400's and use BRMS ? ? ?
I think my backup services management is about to expand and was looking for
some info on...
BRMS,
Are you referring to the file grew with compression and unable to allocate
additional space problem when going to cached diskpools ? ? ?
If so, run with client settings compression yes, compressalways no; if
the file grows with compression it will be sent uncompressed. (famous last
words...)
Had
I agree with Don...
Just connect in the new library and stick in all the tapes.
Data/tapes are bound to a device class, then the device class is bound to a
library
What I'd do is get a list of all your tape volsers in a file and assemble
checkout commands
checkout libvol libname volname
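A loop like the following can assemble those checkout commands from the volser file (the library name, file path, and the checklabel/remove flags here are illustrative placeholders):

```shell
#!/bin/ksh
# Build one "checkout libvol" command per tape volser listed in a
# file (one volser per line); library name and file are examples.
LIB=ATLLIB1
VOLFILE=/tmp/volsers.txt
printf '%s\n' VOL001 VOL002 VOL003 > "$VOLFILE"   # sample volsers
while read VOL
do
    echo "checkout libvol $LIB $VOL checklabel=no remove=yes"
done < "$VOLFILE"
```

Feed the generated commands to dsmadmc as a macro, or paste them into an admin session.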
FYI...
if anyone else was waiting on it.
Dwight
Actually the process is to just mark the primary volume as destroyed, then
perform a restore volume
TSM uses the contents of the destroyed volume in order to rebuild it.
Now, with the old tape ~deleted~ from TSM, it doesn't know what was on it in
order to rebuild it !
Sorry...
Here is some info
Geoff,
What you will find is that all TDP/R3 data is written to a virtual
file space name by the name of /tdp and tdpmux with a type of API:XINTV3.
So even with collocation file, there is still only one filespace name to
spread things out across.
What you need to do is specify a different
Whew, being mighty bold but you ~might~ be able to get it to work...
I'd do everything with the library NOT available to the system, this MIGHT
allow you to clean out references to volumes not used on each system without
causing them to roll scratch.
Even if you do this you will still need to go
HA !
I didn't remember the query system command so I gave it a run on my test
environment and after a while got
**
*** --- select stgpool_name,devclass_name,count(*) as VOLUMES from volumes
group by stgpool_name,devclass_name
I played around with this a lot in the past but don't have a large use for
it currently.
To do a recovery, you get a basic environment up and running with your
definitions over to your remote server and then just do a dsmserv restore db
as normal using the device class of the remote server and
if your lmcpd is running, you won't see a /dev/lmcp# on a Sun box as you
would on an AIX box...
under sun just simply use the name that is in the /etc/ibmatl.conf
tsmatl05 149.183.246.88 tsmsrv10
so here I just use tsmatl05
zdec23@tsmsrv10/export/home/zdec23 mtlib -l tsmatl05 -qL | more
You ever been an MVS person ? Used partitioned data sets ? ? ?
OK, TSM will do a trick where it bundles a lot of little files in a big file
within ~TSM space~.
OH, look at it like a ZIP file
so if a bunch of little files are in this one big file that TSM is dealing
with, naturally some files
also check on the cleanup archdir command...
it will get rid of all those extra entries
CLEAN ARCHDIR node_name {DELETEDUPLICATES | SHOWSTATS | RESETDESCR |
DELETEDIRS} [FORMAT=S|D] [WAIT=NO|YES]
Dwight
-Original Message-
From: Lawson, Jerry W (ETSD, IT) [mailto:[EMAIL PROTECTED]]
I've already opened a problem with Tivoli but anyone know how to flush all
references to DRM ?
At one point in time, in an attempt to issue a q pr, the q was left off
and the pr initiated a prepare.
Now my server shows I'm using DRM !
In the past I just stuck on the license so it would report
need DRM anymore, but I can't figure out how to get rid of it,
either.
-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 12:36 PM
To: [EMAIL PROTECTED]
Subject: How to flush DRM references, anybody know ?
I've already opened a problem
I don't do it all the time but I've used it for a lot of things...
One was when we were moving an environment from one location to another 15
miles away.
We had two servers but didn't have two libraries (and we had big diskpools
to hold multiple day's backups)
Anyway, I backed the db up to a disk
Just noticed, if you back up your db to a disk file, at least on AIX 4.3.3
with TSM server 4.2.2.0 the reported file name and the actual file name are
different ! ! !
tsm: TSMUTL01> q volhist t=dbb
Date/Time: 06/14/2002 11:46:20
Volume Type: BACKUPFULL
Backup Series: 8
Backup
because TSM processes on a file system basis...
since / is a single file system, it will start a single process.
Do you only want / backed up ? what about /usr, /opt, /var, etc...
they are not part of /
Now if you said
incr -subdir=yes
then you would see a process for / kick off,
look at resourceutilization
you can't spread the compression task out for a single session but you can
initiate multiple sessions.
Each task will bind to a different processor (should... famous last words)
Is this just an incr task ? ? ? or are you running an archive(s) ?
Dwight
-Original
filespaces, as a whole, aren't bound to a management class...
I would check the include/exclude file and see how it is coded.
Also remember, if you change the include/exclude list, that will apply it to
all files previously backed up so the include/exclude list would be the
place to look.
an
remember, the general way the ~supported environment matrix~ works is
client down one version.release from server
client equal to server
client ahead one version.release from server
so the 4.1 client to a 5.1 server would place the whole environment so it
falls outside the support matrix
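The n-1 / n / n+1 window described above can be sketched as a small shell check (the ordered release list is illustrative and not exhaustive; the function name is my own):

```shell
#!/bin/sh
# Sketch of the support-matrix rule: a client is supported when it
# is one release back, equal, or one release ahead of the server.
LEVELS="3.7 4.1 4.2 5.1"    # illustrative ordered release list
idx() {  # print the position of a version.release in LEVELS, or -1
    i=0
    for l in $LEVELS; do
        [ "$l" = "$1" ] && { echo $i; return; }
        i=$((i + 1))
    done
    echo -1
}
supported() {  # usage: supported <client V.R> <server V.R>
    c=$(idx "$1"); s=$(idx "$2")
    d=$((c - s))
    [ $c -ge 0 ] && [ $s -ge 0 ] && [ $d -ge -1 ] && [ $d -le 1 ]
}
supported 4.2 5.1 && echo "4.2 client + 5.1 server: supported"
supported 4.1 5.1 || echo "4.1 client + 5.1 server: outside the matrix"
```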
Did you issue a backup devconfig command from an admin session with your
server ? ? ?
I would think that would put all required info into the devconfig file...
I have an admin schedule that performs that on a daily basis across all my
tsm servers.
try a rename of your devconfig and issue the
As filespaces come and go on a client, does anyone know of a solid way to
clean them up ? ? ?
Since directories are backed up even when they are excluded would the
Last Backup Start Date/Time: reflect the last time the file system was
mounted ? ? ?
here is an example of why I would like to
Impress your boss... !
use one of the following select statements to get a rough capacity of all
your tsm clients
(breakdown by node)
select node_name,sum(capacity) as "filespace capacity MB",
sum(capacity*pct_util/100) as "filespace occupied MB" from
adsm.filespaces where
I just noticed that my Device: in my q libr shows nothing for TSM Srv
4.2.2.0 ? (under AIX 4.3.3)
I even updated the library to try to put it in, in case it got lost ???
This really SUCKS !
at least it is still written to the devconfig file...
anyone else noticed this ?
I mail myself
Run multiple concurrent client sessions...
Run with TSM client compression...
Go to disk first (to help facilitate the multiple concurrent sessions seeing
how you only have one drive)...
Get another drive ! ! ! (will really help when you have to run reclamation)
I'm partial to 3590's but lots of
error message manual states this is a restore/retrieve error, not a
backup/archive error...
and from the error manual text it sounds like a Netware client limitation in
that the ID you are operating under isn't allowed to allocate a file that
large (4 GB)
I believe it was in TSM 3.7.2 when it was
make sure and upd all your tape storage pools to recl=100 so no
reclamation will run
also remember that migration, reclamation, etc... will wait until it has
finished processing its current file.
#!/bin/ksh
# purge all processes and look for still used tape mounts
dsmadmc -id=blah -pass=blah q
You really shouldn't have two schedulers listening on the same
port...(TCPCLIENTPORT)
you might try letting one default to 1501 and force the other to 1502 (say
in your TSM2 SErver entry)
Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston
Common problem !
Look into the use of braces... { }
When something gets altered like this use braces to explicitly specify the
filespace name...
so on your query try either
q backup {/inbound/site}/*
or
q backup {/inbound}/site/*
One of the two will give you what you want...
Just a general observation...
4.2 client & a 3.7 server is not listed in the general supported
environments matrix.
An environment as a whole is only supported if the clients are
1 level back
same level
1 level forward
of the tsm server to which they back up.
Granted this
OK, I've been working with TSM (ADSM) for about 6 years now and you can call
me cheap but I never (personally) thought DRM was worth the money. We do
operate in a unique environment here so I shouldn't say that DRM has no
place in the market, it is just that I was doing DRM before DRM came out
For what I call ~critical data~ such as the redo logs, we do have copy
pools...
Any long term storage data is exported to two tape sets...
general backups (like oracle .dbf files) are just a single copy, if one goes
bad, you restore the previous copy and roll it forward with redo logs.
We also do
Watch out !
A move data actually MOVES THE DATA, there is NO 2nd COPY
With a copypool, when you perform your backup stgpool it basically
performs an ~incremental~ style backup of your primary pool to your copy
pool.
Generally speaking, OS backups are incremental style backups but all
backups of
Let us also remember how our media likes or dislikes to be transported,
dropped, left in various temperature ranges, etc...
Sure our tape transporter won't stop for a cheeseburger & fries on the way
to the offsite facility, RIGHT !
and D@#$ I dropped a tape and broke a corner off !
I'll just glue
OK, now in a unix environment, I'm thinking specifically of an AIX one, you
have the ability to create a .netrc file under an id such that it will
perform specific actions upon an ftp session being initiated to a specific
node. So I build scripts that do...
#!/bin/ksh
echo machine
Have you checked to ensure you don't have any ~HUNG~ volumes ?
We recently discovered (in TSM 4.1) that if we had reclamation running
during our DB backup that TSM would have some lock problems and once all the
data was moved off a volume, it wouldn't roll scratch again ! ! !
I went out and
The way the data is spread out on the volumes, you will probably have to do
an unload and a load to reorg the data base prior to being able to
delete some of your volumes...
Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave. Suite
Do you have some spare disk space ? ? ?
You may set up a device class of a flat file and use that to unload and load
from.
I use extra space anywhere I can find it for such things... on one server I
use extra space in the log file system
DEFINE DEVCLASS DISKFILE DEVTYPE=FILE MAXCAPACITY=1048576K
check to see if they are protecting the DB LOG files via OS level
mirroring or maybe via some other form of RAID configuration...
In our environment using plain SSA disks in a JBOD configuration I mirror
(via tsm) the db log files
but in our environments using our ESS storage unit I don't...
...
Compression is controlled in the SAP TDP not the dsm.opt.
-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 16, 2002 12:30 PM
To: [EMAIL PROTECTED]
Subject: anyone notice TDP 3.2.0.8 on Solaris 2.6 with TSM 4.2.1.0...
anyone notice TDP 3.2.0.8
TDP simply uses the API interfaces to perform TSM activities...
the Simultaneous Writes to Primary and Copy Storage Pools would be a
feature of the TSM server and would be independent of the client (or how the
client is getting the data to the server)
famous last words...
so it should work just
Uh I don't believe so...
BUT once in the ATL, the library manager tracks everything (doesn't re-read
the barcodes unless told to do so)
If you have a tape whose VOLSER you don't know, here is what I'd try...
Identify a scratch tape
Check it out with remove=no
Determine the cell in which
ARCHIVE !
I tell clients, if they HAVE to have data for a SPECIFIC period of time,
archive it into a management class with the required retention period.
I try to only give them 1, 3, 6, 12 month management classes...
anything longer than that and I make them archive to a specially registered
A lot will depend on the network connectivity and the ability of the
clients...
We have a lot of TSM servers, 9 of 10 are RS6000 7071-S70's
Some have around 100 clients & run 300+ concurrent sessions but only push 20-30
GB/hr (compressed client data) due to client and network limitations
Some have
IMHO, domains are just there for your use in any sort of logical separation
you wish to perform...
We use them to differ between UNIX-SAP, UNIX-non-SAP, NT, DOMINO, LNOTES,
ADSM (other adsm servers), etc...
That has fit our needs...
If you have different service levels for the different clients,
try
select sum(total_mb) as "total MB" from adsm.auditocc
make sure and run an audit lic daily to update the auditocc table...
Dwight E. Cook
Software Application Engineer III
Science Applications International Corporation
509 S. Boston Ave. Suite 220
Tulsa, Oklahoma 74103-4606
Office
Generally seen with cached diskpools and files that grow with compression...
TSM pre-allocates based on listed size by the OS and in the case of cached
diskpools, frees that amount of cached space to take the file.
If you have compression yes and compressalways yes AND if a file grows
during
Maybe...
try doing a
mtlib -l/dev/lmcp# -qC -s'FF10'
I believe until a tape is physically removed it retains the Eject category
(FF10)
Funny thing I heard related to this.
STK held a patent on reading volsers in the I/O bay so with IBM 3494's the
only thing the scanner does when
Try
select node_name,filespace_name,cast((backup_end) as varchar(10)) as Date
from adsm.filespaces where (cast((current_timestamp-backup_end)day as
decimal(18,0))>2)
or
select node_name,filespace_name,cast((backup_end) as varchar(10)) as Date
from adsm.filespaces where
Do like I do, label them from AIX
If I have 200 new tapes and 4 or more drives, I split the volsers into 4
groups of 50 each
I then fire off 4 dsmlabel jobs from AIX
(Oh, make sure you don't have any processes needing a tape drive for
a while if you only have 4 drives...)
works perfectly,
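The splitting itself is one split(1) call; a sketch (the volser names, device paths, and the idea of feeding dsmlabel from a file are all illustrative — dsmlabel needs real drives, so the commands are only echoed here):

```shell
#!/bin/sh
# Split 200 sample volsers into 4 groups of 50 for parallel
# dsmlabel runs; dsmlabel commands are echoed, not executed.
VOLFILE=/tmp/newvols.txt
seq -f 'VOL%03g' 1 200 > "$VOLFILE"
split -l 50 "$VOLFILE" /tmp/volgrp.     # makes volgrp.aa .. volgrp.ad
for GRP in /tmp/volgrp.??
do
    echo "dsmlabel -drive=/dev/rmt0 -library=/dev/lmcp0 < $GRP &"
done
```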
Things that limit migration processes...
1) migproc for the storage pool
2) number of drives in the atl
3) number of unique node's data in the diskpool you are migrating to tape
The smallest of these is what will determine how many drives are active...
So if you have 6 drives
and you upd stg
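That "smallest of these" rule is just a three-way minimum; a sketch with made-up values (the function name and the numbers are my own):

```shell
#!/bin/sh
# Active migration processes = min(migproc on the stgpool,
# drives in the devclass, unique nodes with data in the pool).
min3() {
    m=$1
    [ "$2" -lt "$m" ] && m=$2
    [ "$3" -lt "$m" ] && m=$3
    echo "$m"
}
MIGPROC=6; DRIVES=6; NODES=2    # example values
echo "active migration processes: $(min3 $MIGPROC $DRIVES $NODES)"
# prints: active migration processes: 2
```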
What is your environment ?
We had this problem in one of our environments (I can't remember which one)
I can't find any of my info on it right now...
It was about a year or more ago...
If you could fill me in on your environment's server & client (what storage
are you using on your server both
In all our upgrades, consolidations, alterations, etc... we have some
equipment that I just can't find a use for :-(
What is potentially available for sale is
IBM 7015 containment rack
IBM R40 processor
IBM 7133-020 fully populated with 9.1 GB drives (qty 3 so roughly 436.8 GB
of dasd)
IBM
It is all standard routing as far as how the traffic goes...
You may specify an IP address for the node to be contacted at but after
that, it is all standard routing
-Original Message-
From: TSM Group [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 14, 2002 4:04 PM
To: [EMAIL PROTECTED]
On the slight chance someone has figured out a trick to get TSM to reprocess
its dsmserv.opt file and I missed hearing about it...
I've bumped up the # in my dsmserv.opt but getting a quiet moment on this
TSM server is like asking the sun to not shine for a while.
Dwight
Only if you want to put two RAID5 arrays on two different adapters to
prevent outage due to an adapter loss...
-Original Message-
From: Wholey, Joseph (TGA\MLOL) [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 06, 2002 8:00 AM
To: [EMAIL PROTECTED]
Subject: Mirroring DB and Log
In
Probably like me...
I used to work for Amoco, then British Petroleum bought Amoco and then they
didn't want to have anything to do with computing (they are an oil company)
so they outsourced it to SAIC but (for the time being) I still have a bp.com
mail id (and an saic.com mail id)
-Original
Put that NT client in a schedule all by itself... (call it domain
nt_special )
then set an administrative command schedule to run at 00:01 on the 3rd of
each month that does an update of the nt_special schedule to set its
startd=+1
should work, I've used admin schedules to alter client schedules
Slightly off topic but since we are in the recovery position,
anything to help ward off data loss to begin with is close to on topic...
This is all I know for the time being...
Dwight
The Threat
SNMP (Simple Network Management Protocol) is a set of protocols designed for
monitoring and
OK, delete volhist type=dbb tod=today will not delete all of your data
base backup volumes...
TSM will never delete your latest/only db backup copy.
If you are trying to reuse that same tape volser DB_DO you will never be
able to do it !
You have to have AT LEAST TWO volumes, one which remains
I just looked in the TSM 4.2 messages manual and can't find an ANE4052E
message listed ? ? ?
Dwight
-Original Message-
From: Rainer Holzinger [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 20, 2002 7:19 AM
To: [EMAIL PROTECTED]
Subject: msg ANE4052
Hi all,
I have to backup
Sad... code doesn't earn money sitting on the developer's desk :-(
and the only thing upper management cares about is the almighty buck
$
What ever happened to pride ?
Oh, that is what management uses to ensure someone works a 60 hour week but
is only paid for 40 hours...
Sorry, I'll
Both the adsm.backups and the adsm.archives tables have the CLASS_NAME
parameter...
You could probably do something like
select distinct node_name from adsm.backups where class_name='YOUR_BAD_NAME'
and do the same for adsm.archives...
-Original Message-
From: Gerhard Rentschler
Define (at least) 3 management classes, probably all being other than
default.
I always name mine to reflect their settings, like B7.3_R35.95_A35
where B7.3 reflects Versions data exists & deleted
where R35.95 reflects Retain extra & only days
where A### is the number of days an Archive is retained...
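A tiny helper shows the naming scheme (the function name and argument order are my own invention):

```shell
#!/bin/sh
# Encode copy-group settings into a management-class name:
# B<verexists>.<verdeleted>_R<retextra>.<retonly>_A<archret>
mcname() {  # usage: mcname verexists verdeleted retextra retonly archret
    echo "B$1.$2_R$3.$4_A$5"
}
echo "$(mcname 7 3 35 95 35)"
# prints: B7.3_R35.95_A35
```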
When you have corruption with a disk entry and you want to run an audit to
clear things to allow you to remove a volume, you will really want to do...
1) write a macro to delete all other disk volumes (other than the
hosed up one)
2) write a macro to define back all your disk
I would use archive
Now with an archive you have to be disk specific but you could do something
like this for each drive attached...
archive -subdir=yes C:\*.pst
archive -subdir=yes F:\*.pst
using the -subdir=yes ~should~ force TSM to go down the tree & find all
.pst files on the
There is a whole system of software that will let you manage an ATL from a
remote PC just as if you were at the console of the library (I can't
remember the name of it right off). A two part piece, one part goes on the
ATL and another goes on a remote PC.
We had it to manage ATL's utilized by
I won't call this comparing apples to oranges but it might be Granny Smiths
to Red Delicious...
I suggest to all our clients that they run TSM client software compression
and 90% do but some don't (and often it is because of valid reasons such as
servers that house only blah.tar.Z already
WOW !
be careful on the slow down statement...
NOT THE CASE !
Actually with a big enough client it will greatly speed things up !
We have an SAP environment with a 2.7 TB data base, we back it up in less
than 18 hours using client compression
if we didn't use client compression it would
The only way is to
1) retrieve the files and re-archive with a management class of the
desired retention
2) DO NOT RUN ANY EXPIRE INVENTORY and then I won't guarantee
anything...
3) Rename the node, set up another domain with an archive grace
period of how long you need
The main thing using tcp/ip communication method with a 3494 is that it
enables you to use more tape drives.
Each RS232 attached host means one less tape drive that may be put in the
atl.
Also if you have a network outage, you stand the chance of dropping your
connection to your atl.
We've run
Oh, forgot to mention, with a lan card, if you have libraries scattered all
over 10-buck-2 you have the ability to define all those libraries to a
single environment and monitor their status (easily) from there.
Remember, you don't need drives in an atl to be able to communicate with it.
Example
Do a q occ nodename and look for what file systems are out on your
diskpool in great quantity.
That is, if you send all data first to a diskpool and then bleed it off to
tape (daily).
That will give you an idea of what file systems are sending the most data,
currently.
Then you may perform