Sent: Thursday, June 28, 2001 4:35 AM
To: Dist Stor Manager
Cc: Cook, Dwight E; France, Don G (Pace)
Subject: complete change of hardware and server-platform ?
Dwight, we tested moving the tsm db from aix to solaris and it worked very
well, so maybe we will use this as a regular
also verify that your client is allowed that many mount points...
and that your resourceutilization is set to where you can initiate that
many concurrent client sessions.
just some thoughts...
Dwight
-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
Sent:
Ok, I know these are basically some Storedge model but...
These are the models with two (2) internal tapes and a six (6) tape
removable magazine.
If one loads up one of these devices and performs a label libvol ...
search=yes one ends up with tapes such as
Library Name Volume Name Status
first you will need to use the
dsmfmt -m -log /somepath/newlogfilename #
where # is the number of meg you want to allocate the new log file
Then...
DSMSERV EXTEND LOG (Emergency Log Extension)
Use this command to extend the size of the recovery log when you need
additional log
space to
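The dsmfmt step above can be sketched end to end; the log path and size below are assumptions for illustration, not values from the post, and the dsmfmt command itself is only built as a string, not run.

```shell
# Sketch: build the dsmfmt call described above for a new 512 MB log file.
# Path and size are hypothetical; # in the post is the size in megabytes.
size_mb=512
logfile=/tsm/log/newlog.dsm
cmd="dsmfmt -m -log $logfile $size_mb"
echo "$cmd"
```

After formatting the file you would define it to the server (or, in an emergency, fall back to DSMSERV EXTEND LOG as the post goes on to describe).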
I added additional file systems to blah, the TSM incremental died and the
process terminated. The logged error was Program memory exhausted.
Experimented with unmounting some file systems. In the range of 487 through
489 file systems mounted, the TSM client fails similarly. Below that, the
What we do, is require a data set to be backed up prior to being migrated...
Backups and migrations go to different pools, thus different tapes.
In addition to that we have a copy pool that we backup the hsm storage pool
to.
So I keep 3 tape copies...
Dwight
-Original Message-
From: Indra
Now beyond my earlier statement on DASD units to keep a copy of the files in
the remote location...
Just treat the TSM db like any other db.
Say, once a week, take a full db backup, take that to the remote location,
and do a restore there... (with commit=no)
then, say, once daily (or more
The 3590's are the better drives...
(how many LTO drives do you see in MVS environments ?)
and especially if you have the E drives !
We are still running on 5 year old B1A's
(have about 40 or 50 of them around...)
the tsm environments take in a total of 2 TB nightly (on average)
In 5 years (best
Sure, done it many times moving servers around...
TSM doesn't care where it is (as long as you stay on the same op sys)
Dwight
-Original Message-
From: Kevin Zonts [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 30, 2001 7:14 AM
To: [EMAIL PROTECTED]
Subject: Changing the Tivoli
: Wednesday, May 30, 2001 10:03 AM
To: [EMAIL PROTECTED]
Subject: Re: Changing the Tivoli server hostname
Dwight,
We are newbies with regards to Tivoli. Could you briefly explain what
must be done to use a different hostname, or point me to some documentation?
Thanks,
Kevin
Cook, Dwight
On an AIX 4.3.2 server with tsm 3.7.2.x, I noticed our session counts were
above 1.5 million !
I think that was about 4 months of up time of tsm, not to mention how long
since that box had been bounced...
That specific server has about 100 GB of client traffic daily...
just some FYI...
Dwight
try slight variations of these...
Dwight
select node_name,filespace_name,cast((backup_end) as varchar(10)) as Date
from adsm.filespaces where (cast((current_timestamp-backup_end)day as
decimal(18,0)) > 2)
select node_name,cast((lastacc_time) as varchar(10)) from adsm.nodes where
select * from adsm.backups where ll_name=something.blah
filespace_name is the filespace name under unix or the drive id under
win.
the HL_NAME is the directory path (stuff left over after the filespace_name
but not including the actual file name)
and the LL_NAME is the end file name...
hope
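The FILESPACE/HL/LL split described above can be made concrete with a small shell sketch; the path, and the assumption that /home is the filespace, are made up for illustration.

```shell
# Split a client path the way TSM stores it (sketch; /home as the
# filespace is an assumption for this example).
full=/home/user/docs/report.txt
fsname=/home                   # FILESPACE_NAME: the file system
rest=${full#"$fsname"}         # what's left: /user/docs/report.txt
ll=${rest##*/}                 # LL_NAME: the end file name
hl=${rest%"$ll"}               # HL_NAME: the directory path in between
```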
OK, I can't say what the DDS3 drive is off the top of my head but if it is
something you would have on a potential disaster recovery TSM server, sure
it will work... ALSO based on the size of the file dumped to disk, you could
always ftp it somewhere else... AND if you have yet another tsm server
Which are primary storage pools, which are copy (part seems obvious) which
are the next pool(s) for which pool(s) ? ? ?
I would just do a
upd stg dltpool collocate=yes
and let things work themselves out over time...
if you must you can do move data commands against tapes in dltpool to
DISK VOLUME ? ? ?
if so...
halt tsm
from the install directory run
dsmserv auditdb diskstorage fix=yes
it will run through and straighten out the broke stuff... (technical terms
there ;-) )
when it completes, start tsm as usual...
then either you will be able to delete the volume or the
If you are using TCP/IP communication method (vs rs232) you can go ahead and
have your new host talking to the ATL (now).
Just have to define the AIX's address (stuff) on the 3494 (and put in all
the drivers on aix)
But you don't need tape drives in the atl to talk to the atl... (if you are
going
FF10 is only while they are waiting to be transported over to the I/O
station...
(well, that is what triggers them to be ejected... if you change a category
of a tape to FF10 the atl will eject it)
Once in the I/O station they are cleared from inventory.
I don't think there is any way to count
Just make non-default management classes with the archive retentions you
need... then have the user archive to a management class that fits their
needs for long term storage... Below are the management classes I've
defined for one of my domains, each management class directs the data to a
Yes, all the info is tracked by the volume which is associated with a device
class...
You may remove the library and define it again and yes, you will have to
check in all the volumes again...
With my 3494ATL's I've even gone as far as deleting the library and defining
a new one with different
Worry not... tsm will fill the holes
Are you only using disks ?
Won't you be bleeding the diskpools off to some tapepool nightly ?
Dwight
-Original Message-
From: Jack McKinney [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 02, 2001 3:28 PM
To: [EMAIL PROTECTED]
Subject: DISK
Geoff, under aix you only use
lsitab (to list inittab)
chitab (to change the inittab)
mkitab (to make a new inittab entry)
and
rmitab (to remove an inittab entry)
every time one of these commands alters the inittab it is fully refreshed
!
later,
Dwight
Now I tried one diskpool with a next diskpool followed by a next tape pool
I had cache turned on for both diskpools...
I noticed strange things and ended up turning off cache on the very first
diskpool...
does anyone know of a limit on the number of consecutive diskpools which may
be chained together via "next" pools
Yes it was, and here it is...:-)
Dwight
-Original Message-
From: Lee, Gary D. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 01, 2001 2:44 PM
To: [EMAIL PROTECTED]
Subject: tsm and backup versions
Some time ago, someone (I think dwight cook) sent out a spreadsheet
detailing tsm and
Can't do that unless you delete the data off those tapes !
Sounds like what you want to do is (do you perform any other archives ?
if not then)
Either archive the stuff with a 1 day retention and then export all your
node's archive data (daily)
OR
archive the stuff, export all your node's
Yep, if it was an active backup copy of a file and the file still exists
on the client, a new copy will be made...
inactive backup data just goes away...
Archived data... you are out of luck :-(
Dwight
-Original Message-
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Friday,
but any sort of move data or migration from one pool to another would be
no different than the basic creation of the data with the backup stg
primarypool copypool command.
No matter what you are going to have to read from one volume and write to
another.
I'd just put in the new stuff, create a
Look closely at the q volhist t=dbb and the libvols that show DbBackup !
you'll probably find the two lists don't match ! ! ! !
I've noticed that IF anything happens during the DB backup (that is, an
abnormal termination) that the libvol is left as private with last use of
DbBackup BUT the
SURE !
We use two (or more) subnets in most of our tsm servers... always best to
have minimal hops.
Our larger clients have multiple interface cards (to multiple subnets)
also...
We run with a production network for user traffic and a backup network (for
backups)
On some of the clients we have
On Mon, 23 Apr 2001, Cook, Dwight E wrote:
Were you using just a single card ?
Have you ever played with higher availability SSA card configurations ?
Take two cards and set up
I think you are on the right track by using the grace retention period.
To ensure that will work right, I'd set up a domain with say a default
archive of a couple of days, archive some data and then roll that node over
into this new domain and wait a week and view things to ensure the data
will
Were you using just a single card ?
Have you ever played with higher availability SSA card configurations ?
Take two cards and set up something like
Card1-PortA1 - out to drawer(s)
Card1-PortA2 - Card2-PortA1
Card2-PortA2 - out to drawer(s)
then if either card fails, the other still drives the
Archived copies of files have their expiration date set upon the file being
archived based on current date plus the duration of the archive management
class... so any data archived with a 365 day retention will remain for 365
days UNLESS you allow the client to delete archives and they go do it
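The expiration-at-archive-time rule above can be shown with day arithmetic; the day numbers are arbitrary counts for illustration.

```shell
# Sketch: an archive's expiration is computed once, when the file is
# archived: insert day plus the management class retention (365 in the
# post's example). Day numbers here are made up.
retver=365
archive_day=100
expire_day=$((archive_day + retver))
# Changing the management class later does not move expire_day for
# copies already archived.
```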
Yep, send me direct "[EMAIL PROTECTED]"
what is your tsm server (hardware)
what storage does it use
Dwight
-Original Message-
From: Leijnse, Finn F SSI-ISES-21 [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 20, 2001 8:48 AM
To: [EMAIL PROTECTED]
Subject: out of sequence
hello tsm
Recently we had to move an environment from Chicago to Naperville.
The move was also an equipment upgrade so all that would be moved was the
tsm image (and then later the tapes atl)
Anyway we toyed with shutting down tsm and then ftp'ing the db log files
over to the new box but that wasn't an
I have a 3.1.0.6
zdec23@ttcstg02:/home/ftp/adsm/v3r1/sun/sol25 > ls -l
total 46976
-rw-r--r-- 1 root system 23972744 Apr 23 1999 IP21497.tar.Z
-rw-r--r-- 1 root system 20600 Apr 23 1999 README
-rw-r--r-- 1 root system 21434 Apr 23 1999 README.API
-rw-r--r-- 1 root
It all depends on the mount retention times you have set... are those mounts Idle ?
-Original Message-
From: Don Avart [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 17, 2001 12:12 PM
To: [EMAIL PROTECTED]
Subject: HSM Migration follow up
As per Mary Stephenson's recommendation I set the
try running an inventory on the library itself... if the tape(s) are
actually outside the atl, this will update the info you see with the "mtlib"
command. ALSO you need to note the status with which the tapes show up in
the output of the "mtlib" command... if they show up with a FFFA that is a
some tasks are just not allowed to run through the scheduler...
If you must, you could set up a ksh script that would perform a
dsmadmc -id=blah -pass=blah CHECKOUT LIBVOLUME LIB0 TSK071
CHECKLABEL=NO FORCE=NO REMOVE=NO
and run it from cron on the tsm server...
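A minimal shape for that wrapper, using the exact dsmadmc line from the post; the script name, the crontab schedule, and the id/password placeholders are all assumptions.

```
#!/bin/ksh
# checkout_tape.ksh (hypothetical name): perform the checkout through an
# admin session, since the scheduler won't run this task itself.
dsmadmc -id=blah -pass=blah "CHECKOUT LIBVOLUME LIB0 TSK071 CHECKLABEL=NO FORCE=NO REMOVE=NO"

# crontab entry on the TSM server (schedule and path are examples):
# 30 6 * * * /usr/local/scripts/checkout_tape.ksh
```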
-Original Message-
his server-a
(with 'client-data-copys-send-to-server-c' )
are completely destroyed - then trying to restore latest
active backups for the Clients as fast as possible on server-b using
just the copy from server-c )
Thanks in advance for any hints !
Rainer
"Cook, Dwight E" wro
OOOPPPSss
with raw logical volumes you can't run dsmfmt against them...
you just do a define dbvol define dbcopy with the /dev/rblah
I've used raw volumes in the past for db log vols, that works...
now though, I tend to go ahead and use JFS files for db log and raws
I have 9 production AIX TSM servers, here are my numbers...
as you see they vary greatly, so my favorite answer applies here "It
depends"
(that is, I don't think anything can be called "normal")
hope this helps
later,
Dwight
Available Assigned Maximum Maximum Page Total Used
: Palmadesso Jack [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 16, 2001 9:03 AM
To: 'Cook, Dwight E'
Cc: '[EMAIL PROTECTED]'
Subject: RE: Expanding ADSM DB Using Raw Logical Volumes. help please
Thank you. I have not gone ahead with anything yet. I was suspicious of
the dsmfmt command because
:
Command: generate backupset ggmaix test devc=3590devc
tsm: TSMSRV07
-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 10, 2001 11:14 PM
To: [EMAIL PROTECTED]
Subject: Re: Error in Backupset Expiration
"Cook, Dwight E" wrote:
Oh, this is a H
and you will notice, if a tape has already been mounted in manual mode,
dismounted, and then called for again, there will be an "*" next to the slot
# when it is displayed on the tape drive calling for the tape. That is to
let you know it is more than likely in the big pile you have on a table
sure, they just span volumes...
-Original Message-
From: Vibhute, Bandu [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 09, 2001 2:56 PM
To: [EMAIL PROTECTED]
Subject: Backup of Big files
Hi All,
Has anyone backed up a file bigger than tape capacity?
Thank you,
Bandu Vibhute,
Sure, to move an adsm environment across town where I was a few states away
and didn't want to fly in for a half day...
define a device class of "FILE" and use it to backup the DB.
I did a full, then FTP'ed it over to a new machine that was to become the
server... as soon as I got the full FTP'ed
If a volume is unavailable it isn't "available" for migration...
if you must... make the volume(s) reado (but not unavailable)
Dwight
-Original Message-
From: Wu, Jie [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 10, 2001 10:29 AM
To: [EMAIL PROTECTED]
Subject: duplicate stgpool
.0 for AIX...
If I figure out anything I'll pass it along...
Dwight
-Original Message-----
From: Cook, Dwight E
Sent: Tuesday, April 10, 2001 12:47 PM
To: 'ADSM: Dist Stor Manager'
Subject: RE: Error in Backupset Expiration
Oh, this is a HOOT !
you may do a "q volhist t=backupset"
but you may NOT do a "del volhist t=backupset ..."
you can see'em but you can't delete'em !
(unless you purge everything...)
Dwight
-Original Message-
From: HK Chng [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 10, 2001 12:13
OK, unless you have no (zero) clients accessing your server you shouldn't
just update the volume to read-only, it could cause the inbound client's
session to fail... use
vary offline blah
update vol blah acc=reado
vary online blah
then
move data blah
del vol blah
def vol stgpool blah
(it comes
With the incremental... on the restore, rather than a single step restore
dsmserv restore db dev=tapedevc vol=blah commit=yes(as it
would be with all full's)
you would have to do
dsmserv restore db dev=tapedevc vol=fullblah commit=no
dsmserv restore db dev=tapedevc
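One plausible shape for the full-plus-incrementals restore being described, following the commit=no pattern the post establishes; the volume names are placeholders, and every step but the last leaves the restore uncommitted.

```
dsmserv restore db dev=tapedevc vol=fullblah commit=no
dsmserv restore db dev=tapedevc vol=incr1blah commit=no
dsmserv restore db dev=tapedevc vol=incr2blah commit=yes
```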
L14 ?
How close is that to an L12 ?
With L12 on the operator's console there will be a panel that displays the
manual mount requests...
will list the volser, its slot, drive on which to mount it...
with 3590B1A drives, they will also flash the volser to be mounted...
Now when you first go into
if we are going into debug mode...
check your 24V 36V power supplies for AC and DC power lights...
if you don't have them both (both lights on both power supplies) you may try
to power down that supply, wait 30 seconds and power it back on.
some minor power surges will trip internal breakers in
I'm looking in other places currently but can anyone shed any light on this
one ?
04/05/01 09:17:31 ANRD afmigr.c(2683): Reconstruction of aggregates
is
disabled. Run audit reclaim utilities to re-enable
reconstruction of aggregates.
biggest single box is a 2.4 TB db that compresses down to 600 GB, backs up
every other day/night
have tons of others that are 500-ish GB's that compress down to 100-200 GB
and back up nightly
across all the tsm servers we do 1.5 TB nightly
Dwight
[EMAIL PROTECTED]
-Original Message-
Ok, over the years I've had disk failures that have left disks defined in
storage pools that I can't get rid of !
Anyone know of a sure fire way to purge these volumes from TSM ? ? ?
basic problem is they are offline because they physically don't exist
anymore, if I try to recreate them they
ED], on
04/04/01
at 09:54 AM, "Cook, Dwight E" [EMAIL PROTECTED] said:
Ok, over the years I've had disk failures that have left disks defined in
storage pools that I can't get rid of !
Anyone know of a sure fire way to purge these volumes from TSM ? ? ?
basic problem is they
on each of the
problem volumes. It restored some data in each case and then deleted the
volume. To restore a diskpool volume the volume must be varied offline
first, but that is already the case for you.
Bill
- - -
In [EMAIL PROTECTED], on
04/04/01
at 09:54 AM, "Cook, Dwight E" [EMAIL
Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 04, 2001 9:19 AM
To: [EMAIL PROTECTED]
Subject: Anyone know a debug way of deleting a disk volume...?
Ok, over the years I've had disk failures that have left disks defined in
storage pools that I can't get rid
I agree !
I've moved environments all around and unless you can detach the disks and
put them on the new environment and import the volume group (which would be
hard on NT :-/ ) You are best off in doing just that... clear the volumes,
delete them, then add in new ones in your new environment.
Client compression would be very helpful...
How big are the boxes ? ? ?
and a full restore would take time but the data would be there.
I've backed up NT boxes from Atlanta GA into Tulsa OK (I'm pretty sure the
nodes are still registered...)
they run fine but my network speeds are a little
You probably are not initiating your migration processes early enough...
might have to take it down to about 80%
also, if your clients run with client compression, they preallocate based on
the uncompressed size and then release any unused portion after the transfer
is complete... in other words, if you
Nope, once a session goes into a wait on a device it will wait on it until
it is available...
In other words, it doesn't go back and check to see if the diskpool has
emptied out.
What you might want to do is set your disk storage pool HIGH value lower so
it will bleed the pool down without it
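The high/low trigger being described can be sketched in a few lines of shell; the percentages are example values, not recommendations.

```shell
# Sketch: migration starts when pool utilization reaches HIGH and runs
# the pool down to LOW (80/20 are example values).
hi=80
lo=20
should_start() { [ "$1" -ge "$hi" ]; }   # $1 = pool %util
should_stop()  { [ "$1" -le "$lo" ]; }
```

Lowering hi starts the bleed earlier in the night, which is the point being made above.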
Here is something going on tomorrow...
Dwight
-Original Message-
From: Terry Liffick [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 23, 2001 1:38 PM
To: Webcast Invitation List
Subject: Invitation to a Storage Webcast
Optimize your Storage Resource
Storage and Server Consolidation
OK backups are kept by versions versions alone !
if you keep 30 versions and the data changes daily, on the 31st day the 1st
copy goes away !
Copy pools only copy what is in the primary pool, when it expires from the
primary pool, so goes the copypool copy.
If people really want data for a year,
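The 30-versions arithmetic above can be sketched directly; the counts come from the example in the post (a file that changes daily, 30 versions kept).

```shell
# With 30 versions kept and a file changing daily, day 31's backup
# pushes the day-1 copy out (sketch of the arithmetic in the post).
verexists=30
oldest_kept_day() {            # $1 = days of daily backups so far
  if [ "$1" -le "$verexists" ]; then
    echo 1
  else
    echo $(($1 - verexists + 1))
  fi
}
```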
Hi Jerry, have they tried making more "volumes" to hold the data and then
jack up the resourceutilization to allow multiple processes to tackle the
task.
I had something like that on a unix box... half million files in a file
system that was only a few GB's... took hours to back up. I seem to
in a
timely fashion.
-Original Message-
From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 20, 2001 1:02 PM
To: [EMAIL PROTECTED]
Subject: Re: SAP and 3494 performance.
Our client is a Sun E10K with ?24 processors? maybe more if that is
possible... it has a bunch
And we
Do you mean a 3494 ATL with 3590 tape drives ?
what you can do is
on ATL perform "full inventory update" which makes the atl scan all
volumes and update its internal db
you can then (from the attached aix host) do a "mtlib -l/dev/lmcpx
-qI > somefile" to get a list of internal
And note !
My current environment, if a tape is FF00 in the atl and is listed as a tsm
private tape... an "audit libr" within tsm WILL NOT update the FF00 to the
proper private status for that TSM server.
I believe in the past it did, and might again in the future but currently
my environments
Don't you have jcl to perform batch TSO session(s) ?
Just use that and have it issue a call to your dsmadmc
argh... we moved off all our MVS adsm 3 or so years ago BUT I seem to recall
I was doing that...
Dwight
-Original Message-
From: Lee, Gary D. [mailto:[EMAIL PROTECTED]]
Sent:
Another thing you might wish to do is set your virtual volume size to either
something really big (like 1 TB) or as big as a tape on your remote
server... (I've used 10 GB and 1024 GB and sent the data compressed) and
actually I tried to keep reclamation from running on virtual volumes...
Now
-Original Message-
From: Richard L. Rhodes [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 20, 2001 7:23 AM
To: Cook, Dwight E; [EMAIL PROTECTED]
Subject: Re: SAP and 3494 performance.
Thanks for great info!
q) What kind of a client system are you using?
- machine type
Oh might want to double check that statement...
last I checked, it will not be used to determine if the actual "archive"
will occur, but it WILL (or used to) bind that file to any management class
specified...
example (last time I checked)
if you have the line in your inclexcl.list of
In general...
data tapes will be associated with the device class within an atl
in other words you may checkout all libvols, delete all drives, delete the
library and all is still well !
Then if you define a new library, define your drives back to that new
library, (update your deviceclass for
what you will probably notice is that when you tried to put new labels on
already labeled tapes and the system figured out something was wrong 'cause
you didn't tell it to expect a label already there...
it ejected the tapes from the library (took them out of libvols, something
along those lines)
mtlib -l/dev/lmcp# -qL | more
that will have all the library slot/tape info you need...(also cleaner
cycles)
the tape use "thing" has been a thorn in my rear for a long time now (going
on 6 years)
just use accounting records, take all the inbound traffic (arch, bkup, hsm)
add them up and divide
All this is why (if you do charge back) you simply charge the client by what
the "auditocc" shows...
Then if "THEY" take their processor time to compress their data they get a
little bit of a charge break
but if "THEY" don't take the time to compress their data, they end up paying
more !
Or
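The auditocc chargeback idea above reduces to simple arithmetic; the rate and occupancy figures below are made up for illustration.

```shell
# Sketch: charge by reported occupancy ("q auditocc" MB), so a client
# that compresses pays for less storage. Rate and MB are assumptions.
rate_cents_per_mb=2
charge_cents() { echo $(($1 * rate_cents_per_mb)); }   # $1 = occupancy in MB
```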
It all boils down to where your bottleneck is...
If your client(s) have huge amounts of data and big enough engines to
compress it down and it compresses nicely (such as oracle DB's) then you
will find that you can compress the data AND send it in less time than you
can send the data
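The bottleneck argument in numbers; all four figures are assumptions chosen so the compress-and-send case wins, as it often does for compressible data like oracle DB's.

```shell
# Sketch: elapsed seconds to move 1000 MB, raw vs client-compressed.
# Assumed: 10 MB/s network, client compresses at 25 MB/s to 50% of size.
size_mb=1000
net_mbs=10
comp_mbs=25
ratio_pct=50
t_raw=$((size_mb / net_mbs))                                          # send raw
t_comp=$((size_mb / comp_mbs + size_mb * ratio_pct / 100 / net_mbs))  # compress, then send half
```

With a fast network or a slow client CPU the inequality flips, which is exactly the "it depends" in the post.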
I've been off the list for a while so this might have already been asked...
Has there been any improvement in migration process limits in versions past
3.7.2.0 ? ? ?
In other words, if there is only one client's data in a pool, will there
still only be a single migration process ? ? ?
I'm back
has anyone heard of, or experienced any problems with consecutive diskpools,
each having cache turned on ? ? ?
that is
diskpool1 (cached) --next--> diskpool2 (cached) --next--> tapepool
I think something is getting dorked up with the pointers to my cached files
in the first
ps -ef | grep sched
then look for the process number
kill dsmc_sched_process_number
I set up in inittab:
root> lsitab -a | grep sched
adsm:2:respawn:/usr/tivoli/tsm/client/ba/bin/dsmc sched > /dev/null 2>&1
now to start it from the aix prompt:
nohup /usr/tivoli/tsm/client/ba/bin/dsmc sched > /dev/null 2>&1 &
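The ps-and-kill step above can be sketched as a small PID extractor; the sample ps line is fabricated so the sketch is self-contained.

```shell
# Sketch: pull the dsmc scheduler's PID out of ps -ef style output
# (the sample line is made up; feed it real ps output in practice).
find_sched_pid() { awk '/dsmc sched/ && !/awk/ {print $2; exit}'; }
sample='root  1234     1   0   Jan01      -  0:05 /usr/tivoli/tsm/client/ba/bin/dsmc sched'
pid=$(printf '%s\n' "$sample" | find_sched_pid)
# kill "$pid" would then stop it; with the inittab respawn entry,
# init restarts the scheduler automatically.
```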
There will only be a single copy sent if your "changingretries" is 0
now if that copy is kept depends on the serialization of the copygroup...
and shrdyn is the one that says "yes, keep it"
Dwight
-Original Message-
From: Rick Marshall [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October
define a management class, make it not the default, set the archive copy
group to point towards a new storage pool (and I wouldn't even put in a
backup copy group) now you could make this new storage pool, disk, then a
next to tape OR straight to tape... just depends on how many concurrent
client
OK, I basically keep a diskpool big enough to contain an entire night's
backups.
Now this is compressed...
Things to remember:
Clients, even though using compression, allocate enough disk space
in the diskpool to hold the file uncompressed, once the transfer is
completed, unused space is
Right, but the new drives can still read the old tapes, and if you drain the
data off the old tapes they can be written to by the E1A's... but if you
have all new tapes to fully populate your new atl, more power to you !
You don't really have to mess with setting the tapes to an overflow
I love this one...
I don't guess anyone has seen this before.
Dwight
Storage Management Server for Solaris 2.6/7 - Version 3, Release 7, Level
3.0
Last License Audit: 10/19/00 22:29:24
Registered Client Nodes: 1
Do you have collocation "yes" or "file" ? ? ?
if yes you will have as many filling tapes as clients
if file you will have as many filling tapes as total client file space count
here is an example from one of my tsm servers...
CHIDSM01 Tue Oct 10 06:05:43 CDT 2000 You have 281 SCRATCH and 518
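The filling-tape counts implied by the collocation setting, as a small sketch; the client and filespace counts are examples, not the numbers from the server output above.

```shell
# Filling tapes expected per collocation setting (sketch; counts assumed).
clients=40
filespaces=120
filling_tapes() {
  case "$1" in
    yes)  echo "$clients" ;;     # one filling tape per client
    file) echo "$filespaces" ;;  # one per client file space
    *)    echo 1 ;;              # non-collocated: one filling tape at a time
  esac
}
```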
Ok, how many people in their typing don't hit the "q" key hard enough and
rather than a "q pr" they issue a "pr" and the #@$%^ PREPARE initiates ! !
!
sorry about that, I'll go take my thorazine now...
Dwight
drop the - in front of the todate... that is a client type format... not an
admin session format !
just
del volhist tod=-30 t=dbb
or t=all or what ever type you want to get rid of
Dwight
-Original Message-
From: Shekhar Dhotre [mailto:[EMAIL PROTECTED]]
and you checked that the disks are also readwrite ! ?
I like a general
q vol acc=reado,destroyed,unavail
to let me know all is well, or not
OK, sure all client code levels work with all server code levels BUT...
the only time I've seen weird things like that happen is with server to
You are correct !
look at it as a safety feature... if you could extend it, that would imply
being able to alter it in general which could lead to early deletion with a
reduction in duration.
Sooo you have to retrieve it and re-archive it with the retention they
really wanted to begin with :-(
With the object of an archive you have to be at least as specific as a "file
system" and by that I mean if you do a "q file" you will see what tsm refers
to as "file systems" on your client... to get a complete archive you would
need to list each of those ~objects~ (as the filespec) with a
If the data is not available from a primary storage pool volume and the copy
pool volume with the data on it is available...
without any intervention, the data will be retrieved from the copy pool
volume (if/when the client requests it)
to get the data back into the primary storage pool, you
'bout the only quirk I can see is you have your "TO" file in quotes...
by the exact syntax specifications, double quotes around the "TO" file isn't
allowed.
uh and I believe the main reason for quotes around the "FROM" file is if
you use wildcards...
try without the quotes on the "TO" file
first, to look at a file under unix just use the "more" command
more filename
If there is a real weird object in the include/exclude list that would be
impacting these files, you would be more likely to see something about an
invalid management class, default being used...
This indicates to me
well...
v Improve the speed with which TSM restores file systems containing many
small files.
v Conserve resource on the server during backups since only one entry is
required for the image.
v Provide a point-in-time picture of your file system which may be useful if
your enterprise needs to
do a
show deadlock
just to see if one exists...
may also do a
show locks
just to see the volume of locks that exist... multiple "show locks" can
indicate if things are getting better or worse and how fast things are
changing...
some processes that are waiting on resources won't
Ah I just use an entire 4.5 GB or 9.2 GB drive for each DB file and
just add another one once I run low (hit about 85-90%util)
I've seen cache problems (low hit %) when you have an abundance of space
with nothing in it.
Dwight
--
From: Coles, Peter[SMTP:[EMAIL PROTECTED]]
: [EMAIL PROTECTED]
Subject: Re: Database sizing question
Sensitivity: Private
Are there any recommended DB filesizes ? For a 4gig Database would you
use
4*1gb on different spindles ? 2*2gb files or 1 4.5gb file ?
-Original Message-
From: Cook, Dwight E [SMTP:[EMAIL