I will throw this document into the ring as well.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.641.1965&rep=rep1&type=pdf#page=169
Even though it's a bit dated, it hits on a lot of the elements that go
into getting the most performance out of your 10G network adapter.
As previousl
Since you haven't gotten much response, and you did ask for ANY insight,
I will offer mine! Be forewarned, I haven't used NetApp for TSM disk
pools, but on paper it would seem like a good fit. NetApp / WAFL
coalesces current write activity and spits the data out as full stripes
to the underlying ar
This matches what I see in our environment - q nasbackup doesn't show the full
backup that the differentials depend upon, once the full backup ages out. We
have successfully restored volumes using a recent differential with its older
associated full backup, so it does work. However, your use of
So perhaps the issue folks are seeing isn't directly caused by a
languishing log pin, but by something in the 5.5.5.0 upgrade that made a
questionable change to the database, causing corruption/inconsistency?
Or did some housekeeping operation change in 5.5.5.0 such that it
generates more/larger transactions
You may be running out of file handles. Based on your numbers, I'd guess there
is a per-process limit of 1024 file handles. Your storage volumes are using
most of them and there aren't enough left over for TSM to do what it needs to
(open a message file, etc).
Assuming mainframe Linux is equ
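On a garden-variety Linux you can check and raise the limit along these
lines (the user name and values here are just examples - adjust for your
setup):
# Show the per-process open-file limit for the current shell
ulimit -n
# Raise it for this shell before starting dsmserv
ulimit -n 4096
# Or make it permanent in /etc/security/limits.conf, e.g.:
#   tsmuser  soft  nofile  4096
#   tsmuser  hard  nofile  8192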
Have you tried using fewer tape drives?! Crazy as that sounds, if you
can't feed the 4 drives fast enough, they are all going to shoe-shine,
which will kill throughput. Fewer tape drives would also reduce the
seek load on the disk pool (where, again, the heads are hop-scotching
all around trying to
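If you want to test that theory without taking drives offline, you can
just cap the concurrent mounts on the device class (the class name here
is made up - substitute your own):
upd devclass LTOCLASS mountlimit=2
That holds TSM to 2 concurrent mounts for that class, so you can watch
whether aggregate throughput actually improves with fewer drives in play.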
Have you considered setting up a separate TSM instance specifically to
handle this emergency? Use the TSM database backup from just prior to
all those files being marked inactive. That way you can leverage NQR to
get these big boys.
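The restore into the scratch instance is roughly this (5.x syntax from
memory - check the command reference, and the date/time are placeholders):
dsmserv restore db todate=05/01/2008 totime=23:00:00
Point the new instance at copies of the volume history and device
configuration files from the production server first.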
-Ken
-Original Message-
From: ADSM: Dist Stor Manager [
Since you mentioned they were SCSI drives, is it possible you have a
cable quality / length / termination issue on your SCSI bus? It might
only manifest itself under certain signalling conditions, such as those
under heavy load from multiple targets. Can you do a full tape-to-tape
copy outside of TSM
We have an IBM N-Series 6040 (a thinly veiled NetApp 3140) that among
other things contains several large volumes (2TB, 35 million files).
These volumes have qtrees defined for the larger top-level directories.
There are numerous other small top-level directories that are not in
qtrees.
We are r
2008 09:47:34 ANR0987I Process 485 for EXPIRE INVENTORY running in the
BACKGROUND processed 1141005 items with a completion state of SUCCESS
at 09:47:34 AM. (SESSION: 60470, PROCESS: 485)
"Mueller, Ken" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist St
Zoltan-
We also run TSM on a dedicated Linux/Intel box, although on a smaller scale
than you. The server has 2.5GB RAM and two 2GHz Xeons. Our DB is 23G
of which 70% is in use. Buffer pool is set at 32M (8192 pages) with
selftune on. Expiration is run daily - typically completes in 4-7
minutes!
I
You can obtain the total size from the Search Results window by
selecting the modified files, right-click, hit Properties. Just be sure
you only select modified files, not folders, otherwise the size and
count will be inflated with the size/count of the folders' contents.
-Ken Mueller
-Origi
I ran into a similar situation where the TSM client (v5.2.3 at the time)
wouldn't back up or restore the ext3 extended ACLs on RHEL3. The problem
turned out to be that the TSM client was looking for libacl.so but
couldn't find it. It was available as libacl.so.1 (which symlinked to
libacl.so.1.1.0).
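If anyone hits the same thing, a symlink to give the client the name it
wants should do it (the exact lib directory varies by distro):
# as root - libacl.so.1 already exists, the TSM client wants plain libacl.so
ln -s /lib/libacl.so.1 /lib/libacl.so
(Installing the libacl development package accomplishes the same thing,
since that's normally what owns the unversioned .so link.)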
Following on the heels of the postings earlier this week about restoring
primary storage pools from copy pools, I was curious about the process used
to perform the restore:
If I have 'large objects' (Oracle databases, Exchange Databases, image
Backups, etc) in one or more copy pools, client restor
From: Timothy Lin
Sent: Friday, March 10, 2006 10:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Journaling/Linux
That's very good performance. Out of curiosity,
is your TSM server also running on top of VM?
I know VM instances can talk to each other through memory instead of
over the net
Tim.
Mu
ve behind that?
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Mueller, Ken
Sent: Thursday, March 09, 2006 8:23 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Journaling/Linux
We have numerous Linux/Samba file servers in production running under
VMWare E
We have numerous Linux/Samba file servers in production running under VMWare
ESX (we run almost everything under ESX). Here are the results from our
largest document imaging server (lots of small files - ext3 file system):
03/09/2006 00:24:41 --- SCHEDULEREC STATUS BEGIN
03/09/2006 00:24:41 Total
Have you explored other communication options besides the traditional telco
T1/T3? In particular, DSL and cable modem internet connections. If one of
those modes is available to you, you can tunnel over a VPN to your other
site. That would be cheaper and faster than a T1.
-Ken
-Original Me
You could always backup the client (X) under a second nodename (X_ACTIVE):
Backup node X with your normal management classes/retention. Backup node
X_ACTIVE with a management class set to only keep the active copy on your
special disk storage pool - anything inactive will be expired out.
Obviousl
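A rough sketch of the policy plumbing for the X_ACTIVE side - all names
here are invented, and I'm going from memory on the syntax, so verify
against the admin reference:
def domain ACTIVEONLY
def policyset ACTIVEONLY STANDARD
def mgmtclass ACTIVEONLY STANDARD STANDARD
def copygroup ACTIVEONLY STANDARD STANDARD type=backup -
destination=SPECIALDISKPOOL verexists=1 verdeleted=0 retextra=0 retonly=0
assign defmgmtclass ACTIVEONLY STANDARD STANDARD
activate policyset ACTIVEONLY STANDARD
reg node X_ACTIVE somepassword domain=ACTIVEONLY
verexists=1 keeps just the newest copy while the file exists on the
client; verdeleted=0 expires everything once the file disappears.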
We've been running TSM on Intel/Red Hat for over 2 years. For reference,
we're using TSM server 5.2.4.2 on RHEL 3.0 Update 1 on an IBM x335. FC
attached 3583 and disk pools (separate HBAs, of course!) - Database is
about 20G - Backup types include DP for Oracle and Exchange, a few Windows
machine
Try something like this...
select cs.domain_name,cs.schedule_name, cs.starttime,
cs.dayofweek,a.node_name -
from associations a, client_schedules cs -
where a.domain_name='WINDOWS' -
and a.domain_name=cs.domain_name -
and a.schedule_name=cs.schedule_name -
order by a.node_name, cs.dayofweek
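If you need that in a spreadsheet, run it through the admin CLI with
comma-delimited output (id, password, and filename are placeholders):
dsmadmc -id=admin -password=xxx -commadelimited \
"select a.node_name, cs.schedule_name, cs.starttime from associations a, client_schedules cs where a.domain_name=cs.domain_name and a.schedule_name=cs.schedule_name order by a.node_name" \
> schedules.csv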
Being a computer, it's doing exactly what you told it to!
I think you meant 'where date_time > current_timestamp - 3 days'
(if you have '=' it will only find records from 3 days ago that match the
current time down to the second)
Also, to find multiple message numbers, you can use 'msgno in (1064
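Putting those two together, something like this (the message numbers are
picked purely as examples):
select date_time, msgno, message -
from actlog -
where date_time > current_timestamp - 3 days -
and msgno in (1064, 1065) -
order by date_time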
You should be able to just add another output column doing the math... so
for your query:
SELECT STGPOOLS.STGPOOL_NAME, STGPOOLS.MAXSCRATCH,
Count(STGPOOLS.MAXSCRATCH) as "Allocated_SCRATCH",
STGPOOLS.MAXSCRATCH-count(STGPOOLS.MAXSCRATCH) as "Remaining_SCRATCH"
FROM STGPOOLS STGPOOLS, VOLUMES VOLU
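From memory, the whole thing would be roughly this - untested, so treat
it as a sketch and adjust to taste:
SELECT STGPOOLS.STGPOOL_NAME, STGPOOLS.MAXSCRATCH,
Count(STGPOOLS.MAXSCRATCH) as "Allocated_SCRATCH",
STGPOOLS.MAXSCRATCH-count(STGPOOLS.MAXSCRATCH) as "Remaining_SCRATCH"
FROM STGPOOLS STGPOOLS, VOLUMES VOLUMES
WHERE VOLUMES.STGPOOL_NAME=STGPOOLS.STGPOOL_NAME
GROUP BY STGPOOLS.STGPOOL_NAME, STGPOOLS.MAXSCRATCH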
This is probably not the most elegant way of doing things, but I think this
will work until your enhancement request is granted! Basically build 2
filelists, sort them together and show only the unique entries that reside
on the client.
#!/bin/sh
#
# Compares TSM filespace to actual filespace
# D
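For anyone who just wants the idea without the full script, a
stripped-down version looks roughly like this (paths are placeholders
and the dsmc parsing is deliberately crude - adjust for your output
format):
#!/bin/sh
# What's actually on the client
find /home -type f | sort > /tmp/client.list
# What TSM has backed up (crude parse: assumes the path is the last
# field and contains no spaces)
dsmc query backup "/home/*" -subdir=yes 2>/dev/null \
| awk '/\// {print $NF}' | sort > /tmp/tsm.list
# Lines only in client.list = on the client but not in TSM
comm -23 /tmp/client.list /tmp/tsm.list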
Our TSM setup is not nearly as large as most of yours, but for what it's
worth, here are our values for the past month:
FULL_DBBACKUP 2004-08-14 6537600
FULL_DBBACKUP 2004-08-15 6051600
FULL_DBBACKUP 2004-08-16
To add one more ingredient to the 'it all depends' performance soup - it's
not really sequential vs random reads - it's one sequential read stream vs
multiple sequential read streams. Depending upon the OS/disk subsystem,
read-ahead caching would be performed on each stream and reduce the amount
of
I had the same (or similar) problem using IBMtape 1.4.11: TSM would hit EOV,
then fail with an 'already reached EOV once' error, flag the tape R/O, and
move on to the next tape ad nauseam... upgrading to IBMtape 1.4.14 solved
that problem (had to bump the Linux kernel up a notch to run a version the