Here's one to try; not sure if it will work...
On the TSM server, before you restore:
TSMSRV:> rename filespace node_name / /rescue
Then on the client just: dsmc restore /rescue
Rename it back when it is done.
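A fuller sketch of the same trick, assuming the node is named NODE1 and the filespace being rescued is "/" (both are placeholders for illustration):
TSMSRV:> rename filespace NODE1 / /rescue
Then on the client:
dsmc restore /rescue/ -subdir=yes
And when the restore finishes, back on the server:
TSMSRV:> rename filespace NODE1 /rescue /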
Matthew Glanville
From:
"Stackwick, Stephen"
To:
ADSM-L@vm.marist.ed
/ -subdir=yes works, it has a
different starting point.
If there are files/directories there, delete them, then mount /main/UT
back.
Or maybe keep them, if they are important...
I have seen this before on Solaris 9 OS backup a few years ago
Matthew Glanville | WWIS GI Server team |
Eastman Kodak Co
Remco,
You mention you are having performance problems with IBM XIV and 'DISK'
pools, how is the performance with 'FILE' based storage pools?
Matthew Glanville
directory compressed all the
time would work. I'll have to do some performance tests to figure that
out.
Matthew Glanville
Eastman Kodak
"ADSM: Dist Stor Manager" wrote on 04/08/2010
03:26:37 PM:
> From:
>
> "Prather, Wanda"
>
> To:
>
> ADSM-L@VM.
> "ADSM: Dist Stor Manager"
>
> On Dec 3, 2009, at 2:24 PM, Zoltan Forray/AC/VCU wrote:
>
> > I would assume so, since they roll off/expire after 180-days.
> >
> > How would I check?
>
> I'd give DOMDSMC Query DBBackup * /INACTive
> a try, and see what it reports. There may be reclusive stuff or
Just a warning: don't necessarily go to 1/2 or 1/8th of your total
physical memory...
If your server has 64 GB of memory, 8 GB (1/8th) for BUFPOOLSIZE is
probably too high. I would keep it below 1 GB unless you prove to
yourself with some testing that it is helping speed up the backups or
restores.
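For reference, BUFPOOLSIZE is specified in KB, so capping it at 1 GB would look like this (a sketch; tune the value with your own testing):
TSM:> setopt bufpoolsize 1048576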
t. Come on IBM step up...
Matthew Glanville
www.kodak.com
I have the exact same complaints about the 'FILE' devclass...
If only they would implement a different concept for its hi/low/migration
settings and migrate things better.
I wish it would use a volume per 'connection/node/filespace/group'
depending on collocation setting, utilizing your 'size' th
Sounds to me like the TSM client, a Windows 2003 server, needs more
memory.
Or it's virus scanning the file share as it's backing it up.
Also look into the 'snapshots' that NetApp may be doing on the file
shares.
Are you also backing them up in addition to the current data?
That could easi
"ADSM: Dist Stor Manager" wrote on 12/06/2006
02:39:35 PM:
> -Luc Beaudoin wrote: -
>
> >I have a new SUN Solaris server, (SUNFIRE T2000), SunOS 5.10
> >
> >I installed the TSM client on it and it's working ... but very
> >slow.
> >The client version I installed is 5.3.
ed fairly idle even though backups were taking longer.
Now that we have a smaller "bufpool" size, more CPUs are being used and
overall performance is much better.
Maybe they will come up with a better database in future TSM versions, I
am fairly sure that this is going to be needed for
t achives to form your opinion on the pros/cons of
using DISK vs FILE device classes for large disk based storage pools...
Matthew Glanville
Lance Nakata <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
05/30/2006 07:28 PM
Please respond to
"ADSM: Dist Stor Manager&
I'm still having 5.3.2.3 repair volume hanging problems when it repairs
offsite volumes during reclamation.
Matthew Glanville
Eastman Kodak Company
Worldwide Information Systems (WWIS)
343 State Street
Rochester NY 14650
585-477-9371
Privacy/Confidential Disclaimer:
The information cont
From my testing, dsmcad doesn't complain about the '-server=' option, but
doesn't use it.
Matthew Glanville
> I think the discussion could use more details about the architecture
> being used.
> If you haven't already, see IBM Technote 1208540.
>
> Richard Sims
Ahh those details.
TSM server 5.3.2 on Solaris 9, 64-bit, 8 CPUs, 32 GB RAM.
DB size, 150 GB, 75% used
Current BUFPOOLSIZE 1 GB
My te
> Oo, neat!
>
> Do I understand you correctly to say that you've got 32G of memory and
> a 4G database? If you've got core of a similar size to your DB, then
> I suggest an experiment: instead of sticking it all in a buffer, make
> some RAMdisk, and stick a third copy of your DB vols there, and se
ing that
database page from disk.
Anyone else seen this?
Thanks
Matthew Glanville
Eastman Kodak
hould be looked at before
blaming the RAID-5 entirely.
Matthew Glanville
>We have been evaluating this product and have not been able to achieve
the
>necessary performance throughput on the restores we require. I'd like to
>know if anyone else out there is successfully using this product.
>Environment:
>AIX 5.2 and TSM Server 5.2.4.2
>6 STK 9840 B tape drives, but w
> There's also performance internal to the tsm server to consider.
> The large server backups could be monopolizing your network
> throughput, scsi card throughput, pci/pci-x bus throughput, or some
> combination of all the above.
>
System performance monitoring shows hardly any network I/O during
Hi,
I would like to know how to explain a situation in which, when 1 or more
incremental backups of some large 2 TB, 5+ million file servers are running,
there appears to be an effect on any data restorations.
If I cancel those backups, the data restorations start running much
faster.
So far my id
"ADSM: Dist Stor Manager" wrote on 09/09/2005
03:21:35 PM:
> I was just mulling about the Idea to speed up large restores of small
files,
> without resorting to image backup, keeping everything on disk, or
> collocating tapes. If I could keep just the active copy on disk I could
> speed up the re
Avoid using the 5.3.1.2 server, as any writes to collocated
(node/group/filespace) volumes (disk or tape) are extremely slow.
It appears to be fixed in 5.3.1.3+; see APAR IC46349.
It was so slow I had to temporarily turn collocation off until the fix.
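The temporary workaround, sketched as admin commands (STGPOOL1 and the collocate values are placeholders for your own pool and setting):
TSM:> update stgpool STGPOOL1 collocate=no
...apply the 5.3.1.3+ fix, then restore the old setting...
TSM:> update stgpool STGPOOL1 collocate=filespace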
Matt Glanville
[EMAIL PROTECTED]
Dave Zarnoch <[E
file permissions!
TSM server 5.3.1.x has a bad bug. I am not sure if you are using that or
5.3.0
But if you are on 5.3.1, you could try turning Collocation off for your
destination storage pools.
IBM APAR IC46349
This bug makes TSM 5.3.1 useless for sites backing up more than a few
servers.
Matt G.
I'm running TSM on Solaris 9, version 5.3.1.2, and have experienced an
extreme slowdown in performance when writing data to any collocated
sequential storage pool, FILE or tape.
Server settings:
movebatchsize=1000, txngroupmax=2048, movesizethresh=2048
This happens both with client backups, pool mig
Other than lowering the backup process priorities, another way to cause TSM
to use less resources is to set 'MEMORYEFFICIENT YES' in the options.
This will cause TSM to use less memory, and also slow down its disk
scanning.
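The full client option name is MEMORYEFFICIENTBACKUP; a minimal client options sketch (the server stanza name is a placeholder):
* dsm.sys
SErvername tsmprod
  MEMORYEFFICIENTBACKUP YES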
Also check TSM's 'copy group' serialization parameters, the default of
'sha
> select file_name from contents where node_name='Sprint800'
'Sprint800' probably won't work; it has lowercase in it. (not positive
though)
Try SPRINT800.
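The corrected query, with the node name uppercased (TSM stores node names in uppercase in its database):
TSM:> select file_name from contents where node_name='SPRINT800'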
But, I would narrow it down a bit more.
First find what tape it may be on:
select volume_name from volumeusage where node_name='SPRINT800' and
id appear that
we may have been one of the first to do it. Make sure your
IBM SE asks around for tips on how to do the upgrade.
Also, you can put in 4x3592 drives in the spaces taken up
by the 2x3590's.
Matthew Glanville
desk and decide to call in IBM to fix the
problem or just unload the tapes that wanted to come out of the I/O slots.
My biggest issue is that the server we have connected to those 6 drives
can't push data fast enough to them, they can go much faster!
Matthew Glanville
Eastman Kodak
We tried this. It worked. It's easy to set up as documented.
But anytime the TEC was restarted
the events stopped being sent from the TSM servers
until the TSM server was restarted or we ran the TSM commands:
TSM:> end eventlogging tivoli
TSM:> begin eventlogging tivoli
IBM/Tivoli Support wasn't
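One possible workaround (a sketch, not something the post describes running) is an administrative schedule that re-issues the begin command daily; the schedule name and start time are placeholders:
TSM:> define schedule RESTART_TEC type=administrative cmd="begin eventlogging tivoli" active=yes starttime=06:00 period=1 perunits=days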
> The problem lies within the server side, since it does send the data to
the client very
slowly (and from the task manager I can see that the tsm server process
is reading data slowly as hell).
Won't the speed you see just adjust itself to what the client can
write to the disk?
Most of the p
G.
Things I would look for:
Don't use 127.0.0.1/localhost for TCPserveraddress. I have found this to be
slower for TSM backup/restore on some OSs, as it is just a loopback interface.
Make sure active virus scanning is turned off.
Software-based RAID will slow things down too.
You indicated that singl
ifferent in the
library definitions. (and scratch to if you want to further partition the
library)
Matthew Glanville
Eastman Kodak
Zoltan
Forray/AC/VCUTo: [EMAIL PROTECTED]
<[EMAIL PRO
et and the
sense errors were sent to them.
I will find out what they say.
Matthew Glanville
Dave frost wrote>
Have you had any interesting records in /var/adm/messages that you
can
match up to when one or more of the tapes was mounted? (ANR8468I
volume
dismounted is a go
Problem:
Lots and lots of tapes reporting errors when auditing, copying, or moving
data from them...
06/23/03 10:52:37 ANRD pvrntp.c(4586): ThreadId<15> Invalid
block header read from NTP drive DRIVE5 (/dev/rmt/10st).(magic=5A4D4E50,
ver=5, Hdr blk=1450 , dbytes=262096 <262096>)
(Thu
I agree with Andy: don't change include/excludes between backups of the same
system!
Use the TSM 'archive' function to get the data once per week to the TSM server
instead of TSM 'backup'.
Excludes are ignored by the TSM archive function (unless you are using
exclude.archive).
Matt G.
Eastman Kodak
oes
it work, have any side effects, slow things down, etc?
Thanks
Matthew Glanville
[EMAIL PROTECTED]
Is that called a paradox
or a conundrum?
I agree that some of us are just spoiled by having smarter library
controllers.
But I am getting in shape by running to and from the 3584 library and the
TSM console :)
Matthew Glanville
[EMAIL PROTECTED]
h TSM and 3584's
does anyone else have a better way?
Matthew Glanville
[EMAIL PROTECTED]
>Yes, if you had a single Frame of each side by side and took a quick
>look, you'd think they were the same. The "Big Iron" is same design.
>Main difference is the electr
er to
client containing the drive names, so something is happening, but not
completely working.
I am still waiting for LVL 2 to respond.
Does anyone have any ideas?
Matthew Glanville
[EMAIL PROTECTED]
Eastman Kodak
The default IBMtape.conf file has sili=0 for the Solaris driver...
Should this be set to 0 or 1 for TSM and 3494 drives on Solaris 8?
Thanks
Matthew Glanville
[EMAIL PROTECTED]
virus
scanning and things are all ready for a painful TSM restoration.
I hope one of those things will help you get more performance out of this
configuration.
Matthew Glanville
Eastman Kodak
"Kelly J. Lipp" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[E
I will be out of the office starting 08/09/2002 and will not return until
08/20/2002.
I will respond to your message when I return. For any critical problems
regarding ADSM/TSM backups or UNIX support, please create a Vantive ticket in
the 'Server UNIX INET US' box, or call the help desk at KNET 25x87047.
From: Matthew Glanville
Joe,
I do not believe there is such a thing as 'server' level compression. My
understanding is that the device class compression settings reflect
the hardware level compression settings; they can override what the
microcode may have set the 'defaul
From: Matthew Glanville
You might want to turn on TSM client-side compression...
In my experience, Notes databases can get at least 50% compressed.
Your backups will most likely go down to 2 hours, or even less.
TSM:> update node node_name compress=yes
Give it a try. For low bandwidth line
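Compression can also be set in the client options file instead of on the node; a minimal sketch using standard client options:
COMPRESSION YES
COMPRESSALWAYS NO
(COMPRESSALWAYS NO makes the client resend an object uncompressed if it grows during compression.)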
From: Matthew Glanville
I have had this problem numerous times on TSM 3.7 - 4.1 on Solaris.
Neither
delete volume volname discard=yes
nor
audit volume volname fix=yes
worked.
The solution was to do that which no one likes to do...
dsmserv audit db fix=yes
Also a parameter after
From: Matthew Glanville
Every time I ran into this problem I had to audit my TSM database.
Make sure you back up your database,
then:
dsmserv audit db archstorage fix=yes
Wait a few hours (or more) for it to complete.
Whenever I tried audit volume fix=yes, I always got the response 'volume
contai
From: Matthew Glanville
This might get you some useful information
from TSM about tape mounting activity:
tsm> select * from summary where activity='TAPE MOUNT'
START_TIME: 2001-03-26 00:01:00.00
END_TIME: 2001-03-26 00:12:30.00
ACTIVITY: TAPE MOUNT
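A variant limited to the last day's activity, using TSM's SQL interval syntax:
TSM:> select * from summary where activity='TAPE MOUNT' and start_time>current_timestamp-24 hours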
From: Matthew Glanville
This is probably that archive expiration problem. I forget the APAR.
run expire inventory like this:
expire inventory skipdirs=yes
That should let it run fast again...
Matt Glanville
Eastman Kodak Company
Bert Moonen <[EMAIL PROTECTED]> on 04/17/2001
From: Matthew Glanville
I have seen those errors on one ADSM server when running Sun Solaris 2.6
and ADSM 3.1.
If I remember correctly, it didn't appear to be memory related. It just
was that too many things were going on at once on the TSM server (too many
threads?). I reduced the numb
From: Matthew Glanville
I am using a 3575 with TSM 3.7 on Solaris 2.6 and am not having those
problems.
There is a difference between my configuration and yours:
I am using the /dev/rmt/Xst devices; you are using the /dev/rmt/Xstc.
This may have something to do with your problem. My tapes
From: Matthew Glanville
Change the device class format=drive
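Sketched as an admin command (CLASS1 is a placeholder for your device class name):
TSM:> update devclass CLASS1 format=drive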
Matt Glanville
Eastman Kodak Company
Phil Bone <[EMAIL PROTECTED]> on 26/09/2000 12:36:23
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:(bcc: Matthe
From: Matthew Glanville
I have found that many times I overlook a very significant item that is
really the cause of the slow restore/backup on NT.
Make sure that there is NO Virus scanning software running
I have been hit at least 5 times by the NT admin complaining about a slow
backup
From: Matthew Glanville
I had a similar problem with TSM 3.7.1 on Solaris 2.6 on two different
servers...
I had to audit my database to fix it. It worked on one of my servers, on
the other the AUDIT Crashes! I gotta make another call to IBM/Tivoli... :(
Matthew Glanville
"Ul
From: Matthew Glanville
I have run into this problem for both a copy sequential volume and a
primary random access volume on two different servers!
I placed a call, waiting to hear back from level 2.
Situation: Unable to delete some volumes!
TSM 3.7.2.1 and 3.7.3.0 on Sun Solaris 2.6