Re: TSM DB and RLOG sizes

2004-10-13 Thread Christo Heuer
Hi,

You can grow your database a hundred (maybe even a thousand) times
bigger without suffering performance problems.

The Admin Guide explains the space occupied by each object (and you have
30 million of them).
You will definitely need to increase your DB size.
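As a rough sizing sketch for the 30 million files mentioned below: assuming roughly 600 bytes of database space per stored object version (a commonly cited rule of thumb for TSM v5; check the Admin Guide for your level) and a hypothetical retention of two versions per file, the arithmetic looks like this:

```python
# Rough TSM database sizing sketch for ~30 million objects.
# ASSUMPTIONS: ~600 bytes of DB space per stored version (rule of
# thumb; verify in the Admin Guide) and 2 retained versions per file.
BYTES_PER_VERSION = 600
VERSIONS_KEPT = 2

objects = 30_000_000
db_bytes = objects * VERSIONS_KEPT * BYTES_PER_VERSION
db_gb = db_bytes / 1024**3
print(f"Estimated DB space: {db_gb:.1f} GB")
```

With these assumptions the 96 MB database below is short by orders of magnitude, which is why an increase is unavoidable.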

Regards
ChristoH

> Up to now I kept my DB and Rlog sizes to a minimum (96MB and 50MB
> respectively). Most of my backups are SQL databases or large files, so
> there was no need to increase them.
> However, I have a new client (Windows) that requires backing up a complex
> folder structure with about 30 million files! These are cheque images of
> about 20KB each - the total space is 600GB.
> I have no problem with my storage pool, as I have enough empty tapes.
> I'm worried about the DB and Rlog sizes that I have to increase - I just
> don't know by how much. A colleague suggested something between 1% and 5%
> of the size of the data to be backed up.
> Has anyone faced a similar situation and, if yes, any suggestions on the
> sizes?
> Are there any catches as to how performance is affected by such a large
> database?
>
> Environment:
> TSM 5.2.0 on Win2000
> TSM Client 5.2.0 on Win2000
>
> Thanks
> Yiannakis
>
>
> Yiannakis Vakis
> Systems Support Group, I.T.Division
> Tel. 22-848523, 99-414788, Fax. 22-337770


___

Important Notice:
Authorised Financial Services Provider

Important restrictions, qualifications and disclaimers
("the Disclaimer") apply to this email. To read this click on the
following address or copy into your Internet browser:

http://www.absa.co.za/ABSA/EMail_Disclaimer

The Disclaimer forms part of the content of this email in terms of
section 11 of the Electronic Communications and Transactions
Act, 25 of 2002.

If you are unable to access the Disclaimer, send a blank e-mail
to [EMAIL PROTECTED] and we will send you a copy of the
Disclaimer.


Re: Expiration performance

2004-09-13 Thread Christo Heuer
Hi Ben,

I agree with you on these points - I ran Dave's script on our main
TSM server, and the database backup performance is exceptionally
good - the slowest I'm seeing is 3100:
FULL_DBBACKUP  2004-09-03  62625600
FULL_DBBACKUP  2004-09-04  56156400
FULL_DBBACKUP  2004-09-05  59911200
FULL_DBBACKUP  2004-09-06  51152400
FULL_DBBACKUP  2004-09-07  57801600
FULL_DBBACKUP  2004-09-08  60886800
FULL_DBBACKUP  2004-09-09  52707600
FULL_DBBACKUP  2004-09-10  59018400
FULL_DBBACKUP  2004-09-11  57938400
FULL_DBBACKUP  2004-09-12  58968000

The average seems to be in the region of 5000+, so from that
point of view it seems excellent - but expiration sucks, according to the
script.

ACTIVITY   Date        Objects Examined Up/Hr
---------- ----------  ----------------------
EXPIRATION 2004-08-14  856800
EXPIRATION 2004-08-14  867600
EXPIRATION 2004-08-15  864000
EXPIRATION 2004-08-17  770400
EXPIRATION 2004-08-18  669600
EXPIRATION 2004-08-19  846000
EXPIRATION 2004-08-20  734400

So, yes, it seems that YMMV - but is anyone out there getting
good performance from expiration?
If so, please share your environment with us - collectively we might be
able to cast some light on the lower expiration figures.
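For cross-environment comparison, the per-day figures in the table above can be averaged with a quick sketch (the values are copied from the EXPIRATION table; this is illustrative arithmetic, not part of Dave's script):

```python
# Average objects-examined-per-hour from the (date, examined/hr)
# pairs quoted in the EXPIRATION table above.
expiration = [
    ("2004-08-14", 856800),
    ("2004-08-14", 867600),
    ("2004-08-15", 864000),
    ("2004-08-17", 770400),
    ("2004-08-18", 669600),
    ("2004-08-19", 846000),
    ("2004-08-20", 734400),
]
avg = sum(n for _, n in expiration) / len(expiration)
print(f"Average examined/hour: {avg:,.0f}")
```

For this data the average lands around 800 thousand objects examined per hour, which gives a single number to compare against other sites.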

For the record, my environment is as follows:
AIX 5.2
TSM 5.2.2.5
DB size 105 GB, 93% utilized, 13 LUNs (one over the recommendation).
Disk subsystem: FAStT900
File system: RLV
Tape technology: 3584 with LTO-2 drives
p650 (4 processors and 4 GB RAM)

Regards
ChristoH

===
Yes, interesting stats. On all my TSM servers, they get above 5M
pages for the DB backup, but none of them is above 3.8M objects on the
expire inventory. Some are in the 2M range, others only in the 0.5M range.

These are random thoughts pointed at the group, not necessarily at Joe.

Since my DB backups (sequential reads) go well, but the expire
inventory (random reads and writes) is slow, might that point to DB
fragmentation? Improper tuning of TSM buffers? Overcommitment of the
fast-write disk cache? Bad karma?

One would think that the more files deleted during the expire
inventory, the longer the expire inventory will take to progress - no?
I can run two expire inventories in a row and the second one goes much,
much quicker, because there are very few changes to be made to the DB. It
seems like the performance on the "number of objects examined" is really
one of those "your mileage may vary" kinds of stats.

Perhaps I'm not getting the performance I should out of the
expiration...

Ben





Re: Maximum database volumes?

2004-07-08 Thread Christo Heuer
John,

I had an LPAR dedicated to TSM (200 MIPS) on a z900.
This LPAR did NOTHING but TSM.
When your database grows past 40 GB you WILL NOT compete
with even a Win2K server in terms of throughput.
How many concurrent TSM sessions does your TSM server handle currently?
How much data can you pump through to TSM in an evening?
How long does your database backup run to do 35 GB?
Some performance stats from a p650 LPAR with 4 GB RAM, 4 CPUs and 2.9 TB
of staging area on a FAStT900:
On average we back up between 1.5 TB and 2.5 TB a night - on the z900 we
battled to do 500 GB (I've got all the graphs from the mainframe days of
TSM to now).
The TSM server on AIX handles this via two Gigabit Ethernet adapters.
The mainframe had two Gigabit Ethernet adapters as well, going into the
enterprise backbone.
The network adapters on the mainframe were OSA Express, BTW.

My database on AIX is currently 85 GB and 96% utilized.
A full database backup takes 22 minutes to LTO-2 drives - my database
backup on z/OS took about 4 hours to do 39 GB, to 3590-K model drives.
When I originally installed Tivoli Decision Support and tried to run
reports, it took in excess of 24 hours to actually run - which made the
Decision Support product useless while I was running TSM on OS/390/z/OS.
On my AIX TSM server I'm able to use the product properly.
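Working the quoted database-backup numbers through (85 GB in 22 minutes on AIX versus 39 GB in roughly 4 hours on z/OS) gives the throughput gap directly:

```python
# Database-backup throughput comparison from the figures quoted above.
# (Illustrative arithmetic only; the "4 hours" was stated as "+-".)
aix_mb_per_s = (85 * 1024) / (22 * 60)    # 85 GB in 22 minutes
zos_mb_per_s = (39 * 1024) / (4 * 3600)   # 39 GB in ~4 hours
print(f"AIX : {aix_mb_per_s:.0f} MB/s")
print(f"z/OS: {zos_mb_per_s:.1f} MB/s")
```

That works out to roughly 66 MB/s versus under 3 MB/s - a better than twenty-fold difference on these figures.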

Don't think I'm negative towards the mainframe (I ran ADSM v1 on MVS a
decade ago already) - it runs fine up to a certain point. If your load is
not high it will suffice, but when you are running large workloads it
becomes impractical: I would have had to run about five LPARs to achieve
what I'm achieving with one midrange TSM server.

Hope this puts mainframe TSM servers into perspective.

Regards
Christo Heuer

> Well, I have a 35 GB TSM database on Mod 9s, two volumes per physical
> device, and I have not noticed any worse performance since moving from
> Mod 3s.
> I would have preferred one logical volume per physical device, but
> TSM does not currently support extended VSAM (over 4 GB).
> If/when TSM does support extended VSAM, then Mod 27s will be quite
> feasible for larger databases.
> I do not agree with Christo's premise that TSM mainframe performance is
> crap.
> You are not comparing like with like.
> What you see are dedicated Unix or Windows TSM servers, while the TSM
> mainframe server is normally sharing resources with many other
> applications.
> Certainly, in my experience, poor TSM mainframe performance is down to
> insufficient resources for the overall workload rather than anything
> inherent in TSM.
> thanks,
> John
>
>
>
>  Christo Heuer <[EMAIL PROTECTED].za>
>  Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].edu>
>  To: [EMAIL PROTECTED]
>  Subject: Re: Maximum database volumes?
>  Date: 08/07/2004 07:19
>  Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].edu>
>
> Performance and Tuning documentation states it should not be more than 13
> volumes - not too sure how you achieve that in the mainframe world, where
> you have 2.835 GB disks (Mod 3s).
> Most shops don't really have access to Mod 9s, so from a performance
> guide point of view you cannot really go over a 36.8 GB database size -
> but on the mainframe, TSM starts performing crap before you even reach
> that size in any case.
>
> Anyhow - 13 is the recommended number of DB vols.
>
> Cheers
> Christo
>
> > Before I upgrade our TSM server to v5.2 (seems to be taking forever to
> > do due to small outage windows) I want to reorganise our database
> > volumes so they are a little more efficient. Is there any recommended
> > maximum number of database volumes a TSM database should have? Ours is
> > spread across about 20 different volumes of varying sizes - not the
> > best makeup, I am sure - hence why I want to re-arrange it beforehand.
> >
> > I've looked in the TSM Admin guide and searched IBM support, as well as
> this list, but haven't really found anything of help.
> >
> > Thanks in advance,
> >
> > Gordon
> >
> >
Re: Maximum database volumes?

2004-07-07 Thread Christo Heuer
Performance and Tuning documentation states it should not be more than 13
volumes - not too sure how you achieve that in the mainframe world, where
you have 2.835 GB disks (Mod 3s).
Most shops don't really have access to Mod 9s, so from a performance guide
point of view you cannot really go over a 36.8 GB database size - but on
the mainframe, TSM starts performing crap before you even reach that size
in any case.
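The 36.8 GB ceiling follows directly from the volume limit and the Mod 3 capacity quoted above (a quick arithmetic check, not from the original thread):

```python
# Max DB size under the 13-volume guideline with 3390 Mod 3 disks.
MOD3_GB = 2.835            # usable GB per Mod 3 volume, as quoted
MAX_VOLUMES = 13           # Performance and Tuning recommendation

max_db_gb = MAX_VOLUMES * MOD3_GB
print(f"Max DB size on Mod 3s: {max_db_gb} GB")
```

Thirteen Mod 3 volumes give just under 37 GB, matching the 36.8 GB figure in the post.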

Anyhow - 13 is the recommended number of DB vols.

Cheers
Christo

> Before I upgrade our TSM server to v5.2 (seems to be taking forever to do
> due to small outage windows) I want to reorganise our database volumes so
> they are a little more efficient. Is there any recommended maximum number
> of database volumes a TSM database should have? Ours is spread across
> about 20 different volumes of varying sizes - not the best makeup, I am
> sure - hence why I want to re-arrange it beforehand.
>
> I've looked in the TSM Admin guide and searched IBM support, as well as
this list, but haven't really found anything of help.
>
> Thanks in advance,
>
> Gordon
>
>
> --
>
> This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.



Re: Novell Cluster (San volumes not getting backed up)

2004-06-23 Thread Christo Heuer
Hi Riaan,

TSM does treat the cluster resources correctly - it is cluster aware, as
far as my requirements go.
Although you load the TSA with /cluster=off, it is cluster aware: it will
treat the disks as logical units and handle fail-over to the other nodes
in the cluster correctly.

This is discussed in great detail in the TSM 5.2 client manuals.

Regards
ChristoH

> Hi Mark,
>
> I ran into a similar problem with another backup product.
>
> As this software isn't Novell-cluster aware (which ones are?), we had, in
> the end, to run the TSA with a /nocluster option.
>
> BUT, before you rush out and do this: it presented its own problems, as
> the disks are now presented to the backup software on both physical
> nodes.
>
> We wrote a script that figured out which node was the active node and
> only backed that client up with a client-side command line (surely TSM
> will have something similar). This meant that if we wanted to do a
> restore it was a pain, as we had to "hunt" between the two possible
> backup clients to find the correct date for restore.
>
> If, however, one of your clients will "always" be the active node, then
> you should have fewer issues.
>
> I hope this helps, slightly...
>
> Regards,
> Riaan
>
>
> -Original Message-
> From: Mark Hayden [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, June 23, 2004 2:38 PM
> To: [EMAIL PROTECTED]
> Subject: Novell Cluster (San volumes not getting backed up)
>
>
> Sorry if this has been posted, but I have not had time to read my e-mail
> for a while. We just upgraded NetWare to 6.5 on all of our file servers.
> All is well except the SANs on 4 NetWare cluster servers. For starters,
> we just back up these servers the same as all our Novell servers; TSM
> does not treat them as a cluster. The problem: since we upgraded Novell
> to 6.5, TSM does not see the SAN volumes. I can add these volumes to the
> Objects line in the schedule, but otherwise it will only back up SYS.
> We are at 5.2.2.1 server code, and 5.2.2 on the Novell client. Has
> anyone run into this? Thank you for your help.
>
> Thanks, Mark Hayden
> Informations Systems Analyst
> E-Mail:  [EMAIL PROTECTED]
>



Re: TSM on zLinux

2004-06-11 Thread Christo Heuer
Hi Markus,

I did the TSM 5.2 server beta test on z-Series Linux - SUSE 9 distro.
z900, 3584 with LTO-2 drives, ESS (F20) storage.

Let me know what you want to know.
Our only issue with the whole test was support from IBM in terms
of z-Series Linux - skills seemed a bit lacking. (This might have changed
more recently - we have not pursued it further; we eventually decided
to go AIX on p-Series.)

Regards
Christo

- Original Message - 
From: "Markus Engelhard" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, June 11, 2004 2:45 PM
Subject: TSM on zLinux


Hi TSMers,

we are looking at possible platforms for TSM. Is there anybody out there
willing to share experience (on- or off-list) with TSM on zLinux? I
don't seem to be able to find much about it on the list.

Kind regards,
Markus Engelhard
IT531 Storage & Recovery
Deutsche Bundesbank
Wilhelm-Epstein-Straße 14
60431 Frankfurt am Main
Tel. +49 69 9566 -6104
email: [EMAIL PROTECTED]

__

This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorised copying, disclosure or distribution of  the material in this
e-mail or of parts hereof is strictly forbidden.

We have taken precautions to minimize the risk of transmitting software
viruses but nevertheless advise you to carry out your own virus checks on
any attachment of this message. We accept no liability for loss or damage
caused by software viruses except in case of gross negligence or willful
behaviour.

Any e-mail messages from the Bank are sent in good faith, but shall not be
binding or construed as constituting any kind of obligation on the part of
the Bank.



Re: Slow backup from Netware of small files

2004-04-27 Thread Christo Heuer
Hi Eric,

You will not get better performance than that - although, like Richard
says, it is common on all platforms, it is worse on NetWare.
You should definitely switch off compression if you had it on. The reason
you are getting worse performance (this is my opinion, anyhow) is that
NetWare runs all tasks on the first CPU in a multi-CPU box, so they queue
for that processor - even in our case, where we have four-processor
NetWare boxes.
The NetWare TSAs are all single-threaded to the first CPU - including the
TSAs that TSM uses to talk to the file system to read the data.

I've played with all the different tuning parameters you can use in TSM;
some worked better and others much worse - the best was not to use
compression at all.
We eventually settled on using FlashCopy to another LUN on the SAN,
mounting that volume on another box that was not running any workload, and
then running the backups from there.

Let me know if you need more info on the actual parameters we played with
and what effect they had on throughput.

Cheers
Christo

- Original Message -
From: "Eric Winters" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, April 27, 2004 10:46 AM
Subject: Slow backup from Netware of small files


> Dear All,
>
> ENVIRONMENT
> ==
> TSM Server 5.2 on AIX 5.1
> TXNGROUPMAX 256
> TCPWINDOWSIZE  64
>
>
> TSM Backup/Archive Client 5.2 on Netware 5
> TSANDS.NLM ver 10110.95
> TSA500.NLM ver 5.05d
> SMDR.NLM ver 6.50c
> TCPWINDOWSIZE  64
> TCPBUFSIZE 32
> LARGECOMMBUFFERS   NO
> TXNBYTELIMIT25600
> PROCESSORUTILIZATION10
> RESOURCEUTILIZATION 5
>
> Gigabit Network
>
> PROBLEM
> =
> Whilst backup performance of large files is very good (approx 16 MB/s) the
> backup performance of small files (50 KB to 300KB) is extremely poor (300
> KB/s).
> Backups are straight to a diskpool.
> I'd be interested in hearing from anyone with any experience of successful
> backup of small files in a Netware environment, or anyone who has any
ideas
> as to what might be done to improve the situation.
>
> Thanks as always,
>
>
> Eric Winters
> Sydney
> Australia



Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Christo Heuer
Yes - if you use client-side compression you will see most of your
cartridges showing close to or just over 200 GB - I have a mix of clients
doing compression and others not.
What I've noticed is that on average I'm getting between 30% and 50%
compression.
In earlier years IBM would quote a 3-to-1 compression ratio - 10/30, 5/15,
etc. I think they lowered this to a more conservative 2-to-1 - and my tape
numbers reflect this:
LTOATS  282,437.4
LTOATS  344,472.0
LTOATS  570,294.9
LTOATS  383,550.0
LTOATS  387,271.4
LTOATS  457,437.0
LTOATS  359,432.9
LTOATS  329,021.7
LTOATS  333,663.8
LTOATS  456,539.2

As can be seen, somewhere between 280 and 570 - giving an average capacity
of close to 400 GB.
On the other hand, if all my data were compressed before arriving at the
server, it would have been very close to 200 GB.
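Taking the per-volume figures above as megabytes, the average works out as follows (a quick illustrative check on the quoted numbers, not from the original thread):

```python
# Average estimated capacity from the per-volume MB figures above.
volumes_mb = [282437.4, 344472.0, 570294.9, 383550.0, 387271.4,
              457437.0, 359432.9, 329021.7, 333663.8, 456539.2]
avg_gb = sum(volumes_mb) / len(volumes_mb) / 1024
print(f"Average capacity: {avg_gb:.0f} GB")
```

The ten volumes average a little over 380 GB, i.e. close to double the 200 GB native LTO-2 capacity.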
Cheers
Christo
- Original Message -
From: "Willem Roos" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 16, 2004 10:01 AM
Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.


The whole compression issue has always confused the hell out of me (no
certification here :-). Does 200-400 mean
"200-native-,-400-if-we-hardware-compress-as-we-stream-to-tape"?
Sometimes the client may also compress? I think salespeople over the
years have greatly abused this x-2x tape cartridge capacity thing to
their advantage - you can always double up because nobody knows what
you're talking about anyway.

And you mean LZ (Lempel-Ziv) algorithm, don't you :-?

---
  Willem Roos - (+27) 21 980 4941
  Per sercas vi malkovri

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Christo Heuer
> Sent: 16 April 2004 09:26
> To: [EMAIL PROTECTED]
> Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.
>
>
> It depends on the actual type of data - take a 2 TB Oracle DB that is
> mostly empty: you will be getting a 90%+ compression ratio, so your
> LTO-2 cartridge will show the capacity used as 2000 GB.
> The typical ratio IBM used to quote was 3-to-1; in recent years they
> changed this to 2-to-1, hence the 200-400 figure. The algorithm used for
> the compression is a modified ZL algorithm - similar to the algorithm
> used for pkzip etc.
> If you send already-compressed data, your tape usage will show 200 GB or
> less - you would be getting negative compression ratios; data already
> compressed can grow if compressed again.
> So - there is no clear-cut answer: work on the native capacity (200 GB),
> and everything else you get is a bonus.
>
> Cheers
> Christo
> - Original Message -
> From: "Chandrashekar, C. R. (Chandrasekhar)"
> <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, April 16, 2004 7:50 AM
> Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.
>
>
> > This is not a puzzle for me, Actually I want to know how
> much data it can
> > compress, Is there any one using same tape library. Which helps to
> estimate
> > the total storage capacity.
> >
> > Just to know how much percentage of compression in 3583L23
> library using
> > 3580ULTGen2 drive.
> >
> > Thanks,
> > c.r.chandrasekhar
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager
> [mailto:[EMAIL PROTECTED] Behalf Of
> > Christo Heuer
> > Sent: Friday, April 16, 2004 10:51 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.
> >
> >
> > Hi,
> >
> > I see you are a Tivoli certified consultant (TSM) - maybe
> they missed that
> > part in
> > the certification exam - why don't you read up on
> compression - if this
> > puzzles you
> > I'm sure plenty other things will confuse the hell out of you.
> >
> > Christo
> > - Original Message -
> > From: "Chandrashekar, C. R. (Chandrasekhar)"
> <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Friday, April 16, 2004 6:51 AM
> > Subject: LTO tape cartridge(200GB/400GB) stores data 500GB+.
> >
> >
> > > Hi,
> > >
> > > Just for clarification, I'm using LTO-Ult tape cartridge
> having capacity
> > of
> > > 200GB/400GB, Tape library 3582L23 with two 3580-LTOG2 drives with
> firmware
> > > 38D0, and devclass was defined with device-type=LTO and
> Format=ULTRIM2C.
> > Now
> > > the tape is storing more then 500GB of data, Is it normal
> behavior.
> > >
> > > Thanks,
> > > CRC,
> > >
> > &

Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-16 Thread Christo Heuer
It depends on the actual type of data - take a 2 TB Oracle DB that is
mostly empty: you will be getting a 90%+ compression ratio, so your LTO-2
cartridge will show the capacity used as 2000 GB.
The typical ratio IBM used to quote was 3-to-1; in recent years they
changed this to 2-to-1, hence the 200-400 figure. The algorithm used for
the compression is a modified ZL algorithm - similar to the algorithm used
for pkzip etc.
If you send already-compressed data, your tape usage will show 200 GB or
less - you would be getting negative compression ratios; data already
compressed can grow if compressed again.
So - there is no clear-cut answer: work on the native capacity (200 GB),
and everything else you get is a bonus.

Cheers
Christo
- Original Message -
From: "Chandrashekar, C. R. (Chandrasekhar)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 16, 2004 7:50 AM
Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.


> This is not a puzzle for me; actually, I want to know how much data it
> can compress. Is anyone using the same tape library? That would help
> estimate the total storage capacity.
>
> I just want to know the percentage of compression in a 3583L23 library
> using a 3580ULTGen2 drive.
>
> Thanks,
> c.r.chandrasekhar
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
> Christo Heuer
> Sent: Friday, April 16, 2004 10:51 AM
> To: [EMAIL PROTECTED]
> Subject: Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.
>
>
> Hi,
>
> I see you are a Tivoli certified consultant (TSM) - maybe they missed that
> part in
> the certification exam - why don't you read up on compression - if this
> puzzles you
> I'm sure plenty other things will confuse the hell out of you.
>
> Christo
> - Original Message -
> From: "Chandrashekar, C. R. (Chandrasekhar)" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, April 16, 2004 6:51 AM
> Subject: LTO tape cartridge(200GB/400GB) stores data 500GB+.
>
>
> > Hi,
> >
> > Just for clarification, I'm using LTO-Ult tape cartridge having capacity
> of
> > 200GB/400GB, Tape library 3582L23 with two 3580-LTOG2 drives with
firmware
> > 38D0, and devclass was defined with device-type=LTO and Format=ULTRIM2C.
> Now
> > the tape is storing more then 500GB of data, Is it normal behavior.
> >
> > Thanks,
> > CRC,
> >
> > C.R.Chandrasekhar.
> > Systems Analyst.
> > Tivoli Certified Consultant (TSM).
> > TIMKEN Engineering & Research - INDIA (P) Ltd., Bangalore.
> > Phone No: 91-80-5136.
> > Email:[EMAIL PROTECTED]
> >
> >
> >
> >
> > **
> > PLEASE NOTE: The above email address has recently changed from a
previous
> naming standard -- if this does not match your records, please update them
> to use this new name in future email addressed to this individual.
> >
> > This message and any attachments are intended for the
> > individual or entity named above. If you are not the intended
> > recipient, please do not forward, copy, print, use or disclose this
> > communication to others; also please notify the sender by
> > replying to this message, and then delete it from your system.
> >
> > The Timken Company
> > **
>
>
>



Re: LTO tape cartridge(200GB/400GB) stores data 500GB+.

2004-04-15 Thread Christo Heuer
Hi,

I see you are a Tivoli certified consultant (TSM) - maybe they missed that
part in
the certification exam - why don't you read up on compression - if this
puzzles you
I'm sure plenty other things will confuse the hell out of you.

Christo
- Original Message -
From: "Chandrashekar, C. R. (Chandrasekhar)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, April 16, 2004 6:51 AM
Subject: LTO tape cartridge(200GB/400GB) stores data 500GB+.


> Hi,
>
> Just for clarification, I'm using LTO-Ult tape cartridge having capacity
of
> 200GB/400GB, Tape library 3582L23 with two 3580-LTOG2 drives with firmware
> 38D0, and devclass was defined with device-type=LTO and Format=ULTRIM2C.
Now
> the tape is storing more then 500GB of data, Is it normal behavior.
>
> Thanks,
> CRC,
>
> C.R.Chandrasekhar.
> Systems Analyst.
> Tivoli Certified Consultant (TSM).
> TIMKEN Engineering & Research - INDIA (P) Ltd., Bangalore.
> Phone No: 91-80-5136.
> Email:[EMAIL PROTECTED]
>
>
>
>



Re: AW: TSM Scheduling - a linguistic subthread

2004-03-10 Thread Christo Heuer
Hi Juraj,

I know that you know how schedules work - I've seen your replies on the
list, always accurate - so no problem in that area. It was not aimed
specifically at you; it was more about taking into consideration how TSM
works - nothing to do with linguistics. I assumed that a person who works
with TSM would have noticed that the schedule either did not kick off at
all, or kicked off later than expected (which I assumed was what Mike
observed).

But it is also like they say - assumption is the mother of all .

Cheers
Christo
==


Hi christo,

I know very well about how schedules work :-)
I asked you a linguistic question.
I try to be more precise:

My understanding of Mike's sentences per se is:
The schedules did not run until 02:41.
The schedules may have run later,
but Mike is either not sure or not exact on this point.

Your understanding is: the schedules did run at 02:41.
Is this fully clear from Mike's sentences?
Or is it a conclusion from the sentence as written PLUS
your assumption based on your general TSM knowledge,
while the English sentence alone,
without using TSM knowledge to transform syntax into semantics,
would result in my version?

:) Juraj


-Original Message-
From: Lawrence Clark [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 10 March 2004 15:49
To: [EMAIL PROTECTED]
Subject: Re: AW: TSM Scheduling


Hi Juraj,

The backups will attempt to start any time within the window set by the
duration.

So if you have your backups scheduled to start at 1 AM with a duration of
2 hours, the backups could start as late as 3 AM. They don't have to
complete within the duration window - just start.

>>> [EMAIL PROTECTED] 03/10/2004 9:26:42 AM >>>
Hi christo,

this is a question to improve my English:

> My backups are scheduled to start at 01:00, but didn't start till 02:41
> Monday & 01:21 today. Any idea why?

Does it say for sure that the schedule ran at 02:41,
or is that only a probable explanation?

regards
Juraj



-----Original Message-----
From: Christo Heuer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 10 March 2004 10:45
To: [EMAIL PROTECTED]
Subject: Re: TSM Scheduling


Simple answer - the fact that the schedule actually ran means there IS an
association - the fact that it does not run at the set time has to do with
the schedule randomization % that is set on the TSM server and the length of
the start-up window for the schedule for this client.
Simple example:
Schedule randomization=25%
Length of start-up window is 1 hour.
Client schedule is set at 01:00.

The randomization will cause the schedule to physically kick off anywhere
between 01:00 and 01:15. (25% of hour).

Hope this explains it.

Cheers
Christo

==
If it happens to me, I know I have again forgotten to define an
association.

Check with Q EVE whether you really have your schedules scheduled.
If not, look for an error in your schedule/association definitions.
If yes, look for an error in Q ACTL and in dsmsched.log/dsmerror.log on
your clients - assuming your client TSM schedulers are running at all :-)

regards
Juraj


-Original Message-
From: VANDEMAN, MIKE (SBCSI) [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 10 March 2004 02:31
To: [EMAIL PROTECTED]
Subject: TSM Scheduling


My backups are scheduled to start at 01:00, but didn't start till 02:41
Monday & 01:21 today. Any idea why?

Mike Vandeman
510-784-3172
UNIX SSS
> (888) 226-8649 - SSS Helpdesk
>
>

__

E-mail Disclaimer and Company Information

http://www.absa.co.za/ABSA/EMail_Disclaimer



Re: Can't define IBM3583-L72 LTO library to TSM

2004-03-05 Thread Christo Heuer
Hi Zoltan,

When you define a path to /dev/smc0 you are defining a path to the
library - not the drives - so even if the drives respond fine to the
tapeutil command you can still have problems with the library; the
drives can function without the library when you use tapeutil.

Check whether the library is in offline/paused mode - the lsdev command
will just show that the device is configured and visible - but when TSM
wants to talk to the library it checks additional things before defining the
path. Are you actually defining the path to /dev/smc0 as a library path or a
drive path? What parameter are you using for the desttype - tape or library?

If you are specifying library then it sounds like your library is in paused
mode - check via the 3583's web interface what the problem is.

Cheers
Christo

> I have spent hours and hours trying to get TSM server 5.2.2.0 on AIX 5.1
> to define the PATH to the above tape library.
>
> Every time I click on defining the PATH (/dev/smc0) to this library, TSM
> goes off and spins for a while, eventually coming back with some kind of
> error about being unable to communicate with the device.
>
> I have repeately removed and rediscovered the library and drives
> (restarted the TSM server numerous times).
>
> Played with tapeutil. It can successfully open the tape drives themselves,
> no problem.
>
> LSDEV says everything is happy:  smc0 Available 1D-08-01 IBM 3583 Library
> Medium Changer (FCP)
>
> So, what am I missing ?  What am I doing wrong ?



Re: any possibility to make a copy of a dbbackup tape?

2004-01-26 Thread Christo Heuer
Hi Paul,

We do a backup of the DB to a file device class and then FTP this
file to a remote server. Depending on your TSM server DB size and network
it might or might not be feasible.

Regards
Christo
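For illustration, a rough sketch of that flow in Python - the device class name, file names, and credentials here are all hypothetical, and a real setup would drive dsmadmc with proper authentication:

```python
import ftplib

def backup_db_command(devclass):
    # Administrative command handed to dsmadmc for a full DB backup
    # to a FILE device class (run via subprocess in a real script).
    return ["dsmadmc", f"backup db type=full devclass={devclass}"]

def ship_offsite(local_path, host, user, password):
    # Push the resulting database backup volume to the remote
    # server over FTP in binary mode.
    with ftplib.FTP(host, user, password) as ftp:
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {local_path.rsplit('/', 1)[-1]}", f)

print(" ".join(backup_db_command("FILEDEV")))
```

As the message says, whether this is workable depends on your DB size and network; a multi-gigabyte database over a slow link makes the FTP step the bottleneck.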

> Hello everbody,
>
> is there any possibility to make a copy of a dbbackup tape?
>
> regards
> Paul


Re: 3583 LTO Library

2003-11-20 Thread Christo Heuer
Anyone running a 3583 with LTO-2 drives on Win2K?

Something that might be pertinent to the I/O errors that this
thread originally discussed is the SCSI adapter being used - if not
fibre connected.
We implemented a 3583 with LTO-2 drives recently and were
experiencing terrible performance - including intermittent I/O errors
and tapes going read-only or unavailable. Performance was OK with
disk-to-tape migration, but tape-to-tape copy was extremely slow:
32Gig of data would take in excess of 16 hours to copy.

After upgrading the TSM server code to 5.2 and seeing no difference in
performance, we upgraded firmware. With everything updated to the
latest and greatest, we still did not see any improvement.
The only other variable was the SCSI card and cable - the card being
what IBM supports in their xSeries servers - an Adaptec 29160.
Well - after searching the Knowledge Base on the Adaptec site we
got a hit on performance problems with CD writers and some backup
tape units - not very specific - but it was worth a shot.
After downloading the updated Adaptec driver - dated Nov 2002 -
the performance increased dramatically. We now do tape-to-tape copy
at the real speed the LTO-2 is rated for - even better:
we now do the same 16Gig file in about 6 minutes compared to 16 hours.
So far we have not seen ANY I/O errors again.
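Rough arithmetic behind those figures (treating "Gig" as GiB; the message quotes 32 Gig for the slow copy and 16 Gig for the fast one):

```python
def mib_per_sec(gib, seconds):
    # Average throughput in MiB/s for copying `gib` GiB in `seconds` seconds.
    return gib * 1024 / seconds

before = mib_per_sec(32, 16 * 3600)  # the slow tape-to-tape copy
after = mib_per_sec(16, 6 * 60)      # after the Adaptec driver update
print(round(before, 2), round(after, 1))  # 0.57 45.5
```

Roughly 0.6 MiB/s before versus about 45 MiB/s after - the latter is above LTO-2's native rate, consistent with the "even better" remark (presumably due to compression).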

So - don't just blame the 3583/3580 hardware and media - look at the
rest of the server as well.

If you have this type of card - do yourself a favour and get the updated
driver.

Cheers
Christo


- Original Message -
From: "Prather, Wanda" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, November 19, 2003 6:00 PM
Subject: Re: 3583 LTO Library


> Interesting.  We've had no such problems.
> One 3583 bought in April, another in July.
> Maybe it's a later build; also, these 2 aren't terribly stressed.
>
>
>
> -Original Message-
> From: Mark Bertrand [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2003 8:56 AM
> To: [EMAIL PROTECTED]
> Subject: Re: 3583 LTO Library
>
>
> Yes, we used to get I/O errors on our 3583, although not sure if it was the
> same exact OP code. Our fix: we replaced our 3583 with a 3584 after trying
> EVERYTHING else - hardware changes, microcode, firmware. We also had many
> other errors/problems, along with many other list users to help build our
> case to get out of the 3583. Just do a search for 3583, or 3583 + problems,
> or 3583 + I/O, at search.adsm.org and start reading; you will see you're
> not alone.
>
> Good Luck,
> Mark Bertrand
>
> -Original Message-
> From: Crawford, Lindy [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2003 7:43 AM
> To: [EMAIL PROTECTED]
> Subject: 3583 LTO Library
>
>
> Hi TSMer's,
>
> Please could you assist me: we have been having continuous errors on our
> library above and have updated all microcode and firmware, but still no joy.
>
> I get the following error:
>
> 11/19/2003 15:11:39   ANR8300E I/O error on library IBM3583 (OP=8401C058,
>CC=304, KEY=04, ASC=15, ASCQ=83,
>SENSE=70.00.04.00.00.00.00.0A.00.00.00.00.15.83.7C.00.00.00.,
>Description=Changer failure).  Refer to Appendix D in the 'Messages'
>manual for recommended action.
>
> Has anyone gotten the error above, and what have you done to resolve it?
>
> Thank you for all your assistance.
>
>
> Lindy Crawford
> Information Technology
> Nedbank Corporate - Property & Asset Finance
> *+27-31-3642185
>  +27-31-3642946
> [EMAIL PROTECTED] < mailto:[EMAIL PROTECTED]
>  >
>
>
>
>
>
>
>



Re: Number of Sessions?

2003-11-04 Thread Christo Heuer
Hi John,

This looks more like an IP problem than a sessions issue - the fact that
your TCP/IP driver cannot initialize is the problem. I remember a long time
ago a similar problem existed - more or less in ADSM V3 - but it was
limited to specific hardware devices connecting the OS/390 system to your
LAN - I think it was a problem specific to Cisco devices.
We've never had the problem - which might have something to do with the
fact that we run IBM OSA adapters - previously IBM MAEs.

But coming back to your questions - it has something to do with your IP
driver - do you see any errors in the TCPIP STC?
Normally this message comes up when the IP STC has been bounced.
The number of sessions includes the control session and the data session.

Cheers
Christo

- Original Message -
From: "John Naylor" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, November 04, 2003 3:22 PM
Subject: Number of Sessions?


> Any help welcome,
> I am currently seeing an intermittent problem with my TSM 4.2.2 OS/390
> server refusing to accept any new sessions after seeing these messages:
>
> ANRD BPX1COMM(1978): ThreadId<20025> BindListenSocket: ANRBND(2) failed:
> rv=-1 rc=1115 rsn1=744C rsn2=7247
> ANR5099E Unable to initialize TCP/IP driver - error binding acceptor
> socket 55 (rc = 1).
>
> TSM has to be recycled before new sessions can connect.
> IBM support has been limited to saying that there is no existing fix
> for our problem and that tracing the problem further will need an upgrade
> to a supported code level.
> We will be doing exactly that, but not for a few weeks.
> In the meantime I wonder if anyone else has seen similar.
> We are not exceeding the maximum scheduled sessions and I have increased
> region size,
> but the problem is still there. The hit occurs shortly after a burst of
> new scheduled sessions connects to the host server.
> The maximum number of connected sessions I have seen before a hit is 62,
> against a maximum scheduled sessions value of 80% of maxsessions 100.
> Client resource utilization is the default 2, so a subsidiary question is:
> do both client sessions count in the number of scheduled sessions? If not,
> then the maximum number of scheduled sessions connected would only be 31.
>
> thanks,
> John
>
>
>



Re: q vol f=g ??!?

2003-08-22 Thread Christo Heuer
Hi Sascha,

You have three options for output format when issuing commands to TSM.
Without any format option you get normal output; with f=d you get detailed
info. The third option is format=gui.
In the old days of ADSM there was a GUI that worked/works a whole lot better
than the Web GUI. When this interface sends commands to TSM it sends them
with format=GUI - and interprets the output to be displayed in the GUI.

Just BTW - are any of you guys still using the GUI?
I've got 4.2.6/5.1.5/5.1.6 TSM servers and the GUI still works fine - only
when it gets to listing storage pools it stopped working after we upgraded
to 5.1.5.

Anyhow - the f=g format is used by the old GUI.

Regards
Christo
--
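As an illustration of those paired fields, a small sketch that matches each plain field in the f=g output with its numeric "(GUI)" twin - the sample lines are taken from the output quoted below:

```python
sample = """Volume Status: Full
Volume Status (GUI): 2
Access: Read/Write
Access (GUI): 0"""

def gui_pairs(text):
    # Parse "Key: Value" lines and pair each plain field with the numeric
    # code the old GUI used, i.e. the matching "Key (GUI): n" line.
    fields = {}
    for line in text.splitlines():
        key, _, val = line.partition(":")
        fields[key.strip()] = val.strip()
    return {k: (v, fields.get(k + " (GUI)"))
            for k, v in fields.items() if not k.endswith("(GUI)")}

print(gui_pairs(sample))
# {'Volume Status': ('Full', '2'), 'Access': ('Read/Write', '0')}
```

This just shows the shape of the data; the actual meaning of each numeric code was internal to the old GUI.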

> Hi to all the *SMers out there !
>
> I just stumbled over the output of "q vol f=g"; actually it was a typo. I
> got no error message but an output:
>
> -
> Volume Name: FRE660L1
> Storage Pool Name: LTOPOOL
> Device Class Name: LTOCLASS
> Estimated Capacity (MB): 130,177.5
> Pct Util: 20.1
> Volume Status: Full
> Volume Status (GUI): 2
> Access: Read/Write
> Access (GUI): 0
> Pct. Reclaimable Space: 79.9
> Scratch Volume?: Yes
> Scratch Volume? (GUI): 1
> In Error State?: No
> In Error State? (GUI): 0
> Number of Writable Sides: 1
> Number of Times Mounted: 76
> Write Pass Number: 1
> Approx. Date Last Written: 08/10/2003 05:50:42
> Approx. Date Last Read: 08/22/2003 13:52:56
> Date Became Pending:
> Number of Write Errors: 0
> Number of Read Errors: 0
> Volume Location:
> Last Update by (administrator):
> Last Update Date/Time: 08/05/2003 17:29:55
> -
>
> Does anyone know what these "(GUI)"-outputs mean ? Or what they are good
for
> ?
>  f=g does also work with "q ses" and produces similar outputs
also
> containing this (GUI)-stuff. I didn't try any other commands yet.
>
> Server is TSM 5.1.7.0 running under Windows2000AdvSrv.
>
> Greetings,
>
> Sascha Askani



Re: Recover TSM Database with ERROR

2003-08-22 Thread Christo Heuer
Definitely read the manual as explained by Andy already!

Cheers
Christo
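For readers hitting the same ANR1437E: a restore db has no database to read device definitions from, so dsmserv reads them from a device configuration file. A minimal sketch of what that involves - the file path is an assumption, and the directory is taken from the poster's own output:

```
* dsmserv.opt - point the server at a device configuration file
DEVCONFIG /usr/tivoli/tsm/server/bin/devconfig.out

* devconfig.out - kept current with the BACKUP DEVCONFIG command;
* dsmserv restore db reads device definitions from here, e.g.:
DEFINE DEVCLASS FILE_DEVICE_CLASS DEVTYPE=FILE DIRECTORY=/yszhang/tsmdb
```

If no such file exists (or the DEVCONFIG option is missing), the restore has no way to find the backup volumes, which is consistent with the error below.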

> Hello, All
>
> For now I can back up the TSM database successfully; however, I cannot do
> a successful database restore, and the following is the command result.
> Please see it:
>
> === Backup database successful ===
>
> tsm: TSM>backup db type=full devclass=file_device_class
> ANR2017I Administrator ADMIN issued command: BACKUP DB type=full
devclass=file_device_class
> ANR0984I Process 3 for DATABASE BACKUP started in the BACKGROUND at
00:32:00.
> ANR2280I Full database backup started as process 3.
> ANS8003I Process number 3 started.
>
> tsm: TSM>ANR8340I FILE volume /yszhang/tsmdb/61537520.DBB mounted.
> ANR1360I Output volume /yszhang/tsmdb/61537520.DBB opened (sequence number
1).
> ANR4554I Backed up 512 of 557 database pages.
> ANR1361I Output volume /yszhang/tsmdb/61537520.DBB closed.
> ANR4550I Full database backup (process 3) complete, 557 pages copied.
> ANR0985I Process 3 for DATABASE BACKUP running in the BACKGROUND completed
with completion state SUCCESS at
> 00:32:01.
>
>
> == The following is my attempt to restore the TSM database, with the error
> messages. ==
> # ./dsmserv restore db
> ANR7800I DSMSERV generated at 08:00:25 on Dec  7 2000.
>
> Tivoli Storage Manager for AIX-RS/6000
> Version 4, Release 1, Level 2.0
>
> Licensed Materials - Property of IBM
>
> 5698-TSM (C) Copyright IBM Corporation 1990,2000. All rights reserved.
> U.S. Government Users Restricted Rights - Use, duplication or disclosure
> restricted by GSA ADP Schedule Contract with IBM Corporation.
>
> ANR0900I Processing options file /usr/tivoli/tsm/server/bin/dsmserv.opt.
> ANR000W Unable to open default locale message catalog, /usr/lib/nls/msg/C/.
> ANR8200I TCP/IP driver ready for connection with clients on port 1500.
> ANR0200I Recovery log assigned capacity is 108 megabytes.
> ANR0201I Database assigned capacity is 116 megabytes.
> ANR0306I Recovery log volume mount in progress.
> ANR1437E No device configuration files could be used.
> #
>
> 'No device configuration files could be used.' - what does this mean,
> and what should I do in this case?
> Thank you !
>
> Best Regards!
> Eric
>
>
> --

>
> Eric Zhang (Yongsheng Zhang)
>
> Beijing Visionsky Information Technology Co.,Ltd.
> Tel: 8610-88091533/4/5 Ext. 212
> Fax: 8610-88091539
> Mobile Phone: 13601030319
> E-mail: [EMAIL PROTECTED]
> Web Site: http://www.visionsky.com.cn
>
> Room 830,Building B,Corporate Squrare,
> NO.35 Finance Street Xicheng district,
> Beijing, 100032, P.R.China
>
>
>
>
>
>

__
"The information contained in this communication is confidential and
may be legally privileged.
It is intended solely for the use of the individual or entity to
whom it is addressed and others authorised to receive it.
If you are not the intended recipient you are hereby notified
that any disclosure, copying, distribution or taking action in
reliance of the contents of this information is strictly prohibited
and may be unlawful. Absa Bank Limited
(registration number :1986/004794/06) is liable neither for the
proper, complete transmission of the information contained in this
communication, nor for any delay in its receipt, nor for the
assurance that it is virus-free."


Re: Strange backup behavior

2003-08-06 Thread Christo Heuer
Hi Craig,

Are you not looking at the System Object part of the backup???
The System Object backup takes each file in system/system32 and backs
it up during an incremental backup - irrespective of whether the file
has changed.

That is all I can think of to cause this type of behaviour.

Regards
Christo


I have a W2K box that has been backing up just fine for months. Nothing
has changed except the addition of Service Pack 4. However, whenever I
run a backup the client backs up every single file even though it's
executing an incremental. As a test we even manually did an incr on c:\,
waited for it to complete, then did it again; it backed up every file.
Client version is 5.1.0 (I realize there's an upgrade, but it has worked
for us so far) and the server version is 5.1.6.1.

Any thoughts would be greatly appreciated.

Craig Riley
The Children's Hospital in Denver






Re: ITSM Operational Reporting Technology Preview

2003-07-18 Thread Christo Heuer
Hi Shannon,

You do not mention the platform you are running the reports on - those
types of messages are normally related to something the application tries
to do that Windows does not like.

Cheers
Christo


> I find that half the SQL Selects do not work with MVS, but I can go in and
> modify the custom report to reflect Selects that do work.  The big problem
> I have is that I get the following error every time the Hourly Monitor
> runs:
> --

> syshost.exe - Application Error
> The instruction at "0x77cb17f" referenced memory at "0x021b0013".  The
> memory could not be "read".
> Click on OK to terminate the program
> Click on CANCEL to debug the program
> --

> Since I have no idea what it means, I hit CANCEL to debug the program and
> my PC makes noises like it's doing something serious; then the error
> message goes away until the next time the Hourly Monitor runs.  I have no
> idea where to look for debug results.  I sent feedback to ITSM support
> with the information but I have no idea if they'll address it or not.
>
>
>
>
>   From: "Wayne T. Smith" <[EMAIL PROTECTED]>
>   Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>   To: [EMAIL PROTECTED]
>   Date: 07/17/2003 10:31 AM
>   Subject: Re: ITSM Operational Reporting Technology Preview
>   Please respond to "ADSM: Dist Stor Manager"
>
>
>
>
>
>
> E Mike Collins wrote:
> > TSM operational reporting is a simple tool designed to help you keep TSM
> > running smoothly on a day-to-day basis. It runs on Windows and supports
> TSM
> > servers on all platforms....
>
> Fwiw, (as might be expected) it does not run on (the very old VM)
> Version 3.1.2, as it uses a number of tables unavailable in 3.1.2
> (summary, events, etc.).   cheers, wayne



[no subject]

2003-07-03 Thread Christo Heuer
Hi Gianni,

Welcome to the list - maybe give us some background info of
your environment - hardware, number of servers etc.
That should get the discussion going...

Regards
Christo


Hello everybody.

My name is Gianni Pierini and I'm a new subscriber of the list.


Re: anyone using ATA disk systems

2003-06-09 Thread Christo Heuer
Hi John,

There is currently an alternative that bypasses the limitations/issues
Andrew brought up...

A company by the name of Diligent offers a product called
CopyCross, which takes a good approach to the problems of a very
large disk pool in TSM as a replacement for tape:
they basically simulate a DLT tape library with a Linux server - you then
just define the library to TSM and send your data to it.
This solution writes the data to virtual drives/tapes - plus
you can define as many virtual drives as you require. The only issue
is that it currently only works with EMC Symmetrix/CLARiiON disk
subsystems, but what makes it good is that you can make use of
the replication facilities (SRDF) of the EMC boxes. So you end up either
using TSM's tape copy functionality for DR, or you just let the EMC box
mirror the virtual tape data to another site...
This solution keeps the "tapeness" of the data without the drawbacks of
tape (slow, unreliable, etc.).
Christo
--
Mr. Raibeck,
  I appreciate your feedback and, being involved in IT for some 30 years, I
understand the technical challenges involved. That is why I posted the
question. With the falling cost of disk architecture, a disk to disk backup
alternative seems to be coming close to rivaling disk to tape as a backup
alternative. Especially when dealing with some of the more sophisticated
tape solutions that involve the mainframe/zArchitecture.
  If I size my TSM solution such that I can recover 'x' number of
application servers in a given number of hours, then I will require a
certain number of tape drives based on the data transfer rate of each tape
drive.  Hence, any recovery process in DR mode is limited to the number of
tape drives available. Ouch!!! With disk to disk, my limitation is the TSM
server and the network.
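That sizing argument can be sketched as simple arithmetic (my own illustration; the drive rate and data volumes are assumed example values):

```python
import math

def drives_needed(total_gb, window_hours, drive_mb_per_sec):
    # Tape drives required to restore `total_gb` GB within `window_hours`,
    # assuming each drive streams at `drive_mb_per_sec` MB/s and restores
    # run in parallel across drives.
    per_drive_gb = drive_mb_per_sec * 3600 * window_hours / 1024
    return math.ceil(total_gb / per_drive_gb)

# e.g. 2 TB of server data, an 8-hour DR window, drives streaming at 30 MB/s:
print(drives_needed(2048, 8, 30))  # 3
```

The point being made above is that the drive count is a hard ceiling in DR mode: with disk-to-disk, the equivalent ceiling moves to the TSM server and the network instead.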
  Add to the mix the fact that tape data transfer speeds are less than SCSI
and/or ATA data transfer speeds, and that the capacity of disk architectures
is increasing rapidly while the price is currently lower than tape, and it
makes sense to back up everything to disk!!! Faster and lower cost!
So, I/we would appreciate IBM Tivoli's support of this concept.
  With all the pressure on budgets and all these jobs NAFTAing, I MUST
arrive at the best solution! Thanks for your consideration.

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com


-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]
Sent: Monday, June 09, 2003 1:33 PM
To: [EMAIL PROTECTED]
Subject: Re: anyone using ATA disk systems


Addendum: I as I said earlier, we continue to study the matter. Possible
outcomes include enhancements that will enable TSM to function better in
an all disk storage pool environment, although we make no commitments at
this time.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
06/09/2003 10:25
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: anyone using ATA disk systems



We have not done a lot of testing on this, so this can not be taken as a
definitive statement. With that said, the following sums up our current
thoughts on this:

- Because there is no reclamation for random access storage pools, (a)
disk fragmentation is definitely a concern, and (b) aggregates are not
rebuilt, so as objects within an aggregate expire, that space is not freed
up until all objects in the aggregate have expired. This can cause
inefficient utilization of the disk space over time.

- FILE device classes could be used, but represent configuration and
performance concerns.

- While such an environment is technically possible, it is not the
intended TSM usage model, and we do not recommend it at this time.

- We continue to study this issue.

Regards,

Andy



Steve Schaub <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
05/23/2003 07:08
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject

Re: Database page size?

2003-01-16 Thread Christo Heuer
Nope - you are not able to change this as far as I know...

Regards
Christo
=

Can you change it? I have 4096 bytes.

Kind regards



Henrik Wahlstedt | Statoil | IT EH SE
Torkel Knutssonsgatan 24, 118 88 Stockholm, Sweden
Phone: +46 8 429 6325 | Mobile: +46 70 429 6325
E-mail: [EMAIL PROTECTED]






From: Farren Minns
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
cc: (bcc: Henrik Wahlstedt)
Date: 2003-01-16 11:37
Subject: Database page size?
Please respond to "ADSM: Dist Stor Manager"

Hi TSMers

What is the default size of a page within the database?

Thanks

Farren - John Wiley & Sons Ltd









Re: HELP: MVS TSM Server Hanging on Boot

2003-01-08 Thread Christo Heuer
Hi Mark,

I had a problem like this a few months back - waited about an
hour for the server to start up properly - it got to exactly the
same place yours did. After I halted it a 2nd time it came up
relatively quickly - about 20 minutes. I think the fact that
you have an 11Gig recovery log volume will contribute quite a
lot to the time it takes.
We have a 6Gig recovery log and, once the log gets full, startup
takes quite long, as I've stated.
The next time it happens the best thing to do is NOT to halt your
server but to keep it going - all client transactions will
fail because the recovery log is full - but that is fine - then
you just start a database backup from the MVS/TSO console and
wait for it to complete. There is an undocumented command,
'ckpt force', which normally clears up the log in critical
situations like this...

Cheers
Christo
==
This morning we noticed our TSM recovery log was ~97-100% full.
The server was still up but we couldn't log in. We halted the server,
increased the recovery log, and are trying to bring the server back up.

The problem is the server is stuck in Recovery log undo pass in progress.

10.36.06 STC24351  ANR0900I Processing options file dsmserv.opt.

10.36.07 STC24351  ANR0990I Server restart-recovery in progress.

10.36.30 STC24351  ANR0200I Recovery log assigned capacity is 11780
megabytes.
10.36.30 STC24351  ANR0201I Database assigned capacity is 53728 megabytes.

10.36.30 STC24351  ANR0306I Recovery log volume mount in progress.

10.44.07 STC24351  ANR0353I Recovery log analysis pass in progress.

11.57.38 STC24351  ANR0354I Recovery log redo pass in progress.

12.52.38 STC24351  ANR0355I Recovery log undo pass in progress.

12.52.40 STC24351  ANR0362W Database usage exceeds 88% of its assigned capacity

Can anyone please advise?

Mark Hokanson
Thomson Legal and Regulatory



Re: LAN free backup for Novell

2002-12-12 Thread Christo Heuer
Hi Paul,

I agree with you about small files and LAN-Free - BUT we have
a few NetWare servers with large Btrieve databases, so I'm sure
there is a need for LAN-Free on NetWare. That brings me to the
second part, NetWare and backups. The unfortunate thing with
NetWare and small files is this:
You will NEVER get even close to LAN-Free speed with the
TSM client (or any other backup tool currently, for that matter)
when backing up smaller files on NetWare.
It does not matter that you have a gigabit network - you will
not be able to exploit it.
The best speed I was able to get with files averaging 27 KB was
4 MB/s - I promise you this is as close as the IBM labs also
got to gigabit speed on NetWare with small files.
The best we've seen was close to 30 MB/s when using larger
files of 250 MB and bigger.

The NetWare modules that Novell provides for backup tools to
use are bound to a single CPU - as is much of the rest of the
OS - so your bottleneck is the CPU on NetWare, and believe me,
TSM will push that CPU close to 100% if you have fast disk
and a fast network.
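The throughput figures above can be turned into a rough back-of-the-envelope estimate. This sketch is my own arithmetic, not from the original post; the rates are the observed 4 MB/s (small files) and 30 MB/s (large files):

```python
def hours_to_move(total_gb: float, rate_mb_per_s: float) -> float:
    """Hours needed to move total_gb at a sustained rate of rate_mb_per_s."""
    total_mb = total_gb * 1024
    return total_mb / rate_mb_per_s / 3600

# Small files on NetWare: ~4 MB/s observed; large files: ~30 MB/s observed.
small = hours_to_move(800, 4)    # ~56.9 hours for an 800 GB volume
large = hours_to_move(800, 30)   # ~7.6 hours for the same volume
print(round(small, 1), round(large, 1))
```

The same data volume takes roughly 7.5 times longer at small-file speeds, which is why the per-file CPU overhead matters far more than the raw network bandwidth.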

Cheers
Christo
=

The reality is this: file system backups of small files are not a good
candidate for SAN tape backups. NetWare does not run any large file
databases, so there is no SAN storage agent requirement. The answer is
gigabit between your TSM server and the client; let the TSM server stream
the data to the tape drives. You will get as good performance as, if not
better than, a SAN backup would ever provide.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 11, 2002 8:34 AM
To: [EMAIL PROTECTED]
Subject: LAN free backup for Novell


Hi, experts,

I'm trying to implement LAN free backup feature with TSM 4.2 on AIX 4.3.3
for client nodes with Novell servers. These NetWare servers are fibre
connected to EMC box through SAN switches. TSM support told me there is no
Storage Agent support for Netware platforms. Does anyone backup Novell
servers through Fibre Channel? What would be your suggestion to backup to
tape library that resides on SAN? Thanks again for your help!!





Jin Bae Chi (Gus)
System Admin/Tivoli
Data Center
614-287-5270
614-287-5488 Fax
[EMAIL PROTECTED]




Re: Network drive backup with Storage Agent

2002-12-12 Thread Christo Heuer
Hi Jin,

Something similar to what we've done - except we didn't go
the LAN-Free route. We installed the 32-bit Windows client,
mapped the NetWare volumes to the Windows 2000 box, and then
did a TSM LAN backup to our TSM server. I'm not too sure
whether the storage agent on Windows will work with a network
drive - it might not even recognize the drive.

The only problem we have with the mapped drives backing up
through Windows is when our NetWare cluster (6 nodes) does a
fail-over: the Windows 32-bit client actually places a lock on
the NSS volume for the files it is busy with, and this prevents
NSS from doing a fail-over.
(Before someone asks 'But why are you not using the native
NetWare TSM client on the NetWare box for the backup?' - it
is a case of the NetWare client being bound to a single CPU,
and the backup pushing that CPU to 100% when there is other
workload on the box.)

So - I don't think the LAN-Free route will work - maybe
someone else has tried this in their environment?

Regards
Christo
=
Hi, TSMers,

This is kind of followup question I opened previously with 'LAN free
backup of Novell'; I found out TSM doesn't have Storage Agent for
NetWare platform.

As alternative solution, I plan to install Storage Agent on a Windows
2000 client and backup all network drives ( G: H: I: J: etc) to do LAN
free backup to tape library, 3575.  As we are Novell shop, I wonder if I
can backup data with NSS format from network drives (actual data resides
on EMC Clariion box) and restore them to original place without
corrupting the data, using Windows as client platform? Any comment will
be appreciated. Thanks.








Jin Bae Chi (Gus)
System Admin/Tivoli
Data Center
614-287-5270
614-287-5488 Fax
[EMAIL PROTECTED]



Re: 4.2.3 Upgradedb slow

2002-12-06 Thread Christo Heuer
Hi Sam,

Definitely something else wrong - it should only take a second or
two.
Maybe check your OS/390 side for contention/problems - do a GRS
display to see if TSM is waiting on any resources.

Regards
Christo

+++
I have just installed the 4.2.3 server upgrade and have been running the
UPGRADEDB process for almost 3 hours now.   Has anyone else experienced
this and, if so, how long will this take.  In the past, it has only run
a couple of minutes.

This is an upgrade from 4.2.2 on OS/390 R10.  DB is about 11GB 85% used.

Thanks
Sam Sheppard
San Diego Data Processing Corp.
(858)-581-9668



Re: performace figure Gigabit network

2002-11-12 Thread Christo Heuer
Go to a disk staging area first - then stream at good speed
to your LTO...

Cheers
Christo
=
Dear Marc,

I am running W2K, dual P4, 1 GB memory, SCSI-based 3583.

The network is gigabit.

Right now I am tuning:

tcpwindowsize
bufpoolsize
txnbytelimit

movesizethresh
movebatchsize
txngroupmax
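As a hedged note on where those options live (the values below are examples only, not recommendations): TCPWINDOWSIZE can be set on both client and server, TXNBYTELIMIT is a client option (dsm.opt), while BUFPOOLSIZE, TXNGROUPMAX, MOVEBATCHSIZE and MOVESIZETHRESH belong in the server options file:

```
* dsm.opt (client) - example values only
TCPWINDOWSIZE  63
TXNBYTELIMIT   25600

* server options file - example values only
BUFPOOLSIZE    32768
TXNGROUPMAX    256
MOVEBATCHSIZE  256
MOVESIZETHRESH 500
```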

The filesystem is filled with small files from 256 KB to 15 MB.

The highest performance figures were:

2,126.68 network
886.00 aggregate

FTP does 20 MB a sec ???

So maybe my LTO is performing poorly

(give me Magstar any day for small files)

Any suggestions?

Best Regards,

Koen Willems






>From: Mark Stapleton <[EMAIL PROTECTED]>
>Reply-To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
>To: [EMAIL PROTECTED]
>Subject: Re: performace figure Gigabit network
>Date: Mon, 11 Nov 2002 16:29:51 -0600
>
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@;VM.MARIST.EDU]On Behalf Of
>Koen Willems
> > Can anybody give me an performance figure on restores speeds of a w2k
> > user directory with lots of small files on a Gigabit ethernet work.
> > I am tunnin resores and want to know wich performance is to be
> > expected on gigabit...
> > W2k + LTO + GB
>
>As with any performance question, it entirely depends upon the file and
>network environment. There are so many variables that asking a generic "how
>fast should it be?" is rather a pointless pursuit.
>
>Instead, follow good practices all around. Tune your server OS to optimum
>file I/O, clear your network of bottlenecks with best practice standards,
>and schedule TSM activities to even out the load as much as possible. If
>you're buying a new server for TSM, buy one with decent memory and CPU
>resources.
>
>If you don't do these things (or don't know how), no amount of fiddling
>with
>TSM is going to give you quality throughput speeds. The #1 reason (in my
>experience) for poor throughput in TSM is bad and/or poorly maintained
>networks.
>
>--
>Mark Stapleton ([EMAIL PROTECTED])
>Certified TSM consultant
>Certified AIX system engineer
>MCSE






What's happened to Richard Sims?

2002-10-10 Thread Christo Heuer

Does anyone know what has happened to Richard?

He used to be very active on the discussion list.
Had no posts for a long time now

Cheers
Christo




Re: dsmc Manual

2002-09-27 Thread Christo Heuer

Yeah,

Type 'help' at your dsmc> prompt.

Cheers
Christo
---
Hi all,
Can anybody tell me where there is a manual for dsmc and its options?
Thanks





Estanislao Sanmartín Rejo
[EMAIL PROTECTED]




Re: TSM and multiprocessing

2002-09-24 Thread Christo Heuer

Hi Sascha,

This holds true for all the OSes except MVS/OS/390/z/OS - on the
mainframe you need to set the multithreading option to YES to
enable the use of multiple CPs.
If it is set to NO, TSM will only use one CP.
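On the OS/390 / z/OS server this is a server options file setting. A hedged sketch only - MPTHREADING is my recollection of the option name, so check your platform's Administrator's Reference before relying on it:

```
* server options (OS/390): allow the server to use multiple CPs
MPTHREADING YES
```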

Cheers
Christo
--


The multiprocessing capabilities in TSM are built in and do not need to
be configured.  IBM recommends that a TSM server be installed on a
system with 4 CPUs at most.


--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant - ADSM/TSM
eServer Systems Expert -pSeries HACMP

AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
[EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Sascha Braeuning
Sent: Tuesday, September 24, 2002 7:41 AM
To: [EMAIL PROTECTED]
Subject: TSM and multiprocessing

Hello TSMers,

is TSM Server 4.2 able to do multiprocessing? How can I configure it?

MfG
Sascha Bräuning


Sparkassen Informatik, Fellbach

OrgEinheit: 6322
Wilhelm-Pfitzer Str. 1
70736 Fellbach

Telefon:   (0711) 5722-2144
Telefax:   (0711) 5722-1634

Mailadr.:  [EMAIL PROTECTED]




Re: netware dsmcad sessions hung

2002-09-19 Thread Christo Heuer

Hi Tim,

I have the same problem on NT servers. I started using the
dsmcad on NT, but because of the problems with the
MANAGEDSERVICES setup I never rolled it out to other servers.
Are you on 4.1.2?
You should go to 4.2.2 - that seems to be more stable as far
as the dsmcad is concerned.

Regards
Christo



The NetWare TSM client dsmcad is loaded but the backup never runs.
I have to unload and reload dsmcad to force the next backup.
Has anybody ever seen this problem?

Next operation scheduled:

Schedule Name: NETWARE_MON
Action:Incremental
Objects:
Options:
Server Window Start:   22:00:00 on 09/16/2002

Waiting to be contacted by the server.

The backup should have run at 22:00:00 on 09/16/2002.

tsm server 5.1.1
netware client 5.1.0

NetWare client options:

MANAGEDSERVICES WEBCLIENT SCHEDULE
SCHEDMODE  PROMPTED

Tim Brown
Systems Specialist
Central Hudson Gas & Electric
284 South Avenue
Poughkeepsie, NY 12601

Phone: 845-486-5643
Fax: 845-486-5921
Pager: 845-455-6985

[EMAIL PROTECTED]



The New and improved ADSM.ORG

2002-09-11 Thread Christo Heuer

Hi Gang,

For those of you fortunate enough to have access to the
net from work/home: have you looked at the new adsm.org yet?
It is very impressive!
Not too many people have subscribed yet - definitely worth it.

Cheers
Christo




Re: backup performance with db and log on a SAN

2002-09-03 Thread Christo Heuer

Roger,

Some very good points made here - the only thing I'm curious
about is why you do not suggest creating a non-RAID LUN on the
Shark - this would give Eliza the best of both worlds.
I created a non-RAID drawer in our Shark specifically for
database types that need non-RAID disk behaviour, and it
works well.
(It is just a pity that the Shark can only format a whole
drawer as non-RAID...)

So - Eliza - just format a non-RAID drawer, allocate your
database on those LUNs, and see if that makes a difference -
then you will know whether it is RAID-5 related or not.

Regards
Christo


Roger,
The problem here is we have no idea what is the type of disk subsystem they
have.  Once we find that out we will know.

My TSM database is on a Shark 2105-F20 (it is RAID-5 under the covers).  My
database is 85GB and takes 1.3 hours to backup to Magstar drives.  I
consider that good for something that has 4K blocks and totally random.  We
stripe the database as well, may be not a good thing to do, but we did it
that way.  We are going to try some other things soon to see how we can
improve performance.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Roger Deschner [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 01, 2002 2:32 PM
To: [EMAIL PROTECTED]
Subject: Re: backup performance with db and log on a SAN


What a FASCINATING data point!

I think the problem is simply that it is RAID5. The WDSF/ADSM/TSM/ITSM
Database is accessed rather randomly during both normal operations, and
during database backup. RAID5 is optimized for sequential I/O operations.
It's great for things like conventional email systems that use huge mailbox
files, and read and rewrite them all at once. But not for this particular
database. Massive cache is worse than useless, because not only are you
reading from random disk locations, but each time you do, your RAID box is
doing a bunch of wasted I/O to refill the cache from someplace else as well.
Over and over for each I/O operation.

On our system, I once tried limited RAID on the Database, in software using
the AIX Logical Volume Manager, and it ran 25% slower on Database Backups.
Striping hurts, too. So I went the other way, and got bunches of small, fast
plain old JBOD disks, and it really sped things up. (Ask your used equipment
dealer about a full drawer of IBM 7133-020 9.1 GB SSA disk drives - they are
cheap and ideally suited to the TSM DB.) Quite simply, more disk arms mean a
higher multiprogramming level within the server, and better performance.
Seek distances will always be high with a random access pattern, so you want
more arms all seeking those long distances at the same time.
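Roger's "more disk arms" point can be made concrete with a rough model. The numbers here are my own and purely illustrative: each random 4 KB read is modeled as one seek plus rotational latency, so aggregate random IOPS scales roughly with the number of independent spindles:

```python
def random_iops(n_disks: int, avg_seek_ms: float, avg_latency_ms: float) -> float:
    """Approximate aggregate random-read IOPS for n independent spindles.

    Each random I/O is modeled as one seek plus rotational latency;
    transfer time for a 4 KB page is ignored as negligible.
    """
    per_disk = 1000.0 / (avg_seek_ms + avg_latency_ms)
    return n_disks * per_disk

# Illustrative: a drawer of 16 small JBOD disks vs. 4 large disks,
# same per-disk mechanics (8 ms seek, 4 ms latency).
print(round(random_iops(16, 8, 4)))  # ~1333 IOPS
print(round(random_iops(4, 8, 4)))   # ~333 IOPS
```

Four times the spindles gives roughly four times the concurrent random I/O, regardless of per-disk capacity, which is why many small JBOD disks beat fewer large ones for this access pattern.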

OTOH, the Log should do fine with RAID5, since it is much more sequential.
Consider removing TSM Mirroring of the Log when you put it back into RAID5.

Can you disable the cache, or at least make it very small? That might help.

A very good use of your 2TB black box of storage: Disk Storage Pools. The
performance aspects of RAID5 should be well suited to online TSM Storage
Pools. You could finally hold a full day's worth of backups online in them,
which is an ideal situation as far as managing migration and copy pool
operations "by the book". This might even make client backups run faster.
RAID5 would protect this data from media failure, so you don't need to worry
about having only one copy of it for a while. Another good use: Set up a
Reclamation Storage Pool in it, which will free up a tape drive and
generally speed reclamation. Tape volumes are getting huge these days, so
you could use this kind of massive storage, optimized for sequential
operations, very beneficially for this.

So, to summarize, your investment in the SAN-attached Black Box O' Disk
Space is still good, for everything you probably planned to put in it,
EXCEPT for the TSM Database. That's only 36GB in your case, so leaving it
out of the Big Box is removing only 2% of it. If the other 98% works well,
the people who funded it should be happy.

P.S. I'm preparing a presentation for Share in Dallas next spring on this
exact topic; I really appreciate interesting data points like this. Thank
you for sharing it.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Sat, 31 Aug 2002, Eliza Lau wrote:

>I recently moved the 36G TSM database and 10G log from attached SCSI
>disk drives to a SAN. Backing the db now takes twice as long as it used
>to (from 40 minutes to 90 minutes).  The old attached disk drives are
>non-RAID and TSM mirrored.  The SAN drives are RAID-5 and TSM mirrored.
>I know I have to pay a penalty for writing to RAID-5.  But considering
>the massive cache of the SAN it should not be too bad.  In fact,
>performance of client backups hasn't suffered.
>
>However, the day after the move, I noticed that backup db ran for twice
>as long.  It just doesn't make sense it will take a 1

Re: Restore Performance.

2002-09-02 Thread Christo Heuer

Hi All,

NetWare restores suck big time when it comes to throughput!
The TSA/SMDR modules are single-threaded (as far as CPUs are
concerned - even in NetWare 6).
The best we could push the NetWare client to is 4.1 MB/s -
which is faster than what you are seeing, but still an issue
when you have hundreds of gigs of data to restore.
Novell is aware of the issues but so far has not responded
with a workaround, or even an indication of a timeline, for
making the TSA multiprocessor-aware.
The fact that we use LTO probably helps us achieve higher
throughput than you see.
Regards
Christo

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Gallerson,Charles,GLENDALE,Information Technology
Sent: Friday, August 30, 2002 5:29 PM
To: [EMAIL PROTECTED]
Subject: Re: Restore Performance.
Importance: High


Mark,

  Does TSM 5.1.0.0 provide MOVE NODEDATA and multithreaded restores?

-Original Message-
From: Mark Stapleton [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 30, 2002 5:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Restore Performance.


From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Cahill, Ricky
> Doing restores from the collocated primay pools I'm getting 10gb an hour.
> From a non collocated pool (offsite) I'm getting about 5-7gb an hour.
>
> Has anyone got any figures on restores using DLT7000's as we're going
> through the process of doing DR and having to
> restore 7 Netware servers with 1.4tb at the above speeds is going to be
> painful.
>
> PS...ANy hints and tips on doing large restores and how to speed them up
> greatly apreciated as well ;)

You're fighting an uphill battle on two different fronts. DLTs are
relatively slow tapes in terms of throughput, and they are devilishly slow
on mounts and dismounts. *AND* NetWare's TSA module (which waves the baton
for backups and restores) has never been known for promoting fast backups or
restores. (What level of NetWare and TSA/SMDR modules are you at?)

Suggestions:
1. Don't collocate your offsite pools--just your primary pools. Collocated
offsite pools don't buy you much, because your chances of needing them are
small.

2. Upgrade to server version 5.1.1. This will allow you to run MOVE NODEDATA
on your non-collocated pools prior to disaster testing. You can run it by
client, or by client filespace. Version 5.1.1 will also allow for
multithreaded restores.

3. Do the LTO upgrade. Greater capacity (fewer tapes in collocated storage
pools), much faster mounts, and faster throughput are all benefits of such a
move.

4. Use the DIRMC option. Sending directory structures to a disk-based pool
makes restore somewhat faster. (How to do so is a lengthy discussion; Read
The Fine Manual.)
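The MOVE NODEDATA step in suggestion 2 can be sketched as an admin command. A hedged example only - the node and storage pool names are placeholders; moving a node's data within the same non-collocated pool consolidates it onto fewer tapes before a restore:

```
move nodedata NODE1 fromstgpool=OFFSITEPOOL tostgpool=OFFSITEPOOL
```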

--
Mark Stapleton ([EMAIL PROTECTED])
Certified TSM consultant
Certified AIX system engineer
MCSE




Re: cache hit rate

2002-08-22 Thread Christo Heuer

Hi Ken,

OK - maybe there is a bit of conflicting info here. Let's begin
by saying you should try to keep the hit ratio above 99% for
optimum performance (as stated in the Performance and Tuning
manual). If you look at SELFTUNEBUFPOOLSIZE, it will adjust the
buffer pool when your cache hit ratio is below 98%, indicating
the general rule that 98% is the cut-off point for acceptable
performance. 'Above 97%' effectively means 98% or higher in any
case - I think what the manual should state is that the ratio
should not be lower than 98%.
I've got experience on both sides of this figure. At a 96-97%
hit ratio my server was performing like a dog, and you will only
see this in expiration processing and big SQL queries: some of
my queries that create reports were running for 5 hours - they
now run in about half an hour. When you get a techie phoning you
and saying he's been waiting for an hour just to get the list of
files to pick from during a restore, you will know what I'm
talking about.

The cache hit ratio is now sitting at 99.11% and performance
across the board is excellent.

When your cache hit ratio goes lower than 98% it indicates that
you have a performance problem somewhere. It might be like in
our case, where our mainframe was running at 100% CPU most of
the day and night - increasing the BUFPOOLSIZE in that case
meant nothing. Once we solved the CPU utilisation issues (by
moving workload off this mainframe), our cache hit ratio went
back up and performance is excellent.

So there you have it:
96%-97% - bad performance
97%-98% - OK, but not good (let's call it acceptable)
98%-99% - good performance
99%-100% - excellent performance
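The rule of thumb above can be written down directly. The cache hit ratio itself is hits / (hits + misses); the bands are the ones quoted in this thread, and the code is just an illustration:

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Database buffer pool cache hit ratio, as a percentage."""
    return 100.0 * hits / (hits + misses)

def rate_performance(ratio_pct: float) -> str:
    """Map a cache hit ratio onto the rule-of-thumb bands quoted above."""
    if ratio_pct >= 99.0:
        return "excellent"
    if ratio_pct >= 98.0:
        return "good"
    if ratio_pct >= 97.0:
        return "acceptable"
    return "bad"

print(rate_performance(99.11))                     # excellent
print(rate_performance(cache_hit_ratio(977, 23)))  # 97.7% -> acceptable
```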

Hope this helps!

Regards
Christo
--
No experience.  I fact I'm learning from your experience.  While you say

"from my training at IBM that a cache hit rate of 99% and above is
recommended"

I found the following statement in the web interface.

The server database performs best with a cache hit ratio above 97% in the
database bufferpool. To tune the bufferpool size, (a) reset the bufferpool
statistics, (b) execute server operations that use the database, and (c)
view the database details to see if the cache hit ratio is above 97%. If the
cache hit ratio is NOT above this percentage, increase the size of the
BUFPOOLSIZE specification in the server options file.

Conflicting advice?  Perhaps your 97.7% is good and no problem exists.
Let's hear from others.  Please..


Ken
[EMAIL PROTECTED]


>>> [EMAIL PROTECTED] 08/21/2002 12:46:04 AM >>>
Hi all,

first I've to thank all participants for the amount of answers I got on my
latest questions. Now I've got a new one which relates on the
bufferpoolsize and the cache hit rate on OS/390 TSM-Servers.

we use the following bufpoolsize:  32768
our chache hit rate:97,7 %

I know from my training at IBM that a cache hit rate of 99 % and above is
recommended. Is it possible that a lower cache hit rate can cause
performance problems? What is your experience?


MfG
Sascha Bräuning


Sparkassen Informatik, Fellbach

OrgEinheit: 6322
Wilhelm-Pfitzer Str. 1
70736 Fellbach

Telefon:   (0711) 5722-2144
Telefax:   (0711) 5722-1630

Mailadr.:  [EMAIL PROTECTED]




Re: Old client versions

2002-08-02 Thread Christo Heuer

Hi Niklas,

Don't worry about Gianluca - he is from the TSM global response team,
so he does not like old versions of the code, because he normally ends
up visiting the customer when something is not working.
Anyhow, to answer your question: you can still get it from the
IBM FTP server index.storsys.ibm.com.
I've just been there and the code is still available:
ftp> ls -l
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 54144
-rw-rw-r--   1 18125700 200 2621 Feb  1 2000  IP21855.BAT
-rw-rw-r--   1 18125700 20049256 Feb  1 2000  IP21855.README.1ST
-rw-rw-r--   1 18125700 200 4781 Feb  1 2000  IP21855.README.FTP
-rw-rw-r--   1 18125700 200  27639960 Feb  1 2000  IP21855_FULL.EXE
226 Transfer complete.
301 bytes received in 0.02 seconds (15.05 Kbytes/sec)
ftp> pwd
257 "/adsm/fixes/v3r1/win32/intel/single" is current directory.

So, you just need to go to the adsm directory and choose the
client/server versions you require.

Regards
Christo Heuer
Absa Bank
Jhb
SA



Hello

Where can I download clients older than ver 3.7? I can't find them anymore
on the Tivoli website

Regards

Niklas Lundström
Föreningssparbanken IT
08-5859 5164





Re: Managedservice

2002-07-31 Thread Christo Heuer

Hi Thomas, Manuel,

The dsmcad exists on both AIX and NT/2000 - it is just that on
NT the service is called "TSM Client Acceptor".
The part where it stops and starts (manages) the scheduler
process came into effect in the 4.2 client code. The client's
dsm.opt can contain the line
managedservices schedule webclient
When this is specified you do not start the TSM scheduler
process - the dsmcad is the only process that runs, and it
controls the starting and stopping of the client scheduler.
The purpose of the dsmcad was originally the web client; it
was then modified (in 4.2) to also control the TSM scheduler
if you choose to use it that way. The whole idea was to release
memory every time the scheduler is stopped and started - in
4.2.2 these memory issues were addressed, so you no longer need
the dsmcad to manage the services.
OK - that was a mouthful - but I think you get the
picture?

Regards
Christo Heuer


Thomas,
On AIX, DSMCAD is the Client Acceptor Daemon, and it
will let you establish sessions using any browser
(IE, Netscape, etc.); however, it will not start the
scheduler.
To start the scheduler use: nohup dsmc schedule.
This is the output of the ps command grepping for
dsmc:

my_server:/ $ ps -ef|grep dsmc
root 16522 1   0   Jun 18  -  4:14
./dsmcad
root 19100 1   0   Jun 18  - 22:52 dsmc
schedule
root 27066 34528   1 16:07:08  pts/4  0:00 grep
dsmc

Manuel

--- "Rupp Thomas (Illwerke)" <[EMAIL PROTECTED]>
wrote:
> Hi TSM-ers,
>
> is my understanding correct that on
> AIX I just start DSMCAD and let it start the
> Scheduler and/or the Client
> Agent
> and on
> Windows NT/2000 I have to define all 3 Services (TSM
> Client Acceptor, TSM
> Remote
> Client Agent and TSM Schedule) but only start the
> TSM Client Acceptor?
>
> Thanks for your support!
>
> Greetings from Austria
> Thomas Rupp
> Vorarlberger Illwerke AG
> MAIL:   [EMAIL PROTECTED]
> TEL:++43/5574/4991-251
> FAX:++43/5574/4991-820-8251
>
>
>
>

--
> This eMail was checked for viruses.
>
> Vorarlberger Illwerke AG
>

--


=
Thanks,
Manuel J Sanchez
Senior UNIX SA
Assurant Group
(305) 252-7035 x32153


__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Netware problem and TSM

2002-05-23 Thread Christo Heuer

Hi everyone,

Has anyone had a requirement to really push the NetWare
client for throughput to a TSM server to enable backups
of about 1 TB or more?
Let me give you some background:
Client OS: NetWare 6 (I know - but it wasn't in my power to change).
TSM server: Win2K - 3583 Ultrium library.
Disk subsystem - client and TSM server: FAStT500
Network: 2 x gigabit interfaces, one on the client and the other on the
server. No switches involved - direct cable connection between the
two.
Versions of TSM software: both client and server TSM 5.1

Now here come the questions:
1) How many people out there have actually got a NetWare server
with about 1 TB of data that needs to be backed up by TSM?
2) How do you do it?

The reason I'm asking is that I have a server that I've been
asked to back up; the current volume is 800 GB, but it is to grow
to 3 TB by year-end.
Some facts about the NetWare server I'm backing up: it has
3 GB of memory and 4 processors. After extensive testing and
playing with tuning parameters I've discovered that TSA600.NLM
is single-processor bound (as are most of the other NLMs),
and thus a backup can only be pushed as fast as the single processor
can process the data.
The best we've been able to achieve is 4 MB per second.
I suspect that this is where the problem lies every time people
complain about NetWare performance.

Lan-free would've been good in this case but not supported
on Netware.

Just thought I'd get others' feelings on this - although it's not
really a TSM software issue, it still makes life difficult
from a backup point of view.

Cheers
Christo




Re: DISASTER Client Restores Slow

2002-05-19 Thread Christo Heuer

Has anyone had a look at a new feature in the TSM 5.1 server code that
gives you the facility to consolidate a node's data - either at filespace
level or at node level?
This will address the multiple tapes that a client's data can end up
on over an extended period of time.
For those who do not know about it, it is a new admin command called
move nodedata
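For example, to consolidate one node's data within the same primary tape
pool (the node and pool names here are made up - check the 5.1 admin
reference for the full syntax):

   move nodedata NODE1 fromstgpool=TAPEPOOL tostgpool=TAPEPOOL

You can also narrow it down to a single filespace if you only need one
consolidated.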

Cheers
Christo

--

I am sure TSM will wait. And while we're on this subject, we are looking at
Disaster Recovery plans and the path we must take using TSM to recover a
couple hundred servers.  It looks bleak.

We are finding that, due to incremental forever backups, recovery times are
extremely long because of tape mount after tape mount after tape mount. In a
real disaster, we expect to take an entire day or more to recover a single
server. With a limited number of tape drives the recovery time required for
100 servers could take weeks.

Has anyone else run into this dilemma? What is TSM's direction? How can I
speed up the recovery process?

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com




Re: SAN Problem - TDP for SAP on Windows 2000

2002-04-18 Thread Christo Heuer

Hi Gerrit/Paul,

Here is an extract from the readme.txt that comes with the
QLOGIC drivers for NT:
2. Select HKEY_LOCAL_MACHINE and follow the tree structure down to
  the QLogic driver as follows:

  HKEY_LOCAL_MACHINE
 SYSTEM
CurrentControlSet
   Services
  Ql2200
 Parameters
Device

   3. Double click on

  MaximumSGList:REG_DWORD:0x21

   4. Enter a value from 16 to 255 (0x10 hex to 0xFF).  A value of
  255 (0xFF) enables the maximum 1 MByte transfer size.  Setting
  a value higher than 255 results with the default of 64K
  transfers.  The default value is 33 (0x21).

Cheers
Christo

-


Been here, seen this, fixed this.  Actually we found the solution on the
Knowledge base but the answer in the knowledge base is wrong.  Sorry, I do
not remember the KB solution number, but this is what you have to do.

The default in the Device\Parameters section for the Qlogic card defaults to
0x21.  The KB says to change it to at least 131 from 21.  What it meant to
say is 131 decimal.  We set it to 255 or 0xFF and now the SAN agent works
fine.  The issue is that the tape header blocks are apparently read/written
differently on the SAN agent than on the server.  The result is what you
see.  The keyword to set is MaximumSGList.  Sorry I cannot remember the HKEY
for this.

-Original Message-
From: Joe Cascanette [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 17, 2002 10:03 AM
To: [EMAIL PROTECTED]
Subject: Re: SAN Problem - TDP for SAP on Windows 2000


Are you using Removable Storage on the Windows2000 server attached to the
SAN connected tape unit?, or are you using the drivers supplied by Tivoli
(device drivers)?.

My server is NT based (not sure about the AIX based) and I am using the
Tivoli drivers to connect to my tape storage. I noticed since I needed to
disable the Removable Storage service on my Windows 2000 server attached to
the SAN to get this to work correctly. Some how the 2 drivers were fighting
for control, by disabling this service Tivoli's device drivers had no
problem.

Joe

-Original Message-
From: Gerrit van Zyl [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 17, 2002 7:21 AM
To: [EMAIL PROTECTED]
Subject: SAN Problem - TDP for SAP on Windows 2000


Hi all TSM'ers

The setup:

TSM 4.2.1.0 Server on AIX 4.3.
TSM 4.2.1.26 Client on Windows 2000
TSM Storageagent 4.2.0.1 on Windows 2000
TDP for SAP on Windows 2000

The Win2000 is connected to the SAN via a Qlogic card.

When we start a SAP backup, the tape gets mounted in the correct tape drive,
however the following error occurs: (output from dsmsta)

ANR0400I Session 6 started for node BSASAP31 (TDP R3 WINNT) (Named Pipe).
ANR8337I LTO volume 541ABW mounted in drive DRIVE1 (\\.\Tape2). ANR8938E The
adapter for tape drive DRIVE1 (\\.\Tape2) cannot handle  the block size
needed to use the volume. ANR8468I LTO volume 541ABW dismounted from drive
DRIVE1 (\\.\Tape2) in library LIB3584. ANR1401W Mount request denied for
volume 541ABW - mount failed.

Any ideas as to why it complains about the "block size" and how to fix it?

Thanks and regards
~~
Gerrit van Zyl
Tel: +27 11 800 7400
Fax: +27 11 802 3814
Cell: +27 82 570 4266
Email: [EMAIL PROTECTED]
Web: www.faritec.co.za
~~




Re: Daily Backup Report: More Issues

2002-04-18 Thread Christo Heuer

Hi Orin,

Just to add to what Paul and Andy have already said:
TDP for Exchange is definitely a problem - even a q files nodexyz f=d reports
a successful backup. Although the RC 425 in the Exchange log says that
the backup of storage group xyz failed, TDP for Exchange does NOT
communicate this fact to the TSM server - what we do is run SQL queries
against the TSM activity log to see the success or failure of the backup.
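As a sketch, the sort of select we run looks like this (the 24-hour window
and the message filter are illustrative only - check your own activity log
for the exact message numbers your TDP version issues):

   select date_time, msgno, message from actlog where date_time>current_timestamp-24 hours and message like '%fail%'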
Andy's statement that q event is good for scheduled operations is not
true - I have an NT server where I have an ADMIN command schedule to back up
the primary tape pool to a secondary tape pool with wait=yes added.
When the server starts the command it ends without mounting the tapes,
because there are no free tapes available to make the copy.
In the TSM activity log you can see the process failed - but q event
says it was fine.
To my mind this should report a FAILED event - but it says the event
was
OK, return code 0.
Might be a bug - but I don't make use of the events table for reporting
AT ALL.

Cheers
Christo


Mark is absolutely correct.  This is an issue we have as well.  To solve the
problem we are running post processing of the output looking for a bogus
situation and setting return codes.  We use an external scheduler, so we can
do this.  With the TSM Scheduler you are just screwed if you are not
satisfied that a successful session does not necessarily equal a successful
backup.

The worst problem we have is the TDP for Exchange can get a 425 return code
because Norton Anti-virus has the Exchange store tied up and you still get a
successful backup.  This takes a bounce of the Exchange Server.  The issue
is you can go weeks without realizing you have not gotten an Information
Store backup.

Typically, I use SQL to look for the message ID and "failed", and that is
how this one is found.



I hate to correct IBM again... My statement was correct

>>
query event will NOT tell you if your backup was successful.
<<

As Andy so carefully stated in his last comment:

>>
you should not have any problems determining success or failure of the
operation. <<

The operation and the actual successful backup are two different things.
Referring to TSM operations (i.e. ACTION=INCREMENTAL), not scripts: I can
show proof of missed files and errors, from reports from my activity log,
that a "success of the operation" does not mean that you had a successful
backup.

We currently have a Critsit open with Tivoli and IBM hardware support and
this is one of the major issues.

Just trying to help, I guarantee that if you rely only on a q event to let
your customers know if you have all their files backed up you will get
burned.

Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 16, 2002 10:07 AM
To: [EMAIL PROTECTED]
Subject: Re: Daily Backup Report


>>
query event will NOT tell you if your backup was successful.
<<

This is true only if the schedule definition launches a script, i.e. DEF SCH
mydomain myschedule ACTION=COMMAND OBJECTS="myscript", and the script
contains commands that run asynchronously; in that case, TSM has no way to
track the actions taken within the script. It can only say, "the script was
launched successfully". In short, the success or failure of the command
depends on the return code issued from the script.

For scheduled TSM operations (i.e. ACTION=INCREMENTAL), you should not have
any problems determining success or failure of the operation. For
ACTION=COMMAND operations where the command

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




Mark Bertrand <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> 04/16/2002 07:34
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: Daily Backup Report



We also used "query event * * begindate=today-1 enddate=today ex=yes" until
I learned that this does not report on whether clients were backed up
successfully; it ONLY reports on whether the script or schedule was
successful. To quote straight from the h q event page: "Use this command to
check whether schedules were processed successfully."

I will not go into a rant about this, just learn from my mistake, query
event will NOT tell you if your backup was successful.

-Original Message-
From: Williams, Tim P {PBSG} [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 16, 2002 9:01 AM
To: [EMAIL PROTECTED]
Subject: Re: Daily Backup Report


we generally run a q event command with ex=yes.
You can use the begindate/begintime parms, etc. - see:
help q event
FYI

-Original Message-
From: Orin Rehorst [mailto:[EMAIL PROTECT

Re: Redirecting Commands in Scripts

2002-04-12 Thread Christo Heuer

Hi Gerhard,

Is there any specific reason you want to make use of
a server script?
Seeing that Tim Williams has already answered your question,
you should in any case look at doing what you want in a
different way.
What we do - which will give you exactly what you are
trying to achieve right now - is to run a batch job
with the IKJEFT01 utility, making use of an admin command
line, and issue the QUERY SYSTEM > DSM.OUTPUT.QSYSTEM
command there.

Let me know if there is something specific I'm missing
in this case - else play around with batch jobs that
can be scheduled or run on demand.
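As an alternative sketch: the admin command-line client can write its
output to a file itself via its -outfile option, which sidesteps the
redirection/quoting problem inside server scripts entirely (the ID,
password and dataset name below are placeholders):

   dsmadmc -id=admin -password=xxxxx -outfile=DSM.OUTPUT.QSYSTEM query system

Run that from your batch job and the output file can go off-site with the
rest of your DR material.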

Cheers
Christo
---


Hello all,
I found questions like this in the archives, but no answers..

One more try =>
I want to run a script, where one line should look like this:
QUERY SYSTEM > DSM.OUTPUT.QSYSTEM
where DSM.OUTPUT.QSYSTEM is a S390 Filename I want to take to the OFFSITE
Location.
(This Command works great on the command line, but I want to have it in a
script!)

However - I tried to do it like this:
def script test desc='Test'
upd script test "QUERY SYSTEM > DSM.OUTPUT"

Result:
ANR1454I DEFINE SCRIPT: Command script TEST defined.
ANR2002E Missing closing quote character.
ANS8001I Return code 3.

Any hints how to route an output to a file in a script  ?
I guess, the problem is, that TSM wants to direct the Output of the Update
Stmt
to a file - and when doing this, one quote is missing, of course

We are running TSM 3.7.5 on S390 (I know, not supported, but I guess this
should
work on all versions)

Regards
Gerhard Wolkerstorfer




Re: Question about 3590 tape drives

2002-04-03 Thread Christo Heuer

Hi Gerhard,

That is stop/start processing of the tape drive - the disk pool does not
stream the data fast enough to cause a continuous flow of data - hence
when there has been a stop in the flow of data there needs to be some
kind of re-positioning - to continue writing where the last bit of
data was written.
Normal behavior.

Cheers
Christo
--


Hello,
I observed some strange behaviour of a 3590-E1A tape drive. During a
migrate operation from a disk pool to a sequential pool I watched the
operator panel. There I could see that the device after a write operation
did a locate followed by a read and then resumed writing. I observed this
behaviour several times. Is there an explanation for this?
I run TSM 4.1.2 on AIX 4.3.3
Best regards
Gerhard

Gerhard Rentschler   email:
[EMAIL PROTECTED]
Manager Central Servers & Services
Regional Computing Center   tel: ++49/711/6855806
University of Stuttgartfax: ++49/711/682357
Allmandring 30a
D 70550 Stuttgart
Germany



Re: Benefits of moving to platform other than OS/390

2002-03-22 Thread Christo Heuer

Hi Wayne - and others following this thread with interest

Firstly, how big is your current environment on OS/390 as far as
TSM is concerned? I really do not see OS/390 scaling like AIX
when it comes to TSM...
I agree that most of the people running OS/390 have all the
scheduling/reporting/tape handling sorted out and it works
very well.
OK - here comes my personal views (Based on 7+ years of
running ADSM/TSM on Sun/Aix/NT/2000 and OS/390:
General performance: if you give OS/390 unlimited resources - so, in
other words, no memory/CPU contention - it would perform
OK. AIX, on the other hand, will scream if it is running on an
S80, for instance.
I/O performance: ESCON on OS/390 is limited to roughly 15 MB/sec - and normally
you share that with the rest of the LPAR - batch and whatever else
you are running. On AIX you have FCP - a lot cheaper and faster
than ESCON implementations on OS/390. As mentioned already by
other members, on OS/390 you are limited in terms of the robotics you
can implement - basically a STK silo or IBM 3494. On AIX you
have a very wide range of supported configs.
Reporting: You can do all your reporting still from OS/390,
even if you are running a TSM server on NT/AIX/Sun. We have
all our reporting/problem logging for our TSM environment
running from OS/390 - although the TSM servers are a mix
of OS/390 and many NT TSM servers.
Tape management: your RMM, TLMS etc. are in any case told that they
must not manage TSM data - TSM has that built in. On AIX
you just let TSM handle it. At the end of the day the only
real benefit I can see on OS/390 and tape handling is if
you have electronic vaulting set up - so your robotics and
tape drives for your off-site storage pools are located
physically off-site but genn'ed as local drives.

With scripts and maybe DRM in place it should be just as
easy to manage your tapes on AIX as on OS/390.

Hope this helps a bit.

Regards
Christo



TSM on S/390 works great here. We have good automation tools for message
handling and staff to note messages that haven't been automated. We schedule
lots of batch admin clients with the S/390 job scheduler that generate
reports that are distributed thru  the report distribution software. No
performance problems here, except we do have to compete for tape drives...

Thank you.




Re: I need help

2002-03-20 Thread Christo Heuer

Hugo,

If you want people to help you, please specify additional info:
things like platform (client and server), versions of the code,
connect agents etc. come to mind here.
For all I know you are trying to do a restore with the ABC
client used to back up Tandem machines...

Christo
--



When I do a restore I get the following error


03/19/2002 08:17:57 The 159945th code was found to be out of sequence.
The code (3040) was greater than (2713), the next available slot in the
string table.
03/19/2002 08:17:57 The 159946th code was found to be out of sequence.
The code (3592) was greater than (2714), the next available slot in the
string table.
03/19/2002 08:17:57 The 159948th code was found to be out of sequence.
The code (3262) was greater than (2716), the next available slot in the
string table.
03/19/2002 08:17:57 The 159949th code was found to be out of sequence.
The code (3040) was greater than (2717), the next available slot in the
string table.
03/19/2002 08:17:57 The 159950th code was found to be out of sequence.

Hugo Badenhorst
[EMAIL PROTECTED]



People running OS/390 TSM servers - maybe for the development guys......

2002-03-20 Thread Christo Heuer

Hi all,

OK - for those of you who do not know anything about
OS/390 or don't run TSM servers on OS/390 this might be
very boring and maybe a bit cryptic - ignore it or just
read it for fun.
Now for the people that still run OS/390 TSM servers:

I have always had my doubts about the scalability of
OS/390 when it comes to a TSM server.
Some of you might have seen me posting last year and early
this year about the size of your TSM environment and if
you are happy with the performance of TSM on OS/390 - only
one guy answered giving me specs on the environment they
run(Thanks to William Colwell), but other than that
most people said they moved off OS/390 and are now
running AIX or Sun or even Win2K servers.
William described their environment as an 80 Mips
processor box and a 166Gig db 88% used - and on top
of all that he has good performance from this.

Our environment is a 38 Gig db 80% used and we have
crappy performance. Now in Adsm V3 a new parameter
was introduced called MPTHREADING YES or NO. What
this does is tell TSM that it can start multiple
TCB's on multiple CP's - if you have multiple CP's
on your box.
Now we enabled this and noticed that TSM gets its
knickers in a knot when there are too many things
happening and the system is CPU constrained. In
WLM it is guaranteed to get CPU, and in WDM you can
see that about 30% of the time it is delayed for
CPU. What we have done now in Omegamon is to check
how much work each of the TCBs does, and then try
to figure out why TSM would perform so poorly even though
it is getting about 50% of the total box.
Now - here comes the part where development might
be able to shed some light:

The TCB called dsmserv (the server task) has a
load module called IEANUC01 with a CSECT of IGVGPVT
that uses about 90-95% of all the CPU cycles - remember
that this is one TCB assigned to one CP.
On further investigation our IBM'er came back and said
this IGVGPVT part controls getmains/freemains for TSM.
Now here comes the question:
how can TSM be using 90% of a CP just doing getmains/freemains,
while all the other TCBs that have a CP allocated just sit
and wait for getmains/freemains? This looks like a
serious scalability issue with TSM in a multiple
CP environment. According to our performance and MVS
guys the only way we are going to squeeze more juice
out of our OS/390 system with TSM is to split the
workload of TSM and run two TSM servers on one LPAR,
or upgrade each CP to a more powerful CP.
Is this in line with development, and the way it should work?

Thanks
Christo Heuer
ABSA Bank
Johannesburg
SOUTH AFRICA



Re: Filesize from client through server

2002-03-15 Thread Christo Heuer

Nope - that info is hidden from the SQL administrator interface.

Cheers
Christo
---



Hi All,

I would like to be able to retrieve the filesize etc. of the backed-up
files for a client through the TSM server (dsmadmc).

I know this is possible through the client, but for automation's sake it is,
in my opinion, better to let the server generate such listings.

Is there anyone who knows if this can be done on the TSM server-side?

Thanx in advance,

Richard.

-
Richard van Denzel
High Availability & Storage Solutions
Senior Technical Consultant

Infrastructure Consulting & Integration
[EMAIL PROTECTED]

Getronics Infrastructure Solutions
Wiltonstraat 42
Postbus 1005
3900 BA  Veenendaal
tel.: 0318 - 567 100
fax:  0318 - 567 633

mobiel: 06 - 212 78 569

Building Futures on <>



Re: What's wrong?

2002-03-14 Thread Christo Heuer

I don't think you have a problem - I received your mail via the Adsm-L.

Cheers
Christo Heuer
ABSA Bank
Johannesburg
SOUTH AFRICA

---

Hi all,

since 9.3.2002 6:21 CET I haven't received any messages from this conference.
Does anybody know if anything is wrong?
Should I sign on again?

But when I think about it, I probably will not receive any answer to this
question either, will I? :-))

Tomáš Hrouda
Storage Specialist
HTD s.r.o. Praha
[EMAIL PROTECTED]
tel. +420 (2) 6791 3157
fax. +420 (2) 6791 3162




Re: Archive dates/details

2002-03-13 Thread Christo Heuer

Hi Jon,

You could do it with a SQL select statement:
select node_name, archive_date, description from archives

This would give you what you want.

Regards
Christo Heuer

Good Morning,

When attempting to retrieve archive data through the GUI, a
list/description of each available archive is shown.  Is there a way
through dsmadmc to list the dates/descriptions of archives for each node?

Thank You,
Jon Martin




Re: Session limitation

2002-03-11 Thread Christo Heuer

Hi William,

Apart from the normal memory/CPU constraints that you must stay
aware of, remember that your TSM client makes use of a NetWare
userid to do the backups - make sure this ID can open enough
sessions to support what you want from the client. So yes, there
IS a session limitation linked to the userid being used - but
this can be set by the NetWare administrators.
Cheers
Christo



I recently tried to back up a NetWare box straight to disk.  I did this
manually and opened 12 sessions (6 separate backups).  We had McAfee
running and think that the combination of the two caused the NetWare
client to reboot.  It has a 100 MB card and I was trying to back up as fast
as possible for a restore to a different box.  A total of 88 GB needed
to be backed up so that it could be restored.  I thought I could open as many
sessions as I wanted since I was going straight to disk, and then I would
have done the same on the restore.  Is there a known limitation on the
client for open sessions according to the size of the box or otherwise?
Would the backup have caused even remotely the Netware client to reboot?
And for future use is there anyway of telling how many sessions to open on
any box?  Some guidelines or other?
 Thank you in advance.




Re: MPTHREADING for OS/390

2002-03-07 Thread Christo Heuer

Hi Ann,

We run exactly the same environment - OS/390 and TSM versions.
We have been using the MPTHREADING YES parameter for the last
2-3 years - no problems directly related to the MPTHREADING.
What we have had for a while now is a call open for TSM going
into a flat spin when there is a tape I/O error. What would
happen is that TSM would not accept any new sessions - admin or client.
When doing a D GRS,ALL you can see TSM is waiting for SYSZTIOT.
After a certain period of time it would resolve the I/O error,
and TSM would carry on with whatever it was busy with.
How this ties in with MPTHREADING I'm not too sure about, but
development asked us to turn it off to take another variable
out of the picture.
You then really notice how much the MPTHREADING helps if you
have multiple CP's to use.
With MPTHREADING NO the server performs like a very slow dog!

Hope this helps - like I've said - there has not been one instance
where the MPTHREADING caused any undesirable results.

Cheers
Christo Heuer
ABSA Bank
Johannesburg
South Africa



We are running the following:

Our MVS is OS/390 2.10.
TSM server is Version 4, Release 1, Level 2.0

I've been asked to look into MPTHREADING and determine if there are any
reasons NOT to implement it.  I've searched the archives and found a
refrence from a long time ago concerning ADSM taking ALL of the engines in a
frame, but that was for the V3.x.  Since there was an APAR mentioned I'll
assume that it's been fixed in V4.x (but will verify of course).

Please let me know of any information concerning the implications of this
parameter.

Thanks!!
Ann...

*
DISCLAIMER:   The information contained in this e-mail may be confidential
and is intended solely for the use of the named addressee.  Access, copying
or re-use of the e-mail or any information contained therein by any other
person is not authorized.  If you are not the intended recipient please
notify us immediately by returning the e-mail to the originator.

__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: logmode/bufpoolsize/mpthreading

2002-02-28 Thread Christo Heuer

Hi Joe,

Our environment is same as yours - Os/390 TSM4.1.3.0.
We run roll-forward mode on the recovery log.
Size currently allocated for the recovery log = 5364Meg
The maximum you can make this is in the region of 5.4 Gig -
check back in the archives - it has been discussed before.
It depends on how busy your TSM system gets - how many transactions
you are doing.
Our current Bufpoolsize:
Bufpoolsize 384000 - This will depend on your system's memory
(You can always double your current bufpoolsize and see what
this does to your cache hit percentage - if it stays below
98% - increase more - if above leave as is).
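As a rough sketch of that tuning loop (output abbreviated; the option value
below is a placeholder, not our setting):

```
* From an administrative session, check the cache hit percentage:
dsmadmc> query db format=detailed
   ...
   Cache Hit Pct.: 97.8      <- below 98%, so consider a bigger pool

* Then in the server options file (dsmserv.opt), double the pool
* (value is in KB) and restart the server:
BUFPoolsize 49152
```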
Mpthreading - Have been running with it set to yes for about
a year already, no problems - we have in the last month
switched it off - (trying to determine the cause of something
else - IBM recommended switching off the MPthreading just
"in case").

Hope this helps.
Regards
Christo Heuer
ABSA Bank
Johannesburg
South Africa



> Environment:
> Server: 4.1.3.0
> Platform: S390
>
> 1.  Logmode: We're going to change the Logmode from Normal to Roll
Forward.  What determines the amount of disk I'll require?
>
> 2.  Bufpoolsize: We're going to increase from 24576K to ?.  What's the
determining factor?
>
> 3.  Mpthreading:  We're going to turn it on.  Are there any considerations
I should concern myself with?
>
> None of this info is in the manual that I'm aware of.  I get a log of "try
this" form Tivoli support.  Unfortunately, I don't work in an environment
where I can "try this" without first knowing what
> the repercussions are.
>
>
> Regards,
> Joe Wholey
> TGA Distributed Data Services
> Merrill Lynch
> Phone: 212-647-3018
> Page:  888-637-7450
> E-mail: [EMAIL PROTECTED]
>
>

__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: TDP monitoring

2002-02-28 Thread Christo Heuer

What we used to do is just look at the file spaces - which
used to be pretty accurate ("Days since last backup completed
successfully" = 1) and pull out any machine that is greater
than 1 - now with the TDP for Exchange 2.2 client it reports
the filespace as successfully completed even if a storage
group in Exchange failed its backup.
Have not bothered opening a call for this because, as Del says,
all the actual messages are logged to the server - we pull it
out of there, including other info like the throughput for
the session. This way of reporting seems the most accurate.

Cheers
Christo
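For what it's worth, the filespace check described above can be done with a
server-side select along these lines (treat it as a sketch - exact SQL
syntax varies by server level):

```
-- Filespaces whose last completed backup is more than a day old
SELECT node_name, filespace_name, backup_end
  FROM filespaces
 WHERE DAYS(CURRENT_DATE) - DAYS(backup_end) > 1
```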



> I'm currently testing TDP for Exchange for possible deployment
> in a very large enterprise environment.  Is anyone aware of
> tools/scripts that I can use to monitor the backups/restores.
> I'm aware that
> I can look at the past history of backups/restores and
> determine approximately how long it will take, however,
> this can be quite time consuming.  Also, does anyone
> know how most people are monitoring
> the success/failure of their respective backups.  I
> was going to scrape data out of the excfull.log or
> excincr.log.  This seems kind of primitive.

Joe,

Just one thought...
Also, keep in mind that all backup (and restore) events
for TDP for Exchange are logged in the TSM server
activity log... including their success or failure.
That way you can go to one central location
to find out the status of the backups, i.e. you
would not have to go to each Exchange server
to find out the status.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

"Celebrate we will. Life is short but sweet for certain..."  -- Dave
__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: TDP on Exchange Server

2002-02-20 Thread Christo Heuer

Hi Kelvin,

The return code 425 means that you have a corrupt database.
We had this a while ago on one of our Exchange boxes. There
was one particular user's data that was corrupt - we had to restore
the full 30GB to get this one user's data back.
What the Exchange admin did to clear the problem was create a new storage
group and move everybody's data to the new storage group.
He could not find any utilities to fix the corrupt Exchange database...

Hope this helps.

Regards
Christo



> Full: 0   Read: 38422676056  Written: 38422675890  Rate: 1,819.81 Kb/Sec
Waiting for Exchange
> server...
> Full: 0   Read: 38422676056  Written: 38422676056  Rate: 1,819.33 Kb/Sec
Backup of storage group First
> Storage Group failed.
> ACN5350E An unknown Exchange API error has occurred.
>
> ACN0151E Errors occurred while processing the request.
>
> On the server log I have the following error:-
> ANE4993E (Session: 7067, Node: EXCHANGE4) TDP MSExchgV2 NT TDP for
Microsoft Exchange: full backup of First
> Storage Group from server EXCHANGE4 failed, rc = 425.

Kelvin,

In this case, an "An unknown Exchange API error has occurred" message
means that TDP for Exchange called an Exchange Server API to get more
data from the database and the Exchange server hit an error that it
did not expect, and so it bubbled that unknown error back to TDP.

You could look in the Event Log to see if there are any Exchange
message that may help but, in most cases, if this continues
to happen (and a reboot does not help), then you will need
to take a trace in order to get the Exchange API return code.
You should call IBM support...they can guide you through
obtaining a trace. And in many of these cases, Microsoft
will need to get involved to tell us what the
"unknown error" return code really means and how to correct it.
I have seen a number of these lead to a lack of
resource issue...but until a trace is examined,
we can not know for sure.

Thanks,

Del



Del Hoobler
IBM Corporation
[EMAIL PROTECTED]

"Celebrate we will. Life is short but sweet for certain..."  -- Dave
__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



OS/390 created backupset restored to AIX

2002-02-13 Thread Christo Heuer

Hi Everyone,

Has anybody ever tried this - is it a supported way of doing
things?
We just created a backupset on 3590 on OS/390 and restored
the clients data on AIX without any problems.
I know the backupset becomes a portable medium and as long
as the drive is compatible on both systems it should work,
BUT what about EBCDIC to ASCII conversion - where does this
take place in the scenario we tested?
We have done the same with a WinNT TSM server that created
the backupset and restored on the AIX client - but then there
is no EBCDIC in the picture.
The restore was done without a server in the picture - just
the TSM AIX client restoring from locally attached 3590.

Any info would be appreciated.

Thanks
Christo
__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: Tivoli Decision Support and TSM server

2002-01-23 Thread Christo Heuer

Hi Bill,

What would you call a large Tsm server on OS/390?
Can you perhaps give me some detail as to Tsm db size and what
physical box you are running your TSM environment on?

To answer your question about running only selective queries - yes,
the product can do that - when you run the loader you can specify
what areas you want to query - but the problem is that the TDSSMA
product builds its own database in which the results of the loader
queries are stored - now here comes the fun part - it builds its
own relationships in this new database from the queries that were
run against your TSM server. (This enables the product to build what they
call 3-dimensional views of your TSM data.)
It is not documented which queries the loader combines to create this
3D view - so you don't know which of the resource-intensive queries
could be run on a less frequent basis.
I'm currently waiting on our local IBM support to get me more info as
to which parts of the loader queries could be run separately from the rest.

Still does not look very good to me.

Thanks for the reply.

Regards
Christo

==



Christo,

Can you tell what SQL query is running?  I know some queries are
really terrible, especially 'select * from volumeusage'.  Are there some
options in TDSSMA to limit what queries are done?

I run a large TSM server on OS/390 also.  I can do a select of
all the occupancy data in 9 minutes elapsed time.  I can't compare
the performance with W2K or AIX because I don't run those servers,
but I am happy with the performance on OS/390.

But I agree with you about TDSSMA - if it drives the server crazy it
isn't a good product.


At 12:53 PM 1/23/2002 +0200, you wrote:
>Hi,
>
>Are there any of you guys/gals that are running TDS for SMA?
>If you are - what kind of TSM server are you running:
>Platform and size of database being the ones I'll be interested
>in.
>We are busy evaluating the TDSSMA modules and are experiencing
>bad performance on our biggest TSM server - for those of you
>that are familiar with the product - when the TDS Loader runs it
>consumes about 24% of our OS/390 CPU for 28hours - the load is still
>not finished. This makes the whole TDS product useless to us.
>
>It is fine for our smaller TSM environments but it does not help
>getting different products for the smaller and bigger environments.
>To me it seems like the usefulness of this product or any other
>product that will be running SQL queries into your TSM db is severely
>impacted by the size of your TSM environment - coming back to the
>scalability of TSM's proprietary "black box" DB!
>
>Have any of you noticed this or do we need to look at other things.
>My opinion of TSM on OS/390 is that it performs worse on OS/390 than
>AIX or I'll even be as bold to say Win2K ;-)
>
>Thanks for any input.
>
>Regards
>Christo Heuer

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.

__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



TSM and Netware cluster servers

2002-01-22 Thread Christo Heuer

Hi,

Has anyone tried to backup a clustered Netware environment
with the Tsm clients?

I've got a V4.2.13 Tsm client running on Netware V5 backing
up to TSM 4.1.2 server.
When I query the filespaces from an administrative interface
it does not have any entries for the last backup start time etc.
It is as if the backup has never run before.
When checking the client schedule log and the backups table in
the TSM database it shows that the backup completed.

Has this got to do with the fact that it is a clustered Netware
environment?
I know in the days of SFT3 the Adsm code would have been SFT3
certified - I don't think it was ever really done.
On the Tivoli website I can not find any info in terms of TSM
running on Netware in a clustered environment either - is it only
the Intel based TSM client that has cluster aware capabilities?

Thanks for any info...

Regards
Christo

__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: How long does a UNLOADDB take???

2002-01-21 Thread Christo Heuer
Hi Niklas,

Our OS/390 4.1 TSM server (38GB DB) took about two hours to unload, but we
cancelled the loaddb after about 26 hours. Not too sure why the unload took
2 hours (171 million entries unloaded) - I expected the loaddb to also be
in that region. Reverted to restoring the db from the backup we took just
before we started the unloaddb process.
Decided to give this a miss - silently hoping we'll move to AIX instead.

Cheers
Christo
==

Hello

We tried to do an unloaddb this weekend to reorganise our database. It ran
fine for 2 hours, but then it seemed to stop after 157 million entries:
ANR4013I UNLOADDB: Dumped 157654872 database entries (cumulative).

But after this nothing happened; we could see I/O activity in MVS for the
task. After 6 hours we cancelled the job.
I've read on the ADSM list about unload times over 20 hours - is this a
normal time?

q db

Available Assigned  Maximum   Maximum   Page    Total      Used      Pct   Max.
Space     Capacity  Extension Reduction Size    Usable     Pages     Util  Pct
(MB)      (MB)      (MB)      (MB)      (bytes) Pages                      Util
--------- --------- --------- --------- ------- ---------- --------- ----- ----
44,868    44,868    0         6,216     4,096   11,486,208 9,737,492 84.8  84.8

TSM 4.2.0 running on OS/390

MVH
Niklas Lundström
Föreningssparbanken IT
08-5859 5164
__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: can't see drive E in tsm client.

2001-12-12 Thread Christo Heuer

Hi Ofer,

Normally when you can not see a drive in the tsm client it is a
permissions problem.
Can your Win2K machine see the drive while you are signed on?
Are you running a cluster environment on Win2K?
There are special considerations when backing up cluster servers.

Let me know of your progress.

Regards
Christo Heuer
ABSA Bank
Johannesburg
South Africa



Hi All,

I have connected a Windows 2000 file server to an ESS box through a fibre cable.
I have a new drive in the file server, E:\.
I would like to back up the drive with the TSM client, but there is no
drive E:\ in the backup/restore client.
Do I need additional software on that file server to be able to back up
a drive that is an ESS drive?
As far as the TSM client is concerned it's just another drive on that file
server, no?

I have a TSM server ver 4.1.4 and a TSM client ver 4.1.

Thanks.
Ofer Ccc.

__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."



Re: ANR0208w after crash.

2001-12-04 Thread Christo Heuer

Hi Ofer,

Hope you have a good db backup on hand - seems like
you are going to have to restore the database.

Good luck
Christo Heuer
=
Hi All,

My TSM server crashed during a system backup because of a power failure.
I am trying to start the TSM server with no success.
I get ANR0208W, which tells me that I have a problem with my log file. (I
don't have a mirror log file.)
I tried to do 'dsmserv extend log', but still the TSM server won't come up.

Can you help me please?

Thanks in advance
Ofer Ccc.

__
"The information contained in this communication is confidential and
may be legally privileged.  It is intended solely for the use of the
individual or entity to whom it is addressed and others authorised to
receive it.  If you are not the intended recipient you are hereby
notified that any disclosure, copying, distribution or taking action
in reliance of the contents of this information is strictly prohibited
and may be unlawful.  Absa is neither liable for the proper, complete
transmission of the information contained in this communication, any
delay in its receipt or that the mail is virus-free."