Re: Database question

2003-01-14 Thread Guillaume Gilbert
If all your config files (dsmserv.opt, dsmserv.dsk, devconfig and volhistory) are the
same and all database, log and storage pool files are the same, there should be no
problem. You could even move those around, have dsmserv.dsk reflect the changes,
and it would be OK. My Sun server just crashed big time over the weekend and we
had to reinstall the OS. All I did was reinstall TSM and recreate the config files. It
started up without a hitch.
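The move described above can be sketched as a plain file copy. All paths here are illustrative assumptions (scratch directories stand in for the real old and new TSM server directories) so the sketch can run anywhere:

```shell
# Stand-in directories so this sketch is runnable; on a real server the
# destination would be the TSM server directory, e.g. /usr/tivoli/tsm/server/bin.
SRC=$(mktemp -d)   # hypothetical copy of the old server's config files
DST=$(mktemp -d)   # stands in for the new server's directory
for f in dsmserv.opt dsmserv.dsk devconfig volhistory; do
    echo "placeholder contents" > "$SRC/$f"
done

# The actual move: copy the four configuration files named in the post.
for f in dsmserv.opt dsmserv.dsk devconfig volhistory; do
    cp "$SRC/$f" "$DST/$f"
done

# dsmserv.dsk lists the DB, log and storage pool file paths; if those
# files were relocated, each line must be edited to the new location.
ls "$DST"
```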

Guillaume Gilbert
CGI Canada




Jeff G Kloek <[EMAIL PROTECTED]> on 2003-01-14 16:47:28

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Database question

Part of our upcoming move includes moving from TSM 4.2.1.9 on AIX 4.3.3 on
RS6000 S7A to TSM 5.1.5.4 on P670 on AIX 5.2.
I've set up an instance of TSM at that level on the P670. Now I'm trying to
fathom the best way to actually move the database
over. I'm going to be dropping all the disk and tape connections from the
S7A and connecting them to the P670. I'll have time
to get all the devices defined and paths defined, so that with the
exception of the path statements, the storage pools, libraries
tape drives, log directories, db directories and files will all be
configured the same as before. My question is this: If I did all the
above, could I simply do a "dsmserv upgradedb" and let it fly?
I'm sure this isn't supported, so I'm just wondering aloud.

Thanks!!







Re: tcp layer problem

2003-01-15 Thread Guillaume Gilbert
I had this problem on a new implementation. Is your tcpserveraddress coded with the
DNS name or the IP address? We changed it to the IP address and the problems went away.
There was probably an error somewhere in the VLANs, but I'm not an IP guru...
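The fix described above can be sketched as a client-options change; the server name and address below are illustrative assumptions, not values from the post:

```
SErvername  TSMPROD
   COMMMethod         TCPip
   TCPPort            1500
*  was: TCPServeraddress  tsmprod.example.com
   TCPServeraddress   10.1.2.3
```

On UNIX clients this stanza lives in dsm.sys; on Windows the TCPSERVERADDRESS option goes in dsm.opt. Coding the raw IP simply takes name resolution out of the picture when chasing this kind of problem.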

Guillaume Gilbert
CGI Canada




"Conko, Steven" <[EMAIL PROTECTED]> on 2003-01-15 14:39:26

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: tcp layer problem

I've been going round and round with TSM support on this issue... We have an
AIX 4.3.3 client that has been upgraded several times, all the way to the 5.1.5
client version, without success backing up to an AIX 4.3.3 TSM 4.2.2 server
over a 10/100 Ethernet network (100 full duplex, no autonegotiate).

There aren't any network errors, the switches are all configured
correctly, and the TSM server is fine. They are beginning to insist our
problem is at the system/network level...

error:

ANS1809W Session is lost; initializing session reopen procedure.

appears almost constantly during backups. Usually it will EVENTUALLY finish
with a few errors, but it takes forever to run. All our parameters appear to be
okay by TSM support standards. Combined with the server message saying the
session was terminated, they say the client is severing the
connection... that TSM is getting the message from a lower layer.

We don't have any network errors appearing anywhere else on the system for
any other applications, there are no errors in errpt or /var/adm/messages,
and diags come back fine.

What else can I do to diagnose this problem?







ITSM For Mail - Domino problems

2003-02-11 Thread Guillaume Gilbert
Hi All

First off:

TSM server 5.1.5.4 on Solaris
TSM client 5.1.5.9 on Windows 2000
ITSM for Mail 5.1.5.0

I installed the TSM client and the Mail client and everything went fine. Registered both the
"OS" client and the Mail client. When I run domdsmc q adsmserv I get a response from
my TSM server. When I run domdsmc q domino, the command just hangs there and I get no
response. The first time I run the command I get a C++ error (I don't have it in front of
me and do not have access to it at the moment). There are no errors in any .log file
or in the event log.

Any help would be appreciated

Thanks

Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL. : (514) 281-7000 ext 3642
Pager : (514) 957-2615
Email : [EMAIL PROTECTED]




Re: Question about collocation on/off

2003-02-20 Thread Guillaume Gilbert
We have 2 storage pools, one for small clients and one for large ones. We use 9840
tapes, so any client with less than 12 GB of stored backups goes in the small pool. We
save about 40 tapes this way.
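The two-pool split above can be sketched with TSM administrative commands; the pool and device-class names are illustrative assumptions, not taken from the post:

```
/* Small clients (< 12 GB stored): collocated, capped scratch count. */
define stgpool smalltape 9840class maxscratch=50 collocate=yes
/* Large clients get their own collocated pool with more scratch volumes. */
define stgpool largetape 9840class maxscratch=200 collocate=yes
```

Each node is then directed to the right pool through its management class copy group (or a separate policy domain) based on how much data it stores.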

Guillaume Gilbert
CGI Canada




brian welsh <[EMAIL PROTECTED]> on 2003-02-19 15:42:51

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Question about collocation on/off

Hello,

AIX 5.1, TSM 4.2.2.8 and a 3494 ATL with 700 volumes (about 620 in use),
and about 300 nodes. Three months ago we turned collocation off because we
had about 200 client nodes with less than 3 GB of data stored, and every
node was using at least 1 tape, so we had a lot of tapes with low utilization
and long migrations.

Now tape utilization is much better and migration is much faster, but yesterday
there was a restore of a web server (WinNT); we restored almost 2 GB and
it took 6 hours with almost 70 tape mounts.

Before we set collocation off, such restores took half an hour. Now we are
thinking of setting collocation back on, but that implies we have to buy and
check in about 350 new tapes, because move data will take weeks or more.

Now I was wondering how other sites are dealing with collocation, and what
they are doing to get better restore results when collocation is off.

We are thinking of making a full backup once a month to get the data of one
client onto fewer tapes (or one tape), or putting collocation back on, or...

Curious to hear your reactions!

Thanx,

Brian.















Re: Cannot backup Domino version 5.0.11 using tdp

2003-02-25 Thread Guillaume Gilbert
I just got this error. The problem is with version 5.2 of GroupShield. Even when
GroupShield was deactivated we were getting this error. We had to completely
uninstall it to get TDP working correctly. You can either go back to a previous
version or wait for 5.2.1, which should correct the problem.

Guillaume Gilbert
CGI Canada




"Frost, Dave" <[EMAIL PROTECTED]> on 2003-02-25 09:12:18

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Cannot backup Domino version 5.0.11 using tdp

Hi,

Several of our clients have upgraded their Domino servers to v5.0.11, and
the tdp promptly stopped working.  The tdp worked fine with previous
versions of Domino on these servers.  We have tried using the very latest
tdp version from the ftp site, but no success there.

The symptoms we see are that the first time you open the GUI after
installing it, a   C++ API runtime error appears. After this, the GUI just
hangs. The command line does not appear to work either.

We _have_ found that we can back up the Domino server through the TDP GUI
IF, and only if, the Domino server is shut down!

The last client is running  NAI GroupShield V5.2 within the Domino
application, but not at the server level.  We tried shutting this down, but
with no effect.

Has anybody managed to get Domino 5.0.11 to back up through the TDP yet?

tia,

-=Dave=-
+44 (0) 20 7608 7140

Accountants do it with double entry.






Extended device support

2003-02-25 Thread Guillaume Gilbert
Ok guys don't laugh

I tried to migrate my old ADSM 3.1.2.42 server today. It's a small server with only 7
clients, on AIX 4.3.3. I installed 5.1.6.2 and everything was going fine until I tried
using the tape drives. Now these are very peculiar: they are 2 ESCON-attached 3490
drives (only 800 MB per tape...). TSM 5.1 did not like this. I got I/O errors galore and
was unable to read any tapes. It looks like they are not supported anymore. Let's go
back to ADSM 3.1.2.42. I did, but I still can't read my tapes. It seems I have lost the
extended device support module and I can't find the install package anywhere.

So my question is: does anybody out there have a copy of this thing lying around
somewhere, or can point me to a place I can download it?

If anyone does, please reach me at this email address.

Thank You


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL. : (514) 281-7000 ext 3642
Pager : (514) 957-2615
Email : [EMAIL PROTECTED]



Web GUI through SSH tunnel

2003-02-28 Thread Guillaume Gilbert
Hi all

I am trying to use the web client through an SSH tunnel. I've configured my port
forwarding to connect to port 1581 on the remote machine. I can see the console with the
4 options (backup, restore, archive, retrieve), but when I click one of them I get
error ANS2600S, blah blah connection refused. I am guessing it's trying to go through
another port. Does anybody know which port, or is it a random port and this is hopeless?

Thanks for the help

Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL. : (514) 281-7000 ext 3642
Pager : (514) 957-2615
Email : [EMAIL PROTECTED]



Re: Web GUI through SSH tunnel

2003-03-03 Thread Guillaume Gilbert
I did some more testing on a client I have direct access to. When the web GUI opens
up, I did a netstat -na and only port 1581 to that client is open. After I click
any of the four buttons and the user/password prompt appears, 2 other ports are open.
These ports are random, in the > 33000 range. So unless there is an option somewhere
to fix these ports, doing this through SSH tunneling is practically impossible...

Guillaume Gilbert
CGI Canada




"Stapleton, Mark" <[EMAIL PROTECTED]> on 2003-03-01 00:01:19

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Web GUI through SSH tunnel

From: Guillaume Gilbert [mailto:[EMAIL PROTECTED]
> I am trying to use the web client through an ssh tunnel. I've
> configured my port forwarding to connect to port 1581 on the
> remote machine. I can see the console with the 4 options
> (backup, restore, archive,retrieve) but when I click one of
> them, I get error ANS2600S, blah blah connection refused. I
> am guessing its trying to go through another port. Does
> anybody know which port or is it a random port and this is hopeless?

Port 1581 allows a remote web browser to connect to the TSM client; if
you're seeing the console, it's working.

To get the client to talk to the server, port 1500 must be open between
the TSM client and the TSM server.

--
Mark Stapleton ([EMAIL PROTECTED])






Re: Very strange reply from server

2003-03-04 Thread Guillaume Gilbert
I see this a lot on 5.* servers. It comes up with any type of command: q pr, upd no,
select... I haven't checked if there's an APAR open.

Guillaume Gilbert
CGI Canada




"Gill, Geoffrey L." <[EMAIL PROTECTED]> on 2003-03-04 09:38:27

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Very strange reply from server

This is a strange one; not sure I've seen anything like it before. This
morning I saw reclamation still running and tried to cancel it. Check out
the command and the reply that came back. I sent the command a second time and it did
take. Just one of those things...

I'm on TSM client 5.1.5.9 on Win2K, server is TSM 5.1.6.2

tsm: ADSM>cancel pr 916
ANR1702W Skipping 'ÅI1^|²A]ÄIÜ' - unable to resolve to a valid server name.
ANR1700E Unable to resolve 'ÅI1^|²A]ÄIÜ' to any server(s).
ANS8001I Return code 11.

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:<mailto:[EMAIL PROTECTED]> [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154








Re: Web GUI through SSH tunnel

2003-03-05 Thread Guillaume Gilbert
Yes I've been checking that. Just can't seem to get the port forwarding to work.

Still trying...


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL. : (514) 281-7000 ext 3642
Pager : (514) 957-2615
Email : [EMAIL PROTECTED]





John Monahan <[EMAIL PROTECTED]> on 2003-03-04 17:08:57

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Web GUI through SSH tunnel

Look up the webports option in your client install and user guide. This
will allow you to fix the port numbers.
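A sketch of how that option could be combined with the tunnel; the port numbers and host name below are illustrative assumptions, so check the client manual for the exact WEBPORTS syntax at your level:

```
* client options file on the remote node -- pin the two web-client
* agent ports instead of letting them float (values are examples)
WEBPORTS 1582 1583
```

With the ports fixed, the SSH tunnel can forward 1581 plus the two agent ports, e.g. `ssh -L 1581:localhost:1581 -L 1582:localhost:1582 -L 1583:localhost:1583 user@remotehost` (hypothetical host).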

**Note new cell number below**
__
John Monahan
Senior Consultant Enterprise Solutions
Computech Resources, Inc.
Office: 952-833-0930 ext 109
Cell: 952-221-6938
http://www.compures.com

















Re: upgrade 4.2.0.0 to 5.1.5.0

2003-03-27 Thread Guillaume Gilbert
Hi Dave

I just went from 4.2.1.7 to 5.1.6.3. I had to install the following versions:

5.1.5.0 from CD (to get the licensing filesets)
5.1.6.0
5.1.6.3

I did an upgrade from 3.1.2.42 to 5.1.6.2 using the same steps a month ago and it went
smoothly. Upgrading TSM is pretty easy as long as you have a base level from a CD.

Guillaume Gilbert
CGI Canada




David Cornwell <[EMAIL PROTECTED]> on 2003-03-27 15:58:42

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: upgrade 4.2.0.0 to 5.1.5.0

Can I do this in one jump; just go straight to 5.1.5.0 or do I have to go
through some intermediate patches???
Dave Cornwell
Systems Engineer
Matsushita Appliance Co.
Danville, KY






TDP SQL problem

2003-07-02 Thread Guillaume Gilbert
Hi all

I'm getting the following error when I try to open the TDP SQL GUI or run the 
tdpfull.cmd script :

ACO5421E Received the following from the MS COM component:
Message text not available. HRESULT:0x80040154

This is a fresh install on a Windows NT 4 system. The TDP version is 5.1.5.0 and the
server is 5.1.5.4 on Solaris 8. I have other TDP for SQL servers running without a problem.

Thanks for any help


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]



Re: TDP SQL problem

2003-07-02 Thread Guillaume Gilbert
Thanks Richard, you were right. I should have checked there first. I found the problem.


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]





Richard Sims <[EMAIL PROTECTED]> on 2003-07-02 10:00:11

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: TDP SQL problem

>I'm getting the following error when I try to open the TDP SQL GUI or
>run the tdpfull.cmd script :
>
>ACO5421E Received the following from the MS COM component:
>Message text not available. HRESULT:0x80040154
>
>This is a fresh install on a winNT 4 system. TDP version is 5.1.5.0 and
>server is on solaris 8 5.1.5.4. I have other TDP SQL servers running
>without a problem.

Do a List archives search first, to find prior discussions of such problems.
The most recent was a user who installed and tried to use the TDP without
the SQL Server actually present first.

  Richard Sims, BU






Re: LTO throughput - real world experiences

2003-07-08 Thread Guillaume Gilbert
I've seen up to 45 MB/sec backing up Oracle databases LAN-free. Of course Oracle
data compresses pretty well. This was measured on the Brocade switch, so I'm guessing
it's accurate.

Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]





Chris Murphy <[EMAIL PROTECTED]> on 2003-07-08 13:34:33

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: LTO throughput - real world experiences

>I'm curious as to what kind of MB/sec throughput people are seeing with TSM
and LTO drives.

It varies drastically for us based upon the objects being moved, of course.
10-15 MB/s per drive is normal for us overall.

>How many MB/sec does a migration process produce in your environment?

We achieve about 12-14MB/s per drive when performing migration.

>Does anyone have any DB's streaming directly to LTO and some figures?
Appreciate any feedback

Yes, our Exchange boxes stream nicely at about 15MB/s per drive.

HTH!

Chris Murphy
IT Network Analyst
Idaho Dept. of Lands
Office: (208) 334-0293
[EMAIL PROTECTED]






Re: Expiration after TSM upgrade from 5.1.6.4 to 5.2.2.1

2004-03-26 Thread Guillaume Gilbert
I have 4 rather large TSM servers, with DBs over 30 GB. 2 are AIX, 2 are Sun. The
performance on the 2 AIX boxes is similar. One server is very old, installed
in '97; its DB is 60 GB and expiration goes through 12 million objects, deleting between
300,000 and 1 million objects: 3000 objects/sec examined, but between 70 and
100 deleted. Expiration lasts between 60 and 90 minutes. The other AIX
server is brand new. Its DB is 40 GB, mostly due to 2 IMAP servers whose backups
are kept 180 days (OUCH!!). Expiration goes through 150,000 objects,
deleting 100,000: 200 objects examined/sec, 150 deleted. The old server is an
F80, the new one a p615, soon to be a p630.

As for the Sun servers, performance is nowhere near as good. I can't get
more than 40 objects/second, examined or deleted. I moved the database onto
raw volumes; it didn't help much. I can't seem to get these servers to perform
well. The 2 instances are on a 480 with 2 CPUs. Disks are the same as the
AIX servers.

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
> On Behalf Of Loon, E.J. van - SPLXM
> Sent: Friday, March 26, 2004 8:23 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Expiration after TSM upgrade from 5.1.6.4 to 5.2.2.1
> 
> 
> Hi Luke!
> Just because I'm currently trying to boost expiration 
> processing: I see your
> database is rather large, how long did it take you to expire 
> those 23194393
> objects?
> I really tried almost everything, from moving to 15k speed 
> disks, multiple
> database volumes, moving to RAW's, the best I have seen in my 
> environment is
> 60 objects per second.
> I posted a question some time ago (2002) where I requested for other
> people's performances, Tom Kauffman was the winner at that time with a
> dazzling 3112 objects per second!
> Kindest regards,
> Eric van Loon
> KLM Royal Dutch Airlines
> 
> 
> -Original Message-
> From: Luke Dahl [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 25, 2004 21:39
> To: [EMAIL PROTECTED]
> Subject: Re: Expiration after TSM upgrade from 5.1.6.4 to 5.2.2.1
> 
> 
> Hi Mark,
> Output from q db f=d:
>   Available Space (MB): 118,780
> Assigned Capacity (MB): 70,000
> Maximum Extension (MB): 48,780
> Maximum Reduction (MB): 15,924
>  Page Size (bytes): 4,096
> Total Usable Pages: 17,920,000
> Used Pages: 11,215,255
>   Pct Util: 62.6
>  Max. Pct Util: 77.3
>   Physical Volumes: 1
>  Buffer Pool Pages: 32,768
>  Total Buffer Requests: 2,151,221,843
> Cache Hit Pct.: 99.04
>Cache Wait Pct.: 0.00
>Backup in Progress?: No
> Type of Backup In Progress:
>   Incrementals Since Last Full: 0
> Changed Since Last Backup (MB): 2,785.80
> Percentage Changed: 6.36
> Last Complete Backup Date/Time: 03/24/04 17:33:13
> 
> This is the standard amount of expiration we see:
> 03/14/04 17:27:07 ANR0812I Inventory file expiration process 1692 completed:
>    examined 23194393 objects, deleting 1690917 backup objects, 0 archive
>    objects, 0 DB backup volumes, and 0 recovery plan files. 0 errors were
>    encountered.
> 
> This is currently running:
>  Process  Process Description  Status
>  Number
>  -------  -------------------  ------------------------------------------
>       26  Expiration           Examined 22853888 objects, deleting
>                                20518954 backup objects, 0 archive objects,
>                                0 DB backup volumes, 0 recovery plan files;
>                                1 errors encountered.
> 
> I don't think the policies were correctly expiring data prior to the
> upgrade.  We've had numerous nodes that seemed to be storing 
> much more than
> their policy dictated.  We tried to force expiration on the 
> files and it
> didn't seem to make much of a difference.  Unfortunately 
> we've been tweaking
> policies and trying to force a lot of data to drop off prior 
> to the upgrade
> so I can't point to any particular reason why expiration is 
> now expiring so
> much data.  You can see the db size has dropped as well due to the
> expiration - and we performed an unload/reload less than a 
> month ago prior
> to the upgrade.  Being the db volume is one volume it could be disk
> contention, but it's a raw volume fiber attached (Sun T-3's).
> 
> Luke
> 

Re: Expiration after TSM upgrade from 5.1.6.4 to 5.2.2.1

2004-03-26 Thread Guillaume Gilbert
Sent this too fast... Must have hit the a wrong key... Not enough coffee yet

I have 4 rather large TSM servers, with DBs over 30 GB. 2 are AIX, 2 are Sun. The
performance on the 2 AIX boxes is similar. One server is very old, installed
in '97; its DB is 60 GB and expiration goes through 12 million objects, deleting between
300,000 and 1 million objects: 3000 objects/sec examined, but between 70 and
100 deleted. Expiration lasts between 60 and 90 minutes. The other AIX
server is brand new. Its DB is 40 GB, mostly due to 2 IMAP servers whose backups
are kept 180 days (OUCH!!). Expiration goes through 150,000 objects,
deleting 100,000: 200 objects examined/sec, 150 deleted. The old server is an
F80, the new one a p615, soon to be a p630.

As for the Sun servers, performance is nowhere near as good. I can't get
more than 40 objects/second, examined or deleted. I moved the database onto
raw volumes; it didn't help much. I can't seem to get these servers to perform
well. The 2 instances are on a 480 with 2 CPUs. Disks are the same as the AIX
servers (HDS 9960). Is anyone getting good expiration performance on Sun
servers? If so, how are you set up?


Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]


Re: using more than one library in one storagepool

2004-04-22 Thread Guillaume Gilbert
If you use ACSLS with Gresham's EDT product, the library is the ACSLS
server. That server can manage many libraries. I manage 3 L5500s and 1 L700
and could use 2 libraries in the same pool.

Guillaume Gilbert
TSM Administrator
CGI
(514) 415-3000 x5091


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Stapleton, Mark
> Sent: Thursday, April 22, 2004 5:08 PM
> To: [EMAIL PROTECTED]
> Subject: Re: using more than one library in one storagepool
>
>
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of
> Paul Zarnowski
> >However - since you are using an ADIC library, you *might* be able to
> use
> >an ADIC product called SDLC.  This allows you to create logical
> >libraries.  We use it here to divide our physical Scalar 10k up into
> >multiple logical libraries.  But, I think it is also
> possible to define
> a
> >logical library that spans physical libraries - I'm not sure about
> this.  I
> >also don't know if the SDLC is supported in a Scalar 1K
> environment, or
> if
> >it would be cost effective.
>
> If SDLC can create logical libraries that span physical libraries, it
> will be the first product of its kind to do so. Similar packages (like
> ACSLS), and library managers such as the one on an IBM 3494,
> will create
> multiple logical libraries only within a single physical library.
>
> --
> Mark Stapleton
>


Re: Sharing a STK 5510 library

2004-05-12 Thread Guillaume Gilbert
We do even better: we also have Omniback in the mix. Just configure your SAN
fabric so that only the drives you want each server to see are visible.
ACSLS will accept requests from all these servers and mount the tapes as
needed. The Gresham product is very easy to use and does a great job.

Guillaume Gilbert
TSM Administrator
CGI
(514) 415-3000 x5091


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Bill Boyer
> Sent: 12 mai 2004 15:38
> To: [EMAIL PROTECTED]
> Subject: Sharing a STK 5510 library
>
>
> We have a client that has installed a StorageTek 5510 silo
> and will be using
> the 20-LTO2 drives with 2 Veritas media servers. Is there a
> way to also
> share this library with a TSM server and split some of the 20
> drives up
> between the 2 systems?
>
> I know that to share the STK libraries with TSM servers you
> need the Gresham
> product.
>
> Bill Boyer
> "Experience is a comb that nature gives us after we go bald." - ??
>


Re: Server-to-server communications through a firewall for Library Sharing

2004-05-06 Thread Guillaume Gilbert
I've just finished implementing this (although with a 3584). I had port 1500
open to and from each server. Works like a charm!

Guillaume Gilbert
TSM Administrator
CGI
(514) 415-3000 x5091


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Zoltan Forray/AC/VCU
> Sent: 6 mai 2004 13:04
> To: [EMAIL PROTECTED]
> Subject: Server-to-server communications through a firewall
> for Library Sharing
>
>
> The subject pretty much covers it.
>
> I need to communicate between TSM servers, one behind a firewall (tape
> Library Client), the other not (tape Library Manager which
> communicates
> with the 3494 which has the SAN FC attached drives)...
>
> It would be nice if the two servers always kept the
> connection open and
> communicated across the same link, once the client TSM server
> established
> the connection with the Library Manager TSM server.
>
> Has anyone else done this kind of configuration ?  Is there
> something I
> can code in the server options to control/manage this ?
>


Re: What are people doing to bkup big DB2, data warehouse

2004-05-06 Thread Guillaume Gilbert
Hi Matt

We have the same architecture. We used to back it up through GE to an MVS
TSM server and 1.6 GB tapes. We would then migrate to 9840 tapes. This was a
PITA; we never could get the performance up. Now we back up to a p615 TSM
server with 2 FE cards. Performance is about the same. We back up to disk. We
run 4 concurrent backups with parallelism set to 4, so 16 different sessions
on TSM. Our network cards run between 10 and 12 MB/sec. We also precompress the
data. Our goal is to go LAN-free to LTO2 drives. We've seen 3:1 compression
on this data warehouse, so I'm guessing about 70 to 90 MB/sec per drive. This
is only speculation though.

We do full backups every month and tablespace (TSP) backups every week for the
tables that get updated. Archive logs run every day.
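For reference, a DB2 invocation matching the session and parallelism counts above might look like the following; the database name is a hypothetical placeholder, and the exact options should be checked against the DB2 level in use:

```
db2 backup database DWPROD online use tsm open 4 sessions parallelism 4
```

OPEN n SESSIONS controls how many TSM sessions the backup drives in parallel, while PARALLELISM controls how many tablespaces DB2 reads concurrently.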

Guillaume Gilbert
TSM Administrator
CGI
(514) 415-3000 x5091


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
> On Behalf Of MC Matt Cooper (2838)
> Sent: 6 mai 2004 12:30
> To: [EMAIL PROTECTED]
> Subject: What are people doing to bkup big DB2, data warehouse
> 
> 
> Hello all,
>  I am wondering what people are doing to backup these beasts and
> what kind of success (backup times) they have had doing it.
> We are backing up a 3TB DB2 data warehouse but it is
> taking almost 11
> hours, end to end.   The disk is all Shark, the client is on a p690
> 14cpu lpar, TSM 5.1 client, GB Ethenet adapter, going to TSM 5.1.8
> server on z/OS 1.4..  We are precompressing the data on the p690 and
> going direct to 6 9840 tape drives.  We are using the DB2 
> backup utility
> with parallelism set to 4, with the 6 concurrent streams.
> Ideally we would like to back it up hot, but it seems too big and active.
> There is too much data for mirroring or FLASHCOPY.   Has anyone tried
> just backing up the raw devices separately?
> Matt
> 


Re: can library be renamed?

2004-05-06 Thread Guillaume Gilbert
I don't think it can be renamed, but you can update a device class to point
to another library. So just create a new library with the new name and
update your device class.
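A sketch of that sequence from an administrative command line (the library, path, and device-class names here are examples, not from the original post):

```
DEFINE LIBRARY newlib LIBTYPE=scsi
DEFINE PATH server1 newlib SRCTYPE=server DESTTYPE=library DEVICE=/dev/smc0
* ...redefine the drives and their paths under newlib...
UPDATE DEVCLASS ltoclass LIBRARY=newlib
```

The drives and paths still have to be redefined under the new library name; only the device class (and everything bound to it) carries over untouched.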

Guillaume Gilbert
TSM Administrator
CGI
(514) 415-3000 x5091


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Nancy Reeves
> Sent: 6 mai 2004 14:47
> To: [EMAIL PROTECTED]
> Subject: can library be renamed?
>
>
> TSM 5.2.2.0 on AIX 5.2
>
> I want to change the name of my tape library. Is there a
> faster / better /
> easier way than deleting all references to it, including the
> drives & tape
> storage pools, then recreating it with the new name that I want?
>
> Nancy Reeves
> Technical Support, Wichita State University
> [EMAIL PROTECTED]  316-978-3860
>


Re: Can you compress some things and not compress other things?

2004-04-29 Thread Guillaume Gilbert
How about exclude.compression
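A sketch of that approach in the client options file, assuming a Windows client (the drive and directory names are placeholders). TSM reads include/exclude statements bottom-up, so the specific include must be the last line:

```
COMPRESSION YES
EXCLUDE.COMPRESSION "*:\...\*"
INCLUDE.COMPRESSION "D:\bigdir\...\*"
```

Files under the one directory match the include first and get compressed; everything else falls through to the blanket exclude.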

Guillaume Gilbert
TSM Administrator
CGI
(514) 415-3000 x5091


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Stapleton, Mark
> Sent: Thursday, April 29, 2004 4:24 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Can you compress some things and not compress
> other things?
>
>
> From: ADSM: Dist Stor Manager on behalf of Debi Randolph
> Can I somehow say don't compress anything except this one directory?
> 
>
> There's a workaround you can do this with. Create a second
> node definition for the client, and point it at a second
> dsm.opt client option file that includes your bad-boy
> directory and excludes everything else. On the server's
> definition for that node, turn compression on. For the
> original node's dsm.opt file, exclude that particular
> directory and include everything else. On the server's node
> definition, turn compression off.
>
> --
> Mark Stapleton
>
>


Re: Error when using server-to-server communication

2004-06-10 Thread Guillaume Gilbert
Your LLADDRESS is probably wrong. It must be the port on which the target
server accepts client connections (its TCPPORT).
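In other words, the LLADDRESS on the DEFINE SERVER command must match the TCPPORT in the target server's dsmserv.opt. A sketch, reusing the address from the post (1500 is only the common default, not confirmed here):

```
* In B's dsmserv.opt (example):
TCPPORT 1500

* Then, on server A:
DEFINE SERVER B PASSWORD=admin HLADDRESS=192.168.131.168 LLADDRESS=1500
```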

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 938-6720
[EMAIL PROTECTED]

"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]> wrote on 2004-06-10
03:05:09:

> Hi all,
>
> I have two TSM servers (assuming their name are A and B) and I
> establish server-to-server communication.
> In A_server, I did :
> DEFine SERver B PAssword admin HLAddress=192.168.131.168 LLAddress=1580
> COMMmethod=TCPIP URL=http://192.168.131.168:1580 NODEName=A
> In B_server, I did the same command but when I used 'ping server'
> command I got error message :
> "ANR1705W Ping for server 'B' was not able to establish a connection"
> Can anyone help me ?
>
> Thans so much,
>
> Nghiatd


Re: Log file filled up -- how to extend? -- URGENT!

2004-07-09 Thread Guillaume Gilbert
Hi Jack

Check the Admin Guide, Chapter 19. Everything you need to know is there
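For the common case Jack describes (recovery log full, server down), the offline fix in that chapter boils down to formatting a new log volume and extending the log before restarting; the path and size below are examples only:

```
dsmfmt -m -log d:\tsmdata\log2.dsm 512
dsmserv extend log d:\tsmdata\log2.dsm 512
```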

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



"Coats, Jack" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-07-09 10:48
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Log file filled up  --  how to extend? -- URGENT!






About 4 AM this morning, it appears that my TSM server (Windows 2K, TSM
server 4.3) won't start.

I tried starting the server in a console window and it tells me that the
log files are over full.
How can I add another log file and format it without having the server
start?

(I watch the log files daily and they have never been over their high-water
mark of about 78% full, and I have 6 GB of log allocated for a database of
about 30 GB.)

Suggestions? ...


Re: Library Configuration and Device Drivers

2004-07-30 Thread Guillaume Gilbert
You were correct on the drivers. As for Gresham, just the external library
will be fine; the library itself is managed by ACSLS. You will need
DistribuTape on any client using LAN-free capabilities; those clients are
set up the same way your server will be.

Congratulations on the SL8500, that thing looks amazing.

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



Joni Moyer <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-07-30 07:28
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Library Configuration and Device Drivers






Hello everyone!

A little background of our new environment would be 2 TSM servers, 1
development and 1 production, are AIX 5.2 at the TSM 5.2.2.5 level.  They
will be attached, through a McData 6064 director, to an STK SL8500 library
with 18 LTO2 tape drives.

I am not sure what device drivers to use for our environment.  I was
looking at the TSM Implementation Guide on pg. 89 and it stated that if we
use LTO drives we should use the IBMTape device driver instead of TSM's
device driver.  Is this correct?  Also, when I used the link from that
site
it didn't exist, but if I used  ftp://ftp.software.ibm.com/storage/devdrvr

then it worked and I found the links to the device drivers for the AIX TSM
servers would be ATape device drivers and the Solaris LAN-free client
servers would be IBMTape device drivers.

Also,  with our configuration and 2 TSM servers using Gresham EDT and
ACSLS7.1, what type of library do I define in TSM?  I have defined an
external library, but now I'm wondering if I need an automated library
along with a shared library defined  with our production TSM server being
the primary library manager?  We will be sharing all 18 drives and the
tape
pool.  As far as I know the scratch tape pool is defined within ACSLS. For
the definition of the automated library it states that this is for SCSI
libraries only, which ours is fibre.  My thoughts on this are that I would
define the external library (with Lan-free capabilities) and then define a
shared library, but you don't give the master library the same name as the
shared library.  I guess I'm just a little confused on how this works and
will be set up in our environment.  Any suggestions would be appreciated!
Thanks!



Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: scratch pool

2004-08-11 Thread Guillaume Gilbert
Hi Joni

Since you will be using Gresham, you do not need to check the tapes in to
TSM; your QUERY LIBVOLUME will show nothing. ACSLS is the one handling
scratch tapes.

To have all servers using the same scratch pool, just configure elm.conf
to point to the same scratch pool. This is done with the "Pool" parameter.

As for DRM checkins and checkouts, this has to be handled by Gresham with
the eject_vol command.

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



Joni Moyer <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-08-11 14:13
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
scratch pool






Hello Everyone!

I am in the process of setting up our new TSM environment with 2 TSM AIX
5.2 servers at the 5.2.2.5 TSM level along with 8 LAN-free Solaris Clients
with an SL8500 library and 18 LTO2 tape drives.  We will be using ACSLS
7.1
to control the library robotics and Gresham to allow tape drive sharing. I
also plan on using DR Manager, but I am still not sure what this entails.
I have some questions concerning this new set up.  If you have any
suggestions, please let me know.  Any help is appreciated!

1.  What ejects the tapes that are full for the offsite DR tapes?  TSM or
ACSLS?  Where/how is this set up?
2.  Where can I see the scratch tapes that I defined to the TSM library? I
checked them in and labeled them on TSM, but where can I view how many
scratch tapes I have, etc?
3.  Does DRM handle marking tapes full/offsite and issuing eject commands
or is this a TSM admin job?
4.  How do I configure this environment so that both TSM servers share the
same scratch pool or would I want to keep them separate?  They are two
entirely different TSM environments and would not be sharing any actual
data.

If anyone has accomplished a similar environment setup and has any plans
or
order of operations or a listing of tasks and which software piece handles
this and would be will to share some of your information, that would be
great!  TIA!



Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: storage agent & device support?

2004-08-12 Thread Guillaume Gilbert
Hi Joni

If you are using IBM LTO2 drives, only the Atape driver (or its Solaris
equivalent, IBMtape) has to be installed. As for the version levels for the
server and storage agent, check out the 5.2.3.0 readme file; there have
been some improvements in what is supported.

ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/server/v5r2/AIX/5.2.3.0/TSMSRVAIX5230.README.SRV

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



Joni Moyer <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-08-12 08:56
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
storage agent & device support?






Hey!

We are going to install the storage agent onto our solaris and AIX servers
and it states that we should install the SCSI device support TICsm Sdev,
but we have already installed the IBM device drivers (ATape and IBMtape).
Do we have to install Tivoli's device support code?  My thoughts are that
it isn't needed with our configuration.  (We have LTO2 drives in an STK
SL8500 library)  Also, does the regular TSM client and the Storage agent
have to be at the same level?  Thank you!



Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Re: restore db to new host (LPAR) and ESS dsmfmt'd stg volumes?

2004-08-16 Thread Guillaume Gilbert
Hi Lisa

You could do this :

exportvg
unconfigure vpaths on old system
configure vpaths on new system
importvg

All your files should be there. You could even do this with the TSM
database and it should work.
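A rough sketch of that sequence on AIX (the volume-group and device names are examples; this assumes SDD vpaths over the ESS):

```sh
# On the old host: release the volume group and its vpath definitions
varyoffvg tsmvg
exportvg tsmvg
rmdev -dl vpath0          # repeat for each vpath in the group

# On the new host: discover the LUNs and import the group
cfgmgr
importvg -y tsmvg vpath0  # any vpath belonging to the group works
mount /tsm/stgvols        # the dsmfmt'd volumes come across intact
```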

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



Lisa Laughlin <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-08-16 12:32
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
restore db to new host (LPAR) and ESS dsmfmt'd stg volumes?






Hello all,

I am wondering if anyone has any experience moving TSM from one AIX host
to
another, both of which are attached to an ESS.  My basic question is
whether it is possible to move the vpaths from the one host and to the
other & be able to retain the dsmfmt'd storage pool volumes in the
reassignment, or if I will have to redefine all the vpaths and then
redsmfmt all the stg pool volumes?  I also wonder if it would be possible
to use Sysback to accomplish this, if it isn't possible through the
conventional AIX ways.

I have been trying to research this online, but really can't find anything
about it.


TIA

lisa


Re: Deleting volhistory

2004-08-18 Thread Guillaume Gilbert
Add the option totime=now to your command; this should delete all database
backups except the most recent one.
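For example, the full command might read as follows; the server retains the most recent database backup series regardless:

```
DELETE VOLHISTORY TYPE=DBBACKUP TODATE=TODAY TOTIME=NOW
```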

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



Amos Hagay <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-08-18 05:32
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Re: Deleting volhistory






Hi

Write the command
Del volh type=dbb volume= AKZ901L1 todate=[the date u want to del ]
force=yes

The force=yes will delete it anyway


AH

-Original Message-
From: Mark Strasheim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 18, 2004 10:21 AM
To: [EMAIL PROTECTED]
Subject: AW: Deleting volhistory

Thanks for the answers,
I guess I had left too much open.
I have made a complete backup just to delete the old one.
If I got it right: two backups must exist, so that one can be deleted.
This isn't that hard to accomplish; I just did it.

the output of q volh: ( well not all but ...  )
---
Date/Time: 07/29/04   10:18:58
 Volume Type: BACKUPFULL
   Backup Series: 2
Backup Operation: 0
  Volume Seq: 1
Device Class: AUTOLTO_CLASS
 Volume Name: AKZ901L1
 Volume Location:
 Command:

   Date/Time: 08/17/04   10:06:05
 Volume Type: BACKUPFULL
   Backup Series: 3
Backup Operation: 0
  Volume Seq: 1
Device Class: AUTOLTO_CLASS
 Volume Name: AKZ911L1
 Volume Location:
 Command:
---

the command: del volh type=dbb todate="anyvalue" -> always answers with:
--
 0 sequential volume history entries were successfully deleted.
--
which is in turn not really as successful as it states!

so again, what did I miss?
or how do I get rid of old database backups?


With regrads
Mark Strasheim


Re: Node will not delete

2004-09-29 Thread Guillaume Gilbert
Try del fi nodename *

Sometimes there is a ghost filespace that hangs around.
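The usual sequence, with nodename as a placeholder:

```
DELETE FILESPACE nodename *
REMOVE NODE nodename
```

If the delete runs as a background process, wait for it to finish before issuing the REMOVE NODE.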

Guillaume Gilbert
IT Specialist - TSM
Infrastructure Services Delivery
IBM Global Services
Tel.:  (514) 964-2795
[EMAIL PROTECTED]



Joni Moyer <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2004-09-29 12:23
Please respond to
"ADSM: Dist Stor Manager"


To
[EMAIL PROTECTED]
cc

Subject
Node will not delete






Hello all!

I am still locked out of adsm.org due to our company's smart filter, so I
have to ask this question again... I deleted all filespace information for
a node, but it will not delete the node due to the fact that it still
thinks data is associated with the node.  Any suggestions?  Thank you in
advance!


Joni Moyer
Highmark
Storage Systems
Work:(717)302-6603
Fax:(717)302-5974
[EMAIL PROTECTED]



Syntax of cleanup archdir

2001-06-22 Thread Guillaume Gilbert

Hello

What is the syntax of the cleanup archdir command for version 4.1.2? I have
tried the following: "cleanup archdir 'node_name' fix=yes" and I get a
missing-parameters error. That syntax worked for version 3.1.XX.

Thanks

Guillaume Gilbert



Error : directory path not found

2001-07-25 Thread Guillaume Gilbert

Hello

TSM Server 4.1.3
TSM Client 4.1.2.14
AIX 4.3.3 on both

Every week we archive lotus Dominos DBs using the plain TSM client via command line. 
And every week we get the following error message around 2 hours into the archive
which halts the process :

ANS4006E Error processing '/opt/lotus/perle': directory path not found

The archives always stop at the same directory. We did a trace and guess what... 
everything worked fine... And then the next week without a trace, same problem.

The filespace is a little over 25 GBs and has 58 directories. All the directories and 
subdirectories get archived but the files are not after we get the error message.
The actlog on the server doesn't tell me a thing.

Any help would be appreciated

Guillaume Gilbert
Consultant - Distributed Processing
CGI Group
TEL.:  (514) 281-7000 poste 3642
FAX:   (514) 281-7843
Email : [EMAIL PROTECTED]





Re: Restoring a server

2000-07-25 Thread Guillaume Gilbert

You are right. We had an OS/390 server crash a few months ago, and that was
the step that was keeping us from going any further. Once we finally read
about it in the manual, all went fine after that.

Guillaume
- Original Message -
From: "Cook, Dwight E" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 25, 2000 5:13 PM
Subject: Re: Restoring a server


> did you do the dsmserv with the option to "install" all the newly defined
> volumes, listing the quantity of logs and their names and the quantity of
> dbvols and their names???
> I believe it is
> dsmserv install 1 logname 1 dbfilename
> double check the manual... you have to do this to init the vols prior to
> doing a restore...
>
> later,
> Dwight
>
> > --
> > From: Ray Baughman[SMTP:[EMAIL PROTECTED]]
> > Reply To: ADSM: Dist Stor Manager
> > Sent: Tuesday, July 25, 2000 2:30 PM
> > To:   [EMAIL PROTECTED]
> > Subject:  Restoring a server
> >
> > Hello All,
> >
> > I'm getting ready to migrate from ADSM to TSM and to move the server from
> > one H50 to another H50.  This process gives me the opportunity to test out
> > my disaster recovery procedure.  I have moved a copy of ADSM to the new H50
> > and migrated to TSM.  This ran fine.  I then backed up the TSM database,
> > and blew TSM away, everything!  I have since reloaded TSM, and restored the
> > dsmserv.dsk, dsmserv.opt, volhist.txt and devconfig.txt files.  I have also
> > used dsmfmt to recreate the database and log files and copy files.  My last
> > step is to restore the database from my backup; however, when I perform the
> > restore, using dsmserv restore db, I get the following message.
> >
> > Unable to read complete restart/checkpoint information from any database or
> > recovery log volume.
> >
> > What am I doing wrong?  Any help would be appreciated.
> >
> > Ray Baughman
> > National Machinery Co.
> > Phone:   419-443-2257
> > Fax: 419-443-2376
> > Email:   [EMAIL PROTECTED]
> >



Re: Notes: conversion to logging and TDP

2001-01-05 Thread Guillaume Gilbert

We are now a good 6-7 months into this project. We have 2 RS/6000 servers
with several Notes servers on each; each RS/6000 has just under 1 TB of
data. Our preproduction server is now backing up and logging with TDP. It
has been a long road for our Notes team, but I think it is well worth it:
the occupancy of each server in TSM will be cut in half.

- Original Message -
From: "Richard L. Rhodes" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 05, 2001 2:10 PM
Subject: Notes: conversion to logging and TDP


> Hi Everyone,
>
> As with many tsm sites, we are struggling with Notes backup.
>
> Our environment is that we backup Notes from several large
> centralized replication servers, not the actual servers users
> interact with.  Our Notes servers do NOT use the v5 transaction log
> feature, so we basically do a full backup every night.  Yes, 800gb's
> worth every night!!!
>
> The Notes team has agreed to look at the change to logging and
> backing up with TDP for Notes, but this isn't going to happen any
> time soon since it's a HUDGE change - according to them.
>
> I know next to nothing about Notes, so I'd like to ask some of the
> experts out there some questions:
>
> 1)  Now big of an effort is to convert to logging?
>
> 2)  Is is possible to only use logging on our centralized rep servers
> and not on the actual production servers?  In other words, covert
> only the rep servers to logging and TDP, and, leave the production
> servers alone.
>
> Thanks
>
> Rick



Re: New 3584 Library Firmware available

2003-07-31 Thread Guillaume Gilbert
I just upgraded my library from 2 to 4 drives and did the full microcode
upgrade; I think it was 3290 that was installed. Haven't had any problems
since.

You think having 17 of 35 drives offline is bad; try 2 out of 2 because of
stuck tapes. That little clip has really saved me a lot of grief since it
was installed...


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]





"Anthonijsz, Marcel M SITI-ITDGE13" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2003-07-31 
09:38:20

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: New 3584 Library Firmware available

Have any of you daredevils already implemented this code?
What are the benefits and what differences do you see?

We have just upgraded our LTO1 drives to 35V0 and the library to 3060. We had
already planned everything through change control etc., and then IBM told us that we
could also upgrade to 3290... So this is the next change...

IBM also placed a new type of spring/clip in the LTO1 drives, helping a lot against 
stuck tapes (or error code B881 on your 3584 library). The weekend before the change
we had 17 of 35 drives offline due to stuck tapes... Grrr.

So if you see many of these errors, ask your CE for the new clip placement...

Marcel Anthonijsz
Central Data Storage Manager (a.k.a. storman)
Shell Information Technology International B.V.
PO Box 1027, 2260 BA  Leidschendam, The Netherlands

Tel: +31-70 303 4984
Mob: +31-6 24 23 6522
Email: [EMAIL PROTECTED]

-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: 30 July 2003 15:02
To: [EMAIL PROTECTED]
Subject: New 3584 Library Firmware available


Hi *SM-ers!
For all other proud owners of an IBM 3584 UltraScalable library: IBM has
silently published a new microcode (level 3290) for this library. You can
find it at: ftp://service.boulder.ibm.com/storage/358x/3584/, filename
3290bin.tar or 3290bin.zip.
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines






Re: Resolved: URGENT!! TSM client schedules not running

2003-08-14 Thread Guillaume Gilbert
OK, you guys had to start talking about this and now it has happened to me.
Nearly all my servers in prompted mode didn't take a backup last night.
Nothing in the activity log and nothing in the dsmsched.logs. I'm running
server 5.1.6.4 on AIX 5.1 and clients range from 4.1.2.12 to 5.1.5.15, all
Windows (NT and 2K). Most of these servers had the MS patch installed for
the Blaster virus yesterday. Is this just a coincidence? I know it's not a
network problem because all my UNIX clients were running backups all night,
and those are not scheduled by TSM but by crontabs.

So David, what was the tcpip settings you had to change?

Thanks

Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]


When I had the problem, I did open a PMR and they finally tracked my
problem down to a tcp read buffer problem.  They had me adjust the tcp
values and the problem has not reoccurred.

David



Re: Resolved: URGENT!! TSM client schedules not running

2003-08-14 Thread Guillaume Gilbert
Hi David

Thanks for the response. The "No receive pool buffer errors" count was at
88934!!! I changed the Receive Pool Buffer Size to 2048 (it was at 384) and
am hoping everything will be well for tonight. I am having the sysadmins
bounce the schedulers on all the servers that did not work last night, just
in case...


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]





David E Ehresman <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2003-08-14 11:04:49

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Resolved: URGENT!! TSM client schedules not running

My problem was also with Prompted nodes only.  Polling nodes ran ok as
did croned backups.

The tcp setting was the size of the Receive Pool Buffer.  IBM gave me
this query to determine if there was a problem:
  (echo "";date; netstat -v ent1 | grep "Receive Pool Buffer" )

The "No Receive Pool Buffer Errors" line should report zero.  They had
me set the "Receive Pool Buffer Size" up to 2048 (AIX 5.1).
Unfortunately, I don't remember the command to do that.

David

>>> [EMAIL PROTECTED] 08/14/03 10:29AM >>>
Ok you guys had to start talking about this and now it has happened to
me. Nearly all my servers in prompt mode didn't take a backup last
night. Nothing in  the activity
log and nothing in the dsmsched.logs. I'm running server 5.1.6.4 on AIX
5.1 and clients range from 4.1.2.12 to 5.1.5.15, all Windows (NT and
2k). Most of these servers
had the MS patch installed for the blaster virus yesterday. Is this
just a coincidence??. I know its not a network problem because all my
UNIX clients where running
backups all night and those are not scheduled by TSM but by crontabs.

So David, what was the tcpip settings you had to change?

Thanks

Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]


When I had the problem, I did open a PMR and they finally tracked my
problem down to a tcp read buffer problem.  They had me adjust the tcp
values and the problem has not reoccurred.

David






Re: Resolved: URGENT!! TSM client schedules not running (Guillaume Gilbert)

2003-08-15 Thread Guillaume Gilbert
The AIX sysadmin did it through SMIT. He had to disable the network card to
do it. I think it's under smitty tcpip.
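For reference, a rough outline of the equivalent from the command line. The exact receive-pool attribute name varies by adapter type, so it is shown as a placeholder; check the lsattr output for the real name on your adapter:

```sh
netstat -v ent1 | grep "Receive Pool Buffer"   # non-zero error count = problem
lsattr -El ent1                                # list the adapter's attributes
ifconfig en1 detach                            # interface must be detached first
chdev -l ent1 -a <receive_pool_attribute>=2048
ifconfig en1 up                                # or reconfigure via smitty tcpip
```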

Guillaume




Jacques Butcher <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2003-08-15 02:01:45

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Resolved: URGENT!! TSM client schedules not running
(Guillaume Gilbert)

Hi Guillaume,

What command did you use to change the size of the "Receive
Pool Buffer Size"?

On Thu, 14 Aug 2003 12:44:09 -0400
 David E Ehresman <[EMAIL PROTECTED]> wrote:
> If other people who are experiencing the problem also
> have receive pool
> buffer errors and increasing the size of the buffer pool
> helps, maybe we
> can get this added to the faq.
>
> David
>
> >>> [EMAIL PROTECTED] 08/14/03 11:50AM >>>
> Hi David
>
> Thanks for the response. The "No receive pool buffer
> errors" was at
> 88934!!! I changed the Receive Pool Buffer Size to 2048
> (it was at 384)
> and am hoping everything will
> be well for tonight. I am having the sysadmins bounce th
> eschedulers on
> all the servers that did not work last night just in
> case...
>
>
> Guillaume Gilbert
> Consultant - Storage Management
> CGI - Integrated Technology Management
> TEL.:  (514) 415-3000 ext 5091
> Pager:   (514) 957-2615
> Email : [EMAIL PROTECTED]
>

>
>
>
>
> David E Ehresman <[EMAIL PROTECTED]>@VM.MARIST.EDU>
> on 2003-08-14
> 11:04:49
>
> Please respond to "ADSM: Dist Stor Manager"
> <[EMAIL PROTECTED]>
>
> Sent by:  "ADSM: Dist Stor Manager"
> <[EMAIL PROTECTED]>
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject: Re: Resolved: URGENT!! TSM client schedules
> not running
>
> My problem was also with Prompted nodes only.  Polling
> nodes ran ok as
> did croned backups.
>
> The tcp setting was the size of the Receive Pool Buffer.
>  IBM gave me
> this query to determine if there was a problem:
>   (echo "";date; netstat -v ent1 | grep "Receive Pool
> Buffer" )
>
> The "No Receive Pool Buffer Errors" line should report
> zero.  They had
> me set the "Receive Pool Buffer Size" up to 2048 (AIX
> 5.1).
> Unfortunately, I don't remember the command to do that.
>
> David
>
> >>> [EMAIL PROTECTED] 08/14/03 10:29AM >>>
> Ok you guys had to start talking about this and now it
> has happened to
> me. Nearly all my servers in prompt mode didn't take a
> backup last
> night. Nothing in  the activity
> log and nothing in the dsmsched.logs. I'm running server
> 5.1.6.4 on
> AIX
> 5.1 and clients range from 4.1.2.12 to 5.1.5.15, all
> Windows (NT and
> 2k). Most of these servers
> had the MS patch installed for the blaster virus
> yesterday. Is this
> just a coincidence??. I know its not a network problem
> because all my
> UNIX clients where running
> backups all night and those are not scheduled by TSM but
> by crontabs.
>
> So David, what was the tcpip settings you had to change?
>
> Thanks
>
> Guillaume Gilbert
> Consultant - Storage Management
> CGI - Integrated Technology Management
> TEL.:  (514) 415-3000 ext 5091
> Pager:   (514) 957-2615
> Email : [EMAIL PROTECTED]
>

>
> When I had the problem, I did open a PMR and they finally
> tracked my
> problem down to a tcp read buffer problem.  They had me
> adjust the tcp
> values and the problem has not reoccurred.
>
> David

Jacques Butcher
TCM (Technology Corporate Management) Software Engineer
Cell:  +27 (0)84 676 0329
Tell:  +27 (0)11 483-2000
Fax:   +27 (0)11 728-3656
Nat. IT Diploma, MCSE, IBM Tivoli Storage Manager 5.1
Certified, NetVault Certified, IPSTor Certified,
IBM Certified Specialist - Enterprise Tape Solutions
Version 2







Re: TDP for Domino 6. ver 5.1.5.1

2003-08-27 Thread Guillaume Gilbert
We do something very similar:

- Compact of all the databases on Saturday at midnight (the DBIID is changed)
- Incremental right after that (actually a full, because of the DBIID change)
- Archive log every hour from 07:00 to 19:00
- Incremental every weeknight

We have 3 Domino instances on 1 p690 partition, for a total of 350 GB of
Notes data. Our retention is Nolimit, Nolimit, 90, 90. This gives me 2.5 TB
of compressed data in TSM.


Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]





Del Hoobler <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2003-08-27 10:50:03

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: TDP for Domino 6.  ver 5.1.5.1

Bill,

You are correct. It also "inactivates" databases
that no longer exist on the Domino server.

If a customer only runs archivelog and selective backups,
the databases that have been "removed" from their
Domino Server will remain active and never expire.

The bigger problem I see with only running weekly SELECTIVE and
hourly ARCHIVELOG (with no INCREMENTAL) is that it will not
pick up situations when the logged databases DBIID has changed.
Once the DBIID changes, you cannot apply
the archive logs to the old backup since Domino
treats databases with a new DBIID as a "different"
database in the logger's eyes.

One very common technique is to run:
   - Weekly SELECTIVE type backups (followed by an INACTIVATELOGS command)
   - Nightly INCREMENTAL type backups
   - Hourly ARCHIVELOG backups
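With the TDP for Domino command-line client, that rotation might be scripted roughly as follows (the domdsmc subcommands are real; the selection pattern and scheduling around them are illustrative):

```
# Weekly, e.g. after the maintenance compact:
domdsmc selective "*"
domdsmc inactivatelogs
# Nightly:
domdsmc incremental "*"
# Hourly:
domdsmc archivelog
```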

The time frames on the frequency of these vary depending
on Domino server load, capacity, maintenance schedule, etc.

Thanks,

Del



> I was also told that the INCREMENTAL processing expires deleted
databases.
> The SELECTIVE only backs up what's there and if you never run the
> INCREMENTAL any deleted databases will still be there as active and
won't
> expire.
>
> I have a client that I'm still trying to convince that hourly archivelog
> backups and weekly selective full backups just aren't enough.






5.1 end of support

2003-09-17 Thread Guillaume Gilbert
Hi all

5.2 came out this summer, so does that mean that 5.1 will be out of support
around June next year? I've tried to search ibm.com but could not find an
answer. The end-of-support matrix only lists 4.1 and 4.2.

Thanks

Guillaume Gilbert
Consultant - Storage Management
CGI - Integrated Technology Management
TEL.:  (514) 415-3000 ext 5091
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]



Re: SQL server backups

2003-09-23 Thread Guillaume Gilbert
Since you are going directly to tape, why not create a storage pool just for those
backups? You then don't have to run reclamation on it.
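A sketch of that idea with hypothetical names — a dedicated sequential pool whose reclaim threshold is parked at 100 so reclamation never kicks in, and a management class pointing those backups straight at it:

```
/* hypothetical names: device class 3590class, policy domain/set STANDARD */
define stgpool sqltape 3590class maxscratch=20 reclaim=100
define mgmtclass standard standard sqlmc
define copygroup standard standard sqlmc type=backup destination=sqltape
activate policyset standard standard
```

The TDP for SQL node then binds its backups to the new class via an include statement in its options file.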

Guillaume Gilbert
CGI Canada





Thomas Denier <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2003-09-23 14:11:11

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: SQL server backups

> >My company has a Windows server supporting a very large SQL Server
database.
> >Before the last hardware upgrade, the database was backed up with the SQL
> >Server Connect Agent. Since the upgrade the database has been dumped to a
> >flat file which is then backed up by the backup/archive client. Either way
> >the backup arrives at the TSM server as a single file containing about
> >20 gigabytes of data. A single file this size causes a number of problems.
> >The recovery log gets pinned for hours. Tape reclamation tends to perform
> >poorly. The system administrator is now considering installing TDP for
> >SQL Server. Would this software still send the backup as a single huge
> >file?
>
> Without product, level, and tape drive details, it's hard to give a lot of
> advice; but it sounds like you have significant tape drive technology issues
> slowing everything down.  Large files are usually the optimal type of
objects
> for backup systems, allowing streaming, minimal tracking updates, etc.
> I would not approach TDP until you address underlying issues.  I'd begin by
> getting benchmark numbers on your tape drive and other subsystem throughput
> rates to isolate the bottleneck and address that.

I didn't give details about the tape configuration because it is
clearly not the problem. We used to be able to back up the
database to a disk storage pool initially (the database is slowly but
steadily growing) and had the same kind of log pinning issues. The
worst form of the tape reclamation pathology occurs with offsite tapes.
We tend to get one of the huge files starting at or near the beginning
of a 3590 J volume, filling the rest of that volume, and spilling on
to the first few percent of another volume. The next offsite reclamation
process regards the second volume as a prime candidate for reclamation
and recopies the entire file to two new volumes. In some cases we have
seen this phenomenon repeatedly for several days running. Onsite tape
reclamation handles big spanned files more gracefully, but even there
such files result in a poor trade-off between the amount of data
movement and the amount of free space generated. The pathological
behavior results from the relationship between file size and volume
capacity, and has absolutely no relation to streaming, tracking
updates, or whatever.






Scheduler service on MSCS cluster

2003-11-13 Thread Guillaume Gilbert
Hi All

I have a problem installing a scheduling service on a Win2k MSCS cluster.
When I run dsmc q se and generate the password, everything is fine. When I
run the dsmcutil command to install the scheduler service, I get the
following error in dsmerror.log :

11/13/2003 11:00:27 ReadPswdFromRegistry(): getRegValue(): Win32 RC=2 .
11/13/2003 11:00:27 sessOpen: Error 137 from signon authentication.

In dsmsched.log, there is a prompt for a user id and then a communication
error.

I ran the command exactly as it says in the Windows User's Guide.
Clusternode yes and passwordaccess generate are in the dsm.opt file.
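For reference, the pattern I'm following from the manual looks like this (placeholder names, not my real values):

```
dsmcutil install /name:"TSM Cluster Scheduler" /node:CLUSTERGRP1 ^
    /password:secret /clusternode:yes /clustername:MYCLUSTER ^
    /optfile:q:\tsm\dsm.opt /autostart:no /startnow:no
```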

The two local nodes are working fine and a backup was run last night with no
problems.

The client is at 5.1.5.15 and the server is at 5.1.6.5 on Sun Solaris 2.8.

Thanks for any help

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]


Re: encryption: 56 to 128

2003-11-27 Thread Guillaume Gilbert
I always ask if any data that transits the network is encrypted. Usually
it's not. So why would the backups be? Unlike NetBackup, TSM does not use
tar to write to tapes. It uses its own proprietary format. And without the
TSM database, a TSM tape is worthless...
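For anyone who does want it anyway: the 56-bit DES encryption under discussion is switched on per file spec in the client options file — a sketch, path hypothetical:

```
* dsm.opt fragment; the \...\ pattern matches any subdirectory
passwordaccess generate
encryptkey save
include.encrypt d:\finance\...\*
```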

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Remco Post
> Sent: Thursday, November 27, 2003 10:58 AM
> To: [EMAIL PROTECTED]
> Subject: Re: encryption: 56 to 128
>
>
> On Thu, 27 Nov 2003 09:41:34 -0500
> Joe Crnjanski <[EMAIL PROTECTED]> wrote:
>
> > Hi All,
> >
> > Does anybody know if IBM is planning to upgrade their
> famous encryption
> > from 56 to 128 bit at least. Not to mention that today on
> market 512 bit
> > is not very difficult to find in other softwares.
> >
> > We lost couple of customers because they requested at least 128 bit
> > encryption.
> >
> > I know that IBM's argument is effect on speed of backup,
> but GIVE US A
> > CHANCE to choose and we can decide when to use 56 128 or 1024 bit.
> >
>
> IBM also argues, rightly, that if you need stronger
> encryption, you'll probably need to encrypt the files while
> they are stored on your disk as well. After some thought, I
> think I'll have to agree. Remember that even 56-bit DES
> currently cannot easily be cracked by anyone who is not in
> the business of cracking strong encryption for a living.
>
>
> > Joe Crnjanski
> > Infinity Network Solutions Inc.
> > Phone: 416-235-0931 x26
> > Fax: 416-235-0265
> > Web:  www.infinitynetwork.com
>
>
> --
> Met vriendelijke groeten,
>
> Remco Post
>
> SARA - Reken- en Netwerkdiensten
> http://www.sara.nl
> High Performance Computing  Tel. +31 20
> 592 8008Fax. +31 20 668 3167
>
> "I really didn't foresee the Internet. But then, neither did
> the computer
> industry. Not that that tells us very much of course - the
> computer industry
> didn't even foresee that the century was going to end." --
> Douglas Adams
>


Re: SQL Query within TSM

2004-01-08 Thread Guillaume Gilbert
Here is a query to get all volumes used by 1 node :

select distinct -
cast(volumeusage.volume_name as char(6)) as Volume, -
cast(volumes.est_capacity_mb as decimal(6,0)) as "Capacity",-
cast(volumes.pct_utilized as decimal(4,1)) as "% used", -
cast(volumes.status as char(7)) as Status, -
cast(volumes.access as char(6)) as Access, -
date(volumes.last_write_date) as "Last write" -
from volumeusage,volumes -
where volumeusage.node_name=upper('$1') and -
volumeusage.stgpool_name='STG_LTODXN_CL' and -
volumeusage.volume_name=volumes.volume_name -
order by 1

Change your storage pool name accordingly.
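Because of the `$1` substitution, the query above is meant to live as a server script; a sketch of loading and running it (script and node names hypothetical):

```
define script volsbynode file=volsbynode.scr desc="Tapes used by a node"
run volsbynode MYNODE
```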

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of tsmadmin account for Excaliber Business Solutions
> Sent: Thursday, January 08, 2004 7:23 AM
> To: [EMAIL PROTECTED]
> Subject: SQL Query within TSM
> Importance: High
>
>
>  Hi Guys
>
>  Would like to know if anyone has seen or wriiten a SQL Query
> in TSM to get info relating
>  to what management class, node name, cart/tape, how much of
> data for each node has been backed up.
>  The info much be joined together for each node within TSM.
>  I require for example :
>
>  nodename=TEST
>  management class = TEST
>  number of carts or which carts test data is stored on
>  how much of data test node has stored in TSM.
>
>  If anyone could help please advise.
>
>  Thks
>  Sean
>
>
> "This e-mail is sent on the Terms and Conditions that can be
> accessed by Clicking on this link
> http://www.vodacom.net/legal/email.asp "
>


Re: TSM best practices manual?

2004-01-28 Thread Guillaume Gilbert
Hi Tab

Instead of buying a sixth, buy one big one to consolidate everything. STK
has the L700E with a capacity of nearly 1,400 tapes with 2 frames, and the
L5500 with nearly 5,000 tapes. IBM's 3584 can go up to 16 frames now (I
think) with just as much capacity. I'm working with all three of these and
have had almost no problems. (Hardware is hardware, it breaks...) Put 10 LTO2
drives in either and you're all set.

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Tab Trepagnier
> Sent: Wednesday, January 28, 2004 2:42 PM
> To: [EMAIL PROTECTED]
> Subject: Re: TSM best practices manual?
>
>
> Wanda,
>
> Thanks very much for the suggestions.  In answer to your questions:
>
> 1.  I know I have compression operating on the drives because
> our average
> tape capacity is 1.6 X uncompressed capacity on all three media.
> 2. Most of what we back up is server data; a little bit is
> OS, etc, but
> not much.  We're a Notes and Oracle shop and we have a LOT of
> data from
> both systems.  We also design and manufacture our own
> products, and our
> engineers routinely generate 100MB+ CAD files.
> 3. We keep five copies of user-created data, and two copies
> of everything
> else.  Design data is also archived but that isn't relevant to this
> discussion.
> 4. True, but I already have FIVE libraries; I am trying to
> avoid buying a
> sixth.
>
> This is what I think I'm going to do.  At present we keep everything
> except permanent archives online fulltime since we don't
> really have an
> "operator".  We have two parallel data paths: "small clients"
> going to a
> 3575, and "large clients" going to the 3583.  I'm going to
> recombine those
> paths into a single path and make liberal use of the Migration Delay
> feature.  The idea is for the incoming data to travel:  Disk
> --> 3575 -->
> 3583 --> MSL DLT --> shelf.
> The idea is to have data 1-2 days old on disk (radical!),
> data 2-10 days
> old on fast-access 3570, data 10-180 days old on LTO, and
> data older than
> 6-12 months on the shelf.  The little MSL retains nothing but
> is instead
> just a portal.
>
> As for a "best practices" guide, I've begun browsing the TSM 5.2
> Implementation Guide to see if that provides the info I'm
> looking for. I'm
> also browsing my training handout from the ADSM 3.1 Advanced
> Implementation course.
>
> Thanks again for the suggestions.
>
> Tab
>
>
>
>
>
>
>
> "Prather, Wanda" <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 01/28/2004 12:44 PM
> Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: TSM best practices manual?
>
>
> Tab,
>
> I'm not sure this is an issue of TSM design -  if your
> libraries are out
> of
> capacity in terms of SLOTS, rather than throughput, you just have "too
> much"
> data.
>
> That either means you are
>
> 1) not compressing the data as much as you can, or
> 2) backing up things you don't need to
> 3) keeping data longer/more copies  than you need to
> 4) really in need of additional library space
>
> For 1), it's a matter of checking to make sure that your
> drives do have
> compression turned on.  If you can't compress at the drive
> level, turn it
> on
> at the client level.
>
> For 2-4, I don't know any magic/automatic way of figuring it out.
>
> Here's what I do:
>
> dsmadmc -id=x -password=yyy -commadelimited  "select
> CURRENT_DATE
> as
> DATE,'SPACEAUDIT',node_name as node, backup_mb,
> backup_copy_mb,archive_mb,
> archive_copy_mb  from auditocc"`;
>
> Suck that into a spreadsheet and look to see which clients
> are taking up
> the
> most space on the server side.
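The comma-delimited rows can also be summarized without a spreadsheet; a minimal sketch (column order taken from the select above; node names and numbers invented):

```python
import csv
from collections import defaultdict

def top_space_users(lines):
    """Total per-node occupancy (MB) from comma-delimited
    'select ... from auditocc' output, largest first.
    Columns: date, tag, node, backup_mb, backup_copy_mb,
    archive_mb, archive_copy_mb."""
    totals = defaultdict(float)
    for row in csv.reader(lines):
        if len(row) < 7:
            continue  # skip headers and blank lines
        totals[row[2]] += sum(float(x) for x in row[3:7])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

sample = [
    "2004-01-28,SPACEAUDIT,FILESRV,120000,120000,0,0",
    "2004-01-28,SPACEAUDIT,NOTES01,400000,400000,50000,50000",
]
for node, mb in top_space_users(sample):
    print(node, mb)
```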
>
> Then go look in detail at the management classes and exclude lists
> associated with the "hoggish" clients, and see what you can
> find out about
> the copies they are keeping.
>
> - Are you keeping copies of EVERYTHING on the client for a zillion
> versions,
> rather than just the important data files?
> - for Windows 2000, are you keeping more copies of the SYSTEM
> OBJECT than
> would likely be used?
> - Look at their dsmsched.log files and see what is actually
> being backed
> up.
>
> - Be suspicious of TDP clients not deleteing copies they are
> supposed to.
> (For exampl

Re: 9940b tape device class

2004-02-09 Thread Guillaume Gilbert
You could use generictape but it's not a good idea. The device class to use
is ecartridge.

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
> On Behalf Of Wira Chinwong
> Sent: Monday, February 09, 2004 9:12 AM
> To: [EMAIL PROTECTED]
> Subject: 9940b tape device class
>
>
> TSM Expert,
> I have to use storage tek l700e and 9940b drive. I don't
> know what type of device class that I must define in TSM.
> Could I use generic device type ?
>
> Thanks for advance.
>
>
>
> Wira Chinwong
>
>


Virtual volume performance

2004-03-02 Thread Guillaume Gilbert
Hi all

TSM 5.1.6.4 on aix 5.1

I have two TSM instances on this server (one big (60 GB DB) and one very
small (2 GB DB)). I have set up virtual volumes to go from the small
instance to the big one. Performance is terrible, about 64 KB a second. I've
tried going through 127.0.0.1 and through the IP address, and it's the same.
I have tested a database backup and a migration from the disk pool.

How can I get this to work faster? I have 9840 drives and they can go up to 9.3
MB/sec when migrating.

Thanks for any help

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]


Re: veritas backup exec client and TSM server

2004-03-15 Thread Guillaume Gilbert
Yes, Veritas is a big player, but that does not mean it is better. We are a
very big outsourcing company and we use TSM, NetBackup and OmniBack. I
manage 14 TSM servers by myself, and 7 of those back up over 100 clients,
with the max being 300. I use every client there is, and every TDP. The
sysadmins are always reluctant to use TSM because they don't understand it.
After a month, they swear by it. The results are there. We were hit by MyDoom
a month ago; it wiped out entire Windows file servers. Every one was restored
by TSM without a hitch. We even did a BMR of a Win2K box after a patch
install caused it to blue screen. No problem.

On the NetBackup side, we have 3 administrators plus 5 (yes, 5) Veritas
consultants in house. There are 5 NetBackup servers. We have one very big
client that uses 28 drives on 2 sites. SAP backups seem to be a pain. There
is always an obscure parameter somewhere that no one knows about except the
software engineer at Veritas, and it takes you a week with support to get in
touch with him. The in-house consultants mostly know the GUI and nothing
else. Performance guides and all the documentation of that sort that I
cherish with TSM are not available to Veritas customers; they're for Veritas
people only. NetBackup was designed with small environments in mind and
scaled very poorly. You have to define LTO tapes as DLTs!!! My company used
to push for NetBackup; now it's TSM only.

All three products use the same robotic infrastructure: 3 STK L5500s and an
L700 with ACSLS. When I have a problem, it's because a drive is dead or a
tape is stuck, and that doesn't happen too often. NetBackup is always having
problems. I don't know why.

So yes, NetBackup offers Exchange mailbox restores and Oracle block-level
backups, but to me that is only FUD. I see first hand what the product is
and I will never recommend it.

Guillaume Gilbert
Backup Administrator
CGI - ITM
(514) 415-3000 x5091
[EMAIL PROTECTED]

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
> On Behalf Of Remco Post
> Sent: Monday, March 15, 2004 2:26 PM
> To: [EMAIL PROTECTED]
> Subject: Re: veritas backup exec client and TSM server
> 
> 
> On Mon, 15 Mar 2004 13:26:58 -0500
> "Remeta, Mark" <[EMAIL PROTECTED]> wrote:
> 
> > Get used to it. Wait until you have a problem with using 
> Veritas to backup
> > your IS and TSM as the storage agent... Another cost of going the
> > non-convential route.
> >
> 
> ok, that assumes that veritas is less able to support their 
> product than IBM
> is. Since Veritas is a _big_ player in the back-up market, 
> I'm not ready to
> do so just yet. Remember, this is a product in use by at least as may
> organisations as TSM is, and it wouldn't be if people can't 
> rely on the
> quality of the product.
> 
> Any argument as in 'if the software is upgraded, we need to 
> upgrade the
> client', only means exactly that. With win2003 this was true, 
> so why not
> with a mail database? I don't see the problem. _My_ problem 
> is that my users
> are asking for a functionality that I'm currently unable to 
> provide, unless
> I do unconventional things. Since SARA is more or less in the 
> buisiness of
> doing unconventional things (like running TSM!) this should 
> be no shock
> 
> >
> >
> > >-Original Message-
> > >From: Remco Post [mailto:[EMAIL PROTECTED]
> > >Sent: Monday, March 15, 2004 12:49 PM
> > >To: [EMAIL PROTECTED]
> > >Subject: Re: veritas backup exec client and TSM server
> >
> > >But still, I have no answer to my experiences question, 
> which is more
> > >important to me than IBM politics.
> >
> >
> > Confidentiality Note: The information transmitted is 
> intended only for the
> > person or entity to whom or which it is addressed and may contain
> > confidential and/or privileged material. Any review, retransmission,
> > dissemination or other use of this information by persons 
> or entities
> > other than the intended recipient is prohibited. If you 
> receive this in
> > error, please delete this material immediately.
> 
> 
> --
> Met vriendelijke groeten,
> 
> Remco Post
> 
> SARA - Reken- en Netwerkdiensten  
> http://www.sara.nl
> High Performance Computing  Tel. +31 20 
> 592 8008Fax. +31 20 668 3167
> 
> "I really didn't foresee the Internet. But then, neither did 
> the computer
> industry. Not that that tells us very much of course - the 
> computer industry
> didn't even foresee that the century was going to end." -- 
> Douglas Adams
> 


Set access problems after export/import

2002-07-18 Thread Guillaume Gilbert

Here is the scenario :

Client : AIX 4.3.3 TSM 4.2.1.21
Server : Z/OS TSM 4.2.1.15

- An archive is created on client1 to server1 with user1
- A set access command is done to give access to user2 to the files
- server1 exports the archive
- server2 imports the archive
- user2 on client2 tries to retrieve the archive from server2 but cannot see the
archived files when doing a q archive, and a retrieve command returns message ANS1083E.

After creating user1 on client2, the retrieve was successful.

The same node_name is used for the archive and the retrieve.

 This used to work with TSM client 4.1.3 and TSM server 4.1.3. Was this a bug back 
then or is this a bug now?

Thanks

Guillaume Gilbert
Consultant - Storage Management
CGI-Desjardins
TEL.:  (514) 281-7000 ext 3642
Pager:   (514) 957-2615
Email : [EMAIL PROTECTED]




Re: LTO Tape OR 9840

2002-07-26 Thread Guillaume Gilbert

The licensing for Gresham is pretty good. I don't have the details, but IIRC
it's per silo used. We've been using it for about 5 years and it works
great. We interface with Library Station on Z/OS without a hitch. As for
library sharing, it does all the work. No need to go through TSM for that;
just configure the drives to ELM and away you go. For that, though, you have
to use TSM's device driver, because library sharing is not supported with the
GenericTape device class, which is what you use with Gresham's AdvanTAPE
driver. The same goes for LAN-free backups. Be sure not to use QuickLoop on a
Brocade switch to connect your drives, because you will get a lot of I/O
errors. I have two TSM servers on the same AIX box and they use the same
drives. This setup has been running for over a year with TSM 4.1.3. I use
the same drives on my test box.

When we started, we would define the volumes directly in TSM because ELM had
problems managing scratch tapes. When I converted to TSM's device driver, I
also deleted all my 'empty' tapes in TSM. I haven't had a problem with
overwritten tapes since. Gresham supplies a script to mark used tapes as
non-scratch, which I run a few times a day just to be sure. It's also easier
this way because you don't have to define volumes for each storage pool.

Guillaume Gilbert
CGI Canada
- Original Message -
From: "Joni Moyer" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, July 25, 2002 7:22 AM
Subject: Re: LTO Tape OR 9840


Our problem is that with our 9310 silo, if we go the IBM way we will have
to purchase a third-party software, Gresham, that will manage the library.
From what I understand, this is not a cheap solution because we have to pay
for each license we have out there, which is approximately 250. What is
your environment? Do you have any problems/concerns with either one?
Thanks!

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338



Guillaume Gilbert
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
07/24/2002 04:38 PM
Please respond to "ADSM: Dist Stor Manager"
cc:
Subject: Re: LTO Tape OR 9840






Hi Joni

I personally prefer 9840s, but the bean counters love the LTO because the
drives are much lower cost and the cost per GB is lower. The 9840 is
probably the best drive/tape on the market today. The native throughput of
the 9840B is 20 MB/sec while the LTO is 16 MB/sec. The start/stop on the
9840 is considerably faster than the LTO.
Reclaiming a 40-50% full LTO took me 4 to 5 hours (client-compressed data),
and I can move the data off a full 9840 (client-compressed data) in less
than 30 minutes. I reclaim my 9840s at 40% and do about 20 to 30 tapes a
day easily. Is your STK silo a 9310 or a 5500? Or is it one of the smaller
ones like an L180 or L700? With the big silos the mount time is not much of
an issue. The seek time will be faster with the 9840.

As for tape life, I've been working with 9840s for a little over a year
without any tape failure.

I look at it this way: if I lose a 9840, I lose a max of 20 GB of data.
With an LTO, I lose 5 times that.

Guillaume Gilbert
CGI Canada




Joni Moyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-07-24 15:17:32

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:    [EMAIL PROTECTED]
cc:
Subject:   LTO Tape OR 9840

Hello everyone!

The environment here is going to be changing soon... We will be moving off
of the mainframe and onto an AIX server that will be on our SAN.  We will
have 1 STK silo for the tapes for 2 TSM servers.  Right now we are
considering IBM's LTO or STK's 9840.  I was just wondering if anyone out
there has had experience with either one and if so,  what are the pro's and
con's of them?  Has anyone that has worked with LTO know how long it takes
to recover a bad tape?  Considering that they are 100GB tapes, it was
assumed that it would take 5 times as long as it does to recover a Magstar
3590(20 GB).  And also, do the tapes get damaged easily or is that all a
matter of handling them to take them offsite to vaults?  Thank you

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338









Re: LTO Tape OR 9840

2002-07-26 Thread Guillaume Gilbert

Hi Joni

I personally prefer 9840s, but the bean counters love the LTO because the drives are
much lower cost and the cost per GB is lower. The 9840 is probably the best
drive/tape on the market today. The native throughput of the 9840B is 20 MB/sec while the
LTO is 16 MB/sec. The start/stop on the 9840 is considerably faster than the LTO.
Reclaiming a 40-50% full LTO took me 4 to 5 hours (client-compressed data), and I
can move the data off a full 9840 (client-compressed data) in less than 30 minutes. I reclaim
my 9840s at 40% and do about 20 to 30 tapes a day easily. Is your STK silo a 9310 or a
5500? Or is it one of the smaller ones like an L180 or L700? With the big silos the
mount time is not much of an issue. The seek time will be faster with the 9840.

As for tape life, I've been working with 9840s for a little over a year without any
tape failure.

I look at it this way: if I lose a 9840, I lose a max of 20 GB of data. With an LTO,
I lose 5 times that.

Guillaume Gilbert
CGI Canada




Joni Moyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-07-24 15:17:32

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:    [EMAIL PROTECTED]
cc:
Subject:   LTO Tape OR 9840

Hello everyone!

The environment here is going to be changing soon... We will be moving off
of the mainframe and onto an AIX server that will be on our SAN.  We will
have 1 STK silo for the tapes for 2 TSM servers.  Right now we are
considering IBM's LTO or STK's 9840.  I was just wondering if anyone out
there has had experience with either one and if so,  what are the pro's and
con's of them?  Has anyone that has worked with LTO know how long it takes
to recover a bad tape?  Considering that they are 100GB tapes, it was
assumed that it would take 5 times as long as it does to recover a Magstar
3590(20 GB).  And also, do the tapes get damaged easily or is that all a
matter of handling them to take them offsite to vaults?  Thank you

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338






Backup of Win2K Fileserver

2002-07-31 Thread Guillaume Gilbert

Hello

I have a BIG performance problem with the backup of a Win2K fileserver. It used to be
pretty long before, but it was manageable. But now the sysadmins have put it on a Compaq
StorageWorks SAN. By doing that they of course changed the drive letter, so now it has to
do a full backup of that drive. The old drive had 1,173,414 files and 120 GB of
data according to q occ. We compress at the client. We have backup retention set to
2-1-NL-30. The backup had been running for 2 weeks!!! when we cancelled it to try to
tweak certain options in dsm.opt. The client is at 4.2.1.21 and the server is at 4.1.3
(4.2.2.7 in a few weeks). The network is 100 Mb. I know that journal backups will help,
but as long as I can't get a full incremental in, they don't do me any good. Some of
the settings in dsm.opt:

TCPWindowsize 63
TxnByteLimit 256000
compressalways yes
RESOURceutilization 10
CHAngingretries 2

The network card is set to full duplex. I wonder if an FTP test will show some
gremlins in the network... I'll try it.

I'm certain the server is OK. It's an F80 with 4 processors and 1.5 GB of RAM, though I
can't seem to get the cache hit % above 98. My bufpoolsize is 524288. The DB is 22 GB,
73% utilized.

I'm really stumped and I would appreciate any help

Thanks

Guillaume Gilbert



Re: Backup of Win2K Fileserver

2002-08-01 Thread Guillaume Gilbert

Hi

Thanks for the tips. I back up 150 other clients (mostly NT, but with some AIX and
TDPs) without many performance problems. The server is AIX. The database is on a
Hitachi 7700E (15 GB 1rpm disks - RAID5) with dual-path Fibre Channel. I'm thinking
it's the Compaq SAN that's the problem. My group didn't have any say in how it was
configured. I did an FTP test yesterday (using the dsmsched.log, which is now 100 MB...)
and it was very slow, so the culprit is probably client-side. The 2 Ethernet (100 Mb/s)
cards on my server are maxed out (12 MB/sec each) when we do TDP for Notes backups, so
they're working OK. There is a Token Ring card on the server, so the backup is
probably going through there. I'll check with the sysadmins whether the CPU is used over
50%. If so, I'll turn off compression.

Guillaume Gilbert
CGI Canada




Salak Juraj <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-08-01 03:32:39

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:    [EMAIL PROTECTED]
cc:
Subject:   Re: Backup of Win2K Fileserver

Hi,

identify your bottleneck.
It is not obvious what it is.
Your backup times were sub-standard even a couple years ago.

Some tips:
 - check your network using FTP
 - check the performance of your SAN drives
- copy a part of your NT drive either to NUL or to another SAN
drive; how long will it take?
- look at the drive lights when backing up; at this speed they
should only occasionally light
- look at the CPU on your file server during the backup; at this
speed it should be well under 10%
- if CPU is near 100%, disable compression
 - check your server; even with 4 CPUs the database performance could be
the bottleneck
- copy a part of your NT file system onto a drive local to the TSM server
and back it up from there;
how long will it take?
If long, the database might be the bottleneck;
  in this case add spindles to your server and spread the database among
them,
and check whether the physical and logical drives on your SCSI
controller have caching on
(WRITE-BACK, not WRITE-THROUGH)

 - resourceutilization 10 -- if you have more NT drives to back up, with
this setting they will be backed up in parallel;
 maybe this kills your resources

 - TCPWindowsize 63 -- setting it larger could improve network speed if this
is the bottleneck; have a look at older postings

 - etc.
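A rough way to do the FTP check from the first tip (host name hypothetical): time a large transfer and compare against the roughly 12 MB/s that a healthy 100 Mb full-duplex link can sustain.

```
# create a ~100 MB test file
dd if=/dev/zero of=/tmp/ftptest bs=1024k count=100
# then time a binary-mode transfer to the server
time ftp tsmserver
ftp> bin
ftp> put /tmp/ftptest /tmp/ftptest
```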

good luck

Juraj




> -Original Message-
> From: Steve Harris [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 1:19 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Backup of Win2K Fileserver
>
>
> Hi Guillaume
>
> RENAME FILESPACE may help here
>
> Rename the new filespace to some "temporary" name
> Rename the old filespace to the new name
> Run incremental backup
>
> After a suitable period, delete the "temporary" filespace.
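Spelled out with hypothetical names — say node FILESRV01's data moved from E: to G: — the sequence on the server would be:

```
/* park the new (partly backed up) filespace under a temporary name */
rename filespace FILESRV01 \\filesrv01\g$ \\filesrv01\g$.tmp
/* give the old filespace the new drive's name, then run the incremental */
rename filespace FILESRV01 \\filesrv01\e$ \\filesrv01\g$
/* after a suitable period */
delete filespace FILESRV01 \\filesrv01\g$.tmp
```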
>
> Regards
>
> Steve Harris
> AIX and TSM Admin
> Queensland Health, Brisbane Australia
>
> >>> [EMAIL PROTECTED] 01/08/2002 6:30:35 >>>
> Hello
>
> I have a BIG performance problem with the backup of a win2k
> fileserver. It used to be pretty long before but it was
> managable. But now the sysadmins put it on a compaq
> storageworks SAN. By doing that they of course changed the
> drive letter. Now it has to do a full backup of that drive.
> The old drive had  1,173,414 files and 120 GB  of
> data according to q occ. We compress at the client. We have
> backup retention set to 2-1-NL-30. The backup had been
> running for 2 weeks!!! when we cancelled it to try to
> tweak certain options in dsm.opt. The client is at 4.2.1.21
> and the server is at 4.1.3 (4.2.2.7 in a few weeks). Network
> is 100 mb. I know that journal backups will help
> but as long as I don't get a full incremental in it doesn't
> do me any good. Some of the settings in dsm.opt :
>
> TCPWindowsize 63
> TxnByteLimit 256000
> TCPWindowsize 63
> compressalways yes
> RESOURceutilization 10
> CHAngingretries 2
>
> The network card is set to full duplex. I wonder if an FTP
> test with show some Gremlins in the network...?? Will try it..
>
> I'm certain the server is ok. It's a F80 with 4 processors
> and 1.5 GB of RAM, though I can't seem to get the cache hit %
> above 98. my bufpoolsize is 524288. DB is 22 GB
> 73% utilized.
>
> I'm really stumped and I would appreciate any help
>
> Thanks
>
> Guillaume Gilbert
>
>
>

Re: Backup of Win2K Fileserver

2002-08-02 Thread Guillaume Gilbert

Hey there

Well, it appears the sysadmin didn't tell me everything. The Ethernet card was set to
auto-detect. They will set it to 100 Mb/s full duplex and the performance should be
much better. Also, I renamed the old filespace to match the new one. That's a very good tip
that I didn't think of. Thank you very much.

Guillaume Gilbert
CGI Canada




Salak Juraj <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-08-01 03:32:39

Veuillez répondre à "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Envoyé par :   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


Pour :[EMAIL PROTECTED]
cc :
Objet :   Re: Backup of Win2`K Fileserver

Hi,

identify your bottleneck.
It is not obvious what it is.
Your backup times were sub-standard even a couple of years ago.

Some tips:
 - check your network using FTP
 - check the performance of your SAN drives:
   copy part of your NT drive either to NUL or to another SAN drive;
   how long does it take?
 - look at the drive lights when backing up; at this speed they
   should only occasionally light
 - look at the CPU on your file server during the backup; at this
   speed it should be well under 10%;
   if CPU is near 100%, disable compression
 - check your server; even with 4 CPUs the database performance could be
   the bottleneck:
   copy part of your NT file system to a drive local to the TSM server
   and back it up from there; how long does it take?
   If long, the database might be the bottleneck;
   in that case add spindles to your server, spread the database among
   them, and check whether the physical and logical drives on your SCSI
   controller have caching on (WRITE-BACK, not WRITE-THROUGH)
 - RESOURceutilization 10 -- if you have more NT drives to back up, with
   this setting they will be backed up in parallel;
   maybe this kills your resources
 - TCPWindowsize 63 -- setting it larger could improve network speed if this
   is the bottleneck; have a look at older postings
 - etc.
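The "copy a chunk of the drive and time it" checks above can be scripted. Here is a minimal Python sketch; the function name and approach (timing shutil.copyfile) are my own illustration, not anything from the thread:

```python
import os
import shutil
import time

def copy_throughput_mb_s(src, dst):
    """Copy src to dst and return the observed throughput in MB/s,
    mirroring the manual 'copy part of the drive and time it' check."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    size_mb = os.path.getsize(src) / (1024 * 1024)
    # Guard against a zero elapsed time on very small files.
    return size_mb / max(elapsed, 1e-9)
```

Running it against a few hundred MB on the suspect drive gives a quick ceiling estimate; a rate far below the tape drive's native speed points at the disk or SAN path rather than at TSM itself.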

good luck

Juraj




> -Original Message-
> From: Steve Harris [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 1:19 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Backup of Win2`K Fileserver
>
>
> Hi Guillaume
>
> RENAME FILESPACE may help here
>
> Rename the new filespace to some "temporary" name
> Rename the old filespace to the new name
> Run incremental backup
>
> After a suitable period, delete the "temporary" filespace.
>
> Regards
>
> Steve Harris
> AIX and TSM Admin
> Queensland Health, Brisbane Australia
>
> >>> [EMAIL PROTECTED] 01/08/2002 6:30:35 >>>
> Hello
>
> I have a BIG performance problem with the backup of a win2k
> fileserver. It used to be pretty long before but it was
> managable. But now the sysadmins put it on a compaq
> storageworks SAN. By doing that they of course changed the
> drive letter. Now it has to do a full backup of that drive.
> The old drive had  1,173,414 files and 120 GB  of
> data according to q occ. We compress at the client. We have
> backup retention set to 2-1-NL-30. The backup had been
> running for 2 weeks!!! when we cancelled it to try to
> tweak certain options in dsm.opt. The client is at 4.2.1.21
> and the server is at 4.1.3 (4.2.2.7 in a few weeks). Network
> is 100 mb. I know that journal backups will help
> but as long as I don't get a full incremental in it doesn't
> do me any good. Some of the settings in dsm.opt :
>
> TCPWindowsize 63
> TxnByteLimit 256000
> TCPWindowsize 63
> compressalways yes
> RESOURceutilization 10
> CHAngingretries 2
>
> The network card is set to full duplex. I wonder if an FTP
> test will show some Gremlins in the network... Will try it..
>
> I'm certain the server is ok. It's a F80 with 4 processors
> and 1.5 GB of RAM, though I can't seem to get the cache hit %
> above 98. my bufpoolsize is 524288. DB is 22 GB
> 73% utilized.
>
> I'm really stumped and I would appreciate any help
>
> Thanks
>
> Guillaume Gilbert
>
>
>
> **
> This e-mail, including any attachments sent with it, is confidential
> and for the sole use of the intended recipient(s). This
> confidentiality
> is not waived or lost if you receive it and you are not the intended
> recipient(s), or if it is transmitted/ received in error.
>
> Any unauthorised use, alteration, disclosure, distribution or review
> of this e-mail is prohibited.  It may be subject to a
> statutory duty of
> confidentiality if it relates to health service matters.
>
> If you are not the intended recipient(s), or if you have
> received this
> e-mail in error, you are asked to immediately notify the sender by
> telephone or by return e-mail.  You should also delete this e-mail
> message and destroy any hard copies produced.
> **
>






Réf. : IBM vs. Storage Tek

2002-08-15 Thread Guillaume Gilbert

Hi Joni

What you wrote sums it up in terms of performance and reliability. I think the same
way you do as far as LTO goes. It is very good for small enterprises and full backups.
But I don't think it fits TSM very well, especially in a large environment like
yours. Almost every day I talk about it to our analysts so I won't have to support a
large TSM server (400 clients) with those tapes. My 9840As are very good. The Bs are
even better. The Gresham solution is easier to implement in a LAN-free backup
strategy. Compare the cost of DTELM licensing to that of TSM Library Sharing. I don't
have any figures but I bet the two are pretty close.

Sure with STK tapes you'll have more tapes but a Powderhorn can take it. Before the 
9840, we had over 3000 EE-tapes (1.6GB native) in our Powderhorn just for TSM with
only 4 drives. I was doing reclamation by copying the tapes to disk storage pool. With 
the 9840 I reclaim at 40% and do about 30 a day starting at 9:30 and it usually
finishes before I get off work. In 1 year I had 1 (one) tape failure and no drive 
failures. That's pretty reliable. I'm thinking the 3590 is as reliable.

Remember, you always get what you pay for.

Guillaume Gilbert
CGI Canada




Joni Moyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-08-15 09:29:48

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   IBM vs. Storage Tek

Hello,

I know I've asked about this before, but now I have more information so I
hope someone out there has done this.  Here is my environment for TSM.
Right now it is on the mainframe and we are using 3590 Magstars.  We have a
production and a test TSM server and each has about 13 drives and a total
of 5,500 tapes used for onsite and offsite tape pools between the 2
systems.  Two scenarios are being considered (either way TSM is being moved
onto our SAN environment with approximately 20 SAN backup clients and 250
LAN backup clients and will be on SUN servers instead of the mainframe)
Here is what I estimated I would need for tapes:

              3590    9840    9940A   LTO
              10 GB   20 GB   60 GB   100 GB
Production
  Onsite      1375    689     231     140
  Offsite     1600    800     268     161
  Total       2975    1489    499     301

Test
  Onsite      963     483     163     101
  Offsite     1324    664     223     135
  Total       2287    1147    386     236

Grand Total   5262    2636    885     537

1. IBM's solution is to give us a 3584 library with 3 frames and use LTO
tape drives.  This only holds 880 tapes and from my calculations I will
need about 600 tapes plus enough tapes for a scratch pool.  My concern is
that LTO cannot handle what our environment needs.  LTO holds 100 GB
(native), but when a tape is damaged or needs to be reclaimed, either
process would take quite some time, in my estimation.
Also, I was told that LTO is good for full volume backups and restores, but
that it has a decrease in performance when doing file restores, archives
and starting and stopping of sessions, which is a majority of what our
company does with TSM.  Has anyone gone from a 3590 tape to LTO?  Isn't
this going backwards in performance and reliability?  Also, with
collocation, isn't a lot of tape space wasted because you can only put one
server  per volume?

2. STK 9840B midpoint load(20 GB) or 9940A(60 GB) in our Powderhorn silo
that would be directly attached to the SAN.  From what I gather, these
tapes are very robust like the 3590's, but the cost for this solution is
double IBM's LTO.  We would also need Gresham licenses for all of the SAN
backed up clients(20).

Does anyone know of any sites/contacts that could tell me the
advantages/disadvantages of either solution?  Any opinions would be greatly
appreciated.
Thanks


Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338






Réf. : Re: ANR9999 message Lock acquisition - TSM 4.2.1.15 server

2002-08-15 Thread Guillaume Gilbert

I've been running 4.2.2.6 for a few weeks on z/OS and I haven't seen any problems. The DB
is a little under 10 GB. On AIX I have 3 servers running 4.2.2.8 without these
problems. DBs are 500 MB, 5 GB and 22 GB. The 22 GB one (79% used) took 53 minutes this
morning with the following report:

 ANR0812I Inventory file expiration process 246 completed:
  examined 3847974 objects, deleting 81979 backup objects,
  4180 archive objects, 0 DB backup volumes, and 0 recovery
  plan files. 0 errors were encountered.

The longest it has run since the upgrade is 90 minutes. The server is an F80 with 4
processors and 1.5 GB of RAM. Another small TSM server runs on it too (700 MB DB). The DB
sits on a Hitachi 7700E subsystem with 15 GB disks (10K rpm) in RAID5, dual fiber
through an Inrange director. I have 4 DB volumes of 6.8 GB, all on the same array on the
subsystem. We haven't had a performance problem (on the DB) since we got this config a
little over a year ago.

I had installed 4.2.1.15 on z/OS but ran into a problem with the q assoc command, which
would abend the server. 4.2.2.6 looks pretty stable to me.

Guillaume Gilbert
CGI Canada




"Jolliff, Dale" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-08-15 08:26:37

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Re: ANR9999 message Lock acquisition - TSM 4.2.1.15 server

4.2.2.x servers on AIX are exhibiting the same problems, along with numerous
others.
Oddly enough, our 4.2.1.15 Solaris servers seem to be operating relatively
smoothly.
Note that I said "relatively", and they all have very small DBs.

We have an open crit-sit with IBM/Tivoli on an AIX 4.2.2.x server -
We eventually suffered enough DB corruption that we were forced to rebuild
the server and we are in the process of recovering what archives we can from
the old server and importing it into the new server.

This isn't to say that the lock issues caused the corruption, I believe the
corruption was there when we upgraded to 4.2 - the issues that came along
with 4.2 exacerbated the long list of problems we already had.

All in all, if I had it to do over again, I would definitely wait on moving
to 4.2.2.x in production.




-Original Message-
From: Gerhard Rentschler [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 6:38 AM
To: [EMAIL PROTECTED]
Subject: Re: ANR9999 message Lock acquisition - TSM 4.2.1.15 server


Hello,
this is obviously a known problem. See the discussion a few days earlier
with subject " Expiration problem with TSM 5.1.1.1 on AIX 4.3.3".
I opened PMR 88563,070,724 on July 31st. First I was advised to upgrade to
5.1.1.2. This didn't change anything. Then I had to supply diagnostic
information. According to the web interface to Tivoli problem management the
problem was closed on August 6th without any further information. I haven't
got any new information since  this day.
Best regards
Gerhard

---
Gerhard Rentschler         email: [EMAIL PROTECTED]
Regional Computing Center tel.   ++49/711/685 5806
University of Stuttgart   fax:   ++49/711/682357
Allmandring 30a
D 70550
Stuttgart
Germany



> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> MC Matt Cooper (2838)
> Sent: Thursday, August 15, 2002 1:22 PM
> To: [EMAIL PROTECTED]
> Subject: Re: ANR9999 message Lock acquisition - TSM 4.2.1.15 server
>
>
> Uh oh, I just went from 4.1.5 to 5.1.1 on Monday, z/OS 1.1. This morning
> my EXPIRATION process is hung and a backup from local tape to
> the copy tape is hung, and it all started with very similar messages..
> Matt
> ANR0538I A resource waiter has been aborted.
>
> ANR0538I A resource waiter has been aborted.
>
> ANR1224E BACKUP STGPOOL: Process 23 terminated - lock conflict.
>
> ANR0538I A resource waiter has been aborted.
>
> ANR0538I A resource waiter has been aborted.
>
> ANR9999D IMFS(3566): ThreadId<656> Lock conflict error in obtaining sLock for
> ANR9999D node 499, filespace 35
>
> ANR0986I Process 23 for BACKUP STORAGE POOL running in the BACKGROUND
> processed
> ANR0986I 3603 items for a total of 970,174,464 bytes with a
> completion state
> of
> ANR0986I FAILURE at 06:06:44.
>
> ANR9999D IMFS(3566): ThreadId<662> Lock conflict error in obtaining sLock for
> ANR9999D node 499, filespace 35
>
>
> -Original Message-
> From: Nilsson Niklas [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 15, 2002 2:25 AM
> To: [EMAIL PROTECTED]
> Subject: SV: ANR9999 message Lock acquisition - TSM 4.2.1.15 server
>
> Good morning...
>
> I'm testing 4.2.2.9 (a test fix) on OS/390, but I still get these locks...

Réf. : TSM 4.2.2.2 for OS/390

2002-08-16 Thread Guillaume Gilbert

Yes there's an APAR with that very problem. Upgrade to 4.2.2.6 and it should be fixed.

Guillaume Gilbert
CGI Canada




Sascha Braeuning <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 
2002-08-16 08:00:05

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   TSM 4.2.2.2 for OS/390

Hello,

I've got a problem with TSM 4.2.2.2 for MVS. If I start a query with the
command
'query association' the server (application) crashes down. Are there any
experiences with that problem?

MfG
Sascha Bräuning


Sparkassen Informatik, Fellbach

OrgEinheit: 6322
Wilhelm-Pfitzer Str. 1
70736 Fellbach

Telefon:   (0711) 5722-2144
Telefax:   (0711) 5722-1630

Mailadr.:  [EMAIL PROTECTED]






Réf. : Move from 3590 to LTO or 9840/9940

2002-09-04 Thread Guillaume Gilbert
...started working with TSM, I thought we should put everything on the mainframe. I've changed my mind since. With the
advent of the SAN and Fibre Channel drives, UNIX is better positioned for backups.

14. Do you have any LAN-free backups on a SAN?

We back up 6 Oracle servers with 2 or 3 distinct DBs each via LAN-free. They go to LTO
tape drives, though. The throughput is pretty good, maxing out the drives.

Joni Moyer
Associate Systems Programmer
[EMAIL PROTECTED]
(717)975-8338


Guillaume Gilbert
CGI Canada



Réf. : Re: 3494 Utilization

2002-09-09 Thread Guillaume Gilbert

Hi Dale

I run this query to see how many mounts and bytes for every drive :

select distinct(drive_name) as "Drive",count(*) as "# of 
mounts",sum(bytes/1024/1024),sum(end_time-start_time) as "Total 
Time",avg(end_time-start_time) as "Avg Time" -
from summary -
where activity='TAPE MOUNT' and -
 start_time>=timestamp(current date - 2 day,'00:00:00') and -
 start_time<=timestamp(current date - 2 day,'23:59:59') -
group by drive_name

Since it comes from the summary table, I don't know how accurate it is, as there have
been a lot of problems with that table recently. I still have nodes which report 0 bytes
transferred, and I am at server 4.2.2.10.

Guillaume Gilbert
CGI Canada





"Jolliff, Dale" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-09 08:44:06

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Re: 3494 Utilization

In the long run, we are attempting to quantify exactly how "busy" our
ATLs/drives are over time for a number of reasons -- capacity planning,
adjusting schedules to better utilize resources, and possibly even justify
the purchase of new tape drives.

At this point I have been asked to simply come up with a minutes or hours
per 24 hour period any particular drive is in use.

A "query mount" every minute might work, but it just isn't a good solution
for two reasons -- for clients writing directly to tape, the mounted tape
won't show up in "query mount", and most of these servers already have an
extensive number of scripts accessing them periodically for various
monitoring and reporting functions - I hesitate to add any more to them.

My last resort is going to be to extract the activity log once every 24
hours and examine the logs and match the mount/dismounts by drive and
attempt to calculate usage that way if there isn't something better.  With
the difficulty in matching mounts to dismounts, I'm not entirely convinced
it's worth the trouble.
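The mount/dismount matching described above is mostly bookkeeping. As a sketch only: the Python below pairs mount and dismount events per drive and sums in-use time. The (timestamp, drive, kind) tuple format is my own assumption; extracting those tuples from the actual activity-log messages is not shown:

```python
from collections import defaultdict
from datetime import datetime

def drive_busy_minutes(events):
    """Pair mount and dismount events per drive and sum the in-use time.

    events: iterable of (timestamp, drive, kind) tuples, where timestamp
    is 'YYYY-MM-DD HH:MM:SS' and kind is 'MOUNT' or 'DISMOUNT'.
    Dismounts with no matching mount (tape already mounted when the log
    window began) are ignored, as are mounts still open at the end.
    """
    open_mounts = {}
    busy = defaultdict(float)
    # ISO-style timestamps sort correctly as strings, so sorting the
    # tuples puts events in chronological order.
    for stamp, drive, kind in sorted(events):
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        if kind == "MOUNT":
            open_mounts[drive] = t
        elif kind == "DISMOUNT" and drive in open_mounts:
            busy[drive] += (t - open_mounts.pop(drive)).total_seconds() / 60.0
    return dict(busy)
```

The open-mount dictionary also makes the edge cases explicit: a mount without a dismount in the 24-hour window simply never contributes, which slightly undercounts busy time at the window boundaries.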


-Original Message-
From: Mr. Lindsay Morris [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 06, 2002 4:55 PM
To: [EMAIL PROTECTED]
Subject: Re: 3494 Utilization


If you use the atape device driver, you (supposedly) can turn on logging
within it.  Then every dismount writes a record of how many bytes were
read/written during that mount.
Never tried it ... if you can get it working, let me know how, please! We'd
love to be able to do that.

Right now we CAN show you library-as-a-whole data rates, just by layering
all the tape-drive-writing tasks (migration, backup stgpool, backup DB, etc)
one atop the other minute by minute.  Maybe that's enough - why do you need
drive-by-drive data rates?

-
Mr. Lindsay Morris
CEO, Servergraph
www.servergraph.com
859-253-8000 ofc
425-988-8478 fax


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> Jolliff, Dale
> Sent: Friday, September 06, 2002 2:16 PM
> To: [EMAIL PROTECTED]
> Subject: Re: 3494 Utilization
>
>
> Paul said that Servergraph has this functionality - According to our
> hardware guys, the 3494 library has some rudimentary mount statistics
> available.
>
> I'm going to be looking into both of those options.
>
> Surely someone has already invented this wheel when trying to justify more
> tape drives - other than pointing to the smoke coming from the drives and
> suggesting that they are slightly overused
>
>
>
>
>
> -Original Message-
> From: Cook, Dwight E [mailto:[EMAIL PROTECTED]]
> Sent: Friday, September 06, 2002 6:45 AM
> To: [EMAIL PROTECTED]
> Subject: Re: 3494 Utilization
>
>
> Good question, never actually thought about it...
> I would think that the sum of the difference between mount &
> dismount times
> for each drive...
> OH THANKS. now I won't be able to sleep until I code some select
> statement to do this :-(
> if I figure it out, I'll pass it along
>
> Dwight
>
>
>
> -Original Message-
> From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, September 05, 2002 10:04 AM
> To: [EMAIL PROTECTED]
> Subject: 3494 Utilization
>
>
> I saw this topic out on ADSM, and I could not locate any type of
> functional
> resolution ...
>
> What is everyone using to calculate the "wall time" of your tape drive
> utilization?
>






Réf. : Re: !Urgent - SDLT/TSM slow after upgrade from v. 4.2.1.7 to v. 4 .2.2 .12

2002-09-18 Thread Guillaume Gilbert

On what OS are you seeing these problems? I'm on AIX 4.3.3 and installed 4.2.2.12 last
Friday. Everything seems to be going OK. My DB backups take 40 minutes, which is the
same as before. I upgraded to 4.2.2.10 from 4.1.3.0 over a month ago, and my summary
table doesn't go back that far, so I can't see how the performance was with 4.1.3.

Guillaume Gilbert
CGI Canada




"Seay, Paul" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-18 10:38:35

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Re: !Urgent - SDLT/TSM slow after upgrade from v. 4.2.1.7 to v. 4.2.2.12

We went from 4.2.2.7 to 4.2.2.12, not as large a jump as yours, and we saw this
problem as well. Have you checked your DB cache hit ratio? We had to
increase the bufferpool size significantly; we needed to anyway. It appears
4.2.2.12 is much more sensitive to low DB cache hits. As for database
backups, I think there is a problem there; ours are taking much longer than
before. I am still trying to isolate the problem.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Halvorsen Geirr Gulbrand [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 18, 2002 3:59 AM
To: [EMAIL PROTECTED]
Subject: !Urgent - SDLT/TSM slow after upgrade from v. 4.2.1.7 to v. 4.2.2.12


Hi Everyone!
For one of our customers I have recently upgraded their TSM server (and TSM
device-driver) from v. 4.2.1.7 to 4.2.2.12, because of added support for
multiple modules of library Compaq MSL5026DLX. Configuration is now: TSM
Server (+dev.driver) v. 4.2.2.12 Library Compaq MSL5025DLX (2 modules) with
3 SDLT 220 drives.

After the upgrade, many of the processes take a very long time to complete.
Expiration used to take 10-15 minutes and now takes 1.5 hours. Database backup
used to take about 20 minutes and now takes 1-2 hours, migration is slow,
space reclamation is very slow, and when they try to stop space reclamation,
it takes hours before it stops.

I'm sorry I don't have any numbers (MB/s) for these processes. I might have
more data on this by tomorrow. But I would like to know if anyone else have
seen similar problems after an upgrade.

Any help is highly appreciated!

Rgds.
Geirr G. Halvorsen






Réf. : Re: Backup reporting

2002-09-18 Thread Guillaume Gilbert

I'm at level 4.2.2.12 of the server on AIX and the summary table problems are still 
there. I am currently writing a PMR on this.

Guillaume Gilbert
CGI Canada




Bill Boyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-18 15:10:13

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Re: Backup reporting

There was a problem where the bytes transferred in the summary table showed as
zero. It has been fixed in later patch levels. I'm not sure what the APAR
number is or the level where it was fixed.

If you need this data, turn on the accounting records. There is an
additional field, "Amount of backup files, in kilobytes, sent by the client
to the server", in addition to the "Amount of data, in kilobytes,
communicated between the client node and the server during the session". The
bytes communicated is the total bytes transferred and includes any
re-transmissions/retries. I believe the "Amount of backup files, in
kilobytes, sent by the client to the server" is just what was sent AND
stored in TSM.

I haven't fully looked into this, but if I'm trying to get a total for the
amount of data backed up I would be using this field as opposed to the bytes
transmitted field. Something for me to add to my Honey-Do list..:-)

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mark Bertrand
Sent: Wednesday, September 18, 2002 2:39 PM
To: [EMAIL PROTECTED]
Subject: Re: Backup reporting


Paul and all,

When I attempt to use any of the following select statements my "Total MB"
returned is always 0. I get my list of nodes but there is never any numbers
for size.

Since this is my first attempt at select statements, I am sure I'm doing
something wrong. I have tried from the command line and through macros.

I am trying this on a W2K TSM v4.2.2 server.

Thanks,
Mark B.

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 16, 2002 11:43 PM
To: [EMAIL PROTECTED]
Subject: Re: Backup reporting


See if these will help:

/* SQL Script:   */
/*   */
/* backup_volume_last_24hours.sql*/
/* Date   Description*/
/* 2002-06-10 PDS Created*/

/* Create Report of total MBs per each session */

select entity as "Node Name  ", cast(bytes/1024/1024 as decimal(10,3))
as "Total MB  ",  cast(substr(cast(end_time-start_time as char(17)),3,8) as
char(8)) as "Elapsed  ", substr(cast(start_time as  char(26)),1,19) as
"Date/Time  ", case when cast((end_time-start_time) seconds as
decimal) >0 then  cast(bytes/cast((end_time-start_time) seconds as
decimal)/1024/1024 as decimal(6,3)) else cast(0 as decimal(6,3)) end as  "
MB/Sec" from summary where start_time>=current_timestamp - 1 day and
activity='BACKUP'

/* Create Report of total MBs and length of backup for each node */

select entity as "Node Name  ", cast(sum(bytes/1024/1024) as
decimal(10,3)) as "Total MB",  substr(cast(min(start_time) as
char(26)),1,19) as "Date/Time   ",
cast(substr(cast(max(end_time)-min(start_time)  as char(20)),3,8) as
char(8)) as "Length   " from summary where start_time>=current_timestamp -
22 hours and  activity='BACKUP' group by entity

/* Create Report of total backed up*/

select sum(cast(bytes/1024/1024/1024 as decimal(6,3))) "Total GB Backup"
from summary where start_time>=current_timestamp  - 1 day and
activity='BACKUP'






Réf. : Select command to q client name and version

2002-09-19 Thread Guillaume Gilbert

Hi Karla

Here is what I use :

select platform_name,node_name,-
cast(client_version as char(1))||'.'||cast(client_release as char(1))||-
'.'||cast(client_level as char(1))||'.'||cast(client_sublevel as char(2)) -
as "Version" -
from nodes order by platform_name,node_name

Guillaume Gilbert
CGI Canada




Karla Ross <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-19 09:08:08

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Select command to q client name and version

Does anyone have a select command that will show me the clients and what
version of TSM they are running?  My TSM servers are running TSM v4.1, 4.2,
and 5.1.

Karla Ross






Réf. : RE: Réf. : Re: Backup reporting

2002-09-19 Thread Guillaume Gilbert

Unfortunately I do not speak Perl, but I guess I'll have to learn it if the summary
table is FUBAR.

Guillaume




Bill Boyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-18 15:45:37

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   RE: Réf. : Re: Backup reporting

I believe that this only affects the summary table data, not the accounting
records. The accounting records are correct in the byte counts. I know you
can't get at it with a simple SQL select, but it's there. Can you speak Perl?

Bill
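For what it's worth, the pass over the accounting log is short in Perl or Python. The Python sketch below sums one kilobyte field per node from comma-delimited accounting records. The field positions used are placeholders for illustration, not the documented accounting-record layout; check the Administrator's Guide before relying on them:

```python
import csv

# ASSUMED field positions, for illustration only -- the real TSM
# accounting-record layout is documented in the Administrator's Guide.
NODE_FIELD = 5        # hypothetical: node name
BACKUP_KB_FIELD = 14  # hypothetical: backup KB sent by the client

def backup_kb_by_node(lines, node_field=NODE_FIELD, kb_field=BACKUP_KB_FIELD):
    """Sum one kilobyte field per node from comma-delimited accounting records."""
    totals = {}
    for row in csv.reader(lines):
        if len(row) <= max(node_field, kb_field):
            continue  # skip short or malformed records
        node = row[node_field].strip()
        try:
            kb = int(row[kb_field])
        except ValueError:
            continue  # skip records with a non-numeric value in the field
        totals[node] = totals.get(node, 0) + kb
    return totals
```

Passing the field indexes as parameters means the same loop works once the correct positions are looked up, without editing the function body.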

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Guillaume Gilbert
Sent: Wednesday, September 18, 2002 3:27 PM
To: [EMAIL PROTECTED]
Subject: Réf. : Re: Backup reporting


I'm at level 4.2.2.12 of the server on AIX and the summary table problems
are still there. I am currently writing a PMR on this.

Guillaume Gilbert
CGI Canada




Bill Boyer <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-18 15:10:13

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Re: Backup reporting

There was a problem where the bytes transferred in the summary table showed as
zero. It has been fixed in later patch levels. I'm not sure what the APAR
number is or the level where it was fixed.

If you need this data, turn on the accounting records. There is an
additional field, "Amount of backup files, in kilobytes, sent by the client
to the server", in addition to the "Amount of data, in kilobytes,
communicated between the client node and the server during the session". The
bytes communicated is the total bytes transferred and includes any
re-transmissions/retries. I believe the "Amount of backup files, in
kilobytes, sent by the client to the server" is just what was sent AND
stored in TSM.

I haven't fully looked into this, but if I'm trying to get a total for the
amount of data backed up I would be using this field as opposed to the bytes
transmitted field. Something for me to add to my Honey-Do list..:-)

Bill Boyer
DSS, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Mark Bertrand
Sent: Wednesday, September 18, 2002 2:39 PM
To: [EMAIL PROTECTED]
Subject: Re: Backup reporting


Paul and all,

When I attempt to use any of the following select statements my "Total MB"
returned is always 0. I get my list of nodes but there is never any numbers
for size.

Since this is my first attempt at select statements, I am sure I'm doing
something wrong. I have tried from the command line and through macros.

I am trying this on a W2K TSM v4.2.2 server.

Thanks,
Mark B.

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 16, 2002 11:43 PM
To: [EMAIL PROTECTED]
Subject: Re: Backup reporting


See if these will help:

/* SQL Script:   */
/*   */
/* backup_volume_last_24hours.sql*/
/* Date   Description*/
/* 2002-06-10 PDS Created*/

/* Create Report of total MBs per each session */

select entity as "Node Name  ", cast(bytes/1024/1024 as decimal(10,3))
as "Total MB  ",  cast(substr(cast(end_time-start_time as char(17)),3,8) as
char(8)) as "Elapsed  ", substr(cast(start_time as  char(26)),1,19) as
"Date/Time  ", case when cast((end_time-start_time) seconds as
decimal) >0 then  cast(bytes/cast((end_time-start_time) seconds as
decimal)/1024/1024 as decimal(6,3)) else cast(0 as decimal(6,3)) end as  "
MB/Sec" from summary where start_time>=current_timestamp - 1 day and
activity='BACKUP'

/* Create Report of total MBs and length of backup for each node */

select entity as "Node Name  ", cast(sum(bytes/1024/1024) as
decimal(10,3)) as "Total MB",  substr(cast(min(start_time) as
char(26)),1,19) as "Date/Time   ",
cast(substr(cast(max(end_time)-min(start_time)  as char(20)),3,8) as
char(8)) as "Length   " from summary where start_time>=current_timestamp -
22 hours and  activity='BACKUP' group by entity

/* Create Report of total backed up*/

select sum(cast(bytes/1024/1024/1024 as decimal(6,3))) "Total GB Backup"
from summary where start_time>=current_timestamp  - 1 day and
activity='BACKUP'









Réf. : Re: Backup reporting: SUMMARY TABLE ISSUE IS FOUND

2002-10-15 Thread Guillaume Gilbert

Just got word from Tivoli that APAR IC34462: BYTES FIELD IN SUMMARY TABLE SHOWS VALUE 
0 IF THE BACKUP SESSION IS LOST AND WHEN REOPENED AGAIN is fixed in maintenance
level 4.2.3.0 and is now available.

Guillaume Gilbert
CGI Canada




"Williams, Tim P {PBSG}" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-09-20 10:36:12

Please reply to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:  [EMAIL PROTECTED]
cc:
Subject:  Re: Backup reporting: SUMMARY TABLE ISSUE IS FOUND

The fix for this is at these levels (per apar doc):
 Apply patch 4.2.2.5, 5.1.1.1 or higher, available through
  Tivoli Web Page to resolve this problem, or fixing
  PTF when available.


Here's a blurb from my ETR/PMR (problem ticket):
The APAR you are referring to is IC34462.  This issue is specific to
the summary table not being updated when client sessions are severed and
the session has to be re-established.  This APAR is currently open, but
I have added you to the interested parties list for this APAR.  There
was also another APAR (IC33455) which addressed an issue with the
summary table simply not being updated properly during client backups.


Here's the actual 2 apars...the doc on the problem/issue:
FYI Thanks Tim


 APAR Identifier .. IC34462   Last Changed..02/09/19
  BYTES FIELD IN SUMMARY TABLE SHOWS VALUE 0 IF THE BACKUP SESSION
  IS LOST AND THEN REOPENED AGAIN (I.E. MSG ANS1809E)

  Symptom .. IN INCORROUT Status ... OPEN
  Severity ... 2  Date Closed .
  Component .. 5698ISMSV  Duplicate of 
  Reported Release . 51A  Fixed Release 
  Component Name TSM SERVER 510   Special Notice
  Current Target Date ..  Flags
  SCP ... AIXRSC
  Platform  AIX

  Status Detail: Not Available

  PE PTF List:

  PTF List:


  Parent APAR:
  Child APAR list:


  ERROR DESCRIPTION:
  When a backup session is lost and then reopened again
  successfully (e.g. MSG ANS1809E can be seen in the
  dsmsched.log), not all values in the SUMMARY TABLE of the TSM
  server will show the correct backup statistics. The BYTES field
  shows in that case a value of 0, while e.g the fields EXAMINED
  and AFFECTED show correct values.
  .
  Example:
  
  1.) select * from summary where activity='BACKUP':
 START_TIME: 2002-08-28 19:32:06.00
   END_TIME: 2002-08-28 19:54:17.00
   ACTIVITY: BACKUP
 NUMBER: 366
 ENTITY: LWLSAP2
   COMMMETH: Tcp/Ip
ADDRESS: 172.20.13.42:33133
  SCHEDULE_NAME: MO-FR-1800
   EXAMINED: 71137
   AFFECTED: 251
 FAILED: 0
  BYTES: 0  <= |
   IDLE: 0
 MEDIAW: 0
  PROCESSES: 1
 SUCCESSFUL: YES
VOLUME_NAME:
 DRIVE_NAME:
   LIBRARY_NAME:
   LAST_USE:
  .
  2.) dsmsched.log:
  08/28/02   18:01:25 Scheduler has been started by Dsmcad.
  08/28/02   18:01:25 Querying server for next scheduled event.
  08/28/02   18:01:25 Node Name: LWLSAP2
  ...
  08/28/02   18:59:19 ANS1898I * Processed    66,000 files
  *
  08/28/02   19:31:50 Normal File--> 2,048,008,192
  /oracle/09p2/daten6/TBS_APOLLO4_IMG_TBS_2.ora  Sent
  08/28/02   19:31:50 ANS1809E Session is lost; initializing
  session reopen procedure.
  08/28/02   19:32:06 ... successful
  08/28/02   19:32:09 ANS1898I * Processed    66,500 files
  *
  ...
  08/28/02   19:54:16 Total number of objects inspected:   71,137
  08/28/02   19:54:16 Total number of objects backed up:  251
  08/28/02   19:54:16 Total number of objects updated:  0
  08/28/02   19:54:16 Total number of objects rebound:  0
  08/28/02   19:54:16 Total number of objects deleted:  0
  08/28/02   19:54:16 Total number of objects expired: 63
  08/28/02   19:54:16 Total number of objects failed:   0
  08/28/02   19:54:16 Total number of bytes transferred:  6.46 GB


  08/28/02   19:54:16 Data transfer time:  1,868.07 sec
  08/28/02   19:54:16 Network data transfer rate:  3,629.75 KB/sec
  08/28/02   19:54:16 Aggregate data transfer rate:1,001.56 KB/sec
  08/28/02   19:54:16 Objects compressed by:   19%
  08/28/02   19:54:16 Elapsed processing time:   01:52:50
  
  It seems that when a session is lost and then reopened again,
  the new session descriptor no longer holds the value for the
  bytes transferred so far.
  .
  This problem was seen on a TSM AIX and NT 5.1.1.4 server, but
  maybe all TSM server levels that include the fix for APAR
  IC33455 will be affected.
  .
  Recreation:
  Unplug the network cable during a backup session => session lost
  Plug the network cable in again => backup session restarts
  Then check the SUMMARY TABLE for
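Sessions hit by the symptom described in this APAR can be spotted from the TSM administrative command line. A sketch, using only columns that appear in the APAR's own example output; the filter (BYTES of 0 with a non-zero AFFECTED count) is an assumption about how the defect shows up, not part of the APAR text:

```
select entity, start_time, examined, affected, bytes -
  from summary -
 where activity='BACKUP' and bytes=0 and affected>0
```

Any row with objects affected but zero bytes is a candidate for a backup session that was severed and reopened.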

Re: Co-location

2002-10-22 Thread Guillaume Gilbert
Since we use 9840 tapes, we didn't want clients with 1-2 GB of data to use a whole
tape. So we did just what you described. We have over 50% of our clients in this
situation. We put the "limit" at 10-12 GB (a 9840 holds 20 GB). Sure, you have to carve
up your disk pool, but the small clients don't require a lot of disk. Their pool
is only 10 GB and it can hold a night's worth of backups easily.

Guillaume Gilbert
CGI Canada
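The split described above can be sketched as TSM administrative commands. The pool and device-class names here are hypothetical; the idea is a collocated tape pool for the large clients and a small dedicated disk pool draining into a shared, non-collocated tape pool for the small ones:

```
/* large clients: main disk pool migrating to a collocated tape pool */
define stgpool bigtape 9840class maxscratch=200 collocate=yes
define stgpool bigdisk disk nextstgpool=bigtape

/* small clients (under ~10-12 GB): 10 GB disk pool, non-collocated tape */
define stgpool smalltape 9840class maxscratch=50 collocate=no
define stgpool smalldisk disk nextstgpool=smalltape
```

Clients are then steered into one chain or the other through the copy-group destination in their management class.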




Matt Simpson <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-10-22 09:46:13

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Co-location

At 3:53 PM -0400 10/21/02, Thach, Kevin said:
>The person that installed our environment basically set up 6 Policy Domains:
>Colodom, Exchange, Lanfree, MSSQL, Nocodom, and Oracle.
>
>99% of the clients are in the Nocodom (non-collocated) domain, which has one
>policy set, and one management class which has one backup copy group with
>retention policies set to NOLIMIT, 3, 60, 60.

This is away from the topic of Kevin's question, but his background
info led to a question that we're looking at right now.

We currently have one big disk pool for all our backups, which
migrates to one tape pool, which we copy to another tape pool for
offsite.

We turned on co-location on the onsite tape pool a couple of weeks
ago, because we just started using the SQL TDP, and the doc
recommended colocation.  We turned it back off this morning, because
we were running out of tapes and had a lot of them that were only 5%
full.

We would like to do what Kevin says he's doing: specify colocation
for a small number of our clients and leave it off for a bunch of
them.  But, if I understand correctly, colocation isn't specified
directly in the management  class.  It's specified on the tape
storage pool definition. So specifying colocation for some  clients
but not all would require multiple tape storage pools, which wouldn't
really be a problem.  But it looks like that would also require
multiple disk storage pools, because, as far as I can tell, the only
way to get a client into a different tape pool is to have it in a
different disk pool.

We'd really like to avoid carving up our disk space into more smaller
pools.  But, as far as I can tell, that's the only way to use
colocation selectively.  Am I missing something, or is that the way
it works?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
<mailto:msimpson@;uky.edu>
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.







Re: Reclamation Setting Survey

2002-10-22 Thread Guillaume Gilbert
On our servers using 9840 tapes, we set it to 40 since the tapes are very fast and we
can do a lot in one day. On LTO and DLT it's usually 50 since reclaiming tapes takes
a very long time.

Guillaume Gilbert
CGI Canada
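The thresholds quoted above live on the storage pool itself and can be changed on the fly — a sketch using standard admin commands, with hypothetical pool names:

```
/* aggressive reclamation on fast 9840 pools, gentler on LTO/DLT */
update stgpool 9840pool reclaim=40
update stgpool ltopool  reclaim=50
```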




J M <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-10-22 11:09:12

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Reclamation Setting Survey

Just out of curiosity- what are you using for reclamation settings for
primary tape pool data?

Currently we have over 100 WinNT platforms (all in one policy domain) backing
up data (1+ TB incremental) to large primary disk pool, which migrates to
primary tape, tape copy, etc... The data is a mix of filesystem incrementals
and TDP backup objects (database/exchange). Currently we have our
reclamation threshold set to 60, but we're curious to know what other
similar environments are successfully using?








Re: version 5.1.5 TSM and higher

2002-12-06 Thread Guillaume Gilbert
On a SUN server I migrated from 4.2.3.1 to 5.1.5.0 to 5.1.5.3. I figure any version 
that comes on a CD is a base installation

Guillaume Gilbert
CGI Canada




Joshua Bassi <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-12-06 09:13:19

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: version 5.1.5 TSM and higher

You must first install 5.1.0.0 and then upgrade to 5.1.5.x.


--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant -ADSM/TSM
eServer Systems Expert -pSeries HACMP

AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Sylvia Nergard
Sent: Friday, December 06, 2002 5:48 AM
To: [EMAIL PROTECTED]
Subject: version 5.1.5 TSM and higher


Is it right that I have 2 options to install TSM 5.1.5?

1. Install 5.1.0 and patch up to 5.1.5.1
or
2. Install 5.1.5 as a base installation

I have seen some discussions on this that confuse me a bit. Can anyone
confirm this?



Regards
Sylvia Nergård







ANR9999D message

2002-12-09 Thread Guillaume Gilbert
Hi all

On saturday from out of the blue I started getting the following message :

ANR9999D smnode.c(18972): ThreadId<25> Session exiting has no affinityId cluster

Checked the APARs and found nothing. Checked the archives and found 2 messages
regarding this posted by Rainer Tammer and Brenda Collins, but no solution was given.
If you got an answer from Tivoli, please share.

Thanks

Guillaume Gilbert
CGI Canada



Re: ANR9999D message

2002-12-10 Thread Guillaume Gilbert
Hi Arnaud

I opened a PMR yesterday. I hope they'll fix it soon, my actlog is getting flooded. I 
really miss version 4.1.3. This will be my 4th patch installation since this
summer...

Guillaume




PAC Brion Arnaud <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-12-10 03:00:12

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: ANR9999D message

Hi Guillaume,

We're facing this problem too, since we upgraded from TSM 4.2.1.15 to
TSM 4.2.3.1. Already called IBM to solve this problem, and their
response was "problem known, wait for next PTF". This is already a
three-week-old story, and we're still waiting for the patch!!
Curiously, recycling the server gives us temporary peace, but no longer
than one day...
Try to open a PMR with IBM to put some pressure on them!
Cheers.

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: Guillaume Gilbert [mailto:[EMAIL PROTECTED]]
Sent: Monday, 09 December, 2002 20:29
To: [EMAIL PROTECTED]
Subject: ANR9999D message


Hi all

On saturday from out of the blue I started getting the following message
:

ANR9999D smnode.c(18972): ThreadId<25> Session exiting has no affinityId
cluster

Checked the APARs and found nothing. Checked the archives and found 2
messages regarding this posted by Rainer Tammer and Brenda Collins, but
no solution was given. If you got an answer from Tivoli, please share.

Thanks

Guillaume Gilbert
CGI Canada










Re: ANR9999D message

2002-12-10 Thread Guillaume Gilbert
Hi all

Just got an update on this problem. It appears that any client that is not at level
4.2.3.1 can cause this... That's just great. And on top of that, the summary table
still isn't working right. The 3 WinNT nodes I have at 4.2.3.1 don't even have entries
in the summary table anymore, and every other one has 0 in the bytes field. Things
just went from bad to worse...

Guillaume Gilbert
CGI Canada
------ Sent by Guillaume Gilbert/CCPEDQ/Desjardins on 2002-12-10
16:50 -------


Guillaume Gilbert
2002-12-10 09:07

To: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
cc:
Subject: Re: ANR9999D message  (Document link: Guillaume Gilbert)

Hi Arnaud

I opened a PMR yesterday. I hope they'll fix it soon, my actlog is getting flooded. I 
really miss version 4.1.3. This will be my 4th patch installation since this
summer...

Guillaume



PAC Brion Arnaud <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-12-10 03:00:12

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: ANR9999D message

Hi Guillaume,

We're facing this problem too, since we upgraded from TSM 4.2.1.15 to
TSM 4.2.3.1. Already called IBM to solve this problem, and their
response was "problem known, wait for next PTF". This is already a
three-week-old story, and we're still waiting for the patch!!
Curiously, recycling the server gives us temporary peace, but no longer
than one day...
Try to open a PMR with IBM to put some pressure on them!
Cheers.

Arnaud

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: Guillaume Gilbert [mailto:[EMAIL PROTECTED]]
Sent: Monday, 09 December, 2002 20:29
To: [EMAIL PROTECTED]
Subject: ANR9999D message


Hi all

On saturday from out of the blue I started getting the following message
:

ANR9999D smnode.c(18972): ThreadId<25> Session exiting has no affinityId
cluster

Checked the APARs and found nothing. Checked the archives and found 2
messages regarding this posted by Rainer Tammer and Brenda Collins, but
no solution was given. If you got an answer from Tivoli, please share.

Thanks

Guillaume Gilbert
CGI Canada












Re: Select Statement

2002-12-11 Thread Guillaume Gilbert
I've been having this problem for a while. Support told me to go to 4.2.3.1 on server
and client, and now these clients are missing from the summary table. This is on my
biggest server (25 GB database). On other small and new servers, things seem to be OK.
My PMR is still open (48498) and support is working on it.

Guillaume Gilbert
Conseiller - Gestion du stockage
CGI - Gestion intégré des technologies
TEL. : (514) 281-7000 ext 3642
Pager : (514) 957-2615
Email : [EMAIL PROTECTED]





Kolbeinn Josepsson <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-12-11 
08:24:56

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Select Statement

I have seen this problem since 5.1.0

At one site with server 5.1.5.2 and client 5.1.5.0, not only do some clients
report 0 MB, some clients are also completely missing from the summary
table.

At another site I have recently upgraded from 4.2.x to 5.1.5.4 and clients
5.1.5.0, and I'm seeing clients report 0 MB.

If this is a question of client patch level, I would like to know...





  Best regards,
  Kolbeinn Josepsson | Systems Engineer
  Tel: +354 569-7700 | Fax: +354 569-7799
  www.nyherji.is








  "Seay, Paul"
  Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
  11.12.2002 01:30
  Please respond to "ADSM: Dist Stor Manager"
  cc:
  Subject: Re: Select Statement






As it turns out the clients also have to be updated to correct a problem
where the summary table statistics are missing.  I talked to a TSM Level 2
person about this just last week.  I do not know what levels you have to be
at to get the problem corrected, but I thought 5.1.5 was good enough.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 10, 2002 5:16 PM
To: [EMAIL PROTECTED]
Subject: Select Statement


I got this somewhere and I know it used to work on my 4.x server. Now that
I'm on 5.1.5.2, I get a lot of nodes reporting back 0 megabytes even though they
obviously sent files.

Can anyone make it work, since my SQL skills are pretty much
non-existent?

/**/
/* Specify a date on the run line as follows  */
/* run q_backup 2001-09-30 */
/**/
select entity as node_name, date(start_time) as date, - cast(activity as
varchar(10)) as activity, time(start_time)as start, -
time(end_time) as end, cast(bytes/100 as decimal(6,0))as megabytes, -
cast(affected as decimal(7,0)) as files, successful from summary - where
date(start_time)='$1' and activity='BACKUP' order by megabytes desc

 NODE_NAME: NODEA
  DATE: 2002-12-09
  ACTIVITY: BACKUP
 START: 20:00:51
   END: 20:48:07
MEGABYTES: 0
 FILES: 100
SUCCESSFUL: YES
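For what it's worth, here is a variant of the macro above that divides by 1024 twice for megabytes, mirroring the arithmetic of the second query in this same message. A sketch, not the original macro:

```
select entity as node_name, date(start_time) as date, -
  cast(activity as varchar(10)) as activity, -
  time(start_time) as start, time(end_time) as end, -
  cast(bytes/1024/1024 as decimal(12,0)) as megabytes, -
  cast(affected as decimal(7,0)) as files, successful -
  from summary -
 where date(start_time)='$1' and activity='BACKUP' -
 order by megabytes desc
```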



This one does the same thing with some nodes. It also only reports about
half the nodes even though it looks like it's supposed to be going back a
full day.

NODEA   NT_DOM   WinNT   0.00   1
select summary.entity as "NODE NAME", nodes.domain_name as "DOMAIN",
nodes.platform_name as "PLATFORM", cast((cast(sum(summary.bytes) as float)
/ 1024 / 1024) as decimal(10,2)) as MBYTES, count(*) as "CONNECTIONS" from
summary, nodes where summary.entity=nodes.node_name and
summary.activity='BACKUP' and start_time > current_timestamp - 1 day group
by entity, domain_name, platform_name order by MBytes desc

A better one would also work.
Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:<mailto:[EMAIL PROTECTED]> [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154







TSM Presentation

2002-12-13 Thread Guillaume Gilbert
Hello TSMers

Anybody out there have a presentation explaning the progressive backup concept of TSM. 
I have to send one to a new client of ours.

Thanks you

Guillaume Gilbert
CGI Canada



Re: TSM Presentation

2002-12-13 Thread Guillaume Gilbert
Hi all

I was the one who asked and realized afterwards what I had done...  Thank you Thomas 
for putting it on the web. The presentation is very well done.

Guillaume Gilbert
CGI Canada




"Rupp Thomas (Illwerke)" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-12-13 
13:41:03

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: TSM Presentation

I agree, but ...
- it's impossible to know how many users will be interested in the document.
  If it's only the one who asked, then a download page would be overkill.
- unfortunately I don't have a site to post such documents. Therefore I will
  have to "abuse" some other site that offers some public space. That's why
  I want to use this space very carefully.

Kind regards from Austria
Thomas

-Original Message-
From: Peter Ford [mailto:[EMAIL PROTECTED]]
Sent: Friday, 13 December 2002 19:14
To: [EMAIL PROTECTED]
Subject: Re: TSM Presentation

Would it be possible to post this on a website somewhere?  That way those
who would like a copy could just download it.  I am sure it would be easier
to do that, than to send it to 100 people!

I know I would like to see this as well.

Thanks.
Peter

Peter Ford
System Engineer


Stentor, Inc.
 5000 Marina Blvd,
 Brisbane, CA 94005-1811
 Main Phone: 650-228-
 Fax: 650 228-5566
 http://www.stentor.com
 [EMAIL PROTECTED]


--
This e-mail has been checked for viruses.

Vorarlberger Illwerke AG
--







Re: FW: storage agent use

2002-12-13 Thread Guillaume Gilbert
A backup is considered LAN-free when the data does not pass through the LAN. If data
comes from internal SCSI disks and goes through the HBA straight to the tapes, then no
data was transmitted on the network. For server-free backups, you need data on
the SAN backed up to tapes on the SAN.

Guillaume Gilbert
CGI Canada




Joshua Bassi <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-12-13 16:04:41

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: FW: storage agent use

Then how is that considered LAN-free?  If the data is not on the SAN
then how is it gonna be backed up "LAN-free"? It's just not possible.


--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant -ADSM/TSM
eServer Systems Expert -pSeries HACMP

AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Robert L. Rippy
Sent: Friday, December 13, 2002 12:59 PM
To: [EMAIL PROTECTED]
Subject: Re: FW: storage agent use


This is incorrect.  With LAN-free, whether the data comes from the SAN or
not, it will back up LAN-free, i.e. over the fiber.

Thanks,
Robert Rippy.




-Original Message-
From: Joshua Bassi [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 13, 2002 3:35 PM
To: [EMAIL PROTECTED]
Subject: Re: storage agent use


If the data to be backed up is not on the SAN, it is my understanding
that the SAN agent will send it over the IP network (traditional
backup).


--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
Tivoli Certified Consultant -ADSM/TSM
eServer Systems Expert -pSeries HACMP

AIX, HACMP, Storage, TSM Consultant
Cell (831) 595-3962
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
David E Ehresman
Sent: Friday, December 13, 2002 8:32 AM
To: [EMAIL PROTECTED]
Subject: Re: storage agent use


The tape drives have to be available via SAN to the Storage Agent
machine.  The data to be backed up up does not.

David
>>> [EMAIL PROTECTED] 12/13/02 10:56AM >>>
Hi,

I am confused about Storage Agent use. To use it with SAN, do we have to
keep the data to be backuped on SAN storage or can it be local data ?

Regards,

Tuncel Mutlu







ANS4018E - File name too long

2001-11-08 Thread Guillaume Gilbert

Hi all

We've had this error on a WinNT SP5 box ever since we upgraded the client to 4.1.3. 
It's not the actual filename that is long but the whole directory structure, which has
10 levels and over 200 characters. The server is 4.1.3 on AIX 4.3.3. We get this error
both in the client schedlog and in the server activity log.

Is there a limitation on the number of characters for a directory or file?

I have checked ADSM.ORG and there was talk about this problem, but no actual
solution came out.

Thanks for the help.

Guillaume Gilbert
CGI - Montreal



Re: ANS4018E - File name too long

2001-11-12 Thread Guillaume Gilbert

Thanks Andrew. The option mentioned in the APAR seems to have done the
trick.

Guillaume Gilbert
CGI - Montreal
- Original Message -
From: "Andrew Raibeck" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, November 09, 2001 8:19 AM
Subject: Re: ANS4018E - File name too long


As I mentioned in my post from yesterday, this problem was addressed by
APAR IC27346. You can find additional information by reviewing the APAR
text, which I included in a prior post. See the www.adsm.org archives and
search on my last name and the APAR number:

   +raibeck +IC27346

and you will find the answer.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/IBM@IBMUS
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
"Good enough" is the enemy of excellence.




"BURDEN,Anthony" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
11/08/2001 17:49
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:Re: ANS4018E - File name too long



Good morning guys and girls

I think you will find that it cannot be fixed by TSM!

The problem is in NT and its limitations, not TSM. NT provides you with
long file names of 255 chars; we all know that, and we know that this
includes the path and filename combined. When TSM backs up the
directory/file it starts from the root of the directory path and will
report a long-filename error once it reaches 255 chars.

The reason NT servers can have these extra-long file names (more than
255 characters) is only when the file is created from a mapped point on a
server. That is, if you share a directory under NT and map to that share
point, the 255 chars start from that share point. But TSM does not start
counting from that point; it always starts from the root.

The only way I have ever gotten around this problem was to reduce the
directory structure used in our department. I also notify people if
their filenames are too long and get them to reduce them, and if I
cannot find the user I map into the directory and reduce the file name
myself. At present most of my errors occur with Internet favourites and I
ignore them!

That's my understanding of the issue with long file names... hope it
helps, and that I am still up to date with the real world.

ant


-Original Message-
From: Daniel Sparrman [mailto:[EMAIL PROTECTED]]
Sent: Friday, 9 November 2001 7:15 AM
To: [EMAIL PROTECTED]
Subject: Re: ANS4018E - File name too long

Hi
Yes, there is a limitation on how long the path can be. However, I think
a longer path length is supported in 4.2.1.
So, try using TSM client 4.2.1 to see if the problem is resolved.
Best Regards
Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Bergkällavägen 31D
192 79 SOLLENTUNA
Växel: 08 - 754 98 00
Mobil: 070 - 399 27 51

Guillaume Gilbert <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
2001-11-08 14:48 EST
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
bcc:
Subject: ANS4018E - File name too long


Hi all

We've had this error on a WinNT SP5 box ever since we upgraded the
client to 4.1.3. It's not the actual filename that is long but the whole
directory structure which has
10 levels and over 200 characters. The server is 4.1.3 on AIX 4.3.3. We
get this error both in the client schedlog and in the server activity
log.

Is there a limitation on the number of characters for a directory or
file?

I have checked ADSM.ORG and there was talk about this problem but no
actual solution came out.

Thanks for the help.

Guillaume Gilbert
CGI - Montreal

Notice:
The information contained in this e-mail message and any attached files
may
be confidential information, and may also be the subject of legal
professional privilege.  If you are not the intended recipient any use,
disclosure or copying of this e-mail is unauthorised.  If you have
received
this e-mail in error, please notify the sender immediately by reply e-mail
and delete all copies of this transmission together with any attachments.



Return code=4 on scheduled backap

2002-01-09 Thread Guillaume Gilbert

Hello

We just installed a 4.2.1.15 client on an NT box and now we get the bug in the event 
reporting:

2002-01-08 17:31:06 --- SCHEDULEREC STATUS END
2002-01-08 17:31:06 --- SCHEDULEREC OBJECT END FEN_QUOT 2002-01-08 17:00:00
2002-01-08 17:31:06 ANS1512E Scheduled event 'FEN_QUOT' failed.  Return code = 4.

which reports to the server that the schedule has failed.

I searched adsm.org and this bug was supposed to be fixed in 4.2.1.15, but it clearly
isn't. Is there a workaround for this bug, or do I have to live with it until it is fixed?

The server is at 4.1.3 on AIX 4.3.3

Thanks for the help

Guillaume Gilbert
Storage administrator
CGI Group Montreal



Re: Select statement (using events table)

2002-06-07 Thread Guillaume Gilbert

I can't seem to get a handle on this events table. The SQL you supplied isn't working 
on my server (4.1.3.0 on AIX). When I do a select with no dates in the where clause,
all I get is the admin schedules for the current day. Other than that, I can't get a
thing. Is there something I'm missing?

Thanks

Guillaume Gilbert
CGI Canada




Zlatko Krastev/ACIT <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-07 08:54:26

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Select statement (using events table)

select schedule_name, status, result, reason -
from events -
where date(scheduled_start) <= current_date - cast(1 as interval day)

Zlatko Krastev
IT Consultant
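For a rolling 24-hour window (rather than whole calendar days), the comparison can be made against the timestamp itself, reusing the same `- 1 day` interval arithmetic that appears elsewhere in this thread — a sketch:

```
select schedule_name, status, result, reason -
  from events -
 where scheduled_start > current_timestamp - 1 day
```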




Please respond to <[EMAIL PROTECTED]>
To: "'Zlatko Krastev/ACIT'" <[EMAIL PROTECTED]>
cc:

Subject:RE: Select statement (using events table)

Hi Zlatko,

So that's the trick...

I've been trying to get the start date set, using some example statements, but I
cannot seem to get it done. Here's what I tried:

select schedule_name,status,result,reason from events where
(scheduled_start - 1 day)

Well, that doesn't work :-( (obviously because the syntax isn't right :-) ).
Do you happen to know how that works?

Kind regards,

Rick



-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: 6 June 2002 14:56
To: [EMAIL PROTECTED]
Subject: Re: Select statement (using events table)


Rick,

the trick is that Tivoli decided, for some reason, that select from events should be
limited as it is in Q EVENT. You need to specify a WHERE clause.

Zlatko Krastev
IT Consultant




Please respond to [EMAIL PROTECTED]
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Select statement

Hi *SMers,

I've been trying to figure something out using select statements. I want to know
what events did or didn't complete successfully for, say, the last 24 hours.
I'm getting the results, but I only get the results since last midnight.
How do I get the results from the last 24 hours?

This is what I have so far:

select schedule_name,scheduled_start,actual_start,result from events


Anyone here who can help?

Kind regards,

Rick Harderwijk
Systems Administrator
Factotum Media BV
Oosterengweg 44
1212 CN Hilversum
P.O. Box 335
1200 AH Hilversum
The Netherlands
Tel: +31-35-6881166
Fax: +31-35-6881199
E: [EMAIL PROTECTED]






Re: Select statement (using events table)

2002-06-07 Thread Guillaume Gilbert

I get nothing, as in

ANR2034E SELECT: No match found using this criteria.
ANS8001I Return code 11.

I don't get it... I should have all the scheduled backups for yesterday. I've tried
and tried to come up with an SQL query that would let me format the output of a q
ev * * the way I want it (without long node names wrapping), but nothing seems to
work...

Guillaume Gilbert
CGI Canada




Zlatko Krastev/ACIT <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-07 10:58:30

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Select statement (using events table)

Why does it not work? What error is returned, and if it complains about
syntax, at which word?
To be sure it works I've tested it on TSM 3.7.0 for Win (being at customer
site), but when I return to the office I will test it on 4.1.4.1 for AIX (our
test one) and on 4.2.1.15.

Zlatko Krastev
IT Consultant




Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject: Re: Select statement (using events table)

I can't seem to get a handle on this events table. The SQL you supplied
isn't working on my server (4.1.3.0 on AIX). When I do a select with no
dates in the where clause,
all I get is the admin schedules for the current day. Other than that, I
can't get a thing. Is there something I'm missing?

Thanks

Guillaume Gilbert
CGI Canada




Zlatko Krastev/ACIT <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-07
08:54:26

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Select statement (using events table)

select schedule_name, status, result, reason -
from events -
where date(scheduled_start) <= current_date - cast(1 as interval day)

Zlatko Krastev
IT Consultant




Please respond to <[EMAIL PROTECTED]>
To: "'Zlatko Krastev/ACIT'" <[EMAIL PROTECTED]>
cc:

Subject:RE: Select statement (using events table)

Hi Zlatko,

So that's the trick...

I've been trying to get the start date set, using some example statements, but I
cannot seem to get it done. Here's what I tried:

select schedule_name,status,result,reason from events where
(scheduled_start - 1 day)

Well, that doesn't work :-( (obviously because the syntax isn't right :-) ).
Do you happen to know how that works?

Kind regards,

Rick



-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: 6 June 2002 14:56
To: [EMAIL PROTECTED]
Subject: Re: Select statement (using events table)


Rick,

the trick is that Tivoli decided, for some reason, that select from events should be
limited as it is in Q EVENT. You need to specify a WHERE clause.

Zlatko Krastev
IT Consultant




Please respond to [EMAIL PROTECTED]
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
cc:

Subject:Select statement

Hi *SMers,

I've been trying to figure something out using select statements. I want to know
what events did or didn't complete successfully for, say, the last 24 hours.
I'm getting the results, but I only get the results since last midnight.
How do I get the results from the last 24 hours?

This is what I have so far:

select schedule_name,scheduled_start,actual_start,result from events


Anyone here who can help?

Kind regards,

Rick Harderwijk
Systems Administrator
Factotum Media BV
Oosterengweg 44
1212 CN Hilversum
P.O. Box 335
1200 AH Hilversum
The Netherlands
Tel: +31-35-6881166
Fax: +31-35-6881199
E: [EMAIL PROTECTED]






Réf. : Re: NSM Upgrade Experience

2002-06-20 Thread Guillaume Gilbert

We are currently testing version 4.2.2.2. Of course this is done on a very small scale,
so it's hard to run into these problems. My production DB is 18 GB. Should I wait
until this is solved before leaving version 4.1.3, which has been very good to me for the
past year and a half?

When you talk about corruption, what kind was it? I have some ANRD messages that
pop up here and there, and support has told me to do an audit db. This is very hard to
schedule in my environment since we have Oracle, DB2 and Notes applications logging
all through the day. I must say that Beat Largo's message has made me more confident
about the duration of an audit. I might be able to squeeze it into an 8-hour window.

Guillaume Gilbert
CGI Canada




"Jolliff, Dale" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-20 14:10:07

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: NSM Upgrade Experience

The corruption was in the DB before the upgrade.
There is a bad lock contention problem when running expiration with
4.2.2.0-4.
(4.2.2.4 did mitigate the problem some but didn't solve it)

We had symptoms prior to the upgrade, since the upgrade to 4.2.2 we have
been crashing on a regular basis.

I just shipped off a core dump, activity log and output from a lot of show
commands this morning to level 2 that showed an expiration process hanging
and the scheduler manager crashing.


It's not pretty, and I wouldn't go there if I didn't have to.

Again, YMMV - I may just be one unlucky S.O.B.




-Original Message-
From: Shamim, Rizwan (London) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 11:12 AM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


Dale,

Thanks for the info.

The majority of the work will be completed by the CE under the NSM agreement,
but I was told there are some housekeeping tasks that we will need to do
ourselves anyway.  We'll be prepared for that and make sure that everything is
checked and documented.

What sort of problems did you encounter with the database corruption?  Our
TSM databases are between 70 and 110GB so database corruption is somewhat of
a concern to us.

Regards

Rizwan




-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 1:54 PM
To: [EMAIL PROTECTED]
Subject: Re: NSM Upgrade Experience


Our NSM support contract calls for a CE onsite to do all upgrades of that
nature.
We went directly to 4.2.2.0 and AIX 5.1 with EC 17.

Aside from that, apply OS patch 5100-02 immediately after the upgrade.
It's a big one, 600+ MB.  From the looks of it, it affects almost every
fileset in the OS.
It fixes some duplicate IP address errors you'll see in errpt.

After that, double-check your NIC settings - if you do not do
auto-negotiation on the link, you'll need to reset it.

If I had it to do over again, I think I would stay with something earlier
than 4.2.2 - there are some serious locking issues that some corruption in
our DB has made almost unbearable for us.

YMMV, of course.




-Original Message-
From: Shamim, Rizwan (London) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 20, 2002 7:00 AM
To: [EMAIL PROTECTED]
Subject: NSM Upgrade Experience


Hello,

Is there anyone out there that has upgraded their NSM's to TSM version
4.2.1.9?

The upgrade procedure will also include an AIX upgrade to 5.1 (currently
4.3.3).  As the NSM's are black boxes the upgrade involves a mksysb restore
(upgrade) and customisation to be carried out by ourselves.  In total the
outage is approximately 9 hours.

I'd like to hear from anyone who has attempted this.

Regards


   
Rizwan Shamim
Central Backup Service (CBS)
Merrill Lynch Europe PLC
Tel: 020 7995 1176
Mobile: 0787 625 8246
mailto:[EMAIL PROTECTED]






IO errors on tape drives

2002-06-26 Thread Guillaume Gilbert

Hi there

Here is our setup. We use Gresham's DistribuTAPE software to manage our STK 9840
drives, which are in an STK 9310 (BIG mainframe library). The drives are fiber attached
through a Brocade SilkWorm switch.

We originally used Gresham's AdvanTAPE driver with the generictape device class. After
reading that this class could not do LAN-free backups and library sharing, I decided
to use TSM's native device driver for these drives, which are supported according to
Tivoli's website. Now changing a device class is not a small task. You have to migrate
all your tapes to the new class, and 600 9840 tapes at 20 GB apiece is a long and
tedious process. Since then, we've been having all sorts of I/O errors, and the more
recent the server version, the worse it is. My production server is at 4.1.3. When the
errors pop up it's usually this one:

05/21/2002 09:38:47  ANR8302E I/O error on drive TSMCPLX9840E (/dev/mt3)
  (OP=READ, CC=206, KEY=FF, ASC=FF, ASCQ=FF,
  SENSE=**NONE**, Description=General SCSI failure).  Refer
  to Appendix D in the 'Messages' manual for recommended
  action.

which is not too bad; it just updates the tape to readonly and dismounts it. We also get
CC=205 and some CC=40*. Luckily for me, I have a test server around for testing AIX 5.1
which is connected to the same drives, so I thought I'd have better luck with 4.2.2.*.
Well, it's worse: every time a dismount occurs I get this error:

06/26/2002 15:36:39  ANR8302E I/O error on drive TSMCPLX9840E (/dev/mt0)
  (OP=OFFL, Error Number=78, CC=205, KEY=FF, ASC=FF,
  ASCQ=FF, SENSE=**NONE**, Description=SCSI adapter
  failure).  Refer to Appendix D in the 'Messages' manual
  for recommended action.

And the tape is no longer readable. I've had a PMR open for a month now on this thing,
and finally it's up to level 2 support.

Does anybody out there have the same setup as I do? Any help would be appreciated.

Guillaume Gilbert
CGI Canada



Réf. : Re: IO errors on tape drives

2002-06-27 Thread Guillaume Gilbert

Hi Beat

We had our STK tech do a diagnostic and upgrade to microcode 1.30.111, if I remember
correctly. The diags showed that the drives were doing fine. And I am sure the
library is doing OK, because z/OS is pounding on it night and day without any problems.
Our switch is at firmware level 2.4.1f, which is pretty recent.

Are your drives 9840A or 9840B? I'm thinking the TSM driver is optimized for 9840B
drives, which are much faster, and the 9840A is responding too slowly, which is causing
timeouts. I suggest you don't upgrade to server 4.2.2.*. Every time I mount a tape it
becomes unreadable. Wouldn't want that on a production server...

Guillaume Gilbert
CGI Canada




Beat Largo <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-27 08:01:11

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: IO errors on tape drives

Hi Gilbert

Interestingly enough, there is somebody out there with the same
setup as we have. We had the same problem after a TSM server upgrade to
4.2.1.15. We are not yet finished with the solution (maybe next week we
will know for sure), but it seems the problem will be solved with an
upgrade of the microcode of the tape drives and the StorageTek library.
The StorageTek technician is in-house today to do the upgrade. We don't
know the microcode level yet, but if you need it, we can give it to you
tomorrow.
Here are some of our failures:
26.06.02   10:43:54  ANR8302E I/O error on drive TSMT320 (/dev/mt1)
(OP=READ,
  Error Number=5, CC=305, KEY=04, ASC=44, ASCQ=B6,

SENSE=F0.00.04.00.04.00.00.12.00.00.00.00.44.B6.00.00.00-
  .00.37.E8.37.E8.35.4D., Description=Drive
failure).
  Refer to Appendix D in the 'Messages' manual for
  recommended action.
26.06.02   11:05:04  ANR8302E I/O error on drive TSMT320 (/dev/mt6)
(OP=OFFL,
  Error Number=46, CC=203, KEY=04, ASC=53, ASCQ=01,

SENSE=70.00.04.00.00.00.00.12.00.00.00.00.53.01.00.00.00-
  .00.5C.88.00.00.00.00., Description=Manual
intervention
  required).  Refer to Appendix D in the 'Messages'
manual
  for recommended action.
27.06.02   02:01:29  ANR8302E I/O error on drive TSMT320 (/dev/mt6)
(OP=OFFL,
  Error Number=46, CC=203, KEY=04, ASC=53, ASCQ=01,

SENSE=70.00.04.00.00.00.00.12.00.00.00.00.53.01.00.00.00-
  .00.5C.88.00.00.00.00., Description=Manual
intervention
  required).  Refer to Appendix D in the 'Messages'
manual
  for recommended action.
27.06.02   10:04:19  ANR8302E I/O error on drive TSMT320 (/dev/mt5)
(OP=READ,
  Error Number=5, CC=305, KEY=04, ASC=44, ASCQ=B6,

SENSE=F0.00.04.00.04.00.00.12.00.00.00.00.44.B6.00.00.00-
  .00.37.E8.37.E8.35.4D., Description=Drive
failure).
  Refer to Appendix D in the 'Messages' manual for
  recommended action.



Best regards,

Beat Largo

Zurich Switzerland
IT SC - Shared Service Center
Storage Management, IFS
Unterrohrstr. 5
Postfach
8952 Schlieren
Phone: +4116258214 / Fax +4116258061
mailto:[EMAIL PROTECTED]
homepage:http://www.zurich.com





From: Guillaume Gilbert
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 26.06.2002 21:58
Please respond to "ADSM: Dist Stor Manager"
cc:
Subject: IO errors on tape drives

Hi there

Here is our setup. We use Gresham's DistribuTAPE software to manage our STK
9840 drives, which are in an STK 9310 (BIG mainframe library). The drives
are fiber attached through a Brocade SilkWorm switch.

We originally used Gresham's AdvanTAPE driver with the generictape device
class. After reading that this class could not do LAN-free backups and
library sharing, I decided to use TSM's native device driver for these
drives, which are supported according to Tivoli's website. Now changing a
device class is not a small task. You have to migrate all your tapes to the
new class, and 600 9840 tapes at 20 GB apiece is a long and tedious
process. Since then, we've been having all sorts of I/O errors, and the
more recent the server version, the worse it is. My production server is at
4.1.3. When the errors pop up it's usually this one:

05/21/2002 09:38:47  ANR8302E I/O error on drive TSMCPLX9840E
(/dev/mt3)
  (OP=READ, CC=206, KEY

Reclaiming LTO Tapes

2002-07-03 Thread Guillaume Gilbert

Hey there

Maybe it's because I'm used to using STK 9840 tapes, but yesterday I saw an LTO tape at
25% utilisation take almost 4 hours to reclaim, which to me is awful. How am I
supposed to reclaim my tapes with that kind of performance? The drives I use are IBM
Ultriums in a 3584 library. With only 2 drives, it's hard for users to do
restores...

Are there any options I can change to make this go a bit faster? I know the start/stop
performance on LTOs isn't good.

Thanks for the help.

Guillaume Gilbert
CGI Canada



Re: Reclaiming LTO Tapes

2002-07-04 Thread Guillaume Gilbert

We are an outsourcing firm, and the client wanted a 3584... My team (storage
admins) didn't have a say in it, and now we are stuck with a large library
dedicated to one client. When you're trying to come up with an enterprise-wide,
multi-client, multi-site, SAN-based backup solution and the bean counters are
watching your every move with a magnifying glass, it's tough to swallow. I am
pushing to buy two more drives, since we are barely making our backup window.
We are backing up 10 Oracle databases via LAN-free backup (which is almost
hitting the LTO max of 16 MB/sec). It's getting hard to fit those in
with the regular backups.

I'm merely complaining because a 50% used 9840 gets reclaimed within 30 minutes
or less. The data on those is client compressed, so that is 10 GB of already
compressed data in 30 minutes, and that's tape to tape. I reclaim my 9840s at
40% reclaimable space, and there are between 10 and 30 of them every day in 1
major stg pool (380 tapes) and 4 minor ones (15-30 tapes). When you've got 6 of
these drives in an STK 9310 silo (shared with MVS), you're in heaven :)

If I remember correctly, the data on that tape was client compressed (I turned
it off a month ago). Maybe it had 115 GB full. So that is about 30 GB in 4
hours. As for a FILE devclass, I'm thinking about it, but I'll have to check
if we have the disk space. See, this client wanted the best of everything, so
he got two Hitachi 9960s with 5 TB in them. Those things can hold up to 40
TB (even more now with the new high-density HDs). Another big waste of space!
On top of that, I'm stuck doing weekly and monthly backups with three
different nodenames for each client. At least there are only 8 AIX servers. I
thought it through and through and finally decided that three nodenames,
each with its own client scheduler, was the best way to go. After a few
months of this, I'm still debating with myself whether I should have used
archives. The backupset route was set aside because of the tape space it
wastes and how difficult backupsets are to keep track of. Anyway, the weeklys
and monthlys are LAN-free, so they're not crowding the network, which is GE...

Well, that's my rant and I'm sticking to it :)
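
A quick back-of-the-envelope check of the rates described above (a hypothetical sketch; the GB and time figures are the rough ones quoted in this thread, and the helper function is invented for illustration):

```python
# Back-of-the-envelope reclamation throughput check, using the rough
# figures quoted in this thread (10 GB in ~30 min on a 9840, ~30 GB in
# ~4 hours on LTO, ~16 MB/s LTO native rate).

def throughput_mb_s(gigabytes: float, minutes: float) -> float:
    """Effective tape-to-tape rate in MB/s (1 GB = 1024 MB)."""
    return gigabytes * 1024 / (minutes * 60)

# STK 9840: ~10 GB of already-compressed data reclaimed in ~30 minutes
stk_9840 = throughput_mb_s(10, 30)

# IBM LTO (Ultrium): ~30 GB reclaimed in ~4 hours
lto = throughput_mb_s(30, 4 * 60)

print(f"9840 reclaim: {stk_9840:.1f} MB/s")
print(f"LTO reclaim:  {lto:.1f} MB/s")
print(f"LTO reclaim runs at {lto / 16 * 100:.0f}% of its ~16 MB/s native rate")
```

which makes the complaint concrete: the LTO reclaim is running at roughly a third of the 9840's effective rate, and far below the drive's streaming speed.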

Guillaume Gilbert
CGI Canada

- Original Message -
From: "Tab Trepagnier" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, July 03, 2002 6:15 PM
Subject: Re: Reclaiming LTO Tapes


> Gilbert,
>
> Is there a particular reason you have such a large library with just two
> drives?  I have a 3583 with four drives.  I did that to prevent the very
> problem you're reporting.
>
> Assuming it hosts a primary storage pool, you will need two drives for
> reclamation unless you use a FILE devclass disk pool as a reclaim pool.
> Then you can turn one two-drive process into two one-drive processes with
> a buffer in between them.
>
> If you also might be doing reclamation of your copypool(s) - a situation I
> sometimes see on my system - you will need a third drive.
>
> A fourth drive reserves one drive for "outgoing" data to clients.  That is
> why I put four drives in all four of my tape libraries.
>
> My 3583 connected HV Diff SCSI gives a sustained 12 MB/s per drive.  That
> works out to about 43 GB/hour minus "bubbles" from when the tape is being
> repositioned.
>
> Tab Trepagnier
> TSM Administrator
> Laitram Corporation
>
>
>
>
>
>
> Guillaume Gilbert <[EMAIL PROTECTED]>
> Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
> 07/03/2002 08:40 AM
> Please respond to "ADSM: Dist Stor Manager"
>
>
> To: [EMAIL PROTECTED]
> cc:
> Subject: Reclaiming LTO Tapes
>
>
> Hey there
>
> Maybe it's because I'm used to using STK 9840 tapes, but yesterday I saw an
> LTO tape at 25% utilisation take almost 4 hours to reclaim, which to me
> is awful. How am I
> supposed to reclaim my tapes with that kind of performance? The drives I
> use are IBM Ultriums in a 3584 library. With only 2 drives, it's hard
> for users to do
> restores...
>
> Are there any options I can change to make this go a bit faster? I know
> the start/stop performance on LTOs isn't good.
>
> Thanks for the help.
>
> Guillaume Gilbert
> CGI Canada
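
For what it's worth, Tab's FILE-devclass buffer idea sketches out roughly like this in admin commands (a hypothetical example; the pool names, directory, and sizes are invented, and the availability of the RECLAIMSTGPOOL parameter should be checked against your server level):

```
def devclass reclaimfile devtype=file maxcapacity=2048M mountlimit=2 directory=/tsm/reclaim
def stgpool reclaimpool reclaimfile maxscratch=50
upd stgpool ltopool reclaimstgpool=reclaimpool
```

Reclamation then reads tape into the disk-backed FILE pool with one drive, and migration writes it back out with the other, instead of one process tying up both drives.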



Deleting old TDP for Oracle backups

2002-07-04 Thread Guillaume Gilbert

Hello

I have old TDP for Oracle backups that I want to get rid of. This being the first time
I used TDPO, I didn't use a nodename for each database; I used a nodename for each
server. So now I have 3 or 4 databases using the same nodename. Some of these
databases have since disappeared, and I am stuck with the backups. The Oracle admin can't
seem to figure out how to get RMAN to delete the backups, and since they are in the
active state, they will never expire. I've opened a PMR on this and I'm hoping for a
miracle. That SHARE request about being able to delete one file from a backup, and not
only a whole filespace, seems to me like it would be mighty handy here.

If anyone out there in TSM land has a recipe, please share.

Thanks

Guillaume Gilbert
CGI Canada



Re: IO errors on tape drives -- Update

2002-07-08 Thread Guillaume Gilbert

Hi all

After many trials I finally found out what was wrong. My switch was set in QuickLoop
mode, and it seems TSM's driver doesn't like that. After turning it off,
everything is working OK.

Guillaume Gilbert
CGI Canada




Beat Largo <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-27 10:03:06

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Réf. : Re: IO errors on tape drives

Hi Gilbert

Our setup is not completely the same. We got a Powderhorn 9310 with 9940A
drives. The microcode is R1.30.212f.
We will soon know if the upgrade of the microcode will solve our problem
with the message  ANR8302E.

Anyway we will not upgrade the TSM-Server soon, maybe end of this year and
we will go directly to 5.

Best regards,

Beat





From: Guillaume Gilbert
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 27.06.2002 15:04
Please respond to "ADSM: Dist Stor Manager"
cc:
Subject: Réf. : Re: IO errors on tape drives

Hi Beat

We had our STK tech do a diagnostic and upgrade to microcode 1.30.111, if I
remember correctly. The diags showed that the drives were doing fine. And I
am sure the
library is doing OK, because z/OS is pounding on it night and day without any
problems. Our switch is at firmware level 2.4.1f, which is pretty recent.

Are your drives 9840A or 9840B? I'm thinking the TSM driver is optimized
for 9840B drives, which are much faster, and the 9840A is responding too
slowly, which is causing
timeouts. I suggest you don't upgrade to server 4.2.2.*. Every time I mount
a tape it becomes unreadable. Wouldn't want that on a production server...

Guillaume Gilbert
CGI Canada




Beat Largo <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-06-27 08:01:11

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: IO errors on tape drives

Hi Gilbert

Interestingly enough, there is somebody out there with the same
setup as we have. We had the same problem after a TSM server upgrade to
4.2.1.15. We are not yet finished with the solution (maybe next week we
will know for sure), but it seems the problem will be solved with an
upgrade of the microcode of the tape drives and the StorageTek library.
The StorageTek technician is in-house today to do the upgrade. We don't
know the microcode level yet, but if you need it, we can give it to you
tomorrow.
Here are some of our failures:
26.06.02   10:43:54  ANR8302E I/O error on drive TSMT320 (/dev/mt1)
(OP=READ,
  Error Number=5, CC=305, KEY=04, ASC=44, ASCQ=B6,

SENSE=F0.00.04.00.04.00.00.12.00.00.00.00.44.B6.00.00.00-
  .00.37.E8.37.E8.35.4D., Description=Drive
failure).
  Refer to Appendix D in the 'Messages' manual for
  recommended action.
26.06.02   11:05:04  ANR8302E I/O error on drive TSMT320 (/dev/mt6)
(OP=OFFL,
  Error Number=46, CC=203, KEY=04, ASC=53, ASCQ=01,

SENSE=70.00.04.00.00.00.00.12.00.00.00.00.53.01.00.00.00-
  .00.5C.88.00.00.00.00., Description=Manual
intervention
  required).  Refer to Appendix D in the 'Messages'
manual
  for recommended action.
27.06.02   02:01:29  ANR8302E I/O error on drive TSMT320 (/dev/mt6)
(OP=OFFL,
  Error Number=46, CC=203, KEY=04, ASC=53, ASCQ=01,

SENSE=70.00.04.00.00.00.00.12.00.00.00.00.53.01.00.00.00-
  .00.5C.88.00.00.00.00., Description=Manual
intervention
  required).  Refer to Appendix D in the 'Messages'
manual
  for recommended action.
27.06.02   10:04:19  ANR8302E I/O error on drive TSMT320 (/dev/mt5)
(OP=READ,
  Error Number=5, CC=305, KEY=04, ASC=44, ASCQ=B6,

SENSE=F0.00.04.00.04.00.00.12.00.00.00.00.44.B6.00.00.00-
  .00.37.E8.37.E8.35.4D., Description=Drive
failure).
  Refer to Appendix D in the 'Messages' manual for
  recommended action.



Best regards,

Beat Largo

Zurich Switzerland
IT SC - Shared Service Center
Storage Management, IFS
Unterrohrstr. 5
Postfach
8952 Schlieren
Phone: +4116258214 / Fax +4116258061
mailto:[EMAIL PROTECTED]
homepage:http://www.zurich.com






Questions concerning Netware backups

2002-07-12 Thread Guillaume Gilbert

Hi all

We are going to be backing up some Netware servers soon, and I have a few questions from
the sysadmins.

They use the Open File Agent from ARCserve. Is there a similar solution for TSM?

I searched the archives for posts concerning backing up Novell clusters, and what I
found wasn't too clear. It seems there is no clear-cut solution like with MSCS. Is
anybody backing up a Novell cluster successfully?

Also, this is the first time I'll be dealing with Netware. Are there any gotchas I
should be aware of, or does the manual tell me everything I need to know? The client
version will probably be 5.1, as will the server.

Thanks for the help

Guillaume Gilbert
CGI Canada