Re: How to save volhist and devconfig

2002-08-01 Thread Halvorsen Geirr Gulbrand

Hi,
I have had good experience in creating a TSM Administrative Schedule that
saves volhist and devcnfg to disk. Then I've made a client schedule that
sends a mail with those files (and different status messages for TSM) to
backupoperators (including an external account, in case of a complete
disaster - mailserver and TSM server are in the same location).
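For reference, a minimal sketch of such an administrative schedule (script and schedule names are made up, and the file paths are placeholders; adjust to your server):

define script SAVE_CONFIG "backup volhistory filenames=c:\tsmfiles\volhist.out"
update script SAVE_CONFIG "backup devconfig filenames=c:\tsmfiles\devcnfg.out"
define schedule SAVE_CONFIG_SCHED type=administrative cmd="run SAVE_CONFIG" active=yes starttime=06:00 period=1 perunits=days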

Rgds.
Geirr G. Halvorsen

-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]]
Sent: 1. august 2002 21:50
To: [EMAIL PROTECTED]
Subject: Re: How to save volhist and devconfig


Brian,
   Yes, we do that.

On our windows TSM server we automatically run a batch file that
1) saves the volhist and devconfig into a local directory.
2) copies all this, and the license files to two servers at other
   locations across our network
3) generates a floppy disk with the same info on it to go out with
   the database backup tape each morning, at the same time we send
   off offsite backups.

I wanted to run it from the TSM scheduler and that was a little trickier,
but doable with some help from this list :)
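A rough shell sketch of steps 1 and 2 (all paths here are stand-in temporary directories, not a real TSM layout; on a Windows server the same logic would live in a batch file):

```shell
#!/bin/sh
# Stand-ins for the TSM server directory and the two remote copy targets.
set -e
TSMDIR=$(mktemp -d)
REMOTE1=$(mktemp -d)
REMOTE2=$(mktemp -d)

# 1) Pretend the admin schedule has just saved these files locally.
echo "volume history"  > "$TSMDIR/volhist.out"
echo "device config"   > "$TSMDIR/devcnfg.out"
echo "license data"    > "$TSMDIR/nodelock"

# 2) Copy them, plus the license file, to both remote locations.
for dest in "$REMOTE1" "$REMOTE2"; do
  cp "$TSMDIR/volhist.out" "$TSMDIR/devcnfg.out" "$TSMDIR/nodelock" "$dest/"
done
```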

-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 2:18 PM
To: [EMAIL PROTECTED]
Subject: How to save volhist and devconfig


Hello,

I was wondering how other administrators are saving the volhist, devconfig,
dsmserv.dsk and dsmserv.opt.

Every day we save the volhist and devconfig and manually ftp these files to
a network-drive in case of a crash of the server.

I'm not familiar with scripts so far, but I want to automate this job.

So, I was wondering how other administrators are doing this job.

Thanks for your reply.

Brian.



_
Chat online with friends and try out MSN Messenger:
http://messenger.msn.nl



Re: 3494 I/O error on 3590 drives

2002-08-01 Thread Williams, Tim P {PBSG}

I like 'mtlib -l /dev/lmcp0 -A'; it shows adapters and connectivity.
If you are HA on your 3494, it will show you both sides, both adapters.
FYI

-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 3:47 AM
To: [EMAIL PROTECTED]
Subject: Re: 3494 I/O error on 3590 drives


Did you install the ibmatl code for a 3494 library?  If so, can you issue
this command:

mtlib -l 3494lib -qL

This will verify that you can talk to the library.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Koen Willems [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 30, 2002 6:45 PM
To: [EMAIL PROTECTED]
Subject: 3494 I/O error on 3590 drives


Dear Listers,

When I want to define a 3590 drive in a 3494 library, I get an I/O error.

define libr 3494lib type=349X device=3494lib

define drive 3494lib drive0 device=mt2.0.0.1

Server option enable3590libr yes.

My SCSI addresses on the drives are OK.

RMS is disabled.

Termination is OK.

I can see my 3590 on my Adaptec 2944 SCSI bus.

I use the TSM device drivers for the 3590.

I am running TSM 4.2.2 on W2K.

Has anybody ever connected a 3494 to NT or W2K?

Thanks for your time,

Koen Willems

_
Join the world's largest e-mail service with MSN Hotmail.
http://www.hotmail.com



Re: 3494 I/O error on 3590 drives

2002-08-01 Thread Seay, Paul

Did you install the ibmatl code for a 3494 library?  If so, can you issue
this command:

mtlib -l 3494lib -qL

This will verify that you can talk to the library.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Koen Willems [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 30, 2002 6:45 PM
To: [EMAIL PROTECTED]
Subject: 3494 I/O error on 3590 drives


Dear Listers,

When I want to define a 3590 drive in a 3494 library, I get an I/O error.

define libr 3494lib type=349X device=3494lib

define drive 3494lib drive0 device=mt2.0.0.1

Server option enable3590libr yes.

My SCSI addresses on the drives are OK.

RMS is disabled.

Termination is OK.

I can see my 3590 on my Adaptec 2944 SCSI bus.

I use the TSM device drivers for the 3590.

I am running TSM 4.2.2 on W2K.

Has anybody ever connected a 3494 to NT or W2K?

Thanks for your time,

Koen Willems




Re: Restoring to raw devices

2002-08-01 Thread Joshua S. Bassi

I can only think of the 2 most obvious ones:

1) Verify that the device is not being accessed.
2) Make sure both devices are the same size.


--
Joshua S. Bassi
Independent IT Consultant
IBM Certified - AIX 4/5L, SAN, Shark
eServer Systems Expert -pSeries HACMP
Tivoli Certified Consultant - ADSM/TSM
Cell (415) 215-0326
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Jason Schram
Sent: Thursday, August 01, 2002 7:36 AM
To: [EMAIL PROTECTED]
Subject: Restoring to raw devices

Hello,

   Are there any special concerns or procedures that are needed to restore
to a raw device?



Thank-You in advance
Jason Schram
Turning Stone Casino Resort
Systems Technician



Re: Space reclamation of copypool

2002-08-01 Thread Joshua S. Bassi

Space reclamation is based upon a threshold.  Just because the threshold
is changed back doesn't mean the active process is cancelled. If you want
to cancel it, issue a CANCEL PROCESS command against the process ID.
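A sketch of what that looks like from dsmadmc (the process number 42 is purely illustrative; read the real one off the QUERY PROCESS output first):

dsmadmc -id=admin -password=xxxxx "query process"
dsmadmc -id=admin -password=xxxxx "cancel process 42"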


--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
eServer Systems Expert -pSeries HACMP
Tivoli Certified Consultant - ADSM/TSM
Cell (415) 215-0326
[EMAIL PROTECTED]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
Halvorsen Geirr Gulbrand
Sent: Thursday, August 01, 2002 2:15 AM
To: [EMAIL PROTECTED]
Subject: Space reclamation of copypool

Hello everyone,
I have the following problem with space reclamation of my copy pool.
To start reclamation, I have a script running UPDATE STG COPYPOOL
RECLAIM=50.
Three hours later, I run another script to stop reclamation (UPDATE STG
COPYPOOL RECLAIM=100), but reclamation never stops. This affects all of my
daily operations (migration, backup stgpool, ...) because the reclamation
process uses both drives.
The question is: why does space reclamation not stop after the update?
Is there a way of canceling the process (by TSM script)?

Best regards
Geirr G. Halvorsen



Select statement for DRM

2002-08-01 Thread Jin Bae Chi

Hi, TSM experts,

I'm trying to start using DRM, and when I do 'q drm * wherestate=mountable'
it shows all the volumes and their status, but not the total number of
volumes. Does anyone know the correct select statement for showing the
volume count? That way the operator and courier can be sure of the number
of volumes to move in and out. Thanks again for your help.
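One hedged guess at such a select, assuming the DRMEDIA table and STATE column names I remember (verify them against the system catalog on your own server before relying on this):

select count(*) as "Mountable volumes" from drmedia where state='MOUNTABLE'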






Jin Bae Chi (Gus)
Data Center
614-287-2496
614-287-5488 Fax
e-mail: [EMAIL PROTECTED]



How to backup a shared disk array?

2002-08-01 Thread Brazner, Bob

We have a storage array fronted by two clustered NT servers.  The NT
servers are identically (or nearly identically) configured.  Each has a TSM
client installed.  We'd like to develop a TSM backup strategy that permits
daily incremental backup of both servers, including the storage array, but
without doubly backing up the storage array.  If either server fails, the
other server should be able to *automatically* assume the role of backing
up the storage array.  Either server should be able to restore storage
array files.  Any suggestions?

Bob Brazner
Johnson Controls, Inc.
(414) 524-2570



MS Exchange backup fails

2002-08-01 Thread Yahya Ilyas

We recently upgraded from WinNT to Win2K SP2, and ADSM/TSM incremental and
full backups started failing on those machines.  One server, which has had
Win2K from the beginning, does not fail, and backup works fine.
The MS Exchange version is 5.5 SP4, and TDP for MS Exchange is version 1.1.

I get the following errors:

ACN3025E -- Backup error encountered.
ACN4226E -- Exchange Error: Unable to perform an incremental backup because
a required Microsoft Exchange database log file could not be found.
07/27/2002 18:06:46 Finished command. Return code is: 310
07/27/2002 18:06:46 ANS1512E Scheduled event 'EXCHMAINEX4-SCH' failed.
Return code = 310.
07/27/2002 18:06:46 Sending results for scheduled event 'EXCHMAINEX4-SCH'.
07/27/2002 18:06:46 Results sent to server for scheduled event
'EXCHMAINEX4-SCH'.


Thanks
Yahya


>   -
>   Yahya Ilyas
>   Systems Programmer Sr
>   Systems Integration & Management
>   Information Technology
>   Arizona State University, Tempe, AZ 85287-0101
>
>   [EMAIL PROTECTED]
>   Phone: (480) 965-4467
>
>



Re: Compare TSM/Legato and Veritas

2002-08-01 Thread Seay, Paul

Anything you find on the Internet is too old to be any good.  But, look at
my stuff on WWW.ADSM.ORG for NetBackup.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Ramnarayan, Sean A [EDS] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 7:58 AM
To: [EMAIL PROTECTED]
Subject: Compare TSM/Legato and Veritas


Hi

I have been asked by management to do a report/difference on the technical
specifications (very brief) of these products :
  Legato
  TSM
  Veritas

Has anyone done this before? I just need a guideline on where to go on the
Internet to find this info.


Thks
Sean Ramnarayan
EDS (South Africa)




"MMS" made the following annotations on 08/01/2002 01:58:16 PM

--
DISCLAIMER
This message may contain confidential information that is legally privileged
and is intended only  for the use of the parties to whom it is addressed. If
you are not an intended recipient, you are hereby notified that any
disclosure, copying, distribution or use of any information in this message
is strictly prohibited. If you have received this message in error, please
notify the sender immediately and delete the message. Thank you.


==



Backup Express?

2002-08-01 Thread Jim Kirkman

Anyone familiar with this product, from Syncsort? Master Node runs on a
Linux server.

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884



Striped backup speeds.

2002-08-01 Thread Todd Lundstedt

Environment
Server: TSM 4.2.1.7, AIX 4.3.3, two fiber gigabit ethernet cards, each
accessing disk and tape (I know, this is bad, but it shouldn't impact this
situation)
Client: W2K, 4 processors (around 900 MHz, I think), just over 3GB of
physical memory, TSM B/A client 4.2.1.32, TDP for SQL 2.2, Storage Agent
4.2.1.7, three fiber gigabit ethernet cards (qLogic), one for disk, two for
tape access.
tape access.
SAN: Shark ESS
Node setup has 4 mountpoints allowed.
Fiber Network: dual Brocade 2109 fiber switches.
Library IBM 3584 with five LTO Ultrium fiber connected drives (three
connected to one switch, two to the other switch).
Utility used to measure speed during backups: qLogic Sanblade Manager.

Backing up a 33GB SQL database located on the Shark to one tape, I can get
about 28-30 MB/second.  Backing up the same database striped to two tapes,
I would get about 31 MB/second.  I had the DB admin spread the three files
in the database over three of the four different available drive letters
(actually on 2 different packs on the ESS Shark).  That made my two-stripe
backup in the 36-37MB/second range (single stripe backups are still in the
28-30MB/second range).  I am looking to find a way to get somewhere near
the 50-60MB/s range for a backup.  It doesn't make sense to stripe a backup
when you are only getting a 20% increase in throughput (and this works out
to less than a 20% decrease in time for the backup because of the storage
agent mounting tapes, locating last files, etc, one tape at a time).  I am
doing testing in preparation for a 500GB database.

I did some tests with very large files on each of the four drive letters,
simultaneously copying the large files to >nul (using four different
command line prompts), to get a benchmark of data throughput on the disk
fiber card.  The max sustained throughput (based on the qLogic monitor) was
~66MB/second.  Most of the time, the rate hovered in the 45-55 MB/second
range.  CPU utilization on the client node during a 2-stripe backup is
around 85-95%, and during a single-stripe backup, it is around 35-45% (on
all four processors).


Can anyone suggest some things to check that might impact this?

On another note, the backups are not completing as of my last two-stripe
test.  I am working with Tivoli support regarding this problem.  Backups
with a single stripe are working.



Re: How to save volhist and devconfig

2002-08-01 Thread Taha, Hana

We set up a cron job to ftp the saved files every morning.
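For what it's worth, a sketch of that kind of cron job (host name, login, and paths are invented, and a real setup should avoid a cleartext password):

# crontab entry: run at 06:30 every day
30 6 * * * /usr/local/bin/send_tsm_files.sh

# /usr/local/bin/send_tsm_files.sh
#!/bin/sh
cd /usr/tivoli/tsm/server/bin
ftp -n backuphost <<EOF
user tsmcopy secret
put volhist.out
put devcnfg.out
quit
EOF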



Thank You,
Hana Taha


-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 2:18 PM
To: [EMAIL PROTECTED]
Subject: How to save volhist and devconfig

Hello,

I was wondering how other administrators are saving the volhist, devconfig,
dsmserv.dsk and dsmserv.opt.

Every day we save the volhist and devconfig and manually ftp these files to
a network-drive in case of a crash of the server.

I'm not familiar with scripts so far, but I want to automate this job.

So, I was wondering how other administrators are doing this job.

Thanks for your reply.

Brian.






Re: multiple clients to a single drive

2002-08-01 Thread Prather, Wanda

This is the reason the usual way to configure TSM is to have clients backing
up to the diskpool, then have the diskpool migrate to a tape pool.
That way you can back up more clients at once than you have tape drives.


-Original Message-
From: Rob Schroeder [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 10:46 AM
To: [EMAIL PROTECTED]
Subject: multiple clients to a single drive


When my backups are running, there are a bunch of concurrent sessions
going.  I have about 55 servers sharing four tape drives.  Although there
might be 10 sessions going at once, only 4 are doing any work.  Is there
any way to get multiple backup sessions to write to a single drive
concurrently.  I know that there is excess capacity on throughput to the
drives and to the TSM server, so I know the infrastructure could handle a
larger load.  Can TSM?

Rob Schroeder
Famous Footwear



Re: multiple clients to a single drive

2002-08-01 Thread Salak Juraj

How about using a primary disk pool large enough to cache one night's
backups? Multiple streams on tape are not supported in TSM, and if they
were, they would slow restores down by their very nature.

regards
juraj

> -Original Message-
> From: Rob Schroeder [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 4:46 PM
> To: [EMAIL PROTECTED]
> Subject: multiple clients to a single drive
>
>
> When my backups are running, there are a bunch of concurrent sessions
> going.  I have about 55 servers sharing four tape drives.
> Although there
> might be 10 sessions going at once, only 4 are doing any
> work.  Is there
> any way to get multiple backup sessions to write to a single drive
> concurrently.  I know that there is excess capacity on
> throughput to the
> drives and to the TSM server, so I know the infrastructure
> could handle a
> larger load.  Can TSM?
>
> Rob Schroeder
> Famous Footwear
>



Re: How to save volhist and devconfig

2002-08-01 Thread Rob Schroeder

I have written a batch file that runs on our operators' Win2k Pro machine.
The batch file maps the network drive to the TSM server and copies the
required files to local diskette and another network drive.  I just use the
Windows scheduler to have this run every day at the same time.  This may
not be the best approach, but it only took 5 minutes and my operations
staff doesn't have to worry about anything.

Rob Schroeder
Famous Footwear



From: brian welsh
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 08/01/2002 02:17 PM
Subject: How to save volhist and devconfig
Please respond to: "ADSM: Dist Stor Manager"

Hello,

I was wondering how other administrators are saving the volhist, devconfig,
dsmserv.dsk and dsmserv.opt.

Every day we save the volhist and devconfig and manually ftp these files to
a network-drive in case of a crash of the server.

I'm not familiar with scripts so far, but I want to automate this job.

So, I was wondering how other administrators are doing this job.

Thanks for your reply.

Brian.






Re: How to save volhist and devconfig

2002-08-01 Thread Coats, Jack

Brian,
   Yes, we do that.

On our windows TSM server we automatically run a batch file that
1) saves the volhist and devconfig into a local directory.
2) copies all this, and the license files to two servers at other
   locations across our network
3) generates a floppy disk with the same info on it to go out with
   the database backup tape each morning, at the same time we send
   off offsite backups.

I wanted to run it from the TSM scheduler and that was a little trickier,
but doable with some help from this list :)

-Original Message-
From: brian welsh [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 2:18 PM
To: [EMAIL PROTECTED]
Subject: How to save volhist and devconfig


Hello,

I was wondering how other administrators are saving the volhist, devconfig,
dsmserv.dsk and dsmserv.opt.

Every day we save the volhist and devconfig and manually ftp these files to
a network-drive in case of a crash of the server.

I'm not familiar with scripts so far, but I want to automate this job.

So, I was wondering how other administrators are doing this job.

Thanks for your reply.

Brian.






dsmcad sched service

2002-08-01 Thread Jim Kirkman

It may be wishful thinking, but do you still need to bounce the scheduler
after changes to the .opt file with the 5.1 client? And if you're
running the scheduler as part of the managed services (dsmcad), does that
mean unloading the dsmcad module?

This is, unfortunately, on Netware 5

thanks,

--
Jim Kirkman
AIS - Systems
UNC-Chapel Hill
966-5884



How to save volhist and devconfig

2002-08-01 Thread brian welsh

Hello,

I was wondering how other administrators are saving the volhist, devconfig,
dsmserv.dsk and dsmserv.opt.

Every day we save the volhist and devconfig and manually ftp these files to
a network-drive in case of a crash of the server.

I'm not familiar with scripts so far, but I want to automate this job.

So, I was wondering how other administrators are doing this job.

Thanks for your reply.

Brian.






AW: Gigabit Ethernet Problem

2002-08-01 Thread Rupp Thomas (Illwerke)

Our NSM (Network Storage Manager) is running AIX 5.1 ML02

> -Original Message-
> From:  Ryan, Phil [SMTP:[EMAIL PROTECTED]]
> Sent:  Thursday, August 01, 2002 15:05
> To:   [EMAIL PROTECTED]
> Subject:  Re: Gigabit Ethernet Problem
> 
> What OS are you running?
> 
> -Original Message-
> From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 6:52 AM
> To: [EMAIL PROTECTED]
> Subject: Gigabit Ethernet Problem
> 
> 
> Hi TSM-ers,
> 
> we have narrowed down our performance problem a bit and seen
> (with a sniffer) that the TCP/IP acknowledgements from our TSM
> server (in fact a 3466 (NSM)) sometimes take forever (200 to
> more than 1000 seconds).
> Has anyone else seen such behavior before?
> 
> Greetings from Austria (no Kangaroos here)
> Thomas Rupp
> Vorarlberger Illwerke AG
> MAIL:   [EMAIL PROTECTED]
> TEL:++43/5574/4991-251
> FAX:++43/5574/4991-820-8251
> 
> 
> 
> --
> This e-mail was checked for viruses.
>
> Vorarlberger Illwerke AG
> --
> 
> 
> 
> The views expressed in this message are those of the individual sender and
> do not necessarily represent the opinion of Southern Farm Bureau Life
> Insurance Company.  Although this E-mail and any attachments are believed
> to
> be free of any virus or other defect that might affect any computer system
> into which it is received and opened, it is the responsibility of the
> recipient to ensure that it is free of viruses or other potentially
> hazardous components, and no responsibility is accepted by Southern Farm
> Bureau Life Insurance Company for damage arising in any way from its use.
> 





Re: Exclude dir of archive backups

2002-08-01 Thread Rushforth, Tim

There is no exclude.dir for archives.  There is exclude.archive for files
though.

So you would do something like:

Exclude.archive e:\www\...\*
Exclude.archive e:\xxx\...\*
Exclude.archive f:\pagefile.sys

Tim Rushforth
City of Winnipeg

-Original Message-
From: Christian Astuni [mailto:[EMAIL PROTECTED]]
Sent: August 1, 2002 9:56 AM
To: [EMAIL PROTECTED]
Subject: Exclude dir of archive backups

Hi ...
I have a question ... I need to perform a full archive backup every
weekend, but only of some directories.
I put options like these in the dsm.opt file:

Domain  e: f:
exclude.dir e:\www
exclude.dir e:\
exclude.file f:\pagefile.sys

but they only work for incremental backups.
Does anyone have any idea how to exclude directories for archive backups?
Thanks for your help!

Best Regards.
Christian Astuni



Exclude dir of archive backups

2002-08-01 Thread Christian Astuni

Hi ...
I have a question ... I need to perform a full archive backup every
weekend, but only of some directories.
I put options like these in the dsm.opt file:

Domain  e: f:
exclude.dir e:\www
exclude.dir e:\
exclude.file f:\pagefile.sys

but they only work for incremental backups.
Does anyone have any idea how to exclude directories for archive backups?
Thanks for your help!

Best Regards.
Christian Astuni



Restoring to raw devices

2002-08-01 Thread Jason Schram

Hello,

   Are there any special concerns or procedures that are needed to restore
to a raw device?



Thank-You in advance
Jason Schram
Turning Stone Casino Resort
Systems Technician



multiple clients to a single drive

2002-08-01 Thread Rob Schroeder

When my backups are running, there are a bunch of concurrent sessions
going.  I have about 55 servers sharing four tape drives.  Although there
might be 10 sessions going at once, only 4 are doing any work.  Is there
any way to get multiple backup sessions to write to a single drive
concurrently?  I know that there is excess capacity on throughput to the
drives and to the TSM server, so I know the infrastructure could handle a
larger load.  Can TSM?

Rob Schroeder
Famous Footwear



Re: question on configuring large NT client for optimum restore processing

2002-08-01 Thread Kauffman, Tom

This isn't a Netappliance NAS -- this is two IBM Netfinity boxes running
Win2K Server with the TSM 4.2 client installed, running as an MS cluster,
with fiber-attached FAStT disk. So I don't have the double load issue. I'm
pushing the NT staff for the multiple drive letter option, but current
planning is that IF we *have* to do multiple drives, then these 800K files
will be on one of the drives. The NT group would rather set the entire
1.2 TB up as one drive . . .

One of my thoughts was to assign several management classes with the
include-exclude list, then define the copygroups as co-located but use the
same storage pools. Has anyone out there tried this stunt? I mean, the
example for multiple restores for NT shows c:\users, c:\data1, and c:\data2
and then later states that 'if data2 is on a different tape' -- so I'm
trying to force the issue.

Tom Kauffman
NIBCO, Inc

> -Original Message-
> From: Seay, Paul [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 3:33 AM
> To: [EMAIL PROTECTED]
> Subject: Re: question on configuring large NT client for optimum restore
> processing
>
>
> First thing is NAS puts double load on the network.  If you
> have the NAS box
> on a different interface with Gigabit connectivity that would
> help.  Also
> Gigabit to your TSM server.
>
> Divide the data up into as many drive letters (filesystems)
> as you can so
> that you can run simultaneous backups/restores as multiple threads.
>
> Set the RESOURCEUTILIZATION option as high as you can tolerate.
>
> You really needed to go for SAN on this.  The IP stack
> overhead for CIFS is
> very high which is how the NAS box communicates with the NT server.
>
> There may be some other options to use the NDMP capabilities
> of TSM check
> into that.  It may do what you want and improve performance
> dramatically.
> Especially, if the NAS box has SAN connectivity capabilities
> for backup and
> recovery.
>
> Paul D. Seay, Jr.
> Technical Specialist
> Naptheon Inc.
> 757-688-8180
>
>
> -Original Message-
> From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 31, 2002 4:05 PM
> To: [EMAIL PROTECTED]
> Subject: question on configuring large NT client for optimum restore
> processing
>
>
> What's the preferred method of configuring a large NT
> fileserver for optimum
> data recovery speed?
>
> Can I do something with co-location at the filesystem (what
> IS a filesystem
> in NT/2000?) level?
>
> We're bringing in an IBM NAS to replace four existing NT
> servers and our
> recovery time for the existing environment stinks. The main
> server currently
> has something over 800,000 files using 167 GB (current box
> actually uses NT
> file compression, so it's showing as 80 GB on NT). We had to
> do a recovery
> last year (raid array died) and it ran to 40+ hours; I'm getting the
> feedback that over 20 hours will be un-acceptable.
>
> The TSM server and the client code are relatively recent 4.2
> versions and
> will be staying at 4.2 for the rest of this year (so any neat
> features of
> TSM 5 would be nice to know but otherwise unuseable :-)
>
> To add to the fun and games, this will be an MS cluster
> environment. With
> 1.2 TB of disk on it. We do have a couple of weeks to play
> around and try
> things out before getting serious. One advantage to the MSCS
> is that disk
> compression is not allowed, so that should speed things up a
> bit on the
> restore.
>
> Tom Kauffman
> NIBCO, Inc
>



Compare TSM/Legato and Veritas

2002-08-01 Thread Ramnarayan, Sean A [EDS]

Hi

I have been asked by management to do a report/difference on the technical 
specifications (very brief) of these products :
  Legato
  TSM
  Veritas

Has anyone done this before? I just need a guideline on where to go on the
Internet to find this info.


Thks
Sean Ramnarayan
EDS (South Africa)







Re: Old client versions

2002-08-01 Thread Gianluca Mariani1

They are not supported any longer.

Cordiali saluti
Gianluca Mariani
Tivoli TSM Global Response Team, Roma
Via Sciangai 53, Roma
 phones : +39(0)659664598
   +393351270554 (mobile)
[EMAIL PROTECTED]


   
 
From: Niklas Lundstrom
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Date: 01/08/2002 12:37
Subject: Old client versions
Please respond to: "ADSM: Dist Stor Manager"



Hello

Where can I download clients older than version 3.7? I can't find them
anymore on the Tivoli website.

Regards

Niklas Lundström
Föreningssparbanken IT
08-5859 5164






Re: Help on a TSM bat file for NT

2002-08-01 Thread Joe Pendergast

Here is a batch file from an interesting little batch programming help site
I found:
http://www.ericphelps.com/batch/nt/index.htm

:: Here's NT code that puts something like
:: 03/17/2002
:: into the DATE environment variable
@echo off
echo.|date|find "current" >t#e.bat
echo set date=%%5> the.bat
call t#e.bat
del t?e.bat > nul
:: Thanks to Joseph P. Hayes
:: [EMAIL PROTECTED]


:: Here's a one-line NT command (but a LONG line!)
:: one that splits all the date parts out, putting
:: each one into it's own variable (weekday, date, month, year)
for /f "tokens=1-4 delims=/ " %%a in ('date /t') do (set weekday=%%a& set
date=%%b& set month=%%c& set year=%%d)





Rob Hefty <[EMAIL PROTECTED]> on 07/31/2002 01:15:19 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

To:   [EMAIL PROTECTED]
cc:(bcc: Joseph Pendergast/Corona/Watson)

Subject:  Help on a TSM bat file for NT



Hello All,

I am attempting to create a bat file that will automate an archive for some
data on the NT/Win2k platform.  What I am having difficulty with is passing
the date variable into the description.  So far I have:

D:
:SETVAR
set date/t=dates
 Cd D:\Program Files\tivoli\tsm\baclient\
dsmc Archive -archmc=36-MONTH-ARCHIVE -desc="Freight Archives for Monthend
Prior to: %dates%" D:\temp\test\*
pause 5

When it completes, it runs the archive fine, only the description is lacking
a date stamp.  Any suggestions would be appreciated.


Thanks,
Rob Hefty
IS Operations
Lab Safety Supply



Re: Gigabit Ethernet Problem

2002-08-01 Thread Karel Bos

Yep, our network people have had many sleepless nights due to this
behaviour. It is not only on Gigabit, but on all TCP/IP connections.

-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 13:52
To: [EMAIL PROTECTED]
Subject: Gigabit Ethernet Problem


Hi TSM-ers,

we have narrowed down our performance problem a bit and seen
(with a sniffer) that the TCP/IP acknowledgements from our TSM
server (in fact a 3466 (NSM)) sometimes take forever (200 to
more than 1000 seconds).
Has anyone else seen such behavior before?

Greetings from Austria (no Kangaroos here)
Thomas Rupp
Vorarlberger Illwerke AG
MAIL:   [EMAIL PROTECTED]
TEL:++43/5574/4991-251
FAX:++43/5574/4991-820-8251







Correction - Journal filling up and causing backups to fail? (make that recovery log)

2002-08-01 Thread Coats, Jack

> CORRECTION: I got the journal (read: recovery log) info below and the DB
> info reversed ... sorry
>
>
>
> Config is TSM 4.1.3 on NT 4 service pack 6, with IBM3583 library and 2 LTO
> drives.
>
> I am getting this message in the dsmerror.log on SOME of my clients:
>
> [snip]
> 07/31/2002 14:20:31 NtliFlush: Error 10 sending data using t_snd.
> 07/31/2002 14:20:31 sessFlushVerb: Error from buffer flush, rc: -155
> 07/31/2002 14:20:31 NtliFlush: Error 10 sending data using t_snd.
> 07/31/2002 21:08:35 NtliFlush: Error 10 sending data using t_snd.
> 07/31/2002 21:08:35 sessRecvVerb: Error -155 from call to 'readRtn'.
> 07/31/2002 21:08:35 NtliFlush: Error 10 sending data using t_snd.
> 07/31/2002 21:08:35 ANS1809E Session is lost; initializing session reopen
> procedure.
> 07/31/2002 21:08:36 ANS1809E Session is lost; initializing session reopen
> procedure.
> 07/31/2002 21:08:51 ANS1811S TSM session could not be reestablished.
> 07/31/2002 21:26:03 NtliFlush: Error 4 sending data using t_snd.
> 07/31/2002 21:26:03 sessSendVerb: Error sending Verb, rc: -155
> 07/31/2002 21:26:03 ANS1809E Session is lost; initializing session reopen
> procedure.
> 07/31/2002 21:26:06 cuSignOnResp: Server rejected session; result code: 65
> 07/31/2002 21:26:06 sessOpen: Error 65 receiving SignOnResp verb from
> server
> 07/31/2002 21:26:06 ANS1316E The server does not have enough recovery log
> space to
> continue the current operation
>
> 07/31/2002 21:26:06 ANS1512E Scheduled event 'DAILY_INCREMENTAL' failed.
> Return code = 4.
> 07/31/2002 21:26:06 cuSignOnResp: Server rejected session; result code: 65
> 07/31/2002 21:26:06 sessOpen: Error 65 receiving SignOnResp verb from
> server
> [snip]
>
> My morning report shows my DB and Journals like:
> [Database]  [Correction: Make that Recovery Log]
> Available Assigned Maximum   Maximum   PageTotal Used  Pct
> Max.
> Space Capacity Extension Reduction SizeUsablePages Util
> Pct
> (MB)  (MB) (MB)  (MB)  (bytes) Pages
> Util
> -  - - --- - - -
> -
> 364   364  0 360   4,096   92,672232   0.3
> 100.6
>
> [Journal] [Correction: Make that DB]
> Available Assigned Maximum   Maximum   PageTotal Used  Pct
> Max.
> Space Capacity Extension Reduction SizeUsablePages Util
> Pct
> (MB)  (MB) (MB)  (MB)  (bytes) Pages
> Util
> -  - - --- - - -
> -
> 8,948 8,9480 2,420 4,096   2,290,688 712,344   31.1
> 31.3
>
> This report says the database was 'overfull', but I have been expiring a
> lot (removed a file system with lots of
> little files) so its use is way down.
>
> Suggestions?
>
>
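[Editor's note] The ANS1316E failures above trace back to the recovery log hitting its cap (note the Max Pct Util of 100.6 in the first report). A minimal monitoring sketch, assuming the column layout shown in that report; verify the field positions against your own server's output before relying on it:

```python
# Parse the data row of a "q log"-style report and warn when the peak
# utilization (Max Pct Util, the last column) approaches 100%.
def parse_util(report: str):
    """Return (pct_util, max_pct_util) from the last row of numbers."""
    for line in reversed(report.strip().splitlines()):
        fields = line.replace(",", "").split()
        if len(fields) >= 9:
            try:
                values = [float(f) for f in fields]
            except ValueError:
                continue  # header or separator row, not data
            return values[-2], values[-1]
    raise ValueError("no data row found in report")

# Sample row taken from the report in the message above.
REPORT = """\
Available Assigned Maximum   Maximum   Page    Total   Used  Pct  Max.
      364      364        0      360    4,096   92,672   232  0.3 100.6
"""

pct, max_pct = parse_util(REPORT)
if max_pct >= 90:
    print(f"WARNING: recovery log peaked at {max_pct}% of assigned capacity")
```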



Re: Backup of Win2K Fileserver

2002-08-01 Thread Guillaume Gilbert

Hi

Thanks for the tips. I back up 150 other clients (mostly NT, with some AIX and
TDPs) without much performance trouble. The server is AIX. The database is on a
Hitachi 7700E (15 GB 1rpm disks - RAID5) with dual-path Fibre Channel. I think
it's the Compaq SAN that's the problem; my group didn't have any say in how it was
configured. I did an FTP test yesterday (using the dsmsched.log, which is now 100 MB...)
and it was very slow, so the culprit is probably on the client side. The two Ethernet (100 Mb/s)
cards on my server run at their maximum (12 MB/sec each) when we do TDP Notes backups, so
they're working OK. There is a Token-Ring card on the server, so the backup is
probably going through that instead. I'll check with the sysadmins whether the CPU is used over
50%; if so, I'll turn off compression.

Guillaume Gilbert
CGI Canada




Salak Juraj <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 2002-08-01 03:32:39

Please reply to:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:   "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:   Re: Backup of Win2K Fileserver

Hi,

identify your bottleneck.
It is not obvious what it is.
Your backup times were sub-standard even a couple years ago.

Some tips:
 - check using FTP your network
 - check the performance of your SAN drives
- copy a part of your NT drive either into NUL or on another SAN
drive, how long will it take?,
- look at the drive lights when backing up; at this speed they
should only occasionally light
- look at the CPU on your file server during the backup; at this
speed it should be well under 10%
- if CPU is near 100%, disable compressing
 - check your server, even with 4 CPUs the database performance could be
the bottleneck
- copy a part of your NT file system to a drive local to the TSM server
and back it up from there,
how long will it take?
If long, the database might be the bottleneck,
  in this case add spindles to your server and spread the database among
them
and check whether the physical and logical drives on your scsi
controller have caching on
(WRITE-BACK, not WRITE-THROUGH)

 - RESOURceutilization 10 -- if you have more NT drives to back up, using
this setting they will be backed up in parallel;
 maybe this kills your resources

 - TCPWindowsize 63 -- setting it larger could improve network speed if this
is the bottleneck, have a look at older postings

 - etc.

good luck

Juraj




> -Original Message-
> From: Steve Harris [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 1:19 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Backup of Win2`K Fileserver
>
>
> Hi Guillaume
>
> RENAME FILESPACE may help here
>
> Rename the new filespace to some "temporary" name
> Rename the old filespace to the new name
> Run incremental backup
>
> After a suitable period, delete the "temporary" filespace.
>
> Regards
>
> Steve Harris
> AIX and TSM Admin
> Queensland Health, Brisbane Australia
>
> >>> [EMAIL PROTECTED] 01/08/2002 6:30:35 >>>
> Hello
>
> I have a BIG performance problem with the backup of a win2k
> fileserver. It used to be pretty long before but it was
> managable. But now the sysadmins put it on a compaq
> storageworks SAN. By doing that they of course changed the
> drive letter. Now it has to do a full backup of that drive.
> The old drive had  1,173,414 files and 120 GB  of
> data according to q occ. We compress at the client. We have
> backup retention set to 2-1-NL-30. The backup had been
> running for 2 weeks!!! when we cancelled it to try to
> tweak certain options in dsm.opt. The client is at 4.2.1.21
> and the server is at 4.1.3 (4.2.2.7 in a few weeks). Network
> is 100 mb. I know that journal backups will help
> but as long as I don't get a full incremental in it doesn't
> do me any good. Some of the settings in dsm.opt :
>
> TCPWindowsize 63
> TxnByteLimit 256000
> TCPWindowsize 63
> compressalways yes
> RESOURceutilization 10
> CHAngingretries 2
>
> The network card is set to full duplex. I wonder if an FTP
> test with show some Gremlins in the network...?? Will try it..
>
> I'm certain the server is ok. It's a F80 with 4 processors
> and 1.5 GB of RAM, though I can't seem to get the cache hit %
> above 98. my bufpoolsize is 524288. DB is 22 GB
> 73% utilized.
>
> I'm really stumped and I would appreciate any help
>
> Thanks
>
> Guillaume Gilbert
>
>
>
> **
> This e-mail, including any attachments sent with it, is confidential
> and for the sole use of the intended recipient(s). This
> confidentiality
> is not waived or lost if you receive it and you are not the intended
> recipient(s), or if it is transmitted/ received in error.
>
> Any unauthorised use, alteration, disclosure, distribution or review
> of this e-mail is prohibited.  It may be subject to a
> statutory duty of
> confidentiality if it relates to health service matters.
>
> If you a

Re: Gigabit Ethernet Problem

2002-08-01 Thread Ryan, Phil

What OS are you running?

-Original Message-
From: Rupp Thomas (Illwerke) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 01, 2002 6:52 AM
To: [EMAIL PROTECTED]
Subject: Gigabit Ethernet Problem


Hi TSM-ers,

we have narrowed down our performance problem a bit and seen
(with a sniffer) that the TCP/IP Acknowledgements from our TSM
Server (in fact a 3466 NSM) sometimes take forever (200 to
more than 1000 seconds).
Has anyone else seen such a behavior before?

Greetings from Austria (no Kangaroos here)
Thomas Rupp
Vorarlberger Illwerke AG
MAIL:   [EMAIL PROTECTED]
TEL:++43/5574/4991-251
FAX:++43/5574/4991-820-8251







The views expressed in this message are those of the individual sender and
do not necessarily represent the opinion of Southern Farm Bureau Life
Insurance Company.  Although this E-mail and any attachments are believed to
be free of any virus or other defect that might affect any computer system
into which it is received and opened, it is the responsibility of the
recipient to ensure that it is free of viruses or other potentially
hazardous components, and no responsibility is accepted by Southern Farm
Bureau Life Insurance Company for damage arising in any way from its use.




Re: Backup of Win2K Fileserver

2002-08-01 Thread William F. Colwell

Guillaume,

Set the serialization in the backup copy group to dynamic and changingretries to
0.  Your txnbytelimit allows transactions as large as 250 MB; you don't
want to let anything cause a termination and rollback of such a large transaction,
which is why, I assume, you have compressalways = yes.

On subsequent incremental backups you can use a different serialization.

Hope this helps,

Bill

At 04:30 PM 7/31/2002, you wrote:
>Hello
>
>I have a BIG performance problem with the backup of a win2k fileserver. It used to be 
>pretty long before but it was managable. But now the sysadmins put it on a compaq
>storageworks SAN. By doing that they of course changed the drive letter. Now it has 
>to do a full backup of that drive. The old drive had  1,173,414 files and 120 GB  of
>data according to q occ. We compress at the client. We have backup retention set to 
>2-1-NL-30. The backup had been running for 2 weeks!!! when we cancelled it to try to
>tweak certain options in dsm.opt. The client is at 4.2.1.21 and the server is at 
>4.1.3 (4.2.2.7 in a few weeks). Network is 100 mb. I know that journal backups will 
>help
>but as long as I don't get a full incremental in it doesn't do me any good. Some of 
>the settings in dsm.opt :
>
>TCPWindowsize 63
>TxnByteLimit 256000
>TCPWindowsize 63
>compressalways yes
>RESOURceutilization 10
>CHAngingretries 2
>
>The network card is set to full duplex. I wonder if an FTP test with show some 
>Gremlins in the network...?? Will try it..
>
>I'm certain the server is ok. It's a F80 with 4 processors and 1.5 GB of RAM, though 
>I can't seem to get the cache hit % above 98. my bufpoolsize is 524288. DB is 22 GB
>73% utilized.
>
>I'm really stumped and I would appreciate any help
>
>Thanks
>
>Guillaume Gilbert

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



unsubscribe

2002-08-01 Thread Cooper, Debbie

Blue Cross Blue Shield of Florida, Inc., and its subsidiary and
affiliate companies are not responsible for errors or omissions in this e-mail 
message. Any personal comments made in this e-mail do not reflect the views of Blue 
Cross Blue Shield of Florida, Inc.



Re: Help with select for non-mirrored database volumes

2002-08-01 Thread Ran Harel

Hi.
Tricky...
I don't know why the not like is not working, but you should try:
select copy1_name,avail_space_mb from dbvolumes where copy2_status is null

It will do the job.

Ran.

-Original Message-
From: Nicholas Cassimatis [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 26, 2002 12:09 AM
To: [EMAIL PROTECTED]
Subject: Help with select for non-mirrored database volumes


I'm on the right path, I think.  It's all right there in the dbvolumes, I
just can't get the info to come out right.  I want to find out how large
the unmirrored volumes are, so I can go get the space to mirror them.  I've
tried:

tsm: SERVER>select copy1_name,avail_space_mb from dbvolumes where
copy2_name not like '/%'

tsm: SERVER>select copy1_name,avail_space_mb from dbvolumes where
copy2_status not like 'Sync%'(can't do the - 'd -  in there, not sure
how to get past that)

tsm: SERVER>select copy1_name,avail_space_mb from dbvolumes where
copy2_name=''

Plus a few variations on those themes, and all I get is the proverbial:

ANR2034E SELECT: No match found using this criteria.
ANS8001I Return code 11.

(And yes, I have unmirrored DB volumes on this machine)

I'm missing some fundamental thing here (along with my SQL guru, who is on
vacation...).  Can someone point me in the right direction?  Thanks!

Nick Cassimatis
[EMAIL PROTECTED]

Today is the tomorrow of yesterday.
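
[Editor's note] The reason Nick's NOT LIKE selects matched nothing: on an unmirrored volume, COPY2_NAME is NULL, and in SQL any comparison against NULL (including NOT LIKE) evaluates to unknown rather than true, so the WHERE clause discards the row. That is why Ran's IS NULL form works. A small demonstration using SQLite in place of TSM's SQL engine (table contents are invented for the demo):

```python
import sqlite3

# Demonstrate SQL three-valued logic: comparisons against NULL yield
# UNKNOWN, and a WHERE clause only keeps rows that evaluate to TRUE.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dbvolumes (copy1_name TEXT, copy2_name TEXT)")
con.executemany(
    "INSERT INTO dbvolumes VALUES (?, ?)",
    [("/tsm/db1", "/tsm/db1cp"),   # mirrored volume
     ("/tsm/db2", None)],          # unmirrored: copy2 is NULL
)

# NOT LIKE never matches the NULL row...
not_like = con.execute(
    "SELECT copy1_name FROM dbvolumes WHERE copy2_name NOT LIKE '/%'"
).fetchall()

# ...but IS NULL finds it.
is_null = con.execute(
    "SELECT copy1_name FROM dbvolumes WHERE copy2_name IS NULL"
).fetchall()

print(not_like)  # []
print(is_null)   # [('/tsm/db2',)]
```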



Re: Offsite Archives & DRM

2002-08-01 Thread Steve Hnath

Gordon,

DRM will not vault primary pool or backupset media.  An alternative product,
AutoVault, has primary pool and backupset vaulting for precisely what you
are trying to accomplish.  For specific situations, it is a very efficient
way to retain data long-term without the need to duplicate the data or fill
your tape library.  The one risk is that you will not have a second copy of
the data to recover from media failure.

Using AutoVault, you would direct your archive data to a separate primary
pool hierarchy.  It sounds like all your TSM data is archives, so you may
not have a need for an incremental backup primary pool hierarchy and a
copypool.  Configure AutoVault to vault your archive tape primary pool.  It
will eject the media from your library and generate a report of what media
to take offsite.  It will also generate a report of empty tapes to return
from the vault.

Many of our customers are effectively using this technique for archive data,
backupsets, and TDP (database backup) data.  More information is available
at:  http://www.coderelief.com/tdp.htm

Regards,

Steve Hnath
Code Relief LLC
www.coderelief.com

-Original Message-
Re: Offsite Archives & DRM
 Forum:   ADSM.ORG - ADSM / TSM Mailing List Archive
 Date:  Jul 25, 14:25
 From:  Seay, Paul <[EMAIL PROTECTED]>

This is certainly an alternative solution that should be considered, if not as
the first choice then at least as the second.  I had fixed on the idea that he
only wanted to send primary tapes offsite.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Jolliff, Dale [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 25, 2002 8:05 AM
To: [EMAIL PROTECTED]
Subject: Re: Offsite Archives & DRM


Why not just assign an overflow location then do a move media? You have to
keep track of the tape movement manually, but at least the checkout part is
simplified.


-Original Message-
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 25, 2002 12:30 AM
To: [EMAIL PROTECTED]
Subject: Re: Offsite Archives & DRM


The answer to your question is a qualified no because I cannot think of a
way to safely do it.

The concept of a copy pool is to be able to protect the data in two
locations, and DRM is the vehicle to support that.  What you are really
trying to do is an archive using primary storage pool tapes, which is not what
TSM was designed to do.

As I see it you only have one choice here.  You will have to roll your own
tape movement thing.  With primary tapes you have to mark them to
unavailable using an UPDATE VOLUME vv ACCESS=UNAVAILABLE.  You can
probably create a SELECT statement that meets your criteria to create the
command as follows:

Select 'UPDATE VOLUME ', volume_name, 'ACCESS=UNAVAILABLE' from volumes
where stgpool_name='TAPEARCH' and volume_name in (select volume_name from
libvolumes) > /mark.cmd

This gives you all the tapes in your library that are in the storage pool.

Run this output as a macro after stripping off the first 2 lines.

Then do another select:

Select 'CHECKOUT LIBVOLUME ', library_name, volume_name, 'other appropriate
parameters' from libvolumes where  volume_name in (select volume_name from
volumes where access='UNAVAILABLE') > /eject.cmd

Run this the same way as above.

At this point the tapes should be checked out of your library and TSM should
not try to do anything with them.

Now, I have not tried this so you will have to do some significant testing.
There may be something I did not consider.

The real negative piece here is you have to manage what expires and is ready
to be put back in the library.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180
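
[Editor's note] Paul's two-step flow lends itself to scripting. A hedged sketch of the "strip the first 2 lines and run it as a macro" step (Python; the header text, volume names, and the dsmadmc invocation in the comment are illustrative, not taken from a live server):

```python
# Sketch of the post-processing step: take the redirected SELECT output
# (e.g. /mark.cmd), strip the leading header lines, and keep only the
# generated UPDATE VOLUME commands so the file can run as a macro.
def to_macro(select_output: str, header_lines: int = 2) -> str:
    lines = select_output.splitlines()[header_lines:]
    return "\n".join(l for l in lines if l.strip().startswith("UPDATE VOLUME"))

# Invented sample of what the redirected SELECT output might look like.
RAW = """\
Unnamed[1]         VOLUME_NAME         Unnamed[3]
------------------ ------------------- ------------------
UPDATE VOLUME  A00001  ACCESS=UNAVAILABLE
UPDATE VOLUME  A00002  ACCESS=UNAVAILABLE
"""

macro = to_macro(RAW)
print(macro)
# The result could then be fed back with something like:
#   dsmadmc -id admin -password secret macro mark.mac
```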


-Original Message-
From: Gordon Woodward [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 24, 2002 2:57 AM
To: [EMAIL PROTECTED]
Subject: Offsite Archives & DRM


This is a follow up to a post I sent through the other day about problems
with offsite tapes we were having and TSM (for NT) not allowing anymore
tapes to be loaded into the library.

I've been reading the TSM Admin Guide today about DRM and copy storage pools
and I think I have found the solution to our problems; however, I would like
to run it by some more knowledgeable people. First I'll give a quick overview of
how our archives are currently working, I should also mention that these are
really 'snapshots' of existing data on our servers and not actually removing
any data from their locations.

Once a month we have a scheduled job execute on our Tivoli server which runs
the command "dsmc archiveetc" on each of our servers. As the server
backs up the data to the Tivoli server it is temporarily stored in a disk
storage pool (ARCHIVE) on our SAN, then as this disk storage pool fills up
the data starts to automatically migrate across to another storage pool of
type sequential access (called TAPEARCH). The next morning a scheduled job
runs which migrates the remaining data held within the disk storage pool

Space reclamation of copypool

2002-08-01 Thread Halvorsen Geirr Gulbrand

Hello everyone,
I have the following problem with Space Reclamation of my copypool.
To start reclamation I have a script running UPDATE STG COPYPOOL RECLAIM=50
Three hours later, I run another script to stop reclamation - UPDATE STG
COPYPOOL RECLAIM=100
but reclamation never stops. This affects all of my daily operations
(migration, backup stgp..) because the
reclamation process uses both drives.
Question is, why does space reclamation not stop after updating?
Is there a way of canceling the process (by TSM script)?

Best regards
Geirr G. Halvorsen
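
[Editor's note] Setting the reclamation threshold back to 100 only prevents new reclamation from kicking off; a process that is already running keeps going, so it generally has to be cancelled explicitly with CANCEL PROCESS. A sketch that builds the cancel commands from "q process" output (Python; the sample layout and the process description text are assumptions to verify against your server):

```python
import re

# Extract process numbers of running space reclamation from a
# "q process"-style report and build CANCEL PROCESS commands.
# The sample output format below is invented - check it against
# your server's actual report before relying on this.
def cancel_commands(q_process_output: str):
    cmds = []
    for line in q_process_output.splitlines():
        m = re.match(r"\s*(\d+)\s+Space Reclamation", line)
        if m:
            cmds.append(f"CANCEL PROCESS {m.group(1)}")
    return cmds

SAMPLE = """\
 Process  Process Description      Status
  Number
--------  -----------------------  -------------------
      42  Space Reclamation        Volume A00017 ...
      43  Migration                Volume D00003 ...
"""

print(cancel_commands(SAMPLE))  # ['CANCEL PROCESS 42']
```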



Old client versions

2002-08-01 Thread Niklas Lundstrom

Hello
 
Where can I download clients older than ver 3.7? I can't find them anymore
on the Tivoli website
 
Regards
 
Niklas Lundström
Föreningssparbanken IT
08-5859 5164
 



Gigabit Ethernet Problem

2002-08-01 Thread Rupp Thomas (Illwerke)

Hi TSM-ers,

we have narrowed down our performance problem a bit and seen
(with a sniffer) that the TCP/IP Acknowledgements from our TSM
Server (in fact a 3466 NSM) sometimes take forever (200 to
more than 1000 seconds).
Has anyone else seen such a behavior before?

Greetings from Austria (no Kangaroos here)
Thomas Rupp
Vorarlberger Illwerke AG
MAIL:   [EMAIL PROTECTED]
TEL:++43/5574/4991-251
FAX:++43/5574/4991-820-8251






Re: question on configuring large NT client for optimum restore processing

2002-08-01 Thread Rupp Thomas (Illwerke)

Hi Tom,

I'm in a similar situation and I have read about (but not yet used) DFS on
W2K.
Each DFS root should give you a separate filespace with all the advantages
described
below.

Question: Does anyone already use DFS?

Kind regards
Thomas Rupp
Vorarlberger Illwerke AG
MAIL:   [EMAIL PROTECTED]
TEL:++43/5574/4991-251
FAX:++43/5574/4991-820-8251


> -Original Message-
> From:  Karel Bos [SMTP:[EMAIL PROTECTED]]
> Sent:  Thursday, 01 August 2002 09:25
> To:   [EMAIL PROTECTED]
> Subject:  Re: question on configuring large NT client for optimum
> restore processing
> 
> Hi,
> 
> Configure as many filespaces (drive letters) as you can get, creating a
> nice way to run multiple (manual) restore sessions. Configure multiple
> storage pools and spread the backup data over them. If you use LTO drives,
> as we do, don't use collocation (my experience), but make sure the data is
> on not too many tapes (depends on the number of tape drives, mount times,
> and type of drive). Also keep the performance of the TSM server at a high
> level (processing power, memory).
> 
> Gr.
> 
> Karel Bos





Re: Backing Up Dir's

2002-08-01 Thread Cahill, Ricky

I must say I'd be amazed if that were the case; it's not just a few files
it's skipping, it's thousands. I've included a snippet from the server error
log.
From what I understand, TSM is attempting to restore a file to a directory
that doesn't yet exist. We have not yet restored a complete server to see
what happens at the end, but I've been told by IBM that TSM will not retry
the files at the end of the restore, so I cannot see how they would be
restored.

Ignore the dates, just noticed we hadn't set the date right on the server we
were playing with.

Thanks in advance.

.Rikk

19-08-2001 23:51:57 ANS1905E NetWare SMS error processing
'VOL1:/USERS/EverettV/WORK BITS/HSE Details/HS
10/form_hs10_-_annual_health_&_safety_training_and_competency_review
(EQUINET) 20-02-2002.doc':
(TSA500.NLM 5.5 315) The program's attempt to scan failed, probably because
an invalid path was specified.
19-08-2001 23:51:57 ANS1905E NetWare SMS error processing
'VOL1:/USERS/EverettV/WORK BITS/HSE Details/HS 10/RMS HS10 Manager Annual
Review 2001.doc':
(TSA500.NLM 5.5 315) The program's attempt to scan failed, probably because
an invalid path was specified.
19-08-2001 23:51:57 ANS1905E NetWare SMS error processing
'VOL1:/USERS/EverettV/WORK BITS/HSE Details/HS 10/RMS HS10B Manager Annual
Review 2001.doc':
(TSA500.NLM 5.5 315) The program's attempt to scan failed, probably because
an invalid path was specified.
--

Rikk,

Are you observing this behavior?  When NetWare is restoring files which do
not have a supporting directory structure, a temporary directory entry
(i.e., no trustee information) is created by Novell's backup API; when the
directory entries do come from the server at a later time they are
restored over the  temporary directory entry; the end result is that the
files and directories are restored fully and correctly regardless of their
ordering from the server.  If you are not seeing this behavior, I would
suggest talking to the service organization.

Thanks,
Jim

J.P. (Jim) Smith
TSM Client Development

>>>

-Original Message-
From: Talafous, John G. [mailto:[EMAIL PROTECTED]]
Sent: 31 July 2002 13:51
To: [EMAIL PROTECTED]
Subject: Re: Backing Up Dir's


Along this thread...  Are there any guidelines for retention and number of
copies in the DIRMC management class? I've looked and looked and it seems
the best scenario is to clone your management class with the longest
retention. But, is this correct?

John G. Talafous  IS Technical Principal
The Timken CompanyGlobal Software Support
P.O. Box 6927 Data Management
1835 Dueber Ave. S.W. Phone: (330)-471-3390
Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
[EMAIL PROTECTED]   http://www.timken.com


**
This message and any attachments are intended for the
individual or entity named above. If you are not the intended
recipient, please do not forward, copy, print, use or disclose this
communication to others; also please notify the sender by
replying to this message, and then delete it from your system.

The Timken Company
**



Equitas Limited, 33 St Mary Axe, London EC3A 8LL, UK
NOTICE: This message is intended only for use by the named addressee
and may contain privileged and/or confidential information.  If you are
not the named addressee you should not disseminate, copy or take any
action in reliance on it.  If you have received this message in error
please notify [EMAIL PROTECTED] and delete the message and any
attachments accompanying it immediately.

Equitas reserve the right to monitor and/or record emails, (including the
contents thereof) sent and received via its network for any lawful business
purpose to the extent permitted by applicable law

Registered in England: Registered no. 3173352 Registered address above




Re: Backup of Win2K Fileserver

2002-08-01 Thread Salak Juraj

Hi,

identify your bottleneck.
It is not obvious what it is.
Your backup times were sub-standard even a couple years ago.

Some tips:
 - check using FTP your network
 - check the performance of your SAN drives
- copy a part of your NT drive either into NUL or on another SAN
drive, how long will it take?,
- look at the drive lights when backing up; at this speed they
should only occasionally light
- look at the CPU on your file server during the backup; at this
speed it should be well under 10%
- if CPU is near 100%, disable compressing
 - check your server, even with 4 CPUs the database performance could be
the bottleneck
- copy a part of your NT file system to a drive local to the TSM server
and back it up from there,
how long will it take? 
If long, the database might be the bottleneck,
  in this case add spindles to your server and spread the database among
them
and check whether the physical and logical drives on your scsi
controller have caching on
(WRITE-BACK, not WRITE-THROUGH)

 - RESOURceutilization 10 -- if you have more NT drives to back up, using
this setting they will be backed up in parallel;
 maybe this kills your resources

 - TCPWindowsize 63 -- setting it larger could improve network speed if this
is the bottleneck, have a look at older postings

 - etc.

good luck 

Juraj




> -Original Message-
> From: Steve Harris [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, August 01, 2002 1:19 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Backup of Win2`K Fileserver
> 
> 
> Hi Guillaume
> 
> RENAME FILESPACE may help here
> 
> Rename the new filespace to some "temporary" name
> Rename the old filespace to the new name
> Run incremental backup
> 
> After a suitable period, delete the "temporary" filespace.
> 
> Regards
> 
> Steve Harris
> AIX and TSM Admin
> Queensland Health, Brisbane Australia
> 
> >>> [EMAIL PROTECTED] 01/08/2002 6:30:35 >>>
> Hello
> 
> I have a BIG performance problem with the backup of a win2k 
> fileserver. It used to be pretty long before but it was 
> managable. But now the sysadmins put it on a compaq
> storageworks SAN. By doing that they of course changed the 
> drive letter. Now it has to do a full backup of that drive. 
> The old drive had  1,173,414 files and 120 GB  of
> data according to q occ. We compress at the client. We have 
> backup retention set to 2-1-NL-30. The backup had been 
> running for 2 weeks!!! when we cancelled it to try to
> tweak certain options in dsm.opt. The client is at 4.2.1.21 
> and the server is at 4.1.3 (4.2.2.7 in a few weeks). Network 
> is 100 mb. I know that journal backups will help
> but as long as I don't get a full incremental in it doesn't 
> do me any good. Some of the settings in dsm.opt :
> 
> TCPWindowsize 63
> TxnByteLimit 256000
> TCPWindowsize 63
> compressalways yes
> RESOURceutilization 10
> CHAngingretries 2
> 
> The network card is set to full duplex. I wonder if an FTP 
> test with show some Gremlins in the network...?? Will try it..
> 
> I'm certain the server is ok. It's a F80 with 4 processors 
> and 1.5 GB of RAM, though I can't seem to get the cache hit % 
> above 98. my bufpoolsize is 524288. DB is 22 GB
> 73% utilized.
> 
> I'm really stumped and I would appreciate any help
> 
> Thanks
> 
> Guillaume Gilbert
> 
> 
> 
> **
> This e-mail, including any attachments sent with it, is confidential 
> and for the sole use of the intended recipient(s). This 
> confidentiality 
> is not waived or lost if you receive it and you are not the intended 
> recipient(s), or if it is transmitted/ received in error.  
> 
> Any unauthorised use, alteration, disclosure, distribution or review 
> of this e-mail is prohibited.  It may be subject to a 
> statutory duty of 
> confidentiality if it relates to health service matters.
> 
> If you are not the intended recipient(s), or if you have 
> received this 
> e-mail in error, you are asked to immediately notify the sender by 
> telephone or by return e-mail.  You should also delete this e-mail 
> message and destroy any hard copies produced.
> **
> 



Re: question on configuring large NT client for optimum restore processing

2002-08-01 Thread Seay, Paul

First thing: NAS puts a double load on the network.  If you have the NAS box
on a different interface with Gigabit connectivity, that would help.  Also run
Gigabit to your TSM server.

Divide the data up into as many drive letters (filesystems) as you can so
that you can run simultaneous backups/restores as multiple threads.

Set RESOURceutilization as high as you can tolerate.

You really needed to go SAN on this.  The IP stack overhead for CIFS, which is
how the NAS box communicates with the NT server, is very high.

There may be some other options using the NDMP capabilities of TSM; check
into that.  It may do what you want and improve performance dramatically,
especially if the NAS box has SAN connectivity capabilities for backup and
recovery.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 31, 2002 4:05 PM
To: [EMAIL PROTECTED]
Subject: question on configuring large NT client for optimum restore
processing


What's the preferred method of configuring a large NT fileserver for optimum
data recovery speed?

Can I do something with co-location at the filesystem (what IS a filesystem
in NT/2000?) level?

We're bringing in an IBM NAS to replace four existing NT servers and our
recovery time for the existing environment stinks. The main server currently
has something over 800,000 files using 167 GB (current box actually uses NT
file compression, so it's showing as 80 GB on NT). We had to do a recovery
last year (raid array died) and it ran to 40+ hours; I'm getting the
feedback that over 20 hours will be un-acceptable.

The TSM server and the client code are relatively recent 4.2 versions and
will be staying at 4.2 for the rest of this year (so any neat features of
TSM 5 would be nice to know but otherwise unuseable :-)

To add to the fun and games, this will be an MS cluster environment. With
1.2 TB of disk on it. We do have a couple of weeks to play around and try
things out before getting serious. One advantage to the MSCS is that disk
compression is not allowed, so that should speed things up a bit on the
restore.

Tom Kauffman
NIBCO, Inc



Re: question on configuring large NT client for optimum restore processing

2002-08-01 Thread Karel Bos

Hi,

Configure as many filespaces (driveletters) as you can get. Thus creating a
nice way for multiple (manual) restore sessions. Configure multiple
storagepools and spread the back-up data over them. If you use LTO drive, we
do, don't use collogation (my experience), but make sure that the date is on
not to many tapes (depents number of tapedrives, mounttimes, sort drive).
Also keep the performance of the TSM server at a high level (processing
power, memory).

Gr.

Karel Bos

-Original Message-
From: Kauffman, Tom [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 31 July 2002 22:05
To: [EMAIL PROTECTED]
Subject: question on configuring large NT client for optimum restore
processing


What's the preferred method of configuring a large NT fileserver for optimum
data recovery speed?

Can I do something with co-location at the filesystem (what IS a filesystem
in NT/2000?) level?

We're bringing in an IBM NAS to replace four existing NT servers and our
recovery time for the existing environment stinks. The main server currently
has something over 800,000 files using 167 GB (current box actually uses NT
file compression, so it's showing as 80 GB on NT). We had to do a recovery
last year (raid array died) and it ran to 40+ hours; I'm getting the
feedback that over 20 hours will be un-acceptable.

The TSM server and the client code are relatively recent 4.2 versions and
will be staying at 4.2 for the rest of this year (so any neat features of
TSM 5 would be nice to know but otherwise unuseable :-)

To add to the fun and games, this will be an MS cluster environment. With
1.2 TB of disk on it. We do have a couple of weeks to play around and try
things out before getting serious. One advantage to the MSCS is that disk
compression is not allowed, so that should speed things up a bit on the
restore.

Tom Kauffman
NIBCO, Inc