Re: Normal # of failures on tape libraries

2005-12-30 Thread Larry Peifer
We have 5 3583's with maximum # of slots and drives in each.  3 have SCSI
drives (LTO1) and 2 have FC drives (LTO2).  3 are multi-path with control
path failover.  3 libraries (2 SCSI and 1 FC) haven't had problems since
installation (3 yrs for SCSI / 1 yr for FC).  I do believe they are OEM
products but don't remember by whom.  Reliability on the other 2 libraries
has been poor, and repairs take weeks because of poor problem diagnostics
from IBM support techs, poor-quality hardware repairs by IBM system
engineers, and DOA replacement components.  In 2006 we plan to replace 4
3583's with 2 3584's because of these problems.




Jim Zajkowski <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager"
12/30/2005 01:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] AW: Normal # of failures on tape libraries

On Dec 28, 2005, at 2:02 PM, Tab Trepagnier wrote:

> IBM 3583

I wonder if the 3583 is an OEM product like the 3582 (which is made
by ADIC).

The IBM 3584, on the other hand, is true blue.  We have had one in
production for about two years and have yet to have any problems
besides media age.

--Jim


Re: 3584 logical library

2005-12-30 Thread Len Boyle
Hello Jim

With the 3584 ALMS optional feature, things become even more interesting.

With ALMS you define volser ranges assigned to each virtual library, and I
believe that you cannot have one range assigned to more than one virtual
library. The library assigns a tape to a virtual library based on its volser.
With the latest library code level the I/O station is virtualized: as tapes
are put into the entry port, the library moves them to storage slots and
assigns them to a virtual I/O station, up to a user-defined maximum. An
interesting side note is that when the library moves tapes from the real I/O
station to the storage slots, it uses both grippers to move 2 tapes at a time.
I believe you can still use the web interface to unassign a tape and assign
it to a different virtual library, but I have not tried this other than in
our first testing.

len

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Jim 
Zajkowski
Sent: Friday, December 30, 2005 4:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] 3584 logical library

On Dec 28, 2005, at 9:35 AM, Aaron Durkee wrote:

> When I eject a tape from one logical library into the exit / entry
> port and want to check it back into the other logical library via the
> exit / entry port I need to go to the web interface for the
> 3584 (or lcd panel) and manually move the tape to the other logical
> partition.

We have the same behavior here; the 3584 virtualizes the IO door between all 
library partitions, so tapes put in there by one partition do not show up in 
another.  Similarly, when you open the IO door the 3584 asks you which 
partition to assign the tapes to, and the tapes in the door at that point only 
show up there.

--Jim


Re: Solaris backup missing /var filesystem

2005-12-30 Thread Richard Sims

On Dec 30, 2005, at 3:37 PM, Richard Rhodes wrote:


Hi Everyone,

We have 2 Solaris v8 nodes that back up to a TSM v5.3 server.  Scheduled
backups are missing the /var filesystem while command-line incrementals
(dsmc inc /var) work.
...


Richard -

The problem of the Solaris client mysteriously skipping file systems
is one which goes back years, unfortunately, and has usually been
caused by the way the client regards the file system when it gathers
stat information from the operating system. This can stem from the
way in which the file system was mounted... In the 2001 era, this was
due to automounter complications. Unexpected file system types are
another potential cause.

An unqualified 'dsmc Incremental' (as a scheduled backup usually
performs) backs up all local file systems, or those explicitly listed
in client Domain statements. In this case, the client is left to its
own discretion to judge whether the file systems really are local, and
will implicitly skip those it believes not to be local. Manually
performing 'dsmc Incremental /whatever' on some directory name is not
equivalent, as that specifies that the object is to be backed up,
whatever it is.

The non-trivial task is to determine what's odd about the /var
instances which the TSM client skips. Doing comparative 'mount -v' or
'mount -p' commands on the good and bad systems may help reveal a
difference. A client trace may be needed to see what's going on. (In
the AIX client, at least, it stats all mounted file systems, not just
the Domain-listed ones as it starts.) I would also look for boot-time
messages about /var in /var/log/messages. And don't overlook
dsmerror.log.
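As a concrete illustration of the comparative-mount approach Richard
suggests: capture the mount table on the good and bad nodes and diff them.
The sample entries below are hypothetical stand-ins, not output from the
poster's systems; in practice each file would come from `mount -p` on the
respective Solaris host.

```shell
#!/bin/sh
# Stand-in mount tables; generate these with `mount -p` on the
# working and failing nodes, then compare.
cat > /tmp/mounts.good <<'EOF'
/dev/md/dsk/d1 - / ufs - no -
/dev/md/dsk/d3 - /var ufs - no -
EOF
cat > /tmp/mounts.bad <<'EOF'
/dev/md/dsk/d1 - / ufs - no -
swap - /var tmpfs - no -
EOF
# Any difference in device, fstype, or options for /var is a lead;
# an unexpected fstype is exactly what can make the client skip it.
diff /tmp/mounts.good /tmp/mounts.bad && echo "no difference" \
  || echo "mount entries differ -- inspect /var's fstype and options"
```

In this contrived example the bad node shows /var as tmpfs, the kind of
anomaly that would explain the client treating it as non-local.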

Richard Sims


Re: AW: Normal # of failures on tape libraries

2005-12-30 Thread Prather, Wanda
Yes, the 3583 is the same as an ADIC 100.
Dell also OEMs the ADIC 100 as a "Powervault" library.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Jim Zajkowski
Sent: Friday, December 30, 2005 4:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: AW: Normal # of failures on tape libraries


On Dec 28, 2005, at 2:02 PM, Tab Trepagnier wrote:

> IBM 3583

I wonder if the 3583 is an OEM product like the 3582 (which is made
by ADIC).

The IBM 3584, on the other hand, is true blue.  We have had one in
production for about two years and have yet to have any problems
besides media age.

--Jim


Re: AW: Normal # of failures on tape libraries

2005-12-30 Thread Jim Zajkowski

On Dec 28, 2005, at 2:02 PM, Tab Trepagnier wrote:


IBM 3583


I wonder if the 3583 is an OEM product like the 3582 (which is made
by ADIC).

The IBM 3584, on the other hand, is true blue.  We have had one in
production for about two years and have yet to have any problems
besides media age.

--Jim


Re: 3584 logical library

2005-12-30 Thread Jim Zajkowski

On Dec 28, 2005, at 9:35 AM, Aaron Durkee wrote:


When I eject a tape from one logical library into the exit / entry
port and want to check it back into the other logical library via
the exit / entry port I need to go to the web interface for the
3584 (or lcd panel) and manually move the tape to the other logical
partition.


We have the same behavior here; the 3584 virtualizes the IO door
between all library partitions, so tapes put in there by one
partition do not show up in another.  Similarly, when you open the IO
door the 3584 asks you which partition to assign the tapes to, and
the tapes in the door at that point only show up there.

--Jim


Solaris backup missing /var filesystem

2005-12-30 Thread Richard Rhodes
Hi Everyone,

We have 2 Solaris v8 nodes that back up to a TSM v5.3 server.  Scheduled
backups are missing the /var filesystem while command-line incrementals
(dsmc inc /var) work.

nodes:Solaris v8
node client:  TSM ba client v5.1.6, then upgraded to v5.3.0 during
testing
TSM server: TSM v5.3 (on AIX)

The sched.log for a scheduled backup says the following.  It worked, but
nothing gets backed up out of /var.

  Querying server for next scheduled event.
  Node Name: ISOC-EBUS01P
  Session established with server TSM2: AIX-RS/6000
Server Version 5, Release 3, Level 1.4
Server date/time: 12/30/05   13:54:37  Last access: 12/30/05
13:53:28
  Next operation scheduled:
  
  Schedule Name: ZZZ-RLRTEST
  Action:Incremental
  Objects:
  Options:
  Server Window Start:   13:53:00 on 12/30/05
  
  Executing scheduled command now.
  Incremental backup of volume '/'
  Incremental backup of volume '/var'
  Incremental backup of volume '/var/run'
  Incremental backup of volume '/opt'
   . . . . deleted . . . .

Nothing in /var EVER shows getting backed up, transferred, expired, etc.
in the sched.log.

If we run cmd line 'dsmc inc /var' it works just fine and always gets some
files.

  isoc-ebus01p:/var/log#dsmc inc /var
  IBM Tivoli Storage Manager
  Command Line Backup/Archive Client Interface
Client Version 5, Release 3, Level 0.12
Client date/time: 12/30/05   15:11:30
  (c) Copyright by IBM Corporation and other(s) 1990, 2005. All Rights
Reserved.
  Node Name: ISOC-EBUS01P
  Session established with server TSM2: AIX-RS/6000
Server Version 5, Release 3, Level 1.4
Server date/time: 12/30/05   15:11:30  Last access: 12/30/05
14:58:41
  Incremental backup of volume '/var'
  ANS1898I * Processed 1,000 files *
  Directory--> 512 /var/ [Sent]
  Normal File-->29,652 /var/adm/lastlog [Sent]
  Normal File-->20,148 /var/adm/sulog [Sent]
  Normal File--> 5,208 /var/adm/utmpx [Sent]
  (and on)

If I delete the /var filespace on the tsm server and run the scheduled
backup, no filespace
is created.  If I then run the cmd line incremental, a filespace is
created.

It's like /var is being excluded, but I can't find anything that's doing
an exclude.  Putting a domain clause in dsm.sys with an explicit /var on
it still doesn't help.
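For reference, an explicit domain entry in a dsm.sys server stanza looks
like the following (the stanza name is taken from the session output in
this message; a sketch, not the poster's actual file):

```
SErvername  TSM2
   COMMMethod      TCPip
   PASSWORDAccess  generate
   DOMain          /  /var  /opt
```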

Here are the client side inclexcl . . . .

  exclude.filetsmsched.log
  exclude.dir /devices
  exclude.dir /cdrom
  exclude.dir /dev
  exclude.dir /proc
  exclude.dir /tmp

Here are the server options . . . .

  exclude /unix/
  exclude.dir /unix
  exclude /.../core
  exclude /tmp/.../*
  exclude /.../Lost+found

I've tried deleting the client-side excludes and running without the
server-side option set, but the same thing occurs.

After upgrading to client v5.3 I ran some previews.

Below is a preview of /var - it ERRORS, saying /var is an invalid spec.

  isoc-ebus01p:/opt/tivoli/tsm/client/ba/bin#dsmc prev backup /var
  Are you sure you want to continue? (Yes (Y)/No (N)) y
  ANS1081E Invalid search file specification '/var' entered
  Preview output has been successfully written to file 'dsmprev.txt'.
  isoc-ebus01p:/opt/tivoli/tsm/client/ba/bin#cat dsmprev.txt
  Preview generated on 12/30/05   14:58:44.
  Name:   Size:   Type:   Status: Pattern:Source: Mgmt Class:
  isoc-ebus01p:/opt/tivoli/tsm/client/ba/bin#

A preview of /var/ returns gobs of files to include . . . .

  isoc-ebus01p:/opt/tivoli/tsm/client/ba/bin#dsmc preview backup /var/
  Are you sure you want to continue? (Yes (Y)/No (N)) y
  Preview output has been successfully written to file 'dsmprev.txt'.
  isoc-ebus01p:/opt/tivoli/tsm/client/ba/bin#
  isoc-ebus01p:/opt/tivoli/tsm/client/ba/bin#more dsmprev.txt
  Preview generated on 12/30/05   15:29:34.
  Name:   Size:   Type:   Status: Pattern:Source: Mgmt Class:
  /var/adm        512 B  Directory  Included  -  -  AIX-SOLARIS
  /var/cron       512 B  Directory  Included  -  -  AIX-SOLARIS
  /var/db2        512 B  Directory  Included  -  -  AIX-SOLARIS
  /var/dmi        512 B  Directory  Included  -  -  AIX-SOLARIS
  /var/docsearch  512 B  Directory  Included  -  -  AIX-SOLARIS
  (and on and on . . . )

It's as if the /var filesystem simply doesn't exist!!

Here is the df for the box

  isoc-ebus01p:/var/log#df
  /  (/dev/md/dsk/d1): 4849358 blocks   458415
files
  /proc  (/

Re: Reducing database volume

2005-12-30 Thread Prather, Wanda
It will just move the data.
I think it will make you do the REDUCE first.
TSM will remove the volume from its list of DBVOLUMES.

Then you will have to delete the file from the filesystem yourself.
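In TSM 5.x administrative-client terms, the sequence described would look
roughly like this (the amount and the volume path are hypothetical, not
from this thread):

```
reduce db 5
delete dbvolume /tsm/db/vol05.dsm
query dbvolume
```

REDUCE DB lowers the assigned capacity first; DELETE DBVOLUME moves any
remaining pages off the volume and drops it from TSM's volume list; the
underlying file is then removed manually at the OS level.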

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Mochnaczewski
Sent: Friday, December 30, 2005 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Reducing database volume


Hi Everybody,

I've created and extended my database in TSM version 5.1.6. I now
realize that I would like to remove the added volume for various
reasons. I've checked the administrator's guide, and I know I can do a
delete dbvolume. Now, just to be sure: will this move the data from the
dbvolume I'm removing to the other volumes (prior to adding it, my db
was at 54 GB and 77% utilized, and after adding 5 GB it is now 59 GB),
after which I can safely remove the file it created in my filesystem,
or will it both move the data to the other volumes and remove the file
it created in the filesystem?

Rich


Reducing database volume

2005-12-30 Thread Richard Mochnaczewski
Hi Everybody,

I've created and extended my database in TSM version 5.1.6. I now
realize that I would like to remove the added volume for various
reasons. I've checked the administrator's guide, and I know I can do a
delete dbvolume. Now, just to be sure: will this move the data from the
dbvolume I'm removing to the other volumes (prior to adding it, my db
was at 54 GB and 77% utilized, and after adding 5 GB it is now 59 GB),
after which I can safely remove the file it created in my filesystem,
or will it both move the data to the other volumes and remove the file
it created in the filesystem?

Rich


Re: Novell backups

2005-12-30 Thread Troy Frank
Would it be possible to make the problem server the ONLY one running
backups off the SAN for one night?  I.e., for any other servers attached
to that SAN, don't run their backups that night.

It sounds like the problem is I/O throughput from the SAN, and that test
would help isolate it.  Depending on what your SAN hardware is, you may
want to look at how much cache it has, what percentage of it is being
used for reads vs. writes, and how much read-ahead it's doing for this
server's volumes.


>>> [EMAIL PROTECTED] 12/30/2005 11:07:59 AM >>>
hi

The battle continues with the slow running tsm backups for netware.

-tsa is set at...20, no caching (any higher causes the backup to
fail).
-The SAN is at raid 5 (7 disk, 1P)
-San disks are each 72Gb
-FC disks
-1gb FC connections
-Host bus adapter was the 1st thing we updated
-Our backup window is tight; we have several large tsm backups running
at the same time (the best that we can do is stagger the starting times).

Still looking for a way to shorten the backup time.

Thanks, Gary

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf
Of
Troy Frank
Sent: Thursday, December 29, 2005 4:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Novell backups


In addition to what Duane wanted to know, I would look at a couple of
other things. In the sys:\etc\sms\tsa.cfg file, check these settings...

Enable Caching - should be no. The tsm client already does read-ahead
caching; the tsa caching is redundant.

Cache Memory Threshold - should be set as high as you can get away
with. 25% is the max. If backups give you errors with it set this
high, I usually move down in increments of 5% until errors stop
happening.

To make the above changes take effect, you'd have to unload dsmcad, run
smsstop, then reload everything.
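At the NetWare console, the reload sequence described above would be
roughly the following (a sketch; SMSSTART is the usual companion script
but is an assumption here, as only unload/smsstop are named in the
thread):

```
unload dsmcad
smsstop
smsstart
load dsmcad
```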
To continue on with Duane's line of thinking, make sure that the HBAs
in your server are at the latest firmware/driver versions. There's also
a post-SP5 NSS patch available (nw6nss5c). Also, since it's
SAN-attached, how many other servers are attached to this SAN, and
could some of them also be running backups (or other things) at the
same time?


>>> [EMAIL PROTECTED] 12/29/2005 3:13:33 PM >>>
The data only took 6.6 hours to transfer. To me it would look like the
process of scanning the filesystem is taking the majority of the time.

How are your san disks set up ? Raid 5 ? How many physical disks to
make
up a volume ? FC disk or sata ? 2gb FC connections ?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Gary Osullivan
Sent: Thursday, December 29, 2005 2:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Novell backups


hi

here are a few details to add to Richard's original query.
This tsm backup of netware data is now taking us 19 hours to complete!

Any ideas?

-netware version 6 sp5
-tsafs (version 6.51, June 18 2005)
-client server: ProLiant DL380/G2
-Pentium 3 / 1.33 GHz dual processor (512 cache)
-client server internal disk: 17361 MB
-client server memory: 2359296 KB
-NIC: 6136 gigabit

-client data is SAN attached (2 data volumes; 650 GB of data and 2.1
million objects on each vol)
-number of objects scanned: 4.2 million
-objects backed up: 1,527,474
-processing time: 19:20:45
-bytes transferred: 221.36 GB
-data transfer time: 23,651.11 sec
-network data transfer rate: 9,814.20 KB/sec

dsp.opt file


* Tivoli Storage Manager
*
* Sample client options file for scx (ver 5.1.6.0)



* Communications settings
* Setting Options for the TCP/IP Communication Method
* ---
*
COMMMETHOD TCPip
TCPSERVERADDRESS tsmgiga.slac.ca
TCPPORT 1600


* Recommended for automated backup schedules
* ---
Passwordaccess generate


* Setting Nodename
* ---

NODENAME scx

*
* Recommended exclude list for NetWare
* ---
*
* NetWare 3.12, 3.20 specific - use the INCR BINDERY or SEL BINDERY
* command to backup the bindery; exclude the real bindery
* files:
* ---
* EXCLUDE sys:\system\net$obj.sys
* EXCLUDE sys:\system\net$prop.sys
* EXCLUDE sys:\system\net$val.sys
*
* NetWare 4.11, 4.20 & 5.00 specific - NDS leaf objects can be
included/
* excluded; container objects are not affected. This example
* shows how to include only objects in container
* .OU=Development.O=TIVOLI
* -
* EXCLUDE directory:/.O=TIVOLI/.../*
* INCLUDE directory:/.O=TIVOLI/.OU=Development.O=TIVOLI/.../*
*
* NetWare general excludes
* -

INCLUDE scx\SYS:.../* mgm_nov
INCLUDE scx\SHARE1:.../* mgm_nov
INCLUDE scx\U

Re: Novell backups

2005-12-30 Thread Ochs, Duane
What about the TSM server side of things?  Is the data being written to
disk, or does it end up writing to tape?

We experienced what we thought was a network issue a couple of years
back; it turned out two of our Exchange servers were using all the
available diskpool space and everything else had to write to tape.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Gary Osullivan
Sent: Friday, December 30, 2005 11:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Novell backups


hi

The battle continues with the slow running tsm backups for netware.

-tsa is set at...20, no caching (any higher causes the backup to fail).
-The SAN is at raid 5 (7 disk, 1P)
-San disks are each 72Gb
-FC disks
-1gb FC connections
-Host bus adapter was the 1st thing we updated
-Our backup window is tight; we have several large tsm backups running
at the same time (the best that we can do is stagger the starting times).

Still looking for a way to shorten the backup time.

Thanks, Gary

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Troy Frank
Sent: Thursday, December 29, 2005 4:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Novell backups


In addition to what Duane wanted to know, I would look at a couple of
other things. In the sys:\etc\sms\tsa.cfg file, check these settings...

Enable Caching - should be no.  tsm client already does read-ahead
caching, the tsa caching is redundant.

Cache Memory Threshold - should be set to as high as you can get away
with.  25% is the max.  If backups give you errors with it set this
high, I usually move down in increments of 5%  until errors stop
happening.

To make the above changes take effect, you'd have to unload dsmcad, run
smsstop, then reload everything. To continue on with Duane's line of
thinking, make sure that the hba's in your server are at the latest
firmware/driver versions.  There's also a post sp5 NSS patch available
(nw6nss5c).  Also, since it's san-attached, how many other servers are
attached to this san, and could some of them also be running backups (or
other things) at the same time?


>>> [EMAIL PROTECTED] 12/29/2005 3:13:33 PM >>>
The data only took 6.6 hours to transfer. To me it would look like the
process of scanning the filesystem is taking the majority of the time.

How are your san disks set up ? Raid 5 ? How many physical disks to make
up a volume ? FC disk or sata ? 2gb FC connections ?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Gary Osullivan
Sent: Thursday, December 29, 2005 2:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Novell backups


hi

here are a few details to add to Richard's original query.
This tsm backup of netware data is now taking us 19 hours to complete!

Any ideas?

-netware version 6 sp5
-tsafs (version 6.51, June 18 2005)
-client server: ProLiant DL380/G2
-Pentium 3 / 1.33 GHz dual processor (512 cache)
-client server internal disk: 17361 MB
-client server memory: 2359296 KB
-NIC: 6136 gigabit

-client data is SAN attached (2 data volumes; 650 GB of data and 2.1
million objects on each vol)
-number of objects scanned: 4.2 million
-objects backed up: 1,527,474
-processing time: 19:20:45
-bytes transferred: 221.36 GB
-data transfer time: 23,651.11 sec
-network data transfer rate: 9,814.20 KB/sec

dsp.opt file


* Tivoli Storage Manager
*
* Sample client options file for scx (ver 5.1.6.0)



* Communications settings
* Setting Options for the TCP/IP Communication Method
* ---
*
COMMMETHOD TCPip
TCPSERVERADDRESS tsmgiga.slac.ca
TCPPORT 1600


* Recommended for automated backup schedules
* ---
Passwordaccess generate


* Setting Nodename
* ---

NODENAME scx

*
* Recommended exclude list for NetWare
* ---
*
* NetWare 3.12, 3.20 specific - use the INCR BINDERY or SEL BINDERY
* command to backup the bindery; exclude the real bindery
* files:
* ---
* EXCLUDE sys:\system\net$obj.sys
* EXCLUDE sys:\system\net$prop.sys
* EXCLUDE sys:\system\net$val.sys
*
* NetWare 4.11, 4.20 & 5.00 specific - NDS leaf objects can be included/
* excluded; container objects are not affected. This example
* shows how to include only objects in container
* .OU=Development.O=TIVOLI
* -
* EXCLUDE directory:/.O=TIVOLI/.../*
* INCLUDE directory:/.O=TIVOLI/.OU=Development.O=TIVOLI/.../*
*
* NetWare general excludes
* -

INCLUDE scx\SYS:.../* mgm_nov
INCLUDE scx\SHARE1:.../* mgm_nov
INCLUDE scx\USER3:.../* mgm_

Re: Novell backups

2005-12-30 Thread Gary Osullivan
hi

The battle continues with the slow running tsm backups for netware.

-tsa is set at...20, no caching (any higher causes the backup to fail).
-The SAN is at raid 5 (7 disk, 1P)
-San disks are each 72Gb
-FC disks
-1gb FC connections
-Host bus adapter was the 1st thing we updated
-Our backup window is tight; we have several large tsm backups running at
the same time (the best that we can do is stagger the starting times).

Still looking for a way to shorten the backup time.

Thanks, Gary

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Troy Frank
Sent: Thursday, December 29, 2005 4:47 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Novell backups


In addition to what Duane wanted to know, I would look at a couple of
other things. In the sys:\etc\sms\tsa.cfg file, check these settings...

Enable Caching - should be no.  tsm client already does read-ahead
caching, the tsa caching is redundant.

Cache Memory Threshold - should be set to as high as you can get away
with.  25% is the max.  If backups give you errors with it set this
high, I usually move down in increments of 5%  until errors stop
happening.

To make the above changes take effect, you'd have to unload dsmcad, run
smsstop, then reload everything.
To continue on with Duane's line of thinking, make sure that the hba's
in your server are at the latest firmware/driver versions.  There's also
a post sp5 NSS patch available (nw6nss5c).  Also, since it's
san-attached, how many other servers are attached to this san, and could
some of them also be running backups (or other things) at the same
time?


>>> [EMAIL PROTECTED] 12/29/2005 3:13:33 PM >>>
The data only took 6.6 hours to transfer. To me it would look like the
process of scanning the filesystem is taking the majority of the time.

How are your san disks set up ? Raid 5 ? How many physical disks to
make
up a volume ? FC disk or sata ? 2gb FC connections ?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Gary Osullivan
Sent: Thursday, December 29, 2005 2:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Novell backups


hi

here are a few details to add to Richard's original query.
This tsm backup of netware data is now taking us 19 hours to complete!

Any ideas?

-netware version 6 sp5
-tsafs (version 6.51, June 18 2005)
-client server: ProLiant DL380/G2
-Pentium 3 / 1.33 GHz dual processor (512 cache)
-client server internal disk: 17361 MB
-client server memory: 2359296 KB
-NIC: 6136 gigabit

-client data is SAN attached (2 data volumes; 650 GB of data and 2.1
million objects on each vol)
-number of objects scanned: 4.2 million
-objects backed up: 1,527,474
-processing time: 19:20:45
-bytes transferred: 221.36 GB
-data transfer time: 23,651.11 sec
-network data transfer rate: 9,814.20 KB/sec

dsp.opt file


* Tivoli Storage Manager
*
* Sample client options file for scx (ver 5.1.6.0)



* Communications settings
* Setting Options for the TCP/IP Communication Method
* ---
*
COMMMETHOD TCPip
TCPSERVERADDRESS tsmgiga.slac.ca
TCPPORT 1600


* Recommended for automated backup schedules
* ---
Passwordaccess generate


* Setting Nodename
* ---

NODENAME scx

*
* Recommended exclude list for NetWare
* ---
*
* NetWare 3.12, 3.20 specific - use the INCR BINDERY or SEL BINDERY
* command to backup the bindery; exclude the real bindery
* files:
* ---
* EXCLUDE sys:\system\net$obj.sys
* EXCLUDE sys:\system\net$prop.sys
* EXCLUDE sys:\system\net$val.sys
*
* NetWare 4.11, 4.20 & 5.00 specific - NDS leaf objects can be
included/
* excluded; container objects are not affected. This example
* shows how to include only objects in container
* .OU=Development.O=TIVOLI
* -
* EXCLUDE directory:/.O=TIVOLI/.../*
* INCLUDE directory:/.O=TIVOLI/.OU=Development.O=TIVOLI/.../*
*
* NetWare general excludes
* -

INCLUDE scx\SYS:.../* mgm_nov
INCLUDE scx\SHARE1:.../* mgm_nov
INCLUDE scx\USER3:.../* mgm_nov

EXCLUDE scx\SYS:/SYSTEM/CSLIB/LOGS/SMLOGS/*
EXCLUDE scx\SYS:/_SWAP_.MEM
EXCLUDE scx\SYS:/LTAUDIT/.../*
EXCLUDE scx\SYS:/TIVOLI/TSM/CLIENT/BA/*.log
EXCLUDE scx\SHARE1:/graphics/.../*
EXCLUDE scx\SHARE1:/Internet branding shared/.../*
EXCLUDE scx\USER1:.../* mgm_nov
EXCLUDE scx\VOL1:.../* mgm_nov
EXCLUDE scx\SHARE2:.../* mgm_nov
EXCLUDE scx\SHARE3:.../* mgm_nov
EXCLUDE scx\SHARE4:.../* mgm_nov
EXCLUDE scx\NDPS:.../* mgm_nov
EXCLUDE scx\FTP:.../* mgm_nov
EXCLUDE scx

Re: Novell backups

2005-12-30 Thread David W Litten
Rich,

You could try parallel processing by setting RESOURCEUTILIZATION 4
in your dsm.opt file. You may also need to increase the number of
mount points (MaxMountPoints) allowed for this node to, say, 5.

david
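Concretely, the two changes David suggests would look roughly like this
(the node name is a placeholder; the server-side knob is the node's
MAXNUMMP setting, which is what "MaxMountPoints" refers to here):

```
* client side, in dsm.opt:
RESOURCEUTILIZATION 4

* server side, from an administrative client:
update node YOUR_NODE maxnummp=5
```

RESOURCEUTILIZATION lets the client open multiple producer/consumer
sessions, and the server must allow the node enough mount points for them
to run in parallel.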




Richard Mochnaczewski
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
12/29/2005 12:37 PM
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Subject: [ADSM-L] Novell backups

Hi Everybody,

Is there a way to schedule multiple Novell backups to run simultaneously
for one node?  We are having trouble with our backups completing in a
timely manner, as we have thousands of files being scanned for backup.
What we would like to do is have the volumes on the node split out and
backed up through multiple simultaneous sessions. Is this possible?

Rich