need help: TSM SAN manager on W2K

2003-10-20 Thread TSM
Hello,

Environment:

Windows NT 4.0
TSM server 4.2.x
TSM client 4.2.x
TSM SAN agent 4.2.x
LTO 3584 library, 4 drives


I need some help upgrading the NT 4.0 SAN agent to W2K.

The problem is that the registry parameter MaximumSGList is set incorrectly
(too small), but under NT 4.0 everything (including restore) is working.

Now we are going to upgrade this server to W2K, and we must correct the
wrong parameter (to 0xff).

My question:
Is it possible that, after the server upgrade, the old data backed up with
the NT 4.0 TSM SAN agent can no longer be restored by the W2K SAN agent?

We will also upgrade the TSM server to W2K; will there be any problems with
the old backed-up data?

Thanks.


With best regards,

Stefan Savoric


Re: Online DB Reorg

2003-10-20 Thread Remco Post
On Sat, 18 Oct 2003 14:35:09 -0400
Talafous, John G. [EMAIL PROTECTED] wrote:


 Remco,
   Would you be willing to share your SQL query that reports on DB
 fragmentation?


I was already looking at Eric (he probably saved my thingy somewhere
useful, I just saved it in my sent-mail folder), here it is...

select cast((100 - ( cast(MAX_REDUCTION_MB as float) * 256 ) / -
(cast(USABLE_PAGES as float) - cast(USED_PAGES as float) ) * 100) as -
decimal(4,2)) as percent_frag from db

Note that I still think this is one of the more useless queries I've ever
built...

 Thanks to all,
 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   http://www.timken.com


--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167

I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer industry
didn't even foresee that the century was going to end. -- Douglas Adams


Re: Online DB Reorg

2003-10-20 Thread Loon, E.J. van - SPLXM
Hi Guys!
The SQL statement can also be found at Richard's quickfacts page:
http://people.bu.edu/rbs/ADSM.QuickFacts
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Remco Post [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 12:30
To: [EMAIL PROTECTED]
Subject: Re: Online DB Reorg


On Sat, 18 Oct 2003 14:35:09 -0400
Talafous, John G. [EMAIL PROTECTED] wrote:


 Remco,
   Would you be willing to share your SQL query that reports on DB
 fragmentation?


I was already looking at Eric (he probably saved my thingy somewhere
useful, I just saved it in my sent-mail folder), here it is...

select cast((100 - ( cast(MAX_REDUCTION_MB as float) * 256 ) / -
(cast(USABLE_PAGES as float) - cast(USED_PAGES as float) ) * 100) as -
decimal(4,2)) as percent_frag from db

Note that I still think this is one of the more useless queries I've ever
built...

 Thanks to all,
 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   http://www.timken.com


--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167

I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer industry
didn't even foresee that the century was going to end. -- Douglas Adams




Re: Online DB Reorg

2003-10-20 Thread John Naylor
OK, I ran Remco's SQL and it reports my database is 53% fragmented.
I know that a fragmented database is the natural state, but is there any
level of fragmentation at which you should start to get worried?
Not being a database expert, I would say only worry if TSM cache hits start
slipping badly, but is that the right approach?





vault to vaultretrieve

2003-10-20 Thread Alegra Fernandez
Is there any way to manually set a volume from VAULT to VAULTRETRIEVE?
If not, how does one recall offsite volumes in a vault state?  I have a
problem with my expiration, and I'm running VERY low on scratches.  So I'm
trying to retrieve some of my offsite tapes.  And since expiration isn't
automatically doing this, I'm looking for a manual way to do it.

Thanks,
Alegra Fernandez


Re: vault to vaultretrieve

2003-10-20 Thread Richard van Denzel
It's a two-step procedure:

MOVE DRMEDIA volume WHERESTATE=VAULT TOSTATE=COURIERRETRIEVE
MOVE DRMEDIA volume WHERESTATE=COURIERRETRIEVE TOSTATE=VAULTRETRIEVE
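For reference (an illustrative addition, not part of the procedure above),
QUERY DRMEDIA accepts the same WHERESTATE parameter, so you can check which
volumes are in each state before and after the moves:

query drmedia * wherestate=vault
query drmedia * wherestate=vaultretrieve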

Met vriendelijke groet, with kind regards,

Richard van Denzel
Consultant
IBM CATE, TSM Certified Consultant

Solution Professional Services B.V.

Transistorstraat 167, 1322 CN Almere
Postbus 50044, 1305 AA Almere.

T:   +31 (0)36 880 02 22
F:   +31 (0)36 880 02 44
M:  +31 (0)652 663 978
W:  www.sltngroup.com
E:   [EMAIL PROTECTED]

Solution Professional Services B.V. is onderdeel van THE SLTN GROUP
An IBM Premier Business Partner





Alegra Fernandez [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
20-10-2003 16:07
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:vault to vaultretrieve


Is there any way to manually set a volume from VAULT to VAULTRETRIEVE?
If not, how does one recall offsite volumes in a vault state?  I have a
problem with my expiration, and I'm running VERY low on scratches.  So I'm
trying to retrieve some of my offsite tapes.  And since expiration isn't
automatically doing this, I'm looking for a manual way to do it.

Thanks,
Alegra Fernandez


Re: Online DB Reorg

2003-10-20 Thread Wayne T. Smith
John Naylor wrote, in part:
OK I ran Remco's sql and it reports my database is 53% fragmented.
Maybe this makes you feel better?  My result is 99.80, but then my
database isn't what it used to be :-)

Cheers, wayne


History of ITSM/TSM/ADSM/...

2003-10-20 Thread asr
Howdy, all.

I've only been around TSM since ADSM 2.1; I recall hearing bits and pieces of
the earlier versions and the roots of the product.

Could someone who was there synopsize, and perhaps someone who FAQs capture
it?

I rooted around in the archives and Mr. Sims' fine quickfacts, but didn't see
anything particularly cohesive.

- Allen S. Rout


Re: Export server

2003-10-20 Thread Clark, Rodney
I would take the last option any day.

-Original Message-
From: Dave Adams [mailto:[EMAIL PROTECTED]
Sent: Friday, October 17, 2003 1:48 PM
To: [EMAIL PROTECTED]
Subject: Export server


We are planning to reinstall our Windows 2000-based TSM server with Linux.
We only use DASD for storage. Would the following be (roughly ;-) the
correct procedure?

1) Export the TSM server to a file device class on a non-operating system
disk.
2) Reinstall the server with Linux.
3) Import the TSM server from the non-operating system disk.

Otherwise I guess I could install a whole new system for the Linux
installation and do:

 export server filedata=all toserver=linuxserver

Only thing here is that we want the Linux server to have the same hostname
as the old W2K server, which is why I thought my first idea might be easier.

Either way clients will be able to access the Linux server and do
backups/restores as they could against the W2K server, right?

Thanks

Dave
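A rough sketch of the first variant, in case it helps; the device class
name, directory and volume names below are only placeholders, not a tested
procedure:

define devclass expfile devtype=file directory=d:\tsmexport maxcapacity=2g
export server filedata=all devclass=expfile scratch=yes
(reinstall the server under Linux, keep the exported files, and define a
 matching FILE device class pointing at them, then:)
import server filedata=all devclass=expfile volumenames=vol1,vol2
(where vol1,vol2 are the names of the file volumes the export actually created)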




Win2K Device Support SAIT drives

2003-10-20 Thread Dean Winger
Help Please

I am in the market for a new tape library. I have been watching the developments with 
Sony's SAIT technology. Not many libraries on the market yet but I have managed to 
find a few.  So, will TSM support SAIT, and if so what is the timeline?

Thanks

Dean Winger
Network Administrator
Wisconsin Center for Education Research
1025 West Johnson Street
Madison, WI  53706
Cell (608)235-1425
Office (608)265-3202
Email [EMAIL PROTECTED]


Re: Online DB Reorg

2003-10-20 Thread Prather, Wanda
That query (taken here from ADSM.QuickFacts) confuses me entirely - can
someone please explain?

SELECT CAST((100 - ( CAST(MAX_REDUCTION_MB AS FLOAT) * 256 ) /
   (CAST(USABLE_PAGES AS FLOAT) - CAST(USED_PAGES AS FLOAT) ) * 100)
   AS DECIMAL(4,2)) AS PERCENT_FRAG FROM DB


It finds the number of unused pages (usable_pages - used_pages).

Then it takes max_reduction and divides it by the unused pages.

But, so what?  I don't get it.

Unused pages minus max_reduction tells you how much of your data base is
NOT usable.

BUT again, so what?

That doesn't say whether you need to do a data base reorg or not, does it?

If my max reduction is 8 pages and my unused pages are 10, I've got 2
unusable pages.
But if my data base is 1,000,000 pages, that certainly isn't much
fragmentation, the way a DB administrator (or space manager) would
traditionally see it.  Certainly no reason to do a DB reorg.

WHY isn't the division done with the total usable pages as the denominator?
The data base size has to enter into the decision to reorg, somewhere.
I'm confused.

(But then, it's Monday)
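For what it's worth, a variant that expresses Wanda's point (the space that
is neither used nor recoverable, taken as a percentage of the whole database)
might look something like the following. It uses the same columns of the DB
table and is only an untested sketch, not a replacement for the original
query:

select cast( ( (cast(USABLE_PAGES as float) - cast(USED_PAGES as float)) -
   - (cast(MAX_REDUCTION_MB as float) * 256) ) -
   / cast(USABLE_PAGES as float) * 100 as decimal(5,2)) -
   as pct_unusable_of_total from db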


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 6:38 AM
To: [EMAIL PROTECTED]
Subject: Re: Online DB Reorg


Hi Guys!
The SQL statement can also be found at Richard's quickfacts page:
http://people.bu.edu/rbs/ADSM.QuickFacts
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Remco Post [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 12:30
To: [EMAIL PROTECTED]
Subject: Re: Online DB Reorg


On Sat, 18 Oct 2003 14:35:09 -0400
Talafous, John G. [EMAIL PROTECTED] wrote:


 Remco,
   Would you be willing to share your SQL query that reports on DB
 fragmentation?


I was already looking at Eric (he probably saved my thingy somewhere
useful, I just saved it in my sent-mail folder), here it is...

select cast((100 - ( cast(MAX_REDUCTION_MB as float) * 256 ) / -
(cast(USABLE_PAGES as float) - cast(USED_PAGES as float) ) * 100) as -
decimal(4,2)) as percent_frag from db

Note that I still think this is one of the more useless queries I've ever
built...

 Thanks to all,
 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   http://www.timken.com


--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167

I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer industry
didn't even foresee that the century was going to end. -- Douglas Adams




Very slow backups of Exchange 5.5 with tpdexcc

2003-10-20 Thread Tyree, David
We are running Exchange 5.5 on a box running Win2k SP2, 1.1 GHz
x2 with 2.6 gig RAM, TSM version 5.1.5.2, and TDP version 5.1.5.0.  It's
talking to a TSM server running 5.1.6.

The backups are normally run via a batch file using the command
line program with the switches we want:  tdpexcc backup * full
/tsmoptfile=dsm.opt /logfile=excsch.log  excfull.log. All paths are
correct.

The log is telling me that it runs for about 10 hours each night
and will send a total of 35 gig. We looked at various things: network, CPU
usage, etc. The scheduled backup is run after 5 PM when most of the users
are gone for the day.

We have watched the CPU usage during the day and with just normal activity
the usage was 5-10 %. After the backup was started it jumped to 10-20 % with
occasional bumps to 80 %. Still not breaking too much of a sweat though.

Then just for the heck of it we ran the same backup via the GUI
(tdpexc.exe) and it finished in less than an hour. Same file size, same
everything else.

We watched the incoming bandwidth on the backup server and when
we ran the backup via the GUI it was almost exactly 10 times faster than
when we ran it via the command line.

In both cases, the full backups were run during the day with all
users online and the backup server itself doing nothing else.

Is something going on here that we are missing?



David Tyree
Microcomputer Specialist
South Georgia Medical Center
229.333.1155



Re: Export server

2003-10-20 Thread MC Matt Cooper (2838)
Dave,
I have just completed moving 300 desktops and servers from 1 TSM
server to another.  I do not know how many clients you are dealing with but
the things I can tell you are...
1) the TCPSERVERADDRESS and the TCPPORT on the clients DSM.OPT dictate where
they are backed up to, not the TSM server name.
2) If you change the TSM server name, even though you have exported/imported
the client nodes, the nodes will have to be re-authenticated.   The server
name is part of the authentication process.
Matt
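For reference, the client-side settings Matt mentions in point 1 live in each
client's dsm.opt; a generic sketch, with the server address, port and node
name as placeholders only:

NODENAME          MYNODE
TCPSERVERADDRESS  tsmsrv.example.com
TCPPORT           1500
PASSWORDACCESS    GENERATE

With PASSWORDACCESS GENERATE the locally stored password is kept per server,
which is consistent with Matt's point 2 about nodes having to re-authenticate
after the server name changes.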

-Original Message-
From: Clark, Rodney [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 11:55 AM
To: [EMAIL PROTECTED]
Subject: Re: Export server


I would take the last option anyday.

-Original Message-
From: Dave Adams [mailto:[EMAIL PROTECTED]
Sent: Friday, October 17, 2003 1:48 PM
To: [EMAIL PROTECTED]
Subject: Export server


We are planning to reinstall our Windows 2000-based TSM server with Linux.
We only use DASD for storage. Would the following be (roughly ;-) the
correct procedure?

1) Export the TSM server to a file device class on a non-operating system
disk.
2) Reinstall the server with Linux.
3) Import the TSM server from the non-operating system disk.

Otherwise I guess I could install a whole new system for the Linux
installation and do:

 export server filedata=all toserver=linuxserver

Only thing here is that we want the Linux server to have the same hostname
as the old W2K server, which is why I thought my first idea might be easier.

Either way clients will be able to access the Linux server and do
backups/restores as they could against the W2K server, right?

Thanks

Dave




Move data

2003-10-20 Thread Gill, Geoffrey L.
Have any of you used the move data command to try and consolidate data from
filling tapes that have been sent offsite to tapes within the library that
will be sent offsite? At the moment we have over 180 tapes in filling status
offsite, anywhere from 0.1% to 100% filling. I'd like to at least see if I
could free up the tapes that are 40% and filling.



Is there an easier way of consolidating this data? Reclamation doesn't seem
to do as good a job as I'd hoped.



Any other ideas???
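One way to see the candidates (a sketch only; the pool name and volume name
are invented) is a select against the VOLUMES table, followed by a MOVE DATA
per volume. For volumes marked offsite, TSM obtains the data from available
onsite volumes, much as offsite reclamation does:

select volume_name, pct_utilized, status from volumes -
   where stgpool_name='OFFSITE_COPYPOOL' and status='FILLING' and pct_utilized<40
move data VOL001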



Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154


Re: Online DB Reorg

2003-10-20 Thread Robin Sharpe
Mine came back as 0.02.  Does that mean it is not very fragmented?  Our
database is 112 GB, 78.8% used, was new in October 2001 and has never been
reorganized.
Robin Sharpe
Berlex Labs


Prather, Wanda [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/20/03 01:03 PM
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:        Re: Online DB Reorg



That query (taken here from ADSM.QuickFacts) confuses me entirely - can
someone please explain?

SELECT CAST((100 - ( CAST(MAX_REDUCTION_MB AS FLOAT) * 256 ) /
   (CAST(USABLE_PAGES AS FLOAT) - CAST(USED_PAGES AS FLOAT) ) * 100)
   AS DECIMAL(4,2)) AS PERCENT_FRAG FROM DB


It finds the number of unused pages (usable_pages - used_pages).

Then it takes max-reduction and divides by unusable pages.

But, so what?  I don't get it.

The unusable pages - max_reduction tells you how much of your data base is
NOT usable.

BUT again, so what?

That doesn't say whether you need to do a data base reorg or not, does it?

If my max reduction is 8 pages and my unused pages are 10, I've got 2
unusable pages.
But if my data base is 1,000,000 pages, that certainly isn't much
fragmentation, the way a DB administrator (or space manager) would
traditionally see it.  Certainly no reason to do a DB reorg.

WHY isn't the division done with the total usable pages as the numerator?
The data base size has to enter in to the decision to reorg, somehwere.
I'm confsed

(But then, it's Monday)


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 6:38 AM
To: [EMAIL PROTECTED]
Subject: Re: Online DB Reorg


Hi Guys!
The SQL statement can also be found at Richard's quickfacts page:
http://people.bu.edu/rbs/ADSM.QuickFacts
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Remco Post [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 12:30
To: [EMAIL PROTECTED]
Subject: Re: Online DB Reorg


On Sat, 18 Oct 2003 14:35:09 -0400
Talafous, John G. [EMAIL PROTECTED] wrote:


 Remco,
   Would you be willing to share your SQL query that reports on DB
 fragmentation?


I was already looking at Eric (he probably saved my thingy somewhere
useful, I just saved it in my sent-mail folder), here it is...

select cast((100 - ( cast(MAX_REDUCTION_MB as float) * 256 ) / -
(cast(USABLE_PAGES as float) - cast(USED_PAGES as float) ) * 100) as -
decimal(4,2)) as percent_frag from db

Note that I still think this is one of the more useless queries I've ever
built...

 Thanks to all,
 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   http://www.timken.com


--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 8008Fax. +31 20 668 3167

I really didn't foresee the Internet. But then, neither did the computer
industry. Not that that tells us very much of course - the computer
industry
didn't even foresee that 

Re: Very slow backups of Exchange 5.5 with tpdexcc

2003-10-20 Thread Del Hoobler
David,

The underlying backup code is identical for the command-line
and GUI for Data Protection for Exchange.

I agree something doesn't sound right. I would take a closer
look at the logs for both Data Protection for Exchange and
the TSM Server to make sure the same data and same amount
of data was processed in both cases.

Do you know if you used different values for the BUFFERS
and/or BUFFERSIZE? (the DP for Exchange log should show
the values that were used.)
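For reference, a sketch of how those values can be checked and set from the
DP for Exchange command line (the numbers are only examples, not tuning
advice):

tdpexcc query tdp
tdpexcc set buffers=4
tdpexcc set buffersize=1024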

I would try this test again... and see if you see the same thing.
In both backup tests, during the backup, pay attention to the load on
1.) TSM Server 2.) Exchange database disks  3.) Exchange servers  4.)
Network.
If you consistently see this, please call support.

Thanks,

Del



We are running Exchange 5.5 on a box running Win2k Sp2, 1.1
GHz
 x2 with 2.6 gig ram, TSM version 5.1.5.2, and TDP version 5.1.5.0.  It's
 talking a TSM server running 5.1.6.

 The backups are normally ran via a batch file using the
command
 line program with the switches we want,  tdpexcc backup * full
 /tsmoptfile=dsm.opt /logfile=excsch.log  excfull.log. All paths are
 correct.

 The log is telling me that it runs for about 10 hours each
night
 and will send a total of 35 gig. We looked at various things, network,
CPU
 usage, etc. The scheduled backup is ran after 5 PM when most of the
users
 are gone for the day.

 We have watched the CPU usage during the day and with just normal
activity
 the usage was 5-10 %. After the backup was started it jumped to 10-20 %
with
 occasional bumps to 80 %. Still not breaking too much of a sweat though.

 Then just for the heck of it we ran the same backup via the
GUI
 (tdpexc.exe) and it finished in less than an hour. Same file size, same
 everything else.

 We watched the incoming bandwidth on the backup server and
when
 we ran the backup via the GUI it was almost exactly 10 times faster than
 when we ran it via the command line.

 In both cases, the full backups were ran during the day with
all
 users online and the backup server itself doing nothing else.

   Is something going on here that we are missing?


Re: Win2K Device Support SAIT drives

2003-10-20 Thread Himanshu A Madhani
SONY SAIT drives will be supported in TSM 5.2.2 which is scheduled to be
released end of this year.

Himanshu Madhani
Tivoli Storage Manager Development


Re: Move data

2003-10-20 Thread MC Matt Cooper (2838)
Gill,
I have used the MOVE DATA without any problems.  Usually it is
because of a perceived problem with the offsite tape.  But it should work
for what you want to do.
Why doesn't reclamation get this done for you?
Matt

-Original Message-
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 1:58 PM
To: [EMAIL PROTECTED]
Subject: Move data


Have any of you used the move data command to try and consolidate data from
filling tapes that have been sent offsite to tapes within the library that
will be sent offsite? At the moment we have over 180 tapes in filling status
offsite, anywhere from 0.1% to 100% filling. I'd like to at least see if I
could free up the tapes that are 40% and filling.



Is there an easier way of consolidating this data? Reclamation doesn't seem
to do as good a job as I'd hoped.



Any other ideas???



Thanks,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154


Database fragmentation formula (was Re: Online DB Reorg)

2003-10-20 Thread Zlatko Krastev
* ATTENTION *
Those who do not like the mathematics can skip to the end (search for the
word select).
*

From my mathematical background, this query does not produce very meaningful
results. The formula looks like:

              ( MAX_REDUCTION_MB * 256 )
  100 -  ------------------------------- * 100
           (USABLE_PAGES - USED_PAGES)

Thus the closer we are to a fully used database, the less accurate the
formula becomes. Moreover, if we fill the DB to 100%, the result will be a
division by zero, whether the database is fragmented or not. One of our
goals is to fully utilize our resources, and at exactly that point the query
would be useless.
Also, the formula is of no use if there is no legend for how to interpret the
numbers. For example, our test server DB gives PERCENT_FRAG=26.92 while
being nearly unfragmented.

So I would dare to recommend another formula (in pages):

            used - needed
  fragm_p = ------------- x 100
                used

i.e. what portion of the used space is wasted (in %).
The needed space (in pages) can be found if we multiply PCT_UTILIZED by
USABLE_PAGES and divide by 100 to remove percentages:

needed = PCT_UTILIZED x USABLE_PAGES / 100

while the used space (in pages) can be read from the USED_PAGES column.
Therefore my final formula would be (the lines may be split by mailers):

            USED_PAGES - PCT_UTILIZED x USABLE_PAGES / 100
  fragm_p = ---------------------------------------------- x 100
                             USED_PAGES

and the final query would be:
select cast(100 * (USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100) -
  / USED_PAGES as decimal(9,5)) as "Page fragmentation [%]" from db

Now the percentage shows the percentage of wasted space vs. used space. 0%
would mean the database is fully populated with no holes, 100% is impossible
(as completely empty pages would not be counted), and 99+% means each page
is filled with something small just to allocate it.
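As a quick sanity check with made-up numbers (purely illustrative, not real
output): if USED_PAGES = 1,000,000, USABLE_PAGES = 1,100,000 and
PCT_UTILIZED = 85, then

   needed  = 85 x 1,100,000 / 100                    = 935,000 pages
   fragm_p = (1,000,000 - 935,000) / 1,000,000 x 100 = 6.5 %

i.e. about 6.5% of the used pages is wasted space inside the pages.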

PART 2.
Beyond how much space is wasted inside pages we would be also interested
in how many empty pages we are losing due to partition-allocation scheme.
Again the math first. Same formula (but now in MB):

            used - needed
  fragm_p = ------------- x 100
                used

Now the needed space is derived from the CAPACITY_MB field:

needed = PCT_UTILIZED x CAPACITY_MB / 100

while the actual usage is the size to which we can reduce the DB:

used = CAPACITY_MB - MAX_REDUCTION_MB

the resulting formula would look like  (the lines may be split by
mailers):

            (CAPACITY_MB - MAX_REDUCTION_MB) - PCT_UTILIZED x CAPACITY_MB / 100
  fragm_p = -------------------------------------------------------------------- x 100
                            CAPACITY_MB - MAX_REDUCTION_MB

Division by zero cannot happen, as the TSM server does not allow us to reduce
the DB below one partition.
Now the query for this percentage would be:
select cast(((CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) ) -
   / (CAPACITY_MB - MAX_REDUCTION_MB) * 100 -
   as decimal(9,5)) as "Allocation waste [%]" from db

And the final big-big query would look like:
select cast(USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100 -
   as decimal (20,3)) as "Unused page parts [pages]", -
   cast(100 * (USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100) -
   / USED_PAGES as decimal(9,5)) as "Page fragmentation [%]", -
   cast( (CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) as decimal (10,2)) -
   as "Overallocated space [MB]", -
   cast(((CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) ) -
   / (CAPACITY_MB - MAX_REDUCTION_MB) * 100 -
   as decimal(9,5)) as "Allocation waste [%]" from db

If someone has already invented these formulae, I would congratulate him/her.
Even if I am the first who dared to do this hard work, there is no Nobel
prize for mathematics :-))

Zlatko Krastev
IT Consultant






Remco Post [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
20.10.2003 13:30
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: Online DB Reorg


On Sat, 18 Oct 2003 14:35:09 -0400
Talafous, John G. [EMAIL PROTECTED] wrote:


 Remco,
   Would you be willing to share your SQL query that reports on DB
 fragmentation?


I was already looking at Eric (he probably saved my thingy somewhere
useful, I just saved it in my sent-mail folder), here it is...

select cast((100 - ( cast(MAX_REDUCTION_MB as float) * 256 ) / -
(cast(USABLE_PAGES as float) - cast(USED_PAGES as float) ) * 100) as -
decimal(4,2)) as percent_frag from db

Note that I still think this is one of the more useless queries I've ever
built...

 Thanks to all,
 John G. Talafous  IS Technical Principal
 The Timken CompanyGlobal Software Support
 P.O. Box 6927 Data Management
 1835 Dueber Ave. S.W. Phone: (330)-471-3390
 Canton, Ohio USA  44706-0927  Fax  : (330)-471-4034
 [EMAIL PROTECTED]   

Re: Move data

2003-10-20 Thread Gill, Geoffrey L.
I have used the MOVE DATA without any problems.  Usually it is
because of a perceived problem with the offsite tape.  But it should work
for what you want to do.
Why doesn't reclamation get this done for you?
Matt

That is a good question. There are no errors showing in the logs for any of
the tapes. Sometimes when I come in in the morning reclamation is still
running on the offsite pool, yet I still seem to find tapes in a 30% filling
status out there for long periods of time. Is there a better setting for a
reclamin_on script I should use that would better force these to another
tape?

I will surely give it a go and see if this helps. I hate to see so many 40%
and lower filling tapes out there when I could have them here as scratch.


Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154


Re: Database fragmentation formula (was Re: Online DB Reorg)

2003-10-20 Thread Wheelock, Michael D
Hi,

First off, let me say that this a wonderfully explained set of formulas and
analysis.

Isn't this getting a little off the mark though?  Last I checked, almost
every database on the planet (yes, even Pervasive SQL), when allocating
pages/extents, leaves an amount of space unutilized at the end.  In fact, if
you do a reorg in SQL Server, it specifically asks how much space you want
to remain free in each page.  Now why would you want that?  So that when you
add a row to a table with a clustered index (i.e. a primary key, where the
table is physically ordered the same as the index) the database does not
have to add an extent at the end of the space to house the new row.  This
cuts down on logical fragmentation, which is a far larger killer of databases
than the fragmentation that these formulas show.  By these formulas, every
single one of my SQL databases is 25% fragmented (why? because every Sunday
they do online reorgs to fix their logical fragmentation).  Logical
fragmentation turns large sequential reads into large random reads.

These are principles from Oracle and SQL server and may not apply to the TSM
database, but as a relational database, I don't know why they wouldn't.

Oh, and I don't know of a utility that can give you the info that I was
talking about for TSM, but the equivalent can be obtained in SQL by doing a
(dbcc showcontig (tablename)).

My $0.02

Michael Wheelock
Integris Health

-Original Message-
From: Zlatko Krastev [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 2:12 PM
To: [EMAIL PROTECTED]
Subject: Database fragmentation formula (was Re: Online DB Reorg)


* ATTENTION *
Those who do not like the mathematics, skip to the end (search for word
select).
*

From my mathematical background this query is not showing very good results.
The formula looks like

              ( MAX_REDUCTION_MB * 256 )
  100 -  ------------------------------- * 100
           (USABLE_PAGES - USED_PAGES)

Thus closer we are to fully used database, less accurate the formula would
be. Moreover if we fill the DB at 100% the result will be division by 0
while the database might be both fragmented and not fragmented. One of our
goals is to fully utilize our resources. In that exact moment the query
would be useless. Also the formula is of no use if there is no legend how to
interpret the numbers. For example our test server DB is giving
PERCENT_FRAG=26.92 while being nearly unfragmented.

So I would dare to recommend another formula (in pages):

            used - needed
  fragm_p = ------------- x 100
                used

i.e. what space is wasted from all used (in %).
The needed space (in pages) can be found if we multiply PCT_UTILIZED by
USABLE_PAGES and divide by 100 to remove percentages:

needed = PCT_UTILIZED x USABLE_PAGES / 100

while the used space (in pages) is readable from USED_PAGES column.
Therefore my final formula would be (the lines may be split by mailers):

            USED_PAGES - PCT_UTILIZED x USABLE_PAGES / 100
  fragm_p = ---------------------------------------------- x 100
                             USED_PAGES

and the final query would be:
select cast(100 * (USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100) -
  / USED_PAGES as decimal(9,5)) as Unused page parts [pages] from db

Now the percentage shows the percentage of wasted space vs. used space. 0%
would mean database is fully populated with no holes, 100% are impossible
(as completely empty pages would not be counted, and 99+% mean each page is
filled with something small just to allocate it.

PART 2.
Beyond how much space is wasted inside pages we would be also interested in
how many empty pages we are losing due to partition-allocation scheme. Again
the math first. Same formula (but now in MB):

            used - needed
  fragm_p = ------------- x 100
                used

Now needed space is derived from CAPACITY_MB field:

needed = PCT_UTILIZED x CAPACITY_MB / 100

while actual usage is the size to which we can reduce the DB:

used = CAPACITY_MB - MAX_REDUCTION_MB

the resulting formula would look like  (the lines may be split by
mailers):

            (CAPACITY_MB - MAX_REDUCTION_MB) - PCT_UTILIZED x CAPACITY_MB / 100
  fragm_p = -------------------------------------------------------------------- x 100
                            CAPACITY_MB - MAX_REDUCTION_MB

Division by zero cannot happen as TSM server does not allow us to reduce the
DB under one partition. Now the query for this percentage would be: select
cast(((CAPACITY_MB - MAX_REDUCTION_MB) -
   - (PCT_UTILIZED * CAPACITY_MB / 100) ) -
   / (CAPACITY_MB - MAX_REDUCTION_MB) * 100 -
   as decimal(9,5)) as Allocation waste [%] from db

And the final big-big query would look like:
select cast(USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100 -
   as decimal (20,3)) as Unused page parts [pages], -
   cast(100 * (USED_PAGES - PCT_UTILIZED * USABLE_PAGES / 100) -
   / USED_PAGES as decimal(9,5)) as Page fragmentation [%], -
   cast( (CAPACITY_MB - 

Re: Move data

2003-10-20 Thread David Longo
Try running your reclamation at, say, 90% and see if that doesn't
free a bunch up.  You may have so many that it is trying to reclaim,
say, 100 tapes and will take forever as more tapes are added each day.

Try 90% for a couple of days.



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 10/20/03 03:32PM 
I have used the MOVE DATA without any problems.  Usually it is
because of a perceived problem with the offsite tape.  But it should work
for what you want to do.
Why doesn't reclamation get this done for you?
Matt

That is a good question. There are no errors showing in the logs for any of
the tapes. Sometimes when I come in in the morning reclamation is still
running on the offsite pool, yet I still seem to find tapes in a 30% filling
status out there for long periods of time. Is there a better setting for a
reclamin_on script I should use that would better force these to another
tape?

I will surely give it a go and see if this helps. I hate to see so many 40%
and lower filling tapes out there when I could have them here as scratch.


Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED] 
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Re: Move data

2003-10-20 Thread David E Ehresman
There are no errors showing in the logs for any of the tapes. Sometimes
when I come in in the morning, reclamation is still running on the
offsite pool.

This sounds like you are setting the reclaim level too low.  Start by
setting it higher, e.g. 90.  If reclaim is able to process that in your
time frame, then take it a step lower, 85 or 80.  Keep stepping down
until you get to 60 or some point above 60 that takes all of your
allocated time.
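(Illustrative only; substitute your own copy pool name:)

update stgpool OFFSITE_COPY reclaim=90

and then step the value down over the following days as described above.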

Remember that when reclaiming offsite tapes, the tapes are mostly not
emptied until the reclaim process completes.  That is why it is important
not to set the reclaim level so low that it does not complete.  The
reason is that for offsite tapes, TSM does not read all of the onsite
tapes needed to reclaim a single offsite tape and then move on to the
next.  When reclaiming offsite tapes, TSM minimizes the number of tape
mounts.  So when an onsite tape is mounted, all of the data on that
onsite tape for all of the offsite tapes that meet the reclaim criteria
is moved before mounting the next onsite tape.  Since the data for a
given offsite tape is usually spread over many onsite tapes, TSM has to
work its way through all of them before the offsite tape is logically
emptied.

David


Re: Database fragmentation formula (was Re: Online DB Reorg)

2003-10-20 Thread Wayne T. Smith
Hi Zlatko,

If I plug into your formula, I get ...

   Unused page parts [pages]:   1492.648
   Page fragmentation [%]:       0.81294
   Overallocated space [MB]:    14396.60
   Allocation waste [%]:        95.29128

(clearly under utilized).

But (ADSM V3.1(ancient)) shows ...

Available Space (MB):      16,380
Assigned Capacity (MB):    15,136
Maximum Extension (MB):     1,244
Maximum Reduction (MB):        28
Page Size (bytes):          4,096
Total Usable Pages:     3,874,816
Used Pages:               183,609
Pct Util:                     4.7
Max. Pct Util:               38.5
This concurs with your utilization calculation, but note I have no DB
reduction possible.  I'm just noting this strangeness (fragmentation!?)
for you, since my ADSM is so old and is being retired.
Cheers and thanks for all the great insight you give on ADSM-L, wayne
--
Wayne T. Smith -- [EMAIL PROTECTED] -- University of Maine System -- UNET


SQL select problems

2003-10-20 Thread Simeon Johnston
I'm trying to run some select statements and get a summary of the bytes
transferred during a backup.
No matter what I use, the bytes transferred always = 0.  In the msi GUI
it all looks good, but the dsmadmc select queries come up w/ 0.

Is this a known problem or am I doing something wrong?  Why does it
always say 0?

Is there another table other than 'summary' that will give me a better
idea of bytes transferred?
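For reference, the kind of select being attempted probably looks something
like this (a sketch against the standard SUMMARY table; the 24-hour window
and the column list are arbitrary):

select start_time, activity, entity, bytes from summary -
   where activity='BACKUP' and start_time>current_timestamp - 24 hours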

I'm slightly worried right now...  Basically, WTF is going on?

sim


Re: SQL to determine what offsite volumes are needed for move data

2003-10-20 Thread Zlatko Krastev
-- This list of tapes is used by the operators to go and fetch the tapes to
be brought onsite.

I cannot figure it out! For what reason would you prefer to add more risk to
your offsite copies by bringing them onsite? Before the new copies reach the
vault you have *no* offsite copies (or fewer copies than designed).
I can accept Richard's suggestion, assuming he does the delete and the
subsequent stgpool backup *before* retrieving the volumes onsite. But in
that scenario, tracking what is to be retrieved might be an error-prone
process.
Yes, TSM has rather complicated handling of offsite copies, but this
is done so intentionally!!

Zlatko Krastev
IT Consultant






Richard Sims [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
16.10.2003 18:34
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject:Re: SQL to determine what offsite volumes are needed for move 
data


...
Our tape copy pools are kept offsite.  Reclamation for these pools is
handled by leaving the TSM reclamation threshhold at 100% so that he
never
does reclaims on his own.  On a regular basis, we run a job that queries
the server for copy pool tapes with a reclamation threshhold greater than
'n' percent.  This list of tapes is used by the operators to go and fetch
the tapes to be brought onsite.  They then run a job that updates those
tapes to access=readwrite, and issues a 'move data' command for each
tape.

Now the problem.  Some of these 'move data' processes treat the volume
that is being emptied as 'offsite', even though the volume has been
loaded
into the library and its access updated to readwrite.  I'm pretty sure
the
reason for this is that the volumes in question have files at the
beginning and/or end that span to other copy pool volumes which are
themselves still offsite.
...

Bill - I gather that you are not using DRM.  I started out doing much the
same as you, in our non-DRM offsite work.  Then I realized that I was
making it much more complicated than it need be...

You can greatly simplify the task and eliminate the problems you are
experiencing with the inevitable spanning issue by just doing
 DELete Volume ... DISCARDdata=Yes
The next Backup Stgpool will automatically regenerate an offsite volume
with that data, and pack it much fuller than Move Data can.  It's a win
all around.
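(A sketch of that sequence, with placeholder volume and pool names:)

delete volume OFF1234 discarddata=yes
backup stgpool TAPEPOOL OFFSITE_COPY maxprocess=2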

  Richard Sims http://people.bu.edu/rbs


TSM Storage Pool Hierarchy Question

2003-10-20 Thread Curt Watts
Alright, time to ask the experts!

Essentially, I'm trying to have our remote TSM servers (satellite
locations) utilize the main TSM server as the next storage pool in their
hierarchy.

The hierarchy will eventually look like this:

Remote Server (RS) Disk Cache
  -Main Server (MS) Disk Cache
 -MS Tape Pool
 -MS Offsite Copy Pool

Questions I have:
1) Is this possible?
2) Can I do this without the DR module?
3) How does the RS back up its database - because it won't back up to
the local disk?
3b) Should the RS back up its DB directly to tape? And if so, how is it
possible to share the tape library with no NAS?
4) Should I just break down and put some tape drives out there to
handle the DBs?

Current setup:
  MS: W2k sp4, TSM v4.3.2, IBM 3853 w/ 54 slots and 2 LTO drives.
  RS: W2k sp4, TSM v5.2.0
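For question 1, the usual mechanism for chaining one TSM server's storage
into another's is server-to-server virtual volumes, which do not require the
DR module. A rough sketch, with every name, password and address invented for
illustration:

(on the main server MS; the node name must match the remote server's own
 server name)
register node RS remotepw type=server

(on the remote server RS)
define server MS password=remotepw hladdress=ms.example.com lladdress=1500
define devclass ms_virtual devtype=server servername=MS
define stgpool rs_remotepool ms_virtual maxscratch=100
update stgpool rs_diskpool nextstgpool=rs_remotepool
backup db devclass=ms_virtual type=full

The last line is one possible answer to question 3: the remote database
backup can also be sent to the SERVER device class, so it lands on the main
server rather than on local disk or tape.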

Thanks everyone.

Curt Watts

___
Curt Watts
Network Analyst, Capilano College
[EMAIL PROTECTED]