Re: Speeding up my SQL query

2008-05-16 Thread Richard Sims

On May 16, 2008, at 10:02 AM, Choudarapu, Ramakrishna (GTS) wrote:


...
Am I right, Richard?


Looks good.
Thanks for posting that - I had to run to a meeting.

   Richard


Re: Speeding up my SQL query

2008-05-16 Thread Choudarapu, Ramakrishna (GTS)
Richard meant that you should update your query to:

select -
   actlog.nodename as "Node Name", -
   substr(char(actlog.date_time), 1, 16) as "Date/Time", -
   substr(actlog.message, 26) as "Message" -
from actlog,nodes -
where -
actlog.nodename=nodes.node_name -
and actlog.msgno in (4005,4007,4018,4037,4987) -
and (actlog.date_time between '%s' and '%s') -
and (nodes.contact like 'Windows%%') -
order by actlog.nodename

Am I right, Richard?




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Loon, E.J. van - SPLXM
Sent: Friday, May 16, 2008 9:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Speeding up my SQL query


Hi Richard!
Thank you very much for your reply!
Since I'm not much of an SQL wizard, could you please help me with
rebuilding the query?
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: vrijdag 16 mei 2008 15:08
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Speeding up my SQL query

Eric -

One thing you can do is use the SQL clause IN rather than a succession
of OR conditions to improve the query performance of testing for
multiple values.

   Richard Sims
**
For information, services and offers, please visit our web site:
http://www.klm.com. This e-mail and any attachment may contain
confidential and privileged material intended for the addressee
only. If you are not the addressee, you are notified that no part
of the e-mail or any attachment may be disclosed, copied or
distributed, and that any other action related to this e-mail or
attachment is strictly prohibited, and may be unlawful. If you have
received this e-mail by error, please notify the sender immediately
by return e-mail, and delete this message. 

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries
and/or its employees shall not be liable for the incorrect or
incomplete transmission of this e-mail or any attachments, nor
responsible for any delay in receipt.
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal
Dutch Airlines) is registered in Amstelveen, The Netherlands, with
registered number 33014286 
**


This message w/attachments (message) may be privileged, confidential or 
proprietary, and if you are not an intended recipient, please notify the 
sender, do not use or share it and delete it. Unless specifically indicated, 
this message is not an offer to sell or a solicitation of any investment 
products or other financial product or service, an official confirmation of any 
transaction, or an official statement of Merrill Lynch. Subject to applicable 
law, Merrill Lynch may monitor, review and retain e-communications (EC) 
traveling through its networks/systems. The laws of the country of each 
sender/recipient may impact the handling of EC, and EC may be archived, 
supervised and produced in countries other than the country in which you are 
located. This message cannot be guaranteed to be secure or error-free. This 
message is subject to terms available at the following link: 
http://www.ml.com/e-communications_terms/. By messaging with Merrill Lynch you 
consent to the foregoing.



Re: Speeding up my SQL query

2008-05-16 Thread Thomas Denier
-Eric van Loon wrote: -

>Hi *SM-ers!
>My TSM Operational Reporter creates a daily report for our Windows
>guys.
>Part of this report is a SQL query which returns all missed files.
>This
>is the query:
>
>select -
>   actlog.nodename as "Node Name", -
>   substr(char(actlog.date_time), 1, 16) as "Date/Time", -
>   substr(actlog.message, 26) as "Message" -
>from actlog,nodes -
>where -
>actlog.nodename=nodes.node_name -
>and (actlog.msgno=4005 or actlog.msgno=4007 or actlog.msgno=4018 or
>actlog.msgno=4037 or actlog.msgno=4987) -
>and (actlog.date_time between '%s' and '%s') -
>and (nodes.contact like 'Windows%%') -
>order by actlog.nodename
>
>The problem is that the query is taking more than 3 hours to
>complete,
>probably because it's using non-indexed database tables.
>Does anyone know how to make this query more efficient?

In my experience, sub-queries perform better than joins. With
this approach, you would select appropriate fields from the
ACTLOG table, and one of the criteria in the WHERE clause would
be:

nodename in (select node_name from nodes where contact like 'Windows%')
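Combining the two suggestions in this thread (the IN-list and the sub-query in place of the join), Eric's query might be restructured as in the following sketch. It keeps the report's %s time placeholders and the admin console's '-' continuation style; it is untested against a live TSM server.

```
select -
   actlog.nodename as "Node Name", -
   substr(char(actlog.date_time), 1, 16) as "Date/Time", -
   substr(actlog.message, 26) as "Message" -
from actlog -
where actlog.msgno in (4005,4007,4018,4037,4987) -
and (actlog.date_time between '%s' and '%s') -
and actlog.nodename in -
   (select node_name from nodes where contact like 'Windows%%') -
order by actlog.nodename
```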


Re: Speeding up my SQL query

2008-05-16 Thread Loon, E.J. van - SPLXM
Hi Richard!
Thank you very much for your reply!
Since I'm not much of an SQL wizard, could you please help me with
rebuilding the query?
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: vrijdag 16 mei 2008 15:08
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Speeding up my SQL query

Eric -

One thing you can do is use the SQL clause IN rather than a succession
of OR conditions to improve the query performance of testing for
multiple values.

   Richard Sims


Re: Speeding up my SQL query

2008-05-16 Thread Loon, E.J. van - SPLXM
Hi Rama!
TSM Operational Reporter only supports SQL queries...
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Choudarapu, Ramakrishna (GTS)
Sent: vrijdag 16 mei 2008 15:17
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Speeding up my SQL query

Eric,

Did you try Q ACTLOG instead, using the NODE, MSG, BEGINTIME,
and ENDTIME parameters? It may be faster than the select, but the
results may not be as straightforward and detailed as your select...

Regards, Rama
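A sketch of that approach, with a hypothetical node name and times; note that MSGNO takes a single message number per invocation, so the report's five message numbers would need five queries:

```
q actlog begindate=today-1 begintime=18:00 endtime=06:00 msgno=4005 nodename=SOMENODE
```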

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Loon, E.J. van - SPLXM
Sent: Friday, May 16, 2008 9:01 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Speeding up my SQL query


Hi *SM-ers!
My TSM Operational Reporter creates a daily report for our Windows guys.
Part of this report is a SQL query which returns all missed files. This
is the query:

select -
   actlog.nodename as "Node Name", -
   substr(char(actlog.date_time), 1, 16) as "Date/Time", -
   substr(actlog.message, 26) as "Message" -
from actlog,nodes -
where -
actlog.nodename=nodes.node_name -
and (actlog.msgno=4005 or actlog.msgno=4007 or actlog.msgno=4018 or
actlog.msgno=4037 or actlog.msgno=4987) -
and (actlog.date_time between '%s' and '%s') -
and (nodes.contact like 'Windows%%') -
order by actlog.nodename

The problem is that the query is taking more than 3 hours to complete,
probably because it's using non-indexed database tables.
Does anyone know how to make this query more efficient?
Thank you VERY much for your help in advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: Speeding up my SQL query

2008-05-16 Thread Choudarapu, Ramakrishna (GTS)
Eric,

Did you try Q ACTLOG instead, using the NODE, MSG, BEGINTIME,
and ENDTIME parameters? It may be faster than the select, but the
results may not be as straightforward and detailed as your select...

Regards, Rama

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Loon, E.J. van - SPLXM
Sent: Friday, May 16, 2008 9:01 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Speeding up my SQL query


Hi *SM-ers!
My TSM Operational Reporter creates a daily report for our Windows guys.
Part of this report is a SQL query which returns all missed files. This
is the query:

select -
   actlog.nodename as "Node Name", -
   substr(char(actlog.date_time), 1, 16) as "Date/Time", -
   substr(actlog.message, 26) as "Message" -
from actlog,nodes -
where -
actlog.nodename=nodes.node_name -
and (actlog.msgno=4005 or actlog.msgno=4007 or actlog.msgno=4018 or
actlog.msgno=4037 or actlog.msgno=4987) -
and (actlog.date_time between '%s' and '%s') -
and (nodes.contact like 'Windows%%') -
order by actlog.nodename

The problem is that the query is taking more than 3 hours to complete,
probably because it's using non-indexed database tables.
Does anyone know how to make this query more efficient?
Thank you VERY much for your help in advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines





Re: Speeding up my SQL query

2008-05-16 Thread Richard Sims

Eric -

One thing you can do is use the SQL clause IN rather than a succession
of OR conditions to improve the query performance of testing for
multiple values.

  Richard Sims


Speeding up my SQL query

2008-05-16 Thread Loon, E.J. van - SPLXM
Hi *SM-ers!
My TSM Operational Reporter creates a daily report for our Windows guys.
Part of this report is a SQL query which returns all missed files. This
is the query:

select -
   actlog.nodename as "Node Name", -
   substr(char(actlog.date_time), 1, 16) as "Date/Time", -
   substr(actlog.message, 26) as "Message" -
from actlog,nodes -
where -
actlog.nodename=nodes.node_name -
and (actlog.msgno=4005 or actlog.msgno=4007 or actlog.msgno=4018 or
actlog.msgno=4037 or actlog.msgno=4987) -
and (actlog.date_time between '%s' and '%s') -
and (nodes.contact like 'Windows%%') -
order by actlog.nodename

The problem is that the query is taking more than 3 hours to complete,
probably because it's using non-indexed database tables.
Does anyone know how to make this query more efficient?
Thank you VERY much for your help in advance!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: My SQL

2005-02-15 Thread Prather, Wanda
No.  Unless something has changed in TSM 5.3 client support, subfile
backups won't process a file larger than 2 GB.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Warren, Matthew (Retail)
Sent: Tuesday, February 15, 2005 5:33 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL


Could shutting down MySQL then using subfile backups be a way to achieve
it?

_-'-_
  -|-


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ung Yi
Sent: Monday, February 14, 2005 1:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL

Just thinking out loud.
I would use a pre-job command to back up the MySQL database to disk, then
have TSM back up the dump file.
MySQL comes with a backup command called mysqldump.




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
John
E. Vincent
Sent: Saturday, February 12, 2005 11:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL

John Monahan wrote:
> "ADSM: Dist Stor Manager"  wrote on 02/11/2005
> 11:26:44 AM:
>
>
>>On Feb 11, 2005, at 11:51 AM, Joe Crnjanski wrote:
>>
>>
>>>Another simple question,
>>>
>>>HOW we can backup My SQL database.
>>>
>>>I have a customer with around 5GB MySQL database. Backup is over the 
>>>internet, so dump file backup is out of question.
>>
>>A not-so-simple answer...which is:
>>"Hmmm... MySQL has an API, and TSM has an API, so an enterprising 
>>customer programmer could..."
>>
>>I'm kind of surprised to not heard of someone producing a "Customer 
>>Data Protection" client for MySQL, given the potential. There might be

>>one out there.
>>
>> Richard Sims
>
>
>
>
> Well, there just happens to be a new draft redpiece on this topic.
> "Backing Up Linux Databases with the TSM API"
>
> http://www.redbooks.ibm.com/redpieces/abstracts/redp3980.html?Open
>
> __
> John Monahan
> Senior Consultant Enterprise Solutions Computech Resources, Inc.
> Office: 952-833-0930 ext 109
> Cell: 952-221-6938
> http://www.computechresources.com

Interestingly enough, we've just moved our data warehouse from MySQL to
PostgreSQL and I was investigating something similar. PGSQL will
actually let
you archive log files with a userexit command similar to
DB2 and deal with return codes. I was going to attempt to recompile a
DB2 userexit and use that as a log archiving command. I was just wishing
there was something similar to db2adutl though.

It looks like adsmpipe has that functionality as well.

This is a really good move for IBM because there is a market waiting to
be
exploited by TSM in the opensource database arena as some enterprising
intern
has just documented.


___ Disclaimer Notice __
This message and any attachments are confidential and should only be
read by those to whom they are addressed. If you are not the intended
recipient, please contact us, delete the message from your computer and
destroy any copies. Any distribution or copying without our prior
permission is prohibited.

Internet communications are not always secure and therefore Powergen
Retail Limited does not accept legal responsibility for this message.
The recipient is responsible for verifying its authenticity before
acting on the contents. Any views or opinions presented are solely those
of the author and do not necessarily represent those of Powergen Retail
Limited. 

Registered addresses:

Powergen Retail Limited, Westwood Way, Westwood Business Park, Coventry,
CV4 8LG.
Registered in England and Wales No: 3407430

Telephone +44 (0) 2476 42 4000
Fax +44 (0) 2476 42 5432


Re: My SQL

2005-02-15 Thread Warren, Matthew (Retail)
Could shutting down MySQL then using subfile backups be a way to achieve
it?

_-'-_
  -|-


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ung Yi
Sent: Monday, February 14, 2005 1:45 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL

Just thinking out loud.
I would use a pre-job command to back up the MySQL database to disk, then
have TSM back up the dump file.
MySQL comes with a backup command called mysqldump.




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
John
E. Vincent
Sent: Saturday, February 12, 2005 11:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL

John Monahan wrote:
> "ADSM: Dist Stor Manager"  wrote on 02/11/2005
> 11:26:44 AM:
>
>
>>On Feb 11, 2005, at 11:51 AM, Joe Crnjanski wrote:
>>
>>
>>>Another simple question,
>>>
>>>HOW we can backup My SQL database.
>>>
>>>I have a customer with around 5GB MySQL database. Backup is over the 
>>>internet, so dump file backup is out of question.
>>
>>A not-so-simple answer...which is:
>>"Hmmm... MySQL has an API, and TSM has an API, so an enterprising 
>>customer programmer could..."
>>
>>I'm kind of surprised to not heard of someone producing a "Customer 
>>Data Protection" client for MySQL, given the potential. There might be

>>one out there.
>>
>> Richard Sims
>
>
>
>
> Well, there just happens to be a new draft redpiece on this topic.
> "Backing Up Linux Databases with the TSM API"
>
> http://www.redbooks.ibm.com/redpieces/abstracts/redp3980.html?Open
>
> __
> John Monahan
> Senior Consultant Enterprise Solutions Computech Resources, Inc.
> Office: 952-833-0930 ext 109
> Cell: 952-221-6938
> http://www.computechresources.com

Interestingly enough, we've just moved our data warehouse from MySQL to
PostgreSQL and I was investigating something similar. PGSQL will
actually let
you archive log files with a userexit command similar to
DB2 and deal with return codes. I was going to attempt to recompile a
DB2 userexit and use that as a log archiving command. I was just wishing
there was something similar to db2adutl though.

It looks like adsmpipe has that functionality as well.

This is a really good move for IBM because there is a market waiting to
be
exploited by TSM in the opensource database arena as some enterprising
intern
has just documented.




Re: My SQL

2005-02-14 Thread Ung Yi
Just thinking out loud.
I would use a pre-job command to back up the MySQL database to disk, then
have TSM back up the dump file.
MySQL comes with a backup command called mysqldump.
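That pre-job idea might look like the following sketch. The dump directory and the use of --all-databases are assumptions; the script would be wired in with the scheduler's pre-command mechanism and requires a reachable MySQL server.

```shell
#!/bin/sh
# Hypothetical pre-job script: dump all MySQL databases to disk so the
# scheduled TSM incremental backup picks up the dump file afterwards.
DUMPDIR=/backup/mysql            # assumed dump location
mkdir -p "$DUMPDIR"
mysqldump --opt --all-databases > "$DUMPDIR/all-databases.sql"
```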




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of John
E. Vincent
Sent: Saturday, February 12, 2005 11:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL

John Monahan wrote:
> "ADSM: Dist Stor Manager"  wrote on 02/11/2005
> 11:26:44 AM:
>
>
>>On Feb 11, 2005, at 11:51 AM, Joe Crnjanski wrote:
>>
>>
>>>Another simple question,
>>>
>>>HOW we can backup My SQL database.
>>>
>>>I have a customer with around 5GB MySQL database. Backup is over the 
>>>internet, so dump file backup is out of question.
>>
>>A not-so-simple answer...which is:
>>"Hmmm... MySQL has an API, and TSM has an API, so an enterprising 
>>customer programmer could..."
>>
>>I'm kind of surprised to not heard of someone producing a "Customer 
>>Data Protection" client for MySQL, given the potential. There might be 
>>one out there.
>>
>> Richard Sims
>
>
>
>
> Well, there just happens to be a new draft redpiece on this topic.
> "Backing Up Linux Databases with the TSM API"
>
> http://www.redbooks.ibm.com/redpieces/abstracts/redp3980.html?Open
>
> __
> John Monahan
> Senior Consultant Enterprise Solutions Computech Resources, Inc.
> Office: 952-833-0930 ext 109
> Cell: 952-221-6938
> http://www.computechresources.com

Interestingly enough, we've just moved our data warehouse from MySQL to
PostgreSQL and I was investigating something similar. PGSQL will actually let
you archive log files with a userexit command similar to
DB2 and deal with return codes. I was going to attempt to recompile a
DB2 userexit and use that as a log archiving command. I was just wishing
there was something similar to db2adutl though.

It looks like adsmpipe has that functionality as well.

This is a really good move for IBM because there is a market waiting to be
exploited by TSM in the opensource database arena as some enterprising intern
has just documented.


Re: My SQL

2005-02-12 Thread John E. Vincent
John Monahan wrote:
"ADSM: Dist Stor Manager"  wrote on 02/11/2005
11:26:44 AM:

On Feb 11, 2005, at 11:51 AM, Joe Crnjanski wrote:

Another simple question,
HOW we can backup My SQL database.
I have a customer with around 5GB MySQL database. Backup is over the
internet, so dump file backup is out of question.
A not-so-simple answer...which is:
"Hmmm... MySQL has an API, and TSM has an API, so an enterprising
customer programmer could..."
I'm kind of surprised to not heard of someone producing a "Customer
Data Protection" client for MySQL, given the potential. There might be
one out there.
Richard Sims


Well, there just happens to be a new draft redpiece on this topic.
"Backing Up Linux Databases with the TSM API"
http://www.redbooks.ibm.com/redpieces/abstracts/redp3980.html?Open
__
John Monahan
Senior Consultant Enterprise Solutions
Computech Resources, Inc.
Office: 952-833-0930 ext 109
Cell: 952-221-6938
http://www.computechresources.com
Interestingly enough, we've just moved our data warehouse from MySQL to
PostgreSQL and I was investigating something similar. PostgreSQL will
actually let you archive log files with a user-exit command, similar to
DB2, and deal with return codes. I was going to attempt to recompile a
DB2 user exit and use that as a log-archiving command. I was just wishing
there was something similar to db2adutl, though.
It looks like adsmpipe has that functionality as well.
This is a really good move for IBM, because there is a market waiting to
be exploited by TSM in the open-source database arena, as some
enterprising intern has just documented.


Re: My SQL

2005-02-12 Thread Richard Sims
John Monahan wrote:

>Well, there just happens to be a new draft redpiece on this topic.
>"Backing Up Linux Databases with the TSM API"
>
>http://www.redbooks.ibm.com/redpieces/abstracts/redp3980.html?Open

Great find, John - fresh off the presses.
Amusing to see the old adsmpipe live on. Richard Sims


Re: My SQL

2005-02-12 Thread John Monahan
"ADSM: Dist Stor Manager"  wrote on 02/11/2005
11:26:44 AM:

> On Feb 11, 2005, at 11:51 AM, Joe Crnjanski wrote:
>
> > Another simple question,
> >
> > HOW we can backup My SQL database.
> >
> > I have a customer with around 5GB MySQL database. Backup is over the
> > internet, so dump file backup is out of question.
>
> A not-so-simple answer...which is:
> "Hmmm... MySQL has an API, and TSM has an API, so an enterprising
> customer programmer could..."
>
> I'm kind of surprised to not heard of someone producing a "Customer
> Data Protection" client for MySQL, given the potential. There might be
> one out there.
>
>  Richard Sims



Well, there just happens to be a new draft redpiece on this topic.
"Backing Up Linux Databases with the TSM API"

http://www.redbooks.ibm.com/redpieces/abstracts/redp3980.html?Open

__
John Monahan
Senior Consultant Enterprise Solutions
Computech Resources, Inc.
Office: 952-833-0930 ext 109
Cell: 952-221-6938
http://www.computechresources.com


Re: My SQL

2005-02-11 Thread Stef Coene
On Friday 11 February 2005 17:51, Joe Crnjanski wrote:
> Another simple question,
>
> HOW we can backup My SQL database.
>
> I have a customer with around 5GB MySQL database. Backup is over the
> internet, so dump file backup is out of question.
I don't know where I got the original script, but this is working nicely
for me:

#!/usr/bin/perl
# mysql-backup v1.0  (c) 26.5.2003 by Andreas Ley  (u) 27.5.2003
# MySQL Online TSM Backup

$mycnf = "/root/mysqladmin.cnf";
$dsmc = "/usr/bin/dsmc" ;
use DBI;
use Getopt::Std;

sub usage {
my $image = $0;
$image =~ s!.*/!!;
print STDERR "Usage: $image [-v] [database [...]]\n";
print STDERR "or:$image [-v] -a\n";
print STDERR "-v  Verbose mode\n";
print STDERR "-a  Lock all databases during backup\n";
print STDERR "Default is to lock databases one by one\n";
exit(1);
}

getopts('aDhxv');

&usage if ($opt_h);

@opts = ("-verbose") if ($opt_v);

open (MYCNF,$mycnf) || die "Can't read $mycnf: $!\n";
while (<MYCNF>) {
chomp;
$user = $' if (/^\s*user\s*=\s*/);
$password = $' if (/^\s*password\s*=\s*/);
}
close (MYCNF);

print STDERR "Connect to database...\n" if ($opt_v);
$dbh = DBI->connect("dbi:mysql:", $user, $password,
{ RaiseError => 1, PrintError => 0, AutoCommit => 1 });

($dummy,$datadir) = $dbh->selectrow_array("show variables like 'datadir'");

sub dsmc {
unless ($opt_v) {
open(OLDOUT,">&STDOUT");
open(STDOUT,">/dev/null");
}
#print "@_\n" ;
#map(s!/+$!!,@_);
system($dsmc,"incremental","-subdir=yes",@opts,@_);
unless ($opt_v) {
open(STDOUT,">&OLDOUT");
}
}

if ($opt_a) {
print STDERR "Lock all tables...\n" if ($opt_v);
$dbh->do("flush tables with read lock");
&dsmc("$datadir//");
} else {
if ($#ARGV >= 0) {
$databases = [@ARGV];  # databases named on the command line
} else {
print STDERR "Read databases...\n" if ($opt_v);
$databases = $dbh->selectcol_arrayref("show databases");
}

for $database (@$databases) {
print STDERR "Lock tables in $database...\n" if ($opt_v);
$tables = $dbh->selectcol_arrayref("show tables from $database");
$temp = join(",", @$tables);
if ( @$tables ) {
    $dbh->do("lock tables ".join(",",map("$database.$_ read",@$tables)));
    &dsmc($datadir.$database.'/');
}
print STDERR "Export tables in $database...\n" if ($opt_v);
# mysqldump is an external command, not SQL, so run it via system()
system("mysqldump", "--tab=/backup/mysql", "--opt", $database);
}
}

$dbh->disconnect;


/root/mysqladmin.cnf:
user=mysql-user
password=password



Stef


Re: My SQL

2005-02-11 Thread Richard Sims
On Feb 11, 2005, at 11:51 AM, Joe Crnjanski wrote:
Another simple question,
HOW we can backup My SQL database.
I have a customer with around 5GB MySQL database. Backup is over the
internet, so dump file backup is out of question.
A not-so-simple answer...which is:
"Hmmm... MySQL has an API, and TSM has an API, so an enterprising
customer programmer could..."
I'm kind of surprised not to have heard of someone producing a "Customer
Data Protection" client for MySQL, given the potential. There might be
one out there.
Richard Sims


Re: My SQL

2005-02-11 Thread Mike
On Fri, 11 Feb 2005, Joe Crnjanski might have said:

> Another simple question,
>
> HOW we can backup My SQL database.
>
> I have a customer with around 5GB MySQL database. Backup is over the
> internet, so dump file backup is out of question.
>
> Regards,
>
>
> Joe Crnjanski
> Infinity Network Solutions Inc.
> Phone: 416-235-0931 x26
> Fax: 416-235-0265
> Web: www.infinitynetwork.com
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
> Del Hoobler
> Sent: Friday, February 11, 2005 11:15 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: My SQL
>
> "ADSM: Dist Stor Manager"  wrote on 02/11/2005
> 10:44:25 AM:
>
> > Hello,
> >
> > Simple question;
> >
> > Can we backup MySQL database with TDP for MS SQL
> >
> > Regards,
> >
> > Joe Crnjanski
>
> Joe,
>
> No.
>
> Del
>

I currently use the 'dump to a file and then back up the file' method.
Is it possible to dump the file locally, gzip it, and then send it
through the net?
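A minimal sketch of that dump-then-compress idea in Python (the mysqldump invocation, credentials handling, and paths are assumptions for illustration; the actual send to TSM would be a separate dsmc call):

```python
import gzip
import shutil
import subprocess

def dump_database(db_name, dump_path):
    """Run mysqldump and write the plain-text dump to dump_path.

    Requires the MySQL client tools on the host; credentials are
    assumed to come from the usual option files.
    """
    with open(dump_path, "wb") as out:
        subprocess.run(["mysqldump", "--opt", db_name],
                       stdout=out, check=True)

def compress_dump(dump_path):
    """gzip the dump, producing dump_path + '.gz' ready for backup."""
    gz_path = dump_path + ".gz"
    with open(dump_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path
```

The compressed file can then be backed up with the usual dsmc run, sending far fewer bytes over the internet than the raw dump would.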


My SQL

2005-02-11 Thread Joe Crnjanski
Another simple question,

HOW we can backup My SQL database.

I have a customer with around 5GB MySQL database. Backup is over the
internet, so dump file backup is out of question.

Regards,


Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web: www.infinitynetwork.com
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Del Hoobler
Sent: Friday, February 11, 2005 11:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: My SQL

"ADSM: Dist Stor Manager"  wrote on 02/11/2005
10:44:25 AM:

> Hello,
>
> Simple question;
>
> Can we backup MySQL database with TDP for MS SQL
>
> Regards,
>
> Joe Crnjanski

Joe,

No.

Del


Re: My SQL

2005-02-11 Thread Del Hoobler
"ADSM: Dist Stor Manager"  wrote on 02/11/2005
10:44:25 AM:

> Hello,
>
> Simple question;
>
> Can we backup MySQL database with TDP for MS SQL
>
> Regards,
>
> Joe Crnjanski

Joe,

No.

Del


Re: My SQL

2005-02-11 Thread Stef Coene
On Friday 11 February 2005 16:44, Joe Crnjanski wrote:
> Hello,
>
> Simple question;
>
> Can we backup MySQL database with TDP for MS SQL
No.


Stef


My SQL

2005-02-11 Thread Joe Crnjanski
Hello,

Simple question;

Can we backup MySQL database with TDP for MS SQL

Regards,

Joe Crnjanski
Infinity Network Solutions Inc.
Phone: 416-235-0931 x26
Fax: 416-235-0265
Web: www.infinitynetwork.com
 


Re: Speeding up my SQL statement

2004-07-02 Thread Paul Ripke
Just running on a hunch - a really good query optimiser would
have been able to do the right thing - but we all have to
remember that TSM is designed first and foremost as a backup
and recovery tool :)
On Friday, Jul 2, 2004, at 18:40 Australia/Sydney, Loon, E.J. van -
SPLXM wrote:
Hi Paul!
Thanks for your suggestion!
I have tried your statement on our test server with just one Oracle
node. It runs for half an hour before results are returned. However,
when I change "where node_name in (select node_name from nodes where
node_name like '%-ORC')" into "where node_name='MYORACLENODE'" the
query takes about 5 seconds to finish.
So, the database is indeed indexed (like Wanda pointed out, thanks
Wanda!) and clauses like LIKE or IN cause TSM not to use these indexes.
I guess it's best to issue one SQL statement per node.
Thank you all for helping!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines
-Original Message-
From: Paul Ripke [mailto:[EMAIL PROTECTED]
Sent: Friday, July 02, 2004 02:20
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement
This is untested, but may be faster:
select node_name, filespace_name, ll_name, date(backup_date) from
backups
where node_name in (select node_name from nodes where node_name like
'%-ORC')
and ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'
This should prevent a full index scan over backups.
On Wednesday, Jun 30, 2004, at 00:19 Australia/Sydney, Prather, Wanda
wrote:
Hi Guys,
The SQL tables we have to play with in TSM are indeed indexed.
If you do select * from syscat.columns, you will see there is a field
called
INDEX-KEYSEQ and INDEX-ORDER.
The BACKUPS table is indexed on NODE_NAME, then FILESPACE_NAME, then
FILESPACE-ID, then STATE, in that order.
Speaking from experience, I can tell you the query DOES run faster if
you
select on an indexed field.
So if you could select on a specific NODE-NAME, you would do a lot
better.
What I don't know is the effect of using a generic match like %ORC%; I
don't
know if that negates the indexing or not.
What I have done in the past was to write a host script that generated
the
list of node_names for me, then iteratively ran the SELECT on the
backups
table with "where node_name=BLAH", sending the output to a file.
Running the individual queries against one node_name at a time
finished in
about 3 hours, where running the entire backups table (as in your
original
query) ran for over 24 (before I gave up and cancelled it!).
Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Richard Sims
Sent: Tuesday, June 29, 2004 8:03 AM
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement

I thought about that, but would that help? If TSM still has to scan
every object for a match, it wouldn't help much... That's the problem,
I don't know how SQL works...
Eric - Your perception is correct: if you scan a table, it will
traverse the
   whole thing.  Whereas the Backups table is the predominant
(=huge)
table in a TSM system, it will take a long time.  Some optimization
can be
had through well-formulated queries, but the opportunities for doing
that
are
rather rare.  The only thing that really helps SQL performance is
indexing,
where short, key columns are also kept in a hash.  Whereas TSM SQL is
an
overlay on a B-tree database, I don't believe there is any indexing
opportunity, and so SQL scans are painful.
Sometimes, the best thing to do is perform Query Backup from the
client
side,
where dedicated logic gets results faster.  It is often possible to
accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.
Another approach to finding flotsam, of course, is to inspect the last
backup
time in filespaces, which helps narrow down the search arena.
   Richard Sims

--
Paul Ripke
--
Paul Ripke
Unix/OpenVMS/TSM/DBA
I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams


Re: Speeding up my SQL statement

2004-07-02 Thread Loon, E.J. van - SPLXM
Hi Paul!
Thanks for your suggestion!
I have tried your statement on our test server with just one Oracle node. It
runs for half an hour before results are returned. However, when I change
"where node_name in (select node_name from nodes where node_name like '%-ORC')"
into "where node_name='MYORACLENODE'" the query takes about 5 seconds to
finish.
So, the database is indeed indexed (like Wanda pointed out, thanks Wanda!)
and clauses like LIKE or IN cause TSM not to use these indexes.
I guess it's best to issue one SQL statement per node.
Thank you all for helping!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines
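Eric's observation, equality on the indexed NODE_NAME column being fast while a LIKE with a leading wildcard forces a scan, can be reproduced in miniature with SQLite (purely a stand-in here; TSM's B-tree engine differs, but the planner behaviour is the same in spirit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE backups (node_name TEXT, ll_name TEXT)")
conn.execute("CREATE INDEX idx_node ON backups (node_name)")

def plan(where):
    # EXPLAIN QUERY PLAN reports whether SQLite searches an index
    # or falls back to scanning the whole table
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM backups WHERE " + where).fetchall()
    return " ".join(str(r[-1]) for r in rows)

indexed = plan("node_name = 'MYORACLENODE'")   # equality on indexed column
wildcard = plan("node_name LIKE '%-ORC'")      # leading wildcard
print(indexed)
print(wildcard)
```

The first plan reports a search using the index; the second reports a scan, which is exactly the 5-seconds-versus-half-an-hour difference Eric measured.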


-Original Message-
From: Paul Ripke [mailto:[EMAIL PROTECTED]
Sent: Friday, July 02, 2004 02:20
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement


This is untested, but may be faster:

select node_name, filespace_name, ll_name, date(backup_date) from
backups
where node_name in (select node_name from nodes where node_name like
'%-ORC')
and ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'

This should prevent a full index scan over backups.

On Wednesday, Jun 30, 2004, at 00:19 Australia/Sydney, Prather, Wanda
wrote:

> Hi Guys,
>
> The SQL tables we have to play with in TSM are indeed indexed.
>
> If you do select * from syscat.columns, you will see there is a field
> called
> INDEX-KEYSEQ and INDEX-ORDER.
>
> The BACKUPS table is indexed on NODE_NAME, then FILESPACE_NAME, then
> FILESPACE-ID, then STATE, in that order.
> Speaking from experience, I can tell you the query DOES run faster if
> you
> select on an indexed field.
> So if you could select on a specific NODE-NAME, you would do a lot
> better.
>
> What I don't know is the effect of using a generic match like %ORC%; I
> don't
> know if that negates the indexing or not.
>
> What I have done in the past was to write a host script that generated
> the
> list of node_names for me, then iteratively ran the SELECT on the
> backups
> table with "where node_name=BLAH", sending the output to a file.
>
> Running the individual queries against one node_name at a time
> finished in
> about 3 hours, where running the entire backups table (as in your
> original
> query) ran for over 24 (before I gave up and cancelled it!).
>
> Wanda Prather
> "I/O, I/O, It's all about I/O"  -(me)
>
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
> Of
> Richard Sims
> Sent: Tuesday, June 29, 2004 8:03 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Speeding up my SQL statement
>
>
>> I thought about that, but would that help? If TSM still has to scan
>> every
>> object for a match, it wouldn't help much... That's the problem, I
>> don't
> know
>> how SQL works...
>
> Eric - Your perception is correct: if you scan a table, it will
> traverse the
>whole thing.  Whereas the Backups table is the predominant
> (=huge)
> table in a TSM system, it will take a long time.  Some optimization
> can be
> had through well-formulated queries, but the opportunities for doing
> that
> are
> rather rare.  The only thing that really helps SQL performance is
> indexing,
> where short, key columns are also kept in a hash.  Whereas TSM SQL is
> an
> overlay on a B-tree database, I don't believe there is any indexing
> opportunity, and so SQL scans are painful.
>
> Sometimes, the best thing to do is perform Query Backup from the client
> side,
> where dedicated logic gets results faster.  It is often possible to
> accomplish
> that by masquerading as each defined TSM node, via VIRTUALNodename.
> Another approach to finding flotsam, of course, is to inspect the last
> backup
> time in filespaces, which helps narrow down the search arena.
>
>Richard Sims
>
>
--
Paul Ripke
Unix/OpenVMS/TSM/DBA
I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams


**
For information, services and offers, please visit our web site: http://www.klm.com. 
This e-mail and any attachment may contain confidential and privileged material 
intended for the addressee only. If you are not the addressee, you are notified that 
no part of the e-mail or any attachment may be disclosed, copied or distributed, and 
that any other action related to this e-mail or attachment is strictly prohibited, and 
may be unlawful. If you have received this e-mail by error, please notify the sender 
immediately by return e-mail, and delete this message. Koninklijke Luchtvaart 
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for 
the incorrect or incomplete transmission of this e-mail or any attachments, nor 
responsible for any delay in receipt.
**


Re: Speeding up my SQL statement

2004-07-01 Thread Paul Ripke
This is untested, but may be faster:
select node_name, filespace_name, ll_name, date(backup_date) from
backups
where node_name in (select node_name from nodes where node_name like
'%-ORC')
and ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'
This should prevent a full index scan over backups.
On Wednesday, Jun 30, 2004, at 00:19 Australia/Sydney, Prather, Wanda
wrote:
Hi Guys,
The SQL tables we have to play with in TSM are indeed indexed.
If you do select * from syscat.columns, you will see there is a field
called
INDEX-KEYSEQ and INDEX-ORDER.
The BACKUPS table is indexed on NODE_NAME, then FILESPACE_NAME, then
FILESPACE-ID, then STATE, in that order.
Speaking from experience, I can tell you the query DOES run faster if
you
select on an indexed field.
So if you could select on a specific NODE-NAME, you would do a lot
better.
What I don't know is the effect of using a generic match like %ORC%; I
don't
know if that negates the indexing or not.
What I have done in the past was to write a host script that generated
the
list of node_names for me, then iteratively ran the SELECT on the
backups
table with "where node_name=BLAH", sending the output to a file.
Running the individual queries against one node_name at a time
finished in
about 3 hours, where running the entire backups table (as in your
original
query) ran for over 24 (before I gave up and cancelled it!).
Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Richard Sims
Sent: Tuesday, June 29, 2004 8:03 AM
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement

I thought about that, but would that help? If TSM still has to scan
every object for a match, it wouldn't help much... That's the problem,
I don't know how SQL works...
Eric - Your perception is correct: if you scan a table, it will
traverse the
   whole thing.  Whereas the Backups table is the predominant
(=huge)
table in a TSM system, it will take a long time.  Some optimization
can be
had through well-formulated queries, but the opportunities for doing
that
are
rather rare.  The only thing that really helps SQL performance is
indexing,
where short, key columns are also kept in a hash.  Whereas TSM SQL is
an
overlay on a B-tree database, I don't believe there is any indexing
opportunity, and so SQL scans are painful.
Sometimes, the best thing to do is perform Query Backup from the client
side,
where dedicated logic gets results faster.  It is often possible to
accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.
Another approach to finding flotsam, of course, is to inspect the last
backup
time in filespaces, which helps narrow down the search arena.
   Richard Sims

--
Paul Ripke
Unix/OpenVMS/TSM/DBA
I love deadlines. I like the whooshing sound they make as they fly by.
-- Douglas Adams
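Paul's subquery form can be exercised on a toy schema with SQLite (table and column names follow the TSM BACKUPS/NODES layout discussed in the thread; the sample rows and the fixed reference date are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (node_name TEXT);
    CREATE TABLE backups (node_name TEXT, filespace_name TEXT,
                          hl_name TEXT, ll_name TEXT, backup_date TEXT);
    INSERT INTO nodes VALUES ('DB1-ORC'), ('FILESRV');
    INSERT INTO backups VALUES
        ('DB1-ORC', '/adsmorc', '//',  'piece1', '2004-01-01'),
        ('DB1-ORC', '/adsmorc', '//',  'piece2', '2004-06-28'),
        ('FILESRV', '/home',    '/x/', 'f1',     '2004-01-01');
""")

# Restrict to Oracle nodes first, then apply the 100-day age filter
# (SQLite's julianday() stands in for TSM's days()).
rows = conn.execute("""
    SELECT node_name, filespace_name, ll_name, date(backup_date)
    FROM backups
    WHERE node_name IN (SELECT node_name FROM nodes
                        WHERE node_name LIKE '%-ORC')
      AND julianday('2004-06-29') - julianday(backup_date) >= 100
      AND hl_name = '//'
""").fetchall()
print(rows)   # only the old DB1-ORC backup piece qualifies
```

Whether the planner can still use an index through the IN subquery is engine-specific, which is the crux of the thread: the result is right either way, but the speed is not.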


Re: Speeding up my SQL statement

2004-06-29 Thread Prather, Wanda
Hi Guys,

The SQL tables we have to play with in TSM are indeed indexed.

If you do select * from syscat.columns, you will see there is a field called
INDEX-KEYSEQ and INDEX-ORDER.

The BACKUPS table is indexed on NODE_NAME, then FILESPACE_NAME, then
FILESPACE-ID, then STATE, in that order.
Speaking from experience, I can tell you the query DOES run faster if you
select on an indexed field.
So if you could select on a specific NODE-NAME, you would do a lot better.

What I don't know is the effect of using a generic match like %ORC%; I don't
know if that negates the indexing or not.

What I have done in the past was to write a host script that generated the
list of node_names for me, then iteratively ran the SELECT on the backups
table with "where node_name=BLAH", sending the output to a file.

Running the individual queries against one node_name at a time finished in
about 3 hours, where running the entire backups table (as in your original
query) ran for over 24 hours (before I gave up and cancelled it!).

Wanda Prather
"I/O, I/O, It's all about I/O"  -(me)
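Wanda's host-script approach can be sketched like this (Python used for illustration; the dsmadmc flags and the admin credentials shown are assumptions to adapt for your server):

```python
import subprocess

def node_query(node_name):
    """Build the per-node SELECT so it hits the NODE_NAME index."""
    return ("select node_name, filespace_name, ll_name, date(backup_date) "
            "from backups where node_name='%s' "
            "and days(current_date) - days(backup_date) >= 100 "
            "and hl_name='//'" % node_name)

def run_per_node(nodes, outfile):
    """Run one indexed query per node via dsmadmc, appending to outfile."""
    with open(outfile, "a") as out:
        for node in nodes:
            subprocess.run(
                ["dsmadmc", "-id=admin", "-password=secret",
                 "-dataonly=yes", node_query(node)],
                stdout=out, check=True)
```

The node list itself can come from a cheap first query against the NODES table, so only the small table pays for the LIKE.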




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Tuesday, June 29, 2004 8:03 AM
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement


>I thought about that, but would that help? If TSM still has to scan every
>object for a match, it wouldn't help much... That's the problem, I don't
know
>how SQL works...

Eric - Your perception is correct: if you scan a table, it will traverse the
   whole thing.  Whereas the Backups table is the predominant (=huge)
table in a TSM system, it will take a long time.  Some optimization can be
had through well-formulated queries, but the opportunities for doing that
are
rather rare.  The only thing that really helps SQL performance is indexing,
where short, key columns are also kept in a hash.  Whereas TSM SQL is an
overlay on a B-tree database, I don't believe there is any indexing
opportunity, and so SQL scans are painful.

Sometimes, the best thing to do is perform Query Backup from the client
side,
where dedicated logic gets results faster.  It is often possible to
accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.
Another approach to finding flotsam, of course, is to inspect the last
backup
time in filespaces, which helps narrow down the search arena.

   Richard Sims


Re: Speeding up my SQL statement

2004-06-29 Thread Loon, E.J. van - SPLXM
Hi Richard!
> Sometimes, the best thing to do is perform Query Backup from the client
side,
where dedicated logic gets results faster.  It is often possible to
accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.

True, but since the Oracle backups are made through the TDP client, a Q
BACKUP from a BA client will return zilch...

> Another approach to finding flotsam, of course, is to inspect the last
backup
time in filespaces, which helps narrow down the search arena.

Also true (and an implemented approach here) but that will only show you if
the backup is working. I'm trying to narrow things down to the database
backup level. I want to see if there are obsolete backup pieces or if Oracle
delete jobs are running fine. A quick scan showed a lot of backup pieces
dating back to February for a specific node. I bet I have to contact the
database guys for this and my guess is that the Oracle delete jobs are not
running on this machine... Wouldn't be the first time...
Thank you very much for your reply!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 14:03
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement


>I thought about that, but would that help? If TSM still has to scan every
>object for a match, it wouldn't help much... That's the problem, I don't
know
>how SQL works...

Eric - Your perception is correct: if you scan a table, it will traverse the
   whole thing.  Whereas the Backups table is the predominant (=huge)
table in a TSM system, it will take a long time.  Some optimization can be
had through well-formulated queries, but the opportunities for doing that
are
rather rare.  The only thing that really helps SQL performance is indexing,
where short, key columns are also kept in a hash.  Whereas TSM SQL is an
overlay on a B-tree database, I don't believe there is any indexing
opportunity, and so SQL scans are painful.

Sometimes, the best thing to do is perform Query Backup from the client
side,
where dedicated logic gets results faster.  It is often possible to
accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.
Another approach to finding flotsam, of course, is to inspect the last
backup
time in filespaces, which helps narrow down the search arena.

   Richard Sims




Re: Speeding up my SQL statement

2004-06-29 Thread Richard Sims
>I thought about that, but would that help? If TSM still has to scan every
>object for a match, it wouldn't help much... That's the problem, I don't know
>how SQL works...

Eric - Your perception is correct: if you scan a table, it will traverse the
   whole thing.  Whereas the Backups table is the predominant (=huge)
table in a TSM system, it will take a long time.  Some optimization can be
had through well-formulated queries, but the opportunities for doing that are
rather rare.  The only thing that really helps SQL performance is indexing,
where short, key columns are also kept in a hash.  Whereas TSM SQL is an
overlay on a B-tree database, I don't believe there is any indexing
opportunity, and so SQL scans are painful.

Sometimes, the best thing to do is perform Query Backup from the client side,
where dedicated logic gets results faster.  It is often possible to accomplish
that by masquerading as each defined TSM node, via VIRTUALNodename.
Another approach to finding flotsam, of course, is to inspect the last backup
time in filespaces, which helps narrow down the search arena.

   Richard Sims


Re: Speeding up my SQL statement

2004-06-29 Thread Loon, E.J. van - SPLXM
Hi Rene!
I thought about that, but would that help? If TSM still has to scan every
object for a match, it wouldn't help much... That's the problem, I don't
know how SQL works...
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines


-Original Message-
From: Lambelet,Rene,VEVEY,GLOBE Center CSC
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 13:21
To: [EMAIL PROTECTED]
Subject: Re: Speeding up my SQL statement


hi Eric, you could add 

node_name like '%ORC' to the where clause...

best regards,

René LAMBELET
NESTEC  SA
GLOBE - Global Business Excellence
Central Support Center
SD/ESN
Av. Nestlé 55  CH-1800 Vevey (Switzerland) 
tél +41 (0)21 924'35'43   fax +41 (0)21 924'45'89   local
REL-5 01
mailto:[EMAIL PROTECTED]

This message is intended only for the use of the addressee
and may contain information that is privileged and
confidential.


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Tuesday,29. June 2004 12:52
To: [EMAIL PROTECTED]
Subject: Speeding up my SQL statement


Hi *SM-ers!
I'm using the following SQL statement to retrieve obsolete Oracle backup
files:

select node_name, filespace_name, ll_name, date(backup_date) from backups
where ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'

This returns all Oracle backup files, created more than 100 days ago. These
should not exist anymore.
Since this statement scans ALL (millions!!) backup objects for a hit, it
runs for more than a day!
I'm looking for a way to reduce this, but I don't know how to do this.
If I would be able to limit the scan to only the objects belonging to Oracle
nodes (in our shop, the nodename ends with -ORC) it would finish much
quicker, but I don't know how.
Can anybody tell me if this is possible at all?
Thank you very much for any reply in advance!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Re: Speeding up my SQL statement

2004-06-29 Thread Lambelet,Rene,VEVEY,GLOBE Center CSC
hi Eric, you could add 

node_name like '%ORC' to the where clause...

best regards,

René LAMBELET
NESTEC  SA
GLOBE - Global Business Excellence
Central Support Center
SD/ESN
Av. Nestlé 55  CH-1800 Vevey (Switzerland) 
tél +41 (0)21 924'35'43   fax +41 (0)21 924'45'89   local
REL-5 01
mailto:[EMAIL PROTECTED]

This message is intended only for the use of the addressee
and may contain information that is privileged and
confidential.


-Original Message-
From: Loon, E.J. van - SPLXM [mailto:[EMAIL PROTECTED]
Sent: Tuesday,29. June 2004 12:52
To: [EMAIL PROTECTED]
Subject: Speeding up my SQL statement


Hi *SM-ers!
I'm using the following SQL statement to retrieve obsolete Oracle backup
files:

select node_name, filespace_name, ll_name, date(backup_date) from backups
where ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'

This returns all Oracle backup files, created more than 100 days ago. These
should not exist anymore.
Since this statement scans ALL (millions!!) backup objects for a hit, it
runs for more than a day!
I'm looking for a way to reduce this, but I don't know how to do this.
If I would be able to limit the scan to only the objects belonging to Oracle
nodes (in our shop, the nodename ends with -ORC) it would finish much
quicker, but I don't know how.
Can anybody tell me if this is possible at all?
Thank you very much for any reply in advance!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines




Speeding up my SQL statement

2004-06-29 Thread Loon, E.J. van - SPLXM
Hi *SM-ers!
I'm using the following SQL statement to retrieve obsolete Oracle backup
files:

select node_name, filespace_name, ll_name, date(backup_date) from backups
where ((days(current_date) - days(backup_date) >= 100)) and hl_name='//'

This returns all Oracle backup files, created more than 100 days ago. These
should not exist anymore.
Since this statement scans ALL (millions!!) backup objects for a hit, it
runs for more than a day!
I'm looking for a way to reduce this, but I don't know how to do this.
If I would be able to limit the scan to only the objects belonging to Oracle
nodes (in our shop, the nodename ends with -ORC) it would finish much
quicker, but I don't know how.
Can anybody tell me if this is possible at all?
Thank you very much for any reply in advance!!!
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

