The problem is not how well the SQL runs. It runs in less than a second,
and it should only be executed 12 times an hour.

What I'm trying to do is figure out how the SQL statement got executed 1000
times more often than it should have been (which brought the DB to its knees
for 50 minutes, with nothing able to connect). If there is something in DBI
that is automatically re-executing for some unknown reason, I need to be able
to manage how often it retries, how many times, and how long between
re-executions. The logic in the script does not account for 12,000 executions
in a one-hour period, so I need to figure out how it happened so I can prevent
it from happening in the future.
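
For reference, the kind of control I'd want is something explicit and bounded,
along the lines of the sketch below (the attempt count, delay, and helper name
are all just illustrative; as far as I know DBI itself has no automatic retry):

    use strict;
    use warnings;

    my $MAX_ATTEMPTS = 3;   # hard cap so a failure can't loop forever
    my $RETRY_DELAY  = 5;   # seconds between attempts

    # Explicit, bounded, logged retry around execute, so any
    # re-execution is both visible in the logs and capped.
    sub execute_with_retry {
        my ($sth, @bind) = @_;
        for my $attempt (1 .. $MAX_ATTEMPTS) {
            my $rv = eval { $sth->execute(@bind) };
            return $rv if defined $rv;
            warn sprintf("execute attempt %d/%d failed: %s\n",
                $attempt, $MAX_ATTEMPTS, $@ || $sth->errstr);
            sleep $RETRY_DELAY if $attempt < $MAX_ATTEMPTS;
        }
        die "giving up after $MAX_ATTEMPTS attempts\n";
    }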

There is locking around the Perl script execution so that the script can't be
started if the previous execution has not completed, and the logs show that in
the 50-minute period in question, the script only ran once. It didn't report
any errors, and appeared to eventually exit successfully. The only abnormal
thing is that Oracle Statspack shows the SQL statement in the script executed
12,000 times and was consuming all available DB threads. I've reviewed the DBI
debugging links provided by Brian Fennell and have now added some DBI
debugging to the script in the hope that I can get more information on how DBI
is behaving.
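
Concretely, what I've enabled is DBI's built-in tracing, roughly like this
(the trace level, log path, and DSN/credentials are just placeholders I chose
for this run):

    use DBI;

    # Level 2 traces each prepare/execute/fetch call with its arguments;
    # if something is re-executing the statement, every execute should
    # now show up in the trace file.
    DBI->trace(2, '/tmp/dbi_trace.log');

    my $dbh = DBI->connect('dbi:Oracle:mydb', $user, $pass,
                           { RaiseError => 1 });
    # ... rest of the script unchanged.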

I came to this mailing list because it seems to me that the problem is with
DBI: either I've hit some bug, or my usage is not correct (or there is some
other problem deeper down in the stack).

Cheers,
John

-----Original Message-----
From: Steven Lembark <lemb...@wrkhors.com> 
Sent: Tuesday, 18 February 2020 3:47 AM
To: dbi-users@perl.org
Cc: lemb...@wrkhors.com
Subject: Re: Perl script excessively executing statement



On Thu, 13 Feb 2020 06:58:37 +0000
JohnD Blackburn <johnd.blackb...@au.abb.com> wrote:

> The DBA pulled info from some cache that showed the SQL statement from 
> the script was executed 12610 times.
>
> So if I were to add an “or die $!” statement after the 
> fetchrow_array(), that might fix things?  (or prevent it from trying 
> to continue if there are errors?) Or catch errors after the execute 
> statement?

Q: What are you really trying to accomplish?

Basic rules:

If you need to run SQL more than once, prepare it.

If you are not sure how well it runs, you can evaluate it beforehand -- which
also spots errors and can help track down unused indexes.

As noted, generating a single query, preparing and running it once, then
extracting the rows (preferably as arrays rather than hashes) will be the most
efficient way.
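
A minimal sketch of that shape, assuming $dbh is already connected with
RaiseError on (table, columns, and the bind value are invented for
illustration):

    # Prepare once, execute once, fetch rows as arrays.
    my $sth = $dbh->prepare(
        'SELECT id, status FROM jobs WHERE updated_at > ?'
    );
    $sth->execute($cutoff);

    while ( my @row = $sth->fetchrow_array ) {
        # process @row here; with RaiseError on, a failed execute
        # or fetch dies instead of quietly carrying on.
    }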

The database is going to do a much better job filtering anything it can
locally, in native format, before you convert it to ASCII char strings and
pull the same content across a LAN.
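
As a sketch (table and predicate invented), prefer the first of these to the
second:

    # Let the database do the filtering:
    my $failed = $dbh->selectall_arrayref(
        'SELECT id FROM jobs WHERE status = ?', undef, 'FAILED'
    );

    # ...rather than dragging every row across the wire and
    # filtering it client-side in Perl:
    my $all    = $dbh->selectall_arrayref('SELECT id, status FROM jobs');
    my @failed = grep { $_->[1] eq 'FAILED' } @$all;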



--
Steven Lembark                                        3646 Flora Place
Workhorse Computing                                St. Louis, MO 63110
lemb...@wrkhors.com                                    +1 888 359 3508
