On 26 Oct 2002 06:56:28 -0000 Bharat <[EMAIL PROTECTED]> wrote:

>     I am using Perl 5.6 with Oracle 8i on Red Hat Linux 7.2, and I am
> facing a problem with the speed of processing records.
>     I have around 30K record sets (one record set means the records
> from 30 tables) in the database now; this can go up to 80K.
>     My existing program takes 16 hours to read these sets and create
> an XML and/or Excel file.
>     I prepare a set of queries at the beginning and execute them with
> parameters for every record set, which returns an array of hash
> references for all records in the set; I then write them to either the
> XML or the Excel file.
> 
>     Can this processing time be reduced?
>     Also, will a stored procedure help in this case, and how do I use
> it (please show sample code)?

There are several things you can do.

1.  Use bind_columns() and fetch() instead of using hashes to refer to
individual data columns.
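
For example, a minimal sketch (the connect string and the made-up
"orders" table and its columns are placeholders; substitute your own):

    use DBI;

    my $dbh = DBI->connect('dbi:Oracle:yourdb', 'user', 'password',
                           { RaiseError => 1 });

    my $batch_id = 42;    # whatever identifies one record set

    my $sth = $dbh->prepare(
        'SELECT id, customer, amount FROM orders WHERE batch_id = ?');
    $sth->execute($batch_id);

    # Bind plain scalars to the output columns once; each fetch() then
    # just refreshes the bound variables instead of building a fresh
    # hash (or array) for every row.
    my ($id, $customer, $amount);
    $sth->bind_columns(\$id, \$customer, \$amount);

    while ($sth->fetch) {
        # use $id, $customer and $amount directly here
    }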

2.  Prefetch the smaller tables and keep them in a hash keyed by their
search key.
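
Something like this, assuming a small, made-up lookup table
"status_codes"; one query up front replaces one query per record set:

    # Load the whole lookup table into memory once.
    my %status = map { $_->[0] => $_->[1] }
        @{ $dbh->selectall_arrayref(
            'SELECT code, description FROM status_codes') };

    # Inside the per-record-set loop, a plain hash lookup then replaces
    # a round trip to the database:
    #     my $desc = $status{$code};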

3.  If several large tables are being fetched by the same key (or by a
partial key for some of them), sort them all (in the database) by those
keys and fetch the corresponding row from each at the top of your main
loop; only fetch individual row sets when you have no alternative.  This
requires more Perl logic to keep the data synchronized, but it can
greatly reduce the number of queries against the database, roughly as
sketched below.
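
A rough sketch of that merge, using two made-up tables that share an
ORDER_ID key (every detail row is assumed to have a matching header
row); both cursors are sorted by the key so they can be walked in lock
step:

    my $head = $dbh->prepare(
        'SELECT order_id, customer FROM order_header ORDER BY order_id');
    my $item = $dbh->prepare(
        'SELECT order_id, product, qty FROM order_items ORDER BY order_id');
    $head->execute;
    $item->execute;

    # DBD::Oracle returns column names in upper case by default.
    my $item_row = $item->fetchrow_hashref;
    while (my $head_row = $head->fetchrow_hashref) {
        # Skip any stray detail rows sorted below the current header
        # key (numeric keys here; use eq/lt for string keys).
        $item_row = $item->fetchrow_hashref
            while $item_row && $item_row->{ORDER_ID} < $head_row->{ORDER_ID};

        # Collect every detail row carrying the current header's key.
        my @items;
        while ($item_row && $item_row->{ORDER_ID} == $head_row->{ORDER_ID}) {
            push @items, $item_row;
            $item_row = $item->fetchrow_hashref;
        }

        # build the XML/Excel output for this record set from
        # $head_row and @items
    }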

4.  For simple look-ups, use multi-table joins to let the database organize
the data.  Each query will be more expensive, but there will be a lot fewer
trips to the database.
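
For instance (the table and column names are invented; note the
old-style join syntax, since Oracle 8i does not understand the ANSI
JOIN keywords):

    my $sth = $dbh->prepare(q{
        SELECT o.order_id, o.order_date, c.name, p.description
          FROM orders o, customers c, products p
         WHERE o.batch_id    = ?
           AND c.customer_id = o.customer_id
           AND p.product_id  = o.product_id
    });
    $sth->execute($batch_id);

    while (my ($order_id, $order_date, $name, $description)
               = $sth->fetchrow_array) {
        # one round trip returns the already-combined row
    }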

I have used these techniques to do row-by-row comparisons of dozens of
multimillion-row tables.  The longest-running set took about 6 hours.

-- 
Mac :})
** I normally forward private questions to the appropriate mail list. **
Ask Smarter: http://www.tuxedo.org/~esr/faqs/smart-questions.html
Give a hobbit a fish and he eats fish for a day.
Give a hobbit a ring and he eats fish for an age.


