From:                   "Ho, Tony" <[EMAIL PROTECTED]>
> I am currently designing Perl DBI code to extract data from tables
> in a Sybase database under UNIX. There are a dozen tables I need to
> extract information from. The biggest tables are ACCOUNTS and
> SUBSCRIBERS: ACCOUNTS has 10 million rows and SUBSCRIBERS has 20
> million rows. SUBSCRIBERS is related to the ACCOUNTS table, as every
> account has subscribers. 
> 
> At the end of the extraction process, I need to end up with 1 or 2
> flat files that show rows of SUBSCRIBER data and the rows of ACCOUNT
> data associated with those subscribers. 
> 
> Which is the better option in terms of performance and reliability: 
> 
> Access the Sybase tables with SELECT+JOIN SQL statements, then order
> and write the results to the overall flat file immediately? 
> 
> OR 
> 
> BCP out the results into multiple files and manipulate/rearrange/order
> them into a single file under Unix. 

I'd leave the joining and sorting to Sybase. The code there is (read 
"should be") heavily optimized, and thus better than anything you 
can come up with in reasonable time.

Optimize the query, add indexes if they help, and select only the 
columns you need. That's IMHO the best you can 
do.
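To make that concrete, a minimal sketch of the "let Sybase do the work" approach in Perl DBI might look like the following. The server name, credentials, column names, and output file name are all placeholders, and the join condition (`s.account_id = a.account_id`) is a guess at your schema; adjust them to match your actual tables.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical DSN and credentials -- replace with your own.
my $dbh = DBI->connect('dbi:Sybase:server=MYSERVER', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# Let Sybase do the join and the ordering; select only the columns
# the flat file actually needs (these column names are assumptions).
my $sth = $dbh->prepare(q{
    SELECT a.account_id, a.account_name,
           s.subscriber_id, s.subscriber_name
    FROM   ACCOUNTS    a
    JOIN   SUBSCRIBERS s ON s.account_id = a.account_id
    ORDER BY a.account_id, s.subscriber_id
});
$sth->execute;

open my $out, '>', 'extract.txt' or die "extract.txt: $!";

# fetchrow_arrayref streams one row at a time, so memory stays flat
# even over 20 million rows.
while (my $row = $sth->fetchrow_arrayref) {
    print {$out} join("\t", map { defined $_ ? $_ : '' } @$row), "\n";
}

close $out or die "extract.txt: $!";
$dbh->disconnect;
```

The one-pass, ordered SELECT means you never have to merge or re-sort files on the Unix side, which is where the BCP approach tends to get complicated.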

Jenda

P.S.: I always TRY to do as much work as possible as near to the 
data as possible, and fetch from the database only the data I'll need 
to show to the user, write to a file, or use to identify an object in 
the database later.

=========== [EMAIL PROTECTED] == http://Jenda.Krynicky.cz ==========
There is a reason for living. There must be. I've seen it somewhere.
It's just that in the mess on my table ... and in my brain.
I can't find it.
                                        --- me
