On 2005-02-16 19:44:49 +0900, [EMAIL PROTECTED] wrote:
> On Thu, Feb 03, 2005 at 10:18:43AM +0000, Tim Bunce wrote:
> > On Thu, Feb 03, 2005 at 06:27:05PM +0900, [EMAIL PROTECTED] wrote:
> > > Dear list,
> > > 
> > >   I wrote a simple CGI script using DBD::CSV on a Linux
> > > computer, and then installed it on an iMac G5. Its execution time is
> > > now almost 10 times slower.
[...]
> > The Devel::DProf module (and/or other code profiling modules) may help.
> > 
> 
> Thank you for pointing out this module. I have used it to analyse my script:
> 
> Total Elapsed Time = 63.19465 Seconds
>   User+System Time = 43.46465 Seconds

That's strange. Unless your computer is busy doing something else, it is
waiting almost 20 seconds for I/O. Unless the lines of your CSV file are
really long, I cannot imagine that simply reading a file with 109447
lines can take 20 seconds. (Even less if that is a join over several
files)
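
A quick way to check that is to time a plain read of the file outside
of DBI. A rough sketch (the file name is just a placeholder, adjust it
to your table):

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $t0 = [gettimeofday];
    open my $fh, '<', 'table.csv' or die "open: $!";   # placeholder name
    my $lines = 0;
    $lines++ while <$fh>;
    close $fh;
    printf "read %d lines in %.3f seconds\n", $lines, tv_interval($t0);

If that alone takes many seconds on the iMac, the problem is below DBI
(disk, filesystem, or the perl build); if not, the time is going
somewhere else.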

> Exclusive Times
> %Time ExclSec CumulS #Calls sec/call Csec/c  Name
>  66.5   28.91 36.463 109445   0.0003 0.0003  SQL::Statement::eval_where
[...]
>  2.95   1.284  1.284 109447   0.0000 0.0000  IO::Handle::getline
>  2.29   0.997 46.596      1   0.9965 46.595  SQL::Statement::SELECT
[...]
>       It seems that there is no bottleneck.

Most of the CPU time is spent in SQL::Statement::eval_where. Unless you
can change your query, you probably can't make it much faster, except
perhaps by moving from CSV to a real RDBMS, or at least to SQLite.
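
If you want to try SQLite, a rough sketch of copying the data over with
DBI (the table and column names here are made up, use your own):

    use strict;
    use warnings;
    use DBI;

    my $csv  = DBI->connect('dbi:CSV:f_dir=.', undef, undef,
                            { RaiseError => 1 });
    my $lite = DBI->connect('dbi:SQLite:dbname=data.db', undef, undef,
                            { RaiseError => 1 });

    # create the target table once (adjust columns to your CSV file)
    $lite->do('CREATE TABLE mytable (id INTEGER, name TEXT)');

    my $sel = $csv->prepare('SELECT id, name FROM mytable');  # reads mytable.csv
    my $ins = $lite->prepare('INSERT INTO mytable (id, name) VALUES (?, ?)');

    $sel->execute;
    $lite->begin_work;
    while (my @row = $sel->fetchrow_array) {
        $ins->execute(@row);
    }
    $lite->commit;

After that you can run the same SELECT (including the WHERE clause)
against the SQLite handle and let its engine do the filtering.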

        hp

-- 
   _  | Peter J. Holzer      | If the code is old but the problem is new
|_|_) | Sysadmin WSR / LUGA  | then the code probably isn't the problem.
| |   | [EMAIL PROTECTED]        |
__/   | http://www.hjp.at/   |     -- Tim Bunce on dbi-users, 2004-11-05
