So, how can I speed up the file read process: get a faster machine? It seems
to me that a faster approach would be to store, for each indexed field in the
query, a list of file positions for the records that match. Then, during the
fetch, read the rest of the fields to see if everything else matches. That
way, instead of reading every record in the table, you can skip directly to
each file position in the list. This is just a thought, but it's how I was
thinking of modifying the code. If there are any inherent problems with this
approach, I'd be more than happy to hear them.
Phillip
"Sterin, Ilya" wrote:
> Actually, most of the drivers work this way. They execute() your query,
> set the cursor at the first record, and get ready to fetch(). Some DBs,
> like Oracle, actually use a buffer and on fetch store the first X rows
> in that buffer for more efficient fetching.
>
> Ilya Sterin
>
> -----Original Message-----
> From: Phillip Perkins
> To: Honza Pazdziora
> Cc: [EMAIL PROTECTED]
> Sent: 05/03/2001 12:58 PM
> Subject: Re: Last fetch takes forever to undef under XBase
>
> Honza Pazdziora wrote:
>
> > On Wed, May 02, 2001 at 06:18:08PM -0400, Phillip Perkins wrote:
> > > Tried that...no good. Still took 43 seconds, I think. I've watched
> > > it zip through all the records until it gets to the end where fetch
> > > returns undef, and that seems to be taking a while.
> > >
> > > I kinda' thought that it might be the connection, too, but obviously
> > > that's not the deal.
> >
> > OK, let's summarize:
> >
> > You have a .dbf file (no memo files?) with cca 900 records, you do
> >
> >     my $sth = $dbh->prepare(q!
> >         select * from file
> >     !);
> >     $sth->execute;
> >     while (my $data = $sth->fetchrow_arrayref) {
> >     }
> >
> > and you get those 900 records in a reasonable amount of time (seconds).
> > But the next (last) fetch call, which just returns undef, takes 20
> > seconds itself. Do I get the situation right?
> >
>
> The table itself has a little more than 3000 records, and the query
> returns only 900 of them. I actually went through the mod code to see
> what the deal was and found out that the mod doesn't actually return
> records once it executes the query. What happens is that when you do the
> fetch, the mod reads each record sequentially until it finds one that
> matches the criteria. So every successive fetch is going to have a time
> lapse before each record is returned. So my problem wasn't that it took
> so long to reach an undef state; it just took forever for the mod to
> reach EOF on the DBF file.
>
> My best option at this point is to sweat it out until I can modify the
> mod code.
> Thanks for everybody's help.
>
> Phillip
>
> >
> > --
> >
> ------------------------------------------------------------------------
> > Honza Pazdziora | [EMAIL PROTECTED] |
> http://www.fi.muni.cz/~adelton/
> > .project: Perl, DBI, Oracle, MySQL, auth. WWW servers, DBD::XBase.
> >
> ------------------------------------------------------------------------