Thanks for the info.

 

I ended up going with something similar to #2: collect the IDs, then update
them after processing.  I'm working on a way for the process to grab 1000
records, process them, then grab 1000 more, rinse and repeat.
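
Here's a rough sketch of that batch loop using Zend_Db, in case it helps
anyone else.  The filtered flag and the WHERE criteria are placeholders for
whatever marks a record as unprocessed in my schema:

$lastId = 0;
do {
    // Grab the next batch of up to 1000 unprocessed rows, keyed off the
    // primary key so each pass picks up where the previous one ended.
    $select = $db->select()
        ->from($thisDBTable)
        ->where('id > ?', $lastId)
        ->where('filtered = ?', 1)   // hypothetical "needs processing" flag
        ->order('id ASC')
        ->limit(1000);
    $rows = $db->fetchAll($select);  // buffered fetch, so the update below is safe

    $ids = array();
    foreach ($rows as $row) {
        // ... process the record ...
        $ids[]  = $row['id'];
        $lastId = $row['id'];
    }

    // Mark the whole batch as processed in a single UPDATE.
    if (count($ids)) {
        $db->update($thisDBTable, array('filtered' => 0),
                    $db->quoteInto('id IN (?)', $ids));
    }
} while (count($rows) == 1000);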


 

Right now it's taking under 1 minute to process 50k records; after I trim down
the debug code it should run faster.

 

I'll either make the script an init.d service or set it up as a nightly cron
job.
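
If it ends up as a cron job, a single crontab entry along these lines would
run it nightly (the PHP binary path, script path, and log path are
placeholders):

# min hour dom mon dow  command
0 2 * * * /usr/bin/php /path/to/process-records.php >> /var/log/process-records.log 2>&1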

 

Terre

 

From: Bill Karwin [mailto:b...@karwin.com] 
Sent: Friday, August 20, 2010 12:27 PM
To: Zend Framework General
Subject: Re: [fw-general] Large DB recordset, while/loop, update rows

 

 

On Aug 20, 2010, at 7:32 AM, Terre Porter wrote:


I'm guessing this is because I'm reusing the $db connection, and since it is
not a buffered result, the update query clobbers the original query. (Am I
guessing right?)

 

Right, you can't execute another SQL statement until you finish fetching
results from the current unbuffered query. 

Anyone know how I can update a record set when using the $stmt->fetch()
loop?

 

I can think of three alternatives:

 

1. Use a buffered query, which performs a fetchall internally even though
you fetch() one row at a time in your app.  But if your query returns a
result set too large to buffer within your PHP memory limit, this might not
work.
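
One way to get a buffered query with the Pdo_Mysql adapter is through the
driver_options key in the factory config (a sketch; the connection settings
here are placeholders):

$db = Zend_Db::factory('Pdo_Mysql', array(
    'host'           => 'localhost',   // placeholder connection settings
    'username'       => 'user',
    'password'       => 'secret',
    'dbname'         => 'mydb',
    'driver_options' => array(
        // Ask PDO to buffer result sets on the client side.  (PDO MySQL
        // buffers by default; this only matters if buffering was turned
        // off for the big SELECT.)
        PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
    ),
));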

 

2. Collect the IDs to update in an array, and then issue a single update
after your loop is done.  This still might create a long array of IDs, but
that should take a lot less memory than buffering all columns of the result
set.

 

$ids = array();
while ($row = $stmt->fetch()) {
  ...
  $ids[] = $row['id'];
  ...
}
$stmt->closeCursor();
$db->update($thisDBTable, array('filtered' => 0), $db->quoteInto("id IN (?)", $ids));

 

3. Open a second DB connection for the concurrent updates: 

 

$db2 = Zend_Db::factory( ... );
while ($row = $stmt->fetch()) {
  ...
  $db2->update($thisDBTable, array('filtered' => 0), $db2->quoteInto("id = ?", $row['id']));
  ...
}

 

Regards,

Bill Karwin
