That's effectively what I'm doing now. I'm not sure there's much I can 
speed up at this point - the SELECTs take about 0.05s; it's the INSERTs 
that take the bulk of the time, 11-15s depending on the number of rows. 
That said, I'm still running on a development machine, and there should 
be a significant boost once it's on proper hardware.
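For anyone hitting the same wall: the biggest INSERT win in SQLAlchemy usually comes from passing a list of dicts to a single Core insert, so the DBAPI uses executemany instead of one round trip per row. A minimal sketch follows; the `entry` table and its `(sub, division, created)` columns are assumptions based on this thread, and SQLite stands in for the real database.

```python
import datetime

from sqlalchemy import (Column, DateTime, Integer, MetaData, String, Table,
                        UniqueConstraint, create_engine)

metadata = MetaData()

# Hypothetical layout of the Entry table discussed in the thread,
# including the UniqueConstraint on (sub, division, created).
entry = Table(
    "entry", metadata,
    Column("id", Integer, primary_key=True),
    Column("sub", String(64)),
    Column("division", String(64)),
    Column("created", DateTime),
    UniqueConstraint("sub", "division", "created"),
)

engine = create_engine("sqlite://")  # stand-in for the real database
metadata.create_all(engine)

rows = [
    {"sub": "a", "division": "x", "created": datetime.datetime(2014, 3, 24)},
    {"sub": "b", "division": "x", "created": datetime.datetime(2014, 3, 24)},
]

with engine.begin() as conn:
    # Passing a list of dicts makes the DBAPI use executemany:
    # one batched statement instead of one INSERT per row.
    conn.execute(entry.insert(), rows)
```

Compared with `session.add()` plus a flush per object, this skips the ORM's unit-of-work bookkeeping entirely, which is where much of the 11-15s tends to go.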

On Monday, 24 March 2014 22:44:09 UTC+8, Jonathan Vanasco wrote:
>
> >The data comes in unordered and sometimes contains duplicates, so there's 
> a UniqueConstraint on Entry on sub, division, created.
>
> Have you tried pre-processing the list first ?
>
> I've had similar situations when dealing with browser, user, and app 
> analytics.  
>
> I normally do a first pass to restructure the raw log file and note any 
> 'selects' I might need to associate the records to; then I lock the 
> tables, precache the selects, and do all the inserts. The speedups have 
> been great.
>
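The pre-processing pass Jonathan describes fits this case well: since the incoming data is unordered and contains duplicates, deduplicating on the `(sub, division, created)` key in Python means the UniqueConstraint is never tripped and the survivors can go in as one bulk insert. A rough sketch, assuming the records arrive as dicts with those keys:

```python
def dedupe(records):
    """Keep the first record seen for each (sub, division, created) key."""
    seen = set()
    unique = []
    for rec in records:
        key = (rec["sub"], rec["division"], rec["created"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical raw batch with one duplicate row.
raw = [
    {"sub": "a", "division": "x", "created": "2014-03-24"},
    {"sub": "a", "division": "x", "created": "2014-03-24"},  # duplicate
    {"sub": "b", "division": "y", "created": "2014-03-24"},
]

clean = dedupe(raw)
print(len(clean))  # 2
```

A set lookup per record is O(1), so a single pass over even a large batch is cheap next to the per-row cost of letting the database reject duplicates one at a time.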

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
