On Sep 22, 2008, at 11:03 PM, P Kishor wrote:
> I am assuming you have good reason to not just ATTACH the old db to
> the new db and INSERT INTO new_table SELECT * FROM old_db.old_table

Partly ignorance of that approach, and partly because I want to use
the ORMs involved with the DB to ensure that all native Ruby types
are preserved in a way that lets them come back as proper native
values (e.g. booleans and Time values deserialized by the ORM that
originally wrote them and then re-serialized by the new ORM).
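
For reference, my rough understanding of the approach he suggests,
using the sqlite3 gem directly (the file and table names here are
placeholders, not my actual schema):

    require "sqlite3"

    db = SQLite3::Database.new("new.db")
    # Make the old database visible under the schema name old_db
    db.execute("ATTACH DATABASE 'old.db' AS old_db")
    # Copy rows entirely inside SQLite, with no Ruby objects in
    # between -- which is exactly why it skips the ORM round-trip
    db.execute("INSERT INTO new_table SELECT * FROM old_db.old_table")
    db.execute("DETACH DATABASE old_db")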

>> This script transforms a 2MB sqlite DB with about 5,000 rows into a
>> 1.8MB sqlite DB with about the same number of rows. (A few fields and
>> tables get dropped along the way.)
>>
>> On my mac laptop at home (2.3GHz Core 2 Duo, 2GB RAM, 5400 RPM drive)
>> this script runs in 22 seconds. In 'better battery life' mode.
>
> All that said, 22 seconds for a 5000 row db on that machine (same as
> my laptop) seems rather slow to me.

Sure, it could be faster. For example, I just wrapped the batch
inserts for every table in a transaction and now it runs in 6 seconds
instead of 22 (rough sketch below). While it's good to be aware of
silly things I'm doing that might speed up the overall result, what
interested me is the fact that it's 30x slower on Windows. Thank you
for your comments, though.
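
Roughly what that change looks like against the sqlite3 gem (a
sketch; the real script goes through the ORMs, and rows/new_table
stand in for the actual data):

    require "sqlite3"

    db = SQLite3::Database.new("new.db")
    # One transaction per table means one fsync at COMMIT instead of
    # one per INSERT, which is where the 22s to 6s drop came from
    db.transaction do
      rows.each do |row|
        db.execute("INSERT INTO new_table (id, name) VALUES (?, ?)", row)
      end
    end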

>> On my XP desktop at work (2GHz Dual Pentium, 2GB RAM, 7200 RPM drive)
>> this same script on the same DB runs in 11 minutes. 30x slower.
>>

