Hello,

The import of my f-spot folder has been running since last evening... maybe it's been running for more than ten hours now.
When I tried it the first time, I used f-spot import. When it took too long, I tried strace and ltrace on it (while Shotwell was already running). The latter killed Shotwell. (It is an old Shotwell version; I tested the ltrace killing issue with a current version of Shotwell on a different machine/system and the problem was gone.)

What I experienced when importing, at least with the old Shotwell, was that the import is slow and a lot of activity is going on. I could see this when running it under ltrace directly (instead of attaching later). The same also holds true for the current version of Shotwell. There are even a lot of calls going on when an empty archive is used.

From what I saw in the ltrace output, it seems to me that every picture file is handled separately (maybe each one as an object), and importing a file means creating an object which individually connects to SQLite. Just from that (without looking at the code), I think it would make sense to have an internal representation of the data not just for one picture but for a bunch of pictures, and to do a bulk insertion into the database instead of inserting each file individually.

Can any of the Shotwell developers who know the internals of the code confirm or reject my assumption about individual SQLite accesses? And if that is the case, would it be possible to have a bulk insertion feature, so that, for example, batches of e.g. 1000 files are inserted in one operation instead of each file individually? It seems that the SQLite access eats up a lot of time when adding files to the database en masse.

Thanks,
Oliver
_______________________________________________
Shotwell mailing list
[email protected]
http://lists.yorba.org/cgi-bin/mailman/listinfo/shotwell
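To illustrate the kind of batching I mean: this is not Shotwell's actual code (Shotwell is written in Vala, and I have not looked at its database layer), just a minimal Python sketch with a made-up `photo` table, showing the difference between committing one SQLite transaction per file and committing once per batch of 1000 rows. The per-file pattern pays the transaction (and, on disk, fsync) cost for every single photo, which is my guess at where the time goes.

```python
import sqlite3

def insert_individually(conn, rows):
    # Suspected slow pattern: one transaction (and on-disk fsync) per file.
    for path, ts in rows:
        conn.execute("INSERT INTO photo (path, timestamp) VALUES (?, ?)",
                     (path, ts))
        conn.commit()

def insert_in_bulk(conn, rows, batch=1000):
    # Proposed pattern: collect rows and commit once per batch of 1000.
    for i in range(0, len(rows), batch):
        with conn:  # the context manager commits once per batch
            conn.executemany(
                "INSERT INTO photo (path, timestamp) VALUES (?, ?)",
                rows[i:i + batch])

# Fake data standing in for a large import (paths and timestamps are invented).
rows = [("/photos/img_%05d.jpg" % i, i) for i in range(5000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photo (path TEXT, timestamp INTEGER)")
insert_in_bulk(conn, rows)
print(conn.execute("SELECT COUNT(*) FROM photo").fetchone()[0])  # 5000
```

On an in-memory database the difference is small, but on a real file-backed database each commit forces a journal sync, so batching commits is usually where the big win comes from.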
