On Tue, Feb 2, 2010 at 11:20 AM, Nicolas Williams wrote:
> On Tue, Feb 02, 2010 at 09:23:36AM +0100, Sylvain Pointeau wrote:
>> I would be very interested to see some benchmarks, just to see.
>
> Feel free to write the relevant program, schema, SQL statements, and run
> benchmarks against it.
Argh, sorry, I missed your LRP (Long Running Process).
On Tue, Feb 2, 2010 at 11:55 PM, Sylvain Pointeau <sylvain.point...@gmail.com> wrote:
> Please just don't forget that sqlite locks the file at each update;
> running multiple processes will not improve the speed at all. I think it
> will be even worse.
Please just don't forget that sqlite locks the file at each update;
running multiple processes will not improve the speed at all. I think it
will be even worse.
The only improvement you can make is to group your updates in a single
transaction, and to avoid running one process (sqlite3) for just one update.
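To illustrate the difference with made-up rows: each bare statement below
runs in its own implicit transaction, while the explicit begin/commit pair
batches everything into one lock-and-fsync cycle.

  update foo set status=1 where rowid=1;   -- implicit transaction #1
  update foo set status=0 where rowid=2;   -- implicit transaction #2

  begin transaction;                       -- one explicit transaction
  update foo set status=1 where rowid=1;
  update foo set status=0 where rowid=2;
  commit;                                  -- single lock/journal/fsync cycle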
On Tue, Feb 2, 2010 at 3:13 PM, John Elrick wrote:
> Robert Citek wrote:
>> Are there some white papers or examples of how to do updates in
>> parallel using sqlite?
>
> I could be misunderstanding your requirements, but this sounds a little
> like MapReduce:
>
> http://labs.google.com/papers/mapreduce.html
Robert Citek wrote:
> Are there some white papers or examples of how to do updates in
> parallel using sqlite?
>
> I have a large dataset in sqlite that I need to process outside of
> sqlite and then update the sqlite database. The process looks
> something like this:
>
> sqlite3 -separator $'\t' sample.db 'select rowid, item from foo;' |

I could be misunderstanding your requirements, but this sounds a little
like MapReduce:

http://labs.google.com/papers/mapreduce.html
On Tue, Feb 02, 2010 at 09:23:36AM +0100, Sylvain Pointeau wrote:
> On Mon, Feb 1, 2010 at 5:16 PM, Nicolas Williams wrote:
> > Now to parallelize this:
> >
> > function par_updates {
> I would be very interested to see some benchmarks, just to see.
Feel free to write the relevant program, schema, SQL statements, and run
benchmarks against it.
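A rough harness for such a benchmark could be as simple as the following
(bench.db, the row count, and the trivial updates are all made up for
illustration):

  sqlite3 bench.db 'create table foo (item text, status integer);'
  { echo "begin transaction;"
    for i in $(seq 1 1000); do
      echo "insert into foo (item) values ('item$i');"
    done
    echo "commit;"
  } | sqlite3 bench.db

  # Variant 1: one sqlite3 process (one implicit transaction) per update.
  time for i in $(seq 1 1000); do
    sqlite3 bench.db "update foo set status=1 where rowid=$i;"
  done

  # Variant 2: a single process applying all updates in one transaction.
  time { echo "begin transaction;"
    for i in $(seq 1 1000); do
      echo "update foo set status=1 where rowid=$i;"
    done
    echo "commit;"
  } | sqlite3 bench.db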
I would be very interested to see some benchmarks, just to see.
On Mon, Feb 1, 2010 at 5:16 PM, Nicolas Williams wrote:
> On Sat, Jan 30, 2010 at 10:36:56AM +0100, Sylvain Pointeau wrote:
> > echo "begin transaction" >> update.sql
> >
> > sqlite3 -separator $'\t' sample.db 'select rowid, item
On Sat, Jan 30, 2010 at 10:36:56AM +0100, Sylvain Pointeau wrote:
> echo "begin transaction" >> update.sql
>
> sqlite3 -separator $'\t' sample.db 'select rowid, item from foo;' |
> while read rowid item ; do
> status=$(long_running_process "${item}" )
> echo "update foo set status=${status} wher
Hello,
it is not aesthetic, but it groups all the updates in a single transaction,
which speeds up the processing.
Using multiple threads or multiple processes is not efficient: in the end, a
single process is what writes to the database (a single file).
Grouping all the updates in a single transaction is the only improvement.
On 31 Jan 2010, at 12:15am, Robert Citek wrote:
> The question would be, how to modify the script to process the data
> with parallel processes?
Run any number of copies of the script at once. The script takes any record
which has not so far been processed, processes it, and writes the result
back to the database.
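A sketch of that claim-then-process pattern (assuming status is NULL for
unprocessed rows; the 'claimed' marker and the ordering are assumptions):

  # Each copy claims the next unprocessed row, does the slow work with no
  # lock held, then writes the result back. Stops when nothing is left.
  while : ; do
    row=$(sqlite3 -separator $'\t' sample.db "
      begin immediate;
      select rowid, item from foo
        where status is null order by rowid limit 1;
      update foo set status = 'claimed'
        where rowid = (select rowid from foo
                       where status is null order by rowid limit 1);
      commit;")
    [ -z "$row" ] && break
    rowid=${row%%$'\t'*}
    item=${row#*$'\t'}
    result=$(long_running_process "${item}" )
    sqlite3 sample.db "update foo set status=${result} where rowid=${rowid};"
  done

In practice each copy would also want a busy timeout (the sqlite3 shell's
.timeout command) so that concurrent claims retry instead of failing.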
Sure. This script could use a lot of aesthetic improvement, but it
highlights processing the data in a single process.
The question would be, how to modify the script to process the data
with parallel processes?
Regards,
- Robert
On Sat, Jan 30, 2010 at 4:36 AM, Sylvain Pointeau wrote:
> a good thing would have been to generate one file with all the statements...
a good thing would have been to generate one file with all the statements...
if you do that, then you run sqlite once on this file, with everything
surrounded by begin transaction / commit:
echo "begin transaction" >> update.sql
sqlite3 -separator $'\t' sample.db 'select rowid, item from foo;' |
while read rowid item ;
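The tail of the script is cut off in the archive; going by the begin/commit
description above, it presumably ended along these lines (a guessed
completion, not the archived text):

  echo "commit;" >> update.sql
  sqlite3 sample.db < update.sql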
Are there some white papers or examples of how to do updates in
parallel using sqlite?
I have a large dataset in sqlite that I need to process outside of
sqlite and then update the sqlite database. The process looks
something like this:
sqlite3 -separator $'\t' sample.db 'select rowid, item from foo;' |
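The preview cuts the process off here. Judging from Sylvain's replies (he
objects to running one sqlite3 process per update), the original loop
presumably looked something like this (a reconstruction, not the archived
text):

  sqlite3 -separator $'\t' sample.db 'select rowid, item from foo;' |
  while read rowid item ; do
    status=$(long_running_process "${item}" )
    sqlite3 sample.db "update foo set status=${status} where rowid=${rowid};"
  done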