Same problem, just a different thought process.

   The .dump format could be made to use CSV, making it more compact.

   A new ".imp" command could be made to use my proposed array-based 
processing, and also follow DRH's recommendation to use a TXN.
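
   For illustration only, here is a minimal sketch of what such an ".imp" 
command might do internally: wrap a prepared INSERT in a single TXN and 
bind/step/reset once per input row. The table t(a,b), the '|' separator, and 
the lack of error handling are assumptions made for the sketch, not anything 
the shell provides today.

    /* sketch: bulk-load lines of "a|b" into t(a,b) inside one transaction */
    #include <stdio.h>
    #include <string.h>
    #include <sqlite3.h>

    static int load_file(sqlite3 *db, FILE *in)
    {
        sqlite3_stmt *ins = 0;
        char line[1024];

        sqlite3_exec(db, "BEGIN", 0, 0, 0);          /* DRH's TXN advice */
        sqlite3_prepare_v2(db, "INSERT INTO t(a,b) VALUES(?,?)", -1, &ins, 0);

        while (fgets(line, sizeof line, in)) {
            char *sep;
            line[strcspn(line, "\r\n")] = '\0';      /* trim the newline */
            sep = strchr(line, '|');                 /* naive split on '|' */
            if (!sep) continue;
            *sep = '\0';
            sqlite3_bind_text(ins, 1, line,    -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(ins, 2, sep + 1, -1, SQLITE_TRANSIENT);
            sqlite3_step(ins);                       /* one row per step */
            sqlite3_reset(ins);                      /* reuse the statement */
        }

        sqlite3_finalize(ins);
        return sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* single commit */
    }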

   Unfortunately, .dump is generic for any SQL, so that the SQL can simply be 
replayed to recreate a database, rather than being specific to data loading. 
   The table-structure .dump and the data .dump could be made into two 
well-defined and separate entities.

  I don't agree with the approach of modifying the SQL language to create a 
generic high-speed loading interface, when the problem actually lies with the 
format of the .dump and the lack of an .import (or .load) command. Array-based 
processing and user-based addressing would most probably not help with this 
problem.
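
  For context, a user-space approximation of the array-binding idea from the 
quoted message below amounts to little more than a loop over the existing 
bind/step/reset calls; the helper name and the table t(a,b) are hypothetical, 
nothing like this exists in the sqlite API.

    /* hypothetical helper: "binding an array of structs" done in user space;
       it saves calls in the application, not work inside sqlite itself */
    #include <sqlite3.h>

    struct row { int a; int b; };

    static int insert_array(sqlite3 *db, const struct row *rows, int nrows)
    {
        sqlite3_stmt *ins = 0;
        int i;

        sqlite3_prepare_v2(db, "INSERT INTO t(a,b) VALUES(?,?)", -1, &ins, 0);
        for (i = 0; i < nrows; i++) {
            sqlite3_bind_int(ins, 1, rows[i].a);
            sqlite3_bind_int(ins, 2, rows[i].b);
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        sqlite3_finalize(ins);
        return nrows;                    /* rows actually stepped */
    }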

Maybe separate programs could be developed for simple data unload and reload, 
vs. the sqlite .dump?
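
A minimal sketch of what such a stand-alone "unload" program might look like, 
dumping one hard-coded table t(a,b) as '|'-separated text; the table, the 
column layout, and the absence of NULL/quoting handling are assumptions made 
for the sketch.

    /* sketch: unload t(a,b) from a database file as '|'-separated text */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(int argc, char **argv)
    {
        sqlite3 *db = 0;
        sqlite3_stmt *sel = 0;

        if (argc != 2 || sqlite3_open(argv[1], &db) != SQLITE_OK) return 1;
        sqlite3_prepare_v2(db, "SELECT a, b FROM t", -1, &sel, 0);

        while (sqlite3_step(sel) == SQLITE_ROW) {
            printf("%s|%s\n",
                   (const char *)sqlite3_column_text(sel, 0),
                   (const char *)sqlite3_column_text(sel, 1));
        }

        sqlite3_finalize(sel);
        sqlite3_close(db);
        return 0;
    }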

Regards,
Ken


Joe Wilson <[EMAIL PROTECTED]> wrote: You're solving a different problem. 
Better to stick with the
existing API for what you're proposing.

I just want a faster, more compact and flexible SQL INSERT command 
from the command-line without resorting to custom programming.

--- Ken  wrote:

> Joe,
> 
> that is quite an interesting performance gain... 
> 
> One thing that might help with this "multi row" insert is the concept
> of binding address variables. IMHO, the binding functions (sqlite_bind_int 
> etc) should allow one
> to permanently bind an address to a placeholder.
> 
> Then your multi-row insert could be implemented more simply by having the 
> user bind arrays of structs or arrays of an intrinsic type to the sqlite 
> statement.
> 
> That way, a "step" using, say, a select would not require all of the data 
> transfer (sqlite_column_int) from sqlite into user space; sqlite would 
> already have the hooks into the user-space addressing. And the user would 
> pass in the number of rows in the array to be stepped; sqlite would return 
> the rows actually stepped (i.e. end of data).
> 
> This I do understand is quite a departure from the current paradigm. But one 
> that would be good
> to explore from a performance standpoint. 
> 
> 
> Regards,
> Ken


