Joe,

That is quite an interesting performance gain...

One thing that might help with this "multi-row" insert is the concept
of binding address variables. IMHO, the binding functions (sqlite3_bind_int etc.)
should allow one to permanently bind an address to a placeholder.

Then your multi-row insert could be implemented more simply by having the user
bind arrays of structs, or arrays of an intrinsic type, to the sqlite statement.

That way, a "step" on, say, a SELECT would not require all of the data
transfer (sqlite3_column_int) from sqlite into user space, since sqlite would
already have hooks into the user-space addresses. The user would pass in the
number of rows in the array to be stepped, and sqlite would return the number
of rows actually stepped (i.e. fewer at end of data).
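
Something along these lines -- a minimal sketch, in the current C API, of the
per-row bind/step loop that array binding would collapse into a single call.
The sqlite3_bind_array_*() and sqlite3_step_array() names in the comments are
purely hypothetical; they are not part of SQLite:

  #include "sqlite3.h"

  typedef struct Row { double a, b, c; } Row;

  static int insert_rows(sqlite3 *db, const Row *aRow, int nRow){
    sqlite3_stmt *pStmt = 0;
    int i, rc;

    rc = sqlite3_prepare_v2(db,
           "INSERT INTO BIGTABLE VALUES(?1,?2,?3)", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;

    /* Hypothetical alternative: bind a base address and a stride once
    ** (e.g. sqlite3_bind_array_double(pStmt, 1, &aRow[0].a, sizeof(Row))),
    ** then let a single sqlite3_step_array(pStmt, nRow) read the rows
    ** straight from user memory and return how many it consumed. */

    for(i=0; i<nRow && rc==SQLITE_OK; i++){
      sqlite3_bind_double(pStmt, 1, aRow[i].a);
      sqlite3_bind_double(pStmt, 2, aRow[i].b);
      sqlite3_bind_double(pStmt, 3, aRow[i].c);
      rc = sqlite3_step(pStmt)==SQLITE_DONE ? SQLITE_OK : SQLITE_ERROR;
      sqlite3_reset(pStmt);
    }
    sqlite3_finalize(pStmt);
    return rc;
  }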

I do understand this is quite a departure from the current paradigm, but it
would be good to explore from a performance standpoint.


Regards,
Ken



Joe Wilson <[EMAIL PROTECTED]> wrote:
FYI: When this multi-row-insert patch is combined with today's CVS
OP_Real and OP_Int64 fix for Tickets #2733 and #2731 (Check-in [4507]),
the multi-row ASCII INSERT statements are as much as 25% faster than
the equivalent number of traditional individual ASCII INSERT statements
within a transaction.

i.e.,

  -- this is 25% faster...
  INSERT INTO BIGTABLE VALUES
   (-334712687065.09, -334712687065.12, -334712687065.13),
   (-334712687065.09, -334712687065.12, -334712687065.13),
   ... 9996 similar rows ...
   (-334712687065.09, -334712687065.12, -334712687065.13),
   (-334712687065.09, -334712687065.12, -334712687065.13);

  -- ...than this
  BEGIN;
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, -334712687065.13);
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, -334712687065.13);
  ... 9996 similar rows ...
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, -334712687065.13);
  INSERT INTO BIGTABLE VALUES(-334712687065.09, -334712687065.12, -334712687065.13);
  COMMIT;

I also had to bump up the SQLITE_MAX_SQL_LENGTH define to 50000000
to get this to work.
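
Since the limits in sqliteLimit.h are only defaults behind #ifndef guards
(see the SQLITE_MAX_COMPOUND_SELECT hunk below), the same bump can likely be
made at compile time instead of editing the header, e.g.:

  CPPFLAGS="-DSQLITE_MAX_SQL_LENGTH=50000000" ./configure
  make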

The caveat is that you can't have more than approximately 10,000 to 
15,000 rows per multi-insert statement due to 

  sqlite3Select -> multiSelect -> sqlite3Select

recursion leading to stack overflow. This compound select recursion limit
could be increased by a factor of 100 or more by rewriting these two
functions to use heap-allocated stack-frame structs to hold all of their
local variables. Or, with more work, the compound select stack limitation
could be eliminated altogether by using a loop instead of recursion when
multiSelect generates the VDBE code for the compound select chain.
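
Purely as an illustration of the loop idea -- stand-in types below, not the
actual sqlite3Select()/multiSelect() code -- the chain of compound terms
could be walked iteratively so that C stack usage stays constant no matter
how many terms there are:

  /* Illustration only: Term and pPrior are stand-ins, not SQLite's real
  ** structures.  A compound SELECT is a chain of terms linked through
  ** pPrior; walking it in a loop instead of recursing keeps the C stack
  ** depth constant regardless of the number of terms. */
  typedef struct Term Term;
  struct Term {
    int op;          /* e.g. TK_ALL for UNION ALL            */
    Term *pPrior;    /* previous SELECT term in the compound */
  };

  static void codeCompound(Term *pLast){
    Term *p;
    for(p=pLast; p; p=p->pPrior){
      /* generate VDBE code for this term here; any per-term state
      ** would live in heap-allocated structs, not in stack frames */
    }
  }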

--- Joe Wilson  wrote:
> The attached patch implements the compound (multi-row) INSERT statement 
> syntax against the SQLite 3.5.1 source tree. Both named and unnamed
> INSERT column syntax with DEFAULT column values are supported.
> 
> The patch transforms multi-row INSERT statements into compound SELECT
> statements separated by UNION ALLs. It should be fairly efficient,
> as all rows within the compound INSERT are inserted within a single 
> implicit transaction.
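> 
> Roughly speaking (a sketch of the idea, not the exact internal form), an
> insert such as
> 
>   INSERT INTO t1 VALUES (1,'Ace'), (2,'Two'), (3,'Three');
> 
> is handled as if it had been written
> 
>   INSERT INTO t1
>   SELECT 1,'Ace' UNION ALL SELECT 2,'Two' UNION ALL SELECT 3,'Three';
> 
> where t1 here stands for any two-column table.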
> 
> SQLITE_MAX_COMPOUND_SELECT in sqliteLimit.h also serves as the maximum 
> number of rows in a multi-row INSERT.
> 
> No regressions in "make test". Post any problems to the mailing list, 
> and I'll try to keep this patch up to date.
> 
> This patch is hereby placed in the public domain.
> 
> Examples:
> 
> SQLite version 3.5.1
> Enter ".help" for instructions
> sqlite> CREATE TABLE foo(a DEFAULT 123, b DEFAULT 'Bob');
> sqlite> insert into foo values (1,'Ace'), (2,'Two'), (3,'Three');
> sqlite> select * from foo;
> 1|Ace
> 2|Two
> 3|Three
> sqlite> delete from foo;
> sqlite> insert into foo(b,a) values (1,'Ace'), (2,'Two'), (3,'Three');
> sqlite> select * from foo;
> Ace|1
> Two|2
> Three|3
> sqlite> delete from foo;
> sqlite> insert into foo(a) values (10), (20), (30);
> sqlite> select * from foo;
> 10|Bob
> 20|Bob
> 30|Bob
> 
> 
> Index: src/parse.y
> ===================================================================
> RCS file: /sqlite/sqlite/src/parse.y,v
> retrieving revision 1.234
> diff -u -3 -p -r1.234 parse.y
> --- src/parse.y 21 Aug 2007 10:44:16 -0000 1.234
> +++ src/parse.y 14 Oct 2007 07:29:32 -0000
> @@ -624,6 +624,42 @@ inscollist(A) ::= inscollist(X) COMMA nm
>  inscollist(A) ::= nm(Y).
>      {A = sqlite3IdListAppend(pParse->db,0,&Y);}
>  
> +%ifndef SQLITE_OMIT_COMPOUND_INSERT
> +%ifndef SQLITE_OMIT_COMPOUND_SELECT
> +
> +%type inslistItem {Select*}
> +%destructor inslistItem {sqlite3SelectDelete($$);}
> +
> +inslistItem(A) ::= LP itemlist(W) RP. {
> +    SrcList *pSrc = sqlite3DbMallocZero(pParse->db, sizeof(SrcList));
> +    A = sqlite3SelectNew(pParse,W,pSrc,0,0,0,0,0,0,0);
> +}
> +
> +%type inslist {Select*}
> +%destructor inslist {sqlite3SelectDelete($$);}
> +
> +%type comma_opt {int}
> +comma_opt(X) ::= .      {X = 0;}
> +comma_opt(X) ::= COMMA. {X = 1;}
> +
> +inslist(A) ::= inslistItem(Z) COMMA. {A = Z;}
> +inslist(A) ::= inslist(X) inslistItem(Z) comma_opt(C). {
> +  C = C;
> +  if( Z ){
> +    Z->op = TK_ALL;
> +    Z->pPrior = X;
> +  }else{
> +    sqlite3SelectDelete(X);
> +  }
> +  A = Z;
> +}
> +
> +cmd ::= insert_cmd(R) INTO fullname(X) inscollist_opt(F) VALUES inslist(S).
> +            {sqlite3Insert(pParse, X, 0, S, F, R);}
> +
> +%endif  SQLITE_OMIT_COMPOUND_SELECT
> +%endif  SQLITE_OMIT_COMPOUND_INSERT
> +
>  /////////////////////////// Expression Processing 
> /////////////////////////////
>  //
>  
> Index: src/sqliteLimit.h
> ===================================================================
> RCS file: /sqlite/sqlite/src/sqliteLimit.h,v
> retrieving revision 1.2
> diff -u -3 -p -r1.2 sqliteLimit.h
> --- src/sqliteLimit.h 24 Aug 2007 11:52:29 -0000 1.2
> +++ src/sqliteLimit.h 14 Oct 2007 07:29:33 -0000
> @@ -75,7 +75,7 @@
>  ** any limit on the number of terms in a compount SELECT.
>  */
>  #ifndef SQLITE_MAX_COMPOUND_SELECT
> -# define SQLITE_MAX_COMPOUND_SELECT 500
> +# define SQLITE_MAX_COMPOUND_SELECT 5000000
>  #endif
>  
>  /*

