Re: [sqlite] Error reporting problem

2007-03-26 Thread Dan Kennedy
On Mon, 2007-03-26 at 17:08 +0200, Vivien Malerba wrote:
> On 3/26/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:
> > Vivien Malerba wrote:
> > > Hi!
> > >
> > > I've got an error reporting problem when trying to insert a row which
> > > breaks a UNIQUE constraint in a table in a C program, I get the
> > > following error with sqlite3_errmsg():
> > > "SQL logic error or missing database"
> > >
> > > If I fire the sqlite3 program and run the same SQL query, I get the
> > > following error:
> > > "SQL error: column name is not unique"
> > > which is much more obvious to understand.
> > >
> > > Any idea how I could get that error message calling sqlite3_errmsg()?
> >
> > The pysqlite wrapper has a bug which reports the same error message if
> > you try to nest transactions. I fixed the wrapper (locally) by adding an
> > sqlite_reset(). I think there's a new API (sqlite_prepare_v2()?) which
> > has the same effect. If you can't search (say) gmane then post a tiny
> > code snippet and someone who uses the C API more than I do will spot the
> > problem straight away.
> 
> Here is a sample test case, just uncompress, run "make" and "./test".
> Here is what I get using SQLite 3.3.13 (On Linux):
> SQL error (step): SQL logic error or missing database
> SQL error (step): column name is not unique
> 
> It shows that calling sqlite3_reset() seems to be required (contrary
> to what the doc says), or maybe I got something wrong...

Even using the prepare_v2() interface, the database handle error-code
and error-message (the stuff returned by sqlite3_errcode() and 
sqlite3_errmsg() respectively) are not populated by sqlite3_step(). 
After sqlite3_step() reports an error you need to call either
sqlite3_reset() or sqlite3_finalize() to copy the error-code and
message from the statement to the database handle.

The difference the prepare_v2() API makes is that, in your example,
sqlite3_step() returns SQLITE_CONSTRAINT when it hits the constraint.
With sqlite3_prepare() it would have returned the generic SQLITE_ERROR.
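For illustration, a minimal sketch of that pattern (the table and column
names are made up; assume a table "t" with a UNIQUE column "name" already
exists):

  #include <stdio.h>
  #include "sqlite3.h"

  static void insert_dup(sqlite3 *db){
    sqlite3_stmt *stmt = 0;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO t(name) VALUES('dup')",
                                -1, &stmt, 0);
    if( rc!=SQLITE_OK ){
      fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
      return;
    }
    rc = sqlite3_step(stmt);
    if( rc!=SQLITE_DONE && rc!=SQLITE_ROW ){
      /* With prepare_v2(), rc is SQLITE_CONSTRAINT here.  The specific   */
      /* message only reaches the database handle after reset/finalize.   */
      sqlite3_reset(stmt);          /* or go straight to sqlite3_finalize() */
      fprintf(stderr, "SQL error (step): %s\n", sqlite3_errmsg(db));
    }
    sqlite3_finalize(stmt);
  }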

Dan.






[sqlite] RE: open an open sqlite database !!!

2007-03-26 Thread mniknezhad

I use .NET Framework v1.1 (2003), not Framework v2.


Samuel R. Neff wrote:
> 
> 
> If you're using ASP.NET I'd suggest using the ADO.NET wrapper available
> here:  http://sqlite.phxsoftware.com/
> 
> It's an excellent implementation of ADO.NET for SQLite and would be far
> less
> trouble than using sqlite directly from .NET code.
> 
> HTH,
> 
> Sam
> 
> ---
> We're Hiring! Seeking a passionate developer to join our team building
> products. Position is in the Washington D.C. metro area. If interested
> contact [EMAIL PROTECTED]
>  
> -Original Message-
> From: mniknezhad [mailto:[EMAIL PROTECTED] 
> Sent: Monday, March 26, 2007 4:43 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] open an open sqlite database !!!
> 
> 
> i use dlls to connect to sqlite db(in asp.net).
> i have a problem:
> when the database file (sample.db) is opened 
> with a windows application (and in use - not closed),
> i can not open that database with a web application. 
> in other words : 
> how can i open an opened database with a web application?
> 
> tanx
> 
> 
> 
> 
> 




Re: [sqlite] Running multiple DDL statements in a batch (via JDBC)

2007-03-26 Thread Joe Wilson
AFAIK, the behavior you're looking for is not well defined by the JDBC API:

 http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Statement.html
 
http://java.sun.com/j2se/1.4.2/docs/api/java/sql/Statement.html#addBatch(java.lang.String)

Perhaps some JDBC drivers implement the behavior you expect, but
technically, you should call addBatch() to add each individual SQL
statement to the batch prior to calling executeBatch().

Perhaps if you post to the sqlitejdbc mailing list, the author may consider
making such an extension. But you can probably get away with just splitting
your DDL string on ";" and feeding them to addBatch in a loop and then
calling executeBatch.

--- "Steven E. Harris" <[EMAIL PROTECTED]> wrote:
> I'm using the "pure" SQLite JDBC driver¹ and trying to bootstrap my
> database schema by running a batch of DDL statements. I read in the
> entire DDL script from a file, collect it into a string, and feed that
> string into either java.sql.Statement.executeUpdate() or
> java.sql.Statement.executeBatch().
> 
> In either case, only the first DDL statement takes effect, and no
> errors are signaled. Stepping through the JDBC driver, it looks as
> though it sqlite3_prepare()s a statement with my SQL string, then
> calls sqlite3_step() on the statement, and the return value comes back
> as SQLITE_DONE, upon which it finalizes the statement and returns
> successfully.
> 
> I understand that we have this JDBC layer in the middle, but it's
> pretty thin, and I'm trying to figure out which party in this
> arrangement is responsible for only executing the first statement (up
> through the first semicolon) in the SQL string.
> 
> Does SQLite normally execute more than one statement provided in a SQL
> string? I'm fearing having to cut up this DDL file into ten parts:
> three CREATE TABLE statements, one CREATE INDEX statement, and six
> CREATE TRIGGER statements for foreign key enforcement.
> 
> Please advise.
> 
> 
> Footnotes: 
> ¹ http://www.zentus.com/sqlitejdbc/



 




[sqlite] Re: Running multiple DDL statements in a batch (via JDBC)

2007-03-26 Thread Steven E. Harris
[EMAIL PROTECTED] writes:

> The sqlite3_prepare()/sqlite3_step() API only executes a single
> statement.  The sqlite3_prepare() routine returns a pointer to
> the next statement if you give it a list of statements.

I see. It looks like the JDBC driver punts on using the pzTail
parameter¹:

,
| JNIEXPORT jlong JNICALL Java_org_sqlite_NativeDB_prepare(
| JNIEnv *env, jobject this, jstring sql)
| {
| sqlite3* db = gethandle(env, this);
| sqlite3_stmt* stmt;
| 
| const char *strsql = (*env)->GetStringUTFChars(env, sql, 0);
| int status = sqlite3_prepare(db, strsql, -1, &stmt, 0);
| (*env)->ReleaseStringUTFChars(env, sql, strsql);
| 
| if (status != SQLITE_OK) {
| throwex(env, this);
| return fromref(0);
| }
| return fromref(stmt);
| }
`

Given that the string it's feeding to sqlite3_prepare might be
allocated in the GetStringUTFChars() call before -- and released right
afterward, guessing for now at what these functions do -- it wouldn't
be prudent for the library to try to hold onto a pointer into that
ephemeral string. It would instead need to figure out an offset for
safer keeping.

> The sqlite3_exec() interface executes every statement in the string
> you hand it.

Unfortunately this library has yet to wrap this function.

Ouch.


Footnotes: 
¹ http://www.sqlite.org/capi3ref.html#sqlite3_prepare

-- 
Steven E. Harris




Re: [sqlite] Running multiple DDL statements in a batch (via JDBC)

2007-03-26 Thread drh
"Steven E. Harris" <[EMAIL PROTECTED]> wrote:
> 
> Does SQLite normally execute more than one statement provided in a SQL
> string? 

The sqlite3_prepare()/sqlite3_step() API only executes a single
statement.  The sqlite3_prepare() routine returns a pointer to
the next statement if you give it a list of statements.

The sqlite3_exec() interface executes every statement in the
string you hand it.
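As a rough sketch (not the JDBC driver's actual code), a wrapper can run
every statement in a string by looping on the pzTail output pointer of
sqlite3_prepare(); sqlite3_exec() does the equivalent internally:

  #include "sqlite3.h"

  /* Execute every statement in zSql, discarding any result rows. */
  static int exec_all(sqlite3 *db, const char *zSql){
    while( zSql && zSql[0] ){
      sqlite3_stmt *stmt = 0;
      const char *zTail = 0;
      int rc = sqlite3_prepare_v2(db, zSql, -1, &stmt, &zTail);
      if( rc!=SQLITE_OK ) return rc;
      if( stmt ){   /* stmt is NULL for trailing whitespace or comments */
        while( sqlite3_step(stmt)==SQLITE_ROW ){}
        rc = sqlite3_finalize(stmt);
        if( rc!=SQLITE_OK ) return rc;
      }
      zSql = zTail;   /* advance to the next statement */
    }
    return SQLITE_OK;
  }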

--
D. Richard Hipp  <[EMAIL PROTECTED]>





RE: [sqlite] Questions on views

2007-03-26 Thread Dennis Volodomanov
Yes, threading is one option I'm testing and timing as well. 

> -Original Message-
> From: Joe Wilson [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, March 27, 2007 12:29 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Questions on views
> 
> Assuming you're running on multi-core machines, spread the 
> work over a few threads/connections.
> 
> --- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> > Oops - that last sentence I wrote actually doesn't make sense :) I 
> > know what prepared statements are as I'm using them (doh!). I might 
> > have a problem that I need to add more WHERE conditions to 
> those "basic"
> > statements, which wouldn't work probably with storing them, 
> as I can't 
> > possible know all possible combinations. Still - I'll give 
> this some 
> > thought to see if I can find something in this direction.
> 
> 
> 
> 
>  



RE: [sqlite] Questions on views

2007-03-26 Thread Joe Wilson
Assuming you're running on multi-core machines, spread the work over a few
threads/connections.

--- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> Oops - that last sentence I wrote actually doesn't make sense :) I know
> what prepared statements are as I'm using them (doh!). I might have a
> problem that I need to add more WHERE conditions to those "basic"
> statements, which wouldn't work probably with storing them, as I can't
> possible know all possible combinations. Still - I'll give this some
> thought to see if I can find something in this direction.




 




[sqlite] Running multiple DDL statements in a batch (via JDBC)

2007-03-26 Thread Steven E. Harris
I'm using the "pure" SQLite JDBC driver¹ and trying to bootstrap my
database schema by running a batch of DDL statements. I read in the
entire DDL script from a file, collect it into a string, and feed that
string into either java.sql.Statement.executeUpdate() or
java.sql.Statement.executeBatch().

In either case, only the first DDL statement takes effect, and no
errors are signaled. Stepping through the JDBC driver, it looks as
though it sqlite3_prepare()s a statement with my SQL string, then
calls sqlite3_step() on the statement, and the return value comes back
as SQLITE_DONE, upon which it finalizes the statement and returns
successfully.

I understand that we have this JDBC layer in the middle, but it's
pretty thin, and I'm trying to figure out which party in this
arrangement is responsible for only executing the first statement (up
through the first semicolon) in the SQL string.

Does SQLite normally execute more than one statement provided in a SQL
string? I'm fearing having to cut up this DDL file into ten parts:
three CREATE TABLE statements, one CREATE INDEX statement, and six
CREATE TRIGGER statements for foreign key enforcement.

Please advise.


Footnotes: 
¹ http://www.zentus.com/sqlitejdbc/

-- 
Steven E. Harris

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] PRAGMA locking_mode = EXCLUSIVE

2007-03-26 Thread Joe Wilson
Congratulations. This is a big milestone.

The single "sqlite3.c" source file release will certainly simplify embedding.

I guess the single translation unit thing is why GCC is busy working 
on link-time optimization (and why LLVM is faster in some cases).

--- [EMAIL PROTECTED] wrote:
> When you do "PRAGMA locking_mode=EXCLUSIVE", it means that
> SQLite will not release its EXCLUSIVE lock after it does its
> first write until you either close the connection or you
> do "PRAGMA locking_mode=NORMAL".  There are various optimizations
> that can occur when this is the case.
> 
> There is a plot of runtime of CVSHEAD versus version 3.3.13
> at
> 
>http://www.sqlite.org/relspeed-20070326-ephemeral.gif
> 
> (As the name implies, this GIF will be ephemeral.  Do not expect it
> to be there for more than a few days.)  Any value less than
> 1.0 means that performance has improved.  The test script is
> 
>http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/speed2.test
> 
> The orange bars show the relative performance of CVSHEAD with
> no special build options.  1.0 is the performance of version 3.3.13
> so anything less than 1.0 is an improvement.  The red bars show
> CVSHEAD with all the source files concatenated into a single big
> file and compiled as a single translation unit.  Compiling this
> way allows the compiler to do additional optimizations that result
> in improved performance.  The blue bars are the same sqlite-as-one-
> great-big-source-file optimization but with -DSQLITE_OMIT_XFER_OPT=1
> defined.
> 
> The tests are cumulative.  The database that results from one test
> feeds into the next.  The two "insert" tests at the top both
> consist of a large number (5) of distinct INSERT statements.
> These tests are dominated by the parser and code generator and
> all of our recent work has been directed toward optimizing the
> backend so these tests do not show any improvement.  You can see
> that the new "XFER Optimizer" actually results in a slight
> performance drop since for each INSERT statement, the parser has
> to check to see if the XFER optimization applies.  (It never
> does in the first two tests.)
> 
> The interface between the B-Tree layer and the Pager layer has
> been refactored.  This refactoring prevented the compiler from
> doing certain function inlinings that it was able to do before,
> resulting in a performance drop for the orange bar.  But when 
> the pager and b-tree modules were combined into a single source
> file, the function inlining was able to occur again and so
> performance improved.
> 
> Tests select4 through select6 are about twice as fast.  We believe
> this is because the pager cache is no longer flushed unless the
> database is changed by another process.
> 
> VACUUM is much faster due to the XFER optimization.  Furthermore,
> VACUUM with the XFER optimization greatly reduces the database
> fragmentation.  This is (we believe) why performance is so much
> better in the later tests for the orange and red bars.  Operations
> on a defragmented database file go much faster.
> 
> Changes have been made to the b-tree and pager layers that omit
> certain unnecessary writes to the database file and to the rollback
> journal.  This reduction in disk I/O results in the 20x performance
> improvements seen for tests delete1 and drop1.
> 
> CVSHEAD passes the "quick" regression tests.  But we know there
> are still issues with the code.  CVSHEAD is not currently
> recommended for use in shipping products, but it is adequate
> for development work, we believe.
> 
> In past releases of SQLite, we have made available a ZIP archive
> with preprocessed source files.  In the future, we may change this
> so that instead of a ZIP archive full of individual files, we
> ship a single "sqlite3.c" source file which contains all of the
> source files in a single translation unit.  By this mode of
> delivery, we hope to enable users to see the performance improvements
> we are seeing in our red bars.



 




Re: [sqlite] PRAGMA locking_mode = EXCLUSIVE

2007-03-26 Thread drh
Joe Wilson <[EMAIL PROTECTED]> wrote:
> What operations/work patterns will benefit most from 
> PRAGMA locking_mode = EXCLUSIVE?
> 
> Can you roughly quantify the speedups in such cases?
> 

When you do "PRAGMA locking_mode=EXCLUSIVE", it means that
SQLite will not release its EXCLUSIVE lock after it does its
first write until you either close the connection or you
do "PRAGMA locking_mode=NORMAL".  There are various optimizations
that can occur when this is the case.
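For example (a sketch only, error checking omitted; the statements in the
middle are placeholders):

   /* Keep the EXCLUSIVE lock across a whole batch of writes. */
   sqlite3_exec(db, "PRAGMA locking_mode=EXCLUSIVE;", 0, 0, 0);
   sqlite3_exec(db, "BEGIN;", 0, 0, 0);
   /* ... many INSERT/UPDATE/DELETE statements ... */
   sqlite3_exec(db, "COMMIT;", 0, 0, 0);
   /* Release the exclusive lock again (or simply close the connection). */
   sqlite3_exec(db, "PRAGMA locking_mode=NORMAL;", 0, 0, 0);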

There is a plot of runtime of CVSHEAD versus version 3.3.13
at

   http://www.sqlite.org/relspeed-20070326-ephemeral.gif

(As the name implies, this GIF will be ephemeral.  Do not expect it
to be there for more than a few days.)  Any value less than
1.0 means that performance has improved.  The test script is

   http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/speed2.test

The orange bars show the relative performance of CVSHEAD with
no special build options.  1.0 is the performance of version 3.3.13
so anything less than 1.0 is an improvement.  The red bars show
CVSHEAD with all the source files concatenated into a single big
file and compiled as a single translation unit.  Compiling this
way allows the compiler to do additional optimizations that result
in improved performance.  The blue bars are the same sqlite-as-one-
great-big-source-file optimization but with -DSQLITE_OMIT_XFER_OPT=1
defined.

The tests are cumulative.  The database that results from one test
feeds into the next.  The two "insert" tests at the top both
consist of a large number (5) of distinct INSERT statements.
These tests are dominated by the parser and code generator and
all of our recent work has been directed toward optimizing the
backend so these tests do not show any improvement.  You can see
that the new "XFER Optimizer" actually results in a slight
performance drop since for each INSERT statement, the parser has
to check to see if the XFER optimization applies.  (It never
does in the first two tests.)

The interface between the B-Tree layer and the Pager layer has
been refactored.  This refactoring prevented the compiler from
doing certain function inlinings that it was able to do before,
resulting in a performance drop for the orange bar.  But when 
the pager and b-tree modules were combined into a single source
file, the function inlining was able to occur again and so
performance improved.

Tests select4 through select6 are about twice as fast.  We believe
this is because the pager cache is no longer flushed unless the
database is changed by another process.

VACUUM is much faster due to the XFER optimization.  Furthermore,
VACUUM with the XFER optimization greatly reduces the database
fragmentation.  This is (we believe) why performance is so much
better in the later tests for the orange and red bars.  Operations
on a defragmented database file go much faster.

Changes have been made to the b-tree and pager layers that omit
certain unnecessary writes to the database file and to the rollback
journal.  This reduction in disk I/O results in the 20x performance
improvements seen for tests delete1 and drop1.

CVSHEAD passes the "quick" regression tests.  But we know there
are still issues with the code.  CVSHEAD is not currently
recommended for use in shipping products, but it is adequate
for development work, we believe.

In past releases of SQLite, we have made available a ZIP archive
with preprocessed source files.  In the future, we may change this
so that instead of a ZIP archive full of individual files, we
ship a single "sqlite3.c" source file which contains all of the
source files in a single translation unit.  By this mode of
delivery, we hope to enable users to see the performance improvements
we are seeing in our red bars.

--
D. Richard Hipp  <[EMAIL PROTECTED]>





RE: [sqlite] Questions on views

2007-03-26 Thread Dennis Volodomanov
Oops - that last sentence I wrote actually doesn't make sense :) I know
what prepared statements are as I'm using them (doh!). I might have a
problem that I need to add more WHERE conditions to those "basic"
statements, which probably wouldn't work with storing them, as I can't
possibly know all possible combinations. Still - I'll give this some
thought to see if I can find something in this direction.

Thanks again. 

> -Original Message-
> From: Dennis Volodomanov 
> Sent: Tuesday, March 27, 2007 11:43 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Questions on views
> 
> Thanks for the reply!
> 
> I'm not really trying to blame SQLite here, as I know 
> there're limits on just how fast it can prepare a statement, 
> execute it and give me the results - and it's fast, I'm just 
> looking for ways to make it faster.
> 
> The reason that such a huge amount of statements needs to be 
> executed so many times very quickly is that we have a tree 
> built up based on those statements and that tree needs to be 
> pruned if the results of statements are empty in real-time as 
> the user is typing a string (a search string basically). Each 
> node in the tree has (in my test scenario) from 1000 to 2000 
> children and each child has a few (up to 10-20 children of 
> their own). There're quite a few optimizations that I've 
> already done in the application so that unnecessary 
> statements are not executed, but there're still 2000-3000 
> statements that need to be executed. And my test scenario is 
> not that big actually - the real application is expected to 
> handle at least 2-4 times more data regularly. I know it's 
> pushing everything to the limits, but that's what we need to 
> implement.
> 
> Thank you for the suggestion about prepared statements - 
> that's one thing I have looked at yet and I'll go and do some 
> reading on it now.
> I'll get back with results if I succeed in implementing it.
> 
> Regards,
> 
>Dennis 
> 
> > -Original Message-
> > From: Joe Wilson [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, March 27, 2007 11:31 AM
> > To: sqlite-users@sqlite.org
> > Subject: RE: [sqlite] Questions on views
> > 
> > --- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> > > Yes, after timing both I found that SELECT EXISTS is
> > actually a tiny
> > > bit faster, which does matter when multiplied by thousands of 
> > > executions in a row.
> > > 
> > > Unfortunately, I still cannot get it as fast as I want - it takes 
> > > approximately 1500-2000 ms per approximately 2000-3000
> > executions. Is
> > > there any way to speed this up even further somehow? The
> > scenario is
> > > that I have a table full of SQL statements that need to be
> > reexecuted
> > > often to check whether they return any results or not (from other 
> > > tables). I have all those SQL statements in memory in the
> > application,
> > > so that saves a bit of time, but can I do anything else?
> > 
> > 0.6 milliseconds per query is not fast enough? Wow!
> > What's your system doing that it needs to poll the database 
> so often?
> > 
> > Unless you want to redesign your application, there's not 
> much you can 
> > do except eliminate the parsing overhead.
> > 
> > In the table where you store the SQL statements, create a column to 
> > hold the MD5 hash value of the SQL and use that as a key to an 
> > in-memory hash map of prepared statements, where you create the 
> > prepared statement and insert it into the map with the MD5 value as 
> > its key if it does not exist. Keep in mind that your prepared 
> > statements are tied to the connection on which they were 
> created, so 
> > if you have many connections you will need many maps. Use 
> > sqlite3_prepare_v2().
> > 
> > 
> >  



RE: [sqlite] Questions on views

2007-03-26 Thread Dennis Volodomanov
Thanks for the reply!

I'm not really trying to blame SQLite here, as I know there're limits on
just how fast it can prepare a statement, execute it and give me the
results - and it's fast, I'm just looking for ways to make it faster.

The reason that such a huge amount of statements needs to be executed so
many times very quickly is that we have a tree built up based on those
statements and that tree needs to be pruned if the results of statements
are empty in real-time as the user is typing a string (a search string
basically). Each node in the tree has (in my test scenario) from 1000 to
2000 children and each child has a few (up to 10-20 children of their
own). There're quite a few optimizations that I've already done in the
application so that unnecessary statements are not executed, but
there're still 2000-3000 statements that need to be executed. And my
test scenario is not that big actually - the real application is
expected to handle at least 2-4 times more data regularly. I know it's
pushing everything to the limits, but that's what we need to implement.

Thank you for the suggestion about prepared statements - that's one
thing I haven't looked at yet, and I'll go and do some reading on it now.
I'll get back with results if I succeed in implementing it.

Regards,

   Dennis 

> -Original Message-
> From: Joe Wilson [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, March 27, 2007 11:31 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Questions on views
> 
> --- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> > Yes, after timing both I found that SELECT EXISTS is 
> actually a tiny 
> > bit faster, which does matter when multiplied by thousands of 
> > executions in a row.
> > 
> > Unfortunately, I still cannot get it as fast as I want - it takes 
> > approximately 1500-2000 ms per approximately 2000-3000 
> executions. Is 
> > there any way to speed this up even further somehow? The 
> scenario is 
> > that I have a table full of SQL statements that need to be 
> reexecuted 
> > often to check whether they return any results or not (from other 
> > tables). I have all those SQL statements in memory in the 
> application, 
> > so that saves a bit of time, but can I do anything else?
> 
> 0.6 milliseconds per query is not fast enough? Wow!
> What's your system doing that it needs to poll the database so often?
> 
> Unless you want to redesign your application, there's not 
> much you can do except eliminate the parsing overhead.
> 
> In the table where you store the SQL statements, create a 
> column to hold the MD5 hash value of the SQL and use that as 
> a key to an in-memory hash map of prepared statements, where 
> you create the prepared statement and insert it into the map 
> with the MD5 value as its key if it does not exist. Keep in 
> mind that your prepared statements are tied to the connection 
> on which they were created, so if you have many connections 
> you will need many maps. Use sqlite3_prepare_v2().
> 
> 
>  



RE: [sqlite] Questions on views

2007-03-26 Thread Joe Wilson
--- Dennis Volodomanov <[EMAIL PROTECTED]> wrote:
> Yes, after timing both I found that SELECT EXISTS is actually a tiny bit
> faster, which does matter when multiplied by thousands of executions in
> a row.
> 
> Unfortunately, I still cannot get it as fast as I want - it takes
> approximately 1500-2000 ms per approximately 2000-3000 executions. Is
> there any way to speed this up even further somehow? The scenario is
> that I have a table full of SQL statements that need to be reexecuted
> often to check whether they return any results or not (from other
> tables). I have all those SQL statements in memory in the application,
> so that saves a bit of time, but can I do anything else?

0.6 milliseconds per query is not fast enough? Wow!
What's your system doing that it needs to poll the database so often?

Unless you want to redesign your application, there's not much you can 
do except eliminate the parsing overhead.

In the table where you store the SQL statements, create a column to 
hold the MD5 hash value of the SQL and use that as a key to an in-memory 
hash map of prepared statements, where you create the prepared statement 
and insert it into the map with the MD5 value as its key if it does not 
exist. Keep in mind that your prepared statements are tied to the 
connection on which they were created, so if you have many connections
you will need many maps. Use sqlite3_prepare_v2().
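A minimal sketch of that idea in C, keyed directly on the SQL text rather
than an MD5 hash to keep it short (a fixed-size linear table stands in for
a real hash map, and everything here is per-connection, as noted above):

  #include <string.h>
  #include "sqlite3.h"

  #define CACHE_SIZE 64

  static struct { char *zSql; sqlite3_stmt *pStmt; } aCache[CACHE_SIZE];
  static int nCached = 0;

  /* Return a reset, ready-to-run statement for zSql, compiling it only once. */
  static sqlite3_stmt *get_stmt(sqlite3 *db, const char *zSql){
    int i;
    for(i=0; i<nCached; i++){
      if( strcmp(aCache[i].zSql, zSql)==0 ){
        sqlite3_reset(aCache[i].pStmt);      /* reuse: no parse, no codegen */
        return aCache[i].pStmt;
      }
    }
    if( nCached==CACHE_SIZE ) return 0;      /* real code would evict here */
    if( sqlite3_prepare_v2(db, zSql, -1, &aCache[nCached].pStmt, 0)!=SQLITE_OK )
      return 0;
    aCache[nCached].zSql = sqlite3_mprintf("%s", zSql);
    return aCache[nCached++].pStmt;
  }

  /* 1 if the query returns at least one row, 0 otherwise (or on error). */
  static int has_rows(sqlite3 *db, const char *zSql){
    sqlite3_stmt *pStmt = get_stmt(db, zSql);
    return pStmt!=0 && sqlite3_step(pStmt)==SQLITE_ROW;
  }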


 




Re: [sqlite] Error reporting problem

2007-03-26 Thread Martin Jenkins

Vivien Malerba wrote:

Here is a sample test case, just uncompress, run "make" and "./test".
Here is what I get using SQLite 3.3.13 (On Linux):
SQL error (step): SQL logic error or missing database
SQL error (step): column name is not unique

It shows that calling sqlite3_reset() seems to be required (contrary
to what the doc says), or maybe I got something wrong...


Hmm. I built this on Debian (had to tweak the Makefile very very 
slightly) and got the same results as you. From my reading of:


"""Goofy Interface Alert:  [...] The problem has been fixed with the 
"v2" interface. If you prepare all of your SQL statements using either 
sqlite3_prepare_v2() or sqlite3_prepare16_v2() instead of the legacy 
sqlite3_prepare() and sqlite3_prepare16(), then the more specific 
result-codes are returned directly by sqlite3_step(). The use of the 
"v2" interface is recommended."""


you should NOT have to call sqlite3_reset() after an error. Perhaps 
someone with experience of the C API could comment?


Martin




RE: [sqlite] Questions on views

2007-03-26 Thread Dennis Volodomanov
Yes, after timing both I found that SELECT EXISTS is actually a tiny bit
faster, which does matter when multiplied by thousands of executions in
a row.

Unfortunately, I still cannot get it as fast as I want - it takes
approximately 1500-2000 ms per approximately 2000-3000 executions. Is
there any way to speed this up even further somehow? The scenario is
that I have a table full of SQL statements that need to be reexecuted
often to check whether they return any results or not (from other
tables). I have all those SQL statements in memory in the application,
so that saves a bit of time, but can I do anything else?

Thanks!

   Dennis 

> -Original Message-
> From: Dennis Cote [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, March 27, 2007 12:35 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Questions on views
> 
> Dennis Volodomanov wrote:
> > Is doing a SELECT EXISTS (...) faster than a SELECT COUNT 
> (... LIMIT 
> > 1) or would it be the same (I would expect them to be the same, but 
> > that's only my guess)?
> >
> >   
> I would expect them to be very nearly the same. Any 
> difference would only be apparent if you repeat them many 
> times (which is what you are doing). You should probably try 
> it both ways and measure the execution time to see which is faster.
> 
> HTH
> Dennis Cote
> 
> --
> ---
> To unsubscribe, send email to [EMAIL PROTECTED]
> --
> ---
> 
> 




Re: [sqlite] Issue with trailing blanks

2007-03-26 Thread Dennis Cote

Joel Cochran wrote:


I do think more and more that the solution for me is to trim the trailing
blanks before INSERTing them into SQLite.


That will be your best solution, and it will make your database files 
smaller too since sqlite won't be storing the trailing spaces.
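If the data has already been imported, a one-off cleanup with the built-in
rtrim() function should do the same job (a sketch; the table and column
names are only placeholders):

  /* Strip trailing spaces from rows that were loaded with padding intact. */
  sqlite3_exec(db, "UPDATE mytable SET myfield = rtrim(myfield);", 0, 0, 0);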


Dennis Cote




RE: [sqlite] Issue with trailing blanks

2007-03-26 Thread James Dennett
> From: Joel Cochran [mailto:[EMAIL PROTECTED]
> 
> Hi Puneet,
> 
> I probably shouldn't have said that they 'ignore' the blanks, but they are
> capable of treating them as white space for text matching purposes.  I can't
> speak for Oracle, but I'm pretty sure MySQL and SqlServer (and I know for
> sure DB2) all allow you to search on 'A' and they will return records for
> 'A ' or 'A  ', etc.  I think it is intuitive to treat trailing blanks as
> whitespace, but that might be because of my main experience with DB2/400.

(Not very SQLite specific, but:)

This is a difference between CHAR(n) and VARCHAR(n) in many databases; a
VARCHAR knows how many characters are significant (and they can include
whitespace), whereas CHAR(n) stores exactly n characters but views
trailing blanks as insignificant.  CHAR(n) is ugly from most design
viewpoints, but can be handled slightly more efficiently in many
databases so we accept ugly code to deal with it.  I'd definitely
recommend removing the trailing blanks when you move away from a
schema/database that requires them.

-- James




Re: [sqlite] Issue with trailing blanks

2007-03-26 Thread Teg
Hello Joel,

Why not strip the spaces before insertion?

C

Monday, March 26, 2007, 4:36:02 PM, you wrote:

JC> Howdy all,

JC> I am new to SQLite, so I hope this isn't too much of a newbie question, but
JC> I searched the Internet, the archives, and the help docs and could not find
JC> any mention of this.

JC> I am populating an SQLite database from a legacy IBM AS/400 database.  The
JC> 400 stores all character fields with padded blanks.  As a result, when I
JC> export the data the blanks remain.  Normally this is not a problem, but I
JC> noticed in SQLite when I do a select statement the MYFIELD = 'A' will not
JC> work if the data is in fact 'A ' (a trailing blank).

JC> SQLite apparently does not ignore trailing blanks on character matching like
JC> other DBs do.  Has anyone else experienced this, and if so what have you
JC> done about it?  I know I could rewrite the export routine to trim the
JC> trailing blanks as they come out of the parent DB, but I was hoping there
JC> may be another solution.  If nothing else, maybe I can find one of you to
JC> commiserate with!

JC> Thanks,




-- 
Best regards,
 Teg    mailto:[EMAIL PROTECTED]





Re: [sqlite] Issue with trailing blanks

2007-03-26 Thread Joel Cochran

Hi Puneet,

I probably shouldn't have said that they 'ignore' the blanks, but they are
capable of treating them as white space for text matching purposes.  I can't
speak for Oracle, but I'm pretty sure MySQL and SqlServer (and I know for
sure DB2) all allow you to search on 'A' and they will return records for 'A
' or 'A  ', etc.  I think it is intuitive to treat trailing blanks as
whitespace, but that might be because of my main experience with DB2/400.

One difference, of course, is that these other databases allow you to define
character fields with lengths, as opposed to just TEXT.  I suppose that
inherently means that TEXT is literal (meaning that it recognizes the blank
as its ASCII character), where a CHAR(35) field would know, within the
context of its defined length, how many trailing blanks it could ignore.

I thought about the LIKE idea, but 'AA' is potentially valid as well, so
ultimately that would create its own problems.  And in this particular case,
this is a key field, so EQUAL matching is pretty necessary.

I do think more and more that the solution for me is to trim the trailing
blanks before INSERTing them into SQLite.

Thanks for your input,

--
Joel Cochran
Stonewall Technologies, Inc.


On 3/26/07, P Kishor <[EMAIL PROTECTED]> wrote:


On 3/26/07, Joel Cochran <[EMAIL PROTECTED]> wrote:
> Howdy all,
>
> I am new to SQLite, so I hope this isn't too much of a newbie question,
but
> I searched the Internet, the archives, and the help docs and could not
find
> any mention of this.
>
> I am populating an SQLite database from a legacy IBM AS/400
database.  The
> 400 stores all character fields with padded blanks.  As a result, when I
> export the data the blanks remain.  Normally this is not a problem, but
I
> noticed in SQLite when I do a select statement the MYFIELD = 'A' will
not
> work if the data is in fact 'A ' (a trailing blank).

Try MYFIELD LIKE 'A%'

>
> SQLite apparently does not ignore trailing blanks on character matching
like
> other DBs do.

I am not sure they do. I am speaking from memory, but I am pretty sure
that both SQL Server and Oracle don't just ignore blanks.

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
-
collaborate, communicate, compete
=






Re: [sqlite] Issue with trailing blanks

2007-03-26 Thread P Kishor

On 3/26/07, Joel Cochran <[EMAIL PROTECTED]> wrote:

Howdy all,

I am new to SQLite, so I hope this isn't too much of a newbie question, but
I searched the Internet, the archives, and the help docs and could not find
any mention of this.

I am populating an SQLite database from a legacy IBM AS/400 database.  The
400 stores all character fields with padded blanks.  As a result, when I
export the data the blanks remain.  Normally this is not a problem, but I
noticed in SQLite when I do a select statement the MYFIELD = 'A' will not
work if the data is in fact 'A ' (a trailing blank).


Try MYFIELD LIKE 'A%'



SQLite apparently does not ignore trailing blanks on character matching like
other DBs do.


I am not sure they do. I am speaking from memory, but I am pretty sure
that both SQL Server and Oracle don't just ignore blanks.

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
-
collaborate, communicate, compete
=




[sqlite] Issue with trailing blanks

2007-03-26 Thread Joel Cochran

Howdy all,

I am new to SQLite, so I hope this isn't too much of a newbie question, but
I searched the Internet, the archives, and the help docs and could not find
any mention of this.

I am populating an SQLite database from a legacy IBM AS/400 database.  The
400 stores all character fields with padded blanks.  As a result, when I
export the data the blanks remain.  Normally this is not a problem, but I
noticed in SQLite when I do a select statement the MYFIELD = 'A' will not
work if the data is in fact 'A ' (a trailing blank).

SQLite apparently does not ignore trailing blanks on character matching like
other DBs do.  Has anyone else experienced this, and if so what have you
done about it?  I know I could rewrite the export routine to trim the
trailing blanks as they come out of the parent DB, but I was hoping there
may be another solution.  If nothing else, maybe I can find one of you to
commiserate with!

Thanks,

--
Joel Cochran
Stonewall Technologies, Inc.


Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Alberto Simões

On 3/26/07, Joe Wilson <[EMAIL PROTECTED]> wrote:

--- Alberto Simões <[EMAIL PROTECTED]> wrote:
> The tables are related but they are already being created in parallel
> (three tables). I was joining them just because it was easier to move.
> But given the new, erm, big size, I think I'll maintain them split.
>
> Joe Wilson idea of attaching the tables seems cool. I'll give it a try.

If you're creating the 3 databases in parallel, you would be better
off creating 3 connections (each from a seperate thread or process)
to populate them in order to more efficiently use the spare cycles of
the CPU during I/O waits.


Yeah, I am using three different processes on three different machines
(using a cluster) to populate them.


Use ATTACH from one connection after the databases have been populated
to read from all of them.


I'll try that. Thanks.

--
Alberto Simões




RE: [sqlite] Any way to do this faster?

2007-03-26 Thread RB Smissaert
Had a good look at this now and doing:

delete from tableB
where not exists
(select id from tableA where tableA.id = tableB.id)

Is indeed quite a bit faster than doing:

delete from tableB
where id not in
(select tableA.id from tableA)

In my case about 3 times as fast.
Looking at the query plan with EXPLAIN QUERY PLAN was a good tip!

I think though that I gained more by looking more carefully when to put the
index on the id column. I need the index, but not before the delete, so I
gained a lot by creating the index after the delete. This meant fewer rows to
index plus less work to be done with the delete.
Probably there will be more places in my app where looking at the timing of
the index creation will speed things up, so thanks again for the advice.

RBS


-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: 26 March 2007 18:16
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Any way to do this faster?

[EMAIL PROTECTED] wrote:
>
> I will need an index on that field later, but
> I could drop it prior to the delete and create
> a new one after.
>
>   
Don't do that. If you need the index, then leave it as is.

> Thinking about it I am not sure in any case of the
> value of an index after deletes on a table.
> Is it usually better to re-index after deletes?
>
>   
Indexes are updated automatically as records are added and deleted from 
a table, that's why they add overhead if they are not serving some 
purpose. Your index will be correct after you delete the records from 
tableB.

HTH
Dennis Cote





Re: [sqlite] Index creation

2007-03-26 Thread Joe Wilson
--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> > improved dramatically. So I attempted the creation of the index off hours on
> > the production system, and after 4 hours no index.  I can't detect any
> > activity at all. The journal file and the .db file just sit at the same size
> > for 4 hours.  Why is this failing?  It seems like it is just sitting there
> > doing nothing.  When I created the test index, I noticed the journal file
> > changing and the .db file changing during the 2.5 hours to create.  On the
> > production .db file, nothing is happening.  I have all associated processes
> > killed that ineract with the db file, so I know it is not locked.
> 
> I assume that the copied "test" database was indexed immediately after its
> creation. If this was the case then the entire file may have been in the OS
> cache resulting in very quick indexing. Try running "wc prod.db" or 
> "cat prod.db >/dev/null" and then creating the indexes on prod.db to see 
> what happens.

The original poster confirmed that cat'ting the file to /dev/null reduced index
creation time to 2.5 hours on the original database file.

Could some optional heuristic be incorporated into SQLite's pager to do 
something 
similar for such large transactions and/or queries?




 




Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Joe Wilson
--- Alberto Simões <[EMAIL PROTECTED]> wrote:
> The tables are related but they are already being created in parallel
> (three tables). I was joining them just because it was easier to move.
> But given the new, erm, big size, I think I'll maintain them split.
> 
> Joe Wilson idea of attaching the tables seems cool. I'll give it a try.

If you're creating the 3 databases in parallel, you would be better
off creating 3 connections (each from a seperate thread or process) 
to populate them in order to more efficiently use the spare cycles of 
the CPU during I/O waits.

Use ATTACH from one connection after the databases have been populated
to read from all of them.
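For example (a sketch; the file names are made up, while the bigram/trigram
table and column names are taken from the earlier message, and result
handling is omitted):

  sqlite3 *db = 0;
  sqlite3_open("bigrams.db", &db);
  sqlite3_exec(db, "ATTACH DATABASE 'trigrams.db'   AS tri;",   0, 0, 0);
  sqlite3_exec(db, "ATTACH DATABASE 'tetragrams.db' AS tetra;", 0, 0, 0);
  /* One connection can now read from all three files in a single query: */
  sqlite3_exec(db,
      "SELECT b.A, b.B, b.Count, t.Count"
      "  FROM bigrams AS b JOIN tri.trigrams AS t"
      "    ON t.A = b.A AND t.B = b.B;",
      0, 0, 0);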


 




Re: [sqlite] round and storage class

2007-03-26 Thread Dennis Cote

Iulian Musat wrote:

Hello everybody !

First of all I have to say that I'm a fairly new user of SQLite, so be 
kind :-)


The problem is with the "round" function: the result have a decimal 
point even if the second argument is zero (the default):

sqlite> select round(1.234);
1.0
sqlite> select round(1.234,0);
1.0

I'm not sure if this shouldn't produce '1' instead of '1.0'.

The reason for bringing this up is the resulted storage class in a 
statement like this:

INSERT INTO tableint SELECT ROUND(some_expression) FROM other_table;

I know that in theory this shouldn't matter for SQLite, but I suppose 
that some queries would be more efficient if the storage class is 
INTEGER for the columns involved (please tell me if I'm wrong here).



So, here are the questions:
1. If you feel that round should behave this way, how do I submit a 
patch. Not a lot of modifications nor terrible smart (basically some 
work done at the end of roundFunc from func.c), but I'm also curios 
how one can submit patches (sorry if I missed this info from sqlite.org).
See section Contributed Code at http://www.sqlite.org/copyright.html for 
a start.


2. Is there a way to find the storage class for a particular field 
from a particular row of a table?
I'm not sure for example if the next statements will produce the same 
storage class:

create table tableint (i integer);
insert into tableint values(1);
insert into tableint values(1.0);
insert into tableint values(round(1));

You can use the typeof() function to get the type (storage class) of a 
field.


   select typeof(round(1.234));

You can use the cast(x as type) syntax to change the type of the round 
result to integer.


   select cast(round(1.234) as integer);

HTH
Dennis Cote




Re: [sqlite] Any way to do this faster?

2007-03-26 Thread Dennis Cote

[EMAIL PROTECTED] wrote:


I will need an index on that field later, but
I could drop it prior to the delete and create
a new one after.

  

Don't do that. If you need the index, then leave it as is.


Thinking about it I am not sure in any case of the
value of an index after deletes on a table.
Is it usually better to re-index after deletes?

  
Indexes are updated automatically as records are added and deleted from 
a table, that's why they add overhead if they are not serving some 
purpose. Your index will be correct after you delete the records from 
tableB.


HTH
Dennis Cote




Re: [sqlite] How does SQLite store data?

2007-03-26 Thread drh
<[EMAIL PROTECTED]> wrote:
> 
> I am not aware of a BOOLEAN type. 
> The types defined in the website are: NULL, INTEGER, REAL, TEXT and BLOB. 
> Is BOOLEAN a hidden type? 
>

BOOLEAN is not a different type.  What Dennis meant was that
integer values 0 and 1 are stored more efficiently in the new
format - 1 byte instead of 2.  The integer values 0 and 1 are
commonly used as booleans, so we use the term "boolean type"
in conversation, though in truth a boolean is really just an
integer.
--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] Any way to do this faster?

2007-03-26 Thread Dennis Cote

Brian Johnson wrote:

Another way of saying what Dennis said (I had to read his reply twice before I
understood it):

  

Point taken. :-)

I often think I'm being too wordy in my replies, but when I try to make 
my statements more succinct they seem to be confusing.


I guess I just don't have the gift for formulating short clear direct 
answers that some other (like Igor and Richard) do.


Dennis Cote




Re: [sqlite] How does SQLite store data?

2007-03-26 Thread rhurst2

 Dennis Cote <[EMAIL PROTECTED]> wrote: 
> John Stanton wrote:
> > It does not have fixed length columns except for the ones which hold 
> > integer and real numbers and boolean values.
> >
> Actually, integers are stored in a variable length format as well. It 
> takes less space to store smaller integer values than it does to store 
> large values. This allows sqlite to handle full 64 bit integers, but 
> does not waste space storing unused leading zero bits for most fields 
> that typically use a much smaller range of integer values.
> 
> It also has a new file format (no longer the default) that stores 
> booleans more efficiently. Use pragma legacy_file_format=off when 
> initializing the database to select this format. I believe booleans 
> values are fixed size in both formats.
> 
> Dennis Cote
> 
> 
> 
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
> 

I am not aware of a BOOLEAN type. 
The types defined in the website are: NULL, INTEGER, REAL, TEXT and BLOB. 
Is BOOLEAN a hidden type? 
Ray 
 




[sqlite] round and storage class

2007-03-26 Thread Iulian Musat

Hello everybody !

First of all I have to say that I'm a fairly new user of SQLite, so be 
kind :-)


The problem is with the "round" function: the result has a decimal 
point even if the second argument is zero (the default):

sqlite> select round(1.234);
1.0
sqlite> select round(1.234,0);
1.0

I'm not sure if this shouldn't produce '1' instead of '1.0'.

The reason for bringing this up is the resulted storage class in a 
statement like this:

INSERT INTO tableint SELECT ROUND(some_expression) FROM other_table;

I know that in theory this shouldn't matter for SQLite, but I suppose 
that some queries would be more efficient if the storage class is 
INTEGER for the columns involved (please tell me if I'm wrong here).



So, here are the questions:
1. If you feel that round should behave this way, how do I submit a 
patch. Not a lot of modifications nor terrible smart (basically some 
work done at the end of roundFunc from func.c), but I'm also curious how 
one can submit patches (sorry if I missed this info from sqlite.org).


2. Is there a way to find the storage class for a particular field from 
a particular row of a table?
I'm not sure for example if the next statements will produce the same 
storage class:

create table tableint (i integer);
insert into tableint values(1);
insert into tableint values(1.0);
insert into tableint values(round(1));

I'm just getting used to VDBE, but from running these with "explain" I 
think everything is clear until reaching "MakeRecord". For example, the 
first statement would produce something like:

 cut -
5|Integer|1|0|
6|MakeRecord|1|0|d
 cut -
And the second:
 cut -
5|Real|0|0|1.0
6|MakeRecord|1|0|d
 cut -

The 'd' from MakeRecord is just the column affinity, doesn't specify how 
the actual record was stored:

sqlite> explain insert into tableint values(1.5);
...
5|Real|0|0|1.5
6|MakeRecord|1|0|d
...
Same code, but obviously 1.5 is stored as real.


Regards,
-Iulian





Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Alberto Simões

On 3/26/07, Eduardo Morras <[EMAIL PROTECTED]> wrote:

At 17:22 26/03/2007, you wrote:
>Hi
>
>I have a database with three tables (bigrams, trigrams and
>tetragrams). Basically, they are:
>  TABLE( A, B, Count)  IDX(A,B)  IDX(B)
>  TABLE( A, B, C, Count)  IDX(A,B,C)  IDX(B)  IDX(C)
>  TABLE( A, B, C, D, Count) IDX (A,B,C,D) IDX(B) IDX(C) IDX(D)
>
>Now, this database is quickly getting around the 2GB of disk space.
>My question, about efficiency, is, it better to have:
>- one big database, one connection from C code to the database
>- three smaller databases, three connection from C code to the database
>
>Thank you
>Alberto

The problem with this database is that the three tables are related, no? A,B,C 
and D are the same data on all tables. If it's not true and tables have 
separate data, you can easily split in three database files, permiting you 
concurrent writing and getting faster read times if are on distinct drives.


The tables are related but they are already being created in parallel
(three tables). I was joining them just because it was easier to move.
But given the new, erm, big size, I think I'll maintain them split.

Joe Wilson idea of attaching the tables seems cool. I'll give it a try.

Thank you, folks.
--
Alberto Simões




Re: [sqlite] Any way to do this faster?

2007-03-26 Thread bartsmissaert
> also, as Dennis said .. unless you have
> some other reason for that index on
> Table B, get rid of it.

I will need an index on that field later, but
I could drop it prior to the delete and create
a new one after.

Thinking about it I am not sure in any case of the
value of an index after deletes on a table.
Is it usually better to re-index after deletes?

RBS



> Another way of saying what Dennis said (I had to read his reply twice
> before I
> understood it):
>
> your query: for every record in TableB it returns ALL the records in
> tableA
> and then looks through them
>
> Dennis's query: for every record in TableB it checks tableA directly for
> the
> existence of that ID
>
>
> the speed increase should be dramatic
>
> also, as Dennis said .. unless you have some other reason for that index
> on
> Table B, get rid of it.  Indexes slow down everything but database reads
> and
> you're not using it in this example.
>
>
> [EMAIL PROTECTED] wrote:
>>
>> Thanks Dennis, will try that when I get chance (at work now) and will
>> report back about the difference
>> it made.
>>
>> RBS
>>
>> > RB Smissaert wrote:
>> >> Simplified I have the following situation:
>> >>
>> >> 2 tables, tableA and tableB both with an integer field, called ID,
>> >> holding
>> >> unique integer numbers in tableA and non-unique integer numbers in
>> >> tableB.
>> >> Both tables have an index on this field and for tableA this is an
>> >> INTEGER
>> >> PRIMARY KEY.
>> >> Now I need to delete the rows in tableB where this number doesn't
>> appear
>> >> in
>> >> the corresponding field in tableA.
>> >>
>> >> Currently I do this with this SQL:
>> >>
>> >> Delete from tableB where ID not in (select tableA.ID from tableA)
>> >>
>> >> When table tableB gets big (say some 10 rows) this will get a bit
>> >> slow
>> >> and I wonder if there is a better way to do this.
>> >>
>> >> RBS
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> -
>> >> To unsubscribe, send email to [EMAIL PROTECTED]
>> >> -
>> >>
>> >>
>> >>
>> > Your query is doing a complete table scan of tableA for each record in
>> a
>> > table scan of tableB.
>> >
>> > SQLite version 3.3.13
>> > Enter ".help" for instructions
>> > sqlite> create table tableA(id integer primary key, b);
>> > sqlite> create table tableB(id, c);
>> > sqlite> create index b_id on tableB(id);
>> > sqlite> explain query plan delete from tableB where id not in (select
>> > tableA.id
>> > from tableA);
>> > 0|0|TABLE tableB
>> > 0|0|TABLE tableA
>> >
>> > You can improve this greatly using correlated subquery that will use
>> the
>> > primary key index on tableA to find any matching records.
>> >
>> > sqlite> explain query plan delete from tableB where not exists (select
>> > id from t
>> > ableA where tableA.id = tableB.id);
>> > 0|0|TABLE tableB
>> > 0|0|TABLE tableA USING PRIMARY KEY
>> >
>> > Note that your index on tableB.id is not used and could be eliminated
>> > unless it serves another purpose.
>> >
>> > HTH
>> > Dennis Cote
>> >
>> > -
>> > To unsubscribe, send email to [EMAIL PROTECTED]
>> > -
>> >
>> >
>> >
>>
>>
>>
>>
>> -
>> To unsubscribe, send email to [EMAIL PROTECTED]
>> -
>>
>>
>
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
>
>
>







Re: [sqlite] Any way to do this faster?

2007-03-26 Thread Brian Johnson
Another way of saying what Dennis said (I had to read his reply twice before I
understood it):

your query: for every record in TableB it returns ALL the records in tableA
and then looks through them

Dennis's query: for every record in TableB it checks tableA directly for the
existence of that ID


the speed increase should be dramatic

also, as Dennis said .. unless you have some other reason for that index on
Table B, get rid of it.  Indexes slow down everything but database reads and
you're not using it in this example.


[EMAIL PROTECTED] wrote:
>
> Thanks Dennis, will try that when I get chance (at work now) and will
> report back about the difference
> it made.
>
> RBS
>
> > RB Smissaert wrote:
> >> Simplified I have the following situation:
> >>
> >> 2 tables, tableA and tableB both with an integer field, called ID,
> >> holding
> >> unique integer numbers in tableA and non-unique integer numbers in
> >> tableB.
> >> Both tables have an index on this field and for tableA this is an
> >> INTEGER
> >> PRIMARY KEY.
> >> Now I need to delete the rows in tableB where this number doesn't appear
> >> in
> >> the corresponding field in tableA.
> >>
> >> Currently I do this with this SQL:
> >>
> >> Delete from tableB where ID not in (select tableA.ID from tableA)
> >>
> >> When table tableB gets big (say some 10 rows) this will get a bit
> >> slow
> >> and I wonder if there is a better way to do this.
> >>
> >> RBS
> >>
> >>
> >>
> >>
> >>
> >>
> >> -
> >> To unsubscribe, send email to [EMAIL PROTECTED]
> >> -
> >>
> >>
> >>
> > Your query is doing a complete table scan of tableA for each record in a
> > table scan of tableB.
> >
> > SQLite version 3.3.13
> > Enter ".help" for instructions
> > sqlite> create table tableA(id integer primary key, b);
> > sqlite> create table tableB(id, c);
> > sqlite> create index b_id on tableB(id);
> > sqlite> explain query plan delete from tableB where id not in (select
> > tableA.id
> > from tableA);
> > 0|0|TABLE tableB
> > 0|0|TABLE tableA
> >
> > You can improve this greatly using correlated subquery that will use the
> > primary key index on tableA to find any matching records.
> >
> > sqlite> explain query plan delete from tableB where not exists (select
> > id from t
> > ableA where tableA.id = tableB.id);
> > 0|0|TABLE tableB
> > 0|0|TABLE tableA USING PRIMARY KEY
> >
> > Note that your index on tableB.id is not used and could be eliminated
> > unless it serves another purpose.
> >
> > HTH
> > Dennis Cote
> >
> > -
> > To unsubscribe, send email to [EMAIL PROTECTED]
> > -
> >
> >
> >
>
>
>
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
>
>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Eduardo Morras
At 17:22 26/03/2007, you wrote:
>Hi
>
>I have a database with three tables (bigrams, trigrams and
>tetragrams). Basically, they are:
>  TABLE( A, B, Count)  IDX(A,B)  IDX(B)
>  TABLE( A, B, C, Count)  IDX(A,B,C)  IDX(B)  IDX(C)
>  TABLE( A, B, C, D, Count) IDX (A,B,C,D) IDX(B) IDX(C) IDX(D)
>
>Now, this database is quickly getting around the 2GB of disk space.
>My question, about efficiency, is, it better to have:
>- one big database, one connection from C code to the database
>- three smaller databases, three connection from C code to the database
>
>Thank you
>Alberto

The problem with this database is that the three tables are related, no? A, B, C 
and D are the same data in all tables. If that's not true and the tables hold 
separate data, you can easily split them into three database files, permitting 
concurrent writing and faster read times if they are on distinct drives.

If the tables don't hold separate data (the A in the first table is the same in the 
second and third) you must update the 3 databases in cascade.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Any way to do this faster?

2007-03-26 Thread bartsmissaert
Thanks, will try that and report back.

RBS

> I'm not sure if SQLite support this syntax, but try following statement,
>
> Delete from tableB b
> Where not exist ( select 'x'
>   from tableA a
>   where a.id = b.id )
>
> -Original Message-
> From: Dennis Cote [mailto:[EMAIL PROTECTED]
> Sent: 26 March 2007 16:12
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Any way to do this faster?
>
>
> RB Smissaert wrote:
>> Simplified I have the following situation:
>>
>> 2 tables, tableA and tableB both with an integer field, called ID,
> holding
>> unique integer numbers in tableA and non-unique integer numbers in
> tableB.
>> Both tables have an index on this field and for tableA this is an
> INTEGER
>> PRIMARY KEY.
>> Now I need to delete the rows in tableB where this number doesn't
> appear in
>> the corresponding field in tableA.
>>
>> Currently I do this with this SQL:
>>
>> Delete from tableB where ID not in (select tableA.ID from tableA)
>>
>> When table tableB gets big (say some 10 rows) this will get a bit
> slow
>> and I wonder if there is a better way to do this.
>>
>> RBS
>>
>>
>>
>>
>>
>>
>>
> 
> -
>> To unsubscribe, send email to [EMAIL PROTECTED]
>>
> 
> -
>>
>>
>>
> Your query is doing a complete table scan of tableA for each record in a
>
> table scan of tableB.
>
> SQLite version 3.3.13
> Enter ".help" for instructions
> sqlite> create table tableA(id integer primary key, b);
> sqlite> create table tableB(id, c);
> sqlite> create index b_id on tableB(id);
> sqlite> explain query plan delete from tableB where id not in (select
> tableA.id
> from tableA);
> 0|0|TABLE tableB
> 0|0|TABLE tableA
>
> You can improve this greatly using correlated subquery that will use the
>
> primary key index on tableA to find any matching records.
>
> sqlite> explain query plan delete from tableB where not exists (select
> id from t
> ableA where tableA.id = tableB.id);
> 0|0|TABLE tableB
> 0|0|TABLE tableA USING PRIMARY KEY
>
> Note that your index on tableB.id is not used and could be eliminated
> unless it serves another purpose.
>
> HTH
> Dennis Cote
>
> 
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> 
> -
>
>
> THE INFORMATION CONTAINED IN THIS MESSAGE AND ANY ATTACHMENT MAY BE
> PRIVILEGED, CONFIDENTIAL, PROPRIETARY OR OTHERWISE PROTECTED FROM
> DISCLOSURE. If the reader of this message is not the intended recipient,
> you are hereby notified that any dissemination, distribution, copying or
> use of this message and any attachment is strictly prohibited. If you have
> received this message in error, please notify us immediately by replying
> to the message and permanently delete it from your computer and destroy
> any printout thereof.
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
>
>
>




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Joe Wilson
--- Alberto Simões <[EMAIL PROTECTED]> wrote:
> I have a database with three tables (bigrams, trigrams and
> tetragrams). Basically, they are:
>TABLE( A, B, Count)  IDX(A,B)  IDX(B)
>TABLE( A, B, C, Count)  IDX(A,B,C)  IDX(B)  IDX(C)
>TABLE( A, B, C, D, Count) IDX (A,B,C,D) IDX(B) IDX(C) IDX(D)
> 
> Now, this database is quickly getting around the 2GB of disk space.
> My question, about efficiency, is, it better to have:
>  - one big database, one connection from C code to the database
>  - three smaller databases, three connection from C code to the database

It depends on how and how frequently they are populated, but I'd suggest 
3 database files, one connection from C code with 2 ATTACH DATABASE commands.

 http://sqlite.org/lang_attach.html
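
A minimal C sketch of that single-connection approach (the database file
names and the tri.trigrams table name are assumptions; error checking omitted):

/* Sketch: one connection, three database files, two ATTACH commands. */
#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;

    if (sqlite3_open("bigrams.db", &db) != SQLITE_OK) return 1;

    /* Attached databases become schemas "tri" and "tetra", so a single
     * statement can reach tables in any of the three files. */
    sqlite3_exec(db, "ATTACH DATABASE 'trigrams.db' AS tri;", 0, 0, 0);
    sqlite3_exec(db, "ATTACH DATABASE 'tetragrams.db' AS tetra;", 0, 0, 0);

    sqlite3_exec(db, "SELECT count(*) FROM tri.trigrams;", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}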


 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Questions on views

2007-03-26 Thread Joe Wilson
--- Dennis Cote <[EMAIL PROTECTED]> wrote:
> Dennis Volodomanov wrote:
> > Is doing a SELECT EXISTS (...) faster than a SELECT COUNT (... LIMIT 1)
> > or would it be the same (I would expect them to be the same, but that's
> > only my guess)?
> >   
> I would expect them to be very nearly the same. Any difference would 
> only be apparent if you repeat them many times (which is what you are 
> doing). You should probably try it both ways and measure the execution 
> time to see which is faster.

"EXPLAIN SELECT ..." is also a good way to find out what the queries are doing
if you're prepared to decipher the opcodes.
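
A rough timing sketch for comparing the two forms (the table name t, column x
and file name test.db are placeholders, not anything from the original posts):

/* Sketch: time SELECT EXISTS(...) against SELECT COUNT(*) ... LIMIT 1
 * by re-running each prepared statement many times. */
#include <stdio.h>
#include <time.h>
#include "sqlite3.h"

static double run_many(sqlite3 *db, const char *sql, int n)
{
    sqlite3_stmt *stmt;
    clock_t start = clock();
    int i;

    sqlite3_prepare_v2(db, sql, -1, &stmt, 0);
    for (i = 0; i < n; i++) {
        sqlite3_step(stmt);        /* fetch the single result row */
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open("test.db", &db);

    printf("EXISTS: %.3f s\n", run_many(db,
        "SELECT EXISTS(SELECT 1 FROM t WHERE x = 42);", 10000));
    printf("COUNT : %.3f s\n", run_many(db,
        "SELECT COUNT(*) FROM (SELECT 1 FROM t WHERE x = 42 LIMIT 1);", 10000));

    sqlite3_close(db);
    return 0;
}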



 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] Any way to do this faster?

2007-03-26 Thread Arora, Ajay
I'm not sure if SQLite supports this syntax, but try the following statement,

Delete from tableB b
Where not exists ( select 'x'
  from tableA a
  where a.id = b.id )   

-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: 26 March 2007 16:12
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Any way to do this faster?


RB Smissaert wrote:
> Simplified I have the following situation:
>
> 2 tables, tableA and tableB both with an integer field, called ID,
holding
> unique integer numbers in tableA and non-unique integer numbers in
tableB.
> Both tables have an index on this field and for tableA this is an
INTEGER
> PRIMARY KEY.
> Now I need to delete the rows in tableB where this number doesn't
appear in
> the corresponding field in tableA.
>
> Currently I do this with this SQL:
>
> Delete from tableB where ID not in (select tableA.ID from tableA)
>
> When table tableB gets big (say some 10 rows) this will get a bit
slow
> and I wonder if there is a better way to do this.
>
> RBS
>
>
>
>
>
>
>

-
> To unsubscribe, send email to [EMAIL PROTECTED]
>

-
>
>
>   
Your query is doing a complete table scan of tableA for each record in a

table scan of tableB.

SQLite version 3.3.13
Enter ".help" for instructions
sqlite> create table tableA(id integer primary key, b);
sqlite> create table tableB(id, c);
sqlite> create index b_id on tableB(id);
sqlite> explain query plan delete from tableB where id not in (select 
tableA.id
from tableA);
0|0|TABLE tableB
0|0|TABLE tableA

You can improve this greatly using correlated subquery that will use the

primary key index on tableA to find any matching records.

sqlite> explain query plan delete from tableB where not exists (select 
id from t
ableA where tableA.id = tableB.id);
0|0|TABLE tableB
0|0|TABLE tableA USING PRIMARY KEY

Note that your index on tableB.id is not used and could be eliminated 
unless it serves another purpose.

HTH
Dennis Cote


-
To unsubscribe, send email to [EMAIL PROTECTED]

-


THE INFORMATION CONTAINED IN THIS MESSAGE AND ANY ATTACHMENT MAY BE PRIVILEGED, 
CONFIDENTIAL, PROPRIETARY OR OTHERWISE PROTECTED FROM DISCLOSURE. If the reader 
of this message is not the intended recipient, you are hereby notified that any 
dissemination, distribution, copying or use of this message and any attachment 
is strictly prohibited. If you have received this message in error, please 
notify us immediately by replying to the message and permanently delete it from 
your computer and destroy any printout thereof.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Any way to do this faster?

2007-03-26 Thread bartsmissaert
Thanks Dennis, will try that when I get chance (at work now) and will
report back about the difference
it made.

RBS

> RB Smissaert wrote:
>> Simplified I have the following situation:
>>
>> 2 tables, tableA and tableB both with an integer field, called ID,
>> holding
>> unique integer numbers in tableA and non-unique integer numbers in
>> tableB.
>> Both tables have an index on this field and for tableA this is an
>> INTEGER
>> PRIMARY KEY.
>> Now I need to delete the rows in tableB where this number doesn't appear
>> in
>> the corresponding field in tableA.
>>
>> Currently I do this with this SQL:
>>
>> Delete from tableB where ID not in (select tableA.ID from tableA)
>>
>> When table tableB gets big (say some 10 rows) this will get a bit
>> slow
>> and I wonder if there is a better way to do this.
>>
>> RBS
>>
>>
>>
>>
>>
>>
>> -
>> To unsubscribe, send email to [EMAIL PROTECTED]
>> -
>>
>>
>>
> Your query is doing a complete table scan of tableA for each record in a
> table scan of tableB.
>
> SQLite version 3.3.13
> Enter ".help" for instructions
> sqlite> create table tableA(id integer primary key, b);
> sqlite> create table tableB(id, c);
> sqlite> create index b_id on tableB(id);
> sqlite> explain query plan delete from tableB where id not in (select
> tableA.id
> from tableA);
> 0|0|TABLE tableB
> 0|0|TABLE tableA
>
> You can improve this greatly using correlated subquery that will use the
> primary key index on tableA to find any matching records.
>
> sqlite> explain query plan delete from tableB where not exists (select
> id from t
> ableA where tableA.id = tableB.id);
> 0|0|TABLE tableB
> 0|0|TABLE tableA USING PRIMARY KEY
>
> Note that your index on tableB.id is not used and could be eliminated
> unless it serves another purpose.
>
> HTH
> Dennis Cote
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
>
>
>




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] One big database or various smaller ones?

2007-03-26 Thread Alberto Simões

Hi

I have a database with three tables (bigrams, trigrams and
tetragrams). Basically, they are:
  TABLE( A, B, Count)  IDX(A,B)  IDX(B)
  TABLE( A, B, C, Count)  IDX(A,B,C)  IDX(B)  IDX(C)
  TABLE( A, B, C, D, Count) IDX (A,B,C,D) IDX(B) IDX(C) IDX(D)

Now, this database is quickly getting to around 2GB of disk space.
My question, about efficiency, is: is it better to have
- one big database, one connection from C code to the database
- three smaller databases, three connection from C code to the database

Thank you
Alberto

--
Alberto Simões

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Any way to do this faster?

2007-03-26 Thread Dennis Cote

RB Smissaert wrote:

Simplified I have the following situation:

2 tables, tableA and tableB both with an integer field, called ID, holding
unique integer numbers in tableA and non-unique integer numbers in tableB.
Both tables have an index on this field and for tableA this is an INTEGER
PRIMARY KEY.
Now I need to delete the rows in tableB where this number doesn't appear in
the corresponding field in tableA.

Currently I do this with this SQL:

Delete from tableB where ID not in (select tableA.ID from tableA)

When table tableB gets big (say some 10 rows) this will get a bit slow
and I wonder if there is a better way to do this.

RBS






-
To unsubscribe, send email to [EMAIL PROTECTED]
-


  
Your query is doing a complete table scan of tableA for each record in a 
table scan of tableB.


SQLite version 3.3.13
Enter ".help" for instructions
sqlite> create table tableA(id integer primary key, b);
sqlite> create table tableB(id, c);
sqlite> create index b_id on tableB(id);
sqlite> explain query plan delete from tableB where id not in (select 
tableA.id

from tableA);
0|0|TABLE tableB
0|0|TABLE tableA

You can improve this greatly using a correlated subquery that will use the 
primary key index on tableA to find any matching records.


sqlite> explain query plan delete from tableB where not exists (select 
id from t

ableA where tableA.id = tableB.id);
0|0|TABLE tableB
0|0|TABLE tableA USING PRIMARY KEY

Note that your index on tableB.id is not used and could be eliminated 
unless it serves another purpose.


HTH
Dennis Cote

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Error reporting problem

2007-03-26 Thread Vivien Malerba

On 3/26/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:

Vivien Malerba wrote:
> Hi!
>
> I've got an error reporting problem when trying to insert a row which
> breaks a UNIQUE constraint in a table in a C program, I get the
> following error with sqlite3_errmsg():
> "SQL logic error or missing database"
>
> If I fire the sqlite3 program and run the same SQL query, I get the
> following error:
> "SQL error: column name is not unique"
> which is much more obvious to understand.
>
> Any idea how I could get that error message calling sqlite3_errmsg()?

The pysqlite wrapper has a bug which reports the same error message if
you try to nest transactions. I fixed the wrapper (locally) by adding an
sqlite_reset(). I think there's a new API (sqlite_prepare_v2()?) which
has the same effect. If you can't search (say) gmane then post a tiny
code snippet and someone who uses the C API more than I do will spot the
problem straight away.


Here is a sample test case, just uncompress, run "make" and "./test".
Here is what I get using SQLite 3.3.13 (On Linux):
SQL error (step): SQL logic error or missing database
SQL error (step): column name is not unique

It shows that calling sqlite3_reset() seems to be required (contrary
to what the doc says), or maybe I got something wrong...
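
A minimal C sketch of the behaviour described above (an in-memory database
and a column named "name" are used purely for illustration, to match the
error text in the output):

/* Sketch: sqlite3_step() hits a UNIQUE constraint; sqlite3_errmsg()
 * only reports the specific constraint error once sqlite3_reset()
 * (or sqlite3_finalize()) has been called on the statement. */
#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;

    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t (name TEXT UNIQUE);"
                     "INSERT INTO t VALUES ('dup');", 0, 0, 0);

    sqlite3_prepare_v2(db, "INSERT INTO t VALUES ('dup');", -1, &stmt, 0);

    if (sqlite3_step(stmt) != SQLITE_DONE) {
        printf("before reset: %s\n", sqlite3_errmsg(db));
        sqlite3_reset(stmt);
        printf("after reset : %s\n", sqlite3_errmsg(db));
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}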

Thanks,

Vivien


test_case.tgz
Description: GNU Zip compressed data
-
To unsubscribe, send email to [EMAIL PROTECTED]
-

Re: [sqlite] sqlite and bcc32

2007-03-26 Thread Nuno Lucas

On 3/25/07, stripe <[EMAIL PROTECTED]> wrote:


using sqlite version 3.3.13.

... extern "C" { #include  } ...

doesn't help :(


Well, that seems like a compiler problem, not a SQLite related one.

For some reason the compiler is treating the SQLite header file as C++
code, when it should be treating it as C code (which it seems to do by
changing the source file extension).

As I don't use that compiler, I can't say more about it.


Hope you can find a workaround,
~Nuno Lucas

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How does SQLite store data?

2007-03-26 Thread Dennis Cote

P Kishor wrote:


interesting. As far as _I_ know, the first implementation of varint!


No, this idea has been around for a long time. It was used for ISDN 
addressing for example. I'm sure it is probably in Knuth somewhere. It 
is still a very good idea though.




is it reasonable to assume that the legacy_file_format is incompatible
with the postmodern_file_format? And, which version+ of SQLite has
this new format capability?


Yes it is incompatible. It was originally introduced in version 3.3.0 
and was set as the default format for new databases until version 3.3.7 
where the legacy format became the default again so that database file 
were backwards compatible by default. All versions since 3.3.0 can 
handle both formats.
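
A small C sketch of selecting the new format when a database file is first
created (the file and table names are assumptions):

/* Sketch: issue the pragma while initializing the database, before any
 * objects are created, so the new file uses the non-legacy format. */
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_open("newformat.db", &db);

    sqlite3_exec(db, "PRAGMA legacy_file_format = OFF;", 0, 0, 0);
    sqlite3_exec(db, "CREATE TABLE t (flag BOOLEAN);", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}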


HTH
Dennis Cote

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Questions on views

2007-03-26 Thread Dennis Cote

Dennis Volodomanov wrote:

Is doing a SELECT EXISTS (...) faster than a SELECT COUNT (... LIMIT 1)
or would it be the same (I would expect them to be the same, but that's
only my guess)?

  
I would expect them to be very nearly the same. Any difference would 
only be apparent if you repeat them many times (which is what you are 
doing). You should probably try it both ways and measure the execution 
time to see which is faster.


HTH
Dennis Cote

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] PRAGMA locking_mode = EXCLUSIVE

2007-03-26 Thread Joe Wilson
What operations/work patterns will benefit most from 
PRAGMA locking_mode = EXCLUSIVE?

Can you roughly quantify the speedups in such cases?
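
For reference, the pragma is set per connection; a minimal sketch (the file
name test.db and table t are assumptions, not from any measurement):

/* Sketch: request an exclusive lock for the lifetime of the connection,
 * then run writes without releasing and re-acquiring file locks for
 * each transaction. */
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_open("test.db", &db);

    sqlite3_exec(db, "PRAGMA locking_mode = EXCLUSIVE;", 0, 0, 0);
    /* The exclusive lock is actually acquired on the first access. */
    sqlite3_exec(db, "BEGIN; INSERT INTO t VALUES(1); COMMIT;", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}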


 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Error reporting problem

2007-03-26 Thread Martin Jenkins

Vivien Malerba wrote:

Hi!

I've got an error reporting problem when trying to insert a row which
breaks a UNIQUE constraint in a table in a C program, I get the
following error with sqlite3_errmsg():
"SQL logic error or missing database"

If I fire the sqlite3 program and run the same SQL query, I get the
following error:
"SQL error: column name is not unique"
which is much more obvious to understand.

Any idea how I could get that error message calling sqlite3_errmsg()?


The pysqlite wrapper has a bug which reports the same error message if 
you try to nest transactions. I fixed the wrapper (locally) by adding an 
sqlite_reset(). I think there's a new API (sqlite_prepare_v2()?) which 
has the same effect. If you can't search (say) gmane then post a tiny 
code snippet and someone who uses the C API more than I do will spot the 
problem straight away.


Martin

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Store and retreive 0D0A (CRLF) in string field

2007-03-26 Thread John Stanton

P Kishor wrote:

On 3/25/07, fangles <[EMAIL PROTECTED]> wrote:



SQLite doesn't truncate anything. Whatever you put in you get out. If
you see a truncation, it is either done by whatever wrapper you use on
top of SQLite, or simply an artifact of the way you inspect the data
(e.g. you look at the string in a debugger, and the debugger just
happens to show only the first line).

Use sqlite3_column_bytes[16] to convince yourself that SQLite gives you
the complete string.

Igor Tandetnik





Thanks for that rather blunt reply.


..

I'd rather have "blunt" replies from Igor than emotional replies from
anyone else. And, as far as I can see, the reply was about the best
that anyone could have given. It was far better than anything I could
have contributed.

When asking questions, please provide information on how you are using
SQLite, what "wrapper" or language you are using, version number of
the software, possibly even steps to reproduce the problem, etc.

This is an email list, not a class. And, yes, it is still free. Take
it as it is, and give back. Ask questions, and if you can, answer some
as well, so folks who really know (and I am not one of them) can focus
on answering the really difficult questions.

"Blunt" is not a synonym for "complete" or "informative" unless the word 
means something else in the South Pacific.


A better response would be "Thank you Igor for taking the time to answer 
my question and saving me the effort of doing a little research".


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How does SQLite store data?

2007-03-26 Thread P Kishor

On 3/26/07, Dennis Cote <[EMAIL PROTECTED]> wrote:

John Stanton wrote:
> It does not have fixed length columns except for the ones which hold
> integer and real numbers and boolean values.
>
Actually, integers are stored in a variable length format as well. It
takes less space to store smaller integer values than it does to store
large values. This allows sqlite to handle full 64 bit integers, but
does not waste space storing unused leading zero bits for most fields
that typically use a much smaller range of integer values.



interesting. As far as _I_ know, the first implementation of varint!


It also has a new file format (no longer the default) that stores
booleans more efficiently. Use pragma legacy_file_format=off when
initializing the database to select this format. I believe booleans
values are fixed size in both formats.



is it reasonable to assume that the legacy_file_format is incompatible
with the postmodern_file_format? And, which version+ of SQLite has
this new format capability?


--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
-
collaborate, communicate, compete
=

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How does SQLite store data?

2007-03-26 Thread Dennis Cote

John Stanton wrote:
It does not have fixed length columns except for the ones which hold 
integer and real numbers and boolean values.


Actually, integers are stored in a variable length format as well. It 
takes less space to store smaller integer values than it does to store 
large values. This allows sqlite to handle full 64 bit integers, but 
does not waste space storing unused leading zero bits for most fields 
that typically use a much smaller range of integer values.
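
The principle can be illustrated with a simple variable-length integer
encoder (this is only an illustration of the idea, not SQLite's actual
on-disk record format):

/* Illustration: 7 bits of payload per byte, high bit set when more
 * bytes follow. Small values take one byte; a full 64-bit value takes
 * at most ten. Not SQLite's exact format. */
#include <stdio.h>

static int put_varint(unsigned char *out, unsigned long long v)
{
    int n = 0;
    do {
        out[n] = (unsigned char)(v & 0x7f);
        v >>= 7;
        if (v) out[n] |= 0x80;   /* more bytes follow */
        n++;
    } while (v);
    return n;                    /* bytes written */
}

int main(void)
{
    unsigned char buf[10];
    printf("7 -> %d byte(s)\n", put_varint(buf, 7));
    printf("1000000 -> %d byte(s)\n", put_varint(buf, 1000000));
    return 0;
}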


It also has a new file format (no longer the default) that stores 
booleans more efficiently. Use pragma legacy_file_format=off when 
initializing the database to select this format. I believe booleans 
values are fixed size in both formats.


Dennis Cote



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Error reporting problem

2007-03-26 Thread Vivien Malerba

Hi!

I've got an error reporting problem when trying to insert a row which
breaks a UNIQUE constraint in a table in a C program, I get the
following error with sqlite3_errmsg():
"SQL logic error or missing database"

If I fire the sqlite3 program and run the same SQL query, I get the
following error:
"SQL error: column name is not unique"
which is much more obvious to understand.

Any idea how I could get that error message calling sqlite3_errmsg()?

Thanks,

Vivien

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Store and retreive 0D0A (CRLF) in string field

2007-03-26 Thread P Kishor

On 3/25/07, fangles <[EMAIL PROTECTED]> wrote:


SQLite doesn't truncate anything. Whatever you put in you get out. If
you see a truncation, it is either done by whatever wrapper you use on
top of SQLite, or simply an artifact of the way you inspect the data
(e.g. you look at the string in a debugger, and the debugger just
happens to show only the first line).

Use sqlite3_column_bytes[16] to convince yourself that SQLite gives you
the complete string.

Igor Tandetnik




Thanks for that rather blunt reply.

..

I'd rather have "blunt" replies from Igor than emotional replies from
anyone else. And, as far as I can see, the reply was about the best
that anyone could have given. It was far better than anything I could
have contributed.

When asking questions, please provide information on how you are using
SQLite, what "wrapper" or language you are using, version number of
the software, possibly even steps to reproduce the problem, etc.

This is an email list, not a class. And, yes, it is still free. Take
it as it is, and give back. Ask questions, and if you can, answer some
as well, so folks who really know (and I am not one of them) can focus
on answering the really difficult questions.

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
-
collaborate, communicate, compete
=

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] open an open sqlite database !!!

2007-03-26 Thread Samuel R. Neff

If you're using ASP.NET I'd suggest using the ADO.NET wrapper available
here:  http://sqlite.phxsoftware.com/

It's an excellent implementation of ADO.NET for SQLite and would be far less
trouble than using sqlite directly from .NET code.

HTH,

Sam

---
We're Hiring! Seeking a passionate developer to join our team building
products. Position is in the Washington D.C. metro area. If interested
contact [EMAIL PROTECTED]
 
-Original Message-
From: mniknezhad [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 26, 2007 4:43 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] open an open sqlite database !!!


i use dlls to connect to sqlite db(in asp.net).
i have a problem:
when the database file (sample.db) is opened 
with a windows application (and in use - not closed),
i can not open that database with a web application. 
in other words : 
how can i open an opened database with a web application?

tanx


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Store and retreive 0D0A (CRLF) in string field

2007-03-26 Thread Martin Jenkins

fangles wrote:

SQLite doesn't truncate anything. Whatever you put in you get out. If
 you see a truncation, it is either done by whatever wrapper you use 
on top of SQLite, or simply an artifact of the way you inspect the 
data (e.g. you look at the string in a debugger, and the debugger 
just happens to show only the first line).


Use sqlite3_column_bytes[16] to convince yourself that SQLite gives 
you the complete string.


Thanks for that rather blunt reply.


What was blunt about it? You asked a question regarding odd behaviour 
that nobody else has seen without providing any context. Igor confirmed 
that SQLite didn't behave like that and suggested possible explanations 
for what you saw. If you don't like the answer, ask better questions. 
Anyway, I thought you Ozzies liked a bit of direct speaking. ;)


Martin

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] open an open sqlite database !!!

2007-03-26 Thread mniknezhad

i use dlls to connect to sqlite db(in asp.net).
i have a problem:
when the database file (sample.db) is opened 
with a windows application (and in use - not closed),
i can not open that database with a web application. 
in other words : 
how can i open an opened database with a web application?

tanx
-- 
View this message in context: 
http://www.nabble.com/open-an-open-sqlite-database-%21%21%21-tf3465738.html#a9669472
Sent from the SQLite mailing list archive at Nabble.com.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-