Re: [sqlite] Virtual Table xBestIndex and NULL Search Conditions (Bug!)

2011-08-02 Thread Igor Sereda

Thanks, Simon - it's a bug then. I hope it will make its way into the bug
tracker.

Igor



Simon Slavin-3 wrote:
> 
> 
> On 2 Aug 2011, at 1:10am, Igor Sereda wrote:
> 
>> To my humble knowledge, operations with NULL have well-defined semantics,
>> both in SQL-you-name-it standards and in SQLite. "A < B" may have three
>> results - TRUE, FALSE and NULL. It doesn't matter whether you can make
>> any
>> sense of it - it's the spec ;)
> 
> The spec for '<=' should say that comparing any number with NULL always
> gives a NULL result.  If SQLite is doing anything apart from that, it's a
> bug.
> 
> Okay, here it is: SQL92 8.2 (1) (a):
> 
> "If XV or YV is the null value, then "X <comp op> Y" is unknown."
> 
> In this context, returning 'unknown' means returning NULL.
> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://old.nabble.com/Virtual-Table-xBestIndex-and-NULL-Search-Conditions-%28Bug-%29-tp32172549p32175828.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Virtual Table xBestIndex and NULL Search Conditions (Bug?)

2011-08-01 Thread Igor Sereda

Simon, Michael -

To my humble knowledge, operations with NULL have well-defined semantics,
both in SQL-you-name-it standards and in SQLite. "A < B" may have three
results - TRUE, FALSE and NULL. It doesn't matter whether you can make any
sense of it - it's the spec ;)
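This is easy to check from any wrapper. For instance, with Python's built-in sqlite3 module (my illustration, not part of the original message):

```python
import sqlite3

# Each comparison below exercises one of the three possible outcomes:
# TRUE (1), FALSE (0), and NULL (None in Python).
con = sqlite3.connect(":memory:")
row = con.execute("SELECT 1 < 2, 2 < 1, 1 < NULL").fetchone()
print(row)  # (1, 0, None)
```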

Therefore I'm trying to report a bug here according to the guidelines at
http://www.sqlite.org/src/wiki?name=Bug+Reports - I would very much like to
hear from the SQLite developers whether this report makes sense or if
additional information is needed.

Cheers,
Igor




Simon Slavin-3 wrote:
> 
> 
> On 1 Aug 2011, at 10:45pm, Black, Michael (IS) wrote:
> 
>> If it's meaningless then shouldn't it be a syntax error?
> 
> It's about as meaningless as
> 
> X <= maxreal
> 
> so it would take quite a lot of processing time to identify it as
> meaningless.  Not sure if it's worth the processing time.  Any decent
> debugging effort should find the problem.
> 
> Simon.
> 
> 



Re: [sqlite] Virtual Table xBestIndex and NULL Search Conditions (Bug?)

2011-08-01 Thread Igor Sereda

Thanks Jay,

That's a good hint about the origin of the problem. 

However, you refer to the sort order, whereas the problem is with the WHERE
clause. Since a comparison with NULL always evaluates to NULL (see
section 4.0 of the link you gave me), a statement like "SELECT * FROM table
WHERE value > NULL" returns an empty result set on any table --
*always* -- whereas "SELECT * FROM table WHERE value IS NOT NULL" returns
all rows with a non-null value.

That works on normal tables, but it appears to be broken for virtual
tables in the latest version.
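The expression-level behavior described above can be demonstrated against an ordinary (non-virtual) table with Python's built-in sqlite3 module (my illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(value INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (2,)])

# "value > NULL" evaluates to NULL for every row, so WHERE filters all out.
empty = con.execute("SELECT value FROM t WHERE value > NULL").fetchall()
# IS NOT NULL is the proper test and keeps the two non-null rows.
non_null = con.execute("SELECT value FROM t WHERE value IS NOT NULL").fetchall()
print(empty, non_null)  # [] [(1,), (2,)]
```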

I've checked what happens on 3.7.4: when preparing "SELECT value FROM
table WHERE value IS NOT NULL", the xBestIndex method receives no
constraints, which, I believe, is the correct behavior:

pIdxInfo->nConstraint == 0

So - who else thinks it's a bug?

Cheers
Igor




Jay A. Kreibich-2 wrote:
> 
> On Mon, Aug 01, 2011 at 12:34:33PM -0700, Igor Sereda scratched on the
> wall:
>> 
>> Hello,
>> 
>> I'm seeing strange input given into xBestIndex method of my virtual
>> table.
>> 
>> I'm maintaining sqlite4java wrapper and I'm trying to upgrade it from
>> SQLite
>> 3.7.4 to 3.7.7.1. A couple of failed tests uncovered that there's a
>> problem
>> when searching a simple virtual table with constraints that contain NULL. 
>> 
>> More specifically, the virtual table is declared as follows in xCreate
>> method:
>>
>>  CREATE TABLE x(value INTEGER)
>> 
>> When the following SQL is executed:
>> 
>>  SELECT value FROM table WHERE value IS NOT NULL
>> 
>> xBestIndex receives the following parameters:
>> 
>> pIdxInfo->nConstraint == 1
>> pIdxInfo->aConstraint[0].usable == 1
>> pIdxInfo->aConstraint[0].iColumn == 0
>> pIdxInfo->aConstraint[0].op == 4 (GT)
>> 
>> So basically the search is going to be for condition "value > ?".
>>
>> When xFilter is called, the value passed is NULL. So instead of searching
>> for "value IS NOT NULL" the module is instructed to search for "value >
>> NULL" - which gives the opposite result.  And when SQL executed is
>> "SELECT
>> value FROM table WHERE value > NULL", all the parameters are identical.
> 
>   All values in SQLite have a consistent sort order.  As section 3.1 of
>   http://sqlite.org/datatype3.html#comparisons shows, NULL is considered
>   to be the "smallest" value.  Hence, "value > NULL" is equivalent to
>   "value IS NOT NULL".
> 
>   It might not be the most obvious logic, but it is the logic used by
>   SQLite and the query optimizer, so it is the logic that needs to be
>   used by any virtual table.
> 
>> This problem did not exist in SQLite 3.7.4.
> 
>   What did earlier versions do?
> 
>> Am I missing something, or is this a bug?
> 
>   I assume it is a change in the query optimizer.  Since this is a
>   legit way to express an IS NOT NULL, it isn't exactly "wrong", just
>   different.
> 
>-j
> 
> -- 
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
> 
> "Intelligence is like underwear: it is important that you have it,
>  but showing it to the wrong people has the tendency to make them
>  feel uncomfortable." -- Angela Johnson
> 
> 



[sqlite] Virtual Table xBestIndex and NULL Search Conditions (Bug?)

2011-08-01 Thread Igor Sereda

Hello,

I'm seeing strange input given into xBestIndex method of my virtual table.

I'm maintaining the sqlite4java wrapper, and I'm trying to upgrade it from
SQLite 3.7.4 to 3.7.7.1. A couple of failed tests uncovered a problem when
searching a simple virtual table with constraints that contain NULL.

More specifically, the virtual table is declared as follows in xCreate
method:
   
 CREATE TABLE x(value INTEGER)

When the following SQL is executed:

 SELECT value FROM table WHERE value IS NOT NULL

xBestIndex receives the following parameters:

pIdxInfo->nConstraint == 1
pIdxInfo->aConstraint[0].usable == 1
pIdxInfo->aConstraint[0].iColumn == 0
pIdxInfo->aConstraint[0].op == 4 (GT)

So basically the search is going to be for condition "value > ?".

When xFilter is called, the value passed is NULL. So instead of searching
for "value IS NOT NULL" the module is instructed to search for "value >
NULL" - which gives the opposite result. And when the SQL executed is "SELECT
value FROM table WHERE value > NULL", the parameters are identical.
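For context, the GT constraint makes sense at the index level because of SQLite's cross-type sort order, in which NULL sorts before every other value (datatype3.html, section 3.1): scanning keys "greater than NULL" yields exactly the non-NULL keys. A rough model of that ordering follows - my illustration, not SQLite's actual code:

```python
# Model of SQLite's cross-type ordering: NULL < numbers < text < blobs.
def type_rank(v):
    if v is None:
        return 0            # NULL is the "smallest" value
    if isinstance(v, (int, float)):
        return 1
    if isinstance(v, str):
        return 2
    return 3                # blobs

def sqlite_key(v):
    # Within a type class, natural ordering; all NULLs compare equal.
    return (type_rank(v), 0 if v is None else v)

vals = [3, None, "abc", None, 1]
# In this ordering, "greater than NULL" selects exactly the non-NULL values,
# which is why the optimizer can encode IS NOT NULL as a GT range constraint.
non_null = [v for v in vals if sqlite_key(v) > sqlite_key(None)]
print(non_null)  # [3, 'abc', 1]
```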

This problem did not exist in SQLite 3.7.4.

Am I missing something, or is this a bug?

Thanks for your help,
Igor




Re: [sqlite] "Unable to Open DB" on Win-7 & Vista (64 bitt) on UAC -ON

2010-07-19 Thread Igor Sereda

I suspect the permissions set on the DB file allow only Administrators to
change it, which requires escalation to Admin rights under UAC. It's likely
that your DB file is located in C:\Program Files\yourapp.

As a solution, you could relax permissions on the DB file upon installation,
or, better, keep the file under C:\Users\currentUser\AppData\Roaming\yourapp
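A small sketch of resolving such a per-user location (the app name and non-Windows fallback are hypothetical; APPDATA is the standard Windows environment variable pointing at the roaming profile):

```python
import os

def user_data_dir(app="yourapp"):
    # Windows: %APPDATA% resolves to C:\Users\<user>\AppData\Roaming
    base = os.environ.get("APPDATA")
    if base is None:
        # Hypothetical non-Windows fallback, for illustration only
        base = os.path.join(os.path.expanduser("~"), ".config")
    return os.path.join(base, app)

db_path = os.path.join(user_data_dir(), "data.db")
```

A per-user directory is writable without elevation, so the app never triggers a UAC prompt just to update its database.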

-- Igor



Re: [sqlite] using test_intarray

2010-07-14 Thread Igor Sereda

Simon,

I would agree in other cases, however, in our app, the schema is dynamic and
depends on the user data. Normally, we have about 200 tables with two or
three columns each, with indexes almost on each table. Queries with
lots-of-joins are also constructed dynamically, and we found SQLite to be
pretty effective in selecting optimal join algorithm based on ANALYZE
results.

Likewise, there are reasons for needing "40 params", or otherwise passing "IN
(int array)" in a WHERE clause; for example, to verify that a given set of,
say, 40 entities would be returned from a query that returns 100 000
entities -- without running the query itself. (This can be solved by
creating a temporary table, inserting entity ids there and joining that
table with the query - however, we're looking for a more efficient way.)
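The temporary-table workaround mentioned above can be sketched like this (my illustration; table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entity(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO entity VALUES (?, ?)",
                [(i, "n%d" % i) for i in range(1000)])

ids = [5, 40, 999, 123456]                    # the candidate set to verify
con.execute("CREATE TEMP TABLE candidate(id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO candidate VALUES (?)", [(i,) for i in ids])

# Joining against the temp table replaces a long IN (?,?,...) list.
found = con.execute(
    "SELECT e.id FROM entity e JOIN candidate c ON e.id = c.id").fetchall()
print(sorted(found))  # [(5,), (40,), (999,)]
```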

Thanks!
Igor



Simon Slavin-3 wrote:
> 
> 
> On 14 Jul 2010, at 4:17pm, Igor Sereda wrote:
> 
>> For example, where we now have a query
>> 
>>  SELECT ...lots-of-joins... WHERE ...lots-of-exprs... AND someColumn IN
>> (?,?,?,? ...40 params... )
> 
> This -- the 'lots-of-joins' and the '40 params' in particular -- suggests
> you should be rethinking your schema.  You'll probably speed up your
> entire system and save a lot of very complicated programming by merging
> some tables and/or making some separate columns of a table into a
> collection of keyed properties.
> 
> Simon.
> 
> 



[sqlite] using test_intarray

2010-07-14 Thread Igor Sereda

Thanks for mentioning test_intarray! I'm now considering rewriting parts of
our code because sqlite3_intarray_bind is more powerful than using sequences
like ?,?,?...

A question: does using a virtual table (specifically, the virtual table from
test_intarray) affect the query optimizer? We have lots of tables in our DB,
and queries with lots of joins, so we depend quite a bit on ANALYZE results
and the optimizer.

For example, where we now have a query

  SELECT ...lots-of-joins... WHERE ...lots-of-exprs... AND someColumn IN
(?,?,?,? ...40 params... )

we'd use either "AND someColumn IN vtab" or another JOIN with vtab.

I'm currently running some experiments and looking at EXPLAIN outputs, but
the results are inconclusive. 

Any hints from developers would be helpful.

Thanks!
Igor




Richard Hipp-3 wrote:
> 
> Have you looked at the "test_intarray" code?
> 
> http://www.sqlite.org/src/artifact/489edb9068bb926583445cb02589344961054207
> 
> On Sun, Jul 11, 2010 at 9:42 PM, Sam Carleton
> wrote:
> 
>> Is there any way to bind to this query?
>>
>> SELECT * FROM table WHERE tableId IN ( ? );
>>
>> Where ? should be 1,2,3,4
> 
> 
> 
> -- 
> -
> D. Richard Hipp
> d...@sqlite.org
> 
> 



[sqlite] Documentation error in explanation of "IN subquery" (lang_expr.html)

2010-07-14 Thread Igor Sereda

On page http://www.sqlite.org/lang_expr.html :

"When a SELECT is the right operand of the IN operator, the IN operator
returns TRUE if the SELECT result contains no NULLs and if the left operand
matches any of the values in the SELECT result."

The part "SELECT result contains no NULLs" does not seem to hold true (and
for the better!):

sqlite> SELECT 'selected' WHERE 1 IN (SELECT null UNION SELECT 1);
selected
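The observation is easy to confirm from Python's sqlite3 module (my illustration). One detail worth noting: with a NULL in the subquery result, a non-match evaluates to NULL rather than FALSE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 1 matches a value in the result, so IN is TRUE despite the NULL.
hit = con.execute("SELECT 1 IN (SELECT NULL UNION SELECT 1)").fetchone()[0]
# 2 matches nothing; because a NULL is present, IN yields NULL, not FALSE.
miss = con.execute("SELECT 2 IN (SELECT NULL UNION SELECT 1)").fetchone()[0]
print(hit, miss)  # 1 None
```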

Cheers,
Igor


Re: [sqlite] binding an IN

2010-07-12 Thread Igor Sereda

We have a similar task and we solve it by using a statement with lots of
parameters
 
  SELECT * FROM table WHERE tableId IN (?, ?, ?, ?, ...)

The number of "?", let's call it N, is fixed, and set to ~100 -- tuned by
measuring performance.

When the actual number of parameters is less than N, we bind the rest of the
parameters to NULL.

When the actual number of parameters is greater than N, we issue several
queries and combine the result manually. Another way would be to create a
table in a temporary database and INSERT ... SELECT into it several times.

However, this approach may not work if you have NOT conditions applied to
the bound parameters or the result of the selection based on them.
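A sketch of the approach in Python's built-in sqlite3 module (my illustration; the table and column names are made up):

```python
import sqlite3

N = 100  # fixed placeholder count; the post tuned ~100 by measurement

def select_in(con, ids):
    # One prepared shape: "... IN (?,?,...,?)" with exactly N placeholders.
    sql = "SELECT id FROM t WHERE id IN (%s)" % ",".join("?" * N)
    rows = []
    for start in range(0, len(ids), N):       # several queries if len > N
        chunk = list(ids[start:start + N])
        chunk += [None] * (N - len(chunk))    # bind unused slots to NULL
        rows.extend(con.execute(sql, chunk).fetchall())
    return rows

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(200)])
found = select_in(con, list(range(150)))      # spans two batches
```

NULL never compares equal to anything, so the padded slots cannot match any row - which is exactly why the caveat above about NOT conditions applies.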

Hope this helps.
-- Igor



Sam Carleton-2 wrote:
> 
> Is there any way to bind to this query?
> 
> SELECT * FROM table WHERE tableId IN ( ? );
> 
> Where ? should be 1,2,3,4
> 
> 



Re: [sqlite] Retrieve Specific record number in one shot.

2010-07-09 Thread Igor Sereda

Would

  Select * From Product order by ProductName LIMIT 1 OFFSET 209

help?
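For illustration (my example with made-up data) - note that OFFSET is zero-based, so row number 210 corresponds to OFFSET 209:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Product(id INTEGER PRIMARY KEY, ProductName TEXT)")
con.executemany("INSERT INTO Product(ProductName) VALUES (?)",
                [("item%04d" % i,) for i in range(300)])

row = con.execute(
    "SELECT * FROM Product ORDER BY ProductName LIMIT 1 OFFSET 209").fetchone()
print(row[1])  # item0209
```

Keep in mind SQLite still walks the first 209 entries of the index to satisfy the OFFSET, so very large offsets get progressively slower.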


-- Igor
 

Piyush Verma-3 wrote:
> 
> Hello All,
> 
> I want to navigate to a specific position in a table: for example, I want
> row number 210 without navigating row by row. How can I get that row?
> 
> One way could be to create an index and use a WHERE clause to get it, but
> that's not useful in my case.
> 
> I have a table with a primary key and a product name (there is another
> index on ProductName). Now I sort by ProductName and want to access
> row number 210.
> 
> Is that a way to get it directly?
> 
> like
> 
>>>"Select * From Product order by ProductName"
>>>move_to_row(210);
>>>read row;
> 
> 
> something like that.
> 
> 
> 
> 
> -- 
> Thanks & Regards
> 
> Piyush Verma
> 
> 



Re: [sqlite] How to determine when to VACUUM?

2010-07-07 Thread Igor Sereda
My two cents, to complement other answers: leave it to the user. In
the case of a client-side GUI app, let the user run a maintenance
action, the way an OS offers a "defragment disk" action or Outlook
offers "compact folders". In the case of a server-side app, provide a
script or admin command for it.
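For the programmatic check the original question asks about, one rough heuristic (my suggestion, not from the thread) compares the freelist against the total page count - roughly the fraction of the file VACUUM could reclaim:

```python
import sqlite3

def free_ratio(con):
    # Fraction of database pages sitting unused on the freelist.
    pages = con.execute("PRAGMA page_count").fetchone()[0]
    free = con.execute("PRAGMA freelist_count").fetchone()[0]
    return free / pages if pages else 0.0

con = sqlite3.connect(":memory:")  # a real app would open its DB file here
con.execute("CREATE TABLE t(b BLOB)")
con.executemany("INSERT INTO t VALUES (zeroblob(?))", [(1000,)] * 500)
con.execute("DELETE FROM t")       # freed pages land on the freelist
con.commit()
if free_ratio(con) > 0.25:         # threshold is arbitrary
    con.execute("VACUUM")
```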

Hope this helps!
-- Igor



On Wed, Jul 7, 2010 at 1:45 AM, Kristoffer Danielsson
 wrote:
>
> I've been reading the documentation. I've been googling and thinking.
>
>
>
> Still, I can't figure out the best way to determine when to run the 
> VACUUM-command. Note that I do NOT want to enable "auto vacuum".
>
> I do remember reading something about calculating empty space, used pages etc 
> etc. Still, no perfect answer.
>
>
>
> Q: How do I programmatically (through sqlite-APIs?) determine if it's time to 
> VACUUM a database? In general, what is the best method here?
>
>
>
> Thanks!
>
> /Chris
>
>


Re: [sqlite] How to determine when to VACUUM?

2010-07-07 Thread Igor Sereda
> It's never time to VACUUM a database.

This is an interesting statement. In our application, all tables get
heavily fragmented with time (99% or more). I was considering VACUUM
as a means to defragment, and, presumably, improve search performance.
Was I misguided - does search performance not degrade significantly
as fragmentation of the DB increases?

(I guess it might well not on an SSD, but on a conventional rotational
disk the pager could read several pages ahead with one seek - but does
it?)

It would be great to see a performance comparison, if anyone has ever done one.

-- Igor



On Wed, Jul 7, 2010 at 1:52 AM, Simon Slavin  wrote:
>
> On 6 Jul 2010, at 10:45pm, Kristoffer Danielsson wrote:
>
>> Q: How do I programmatically (through sqlite-APIs?) determine if it's time 
>> to VACUUM a database?
>
> It's never time to VACUUM a database.  The VACUUM command is useful only if 
> you want to recover unused space from the database file.  So if your 
> database file once took up 5 Meg, and you deleted a lot of data from it and 
> it now takes up only 2 Meg, you could recover 3 Megabytes of disk space.  But 
> how useful is that 3 Megabytes of space to you ?  Are you going to use it for 
> something really valuable ?  And how long will it be before you get 3 
> Megabytes more data which will fill it up again ?
>
> If you're trying to get the database in shape to make copies, e.g. to burn it 
> on a DVD or send it to customers, or put it on a device with limited space, 
> then there might be some reason to use VACUUM.  If not, then it's just a 
> waste of resources.
>
> Simon.
>


Re: [sqlite] 64 bit libsqlite for AIX and Solaris

2010-06-24 Thread Igor Sereda

What compiler are you using? With GCC, you can use the -m64 option. There must
be a similar option in other compilers.

-- Igor


Sushil-15 wrote:
> 
> Hi,
> 
> I am looking for 64 bit libsqlite for AIX and Solaris. Is there a place
> from
> where I can
> get them pre-built ?
> 
> I have downloaded sqlite-amalgamation-3.6.23.1.tar and building it. But I
> don't see any
> option in configure to build it for 64 bits. Will the default library
> built
> be 64 bits. ?
> If not, then how to build them ?
> 
> 
> Thanks,
> Sushil.
> 
> 



Re: [sqlite] Oracle joins the SQLite Consortium

2010-06-23 Thread Igor Sereda

Greg,

Thanks for the explanation. So it's the Sleepycat license, OK - but we still
can't use it in an application with proprietary code, right?

It would be interesting to track the progress of SQLite/BDB. Roger Binns has
noted some important issues, but assuming those are solved, would you say
SQLite/BDB is going to be an acceptable solution as an in-process
server-side database?

Is there a vision for how BDB, now with SQLite, fits into the family of
Oracle's products? Clearly it's far from Oracle Database, but do you foresee
competition with the MySQL team? Just asking :)

Igor



Gregory Burd-2 wrote:
> 
> Hello,
> 
> My name is Greg, I'm one of the product managers within Oracle working on
> the Berkeley DB products.  I joined Oracle when Sleepycat was acquired but
> I've been working on BDB for nearly nine years now.  I was the one who
> pushed hard to integrate SQLite and BDB, I think the two products go well
> together without damaging either one.  I am also the guy responsible for
> most of the messaging on the Oracle.com website (with a lot of editing
> oversight and marketing input), so if you want to question something there
> please just email me.
> 
> We here in the Berkeley DB team within Oracle's Embedded Database group
> are thrilled to have Oracle join the SQLite Consortium.  Today and in the
> past our goal with open source collaborations has been to work closely
> together, help each other out, keep things informal-yet-formal, and give
> credit where credit is due.  The SQLite product is excellent, we don't
> want or need to fork it.  The SQLite3 ANSI C API is like the BDB ANSI C
> key/value API, de-facto standards in their respective spaces.  From our
> view this combination is like chocolate and peanut butter, two great
> products that go well together.  Some will like this combo and find value
> in it, others won't.  That's okay, in fact it's the way it should be.  We
> are thrilled to be joining this community, we're not the enemy or the
> competition.
> 
> Clearly there are going to be many questions, I'm here to help answer them
> as best I can.
> 
> 
> License: Oracle Berkeley DB is not licensed under the GPL.  Berkeley DB is
> released under the terms of the Sleepycat License.  The Sleepycat License
> is significantly different from the GPL, take a look. 
> http://en.wikipedia.org/wiki/Sleepycat_License
> 
> Compatibility: "... [the] application-level behavior of the two products
> is identical..."  Okay, this is a bit of an overstatement at this point
> and I freely admit that.  This is our long-term goal, so I think it's fair
> to put have it on our site.  Basically we're telling people that we'd like
> to be as close to 100% drop-in compatible as possible while still
> providing the unique value of Berkeley DB as a btree storage engine.  For
> our first release, I think you'll have to admit that we are very close to
> the mark.  We're already nearing a patch release and it is even closer. 
> This will evolve, both SQLite and BDB's SQL will benefit along the way.
> 
> Comparison: "... improved performance, concurrency, scalability, and
> reliability."  Fundamentally, we are faster because we don't lock the
> entire database on writes as SQLite's btree does.  BDB is designed for
> concurrent multi-process/thread access, this gives us a speed advantage
> when there is any concurrency in the system.  Single-threaded performance
> is a more apples-to-apples comparison and this is more evenly matched. 
> The product is evolving fast, we're constantly finding ways to use
> advanced features in BDB for special cases in SQLite.  Again, we're only
> just in release 1 of the combined product and we're already in very good
> shape to be faster in general.
> 
> MVCC: We're going to add in support for MVCC (snapshot isolation), it's
> not there in the first release.  This will continue to help speed up
> concurrent access and prevent deadlocks.
> 
> HA: Clearly we're going to integrate (in a SQLite-friendly way) support
> for HA/replication.  It's not there in this release.  If you have ideas
> for how to properly make this fit into the product let us know!  Should it
> be a PRAGMA?  Should it be C-functions?  Something else?  Speak up now.
> 
> Compaction: We punted on compaction in the first release because we wanted
> to do it using BDB's built-in compaction code (which can compact the
> database and optimize the btree while it's being used, it can even do this
> a little bit at a time so as not to be overly disruptive).  We didn't get
> this into the code line in time for the first release, it's coming very
> soon.
> 
> Compression: BDB has support for compression of things stored in the
> database, this is something we hope to integrate into the SQL API very
> soon.
> 
> Encryption: Again, we are hard at work on this.  BDB already supports
> encrypted databases, so it won't be hard to do.
> 
> 
> We are also working on a comparison paper with Mike Owens (of "The
> Definitive Guide to SQLite" 

Re: [sqlite] Oracle joins the SQLite Consortium

2010-06-21 Thread Igor Sereda

Wow, that's interesting news. Berkeley DB is still GPL/commercial, I guess? I
hope SQLite will keep going as public domain, including its B-tree
level.

Also, here's an interesting statement in the BDB/SQLite announcement:


> Thus, applications written to the SQLite version 3 API can switch to using
> Oracle Berkeley DB with no code changes, by re-linking against the
> Berkeley DB SQLite library. The application-level behavior of the two
> products is identical, but the advanced features of Berkeley DB provide
> SQLite applications improved performance, concurrency, scalability, and
> reliability.
> 

Could you please comment on that? 
Does that mean the SQLite storage level is less reliable than BDB? 
Are there any performance measurements and comparison of SQLite vs.
SQLite/BDB? 
Does SQLite/BDB really provide more concurrency, while maintaining
SERIALIZABLE isolation level?

Thanks!
Igor


D. Richard Hipp wrote:
> 
> The SQLite developers are pleased to announce that Oracle has joined  
> the SQLite Consortium.
> 
> The SQLite Consortium is a collaboration of major users of SQLite  
> designed to ensure the continuing vitality and independence of  
> SQLite.  In exchange for sponsorship, SQLite Consortium Members  
> receive enterprise-level technical support, access to proprietary  
> SQLite add-ons such as the SQLite Encryption Extension and TH3, and  
> guarantees that SQLite will continue to be actively maintained and  
> developed and that it will not fall under the control of a competitor.
> 
> Oracle uses the parser, code generator, and virtual machine from  
> SQLite in its Berkeley DB product.  Additional information about  
> Berkeley DB's SQL API is available at
> 
> http://www.oracle.com/technology/products/berkeley-db/sql.html
> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 



[sqlite] Yet another SQLite wrapper for Java

2010-06-15 Thread Igor Sereda

Hello.

I have just posted our Java wrapper for SQLite as an open-source project:
http://code.google.com/p/sqlite4java

It's a thin JNI-based wrapper (no JDBC) with performance and stability being
the key concerns. The library is targeted for desktop Java apps, but may be
used in other Java environments (binaries are compiled for Windows, Linux,
Mac OS X). 

The reasons for building our own library were:
  * We wouldn't use JDBC because we needed really tight integration. We
might need to call any function from the SQLite C API, and we needed
additional functions implemented in C for the sake of performance.
  * Among the existing non-JDBC wrappers, we found none that was up to date,
had a satisfactory interface and implementation, and was applicable to
cross-platform Java GUI apps.

After the library was successfully deployed with our commercial applications,
we made it open source. Granted, we pursued our own goals first, so it may not
be the best solution for every case, yet I believe it is generic enough to be
published.

So, there it is. I would appreciate feedback from the community, especially
from fellow Java developers who are using other SQLite wrappers. Feel free
to suggest improvements or send patches.

Igor

PS. It's a good occasion to say: Big thanks and kudos to D. Richard Hipp and
the team for SQLite, an excellent product!


Re: [sqlite] Using cache stats to balance transaction size for optimal performance

2010-05-27 Thread Igor Sereda

Pavel, 

Thanks for the reply! I was afraid using pcache would be the only way :) 

As for this:

Pavel Ivanov-2 wrote:
> 
> No way. Cache won't ever grow just because you have large transaction.
> It will only be spilled to disk and exclusive lock will be taken but
> never trigger unbound growth.
> 

That's true, except for the case when taking the exclusive lock fails; at
least that's what Dan Kennedy said in reply to my question from a couple of
years ago:
http://old.nabble.com/changes-in-cache-spill-locking-since-3.5.9--td20564357.html#a20564357

I guess that's an undocumented feature.

Cheers,
Igor


Re: [sqlite] Using cache stats to balance transaction size for optimal performance

2010-05-27 Thread Igor Sereda

Michael,

Thank you for your suggestion! The problem with this approach is that N
would not be a constant that we could tune.

As I mentioned, the amount of updates may vary, depending on the data
received. 

For example, one piece of data may lead to a single INSERT. So it would be
safe and effective to have N=1000, for example. Another piece of data may
lead to 1000 INSERTs. Now, if we still have N=1000, then we'll have
1,000,000 INSERTs in a single transaction. It's completely unpredictable.
And when the amount of data changed in a single transaction is large enough,
it causes either a cache spill and an exclusive lock on the database, or cache
growth and increased memory consumption.

Do you think this makes sense?

We could theoretically count the number of DML statements or steps, but this
would imply changing the underlying architecture of the application, so that
any plug-in or extension that accesses SQLite also reports how much data it
changed. That's not very convenient.

Kind regards,
Igor


Black, Michael (IS) wrote:
> 
> So only do N many records in one batch.  That's the easiest thing.  Forget
> about the cache and just use responsiveness to adjust how many records you
> allow at once.
>  
> Pseudo-code:
> recnum=0
> BEGIN;
> while more records
> INSERT
> recnum++
> if (recnum % 1000) 
> COMMIT;
> BEGIN;
> end
> COMMIT;
> 

-- 
View this message in context: 
http://old.nabble.com/Using-cache-stats-to-balance-transaction-size-for-optimal-performance-tp28690967p28692687.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Using cache stats to balance transaction size for optimal performance

2010-05-27 Thread Igor Sereda

I would like each transaction to be as large as possible, but not too large
to cause cache growth or cache spill.

We have a stream of incoming data, with each piece of data causing updates
in SQLite database. The number of rows inserted/updated for each data record
may vary.

If I enclose each data record's processing in a separate transaction, there
will be too many transactions -- it would be slow. If I enclose too many
records processing in a single transaction, the cache may grow or spill to
disk -- not wanted either.

It would be great if we could dynamically assess how much of the cache is
taken up by the transaction. Then, I would issue COMMIT as soon as cache use
is over some threshold, like 50%.

Pseudocode:

while (have data) {
BEGIN
while (have data && CACHE USE < 50%) {
process next piece of data
}
COMMIT
}
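The shape of this loop can be sketched as below. Newer SQLite versions expose cache usage at the C level via `sqlite3_db_status()` with `SQLITE_DBSTATUS_CACHE_USED`; that call is not available from the Python `sqlite3` wrapper used here for illustration, so this sketch approximates "cache use" by the number of row changes since the last COMMIT (`Connection.total_changes`). The table `log` and the stream of records are made up:

```python
import sqlite3

def process_stream(conn, data_stream, changes_per_txn=5000):
    """Commit whenever the work done since the last COMMIT crosses a
    threshold, regardless of how many records that took.  Records may
    touch wildly different numbers of rows, which is exactly why a
    fixed records-per-transaction count does not work."""
    committed = 0
    mark = conn.total_changes        # rows inserted/updated/deleted so far
    conn.execute("BEGIN")
    for record in data_stream:
        for row in record:           # one record may mean many row changes
            conn.execute("INSERT INTO log(v) VALUES (?)", (row,))
        if conn.total_changes - mark >= changes_per_txn:
            conn.commit()
            committed += 1
            mark = conn.total_changes
            conn.execute("BEGIN")
    conn.commit()                    # final partial transaction
    return committed + 1

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log(v)")
# Records of wildly different sizes: 1 row, then 6000 rows, then 10 rows.
stream = [[1], list(range(6000)), list(range(10))]
txns = process_stream(conn, stream, changes_per_txn=5000)
```

Keying the threshold on real cache usage instead of change counts would need the C-level `sqlite3_db_status()` call mentioned above.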

Is this possible? Any other best practices for optimizing transaction size?

Thanks!
Igor


-- 
View this message in context: 
http://old.nabble.com/Using-cache-stats-to-balance-transaction-size-for-optimal-performance-tp28690967p28690967.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] changes in cache spill locking since 3.5.9?

2008-11-19 Thread Igor Sereda
Dan,

Thank you for detailed explanation.

I assume that once a large transaction is over, the cache returns to its
pre-configured state? That is, pages are given back to other sessions'
caches and excess memory is freed?

Best regards,
Igor


  


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:sqlite-users-
> [EMAIL PROTECTED] On Behalf Of Dan
> Sent: Tuesday, November 18, 2008 9:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] changes in cache spill locking since 3.5.9?
> 
> 
> On Nov 19, 2008, at 12:27 AM, Igor Sereda wrote:
> 
> > Hi,
> >
> > We have recently upgraded from 3.5.9 to 3.6.5 and one of tests that
> > ensures certain SQLite behavior now fails. The test basically checks
> > how cache spill is handled:
> >
> > SESSION THREAD 1            SESSION THREAD 2
> >
> > Open session
> > Launch SELECT, keep stmt
> > (assert SHARED lock is held)
> > Start session 2 ->  Open session
> >Adjust cache size to 5 pages
> >INSERT data definitely larger
> > than cache
> >(assert RESERVED lock)
> >(at some point assert cache spill:
> > try EXCLUSIVE lock => fail)
> >
> >
> > The test expected that at some point during session two SQLITE_BUSY
> > will happen and transaction will be rolled back - that worked on
> > 3.5.9.
> >
> > What now happens is that all INSERTS complete successfully. If
> > followed by COMMIT, an SQLITE_BUSY will result and transaction will
> > *not* be rolled back. Also, trying to get more SHARED locks will
> > fail. So it looks like writer session holds PENDING lock.
> >
> > This change looks to be for the better, but I couldn’t find any
> > references in change log or here in the forum. So I wonder:
> >
> > 1. Is this intentional change, and will it stay in future versions?
> 
> Yes. And probably.
> 
> > 2. How is it implemented, in regards to serialized isolation level?
> > Shared lock holders should be able to read whole database in a
> > consistent state and at the same time writer is obviously able to
> > change as much as needed, spilling changes to disk.
> 
> The change is that if a cache-spill fails because it can't get the
> EXCLUSIVE lock required to write to the database, the page cache
> is allowed to grow indefinitely (well, until malloc() fails) to
> accommodate dirty pages.
> 
> There are other related changes as well. If one cache is forced to
> exceed
> its configured limit (i.e. the value configured by PRAGMA cache_size),
> then the effective cache size limit for other database connections
> in the process is temporarily reduced to compensate. Basically SQLite
> tries not to cache more than a global limit of pages, where that global
> limit is the sum of the configured cache-size limits for all database
> connections in the process.
> 
> Upcoming versions of sqlite will feature an API that allows users to
> supply their own global page cache implementation (you can see this in
> cvs at the moment). This can be useful for embedded systems that need
> to centrally control the way in which scarce memory resources are
> shared
> between sqlite page caches and the rest of the system.
> 
> Dan.
> 
> 



[sqlite] changes in cache spill locking since 3.5.9?

2008-11-18 Thread Igor Sereda
Hi,

We have recently upgraded from 3.5.9 to 3.6.5, and one of the tests that ensure 
certain SQLite behavior now fails. The test basically checks how cache spill is 
handled:

SESSION THREAD 1            SESSION THREAD 2

Open session
Launch SELECT, keep stmt
(assert SHARED lock is held)
Start session 2 ->  Open session
                    Adjust cache size to 5 pages
                    INSERT data definitely larger than cache
                    (assert RESERVED lock)
                    (at some point assert cache spill:
                     try EXCLUSIVE lock => fail)


The test expected that at some point during session two an SQLITE_BUSY would happen 
and the transaction would be rolled back - that worked on 3.5.9.

What now happens is that all INSERTs complete successfully. If followed by 
COMMIT, an SQLITE_BUSY will result and the transaction will *not* be rolled back. 
Also, trying to get more SHARED locks will fail. So it looks like the writer 
session holds a PENDING lock.

This change looks to be for the better, but I couldn’t find any references in 
the change log or here in the forum. So I wonder:

1. Is this intentional change, and will it stay in future versions?

2. How is it implemented, with regard to the serialized isolation level? Shared-lock 
holders should be able to read the whole database in a consistent state, and at the 
same time the writer is obviously able to change as much as needed, spilling 
changes to disk. 


Thanks!
Igor





[sqlite] sqlite3_interrupt and transactions

2008-04-02 Thread Igor Sereda
Greetings!

I need to be able to interrupt a long-running query within a transaction. The 
question is: is it possible that changes made previously in this transaction 
will be affected?

Example pseudo-code:
  1. BEGIN IMMEDIATE
  2. INSERT INTO x (x) VALUES ('y');
  3. SELECT long_running_query
  4. *from another thread* interrupt SELECT via progress handler or 
sqlite3_interrupt
  5. // ignore interrupted return code
  6. COMMIT
  7. SELECT x FROM x WHERE x = 'y' (will it be there?)

When I run a simple example as described above, it works - the transaction is 
not ruined by interrupt. On the other hand, this case is not covered in the 
documentation; specs only say that "If the interrupted SQL operation is an 
INSERT, UPDATE, or DELETE that is inside an explicit transaction, then the 
entire transaction will be rolled back automatically". 

So, is it safe to assume that "If the interrupted SQL operation is a SELECT 
that is inside an explicit transaction, then the transaction is not affected"?
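The scenario can be reproduced as follows. This is a sketch using Python's `sqlite3` wrapper for illustration (its `Connection.interrupt()` calls `sqlite3_interrupt()`); the long-running SELECT is simulated with a deliberately huge recursive CTE, and a timer thread issues the interrupt mid-query:

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", isolation_level=None,
                       check_same_thread=False)
conn.execute("CREATE TABLE x(x)")

conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO x(x) VALUES ('y')")

# Arrange for another thread to interrupt the long-running SELECT.
threading.Timer(0.2, conn.interrupt).start()

interrupted = False
try:
    # A deliberately long query: count a huge recursive series.
    conn.execute(
        "WITH RECURSIVE c(n) AS (VALUES(1) UNION ALL "
        "SELECT n+1 FROM c WHERE n < 1000000000) "
        "SELECT COUNT(*) FROM c").fetchone()
except sqlite3.OperationalError:
    interrupted = True   # the 'interrupted' error from sqlite3_interrupt()

conn.commit()            # the transaction itself was not rolled back
row = conn.execute("SELECT x FROM x WHERE x = 'y'").fetchone()
```

In this run the earlier INSERT survives the interrupted SELECT and is committed, matching the behavior observed above; whether that is a documented guarantee is exactly the question of this message.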

Thanks for your help!
Igor
  





[sqlite] How to determine current lock state

2008-03-22 Thread Igor Sereda
Hello,

 

Is there a way to determine current lock state of a database? More 
specifically, I’d like to be able to tell whether the main database in the 
current session is under SHARED lock, or under RESERVED/PENDING/EXCLUSIVE lock. 
This is needed for unit tests and assertions.
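One test-only workaround is to probe with a throwaway second connection: `BEGIN IMMEDIATE` fails with SQLITE_BUSY if any other connection holds a RESERVED or higher lock. This is only a sketch (SQLite has no public API for asking a connection which lock it holds; `PRAGMA lock_status` exists only in SQLITE_DEBUG builds), and it detects write locks, not SHARED ones:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "probe.db")

def write_locked(path):
    """Return True if some connection currently holds a write lock."""
    probe = sqlite3.connect(path, timeout=0, isolation_level=None)
    try:
        probe.execute("BEGIN IMMEDIATE")   # fails fast if RESERVED+ is held
        probe.execute("ROLLBACK")
        return False
    except sqlite3.OperationalError:       # "database is locked"
        return True
    finally:
        probe.close()

main = sqlite3.connect(db, isolation_level=None)
main.execute("CREATE TABLE t(v)")
before = write_locked(db)             # no transaction open yet -> False
main.execute("BEGIN IMMEDIATE")       # take the RESERVED lock
main.execute("INSERT INTO t VALUES (1)")
during = write_locked(db)             # -> True
main.commit()
after = write_locked(db)              # -> False
```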

 

Thanks!

Igor



Re: [sqlite] blob incremental i/o constraints

2008-02-13 Thread Igor Sereda
The questions around the sqlite3_blob_xxx methods that Roger brought up a couple
of months ago are very interesting to me too, and I haven't seen any reply
to Roger's message. (Roger - do you have any update?)

As far as I can gather from the cited description of the problem, we should
manually acquire SHARED db lock before reading a blob, and RESERVED lock
before writing a blob. Can someone confirm that?

Also, how blob i/o operations deal with transactional context is indeed not
very clear. Based on few words in description of sqlite3_blob_close, I
assume blob i/o is transactional, but this also brings a more subtle point:

What happens with the cache when a really large BLOB (larger than the cache
size) is read/written? I guess I should have run some experiments, and I
probably will, but it seems quite probable that everything will be paged
out of the cache by the blob. Considering that the reason for incremental i/o is
(supposedly) to avoid lots of memory allocation, we can expect that every
BLOB going through sqlite3_blob_read/write will be quite large, so the cache
will be thrashing. Please tell me it is not so :)

What I'd suggest is to maybe check whether there is an EXCLUSIVE lock and
write directly to the filesystem, as happens on a cache spill? Otherwise, I'm
thinking of creating a separate connection with a very small cache
specifically to handle blob i/o.

Thanks!
Igor



> -Original Message-
> From: Roger Binns [mailto:[EMAIL PROTECTED]
> Sent: Sunday, December 02, 2007 10:16 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] blob incremental i/o constraints
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> There isn't a documentation or wiki page about the blob i/o
> functionality beyond the api reference.  I am curious if the behaviour
> I
> am seeing was intentional.
> 
> Open a blob for reading.  Insert into the same table (not affecting the
> blob or its row).  On calling sqlite3_blob_read, I get SQLITE_ABORT
> returned and the errmsg is "not an error".  This also happens if I do a
> few reads, and then the insert, on the next read.  (This also happens
> with blobs open for writing and doing read or write after the table is
> changed).
> 
> Open a blob for reading.  Call sqlite3_blob_write, and get back
> SQLITE_READONLY.  On next calling sqlite3_blob_close, I also get back
> SQLITE_READONLY.  If sqlite_blob_close is going to return any errors
> from prior reads/writes then it should also do so for the above
> paragraph (which it doesn't).
> 
> You can delete (via SQL) a blob that is open.  The next
> sqlite3_blob_read on the blob gets SQLITE_ABORT.  (Heck you can even
> change the contents via SQL).
> 
> It isn't stated anywhere what the transactional nature of blob i/o is.
> For example is data committed after each write, or does it happen on
> the
> close.
> 
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.6 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
> 
> iD8DBQFHUluYmOOfHg372QQRAr6wAKCyo4lRyfeu5gtAxJ+yfH8/KFhhGwCfTV36
> F5Z1rGEiL8hjdSMIC+XjWTs=
> =nd4s
> -END PGP SIGNATURE-
> 




RE: [sqlite] Querying DATE column with date/time string.

2007-12-06 Thread Igor Sereda
My guess is that string comparison is taking place, and so "2008-01-01" is
less than "2008-01-01 00:00:00".

HTH,
Igor
 
-Original Message-
From: Doug Van Horn [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 5:48 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Querying DATE column with date/time string.


Hi,

I'm running into a problem with the database library in Django running
against SQLite.  I'm trying to understand why the following happens:

$ sqlite3 date_test
SQLite version 3.4.2
Enter ".help" for instructions
sqlite> create table foo (d date null);
sqlite> insert into foo (d) values ('2008-01-01');
sqlite> select d from foo where d between '2008-01-01' and '2008-01-31';
2008-01-01
sqlite> select d from foo where d between '2008-01-01 00:00:00' and
'2008-01-31 23:59:59.99';
sqlite> .quit

In English, why does adding the 'time' portion to the between clause not
find the record?


Thanks for any help or insights...

Doug Van Horn
--
View this message in context:
http://www.nabble.com/Querying-DATE-column-with-date-time-string.-tf4956413.
html#a14193493
Sent from the SQLite mailing list archive at Nabble.com.



-
To unsubscribe, send email to [EMAIL PROTECTED]

-






[sqlite] Transactional DDL

2007-11-27 Thread Igor Sereda
I noticed that CREATE TABLE works well within a transaction, which was a 
pleasant surprise. I can create a table and insert some rows in it, all quite 
ACIDly - wow! 

My question is, is that a declared contract or just a peculiarity that may 
disappear in future versions? I couldn't find any specs of that behavior in 
documentation. If I missed it, please point me to the URL. If there are no 
mentions of that in docs, well, it's probably worth mentioning.

Also, which statements are not transactional? VACUUM is obviously one of them; 
are there any others?
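The observed behavior is easy to demonstrate: a CREATE TABLE rolled back inside an explicit transaction leaves no trace in sqlite_master. A sketch using Python's `sqlite3` wrapper (with autocommit disabled so BEGIN/COMMIT are issued explicitly):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)

conn.execute("BEGIN")
conn.execute("CREATE TABLE t(v)")
conn.execute("INSERT INTO t VALUES (42)")
conn.execute("ROLLBACK")            # undoes the CREATE TABLE as well

gone = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 't'").fetchone()[0]

conn.execute("BEGIN")
conn.execute("CREATE TABLE t(v)")
conn.execute("INSERT INTO t VALUES (42)")
conn.execute("COMMIT")              # table and row survive together
kept = conn.execute("SELECT v FROM t").fetchone()[0]
```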

Thanks!
Igor





RE: [sqlite] SQLite Manager Firefox extension

2007-11-27 Thread Igor Sereda
It does look good, but it's not quite usable with large databases, though.
For example, I couldn't wait for the Browse and Search page to show a
table with 2 million rows -- it seemed to load everything into memory, eating up
resources and causing Firefox to come up with "stop script" dialogs. 

Otherwise, a nice UI.

--
Igor 


 
-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 27, 2007 5:47 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite Manager Firefox extension

It is pretty, runs well and is easy to use.  Better than "decent".

P Kishor wrote:
> To all those looking for a decent, cross-platform SQLite gui, check 
> out
> 
> 
> SQLite Manager Firefox add-on
> 
> https://addons.mozilla.org/en-US/firefox/addon/5817
> 
> I have just started experimenting with it, and it really quite nice 
> even at version 0.2.9.1. Works well on my Mac.
> 
> --
> Puneet Kishor
> 
> 









RE: [sqlite] Re: How to reset errcode

2007-11-24 Thread Igor Sereda
> If you use sqlite3_prepare_v2 function to prepare your statement,
> you opt out of this legacy behavior and then sqlite3_step returns 
> the actual error code directly, and sqlite3_reset and _finalize 
> behave in a sane way. 

Thanks for the clarification!

-- Igor


 
-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 23, 2007 11:14 PM
To: SQLite
Subject: [sqlite] Re: How to reset errcode

Igor Sereda <[EMAIL PROTECTED]> wrote:
> From API docs:
>
> [quote]
> The sqlite3_errcode() interface returns the numeric result code or 
> extended result code for the most recent failed sqlite3_* API call 
> associated with sqlite3 handle 'db'. If a prior API call failed but 
> the most recent API call succeeded, the return value from
> sqlite3_errcode() is undefined.
> [/quote]
>
> I find this contract very inconvenient, as for each API call I would 
> like to know exactly whether it had succeeded.

The return value of each API tells you whether it has succeeded or not.

> I suppose that the
> same contract applies to int return values from sqlite3_* calls -- for 
> example, docs for sqlite3_finalize() state that the return code can be 
> an error from previous execution of the statement.

That's because of an unfortunate historical behavior of sqlite3_step, which now 
has to be maintained for backward compatibility. When sqlite3_step fails, it 
returns a generic SQLITE_ERROR code. One has to follow up with a call to 
sqlite3_reset or sqlite3_finalize, which returns the actual error code that 
sqlite3_step should have returned in the first place, even though the actual 
reset or finalize operation succeeds.

If you use sqlite3_prepare_v2 function to prepare your statement, you opt out 
of this legacy behavior and then sqlite3_step returns the actual error code 
directly, and sqlite3_reset and _finalize behave in a sane way.

Igor Tandetnik 








[sqlite] How to reset errcode

2007-11-23 Thread Igor Sereda
From API docs:

[quote]
The sqlite3_errcode() interface returns the numeric result code or extended 
result code for the most recent failed sqlite3_* API call associated with 
sqlite3 handle 'db'. If a prior API call failed but the most recent API call 
succeeded, the return value from sqlite3_errcode() is undefined.
[/quote]

I find this contract very inconvenient, as for each API call I would like to 
know exactly whether it had succeeded. I suppose that the same contract applies 
to int return values from sqlite3_* calls -- for example, docs for 
sqlite3_finalize() state that the return code can be an error from previous 
execution of the statement. So in the latter case I'm not able to tell whether 
it was sqlite3_finalize() that failed, or some previous call.

Are there any known solutions to this problem, perhaps some functions to reset 
the error codes that apply to the sqlite3* or the sqlite3_stmt* ?

Thanks!
Igor






RE: [sqlite] benchmarking UTF8 vs UTF16 encoded databases

2007-11-23 Thread Igor Sereda
> About the endianness, you don't need to know if you 
> don't care. SQLite handles it.

SQLite does handle that, but what would be the performance loss when working
with a UTF-16 encoded database whose endianness is opposite to the system's?
That's a quite probable scenario: say, a database created on an Intel-based
system and then moved to a Mac/PPC.
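The byte order a database actually uses can be inspected with `PRAGMA encoding`. A sketch using Python's `sqlite3` wrapper; `'UTF-16'` picks the machine's native byte order at creation time, and SQLite reports the concrete variant back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The encoding must be chosen before the database is created/written.
conn.execute("PRAGMA encoding = 'UTF-16'")   # picks the native byte order
conn.execute("CREATE TABLE t(v TEXT)")

# SQLite reports which concrete byte order it chose ('UTF-16le' or
# 'UTF-16be').  A database file moved to an opposite-endian machine keeps
# its stored order; SQLite byte-swaps text on the fly when reading it.
enc = conn.execute("PRAGMA encoding").fetchone()[0]
```

The performance question above is exactly about the cost of that on-the-fly byte swapping when the stored order does not match the host.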

Best regards,
Igor


 
-Original Message-
From: Nuno Lucas [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 23, 2007 2:01 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] benchmarking UTF8 vs UTF16 encoded databases

On 11/23/07, Jarl Friis <[EMAIL PROTECTED]> wrote:
> Hi Daniel.
>
> Thanks for the benchmark reports, interesting studies.
>
> Another reason to stay away from utf-16 is that it is not endianness 
> neutral. Which raises the question: are you storing UTF-16BE or 
> UTF-16LE ?

If you only speak Japanese and all your characters are 3 bytes or more in
UTF-8 and always 2 bytes in UTF-16 which would you tend to choose?

> About the endianness, you don't need to know if you don't care. SQLite
> handles it.

Regards,
~Nuno Lucas

>
> Jarl








RE: [sqlite] Re: Any way to obtain explicit read lock?

2007-11-19 Thread Igor Sereda
Igor, thanks. I almost always use BEGIN IMMEDIATE, so I missed the BEGIN 
[DEFERRED] variant. I guess it solves the problem, though it seems the lock won't 
be acquired until the first SELECT happens. 
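The suggested workaround can be sketched like this (Python's `sqlite3` wrapper, rollback-journal mode; table names are made up). Note the point about the deferred BEGIN: the SHARED lock is only taken at the first SELECT, but from then on it is held until COMMIT, so both readouts see one consistent state:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "iso.db")

reader = sqlite3.connect(db, isolation_level=None)
reader.execute("CREATE TABLE a(v)")
reader.execute("CREATE TABLE b(v)")
reader.execute("INSERT INTO a VALUES (1)")
reader.execute("INSERT INTO b VALUES (1)")

writer = sqlite3.connect(db, timeout=0, isolation_level=None)

reader.execute("BEGIN")                                # deferred: no lock yet
a_rows = reader.execute("SELECT * FROM a").fetchall()  # SHARED lock taken here

# A concurrent writer now cannot commit a change: the reader's SHARED
# lock is held until COMMIT, so the two SELECTs form one consistent read.
try:
    writer.execute("INSERT INTO b VALUES (2)")
    blocked = False
except sqlite3.OperationalError:                       # "database is locked"
    blocked = True

b_rows = reader.execute("SELECT * FROM b").fetchall()  # still the old state
reader.execute("COMMIT")
```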

Best regards,
Igor

 
-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 19, 2007 3:26 AM
To: SQLite
Subject: [sqlite] Re: Any way to obtain explicit read lock?

Igor Sereda <[EMAIL PROTECTED]> wrote:
> Suppose we need to read two tables in an isolated way, so no db change 
> is visible to the connection between first and second readout.
> As far as I see, there's no such SQL or API for that at the moment.  

Just do both SELECT's within a single transaction. See BEGIN, COMMIT

Igor Tandetnik







[sqlite] Any way to obtain explicit read lock?

2007-11-18 Thread Igor Sereda

Suppose we need to read two tables in an isolated way, so that no db change is 
visible to the connection between the first and second readouts. As far as I can 
see, there's no such SQL or API for that at the moment. 

In other words:

1: // with the first step() the read lock is taken:
2: while(stmt1.step()) { read(stmt1); }   
3: // with the last step() the read lock has been released

4: // with the first step() the read lock is taken again, but...
5: // there might have been changes since line 3
6: while(stmt2.step()) { read(stmt2); }   

So if there's some constraints between tables read with these two statements, 
we can get an inconsistent readout.

One obvious workaround would be to keep a dummy table, like Oracle's "dual", 
and take one step() reading it to acquire the read lock, then release the lock by 
resetting this statement. Of course we can "begin immediate", but since no 
writing is going to be done, obtaining a RESERVED lock would be a waste.

My question: is there any direct way to obtain a read lock that I missed in 
the docs? If not, can this be a minor feature request?

Thanks!
Igor





RE: [sqlite] Suggests for improving the SQLite website

2007-11-08 Thread Igor Sereda
I'm new to this list and to the SQLite website, so my feedback is more of
the "first impression" kind. And that impression is: the site is ok. It is
clear, simple, with almost anything I need reachable through one or two
clicks. The things I would probably do is place a google search field
somewhere in a corner and list what programming languages are supported.

The comments you mention, in my opinion, may be valid for promoting a
product sold to consumers or enterprises, which is not the target audience I
would associate with this mailing list. A developer who looks for an
embedded database doesn't need eye candy, and big name users don't mean
anything (everyone uses Oracle). But that's subjective, of course. Here's
what I was looking at when making try/not try decision for sqlite: a)
license; b) features; c) could be used with Java; d) source code
availability; e) project age and release cycle (is it too young or already
dead); f) is active community present; g) options for commercial support
available.

> (1) It is not clear from the homepage that the software is free.

I guess mentioning this won't hurt, but it's no problem. Certainly someone
who can write SQL can also find "license" in the menu :)

> (2) Half the page is devoted to talking about bugs in
> the software.  This suggests low quality.

This suggests openness. If it went "we have no bugs in our software", I
would probably leave immediately. But there's a point that the news column
usually takes less than 50% of the page's width, something I'd agree with. 

> (3) The "News" contains scary words: "radical changes".

s/changes/improvements/ :) Seriously, there's too much care for single
words, IMHO.

> (4) Three releases in as many months suggests the code is not stable.

And 98% test coverage mentioned in the other column suggests otherwise. For
me, frequent releases are good. What will you do anyway, hide release
history? 


Hope this helps,
Igor

 
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 08, 2007 7:29 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Suggests for improving the SQLite website

We are looking at renovating the design of the SQLite website and would love
to have suggestions from the community.  If you have any ideas on how to
improve the SQLite website, please constribute either to the mailing list or
directly to me.

Here are some links to competing database products that might inspire
comments:

   http://www.postgresql.org/
   http://www.firebirdsql.org/
   http://www.hsqldb.org/
   http://opensource.ingres.com/
   http://db.apache.org/derby/
   http://exist.sourceforge.net/

Among the comments received already are these:

  (1) It is not clear from the homepage that the software
  is free.
  (2) Half the page is devoted to talking about bugs in
  the software.  This suggests low quality.
  (3) The "News" contains scary words: "radical changes".
  (4) Three releases in as many months suggests the
  code is not stable.
  (5) Move the BigNameUsers to the front page
  (see http://www.sqlite.org/wiki?p=BigNameUsers)
  (6) Need more eye-candy.

I do not necessary agree with the above comments, but I am open to any and
all ideas.  You will not hurt my feels, so speak freely.

Thanks in advance for your input.

--
D. Richard Hipp <[EMAIL PROTECTED]>










RE: [sqlite] Optimizing performance by moving large texts into a separate table

2007-11-07 Thread Igor Sereda
Thank you! How about a separate DB just for large texts? Would that be
overkill? We could use different page sizes for the two DBs. I'm not sure
how well transactions spanning several DBs are handled, though.

Best regards,
Igor

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 08, 2007 12:30 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Optimizing performance by moving large texts into a
separate table

"Igor Sereda" <[EMAIL PROTECTED]> wrote:
> We have a database that can possibly grow into millions of rows. Some
> tables have TEXT fields, which may store texts of significant
> length. All other data is mostly numeric values.
> 
> We have a thought of moving all large texts into a separate table, and
> replacing text_column with text_id in the rest of the schema. The
> assumption is that db pages are allocated fully to a single table, so
> the numerical part of the database will end up in a few db pages and so
> we'll be able to quickly run queries over them. (We won't have queries
> for texts, only look-ups by text_id.)
> 
> Is our assumption correct? Is that a pattern someone here has
> implemented maybe? How does the size of the whole database affect
> queries to a single table?
> 

This is a good assumption.  Keeping large CLOBs and BLOBs in a separate
table and referencing them by rowid is what I do.

--
D. Richard Hipp <[EMAIL PROTECTED]>







[sqlite] Optimizing performance by moving large texts into a separate table

2007-11-07 Thread Igor Sereda
We have a database that can possibly grow into millions of rows. Some tables 
have TEXT fields, which may store texts of significant length. All other data 
is mostly numeric values. 

We have a thought of moving all large texts into a separate table, and 
replacing text_column with text_id in the rest of the schema. The assumption is 
that db pages are allocated fully to a single table, so the numerical part of 
the database will end up in a few db pages and so we'll be able to quickly run 
queries over them. (We won't have queries for texts, only look-ups by text_id.)

Is our assumption correct? Is that a pattern someone here has implemented 
maybe? How does the size of the whole database affect queries to a single 
table? 

Thanks!
Igor

