[sqlite] /tmp

2012-11-03 Thread Ivan Shmakov
> Richard Hipp writes:
> On Fri, Nov 2, 2012 at 6:44 PM, Tod Olson wrote:

[…]

 >> This is fine on small data, but when I load 1.8GB of data (8.8
 >> million rows) the second CREATE fails, reporting a disk I/O error.

 > You might be running out of /tmp space.  Do you have plenty of /tmp
 > space available for use by the sorter?

Do I understand it correctly that SQLite actually uses $TMPDIR,
and defaults to /tmp if it's not set or empty?
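
(If /tmp is indeed too small, I suppose one workaround, sketched
below, would be to point the temporary storage elsewhere, or to
keep it in memory; the directory name here is arbitrary, and
temp_store_directory is, AFAIK, deprecated in later SQLite
versions.)

PRAGMA temp_store_directory = '/var/tmp';  -- directory is only an example
PRAGMA temp_store = MEMORY;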

[…]

-- 
FSF associate member #7257



Re: [sqlite] INSERT OR IGNORE safety?

2012-11-02 Thread Ivan Shmakov
>>>>> Igor Tandetnik writes:
>>>>> Ivan Shmakov <oneing...@gmail.com> wrote:
>>>>> Igor Tandetnik writes:

 >>> Note that ending up in the IGNORE branch of INSERT OR IGNORE
 >>> statement doesn't constitute failure, but normal execution.  The
 >>> same is true for INSERT ... SELECT statement where SELECT returns
 >>> zero rows so nothing is actually inserted.

 >> Hence my question: how do I ensure that an INSERT … SELECT
 >> statement results in exactly one row being inserted?

 > In your hypothetical example of two INSERT statements in a trigger,
 > that is what would indeed happen, assuming both statements succeed.
 > If either of the two statements fails, the trigger as a whole would
 > fail and any changes it made rolled back.

Yes.

The point is that, thanks to OR IGNORE, the first one succeeds
even if the record being INSERT'ed already exists in the table.

(The ON CONFLICT clause is non-standard, and I was initially
reluctant to use it in my code.  However, the example being
discussed has made its benefits obvious to me.)

 >> (Additional constraint: it should be possible to use the code in a
 >> trigger.)  The best thing I was able to imagine is to replace:

 >> INSERT INTO "bar" ("foo")
 >> SELECT  f."rowid" FROM "foo" f WHERE f."foo" = ?1;

 >> with:

 >> INSERT INTO "bar" ("foo")
 >> VALUES ((SELECT f."rowid" FROM "foo" f WHERE f."foo" = ?1));

 > If the nested SELECT returns an empty set, then this statement
 > attempts to insert NULL into bar.foo, which would fail due to
 > NOT NULL constraint.

… Exactly as intended.
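
For illustration, a minimal sketch of that failure mode, using
the schema from my original message (with the UNIQUE index
corrected; the 'quux' value is just a placeholder):

CREATE TABLE "foo" ("foo" TEXT NOT NULL);
CREATE UNIQUE INDEX "foo-unique" ON "foo" ("foo");
CREATE TABLE "bar" ("foo" INTEGER NOT NULL REFERENCES "foo");

-- no 'quux' row in "foo", so the scalar subquery yields NULL,
-- and the NOT NULL constraint on "bar"."foo" aborts the INSERT
INSERT INTO "bar" ("foo")
VALUES ((SELECT f."rowid" FROM "foo" f WHERE f."foo" = 'quux'));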

-- 
FSF associate member #7257



Re: [sqlite] INSERT OR IGNORE safety?

2012-11-02 Thread Ivan Shmakov
> Igor Tandetnik writes:

[…]

 > Note that ending up in the IGNORE branch of INSERT OR IGNORE
 > statement doesn't constitute failure, but normal execution.  The same
 > is true for INSERT ... SELECT statement where SELECT returns zero
 > rows so nothing is actually inserted.

Hence my question: how do I ensure that an INSERT … SELECT
statement results in exactly one row being inserted?
(Additional constraint: it should be possible to use the code in
a trigger.)  The best thing I was able to imagine is to replace:

INSERT INTO "bar" ("foo")
SELECT  f."rowid" FROM "foo" f WHERE f."foo" = ?1;

with:

INSERT INTO "bar" ("foo")
VALUES ((SELECT f."rowid" FROM "foo" f WHERE f."foo" = ?1));

… The task I'm solving hardly seems an unusual one.  Suppose
that, e. g., I'm recording a history of clients.  Each client
has a name, and is served an arbitrary (but positive) number of
times, at particular dates.  Or, for the 0'th approximation:

CREATE TABLE "history" (
"client-name"   TEXTNOT NULL,
-- as in: CAST (strftime ('%s', 'now') AS INTEGER), or now ()
"timestamp" INTEGER NOT NULL
);

The redundancy of the schema above could, however, be reduced as
follows:

CREATE TABLE "client" (
"name"  TEXTNOT NULL
);

CREATE UNIQUE INDEX "client-unique"
ON "client" ("name");

CREATE TABLE "history-1" (
"client"INTEGER NOT NULL  REFERENCES "client",
"timestamp" INTEGER NOT NULL
);

Now, I wish to preserve the simplicity of access of the first
variant.  Thus, I create a VIEW:

CREATE VIEW "history"
AS  SELECT  c."name" AS "client-name",
h."timestamp"
FROM "history-1" h
INNER JOIN "client" c
ON (c."rowid" = h."client");

… and a TRIGGER:

CREATE TRIGGER "history-insert"
INSTEAD OF INSERT ON "history" FOR EACH ROW BEGIN
INSERT OR IGNORE INTO "client" ("name")
VALUES (new."client-name");
INSERT INTO "history-1" ("client", "timestamp")
VALUES ((SELECT c."rowid"
 FROM "client".c
 WHERE c."name" = new."client-name"),
new."timestamp");
END;

This way, an application can SELECT from and INSERT into this
new "history" VIEW with just the same simplicity as was possible
with the former "history" TABLE.  Naturally, this trigger
requires that a single INSERT to the "history" VIEW results in
exactly a single INSERT to the underlying "history-1" TABLE.
(Which is what prompted my question above.)
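
For illustration, a quick usage sketch (the client name here is,
of course, arbitrary):

INSERT INTO "history" ("client-name", "timestamp")
VALUES ('alice', CAST (strftime ('%s', 'now') AS INTEGER));
INSERT INTO "history" ("client-name", "timestamp")
VALUES ('alice', CAST (strftime ('%s', 'now') AS INTEGER));
-- one "client" row for 'alice', two "history-1" rows
SELECT "client-name", "timestamp" FROM "history";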

Note that, with a little CASE trickery, it's possible to support
“hybrid” (id, value) views (exposing c."rowid" AS "client-id" in
the VIEW above), and columns that are allowed to be NULL, just
as well.

-- 
FSF associate member #7257  np. ml_hygm.xm



Re: [sqlite] INSERT OR IGNORE safety?

2012-11-02 Thread Ivan Shmakov
>>>>> Ivan Shmakov writes:
>>>>> Simon Slavin writes:
>>>>> On 2 Nov 2012, at 8:58am, Ivan Shmakov wrote:

 >>> INSERT OR IGNORE INTO "foo" ("foo")
 >>> VALUES (?1);
 >>> INSERT INTO "bar" ("foo")
 >>> VALUES ((SELECT f."rowid" FROM "foo" f WHERE f."foo" = ?1));

[…]

 > The end result for the command sequence I'm looking for is simple:

 > • "foo" has a (foo → ?1) record — it doesn't matter if it was
 > INSERT'ed just now, or was added at some point before;

 > • "bar" has a (foo → id) record, where ‘id’ is the ROWID of the
 > aforementioned "foo" record.

Correction: “… has a /new/ (foo → id) record…”

 > AIUI, the command sequence above does just that.

[…]

-- 
FSF associate member #7257



Re: [sqlite] INSERT OR IGNORE safety?

2012-11-02 Thread Ivan Shmakov
>>>>> Simon Slavin writes:
>>>>> On 2 Nov 2012, at 8:58am, Ivan Shmakov wrote:

 >> INSERT OR IGNORE INTO "foo" ("foo")
 >> VALUES (?1);
 >> INSERT INTO "bar" ("foo")
 >> VALUES ((SELECT f."rowid" FROM "foo" f WHERE f."foo" = ?1));

 >> Or is there a better way to ensure that the inner SELECT either
 >> returns a single row, or fails?

 > What do you mean by 'fails'?  Returning zero rows from a SELECT is
 > not failure: it's successfully reporting that there are no such rows
 > in the table.

Which, as far as the task described below is concerned,
indicates a “contradiction” (of a kind.)

 > And there may be rows in the table even if the first INSERT failed:
 > the rows may have already been inserted.

Yes.

The end result for the command sequence I'm looking for is
simple:

• "foo" has a (foo → ?1) record — it doesn't matter if it was
  INSERT'ed just now, or was added at some point before;

• "bar" has a (foo → id) record, where ‘id’ is the ROWID of the
  aforementioned "foo" record.

AIUI, the command sequence above does just that.

 > If what you mean is that some syntax error or conflict prevented the
 > first INSERT from working, the correct way to do it is to look at the
 > result returned from that INSERT and see whether it is SQLITE_OK.

ACK, thanks.  However, I'm not really interested in the result
of the first INSERT, and, in this case, these commands may
actually be a part of a trigger (where, I believe, I cannot look
at the result returned all that easily.)

[…]

-- 
FSF associate member #7257



Re: [sqlite] INSERT OR IGNORE safety?

2012-11-02 Thread Ivan Shmakov
>>>>> Clemens Ladisch writes:
>>>>> Ivan Shmakov wrote:

 >> I wonder if I'm on safe side when I use, say:

 >> INSERT OR IGNORE INTO "foo" ("foo")
 >> VALUES (?1);

 >> I mean, if the first INSERT fails for the reason other than the
 >> UNIQUE constraint set

 > <http://sqlite.org/lang_conflict.html> says:

 > | The ON CONFLICT clause applies to UNIQUE and NOT NULL constraints

[…]

 > If it fails for any other reason, you get a normal error.

The point is that I have a NOT NULL constraint as well:

 >> CREATE TABLE "foo" ("foo" TEXT NOT NULL);

(There was a missing UNIQUE in CREATE INDEX, BTW.)
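
That is, the index should have read:

CREATE UNIQUE INDEX "foo-unique" ON "foo" ("foo");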

However, as it's clear to me now, the second INSERT as specified
won't raise an error should there be no ?1 in "foo".

Thus, I need something like the following instead:

INSERT OR IGNORE INTO "foo" ("foo")
VALUES (?1);
INSERT INTO "bar" ("foo")
VALUES ((SELECT f."rowid" FROM "foo" f WHERE f."foo" = ?1));

Or is there a better way to ensure that the inner SELECT either
returns a single row, or fails?

TIA.

-- 
FSF associate member #7257



[sqlite] INSERT OR IGNORE safety?

2012-11-02 Thread Ivan Shmakov
I wonder if I'm on safe side when I use, say:

INSERT OR IGNORE INTO "foo" ("foo")
VALUES (?1);
INSERT INTO "bar" ("foo")
SELECT f."rowid"
FROM "foo" f
WHERE f."foo" = ?1;

(within a transaction) against the following schema:

CREATE TABLE "foo" ("foo" TEXT NOT NULL);
CREATE INDEX "foo-unique" ON "foo" ("foo");
CREATE TABLE "bar" ("foo" INTEGER NOT NULL REFERENCES "foo");

I mean, if the first INSERT fails for a reason other than the
UNIQUE constraint set, the second one will fail too, and it's
the success of the second one that I was interested in in the
first place.

JFTR, the other possible ways to implement the first INSERT that
I'm aware of, are:

INSERT INTO "foo" ("foo")
SELECT ?1
WHERE NOT EXISTS (SELECT 1 FROM "foo" f WHERE f."foo" = ?1);

INSERT INTO "foo" ("foo")
SELECT ?1
EXCEPT SELECT f."foo" FROM "foo" f;

-- 
FSF associate member #7257



Re: [sqlite] BLOB concatenation?

2012-01-20 Thread Ivan Shmakov
> Pavel Ivanov writes:

[...]

 >> Unfortunately, the string concatenation operator, when applied to
 >> two BLOB's, results in a text string instead of a BLOB, like:

 >> SELECT quote (X'5445' || X'5354'); => 'TEST'

 > Maybe the following?

 > SELECT quote(cast(X'5445' || X'5354' as blob));

Indeed, it seems to work.  Thanks!
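
For the record, the full query and its result:

SELECT quote (cast (X'5445' || X'5354' AS blob));
  => X'54455354'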

-- 
FSF associate member #7257



[sqlite] BLOB concatenation?

2012-01-20 Thread Ivan Shmakov
With substr (), it's possible to split a BLOB, like:

SELECT quote (substr (X'1337cafe', 3, 2));
  => X'CAFE'

However, how do I concatenate two blobs?

Unfortunately, the string concatenation operator, when applied
to two BLOB's, results in a text string instead of a BLOB, like:

SELECT quote (X'5445' || X'5354');
  => 'TEST'

-- 
FSF associate member #7257



Re: [sqlite] Efficient usage of sqlite

2012-01-02 Thread Ivan Shmakov
> Baruch Burstein writes:

[…]

 > My resources are a bunch of sound and image files, level data files,
 > script files and other game data stuff.  Instead of distributing my
 > game with about 20-30 small (some very small) files, I thought I
 > would roll all the files into some kind of archive.  I started to
 > roll my own format, when it occurred to me that sqlite may be well
 > suited for this.

The ar(1) archive format [1] may also be considered.  In
particular, the Debian project distributes pre-built binaries
for its software in ar archives (known as Debian package files,
or .deb's, there), which consist of a metadata part
(control.tar.gz) and the packaged files themselves
(data.tar.gz.)

However, if the intent is to associate a variety of metadata
with each of the data files, using SQLite seems to me like a
more natural solution.

[1] http://en.wikipedia.org/wiki/Ar_(Unix)
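
As for the metadata, a minimal table for such resources could
look like the following (a sketch only; the column names are
made up):

CREATE TABLE "resource" (
    -- the columns below are only an illustration
    "name"  TEXT    NOT NULL  UNIQUE,
    "kind"  TEXT    NOT NULL,
    "data"  BLOB    NOT NULL
);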

 > Which brought me to wonder if storing 5-10 tables with some of them
 > having <500 bytes of data may be very inefficient.

What kind of game data takes so little space, I wonder?

 > I don't want to substitute a 20K file for my <10K of files.  I know
 > this is not a lot, but it still bothers me, like what when I have a
 > game with 500M of files?  (you never know, it may happen!).  No
 > searching is needed (except once for the key to load a resource)

-- 
FSF associate member #7257



Re: [sqlite] transparent compression implementations for SQLite?

2011-12-31 Thread Ivan Shmakov
>>>>> Simon Slavin <slav...@bigfraud.org> writes:
>>>>> On 31 Dec 2011, at 4:56pm, Ivan Shmakov wrote:

 >> The integers could be up to 32 bits long, but I deem them likely
 >> to “cluster”, like, e. g.: 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 101, 101,
 >> 102, 102, 102, 103, 103, 768, 768, etc.  My guess is that such
 >> sequences should be quite compressible, but the point is that
 >> there'll only be a few such numbers per row, and millions of such
 >> rows in the table.

 > Thing is, an integer like 103 takes two bytes to store,

Those were given as examples only.  The real values are going to
use up to 28 bits (roughly, 4 octets) or so.

 > and one of those bytes is the byte that indicates what kind of value
 > is being stored.  So it's really only taking one byte per value.

Is it?  I mean, if I know for sure that I'd only ever need, say,
eight 32-bit integers per row, and pack them into a single BLOB
instead of having them stored as proper INTEGER's, won't I be
able to save a few bytes per row?  (At the cost of a performance penalty
when accessing these fields.  Also, I'd have to forget about
indexes on these, other than a UNIQUE one on the whole
BLOB-coded tuple, I guess.)

(With a few functions to unpack these values back out of BLOB's,
I'd even be able to CREATE a suitable VIEW.  Or may be a VIRTUAL
TABLE.)

 > And you can't improve on this by compressing individual values, only
 > by compressing at the table level.  And if you're compressing at
 > table level, making any changes is rather slow.

For now, I'm interested mostly in INSERT's.  (And SELECT's, of
course.)

 >> [snip]is there a way to determine the filesystem space occupied by a
 >> particular table, or index, in SQLite?  It now seems to me that the
 >> problem is different to what I've initially presumed.

 > Not inside SQLite's API functions.  You can do it by reading the file
 > format and multiplying the page size by how many pages are used.

ACK, thanks.  Do I understand it correctly that the
sqlite-analyzer (suggested elsewhere in this thread) does this
very thing?

TIA.

-- 
FSF associate member #7257



Re: [sqlite] transparent compression implementations for SQLite?

2011-12-31 Thread Ivan Shmakov
>>>>> Roger Binns writes:
>>>>> On 30/12/11 20:10, Ivan Shmakov wrote:

 >> The problem is that I have a table, each row holding something like
 >> 64 bytes of data (of BLOB and INTEGER types), which don't seem too
 >> compressible, but these rows themselves tend to be similar to each
 >> other, so that the table could still be compressed as a whole, or in
 >> chunks.

 > SQLite stores integers in a variable length encoding using (almost)
 > the minimum bytes necessary.  Consequently keeping your numbers as
 > small as possible will help.

The integers could be up to 32 bits long, but I deem them
likely to “cluster”, like, e. g.: 1, 1, 1, 1, 2, 2, 2, 3, 3, 3,
101, 101, 102, 102, 102, 103, 103, 768, 768, etc.  My guess is
that such sequences should be quite compressible, but the point
is that there'll only be a few such numbers per row, and
millions of such rows in the table.

 > You don't describe what your blobs are, but again encoding them in
 > the shortest length possible will result in less data and redundancy
 > making compression less significant.

On second thought, the BLOB's are of little concern: there are
not nearly as many of them as there are integers, and they're
already in a separate table, like:

CREATE TABLE "foo" (
"id"INTEGER PRIMARY KEY,
"blob"  BLOBNOT NULL);

CREATE UNIQUE INDEX "foo-unique"
ON "foo" ("blob");

However, this makes me wonder: is there a way to determine the
filesystem space occupied by a particular table, or index, in
SQLite?  It now seems to me that the problem is different from
what I initially presumed.
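
One possibility, assuming the SQLite build at hand includes the
optional DBSTAT virtual table (so this is only a sketch, and the
temporary table name is arbitrary), would be something like:

-- requires a build with SQLITE_ENABLE_DBSTAT_VTAB
CREATE VIRTUAL TABLE temp."space" USING dbstat;
SELECT "name", sum ("pgsize") AS "bytes"
  FROM temp."space"
  GROUP BY "name"
  ORDER BY "bytes" DESC;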

 > The page you linked to is a pretty good writeup on the subject.  You
 > can get good compression using CEROD but that doesn't allow making
 > changes.  Other than that you'll need to put something together
 > yourself.

ACK.  I'd rarely really need either UPDATE or DELETE on the
table in question, so I could probably move the accumulated data
into a separate read-only and compressed database file once in a
while, though such a design seems rather complex (and fragile)
to implement.

 > You can compress your fields individually but then you'll only be
 > able to equality comparisons (not ordering or partial matches).
 > Using a precomputed dictionary can give good results even on short
 > fields.  Here is an example for English(ish) text:

 > https://github.com/antirez/smaz

 > Other than that you'll either need to work at the filesystem level or
 > the SQLite page level as the article describes.  Here is another
 > compressor to consider:

 > http://blosc.pytables.org/trac

 > But I suggest you start at the beginning with an understanding of the
 > SQLite file format:

 > http://www.sqlite.org/fileformat2.html

ACK.  Thanks!

-- 
FSF associate member #7257



Re: [sqlite] transparent compression implementations for SQLite?

2011-12-31 Thread Ivan Shmakov
>>>>> Simon Slavin writes:
>>>>> On 31 Dec 2011, at 7:11am, Ivan Shmakov wrote:

 >> Download the code and prebuilt DLL.

 > It includes both.  In other words, the vfs_compress.c file you asked
 > for is in there.

Indeed.  Thanks!

-- 
FSF associate member #7257



Re: [sqlite] transparent compression implementations for SQLite?

2011-12-30 Thread Ivan Shmakov
>>>>> Simon Slavin writes:
>>>>> On 31 Dec 2011, at 4:10am, Ivan Shmakov wrote:

 >> I wonder, if anyone has any experience with [1] (and where could one
 >> get the vfs_compress.c file, BTW?), or any other transparent
 >> compression implementations for SQLite?

 > Links to the code and a pre-build .dll are in the article, about
 > 2/3rds down the page.

That's strange.  For me, it reads like the following, with the
links to the DLL (which is of little use to me), the SQLite's
own download page and the amalgamation itself (both under
http://www.sqlite.org/) and http://zlib.net/.  Nowhere can I see
a link to vfs_compress.c.  (I've tried it both with Lynx and
Iceweasel, and I've also checked the (X)HTML code itself, but to
no avail.)

TIA.

--cut: 
http://blog.ashodnakashian.com/2011/09/sqlite-with-built-in-online-compression/ 
--

Download

The code holds the same copyright claims as Sqlite, namely none. The
code is experimental. Use it at your own risk.

[78]Download the code and prebuilt DLL. This sqlite3.dll is version
3.7.7.1 amalgamation created with the default settings/flags from
the amalgamation created from original sources by the original
configure and make files. The compression code is added and it’s
built using VS2010 Sp1 and statically liked to the runtime
libraries, as such it has no dependencies.

Building

To build the code, first [79]download a recent Sqlite version. The
[80]3.7.7.1 amalgamation is perfect. The [81]latest Zlib must also
be downloaded and built.

Add the Zlib headers to the include path, copy the vfs_compress.c
file next to sqlite sources and build. Next, build sqlite3.c
amalgamation (or the original sources) and link the binaries of
sqlite3, vfs_compress and Zlib to create the executable.

…

References

…
  78. 
http://blog.ashodnakashian.com/wp-content/uploads/2011/09/sqlite3_compress.zip
  79. http://www.sqlite.org/download.html
  80. http://www.sqlite.org/sqlite-amalgamation-3070701.zip
  81. http://zlib.net/
…

--cut: 
http://blog.ashodnakashian.com/2011/09/sqlite-with-built-in-online-compression/ 
--

-- 
FSF associate member #7257



[sqlite] transparent compression implementations for SQLite?

2011-12-30 Thread Ivan Shmakov
I wonder, if anyone has any experience with [1] (and where could
one get the vfs_compress.c file, BTW?), or any other transparent
compression implementations for SQLite?

The problem is that I have a table, each row holding something
like 64 bytes of data (of BLOB and INTEGER types), which don't
seem too compressible, but these rows themselves tend to be
similar to each other, so that the table could still be
compressed as a whole, or in chunks.

TIA.  And HNY.

[1] 
http://blog.ashodnakashian.com/2011/09/sqlite-with-built-in-online-compression/

-- 
FSF associate member #7257



Re: [sqlite] create sqlite3_value * from scratch?

2011-10-12 Thread Ivan Shmakov
>>>>> Pavel Ivanov writes:
>>>>> On Wed, Oct 12, 2011 at 11:12 AM, Ivan Shmakov wrote:

[…]

 >> Consider, e. g.:

 >> sqlite3_value *a
 >>   = sqlite3_int64_value (1);
 >> assert (a != 0);
 >> sqlite3_value *b
 >>   = sqlite3_text_value (-1, "qux");
 >> sqlite3_value *c
 >>   = sqlite3_blob_value (blob_size, blob);
 >> assert (b != 0);
 >> int r;
 >> r = db_triv_exec_bound (db, sql_1, a, b, 0);
 >> assert (r == SQLITE_OK);

[…]

 > You can do absolutely the same thing but use your_own_value* instead
 > of sqlite3_value*.  Why stick with SQLite's internal data structures?

No special reason, but it would have saved me some work.  Not to
mention that it seems to me a bit like a duplicate effort.

Note also that such code would have to be updated should a new
type be introduced to SQLite (however unlikely, it still may
happen.)

-- 
FSF associate member #7257



Re: [sqlite] create sqlite3_value * from scratch?

2011-10-12 Thread Ivan Shmakov
>>>>> Igor Tandetnik writes:
>>>>> Ivan Shmakov <i...@gray.siamics.net> wrote:

 >> I wonder, is it possible to create sqlite3_value * from scratch
 >> from within SQLite library's user code?

 > There's a circuitous route that leads there.  You can prepare a
 > statement of the form "select ?;", bind the parameter with one of
 > sqlite3_bind_* functions, then step it once, and retrieve the value
 > right back with sqlite3_column_value.  Note that the resulting
 > sqlite3_value is "unprotected" (see
 > http://sqlite.org/c3ref/value.html) and can only be used in limited
 > ways.  And of course it's only valid until the "select ?;" statement
 > is reset or finalized.

Unfortunately, the latter clearly prevents the kind of interface
I've had in mind (below.)

 >> Given some way to construct a sqlite3_value wrapping object, I
 >> could instead rely on sqlite3_bind_value () alone, thus
 >> eliminating the necessity of type specifiers in the interface.

 > Well, a hypothetical API that constructs sqlite3_value from raw data
 > would have to take the type specifier anyway, wouldn't it?  You would
 > just be moving the same logic to another place.

Yes.  But this still may make code clearer, and, occasionally,
also more concise.  Consider, e. g.:

sqlite3_value *a
  = sqlite3_int64_value (1);
assert (a != 0);
sqlite3_value *b
  = sqlite3_text_value (-1, "qux");
sqlite3_value *c
  = sqlite3_blob_value (blob_size, blob);
assert (b != 0);
int r;
r = db_triv_exec_bound (db, sql_1, a, b, 0);
assert (r == SQLITE_OK);
r = db_triv_exec_bound (db, sql_2, b, c, 0);
assert (r == SQLITE_OK);
r = db_triv_exec_bound (db, sql_3, c, a, 0);
assert (r == SQLITE_OK);
sqlite3_value_free (a);
sqlite3_value_free (b);
sqlite3_value_free (c);

And I see no way to implement my_sqlite3_int64_value () using
the identity SELECT (above), as it'd require
my_sqlite3_value_free () to find the prepared statement
associated with the value (to call sqlite3_finalize () on it, so as
to avoid a memory leak), and I know of no documented way of
doing that.

-- 
FSF associate member #7257



[sqlite] create sqlite3_value * from scratch?

2011-10-12 Thread Ivan Shmakov
I wonder, is it possible to create sqlite3_value * from scratch
from within SQLite library's user code?

The idea is as follows.  The current dbtriv.c [1] code has a
db_triv_exec_bound () function, used roughly as follows:

/* sqlite3 *db; */
const char *sql
  = ("INSERT INTO foo (bar, baz)"
     " VALUES ($1, $2);");
int r
  = db_triv_exec_bound (db, sql,
                        SQLITE_INTEGER, (sqlite3_int64)1,
                        SQLITE3_TEXT,   -1, "qux",
                        -1);
assert (r == SQLITE_OK);

This function will look up a cached prepared statement for the
SQL code passed, creating and caching one if none's found, and
then bind the parameters with sqlite3_bind_int64 (),
sqlite3_bind_text (), etc., matching the type specifiers passed.

Given some way to construct a sqlite3_value wrapping object, I
could instead rely on sqlite3_bind_value () alone, thus
eliminating the necessity of type specifiers in the interface.

TIA.

[1] https://gitorious.org/e2dis/e2dis-devel/blobs/master/lib/dbtriv.c

-- 
FSF associate member #7257



[sqlite] DISTINCT vs. UNIQUE INDEX, and NOT EXISTS vs. EXCEPT

2011-10-12 Thread Ivan Shmakov
>>>>> Jim Morris writes:
>>>>> On 10/6/2011 10:43 PM, Ivan Shmakov wrote:
>>>>> Jim Morris writes:

[…]

 >>> INSERT INTO fts3_table (a,b,c)
 >>> SELECT 'an A','a B','a C'
 >>> WHERE NOT EXISTS
 >>> (SELECT DISTINCT a,b,c
 >>> FROM fts3_table
 >>> WHERE a='an A' AND b='a B' AND c='a C');

 >> However, I wonder, would the following (slightly more concise) query
 >> imply any performance loss in comparison to the one above?

 >> INSERT INTO fts3_table (a, b, c)
 >> SELECT 'an A', 'a B', 'a C'
 >> EXCEPT SELECT DISTINCT a, b, c
 >>FROM fts3_table;

 >> Also, I'm curious if DISTINCT may cause any performance loss in the
 >> case that the columns in question are constrained by an UNIQUE
 >> index?  Like:

 >> CREATE UNIQUE INDEX "foo-unique"
 >> ON "foo" (a, b, c);

 > I don't know for sure.  You would need to do some testing to
 > determine performance issues.  I wouldn't use distinct if the values
 > are already guaranteed to be unique

As per EXPLAIN, SELECT DISTINCT indeed doesn't take the
uniqueness constraint into consideration.

 > or in a NOT EXISTS clause since it may have additional overhead.

Surprisingly enough, when used in the EXCEPT part of the query,
DISTINCT has no effect on the VM code (as shown by EXPLAIN)!

At first glance, the code for the NOT EXISTS variant (sans
INSERT; hopefully it doesn't matter) seems simpler (one
operation fewer in total, one fewer Next, no OpenEphemeral's,
and IdxGE instead of IdxInsert and IdxDelete):

addr  opcode p1p2p3p4 p5  comment  
  -        -  --  -
0 Trace  0 0 000   
1 Goto   0 21000   
2 OpenRead   0 2 1 1  00   
3 Rewind 0 19000   
4 Integer0 2 000   
5 Integer1 3 000   
6 OpenRead   2 3 0 keyinfo(1,BINARY)  00   
7 Column 0 0 400   
8 IsNull 4 14000   
9 SeekGe 2 144 1  00   
10IdxGE  2 144 1  01   
11Integer1 2 000   
12IfZero 3 14-1   00   
13Next   2 10000   
14Close  2 0 000   
15If 2 18100   
16Column 0 0 700   
17ResultRow  7 1 000   
18Next   0 4 001   
19Close  0 0 000   
20Halt   0 0 000   
21Transaction0 0 000   
22VerifyCookie   0 2 000   
23Transaction1 0 000   
24VerifyCookie   1 1 000   
25TableLock  1 2 0 temp.staged00   
26TableLock  0 2 0 foo00   
27Goto   0 2 000   

The code for the EXCEPT variant is:

addr  opcode p1p2p3p4 p5  comment  
  -        -  --  -
0 Trace  0 0 000   
1 OpenEphemeral  2 1 0 keyinfo(1,BINARY)  00   
2 Goto   0 22000   
3 OpenRead   1 2 1 1  00   
4 Rewind 1 9 000   
5 Column 1 0 100   
6 MakeRecord 1 1 200   
7 IdxInsert  2 2 000   
8 Next   1 5 001   
9 Close  1 0 000   
10OpenRead   0 2 0 2  00   
11Rewind 0 15000   
12 

Re: [sqlite] How to design this table?

2011-10-10 Thread Ivan Shmakov
> Simon Slavin writes:
> On 9 Oct 2011, at 3:57am, 张一帆 wrote:

 >> i have some data like "a and b or c ...",there will be a word 'and'
 >> or 'or' which means the Logical relations between each item.

 > If you have "a and b or c" does that mean

 > (a and b) or cOR
 > a and (b or c)?

 > How does your software know ?

The boolean AND and OR operations are often compared to the
× and + of ordinary arithmetic.  Therefore, a AND b OR c is akin
to a × b + c, and the precedence rules (borrowed from arithmetic)
make that (a × b) + c.

And SQLite, among many others, behaves just like that:

$ sqlite3 :memory: 'SELECT 0 AND  1 OR 1' 
1
$ sqlite3 :memory: 'SELECT 0 AND (1 OR 1)' 
0
$ 

I guess that it just follows the standard.

-- 
FSF associate member #7257



Re: [sqlite] ensuring uniqueness of tuples spanning across multipletables?

2011-10-06 Thread Ivan Shmakov
> Jim Morris writes:

 > The recent thread may relate: "[sqlite] Is there an efficient way to
 > insert unique rows (UNIQUE(a, b, c)) into an fts3 virtual table?"

 > INSERT INTO fts3_table (a,b,c)
 > SELECT 'an A','a B','a C'
 > WHERE NOT EXISTS
 > (SELECT DISTINCT a,b,c
 > FROM fts3_table
 > WHERE a='an A' AND b='a B' AND c='a C');

 > The above SQL could be adapted to your schema.  As mentioned, the
 > performance will be slower than a straight insert.

Thanks.  It's a solution not quite for the problem I'm having,
but I'll probably stick to it (and to the denormalized schema it
implies.)

However, I wonder, would the following (slightly more concise)
query imply any performance loss in comparison to the one above?

INSERT INTO fts3_table (a, b, c)
SELECT 'an A', 'a B', 'a C'
EXCEPT SELECT DISTINCT a, b, c
   FROM fts3_table;

Also, I'm curious if DISTINCT may cause any performance loss in
the case that the columns in question are constrained by an
UNIQUE index?  Like:

CREATE UNIQUE INDEX "foo-unique"
ON "foo" (a, b, c);

-- 
FSF associate member #7257



Re: [sqlite] ensuring uniqueness of tuples spanning across multipletables?

2011-10-04 Thread Ivan Shmakov
>>>>> Igor Tandetnik <itandet...@mvps.org> writes:
>>>>> Ivan Shmakov <i...@gray.siamics.net> wrote:

 >> This structure is, obviously, could just as well be represented
 >> with, e. g.:

 >> CREATE TABLE "foo" (
 >> "key"   INTEGER PRIMARY KEY,
 >> "value" INTEGER NOT NULL,
 >> "value-1"   INTEGER,
 >> …
 >> "value-N"   INTEGER);

 > Or else with this:

 > CREATE TABLE "foo" (
 > key INTEGER PRIMARY KEY,
 > value   INTEGER NOT NULL);

 > create table fooDetails(
 > key INTEGER PRIMARY KEY
 > REFERENCES "foo" (key),
 > L integer not null,
 > value integer not null);

 > That's what a fully normalized schema would look like for your data.

Indeed.  But I don't quite understand how it makes my task
any easier.  I fail to see how I can ensure that the
(value_1, …, value_N) tuple is unique.  (I. e., that there's
an /invertible/ mapping of keys to value tuples.)

-- 
FSF associate member #7257



[sqlite] ensuring uniqueness of tuples spanning across multiple tables?

2011-10-04 Thread Ivan Shmakov
Well, this case is somewhat weird.  I have a number of tables
like:

PRAGMA "foreign_keys" = 1;

CREATE TABLE "foo-L" (
key INTEGER PRIMARY KEY
REFERENCES "foo" (key),
value   INTEGER NOT NULL);

Which are tied to a single table, like:

CREATE TABLE "foo" (
key INTEGER PRIMARY KEY,
value   INTEGER NOT NULL);

This structure, obviously, could just as well be represented
with, e. g.:

CREATE TABLE "foo" (
"key"   INTEGER PRIMARY KEY,
"value" INTEGER NOT NULL,
"value-1"   INTEGER,
…
"value-N"   INTEGER);

(I. e., by dropping the NOT NULL constraint from the
respective columns.)  For which I could then specify a
uniqueness constraint:

CREATE UNIQUE INDEX "foo-unique"
ON "foo" ("value", "value-1", …, "value-N");

Now, I'm somewhat concerned that the table above may get overly
sparse at times, which makes me wonder if I could establish an
equivalent constraint over the original multi-relation
structure, as described above?

-- 
FSF associate member #7257



Re: [sqlite] max() with LIMIT

2011-08-31 Thread Ivan Shmakov
>>>>> Igor Tandetnik writes:
>>>>> Ivan Shmakov <i...@gray.siamics.net> wrote:
>>>>> Tobias Vesterlund writes:

 >>> Is it possible to get the highest value in a "limited column" when
 >>> using LIMIT?

 >> Sure.

 >> SELECT max (id) FROM (SELECT id FROM t WHERE id > 0 LIMIT 10);

 > This only works by accident.  There's no requirement that the
 > subselect return rows in any particular order.  It can, in principle,
 > choose any ten rows.

The OP didn't require the query to be unambiguous in its
interpretation, either.

However, yes, almost anytime LIMIT is used, it has to be
complemented with ORDER BY for a definite result.
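
I. e., for the query discussed, something like:

SELECT max (id)
  FROM (SELECT id FROM t WHERE id > 0 ORDER BY id LIMIT 10);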

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



Re: [sqlite] max() with LIMIT

2011-08-31 Thread Ivan Shmakov
> Tobias Vesterlund writes:

[…]

 > If I do SELECT max(id) FROM t; it will return 99.

 > If I do SELECT id FROM t WHERE id > 0 LIMIT 10; it will return
 > 1,2,3,4,5,6,7,8,9,10

 > But If I do SELECT max(id) FROM t WHERE id > 0 LIMIT 10; it will
 > return 99.

 > My logic, which may be flawed in this case, tells me the third SELECT
 > should return 10 and not 99.

LIMIT applies /after/ the aggregation.  Consider, e. g., using
GROUP BY, and then selecting no more than 10 groups' maximums.
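
E. g. (a sketch; the "category" column is made up):

-- LIMIT caps the number of groups returned, not the rows
-- examined by max ()
SELECT "category", max ("id")
  FROM t
  GROUP BY "category"
  LIMIT 10;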

 > Is it possible to get the highest value in a "limited column" when
 > using LIMIT?

Sure.

SELECT max (id) FROM (SELECT id FROM t WHERE id > 0 LIMIT 10);

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



Re: [sqlite] Creating a database with a script or SQLite command

2011-08-30 Thread Ivan Shmakov
> Pete Helgren writes:

 > I may end up going this direction, at the moment I am not having much
 > luck with the conditional copy in Busybox.  Your suggestion:

 > cp -n newdatabase.db /data/newdatabase.db

 > Isn't supported in the version of Busybox that I am running.  Also
 > the script example I tried:

 > if  [ -f /data/newdatabase.db];
 > then
 > echo "Nothing to do, database exists"
 > else
 > cp newdatabase.db /data/newdatabase.db
 > fi

 > delivers the error  [:missing]

The primary token delimiter in POSIX Shell is space.  Hence, the
following line:

   if  [ -f /data/newdatabase.db];

Is understood as: “check if the file ‘/data/newdatabase.db]’
exists” (note the closing bracket), and it certainly lacks a
closing bracket for the ‘test’ (AKA ‘[’) command.

The solution would be as follows:

 - if  [ -f /data/newdatabase.db];
 + if  [ -f /data/newdatabase.db ];

OTOH, when the ‘test’ form of the command is used, a closing
bracket is not necessary, thus:

   if  test -f /data/newdatabase.db ;

Please also consider joining the news:comp.unix.shell newsgroup
(e. g., on Aioe, nntp://aioe.org/comp.unix.shell/), as there're
quite a few folks familiar with the arcane art of Unix Shell
programming.  (AFAIK, Thunderbird has the support for the
Internet News service.)

[…]

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



[sqlite] sqlite_temp_master vs. .dump (now a library, too!)

2011-08-28 Thread Ivan Shmakov
>>>>> Roger Binns writes:
>>>>> On 08/16/2011 04:59 PM, Ivan Shmakov wrote:

[…]

 >> Also, are the .dump and .read commands implemented as part of
 >> the sqlite3 binary, or are they part of the library?

 > They are part of the standalone shell (ie not the library).  The
 > shell source code is under the same license as the rest of SQLite (ie
 > as public domain as possible) so you are free to make a copy and do
 > whatever you want with it.

I've just extracted all the respective routines out of shell.c
(sqlite-autoconf-3070701.tar.gz), added a couple of wrappers,
and posted the resulting code in news:alt.sources [1].

However, .dump doesn't seem to examine sqlite_temp_master, which is
somewhat unfortunate, given that it was my intent to use it as a
debugging aid for my Image reassembly tool (imrt; to be part of
the e2dis [2] suite.)

I guess that I could simply substitute sqlite_temp_master for
sqlite_master (perhaps conditionally), and get what I want?
(Possibly doing s/CREATE TABLE/CREATE TEMP TABLE/ as well.)
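
E. g., the schema of the temporary tables themselves is readily
available with:

SELECT "name", "sql"
  FROM "sqlite_temp_master"
  WHERE "type" = 'table';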

Anything else to consider?

TIA.

[1] news:86vctigdga@gray.siamics.net
nntp://aioe.org/alt.sources/732

http://groups.google.com/group/alt.sources/browse_thread/thread/19146b6db22067a4
[2] https://gitorious.org/e2dis

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



Re: [sqlite] SQL ROWNUM option is failed - Want to Read records in chunks from table

2011-08-26 Thread Ivan Shmakov
> Tarun  writes:

[…]

 > I planned to execute query that works on SQL ROWNUM option

 > "select * from employee2 where rownum > 1 and rownum < 2"

Perhaps:

SELECT * FROM employee2 ORDER BY oid LIMIT 1 OFFSET 1 

 > but i m getting error from sqlite3 that "no such column: rownum"

 > Please help me to achieve "read records in chunks". I am stuck.
 > Thanks in advance.
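
More generally, records could be read in chunks by stepping the
OFFSET (a sketch; the chunk size of 100 is arbitrary, and ORDER BY
is needed for the chunks to be well-defined):

SELECT * FROM employee2 ORDER BY oid LIMIT 100 OFFSET 0;
SELECT * FROM employee2 ORDER BY oid LIMIT 100 OFFSET 100;
-- … and so on, increasing OFFSET by the chunk size each time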

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



Re: [sqlite] type of a value bound by sqlite3_bind_blob ()?

2011-08-20 Thread Ivan Shmakov
>>>>> Ivan Shmakov writes:
>>>>> Roger Binns writes:

[…]

 >> Consequently if you had a trigger pulling a stunt like this, your
 >> code could try to insert a blob and silently (wrongly) end up with a
 >> string.  SQLite won't even complain if the blob isn't a valid text
 >> encoding producing an invalid string.

 > I don't have any triggers (at least, it wasn't my intent to add
 > them.)  My code is roughly as shown below.

 > And I don't seem to understand where's the problem.

Silly mistake on my part, as I've just found.

Namely, I've had /two/ functions to alter the table in question.
One of them INSERT's the tuple, and uses sqlite3_bind_blob ().
The other, that UPDATE's the tuple, however, uses
sqlite3_bind_text ().

Before I saw that there's sound support for blobs in the current
SQLite, I'd planned to use Base64 for these fields; afterwards,
I changed one of the functions, but not the other.

And yes, code duplication is clearly a bad thing.

Now that the discrepancy is fixed, the problem is gone.

Thanks.

[…]

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



Re: [sqlite] type of a value bound by sqlite3_bind_blob ()?

2011-08-19 Thread Ivan Shmakov
>>>>> Roger Binns writes:
>>>>> On 08/17/2011 09:25 PM, Ivan Shmakov wrote:

 >> Somehow, I've assumed that sqlite3_bind_blob () will bind a
 >> parameter to a blob.

 > It does.  There are no affinity rules that will cause otherwise.

 > There are some operations that cause blobs to be silently promoted to
 > strings.  IMHO these are egregious errors in SQLite since a bucket of
 > bytes cannot be turned into characters unless you know the encoding
 > which SQLite doesn't.  Here is an example:

 > sqlite> select X'616263' || 'd';
 > abcd
 > sqlite> select typeof(X'616263' || 'd');
 > text

As a matter of personal preference, I'd call the code
above “an error.”  It's not meaningful to my eye, and I'd likely
avoid it in my code on the basis that even if it has some
interpretation in the language, it'd be just too much of a
mental strain to remember it.

 > Consequently if you had a trigger pulling a stunt like this, your code
 > could try to insert a blob and silently (wrongly) end up with a string.
 > SQLite won't even complain if the blob isn't a valid text encoding
 > producing an invalid string.

I don't have any triggers (at least, it wasn't my intent to add
them.)  My code is roughly as shown below.

And I don't quite see where the problem is.

Also, I've built a simpler example following the same scheme
(except the use of sqlite3_bind_parameter_index ()), and it
doesn't seem to cast blobs to strings.

Any suggestions?

TIA.

const char *const sql
  = ("INSERT"
     " INTO \"chunk\" (\"id\", \"length\", \"sha1\", \"sha256\")"
     " VALUES ($id, $length, $sha1, $sha256)");

{
  ix_sha1
    = sqlite3_bind_parameter_index (st, "$sha1");
  assert (ix_sha1 > 0);
}

{
  int r
    = sqlite3_reset (st);
  assert (r == SQLITE_OK);
}

{
  int r
    = sqlite3_bind_blob (st, ix_sha1,
                         sha1, CHUNK_DB_SHA1_LEN,
                         SQLITE_TRANSIENT);
  assert (r == SQLITE_OK);
}

{
  int r
    = sqlite3_step (st);
  assert (r == SQLITE_DONE);
}

[…]

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



[sqlite] type of a value bound by sqlite3_bind_blob ()?

2011-08-17 Thread Ivan Shmakov
>>>>> Roger Binns writes:
>>>>> On 08/16/2011 04:59 PM, Ivan Shmakov wrote:

 >> In the sqlite3's .dump command's output, the binary blobs may either
 >> be represented as hexadecimal X''-literals, or as text strings.

[…]

 > I suggest using typeof on the data you think is blobs to verify what
 > type they really are.

The values in question are indeed of mixed type.  Thanks!
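
E. g., a check along these lines, using the "chunk" table from
the code below:

SELECT typeof ("sha1"), count (*)
  FROM "chunk"
  GROUP BY typeof ("sha1");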

Yet, it makes me wonder how the following bit of code could
ever have resulted in a value of the TEXT type being bound.

{
  int r
    = sqlite3_bind_blob (st, ix_sha1,
                         sha1, CHUNK_DB_SHA1_LEN,
                         SQLITE_TRANSIENT);
  assert (r == SQLITE_OK);
}

Somehow, I've assumed that sqlite3_bind_blob () will bind a
parameter to a blob.

(It's SQLite 3.7.3 debian 1.)

 >> Also, are the .dump and .read commands implemented as part of
 >> the sqlite3 binary, or are they part of the library?

 > They are part of the standalone shell (ie not the library).  The
 > shell source code is under the same license as the rest of SQLite (ie
 > as public domain as possible) so you are free to make a copy and do
 > whatever you want with it.

ACK.  Thanks.

-- 
FSF associate member #7257  Coming soon: Software Freedom Day
http://mail.sf-day.org/lists/listinfo/ planning-ru (ru), sfd-discuss (en)



[sqlite] forcing X'' literals in sqlite3's .dump?

2011-08-16 Thread Ivan Shmakov
In the sqlite3's .dump command's output, the binary blobs may
either be represented as hexadecimal X''-literals, or as text
strings.

I wonder, how do I force sqlite3(1) to exclusively use the X''
representation?
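
For reference, the representation apparently follows the value's
storage class, as quote () illustrates:

SELECT quote (X'54455354');
  => X'54455354'
SELECT quote ('TEST');
  => 'TEST'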

Also, are the .dump and .read commands implemented as part of
the sqlite3 binary, or are they part of the library?

TIA.

-- 
FSF associate member #7257
