Re: [sqlite] Blob incremental i/o via Python

2008-02-14 Thread Samuel Neff
Thanks for the correction.

Sam


On Thu, Feb 14, 2008 at 6:19 PM, <[EMAIL PROTECTED]> wrote:

> "Samuel Neff" <[EMAIL PROTECTED]> wrote:
> > If the images you're storing are larger than the defined page size for
> the
> > database (which is most likely the case) then you can get better
> performance
> > and reduced memory consumption by storing the images in the file system
> and
> > store only paths to the files in the database.  This means reading the
> large
> > amount of data directly from the file system instead of from sqlite's
> > linked-list of pages and bypassing the page caching layer (which you
> > probably don't want for images anyways) and freeing up more of the page
> > cache for real database data.
> >
>
> One would think.  And yet experiments suggest otherwise.  It
> turns out to be faster to read images directly out of SQLite
> BLOBs until the image gets up to about 15KB on windows and
> up to about 60KB on linux.  And even for much larger images,
> the performance difference between reading from SQLite and
> reading from a file is not that great, so it is a reasonable
> thing to do to read from SQLite if transactions are important
> to you or if it is just more convenient.
>
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
>


Re: [sqlite] Optimizing Query

2008-02-14 Thread BareFeet
Hi Nathan,

> The problem gets into the multiple where fields in the where clause.
> There could be up to 9 different where parameters.

I could be wrong, but your statement above screams out that your
database may not be "normalized". By this I mean that perhaps you have
a lot of null values or repeating default-type values in various
fields. If this is the case, I suggest that you restructure your
database to split the tables to remove redundancy (i.e. normalize it).
This will speed up most operations, possibly reduce the file size, and
probably reduce the solution to your "problem" to a simple query on
one table, joined to any others from which you require data, without
needing nine where parameters.
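As a very rough sketch of what I mean (I haven't seen your schema, so every
table and column name below is invented), one common pattern is to move the
optional or repeating fields out into a separate table instead of keeping
many mostly-null columns:

import sqlite3

con = sqlite3.connect("example.db")
con.executescript("""
-- one row per core object, holding only the columns every row really has
CREATE TABLE item (
  id   INTEGER PRIMARY KEY,
  name TEXT NOT NULL
);
-- optional attributes become rows here instead of nullable columns
CREATE TABLE item_attr (
  item_id INTEGER NOT NULL REFERENCES item(id),
  attr    TEXT    NOT NULL,
  value   TEXT    NOT NULL
);
CREATE INDEX item_attr_idx ON item_attr(item_id, attr);
""")
con.commit()

Queries then join item to item_attr for just the attributes they actually
need, rather than testing many columns in one big where clause.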

Can you please post the schema of your database, along with some
sample data?

Tom
BareFeet




Re: [sqlite] Blob incremental i/o via Python

2008-02-14 Thread Norman Young
On Thu, Feb 14, 2008 at 6:19 PM, <[EMAIL PROTECTED]> wrote:

> "Samuel Neff" <[EMAIL PROTECTED]> wrote:
> > If the images you're storing are larger than the defined page size for
> the
> > database (which is most likely the case) then you can get better
> performance
> > and reduced memory consumption by storing the images in the file system
> and
> > store only paths to the files in the database.  This means reading the
> large
> > amount of data directly from the file system instead of from sqlite's
> > linked-list of pages and bypassing the page caching layer (which you
> > probably don't want for images anyways) and freeing up more of the page
> > cache for real database data.
> >
>
> One would think.  And yet experiments suggest otherwise.  It
> turns out to be faster to read images directly out of SQLite
> BLOBs until the image gets up to about 15KB on windows and
> up to about 60KB on linux.  And even for much larger images,
> the performance difference between reading from SQLite and
> reading from a file is not that great, so it is a reasonable
> thing to do to read from SQLite if transactions are important
> to you or if it is just more convenient.
>
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
>
> 
>

How much larger? (Have the experiments been published?)

These are exactly the factors we're trading off: performance versus
reliability and programmer convenience.

Reliability is paramount. If there is no way to achieve transactional
semantics using files outside of SQLite, then any discussions about
performance and programmer convenience are purely academic. The application
must behave deterministically.

Performance is next most important. Our images are typically 20k JPEGs, as
captured with the current Nokia Tablet's VGA camera. Our audio streams are
encoded with Speex at about 2k/second, so a typical 1-minute recording is
about 120k. However, audio recordings can grow arbitrarily large, so
performance with 10-minute (1.2M) or 100-minute (12M) recordings could be
relevant. Also, we can anticipate video recordings in the future, with a
steeper slope.

Naturally, we are willing to sacrifice programmer convenience for the sake
of end-user reliability and performance. In this case, it appears that
reliability and programmer convenience are congruent, with transactions.

Thanks for a great database, Dr. Hipp. It suits burgeoning portable
applications perfectly. Its transactional incremental I/O enables rich
media on portable devices.

Norm.


Re: [sqlite] Blob incremental i/o via Python

2008-02-14 Thread drh
"Samuel Neff" <[EMAIL PROTECTED]> wrote:
> If the images you're storing are larger than the defined page size for the
> database (which is most likely the case) then you can get better performance
> and reduced memory consumption by storing the images in the file system and
> store only paths to the files in the database.  This means reading the large
> amount of data directly from the file system instead of from sqlite's
> linked-list of pages and bypassing the page caching layer (which you
> probably don't want for images anyways) and freeing up more of the page
> cache for real database data.
> 

One would think.  And yet experiments suggest otherwise.  It
turns out to be faster to read images directly out of SQLite
BLOBs until the image gets up to about 15KB on windows and
up to about 60KB on linux.  And even for much larger images,
the performance difference between reading from SQLite and
reading from a file is not that great, so it is a reasonable
thing to do to read from SQLite if transactions are important
to you or if it is just more convenient.

--
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] Blob incremental i/o via Python

2008-02-14 Thread Samuel Neff
If the images you're storing are larger than the defined page size for the
database (which is most likely the case) then you can get better performance
and reduced memory consumption by storing the images in the file system and
storing only paths to the files in the database.  This means reading the large
amount of data directly from the file system instead of from sqlite's
linked-list of pages and bypassing the page caching layer (which you
probably don't want for images anyways) and freeing up more of the page
cache for real database data.
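A bare-bones sketch of that approach from Python, in case it helps (the table
name and path here are made up):

import sqlite3

con = sqlite3.connect("media.db")
con.execute("CREATE TABLE IF NOT EXISTS images (id INTEGER PRIMARY KEY, path TEXT NOT NULL)")
con.execute("INSERT INTO images (path) VALUES (?)", ("/data/images/photo1.jpg",))
con.commit()

# Read side: the database only hands back the path; the bytes come straight
# from the file system and never pass through SQLite's page cache.
row = con.execute("SELECT path FROM images WHERE id = ?", (1,)).fetchone()
f = open(row[0], "rb")
data = f.read()
f.close()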

HTH,

Sam


On Mon, Feb 11, 2008 at 10:29 PM, Norman Young <[EMAIL PROTECTED]>
wrote:

> The documentation outlines the C interface for incremental blob
> input/output, and mentions the C typedef for a blob handle.
>
> http://www.sqlite.org/c3ref/blob_open.html
> http://www.sqlite.org/c3ref/blob.html
>
> typedef struct sqlite3_blob sqlite3_blob;
>
>
> Can this same interface be accessed in Python?
>
> My application manipulates image and audio content. However, on my
> constrained platform (Python 2.5, sqlite3, Maemo, Nokia Internet Tablet),
> reading and writing the media data via the conventional blob interface (as
> illustrated in the following www.initd.org SnippetsBlobs.py example) could
> consume excessive memory. Specifically, all of the binary data are read into
> the blobdata object at once, via the read() call. Since the media files can be
> arbitrarily large, this call could easily exceed available memory.
>
> blobdata = open('c:\\sqlite3\\img.jpg','rb').read()
>
> con = sqlite.connect(':memory:')
> cur = con.cursor()
>
> cur.execute("Create table picture_table(images)")
> cur.execute("Insert into picture_table(images) values (?)",
>             (sqlite.Binary(blobdata),))
> con.commit()
>
>
> Instead, I need to incrementally read and write media data to a blob, to
> avoid consuming arbitrary memory.
>
> Can this be done from Python via the sqlite3 module? Can you point me to
> examples?
>
> Thanks.
>
> Norm.
>
> www.nbyoung.com
> www.personalsyndication.com


Re: [sqlite] Blob incremental i/o via Python

2008-02-14 Thread Norman Young
On Wed, Feb 13, 2008 at 7:54 PM, Roger Binns <[EMAIL PROTECTED]> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Norman Young wrote:
> > We had to exclude references to sqlite3_enable_load_extension and
> > sqlite3_load_extension from within apsw.c, in order to avoid undefined
> > symbol references during the build (python setup.py install).
>
> You should have modified the setup.py to include a define of
> SQLITE_OMIT_LOAD_EXTENSION.
>
> The general theory is that you should create/compile SQLite with
> whatever flags you want (you can exclude many features) and then provide
> the same flags when compiling apsw.  Unfortunately I don't know of any
> way of automatically finding out what flags/inclusions/exclusions SQLite
> was compiled with.
>
> Roger
>

Hey, hey. Right you are!

In the apsw setup.py:
define_macros.append( ('SQLITE_OMIT_LOAD_EXTENSION', '1') )

We had discovered this line of reasoning earlier, but we must have made a
mistake in making the adjustment. Thanks for the prompting. We now have a
properly-compiled apsw module.
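Now that it builds, here is roughly what the incremental I/O looks like
through APSW, for the archive. This is only a sketch: it assumes an APSW new
enough to expose blobopen(), a SQLite with the zeroblob() function (3.4.0 or
later), and the table and file names are invented.

import os
import apsw

con = apsw.Connection("media.db")
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS picture_table (images BLOB)")

# An incremental blob cannot grow after the row is written, so reserve the
# full size up front with zeroblob() instead of reading the file into memory.
size = os.path.getsize("img.jpg")
cur.execute("INSERT INTO picture_table (images) VALUES (zeroblob(?))", (size,))
rowid = con.last_insert_rowid()

# Stream the file into the blob a chunk at a time.
blob = con.blobopen("main", "picture_table", "images", rowid, True)
src = open("img.jpg", "rb")
while True:
    chunk = src.read(65536)
    if not chunk:
        break
    blob.write(chunk)
src.close()
blob.close()

Reading back is the mirror image: blobopen() with writeable set to False, then
read() in chunks.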

Norm.


Re: [sqlite] SQL error: unrecognized token: "#"

2008-02-14 Thread Ken
Dennis, 

Thanks for the info. 

Regards,
Ken


Dennis Cote <[EMAIL PROTECTED]> wrote: Ken wrote:
> The following errors out:
> 
> CREATE TABLE users (
>  user# INTEGER   NOT NULL,
>  name VARCHAR(30)   NOT NULL,
>  PRIMARY KEY (user#)
> )  ;
> 
> SQL error: unrecognized token: "#"
> 
> It appears that $ is allowed, is there a technical reason why # is ommitted 
> from valid tokens? 
> 

Ken,

Standard SQL only allows letters, digits, and underscores in 
identifiers. If you need to use other characters, like a space or a '#', 
you need to quote the identifier using double quotes.

Try "user#" instead of user# and you will be set.

HTH
Dennis Cote



Re: [sqlite] fts2 data in a different table

2008-02-14 Thread Scott Hess
On Tue, Feb 12, 2008 at 10:06 AM, Bram - Smartelectronix
<[EMAIL PROTECTED]> wrote:
>  I have various tables which all relate to the same central object, all
>  of them contain various pieces of information about this object. And I
>  want to full-text-search for these objects.
>
>  I.e. in my case sounds (which have tags, comments, metadata, categories,
>  ratings, etc etc). I wanted to use FTS2 to search through sounds, in the
>  end I settled for a trigger approach: adding a new tag for example will
>  trigger an update of the search-table.
>
>  create virtual table search using fts2(sound_id, content);
>
>  create trigger if not exists soundtag_insert after insert on tag for
>  each row
>  begin
>  update search set content=() where search.sound_id=new.sound_id;
>  end;
>
>  etcetera for all my tables related to the sound.

Don't use fts2.  There's a known design flaw which can lead to index
corruption.  fts3 fixes the flaw, and otherwise works pretty much the
same.  If you do use fts2, never ever run VACUUM!

Suggest that rather than have a sound_id column, you instead use the
implicit docid/rowid column.  docid is an fts3 thing.  If new.sound_id
is unique (I assume it is), you can just set search.docid =
new.sound_id.  That will make joins faster because the docid acts as
an implicit UNIQUE INTEGER index.
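Roughly like this, as an untested sketch. I'm inventing a minimal tag table
and just concatenating tag names as the searchable text (substitute however
you really build the content string), and it assumes a build with fts3 and
group_concat() available:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE IF NOT EXISTS tag (sound_id INTEGER, name TEXT);  -- stand-in for the real table
CREATE VIRTUAL TABLE search USING fts3(content);

-- keep the full-text row keyed by docid = sound_id (assumes sound_id is unique)
CREATE TRIGGER IF NOT EXISTS soundtag_insert AFTER INSERT ON tag
BEGIN
  DELETE FROM search WHERE docid = new.sound_id;
  INSERT INTO search(docid, content)
    VALUES (new.sound_id,
            (SELECT group_concat(name, ' ') FROM tag WHERE sound_id = new.sound_id));
END;
""")
con.execute("INSERT INTO tag VALUES (1, 'bass')")
con.execute("INSERT INTO tag VALUES (1, 'drum')")
con.commit()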

>  All well, but then I tried:
>
>  select sound.* from sound left join search on sound.id=search.sound_id
>  where search.content match "bass drum" and sound.samplerate=44100;
>
>  and got: SQL error: unable to use function MATCH in the requested context

Hmm.  Doesn't make sense to me - it's possible that the left join is
causing the system to not be able to use the fts index.  I think the
subselect should work just fine, though, and shouldn't be hideously
inefficient.

Still, I'll put it on my list of things to eventually look at.  I'd
have expected it to work as-is.

-scott


Re: [sqlite] sqlite3_column_type

2008-02-14 Thread Nicolas Williams
On Thu, Feb 14, 2008 at 02:42:16PM -0500, Igor Tandetnik wrote:
> Michael Pitchford
> <[EMAIL PROTECTED]> wrote:
> >   I'm having an issue with sqlite3_column_type in the current version
> > of sqlite (3.5.6 at the time of this email). Unless I explicitly call
> > sqlite3_step() on the statement, it returns SQLITE_NULL for every
> > column. Is this a bug or is it just how the sqlite3_column_type works
> > now?
> 
> That's how it always worked, I believe. You have to be positioned on 
> some row to obtain column type, since the type of the data in the same 
> column can vary from row to row (what with SQLite using manifest 
> typing).

http://sqlite.org/c3ref/column_decltype.html


Re: [sqlite] Optimizing Query

2008-02-14 Thread Dennis Cote
Nathan Biggs wrote:
> I was hoping that someone could help me with optimizing this query.  
> Basically I need to return totals from an large database (> 1million 
> records in the table).  There needs to be two different totals one by 
> "a" and one by "b" where "a" and "b" could have up to 100 rows each.  So 
> totals for up to 200 distinct total values.
> 
> The problem gets into the multiple where fields in the where clause.  
> There could be up to 9 different where parameters. 
> 
> Instead of creating multiple queries with indexes, and creating a ton of 
> indexes, I tried this approach. 
> 
> Basically I create a temp table to hold my totals.  And using a trigger, 
> cause the totals to be updated. 
> 
> This work fantastic by the way!  I was just seeing if there is a better 
> way to speed up the trigger. 
> With one statement in the trigger the query runs in about 30 seconds 
> with 1.3million records. 
> With two statements, the time increases to 50 seconds for 1.3 million 
> records. 
> 
> I really need to have 4 updates in the trigger and was hoping to keep 
> the time < 1 minute.
> 
> Thanks for your help.
> 
> Query
> --
> replace into totals
> select 0, 0, a, b, c
> from table1
> 
> Trigger
> -
> CREATE TRIGGER sum_totals after insert on totals
> BEGIN
> update totals set cnt = cnt + 1, a = a + NEW.a where id = NEW.b;
> update totals set cnt = cnt + 1, a = a + NEW.a where id = NEW.c;
> END
> 
> Totals table
> -
> create temp table totals(id integer primary key, cnt integer, a float, b 
> integer, c integer);
> Begin Transaction;
> insert into totals values (0, 0, 0.00, 0, 0,);
> insert into totals values (1, 0, 0.00, 0, 0,);
> insert into totals values (2, 0, 0.00, 0, 0);
> etc... total of 500 rows
> Commit;
> 

Nathan,

Perhaps it's just me, but I didn't follow your description very well at 
all. :-)

In the first paragraph you say you need two different totals, but at the 
end you create a table of totals and insert 500 rows.

You have defined an insert trigger on this totals table which may 
update two rows of the table (depending upon the values of the b and c 
fields inserted) after each insert into the table.

It looks to me like you are trying to initialize the totals table with 
500 rows of zero data, each with their own id number.

Since each of these initial inserts has b and c set to zero, each one 
causes the first row (the one with id = 0) to be updated twice. Each 
update increments the cnt field of this row, so that after 500 rows are 
inserted, the cnt field in the first row will be 1000. Each update adds 
0.0 to the initial value of 0.0 in field a of the first row, so it ends 
up with a final value of 0.0 after the 500 inserts.

It looks to me like you are then trying to execute the query to replace rows 
in the totals table. You select all 1M rows from table1. For each of 
those rows you replace the first row of the totals table (the one with 
id = 0). For each of these 1M replacements your trigger fires and 
updates 0, 1, or 2 rows in the totals table (depending upon the values 
of b and c in each row of table1). If every row of table1 has values of 
b and c between 0 and 500, then each replacement of the first row of the 
totals table will update two rows in the totals table. Each of these 
updates will increment the cnt field in one of the rows, and add the 
value of a from the row in table1 to the running total value of a in the 
updated row of the totals table.

When you are done you will have 500 rows in the totals table, each with 
a cnt of the number of times the row id matched field b or field c in 
table1, and the sum of all the field a values from table1 where the 
row's id matched field b or field c.

Note the values in the first row are not valid because that row gets 
overwritten on each replacement.

If this is correct, it seems to me there should be a much easier way to 
do this.
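For instance, if the end result really is "count and sum of a grouped by b,
plus the same grouped by c", two plain aggregate queries would produce it with
no trigger at all. A sketch based on my reading above (Python only for
illustration; the two SELECTs are the point):

import sqlite3

con = sqlite3.connect("data.db")
totals_by_b = con.execute(
    "SELECT b AS id, COUNT(*) AS cnt, SUM(a) AS total_a FROM table1 GROUP BY b").fetchall()
totals_by_c = con.execute(
    "SELECT c AS id, COUNT(*) AS cnt, SUM(a) AS total_a FROM table1 GROUP BY c").fetchall()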

Can you clarify?

Dennis Cote







Re: [sqlite] View update performance (was: Updatable views)

2008-02-14 Thread Steven Fisher
On 14-Feb-2008, at 12:27 PM, Stephen Oberholtzer wrote:

> I'd love to know
> what frame of mind I was in when I wrote it, because I'm pretty sure I
> wouldn't have come up with the name 'MaterializeView' if I had tried
> to write the patch today.

Altered frames of mind are responsible for both the best and worst  
code in the world. :)


Re: [sqlite] Improved sorting algorithm (was: indexing)

2008-02-14 Thread Stephen Oberholtzer
On Wed, Feb 13, 2008 at 2:12 PM,  <[EMAIL PROTECTED]> wrote:
>
>  A project on our to-do list is to implement a new sorter
>  that uses O(1) seeks.  We know how to do this.  It is just
>  finding time to do the implementation.

Do you have a link to a page or pdf describing the algorithm involved?

-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE


Re: [sqlite] View update performance (was: Updatable views)

2008-02-14 Thread Stephen Oberholtzer
On Thu, Feb 14, 2008 at 10:17 AM, Dennis Cote <[EMAIL PROTECTED]> wrote:
> Stephen Oberholtzer wrote:
>
>  Stephen,
>
>  FYI, your ticket was fixed on Tuesday by checkin 4782 in case you hadn't
>  noticed.
>
>  Dennis Cote
>

Wow, awesome! My claim to fame -- I submitted a patch to an
open-source project and it got accepted!  (Actually, I've done it once
before, in Subversion, but this is for a feature that actually has a
chance at being used!)

It even looks like my patch was still mostly valid.  I'd love to know
what frame of mind I was in when I wrote it, because I'm pretty sure I
wouldn't have come up with the name 'MaterializeView' if I had tried
to write the patch today.

-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE


Re: [sqlite] Optimizing Query

2008-02-14 Thread Stephen Oberholtzer
On Thu, Feb 14, 2008 at 2:26 PM, Nathan Biggs <[EMAIL PROTECTED]> wrote:
> I was hoping that someone could help me with optimizing this query.
>  Basically I need to return totals from an large database (> 1million
>  records in the table).  There needs to be two different totals one by
>  "a" and one by "b" where "a" and "b" could have up to 100 rows each.  So
>  totals for up to 200 distinct total values.



I'm trying to understand your problem, but unfortunately you're being
extremely abstract and vague. What are "a" and "b"? Separate columns
in your main table?


>  Query
>  --
>  replace into totals
>  select 0, 0, a, b, c
>  from table1

So, 'totals' gets one row for each row in 'table1'?  That doesn't make
any sense to me.  A total is an aggregate sum, so there should only be
one row.

>  -
>  create temp table totals(id integer primary key, cnt integer, a float, b
>  integer, c integer);
>  Begin Transaction;
>  insert into totals values (0, 0, 0.00, 0, 0,);
>  insert into totals values (1, 0, 0.00, 0, 0,);
>  insert into totals values (2, 0, 0.00, 0, 0);
>  etc... total of 500 rows
>  Commit;

Okay, so now I've got:
-> 9 different where clauses
-> 4 different inserts in your trigger
-> 1.3 million rows in the table
-> 500 rows in the table

You're really going to have to explain your problem in greater detail.

-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE


Re: [sqlite] sqlite3_column_type

2008-02-14 Thread Igor Tandetnik
Michael Pitchford
<[EMAIL PROTECTED]> wrote:
>   I'm having an issue with sqlite3_column_type in the current version
> of sqlite (3.5.6 at the time of this email). Unless I explicitly call
> sqlite3_step() on the statement, it returns SQLITE_NULL for every
> column. Is this a bug or is it just how the sqlite3_column_type works
> now?

That's how it always worked, I believe. You have to be positioned on 
some row to obtain column type, since the type of the data in the same 
column can vary from row to row (what with SQLite using manifest 
typing).

Igor Tandetnik 





[sqlite] sqlite3_column_type

2008-02-14 Thread Michael Pitchford
Hello All,
   I'm having an issue with sqlite3_column_type in the current version of 
sqlite (3.5.6 at the time of this email). Unless I explicitly call 
sqlite3_step() on the statement, it returns SQLITE_NULL for every column. Is 
this a bug or is it just how the sqlite3_column_type works now?

Thanks,
Mike Pitchford





  



[sqlite] Optimizing Query

2008-02-14 Thread Nathan Biggs
I was hoping that someone could help me with optimizing this query.  
Basically I need to return totals from a large database (> 1 million 
records in the table).  There need to be two different totals, one by 
"a" and one by "b", where "a" and "b" could have up to 100 rows each.  So 
that's totals for up to 200 distinct total values.

The problem comes from having multiple fields in the where clause.  
There could be up to 9 different where parameters. 

Instead of creating multiple queries with indexes, and creating a ton of 
indexes, I tried this approach. 

Basically I create a temp table to hold my totals and, using a trigger, 
cause the totals to be updated. 

This works fantastically, by the way!  I was just seeing if there is a better 
way to speed up the trigger. 
With one statement in the trigger the query runs in about 30 seconds 
with 1.3 million records. 
With two statements, the time increases to 50 seconds for 1.3 million 
records. 

I really need to have 4 updates in the trigger and was hoping to keep 
the time < 1 minute.

Thanks for your help.

Query
--
replace into totals
select 0, 0, a, b, c
from table1

Trigger
-
CREATE TRIGGER sum_totals after insert on totals
BEGIN
update totals set cnt = cnt + 1, a = a + NEW.a where id = NEW.b;
update totals set cnt = cnt + 1, a = a + NEW.a where id = NEW.c;
END

Totals table
-
create temp table totals(id integer primary key, cnt integer, a float, b 
integer, c integer);
Begin Transaction;
insert into totals values (0, 0, 0.00, 0, 0);
insert into totals values (1, 0, 0.00, 0, 0);
insert into totals values (2, 0, 0.00, 0, 0);
etc... total of 500 rows
Commit;





Re: [sqlite] SQL error: unrecognized token: "#"

2008-02-14 Thread Dennis Cote
Ken wrote:
> The following errors out:
> 
> CREATE TABLE users (
>  user# INTEGER   NOT NULL,
>  name VARCHAR(30)   NOT NULL,
>  PRIMARY KEY (user#)
> )  ;
> 
> SQL error: unrecognized token: "#"
> 
> It appears that $ is allowed, is there a technical reason why # is ommitted 
> from valid tokens? 
> 

Ken,

Standard SQL only allows letters, digits, and underscores in 
identifiers. If you need to use other characters, like a space or a '#', 
you need to quote the identifier using double quotes.

Try "user#" instead of user# and you will be set.

HTH
Dennis Cote


[sqlite] SQL error: unrecognized token: "#"

2008-02-14 Thread Ken
The following errors out:

CREATE TABLE users (
 user# INTEGER   NOT NULL,
 name VARCHAR(30)   NOT NULL,
 PRIMARY KEY (user#)
)  ;

SQL error: unrecognized token: "#"

It appears that $ is allowed; is there a technical reason why # is omitted 
from valid tokens? 

Thanks,
Ken






Re: [sqlite] Function hex

2008-02-14 Thread Ken
The output of the native sqlite function "hex" is not really a hex value
of a number.  It's a hex representation of a string.

select 5629701397680549, hex(5629701397680549) ;

5629701397680549|35363239373031333937363830353439

Whereas to_hex outputs:
select 5629701397680549, to_hex(5629701397680549) ;

5629701397680549|0x14002f11a5


[EMAIL PROTECTED] wrote: Mau Liste wrote:
> 
> results in: SQL error: no such function: hex
> 

Added by version 3.3.13, one year ago yesterday.

   http://www.sqlite.org/releaselog/3_3_13.html

What version are you running?

--
D. Richard Hipp 



Re: [sqlite] Function hex

2008-02-14 Thread Mau Liste

Ken wrote:
> Try using the attached hex.c function. You'll need to compile this as a .so 
> and call sqlite3_create_function to register it.
> 
> Hth

Thanks Ken.
This is exactly what I need.

Regards to all.
Mau.




Re: [sqlite] Function hex

2008-02-14 Thread Mau Liste
...ooops!

I am "using" 3.5.5, but I was experimenting with an older SqliteSpy that 
was statically linked with 3.3.(something less than 13)

Additionally I found that the function hex is not exactly what I want ...

I need
hex(128) -> 0x80
instead I have
313238
that is the conversion of the byte string '128' into the equivalent hex 
bytes.

Thanks anyway ...
Mau.


[EMAIL PROTECTED] wrote:
> Mau Liste <[EMAIL PROTECTED]> wrote:
>> results in: SQL error: no such function: hex
>>
> 
> Added by version 3.3.13, one year ago yesterday.
> 
>http://www.sqlite.org/releaselog/3_3_13.html
> 
> What version are you running?
> 
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
> 


Re: [sqlite] View update performance (was: Updatable views)

2008-02-14 Thread Dennis Cote
Stephen Oberholtzer wrote:
> 
> If only you'd been around when I'd posted my message! Nobody said
> *anything*, so I figured nobody else cared about it.
> 
> I have reposted my mailing list message, with attachments, here:
> http://www.sqlite.org/cvstrac/tktview?tn=2938
> 
> I have not updated the patch, however.  It *should* be pretty
> straightforward -- looking at it again, it doesn't actually seem to do
> any VDBE code itself, so who knows?
> 

Stephen,

FYI, your ticket was fixed on Tuesday by checkin 4782 in case you hadn't 
noticed.

Dennis Cote



Re: [sqlite] Function hex

2008-02-14 Thread Ken
Try using the attached hex.c function. You'll need to compile this as a .so and 
call sqlite3_create_function to register it.

Hth

Mau Liste <[EMAIL PROTECTED]> wrote: Hello all,
I am trying to print some table values in hexadecimal.
I've seen in the docs that there is a function called 'hex'
but the following:

create table aa (a integer);
insert into aa values(10);
insert into aa values(11);
insert into aa values(12);
insert into aa values(13);
insert into aa values(14);
insert into aa values(15);
select hex(a) from aa;

results in: SQL error: no such function: hex

while:

select min(a) from aa;

results in the correct answer: 10

I've seen in the source code that the function hex does indeed exist and 
it is inserted into the built in function table together with the min 
and a load of other functions.

Before digging more into the sources, can you tell me if I am doing 
something stupid and how the function hex may be used?
Or at least if it is possible to print hexadecimal values at all.

Thanks all.
Mau.

/*
** The tohexFunc returns a string of hexadecimal values using
** a leading prefix of 0x.
** To install use
**  sqlite3_create_function(db,"to_hex",1, SQLITE_ANY,0,tohexFunc,0,0);
**  Library must be compiled without SQLITE_OMIT_LOAD_EXTENSION defined
*/
static void tohexFunc(
  sqlite3_context *context,
  int argc,
  sqlite3_value **argv
){
int i, n;
   static const char hexdigits[] = {
 '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
 'A', 'B', 'C', 'D', 'E', 'F' };

  const unsigned char *pBlob;
  sqlite_int64  d64;
  char  *zHex;
  char  *z = 0;

 /* Kll, extend to include optional width */

  switch( sqlite3_value_type(argv[0]) ) {
 case SQLITE_NULL:
 case SQLITE_FLOAT:return; break;
 case SQLITE_INTEGER: { 
  d64 = sqlite3_value_int64(argv[0]);
  zHex =sqlite3_mprintf("0x%lx",d64);
  sqlite3_result_text(context, zHex, -1, sqlite3_free);
  return;
 } 
 break;
 case SQLITE_TEXT:
  pBlob = sqlite3_value_text(argv[0]);
  n = sqlite3_value_bytes(argv[0]);
 break;
 case SQLITE_BLOB:
  pBlob = sqlite3_value_blob(argv[0]);
  n = sqlite3_value_bytes(argv[0]);
 break;
  default: return ; 
  }


  if( n*2+1>SQLITE_MAX_LENGTH ){
//sqlite3_result_error_toobig(context);
return;
  }
  z = zHex = sqlite3_malloc(n*2 + 3);
  if( zHex==0 ) return;

  for(i=0; i<n; i++){
    unsigned char c = pBlob[i];
    *(z++) = hexdigits[(c>>4)&0xf];
    *(z++) = hexdigits[c&0xf];
  }
  *z = 0;
  sqlite3_result_text(context, zHex, n*2, sqlite3_free);
}
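(For anyone who wants the same thing from Python without building a .so: an
ordinary user-defined function registered through the sqlite3 module gets
close. This is only a sketch and is not part of Ken's attachment above.)

import sqlite3

def to_hex(value):
    # integers come back 0x-prefixed, like the C function's integer case
    if isinstance(value, int):
        return "0x%x" % value
    # text and blobs come back as pairs of hex digits
    if isinstance(value, str):
        value = value.encode("utf-8")
    if isinstance(value, bytes):
        return value.hex().upper()
    return None  # NULL and floats stay NULL, as in the C version

con = sqlite3.connect(":memory:")
con.create_function("to_hex", 1, to_hex)
print(con.execute("SELECT to_hex(128)").fetchone()[0])  # prints 0x80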




Re: [sqlite] Function hex

2008-02-14 Thread drh
Mau Liste <[EMAIL PROTECTED]> wrote:
> 
> results in: SQL error: no such function: hex
> 

Added by version 3.3.13, one year ago yesterday.

   http://www.sqlite.org/releaselog/3_3_13.html

What version are you running?

--
D. Richard Hipp <[EMAIL PROTECTED]>



[sqlite] Function hex

2008-02-14 Thread Mau Liste
Hello all,
I am trying to print some table values in hexadecimal.
I've seen in the docs that there is a function called 'hex'
but the following:

create table aa (a integer);
insert into aa values(10);
insert into aa values(11);
insert into aa values(12);
insert into aa values(13);
insert into aa values(14);
insert into aa values(15);
select hex(a) from aa;

results in: SQL error: no such function: hex

while:

select min(a) from aa;

results in the correct answer: 10

I've seen in the source code that the function hex does indeed exist and 
it is inserted into the built in function table together with the min 
and a load of other functions.

Before digging more into the sources, can you tell me if I am doing 
something stupid and how the function hex may be used?
Or at least if it is possible to print hexadecimal values at all.

Thanks all.
Mau.


Re: [sqlite] creating a table with an index

2008-02-14 Thread BareFeet
Hi Sam,

> I am trying to create a table with two indexes:
>
> CREATE TABLE favorites (
>   cust_id CHAR(32) NOT NULL,
>   fldoid CHAR(38) NOT NULL,
>   imgoid CHAR(64) NOT NULL,
>   PRIMARY KEY (cust_id),
>   INDEX (fldoid, imgoid));
>
> SQLite keeps complaining saying there is an error around INDEX.   
> What might I be doing wrong?

Create the index separately, e.g.:

create index "favorites fldoid imgoid" on favorites (fldoid, imgoid);

Note that this won't create "two indexes". It creates an index sorted  
by fldoid and subsorted by imgoid, which is what you want if you'll be  
searching by fldoid and imgoid combinations.

If you instead want two individual indexes, for searching on just one  
column at a time, then you need to create two, as:

create index "favorites fldoid" (fldoid);
create index "favorites imgoid" (imgoid);

Tom
BareFeet

