[sqlite] sqlite-tools-win64-x64-3290000.zip missing

2019-09-14 Thread patrick . dreier

Dear Madam and Sir!

sqlite-tools-win64-x64-3290000.zip is missing.
How can this problem be solved?

With kind Greetings!


Re: [sqlite] CSV import does not handle fields with a comma surrounded by double quotes

2019-05-21 Thread Patrick Sherrill
I don't know about 'valid' CSV; it has been a moving target for decades. As far 
as I can recall, white space should not be significant when parsing CSV.

My 2 cents. 

Pat...

Sent from my iPhone

> On May 21, 2019, at 9:28 AM, Richard Hipp  wrote:
> 
>> On 5/21/19, Faheem Mitha  wrote:
>> The ".import" command does not parse CSV correctly.
>> somestuff, "some,stuff"
> 
> This is not valid CSV.  There is an extra space character after the
> comma and before the double-quote.
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org
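
For reference, a minimal sketch of strictly formatted CSV that the shell's 
.import handles as two fields (the file and table names here are only 
illustrative):

$ cat data.csv
somestuff,"some,stuff"
$ sqlite3 test.db
sqlite> CREATE TABLE t(a TEXT, b TEXT);
sqlite> .mode csv
sqlite> .import data.csv t
sqlite> .mode list
sqlite> SELECT * FROM t;
somestuff|some,stuff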



[sqlite] Strange Corruption Issue

2018-06-18 Thread Patrick Herbst
I'm using sqlite in an embedded application, running on SSD.

I use journal_mode=persist so that it is more resilient to loss of power.

I'm seeing corruption.  I'm using SQLite to log events on the system, and the
corruption is well in the middle of a power session, not at the tail end of the
log where a power loss might occur.

What I'm seeing is just a few pages corrupted, with random bits flipped.  Looking
in a hex editor I can see the corrupted data, and where I can tell what the
values SHOULD be, I see that they're wrong, but only by a single bit flip in
random bytes here and there.  For example an "A" is an "a", or an "E" is an "A".
These are all changes of a single bit.  There are far more examples, but in
pretty much every case (even when RowIDs are wrong) the value is just off by one
bit.
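
As a quick sanity check on the single-bit observation, the ASCII codes do line
up (plain arithmetic, not data taken from the corrupted file):

  'A' = 0x41 and 'a' = 0x61 differ only in bit 5 (0x20)
  'E' = 0x45 and 'A' = 0x41 differ only in bit 2 (0x04)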

I'm using SQLite 3.7 (I know, old, but this system is old).  Has anyone else
seen random bit flips?  Any idea what could be causing them?


[sqlite] How do I insert a record in an SQLite table only if the row does not already exist?

2017-12-21 Thread Patrick Skelton
Hi,

I want to create an 'atomic' SQL script that will insert a record into a table
only if the supplied record's primary key does not already exist, thus avoiding
the constraint exception that occurs if the insert goes ahead anyway.

I have the following script, which is wrong: I get an error saying the syntax
is wrong near the 'WHERE'.


BEGIN EXCLUSIVE TRANSACTION;

INSERT INTO NetworkLocks
(
  PK_id,
  owner_username,
  unique_identifier,
  creation_time
)
VALUES
(
  @ID,
  @owner_username,
  @unique_identifier,
  @creation_time
)
WHERE NOT EXISTS
  ( SELECT * FROM NetworkLocks nl WHERE nl.PK_id = @ID );

END TRANSACTION;
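
For context, a hedged sketch of two common ways this is written in SQLite
(reusing the table and parameter names above; the VALUES form of INSERT does not
accept a WHERE clause, but the SELECT form does):

-- Variant 1: row-source INSERT guarded by NOT EXISTS
INSERT INTO NetworkLocks (PK_id, owner_username, unique_identifier, creation_time)
SELECT @ID, @owner_username, @unique_identifier, @creation_time
WHERE NOT EXISTS (SELECT 1 FROM NetworkLocks WHERE PK_id = @ID);

-- Variant 2: let the PRIMARY KEY constraint suppress the duplicate without an error
INSERT OR IGNORE INTO NetworkLocks (PK_id, owner_username, unique_identifier, creation_time)
VALUES (@ID, @owner_username, @unique_identifier, @creation_time);

-- Either way, changes() (sqlite3_changes() in the C API) reports 0 when no row was inserted.
SELECT changes();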


Currently, my code works by relying on the uniqueness of the primary key
(PK_id) and catching any exception. This is, however, a foreseeable error that
happens in normal operation, and I'm not fond of using exceptions for it. I was
hoping to change the SQL script so that it always runs to completion but somehow
returns a result; perhaps the number of rows affected could be zero instead of
one?

Any help or advice would be very much appreciated. Also, as a newbie to SQLite,
I am finding it difficult to get a handle on how to put together more complex
queries. There are plenty of examples on the internet of simple scripts, but it
is difficult to know how to move beyond the basics. Any pointers to good
learning resources would be great.


Kind wishes ~ Patrick


Re: [sqlite] Commit ad601c7962 degrades performance in a client program

2017-01-05 Thread Patrick Ohly
On Tue, 2017-01-03 at 17:49 -0800, Jianxun Zhang wrote:
> I am working in Yocto project. We have a fake-root program “pseudo”
> that uses sqlite3 at a step when building Yocto images. We found a 2%
> increase of the whole build time, and my bisecting shows the
> ad601c7962 in sqlite3 is the root cause.

That was 2% on average. In certain long-running parts of a build that
made heavy use of pseudo, that particular commit caused a slowdown of
326% - see
https://www.mail-archive.com/openembedded-core@lists.openembedded.org/msg88109.html

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.





Re: [sqlite] sqlite3_column_origin_name for INSERT and UPDATE

2016-06-17 Thread Rapin Patrick
2016-06-17 18:26 GMT+02:00 Igor Tandetnik :

>
> select * from t1 where col1 = ?1+?2 and col2=?1-?2;
>
> What should be the expected output of your hypothetical generic way for a
> statement like this?
>
> You seem to assume a one-to-one correspondence between columns and
> placeholders. This assumption doesn't generally hold.
>

I know that there are cases where placeholders do not match a column value.
In those cases, I expect NULL for the column and table names, just like
sqlite3_column_origin_name returns for a SELECT when the produced data is not a
table column.
These situations do not occur often in our code base, at least, and I don't mind
just ignoring unit dimensions in those cases.


Re: [sqlite] sqlite3_column_origin_name for INSERT and UPDATE

2016-06-17 Thread Rapin Patrick
2016-06-17 18:24 GMT+02:00 James K. Lowden :

>
> You are encoding type information in the name.  If you move the type
> information into the data, SQLite can manage the unit dimension.  You
> could use a CHECK constraint to require that speed was in m/s, or a
> trigger to make it so.
>

That would indeed be an option. I suppose you mean using a string like
"2.34 m/s" or a BLOB of typically 9 bytes (a 'double' plus a unit enum).
But this would slightly increase the database size and slow down access due to
formatting / parsing.
I was aiming for zero-overhead support where possible.
In addition, my approach has the big advantage of being backward compatible:
older versions of our application simply ignore the column unit type and
read/write a regular 'double'.


Re: [sqlite] sqlite3_column_origin_name for INSERT and UPDATE

2016-06-17 Thread Rapin Patrick

> On 17 Jun 2016, at 13:35, Hick Gunter  wrote:
> 
> Does
> 
> .pragma table_info();
> 
> not solve your problem?
> 

I do indeed need PRAGMA table_info() to get type names from column names.
Let me rephrase my question:

Find a generic way to retrieve the column names and table name (and then the
column types with the above pragma) for all placeholder values in an arbitrary
single SQL statement.

Examples:
"INSERT INTO t1(col1, col2) VALUES(2, ?); » => (t1, col2)
"UPDATE t2 SET col1=? WHERE col2=? " => (t2, col1) and (t2, col2)
"SELECT * from t1 WHERE col2=?" => (t1, col2)

With regular expressions, I was in fact able to implement my feature for INSERT
and UPDATE in an acceptable way (without supporting the WHERE or WITH clauses).
But I was asking whether there is a better solution, one not requiring slow
regexes, that uses information SQLite3 already has internally about placeholder
values.



[sqlite] sqlite3_column_origin_name for INSERT and UPDATE

2016-06-17 Thread Rapin Patrick
Hello,

I am using the function sqlite3_column_origin_name and friends in my SQLite3 C++
wrapper class to get table and column names in SELECT statements.
I would like to have the same feature for INSERT and UPDATE statements: that is,
to find the table name and column names when writing to the database, not only
when reading.
I think this is impossible with the current SQLite3 library, but I may be
missing something.
The workaround I have implemented is to use C++11 regular expressions to parse
the SQL expression outside of SQLite3.
How difficult would it be to support origin column names for writing as well as
for reading?

___

As background, let me explain why I would like such a feature.
I know that there are already plenty of C++ wrappers over SQLite3, but I wrote
another one for our company's base framework.
I think it has plenty of qualities:
- simple: essentially a single function, executeQuery, with a variadic signature
  like printf (but type safe, thanks to C++11 variadic templates!)
- automatic caching of prepared statements (the same SQL expression is not
  recompiled if it is already in the cache)
- automatic binding of both input and output arguments
- fast: the overhead over calling the SQLite3 functions manually is minimal
- native support for the C++ base classes of our framework (also a drawback: I
  cannot distribute the wrapper by itself)
- limited support for units in numeric values

The last point is the reason for my request. For example, if a table was 
created with that command:

CREATE TABLE t1 (speed REAL_m_per_s);

SQLite3 will give the speed column a floating-point affinity, since the declared
type contains REAL.
And my C++ wrapper then knows that this column is a speed expressed in meters
per second.
So when making a SELECT on the t1 table, the wrapper will output number objects
with a unit dimension of Speed expressed in m/s!
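
A minimal shell sketch of that affinity behaviour (the inserted value is only
illustrative):

sqlite> CREATE TABLE t1 (speed REAL_m_per_s);
sqlite> INSERT INTO t1 VALUES ('2.34');
sqlite> SELECT speed, typeof(speed) FROM t1;
2.34|real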

To be complete, I want to check, when writing to the database (UPDATE or
INSERT), that the unit contained in the input values matches the expected type
of the column, possibly making unit conversions on the fly (m -> ft).
But for that, I need to find out the column names and table name involved in an
UPDATE or INSERT statement.



[sqlite] NOP INSERT still writes to the DB/journal

2016-02-09 Thread Patrick Donnelly
Hello,

On Mon, Dec 7, 2015 at 5:05 PM, Patrick Donnelly  
wrote:
> Update on this:
>
> On Mon, May 5, 2014 at 4:53 PM, Patrick Donnelly  
> wrote:
>> Hi,
>>
>> I have an INSERT that looks like
>>
>> INSERT INTO T
>> SELECT ...
>>
>> which I'm running numerous times a second that generally does nothing
>> because the SELECT returns no rows. Unfortunately, I've found that
>> SQLite still does numerous disk writes anyway in this situation.
>>
>> Is my only option to eliminate the INSERT by using a SELECT first to
>> check if there are no rows? Something like:
>>
>> CREATE TEMPORARY VIEW V AS
>> SELECT ...
>> SELECT COUNT(*) FROM V;
>> /* If > 0 */
>> INSERT INTO T SELECT * FROM V;
>>
>> ?
>
> I've been able to reproduce it with this minimal example:
>
> CREATE TABLE t1(a INTEGER PRIMARY KEY AUTOINCREMENT);
> .print -
> INSERT INTO t1
> SELECT 0
> WHERE 1 = 2;
> select changes();
>
> $ ./sqlite3 -vfstrace foo.db < test.sql
> ...
> -
> trace.xLock(foo.db,SHARED) -> SQLITE_OK
> trace.xAccess("/home/batrick/scm/cctools/chirp/src/sqlite-amalgamation-3090200/foo.db-journal",0)
> -> SQLITE_OK, out=0
> trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
> trace.xRead(foo.db,n=16,ofst=24) -> SQLITE_OK
> trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
> trace.xAccess("/home/batrick/scm/cctools/chirp/src/sqlite-amalgamation-3090200/foo.db-wal",0)
> -> SQLITE_OK, out=0
> trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
> trace.xLock(foo.db,RESERVED) -> SQLITE_OK
> trace.xFileControl(foo.db,20) -> SQLITE_OK
> trace.xOpen(foo.db-journal,flags=0x806) -> SQLITE_OK
> trace.xDeviceCharacteristics(foo.db) -> 0x1000
> trace.xWrite(foo.db-journal,n=512,ofst=0) -> SQLITE_OK
> trace.xWrite(foo.db-journal,n=4,ofst=512) -> SQLITE_OK
> trace.xWrite(foo.db-journal,n=1024,ofst=516) -> SQLITE_OK
> trace.xWrite(foo.db-journal,n=4,ofst=1540) -> SQLITE_OK
> trace.xLock(foo.db,EXCLUSIVE) -> SQLITE_OK
> trace.xWrite(foo.db-journal,n=4,ofst=1544) -> SQLITE_OK
> trace.xWrite(foo.db-journal,n=1024,ofst=1548) -> SQLITE_OK
> trace.xWrite(foo.db-journal,n=4,ofst=2572) -> SQLITE_OK
> trace.xDeviceCharacteristics(foo.db) -> 0x1000
> trace.xRead(foo.db-journal,n=8,ofst=3072) -> SQLITE_IOERR_SHORT_READ
> trace.xSync(foo.db-journal,FULL) -> 0
> trace.xWrite(foo.db-journal,n=12,ofst=0) -> SQLITE_OK
> trace.xSync(foo.db-journal,FULL) -> 0
> trace.xWrite(foo.db,n=1024,ofst=0) -> SQLITE_OK
> trace.xWrite(foo.db,n=1024,ofst=2048) -> SQLITE_OK
> trace.xFileControl(foo.db,21) -> 12
> trace.xSync(foo.db,FULL) -> 0
> trace.xClose(foo.db-journal) -> SQLITE_OK
> trace.xDelete("/home/batrick/scm/cctools/chirp/src/sqlite-amalgamation-3090200/foo.db-journal",0)
> -> SQLITE_OK
> trace.xFileControl(foo.db,22) -> 12
> trace.xUnlock(foo.db,SHARED) -> SQLITE_OK
> trace.xDeviceCharacteristics(foo.db) -> 0x1000
> trace.xUnlock(foo.db,NONE) -> SQLITE_OK
> 0
> trace.xDeviceCharacteristics(foo.db) -> 0x1000
> trace.xUnlock(foo.db,NONE) -> SQLITE_OK
> trace.xClose(foo.db) -> SQLITE_OK
>
> No rows were inserted but there are several writes. This behavior
> seems to be caused by AUTOINCREMENT?

Now that there is a trivial test-case, I was hoping to find this on
the bugs page [1] but nothing yet. Is this not a bug?

[1] http://www.sqlite.org/src/rptview?rn=1

-- 
Patrick Donnelly


[sqlite] NOP INSERT still writes to the DB/journal

2015-12-08 Thread Patrick Donnelly
On Mon, Dec 7, 2015 at 11:51 PM, Simon Slavin  wrote:
>
> On 8 Dec 2015, at 12:19am, Patrick Donnelly  wrote:
>
>> There are still writes:
>
> Because you have not defined any transactions, each of your INSERT commands 
> is getting wrapped in its own transaction.  A transaction has to involve 
> writes to disk.
>
> Try this ...
>
> BEGIN
> INSERT INTO t1 ...
> END
>
> See whether the INSERT command still involves as much writing.

This problem is not related to transactions. Richard Hipp posted an
example [1] where there are no writes for an "INSERT INTO T SELECT..."
 on a table (without AUTOINCREMENT), without any explicit
transactions.

My own test example without AUTOINCREMENT has no writes:

$ cat test2.sql
CREATE TABLE t1(a INTEGER PRIMARY KEY, b integer);
INSERT INTO t1(b) VALUES (1);
.print -
begin transaction;
INSERT INTO t1 (b)
SELECT 0
WHERE 1 = 0;
end transaction;
select changes();
$ diff test.sql test2.sql
1c1
< CREATE TABLE t1(a INTEGER PRIMARY KEY AUTOINCREMENT, b integer);
---
> CREATE TABLE t1(a INTEGER PRIMARY KEY, b integer);
3d2
< select * from sqlite_sequence;
$ rm foo.db*; ./sqlite3 -vfstrace foo.db < test2.sql
...
-
trace.xLock(foo.db,SHARED) -> SQLITE_OK
trace.xAccess("/home/batrick/sqlite-amalgamation-3090200/foo.db-journal",0)
-> SQLITE_OK, out=0
trace.xFileSize(foo.db) -> SQLITE_OK, size=2048
trace.xRead(foo.db,n=16,ofst=24) -> SQLITE_OK
trace.xFileSize(foo.db) -> SQLITE_OK, size=2048
trace.xAccess("/home/batrick/sqlite-amalgamation-3090200/foo.db-wal",0)
-> SQLITE_OK, out=0
trace.xFileSize(foo.db) -> SQLITE_OK, size=2048
trace.xLock(foo.db,RESERVED) -> SQLITE_OK
trace.xLock(foo.db,EXCLUSIVE) -> SQLITE_OK
trace.xFileControl(foo.db,22) -> 12
trace.xUnlock(foo.db,SHARED) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xUnlock(foo.db,NONE) -> SQLITE_OK
0
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xUnlock(foo.db,NONE) -> SQLITE_OK
trace.xClose(foo.db) -> SQLITE_OK

[1] 
http://mailinglists.sqlite.org/cgi-bin/mailman/private/sqlite-users/2014-May/052855.html

-- 
Patrick Donnelly


[sqlite] NOP INSERT still writes to the DB/journal

2015-12-07 Thread Patrick Donnelly
On Mon, Dec 7, 2015 at 5:31 PM, Igor Tandetnik  wrote:
> On 12/7/2015 5:05 PM, Patrick Donnelly wrote:
>>
>> No rows were inserted but there are several writes. This behavior
>> seems to be caused by AUTOINCREMENT?
>
>
> Could be creating sqlite_sequence table where there wasn't one before. I
> wonder if there are still writes on the second and subsequent no-op inserts.

There are still writes:

CREATE TABLE t1(a INTEGER PRIMARY KEY AUTOINCREMENT, b integer);
INSERT INTO t1(b) VALUES (1);
select * from sqlite_sequence;
.print -
INSERT INTO t1 (b)
SELECT 0
WHERE 1 = 0;
select changes();


$ rm foo.db*; ./sqlite3 -vfstrace foo.db < test.sql
...
t1|1
-
trace.xLock(foo.db,SHARED) -> SQLITE_OK
trace.xAccess("/home/batrick/sqlite-amalgamation-3090200/foo.db-journal",0)
-> SQLITE_OK, out=0
trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
trace.xRead(foo.db,n=16,ofst=24) -> SQLITE_OK
trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
trace.xAccess("/home/batrick/sqlite-amalgamation-3090200/foo.db-wal",0)
-> SQLITE_OK, out=0
trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
trace.xLock(foo.db,RESERVED) -> SQLITE_OK
trace.xFileControl(foo.db,20) -> SQLITE_OK
trace.xOpen(foo.db-journal,flags=0x806) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xWrite(foo.db-journal,n=512,ofst=0) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=512) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=1024,ofst=516) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=1540) -> SQLITE_OK
trace.xLock(foo.db,EXCLUSIVE) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=1544) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=1024,ofst=1548) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=2572) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xRead(foo.db-journal,n=8,ofst=3072) -> SQLITE_IOERR_SHORT_READ
trace.xSync(foo.db-journal,FULL) -> 0
trace.xWrite(foo.db-journal,n=12,ofst=0) -> SQLITE_OK
trace.xSync(foo.db-journal,FULL) -> 0
trace.xWrite(foo.db,n=1024,ofst=0) -> SQLITE_OK
trace.xWrite(foo.db,n=1024,ofst=2048) -> SQLITE_OK
trace.xFileControl(foo.db,21) -> 12
trace.xSync(foo.db,FULL) -> 0
trace.xClose(foo.db-journal) -> SQLITE_OK
trace.xDelete("/home/batrick/sqlite-amalgamation-3090200/foo.db-journal",0)
-> SQLITE_OK
trace.xFileControl(foo.db,22) -> 12
trace.xUnlock(foo.db,SHARED) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xUnlock(foo.db,NONE) -> SQLITE_OK
0
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xUnlock(foo.db,NONE) -> SQLITE_OK
trace.xClose(foo.db) -> SQLITE_OK


-- 
Patrick Donnelly


[sqlite] NOP INSERT still writes to the DB/journal

2015-12-07 Thread Patrick Donnelly
Update on this:

On Mon, May 5, 2014 at 4:53 PM, Patrick Donnelly  
wrote:
> Hi,
>
> I have an INSERT that looks like
>
> INSERT INTO T
> SELECT ...
>
> which I'm running numerous times a second that generally does nothing
> because the SELECT returns no rows. Unfortunately, I've found that
> SQLite still does numerous disk writes anyway in this situation.
>
> Is my only option to eliminate the INSERT by using a SELECT first to
> check if there are no rows? Something like:
>
> CREATE TEMPORARY VIEW V AS
> SELECT ...
> SELECT COUNT(*) FROM V;
> /* If > 0 */
> INSERT INTO T SELECT * FROM V;
>
> ?

I've been able to reproduce it with this minimal example:

CREATE TABLE t1(a INTEGER PRIMARY KEY AUTOINCREMENT);
.print -
INSERT INTO t1
SELECT 0
WHERE 1 = 2;
select changes();

$ ./sqlite3 -vfstrace foo.db < test.sql
...
-
trace.xLock(foo.db,SHARED) -> SQLITE_OK
trace.xAccess("/home/batrick/scm/cctools/chirp/src/sqlite-amalgamation-3090200/foo.db-journal",0)
-> SQLITE_OK, out=0
trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
trace.xRead(foo.db,n=16,ofst=24) -> SQLITE_OK
trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
trace.xAccess("/home/batrick/scm/cctools/chirp/src/sqlite-amalgamation-3090200/foo.db-wal",0)
-> SQLITE_OK, out=0
trace.xFileSize(foo.db) -> SQLITE_OK, size=3072
trace.xLock(foo.db,RESERVED) -> SQLITE_OK
trace.xFileControl(foo.db,20) -> SQLITE_OK
trace.xOpen(foo.db-journal,flags=0x806) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xWrite(foo.db-journal,n=512,ofst=0) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=512) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=1024,ofst=516) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=1540) -> SQLITE_OK
trace.xLock(foo.db,EXCLUSIVE) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=1544) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=1024,ofst=1548) -> SQLITE_OK
trace.xWrite(foo.db-journal,n=4,ofst=2572) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xRead(foo.db-journal,n=8,ofst=3072) -> SQLITE_IOERR_SHORT_READ
trace.xSync(foo.db-journal,FULL) -> 0
trace.xWrite(foo.db-journal,n=12,ofst=0) -> SQLITE_OK
trace.xSync(foo.db-journal,FULL) -> 0
trace.xWrite(foo.db,n=1024,ofst=0) -> SQLITE_OK
trace.xWrite(foo.db,n=1024,ofst=2048) -> SQLITE_OK
trace.xFileControl(foo.db,21) -> 12
trace.xSync(foo.db,FULL) -> 0
trace.xClose(foo.db-journal) -> SQLITE_OK
trace.xDelete("/home/batrick/scm/cctools/chirp/src/sqlite-amalgamation-3090200/foo.db-journal",0)
-> SQLITE_OK
trace.xFileControl(foo.db,22) -> 12
trace.xUnlock(foo.db,SHARED) -> SQLITE_OK
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xUnlock(foo.db,NONE) -> SQLITE_OK
0
trace.xDeviceCharacteristics(foo.db) -> 0x1000
trace.xUnlock(foo.db,NONE) -> SQLITE_OK
trace.xClose(foo.db) -> SQLITE_OK

No rows were inserted but there are several writes. This behavior
seems to be caused by AUTOINCREMENT?

-- 
Patrick Donnelly


[sqlite] misleading note in the documentation for WAL

2015-02-20 Thread Patrick Donnelly
If doing a SQLITE_CHECKPOINT_RESTART, the docs [1] say that the
operation *ensures* the next writer will truncate the log:

"This mode works the same way as SQLITE_CHECKPOINT_FULL with the
addition that after checkpointing the log file it blocks (calls the
busy-handler callback) until all readers are reading from the database
file only. This ensures that the next writer will restart the log file
from the beginning."

This conflicts with [2]:

"Whenever a write operation occurs, the writer checks how much
progress the checkpointer has made, and if the entire WAL has been
transferred into the database and synced and if no readers are making
use of the WAL, then the writer will rewind the WAL back to the
beginning and start putting new transactions at the beginning of the
WAL. This mechanism prevents a WAL file from growing without bound."

So if a reader begins a transaction before the next writer, then the
log file will not be restarted? [I assume this is why
SQLITE_CHECKPOINT_TRUNCATE was added?]

[1] https://www.sqlite.org/c3ref/wal_checkpoint_v2.html
[2] https://www.sqlite.org/wal.html

-- 
Patrick Donnelly


[sqlite] Extremely long running END (EXCLUSIVE) TRANSACTION in WAL mode

2015-02-20 Thread Patrick Donnelly
Hi,

I'm running into a problem where an `END TRANSACTION;` statement takes
several seconds to complete (via sqlite3_profile) on a database in WAL
mode. The transaction was exclusive (i.e. `BEGIN EXCLUSIVE
TRANSACTION;`). The transactions are small with only a ~10 inserts of
small data.

Perhaps relevant: a single concurrent reader is creating/ending
deferred transactions several times a second alongside the writer.

My first thought was that an automatic checkpoint was causing the
transaction to block for so long. However, based on my reading of the
documentation, it appears that all automatic checkpoints are "passive"
and so should not block the checkpointer? I don't see how the writer
is being blocked for so long. Can anyone provide hints on how to
further debug this?
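
For anyone debugging something similar, two pragmas that can help observe
checkpoint behaviour (a sketch; the numbers shown are illustrative):

sqlite> PRAGMA wal_autocheckpoint;       -- checkpoint threshold in pages (default 1000)
1000
sqlite> PRAGMA wal_checkpoint(PASSIVE);  -- returns busy | log frames | frames checkpointed
0|12|12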

-- 
Patrick Donnelly


[sqlite] random row from group

2014-07-08 Thread Patrick Donnelly
Hi,

I'm trying to find a way to select a random row from a group (rather
than "arbitrary"). Something like:

SELECT attr1, attr2
FROM foo
GROUP BY attr1
ORDER BY attr1, random()

but of course ORDER BY is done after GROUP BY has selected an
arbitrary row. Looking online, I've seen non-standard solutions which
aren't supported in sqlite. For example:

select distinct on (id) id, attribute
from like_this
order by id, random()

from 
http://stackoverflow.com/questions/16044828/select-random-row-for-each-group

Any pointers would be appreciated!
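
For what it's worth, one way this is often written in plain SQLite (a sketch
using the column names above; a correlated subquery picks one random row per
group):

SELECT g.attr1,
       (SELECT f.attr2
          FROM foo AS f
         WHERE f.attr1 = g.attr1
         ORDER BY random()
         LIMIT 1) AS attr2
FROM (SELECT DISTINCT attr1 FROM foo) AS g
ORDER BY g.attr1;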

-- 
Patrick Donnelly


[sqlite] NOP INSERT still writes to the DB/journal

2014-05-05 Thread Patrick Donnelly
Hi,

I have an INSERT that looks like

INSERT INTO T
SELECT ...

which I'm running numerous times a second that generally does nothing
because the SELECT returns no rows. Unfortunately, I've found that
SQLite still does numerous disk writes anyway in this situation.

Is my only option to eliminate the INSERT by using a SELECT first to
check if there are no rows? Something like:

CREATE TEMPORARY VIEW V AS
SELECT ...
SELECT COUNT(*) FROM V;
/* If > 0 */
INSERT INTO T SELECT * FROM V;

?

Thanks,

-- 
Patrick Donnelly


Re: [sqlite] Where did the release announcements go?

2014-04-03 Thread Patrick Herbst
>
> This is the right place to go. Here you find announcements of new
> versions, planned updates and bug-fixes even before they arrive at
>
> Recent SQLite News


No, I think they're all supposed to show up in sqlite-announce; not
sqlite-users.

3.8.4.3 was announced in sqlite-announce.  Very happy to see that.  Thanks!


[sqlite] Where did the release announcements go?

2014-04-03 Thread Patrick Herbst
I depend on the sqlite-announce list to get updates on new releases.

But it seems they haven't been posting there for the last couple releases.

Why not?

The archive shows the last announcement came for the 3.8.3.1 release.


Re: [sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
Simon,

> Date: Thu, 20 Feb 2014 14:04:59 +
> From: Simon Slavin 
> 
> On 20 Feb 2014, at 1:56pm, Patrick Proniewski <pat...@patpro.net> wrote:
> 
>> Thanks for you reply. In fact I would like the script to remain portable. 
>> I'm running all this on Mac OS X 10.6.8 (sqlite3 3.6.12) but I plan to share 
>> it and use it on FreeBSD too. And developing a extension is probably out of 
>> my reach :)
> 
> I don't think the thing you want to do can be done easily within SQLite.  I 
> would expect to see it done in whatever programming language you're using.
> 
> One approach would be to save two columns when you extract from EXIF.  One 
> has exactly the text from the EXIF column.  Another would be to save the 
> factor as a REAL number.  Do some string processing to get the bit before the 
> '/' and the bit after it, and divide one by the other.
> 
> You might be able to do the same calculation inside SQLite but it'll be 
> horrid to look at.  Something like
> 
> SELECT 
> expTime,substr(expTime,1,instr(expTime,'/'))/substr(expTime,instr(expTime,'/')+1)
>  AS etAsReal FROM photos


I'm going to rework my database to include proper "already calculated" columns,
but using bash to do the math would have been a little unpleasant (not
streamlined enough for my liking). Fortunately exiftool has an option to extract
the raw value of EXIF tags, and I've found a way to read them exactly as I need,
in one pass.

thanks,
Patrick


Re: [sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
Donald,

On 20 févr. 2014, at 15:16, Donald Griggs wrote:

> Am I right that fractional exposures will always have one as the numerator?   
> I.e., you might record an exposure as "1.5" seconds, but never as "3/2" 
> seconds?   If so, then that might simplify things.

Yes, no 3/2, only 1/x and regular REALs.


> The example below creates a column named "canon" to hold the canonical 
> exposure string value for sorting and grouping.
> It assumes any non-decimal fractions will begin with the string "1/"

Thank you for this example. The idea of storing the computed number in the
database is very good and made me rethink how the database is initially fed. I'm
using exiftool to script the EXIF reading from my files, and exiftool has a very
nice option that allows reading the raw data: Exposure Time displayed as "raw"
is the REAL equivalent of my strings; 1/200 is read as 0.005. Very handy.
I'm going to redesign my database to store the raw data alongside the
human-readable data where I need it.

thanks,
Patrick


Re: [sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
hi Kevin,

On 20 févr. 2014, at 14:11, Kevin Martin wrote:

> 
> On 20 Feb 2014, at 12:54, Patrick Proniewski <pat...@patpro.net> wrote:
> 
>> My problem is that my database holds too many different values for 
>> ExposureTime, so the resulting plot is unreadable. I want to be able to 
>> "bin" those values to create a proper histogram. It's not possible to "bin" 
>> string values, because SQLite has no idea that "1/60" has nothing to do near 
>> "1/6000".
> 
> Are you able to use an extension? A custom collation on the ExposureTime 
> column seems pretty simple (although not thought about it in detail). You 
> should then be able to bin the values as they are.


Thanks for your reply. In fact I would like the script to remain portable. I'm
running all this on Mac OS X 10.6.8 (sqlite3 3.6.12), but I plan to share it and
use it on FreeBSD too. And developing an extension is probably out of my reach :)

Patrick


[sqlite] calculation of a fraction stored in a text column

2014-02-20 Thread Patrick Proniewski
Hello,

I'm coming back with my EXIF database. I have a TEXT column, ExposureTime, 
where I store the EXIF representation of photograph's exposure time. Sample 
values: "1/6000", "1/250", "0.5", "1", "6"...
I need to retain this representation, because it's how photographers deal with 
exposure time. 0.004 would be great for math, sorting, etc. but the real life 
thing is "1/250".

My problem is that my database holds too many different values for 
ExposureTime, so the resulting plot is unreadable. I want to be able to "bin" 
those values to create a proper histogram. It's not possible to "bin" string 
values, because SQLite has no idea that "1/60" has nothing to do near "1/6000".

I need to convert strings like "1/6000" and "1/250" into their REAL
counterparts "0.000166" and "0.004" during my SELECT query, for "binning"/sorting
and counting purposes. I've started to work on an over-complex substr()+instr()
combo that is not finished yet, but it will probably be a dead end.

Is there a straightforward way to do this, instead of conditionally decomposing 
the TEXT and recomposing a mathematical expression that SELECT can calculate?
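
For reference, a minimal sketch of the kind of expression this usually ends up
as (table and column names from this thread; note that instr() only appeared in
later SQLite releases, so it may not be available in 3.6.12):

SELECT ExposureTime,
       CASE WHEN instr(ExposureTime, '/') > 0
            THEN CAST(substr(ExposureTime, 1, instr(ExposureTime, '/') - 1) AS REAL)
                 / CAST(substr(ExposureTime, instr(ExposureTime, '/') + 1) AS REAL)
            ELSE CAST(ExposureTime AS REAL)
       END AS exposure_real
FROM photos;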

regards,
Patrick

(you can Cc me, I'm subscribed to digest)


Re: [sqlite] select/count/group by question

2014-02-19 Thread Patrick Proniewski
> Message: 10
> Date: Wed, 19 Feb 2014 09:11:27 -0500
> From: Igor Tandetnik 
> 
> select strftime(...) as year, sum(ISO=100)
> FROM exifdata GROUP BY year ORDER BY year;
> 
> --
> 
> Message: 11
> Date: Wed, 19 Feb 2014 09:16:56 -0500
> From: Richard Hipp 

> Maybe this:
> 
> SELECT strftime('%Y', CreateDate, 'unixepoch') AS year,
> sum(iso==100)
> FROM exifdata WHERE year NOT NULL
> GROUP BY year
> ORDER BY year;
> 
> The main trick is to use "sum(iso==100)" instead of "count(iso)".  Note
> also that SQL strings use a single quote, not a double-quote.  And the
> datetime() function call is redundant as strftime can do the unixepoch
> conversion for you.  And you don't need to GROUP BY iso, since you are only
> interested in a single iso value.


Thank you Igor and Richard for your fast replies.
It solved my problem.

I thought about using sum() or total() but the ISO==100 part is something I 
would never have thought about.

thanks again,
Patrick


[sqlite] select/count/group by question

2014-02-19 Thread Patrick Proniewski
Hello,

I'm currently designing a small tool with shell scripts, sqlite3 backend, and 
gnuplot for graphical output.
My SQLite database has one table, each row being a picture name and a subset of 
its EXIF tags (date, ISO, metering mode...).
In order to process data into gnuplot, I use SELECT to output what I need in a 
flat file.

Example: 

For ISO = 100, I issue this request: 

SELECT strftime("%Y", datetime(CreateDate,"unixepoch")) as year, COUNT(ISO) 
FROM exifdata WHERE ISO = 100 AND year not null GROUP BY ISO, year ORDER BY 
year;

It returns :

2008 9
2009 581
2010 3144
2011 768
2012 1195
2013 1270

That is correct, but not enough: the database includes pictures taken in 2014,
none of which were shot at ISO 100, but I need a line "2014 0" in my output
(otherwise gnuplot mixes things up when it draws stacked histograms).

My question is: how can I write a query that will successfully pull a result
for every year, even if the count is 0?

I've created a second table "years", with only one column ("year"), rows are 
"2008", "2009"..., "2014". Then I've tried using JOIN to get at least a NULL 
result, without success.
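
For what it's worth, a sketch along those lines with the years table as the left
side of an outer join (table and column names taken from the messages above):

SELECT y.year,
       sum(CASE WHEN e.ISO = 100 THEN 1 ELSE 0 END) AS n
FROM years AS y
LEFT JOIN exifdata AS e
       ON strftime('%Y', e.CreateDate, 'unixepoch') = y.year
GROUP BY y.year
ORDER BY y.year;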

Full table with create statement is available here: 
http://patpro.net/~patpro/exifdata.sql.zip (106 kB).
Any hint appreciated!

Thanks,
Patrick


Re: [sqlite] logical to use sqlite to replace Berkeley DB ?

2013-09-14 Thread Patrick

Thanks very much Howard and Rob

I am having trouble with my email settings or hosting provider, so I am
actually reading your posts from the online archive and cannot send private
messages.


I am not yet a developer, but I really like OpenCOBOL:
http://www.opencobol.org/

This website is frozen due to spambots but the community is alive and 
well here:

http://sourceforge.net/projects/open-cobol/

We no longer have a lead developer. There is still development taking 
place but it would be good if it was more vibrant.


BDB is used for file-related operations such as sorting.

OpenCOBOL can be compiled with the older, pre-Oracle BDB or with later versions,
but it looks like it has now been a number of years since Oracle did this.


OpenCOBOL compiles to intermediate C, so it can play nicely with other code for
the most part. While it can call or be called from a variety of languages, most
library functionality comes from building it right into the language (which is
not such a great thing).


I still need a couple more months to get my skills up to speed before
contributing, but when I do, I was thinking it would be nice to rip out the
aging BDB, replace it with something else, and also add SQL functionality to the
language.


Thanks again-Patrick







[sqlite] logical to use sqlite to replace Berkeley DB ?

2013-09-13 Thread Patrick

Hi Everyone

After Oracle acquired Berkeley DB they changed the license. If people 
don't pay a significant licensing fee, it can now only be used for GPL code.


I don't know Berkeley DB very well but I do know a moderate amount of 
Sqlite.


I want to tinker with a compiler that uses BDB, and I was thinking about
ripping it out and replacing it with SQLite. Does this make sense?


I know they are both zero-configuration embedded DBs, but BDB is key-value
based and, I assume, lighter. Is this true? Any idea how close they would be in
terms of memory use and execution speed?


Thanks for reading-Patrick




Re: [sqlite] New Optimizations in 3.7.16 -- Explain please?

2013-04-03 Thread Patrick Herbst
> Can someone give me a case where the new changes make a difference in
> relation to the following two changes:
>
> (from http://www.sqlite.org/releaselog/3_7_16.html)
> - Enhance virtual tables so that they can potentially use an index
> when the WHERE clause contains the IN operator.
> - Allow indices to be used for sorting even if prior terms of the
> index are constrained by IN operators in the WHERE clause.
>
> I use SELECT's using IN's... but I'm not seeing any improved performance.
>
> Anyone know exactly what is enhanced?  or how it now allows indices to
> be used where they couldn't be before?
>
> Thanks!!

Anyone have any use-cases to show the new optimizations?  Please?

Thanks!


[sqlite] Q: When to use sqlite3_shutdown ?

2013-04-01 Thread Patrick Herbst
When is there a need to use sqlite3_shutdown?

I don't really see a clear explanation of when or how it's needed.

Any tips, please?

Thanks!


[sqlite] New Optimizations in 3.7.16 -- Explain please?

2013-03-25 Thread Patrick Herbst
Can someone give me a case where the new changes make a difference in
relation to the following two changes:

(from http://www.sqlite.org/releaselog/3_7_16.html)
- Enhance virtual tables so that they can potentially use an index
when the WHERE clause contains the IN operator.
- Allow indices to be used for sorting even if prior terms of the
index are constrained by IN operators in the WHERE clause.

I use SELECTs with INs... but I'm not seeing any improved performance.

Does anyone know exactly what is enhanced, or how it now allows indices to
be used where they couldn't be before?
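
For anyone experimenting with this, a minimal sketch of the kind of query the
two release-note items seem to describe (table and index are illustrative;
EXPLAIN QUERY PLAN shows whether the ORDER BY is satisfied by the index or needs
a separate sort):

CREATE TABLE t(a INTEGER, b INTEGER, c TEXT);
CREATE INDEX t_ab ON t(a, b);
EXPLAIN QUERY PLAN
  SELECT c FROM t WHERE a IN (1, 2, 3) ORDER BY b;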

Thanks!!


Re: [sqlite] internals. IPC or disk write based?

2013-01-24 Thread Patrick

You started off by making some assumptions which would normally be correct, but 
are not valid for SQLite.  Your last two paragraphs suggest you realized what 
was really happening.  There is no persistent SQLite Server process, accepting 
requests from many users.  SQLite has no multi-user model and is not primarily 
intended for use as a server/client database.  All changes are performed by 
changing data in the database file or in temporary 'journal' files on disk.  
The only thing you could say is shared between users is whether the file on 
disk is locked (indicating that it's currently being updated by a user) or not.

Think of SQLite as the simple database system your TV uses to list channel 
names, or your mobile phone uses to keep its phone book (both of which are, in 
fact, things SQLite really is used for with literally millions of 
installations).  The fact that SQLite works at all with huge datasets being 
accessed by many users at once is just a result of good programming.

As your questions indicate, if you're starting off a programming project 
intending to use big datasets accessed concurrently by many users across a 
network, SQLite is probably not the right database engine to use.  More about 
this can be found at

<http://www.sqlite.org/whentouse.html>

Simon.


Hi Simon

Thanks for your very helpful post.

Actually, for everything I want to do DB-wise, SQLite is perfect. I actually
wanted to understand this for use in a totally different project, written in the
Ada language.


Have a great day-Patrick



[sqlite] internals. IPC or disk write based?

2013-01-24 Thread Patrick

Hi Everyone

I bought the Apress book. There is a chapter on internals. I have tried 
to read what I could find on the net too but I am still mixed up about 
something.


I don't know how SQLite interacts with its clients while maintaining state. If
everything were read into memory it would be easy to imagine how things could
work, but then how would a second application access it?


I am guessing other DBs that have authentication are socket based, but I am
guessing SQLite is not.


If it were all based on disk writes, then multiple applications could read and
write to it via a queue with file locking. Is this the case? What sort of
inter-process communication is SQLite based on?





[sqlite] Select into outfile C API

2012-06-05 Thread patrick

Is there a syntax similar to MySQL's 'SELECT INTO OUTFILE' in the C API?
I see the sqlite3 binary has an interactive .output command.  Anything
similar in the C API?
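
For comparison, a minimal sketch of the shell-side approach mentioned above
(file and table names are illustrative); as far as I know the C API has no
direct equivalent, so a program steps through the result set and writes the rows
itself:

sqlite> .mode csv
sqlite> .output out.csv
sqlite> SELECT * FROM t1;
sqlite> .output stdout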

TIA
Pat...




[sqlite] Problem with a SELECT DISTINCT query in Version : 3.7.8

2011-09-29 Thread Patrick Villette

Hi,

I noticed differences in the result of a "SELECT DISTINCT" query between
versions 3.7.7.1 and 3.7.8.


The query is :

select distinct IdProjet, IdActivite from Taches where Deleted=0 order 
by Debut desc limit 5



With Version 3.7.7.1 the result (which seems correct) is :

IdProjet   IdActivite
      53           23
      23            9
       5           17
      53            9
      53           37


With Version 3.7.8 the result is :

IdProjet   IdActivite
      53           23
      50           17
      16           -1
      20           35
      20           10


You will find enclosed the database for the test : "test.db".

Regards.

Patrick.
--
*Patrick Villette*
*Innovation Net*
courriel : patrick.ville...@innovation-net.fr



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-28 Thread Patrick Proniewski
On 28 sept. 2011, at 13:38, Paul Linehan wrote:

>> The granularity I'm looking for is between 1 second and 10 seconds. Cron is 
>> not
>> an option here.
> 
> I woke up this morning and there is a way that cron *_could_* do what you
> want. You appear to have figured out a way that suits you, but cron could
> be used.
> 
> 10 second granularity.
> 
> You have 6 cron jobs, each launched on the minute.
> 
> The first launches iostat and puts data into SQLite.
> The second does a sleep 10, launches iostat and puts data into SQLite,
> the third sleep 20 
> 
> I know it's an appalling hack, but could be useful to somebody?

That's appalling :)
Especially if you consider that some systems may have crond launched with the
-j flag (jitter: adds a random sleep before running each cron job).

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 23:11, Scott Hess wrote:

> On Tue, Sep 27, 2011 at 2:07 PM, Patrick Proniewski <pat...@patpro.net>wrote:
> 
>> On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:
>>> gawk has fflush()
>> 
>> none of them is available out of the box on Mac OS X, or FreeBSD. gawk can
>> be installed, but I'd rather use my "while true" loop instead of installing
>> gawk.
>> 
> 
> Did you try it?

Nope, I don't have gawk, so I didn't even think about trying.

>  On my Mac fflush() fixes it.

Indeed, it does. Thanks. So it's not specific to gawk; that's great news! My
problem is solved.
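
For the archive, a sketch of where fflush() goes in the pipeline from the first
message of this thread (otherwise the same command):

iostat -d -w 10 disk0 |\
awk '!/[a-zA-Z]/ {print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"; fflush()}' |\
sqlite3 iostat.db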

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:44, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski <pat...@patpro.net>:
> 
>>> Take a look at a utility called dstat.
> 
>> no, it's linux only.
> 
> But it is written in Python - so it should be relatively
> transportable.

and it relies on /proc/; Mac OS X does not have a /proc/ filesystem.

patpro



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:41, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski <pat...@patpro.net>:
> 
>> That's what I do, but I think using a loop is ugly, and I would like to find 
>> a way
>> to feed data continuously into sqlite.
> 
> I can't see why you would want to do this more than once every minute
> - or do you?

The granularity I'm looking for is between 1 second and 10 seconds. Cron is not 
an option here.

> Why, exactly, do you want to do this anyway? I'm interested because I've
> done something similar.

I have a performance issue on a file server hooked to a RAID enclosure and
exporting the corresponding volume via NFS.
The performance problem seems to be in the RAID itself, so I'm logging I/O
performance during production to detect anomalies.
Sample: http://perso.univ-lyon2.fr/~pproniew/kbpt-2011-09-27-22.png (Bézier
smoothing, 24 hours of data).
We will change the storage in a few days, and this iostat logging will help
compare before/after performance.

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:

> gawk has fflush()


On 27 sept. 2011, at 20:29, Roger Andersson wrote:

> stdbuf?
> unbuffer?


none of them is available out of the box on Mac OS X, or FreeBSD. gawk can be 
installed, but I'd rather use my "while true" loop instead of installing gawk.

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:14, David Garfield wrote:

> Any entry in a pipe could be buffering.  In a quick test here, awk is
> buffering.  To find the buffering, try using the pieces up to a given
> stage with " | cat " added at the end.  If this buffers, you've found
> the problem.

As far as my understanding goes, the simple fact that my final output does not
go to a tty is enough to trigger buffering.


>  Unbuffered output is usually slower, so it is normally
> done only to a terminal.  I think the only easy way to externally
> disable the buffer is to wrap the program in a pseudo-tty.

apparently... not so easy by the way :)

> Alternatively, look for an option that lets you explicitly unbuffer.
> (for instance, in perl, do: $| = 1; )

nothing in awk, but I could try sed instead (-l  Make output line buffered)

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:04, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski <pat...@patpro.net>:
> 
> 
>> I'm facing a challenging problem. I want to log some data into an SQLite3 DB.
>> Data come from a system command (iostat) in an endless steam, one row every 
>> X seconds:
> 
> 
> Take a look at a utility called dstat. 

No, it's Linux-only.

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 18:31, Roger Andersson wrote:

> I do not know if tee makes any difference or if it's available on Mac?
> http://unixhelp.ed.ac.uk/CGI/man-cgi?tee

tee is available, but no more luck here, as it doesn't allow disabling the
buffering.


> iostat -d -w 10 disk0 | tee -a logfile
> and then
> tail -f logfile | awk '!/[a-zA-Z]/ {print "INSERT INTO io 
> VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
> sqlite3 iostat.db

same problem here ;)

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 13:44, Simon Slavin wrote:

> On 27 Sep 2011, at 12:03pm, Patrick Proniewski wrote:
> 
>> You're assuming I'm running Linux, but I'm running Mac OS X Server (or 
>> FreeBSD by the way), so no /proc here, and iostat is probably working 
>> differently too.
>> <http://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/10.5/man8/iostat.8.html>
> 
> If you're using the OS X version, I don't think you need to run iostat as a 
> continuous process.  Write a shell script with a timed loop which runs iostat 
> without the '-w 10'.  So you could write a script which does

That's what I do, but I think using a loop is ugly, and I would like to find a 
way to feed data continuously into sqlite. 

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:31, Baptiste Daroussin wrote:

> You don't need awk :)
> 
> iostat -d -w 10 disk0 | while read a b c; do case $a in *[a-zA-Z]*)
> continue ;; *) sqlite3 iostat.db "INSERT INTO io
> VALUES(datetime('now', 'localtime'), \"$a\", \"$b\", \"$c\");" ;;
> esac; done


Ok, this forks less, but still, you can't get rid of the loop ;) (I love awk)

thanks,
patpro



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:02, Stephan Beal wrote:

> That's a tricky one, it seems. If you're not restricted to shell code, you
> could possibly do this using perl, PHP, or similar. You could open a pipe
> for iostat, read a line from the pipe, and feed that line to your db (not in
> the form of a raw text line but using the script language's sqlite3 API).
> Repeat until the pipe is eof or a signal is caught or whatever.

Changing languages could be an option, but I'd rather keep my ugly while loop 
than learn PERL :)

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 12:58, Simon Slavin wrote:

> On 27 Sep 2011, at 6:48am, Patrick Proniewski wrote:
> 
>> I've tried various solutions with named pipes, file descriptors 
>> redirections… but none worked, because they all seem to require the data 
>> steam to end before feeding data into the DB.
> 
> Most of your problems are caused because you're using iostat.  Can you 
> instead read the data directly out of /proc ?  Take a look at the end of the 
> 'man iostat' page for details.

You're assuming I'm running Linux, but I'm running Mac OS X Server (or FreeBSD 
by the way), so no /proc here, and iostat is probably working differently too.
<http://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/10.5/man8/iostat.8.html>

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:21, Roger Binns wrote:

> The easiest solution is to just be patient and accept the data will be a
> little delayed.

that won't work for me, because my SQL command includes a datetime('now'). Any 
row input that is delayed won't be recorded with the proper datetime. That's 
one of the reasons why I must use tail -1 in my infinite loop. When I send more 
than one line, they all have the same datetime.


> Other solutions involve various helper programs such as using a pty so that
> the programs think they are using terminals:
> 
>  http://stackoverflow.com/questions/1000674/turn-off-buffering-in-pipe

I have neither unbuffer nor socat available on my system, but I'll read the
full thread to pick up more info.

thanks,
patpro



[sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-26 Thread Patrick Proniewski
Hello,

I'm facing a challenging problem. I want to log some data into an SQLite3 DB.
Data come from a system command (iostat) in an endless stream, one row every X
seconds:

 disk0 
   KB/t tps  MB/s 
   4.02 2318  9.09 
   4.00 1237  4.83 
   6.63 979  6.34 
  46.30  15  0.69 
  30.58  23  0.69 
  12.90  32  0.41 
 107.85  55  5.75 

I though I could easily pipe data into SQLite:

iostat -d -w 10 disk0 |\
awk '!/[a-zA-Z]/ {print "INSERT INTO io 
VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
sqlite3 iostat.db

but it won't work, because sqlite3 won't record any data until the iostat 
command ends. And of course, this iostat command will never end.
So I'm stuck with a working but very ugly script:

while true; do
 iostat -c 2 -d -w 10 disk0 |\
 tail -1 |\
 awk '!/[a-zA-Z]/ {print "INSERT INTO io 
VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
 sqlite3 iostat.db
done

An endless loop, forking iostat for 2 rows of data (-c 2) and keeping only the
last row because the first one is an artifact (tail -1).
I've tried various solutions with named pipes and file descriptor redirections,
but none worked, because they all seem to require the data stream to end before
feeding data into the DB.

Any idea?

regards,
patpro


[sqlite] System.Data.SQLite Release

2011-08-31 Thread Patrick Earl
Just wondering when the next release of System.Data.SQLite will be
available.  There's a bug (already reported and fixed) in the current
release that badly breaks NHibernate / ActiveRecord.

Patrick Earl


Re: [sqlite] System.Data.SQLite Reserved Words Bug

2011-07-10 Thread Patrick Earl
Thanks for fixing that so quickly.  Looking forward to a new release.

Patrick Earl

On Sat, Jul 9, 2011 at 2:39 PM, Joe Mistachkin <sql...@mistachkin.com> wrote:
>
> Patrick Earl wrote:
>>
>> System.Resources.MissingManifestResourceException was unhandled
>>   Message=Could not find any resources appropriate for the specified
>> culture or the neutral culture.  Make sure
>> "System.Data.SQLite.SR.resources" was correctly embedded or linked
>> into assembly "System.Data.SQLite" at compile time, or that all the
>> satellite assemblies required are loadable and fully signed.
>>
>
> This issue appears to be caused by an incorrect resource name in the
> mixed-mode assembly compiled with VS 2010.  The following line in the
> project file "SQLite.Interop.2010.vcxproj" is incorrect:
>
> $(IntDir)System.Data.SQLite.%(Filename).resources
>
> It should read:
>
> System.Data.SQLite.%(Filename).resources
>
> This issue has been fixed in:
>
> http://system.data.sqlite.org/index.html/ci/55f56ce508
>
> Thanks for pointing out this problem.
>
> --
> Joe Mistachkin
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] System.Data.SQLite Reserved Words Bug

2011-07-09 Thread Patrick Earl
First I wanted to say that I was so excited to see the 1.0.74 release
with .NET 4, zip files, and SQLite 3.7.7.  I've been waiting for .NET
4 support for a long while.  Thanks so much. :)

Unfortunately, I was unable to upgrade from 1.0.66 because of the
following problem.

Using this code produces the following exception:

using System;
using System.Collections.Generic;
using System.Data.SQLite;
using System.Linq;
using System.Text;

namespace ConsoleApplication4
{
class Program
{
static void Main(string[] args)
{
SQLiteConnection conn = new SQLiteConnection("Data Source=test.db");
conn.Open();
conn.GetSchema("ReservedWords");
}
}
}

/*
System.Resources.MissingManifestResourceException was unhandled
  Message=Could not find any resources appropriate for the specified
culture or the neutral culture.  Make sure
"System.Data.SQLite.SR.resources" was correctly embedded or linked
into assembly "System.Data.SQLite" at compile time, or that all the
satellite assemblies required are loadable and fully signed.
  Source=mscorlib
  StackTrace:
   at 
System.Resources.ManifestBasedResourceGroveler.HandleResourceStreamMissing(String
fileName)
   at 
System.Resources.ManifestBasedResourceGroveler.GrovelForResourceSet(CultureInfo
culture, Dictionary`2 localResourceSets, Boolean tryParents, Boolean
createIfNotExists, StackCrawlMark& stackMark)
   at System.Resources.ResourceManager.InternalGetResourceSet(CultureInfo
requestedCulture, Boolean createIfNotExists, Boolean tryParents,
StackCrawlMark& stackMark)
   at System.Resources.ResourceManager.InternalGetResourceSet(CultureInfo
culture, Boolean createIfNotExists, Boolean tryParents)
   at System.Resources.ResourceManager.GetString(String name,
CultureInfo culture)
   at System.Data.SQLite.SR.get_Keywords() in
c:\dev\sqlite\dotnet\System.Data.SQLite\SR.Designer.cs:line 87
   at System.Data.SQLite.SQLiteConnection.Schema_ReservedWords()
in c:\dev\sqlite\dotnet\System.Data.SQLite\SQLiteConnection.cs:line
1239
   at System.Data.SQLite.SQLiteConnection.GetSchema(String
collectionName, String[] restrictionValues) in
c:\dev\sqlite\dotnet\System.Data.SQLite\SQLiteConnection.cs:line 1223
   at System.Data.SQLite.SQLiteConnection.GetSchema(String
collectionName) in
c:\dev\sqlite\dotnet\System.Data.SQLite\SQLiteConnection.cs:line 1176
   at ConsoleApplication4.Program.Main(String[] args) in
c:\temp\projects\ConsoleApplication4\Program.cs:line 15
   at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly,
String[] args)
   at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
   at System.Threading.ExecutionContext.Run(ExecutionContext
executionContext, ContextCallback callback, Object state, Boolean
ignoreSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext
executionContext, ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart()
*/

Thanks for your help with this.

  Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Two Requests for System.Data.SQLite

2011-06-01 Thread Patrick Earl
1.  Please release a zipped version of the x86 and x64 binaries so we
don't have to install the package, copy the files, then uninstall it.
2.  Please release a version built against .NET 4 so the legacy
runtime flag can be avoided.

Thanks for considering these things.  They would certainly make my life easier.

 Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3_bind_parameter_name

2011-06-01 Thread Patrick Earl
That's awesome.  Thanks so much. :)
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3_bind_parameter_name

2011-06-01 Thread Patrick Earl
So, to move forward on this bug a decision needs to be made at what
level it will be handled.

1.  At the SQLite level.
2.  At the provider level (ex. System.Data.SQLite, Python providers, etc.)
3.  At the user level (ex. NHibernate, Entity Framework, User
Applications, etc.)

Doing it in #3 would involve figuring out which parameters would be
removed and not including those, a very difficult option.
Doing it in #2 would involve adding something that does manual
parameter parsing and validation (such as the parameter tokenizer).
Doing it in #1 would involve things that I don't understand, though it
would simultaneously correct the problems with all providers mentioned
on this thread.

I'm not clear on who is maintaining System.Data.SQLite, but I would
certainly be happy to see some progress towards the resolution of this
issue.  Since I don't believe #3 is a feasible option (nor even the
correct place to abstract away the SQLite oddity), the solution should
be #1 or #2.

For #2 there is a performance hit on every query performed using such
a provider (.NET, Python, etc.).  The queries need to be parsed by the
provider to determine validity.  If the SQLite syntax changes, these
providers need to be updated.

For #1, it sounds like there is a minor performance penalty, but
perhaps it can be implemented in a way where the effects are minimal.
#1 has the advantage that it may be possible to avoid any sort of
significant performance hit unless an "optimized" parameter is
encountered.  #2 doesn't have this luxury since it doesn't know when a
parameter might be optimized out.

What would the SQLite team suggest to help progress the fix for this?

If it's at the System.Data.SQLite level, I would be willing to help
contribute a fix.  If that were the case, I would hope that the SQLite
syntax could be parsed by a regex for performance reasons.

    Patrick Earl

On Wed, Jun 1, 2011 at 10:36 AM, Stephan Beal <sgb...@googlemail.com> wrote:
> On Wed, Jun 1, 2011 at 6:24 PM, Patrick Earl <pate...@patearl.net> wrote:
>
>> From these, the "best" solution seems to be re-tokenizing the sql in
>> the provider.  Is there another work-around?  It seems that
>> implementing this at the database level would be the most efficient
>> approach.  Obviously re-tokenizing all the SQL would be expensive.
>
>
> Actually... if you just want to tokenize the SQL for the parameters, as
> opposed to checking the validity of the SQL itself, it is not all that
> difficult to do. i recently did just that to add named parameter support to
> the MySQL back-end of a db access abstraction API. MySQLv5 doesn't support
> named parameters natively, so i wrote a relatively simple tokenizer which
> can fish them out and another routine which converts named parameters to
> question marks so that we can feed the data to mysql (but can also remember
> enough info to map the named param positions to qmark positions):
>
> http://fossil.wanderinghorse.net/repos/cpdo/index.cgi/artifact/b5ba5aa115c73b63179456d1fed7846a11878c05
>
> Search that file for:
>
> cpdo_find_next_named_param
> cpdo_named_params_to_qmarks
>
> The code for those functions is public domain and should be generic enough
> to extract for your own use (almost without modification - i think only the
> return code symbols would need to be replaced). The docs are in the header
> file:
>
> http://fossil.wanderinghorse.net/repos/cpdo/index.cgi/artifact/6749b977687afa081f6b5b3e7fc3b19cd62bb70a
>
> That code has worked for me very well so far, and i haven't yet had any
> mis-parsing except on code which was itself not legal SQL (i.e. it doesn't
> seem to create any new problems where there isn't already a problem).
>
> --
> - stephan beal
> http://wanderinghorse.net/home/stephan/
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3_bind_parameter_name

2011-06-01 Thread Patrick Earl
On Wed, Jun 1, 2011 at 12:32 AM, Roger Binns <rog...@rogerbinns.com> wrote:
> It does seem to me that this specific example is "wrong".  Selects return
> zero or more rows yet the equality wants exactly one value.  I would expect
> an error rather an implied "LIMIT 1" should there be anything other than
> exactly one row.  In some cases you'd only know by executing that subquery
> how many rows it returns, rather than at prepare time.

This is my general feeling as well, but I'm ignoring this since I'm
okay with erroneous queries generating ambiguous results.

>>> So, in summary, the "problem" has been with us for 6 years and nobody has
>>> cared.  And "fixing" it reduces the battery life on your cellphone by some
>>> small amount.  Are you *sure* this is something that needs to change?
>
> I think it is important to correct, especially as there is no workaround.

The fact that there is no straight-forward work-around is the most
problematic part.  One possible work-around would be to build a
tokenizer that re-extracts all the SQLite parameters so their values
can be verified.  Another possibility is to completely ignore missing
named parameters, leading to the inability to detect errors for the
user.  Understanding the query would be more of a leap, so I don't
think it's a reasonable solution to attempt to remove the "extra"
parameter.

From these, the "best" solution seems to be re-tokenizing the sql in
the provider.  Is there another work-around?  It seems that
implementing this at the database level would be the most efficient
approach.  Obviously re-tokenizing all the SQL would be expensive.

Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3_bind_parameter_name

2011-05-31 Thread Patrick Earl
I just had an amusing realization... if SQLite itself won't change,
then the problem falls on the shoulders of System.Data.SQLite, which
is incidentally also handled by this mailing list.  It's ultimately
this code that is causing the problem because of the case mentioned
here:

private void BindParameter(int index, SQLiteParameter param)
{
  if (param == null)
    throw new SQLiteException((int)SQLiteErrorCode.Error,
      "Insufficient parameters supplied to the command");

This exception is being thrown because SQLite is eating one of the
parameters as Richard mentioned.

I haven't analyzed the consequences of changing or disabling the check
in some way.  If somebody beats me to it, I'd be happy about that too.
:)

     Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3_bind_parameter_name

2011-05-31 Thread Patrick Earl
The generated limit parameter does have a value of 1, so it's a valid
query.  It's SQLite that has taken this valid query with a valid
parameter value of 1 and has exposed its internal implementation
details by removing it and causing additional work-arounds in
parameter binding.  It's possible the work-arounds aren't major... I
haven't had time to look at the picture in depth.

 Patrick Earl

On Tue, May 31, 2011 at 12:47 PM, Jan Hudec <b...@ucw.cz> wrote:
> On Tue, May 31, 2011 at 08:00:40 -0400, Richard Hipp wrote:
>> On Mon, May 30, 2011 at 11:27 PM, Patrick Earl <pate...@patearl.net> wrote:
>> > SELECT this_.studentId as studentId143_0_,
>> >     this_.Name as Name143_0_,
>> >     this_.address_city as address3_143_0_,
>> >     this_.address_state as address4_143_0_,
>> >     this_.preferredCourseCode as preferre5_143_0_
>> > FROM Student this_
>> > WHERE this_.Name = (
>> >     SELECT this_0_.Name as y0_
>> >     FROM Student this_0_
>> >     WHERE this_0_.studentId = @p0
>> >     ORDER BY this_0_.Name
>> >     asc limit @p1)
>> [...]
>>
>> The LIMIT in a scalar subquery is always ignored.  A scalar subquery
>> operates with a LIMIT of 1 regardless of any LIMIT that you might specify.
>
> It's quite obvious that any other limit in a scalar subquery does not make
> sense, not only in SQLite, but in any SQL database, since only one value will
> ever be used. Which leads me to wonder what causes it to be generated (you
> don't want to tell me the weird names are invented manually, right?) and
> whether
>  - it should have not generated the limit, or
>  - it should have used "in" instead of "=" (thus making it list query which
>   can meaningfuly have limit).
>
> --
>                                                 Jan 'Bulb' Hudec <b...@ucw.cz>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug in sqlite3_bind_parameter_name

2011-05-30 Thread Patrick Earl
Greetings!

I have the following query:

SELECT this_.studentId as studentId143_0_, this_.Name as Name143_0_,
this_.address_city as address3_143_0_, this_.address_state as
address4_143_0_, this_.preferredCourseCode as preferre5_143_0_ FROM
Student this_ WHERE this_.Name = (SELECT this_0_.Name as y0_ FROM
Student this_0_ WHERE this_0_.studentId = @p0 ORDER BY this_0_.Name
asc limit @p1)

sqlite3_bind_parameter_count returns 2.

sqlite3_bind_parameter_name(1) returns @p0
sqlite3_bind_parameter_name(2) returns "", not @p1

Perhaps this is related to being a limit in a subquery.

Though nobody may care, this is currently preventing the NHibernate
test suite from passing.

Appreciate your help with this. :)

     Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lack of "decimal" support

2011-03-26 Thread Patrick Earl
On Sat, Mar 26, 2011 at 10:03 PM, Darren Duncan <dar...@darrenduncan.net> wrote:
> You could store your exact precision numbers as a pair of integers 
> representing
> a numerator/denominator ratio and then have math operators that work on these
> pairs like they were one number.  You would then know at the end how to move 
> the
> radix point since that was kept track of along with the number. -- Darren 
> Duncan

If you did this, you wouldn't be able to compare numbers in the
database without resorting to division.  If you just specified how
many fixed decimal places there were, you could zero-pad strings if
you only needed to perform comparison operations.  Obviously you'd
need to create custom operations, as you suggest, for other math
operators.
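
A tiny sketch of the zero-padding idea, assuming a fixed scale of two
decimal places and a fixed overall width of ten characters (the salary
table is only an illustration):

   CREATE TABLE salary(amount TEXT);
   -- every value padded to the same width
   INSERT INTO salary VALUES ('0000099.50'), ('0000100.00'), ('0000250.00');
   -- padded text orders correctly under plain string comparison
   SELECT * FROM salary WHERE amount > '0000100.00';   -- only 0000250.00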

If SQLite can't decide on a base-10 format itself, perhaps the answer
lies in enhancing the API to allow for custom type storage and
operators.

 Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lack of "decimal" support

2011-03-26 Thread Patrick Earl
If you use a view to return a double, you've lost the exact value you
were trying to save by storing the decimal as a text value.  If you
continue to work with it as an integer, it's exact, but that requires
continual awareness of the number of decimal places at any point in
time.  In essence, you have to build significant numeric
infrastructure into your program to emulate the missing numeric
infrastructure in SQLite.

Patrick Earl

On Sat, Mar 26, 2011 at 9:52 PM, BareFeetWare <list@barefeetware.com> wrote:
> On 27/03/2011, at 2:09 PM, Patrick Earl wrote:
>
>> if you're in a context where you don't have significant understanding of the 
>> user's query, how do you determine if 1.05 is $1.05 or 105%?
>
> Can you give us a bit more background and an example of this?
>
> How is the interface for the query represented to the user and what can they 
> enter there to create a query?
>
> You can probably do this fairly easily via views which display data in a 
> particular format for the user to see or create a query.
>
> Tom
> BareFeetWare
>
> --
> iPhone/iPad/iPod and Mac software development, specialising in databases
> develo...@barefeetware.com
>  --
> Comparison of SQLite GUI tools:
> http://www.barefeetware.com/sqlite/compare/?ml
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lack of "decimal" support

2011-03-26 Thread Patrick Earl
You're right, it doesn't make sense to multiply dollars, but if you're
in a context where you don't have significant understanding of the
user's query, how do you determine if 1.05 is $1.05 or 105%?

I understand that one can custom-code everything for SQLite and get
reasonable results in some cases, but please understand that I'm
looking for solutions that don't require the framework to understand
the user's intentions any more than "I want to work with base-10
numbers up to a certain precision/scale."

      Patrick Earl

On Sat, Mar 26, 2011 at 8:43 PM, Gerry Snyder <mesmerizer...@gmail.com> wrote:
> Do money values really get multiplied together?
>
> What is the meaning of square cents as a unit?
>
> Gerry
>
> On 3/26/11, Patrick Earl <pate...@patearl.net> wrote:
>> That is true, but then when you are formulating generic queries within
>> a place such as an ORM like NHibernate, you would need to figure out
>> when to translate the user's "100" into "10000".  As well, if you
>> multiplied numbers, you'd need to re-scale the result.  For example,
>> (1 * 1) would be (100 * 100 = 10000), which is 1 * 1 = 100. :(  If one
>> wanted to get excessively complicated, they could implement a series
>> of user functions that perform decimal operations using strings and
>> then reformulate queries to replace + with decimal_add(x,y).  That
>> said, it'd be so much nicer if there was just native support for
>> base-10 numbers. :)
>>
>>        Patrick Earl
>>
>> On Sat, Mar 26, 2011 at 8:15 PM, BareFeetWare <list@barefeetware.com>
>> wrote:
>>> On 27/03/2011, at 12:39 PM, Patrick Earl wrote:
>>>
>>>> Base-10 numbers are frequently used in financial calculations because
>>>> of their exact nature.  SQLite forces us to store decimal numbers as
>>>> text to ensure precision is not lost.  Unfortunately, this prevents
>>>> even simple operations such as retrieving all rows where an employee's
>>>> salary is greater than '100' (coded as a string since decimal types
>>>> are stored as strings).
>>>
>>> Can you store all money amounts as integers, as the cents value? That is
>>> exact, searchable etc.
>>>
>>> Thanks,
>>> Tom
>>> BareFeetWare
>>>
>>> --
>>> iPhone/iPad/iPod and Mac software development, specialising in databases
>>> develo...@barefeetware.com
>>>  --
>>> Comparison of SQLite GUI tools:
>>> http://www.barefeetware.com/sqlite/compare/?ml
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
> --
> Sent from my mobile device
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lack of "decimal" support

2011-03-26 Thread Patrick Earl
That is true, but then when you are formulating generic queries within
a place such as an ORM like NHibernate, you would need to figure out
when to translate the user's "100" into "10000".  As well, if you
multiplied numbers, you'd need to re-scale the result.  For example,
(1 * 1) would be (100 * 100 = 10000), which is 1 * 1 = 100. :(  If one
wanted to get excessively complicated, they could implement a series
of user functions that perform decimal operations using strings and
then reformulate queries to replace + with decimal_add(x,y).  That
said, it'd be so much nicer if there was just native support for
base-10 numbers. :)
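
For illustration, a minimal sketch of that re-scaling problem with
amounts stored as integer cents (scale 100); the price table is just a
stand-in:

   CREATE TABLE price(amount INTEGER);        -- 1.00 stored as 100
   INSERT INTO price VALUES (100);
   SELECT amount * amount FROM price;         -- 10000: the scale has doubled
   SELECT amount * amount / 100 FROM price;   -- 100: rescaled back to cents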

   Patrick Earl

On Sat, Mar 26, 2011 at 8:15 PM, BareFeetWare <list@barefeetware.com> wrote:
> On 27/03/2011, at 12:39 PM, Patrick Earl wrote:
>
>> Base-10 numbers are frequently used in financial calculations because
>> of their exact nature.  SQLite forces us to store decimal numbers as
>> text to ensure precision is not lost.  Unfortunately, this prevents
>> even simple operations such as retrieving all rows where an employee's
>> salary is greater than '100' (coded as a string since decimal types
>> are stored as strings).
>
> Can you store all money amounts as integers, as the cents value? That is 
> exact, searchable etc.
>
> Thanks,
> Tom
> BareFeetWare
>
> --
> iPhone/iPad/iPod and Mac software development, specialising in databases
> develo...@barefeetware.com
>  --
> Comparison of SQLite GUI tools:
> http://www.barefeetware.com/sqlite/compare/?ml
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lack of "decimal" support

2011-03-26 Thread Patrick Earl
I've found the decimal numbers to be most generally useful in narrow
ranges.  For reference, here are a couple notes on how other databases
implement them:

MSSQL stores up to 38 digits in 17 bytes, with a specific precision.
http://msdn.microsoft.com/en-us/library/aa258832(v=sql.80).aspx

PostgreSQL is more flexible and supports up to 1000 digits.

http://www.postgresql.org/docs/current/static/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL

In order to get a jump on the implementation, I would suggest that it
might be possible to use C routines from the PostgreSQL project or
some appropriately licensed library.  Perhaps an author from a numeric
library would be willing to donate their work to the SQLite project.

 Patrick Earl

On Sat, Mar 26, 2011 at 7:43 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 27 Mar 2011, at 2:39am, Patrick Earl wrote:
>
>> Base-10 numbers are frequently used in financial calculations because
>> of their exact nature.  SQLite forces us to store decimal numbers as
>> text to ensure precision is not lost.  Unfortunately, this prevents
>> even simple operations such as retrieving all rows where an employee's
>> salary is greater than '100' (coded as a string since decimal types
>> are stored as strings).
>>
>> I would like to encourage the developers to consider adding support
>> for base-10 numbers.  This is clearly a very pertinent issue, as even
>> this month there was another thread regarding decimal support.
>
> Interesting idea.  You will need to develop your own C routines to do 
> calculations with decimals.  Do you feel they should be implemented at a 
> fixed length or would you want to be able to use decimal strings of arbitrary 
> lengths?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite & NHibernate

2011-03-26 Thread Patrick Earl
Greetings.

I'm a committer for NHibernate who has been working on improving the
support for SQLite.  I've been able to get most of the over 3000 tests
passing on SQLite.  Kudos to Richard and the team for producing such
an impressive little database.  I wanted to share with you the main
limitations I found on this journey in the hopes that some day they
will no longer be limitations.  They are ordered by my view on their
importance.

1.  Support for a base-10 numeric data type.
2.  Support for altering tables (especially the removal or addition of
foreign keys).  Granted, tables can be updated by turning off foreign
key constraints, copying all data, manually checking foreign key
consistency, and then turning on foreign key constraints again.  Not
having the ability to alter tables ultimately leads to a great of
complexity in any system that has to deal with updating database
schemas.
3.  FULL OUTER JOIN support.  There are work-arounds (the usual emulation
is sketched just after this list), but implementing those as part of
NHibernate proved quite complicated, so I opted to wait unless there
seems to be extreme demand for it.
4.  Some sort of locate function to get the index of substring within
another string.  I couldn't even find any way to emulate this (aside
from user defined functions).
5.  Support for operations like "= all (subquery)", "= some
(subquery)", and "= any (subquery)".
6.  Better support for distributed transactions.  I don't pretend to
be an expert here, but it seems plausible that SQLite could
participate in a transaction across multiple databases.  Perhaps
implementing two phase commit would help with this.
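
For item 3 above, the usual FULL OUTER JOIN emulation, sketched with two
hypothetical tables a(id) and b(a_id):

   SELECT a.id, b.a_id FROM a LEFT JOIN b ON b.a_id = a.id
   UNION ALL
   SELECT a.id, b.a_id FROM b LEFT JOIN a ON a.id = b.a_id
   WHERE a.id IS NULL;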

Thanks for your consideration.

Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Lack of "decimal" support

2011-03-26 Thread Patrick Earl
Base-10 numbers are frequently used in financial calculations because
of their exact nature.  SQLite forces us to store decimal numbers as
text to ensure precision is not lost.  Unfortunately, this prevents
even simple operations such as retrieving all rows where an employee's
salary is greater than '100' (coded as a string since decimal types
are stored as strings).
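
A quick illustration of the comparison problem: stored as text, values
compare lexicographically rather than numerically, so '99.00' sorts
above '100.00':

   sqlite> SELECT '99.00' > '100.00';
   1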

I would like to encourage the developers to consider adding support
for base-10 numbers.  This is clearly a very pertinent issue, as even
this month there was another thread regarding decimal support.

Thanks for your consideration.

   Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Foreign constraints and table recreation

2010-05-11 Thread Patrick Earl
Okay, I tried the strategy discussed previously but I'm still having
problems.  Either I'm not seeing something, or there's a bug in the
foreign constraint support.  Take a look at the following two
execution snippets:

sqlite>
sqlite> commit transaction;
sqlite>
sqlite> begin transaction;
sqlite>
sqlite> DROP TABLE "ParkingLotLevel";
sqlite> DROP TABLE "Car";
sqlite> DROP TABLE "ParkingLot";
sqlite>
sqlite> Commit transaction;
Error: foreign key constraint failed

And now, we switch Car and ParkingLotLevel...

sqlite>
sqlite> commit transaction;
sqlite>
sqlite> begin transaction;
sqlite>
sqlite> DROP TABLE "Car";
sqlite> DROP TABLE "ParkingLotLevel";
sqlite> DROP TABLE "ParkingLot";
sqlite>
sqlite> Commit transaction;
sqlite>

No error!  Since the constraints are deferred, the order of the table
drops shouldn't matter, but it clearly does.

I've included a full failing example at the following link.  Beware
that it's quite long and full of boring automated code.  The
interesting part is right at the end and corresponds with what I wrote
above.

http://patearl.net/files/broken4.txt

This problem (or some variation thereof) has been making me crazy for
the past three days.  So nice to finally have a reasonable looking
test case. :)

Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Foreign constraints and table recreation

2010-05-10 Thread Patrick Earl
On Mon, May 10, 2010 at 10:58 AM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> It should not be possible to have circular dependencies.  Because you somehow 
> got the data in in the first place, and /that/ wouldn't have been possible 
> had you had circular dependencies.  Part of normalising your data structure 
> includes making sure that you haven't duplicated data.

I meant foreign keys that cause tables to have circular relationships.
 For example, a customer might have a list of credit cards (the credit
cards table has a customer id) and the customer has a default credit
card (the customer table has a credit card id).  I realize you could
make a third table to store the "default credit card" relationship,
but as you observed, I'm looking at the general case.

> I do note that you appear to be trying to solve an extremely general case, as 
> if you, the programmer, have no idea why your schema is the way it is.  I 
> have to warn you that if you're going to solve the general case, you are 
> going to run into situations which are not solvable without considering 
> individual rows of a table.

Indeed, hence why it's so complicated without using deferred
constraints.  Unfortunately, enabling deferred constraints leads to
later detection of errors during typical development.  With immediate
constraints, even using null in fields temporarily might not solve the
issue, since there may be not-null constraints to deal with.  In any
case, suffice to say that it is indeed quite complicated.  Getting
back to one of the points that started this conversation, the complex
nature of operations needed to transactionally modify tables with
foreign key integrity preservation suggests to me that this would be
something the database engine could provide a helping hand with.  It
wouldn't necessarily need to be full support for alter table, other
options presented previously would also help.

> But this gets back to an earlier point of mine: why go through any of this 
> performance at all ?  Why do you need to create temporary copies of databases 
> only to originally delete and replace the originals ?

The simplified example I provided had no changes to the tables, but in
the real scenario, at least one of the tables will need some sort of
modification.

Thanks for the detailed replies.  I appreciate your insight.

Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Foreign constraints and table recreation

2010-05-10 Thread Patrick Earl
Thanks Simon.  I believe you're correct in that I can recreate all
dependent tables.  I had attempted this trick earlier, but was doing
so in the context of immediate mode constraints, and that made the
re-insertion of data and dropping of tables exceptionally complicated
in some cases (such as circular references between tables).

So to summarize, the strategy for modifying a table with foreign
constraints enabled is to:

1.  Find all direct and indirect dependants of the table being modified.
2.  Create temporary tables for all of these.
3.  Copy the data from the main tables into these temporary tables.
3a.  If no circular dependencies, do a topological sort on the tables
to get the correct insertion order.
3b.  If circular dependencies, either use deferred constraints or come
up with a sophisticated algorithm to reinsert the original data (needs
to take into account not null columns with circular references
present).
4.  Drop all the original tables, again with similar steps to 3a and 3b.
5.  Rename all the temporary tables to their original names.

I will try this algorithm today and report back if I fail.   Since I
don't have time to imagine an algorithm to delete/insert/update rows
in an order that doesn't break constraints, I've ended up using
deferred constraints (undesirable in my case) just to support table
modification.  It would be great if the kind of complexity above was
somehow encapsulated in the database engine, instead of having users
work around it with non-trivial steps.  As a side note, the above
algorithm isn't likely to be particularly performant on databases with
significant data present.  In the general case of multiple individual
modifications to tables (such as in the context of a database change
manangement framework), the amount of work being done by the DB to
modify the table is quite time consuming.

Thanks for your help with this.

   Patrick Earl

On Mon, May 10, 2010 at 5:18 AM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 10 May 2010, at 7:34am, Patrick Earl wrote:
>
>>    PRAGMA foreign_keys = ON;
>>
>>    CREATE TABLE ParkingLot (Id int NOT NULL PRIMARY KEY);
>>    CREATE TABLE Car (Id int NOT NULL PRIMARY KEY, ParkingLotId int
>> NOT NULL REFERENCES ParkingLot (Id) DEFERRABLE INITIALLY DEFERRED);
>>    INSERT INTO ParkingLot (Id) VALUES (1);
>>    INSERT INTO Car (Id, ParkingLotId) VALUES (2, 1);
>>
>>    BEGIN TRANSACTION;
>>    CREATE TABLE ParkingLotTemp (Id int NOT NULL PRIMARY KEY);
>>    INSERT INTO ParkingLotTemp (Id) SELECT Id FROM ParkingLot;
>>    DROP TABLE ParkingLot;
>>    ALTER TABLE ParkingLotTemp RENAME TO ParkingLot;
>>    COMMIT TRANSACTION;
>>
>> Even though at the end of the transaction you can select and find the
>> appropriate rows in the car and parking lot tables, committing the
>> transaction causes a foreign constraint violation.
>
> I'm not sure how you expected this to work.  You declare ParkingLot as a 
> parent table of Car, but then you DROP TABLE ParkingLot, leaving Car an 
> orphan.  The only legitimate way to do this is to DROP TABLE Car first, or to 
> remove the foreign key constraint from it (which SQLite doesn't let you do).  
> The fact that you rename another table 'ParkingLot' later has nothing to do 
> with your constraint: the constraint is linked to the table, not to the 
> table's name.
>
> If you're going to make a temporary copy of ParkingLot, then make a temporary 
> copy of Car too:
>
>   CREATE TABLE CarTemp (Id int NOT NULL PRIMARY KEY, ParkingLotId int
> NOT NULL REFERENCES ParkingLotTemp (Id) DEFERRABLE INITIALLY DEFERRED);
>   INSERT INTO CarTemp (Id) SELECT Id FROM Car;
>
> Then you can drop both original tables and rename both 'temp' tables.  
> However, I don't see why you're doing any of this rather than just adding and 
> removing rows from each table as you need.
>
> Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Foreign constraints and table recreation

2010-05-10 Thread Patrick Earl
Greetings all.

I've been having much trouble with foreign constraints and updating
the table schema.  I used to just recreate the table and rename, but
now that I wish to enable foreign keys on my connections, it seems
that I can no longer do this.  The following statements produce a
foreign constraint violation, which seems to contradict the current
documentation, which says that dropping the table with deferred
constraints will only produce a violation if not corrected by the end
of the transaction.

PRAGMA foreign_keys = ON;

CREATE TABLE ParkingLot (Id int NOT NULL PRIMARY KEY);
CREATE TABLE Car (Id int NOT NULL PRIMARY KEY, ParkingLotId int
NOT NULL REFERENCES ParkingLot (Id) DEFERRABLE INITIALLY DEFERRED);
INSERT INTO ParkingLot (Id) VALUES (1);
INSERT INTO Car (Id, ParkingLotId) VALUES (2, 1);

BEGIN TRANSACTION;
CREATE TABLE ParkingLotTemp (Id int NOT NULL PRIMARY KEY);
INSERT INTO ParkingLotTemp (Id) SELECT Id FROM ParkingLot;
DROP TABLE ParkingLot;
ALTER TABLE ParkingLotTemp RENAME TO ParkingLot;
COMMIT TRANSACTION;

Even though at the end of the transaction you can select and find the
appropriate rows in the car and parking lot tables, committing the
transaction causes a foreign constraint violation.

I used to be able to do all my DDL inside of transactions to ensure
sanity was preserved, but I can't figure out how to make that work any
longer.  It seems I'd have to drop out of a transaction, turn off the
foreign keys, do the DDL, then turn the foreign keys back on.

I'm not sure if the above behavior was by design or not, but it's not
even ideal using deferred constraints and drop/rename to put the table
back.  The following potential features may also help contribute to a
solution.

1.  Allow ALTER TABLE to add/remove foreign constraints.
2.  Allow foreign constraints to be deferred for the duration of a
transaction.  (ie. SET CONSTRAINTS)
3.  Allow foreign keys to be disabled / enabled within transactions
using pragma.
4.  Allow tables to be renamed without causing foreign references to
that table to rename.  You could then rename your original table,
create a new one in its place, and put all the original data back.

In any case, I'm looking forward to some sort of improvement to the
situation.  Maybe I'm missing something, but I've spent my weekend
banging my head against this one.

Best regards,

Patrick Earl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newbie problem using special column name

2010-01-25 Thread Patrick Ben Koetter
* Martin Engelschalk <sqlite-users@sqlite.org>:
> try enclosing your column name with double quotes "
> 
> create table test("column-1" varchar(255))
> 
> 
> However, I strongly advise not to use this character, because it is the 
> minus operator in SQL.  You will have to make sure that you enclose the 
> column name every time you (or someone else) uses it.
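
For instance, once the column is created with a quoted name, every later
statement has to quote it too — a quick sketch:

   sqlite> CREATE TABLE test("column-1" varchar(255));
   sqlite> INSERT INTO test("column-1") VALUES ('x');
   sqlite> SELECT "column-1" FROM test;
   x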

Thanks. Unfortunately I don't have a choice. An application I don't have
control over expects such strange table names.

p...@rick



> 
> Martin
> 
> Patrick Ben Koetter wrote:
> > Can I add a column name containing a dash "-" and if yes, how would I do 
> > that?
> >
> > I am asking because I fail to add a column name that contains a dash "-" 
> > and I
> > don't know if I cause the problem (easy solution) or if it's something else
> > causing this to fail.
> >
> > Here's what I try:
> >
> >   sqlite> create table test(column-1 varchar(255));
> >   SQL error: near "-": syntax error
> >
> > So far I have had a look at the SQLite documentation, but couldn't find
> > anything that would tell me about 'reserved' characters or how I would 
> > escape
> > a dash.
> >
> > Thanks,
> >
> > p...@rick
> >   
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Newbie problem using special column name

2010-01-25 Thread Patrick Ben Koetter
Can I add a column name containing a dash "-" and if yes, how would I do that?

I am asking because I fail to add a column name that contains a dash "-" and I
don't know if I cause the problem (easy solution) or if it's something else
causing this to fail.

Here's what I try:

  sqlite> create table test(column-1 varchar(255));
  SQL error: near "-": syntax error

So far I have had a look at the SQLite documentation, but couldn't find
anything that would tell me about 'reserved' characters or how I would escape
a dash.

Thanks,

p...@rick

-- 
state of mind
Digitale Kommunikation

http://www.state-of-mind.de

Franziskanerstraße 15  Telefon +49 89 3090 4664
81669 München  Telefax +49 89 3090 4666

Amtsgericht München    Partnerschaftsregister PR 563
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] best language match for SQLite?

2008-09-16 Thread Patrick
I am a beginner to intermediate Python programmer. I can use SQLite with 
it just fine, but it is my understanding that relational databases and 
object-oriented programming are not the perfect marriage.

I was just wondering if anyone had an opinion on the most ideal language 
to use with SQLite?

I love Python but I LOVE SQLite; I would learn another language just to 
use it better. -Patrick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


RE: [sqlite] May I ask why the source distribution mechanism was changed starting with 3.3.14?

2007-04-30 Thread Bennett, Patrick
People (myself being one of them) were asking if it could be put back
the way it was.  
Several of us (those that replied at least) stated our dislike of the
new single .c file format. 
It was also a question (hence the subject line).  No one ever replied.
It seemed like something worthy of at least a reply.

- Patrick Bennett

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 30, 2007 5:46 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] May I ask why the source distribution mechanism
was changed starting with 3.3.14?

Martin Jenkins <[EMAIL PROTECTED]> wrote:
> 
> As far as I know, the dev team is Dr Hipp and Dan Kennedy (apologies if
> there's someone else and I missed you) and I agree, it is slightly odd
> for neither of them to reply.
> 

Why is it odd?  The issue is not something that needs replying
to.  This is not a bug.  There has been a request for a different
kind of partially-compiled code distribution for the convenience
of some users.  We have observed the request. What more needs to 
be said?

--
D. Richard Hipp <[EMAIL PROTECTED]>



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] May I ask why the source distribution mechanism was changed starting with 3.3.14?

2007-04-30 Thread Bennett, Patrick
Ok, thanks for pointing that out.  
I wasn't sure who maintained the binary distribution and based on the
recent list activity, I assumed someone who was responsible would've
already replied.

Patrick

-Original Message-
From: Martin Jenkins [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 30, 2007 4:09 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] May I ask why the source distribution mechanism
was changed starting with 3.3.14?

Bennett, Patrick wrote:
> No comment at all?  That's three users asking for this now.  :(

Dr Hipp usually responds pretty quickly, but sometimes he's away on 
business. You know, supporting the paying customers... ;)

Martin


-
To unsubscribe, send email to [EMAIL PROTECTED]

-




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] May I ask why the source distribution mechanism was changed starting with 3.3.14?

2007-04-30 Thread Bennett, Patrick
No comment at all?  That's three users asking for this now.  :(

Patrick

-Original Message-
From: Clark Christensen [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 27, 2007 11:12 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] May I ask why the source distribution mechanism
was changed starting with 3.3.14?

In general, I agree.  I miss the zipped set of pre-processed C source.

Since you have the Linux-based build system at your disposal, you can
get what you're used to having with

make target_source

on the Linux system.  This creates a tsrc directory containing the
familiar pre-processed C source for use in your Windows build.

For what it's worth, I was able to build the sqlite3.dll on Windows
(VC6) from the "amalgam" sqlite3.c file without having to have lemon or
any of the other SQLite-specific build components - except that pesky
sqlite3.def file.  I still have to download the precompiled Win32 DLL to
get that.

Looking at the generated makefile (from the configure script) on Linux,
I see there's a rule for making sqlite3.def.  The algorithm (grep ... |
sed ...) basically works on Windows, using the sqlite3.obj, but the
resulting def is short a few lines compared to the one I downloaded.

Last, I tried to build the shell from the compiled amalgam, but it
errors-out on the link phase complaining about unresolved externals
(sqlite3_iotrace or somesuch).  And I don't have time right now to
figure it out.  I'm not a C programmer, and am unfamiliar with
compilers, linkers, compiler options, and all that.  I usually just have
to do trial and error (lots of error) and eventually I figure it out.

 -Clark

- Original Message 
From: "Bennett, Patrick" <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Friday, April 27, 2007 6:57:38 AM
Subject: [sqlite] May I ask why the source distribution mechanism was
changed starting with 3.3.14?

The last time I downloaded SQLite was version 3.3.12.
For that version (and many prior versions), I could download a
preprocessed archive containing all of the source code, except parse.h,
parse.c and opcode.h(? - this is from memory) were the 'generated'
versions.
The source for the command-line shell was there as well as all other
source code.  This was close to ideal for me, as I was able to use it in
our (custom) build system and build for Windows and Linux with almost no
effort at all.
Now, I have two choices:  
1) Download a tar of everything and have a version of lemon available
(which nothing else here uses), as well as awk and other tools which don't
fit well within our windows-side compiles, or..
2) Download a preprocessed archive that contains only two files
(sqlite3.c and sqlite3.h) losing the ability to easily track changes (or
make local patches / fixes if necessary) as well as no longer having the
shell code.  I'll have to download both archives and piece together my
own build.

Hopefully this doesn't come off as too much of a nag, but the way it was
before was quite convenient and the new method seems to have taken the
'preprocessed' notion to the extreme.  
If this is how it's going to be from now on, I'll just have to adjust,
but if there wasn't any specific reason for changing, you can count this
as a vote for the 'old' way.  :)

Cheers...
Patrick Bennett



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] May I ask why the source distribution mechanism was changed starting with 3.3.14?

2007-04-27 Thread Bennett, Patrick
The last time I downloaded SQLite was version 3.3.12.
For that version (and many prior versions), I could download a
preprocessed archive containing all of the source code, except parse.h,
parse.c and opcode.h(? - this is from memory) were the 'generated'
versions.
The source for the command-line shell was there as well as all other
source code.  This was close to ideal for me, as I was able to use it in
our (custom) build system and build for Windows and Linux with almost no
effort at all.
Now, I have two choices:  
1) Download a tar of everything and have a version of lemon available
(which nothing else here uses), as well as awk and other tools which don't
fit well within our windows-side compiles, or..
2) Download a preprocessed archive that contains only two files
(sqlite3.c and sqlite3.h) losing the ability to easily track changes (or
make local patches / fixes if necessary) as well as no longer having the
shell code.  I'll have to download both archives and piece together my
own build.

Hopefully this doesn't come off as too much of a nag, but the way it was
before was quite convenient and the new method seems to have taken the
'preprocessed' notion to the extreme.  
If this is how it's going to be from now on, I'll just have to adjust,
but if there wasn't any specific reason for changing, you can count this
as a vote for the 'old' way.  :)

Cheers...
Patrick Bennett





-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] help with understanding the C interface

2007-02-06 Thread Patrick X

int sqlite3_open(
 const char *filename,   /* Database filename (UTF-8) */
 sqlite3 **ppDb  /* OUT: SQLite db handle */
);
int sqlite3_open16(
 const void *filename,   /* Database filename (UTF-16) */
 sqlite3 **ppDb  /* OUT: SQLite db handle */
);

I'm trying to write a more portable Common Lisp interface to sqlite3
using the C interface.

The above functions have me a little confused.  So sqlite3_open: does
it return a pointer to the open database, just a success or error
code, or both?

Second, if it returns a pointer to the open db, does that pointer need
to be kept around so it can be passed to the close and other functions
within sqlite3?

--
=
knot in cables caused data stream to become twisted and kinked.
http://groups.google.com/group/lispstl
http://www.cwelug.org/
Patrick Pippen

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] using auto increment in java

2006-11-11 Thread Patrick Marchwiak

I am working on a Java program that uses SQLite for its database. I am using
auto increment on some of my inserts and am wondering if there is some easy
way to retrieve the auto incremented rowid right after inserting. I realize
that there is a C++ function that does just that, but googling did not turn
up much for Java. Thanks.
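
For reference, the same value is also reachable through plain SQL on the
connection that performed the insert, which any Java wrapper can issue as
an ordinary query (the items table here is only a placeholder):

   INSERT INTO items(name) VALUES ('example');
   SELECT last_insert_rowid();   -- rowid generated by the insert above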

-Pat


[sqlite] Missing files?

2006-03-19 Thread Patrick Bangert
Hello SQLite Community,

I found the package and want to test it, it seems very cool indeed from what
I hear. So I downloaded the tar-ball of the complete source for version
3.3.4 and tried to compile it.

This tar-ball is missing the following files: sqlite3.h, parse.h and
opcodes.h

The "pure C" archive is missing them too ... and now that I look the older
versions miss them also. Am I being silly? If this is a stupid request,
please excuse me.

I would greatly appreciate pointers as to where I can get these files so
that I could compile and test the package. Thank you so much!

Best,
Pat



RE: [sqlite] Segmentation fault on large selects

2005-08-02 Thread Patrick Dunnigan
I had something similar a while back on a 64bit HPUX box compiled with gcc.
I was getting a core dump / seg fault on big table select when I did
count(*)'s and sum()'s in the query. It wasn't all queries but it was
consist ant and repeatable. 

I was able to get the queries to work with SQLite 2.8.16. It seemed odd to
me but I couldn't figure it out or get an answer as to why it was happening.
Being under the gun I ended up going live on 2.8.16.

Could you download 2.8.16 and let us know if your process works with that
version? If so it may be the same issue and might raise the visibility. With
the performance improvements I'd much rather be on the latest version.

One side note: when I did compile using 32 bit it worked fine on version
3.2.1 but that wasn't an option I could use on those machines.

Thanks,
Patrick

-Original Message-
From: scunacc [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 02, 2005 7:16 AM
To: Christian Smith
Cc: sqlite-users@sqlite.org
Subject: Re: [sqlite] Segmentation fault on large selects

Dear Christian,


> Doesn't matter how much memory you have. If ulimits restrict how much
> memory a process can have, something has to give. Try:

The process has unlimited ulimits. 

Thanks for the suggestion, but other Perl scripts that run already use
huge amounts of memory on this machine, so that's something that was
tackled months back.

Appreciate the contribution - but - no nearer.

Kind regards

Derek Jones.







[sqlite] difference between sqlite3_reset() and sqlite3_clear_bindings()

2005-07-07 Thread Patrick Dunnigan
Can someone please explain the difference between sqlite3_reset() and 
sqlite3_clear_bindings() and when would I use one over the other.

I've read the 3.2.2 docs and it's not 100% clear to me. 

Thanks

Re: [sqlite] using sqlite as a temporary database to process lots of data

2005-06-28 Thread Patrick Dunnigan
I am currently using SQLite to process @ 400 million records (and climbing) 
a day by reading files, importing them into SQLite, and summarizing. The 
summed data goes into Oracle. This is a production application that is very 
stable. Holding the data in SQLite in memory as opposed to a C struct 
reduced development time and makes it easy to change the summarization 
output by just modifying the SQL statements.

If you think the data layouts may be changed in the future or even dynamic / 
"defined on the fly" then SQLite would be a good tool.

- Original Message - 
From: "Lorenzo Jorquera" <[EMAIL PROTECTED]>
To: 
Sent: Monday, June 27, 2005 4:55 PM
Subject: [sqlite] using sqlite as a temporary database to process lots of 
data


Hi,
 My company is starting a project in which we will have to process
large amounts of data and save the results to a db2 database.
 Because this data process needs to be very fast, the idea is to load
the data in memory and apply the transformations in C++. I think that
sqlite inmemory databases could be a good alternative to this, but
when I proposed this solution to my boss, he asked if someone has
already used sqlite for this purpose. I found that this use of sqlite
is cited in the wiki, but I can't find any concrete examples. If any
of you can share some success stories with me, I will be very
grateful.
 Thanks in advance,
 Lorenzo.
PD: please, forgive my english, it is a little rusty...





[sqlite] PRAGMA page_size problems

2005-05-05 Thread Patrick Dunnigan
This is related to a posting and also bug report (ticket 1230, I since closed) 
from last week.

First: HPUX 11.11 64bit rp4440 server, 6 CPUs, 8 gig of RAM, tcl 8.4.9, sqlite 
3.2.1

I was having problems inserting into a :memory: database using tclsqlite last 
week. 
First I create a table, then start doing inserts.

At about record 2400, I was getting a "Bus error(coredump)".

Then I added 
PRAGMA page_size = 8192
into my code prior to creating my table. This allowed me to get past the 24xx 
row problem.

I've seen drh mention that the page size can go up to 32k, so for kicks I set it 
that high and reran. The process stopped responding and at the same time the 
machine went down. I am not 100% certain that it was my process, but the timing 
was right. 

I looked in the src and found that unless I compile it with the 
-DSQLITE_MAX_PAGE_SIZE=x flag it will not allow me to go above 8192 
regardless of what I set using PRAGMA.

Here are my questions:
1. Why would one want to override the default page size (it's not completely 
clear to me though I know it helped me get past my first problem)? I have read 
the docs and still don't really understand what the purpose is.

2. Why did my process crash at row 24xx when I didn't override the default?

3. What should happen if I try to set it above the Max default of 8192?

Thanks,
Patrick


Re: [sqlite] BLOB read/write in chunks?

2005-05-05 Thread Patrick Dunnigan

Ran across this this morning ... removed in 3.0

(10) Are there any known size limits to SQLite databases?

  As of version 2.7.4, SQLite can handle databases up to 2^41 bytes (2 
terabytes) in size on both Windows and Unix. Older versions of SQLite were 
limited to databases of 2^31 bytes (2 gigabytes).

  SQLite version 2.8 limits the amount of data in one row to 1 megabyte. 
SQLite version 3.0 has no limit on the amount of data that can be stored in 
a single row.

  The names of tables, indices, views, triggers, and columns can be as long 
as desired. However, the names of SQL functions (as created by the 
sqlite_create_function() API) may not exceed 255 characters in length.

  http://www.sqlite.org/faq.html#q10
- Original Message - 
From: "Kervin L. Pierre" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, May 05, 2005 9:45 AM
Subject: Re: [sqlite] BLOB read/write in chunks?


D. Richard Hipp wrote:
> On Wed, 2005-05-04 at 23:31 -0400, Henrik Bruun wrote:
>
>>Does Sqlite support reading/writing BLOB data in chunks?  I'm
>>considering storing very large items (200MB+) in BLOB fields.
>>
>
>
> With SQLite, the entire BLOB must be read or written all
> at once.  There is no limit on the size of BLOBs.  (The

I was under the impression that there was a row size
limit of 1 MB.  Saw this in documentation somewhere.

Was that limit removed?

Regards,
Kervin





Re: [sqlite] how to COPY FROM stdin in 3.x?

2005-05-04 Thread Patrick Dunnigan
Is this also true for in-memory databases? 
- Original Message - 
From: "D. Richard Hipp" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, May 04, 2005 5:03 PM
Subject: Re: [sqlite] how to COPY  FROM stdin in 3.x?


On Wed, 2005-05-04 at 13:49 -0700, Darren Duncan wrote:
> At 1:27 PM -0700 5/4/05, scott wrote:
> >I can rewrite this to do individual inserts, but would like to find 
> >out whether there's a better workaround/idiom for this.
> 
> If you are going to use an INSERT statement, then your usage is a 
> prime candidate to use prepared statements with.  Your insert 
> statement is parsed once and then the actual data insert is 
> relatively little work and performs quickly.  I believe this sort of 
> activity is what happens to implement the you wanted anyway. -- 
> Darren Duncan
> 

Be sure to enclose your INSERTs inside a BEGIN...COMMIT.
Most of the time is spent doing COMMIT.  INSERT is very fast - 
many tens of thousands per second on a typical workstation.
But COMMIT is limited by disk rotation speed to about 60/sec.
(It is not possible to COMMIT faster than this and still
be ACID.) If you fail to enclose your INSERTs in a single
BEGIN...COMMIT then separate BEGIN...COMMITs are automatically
added around each INSERT, which really slows things down.
-- 
D. Richard Hipp <[EMAIL PROTECTED]>
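
The batching pattern described above, as a minimal sketch (the table t(x)
is just a placeholder):

   BEGIN;
   INSERT INTO t(x) VALUES (1);
   INSERT INTO t(x) VALUES (2);
   -- ...many more inserts...
   COMMIT;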






Re: [sqlite] OT: General SQL Question

2005-04-30 Thread Patrick Dunnigan
Then it would be something like:


select A.item1,A.item2, B.col1, B.col2, C.supplier_name
from audititems A, audits B, suppliers C
where A.parentauditid = B.parentauditid
and B.supplierid = C.supplierid
and < rest of where clause >
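
The same query written with explicit join syntax, for anyone who prefers it
(equivalent result; the column names are the placeholders used above):

select A.item1, A.item2, B.col1, B.col2, C.supplier_name
from audititems A
join audits B on A.parentauditid = B.parentauditid
join suppliers C on B.supplierid = C.supplierid
where < rest of where clause >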

- Original Message - 
From: "Dan Keeley" <[EMAIL PROTECTED]>
To: <sqlite-users@sqlite.org>
Sent: Saturday, April 30, 2005 7:36 AM
Subject: Re: [sqlite] OT: General SQL Question


Well, audititems has a field parentauditid which links into audits.  Audits
has a supplierid field which is the key on suppliers.

From: "Patrick Dunnigan" <[EMAIL PROTECTED]>
Reply-To: sqlite-users@sqlite.org
To: <sqlite-users@sqlite.org>
Subject: Re: [sqlite] OT: General SQL Question
Date: Sat, 30 Apr 2005 07:16:55 -0400

It is possible depending on the make-up of the tables.

post the table structures and index / keys and I'll help


- Original Message -
From: "Dan Keeley" <[EMAIL PROTECTED]>
To: <sqlite-users@sqlite.org>
Sent: Saturday, April 30, 2005 6:38 AM
Subject: [sqlite] OT: General SQL Question


Hi, I know this isn't related to SQLite, however I don't really know where
else to ask.

I have 3 tables - Supplier, Audits and Audititems.

At the moment, I select and join audits and Audititems and it works great.
However I need fields from the supplier chain, so I effectively need to do a
3-way join.

Is this possible?

Thanks!
Dan






Re: [sqlite] OT: General SQL Question

2005-04-30 Thread Patrick Dunnigan
It is possible depending on the make-up of the tables.

post the table structures and index / keys and I'll help


- Original Message - 
From: "Dan Keeley" <[EMAIL PROTECTED]>
To: 
Sent: Saturday, April 30, 2005 6:38 AM
Subject: [sqlite] OT: General SQL Question


Hi, I know this isn't related to SQLite, however I don't really know where
else to ask.

I have 3 tables - Supplier, Audits and Audititems.

At the moment, I select and join audits and Audititems and it works great.
However I need fields from the supplier chain, so I effectively need to do a
3-way join.

Is this possible?

Thanks!
Dan







Re: [sqlite] does sqlite run on 64 bit?

2005-04-29 Thread Patrick Dunnigan
This is HPUX 11.11 and I'm using the tcllib. gcc 3.4.3.



- Original Message - 
From: "Ulrik Petersen" <[EMAIL PROTECTED]>
To: <sqlite-users@sqlite.org>
Sent: Friday, April 29, 2005 6:32 PM
Subject: Re: [sqlite] does sqlite run on 64 bit?


Hi Patrick,

Patrick Dunnigan wrote:

>Has anyone been successful running SQLite on a 64 bit platform? If so, what 
>platform? Using C libs or TCLSQLite?
>
>
I've had success in running 2.8.15 in 64-bit mode on AMD64, Gentoo
Linux, gcc compiler, C (not tcl).  It worked out of the box, so I'm
afraid I can't tell you how to make it work.  What platform are you
trying to run it on?

Ulrik Petersen

-- 
Ulrik Petersen, MA, B.Sc.
University of Aalborg, Denmark







[sqlite] does sqlite run on 64 bit?

2005-04-29 Thread Patrick Dunnigan
Has anyone been successful running SQLite on a 64 bit platform? If so, what 
platform? Using C libs or TCLSQLite?
I am having limited success and need guidance.
Thanks

[sqlite] memory fault in TCLSQLLite on HPUX 11.11 64 bit creating table

2005-04-28 Thread Patrick Dunnigan
Hello,
I am consistently getting a memory fault in sqlite both through the tclsqlite 
shared library and the sqlite3 executable when I try to create a table. I have 
tried both sqlite 3.2.1 and 2.8.16. The TCL Versions tried are 8.4.6 and 8.4.9.

I have had success on 32-bit HPUX 11.11, but when porting to 64-bit we started 
receiving these errors.

I will note that both TCL and SQLite were compiled with a 64-bit gcc compiler.

When I traced this down, it seems that the code blows up on the following line 
in printf.c ->
 *(--bufpt) = cset[longvalue%base];
longvalue is defined as 
  UINT64_TYPE longvalue; /* Value for integer types */

and in this case, longvalue is 0. When I change the do..while loop to a 
while (longvalue > 0) loop, it gets past this problem, but then I get the error 
"SQL logic error or missing database" further on down. 

If I set -DVDBE_PROFILE=1 at compile time, I get the message "unknown opcode". 

The code that I perform to reproduce in tcl is:
load /opt/centadm/builds/sqlite/bld/.libs/libtclsqlite3.sl
sqlite db :memory:
db eval "create table t1 (a,b);"

And at the sqlite prompt:
create table t1 (a,b);

Please let me know any other debugging output that would be helpful.
Thanks

Here is output from vdbe_profile.out:

 
770136623e623e676260153a165c2920072016321632627d15121a31214e206260151d6c16315c5c5c125c3c29777a20623e7827637b776e
 0  000 Goto 0   52
 0  001 ReadCookie   01
 0  002 If   07
 0  003 Integer  10
 0  004 SetCookie01
 0  005 Integer  10
 0  006 SetCookie04
 0  007 CreateTable  00
 0  008 Integer  00
 0  009 OpenWrite01
 0  00   10 SetNumColumns05
 0  00   11 NewRecno 00
 0  00   12 Dup  00
 0  00   13 String8  00
 0  00   14 PutIntKey00
 0  00   15 Close00
 0  00   16 Pull 10
 0  00   17 Close00
 0  00   18 Dup  00
 0  00   19 MemStore 01
 0  00   20 Dup  10
 0  00   21 MemStore 11
 0  00   22 Integer  00
 0  00   23 OpenRead 01 # sqlite_master
 0  00   24 SetNumColumns05
 0  00   25 MemLoad  10
 0  00   26 MustBeInt1   30
 0  00   27 NotExists0   30 pk
 0  00   28 Recno00
 0  00   29 ListWrite00
 0  00   30 Close00
 0  00   31 Integer  00
 0  00   32 OpenWrite01
 0  00   33 SetNumColumns05
 0  00   34 ListRewind   00
 0  00   35 ListRead 0   46
 0  00   36 Dup  00
 0  00   37 NotExists0   35
 0  00   38 String8  00 table
 0  00   39 String8  00 t1
 0  00   40 String8  00 t1
 0  00   41 MemLoad  00
 0  00   42 String8  00 CREATE TABLE t1 (a,b)
 0  00   43 MakeRecord   50 tttit
 0  00   44 PutIntKey00
 0  00   45 Goto 0   35
 0  00   46 ListReset00
 0  00   47 Close00
 0  00   48 Integer  10
 0  00   49 SetCookie00
 0  00   50 ParseSchema  00 tbl_name='t1'
 0  00   51 Halt 00
 0  00   52 Transaction  01
 0  00   53 VerifyCookie 00
 0  00   54 Goto 01
 0  00   55 Noop 00



[sqlite] Improving text search using LIKE '%string to search%'

2004-10-07 Thread Patrick Peccatte
I use SQLite 2.8.14 for a documentation application which needs a lot of
text search.
Text searches are done using something like:
SELECT * FROM Texts WHERE CurrentText LIKE '%string to search%';
I use an index on every searchable column.
Response times are acceptable until about 10 texts are stored in the database,
but become very long with 50 texts in the database.
Does any method exist to improve this kind of search?
--
Patrick Peccatte - Soft Experience
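
One note that may help: a pattern with a leading '%' cannot use an ordinary
index at all, so every row has to be scanned, and that cost grows with the
number of texts. If prefix searches are acceptable, they can be rewritten as
an index-friendly range test; a minimal sketch (the search string is purely
illustrative, and the range test is case-sensitive, unlike LIKE):

-- Leading wildcard: forces a scan of every CurrentText value.
SELECT * FROM Texts WHERE CurrentText LIKE '%string to search%';

-- Prefix-only search rewritten as a range, which an index on
-- CurrentText can answer without touching every row.
SELECT * FROM Texts
 WHERE CurrentText >= 'string to search'
   AND CurrentText <  'string to searci';

For genuine substring search over large numbers of documents, some form of
full-text index maintained alongside the table is usually the only real
speed-up.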