Re: [sqlite] Spooky behavior of sqlite3_prepare_v2

2010-10-12 Thread Zvone
Solved this myself so no help needed.

Apparently the problem was the last parameter of sqlite3_prepare_v2 (the
tail pointer), which was left pointing at unallocated memory after
execution. As NULL is allowed there, I simply passed NULL to ignore it,
which solved the whole thing.

That kind of explains all the spooky behaviour: obviously, adding
additional statements moved the pointer in memory.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lemon maintainer

2010-10-12 Thread Richard Hipp
On Tue, Oct 12, 2010 at 6:54 PM, Vincent Adam Burns wrote:

> Is there an active maintainer for the Lemon Parser? I'm getting some
> parsing conflicts, ex:
>
> statement ::= IF LEFT_PAREN expression RIGHT_PAREN statement ELSE
> statement.
> statement ::= IF LEFT_PAREN expression RIGHT_PAREN statement.
>
> expr ::= expr EQ|NE|GT|GE|LT|LE expr.
> expr ::= expr MOD|DIV|MOD expr.
> expr ::= expr ADD|SUB expr.
> expr ::= primaryExpr.
>

Problems with Lemon should be directed to the SQLite mailing list.

However, parsing conflicts are not problems with Lemon, but rather problems
with your grammar.  Unfortunately there is (to my knowledge) no tutorial on
how to write conflict-free LALR(1) grammars for Lemon.  You have to grok
LALR(1) first, then Lemon just makes sense.
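For what it's worth, the conflicts quoted above look like the two classic LALR(1) cases: binary operators with no declared precedence, and the dangling-else ambiguity (the quoted grammar also mixes `expression` and `expr`, and lists MOD twice). An untested sketch of how this is usually expressed in Lemon, using the poster's token names; Lemon has no `|` alternation in rules, so each operator gets its own rule, and later precedence declarations bind tighter:

```lemon
%nonassoc IFONLY.          /* pseudo-token: lower precedence than ELSE */
%nonassoc ELSE.
%left EQ NE GT GE LT LE.
%left ADD SUB.
%left MUL DIV MOD.         /* declared last, so it binds tightest */

statement ::= IF LEFT_PAREN expr RIGHT_PAREN statement. [IFONLY]
statement ::= IF LEFT_PAREN expr RIGHT_PAREN statement ELSE statement.

expr ::= expr ADD expr.
expr ::= expr SUB expr.
expr ::= expr MUL expr.    /* ...and likewise for DIV, MOD, EQ, NE, etc. */
expr ::= primaryExpr.
```

The `[IFONLY]` suffix assigns that rule the pseudo-token's precedence, so the parser prefers shifting ELSE (binding it to the nearest IF) over reducing the else-less rule.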

Perhaps you should start with a copy of the Dragon book.  New copies are
wildly over-priced.  But it looks like you can get a used copy from Amazon
for just a few bucks.
http://www.amazon.com/Compilers-Principles-Techniques-Alfred-Aho/dp/0201100886



>
> --
> --- You know you've achieved perfection in design, not when you have
> nothing more to add, but when you have nothing more to take away.
>



-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Spooky behavior of sqlite3_prepare_v2

2010-10-12 Thread Zvone
I don't believe this!

Now I want to do a wrapper for Insert - by doing simple calls to:

loop this:
{
    sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    sqlite3_bind_*(stmt, ...);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
}

By my understanding this is what sqlite3_exec does - prepare, then
step, then finalize - and it works.

The problem is: when I insert into that function any (unrelated)
statement, like just printing to a log file or something... it stops
working in the weirdest possible way.

sqlite3_prepare_v2 simply doesn't return (neither in the debugger nor
in the executable).  It doesn't use CPU time, doesn't allocate extra
memory... it just waits and waits forever.

Then I add another log line or anything else unrelated - and
sqlite3_prepare_v2 returns normally (with an SQLITE_OK result).

Am I going nuts? Any clues what to look for? Is some 8-byte or other
byte alignment at play here? And more importantly, why does it happen
ONLY with the sqlite3_prepare statement and not with others like open
or finalize?

Am I doing the bindings wrong? Shouldn't that return some error rather
than loop endlessly?


[sqlite] Single Support

2010-10-12 Thread Stefan_E

If I get it correctly, SQLite currently supports REALs only as Double,
requiring 8 bytes per number, which is quite expensive for many applications.

What are the chances to get Single support in a future release? What is the
appropriate way to propose such a feature?

I understand that there might be problems with type affinity, specifically
in determining lossless conversion between Double and Single; so this might
have to be a compile-time option or something - which might be just fine
for many practical purposes ...

But it would still make many large .sqlite files considerably smaller!

Thanks for your help!   Stefan


-- 
View this message in context: 
http://old.nabble.com/Single-Support-tp29946932p29946932.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Unsafe trigger functions (was: Registering a custom tokenizer per database rather than per connection)

2010-10-12 Thread Drake Wilson
Quoth Scott Hess , on 2010-10-12 10:33:54 -0700:
> Since the tokenizer-registration code accepts an encoded pointer to a
> vtable, it probably should be considered unsafe to expose to users.
> 
> For Gears and Chrome, where SQLite is exposed to web developers, we
> did manual tokenizer registration and disabled the SQL-based
> registration code.  This can be done using authorizers.

Yes, it can.  However, it seems like surprising behavior that opening
an arbitrary database and writing to it can corrupt the host program
(via a specially-crafted trigger) unless you apply a restrictive
authorizer first.  In particular, this makes writing to arbitrary
databases from the SQLite shell unsafe without a lot of rigamarole,
since I know of no way to even set an authorizer in that case; one
would have to remember to select and examine all the triggers first.

It may be that a better approach would be to be more restrictive about
triggers by default, then allow the user to reset a separate "trigger
authorizer" to something more permissive if it is desired.  I think
this would not break existing code so long as it did not restrict new
functions by default, since calling these particular unsafe functions
in triggers (I think only FTS3 registration falls into this category
so far?) is already mostly useless.

In fact, I think my thoughts on this are now well-formed enough to
submit an actual ticket to this effect.  Since the bug report page at
http://www.sqlite.org/src/wiki?name=Bug+Reports says that posting to
the list is the correct way to do this, please consider the above such
a request.  (I will look into a patch if I have time, though this is
moderately unlikely.)

Additional comments are naturally welcome.

   ---> Drake Wilson


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-12 Thread Simon Slavin

On 12 Oct 2010, at 6:30pm, Scott Hess wrote:

>  Something like:
> 
>  INSERT INTO my_table VALUES (blah, blah, blah);
>  DELETE FROM my_table WHERE date_or_whatever < some_limit LIMIT 5;
> 
> If your insert load consistently tracks from day to day, this mostly
> looks like a circular buffer.  The main downside is that it makes your
> heaviest insert period also be your heaviest delete period, which is
> unfortunate.

Instead of having a queue length which is an integer number of days, have one
which is, for example, 7.5 days long.  That way at 3pm you're deleting the
records which were made at 3am.

Simon.


Re: [sqlite] Registering a custom tokenizer per database rather than per connection

2010-10-12 Thread Scott Hess
On Tue, Oct 12, 2010 at 8:40 AM, Drake Wilson  wrote:
> Just to clarify, a trigger isn't useful for this in general, because
> you still have nowhere to get the function pointer from.  Above I was
> worrying that the current SQLite code might be less safe than it could
> be as a result of allowing this in triggers by default.

Since the tokenizer-registration code accepts an encoded pointer to a
vtable, it probably should be considered unsafe to expose to users.
For Gears and Chrome, where SQLite is exposed to web developers, we
did manual tokenizer registration and disabled the SQL-based
registration code.  This can be done using authorizers.

-scott


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-12 Thread Scott Hess
On Mon, Oct 11, 2010 at 8:13 AM, Jay A. Kreibich  wrote:
> On Mon, Oct 11, 2010 at 02:08:54PM +0200, Michele Pradella scratched on the 
> wall:
>>   Ok so the main idea it's always the same: split the DELETE to make the
>> operation on less records, but do it more often.
>
>  Another thought occurs to me...   If your insert rates are fairly
>  consistent (e.g. the number of records per minute is consistent, or
>  at least predictable) then you can just use the table as a circular
>  buffer.  That is, rather than inserting new data and deleting the
>  old data, simply overwrite the older records with the new data.
>  Rather than inserting new records, simply find the oldest record and
>  update it with the new data.

In a past life, I ran a large production system on MYSQL, and what we
did in a similar situation is add the appropriate delete after
inserts.  Something like:

  INSERT INTO my_table VALUES (blah, blah, blah);
  DELETE FROM my_table WHERE date_or_whatever < some_limit LIMIT 5;

If your insert load consistently tracks from day to day, this mostly
looks like a circular buffer.  The main downside is that it makes your
heaviest insert period also be your heaviest delete period, which is
unfortunate.  The LIMIT 5 is there so that if your volume varies, you
eventually catch up.  If your volume varies a lot, you probably would
need to increase the value.

It might seem like this will make inserts twice as slow, but I'd
implement it this way before complexifying things.  Since the index
the delete runs against is probably in the page cache, you might find
that it works just fine.  If I had to modify it, I might have it do
the delete on every 100th insert, and change the limit to 100 +
epsilon.  After the delete, use sqlite3_changes() to check whether it
deleted to the limit, and increase epsilon a bit for the next pass,
until the limit is not reached, then drop epsilon back towards 0.
But, like I said, you might find that the simple solution does the job
just fine.

Modifying your selects to ignore too-old records shouldn't slow things
down materially.  If you had done the query 10 minutes ago, you'd have
gotten those 25 records, and when you do it now you get those 25
records plus 1 more, and throw away 3 of them - but if it would have
been adequate performance 10 minutes ago, why is it not adequate now?
So long as you keep the number of filtered records contained, it's
just a slight constant factor, and if you keep them very tightly
contained, the cost is almost negligible unless the query could otherwise
run against indices alone (once SQLite has fetched the row from the
table, doing an additional check on data within that row is generally
not expensive).

-scott


Re: [sqlite] gui for data entry

2010-10-12 Thread Graham Smith
Oliver

> I use OOo under WinXP and had a problem related to 3.2.1 in connection with
> UNIQUE CONSTRAINTS

Not sure I understand everything in the thread, but worth knowing as I learn.

Thanks,

Graham


Re: [sqlite] gui for data entry

2010-10-12 Thread Graham Smith
Vivien,

> Libgda: http://www.gnome-db.org

This is certainly a new one for me, I will give it a look.

Thanks,

Graham


Re: [sqlite] gui for data entry

2010-10-12 Thread Oliver Peters
Graham Smith  writes:

[...]

> Mmmm, still stuck on 3.2.1 with current release of Ubuntu (the OS I
> use most of the time, followed by my Mac), so interesting to hear your
> views on 3.2.1.
> 

I use OOo under WinXP and had a problem related to 3.2.1 in connection with
UNIQUE CONSTRAINTS

read this:
http://thread.gmane.org/gmane.comp.db.sqlite.general/57052/focus=57078
(no problem with this in 3.2.0)

and there were 1 or 2 other things that disappeared when I started using 330 m9
(connected with macros)

Oliver



Re: [sqlite] gui for data entry

2010-10-12 Thread Graham Smith
David

> Take a look at Kexi. (kexi-project.org) It is part of the Koffice suite, but 
> can be used >standalone. I had tested it on Windows and had no trouble 
> installing it.

I have also used this before (on Ubuntu), as well as the OOo option; I
find the SQLite GUIs so much nimbler than the OOo or Kexi options that
I was hoping for an alternative.

However, it looks like I will be looking at OOo and Kexi again,
which is fine, I was just looking for suggestions from people with
more experience.

Thanks,

Graham


Re: [sqlite] gui for data entry

2010-10-12 Thread Graham Smith
> the best is to take OOo 330 m9
>
> that is a milestone without all former illnesses - believe me I suffered a lot
>
> D o n' t use 3.2.1!!!

Mmmm, still stuck on 3.2.1 with current release of Ubuntu (the OS I
use most of the time, followed by my Mac), so interesting to hear your
views on 3.2.1.

Graham


Re: [sqlite] gui for data entry

2010-10-12 Thread Vivien Malerba
On 12 October 2010 17:51, Graham Smith  wrote:
> There are several GUI editors for SQLite, but is there one that allows
> creation of simple forms for data entry?  Although some allow
> adding data to a single table, none that I have looked at seem to
> allow a new record to be added when it involves more than one table.
>
> It needs to be cross platform (Linux, Windows and Mac) and ideally
> free, but a commercial product would still be considered.
>

Libgda: http://www.gnome-db.org

Regards,
Vivien


Re: [sqlite] gui for data entry

2010-10-12 Thread David Bicking
Take a look at Kexi. (kexi-project.org) It is part of the Koffice suite, but 
can be used standalone. I had tested it on Windows and had no trouble 
installing it.

It uses sqlite as its back end storage and allows you to create forms, etc. Its 
aim is to be "like" MS Access.

David


--- On Tue, 10/12/10, Graham Smith  wrote:

> From: Graham Smith 
> Subject: Re: [sqlite] gui for data entry
> To: "General Discussion of SQLite Database" 
> Date: Tuesday, October 12, 2010, 12:09 PM
> Oliver
> 
> > OpenOffice Base
> >
> > you need an ODBC driver too (http://www.ch-werner.de/sqliteodbc/)
> 
> Thanks, I did consider this, but the last time I tried, it
> was very
> clunky,  and crashed a lot, but I will have another
> look since you
> have suggested it.
> 
> Graham


[sqlite] problems with getBytes

2010-10-12 Thread Fabio Mattei
I hope I'm not OT, if I am please forgive me!

I'm writing a Java application that uses SQLite to save data on disk.

Everything works fine; I've used SQLite successfully for months with
every type of data except bytes.

I need to store and read from my sqlite database BLOB fields.

I created a table "CREATE TABLE DOCUMENTS (id integer primary key,
saveddata blob)"

And I try to query the table with the simple query: "SELECT saveddata
FROM DOCUMENTS WHERE id = ? "

I'm using the jdbc driver: http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC

I cannot figure out why, when I call the function getBytes in my result
set, my application crashes.

The Java driver is simply a wrapper and the application does not throw
any exception, so I suppose that the problem is in the SQLite layer.

The code is very basic:

FileOutputStream str = new FileOutputStream("Filename.");
Connection conn = SQLiteConnection.openConnection(fileName);
Statement stat = SQLiteConnection.openStatement(conn);
PreparedStatement prep = SQLiteConnection.openPreparedStatement(conn,
"SELECT saveddata FROM DOCUMENTS WHERE id = ? ");
byte[] by = null;
try {
 prep.setLong(1, id);
 prep.execute();

 ResultSet rs = prep.getResultSet();

 if (rs.next()) {
   byte[] outbyt = rs.getBytes(1);
   try {
     str.write(outbyt);
   } catch (IOException ex) {
     Exceptions.printStackTrace(ex);
   }
 }

 rs.close();
 prep.close();
 conn.close();
} catch (SQLException ex) {
 Exceptions.printStackTrace(ex);
}


The execution of the code stops when rs.getBytes(1) is called.

What can I do?

Fabio


Re: [sqlite] gui for data entry

2010-10-12 Thread Oliver Peters
ahh I forgot:

the best is to take OOo 330 m9

that is a milestone without all former illnesses - believe me I suffered a lot

D o n' t use 3.2.1!!!

Oliver



Re: [sqlite] gui for data entry

2010-10-12 Thread Graham Smith
Oliver

> works fine now but take the latest odbc driver from here:
>
> http://www.ch-werner.de/sqliteodbc/sqliteodbc-088pre2.exe
>
> because the 0.87 has a problem under special circumstances (macros) -
> something you will use if you like to automate some functions in your
> forms or make them more user-friendly

Thanks again,

Graham


Re: [sqlite] gui for data entry

2010-10-12 Thread Oliver Peters
Graham Smith  writes:

[...]
> 
> Thanks, I did consider this, but the last time I tried, it was very
> clunky,  and crashed a lot, but I will have another look since you
> have suggested it.

[...]

works fine now but take the latest odbc driver from here:

http://www.ch-werner.de/sqliteodbc/sqliteodbc-088pre2.exe

because the 0.87 has a problem under special circumstances (macros) - something
you will use if you like to automate some functions in your forms or make them
more user-friendly

Oliver




Re: [sqlite] gui for data entry

2010-10-12 Thread Graham Smith
Oliver

> OpenOffice Base
>
> you need an ODBC driver too (http://www.ch-werner.de/sqliteodbc/)

Thanks, I did consider this, but the last time I tried, it was very
clunky,  and crashed a lot, but I will have another look since you
have suggested it.

Graham


Re: [sqlite] gui for data entry

2010-10-12 Thread Oliver Peters
Graham Smith  writes:

> 
> There are several GUI editors for SQLite, but is there one that allows
> creation of simple forms for data entry?  Although some allow
> adding data to a single table, none that I have looked at seem to
> allow a new record to be added when it involves more than one table.
> 
> It needs to be cross platform (Linux, Windows and Mac) and ideally
> free, but a commercial product would still be considered.


[...]


OpenOffice Base

you need an ODBC driver too (http://www.ch-werner.de/sqliteodbc/)

Oliver



[sqlite] gui for data entry

2010-10-12 Thread Graham Smith
There are several GUI editors for SQLite, but is there one that allows
creation of simple forms for data entry?  Although some allow
adding data to a single table, none that I have looked at seem to
allow a new record to be added when it involves more than one table.

It needs to be cross platform (Linux, Windows and Mac) and ideally
free, but a commercial product would still be considered.

I realise that the emphasis with SQLite is on it being the backend to
a database application, but I don't really need an application, just
something a bit more feature-rich than the GUIs I've looked at, to
replace Excel as a data storage tool.

Thanks,

Graham


Re: [sqlite] Registering a custom tokenizer per database rather than per connection

2010-10-12 Thread Drake Wilson
Quoth Travis Orr , on 2010-10-12 08:17:38 -0700:
> Drake Wilson said:
> - However, it now occurs to me that it may be possible to use the
> - fts3_tokenizer() function in a trigger, which is probably a bad thing
> - when writing to untrusted databases.  
> 
> I suppose the only way to accomplish it would be to recompile sqlite3
with my custom tokenizer in place. I think that may be more work than it
> is worth. The idea of using a trigger is interesting, though I suspect
> the performance hit from having to register the tokenizer on each
> connection is minimal.

Just to clarify, a trigger isn't useful for this in general, because
you still have nowhere to get the function pointer from.  Above I was
worrying that the current SQLite code might be less safe than it could
be as a result of allowing this in triggers by default.

I wouldn't worry about the performance; the registration will likely
be a minimal hit compared to setting up the connection in the first
place.

> Travis

   ---> Drake Wilson



Re: [sqlite] Speed up DELETE of a lot of records

2010-10-12 Thread Michele Pradella
  Thank you for the advice, I'll try it in my application.
regards

Il 12/10/2010 17.17, Pavel Ivanov ha scritto:
> Michele,
>
> Here is another thought for you to consider. Apparently your
> application consistently generates some records, each record is marked
> with a timestamp of its creation and after some time you have to
> garbage-collect all records that are at least at a certain amount of
> time in the past. You can do this with generational gc: split your
> full database in parts according to record creation time and then your
> garbage collection will consist of deletion of database file from file
> system without deleting any records. E.g. let's say you start with one
> part, you write your records in it for 15 minutes. Then you create new
> part and write all new records in it. Old part just sits there for
> querying. After 15 minutes you create new part and so on. Then when
> time comes and all records in the first part must be deleted you just
> delete that database file and that's it - no interruption in access
> for newly created records. Of course this implementation complicates
> querying of existing records especially when you need some statistics
> pretty often (as opposed to querying individual records) but for some
> access patterns it can give you a huge performance improvement (it
> actually did for my application).
>
>
> Pavel
>
> On Mon, Oct 11, 2010 at 11:13 AM, Jay A. Kreibich  wrote:
>> On Mon, Oct 11, 2010 at 02:08:54PM +0200, Michele Pradella scratched on the 
>> wall:
>>>Ok so the main idea it's always the same: split the DELETE to make the
>>> operation on less records, but do it more often.
>>   Another thought occurs to me...   If your insert rates are fairly
>>   consistent (e.g. the number of records per minute is consistent, or
>>   at least predictable) then you can just use the table as a circular
>>   buffer.  That is, rather than inserting new data and deleting the
>>   old data, simply overwrite the older records with the new data.
>>   Rather than inserting new records, simply find the oldest record and
>>   update it with the new data.
>>
>>   If the records have similar static length content (e.g. no variable
>>   length text strings) this should be fairly fast, and will eliminate
>>   the need to delete whole records.  The UPDATE might be a tad
>>   slower than an unencumbered INSERT, but it is still likely to be
>>   faster than an INSERT plus a later DELETE.
>>
>>   Be careful, however, as even integers are variable-length records in
>>   SQLite (if their magnitude is different).  It might be best to use
>>   fixed size strings, even for the numeric values.  The UPDATE will be
>>   much faster if the new data fits into the same "slot" as the old
>>   record, and it can be updated in-place.
>>
>>   You could set up the inserts to find the oldest time and update that
>>   record (which should be fairly quick if there is an index on your
>>   timestamp column) or you could just manually create however many
>>   NULL entries you need and explicitly update incremental records.
>>   When the application starts up, just find the oldest date and
>>   continue.
>>
>>   This all depends on being able to predict the number of records
>>   required to meet your storage needs, however.  I suppose you could
>>   allow the database to expand as needed (that is, get the oldest date,
>>   and if it is not outside your window, INSERT rather than UPDATE).
>>   There are lots of ways to do this, the specifics depend on your
>>   needs.
>>
>>
>>-j
>>
>> --
>> Jay A. Kreibich<  J A Y  @  K R E I B I.C H>
>>
>> "Intelligence is like underwear: it is important that you have it,
>>   but showing it to the wrong people has the tendency to make them
>>   feel uncomfortable." -- Angela Johnson


-- 
Selea s.r.l.


Michele Pradella R


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-12 Thread Pavel Ivanov
Michele,

Here is another thought for you to consider. Apparently your
application consistently generates some records, each record is marked
with a timestamp of its creation and after some time you have to
garbage-collect all records that are at least at a certain amount of
time in the past. You can do this with generational gc: split your
full database in parts according to record creation time and then your
garbage collection will consist of deletion of database file from file
system without deleting any records. E.g. let's say you start with one
part, you write your records in it for 15 minutes. Then you create new
part and write all new records in it. Old part just sits there for
querying. After 15 minutes you create new part and so on. Then when
time comes and all records in the first part must be deleted you just
delete that database file and that's it - no interruption in access
for newly created records. Of course this implementation complicates
querying of existing records especially when you need some statistics
pretty often (as opposed to querying individual records) but for some
access patterns it can give you a huge performance improvement (it
actually did for my application).
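The bookkeeping for such a scheme is mostly filename arithmetic on the record timestamps. A minimal sketch in C (the 15-minute bucket size, retention count, and filename pattern are invented for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUCKET_SECONDS (15 * 60)   /* one database file per 15 minutes */
#define KEEP_BUCKETS   8           /* retain two hours of data         */

/* Name of the database file that should receive a record created at t. */
static void bucket_name(time_t t, char *buf, size_t len)
{
    snprintf(buf, len, "records-%lld.db",
             (long long)(t / BUCKET_SECONDS));
}

/* Name of the newest file old enough to delete outright with
 * remove()/unlink(), instead of DELETEing individual rows. */
static void expired_name(time_t now, char *buf, size_t len)
{
    snprintf(buf, len, "records-%lld.db",
             (long long)(now / BUCKET_SECONDS - KEEP_BUCKETS));
}
```

The writer opens the file named by bucket_name(time(NULL)); a periodic task simply remove()s the file named by expired_name(); and the query layer ATTACHes whichever live buckets it needs.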


Pavel

On Mon, Oct 11, 2010 at 11:13 AM, Jay A. Kreibich  wrote:
> On Mon, Oct 11, 2010 at 02:08:54PM +0200, Michele Pradella scratched on the 
> wall:
>>   Ok so the main idea it's always the same: split the DELETE to make the
>> operation on less records, but do it more often.
>
>  Another thought occurs to me...   If your insert rates are fairly
>  consistent (e.g. the number of records per minute is consistent, or
>  at least predictable) then you can just use the table as a circular
>  buffer.  That is, rather than inserting new data and deleting the
>  old data, simply overwrite the older records with the new data.
>  Rather than inserting new records, simply find the oldest record and
>  update it with the new data.
>
>  If the records have similar static length content (e.g. no variable
>  length text strings) this should be fairly fast, and will eliminate
>  the need to delete whole records.  The UPDATE might be a tad
>  slower than an unencumbered INSERT, but it is still likely to be
>  faster than an INSERT plus a later DELETE.
>
>  Be careful, however, as even integers are variable-length records in
>  SQLite (if their magnitude is different).  It might be best to use
>  fixed size strings, even for the numeric values.  The UPDATE will be
>  much faster if the new data fits into the same "slot" as the old
>  record, and it can be updated in-place.
>
>  You could set up the inserts to find the oldest time and update that
>  record (which should be fairly quick if there is an index on your
>  timestamp column) or you could just manually create however many
>  NULL entries you need and explicitly update incremental records.
>  When the application starts up, just find the oldest date and
>  continue.
>
>  This all depends on being able to predict the number of records
>  required to meet your storage needs, however.  I suppose you could
>  allow the database to expand as needed (that is, get the oldest date,
>  and if it is not outside your window, INSERT rather than UPDATE).
>  There are lots of ways to do this, the specifics depend on your
>  needs.
>
>
>   -j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "Intelligence is like underwear: it is important that you have it,
>  but showing it to the wrong people has the tendency to make them
>  feel uncomfortable." -- Angela Johnson


Re: [sqlite] Registering a custom tokenizer per database rather than per connection

2010-10-12 Thread Travis Orr
Drake Wilson said:
- However, it now occurs to me that it may be possible to use the
- fts3_tokenizer() function in a trigger, which is probably a bad thing
- when writing to untrusted databases.  

I suppose the only way to accomplish it would be to recompile sqlite3
with my custom tokenizer in place. I think that may be more work than it
is worth. The idea of using a trigger is interesting, though I suspect
the performance hit from having to register the tokenizer on each
connection is minimal.

Travis

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Drake Wilson
Sent: October 8, 2010 9:02 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Registering a custom tokenizer per database rather
than per connection

Quoth Travis Orr , on 2010-10-08 13:37:25 -0700:
> Is it possible to register a custom FTS3 tokenizer to be persistent in
> the database so that it is available every time a connection is
opened?

Not really.  How would this work?  You open a database, and the
function pointer for the tokenizer comes from... where?  The physical
pointer usually isn't guaranteed to be the same even for the same
application running twice on the same machine.

This would also have an impact on security: opening a database
shouldn't allow it to access arbitrary C code from your program, and
declaring which functions are allowed and which are not is just as
much work as reregistering the tokenizer in the first place.

However, it now occurs to me that it may be possible to use the
fts3_tokenizer() function in a trigger, which is probably a bad thing
when writing to untrusted databases.  Hmm.

   ---> Drake Wilson


Re: [sqlite] Detach command in 3.7.1 gives "database is locked" error

2010-10-12 Thread Richard Hipp
On Sat, Oct 9, 2010 at 11:28 AM, Jim Wilcoxson  wrote:

> This may be a bug in 3.7.1, or I may have been taking advantage of a flaw
> in
> 3.6.18:
>
> HashBackup uses SQLite to store backup data in a meta data db and file data
> archive db.  First I open the main database, then attach an archive
> database.  When the archive gets full, I detach it, create a new archive
> db,
> and attach that.  This all worked fine with 3.6.18.
>
> With 3.7.1, I get an error on the detach:  OperationalError: database arc
> is
> locked
>
> The database is not opened in Exclusive mode, and a commit gets executed
> before the detach.  What may be causing this - just a guess - is that there
> are selects open on the main database at the time of the detach, but the
> archive database is not referenced.
>
> Is this a bug in 3.7.1, or a correction to a flaw in 3.6.18?
>

Without more information about what your application is doing, we cannot
reproduce the problem and cannot determine what is going on here.

I'm guessing you still have a transaction active on the database you are
trying to detach and you just don't realize it.
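The guess about a still-active transaction is easy to reproduce: an uncommitted write on the attached database leaves a transaction open on it, and DETACH then fails until that transaction is committed. A minimal sketch using Python's stdlib sqlite3 module (file names invented for the illustration; the exact error text can vary by SQLite version):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
con = sqlite3.connect(os.path.join(tmp, "main.db"))
con.execute("ATTACH DATABASE ? AS arc", (os.path.join(tmp, "arc.db"),))
con.execute("CREATE TABLE arc.t(x)")

# An uncommitted INSERT leaves a write transaction open on "arc" ...
con.execute("INSERT INTO arc.t VALUES (1)")
try:
    con.execute("DETACH DATABASE arc")
    detach_failed = False
except sqlite3.OperationalError:  # e.g. "database arc is locked"
    detach_failed = True

# ... committing first lets the DETACH go through.
con.commit()
con.execute("DETACH DATABASE arc")
```

Statements that have been stepped but never reset or finalized can also hold a transaction open, which is easy to miss when working through a wrapper.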



>
> Thanks,
> Jim
> --
> HashBackup: easy onsite and offsite Unix backup
> http://sites.google.com/site/hashbackup
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] String field as primarykey

2010-10-12 Thread Durga D
A journal file is being created between transactions.

On Tue, Oct 12, 2010 at 3:00 PM, Oliver Peters  wrote:

> Durga D  writes:
>
> [...]
>
> >  But performance issue is there.
> >
> >   Is there anyway to optimize this?
> >
>
>
> maybe you should try
>
> PRAGMA journal_mode = OFF;
>
>
> Oliver
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] String field as primarykey

2010-10-12 Thread Durga D
Hi Oliver,

   Thank you for your prompt response.

   I am using C++ wrappers in a VC++ 8.0 application.

   Where should I add PRAGMA?

   Also, when I insert 100k records using the statement
"m_oDB.execDML(strQuery);",
virtual memory keeps increasing. Any idea?

Durga.

On Tue, Oct 12, 2010 at 3:00 PM, Oliver Peters  wrote:

> Durga D  writes:
>
> [...]
>
> >  But performance issue is there.
> >
> >   Is there anyway to optimize this?
> >
>
>
> maybe you should try
>
> PRAGMA journal_mode = OFF;
>
>
> Oliver
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE: sorting

2010-10-12 Thread Jean-Christophe Deschamps
Look in your mailbox.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] using SQLite with mod_perl

2010-10-12 Thread Dami Laurent (PJ)
>-Message d'origine-
>De : sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
>boun...@sqlite.org] De la part de Clark Christensen
>Envoyé : lundi, 11. octobre 2010 17:26
>À : General Discussion of SQLite Database
>Objet : Re: [sqlite] using SQLite with mod_perl
>
>>So, I want all the perl modules to be loaded when Apache2 starts, and
>>then a $dbh created for each user when the user comes to the web site,
>>but not recreated for the same user on every reload. I am assuming
>>that would be the correct way to work speedily.
>
>I had the same problem, and it's a huge pain, even when you have root
>privilege.
>
>I never did completely solve it.  I had some success with explicit
>$dbh->disconnect calls before the script exits, but I could never prove
>why it
>worked. 

Creating a persistent dbh for each user doesn't make sense to me, because
if the first request from user U1 is served by Apache process P4, there is
no guarantee that the next request will again go to process P4; if it goes
to process P2, then that process will need its own connection.  So the
common practice is one persistent database connection per Apache process,
not per user; if you need user-specific state, you pass that information
through the shared connection.

Furthermore, an SQLite database is just a file; it has no notion of users,
so one connection per user makes even less sense.
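The one-handle-per-worker-process pattern is language-independent. A toy Python sketch of the same idea (names invented for the illustration; in mod_perl the cached handle would typically live in a package variable or come from Apache::DBI):

```python
import sqlite3

_conn = None  # module-level state lives as long as the worker process


def get_conn(path):
    """Return this process's single cached connection, creating it lazily."""
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(path)
    return _conn


# Every "request" served by this process reuses the same handle:
first = get_conn(":memory:")
second = get_conn(":memory:")
```

A new worker process starts with `_conn = None` and opens its own connection on first use, which is exactly the one-connection-per-process behaviour described above.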


>Ultimately, I abandoned mod_perl.  Having to restart Apache
>every time I update code in a module was a big obstacle.
>

Updated scripts are reloaded automatically. Updated modules are not
reloaded by default, but if you want that behaviour you can enable
Apache::Reload: http://search.cpan.org/~PHRED/Apache-Reload/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLITE: sorting

2010-10-12 Thread Uy, Mary
Hi,

I saw this in the forum:

I've developed an SQLite extension including a very similar collation:
it sorts the (integral) prefix first and, in case of a draw, orders
based on the Unicode suffix.
It currently doesn't cope with floating-point prefixes but can surely
be adapted easily to do so. It's only a small part of a larger code
for facilitating scalar handling of Unicode text, but you can trim it
down to only the part you need.
If you're interested in the source, just drop me a mail.

Would it be OK to ask for a copy of this?

Thanks!
Mary
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Adding a string in front of what is already in a field

2010-10-12 Thread jose isaias cabrera


"P Kishor" wrote...


> José,
>
> Please note Igor's very important cautionary note below --
>
> On Mon, Oct 11, 2010 at 7:05 AM, Igor Tandetnik  
> wrote:
>> P Kishor  wrote:
>>> UPDATE OpenJobs
>>> SET notes = 'string to add in front\r\n' || notes
>>> WHERE spid = 32;
>>
>> Note that SQLite doesn't understand C-style escapes. '\r' is a string 
>> consisting of two characters, a backslash and a letter r. If you want to 
>> insert CRLF pair, you'd need something like this:
>>
>> SET notes = 'string to add in front' || cast(x'0d0a' as text) || notes
>>
>
> So, if you are using a programming language, you can do like so
>
>UPDATE OpenJobs
>SET notes = ? || notes
>WHERE spid = ?
>
> and then, in your application (for example, Perl code below; note, use
> of double quotes)
>
>$sth->execute( "string to add in front\r\n", 32);
>
> Or, in the command line sqlite3 program, you can simply hit "enter" on
> your keyboard and then close-single-quote the string.
>
> sqlite > UPDATE OpenJobs SET notes = 'string to add in front
> sqlite > ' || notes WHERE spid = 32;
>
>
>> Or, you can use a parameter in place of the string literal, and bind a 
>> string to it in your program. Such a string can contain any characters 
>> you want.
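The quoted advice — that SQL string literals don't process C-style escapes, and that either a blob-to-text cast or a bound parameter produces a real CRLF — can be checked with a quick sketch (here in Python; the table layout is invented to mirror the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE OpenJobs(spid INTEGER, notes TEXT)")
con.execute("INSERT INTO OpenJobs VALUES (32, 'old')")

# In SQL, '\r\n' is four literal characters, not CR+LF:
lit = con.execute(r"SELECT 'a\r\nb'").fetchone()[0]

# CAST(x'0d0a' AS TEXT) splices in a real CRLF:
con.execute("UPDATE OpenJobs SET notes = 'new' || CAST(x'0d0a' AS TEXT) || notes "
            "WHERE spid = 32")
notes = con.execute("SELECT notes FROM OpenJobs WHERE spid = 32").fetchone()[0]

# A bound parameter works too; the host language handles the escapes:
con.execute("UPDATE OpenJobs SET notes = ? || notes WHERE spid = ?",
            ("newer\r\n", 32))
final = con.execute("SELECT notes FROM OpenJobs WHERE spid = 32").fetchone()[0]
```

The first query returns the six-character string `a\r\nb` (backslashes included); the two updates each prepend their prefix with a genuine CR+LF pair.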

Yep.  Already working thanks.

josé 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] String field as primarykey

2010-10-12 Thread Oliver Peters
Durga D  writes:

[...]

>  But performance issue is there.
> 
>   Is there anyway to optimize this?
> 


maybe you should try

PRAGMA journal_mode = OFF;


Oliver
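For what it's worth, a pragma is just an ordinary statement issued once per connection, before the inserts; in the C++ wrapper used in this thread, passing the same PRAGMA text to a one-off m_oDB.execDML call after opening the database should be equivalent. Combined with batching each group of inserts into a single transaction, that is usually where most of the time goes. A sketch in Python (table shape loosely borrowed from the original post; note that journal_mode = OFF and synchronous = OFF trade crash safety for throughput):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "emp.db")
con = sqlite3.connect(db)

# Issue the pragmas once, right after opening the connection.
con.execute("PRAGMA journal_mode = OFF")  # no rollback journal at all
con.execute("PRAGMA synchronous = OFF")   # don't fsync on commit

con.execute("CREATE TABLE emp(f1 INTEGER, "
            "f2 TEXT PRIMARY KEY COLLATE NOCASE, f3 INTEGER)")

# One transaction per 25,000-row batch, via executemany:
batch = [(i, "key-%09d" % i, i % 7) for i in range(25000)]
with con:  # BEGIN ... COMMIT wrapped around the whole batch
    con.executemany("INSERT INTO emp VALUES (?, ?, ?)", batch)

count = con.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```

With the journal off, an interrupted transaction can corrupt the database, so this is only appropriate for a bulk load that can be restarted from scratch.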

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] String field as primarykey

2010-10-12 Thread Durga D
Hi All,

I have created a sqlite database with "create table EMP (F1 INTEGER ,F2
TEXT(1120) primary key collate nocase,F3 INTEGER ,LMD TEXT(1120) ,F4
TEXT(1120) ,F5 TEXT(1120));".

   Here F2 (Field2) is the primary key. I wrap every 25,000 records in a
transaction, but there is still a performance issue.

  Is there any way to optimize this?

  Requirement: Insert 100,000,000 records.

Thanks and Regards,
Durga.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users