Hi again,
sorry, but it looks like the problem was on my side...
An error printf that I added to trace the condition causes
the crash, not sqlite... sorry sorry, I'm getting old...
Marcus
Marcus Grimm wrote:
> Hi Ken,
>
> thanks for the code, but I guess it will still crash:
>
> Ken
Hi,
I am using SQLite 3.3.6 on an arm-linux machine on a JFFS2 file system.
We have modified the page size to 8196 and the page count to 256, as we have
only 3MB of JFFS2.
When data is being inserted into the table continuously, after some amount
of data (when the DB file reaches about 200k)
The system is getting
On Dec 3, 2008, at 7:09 PM, Karl Thiessen wrote:
> I've been lightly using TWS at http://karlht.gigdrag.net/ for some
> years now, and it's been a pleasure to use.
>
> But I've wondered if there are any plans to update it to SQLite 3 and
> Tcl 8.5? I'd hate to duplicate someone else's effort,
Hi again, Daniel,
So I guess you're still having certain queries that take about 200x
longer than with your custom code, right?
There's nothing magical about sqlite, so it's not surprising that code
customized for an application can outperform a generalized SQL engine,
but a factor of 200 does
Howard Lowndes <[EMAIL PROTECTED]> wrote:
> I'm playing about with the following syntax:
>
> CREATE TABLE atable (
> a_ref integer
> not null
> primary key autoincrement,
> a_text text
> );
>
> CREATE TABLE btable (
> b_ref integer
>
try looking at the pragmas page and determine what you can get away with.
For me, I relaxed the synchronization requirements and also the locking
strictness, and I was able to boost my speed to 80,000 records per second
:) FYI, my records only consist of 6 numbers and a binary blob.
On Wed, Dec 3,
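The pragma relaxation described above might be sketched like this (Python's built-in sqlite3 module is used purely for illustration; the exact pragmas and any speedup figure are workload- and hardware-dependent, and these settings trade durability for speed):

```python
import sqlite3

# Sketch of relaxed-durability settings, as discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA synchronous = OFF")      # don't fsync on commit
conn.execute("PRAGMA journal_mode = MEMORY")  # keep the journal in RAM
conn.execute("CREATE TABLE rec (a, b, c, d, e, f, payload BLOB)")
rows = [(i, i, i, i, i, i, b"\x00") for i in range(10000)]
with conn:  # one transaction around the whole batch
    conn.executemany("INSERT INTO rec VALUES (?,?,?,?,?,?,?)", rows)
```

The table layout here (six numbers plus a blob) is made up to mirror the record shape the poster describes.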
Oyvind Idland wrote:
> Hi,
>
> I am fiddling around with the r-tree module a bit, which works great and
> gives the effect I am looking for.
>
> The only thing is that I wish I could speed up inserts. Populating the
> rtree-index with 1 million objects
> takes about 180 seconds (using prepared
Is there any important reason for the counter(X) function not to be included
in the main sqlite?
There is already an implementation of a counter function in
src\test_func.c and, given the usefulness of a counter function in
analytics, it is a pity to have to write obnoxious queries to
work around the lack
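For what it's worth, a stateful counter can be registered from the application side through sqlite3_create_function without patching the library; a minimal sketch using Python's sqlite3 binding (illustrative only, not the test_func.c implementation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

state = {"n": 0}
def counter(x):
    # Returns 1, 2, 3, ... on successive calls; the argument is
    # ignored, mirroring counter(X) being evaluated once per row.
    state["n"] += 1
    return state["n"]

# Registered as non-deterministic (the default here), so SQLite
# evaluates it for every row rather than caching the result.
conn.create_function("counter", 1, counter)
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), ("c",)])
out = conn.execute("SELECT counter(0), name FROM t").fetchall()
```

Each row of the result carries the next counter value, which is the effect the obnoxious workaround queries try to achieve in plain SQL.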
Unique_User <[EMAIL PROTECTED]> wrote:
> 1. If I open database A and B and get back handleA and handleB, can
> I call sqlite3* function for database A using handleB?
What do you mean, for database A? The connection handle determines which
database you are talking to.
> 2. Could I close handleA
Oyvind Idland wrote:
>
>
> Is there any trick to speed up the inserts here ?
>
Are you doing the inserts inside a transaction?
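The difference this question is probing usually comes down to one implicit transaction (and one disk sync) per INSERT versus one transaction around the whole batch; a minimal sketch in Python's sqlite3 module (illustrative, not the poster's code, and the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pts (id INTEGER PRIMARY KEY, x REAL, y REAL)")
rows = [(i, float(i), float(i)) for i in range(100000)]
with conn:  # one transaction around the whole batch of inserts
    conn.executemany("INSERT INTO pts VALUES (?,?,?)", rows)
```

On disk-backed databases the batched form is typically orders of magnitude faster, because only one commit has to reach stable storage.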
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
On 12/3/08, Oyvind Idland <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am fiddling around with the r-tree module a bit, which works great and
> gives the effect I am looking for.
>
> The only thing is that I wish I could speed up inserts. Populating the
> rtree-index with 1 million objects
> takes
Hi Ken,
thanks for the code, but I guess it will still crash:
Ken wrote:
> Marcus try something like this pseudo code:
>
> local_Exec( exec_Str) {
> *pStmt = NULL;
>
> rc = prepare_v2 (exec_Str)
> if (rc != SQLITE_OK) goto exec_err
ok, but this usually works
Marcus try something like this pseudo code:
local_Exec( exec_Str) {
    *pStmt = NULL;
    rc = prepare_v2 (exec_Str)
    if (rc != SQLITE_OK) goto exec_err
    rc = step ( );
    if (rc != SQLITE_OK or SQLITE_DONE ) goto exec_err
    rc = finalize( )
    if rc
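The intent of the pseudo code above, always finalize the statement, on error paths too, before reporting the error, can be sketched in Python's sqlite3 binding (illustrative; the original is against the C API, where cursor creation and close correspond roughly to prepare and finalize):

```python
import sqlite3

def local_exec(conn, sql):
    cur = conn.cursor()   # roughly sqlite3_prepare_v2
    try:
        cur.execute(sql)  # roughly stepping the statement to completion
    finally:
        cur.close()       # roughly sqlite3_finalize: runs on error paths too

conn = sqlite3.connect(":memory:")
local_exec(conn, "CREATE TABLE t (a)")
local_exec(conn, "INSERT INTO t VALUES (1)")
try:
    local_exec(conn, "INSERT INTO no_such_table VALUES (1)")
except sqlite3.OperationalError:
    pass  # the statement was still closed before the error propagated
```

In the C API the matching rule is that every prepared statement must be passed to sqlite3_finalize exactly once, regardless of whether step succeeded.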
Hi, I'm using SQLite from a scripting language (Rebol).
There I have a CONNECT/CREATE function that calls sqlite3_open to open and
if necessary create a database file.
My situation is now that I have an application that needs to create several
database files. Hence I need to call sqlite3_open
Hi,
> And will only work if you never delete any rows from the table.
Yes, I am aware of that limitation and I am not ever deleting apart from
a full truncate. I have yet to test that case; if it should turn out not to
work I can live with a DROP TABLE or even deleting the database file. It
is
On Thu, Nov 27, 2008 at 08:12:02AM +, Simon Bulman wrote:
> Morning,
>
> Table 1
>
> BIGINT (index), VARCHAR(30), VARCHAR(10)
>
>
>
> Table 2
>
> BIGINT (index), FLOAT
For the second table, the index will contain the BIGINT value and the table
rowid, which is almost as big as the
Hi all,
Doing a join on a fts3 table can be very slow. I'm using these tables:
CREATE TABLE general (
ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
...
);
CREATE VIRTUAL TABLE general_text using fts3 (
ID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
a TEXT,
b TEXT,
c TEXT,
d TEXT,
> > select max(rowid) from sometable;
>
> Looks good and is instantaneous. Thank you very much.
And will only work if you never delete any rows from the table.
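A small demonstration of this caveat, using Python's sqlite3 module for brevity (the table name follows the thread's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (v)")
conn.executemany("INSERT INTO sometable (v) VALUES (?)",
                 [(i,) for i in range(5)])   # rowids 1..5
conn.execute("DELETE FROM sometable WHERE rowid = 2")
mx = conn.execute("SELECT max(rowid) FROM sometable").fetchone()[0]
cnt = conn.execute("SELECT COUNT(*) FROM sometable").fetchone()[0]
# After the delete, max(rowid) no longer equals the row count.
```

max(rowid) stays at 5 while COUNT(*) drops to 4, so the shortcut only holds for append-only tables.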
Hi,
> select max(rowid) from sometable;
Looks good and is instantaneous. Thank you very much.
Ciao, MM
Hi,
I am fiddling around with the r-tree module a bit, which works great and
gives the effect I am looking for.
The only thing is that I wish I could speed up inserts. Populating the
rtree-index with 1 million objects
takes about 180 seconds (using prepared statements).
Is there any trick to
<[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Why does the order of indexed columns matter when creating an index?
When you create, say, a dictionary, you sort words by first letter, then
by second letter, and so on. You can then quickly look up a word that
begins with a given
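The dictionary analogy can be checked with EXPLAIN QUERY PLAN; a sketch using Python's sqlite3 module (the table and index names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (a TEXT, b TEXT, c TEXT)")
conn.execute("CREATE INDEX idx_ab ON words (a, b)")

# Constraining the first indexed column can use the index...
plan_a = conn.execute(
    "EXPLAIN QUERY PLAN SELECT c FROM words WHERE a = 'x'").fetchall()
# ...constraining only the second column cannot, so it's a full scan.
plan_b = conn.execute(
    "EXPLAIN QUERY PLAN SELECT c FROM words WHERE b = 'x'").fetchall()
```

The exact wording of the plan output varies between SQLite versions, but idx_ab appears only in the first plan: just as a dictionary sorted by first letter is useless for finding words by their second letter.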
Yeah, if you dump the file to a .read-able file then his idea is
simpler and easier. The approach I described is probably better
suited to situations where you need to .import a file, because in that
case the file is raw data rather than SQL statements.
-T
On Wed, Dec 3, 2008 at 12:41
"Marian Aldenhoevel"
<[EMAIL PROTECTED]> wrote in
message news:[EMAIL PROTECTED]
> SELECT COUNT(*) FROM sometable;
>
> Takes 10 seconds on my testcase (340.000 rows, Database on CF, dead
> slow CPU).
>
> Is there a quicker way? Does SQLite maybe store the total number of
> records somewhere else?
Hi,
> Depending on your setup and depending on how often you need to query the
> count you could trade off a higher INSERT time for a lightning fast
> count by using a trigger.
I have thought about that as well, yes. I have bulk inserts that
prepopulate the table with a starting set of about
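The trigger idea quoted above might look like this (a sketch with hypothetical table names, using Python's sqlite3 module; it slows every INSERT and DELETE slightly in exchange for an O(1) count):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sometable (v);
CREATE TABLE rowcount (n INTEGER);
INSERT INTO rowcount VALUES (0);
CREATE TRIGGER cnt_ins AFTER INSERT ON sometable
BEGIN UPDATE rowcount SET n = n + 1; END;
CREATE TRIGGER cnt_del AFTER DELETE ON sometable
BEGIN UPDATE rowcount SET n = n - 1; END;
""")
conn.executemany("INSERT INTO sometable VALUES (?)",
                 [(i,) for i in range(100)])
conn.execute("DELETE FROM sometable WHERE v < 10")
n = conn.execute("SELECT n FROM rowcount").fetchone()[0]
# Reading rowcount.n replaces the slow COUNT(*) scan.
```

For a bulk prepopulation it may be cheaper to drop the triggers, load the data, set the counter once, and recreate the triggers.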
Hello,
I've looked for an explanation in the following pages without success:
http://www.sqlite.org/optoverview.html
http://www.sqlite.org/cvstrac/wiki?p=QueryPlans
So I hope someone can help me...
CREATE TABLE node(
id INTEGER PRIMARY KEY,
name TEXT
);
CREATE TABLE edge(
orig INTEGER
Previously someone advised that I use the "*" char to achieve partial search
results with fts, e.g. ver* will match version. This works OK, but only for
the end parts of a word.
Is there any way to get partial matches for the beginning or middle parts of
a word?
e.g. *sion - to match version or
*si* to
Hi all,
while doing a stress test on my embedded server application
I'm seeing a crash in sqlite3_finalize that I don't understand,
and I'm wondering if I'm doing the right error handling.
Background: In order to encapsulate writings to the tables I'm using
BEGIN EXCLUSIVE TRANSACTION to block
Marian Aldenhoevel wrote:
> Hi,
>
> SELECT COUNT(*) FROM sometable;
>
> Takes 10 seconds on my testcase (340.000 rows, Database on CF, dead slow
> CPU).
>
> Is there a quicker way? Does SQLite maybe store the total number of
> records somewhere else?
>
> The table only ever grows, there are now
Hi,
SELECT COUNT(*) FROM sometable;
Takes 10 seconds on my testcase (340.000 rows, Database on CF, dead slow
CPU).
Is there a quicker way? Does SQLite maybe store the total number of
records somewhere else?
The table only ever grows, there are no DELETEs on it ever, apart from
complete