Hi Eric,
Since I know you from Delphi-related projects, I thought it would be nice to
share my ideas.
I am using a home-grown framework for sqlite persistence, optimized for
storing records into tables.
Main ideas:
- using virtual tables (with the same structure) to speed up sqlite inserts
a write tr
I asked a similar question some time ago...
See here:
https://www.mail-archive.com/sqlite-users@sqlite.org/msg77488.html
Gabriel
Hi,
From my old school days I remember that Bison is an LL parser, and I have
seen that Lemon is LR (LALR(1)).
That is the difference: Bison reduces as soon as it can, while Lemon shifts
as much as it can.
You have to force Lemon to reduce the "prog" rule, i.e. the newline should
be part of another rule:
E.g. modified ladd.y
Simon,
I have asked similar questions several times, and got similar replies.
This answer is simply NOT correct.
While I understand nobody wants to commit to a release/stable version, I
really wanted to evaluate how it fits my needs.
Therefore I took some time to understand how I could compile i
Hi James,
About complexity: I'm not sure it's O(N log N); for each of the N rows you
need to count up to N-1 other rows, which is O(N^2) IMO.
And you have an EXTRA temporary B-tree? Doesn't that matter?
Although I don't really understand why, it has an index on it.
My original concern is indeed simplicity and efficiency.
But
Keith,
I find this a little arrogant. I could say exactly the opposite: not
caring about efficiency is ... (insert whatever words you like).
Why is this ranking addressed here?
http://www.schemamania.org/sql/#rank.rows
Why is "order by" available in SQL while you may not ask for the ranking, having a
James,
I am replying from the web and can't easily quote.
I don't really want to argue whether it's a workaround or not; I understand
perfectly that it's valid standard SQL.
However, please accept that the given SQL is quite complex; you have to
duplicate the ordering in the join clause...
About the
James,
Indeed, that's the case.
However, I can't imagine this being efficient. It's just a pure SQL
workaround for a counter.
The only advantage is that it is standard SQL and should work with any engine.
I wonder whether sqlite can optimize it; otherwise it's O(n^2) when having
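For reference, the pure-SQL ranking pattern under discussion looks roughly
like this (a sketch only; the "scores" table and its columns are made-up
names, not from the thread):

SELECT t1.name,
       (SELECT count(*) FROM scores t2 WHERE t2.score > t1.score) + 1 AS pos
  FROM scores t1
 ORDER BY pos;

Each row re-counts all rows ranked ahead of it, which is where the O(n^2)
cost comes from unless the engine can optimize the correlated subquery.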
Simon,
Of course the rowid is THE field to use to uniquely identify a row (for
edit, delete, etc.).
I am talking about ranking a result set. The ranking can be used for
display (in HTML or a desktop GUI) or as a source for insert/update SQL.
Of course in code you can have your own coun
Hi Simon,
No problem, but I don't understand what this has to do with
multi-processing / multiple users (the query program has a lock on the
table anyway).
The counter can either be reused (see the other "LIMIT" discussion) or it
needs a different implementation.
As I don't think that the implemen
Hi,
My opinion is a little different:
a) of course the compiler needs to change the query program (only if
"nrow" is requested/used)
b) I don't know the internals, but I just can't believe that such a
value could not be exported somehow
c) I understand it would be non-standard; however there a
Simon,
As I wrote, it's easy to do when you control the loop.
Sometimes there is either no explicit loop (direct SQL, no programming
code) or the loop is out of your reach (3rd-party library).
Gabriel
Then this register value is exactly the needed result.
There is also the other syntax, "limit n, m"; there you somehow have to
skip the offset rows first.
Gabriel
I also needed this for some special update queries.
Without going into details, it was some kind of "insert into xxx select
<counter>, otherfields from source order by <field>".
For this case there is a workaround: selecting first into a temporary
table with an auto-generated rowid and using that afterwards for the insert.
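A minimal sketch of that workaround (all names are made up); it relies on a
freshly created table assigning ascending rowids in insertion order:

CREATE TEMP TABLE tmp AS
  SELECT otherfields FROM source ORDER BY somefield; -- rows arrive pre-sorted
INSERT INTO xxx SELECT rowid, otherfields FROM tmp;  -- rowid serves as the counter
DROP TABLE tmp;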
This is not at all my case ...
Obviously I don't write 1 by 1, but use blocks of data (array of
struct), virtual table wrappers, and "insert ... select".
This way I can achieve >200k rec/s, or at least 100k when there are some
more fields.
Right now I'm completely CPU bound, it's 100% load at
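The core of the trick, as a sketch ("vt_block" stands for a hypothetical
virtual table exposing one in-memory array of structs, "target" for the
real table):

BEGIN;
INSERT INTO target SELECT * FROM vt_block;  -- one statement moves the whole block
COMMIT;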
I admit I didn't think about (or even read in detail) the technical
implementation.
This is an extract from the analyzer:
*** Table AE_DATA
Percentage of total database......... 99.89%
Number of entries.................... 1030371
Bytes of
Again, sorry for count(a); I wrote too fast. Of course I understand about
null values.
Otherwise, by rowid I mean the auto-generated primary key. In my actual
case, I have a field as an alias for it.
CREATE TABLE t(id integer primary key, a);
explain query plan select count(*) from t -> scan table
create
OK, I understand; then the question remains why it does not use the
primary key.
Thanks,
Gabriel
I was surprised to find that the simple query "select count(*) from table"
took too much time, and found that it does NOT use the primary key index.
e.g.
CREATE TABLE t(a);
explain query plan select count(*) from t
I get : SCAN TABLE t (~100 rows)
If I use CREATE TABLE t(a unique), then it uses
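For comparison, the unique-column case can be reproduced like this (the
exact plan wording is an assumption and varies between SQLite versions):

CREATE TABLE t2(a unique);
explain query plan select count(*) from t2
-- expected: SCAN TABLE t2 USING COVERING INDEX sqlite_autoindex_t2_1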
That's why I asked for feedback.
My implementation is ~100 lines longer, so I think it's still "lite".
There is nothing complex in it; apart from the heap implementation, there
are quite a few simplifications of the original code.
Gabriel
I thought more about a "minus" (subtract minimum), but this might be a
better option.
Regards,
Gabriel
I'm not sure we're talking about the same thing.
For me, caching here means avoiding IO, not memory and/or locks.
The heap itself also needs some work, but only logarithmic.
The default cache of 2000 pages should be enough for most useful
pages (indices, roots...), having an overhead of max ~11 e
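The value in question is the per-connection page cache, i.e.:

PRAGMA cache_size = 2000;  -- 2000 pages was the long-standing default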
Following a few other discussions, I had the feeling that sqlite would
benefit from a cache which discards cached pages in least-frequently-used
order.
That generally means index pages, often-used data pages, etc. should be
preferred (i.e. kept in memory) over some infrequently used pa
I admit I didn't check what the standards say about "select"; I just
wanted to make sure the potential users (who are by no means
"developers") get the data properly.
But you misread my example: I had "order by id" everywhere (no mixed
sorting).
I expected that the optimizer would "see" it's t
The problem is, it is not ported to Windows at all...
I made a quick (and dirty) port of lsm_unix, but other things (e.g. the
environment) are still missing.
Gabriel
It's hard to accept this conclusion... it seems like a too-simple
justification.
If that is so, why does "select * from v order by id" not do a sort (on
the data from the view)?
Obviously it "sees" that id is the primary key and uses it for sorting.
I read here lots of messages about complex query opti
I understand, but I wanted to make a performance comparison. I read some
good news, but I need to test it for my case.
I am in a situation where the bottleneck is the CPU (sqlite), not IO.
Therefore I'm very interested in an early idea of the performance.
Even if it's not ready, I could at least p
I need some clarifications on this issue. Here is a simplified example.
There is a table:
CREATE TABLE t(id integer primary key, data integer);
and a (simplified) view:
CREATE VIEW v as SELECT * FROM "t" order by id;
I included the "order by" in the view because it's meant for some
end-users and I
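The question is then whether a query over the view needs a separate sort
step; a sketch of how to check (the temp-B-tree wording is what SQLite
prints when it does sort):

explain query plan SELECT * FROM v ORDER BY id;
-- hoped for: no "USE TEMP B-TREE FOR ORDER BY" line, since id is the
-- integer primary key and the table is already in that order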
I am also in the final steps of a new project, and I would love to compare
sqlite4 before release (the data files will be public, therefore a later
switch would be problematic).
Is there any chance to get it for Windows? I usually need the DLL.
Last time I could not compile it (mingw); there are s
I wonder how nobody has suggested it yet: did you try
"PRAGMA synchronous = OFF"?
For me it is always the default... I can imagine how slow such a
combination can be.
Just my 2c...
Gabriel
Hi,
I have some problems trying to truncate a db file to its "normal" size.
I use sqlite as a file format in a mostly append-only mode.
Because I need to be prepared for high data rates, I chose to set the
chunk size to avoid file fragmentation (e.g. I use a chunk size of 4MB,
but sometimes up to 100MB
Yuriy Kaminskiy wrote:
>Check
>PRAGMA secure_delete;
>(and disable it if it was enabled by default; it would be rendered
>ineffective by your trick anyway).
It is not enabled.
>Probably there are ways to make it work (altering PRAGMA schema_version; or
>something), but I think it is way too hackeris
Rather than dropping all tables, why not just use
DELETE FROM myTable
with no WHERE clause?
Also, I would guess that the thing which is taking most of the time is
the VACUUM command. Do you really need it? Are you very short of
filespace? Are you about to make special backup copies
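As a sketch of that suggestion:

DELETE FROM myTable; -- no WHERE clause; the all-rows case is handled by the
                     -- truncate optimization and does not visit each row
VACUUM;              -- only if the freed pages must be returned to the filesystem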
I poked through the sqlite source code and tried to force
"sqlite3ResetOneSchema" or "sqlite3SchemaClear"... and found no clear way
to do it...
I found however a workaround: before clearing the file I get the table
list and execute a rename (ALTER TABLE) for each table using fake names.
Then d
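A sketch of the rename trick (table names and columns are made up); the
point is that re-creation under the original name is immediate, while the
slow drop is deferred:

BEGIN;
ALTER TABLE data RENAME TO data_old;
CREATE TABLE data(id INTEGER PRIMARY KEY, payload BLOB);
COMMIT;
DROP TABLE data_old;  -- the expensive part, now outside the critical window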
Hello,
I have the following scenario: I need to "clear"/"initialize" a db file
while potential readers are active (polling for data).
The "normal" way to do it is: begin a transaction, drop all tables,
recreate the tables, commit (and vacuum to regain space).
The biggest problem is that dropping a "very l
I read about it, but I prefer to use DLL binding.
With the amalgamation, it's also quite easy to compile to one obj file and
link it directly into Delphi (similar to jpeg; that's probably what you
also do). No pun intended: why should I pay for it?
Off-topic: what do I need to do for a proper reply?
There is one more reason to use DLLs; I'm surprised no one has mentioned it.
What if you don't use C? (I myself use Delphi with a header conversion.)
Gabriel
Or you can do your immediate writing to a database in memory, and have
another process dump memory to disk in the background. Depending on how
recent you need reading to be, you can read the one in memory or the one
on disk.
It seems I have reached the CPU boundary (>90% on 1 core), not waiting
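For reference, the quoted suggestion could look roughly like this (a
sketch; the file and table names are invented):

CREATE TABLE log(ts INTEGER, val REAL);          -- in the :memory: main db
ATTACH DATABASE 'disk.db' AS disk;
CREATE TABLE IF NOT EXISTS disk.log(ts INTEGER, val REAL);
INSERT INTO disk.log SELECT * FROM main.log;     -- periodic background flush
DELETE FROM main.log;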
Hi,
I have a similar problem; I need to reach writing 10 records/s and
parallel reading without blocking the writer.
While this was previously not possible (I was considering HDF5 for it), I
have now decided to go back to sqlite.
Using WAL mode and lots of optimizations I am able to write >1
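For context, the relevant setup is roughly the following (a sketch; the
post only says "WAL mode", the synchronous setting is my assumption as a
common companion):

PRAGMA journal_mode = WAL;   -- readers do not block the writer, nor vice versa
PRAGMA synchronous = NORMAL; -- fewer fsyncs than FULL, still safe in WAL mode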
Hello all,
I'm currently designing a new file format based on sqlite for some
specialized hardware; previously this wasn't possible, but the new WAL
mode fulfills my needs.
Except for some small summary info, the data is append-only.
I have one other simple requirement, to preallocate the file size:
I
Hello,
Thanks for your attention.
> Although, generally speaking, such a method could be used in some situations, I
> don't think it's good to allow its use even with an "I know what I'm doing"
> pragma. Any structured file (sqlite is an example) has internal
> dependencies. One of the reasons to
reading there would be some simple read/check of a value
before using the page list.
These relatively simple changes would open it up to other applications; by
searching, I saw that I'm not the only one who needs this kind of behavior.
Regards,
Gabriel Corneanu
Sure, I could have some kind of intermediate storage, but that would mean
unnecessary data moving/copying.
I really hope that I'll find some time to study the source and
eventually implement my ideas (maybe others will find them interesting
and/or useful too).
You said that only references are c
Hello everybody,
I have the following situation:
1. a writer needs to continuously append data to 1 or 2 tables,
*without* any possibility of being blocked;
2. one (or possibly more) reader needs to read the data for analysis.
Point 1 is very important; therefore I use a "PRAGMA locking_mode =