Is the sync necessary to commit a transaction slow? Performance of
that sync depends on the OS, file system, hardware, etc. IIRC, the
I/Os may be fast but it's possible that the syncs are killing you.
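A quick way to see whether syncs (rather than raw I/O) dominate is to compare one-commit-per-INSERT against a single batched transaction. A minimal sketch using Python's sqlite3 module; the table name, file name, and row count are arbitrary:

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "sync_demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t(x)")

# One implicit transaction per INSERT: every commit forces a sync.
t0 = time.perf_counter()
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
per_stmt = time.perf_counter() - t0

# One explicit transaction around all INSERTs: a single sync at COMMIT.
t0 = time.perf_counter()
conn.execute("BEGIN")
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("COMMIT")
batched = time.perf_counter() - t0

print(per_stmt, batched)
conn.close()
```

On spinning disks the batched version is often faster by orders of magnitude, which is a decent sign the syncs are the bottleneck.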
-T
On Tue, Sep 22, 2009 at 5:14 PM, Mark godef...@gmail.com wrote:
Lothar Scholz wrote:
Not in the InstallShield/MSI format that I imagine you're looking
for, no. But that's the whole point, really.
See the Precompiled Binaries section of the download page.
You'll find what you need (if not necessarily what you're looking for)
there.
-T
On Wed, Jul 22, 2009 at 4:42 PM,
That's pretty common in academic papers, actually.
-T
On 7/18/09, Rick Ratchford r...@amazingaccuracy.com wrote:
Yes. You are correct. That is what they are.
Thanks for pointing this out. It probably should have been made clear at
the
beginning of the book since this is not common in
I think you have to factor the age of SQLite into that explanation
as well. I think the first versions of SQLite were released about 10
years ago, at which point C++ compilers were even more non-standard
than they are today. Then, once it's functional and stable in C, why
rewrite it?
On Tue,
As others have already mentioned, hash joins can help in a
situation where there are no appropriate indexes. They can make
things worse if the inputs aren't large enough though, so there's
still some gray area.
The biggest thing that other databases have going for them - MSSQL
and Oracle
I'd be willing to bet that amongst experienced SQLite users, you're
in the minority.
More importantly, I don't think Dr. Hipp agrees with you, so the
discussion is very likely moot. :)
-T
On Wed, Mar 25, 2009 at 9:02 AM, sqlite.20.tomca...@spamgourmet.com wrote:
Hi everyone,
I'm
If you need high concurrency then you probably shouldn't spend too
much time looking at SQLite.
That said, how often do you actually need to read information from
this database? If you need to insert a lot but not necessarily read a
lot you might consider simply appending new information
Setting aside the fact that it seems silly to try to show people an
8 million row resultset...
You could merge the two ideas: create a temp table containing just
the rowids you want, in the order that you want, and then use LIMIT
and OFFSET to get the particular ones you want. Using those
on each
column, if I add more indexes across other columns I'm thinking it'll
get too big to cache up.
thanks
thanks
Thomas Briggs wrote:
For starters, I think that loading the index into the cache in
MySQL is biasing your performance measures. SQLite will automatically
load pages of any necessary indexes into memory as part of executing
the query, but doing so takes time. By preloading the index on MySQL,
you're removing that
Nope, that's the solution. :)
On Fri, Feb 20, 2009 at 11:56 AM, Boucher, Michael
michael.bouc...@ironmountain.com wrote:
Hi there,
I need to migrate data from a different database into a SQLite database.
What I've done is written a simple C++ app which opens both databases,
does a
Interesting point about indexes not being updated until the
transaction commits. I'm still curious why dropping and recreating
the indexes is seen as a bad thing though...
On Fri, Feb 20, 2009 at 2:59 PM, Jay A. Kreibich j...@kreibi.ch wrote:
On Fri, Feb 20, 2009 at 11:56:52AM -0500,
It won't be too big... famous last words.
I think the rowid is probably safe for what you're trying to do,
despite the well-intentioned advice others have given you against it.
Also, if you think the underlying data may change, then I'm not
sure what good reading the whole table will
Depending on the nature of the data and queries, increasing the
block size may help.
Posting some information about your schema and queries is the only
way to get truly good advice on this though, I think. There is no
-runfast switch you can include on the command line to fix things.
:)
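For what it's worth, the block (page) size mentioned above can be adjusted with a pragma; a minimal sketch (the file name is arbitrary), bearing in mind it only takes effect before the database's first page is written (afterwards it takes a VACUUM):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pagesize.db")
conn = sqlite3.connect(path)

# Set the page size before creating any objects so it actually applies.
conn.execute("PRAGMA page_size = 8192")
conn.execute("CREATE TABLE t(x)")
conn.commit()

size = conn.execute("PRAGMA page_size").fetchone()[0]
print(size)  # 8192
```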
I think the answer depends on the poster's actual experience with
SQLite. There are plenty of people that use SQLite without any kind
of programming tool (think SQLiteExplorer), so for them it's a
database. There are plenty of people who have used SQLite as a simple
data store for PHP apps,
When you say the load stops, what do you mean? Does the sqlite3
process end? Does it sit there doing nothing?
The first thing I would do is look at line 55035 of the source file
and see if there's something weird about it.
Also, have you done a line count on the file so you know
If you really are only updating 20 records at a time you should be
able to make it work plenty fast enough with plain old SQL.
Something feels wrong about using an exclusive transaction here
too. I can't say why, and I may well be wrong, but... just a gut
hunch.
On Thu, Jan 29, 2009 at
Why not just use Cygwin?
On Wed, Jan 14, 2009 at 12:55 PM, J. R. Westmoreland j...@jrw.org wrote:
So far, everything I have looked at is very old, long before Vista hit the
scene.
Still looking.
I guess I could give up and turn around to the Linux console and try it
there. grin
But, you
I actually thought the original question was perfectly clear. I
thought the proposed solution (included in the original post) was
perfectly logical too. So what's all the fuss?
On Wed, Jan 7, 2009 at 7:28 AM, P Kishor punk.k...@gmail.com wrote:
On 1/6/09, Edward J. Yoon
I've been using SQLite for about 5 years now, and the "put the
commands in a file" approach is the best answer I'm aware of.
-T
On Mon, Jan 5, 2009 at 1:44 PM, Webb Sprague webb.spra...@gmail.com wrote:
If I understand correctly, all you need to do is write the desired
commands out to a text file,
I think it had more to do with a) the fact that it's non-standard
syntax and b) compiling in the regex library would unnecessarily bloat
the binary.
See pcre.org for a free (non-GPL) regex library that has proven to
work nicely with SQLite. :)
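As a sketch of how a regex library gets hooked in: SQLite's REGEXP operator simply calls a user-defined function named regexp(pattern, value), so registering one is all it takes. Here Python's re module stands in for PCRE; the table and data are invented:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")

# "X REGEXP Y" is rewritten by SQLite as regexp(Y, X): pattern first.
def regexp(pattern, value):
    return value is not None and re.search(pattern, value) is not None

conn.create_function("regexp", 2, regexp)

conn.execute("CREATE TABLE words(w TEXT)")
conn.executemany("INSERT INTO words VALUES (?)",
                 [("sqlite",), ("oracle",), ("postgres",)])

rows = [r[0] for r in conn.execute("SELECT w FROM words WHERE w REGEXP '^s'")]
print(rows)  # ['sqlite']
```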
-T
On Tue, Dec 16, 2008 at 1:44 PM, Griggs,
Secondly, AUTOINCREMENT in SQLite only works on an INTEGER PRIMARY
KEY, not on any other kind of primary key or on any non-primary-key
field. BIGINT PRIMARY KEY is not an INTEGER PRIMARY KEY and so
AUTOINCREMENT won't work on it.
I think he understands that. :) His question is why. I'm
thus curious myself.
-T
On Mon, Dec 15, 2008 at 10:58 AM, D. Richard Hipp d...@hwaci.com wrote:
On Dec 15, 2008, at 10:52 AM, Thomas Briggs wrote:
Secondly, AUTOINCREMENT in SQLite only works on an INTEGER PRIMARY
KEY, not on any other kind of primary key or on any non-primary-key
field
I think Nathan's point is that the integer you get when declaring a
column INTEGER PRIMARY KEY can hold a 64-bit value anyway, so why
couldn't it simply be declared differently and behave the same?
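Both halves of that are easy to check; a minimal sketch (the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# INTEGER PRIMARY KEY is the rowid alias and already holds 64-bit values.
conn.execute("CREATE TABLE a(id INTEGER PRIMARY KEY, v)")
conn.execute("INSERT INTO a VALUES (?, 'big')", (2**62,))
big = conn.execute("SELECT id FROM a").fetchone()[0]
print(big)  # 4611686018427387904

# BIGINT PRIMARY KEY is not the rowid alias, so AUTOINCREMENT is rejected.
rejected = False
try:
    conn.execute("CREATE TABLE b(id BIGINT PRIMARY KEY AUTOINCREMENT, v)")
except sqlite3.OperationalError as e:
    rejected = True
    print(e)  # AUTOINCREMENT only works on an INTEGER PRIMARY KEY
```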
INTEGER PRIMARY KEY is the exception to the rules for SQLite
datatypes. Any other column
, Jerry Krinock [EMAIL PROTECTED] wrote:
On 2008 Dec, 02, at 21:19, Thomas Briggs wrote:
Try removing the semi-colon at the end of the .read statement. The
semi-colon is the query terminator, but because dot-commands aren't
queries they don't require the semi. As such the .read command
Put both commands (the pragma and the read) into a file (e.g.
foo.txt) and then do:
sqlite3 newDatabase.sqlite '.read foo.txt'
-T
On Tue, Dec 2, 2008 at 8:48 PM, Jerry Krinock [EMAIL PROTECTED] wrote:
I need a command-line script running on Mac OS 10.5 to rebuild sqlite
3 database
(placesDump.txt ;).
-T
On Tue, Dec 2, 2008 at 11:56 PM, Jerry Krinock [EMAIL PROTECTED] wrote:
On 2008 Dec, 02, at 19:44, Thomas Briggs wrote:
Put both commands (the pragma and the read) into a file (e.g.
foo.txt) and then do:
sqlite3 newDatabase.sqlite '.read foo.txt'
Looked like a great
SQLite will complain because of the duplicate index names, but in other
database packages it will be accepted. You then have to specify the table
name when deleting indexes.
The only database I'm aware of that does this is SQL Server. Don't
overgeneralize. :)
-T
If I had to guess I'd say that the performance problems are
transaction related. Switching to text file export/import will give
you an opportunity to solve that problem.
-T
On Wed, Nov 12, 2008 at 11:29 AM, Baskaran Selvaraj
[EMAIL PROTECTED] wrote:
Thanks Tom. I tried using DTS but
I'm not 100% sure this is what you're asking for, but try this...
CREATE TABLE foo(Ranking INTEGER PRIMARY KEY, Col1, Col2, ... );
INSERT INTO foo SELECT ... ORDER BY ...;
SELECT * FROM foo WHERE Ranking BETWEEN x AND y;
Warnings in the documentation aside, this will give you the rows in
at 5:43 AM, Andrew Gatt [EMAIL PROTECTED] wrote:
Thomas Briggs wrote:
Ultimately it'll depend on your schema and the query you're
running, but you're probably better off creating an index that covers
the SELECT query you're executing. That should make the query fast
and save you the hassle
PROTECTED] wrote:
Thomas Briggs wrote:
How much slower is the index than your pre-populated table? If
you're really comparing apples to apples it would be good to know how
big the difference is.
If you post your schema and queries you'll probably get better
advice. At this point I'm just
Ultimately it'll depend on your schema and the query you're
running, but you're probably better off creating an index that covers
the SELECT query you're executing. That should make the query fast
and save you the hassle of writing and maintaining triggers.
The later post about
On Mon, Oct 27, 2008 at 1:36 PM, MikeW [EMAIL PROTECTED] wrote:
On Oct 26, 2008, at 10:01 PM, Julian Bui wrote:
Hi Dr. Hipp,
Julian,
I think it's D.R. Hipp - unless you know better !
But I'm sure Richard appreciates the accolade !!
Actually, you're both right... see
It's been a while since I used FreeBSD, but I remember sometimes
needing to use gmake (vs. just plain make) to get ports up and going.
-T
On Mon, Oct 20, 2008 at 12:02 PM, Adrian [EMAIL PROTECTED] wrote:
Hello,
I'm having trouble installing the SQLite3 port on FreeBSD, and I wanted to
I think there are some self-identifying bits at the start of a
valid SQLite file... you could open the file directly and check for
those.
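Those self-identifying bits are the 16-byte header string at offset 0 of every valid SQLite 3 file; a minimal sketch of checking it (the file name is arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "check.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t(x)")  # force the file to be written out
conn.commit()
conn.close()

# Every valid SQLite 3 database begins with this 16-byte magic string.
MAGIC = b"SQLite format 3\x00"
with open(path, "rb") as f:
    header = f.read(16)
print(header == MAGIC)  # True
```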
Or, if you're going to retrieve a list of table names from
sqlite_master when first opening the database, you could trap the
SQLITE_NOTADB when executing
What you've described here is column partitioning - most databases implement
row partitioning, where the rows in the table are split between multiple,
hidden sub-tables based on the value(s) in one or more columns within the row.
The most common application of which is separating
Oracle 10.1:
    AVG(A)   AVG(B)
    ------   ------
    3.       3.

SQL 2005:
    ------   ------
    3        3.33

DB2 8.2:
    1        2
    ---      ----------
    3        +3.33E+000
I am beginning to believe that maybe I was wrong in my
assumption that 'if a table has an index, that index shows in
sqlite_master'. Then my problem is now to find another way to
get index information for a table. Any suggestions?
Your assumption is correct. Where you are incorrect
See http://www.sqlite.org/autoinc.html - INTEGER PRIMARY KEY will
autoincrement only until you delete a row from the table.
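A minimal sketch of the difference (table names invented): without AUTOINCREMENT the rowid of a deleted max row can be reissued, while AUTOINCREMENT's high-water mark (kept in sqlite_sequence) prevents that:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Plain INTEGER PRIMARY KEY: the next rowid is max(rowid)+1, so deleting
# the highest row lets its id be handed out again.
conn.execute("CREATE TABLE plain(id INTEGER PRIMARY KEY, v)")
conn.executemany("INSERT INTO plain(v) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM plain WHERE id = 3")
conn.execute("INSERT INTO plain(v) VALUES ('d')")
reused = conn.execute("SELECT max(id) FROM plain").fetchone()[0]
print(reused)  # 3 -- the deleted id came back

# With AUTOINCREMENT: ids are never reissued.
conn.execute("CREATE TABLE strict(id INTEGER PRIMARY KEY AUTOINCREMENT, v)")
conn.executemany("INSERT INTO strict(v) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM strict WHERE id = 3")
conn.execute("INSERT INTO strict(v) VALUES ('d')")
not_reused = conn.execute("SELECT max(id) FROM strict").fetchone()[0]
print(not_reused)  # 4
```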
-Tom
-Original Message-
From: Mario Gutierrez [mailto:[EMAIL PROTECTED]
Sent: Monday, October 24, 2005 11:05 PM
To: sqlite-users@sqlite.org
Subject:
However, as SQLite files are single files, a crash during a
non-important
transaction could still hose the entire database. In this
sense, there is
no such thing as a non-important transaction.
Hrm... A very good point. Thanks for spotting the flaw in my
thinking. :)
-Tom
Given my understanding of the codebase (you get to decide what that's
worth), the value of the synchronous pragma determines decisions going
forward, so changing it mid-process should impact only transaction
handling from that point forward. I do know, however, that there are
places in the
While I can understand your general sentiment, allowing minor
problems like this to clutter the output from valgrind makes spotting
the real errors amidst the noise more difficult. Eventually, when
enough of these types of problems exist, valgrind stops being used
altogether, because it's too
Perhaps the use of EXPLAIN would show the way to implement
certain types
of common accesses I expect to be done frequently, such as
inserting one
row, selecting one row using a unique key or updating one
row. Then I
could correlate the virtual machine instructions with
specific
Solaris also does not support fdatasync, and as such doesn't compile
without this workaround.
-Original Message-
From: Jolan Luff [mailto:[EMAIL PROTECTED]
Sent: Saturday, September 17, 2005 4:14 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Version 3.2.6
On Sat, Sep
, September 22, 2005 11:41 AM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Version 3.2.6
Please try the patch at http://www.sqlite.org/cvstrac/chngview?cn=2732
and let me know if this fixes the fdatasync problem on Solaris and
OpenBSD.
On Thu, 2005-09-22 at 10:33 -0400, Thomas Briggs wrote
You'll need to provide more information to get a helpful answer.
What version are you using? What indexes are present on the table? How
was the table defined?
-Tom
-Original Message-
From: Da Martian [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 15, 2005 8:59 AM
To:
However, I would very much like a bulk insert call to
sqlite (Oracle OCI does this, for example), where I can put many
(thousands) of records into the database with one call. Is there any
chance of something like this ever being added to sqlite?
I can't speak authoritatively,
Actually, every database I'm aware of returns NULL for any aggregate
whenever the inputs are NULL, and also when no input rows are processed
(i.e. if no rows match the criteria in the WHERE clause, the result of
the aggregation is NULL).
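SQLite itself ended up offering both behaviors: sum() follows the standard and returns NULL for an empty input, while the total() extension returns 0. A minimal sketch (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales(amt REAL, month TEXT)")
conn.execute("INSERT INTO sales VALUES (10.0, 'september')")

# No rows match the WHERE clause: sum() yields NULL per the SQL
# standard; total() yields 0.0 for the "want a zero" case.
row = conn.execute(
    "SELECT sum(amt), total(amt) FROM sales WHERE month = 'october'"
).fetchone()
print(row)  # (None, 0.0)
```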
-Tom
-Original Message-
From: Will
So then, if there are no input rows at all (if no rows
match the WHERE clause) then SUM returns 0. (This makes
sense because if you say:
SELECT sum(amt) FROM sales WHERE month='october';
and you didn't sell anything in October, you want an
answer of 0, not NULL.) Or if *some* of
Can somebody come up with a realistic scenario where they would
actually want SUM() to return NULL instead of 0? NULL is such
I think your example of totaling sales in October makes the argument
itself - you didn't make sales totaling $0 in October, you just didn't
make any sales. A
Postgres 7.4:
 a | b | sum
---+---+-----
 1 | 2 |   2
   | 2 |   4
 1 |   |   8
   |   |  16

DB2 8.2:
A    B    3
---  ---  ---
1    2    2
-    2    4
1    -    8
-
To avoid unintentionally finding indexes with the given name, you
should use:
select * from sqlite_master where type = 'table' and name = 'TABLENAME'
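A minimal sketch of why the type filter matters (names invented): sqlite_master holds tables, indexes, views, and triggers side by side, so filtering by name alone can match the wrong kind of object:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_customers_name ON customers(name)")

# Filtering on type avoids accidentally matching an index/view/trigger.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name = 'customers'")]
indexes = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
print(tables, indexes)  # ['customers'] ['idx_customers_name']
```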
-Tom
-Original Message-
From: Sergey Startsev [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 31, 2005 7:35 AM
To: Massimo
I can reproduce this behavior using 3.2.5 compiled with MS VC++ 7.
3.2.2 didn't produce consistent behavior cross-platform when rounding
numbers that ended with .5, so yes, it should have rounded up to 10.0.
:)
-Tom
-Original Message-
From: Bob Dankert [mailto:[EMAIL
I think that the semi-colon at the end of the statement is what's
causing your problem. The command shell requires the semi-colon to mark
the end of a statement; in code, your statement ends at the end of the
string.
-Tom
-Original Message-
From: Greg Stark [mailto:[EMAIL
I can also confirm that the original test case posted works correctly
when moving the file from Linux to Sparc (Solaris) and PA-RISC (HP-UX).
-Tom
-Original Message-
From: D. Richard Hipp [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 18, 2005 2:21 PM
To:
However if I use something like:
select * from myTable where column2!='';
(Takes a long time to return).
I guess because the column in that row isn't indexed? Any
alternatives?
I don't believe that indices can be used to satisfy != conditions, so
even if the column is indexed, you
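EXPLAIN QUERY PLAN makes the difference visible; a minimal sketch (the schema is invented, and the exact plan wording varies by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable(column1, column2)")
conn.execute("CREATE INDEX idx_col2 ON myTable(column2)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the detail text.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

eq_plan = plan("SELECT * FROM myTable WHERE column2 = ''")
ne_plan = plan("SELECT * FROM myTable WHERE column2 != ''")
print(eq_plan)  # SEARCH ... USING INDEX idx_col2 (column2=?)
print(ne_plan)  # SCAN ... -- the inequality cannot seek into the index
```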
in WHERE clause
On May 26, 2005, at 7:49 AM, Thomas Briggs wrote:
It's been our
experience that the only truly reliable way to avoid this problem
is to
be explicit.
I agree, and that's what I've always done up until now because it
never occurred to me that the SQL engine would
If you create your Ordering column in table B as type INTEGER PRIMARY
KEY, you can do:
INSERT INTO tableB (value)
SELECT value
FROM tableA
ORDER BY ...
And when you're done, the value of tableB.Ordering will represent the
order in which the rows were inserted into the
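A minimal sketch of the idea (table contents invented): rowids are handed out in insertion order, so the ORDER BY of the SELECT gets frozen into the new table's Ordering column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA(value)")
conn.executemany("INSERT INTO tableA VALUES (?)",
                 [("cherry",), ("apple",), ("banana",)])

conn.execute("CREATE TABLE tableB(Ordering INTEGER PRIMARY KEY, value)")
conn.execute("INSERT INTO tableB(value) SELECT value FROM tableA ORDER BY value")

rows = conn.execute("SELECT Ordering, value FROM tableB ORDER BY Ordering").fetchall()
print(rows)  # [(1, 'apple'), (2, 'banana'), (3, 'cherry')]
```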
Out of curiosity, why doesn't the idea I proposed work? If there's a
situation where it doesn't work I'd like to know, so I don't try using
it myself should a similar situation arise. :)
-Tom
-Original Message-
From: Brown, Dave [mailto:[EMAIL PROTECTED]
Sent: Thursday, May
Sorry, didn't mean to imply it wouldn't. You added this caveat though:
Note however that this really only does what you want when
tableB is initially empty
Yeah, that's fairly easy to work around though. Assuming you know
the structure of the ultimate destination table, which seems
I think that you can put the aggregates directly into the SELECT
clause rather than referring to them by alias, i.e.
select city, sum(Weight)/count(id) as AvgWeight
-Tom
-Original Message-
From: de f [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 17, 2005 1:55 PM
To:
No, that seems to work fine. I guess the table is locked for a
specific transaction, so you cannot have any problems with a lock held
by the very same transaction.
i.e. would the below return "database table is locked"?
BEGIN TRANSACTION
SELECT * from table1 WHERE col x
UPDATE table1
This question seems to come up often, and I'm still confused as to
what problems people are having. What APIs are you using to perform
these steps? In particular, when you want to update a row, are you
using a prepared query that is executed multiple times, or are you
creating an SQL
. :)
-Tom
-Original Message-
From: Martin Engelschalk [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 12, 2005 9:53 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] database table is locked
@Thomas Briggs, Jay Sprenckle
I use the C Api described at http://www.sqlite.org
A transaction is a way to make a group of things happen atomically,
but an SQL statement that generates an error doesn't really make
anything happen, so it has no impact on the transaction itself or any of
the other actions within it. That's kinda the whole point, in a way - one
statement
This isn't an SQLite thing either... All databases work
this way, as
far as I'm aware.
Postgres refuses to process any further sql statements in a
transaction
after an error occurs with
one of the sql statements.
Heh. I should have said that all databases with which I am
Is there a way to see a list of the latest changes made in CVS? I
know that I can view the list of changes made per-file, and thus far
I've been looking at the revision history for each file as I noticed
that it's updated, but it's much more convenient to have all changes
logged
-0400, Thomas Briggs wrote:
Is there a way to see a list of the latest changes made
in CVS? I
Use this: http://www.red-bean.com/cvs2cl/
--
Andrew Piskorski [EMAIL PROTECTED]
http://www.piskorski.com/
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] List of latest changes in CVS
On Fri, 2005-05-06 at 08:59 -0400, Thomas Briggs wrote:
Is there a way to see a list of the latest changes made
in CVS? I
know that I can view the list of changes made per-file, and thus far
I've been looking
I for one would find this very useful. I would have found it most
useful when initially evaluating SQLite a couple months ago, but I would
still have a number of uses for it going forward.
Not that my vote actually does you any good when it comes to
implementing anything. :)
-Tom
From the looks of this warning, I would guess that you could redefine
SQLITE_STATIC like this (or some variation of this that is
legal C++) to solve
the problem:
#define SQLITE_STATIC ((extern C void(*)(void*)) 0)
I don't think there's any legal way to do this, is there? Linkage
Oracle and DB2 treat all object names in a case-insensitive manner,
and to that end store all object names in upper case in the data
dictionary. As such, field names, unless an alias is provided using AS,
come back in all upper case for those databases. MS SQL Server returns
the field name
I'd guess that column is now a reserved word - probably because of
the addition of ALTER TABLE. I have no proof to back that up though.
:)
-Tom
-Original Message-
From: William Hachfeld [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 21, 2005 1:07 PM
To:
What APIs are you guys using to retrieve results and execute the
subsequent updates? Are you using prepare/step or sqlite3_exec with a
callback to retrieve the results? Would it be possible for you to post
more detailed pseudo-code? Conceptually I think we're all on the same
page; I think
;
}
// clean up when finished
sqlite3_finalize(stmt);  // 'stmt' being the prepared statement handle
// process your list here
sqlite3_close(db);
On 4/20/05, Thomas Briggs [EMAIL PROTECTED] wrote:
What APIs are you guys using to retrieve results and execute the
subsequent updates? Are you using prepare/step or
sqlite3_exec
I was puzzled that removing the single-column index on
Season actually enabled the original query to
complete, having read somewhere in the sqlite docs
that indexing columns used in WHERE conditions
improves performance. Is this something to do with the
That's a true statement in
is a good sign.
-Tom
-Original Message-
From: Ted Unangst [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 19, 2005 1:28 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Indexing problem
Thomas Briggs wrote:
I think the common misconception is that indexes on multiple
Without having seen the EXPLAIN output for the query both with and
without the indexes present: the indexes you've created don't really
support your query very well. Of the six indexes that you've created, I
believe that only one can be used, so I'd speculate that the cause of
the slowdown is
Well, it could be true, but not in the queries I have posted. I group
by column a and there is an index on column a, so SQLite does not
have to do anything to compute the key. It does not even have to go back to
Do not confuse the index key with the aggregator key. The two may be
the same in
With sum(n1) added the query runs twice as slow. As I was told, it's because
SQLite has to fetch the data row. Fine, but why is it so slow?! And it
Because for each row it has to compute the aggregate key, find the
aggregator for that key and increment the sum for that aggregate key.
That's a lot
Aliasing rowid to rowid seems to work for me, i.e.
SELECT rowid as rowid, primary_key_col
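A minimal sketch of the aliasing trick via Python's sqlite3 (schema invented); the alias pins down the column name reported for the rowid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(primary_key_col INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t VALUES (7, 'x')")

# Aliasing makes the result column reliably come back named "rowid".
cur = conn.execute("SELECT rowid AS rowid, primary_key_col FROM t")
names = [d[0] for d in cur.description]
row = cur.fetchone()
print(names, row)  # ['rowid', 'primary_key_col'] (7, 7)
```

Note the rowid equals primary_key_col here because an INTEGER PRIMARY KEY column is an alias for the rowid.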
-Tom
-Original Message-
From: Will Leshner [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 12, 2005 5:36 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] determing the primary key
I
Is it possible to limit the amount of memory SQLite uses while
processing an aggregate query?
I have a 1GB database containing a single table. Simple queries
against this table (SELECT COUNT(*), etc.) run without using more than a
few MBs of memory; the amount used seems to correspond
at 10:09 -0500, Thomas Briggs wrote:
I have a 1GB database containing a single table. Simple queries
against this table (SELECT COUNT(*), etc.) run without
using more than a
few MBs of memory; the amount used seems to correspond
directly with the
size of the page cache, as I
: Thursday, March 24, 2005 11:19 AM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Memory usage for queries containing a
GROUP BY clause
On Thu, 2005-03-24 at 10:57 -0500, Thomas Briggs wrote:
After posting my question, I found the discussion of how
aggregate
operations are performed
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Memory usage for queries containing a
GROUP BY clause
On Thu, 2005-03-24 at 13:59 -0500, Thomas Briggs wrote:
I feel like I'm missing something, but that didn't seem
to help. I
can see in the code why it should be behaving differently
You are welcome to experiment with changes that will store the
entire result set row in the btree rather than just a pointer.
If you can produce some performance improvements, we'll likely
check in your changes.
Am I wrong in interpreting your comment to mean that this should be
. Richard Hipp [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 24, 2005 4:26 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Memory usage for queries containing a
GROUP BY clause
On Thu, 2005-03-24 at 16:08 -0500, Thomas Briggs wrote:
Am I wrong in interpreting your comment to mean
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] COUNT(DISTINCT)
On Wed, 2005-03-23 at 09:02 -0500, Thomas Briggs wrote:
Bearing in mind that I'm aware of the published workaround for
COUNT(DISTINCT x), and also that as of yet I know nothing of the
internals of SQLite: what would