Thanks.
Have a feeling I made this same mistake before and posted to this
forum as well ...
RBS
On Wed, Oct 17, 2012 at 4:13 AM, Igor Tandetnik itandet...@mvps.org wrote:
Bart Smissaert bart.smissa...@gmail.com wrote:
To do with the same, what is wrong with this update SQL?
update final2
This smells like a program, either installed by you or restored from your
CD when you reloaded, that is looking for sqlite3.dll but can't find it.
Download the Win32 version from the SQLite site and extract the DLL to the
c:\windows\system32 directory (you'll need administrative rights) and you
Actually, the 32-bit version should go to c:\windows\syswow64, as
c:\windows\system32 is for 64-bit DLLs on a 64-bit Windows.
Stephen Chrzanowski wrote:
This smells like a program that is either installed by you, or, from your
CD when you reloaded, is looking for the sqlite3.dll but can't
Hello Richard,
Thank You for the answer.
We think it would make sense to document this rule for zeroblobs,
so other teams could save the time otherwise spent trying to fix bugs that
don't really exist.
Possibly, we just missed this mention in the documentation, so if we
Hello Igor, thank you for the information, it is a big help.
If you have a one-to-one relationship between two tables, is there a
reason why you don't simply combine the two into a single, wider table?
The way I think about a database is that you subdivide the data based
on how you may want
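For instance, collapsing a strict one-to-one pair into a single wider table could look like this (a minimal sketch; the `person`/`person_detail` table names are hypothetical, not from the original thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Two tables in a strict one-to-one relationship ...
cur.executescript("""
CREATE TABLE person(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE person_detail(person_id INTEGER PRIMARY KEY REFERENCES person(id),
                           email TEXT);
INSERT INTO person VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO person_detail VALUES (1, 'ada@example.org'), (2, 'grace@example.org');
""")

# ... can usually be combined into one wider table with a join:
cur.executescript("""
CREATE TABLE person_wide AS
  SELECT p.id, p.name, d.email
  FROM person p JOIN person_detail d ON d.person_id = p.id;
""")

print(cur.execute("SELECT name, email FROM person_wide ORDER BY id").fetchall())
```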
I'm not having much luck with this.
When I try to go to,
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
I get a failed to connect,
The connection was refused when attempting to contact sqlite.org:8080.
I tried in both seamonkey and ie and get the same message.
LMHmedchem
--
I found a bug in the latest version (3.7.14.1) with a very specific #define that can cause
database corruption after truncation because of missing backup pages.
If you specify this define:
#define SQLITE_OMIT_AUTOVACUUM
And then execute a VACUUM operation that shrinks the database, due to the
#ifndef
I have a program that periodically inserts a row into an SQLite database. I need
to calculate a hash sum of the database file after each insert, without closing the connection.
I have a problem with that: after an insert, the database file's hash sum is the same
as it was before the insert. Only after closing the connection does the hash sum
On Tue, Oct 16, 2012 at 2:03 PM, LMHmedchem lmh_users-gro...@molconn.com wrote:
I'm not having much luck with this.
When I try to go to,
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
I get a failed to connect,
The connection was refused when attempting to contact
On 16 Oct 2012, at 6:50pm, LMHmedchem lmh_users-gro...@molconn.com wrote:
Some of
the other tables are fairly large (500-2500 cols)
A table with 500 columns is itself a bad sign. You should be able to think
about the entire table makeup in your head without needing to refer to written
On Tuesday, 16 October, 2012, 11:51, LMHmedchem said:
You could always create a table JUST for the composite key, which gives you
JUST the single key which you then use to look up data in the other tables. If
you specifically name the rowid columns, then you can use simple and efficient
Are you maybe using WAL mode?
Do you have any other files alongside your database like *.db-shm or *.db-wal?
If so, you can just cat all the files together and pipe through md5sum or such.
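In that case the hash has to cover all three files; a sketch of the idea (file layout and hash choice are assumptions, md5 here only for change detection):

```python
import hashlib, os, sqlite3, tempfile

db = os.path.join(tempfile.mkdtemp(), "test.db")
con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x)")
con.commit()

def db_hash(path):
    """Hash the main database file plus any -wal/-shm companion files."""
    h = hashlib.md5()
    for suffix in ("", "-wal", "-shm"):
        f = path + suffix
        if os.path.exists(f):
            with open(f, "rb") as fh:
                h.update(fh.read())
    return h.hexdigest()

before = db_hash(db)
con.execute("INSERT INTO t VALUES (1)")
con.commit()                 # committed frames land in the -wal file
after = db_hash(db)          # changes even though the connection stays open
print(before != after)
```

In WAL mode the main file may stay byte-identical until a checkpoint, which is why hashing it alone shows no change until the connection closes.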
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Hello again,
Attached is a test application which replicates the problem.
I expected the transactions to block each other exactly like they do in
the beginning (one connection successfully begins and the other receives
SQLITE_BUSY), but I didn't expect the blocked connection to never get
The problem is that you are starting a read-only transaction by executing a
SELECT and then trying to convert that transaction into a write transaction by
executing BEGIN IMMEDIATE. If SQLITE_BUSY is
returned in that situation, you have to finish the transaction and start it again. In
your code the solution is easy:
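That is, take the write lock up front with BEGIN IMMEDIATE, and on SQLITE_BUSY roll back and retry the whole transaction. A sketch of the pattern (the table and retry policy are assumptions, not from the original code):

```python
import sqlite3, time

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we issue BEGIN ourselves
con.execute("CREATE TABLE t(x)")

def write_with_retry(con, retries=5):
    for attempt in range(retries):
        try:
            con.execute("BEGIN IMMEDIATE")   # start as a write txn, not a read txn
            con.execute("INSERT INTO t VALUES (1)")
            con.execute("COMMIT")
            return True
        except sqlite3.OperationalError:     # SQLITE_BUSY surfaces here
            if con.in_transaction:
                con.execute("ROLLBACK")      # finish the failed transaction ...
            time.sleep(0.1 * (attempt + 1))  # ... back off, then start over
    return False

print(write_with_retry(con))
```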
It appears that, regardless of pragma synchronous=off, there are still
syncs to disk done at key points in WAL mode, such as in a pragma
wal_checkpoint(RESTART). I think this is true based on the application
CPU and disk I/O patterns I observe when logging shows the WAL checkpoint to
On Wed, Oct 17, 2012 at 11:04 AM, Bob Price rjp_em...@yahoo.com wrote:
It appears that regardless of a pragma synchronous=off that there are
still syncs to disk done at key points in WAL mode such as in a pragma
wal_checkpoint(RESTART). I think that this is true based on the
application cpu
Hello!
I get an out-of-memory error when deleting records from a table that has
about 150,000,000 records.
The table is created as:
CREATE TABLE differential_parts_temp (plan_id TEXT, [version_id] INT NOT
NULL, [page_index] INT NOT NULL, [md5low] INT64 NOT NULL, [md5high] INT64
NOT NULL);
CREATE
strace is a nice tool.
Once I figured out that it needed the -f option (perhaps for threads, as I don't
fork other processes) I got good output, and, no, it does not contain any fsync
or any other *sync* calls.
So that answers my question. Testing this out on a different Linux box I did
not see
On 17 Oct 2012, at 4:58pm, Ivan P iva...@cloudberrylab.com wrote:
Why can the DELETE statement eat so much memory?
Because it doesn't delete each row singly, doing all the file updates that
are needed to delete that row before moving on to the next row. If it did, it
would take an extremely
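One common workaround is to delete in bounded batches so no single statement touches all 150M rows at once. A sketch (batch size, filter, and the smaller table here are assumptions for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts(plan_id TEXT, version_id INT)")
con.executemany("INSERT INTO parts VALUES (?, ?)",
                [("p", i) for i in range(10_000)])
con.commit()

BATCH = 1_000
while True:
    # DELETE ... LIMIT needs a special compile-time option, so target
    # rowids through a subquery instead; this works in any build.
    cur = con.execute(
        "DELETE FROM parts WHERE rowid IN "
        "(SELECT rowid FROM parts WHERE plan_id = ? LIMIT ?)",
        ("p", BATCH))
    con.commit()                 # each batch is its own small transaction
    if cur.rowcount < BATCH:
        break

print(con.execute("SELECT COUNT(*) FROM parts").fetchone()[0])
```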
Hello
I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll INTEGER
DEFAULT 0, historic INTEGER DEFAULT 0, sort INTEGER DEFAULT
On 10/18/2012 01:32 AM, Imanuel wrote:
Hello
I tested this on an SSD with a database with one single table with
5,553,534 entries:
CREATE TABLE Namen(id INTEGER PRIMARY KEY, geonameid INTEGER, lang TEXT,
name TEXT, pref INTEGER DEFAULT 0, short INTEGER DEFAULT 0, coll INTEGER
DEFAULT 0,
On Wed, Oct 17, 2012 at 11:58 AM, Ivan P iva...@cloudberrylab.com wrote:
Hello!
I get an out-of-memory error when deleting records from a table that has
about 150,000,000 records.
The table is created as:
CREATE TABLE differential_parts_temp (plan_id TEXT, [version_id] INT NOT
NULL,
All,
I don't know how to do something very simple like this. I have two table
and I would like to see the value of one table as it is expressed in the
other.
CREATE TABLE [REFTABLE] (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
[RefItem] CHAR);
INSERT INTO [REFTABLE]([RefItem]) VALUES('One');
Hello,
SQLite has some minor valgrind issues (some memory areas point to
uninitialized bytes).
You can easily avoid this by replacing malloc by calloc in src/mem1.c:84
and src/mem2.c:255
Is it possible to integrate those improvements into the mainline of Sqlite ?
Thank you,
Alfred.
--
On Wed, Oct 17, 2012 at 4:33 PM, Alfred Sawaya alf...@huji.fr wrote:
Hello,
SQLite has some minor valgrind issues (some memory areas point to
uninitialized bytes).
Really? We run many of our test cases here through valgrind and don't see
any problems. Can you be more specific?
You can
On Wed, Oct 17, 2012 at 4:36 PM, Richard Hipp d...@sqlite.org wrote:
On Wed, Oct 17, 2012 at 4:33 PM, Alfred Sawaya alf...@huji.fr wrote:
Hello,
SQLite has some minor valgrind issues (some memory areas point to
uninitialized bytes).
Really? We run many of our test cases here through
I send you the valgrind report, in attached file.
I use sqlite with sqlcipher but it is not a sqlcipher related issue I think
(please see the sqlcipher team reply :
https://github.com/sqlcipher/sqlcipher/issues/33 ).
Thank you.
2012/10/17 Richard Hipp d...@sqlite.org
On Wed, Oct 17, 2012 at
On 10/17/2012 4:23 PM, Gert Van Assche wrote:
I don't know how to do something very simple like this. I have two table
and I would like to see the value of one table as it is expressed in the
other.
CREATE TABLE [REFTABLE] (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
[RefItem] CHAR);
INSERT
On Wed, Oct 17, 2012 at 4:56 PM, Alfred Sawaya alf...@huji.fr wrote:
I send you the valgrind report, in attached file.
This mailing list deletes attachments. Please include the valgrind report
inline. Thanks.
I use sqlite with sqlcipher but it is not a sqlcipher related issue I think
The list blocks big messages...
Here is a pastebin : http://pastebin.com/raw.php?i=QjN18m4h
2012/10/17 Richard Hipp d...@sqlite.org
On Wed, Oct 17, 2012 at 4:56 PM, Alfred Sawaya alf...@huji.fr wrote:
I send you the valgrind report, in attached file.
This mailing list deletes
On Wed, Oct 17, 2012 at 5:18 PM, Alfred Sawaya alf...@huji.fr wrote:
The list block big messages...
Here is a pastebin : http://pastebin.com/raw.php?i=QjN18m4h
Can you show us what SQL you are running in order to get these errors?
2012/10/17 Richard Hipp d...@sqlite.org
On Wed, Oct
It is not related to a particular SQL request. And the errors are corrected
by using calloc instead of malloc in mem1.c and mem2.c.
Maybe the SQLite team prefers to let the caller memset the allocated area?
Anyway,
For this one :
==32575== Conditional jump or move depends on uninitialised value(s)
On Wed, Oct 17, 2012 at 4:33 PM, Alfred Sawaya alf...@huji.fr wrote:
Hello,
SQLite has some minor valgrind issues (some memory areas point to
uninitialized bytes).
You can easily avoid this by replacing malloc by calloc in src/mem1.c:84
and src/mem2.c:255
Is it possible to integrate those
calloc is in both C89 and C99. Is there any problem with using it in SQLite?
2012/10/17 Richard Hipp d...@sqlite.org
On Wed, Oct 17, 2012 at 4:33 PM, Alfred Sawaya alf...@huji.fr wrote:
Hello,
SQLite has some minor valgrind issues (some memory areas point to
uninitialized bytes).
You can easily
No, the performance stays the same.
I have also tried using a big cache_size, but that didn't change
anything either.
Yes, I can share the database - it is currently uploading, I will mail
the link tomorrow.
Imanuel
On 17.10.2012 at 22:08, Dan Kennedy wrote:
On 10/18/2012 01:32 AM, Imanuel
Hi All,
I'm using the latest System.Data.SQLite with C# and .NET 4.
Is there any method of writing to a BLOB in byte array chunks rather
than in one big lump? (I can see how using SQLiteDataReader GetBytes I
can read a blob back in chunks).
Best Regards,
On 17 Oct 2012, at 11:59pm, Mike King making1...@gmail.com wrote:
I'm using the latest System.Data.Sqlite with c# and .Net 4.
Is there any method of writing to a BLOB in byte array chunks rather
than in one big lump? (I can see how using SQLiteDataReader GetBytes I
can read a blob back in
I believe that unless you have constrained nR1, nR2, nR3 ... nR6 as NOT NULL then
select
(select RefItem from REFTABLE where id=nR1),
(select RefItem from REFTABLE where id=nR2),
(select RefItem from REFTABLE where id=nR3),
(select RefItem from REFTABLE where id=nR4),
(select
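Completed and made runnable (the DATATABLE name and its three nR columns are assumptions for illustration; the thread's example uses six):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE REFTABLE (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
                       RefItem CHAR);
INSERT INTO REFTABLE(RefItem) VALUES ('One'), ('Two'), ('Three');

CREATE TABLE DATATABLE (nR1 INT, nR2 INT, nR3 INT);
INSERT INTO DATATABLE VALUES (1, 3, 2);
""")

# Each column of ids is translated through a correlated subquery.
row = con.execute("""
SELECT
  (SELECT RefItem FROM REFTABLE WHERE id = nR1),
  (SELECT RefItem FROM REFTABLE WHERE id = nR2),
  (SELECT RefItem FROM REFTABLE WHERE id = nR3)
FROM DATATABLE
""").fetchone()
print(row)  # ('One', 'Three', 'Two')
```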
Just wanted to say thanks to everyone for the help... I've decided to
use MySQL for this particular setup rather than go down the route of
custom builds. SQLite remains my preference for exclusive scenarios
and it's only because I have to support scaling to simultaneous
connections that I'm