[sqlite] Corrupted database

2009-10-12 Thread Filip Navara
Hello,

for a few months we have been occasionally getting corrupted databases
in the field. So far we were unable to acquire any of them from our
customers, but this week I finally got hold of one. Output from
"pragma integrity_check" is included below.

The schema is the following:

CREATE VIRTUAL TABLE "LocalMailsIndex3" USING fts3 (id INTEGER,
partName TEXT, content TEXT);
CREATE TABLE LocalMailsIndex3_content(  docid INTEGER PRIMARY
KEY,c0id, c1partName, c2content);
CREATE TABLE LocalMailsIndex3_segdir(  level integer,  idx integer,
start_block integer,  leaves_end_block integer,  end_block integer,
root blob,  primary key(level, idx));
CREATE TABLE LocalMailsIndex3_segments(  blockid INTEGER PRIMARY KEY,
block blob);
CREATE INDEX "LocalMailsIndex3_contentIndex" ON
"LocalMailsIndex3_content" ("c0id", "c1partName");

The database is created using SQLite 3.6.14.2, thread safe, on Windows
with auto_vacuum=incremental. It is always opened as attached database
with journal_mode=persist. Application crashes were most probably
involved, but no operating system / power crashes as far as I know.

One thread in the application is periodically running "pragma
freelist_count" and "pragma incremental_vacuum(...)". Other threads
are running a combination of the following commands and no others:

INSERT INTO mail_fti.LocalMailsIndex3("id", "partName", "content")
VALUES (@id, @partName, @content)
SELECT c2content AS content FROM mail_fti.LocalMailsIndex3_content
WHERE c0id = @id AND c1partName = @partName
SELECT docid FROM mail_fti.LocalMailsIndex3_content WHERE c0id IN (...)
DELETE FROM mail_fti.LocalMailsIndex3 WHERE docid = @docid
SELECT ... WHERE "id" IN (SELECT "id" FROM mail_fti.LocalMailsIndex3
WHERE content MATCH "...")
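For completeness, the maintenance side looks roughly like this (the attach
alias matches the statements above; the file name and the page count passed
to incremental_vacuum are only illustrative):

ATTACH DATABASE 'mail_fti.db' AS mail_fti;
-- auto_vacuum=incremental was set when the file was first created
PRAGMA mail_fti.journal_mode = persist;
-- run periodically from the maintenance thread:
PRAGMA mail_fti.freelist_count;
PRAGMA mail_fti.incremental_vacuum(100);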

Has anybody seen something like this?
Anybody willing to look at it? I can send the database privately.

Best regards,
Filip Navara

Main freelist: Bad ptr map entry key=5143 expected=(2,0) got=(3,4467)
Main freelist: freelist leaf count too big on page 5143
Main freelist: Bad ptr map entry key=5449 expected=(2,0) got=(4,5143)
Main freelist: freelist leaf count too big on page 5449
Main freelist: 904 of 908 pages missing from overflow list starting at 5143
On tree page 3878 cell 26: invalid page number 5737
On tree page 3878 cell 26: Child page depth differs
On tree page 3878 cell 27: Failed to read ptrmap key=5746
On tree page 3878 cell 27: invalid page number 5746
On tree page 3878 cell 28: Failed to read ptrmap key=5748
On tree page 3878 cell 28: invalid page number 5748
On tree page 3878 cell 29: Failed to read ptrmap key=5749
On tree page 3878 cell 29: invalid page number 5749
On tree page 3878 cell 30: Failed to read ptrmap key=5755
On tree page 3878 cell 30: invalid page number 5755
On tree page 3878 cell 31: Failed to read ptrmap key=5757
On tree page 3878 cell 31: invalid page number 5757
On tree page 3878 cell 32: Failed to read ptrmap key=5759
On tree page 3878 cell 32: invalid page number 5759
On tree page 3878 cell 33: Failed to read ptrmap key=5761
On tree page 3878 cell 33: invalid page number 5761
On tree page 3878 cell 34: Failed to read ptrmap key=5763
On tree page 3878 cell 34: invalid page number 5763
On tree page 3878 cell 35: Failed to read ptrmap key=5767
On tree page 3878 cell 35: invalid page number 5767
On tree page 3878 cell 36: Failed to read ptrmap key=5769
On tree page 3878 cell 36: invalid page number 5769
On tree page 3878 cell 37: Failed to read ptrmap key=5771
On tree page 3878 cell 37: invalid page number 5771
On tree page 3878 cell 38: Failed to read ptrmap key=5773
On tree page 3878 cell 38: invalid page number 5773
On tree page 3878 cell 39: Failed to read ptrmap key=5775
On tree page 3878 cell 39: invalid page number 5775
On tree page 3878 cell 40: Failed to read ptrmap key=5777
On tree page 3878 cell 40: invalid page number 5777
On tree page 3878 cell 41: Failed to read ptrmap key=5780
On tree page 3878 cell 41: invalid page number 5780
On tree page 3878 cell 42: Failed to read ptrmap key=5783
On tree page 3878 cell 42: invalid page number 5783
On tree page 3878 cell 43: Failed to read ptrmap key=5787
On tree page 3878 cell 43: invalid page number 5787
On tree page 3878 cell 44: Failed to read ptrmap key=5789
On tree page 3878 cell 44: invalid page number 5789
On tree page 3878 cell 45: Failed to read ptrmap key=5793
On tree page 3878 cell 45: invalid page number 5793
On tree page 3878 cell 46: Failed to read ptrmap key=5795
On tree page 3878 cell 46: invalid page number 5795
On tree page 3878 cell 47: Failed to read ptrmap key=5797
On tree page 3878 cell 47: invalid page number 5797
On tree page 3878 cell 48: Failed to read ptrmap key=5801
On tree page 3878 cell 48: invalid page number 5801
On tree page 3878 cell 49: Failed to read ptrmap key=5805
On tree page 3878 cell 49: invalid page number 5805
On tree page 3878 cell 50: Failed to read ptrmap key=5807
On tree page 3878 cell 50: invalid page number 5807
On tree page 3878 cell 51: Failed to read ptrmap key=58

[sqlite] Corrupted database

2015-03-04 Thread Alexandr Němec
Hi all,

we have a product that uses SQLite. Because it has been running very stably for
years, we are still using SQLite 3.7.17. Now we've seen on one of our
installations that the database has been corrupted; there was a power failure
around the time of the corruption. We tested power failures ourselves in the
past and the database had always survived. We are also aware of this:
https://www.sqlite.org/atomiccommit.html#sect_9_0

Nevertheless, I have the corrupted database and I could make it available if
someone likes to have a look at it. Maybe it would be possible to inspect the
kind of corruption to see whether it might be an SQLite problem or if one
of the "Things that can go wrong" (see the aforementioned link) kicked in...

Thanks

Alex



[sqlite] Corrupted database

2015-03-04 Thread Scott Hess
On Wed, Mar 4, 2015 at 1:11 AM, Alexandr Němec wrote:
> we have a product that uses SQLite. Because it was running very stable for
> years, we are still using SQLite 3.7.17. Now we've seen on one of our
> installations that the database has been corrupted, we saw that there was a
> power failure around the time of the corruption. We tested power failures by
> ourselves in the past, the database had always survived. We are also aware
> of this https://www.sqlite.org/atomiccommit.html#sect_9_0
> 
>
> Nevertheless, I have the corrupted database and I could make it available if
> someone likes to have a look at it. Maybe it would be possible to inspect
> the kind of the corruption to see whether it might be an SQLite problem or
> if one of the "Things that can go wrong" (see the aforementioned link)
> kicked in...

In the time I've been involved with high-volume SQLite clients (Google
Gears, then Chrome), what I've found is that the corruption invariably
(*) ends up being a case where distinct pages were not written
atomically, but where each page in isolation is completely valid when
you inspect the data.  For instance, you update a row and the
corresponding page for the table btree is updated but the page for the
index btree is not.  Usually this correlated with power-failure
events, but since corruptions of that sort aren't detected until
SQLite actually compares the pages, the cause and the detection can be
separated by months.

After analyzing many dozens of these over time, I mostly stopped
tracking down and reviewing people's files, because it really wasn't
telling me anything actionable.  SQLite wasn't breaking the files, and
AFAICT SQLite's system to guarantee atomicity works, so if some
combination of the operating system and hardware is causing the
problem you just need to figure out ways to recognize and deal with it
at a higher layer.

-scott

(*) I have also seen cases where someone intentionally messed with
disk I/O and screwed SQLite up.  I don't really count those, because
there's not much you can do to protect yourself against clever but
misguided programmers.


[sqlite] Corrupted database

2015-03-04 Thread Simon Slavin

On 4 Mar 2015, at 9:07pm, Scott Hess  wrote:

> In the time I've been involved with high-volume SQLite clients (Google
> Gears, then Chrome), what I've found is that the corruption invariably
> (*) ends up being a case where distinct pages were not written
> atomically, but where each page in isolation is completely valid when
> you inspect the data.  For instance, you update a row and the
> corresponding page for the table btree is updated but the page for the
> index btree is not.  Usually this correlated with power-failure
> events, but since corruptions of that sort aren't detected until
> SQLite actually compares the pages, the cause and detect can be
> separated by months.

Does SQLite automatically detect and uncorrupt these problems the next time it 
opens the database, or has something been done to the hardware to break 
in-order writing ?

Simon.


[sqlite] Corrupted database

2015-03-04 Thread Dave Dyer

I'd be interested if you could characterize the corruption.  For
example, can you still use .dump to dump the database, and if so
what kind of damage is there?

In the cases I've encountered recently, the "corruption" was only
a few duplicated records, which broke the uniqueness constraint
on an index.

Interestingly, and perhaps alarmingly, if the index hadn't existed,
no corruption would have been detected, although the internal events
that allowed the duplicate entries would still be an undetected error.




[sqlite] Corrupted database

2015-03-05 Thread Alexandr Němec
Hi Simon,

> Does SQLite automatically detect and uncorrupt these problems the next time
> it opens the database, or has something been done to the hardware to break
> in-order writing ?

No. The database can be opened successfully, but a simple "select" ends up with
"database disk image is malformed" in the command line shell. I can, however,
".dump" the database. Using the SQLite API, the "sqlite3_prepare_v2" function
ends up with error code 11 (SQLITE_CORRUPT).
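Roughly, the shell session looks like this (the table name is just an example):

sqlite> SELECT count(*) FROM some_table;
Error: database disk image is malformed
sqlite> .dump
(dump output is produced without complaint)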
?
I don't understand exactly, what you mean by "breaking in-order writing". The 
database has been running on a Windows 2012 R2 server and was placed (with a 
lot of other data) on a RAID-5 disk array made to one logical disk for the 
server. AFAICT it was a pretty standard Windows 2012 configured server.

Alex


[sqlite] Corrupted database

2015-03-05 Thread Alexandr Němec
Hi Dave,

thanks for your reply.

> I'd be interested if you could characterize the corruption.

As I already replied to Simon, I can ".dump" the database but I can't run any
"select" queries. The SQLite shell displays "database disk image is malformed"
and using the C API, error code 11 (SQLITE_CORRUPT) is returned.

Alex


[sqlite] Corrupted database

2015-03-05 Thread Simon Slavin

> On 5 Mar 2015, at 7:28am, Alexandr Němec wrote:
> 
> Hi Simon,
> 
>> Does SQLite automatically detect and uncorrupt these problems the next time
>> it opens the database, or has something been done to the hardware to break
>> in-order writing ?
> 
> No. The database can be opened successfully, but a simple "select" ends up
> with "database disk image is malformed" in the command line shell. I can,
> however, ".dump" the database. Using the SQLite API, the "sqlite3_prepare_v2"
> function ends up with error code 11 (SQLITE_CORRUPT).

Okay.  A few things, all of them agreeing with your description.  The _open() 
call doesn't actually access the database files.  It just makes a note of where 
they are for later use.  So to have SQLite automatically detect and recover 
from unfinished changes it's necessary to issue at least one SQL command that 
reads something from the database file.  For example, "SELECT rowid FROM atable 
LIMIT 1".

The '.dump' command reads the table without searching or sorting, so it
would not spot a corrupt index.

So from all the above it's likely that the data in your tables is okay, and 
only the indexes in that file are corrupt.  So dropping and recreating your 
indexes should fix the file perfectly unless the violation of the UNIQUE 
requirement is in the primary index for a table.
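In SQL terms that's just (index name and definition here are placeholders):

DROP INDEX IF EXISTS some_index;
CREATE INDEX some_index ON some_table(some_column);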

>  I don't understand exactly, what you mean by "breaking in-order writing". 
> The database has been running on a Windows 2012 R2 server and was placed 
> (with a lot of other data) on a RAID-5 disk array made to one logical disk 
> for the server. AFAICT it was a pretty standard Windows 2012 configured 
> server.

That does sound like a common setup and it is subject to corruption.  'in-order 
writing' is the assurance from the computer to the programmer that if they 
issue a number of write commands to disk, they will be executed in the order 
they were given rather than something like 1, 2, 5, 4, 3.  Without in-order 
writing, it is impossible to maintain a database in a trustworthy manner since 
whatever locking or corruption-checking you do can be easily defeated.  For 
more on this see





Unfortunately, most disk subsystems do not enforce in-order writing because it 
is far faster to write changes to disk in an order given by the fastest path 
which covers all parts of the disk which need to be written.  For a normal 
computer used mainly for Web/Word/Email/Games this doesn't matter.  For a 
computer used for databases it does matter.  Many Server-class hard disks or 
hard disk drivers come with settings which enforce correct behaviour (at the 
cost of slowing down writing).  Some RAID software has settings which do the 
same.  You might find these settings near, or part of, the ones to do with 
caching.

Unfortunately I don't know enough about Windows Server to advise you.

Simon.


[sqlite] Corrupted database

2015-03-05 Thread Dave Dyer
If you can .dump it, can you also use .read to restore it? And if so, how 
damaged does it appear to be?

My databases, with a corrupt index, couldn't be restored directly, but
the duplicate entries could be seen, cleaned up, and then the restore
succeeded.  Or (more conveniently) remove the index creation from the
.dump, restore, use queries to find and remove duplicates, then reinstate
the index.
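In shell terms that second route is roughly (file and index names are just
examples):

sqlite3 damaged.db ".dump" > dump.sql
(edit dump.sql and remove the offending CREATE INDEX / CREATE UNIQUE INDEX line)
sqlite3 repaired.db ".read dump.sql"
(find and delete the duplicate rows in repaired.db)
sqlite3 repaired.db "CREATE UNIQUE INDEX some_index ON some_table(some_column);"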

My thinking is that the kind of corruption I've had should at least
be a different error code, and that a pragma to drop the index could
allow repair without the extreme of dumping and editing the dump file.



[sqlite] Corrupted database

2015-03-06 Thread Alexandr Němec
Hi Simon,

> So from all the above it's likely that the data in your tables is okay, and
> only the indexes in that file are corrupt.  So dropping and recreating your
> indexes should fix the file perfectly unless the violation of the UNIQUE
> requirement is in the primary index for a table.

Yes, this seems to work.

> Many Server-class hard disks or hard disk drivers come with settings
> which enforce correct behaviour (at the cost of slowing down writing).
> Some RAID software has settings which do the same.

That's a good point, I will go through the configuration and look for these
settings.

Thanks

Alex


[sqlite] Corrupted database

2015-03-06 Thread Alexandr Němec
Hi Dave,

thanks. In fact, I know how to repair the database and get access to the data.
What I was trying to track down is the reason for the database getting
corrupted. But as Simon explained, this might have to do with the
"in-order writing" of the RAID we're using.

Alex


[sqlite] Corrupted database

2015-03-16 Thread Dave Dyer

I have some additional evidence that there is an underlying problem,
exacerbated by some failure in SMB file sharing.

In this instance, there is a set of duplicated records that did not
directly cause an indexing error, but which could have been created
if a transaction failed (presumably due to a file i/o error), was 
incorrectly unwound, and then repeated.

- Details -

Using the sqlite3 tool, starting with the damaged database:
 I dropped the indexes that had directly caused the complaint
 queried to find the duplicated records
 deleted the duplicated records
 tried to recreate the indexes (expecting this would succeed).  
 It did not.  I got a "database is malformed" error.

I take this as evidence that there was some actual damage to the
database, not just cleanly duplicated records with a bad index.
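For reference, finding the duplicated records is just a grouping query of this
shape (table and column names are placeholders):

SELECT keycol, COUNT(*) FROM some_table GROUP BY keycol HAVING COUNT(*) > 1;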

I did a full dump of the original database, removed the bad index
request, created a new database from the dump, repeated the duplicate 
record removal, and successfully created the index.  

This "fully repaired" database turned out to contain a duplicated set of 
records which did not cause an indexing problem, but which should not have
occurred, and was consistent with a duplicated transaction.  If this had 
been caused by a program error - i.e. I really inserted the records twice -
the database would not have been really damaged, and the shortcut repair I
tried first would have succeeded.

--

In this case, the client is a Mac running OS 10.7.5, the file
server is a PC running OS 8 server, and the sharing is via SMB.







[sqlite] Corrupted database

2015-03-16 Thread Simon Slavin

On 16 Mar 2015, at 6:35pm, Dave Dyer  wrote:

> This "fully repaired" database turned out to contain a duplicated set of 
> records which did not cause an indexing problem, but which should not have
> occurred, and was consistent with a duplicated transaction.  If this had 
> been caused by a program error - ie; I really inserted the records twice, 
> the database would not have been really damaged, and the shortcut repair I
> tried first would have succeeded.

You are correct in what you have noticed but the explanation is simpler and 
well-known.  Because of the format in which SQLite keeps its indexes, you 
cannot always repair a corrupted index by deleting the rows which you think are 
corrupt.  Sometimes you need to delete other rows too (for example those 
immediately before and after a corrupt row) and you need detailed analysis of 
what's wrong with the index to know exactly what you need to delete.  If you do 
leave corrupt information in the index it can lead to additional corruption 
when you add more rows to those indexes.

The only safe thing to do is to drop the index and remake it.  Or do 
something which does that (e.g. VACUUM).

Simon.


[sqlite] Corrupted database

2015-03-16 Thread Richard Hipp
On 3/16/15, Simon Slavin  wrote:
>
> Because of the format in which SQLite keeps its indexes, you
> cannot always repair a corrupted index by deleting the rows which you think
> are corrupt.  Sometimes you need to delete other rows too (for example those
> immediately before and after a corrupt row) and you need detailed analysis
> of what's wrong with the index to know exactly what you need to delete.

But you can always repair corruption in indexes using the "REINDEX" command.
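For example (the table and index names are illustrative):

REINDEX;              -- rebuild every index in every attached database
REINDEX some_table;   -- rebuild all indexes belonging to one table
REINDEX some_index;   -- rebuild a single named index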

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Corrupted database

2015-03-16 Thread Dave Dyer

>
>The only safe thing to do is to drop the index and remake it.  Or do 
>something which does that (e.g. VACUUM).

As I said, the first step in my repair was to drop the offending index.




[sqlite] Corrupted database

2015-03-16 Thread R.Smith


On 2015-03-16 08:35 PM, Dave Dyer wrote:
> I have some addition evidence that there is an underlying problem,
> exacerbated by some failure in SMB file sharing.
>
> In this instance, there is a set of duplicated records that did not
> directly cause an indexing error, but which could have been created
> if a transaction failed (presumably due to a file i/o error), was
> incorrectly unwound, and then repeated.
>
> - Details -
>
> Using the sqlite3 tool, starting with the damaged database;
>   I dropped the indexes that had directly caused the complaint
>   queried to find the duplicated records
>   deleted the duplicated records
>   tried to recreate the indexes (expecting this would succeed).
>   It did not.  I got a "database is malformed" error.
>
> I take this as evidence that there was some actual damage to the
> database, not just cleanly duplicated records with a bad index.

This is unfortunately an assumption and not evidence of any sort. How do 
you know the database is "healthy" in total now that you have deleted a 
couple of records? Did you run an integrity check which passed before 
attempting to recreate the indices?


> I did a full dump of the original database, removed the bad index
> request, created a new database from the dump, repeated the duplicate
> record removal, and successfully created the index.
>
> This "fully repaired" database turned out to contain a duplicated set of 
> records which did not cause an indexing problem, but which should not have
> occurred, and was consistent with a duplicated transaction.  If this had
> been caused by a program error - ie; I really inserted the records twice,
> the database would not have been really damaged, and the shortcut repair I
> tried first would have succeeded.

Another assumption I'm afraid. If a transaction failed, the script might 
call for any number of things based on your ON CONFLICT control 
settings.  If you are logging the DB activity and a rollback was logged, 
that would get us closer to evidence.

Also, one has to assume that SQLite has full access and control over the 
file and locking mechanisms before assuming that any damage must be due to 
SQLite not rolling back a transaction correctly - which incidentally is 
directly proven not to be the case by the very next statement:

> In this case, the client is a mac running os 10.7.5, the file
> server is a PC running OS 8 server, and the sharing is via SMB

Do you still have a copy of the originally damaged database? I believe a 
closer look at it will reveal more corruption than assumed.

I do however think you are on the right track with thinking that the 
duplicated transaction (or a failure of rollback, or a program error, or 
mishandling of a duplicate-insert failure that caused the transaction to be 
either committed twice, not failed correctly, or resubmitted without a 
clear prior failure or success) is the culprit here, and I am fairly 
certain this can happen in the setup described above, as per 
http://www.sqlite.org/howtocorrupt.html




[sqlite] Corrupted database

2015-03-16 Thread Dave Dyer

>
>Do you still have a copy of the originally damaged Database? I believe a 
>closer look to it will reveal more corruption than the assumed.

I have the original database.  What other tests could I do to look for 
evidence? 

It appears (so far) that the database I reconstructed from the dump (minus an 
index request) contains all the original data, but that's hard to prove.


>I do however think you are on the right track with thinking that the 
>duplicated transaction (or failure of rollback or program error or mishandling 
>of a duplicate insert fail that cause the transaction to be either committed 
>twice or not failed correctly or resubmitted without clear prior failure or 
>success) is the culprit here, and I am fairly certain this can happen in the 
>setup described above as per the http://www.sqlite.org/howtocorrupt.html

It's likely in this case that the Apple/Microsoft cooperation on SMB is 
ultimately at fault because it is breaking some guarantee that SQLite is 
depending on.  Their file sharing is known to be crap. It's still worthwhile 
to try to get to the bottom of it as a preliminary step to fixing it.






[sqlite] Corrupted database

2015-03-16 Thread R.Smith

On 2015-03-16 09:49 PM, Dave Dyer wrote:
>> Do you still have a copy of the originally damaged Database? I believe a 
>> closer look to it will reveal more corruption than the assumed.
> I have the original database.  What other tests could I do to look for 
> evidence?
>
> It appears (so far) that the database I reconstructed from the dump (minus an 
> index request)
> contains all the original data, but that's hard to prove.

Yes but the dump doesn't provide automatic indices and row-ids etc. 
(some of which might be corrupt too). It is great that all the data is 
perfectly intact, which means that the duplication problem you ran into is 
likely the only problem that occurred.

>
>> I do however think you are on the right track with thinking that the 
>> duplicated transaction (or failure of rollback or program error or 
>> mishandling of a duplicate insert fail that cause the transaction to be 
>> either committed twice or not failed correctly or resubmitted without clear 
>> prior failure or success) is the culprit here, and I am fairly certain this 
>> can happen in the setup described above as per the 
>> http://www.sqlite.org/howtocorrupt.html
> It's likely in this case that the apple/microsoft cooperation on SMB is 
> ultimately at fault
> because it is breaking some guarantee that sqlite is depending on.  Their 
> file sharing is known
> to be crap. It's still worthwhile to try to get to the bottom of it as a 
> preliminary step to fixing it.

It isn't likely, it is almost guaranteed - but not because of the 
above-mentioned players as such - there simply is no actual networked 
system in existence (that isn't excruciatingly slow) that does 
file-locking correctly via a network of any sort.

I will have to say very sadly that if you absolutely have to have a 
system where the DB file is not on the exact same machine (physical) 
as the client software (the bit using SQLite code), then SQLite is not 
a good choice - consider MySQL or Postgres perhaps. There was a crowd 
some time ago making an SQLite for client-server environments 
(SQLitening if memory serves), but I believe it's a commercial system.
See: http://www.sqlite.org/whentouse.html





[sqlite] Corrupted database

2015-03-16 Thread Keith Medcalf

>I will have to say very sadly that if you absolutely have to have a
>system where the DB file is not on the exact same machine (physical)
>than the client software (the bit using SQLite code), then SQLite is not
>a good choice - consider MySQL or Postgres perhaps. There was a crowd
>some time ago making an SQLite for client-server environments
>(SQLitening if memory serves), but I believe it's a commercial system.
>See: http://www.sqlite.org/whentouse.html

QNX also modifies SQLite (QDB) to work with their distributed interprocess 
messaging, thus all operations, even from remote nodes, occur against a local 
filesystem ... or at least that was the case last time I looked at it.

---
Theory is when you know everything but nothing works.  Practice is when 
everything works but no one knows why.  Sometimes theory and practice are 
combined:  nothing works and no one knows why.








[sqlite] Corrupted Database

2011-10-31 Thread Özgür KELEŞ

Hi,
We use SQLite in our industrial devices. But sometimes the database gets 
corrupted. We could not find the problem or how it is possible for the 
database to become corrupted.  Power loss on the devices and OS crashes caused 
by electromagnetic noise do happen, though rarely. I attached some of the 
corrupted databases. Can you help me on this subject?


Our Tools:
*OS: Embedded Linux 2.6.30.4
*QT Framework 4.7.2
*File System: yaffs2


Best Regards
--
Özgür KELEŞ

Kordonboyu Mh. Barbaros Cd. Usta 1 Apt. No:11/16 Kartal/Istanbul/Turkey

T:+90216 353 41 56 (pbx) F:+90216 374 19 15

www.enmos.com





Re: [sqlite] Corrupted database

2009-10-12 Thread McClellen, Chris
What is your synchronous set to?  Full?  FYI If you are using .NET data
providers, it is set to "Normal" by default. 

If it is not set to full, I have seen corruption when an application
crashes, or exits when a thread is in the middle of updating the db
(Synchronous = OFF makes corruption even easier in this case).  I have
seen apps that do not wait for background threads to finish before
termination, and without full sync on, either the db or the log gets
corrupted.  A corrupted log can cause problems for your db on next run
when recovery happens.
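To check and change it (0=OFF, 1=NORMAL, 2=FULL):

PRAGMA synchronous;         -- query the current setting
PRAGMA synchronous = FULL;  -- or NORMAL / OFF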




Re: [sqlite] Corrupted database

2009-10-12 Thread Dan Kennedy

On Oct 13, 2009, at 1:56 AM, McClellen, Chris wrote:

> What is your synchronous set to?  Full?  FYI If you are using .NET  
> data
> providers, it is set to "Normal" by default.
>
> If it is not set to full, I have seen corruption when an application
> crashes, or exits when a thread is in the middle of updating the db
> (Synchronous = OFF makes corruption even easier in this case).  I have
> seen apps that do not wait for background threads to finish before
> termination, and without full sync on, either the db or the log gets
> corrupted.  A corrupted log can cause problems for you db on next run
> when recovery happens.

In theory, this shouldn't happen. Unless the application is actually
buffering data that SQLite thinks has been written to the database or
journal file in the process space on some systems.

The "synchronous" setting should only make a difference in the event
of a power or OS failure. That's the theory, anyway.

Dan.




Re: [sqlite] Corrupted database

2009-10-13 Thread McClellen, Chris
But it does happen and we can reproduce it.  Hard killing a thread is
essentially equivalent to turning off the power.


Re: [sqlite] Corrupted database

2009-10-13 Thread Dan Kennedy

On Oct 14, 2009, at 12:42 AM, McClellen, Chris wrote:

> But it does happen and we can reproduce it.  Hard killing a thread is
> essentially equivalent to turning off the power.

We have always assumed that it is different. When you write data to
a file, the write is buffered in volatile memory by the OS for a time.
If a power failure occurs during this time, the write is lost. But if
a thread is killed, the OS should still eventually make sure the data
gets to stable storage.

If you kill the application, then open the database using the shell
tool, is the database corrupted?

Dan.




Re: [sqlite] Corrupted database

2009-10-14 Thread Filip Navara
On Mon, Oct 12, 2009 at 8:56 PM, McClellen, Chris
 wrote:
> What is your synchronous set to?  Full?  FYI If you are using .NET data
> providers, it is set to "Normal" by default.

Normal or Off, but no power failure was involved. (Yes,
System.Data.SQLite is used)

> If it is not set to full, I have seen corruption when an application
> crashes, or exits when a thread is in the middle of updating the db
> (Synchronous = OFF makes corruption even easier in this case).  I have
> seen apps that do not wait for background threads to finish before
> termination, and without full sync on, either the db or the log gets
> corrupted.  A corrupted log can cause problems for you db on next run
> when recovery happens.

Sounds suspiciously like our case, but still Synchronous=off is
supposed to work in the event of application crashes, hard-killed
threads and so on. The previous version of the application frequently
forgot to close the databases on exit and did other nasty things that
are now fixed, but none of them should cause the database to be
corrupted.

Best regards,
Filip Navara


Re: [sqlite] Corrupted database

2009-10-14 Thread McClellen, Chris
Yes, if we are in the middle of a lot of updates/inserts and just
terminate the thread, pragma integrity_check from the sqlite3 command
line tool will report corruption at times.  Normally, when we hard kill
a thread in the middle of these ops, a journal is left behind.  I think
we only see corruption in this case (journal left behind), but cannot be
sure.  Our transactions can be large (a few thousand records totaling a
few megabytes).  

Summary of steps:

1) hard-Kill a thread in the middle of inserting/updating a large
transaction (2mb+ transaction).  
2) with nothing else running, do a pragma integrity_check in sqlite3
command line client against the db.  Obviously after executing sqlite3
client, the journal disappears since I guess recovery ran.
3) integrity_check spits out lots of errors.
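Step 2 is just the command-line check (the file name is illustrative):

sqlite3 test.db "PRAGMA integrity_check;"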

Note that we have never had this happen with Synchronous=Full, only
Synchronous=Off.  Have not tried normal.  That's all I was trying to
tell the original person asking.  I don't know why this would happen; it
seems logical that once you execute a write to the FS, whether or not
the app crashes/spontaneously exits that the write would make it.
However, all of this may be a red herring.. It turns out we moved to
Synch=Full after we sped up our db operations greatly... see below as to
why that may be the factor (speed), and not what Synch is set to.

By the way, this is all under windows.

I'm assuming that sqlite3 uses synchronous I/O via WriteFile().  Here's a quick
read on synchronous I/O for Windows:
http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx --
specifically what worries me is that TerminateThread() looks like it can
interrupt an I/O operation.

The problem I think may be people using TerminateThread(); that's how
you hard-kill a thread.  It seems that can interrupt an I/O operation --
i.e. an operation that writes more than one cluster at a time.  Meaning,
synch = full may have nothing to do with it.  If you have to, say, write
more than one cluster (disk block), TerminateThread looks like it can
abort the I/O op in the middle of a multi-block op?  I'm trying to run
that down but can't yet find anything that verifies this.

So, here is what I think:
1) You need to write some data, lets say a 8K write.  In this
theoretical example, disk blocks are 4k.  
2) You call WriteFile(8K).
3) WriteFile causes a system call, and the system schedules the first
bit of the i/o (1st 4k).
4) terminatethread() is called
5) I/O operation is cancelled (ie, as if CancelIO had been called?),
meaning block #2 was never scheduled.  Database now corrupt.

A lot of people using windows tend to have this kind of threading
architecture:
1) Signal the thread to exit on its own
2) Wait for some grace period.
3) if grace period expires, and thread is still running --
TerminateThread() because once the grace period expires, the thread is
considered hung.

So, large transactions in a thread could cause people to use
TerminateThread() at a critical time, especially if that causes the
thread to go over its grace period.  For us, these large transactions
took longer than our grace period to complete, and thus were subject to
TerminateThread -- and lots of corruption.  Once we sped everything up
and moved to synch=full, no transaction was even close to the grace
period, and as such TerminateThread() is never called... and we get no
corruption.

Just a thought.


Re: [sqlite] Corrupted database

2009-10-14 Thread McClellen, Chris
Do you ever terminate threads via TerminateThread() or run as a service (where 
the SCM will call TerminateThread() once your main service thread exits)?  In 
other words, do you hard-kill your threads?


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Filip Navara
Sent: Wednesday, October 14, 2009 7:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Corrupted database

On Mon, Oct 12, 2009 at 8:56 PM, McClellen, Chris
 wrote:
> What is your synchronous set to?  Full?  FYI If you are using .NET data
> providers, it is set to "Normal" by default.

Normal or Off, but no power failure was involved. (Yes,
System.Data.SQLite is used)
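
(Assuming it helps to double-check from code: a minimal C sketch that queries
the current setting and forces FULL; 0 = OFF, 1 = NORMAL, 2 = FULL.  The
helper name is made up.)

/* Minimal sketch: read PRAGMA synchronous on an open handle and force FULL
 * if it is anything else.  force_full_sync() is a hypothetical helper. */
#include <sqlite3.h>

int force_full_sync(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    int value = -1;
    if (sqlite3_prepare_v2(db, "PRAGMA synchronous;", -1, &stmt, 0) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            value = sqlite3_column_int(stmt, 0);     /* current setting */
        sqlite3_finalize(stmt);
    }
    if (value != 2)
        return sqlite3_exec(db, "PRAGMA synchronous=FULL;", 0, 0, 0);
    return SQLITE_OK;
}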

> If it is not set to full, I have seen corruption when an application
> crashes, or exits when a thread is in the middle of updating the db
> (Synchronous = OFF makes corruption even easier in this case).  I have
> seen apps that do not wait for background threads to finish before
> termination, and without full sync on, either the db or the log gets
> corrupted.  A corrupted log can cause problems for you db on next run
> when recovery happens.

Sounds suspiciously like our case, but still Synchronous=off is
supposed to work in the event of an application crash, hard-killed
threads and so on. A previous version of the application frequently
forgot to close the databases on exit and did other nasty things that
are now fixed, but none of them should cause the database to be
corrupted.

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Dan Kennedy

> The problem I think may be people using TerminateThread();  that's how
> you hard kill a thread.  It seems that can interrupt an I/O  
> operation --
> ie an operation that writes more than one cluster at a time.  Meaning,
> synch = full may have nothing to do with it.  If you have to say write
> more than one cluster (disk block), TerminateThread looks like it can
> abort the IO op in the middle of a multi-block op?  I'm trying to run
> that down but can't yet find anything that verifies this.

Even if it does, which seems quite plausible, the only way
I can see this causing corruption is if you are in persistent
journal mode and a (weak) checksum gives you a false positive
on the last, corrupted, record in the journal file.

Maybe if you're using a version from a few years back it could
happen without persistent-journal mode too.

It's easy to be wrong about this kind of thing though.

Dan.



> So, here is what I think:
> 1) You need to write some data, lets say a 8K write.  In this
> theoretical example, disk blocks are 4k.
> 2) You call WriteFile(8K).
> 3) WriteFile causes a system call, and the system schedules the first
> bit of the i/o (1st 4k).
> 4) terminatethread() is called
> 5) I/O operation is cancelled (ie, as if CancelIO had been called?),
> meaning block #2 was never scheduled.  Database now corrupt.
>
> A lot of people using windows tend to have this kind of threading
> architecture:
> 1) Signal the thread to exit on its own
> 2) Wait for some grace period.
> 3) if grace period expires, and thread is still running --
> TerminateThread() because once the grace period expires, the thread is
> considered hung.
>
> So, large transactions in a thread could cause people to use
> TerminateThread() at a critical time, especially if that causes the
> thread to go over its grace period.  For us, these large transactions
> took longer than our grace period to complete, and thus were subject  
> to
> TerminateThread -- and lots of corruption.  Once we sped everything up
> and moved to synch=full, no transaction was even close to the grace
> period, and such, terminatethread() is never called.. and we get no
> corruption.
>
> Just a thought.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
> Sent: Wednesday, October 14, 2009 12:36 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Corrupted database
>
>
> On Oct 14, 2009, at 12:42 AM, McClellen, Chris wrote:
>
>> But it does happen and we can reproduce it.  Hard killing a thread is
>> essentially equivalent to turning off the power.
>
> We have always assumed that it is different. When you write data to
> a file, the write is buffered in volatile memory by the OS for a time.
> If a power failure occurs during this time, the write is lost. But if
> a thread is killed, the OS should still eventually make sure the data
> gets to stable storage.
>
> If you kill the application, then open the database using the shell
> tool, is the database corrupted?
>
> Dan.
>
>
>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
>> Sent: Tuesday, October 13, 2009 12:35 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Corrupted database
>>
>>
>> On Oct 13, 2009, at 1:56 AM, McClellen, Chris wrote:
>>
>>> What is your synchronous set to?  Full?  FYI If you are using .NET
>>> data
>>> providers, it is set to "Normal" by default.
>>>
>>> If it is not set to full, I have seen corruption when an application
>>> crashes, or exits when a thread is in the middle of updating the db
>>> (Synchronous = OFF makes corruption even easier in this case).  I
>>> have
>>> seen apps that do not wait for background threads to finish before
>>> termination, and without full sync on, either the db or the log gets
>>> corrupted.  A corrupted log can cause problems for you db on next  
>>> run
>>> when recovery happens.
>>
>> In theory, this shouldn't happen. Unless the application is actually
>> buffering data that SQLite thinks has been written to the database or
>> journal file in the process space on some systems.
>>
>> The "synchronous" setting should only make a difference in the event
>> of a power or OS failure. That's the theory, anyway.
>>
>> Dan.
>>
>>
>>
>>>
>>>
>>>
>>> -Original Message-
>>> From: sqlite-u

Re: [sqlite] Corrupted database

2009-10-14 Thread Teg
Hello Chris,

It's always a mistake to use TerminateThread. Even Microsoft warns
against it.

From MSDN:

- If the target thread owns a critical section, the critical section will not 
be released.
- If the target thread is allocating memory from the heap, the heap lock will 
not be released.
- If the target thread is executing certain kernel32 calls when it is
terminated, the kernel32 state for the thread's process could be
inconsistent.
- If the target thread is manipulating the global state of a shared DLL,
the state of the DLL could be destroyed, affecting other users of the
DLL.

Terminating a thread can hang the whole application, if I read #1 and #2
correctly. If you use TerminateThread regularly, you should really
reconsider your design.
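
As a sketch of the kind of cooperative shutdown that avoids TerminateThread
altogether (all names hypothetical):

/* Minimal sketch of a cooperative shutdown: the worker checks an event
 * between complete transactions and the main thread waits without a hard
 * kill.  g_quit is assumed to be a manual-reset event from CreateEvent(). */
#include <windows.h>

static HANDLE g_quit;

static DWORD WINAPI db_worker(LPVOID arg)
{
    (void)arg;
    while (WaitForSingleObject(g_quit, 0) == WAIT_TIMEOUT) {
        /* run one complete transaction here, then loop and re-check */
    }
    return 0;
}

static void stop_worker(HANDLE hThread)
{
    SetEvent(g_quit);
    /* Wait as long as it takes to finish the current transaction;
     * there is deliberately no TerminateThread() fallback. */
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
}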

C


Wednesday, October 14, 2009, 11:29:29 AM, you wrote:

MC> Yes, if we are in the middle of a lot of updates/inserts and just
MC> terminate the thread, pragma integrity_check from the sqlite3 command
MC> line tool will report corruption at times.  Normally, when we hard kill
MC> a thread in the middle of these ops, a journal is left behind.  I think
MC> we only see corruption in this case (journal left behind), but cannot be
MC> sure.  Our transactions can be large (a few thousand records totaling a
MC> few megabytes).  

MC> Summary of steps:

MC> 1) hard-Kill a thread in the middle of inserting/updating a large
MC> transaction (2mb+ transaction).  
MC> 2) with nothing else running, do a pragma integrity_check in sqlite3
MC> command line client against the db.  Obviously after executing sqlite3
MC> client, the journal disappears since I guess recovery ran.
MC> 3) integrity_check spits out lots of errors.

MC> Note that we have never had this happen with Synchronous=Full, only
MC> Synchronous=Off.  Have not tried normal.  That's all I was trying to
MC> tell the original person asking.  I don't know why this would happen; it
MC> seems logical that once you execute a write to the FS, whether or not
MC> the app crashes/spontaneously exits that the write would make it.
MC> However, all of this may be a red herring.. It turns out we moved to
MC> Synch=Full after we sped up our db operations greatly... see below as to
MC> why that may be the factor (speed), and not what Synch is set to.

MC> By the way, this is all under windows.

MC> I'm assuming that sqlite3 uses sync i/o via writefile.  Here's a quick
MC> read on sync IO for windows:
MC> http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx --
MC> specifically what worries me is that TerminateThread() looks like it can
MC> interrupt an I/O operation.

MC> The problem I think may be people using TerminateThread();  that's how
MC> you hard kill a thread.  It seems that can interrupt an I/O operation --
MC> ie an operation that writes more than one cluster at a time.  Meaning,
MC> synch = full may have nothing to do with it.  If you have to say write
MC> more than one cluster (disk block), TerminateThread looks like it can
MC> abort the IO op in the middle of a multi-block op?  I'm trying to run
MC> that down but can't yet find anything that verifies this. 

MC> So, here is what I think:
MC> 1) You need to write some data, lets say a 8K write.  In this
MC> theoretical example, disk blocks are 4k.  
MC> 2) You call WriteFile(8K).
MC> 3) WriteFile causes a system call, and the system schedules the first
MC> bit of the i/o (1st 4k).
MC> 4) terminatethread() is called
MC> 5) I/O operation is cancelled (ie, as if CancelIO had been called?),
MC> meaning block #2 was never scheduled.  Database now corrupt.

MC> A lot of people using windows tend to have this kind of threading
MC> architecture:
MC> 1) Signal the thread to exit on its own
MC> 2) Wait for some grace period.
MC> 3) if grace period expires, and thread is still running --
MC> TerminateThread() because once the grace period expires, the thread is
MC> considered hung.

MC> So, large transactions in a thread could cause people to use
MC> TerminateThread() at a critical time, especially if that causes the
MC> thread to go over its grace period.  For us, these large transactions
MC> took longer than our grace period to complete, and thus were subject to
MC> TerminateThread -- and lots of corruption.  Once we sped everything up
MC> and moved to synch=full, no transaction was even close to the grace
MC> period, and such, terminatethread() is never called.. and we get no
MC> corruption.

MC> Just a thought.

MC> -Original Message-
MC> From: sqlite-users-boun...@sqlite.org
MC> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
MC> Sent: Wednesday, October 14, 2009 12:36 AM
MC> To: General Discussion of SQLite Database
MC> Subject: Re: [sqlite] Corrupted database


MC> On Oct 14, 2009, at 12:42 AM

Re: [sqlite] Corrupted database

2009-10-14 Thread McClellen, Chris
Yes, I agree.  What I am now trying to find out is whether things like
running a service or .NET service cause TerminateThread to be called
behind the scenes as some kind of cleanup.  The testing was to show that
this can be a problem, and to characterize why some dbs can get corrupted on
"normal exits".



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Teg
Sent: Wednesday, October 14, 2009 12:10 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Corrupted database

Hello Chris,

It's always a mistake to use TerminateThread. Even Microsoft warns
against it.

From MSDN:

- If the target thread owns a critical section, the critical section
will not be released.
- If the target thread is allocating memory from the heap, the heap lock
will not be released.
- If the target thread is executing certain kernel32 calls when it is
terminated, the kernel32 state for the thread's process could be
inconsistent.
- If the target thread is manipulating the global state of a shared DLL,
the state of the DLL could be destroyed, affecting other users of the
DLL.

Terminating a thread can hang the whole application of I read #1 and #2
correctly. If you use TerminateThread regularly, you should really
re-consider your design.

C


Wednesday, October 14, 2009, 11:29:29 AM, you wrote:

MC> Yes, if we are in the middle of a lot of updates/inserts and just
MC> terminate the thread, pragma integrity_check from the sqlite3
command
MC> line tool will report corruption at times.  Normally, when we hard
kill
MC> a thread in the middle of these ops, a journal is left behind.  I
think
MC> we only see corruption in this case (journal left behind), but
cannot be
MC> sure.  Our transactions can be large (a few thousand records
totaling a
MC> few megabytes).  

MC> Summary of steps:

MC> 1) hard-Kill a thread in the middle of inserting/updating a large
MC> transaction (2mb+ transaction).  
MC> 2) with nothing else running, do a pragma integrity_check in sqlite3
MC> command line client against the db.  Obviously after executing
sqlite3
MC> client, the journal disappears since I guess recovery ran.
MC> 3) integrity_check spits out lots of errors.

MC> Note that we have never had this happen with Synchronous=Full, only
MC> Synchronous=Off.  Have not tried normal.  That's all I was trying to
MC> tell the original person asking.  I don't know why this would
happen; it
MC> seems logical that once you execute a write to the FS, whether or
not
MC> the app crashes/spontaneously exits that the write would make it.
MC> However, all of this may be a red herring.. It turns out we moved to
MC> Synch=Full after we sped up our db operations greatly... see below
as to
MC> why that may be the factor (speed), and not what Synch is set to.

MC> By the way, this is all under windows.

MC> I'm assuming that sqlite3 uses sync i/o via writefile.  Here's a
quick
MC> read on sync IO for windows:
MC> http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx --
MC> specifically what worries me is that TerminateThread() looks like it
can
MC> interrupt an I/O operation.

MC> The problem I think may be people using TerminateThread();  that's
how
MC> you hard kill a thread.  It seems that can interrupt an I/O
operation --
MC> ie an operation that writes more than one cluster at a time.
Meaning,
MC> synch = full may have nothing to do with it.  If you have to say
write
MC> more than one cluster (disk block), TerminateThread looks like it
can
MC> abort the IO op in the middle of a multi-block op?  I'm trying to
run
MC> that down but can't yet find anything that verifies this. 

MC> So, here is what I think:
MC> 1) You need to write some data, lets say a 8K write.  In this
MC> theoretical example, disk blocks are 4k.  
MC> 2) You call WriteFile(8K).
MC> 3) WriteFile causes a system call, and the system schedules the
first
MC> bit of the i/o (1st 4k).
MC> 4) terminatethread() is called
MC> 5) I/O operation is cancelled (ie, as if CancelIO had been called?),
MC> meaning block #2 was never scheduled.  Database now corrupt.

MC> A lot of people using windows tend to have this kind of threading
MC> architecture:
MC> 1) Signal the thread to exit on its own
MC> 2) Wait for some grace period.
MC> 3) if grace period expires, and thread is still running --
MC> TerminateThread() because once the grace period expires, the thread
is
MC> considered hung.

MC> So, large transactions in a thread could cause people to use
MC> TerminateThread() at a critical time, especially if that causes the
MC> thread to go over its grace period.  For us, these large
transactions
MC> took longer than our grace period to complete, and thus were subject
to
MC> TerminateThread -- and lots of corruption.  

Re: [sqlite] Corrupted database

2009-10-14 Thread Reusche, Andrew
We get a "2" returned when we check synchronous.  I think that's the
enum for "full".  We do not explicitly kill any threads, but when we
decide to reboot or shut down, we call "ExitProcess(0)" without stopping
any DB threads, and I'm sure this isn't very healthy.

Andrew

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread D. Richard Hipp

On Oct 14, 2009, at 2:51 PM, Reusche, Andrew wrote:

> We get a "2" returned when we check synchronous.  I think that's the
> enum for "full".  We do not explicitely kill any threads, but when we
> decide to reboot or shutdown, we call "ExitProcess(0)" without  
> stopping
> any DB threads, and I'm sure this isn't very healthy.


I don't think that should cause problems.  But on the other hand, I'm  
a unix programmer and I sometimes find the behavior of windows to be  
baffling.  On windows, if you do a write() followed immediately by  
ExitProcess(), is it the case that the write might not actually occur?

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Teg
Hello Chris,

Customer PC's, right? I've never had a corrupt DB3 here, and that
includes me jumping out of the debugger mid-transaction, but I do have
customers who get corrupted DB's even with sync set to full. For some
customers, deleting the DB3's, running once to let them get created and
then restarting is enough to damage them.

I've come to the conclusion that in many cases, the PC's themselves
are the culprit. In many cases, my idiot users (not all are idiots)
will have multiple security packages installed, each with kernel
drivers that watch disk IO and interfere with it under the hood. These
same users usually have a litany of weird problems while the majority
of users have none.

I typically ask these "problem children" to remove their security
packages and reboot. Then test with no security packages installed.
Many times this solves the issue. Sometimes simply upgrading to the
most current version is enough to solve the problem.

I've come to the conclusion that AV packages and software firewalls
are more of a problem than the viruses they're trying to catch. PC
hardware really isn't that reliable either. I know this doesn't help
you, but you might want to consider the PC's themselves as you
investigate this.  I have >10,000 active users. The number of
users reporting these issues is perhaps 50-100.

C

Wednesday, October 14, 2009, 1:48:36 PM, you wrote:

MC> Yes, I agree.  What I am now trying to find out is if things like
MC> running a service or .NET service causes terminatethread to be called
MC> behind the scenes as some kind of cleanup.  The testing was to show that
MC> this can be a problem, to characterize why some dbs can get corrupted on
MC> "normal exits"



MC> -Original Message-
MC> From: sqlite-users-boun...@sqlite.org
MC> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Teg
MC> Sent: Wednesday, October 14, 2009 12:10 PM
MC> To: General Discussion of SQLite Database
MC> Subject: Re: [sqlite] Corrupted database

MC> Hello Chris,

MC> It's always a mistake to use TerminateThread. Even Microsoft warns
MC> against it.

MC> From MSDN:

MC> - If the target thread owns a critical section, the critical section
MC> will not be released.
MC> - If the target thread is allocating memory from the heap, the heap lock
MC> will not be released.
MC> - If the target thread is executing certain kernel32 calls when it is
MC> terminated, the kernel32 state for the thread's process could be
MC> inconsistent.
MC> - If the target thread is manipulating the global state of a shared DLL,
MC> the state of the DLL could be destroyed, affecting other users of the
MC> DLL.

MC> Terminating a thread can hang the whole application of I read #1 and #2
MC> correctly. If you use TerminateThread regularly, you should really
MC> re-consider your design.

MC> C


MC> Wednesday, October 14, 2009, 11:29:29 AM, you wrote:

MC>> Yes, if we are in the middle of a lot of updates/inserts and just
MC>> terminate the thread, pragma integrity_check from the sqlite3
MC> command
MC>> line tool will report corruption at times.  Normally, when we hard
MC> kill
MC>> a thread in the middle of these ops, a journal is left behind.  I
MC> think
MC>> we only see corruption in this case (journal left behind), but
MC> cannot be
MC>> sure.  Our transactions can be large (a few thousand records
MC> totaling a
MC>> few megabytes).  

MC>> Summary of steps:

MC>> 1) hard-Kill a thread in the middle of inserting/updating a large
MC>> transaction (2mb+ transaction).  
MC>> 2) with nothing else running, do a pragma integrity_check in sqlite3
MC>> command line client against the db.  Obviously after executing
MC> sqlite3
MC>> client, the journal disappears since I guess recovery ran.
MC>> 3) integrity_check spits out lots of errors.

MC>> Note that we have never had this happen with Synchronous=Full, only
MC>> Synchronous=Off.  Have not tried normal.  That's all I was trying to
MC>> tell the original person asking.  I don't know why this would
MC> happen; it
MC>> seems logical that once you execute a write to the FS, whether or
MC> not
MC>> the app crashes/spontaneously exits that the write would make it.
MC>> However, all of this may be a red herring.. It turns out we moved to
MC>> Synch=Full after we sped up our db operations greatly... see below
MC> as to
MC>> why that may be the factor (speed), and not what Synch is set to.

MC>> By the way, this is all under windows.

MC>> I'm assuming that sqlite3 uses sync i/o via writefile.  Here's a
MC> quick
MC>> read on sync IO for windows:
MC>> http://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx --
MC>&

Re: [sqlite] Corrupted database

2009-10-14 Thread McClellen, Chris
I think the issue is:

Thread 1 does ExitProcess/TerminateProcess (or Process.Kill, or
anything like that).
Thread 2 does write() -- the write, I believe, can be interrupted when
partially complete in these cases (only part of the blocks have been
written to disk, the others are not even scheduled).  When a database
has overflow chains (and a chain is being modified), this is
particularly disastrous.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of D. Richard Hipp
Sent: Wednesday, October 14, 2009 2:57 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Corrupted database


On Oct 14, 2009, at 2:51 PM, Reusche, Andrew wrote:

> We get a "2" returned when we check synchronous.  I think that's the
> enum for "full".  We do not explicitely kill any threads, but when we
> decide to reboot or shutdown, we call "ExitProcess(0)" without  
> stopping
> any DB threads, and I'm sure this isn't very healthy.


I don't think that should cause problems.  But on the other hand, I'm  
a unix programmer and I sometimes find the behavior of windows to be  
baffling.  On windows, if you do a write() followed immediately by  
ExitProcess(), is it the case that the write might not actually occur?

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread D. Richard Hipp

On Oct 14, 2009, at 2:59 PM, McClellen, Chris wrote:

> I think the issue is :
>
> Thread 1 does exitprocess/terminateprocess  (or process.kill, or
> anything like that)
> Thread 2 does write() -- the write I believe can be interrupted when
> partially complete in these cases  (only part of the blocks have been
> written to disk, the others are not even scheduled).  When a database
> has overflow chains (and a chain is being modified), this is
> particularly disastrous.


We assume that partial writes can occur on an unclean shutdown.  In  
fact, we run thousands and thousands of test cases to verify that  
partial writes do not corrupt the database file.

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Filip Navara
Some threads were hard killed (using the .NET
"Thread.IsBackgroundThread" mechanism, which uses TerminateThread
AFAIK) in that version during normal operation, but none of them do
database writes. They could have been doing database reads though.

The whole application has been hard-killed a few times when an exception
happened during database manipulation, though.

Best regards,
Filip Navara

On Wed, Oct 14, 2009 at 5:45 PM, McClellen, Chris
 wrote:
> Do you ever teriminate threads via TerminateThread() or run as a service 
> (where the scm will call terminateThread() once your main service thread 
> exits)?  In other words do you hard-kill your threads?
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Filip Navara
> Sent: Wednesday, October 14, 2009 7:15 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Corrupted database
>
> On Mon, Oct 12, 2009 at 8:56 PM, McClellen, Chris
>  wrote:
>> What is your synchronous set to?  Full?  FYI If you are using .NET data
>> providers, it is set to "Normal" by default.
>
> Normal or Off, but no power failure was involved. (Yes,
> System.Data.SQLite is used)
>
>> If it is not set to full, I have seen corruption when an application
>> crashes, or exits when a thread is in the middle of updating the db
>> (Synchronous = OFF makes corruption even easier in this case).  I have
>> seen apps that do not wait for background threads to finish before
>> termination, and without full sync on, either the db or the log gets
>> corrupted.  A corrupted log can cause problems for you db on next run
>> when recovery happens.
>
> Sounds suspiciously like our case, but still Synchronous=off is
> supposed to work in the event of application crash, hard killed
> threads and so on. Previous version of the application frequently
> forgot to close the databases on exit and did other nasty things that
> and now fixed, but none of them should cause the database to be
> corrupted.
>
> Best regards,
> Filip Navara
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database

2009-10-14 Thread Filip Navara
On Wed, Oct 14, 2009 at 6:06 PM, Dan Kennedy  wrote:
>
>> The problem I think may be people using TerminateThread();  that's how
>> you hard kill a thread.  It seems that can interrupt an I/O
>> operation --
>> ie an operation that writes more than one cluster at a time.  Meaning,
>> synch = full may have nothing to do with it.  If you have to say write
>> more than one cluster (disk block), TerminateThread looks like it can
>> abort the IO op in the middle of a multi-block op?  I'm trying to run
>> that down but can't yet find anything that verifies this.
>
> Even if it does, which seems quite plausible, the only way
> I can see this causing corruption is if you are in persistent
> journal mode and a (weak) checksum gives you a false positive
> on the last, corrupted, record in the journal file.

This is quite possibly happening in our case. Is there any way to prove the
theory? What should we look for?

The problem is that journal_mode=persist was the only usable
journaling mode on Windows due to the way file deletion is handled (in
SQLite). Using journal_mode=delete is problematic on any machine with
TortoiseSVN/TortoiseGIT or other such programs installed. Now that
journal_mode=truncate exists, we can try switching to that.
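
(If it helps, a minimal C sketch of the switch; the pragma returns the mode
actually in effect as a result row, so it can be verified:)

/* Minimal sketch: ask for journal_mode=TRUNCATE and confirm that SQLite
 * actually selected it (the pragma returns the mode as a lowercase string). */
#include <string.h>
#include <sqlite3.h>

int set_truncate_journal(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    int ok = 0;
    if (sqlite3_prepare_v2(db, "PRAGMA journal_mode=TRUNCATE;", -1, &stmt, 0)
            == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            const char *mode = (const char *)sqlite3_column_text(stmt, 0);
            ok = (mode != 0 && strcmp(mode, "truncate") == 0);
        }
        sqlite3_finalize(stmt);
    }
    return ok;   /* 1 if the journal mode is now TRUNCATE */
}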

Best regards,
Filip Navara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database repairing

2008-07-24 Thread Alexey Pechnikov
Hello!

Is there any way to repair a corrupted database? Suppose I have an archive copy of 
the database and some of its pages become corrupted - can I take the correct pages 
and merge them with the archive database?

P.S. I have no corrupted database right now, but this question is important for me.

Best regards, Alexey.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database file.

2011-01-05 Thread Greg Morehead
I seem to have a corrupted DB which I was able to fix with a vacuum command.  
Any insights on how this could occur would be greatly appreciated.

Before the vacuum command:

 - sql: select rowid, timestamp from DCDCs order by rowid asc limit 1;
 - Result: 7513686  1284840120682

 - sql: select rowid, timestamp from DCDCs order by rowid desc limit 1;
 - Result: 7511084  1284834960299

* The first hint of an issue is that the rowid returned by the first query SHOULD 
be less than the one returned by the second!

After the vacuum command:

 - sql: select rowid, timestamp from DCDCs order by rowid asc limit 1;
 - Result: 7511062  1284834900371

 -sql: select rowid, timestamp from DCDCs order by rowid desc limit 1;
 - Result: 8168861  1285906501933

** These results are correct.

I can forward the database if someone wants it.  It's large so I did not want 
to forward it to the list.

Thanks,
Greg


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database files

2014-06-05 Thread Lasse Jansen
Hi,

we have a Mac app that uses CoreData which internally uses SQLite. Some of
the queries are not expressible within CoreData, so we send them manually
using the sqlite library that comes with Mac OS X. Now some of our users
have reported that their database file got corrupted and after some
researching I think it's because of multiple copies of SQLite being linked
into the same application as described here:

http://www.sqlite.org/howtocorrupt.html

Even though we link CoreData to our application and CoreData uses sqlite
internally we still have to explicitly link libsqlite as the CoreData
version of sqlite is inaccessible due to the usage of two-level-namespacing.

So I have two questions:
1. Can this be solved without dropping CoreData?
2. If not, is there a workaround that we could use until we replaced
CoreData with something of our own?

I'm thinking of this:
As the problem seems to occur due to calling close() and we only use
libsqlite for read-only access, would just not closing the read-only
database connection prevent the corruption?

Thanks!
Lasse
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted Database

2011-10-31 Thread Igor Tandetnik
Özgür KELEŞ  wrote:
> We use sqlite in our industrial devices. But sometimes the database
> corrupted. We could not find the problem , how it can be possible to
> corrupt the database.

http://www.sqlite.org/lockingv3.html

Section 6.0 "How To Corrupt Your Database Files"
-- 
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-24 Thread Alexey Pechnikov
In a message on Thursday 24 July 2008 20:48:08, Alexey Pechnikov wrote:
> Hello!
>
> Is any way to repair corrupted database? May be I have archive copy of
> database and corrupted this pages - can I get correct pages and merge their
> with archive database?
>
> P.S. I have no corrupted database now but this question is important for
> me.

If the page allocation data is correct then there is a chance to recover the 
non-corrupted pages. But how can that be done? And can I disable reading of the 
database schema and get access to the non-corrupted pages?

Can I manually set the database schema (maybe in memory only, for the current 
session) to get access to the tables?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-24 Thread Roger Binns

Alexey Pechnikov wrote:
> Is any way to repair corrupted database? 

From a theoretical point of view, the only way to repair a corrupted
database is if there are multiple redundant copies of the data or ways of
regenerating that data.  Since SQLite doesn't do that (exception: indices
can be regenerated from uncorrupted data) you are mainly out of luck.
Instead SQLite takes the approach of trying to prevent corruption in the
first place.

You can address this problem yourself.  Write your own custom VFS layer
where you can store multiple redundant copies, checksums or whatever
else you are trying to defend against.  You can also use it to verify
that SQLite handles situations well, in addition to your own code.  (For
example make a write routine emulate disk full).  [BTW the SQLite test
suite is full of tests like this anyway]
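
For concreteness only, a minimal sketch of the registration side of such a VFS
(the "shadow" name and the helper are made up; the redundancy logic itself is
only indicated in comments):

/* Minimal sketch: register a VFS that, as written, just forwards to the
 * default one.  Real redundancy or checksumming would be added by wrapping
 * xOpen and the sqlite3_io_methods it hands back. */
#include <sqlite3.h>

static sqlite3_vfs shadow_vfs;                 /* our copy of the default VFS */

int register_shadow_vfs(void)
{
    sqlite3_vfs *def = sqlite3_vfs_find(0);    /* the current default VFS */
    if (def == 0) return SQLITE_ERROR;
    shadow_vfs = *def;                         /* start as an exact copy */
    shadow_vfs.zName = "shadow";               /* name used when opening the db */
    /* To add redundant copies or checksums, replace shadow_vfs.xOpen here with
     * a wrapper whose io methods also update the duplicate in xWrite/xSync,
     * then chain to def->xOpen for the real work. */
    return sqlite3_vfs_register(&shadow_vfs, 0);  /* 0 = don't make it default */
}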

It is also worth noting that unless you are running on mainframes or
"server" hardware, other corruption will be ignored.  For example
commodity machines don't have error checking or correcting RAM, checking
CPUs, checking hard disk controllers.  The good news is that it is
slowly coming such as end to end checksums in ZFS, checksums in the SATA
spec etc.

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-24 Thread Alexey Pechnikov
In a message on Friday 25 July 2008 03:40:22, Roger Binns wrote:
> Alexey Pechnikov wrote:
> > Is any way to repair corrupted database?
>
> From a theoretical point of view the only way to repair a corrupted
> database is if there are multiple redundant copies of data or of
> generating that data.  Since SQLite doesn't do that (exception: indices
> can be regenerated from uncorrupted data) you are mainly out of luck.
> Instead SQLite takes the approach of trying to prevent corruption in the
> first place.
>
> You can address this problem yourself.  Write your own custom VFS layer
> where you can store multiple redundant copies, checksums or whatever
> else you are trying to defend against.

Maybe on the FS layer? Which FS can help me with this? I'm using the ext3 FS now on my 
debian box. Maybe rsync or similar software can restore corrupted blocks from 
full or incremental backups? 

> It is also worth noting that unless you are running on mainframes or
> "server" hardware, other corruption will be ignored.  

Yes, I'm using SQLite on servers. On winmobile PDA/smartphones I have no 
problems with SQLite, and if a database gets corrupted in that environment then 
winmobile must be reinstalled and the FS reformatted anyway. But on a server I must 
make provision for restoring the data in any case.

> For example 
> commodity machines don't have error checking or correcting RAM, checking
> CPUs, checking hard disk controllers.  The good news is that it is
> slowly coming such as end to end checksums in ZFS, checksums in the SATA
> spec etc.

Database servers such as Oracle or PostgreSQL have a transaction log and a 
restore log. How can I provide restore mechanisms for an SQLite database? Maybe 
some fuse VFS module can produce an incremental delta of changes? Maybe I can make 
an incremental copy of the SQLite database after every writing transaction "on the 
fly" (without blocking the database) with rsync/... ?

P.S. Database servers are not good for me because I need a free, fast and 
reliable embedded database in my multi-threaded application server. 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-25 Thread Roger Binns

Alexey Pechnikov wrote:
> May be on FS layer? 

I specifically said VFS which is SQLite functionality - see
http://www.sqlite.org/c3ref/vfs.html and
http://www.sqlite.org/c3ref/io_methods.html

> Which FS can help me for this? I'm using ext3 FS now on my 
> debian box.

Filesystems don't help since they don't store redundant copies of data.
 At the block layer things like RAID do.

> May be rsync or like software can restore corruption blocks from 
> full or incremental backups? 

How?  The backup would have to correspond exactly to the current file
otherwise you could be restoring stale blocks.  RAID is a far better
approach.

> Yes, I'm using SQLite on servers. 

Just because you call it a server doesn't make it "server" hardware :-)
When you pay the big bucks you get memory that can detect and correct
errors, cpus run in lockstep with failure detection, multiple paths to
storage devices, raid and similar technologies on the storage subsystem
etc.  Standard PCs have none of this.

> Database servers such as Oracle or PostgreSQL have transactions log and 
> restore log. 

Those logs are effectively duplicates of the data or ways of
reconstructing the data.  SQLite has a transaction log for the last
transaction only, and only while it is in progress.

> How can I provide restore mechanisms for SQLite database?

You need to sit down and work out how much your data is worth, what you
want to protect against, how often bad things happen and how much you
are prepared to pay.  Food for thought:

http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=504
http://lwn.net/Articles/290141/
http://www.newscientist.com/blog/technology/2008/03/do-we-need-cosmic-ray-alerts-for.html
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Error-free_data_transfer

> May be
> any fuse VFS module can do incremental delta of changes? May be I can do 
> incremental copy of SQLIte database after every writing transaction "on the 
> fly" (without database blocking) by rsync/... ?

You can write a SQLite VFS module (*not a fuse one*) that does data
duplication.  You only need to worry about the duplicated data when
xSync is called.

> P.S. Database servers not good for me because I need for free, fast and 
> reliable embedded database in my multi-thread application server. 

You can't have 100% reliability as well as free (hardware and software).
 You can aim for 99 point some number of nines, but the more nines you
have the more expensive it gets exponentially.

In the real world, you will find that SQLite is suitably reliable for a
large number of people and projects, taking
http://www.sqlite.org/lockingv3.html#how_to_corrupt into account.

If you are more paranoid than that then write a SQLite VFS module that
effectively does custom file level 'raid'. Duplicate data as many times
as you want (don't forget to send it over the network as well for other
machines to check).  You'll also want to check multiple copies on reads
in case one is corrupt.

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-25 Thread Alexey Pechnikov
In a message on Friday 25 July 2008 12:16:39, Roger Binns wrote:
> Alexey Pechnikov wrote:
> > May be on FS layer?
>
> I specifically said VFS which is SQLite functionality - see
> http://www.sqlite.org/c3ref/vfs.html and
> http://www.sqlite.org/c3ref/io_methods.html

Thanks, the last link may be helpful for me.

> > Yes, I'm using SQLite on servers.
>
> Just because you call it a server doesn't make it "server" hardware :-)
> When you pay the big bucks you get memory that can detect and correct
> errors, cpus run in lockstep with failure detection, multiple paths to
> storage devices, raid and similar technologies on the storage subsystem
> etc.  Standard PCs have none of this.

I have server hardware with mirrored RAID, ECC RAM etc. But the hardware and OS are 
not ideal.

> > Database servers such as Oracle or PostgreSQL have transactions log and
> > restore log.
>
> Those logs are effectively duplicates of the data or ways of
> reconstructing the data.  SQLite has a transaction log for the last
> transaction only and only while it is progress.

Can I get a full log of SQL statements, so that I can send it over the network or 
store it on another device? I can get part of this info by using the "authorizer" 
method (a sketch of such a callback follows the listing below):

SQLITE_DELETE sqlite_master {} main {}
SQLITE_DROP_TABLE events {} main {}
SQLITE_DELETE events {} main {}
SQLITE_DELETE sqlite_master {} main {}
SQLITE_READ sqlite_master tbl_name main {}
SQLITE_READ sqlite_master type main {}
SQLITE_UPDATE sqlite_master rootpage main {}
SQLITE_READ sqlite_master rootpage main {}
SQLITE_UPDATE sqlite_master rootpage main {}
SQLITE_READ sqlite_master rootpage main {}
SQLITE_INSERT sqlite_master {} main {}
SQLITE_CREATE_TABLE events {} main {}
SQLITE_UPDATE sqlite_master type main {}
SQLITE_UPDATE sqlite_master name main {}
SQLITE_UPDATE sqlite_master tbl_name main {}
...
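
(A sketch of such a callback, for illustration; only a few of the action codes
are mapped to names here:)

/* Minimal sketch of an authorizer callback that logs actions in the format of
 * the listing above.  Install it with sqlite3_set_authorizer(db, log_auth, 0). */
#include <stdio.h>
#include <sqlite3.h>

static const char *action_name(int code)
{
    switch (code) {
        case SQLITE_INSERT:       return "SQLITE_INSERT";
        case SQLITE_DELETE:       return "SQLITE_DELETE";
        case SQLITE_UPDATE:       return "SQLITE_UPDATE";
        case SQLITE_READ:         return "SQLITE_READ";
        case SQLITE_CREATE_TABLE: return "SQLITE_CREATE_TABLE";
        case SQLITE_DROP_TABLE:   return "SQLITE_DROP_TABLE";
        default:                  return "OTHER";
    }
}

static int log_auth(void *arg, int code, const char *p1, const char *p2,
                    const char *dbname, const char *trigger)
{
    (void)arg;
    fprintf(stderr, "%s %s %s %s %s\n", action_name(code),
            p1 ? p1 : "{}", p2 ? p2 : "{}",
            dbname ? dbname : "{}", trigger ? trigger : "{}");
    return SQLITE_OK;              /* never deny anything, just observe */
}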

Maybe I should reimplement the authorizer function? As it stands it is best suited 
to security, of course.

> > How can I provide restore mechanisms for SQLite database?
>
> You need to sit down and work out how much your data is worth, what you
> want to protect against, how often bad things happen and how much you
> are prepared to pay.  Food for thought:
>
> http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=504
> http://lwn.net/Articles/290141/
> http://www.newscientist.com/blog/technology/2008/03/do-we-need-cosmic-ray-a
>lerts-for.html
> http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Error-free_data_
>transfer

Thanks, I'll read the links.

> You can't have 100% reliability as well as free (hardware and software).
>  You can aim for 99 point some number of nines, but the more nines you
> have the more expensive it gets exponentially.

Yes, but I prefer to think about data integrity ahead of time.

> In the real world, you will find that SQLite is suitably reliable for a
> large number of people and projects, taking
> http://www.sqlite.org/lockingv3.html#how_to_corrupt into account.

I have been using SQLite successfully since 2005, and now I want to know how to use 
SQLite in mission-critical applications.

> If you are more paranoid than that then write a SQLite VFS module that
> effectively does custom file level 'raid'. Duplicate data as many times
> as you want (don't forget to send it over the network as well for other
> machines to check).  You'll also want to check multiple copies on reads
> in case one is corrupt.

Well, I think it may be a good idea.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-25 Thread Derrell Lipman
On Fri, Jul 25, 2008 at 5:23 AM, Alexey Pechnikov <[EMAIL PROTECTED]>
wrote:

> > > Database servers such as Oracle or PostgreSQL have transactions log and
> > > restore log.
> >
> > Those logs are effectively duplicates of the data or ways of
> > reconstructing the data.  SQLite has a transaction log for the last
> > transaction only and only while it is progress.
>
> Can I get full log of sql statements for to sent it other network or store
> to
> outher device?


Maybe sqlite3_trace() or sqlite3_profile() can help with what you're looking
for here.
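
A minimal sketch of the sqlite3_trace() route (the callback gets the SQL text of
each statement as it is run; log file handling is up to you):

/* Minimal sketch: log every SQL statement executed on a connection.  Note the
 * text still contains unexpanded parameter markers for prepared statements. */
#include <stdio.h>
#include <sqlite3.h>

static void trace_sql(void *arg, const char *sql)
{
    FILE *log = (FILE *)arg;
    fprintf(log, "%s;\n", sql);
    fflush(log);                   /* so the log survives an application crash */
}

void install_trace(sqlite3 *db, FILE *log)
{
    sqlite3_trace(db, trace_sql, log);
}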

Derrell
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-25 Thread Alexey Pechnikov
In a message on Friday 25 July 2008 16:32:26, Derrell Lipman wrote:
> > Can I get full log of sql statements for to sent it other network or
> > store to
> > outher device?
>
> Maybe sqlite3_trace() or sqlite3_profile() can help with what you're
> looking for here.

Thanks!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-25 Thread Roger Binns

Alexey Pechnikov wrote:
>> Maybe sqlite3_trace() or sqlite3_profile() can help with what you're
>> looking for here.

Unfortunately sqlite3_trace isn't that useful as it only tells you the
text of the sql statement but not any bound parameters.  The way to get
the bound parameters is to note them in your own functions that prepare
statements and bind them.  You may also find this functionality is
already present in various wrappers.  For example the Python wrappers do
this.
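
A hedged sketch of that idea (the wrapper name is made up): note each value at
the point where it is bound, next to the statement text.

/* Minimal sketch of "note the bound parameters yourself": a made-up wrapper
 * that records the value before delegating to the real sqlite3_bind_text(). */
#include <stdio.h>
#include <sqlite3.h>

int log_bind_text(FILE *log, sqlite3_stmt *stmt, int idx, const char *value)
{
    fprintf(log, "-- bind %d of \"%s\" = %s\n",
            idx, sqlite3_sql(stmt), value ? value : "NULL");
    return sqlite3_bind_text(stmt, idx, value, -1, SQLITE_TRANSIENT);
}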

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-07-26 Thread Alexey Pechnikov
In a message on Saturday 26 July 2008 10:23:57, Roger Binns wrote:
> Alexey Pechnikov wrote:
> >> Maybe sqlite3_trace() or sqlite3_profile() can help with what you're
> >> looking for here.
>
> Unfortunately sqlite3_trace isn't that useful as it only tells you the
> text of the sql statement but not any bound parameters.  The way to get
> the bound parameters is to note them in your own functions that prepare
> statements and bind them.  You may also find this functionality is
> already present in various wrappers.  For example the Python wrappers do
> this.

Do all wrappers (tcl, python etc.) prepare statements and bind them themselves? I want 
to get a log of all SQL queries only from the scripts. In C I'm only writing 
extensions. 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-08-11 Thread Dennis Cote
Alexey Pechnikov wrote:
> 
> Can I get full log of sql statements for to sent it other network or store to 
> outher device? 
> 

Alexey,

You may want to read http://www.sqlite.org/cvstrac/wiki?p=UndoRedo for 
an example of using triggers to generate SQL to modify a database. This 
example is used for undo/redo, but the principles would be the same if 
you want to generate an SQL log of changes that have been made to a 
database.
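
A minimal sketch of the trigger idea (the "events" table and its columns are
hypothetical; the UndoRedo page above is the authoritative reference):

/* Minimal sketch: a trigger that appends the equivalent INSERT statement to a
 * change_log table.  The "events" table and its columns are hypothetical. */
#include <sqlite3.h>

int install_change_log(sqlite3 *db)
{
    const char *ddl =
        "CREATE TABLE IF NOT EXISTS change_log(sql TEXT);"
        "CREATE TRIGGER IF NOT EXISTS events_ins AFTER INSERT ON events BEGIN"
        "  INSERT INTO change_log(sql) VALUES("
        "    'INSERT INTO events(id, payload) VALUES('"
        "    || quote(NEW.id) || ',' || quote(NEW.payload) || ');');"
        "END;";
    return sqlite3_exec(db, ddl, 0, 0, 0);
}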

HTH
Dennis Cote


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database repairing

2008-08-12 Thread Alexey Pechnikov
It's a very difficult decision. I'd prefer to use some callback function
for logging all queries, but I don't know how to implement such a callback
- the trace and profile callbacks can't show the query string with the variable
values.

2008/8/11, Dennis Cote <[EMAIL PROTECTED]>:
> Alexey Pechnikov wrote:
>>
>> Can I get full log of sql statements for to sent it other network or store
>> to
>> outher device?
>>
>
> Alexey,
>
> You may want to read http://www.sqlite.org/cvstrac/wiki?p=UndoRedo for
> an example of using triggers to generate SQL to modify a database. This
> example is used for undo/redo, but the principals would be the same if
> you want to generate an SQL log of changes that have been made to a
> database.
>
> HTH
> Dennis Cote
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database file.

2011-01-06 Thread Max Vlasov
On Wed, Jan 5, 2011 at 10:05 PM, Greg Morehead
wrote:

> I seem to have a corrupted DB which I was able to fix with a vacuum
> command.  Any insights on how this could occur would be greatly appreciated.
>
>
>

Greg, can you post the full or partial (if the full output is big) result of the pragma
integrity_check; command on your corrupted db? I suspect some of it will be
about indexes.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database file.

2011-01-06 Thread Greg Morehead

The results:

integrity_check
"*** in database main ***
Main freelist: 1 of 153283 pages missing from overflow list
starting at 166541
On tree page 190802 cell 90: 2nd reference to page 166691
On tree page 190802 cell 90: Child page depth differs
On page 190802 at right child: 2nd reference to page 166690" 


Greg

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Max Vlasov
Sent: Thursday, January 06, 2011 4:28 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Corrupted database file.

On Wed, Jan 5, 2011 at 10:05 PM, Greg Morehead
wrote:

> I seem to have a corrupted DB which I was able to fix with a vacuum 
> command.  Any insights on how this could occur would be greatly
appreciated.
>
>
>

Greg, can you post full or partial (if full is big) result of pragma
intergrity_check; command on your corrupted db. I suppose there should
be some about indexes.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database file.

2011-01-06 Thread Max Vlasov
On Thu, Jan 6, 2011 at 4:34 PM, Greg Morehead wrote:

>
> The results:
>
> integrity_check
>"*** in database main ***
>Main freelist: 1 of 153283 pages missing from overflow list
> starting at 166541
>On tree page 190802 cell 90: 2nd reference to page 166691
>On tree page 190802 cell 90: Child page depth differs
>On page 190802 at right child: 2nd reference to page 166690"
>
>

I'm not an expert on these errors, but it's quite possible they involve
indexes, which would make your queries return invalid results. If your new db
doesn't return the same or other errors, it appears that the b-tree copying
done during VACUUM somehow fixed them. As for the source of the errors in the
data, there's a good section at
http://www.sqlite.org/lockingv3.html#how_to_corrupt explaining possible
scenarios. You can compare your case against this information and maybe find
the cause.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database file.

2011-01-06 Thread Greg Morehead

The vacuum command did fix these errors.

Thanks for the link.  I've been assuming that this has been a hardware issue, 
but it's very difficult to prove.

Other issues I've seen on this particular computer include "malformed database" 
faults, and even an example of a timestamp that seems to have changed.

Greg

-Original Message-
From: sqlite-users-boun...@sqlite.org on behalf of Max Vlasov
Sent: Thu 1/6/2011 9:55 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Corrupted database file.
 
On Thu, Jan 6, 2011 at 4:34 PM, Greg Morehead wrote:

>
> The results:
>
> integrity_check
>"*** in database main ***
>Main freelist: 1 of 153283 pages missing from overflow list
> starting at 166541
>On tree page 190802 cell 90: 2nd reference to page 166691
>On tree page 190802 cell 90: Child page depth differs
>On page 190802 at right child: 2nd reference to page 166690"
>
>

I'm not the expert on these errors, but it's really possible they could
affect indexes that made your queries return invalid results. If your new db
doesn't return the same or other errors, it appears that b-tree copying
during VACUUM somehow fixed them. As for the source of the errors in the
data, there's a good section at
http://www.sqlite.org/lockingv3.html#how_to_corrupt explaining possible
scenarios. You can compare your case with this information and maybe find
the cause

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Corrupted database file.

2011-01-06 Thread Max Vlasov
On Thu, Jan 6, 2011 at 6:39 PM, Greg Morehead wrote:

>
> The vacuum command did fix these errors.
>
> Thanks for the link.  I've been assuming that this has been a hardware
> issue, but it's very difficult to prove.
>
> Other issues I've seen on this particular computer include "malformed
> database" faults, and even an example of a timestamp that seems to have
> changed.
>
>
Possibly it really is a hardware issue. I'd suggest also running serious memory
diagnostics on this machine. If it's i386, you can use for example the
Microsoft Memory Diagnostic, which is an iso image that can be burned
as a bootable cd-rom.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database file.

2011-01-08 Thread Roger Binns

On 01/05/2011 11:05 AM, Greg Morehead wrote:
> Any insights on how this could occur would be greatly appreciated.

http://www.sqlite.org/lockingv3.html#how_to_corrupt

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread RSmith


On 2014/06/05 13:21, Lasse Jansen wrote:

Hi,

we have a Mac app that uses CoreData which internally uses SQLite. Some of
the queries are not expressible within CoreData, so we send them manually
using the sqlite library that comes with Mac OS X. Now some of our users
have reported that their database file got corrupted and after some
researching I think it's because of multiple copies of SQLite being linked
into the same application as described here:

http://www.sqlite.org/howtocorrupt.html

Even though we link CoreData to our application and CoreData uses sqlite
internally we still have to explicitly link libsqlite as the CoreData
version of sqlite is inaccessible due to the usage of two-level-namespacing.

So I have two questions:
1. Can this be solved without dropping CoreData?
2. If not, is there a workaround that we could use until we replaced
CoreData with something of our own?

I'm thinking of this:
As the problem seems to occur due to calling close() and we only use
libsqlite for read-only access, would just not closing the read-only
database connection prevent the corruption?


Closing the DB is not optional, it is implicit. Calling "Close()" simply closes it
prematurely, but if you do not call close, it will close anyway when the
program/thread terminates.

If I may suggest, instead of trying trickery or dropping some part of the system,
is it not possible to rather update both to the latest release? That way you get to
keep everything with all the functionality and all the goodness, and without any
corruption.

Not sure if you are linking CoreData statically or compiling it as linked code into
your app, and I am not that familiar with CoreData, but either way I am sure you can
get the latest CoreData or, if you compile it, link in the latest SQLite.

Trying to dance around the old version is never a good idea, but it is sometimes
needed where you cannot control the code that accesses it... but you seem not to
have that problem.


Maybe someone else here already did this with Coredata. Anyone?

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread Richard Hipp
On Thu, Jun 5, 2014 at 7:21 AM, Lasse Jansen  wrote:

> Hi,
>
> we have a Mac app that uses CoreData which internally uses SQLite. Some of
> the queries are not expressible within CoreData, so we send them manually
> using the sqlite library that comes with Mac OS X. Now some of our users
> have reported that their database file got corrupted and after some
> researching I think it's because of multiple copies of SQLite being linked
> into the same application as described here:
>
> http://www.sqlite.org/howtocorrupt.html
>
> Even though we link CoreData to our application and CoreData uses sqlite
> internally we still have to explicitly link libsqlite as the CoreData
> version of sqlite is inaccessible due to the usage of
> two-level-namespacing.
>
> So I have two questions:
> 1. Can this be solved without dropping CoreData?
> 2. If not, is there a workaround that we could use until we replaced
> CoreData with something of our own?
>
> I'm thinking of this:
> As the problem seems to occur due to calling close() and we only use
> libsqlite for read-only access, would just not closing the read-only
> database connection prevent the corruption?
>

The problem is more than just close(), unfortunately.  Certainly the fact
that close(open(zFilename)) deletes all locks on any file descriptor for
the same file is a big problem.  But it is not the only problem.
fcntl(F_SETLK) has its own set of similar problems.

Now if you did this:

(1) Open the read-only connection using the brand-new "nolock" option
available in 3.8.5 (to disable the use of fcntl(F_SETLK)).

(2) Keep the read-only connection open forever, or at least until after all
coredata connections are open.

Then it might work.  However, with locking disabled, your read-only
connection might try to read the database file simultaneously with another
process writing it, which would make the read-only connection think that
the database is corrupt.  This would be a difficult thing to clear without
closing and reopening the database connection, unfortunately.
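
For illustration only, a minimal sketch of step (1) through the C API, assuming
SQLite 3.8.5 or later and a made-up database path; "nolock" is a URI query
parameter, so the connection has to be opened with SQLITE_OPEN_URI:

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: open a read-only connection with file locking disabled.
   The path is hypothetical. */
static sqlite3 *open_readonly_nolock(void)
{
    sqlite3 *db = NULL;
    int rc = sqlite3_open_v2(
        "file:/path/to/Store.sqlite?nolock=1",
        &db,
        SQLITE_OPEN_READONLY | SQLITE_OPEN_URI,
        NULL);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return NULL;
    }
    return db;
}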






-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread David Empson
On 5/06/2014, at 11:21 pm, Lasse Jansen  wrote:

> Hi,
> 
> we have a Mac app that uses CoreData which internally uses SQLite. Some of
> the queries are not expressible within CoreData, so we send them manually
> using the sqlite library that comes with Mac OS X. Now some of our users
> have reported that their database file got corrupted and after some
> researching I think it's because of multiple copies of SQLite being linked
> into the same application as described here:
> 
> http://www.sqlite.org/howtocorrupt.html
> 
> Even though we link CoreData to our application and CoreData uses sqlite
> internally we still have to explicitly link libsqlite as the CoreData
> version of sqlite is inaccessible due to the usage of two-level-namespacing.
> 
> So I have two questions:
> 1. Can this be solved without dropping CoreData?
> 2. If not, is there a workaround that we could use until we replaced
> CoreData with something of our own?

One possibility would be to structure your application so that it spawns a 
subprocess (not just another thread); then one process uses CoreData while the 
other uses SQLite directly. Separate processes should avoid the issue with 
other locks in the same process being broken by a close.

Of course that will add more complexity due to needing to do some kind of 
inter-process communication, but it might be a manageable solution while you 
factor out CoreData.

Another idea which might be worth pursuing, but probably not in a reasonable 
timeframe: file a bug report with Apple, requesting that they add a means for 
applications to directly invoke the SQLite instance inside CoreData (with 
sufficient evidence of the problem you are encountering to explain why this 
design flaw in CoreData prevents safe independent use of SQLite), or extend 
CoreData as required so that you don't need to work around it.

> I'm thinking of this:
> As the problem seems to occur due to calling close() and we only use
> libsqlite for read-only access, would just not closing the read-only
> database connection prevent the corruption?

Probably not, because when CoreData closes its connection, your read-only 
connection via the second instance of SQLite will have broken locks from then 
on. If CoreData opens the database again, you could get access collisions and 
read incomplete data, due to your reader not being blocked while a CoreData 
write is in progress.

-- 
David Empson
demp...@emptech.co.nz
Snail mail: P.O. Box 27-103, Wellington 6141, New Zealand

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread Lasse Jansen
Unfortunately CoreData is closed source and is distributed as part of the
Mac OS X SDK so there is no way to update it.


2014-06-05 13:33 GMT+02:00 RSmith :

>
> On 2014/06/05 13:21, Lasse Jansen wrote:
>
>> Hi,
>>
>> we have a Mac app that uses CoreData which internally uses SQLite. Some of
>> the queries are not expressible within CoreData, so we send them manually
>> using the sqlite library that comes with Mac OS X. Now some of our users
>> have reported that their database file got corrupted and after some
>> researching I think it's because of multiple copies of SQLite being linked
>> into the same application as described here:
>>
>> http://www.sqlite.org/howtocorrupt.html
>>
>> Even though we link CoreData to our application and CoreData uses sqlite
>> internally we still have to explicitly link libsqlite as the CoreData
>> version of sqlite is inaccessible due to the usage of
>> two-level-namespacing.
>>
>> So I have two questions:
>> 1. Can this be solved without dropping CoreData?
>> 2. If not, is there a workaround that we could use until we replaced
>> CoreData with something of our own?
>>
>> I'm thinking of this:
>> As the problem seems to occur due to calling close() and we only use
>> libsqlite for read-only access, would just not closing the read-only
>> database connection prevent the corruption?
>>
>
> Closing the DB is not optional, it is implicit. Calling "Close()" simply
> closes it prematurely, but if you do not call close, it will close anyway
> when the program/thread terminates.
>
> If I may suggest, in stead of trying trickery or dropping some part of the
> system, is it not possible to rather update both to the latest release?
> That way you get to keep everything with all the functionality and all the
> goodness and without any corruption.
>
> Not sure if you are linking Coredata statically or compiling it as linked
> code into your app, also I am not that familiar with Coredata, but either
> way I am sure you can get the latest coredata or if you compile it, link in
> the latest SQLite.
>
> Trying to dance around the old version is never a good idea but sometimes
> needed where you cannot control the code that access it... but you seem to
> not have that problem.
>
> Maybe someone else here already did this with Coredata. Anyone?
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread Lasse Jansen
Thanks for the suggestion, I think I'll try this. I probably need to detect
the kind of error and use some retry-mechanism ...


2014-06-05 13:35 GMT+02:00 Richard Hipp :

> On Thu, Jun 5, 2014 at 7:21 AM, Lasse Jansen  wrote:
>
> > Hi,
> >
> > we have a Mac app that uses CoreData which internally uses SQLite. Some
> of
> > the queries are not expressible within CoreData, so we send them manually
> > using the sqlite library that comes with Mac OS X. Now some of our users
> > have reported that their database file got corrupted and after some
> > researching I think it's because of multiple copies of SQLite being
> linked
> > into the same application as described here:
> >
> > http://www.sqlite.org/howtocorrupt.html
> >
> > Even though we link CoreData to our application and CoreData uses sqlite
> > internally we still have to explicitly link libsqlite as the CoreData
> > version of sqlite is inaccessible due to the usage of
> > two-level-namespacing.
> >
> > So I have two questions:
> > 1. Can this be solved without dropping CoreData?
> > 2. If not, is there a workaround that we could use until we replaced
> > CoreData with something of our own?
> >
> > I'm thinking of this:
> > As the problem seems to occur due to calling close() and we only use
> > libsqlite for read-only access, would just not closing the read-only
> > database connection prevent the corruption?
> >
>
> The problem is more than just close(), unfortunately.  Certainly the fact
> that close(open(zFilename)) deletes all locks on any file descriptor for
> the same file is a big problem.  But it is not the only problem.
> fcntl(F_SETLK) has its own set of similar problems.
>
> Now if you did this:
>
> (1) Open the read-only connection using the brand-new "nolock" option
> available in 3.8.5 (to disable the use of fcntl(F_SETLK).
>
> (2) Keep the read-only connection open forever, or at least until after all
> coredata connections are open.
>
> Then it might work.  However, with locking disabled, your read-only
> connection might try to read the database file simultaneously with other
> process writing it, which would make the read-only connection think that
> the database is corrupt.  This would be a difficult thing to clear without
> closing and reopening the database connection, unfortunately.
>
>
>
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread Lasse Jansen
Thanks, spawning another process is a good idea.

Filing a bug with Apple is probably not going to work; they don't encourage
accessing a CoreData-managed database without going through CoreData.


2014-06-05 13:49 GMT+02:00 David Empson :

> On 5/06/2014, at 11:21 pm, Lasse Jansen  wrote:
>
> > Hi,
> >
> > we have a Mac app that uses CoreData which internally uses SQLite. Some
> of
> > the queries are not expressible within CoreData, so we send them manually
> > using the sqlite library that comes with Mac OS X. Now some of our users
> > have reported that their database file got corrupted and after some
> > researching I think it's because of multiple copies of SQLite being
> linked
> > into the same application as described here:
> >
> > http://www.sqlite.org/howtocorrupt.html
> >
> > Even though we link CoreData to our application and CoreData uses sqlite
> > internally we still have to explicitly link libsqlite as the CoreData
> > version of sqlite is inaccessible due to the usage of
> two-level-namespacing.
> >
> > So I have two questions:
> > 1. Can this be solved without dropping CoreData?
> > 2. If not, is there a workaround that we could use until we replaced
> > CoreData with something of our own?
>
> One possibility would be to structure your application so that it spawns a
> subprocess (not just another thread), then one process uses CoreDate while
> the other uses SQLite directly. Separate processes should avoid the issue
> with other locks in the same process being broken by a close.
>
> Of course that will add more complexity due to needing to do some kind of
> inter-process communication, but it might be a manageable solution while
> you factor out CoreData.
>
> Another idea which might be worth pursuing, but probably not in a
> reasonable timeframe: file a bug report with Apple, requesting that they
> add a means for applications to directly invoke the SQLite instance inside
> CoreData (with sufficient evidence of the problem you are encountering to
> explain why this design flaw in CoreData prevents safe independent use of
> SQLite), or extend CoreData as required so that you don't need to work
> around it.
>
> > I'm thinking of this:
> > As the problem seems to occur due to calling close() and we only use
> > libsqlite for read-only access, would just not closing the read-only
> > database connection prevent the corruption?
>
> Probably not, because when CoreData closes its connection, your read-only
> connection via the second instance of SQLite will have broken locks from
> then on. If CoreData opens the database again, you could get access
> collisions and read incomplete data, due to your reader not being blocked
> while a CoreData write is in progress.
>
> --
> David Empson
> demp...@emptech.co.nz
> Snail mail: P.O. Box 27-103, Wellington 6141, New Zealand
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database files

2014-06-05 Thread Simon Slavin
On 5 Jun 2014, at 1:16pm, Lasse Jansen  wrote:

> Thanks, spawning another process is a good idea.

If you're going to continue to use SQLite in your own code on a Mac I encourage 
you to include the amalgamation files (.h and .c) rather than referring to a 
dynamic library.  That way you can fix the version of SQLite your program uses, 
and you don't have to try to figure out which directives SQLite was compiled 
with.

My only tip for working with Core Data databases is that it can only be done 
effectively if your Core Data application isn't running.  In other words make 
sure that the two apps (the one which uses Core Data and the one which uses the 
SQLite API) are not running at the same time.  I do not know of an effective 
way of using both APIs in the same program.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database with Atomic write sector ?

2009-05-28 Thread SuperCommit

Hi,

We have performance issues with SQLite running on our file system (remote FS
using Atomic Write Sector).

In fact we are limited in the number of writes per second allowed by our
file system; let's say we have 10 writes/s.

After a series of benchmarks, the log file (journal) seems to be the
bottleneck: SQLite writes to the log file many times for a simple
transaction (insert), and that consumes most of the writes/s allowed.

One solution is to use the journal only in memory (PRAGMA journal_mode
= MEMORY).
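
As a rough sketch of that approach through the C API (assuming SQLite 3, with a
made-up helper name): the journal_mode pragma returns the resulting mode as a
row, so the application can verify that the change actually took effect.

#include <sqlite3.h>
#include <string.h>

/* Sketch: switch the connection to an in-memory rollback journal and
   confirm the change; the pragma reports the new mode ("memory"). */
static int use_memory_journal(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    int ok = 0;
    if (sqlite3_prepare_v2(db, "PRAGMA journal_mode=MEMORY;", -1, &stmt, NULL) == SQLITE_OK
        && sqlite3_step(stmt) == SQLITE_ROW) {
        const char *mode = (const char *)sqlite3_column_text(stmt, 0);
        ok = (mode != NULL && strcmp(mode, "memory") == 0);
    }
    sqlite3_finalize(stmt);
    return ok;
}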

We can continue to use the rollback system in nominal operation, but on
application crash or power failure the SQLite recovery can't be used, so we
plan to implement our own recovery system based on SQL queries.

And the question is: on application crash or power failure with an atomic
write sector FS, does anybody know whether the SQLite database file will be
corrupted and will need some specific SQLite recovery?

Kind Regards.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database with duplicated primary keys

2010-11-11 Thread Israel Lins Albuquerque
Attached is a corrupted database. 
We use version 3.6.23.1 on WinCE. 

The command: 
pragma integrity_check; 

shows many errors, and

Duplicate pks were found using: 
SELECT u_pkey, count(*) 
FROM tp_gpsdata 
GROUP BY u_pkey 
HAVING count(*) > 1 
; 

This returns only 1 record: 
SELECT * 
FROM tp_gpsdata 
WHERE u_pkey IN (4684, 4879) 
ORDER BY u_pkey 
DESC LIMIT 10; 

This deletes only one record: 
DELETE FROM tp_gpsdata WHERE u_pkey IN (4684, 4879); 

vacuum; 
doesn't work because of the pk constraints. 





I'm sending it because this could be a bug in the OS or in sqlite, and maybe someone can 
see it. 
Thanks for your time! 

-- 
Atenciosamente/Regards, 

Israel Lins Albuquerque 
Desenvolvimento/Development 
Polibrás Brasil Software Ltda. 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database with duplicated primary keys

2010-11-19 Thread Israel Lins Albuquerque
Due to attachment limits, this is part 001 of the file! 

- "Israel Lins Albuquerque"  escreveu: 
> Attached has a database corrupted. 
> We use the version 3.6.23.1 in wince. 
> 
> the command: 
> pragma integrity_check; 
> 
> show many errors and 
> 
> Duplicate pk was founded using: 
> SELECT u_pkey, count(*) 
> FROM tp_gpsdata 
> GROUP BY u_pkey 
> HAVING count(*) > 1 
> ; 
> 
> this returns only 1 record 
> SELECT * 
> FROM tp_gpsdata 
> WHERE u_pkey IN (4684, 4879) 
> ORDER BY u_pkey 
> DESC LIMIT 10; 
> 
> deletes one record only 
> DELETE FROM tp_gpsdata WHERE u_pkey IN (4684, 4879); 
> 
> vacuum; 
> doesn't works because of pk constraints. 
> 
> 
> 
> 
> 
> I'm seeding because that can be a bug in OS or in sqlite and maybe someone 
> can see that, 
> Thanks for your time! 
> 
> -- 
> Atenciosamente/Regards, 
> 
> Israel Lins Albuquerque 
> Desenvolvimento/Development 
> Polibrás Brasil Software Ltda. 
> 
> 
> 
> ___ 
> sqlite-users mailing list 
> sqlite-users@sqlite.org 
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users 
> 

-- 






-- 
Atenciosamente/Regards, 

Israel Lins Albuquerque 
Desenvolvimento/Development 
Polibrás Brasil Software Ltda. 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database with duplicated primary keys

2010-11-19 Thread Israel Lins Albuquerque
Due to attachment limits, this is part 002 of the file! 

- "Israel Lins Albuquerque"  escreveu: 
> Attached has a database corrupted. 
> We use the version 3.6.23.1 in wince. 
> 
> the command: 
> pragma integrity_check; 
> 
> show many errors and 
> 
> Duplicate pk was founded using: 
> SELECT u_pkey, count(*) 
> FROM tp_gpsdata 
> GROUP BY u_pkey 
> HAVING count(*) > 1 
> ; 
> 
> this returns only 1 record 
> SELECT * 
> FROM tp_gpsdata 
> WHERE u_pkey IN (4684, 4879) 
> ORDER BY u_pkey 
> DESC LIMIT 10; 
> 
> deletes one record only 
> DELETE FROM tp_gpsdata WHERE u_pkey IN (4684, 4879); 
> 
> vacuum; 
> doesn't works because of pk constraints. 
> 
> 
> 
> 
> 
> I'm seeding because that can be a bug in OS or in sqlite and maybe someone 
> can see that, 
> Thanks for your time! 
> 
> -- 
> Atenciosamente/Regards, 
> 
> Israel Lins Albuquerque 
> Desenvolvimento/Development 
> Polibrás Brasil Software Ltda. 
> 
> 
> 
> ___ 
> sqlite-users mailing list 
> sqlite-users@sqlite.org 
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users 
> 

-- 






-- 
Atenciosamente/Regards, 

Israel Lins Albuquerque 
Desenvolvimento/Development 
Polibrás Brasil Software Ltda. 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database (image is malformed): always the same size

2009-12-18 Thread Gianandrea Gobbo
I'm using sqlite (2.8) on an embedded product, running a Linux kernel.
I'm sometimes experiencing a database corruption, and listing some 
table contents gets me a "SQL error: database disk image is malformed".
Ok, there can be many reasons for this, but the strange thing that 
puzzles me is that the database file always has the same size: 409 KB 
(418816 bytes).
Normally, the file could grow up to 1.5 MB, starting from 200 KB. This 
makes me think that something may fail while sqlite expands the database 
as it grows across this size.
Any hint or suggestion on what to look for?

Thanks in advance,
Gianandrea.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database (image is malformed): always the same size

2009-12-18 Thread Simon Slavin

On 18 Dec 2009, at 3:27pm, Gianandrea Gobbo wrote:

> I'm using sqlite (2.8) on an embedded product, running a Linux kernel.
> I'm experiencing sometimes a database corruption, and listing some 
> tables contents gets me a "SQL error: database disk image is malformed".
> Ok, there can be many reasons for this, but the strange thing that 
> puzzles me is that the database file has always the same size: 409Kb 
> (418816 bytes).

Interesting.  Did you compile your own version of sqlite?  If so, could you 
show us the directives you used.  If not, tell us which distribution you 
downloaded it from.

Are you using any PRAGMAs, especially those which speed up sqlite3 by telling 
it not to do caching, or locking, or something like that ?  If so, please tell 
us which ones you used.

If you are in control of either of these, please try doing without them: leave 
sqlite3 with its default settings, /even if this would make your product 
unacceptably slow in real life/.  If using all the defaults prevents your file 
corruption, you will know that it's one of your own settings which is causing 
the problem.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database (image is malformed): always the same size

2009-12-18 Thread Max Vlasov
On Fri, Dec 18, 2009 at 6:27 PM, Gianandrea Gobbo  wrote:

> I'm using sqlite (2.8) on an embedded product, running a Linux kernel.
> I'm experiencing sometimes a database corruption, and listing some
> tables contents gets me a "SQL error: database disk image is malformed".
> Ok, there can be many reasons for this, but the strange thing that
> puzzles me is that the database file has always the same size: 409Kb
> (418816 bytes).
>

Gianandrea, I once had the experience of fixing modified code that led to
database disk image malformation. If you can reproduce the bug with a
comparatively small number of steps, I'd recommend running the program with a
PRAGMA integrity_check after almost every step. It costs a little in
terms of performance, but in my case it helped: a report about invalid index
entries appeared well ahead of this particular error, so it helped to narrow it
down to several queries and finally to the invalid code fragment.
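
For example, a minimal sketch of such a check (shown with the sqlite3 C API and
a made-up helper name; the 2.8 API differs): anything other than the single
"ok" row indicates a problem.

#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

/* Sketch: run PRAGMA integrity_check and return 1 only if the result is
   the single row "ok"; print any problem rows it reports instead. */
static int db_is_intact(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    int ok = 1;
    if (sqlite3_prepare_v2(db, "PRAGMA integrity_check;", -1, &stmt, NULL) != SQLITE_OK)
        return 0;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char *msg = (const char *)sqlite3_column_text(stmt, 0);
        if (msg == NULL || strcmp(msg, "ok") != 0) {
            fprintf(stderr, "integrity: %s\n", msg ? msg : "(null)");
            ok = 0;
        }
    }
    sqlite3_finalize(stmt);
    return ok;
}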
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database (image is malformed): always the same size

2009-12-19 Thread Artur Reilin
I am also using 2.8, but I am wondering why your database is 200 kb from the
beginning; my databases are only 3 or 4 kb. But at my host, if I am
using sqlite queries which use $db->, I get the same error. Funny
thing, but the database is not malformed...

>
> On 18 Dec 2009, at 3:27pm, Gianandrea Gobbo wrote:
>
>> I'm using sqlite (2.8) on an embedded product, running a Linux kernel.
>> I'm experiencing sometimes a database corruption, and listing some
>> tables contents gets me a "SQL error: database disk image is malformed".
>> Ok, there can be many reasons for this, but the strange thing that
>> puzzles me is that the database file has always the same size: 409Kb
>> (418816 bytes).
>
> Interesting.  Did you compile your own version of sqlite ?  If so, could
> you show us the directives you used.  If not, tell us which distribution
> you downloaded with.
>
> Are you using any PRAGMAs, especially those which speed up sqlite3 by
> telling it not to do caching, or locking, or something like that ?  If so,
> please tell us which ones you used.
>
> If you are in control of either of these, please try doing without them:
> leave sqlite3 with its default settings, /even if this would make your
> product unacceptably slow in real life/.  If using all the defaults
> prevents your file corruption, you will know that it's one of your own
> settings which is causing the problem.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>


Artur Reilin
sqlite.yuedream.de
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database (image is malformed): always the same size

2009-12-21 Thread Gianandrea Gobbo
Max Vlasov ha scritto:
> On Fri, Dec 18, 2009 at 6:27 PM, Gianandrea Gobbo  wrote:
>
> Gianandrea, I once had an experience with fixing a modified code that led to
> database disk image malformation. If you can reproduce the bug with a
> comparatively few number of steps, I'd recommend running the program with
> checking PRAGMA integrity_check; almost on every step. It costs a little in
> terms of performance, but in my case it helped, a report about invalid Index
> entries was way ahead of this particular error so it helped to narrow it to
> several queries and finally to the invalid code fragment
> ___
>   

Unfortunately I'm not able to reproduce the error right now. The only 
information I get is from the field, from the returned boards. I will 
try to grab more data from the customers.
At this stage, I cannot think of an application bug that leads to this 
kind of corruption, where the database always ends up the same size.
I'll try to reproduce the error, btw. I'll keep you informed.
Thanks again for the help.

g.


-- 


Gianandrea Gobbo
Sviluppo Software Prodotti
go...@solari.it
+39 0432 497 387

Solari di Udine Spa - via Gino Pieri, 29 - 33100 - Udine
Centralino: +39 0432 4971 - Fax +39 0432 480 160
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database (image is malformed): always the same size

2009-12-21 Thread Simon Slavin

On 21 Dec 2009, at 8:00am, Gianandrea Gobbo wrote:

> At this stage, I cannot think of an applicative bug that leads to this 
> kind of corruption, where the database results always in the same size.

The only time I ever saw a number of files all truncated to the same length, 
the fault was in hardware.  I'm not saying that this is the cause of your 
problem, and it was 20 years ago and things have moved on significantly since 
then.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-04 Thread Ryan Schmidt
Hello,

I have a corrupt SQLite database about which I'd appreciate your advice. The 
data is not critical but I'd like to fix it if it's possible and not too 
time-consuming. Even just knowing why the problem occurred or how to prevent it 
in the future would be helpful. If there's something the application should be 
doing differently in its use of the SQLite library to ensure the database 
doesn't get corrupted that would be good to know.

The application that is using the SQLite library (MacPorts) is experiencing 
this error:

sqlite error: library routine called out of sequence (21)

I ran an integrity check on the database, and the output began like this:

$ sqlite3 /opt/local/var/macports/registry/registry.db
SQLite version 3.25.2 2018-09-25 19:08:10
Enter ".help" for usage hints.
sqlite> .load /tmp/macports.sqlext
sqlite> pragma integrity_check;
*** in database main ***
On tree page 76852 cell 303: Rowid 18741471 out of order
On tree page 76852 cell 301: Rowid 18741430 out of order
On tree page 76852 cell 299: Rowid 18741387 out of order
On tree page 76852 cell 296: Rowid 18741324 out of order

Many similar lines follow. The full output is here:

https://trac.macports.org/ticket/57570

This was on macOS Sierra 10.12.6 on a Mac OS Extended (case-sensitive, 
journaled) filesystem.

macports.sqlext provides a custom collation for the version number column.

Some background: I run the MacPorts build farm. MacPorts uses SQLite to store 
its "registry", which keeps track of what ports MacPorts has installed and what 
files each port provides. In the build farm we keep the latest version of each 
port installed, which amounts to about 17,000 ports providing over 3 million 
total files, so the registry gets rather large, about 1.5GB, and operations 
that change the registry take a little time to complete.

Recently, I manually uninstalled a few ports from one of the builders. That 
command would have rewritten the registry to remove the entries for those 
ports. MacPorts also occasionally vacuums the registry, when it determines that 
doing so would be worthwhile, but it doesn't print a message if it does so, so 
it's possible that was happening but I'm not certain. While MacPorts was 
dealing with the registry, the VMware virtual machine the builder runs on froze 
and had to be manually powered off. Upon restarting the VM, registry operations 
began failing with the above error.

I've been running this build farm for two years, currently with 11 different 
builders, and I haven't seen this or any other registry corruption there 
before. The VMware host servers have ECC memory and the builders' virtual disks 
are stored on SSDs.

I've seen the document "How To Corrupt An SQLite Database File". There 
certainly are a lot of possibilities.

Does this particular failure stand out to anyone as an obvious example of a 
particular problem, ideally with a particular solution? If not, I can nuke the 
MacPorts installation and its SQLite registry and start over.

Thanks!

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-04 Thread Simon Slavin
On 5 Dec 2018, at 3:20am, Ryan Schmidt  wrote:

> $ sqlite3 /opt/local/var/macports/registry/registry.db
> SQLite version 3.25.2 2018-09-25 19:08:10
> Enter ".help" for usage hints.
> sqlite> .load /tmp/macports.sqlext
> sqlite> pragma integrity_check;
> *** in database main ***
> On tree page 76852 cell 303: Rowid 18741471 out of order
> On tree page 76852 cell 301: Rowid 18741430 out of order
> On tree page 76852 cell 299: Rowid 18741387 out of order
> On tree page 76852 cell 296: Rowid 18741324 out of order

It is possible that the rows reported as missing are actually deleted rows, and 
that tree page 76852 just hasn't been moved from "part of TABLE ports" to 
"freed pages".

Try using ".dump" to dump that database to a text file.  Can you read it and 
tell whether anything is missing ?  You can either read the SQL commands by 
eye, or use command-line-tool to ".read" the .sql file to create another 
database, and use database tools to interrogate that one.

Does your database have any relations, either explicitly declared using FOREIGN 
KEY or implicit in how your software handles the data ?  Presumably every row 
in TABLE file should be part of a row in TABLE port, or something like that.  
Can you use your understanding of the relation to prove that certain rows are 
missing from certain tables ?  Does it give you any idea how much data is 
missing ?
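
For illustration only, a rough sketch of such a relation check through the C
API, using guessed table and column names (the real schema may differ):

#include <sqlite3.h>
#include <stdio.h>

/* Sketch: count rows in "files" whose id no longer matches any row in
   "ports"; the table and column names are guesses for illustration. */
static void count_orphan_files(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    const char *sql =
        "SELECT count(*) FROM files f "
        "LEFT JOIN ports p ON p.id = f.id "
        "WHERE p.id IS NULL;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK
        && sqlite3_step(stmt) == SQLITE_ROW) {
        printf("orphaned file rows: %d\n", sqlite3_column_int(stmt, 0));
    }
    sqlite3_finalize(stmt);
}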

> the VMware virtual machine the builder runs on froze and had to be manually 
> powered off. Upon restarting the VM, registry operations began failing with 
> the above error.

I'm 95% certain that your VMware software was caching changes made to the 
database file rather than flushing them to disk when the software told it to.  
Under default configuration they all do this, and this does not mean that 
VMWare is worse than any competing product.

In other words, the cause of corruption was that you were using a virtual 
machine and not real hardware, and that the virtual machine violates ACID for 
speed.  It's possible that VMware has settings which correct this problem.  If 
it does, using them will, of course, slow it down.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-04 Thread Ryan Schmidt


On Dec 4, 2018, at 22:42, Simon Slavin wrote:

> On 5 Dec 2018, at 3:20am, Ryan Schmidt wrote:
> 
>> $ sqlite3 /opt/local/var/macports/registry/registry.db
>> SQLite version 3.25.2 2018-09-25 19:08:10
>> Enter ".help" for usage hints.
>> sqlite> .load /tmp/macports.sqlext
>> sqlite> pragma integrity_check;
>> *** in database main ***
>> On tree page 76852 cell 303: Rowid 18741471 out of order
>> On tree page 76852 cell 301: Rowid 18741430 out of order
>> On tree page 76852 cell 299: Rowid 18741387 out of order
>> On tree page 76852 cell 296: Rowid 18741324 out of order
> 
> It is possible that the rows reported as missing are actually deleted rows, 
> and that tree page 76852 just hasn't been moved from "part of TABLE ports" to 
> "freed pages".
> 
> Try using ".dump" to dump that database to a text file.  Can you read it and 
> tell whether anything is missing ?  You can either read the SQL commands by 
> eye, or use command-line-tool to ".read" the .sql file to create another 
> database, and use database tools to interrogate that one.
> 
> Does your database have any relations, either explicitly declared using 
> FOREIGN KEY or implicit in how your software handles the data ?  Presumably 
> every row in TABLE file should be part of a row in TABLE port, or something 
> like that.  Can you use your understanding of the relation to prove that 
> certain rows are missing from certain tables ?  Does it give you any idea how 
> much data is missing ?

Thanks very much for taking the time to respond. I'll look into what you 
suggested.


>> the VMware virtual machine the builder runs on froze and had to be manually 
>> powered off. Upon restarting the VM, registry operations began failing with 
>> the above error.
> 
> I'm 95% certain that your VMware software was caching changes made to the 
> database file rather than flushing them to disk when the software told it to. 
>  Under default configuration they all do this, and this does not mean that 
> VMWare is worse than any competing product.
> 
> In other words, the cause of corruption was that you were using a virtual 
> machine and not real hardware, and that the virtual machine violates ACID for 
> speed.  It's possible that WMware has settings which correct this problem.  
> If it does, using them will, of course, slow it down.

That occurred to me as well. But from what I can tell initially, writes should 
not be cached:

https://kb.vmware.com/s/article/1008542

"VMware ESX acknowledges a write or read to a guest operating system only after 
that write or read is acknowledged by the hardware controller to ESX. 
Applications running inside virtual machines on ESX are afforded the same crash 
consistency guarantees as applications running on physical machines or physical 
disk controllers."


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-04 Thread Simon Slavin
On 5 Dec 2018, at 5:16am, Ryan Schmidt  wrote:

> https://kb.vmware.com/s/article/1008542
> 
> "VMware ESX acknowledges a write or read to a guest operating system only 
> after that write or read is acknowledged by the hardware controller to ESX. 
> Applications running inside virtual machines on ESX are afforded the same 
> crash consistency guarantees as applications running on physical machines or 
> physical disk controllers."

Interesting.  That paragraph is a well-written piece of text explaining the 
opposite of what I thought.  Maybe things have changed in the past decade.

I suppose the lie "Your changes have been made on physical hardware, you can 
now proceed." may be somewhere else in your system (e.g. the hardware 
controller).  Or maybe I'm barking up the wrong tree.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-04 Thread R Smith

On 2018/12/05 5:20 AM, Ryan Schmidt wrote:

Hello,

I have a corrupt SQLite database about which I'd appreciate your advice. The 
data is not critical but I'd like to fix it if it's possible and not too 
time-consuming. Even just knowing why the problem occurred or how to prevent it 
in the future would be helpful. If there's something the application should be 
doing differently in its use of the SQLite library to ensure the database 
doesn't get corrupted that would be good to know.

The application that is using the SQLite library (MacPorts) is experiencing 
this error: [...]


Hi Ryan, the question has two parts:
1 - Can it be fixed?
Very probably - by simply dumping it to text and re-importing it again. It 
may need a scan by eye to make sure there is no seriously broken text, 
but usually index errors like the ones in your error dump will not be too 
complicated and are easily/automatically corrected.

See the .dump command in the sqlite CLI documentation:
https://sqlite.org/cli.html#converting_an_entire_database_to_an_ascii_text_file


2 - How did it happen?
Hard to say, almost certainly a write that somehow didn't make it to the 
physical layer.
Here is a list of why that can happen (which you may have already 
encountered in your research):

https://sqlite.org/howtocorrupt.html


Good luck!
Ryan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-05 Thread Ryan Schmidt


On Dec 4, 2018, at 22:42, Simon Slavin wrote:

> On 5 Dec 2018, at 3:20am, Ryan Schmidt wrote:
> 
>> $ sqlite3 /opt/local/var/macports/registry/registry.db
>> SQLite version 3.25.2 2018-09-25 19:08:10
>> Enter ".help" for usage hints.
>> sqlite> .load /tmp/macports.sqlext
>> sqlite> pragma integrity_check;
>> *** in database main ***
>> On tree page 76852 cell 303: Rowid 18741471 out of order
>> On tree page 76852 cell 301: Rowid 18741430 out of order
>> On tree page 76852 cell 299: Rowid 18741387 out of order
>> On tree page 76852 cell 296: Rowid 18741324 out of order
> 
> It is possible that the rows reported as missing are actually deleted rows, 
> and that tree page 76852 just hasn't been moved from "part of TABLE ports" to 
> "freed pages".
> 
> Try using ".dump" to dump that database to a text file.  Can you read it and 
> tell whether anything is missing ?  You can either read the SQL commands by 
> eye, or use command-line-tool to ".read" the .sql file to create another 
> database, and use database tools to interrogate that one.
> 
> Does your database have any relations, either explicitly declared using 
> FOREIGN KEY or implicit in how your software handles the data ?  Presumably 
> every row in TABLE file should be part of a row in TABLE port, or something 
> like that.  Can you use your understanding of the relation to prove that 
> certain rows are missing from certain tables ?  Does it give you any idea how 
> much data is missing ?

I was able to .dump the data from the corrupt database and .read it into a new 
database (1.4GB, slightly smaller than the original 1.5GB database, which could 
be plausible if the original had not been vacuumed?). pragma integrity_check 
then found no problems in the new database.

The tables do have some very rudimentary FOREIGN KEY relations. The "ports" 
table has an "id" INTEGER primary key, and the "files" table and a couple 
others have an "id" column that references it.

Verifying the correctness of the data just by looking at it seemed infeasible, 
so, feeling adventurous, I put the new database in place and let MacPorts build 
a few things, but problems soon became apparent. The "files" table which 
records the association of files with a port has over 31,000 files associated 
with a port two or three times each -- duplicate entries. MacPorts didn't 
expect this condition to exist and didn't react well to it. This table does not 
have any unique constraints. We probably should have had a unique constraint 
over the combination of id and path to prevent this condition from existing. 
After the corruption first occurred, several automated builds went by, 
activating and deactivating various ports, which would have affected the 
registry, before I noticed the problem and stopped the automated builds. It's 
possible that because MacPorts was not able to access the registry properly, it 
got the wrong idea about what to do, and created those duplicate entries.
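
For what it's worth, a rough sketch of the constraint mentioned above, via the
sqlite3 C API; the index name is made up, and creating it would fail while
duplicate (id, path) pairs still exist.

#include <sqlite3.h>

/* Sketch: enforce one row per (id, path) pair in the files table.
   The index name is hypothetical. */
static int add_files_unique_index(sqlite3 *db)
{
    return sqlite3_exec(db,
        "CREATE UNIQUE INDEX IF NOT EXISTS files_id_path ON files(id, path);",
        NULL, NULL, NULL);
}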

Since I don't know what else may have gone wrong with the contents of the 
registry by this point, it seems safer to erase the MacPorts installation and 
start fresh. This will take a bit longer as every port has to be re-fetched and 
re-installed but at least I'll have confidence in the integrity of the registry.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-05 Thread Simon Slavin
On 5 Dec 2018, at 12:22pm, Ryan Schmidt  wrote:

> Since I don't know what else may have gone wrong with the contents of the 
> registry by this point, it seems safer to erase the MacPorts installation and 
> start fresh. This will take a bit longer as every port has to be re-fetched 
> and re-installed but at least I'll have confidence in the integrity of the 
> registry.

I agree with your diagnosis.  Your software has used the corrupted database and 
made incorrect changes to it because it trusted corrupt information.  
Unfortunate.  Since you do have the ability to build a new dataset from 
scratch, I think that's the best way.

I still have no better idea of the cause of the corruption.  I'd imagine you're 
still considering what you found in "howtocorrupt".

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order

2018-12-05 Thread James K. Lowden
On Wed, 5 Dec 2018 05:21:30 +
Simon Slavin  wrote:

> On 5 Dec 2018, at 5:16am, Ryan Schmidt  wrote:
> 
> > https://kb.vmware.com/s/article/1008542
> > 
> > "VMware ESX acknowledges a write or read to a guest operating
> > system only after that write or read is acknowledged by the
> > hardware controller to ESX. Applications running inside virtual
> > machines on ESX are afforded the same crash consistency guarantees
> > as applications running on physical machines or physical disk
> > controllers."
> 
> Interesting.  That paragraph is a well-written piece of text
> explaining the opposite of what I thought.  Maybe things have changed
> in the past decade.

VMware may well be doing the best it can on unreliable hardware.  I
believe it's common knowledge that consumer-grade hard drives lie when
acknowledging writes: the acknowlegement is sent when the data are
received into the device's write buffer, not after being written to
disk.  It's good for benchmarks.  No one benchmarks data corruptions.  

'Twas ever thus: If you want a reliable database, use a reliable disk.  

--jkl
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users