Had another damaged database report.
This time it is a configuration database which holds application settings.
The file is stored on a local disk, not shared, and only ever accessed by my
application and only by one thread.
The database runs with synchronous=FULL for maximum safety.
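A minimal sketch of that setting, using Python's built-in sqlite3 module for illustration (the application itself is C++); the file path is invented:

```python
import os
import sqlite3
import tempfile

# Hypothetical settings-database path, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "settings.db")
con = sqlite3.connect(path)

# synchronous=FULL makes SQLite fsync at each transaction commit,
# trading write speed for durability against power loss.
con.execute("PRAGMA synchronous=FULL")

# Querying the pragma returns 0=OFF, 1=NORMAL, 2=FULL, 3=EXTRA.
mode = con.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 2
```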
I’m
1. No client-server use; I use MySQL, SQL Server or Oracle for that.
2. The SQLite database is never accessed by more than one process concurrently
in writable mode. In read-only mode, yes. But the reported damage cases were
always single user, one PC.
3. I cannot prevent or disallow users from
As I wrote above, damaged databases are replaced. No user continues working
with a damaged database once it has been identified. The issue here is to
detect this early and avoid it altogether.
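The early-detection step described above can be sketched like this, with Python's built-in sqlite3 standing in for the C++ wrapper:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE settings(key TEXT PRIMARY KEY, value TEXT)")

# quick_check skips some of the index cross-checks and is much faster on
# large files; integrity_check is the thorough variant. Both return the
# single row 'ok' on a healthy database, or problem descriptions otherwise.
result = con.execute("PRAGMA quick_check").fetchone()[0]
print(result)  # ok
healthy = (result == "ok")
```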
One column of one row of one table may get corrupted.
If that's the case then the database can
I estimate that over 90% of the users keep the database on local disks. I can
tell from the log files.
Keeping the SQLite database on a network server really hurts performance.
That’s not what SQLite is designed for, besides all other aspects of network
locking mentioned in various SQLite
- The databases in question are stored on a local hard disk or SSD.
- If a user stores his database on a NAS box or Windows server, it is accessed
directly, via standard Windows file system routines.
- From what I can tell, network-based databases are not more likely to corrupt
than
Okay. First, stop doing VACUUM after this. You're not improving things, and
you may be making things worse.
Not important. If this error is encountered the database is marked and the user
reminded on every open/close to replace it with a backup. The database is not
supposed to be used
My application does not phone home :-/ but I can add output of these functions
to the log file my application maintains. My users know how to collect these
log files and send them to me.
I will also add the error logging callback to my wrapper class and route it to
the log file.
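A sketch of routing the check output into the application's log file; the log path, timestamp format, and helper name are all invented for illustration (the real wrapper is C++):

```python
import datetime
import os
import sqlite3
import tempfile

def log_db_health(con, log_path):
    # Run integrity_check and append every reported line to the app log.
    lines = [row[0] for row in con.execute("PRAGMA integrity_check")]
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(log_path, "a", encoding="utf-8") as log:
        for line in lines:
            log.write(f"{stamp} integrity_check: {line}\n")
    return lines

log_path = os.path.join(tempfile.mkdtemp(), "app.log")  # hypothetical name
con = sqlite3.connect(":memory:")
health = log_db_health(con, log_path)
print(health)  # ['ok'] on a healthy database
```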
This
The diagnosis log of my application reports the output of integrity_check()
already.
I retrieved the log from the most recent error report. This is what my
application has logged:
*** IN DATABASE MAIN ***
ON TREE PAGE 385120 CELL 24: INVALID PAGE NUMBER 151192068
CORRUPTION DETECTED IN
The core code has been in place since about 2008.
I took advantage of changes in SQLite over time, from using the shared cache to
switching to WAL mode for databases which are not opened in read-only mode.
These changes were made between twelve and six months ago and were tested
during beta tests and
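The WAL-versus-read-only split mentioned above might look like this, sketched with Python's sqlite3 (URI open for the read-only case; the path is invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.db")

# Read-write connection: switch the database to WAL. The journal mode is
# persistent, so it sticks for later connections as well.
rw = sqlite3.connect(path)
wal_mode = rw.execute("PRAGMA journal_mode=WAL").fetchone()[0]
rw.execute("CREATE TABLE t(x)")
rw.commit()
rw.close()
print(wal_mode)  # wal

# Read-only connection via a URI: no journal-mode change is attempted.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
count = ro.execute("SELECT count(*) FROM t").fetchone()[0]
ro.close()
print(count)  # 0
```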
Hello,
I have been using SQLite in one of my applications for several years with
great success.
The databases managed with SQLite are between 1 and maybe 10 GB, with about 50
tables or so.
The platform is Windows 7 or higher.
Recently I have been getting an increasing number of error reports about “database
1. I don’t have the damaged databases here so I cannot run the diagnosis
myself. The databases are usually too large to upload or transfer.
2. The SQLite version I currently use is 3.8.8.1 (compiled using the
Amalgamation and Visual Studio 2012).
But since not every user always keeps up to date,
Thanks, Richard
After swapping back to the latest SQLite version and running ANALYZE on the
sample databases, performance is back to the same level as before (maybe even
a bit faster). Very good.
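For anyone following along, the effect of ANALYZE can be sketched as below (Python's sqlite3 for illustration; the table and index names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
con.execute("CREATE INDEX idx_t_a ON t(a)")

# ANALYZE gathers per-index statistics into sqlite_stat1 (and into
# sqlite_stat3/stat4 when those options are compiled in); the query
# planner uses them to pick better plans.
con.execute("ANALYZE")
stats = con.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```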
I will send out a recommendation to my users to run the weekly diagnostics
routine
Hi,
Information provided as requested.
_temptable is a temporary table which contains a list of oids (integer, ~ 10
rows) to consider.
Stats3:
tbl         idx                      neq  nlt  ndlt  sample
stack_elem  idx_rel_stack_elem_soid  4
Hi, Richard
I have prepared a sample database, sample statements and some additional
details and sent it to your email address.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
The sample database was produced by a version of my software which runs the
previous (or an even older) version of SQLite.
My software runs ANALYZE as part of a weekly database maintenance
procedure. But the users can turn this off or delay it for weeks. Various
versions of my software
Hello,
After re-compiling my Windows application (compiled with Visual C++ 2012,
32-bit application) with the latest SQLite version (3.8.6) I noticed severely
degraded performance with certain queries. The new version is 10 or more times
slower than the previous build I used (3.8.4.3).
1.
Unrelated to your question, but, take a look at external content FTS4
table
they dramatically cut down the amount of duplicated data [1])
Thanks for the tip. I'll definitely check that.
Currently I build the contents for FTS from several other tables, combining,
splitting, merging data via SQL
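A sketch of the external-content idea: the full-text index stores no second copy of the text, it reads rows from the ordinary table on demand. FTS5 is used here because Python's bundled SQLite usually enables it; the FTS4 mechanism the post refers to (the content= option) is analogous. Table names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs(id INTEGER PRIMARY KEY, body TEXT)")
# External-content index: no duplicate of 'body' is stored in the FTS table.
con.execute("CREATE VIRTUAL TABLE docs_fts USING fts5("
            "body, content='docs', content_rowid='id')")
con.execute("INSERT INTO docs(body) VALUES "
            "('configuration settings stored here')")
# External-content tables must be told about changes explicitly (in practice
# via triggers); here the index entry is inserted by hand.
con.execute("INSERT INTO docs_fts(rowid, body) SELECT id, body FROM docs")
hits = con.execute(
    "SELECT rowid FROM docs_fts WHERE docs_fts MATCH 'settings'").fetchall()
print(hits)  # [(1,)]
```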
I have a performance effect which I don't quite understand.
Maybe I'm using the wrong settings or something. Sorry for the long post,
but I wanted to include all the info that may be important.
My software is written in C++, runs on Windows 7/8, the SQLite database file
is either on a local SATA
If you want to try running with synchronous=NORMAL, you might try setting
PRAGMA wal_autocheckpoint=10000; (up from the default of 1000), which will
make for dramatically larger WAL files, but also dramatically fewer syncs.
Then the syncs will take just 5 or 6 minutes instead of 4.5 hours.
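A sketch of the suggested tuning, assuming Python's sqlite3 for illustration (file name invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")

# Raising wal_autocheckpoint (default 1000 pages) means checkpoints, and
# hence the syncs they perform, happen far less often, at the cost of a
# larger WAL file between checkpoints.
con.execute("PRAGMA wal_autocheckpoint=10000")
value = con.execute("PRAGMA wal_autocheckpoint").fetchone()[0]
print(value)  # 10000
```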
I have implemented diagnostic routines which allow my users to check the
database for problems.
If something bad happens to a database (which may not be the fault of SQLite
at all, e.g. a network problem or disk problem) I want to detect this as
early as possible in order to inform the user. This
If you have a contrary example, please send me a
copy of the database file via private email.
I take it from your reply that integrity_check should indeed reveal
problems in the database file which cause the 'disk image malformed' return
code, so my logic is OK as it is.
Unfortunately, the
I run the VACUUM command at the end of a diagnosis and cleanup operation on
my database.
I use the SQLite 3 API on Windows 7. Latest version of SQLite.
My database uses the WAL mode.
The database size is 120 MB when I run the sqlite3_exec("VACUUM", ...)
command.
After about 20 seconds of heavy
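In the C API the call above corresponds to sqlite3_exec(db, "VACUUM", ...). A sketch of the same operation from Python's sqlite3, showing the file shrinking after deletions (sizes and names are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "work.db")
# isolation_level=None = autocommit; VACUUM cannot run inside a transaction.
con = sqlite3.connect(path, isolation_level=None)
con.execute("CREATE TABLE blobs(data BLOB)")
con.executemany("INSERT INTO blobs VALUES (?)",
                [(b"x" * 4096,) for _ in range(500)])
con.execute("DELETE FROM blobs")   # pages go to the freelist; file size stays
before = os.path.getsize(path)
con.execute("VACUUM")              # rewrites the file without the free pages
after = os.path.getsize(path)
print(after < before)  # True
```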
Unfortunately this does not help :-(
It may still have something to do with WAL.
Before I run the VACUUM, the WAL file is quite small.
After the VACUUM has completed, it is about 20 MB, about the same size as
the properly compacted database would be.
But when I run a PRAGMA wal_checkpoint; the
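A sketch of forcing the WAL back down after a big write, using wal_checkpoint(TRUNCATE) in recent SQLite versions, with Python's sqlite3; the file name is invented:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# TRUNCATE checkpoints all frames into the main database and then resets
# the -wal file to zero bytes. The result row is (busy, log, checkpointed).
busy, log_frames, ckpt_frames = con.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
wal_size = os.path.getsize(path + "-wal")
print(busy, wal_size)  # 0 0
```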
Great :-)
I guess that PRAGMA temp_store=MEMORY then does not add additional
performance on Windows, so I can safely leave it at DEFAULT or FILE.
This will avoid the excessive memory usage during VACUUM for my use case.
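The pragma in question can be sketched like this (Python's sqlite3 for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# temp_store decides where temporary tables and the scratch structures
# built by VACUUM live: 0=DEFAULT (compile-time choice), 1=FILE, 2=MEMORY.
con.execute("PRAGMA temp_store=FILE")  # keep VACUUM scratch space on disk
ts = con.execute("PRAGMA temp_store").fetchone()[0]
print(ts)  # 1
```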
Thanks.
-- Mario
Hello List
the SQLite databases I use on Windows can become fairly large (several GB).
I just noticed that running a VACUUM on such a large database (where several
of the tables are also really large) can cause excessive memory usage (more
than 2.5 GB RAM at peak).
I tracked this down to using
Thank you for providing this pre-release amalgamation ;-)
I downloaded it immediately and compiled it into my software.
The problem has been ++resolved++ and the performance is at least as good as
with previous versions of SQLite.
It even feels a bit faster, although I only could try it with a
Hi,
I just tried this (sorry for the delay) but apparently I don't have the
right toolset installed.
The makefile requires gawk.exe (which I downloaded from SourceForge), but
now it's complaining about a missing tclsh85...
Since the previous SQLite version works fine I think I'll skip this and
So far I only used the Amalgamation. Looks like the files on your source
control server require me to build SQLite from source or at least run a
tool/script to build the amalgamation.
I'm using Windows and Visual Studio so the standard Linux build tools and
scripting languages are not available.
Hi, thanks.
But I'm not sure that I understand you correctly.
Is this behavior considered something that needs to be looked at by the
SQLite team to restore the original performance, or is this how it is now
and I have to find a work-around for good (e.g. applying your suggestion
with
This is a SQL Script
/*
Application linking to SQLite using the Amalgamation.
Build Tool: C++, Visual Studio 2012, Windows 7 64-Bit
The table schema and the query enclosed below are in use
for over one year and various SQLite versions.
After downloading and compiling SQLite 3.7.16.2, we