On 29 Dec 2017, at 4:10am, Rowan Worth wrote:
> do any of your processes open the database file, for any
> reason, without going through sqlite's API?
Just to note that a major offender in this respect is anti-virus software. So
don’t think just of things that might want to read a SQLite database file.
On 28 December 2017 at 02:55, Simon Slavin wrote:
> On 27 Dec 2017, at 6:10pm, Nikhil Deshpande wrote:
>
> >> Can you include a "pragma integrity_check" at startup ?
> >> Can you include a "pragma integrity_check" executed at regular
> intervals ?
> The writer process does "pragma quick_check" on every startup at init.
On 27 Dec 2017, at 6:55pm, Simon Slavin wrote:
> An alternative might be to run "integrity_check" on backup copies which don’t
> show up anything on "quick_check". This could be done without blocking the
> production system. If you never find anything then you know "quick_check" is
all you need.
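Simon's suggestion of running the full check against a copy could be sketched as follows. This is a minimal illustration using Python's stdlib sqlite3 module, not anything from the original poster's system; it also assumes the database file is quiescent (or is a snapshot), since copying a file mid-write is itself a corruption risk:

```python
import os
import shutil
import sqlite3
import tempfile

def check_backup_copy(db_path):
    """Run a full integrity_check on a copy of the database so the
    production file is never locked for the duration of the check."""
    tmpdir = tempfile.mkdtemp()
    try:
        copy_path = os.path.join(tmpdir, "copy.db")
        shutil.copy(db_path, copy_path)  # assumes no writer is active
        con = sqlite3.connect(copy_path)
        try:
            (result,) = con.execute("PRAGMA integrity_check").fetchone()
        finally:
            con.close()
        return result  # "ok" means no corruption was found
    finally:
        shutil.rmtree(tmpdir)
```

If this ever reports a problem that "quick_check" missed on the live file, that tells you the cheaper check alone is not sufficient.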
Thanks Richard for the response!
On 12/21/17 5:07 PM, Richard Hipp wrote:
On 12/21/17, Nikhil Deshpande wrote:
There were no power-off or reboots in near time vicinity when the
corruption was detected.
(1) Might the corruption have been sitting dormant due to some far
away power-off or reboot and was only recently discovered?
On 27 Dec 2017, at 6:10pm, Nikhil Deshpande wrote:
>> Can you include a "pragma integrity_check" at startup ?
>> Can you include a "pragma integrity_check" executed at regular intervals ?
> The writer process does "pragma quick_check" on every startup at init,
> bails out on failure and spawns
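A startup check along those lines might look like the sketch below. This is a hypothetical illustration in Python's stdlib sqlite3, not the actual writer process's code:

```python
import sqlite3
import sys

def startup_quick_check(db_path):
    """Run PRAGMA quick_check at process init and refuse to start
    if the database reports any problem."""
    con = sqlite3.connect(db_path)
    try:
        (result,) = con.execute("PRAGMA quick_check").fetchone()
    finally:
        con.close()
    if result != "ok":
        # This is where the writer would bail out or trigger recovery
        # rather than continue against a corrupt file.
        sys.exit("quick_check failed: %s" % result)
    return result
```

quick_check skips the expensive index-consistency verification that integrity_check does, which is why it is cheap enough to run on every startup.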
On 12/21/17 9:45 PM, Rowan Worth wrote:
Does either process take backups of the DB? If so, how is that implemented?
Thanks Rowan for the response!
Backup is done by a separate process through command:
sqlite3 /path/to/db_file .dump > dump.sql
and not using the sqlite3 backup API.
Thanks,
N
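For comparison, the online backup API that the `.dump` approach bypasses is exposed in Python's stdlib as `Connection.backup()` (Python 3.7+). A minimal sketch, assuming nothing about the original setup beyond file paths:

```python
import sqlite3

def backup_online(src_path, dest_path):
    """Copy a live database with the SQLite online backup API
    (sqlite3_backup_* under the hood), which cooperates with
    concurrent readers and writers instead of requiring an
    exclusive window as a raw file copy would."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        src.backup(dest, pages=100)  # copy in 100-page increments
    finally:
        dest.close()
        src.close()
```

The result is a byte-level-consistent database file, whereas a `.dump` produces SQL text that must be replayed to restore.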
Thanks Simon for the response!
On 12/21/17 5:05 PM, Simon Slavin wrote:
When "pragma integrity_check" detects an error, does "PRAGMA quick_check"
detect it too?
Yes, both pragmas return the same error.
Is your database file really about 65 Meg in size? Just roughly.
Yes:
$ du -sh *
66M applianc
Does either process take backups of the DB? If so, how is that implemented?
-Rowan
On 22 December 2017 at 05:47, Nikhil Deshpande wrote:
> Hi,
>
> We have an application in a Linux VM that's running into
> SQLite DB corruption (after weeks and months of running,
> 4 such instances yet in different VMs).
On 12/21/17, Nikhil Deshpande wrote:
>
> There were no power-off or reboots in near time vicinity when the
> corruption was detected.
(1) Might the corruption have been sitting dormant due to some far
away power-off or reboot and was only recently discovered? How much
do you trust the fsync() system call?
On 21 Dec 2017, at 9:47pm, Nikhil Deshpande wrote:
> We have an application in a Linux VM that's running into
> SQLite DB corruption (after weeks and months of running,
> 4 such instances yet in different VMs).
>
> [snip]
>
> There were no power-off or reboots in near time vicinity when the
> corruption was detected.
I used the SQLite BTree for indexing. If you are doing complex queries, I
think you are much better off with SQL... The btree API is undocumented
(documented somewhat in the source files) and unsupported. If you are gonna
do that, I had recently posted some problems/solutions I encountered using the
btree API.
Thanks Jay, now I can understand.
On 8/24/06, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
> So is there a manual on how I can use the SQLite Btree algorithm on my own?
It's not exposed, so you can't do it easily.
> And if I use the Btree, will it be faster than using SQLite? Because SQLite
> needs to understand the SQL and I think that takes time...
So is there a manual on how I can use the SQLite Btree algorithm on my own?
It's not exposed, so you can't do it easily.
And if I use the Btree, will it be faster than using SQLite? Because SQLite
needs to understand the SQL and I think that takes time... if the query is
complex I doubt it would be significant.
> Does the row_id feature (a unique incremental number) exist at this
> API level as well?
No. The incremental row-id generation happens in the virtual machine. Look
at the code for OP_NewRecno opcode in vdbe.c. It shouldn't be too hard to
replicate this logic though.
Dan.
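A simplified stand-in for that logic can be expressed at the SQL level. This is only an illustration of the idea, not SQLite's actual OP_NewRecno implementation, which additionally falls back to random probing once the maximum rowid has been used:

```python
import sqlite3

def next_rowid(con, table):
    """Allocate the next rowid as max(rowid) + 1, the common case of
    SQLite's incremental rowid generation. The table name is
    interpolated directly, so pass only trusted identifiers."""
    (cur_max,) = con.execute("SELECT max(rowid) FROM %s" % table).fetchone()
    return (cur_max or 0) + 1
```

On an empty table max(rowid) is NULL, hence the `or 0` fallback so the first allocated rowid is 1.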