Re: [sqlite] page checksums (was Re: Bad db feature request)

2016-06-29 Thread Scott Robison
On Jun 29, 2016 10:14 PM, "Roger Binns"  wrote:
>
> On 29/06/16 19:13, Scott Robison wrote:
> > Given the nature of VFS, it is trivial* for anyone to create a module to
> > provide this very functionality. So you can write it yourself!
> >
> > *Not really trivial, but probably not horribly difficult either.
>
> VFS is one way you can't reasonably do it.  The VFS is handed full size
> pages, so the checksums would have to be stored somewhere other than the
> page.  That leads to a *very* complex implementation.

I'm not suggesting it is an ideal solution. Just that at least one solution
exists. Given the flexibility demonstrated in VFS sample implementations,
it is one possibility.


Re: [sqlite] page checksums (was Re: Bad db feature request)

2016-06-29 Thread Roger Binns
On 29/06/16 19:13, Scott Robison wrote:
> Given the nature of VFS, it is trivial* for anyone to create a module to
> provide this very functionality. So you can write it yourself!
> 
> *Not really trivial, but probably not horribly difficult either.

VFS is one way you can't reasonably do it.  The VFS is handed full size
pages, so the checksums would have to be stored somewhere other than the
page.  That leads to a *very* complex implementation.

The encryption extension does something like defining SQLITE_HAS_CODEC
and then gets to use a small amount of each page to store information
about the encryption of that page.  Checksums would fit very well into a
similar implementation.
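
To illustrate the idea only, here is a rough, untested sketch of what per-page
checksum helpers could look like if a few reserved bytes per page were set
aside the way a codec reserves space.  The checksum itself is a toy; a real
implementation would use a proper CRC.

  #include <stdint.h>
  #include <string.h>

  #define RESERVED 4   /* assumed reserved bytes at the end of every page */

  /* Toy checksum over the usable part of a page. */
  static uint32_t page_cksum(const unsigned char *p, int nUsable){
    uint32_t s = 0;
    int i;
    for(i=0; i<nUsable; i++) s = s*31u + p[i];
    return s;
  }

  /* Stamp the checksum into the reserved tail before the page is written. */
  static void stamp_page(unsigned char *pPage, int pageSize){
    uint32_t s = page_cksum(pPage, pageSize - RESERVED);
    memcpy(&pPage[pageSize - RESERVED], &s, sizeof s);
  }

  /* Verify a page that has just been read back; returns 0 if it matches. */
  static int check_page(const unsigned char *pPage, int pageSize){
    uint32_t want, got = page_cksum(pPage, pageSize - RESERVED);
    memcpy(&want, &pPage[pageSize - RESERVED], sizeof want);
    return want==got ? 0 : 1;
  }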

Roger






Re: [sqlite] page checksums (was Re: Bad db feature request)

2016-06-29 Thread Scott Robison
On Jun 29, 2016 5:08 PM, "Darren Duncan"  wrote:
> I notice that the ticket rejection didn't include any rationale or
explanation, or I didn't find any when I looked.  What was the rationale
for rejecting that ticket?
>
> I believe that SQLite having page checksums would be a good idea whose
time has come.  Even Postgres, from which SQLite takes a lot of influence,
has had that feature for the last 2.5 years.
>
> This should be doable as an optional per-file feature, in the way that other
features such as foreign keys are optional.  If the feature is used, that is a
file format break, so older SQLite versions won't attempt to modify the file;
if a file doesn't use the feature then older SQLite versions will still
work with it.
>
> -- Darren Duncan

Given the nature of VFS, it is trivial* for anyone to create a module to
provide this very functionality. So you can write it yourself!

*Not really trivial, but probably not horribly difficult either.
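
For anyone curious, a rough, untested sketch of the wrapping mechanics only
(the names cksum_vfs/cksumOpen are made up, and the actual checksum logic that
would wrap xRead/xWrite is left as a comment):

  #include "sqlite3.h"

  static sqlite3_vfs *pOrig;        /* the real, underlying VFS */
  static sqlite3_vfs cksum_vfs;     /* our wrapper */

  static int cksumOpen(sqlite3_vfs *pVfs, const char *zName,
                       sqlite3_file *pFile, int flags, int *pOutFlags){
    /* A real shim would open via pOrig and then swap pFile->pMethods for
    ** wrappers whose xWrite stamps and xRead verifies per-page checksums. */
    return pOrig->xOpen(pOrig, zName, pFile, flags, pOutFlags);
  }

  int register_cksum_vfs(void){
    pOrig = sqlite3_vfs_find(0);              /* default VFS */
    if( pOrig==0 ) return SQLITE_ERROR;
    cksum_vfs = *pOrig;                       /* inherit every other method */
    cksum_vfs.zName = "cksum";
    cksum_vfs.xOpen = cksumOpen;
    return sqlite3_vfs_register(&cksum_vfs, 0 /* not the default */);
  }

A database would then be opened against it by passing "cksum" as the zVfs
argument of sqlite3_open_v2().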


[sqlite] page checksums (was Re: Bad db feature request)

2016-06-29 Thread Darren Duncan

On 2016-06-29 8:12 AM, Roger Binns wrote:

On 29/06/16 07:51, Dominique Devienne wrote:

I wish for the day SQLite has page checksums to detect any such random
corruption.


Agreed.  The SQLite team rejected doing so:

   http://www.sqlite.org/src/tktview/72b01a982a84f64d4284


Yes, I know, it's a format change, and will likely slow things down a
little, but it's worth it IMHO.


Note that it isn't as big a change as you think, and could be done
today.  SQLite already allows a portion of each page to be used for
other purposes, with the big user being encryption.


I notice that the ticket rejection didn't include any rationale or explanation, 
or I didn't find any when I looked.  What was the rationale for rejecting that 
ticket?


I believe that SQLite having page checksums would be a good idea whose time has 
come.  Even Postgres, from which SQLite takes a lot of influence, has had that 
feature for the last 2.5 years.


This should be doable as an optional per-file feature, in the way that other 
features such as foreign keys are optional.  If the feature is used, that is a file 
format break, so older SQLite versions won't attempt to modify the file; if a file 
doesn't use the feature then older SQLite versions will still work with it.


-- Darren Duncan



Re: [sqlite] Bad db feature request

2016-06-29 Thread Dominique Devienne
On Wed, Jun 29, 2016 at 9:54 PM, Paul Sanderson <
sandersonforens...@gmail.com> wrote:

> As mentioned above there is (or can be) reserved space at the end of
> each page (documented in the DB header) that can be used for checksums
> - you just need to write your own extension :)
>
> https://www.sqlite.org/fileformat2.html


Can you really? What kind of extension?

The pager is not an extension point.

The VFS is one, but https://www.sqlite.org/c3ref/io_methods.html is byte
oriented, not page oriented.
Plus there are different types of pages, with different formats. Does the
checksum go in the same place for all of them?
Official SQLite would ignore the extra info, and probably either not
rewrite it, or keep it as-is, when modifying a page,
making the page appear invalid. This is precisely the kind of thing that
is of little use unless it is built in, I'm afraid.

Plus w/o a format change, even if built in to the latest SQLite, older
SQLite apps wouldn't keep the checksums up-to-date either.
This and index-based-row-counts and probably other things do require a
backward-incompatible change IMHO. --DD


Re: [sqlite] Bad db feature request

2016-06-29 Thread Paul Sanderson
As mentioned above there is (or can be) reserved space at the end of
each page (documented in the DB header) that can be used for checksums
- you just need to write your own extension :)

https://www.sqlite.org/fileformat2.html
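
For illustration, a rough, untested sketch of where those figures live: the
first 100 bytes of the file are the database header described on that page,
with the page size as a 2-byte big-endian value at offset 16 (the value 1
meaning 65536) and the reserved-bytes-per-page count as the single byte at
offset 20 (normally 0 unless something like an encryption codec has claimed
space).

  #include <stdio.h>

  /* Read the page size and the reserved-bytes-per-page count out of the
  ** 100-byte database header.  Returns 0 on success. */
  int read_reserved(const char *zPath, int *pPageSize, int *pReserved){
    unsigned char hdr[100];
    FILE *f = fopen(zPath, "rb");
    if( f==0 ) return 1;
    if( fread(hdr, 1, sizeof hdr, f)!=sizeof hdr ){ fclose(f); return 1; }
    fclose(f);
    *pPageSize = (hdr[16]<<8) | hdr[17];    /* 2-byte big-endian value */
    if( *pPageSize==1 ) *pPageSize = 65536; /* the value 1 means 65536 */
    *pReserved = hdr[20];                   /* reserved bytes per page */
    return 0;
  }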

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 29 June 2016 at 18:52, Scott Hess  wrote:
> On Wed, Jun 29, 2016 at 10:36 AM, Simon Slavin  wrote:
>> On 29 Jun 2016, at 5:45pm, Drago, William @ CSG - NARDA-MITEQ 
>>  wrote:
>>> Aren't there things like that already built in to the hard disk controllers 
>>> (CRC, Reed Solomon, etc.)?
>>
>> Yes.  But they operate at the level they understand.  For instance ...
>>
>> A change is made in a field which involves changing just one page of data.  
>> In terms of the SQLite file format this would mean that a table page is 
>> overwritten -- a one disk sector change.  If SQLite checksums existed then 
>> this would mean that the checksum, stored in the table pointer page, would 
>> also have to be updated.  Which would mean that another disk sector has to 
>> be changed too.
>>
>> Now suppose there's a bug in the storage medium driver which means it 
>> occasionally writes the correct data to the wrong sector on disk.  Without 
>> checksums this fault would not be noticed: since the wrong sector on disk 
>> was updated the wrong checksum on disk would be updated too.  The errors 
>> would match.
>
> I think the bigger problem is that delegating this means that you
> assume the entire underlying stack is working correctly.  For
> instance, the disk may have elaborate error-correction protocols that
> are working correctly per sector, but SQLite's pages may span sectors.
> Or the underlying disk may be perfect and the filesystem doesn't
> provide the same guarantees.  Or someone is running things over NFS.
> Having the page checksum embedded in the page at the SQLite level
> would provide end-to-end confidence.
>
> Chaining the checksums is a whole different level of assurance.  To
> the best of my knowledge _all_ legitimately (1) corrupted databases
> I've seen had pages which were individually valid, but not valid when
> taken together.  Like an index page referred to a row which wasn't
> present in the table page.  This implies that the atomicity guarantees
> SQLite relies on were broken at the filesystem or disk level.
>
> -scott
>
> (1) I consider a system where the filesystem is simply broken to not
> be legitimate corruption.  For instance, if you get a page of random bytes
> which doesn't appear to have ever been SQLite data in the first
> place.  There's not much SQLite can do about that kind of thing.


Re: [sqlite] Bad db feature request

2016-06-29 Thread Scott Hess
On Wed, Jun 29, 2016 at 10:36 AM, Simon Slavin  wrote:
> On 29 Jun 2016, at 5:45pm, Drago, William @ CSG - NARDA-MITEQ 
>  wrote:
>> Aren't there things like that already built in to the hard disk controllers 
>> (CRC, Reed Solomon, etc.)?
>
> Yes.  But they operate at the level they understand.  For instance ...
>
> A change is made in a field which involves changing just one page of data.  
> In terms of the SQLite file format this would mean that a table page is 
> overwritten -- a one disk sector change.  If SQLite checksums existed then 
> this would mean that the checksum, stored in the table pointer page, would 
> also have to be updated.  Which would mean that another disk sector has to be 
> changed too.
>
> Now suppose there's a bug in the storage medium driver which means it 
> occasionally writes the correct data to the wrong sector on disk.  Without 
> checksums this fault would not be noticed: since the wrong sector on disk was 
> updated the wrong checksum on disk would be updated too.  The errors would 
> match.

I think the bigger problem is that delegating this means that you
assume the entire underlying stack is working correctly.  For
instance, the disk may have elaborate error-correction protocols that
are working correctly per sector, but SQLite's pages may span sectors.
Or the underlying disk may be perfect and the filesystem doesn't
provide the same guarantees.  Or someone is running things over NFS.
Having the page checksum embedded in the page at the SQLite level
would provide end-to-end confidence.

Chaining the checksums is a whole different level of assurance.  To
the best of my knowledge _all_ legitimately (1) corrupted databases
I've seen had pages which were individually valid, but not valid when
taken together.  Like an index page referred to a row which wasn't
present in the table page.  This implies that the atomicity guarantees
SQLite relies on were broken at the filesystem or disk level.

-scott

(1) I consider a system where the filesystem is simply broken to not
be legitimate corruption.  For instance, if you get a page of random bytes
which doesn't appear to have ever been SQLite data in the first
place.  There's not much SQLite can do about that kind of thing.


Re: [sqlite] update or replace ...

2016-06-29 Thread Simon Slavin

On 29 Jun 2016, at 5:56pm, Olivier Mascia  wrote:

> What's the expected behavior of statement "update or replace ..."? 
> (http://sqlite.org/lang_update.html)
> 
> create table T(K integer primary key, V text);
> update or replace T set V='data' where K=1;
> 
> Clearly it does nothing.
> Does this fit the intended behavior?
> Would that mean 'or replace' in the context of an update statement is a void 
> operation?

For the UPDATE OR REPLACE command, the REPLACE part comes into play only if an 
UPDATE would cause some sort of conflict.  Since the UPDATE command does 
nothing in this case, there will be no conflict, so there will be no REPLACE.

This command will return SQLITE_OK.  It is an entirely legal command which does 
nothing in this situation.

Under different circumstances (i.e. if the table already held data) the same 
command might result in a REPLACE.
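
A small, untested sketch of that second case, where the UPDATE does hit a
conflict and the REPLACE kicks in (in-memory database, C API, error checking
mostly omitted):

  #include <stdio.h>
  #include "sqlite3.h"

  static int print_row(void *unused, int nCol, char **azVal, char **azCol){
    printf("%s | %s\n", azVal[0], azVal[1]);
    return 0;
  }

  int main(void){
    sqlite3 *db;
    char *zErr = 0;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
      "CREATE TABLE T(K INTEGER PRIMARY KEY, V TEXT);"
      "INSERT INTO T VALUES(1,'one'),(2,'two');"
      "UPDATE OR REPLACE T SET K=1 WHERE K=2;",  /* conflicts with row 1 */
      0, 0, &zErr);
    if( zErr ){ fprintf(stderr, "%s\n", zErr); sqlite3_free(zErr); }
    /* Row 1 should have been replaced: expect a single row "1 | two". */
    sqlite3_exec(db, "SELECT K, V FROM T;", print_row, 0, 0);
    sqlite3_close(db);
    return 0;
  }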

Simon.


Re: [sqlite] Bad db feature request

2016-06-29 Thread R Smith



On 2016/06/29 4:28 PM, Simon Slavin wrote:

On 29 Jun 2016, at 10:17am, R Smith  wrote:


I know I can write nonsense over the file, but I was hoping to be able to cause 
specific common corruptions, like Invalid-Schema, Missing Index entries, 
Missing pages etc. with this command. It need not be fancy, just find the first 
table that can be corrupted in the DB, and mess with it enough to fail an 
integrity check, but still work if possible.

Trouble is that whatever checking you develop might spot precisely the fault 
that the simulator introduces but not a slightly different one.  For instance 
if there's something special about the last row in a page it might spot all 
missing rows in the index except if the missing row is the last one on the page.


Of course, but that's a side wish for me; the main aim here is to 
enable the systems we make to be tested for reporting and surviving 
corrupted DBs across different styles of corruption. One that can 
be invoked repeatedly in a test suite, even.


Agreed though, this will never be enough for developing recovery 
mechanisms, though they, if working correctly, should at a minimum 
report and deal with these basic corruptions too.



I prefer your idea of opening the file and randomly overwriting a few bytes.  
Of course some of them will just be values in unindexed fields, so there would 
be no damage SQLite could notice.


Yes quite, but I was actually making this exact thing when I realised 
it's quite useless - I'm testing something that will almost NEVER happen. 
In the real world, it is far more likely to find an Index missing some 
entries, or a Schema being unreadable, or a page count being wrong, etc., 
than random data being zapped into your DB file by virtue of neutrinos 
from Sol (or a stray cataclysm).


It's like when car manufacturers test the integrity of their roll-cages: they 
don't drop airplanes on the roof of the car (though it's not 100% 
inconceivable that this might occur in real life); rather, they ram it 
head-first into some barrier - exactly the most common type of accident 
out there. This is what I'm trying to achieve with the request.





Re: [sqlite] Bad db feature request

2016-06-29 Thread Simon Slavin

On 29 Jun 2016, at 5:45pm, Drago, William @ CSG - NARDA-MITEQ 
 wrote:

> Aren't there things like that already built in to the hard disk controllers 
> (CRC, Reed Solomon, etc.)?

Yes.  But they operate at the level they understand.  For instance ...

A change is made in a field which involves changing just one page of data.  In 
terms of the SQLite file format this would mean that a table page is 
overwritten -- a one disk sector change.  If SQLite checksums existed then this 
would mean that the checksum, stored in the table pointer page, would also have 
to be updated.  Which would mean that another disk sector has to be changed too.

Now suppose there's a bug in the storage medium driver which means it 
occasionally writes the correct data to the wrong sector on disk.  Without 
checksums this fault would not be noticed: since the wrong sector on disk was 
updated the wrong checksum on disk would be updated too.  The errors would 
match.

Simon.


Re: [sqlite] Bad db feature request

2016-06-29 Thread Scott Hess
On Wed, Jun 29, 2016 at 2:17 AM, R Smith  wrote:
> In response to a recent forum post and many other posts, where SQLite
> corrupt files or Index integrity was the problem at hand, I was wondering if
> we could ask for an API function that would corrupt a DB for us.

I have done some things like this in Chromium's sql/ wrapper.  There
are helpers at:
   https://cs.chromium.org/chromium/src/sql/test/test_helpers.h

sql::test::CorruptSizeInHeader() is a helper function which uses
mostly libc operations to modify the page count in the header, which
results in SQLITE_CORRUPT for most operations.
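
For anyone wanting roughly the same effect without the Chromium scaffolding,
here is a libc-only, untested sketch of that idea (not the actual helper).
Per the file-format documentation the 4-byte big-endian "database size in
pages" field sits at offset 28 of the header; note SQLite only honours that
field when the surrounding change-counter fields are consistent, so depending
on the version you may need to corrupt more than this one field.

  #include <stdio.h>

  /* Overwrite the "database size in pages" header field with an absurd
  ** value.  Returns 0 if the write succeeded. */
  int corrupt_size_in_header(const char *zPath){
    unsigned char big[4] = {0x7f, 0xff, 0xff, 0xff};
    FILE *f = fopen(zPath, "r+b");
    if( f==0 ) return 1;
    if( fseek(f, 28, SEEK_SET)!=0 || fwrite(big, 1, 4, f)!=4 ){
      fclose(f);
      return 1;
    }
    fclose(f);
    return 0;
  }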

In the same file, sql::test::CorruptTableOrIndex() is a helper which
snapshots the root page of a table or index, runs a given update
query, then restores the snapshot.  This can be used to build
mis-matched indices or tables.
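
And a rough, untested sketch of that second idea in plain C, assuming a
rollback-journal database that is cleanly closed between steps (the root page
number comes from sqlite_master; most error handling trimmed):

  #include <stdio.h>
  #include <stdlib.h>
  #include "sqlite3.h"

  /* zName is a table or index name, zSql a statement that modifies it.
  ** The object's root page is snapshotted, the statement is run, and the
  ** stale snapshot is written back, so the object no longer agrees with
  ** the rest of the file. */
  int corrupt_by_stale_root(const char *zPath, const char *zName,
                            const char *zSql){
    sqlite3 *db;
    sqlite3_stmt *pStmt = 0;
    unsigned char hdr[100], *pPage;
    long pgno = 0, pgsz;
    FILE *f;

    /* 1. Look up the object's root page number in sqlite_master. */
    if( sqlite3_open(zPath, &db)!=SQLITE_OK ) return 1;
    sqlite3_prepare_v2(db,
        "SELECT rootpage FROM sqlite_master WHERE name=?1", -1, &pStmt, 0);
    sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_STATIC);
    if( sqlite3_step(pStmt)==SQLITE_ROW ) pgno = sqlite3_column_int(pStmt, 0);
    sqlite3_finalize(pStmt);
    if( pgno==0 ){ sqlite3_close(db); return 1; }

    /* 2. Snapshot that page with plain file I/O. */
    f = fopen(zPath, "rb");
    fread(hdr, 1, sizeof hdr, f);
    pgsz = (hdr[16]<<8) | hdr[17];
    if( pgsz==1 ) pgsz = 65536;
    pPage = malloc(pgsz);
    fseek(f, (pgno-1)*pgsz, SEEK_SET);
    fread(pPage, 1, pgsz, f);
    fclose(f);

    /* 3. Run the update through SQLite, then close so it is flushed. */
    sqlite3_exec(db, zSql, 0, 0, 0);
    sqlite3_close(db);

    /* 4. Write the stale snapshot back over the now-updated root page. */
    f = fopen(zPath, "r+b");
    fseek(f, (pgno-1)*pgsz, SEEK_SET);
    fwrite(pPage, 1, pgsz, f);
    fclose(f);
    free(pPage);
    return 0;
  }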

These have example use in:
   https://cs.chromium.org/chromium/src/sql/connection_unittest.cc
   https://cs.chromium.org/chromium/src/sql/recovery_unittest.cc
Of course, none of this would be directly useful, since it's written
in Chromium C++.  But it would be straightforward to write them using
libc and the SQLite C API, or your appropriate wrapper.

Unfortunately, one thing I found in this is that it's kind of hard to
inject real corruption reliably.  You have to figure out ways to be
wrong enough to be noticeable, and right enough that SQLite can make
progress to see the corruption, and then you have to make sure SQLite
is running through the code paths which will actually expose the
corruption.

It might be cleaner to just have a way to do error injection.  Like
maybe something like the authorizer interface, but where it could
detect the table row/column being touched and introduce an error.  Or
maybe you could write a vtable which proxies a real table and
introduces errors.
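
A minimal, untested sketch of the authorizer variant (the stock authorizer
fires at prepare time rather than per row, so this only approximates the
row/column-level injection described above; the helper names are made up):

  #include <string.h>
  #include "sqlite3.h"

  /* Deny anything that reads the named table, so statements touching it
  ** fail to prepare, simulating an error without touching the file. */
  static int deny_table(void *pArg, int op, const char *z1, const char *z2,
                        const char *z3, const char *z4){
    const char *zBadTable = (const char*)pArg;
    if( op==SQLITE_READ && z1 && strcmp(z1, zBadTable)==0 ){
      return SQLITE_DENY;
    }
    return SQLITE_OK;
  }

  int install_error_injection(sqlite3 *db, const char *zTable){
    return sqlite3_set_authorizer(db, deny_table, (void*)zTable);
  }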

-scott


Re: [sqlite] update or replace ...

2016-06-29 Thread R Smith



On 2016/06/29 6:56 PM, Olivier Mascia wrote:

Dear all,

What's the expected behavior of statement "update or replace ..."? 
(http://sqlite.org/lang_update.html)

create table T(K integer primary key, V text);
update or replace T set V='data' where K=1;

Clearly it does nothing.
Does this fit the intended behavior?
Would that mean 'or replace' in the context of an update statement is a void 
operation?


...OR REPLACE here indicates a conflict-resolution algorithm. Anything 
after the OR in an UPDATE OR xxx statement specifies the conflict 
resolution algorithm - i.e. what must be done if this change causes a 
constraint to fail?


In all the next examples, assume two rows with Primary key IDs 1 and 2 
already exist in the Database:


UPDATE OR FAIL <table> SET ID = 1 WHERE ID = 2;
 - indicates that execution should stop immediately and produce a 
constraint-failed error without touching any transaction mechanics.


UPDATE OR ROLLBACK <table> SET ID = 1 WHERE ID = 2;
 - indicates that execution should stop immediately and roll back 
the entire active transaction.


UPDATE OR ABORT <table> SET ID = 1 WHERE ID = 2;
 - indicates that execution should stop immediately and roll back 
the current statement, but leave prior changes in the transaction as is.


UPDATE OR IGNORE <table> SET ID = 1 WHERE ID = 2;
 - indicates that the update should be skipped. It's the "Do 
nothing" check.


UPDATE OR REPLACE <table> SET ID = 1 WHERE ID = 2;
 - indicates that the row with ID 1, which already exists, must be 
deleted and then this row's ID should be set to 1. Dangerous!



Hope that clears it up,
Ryan



Re: [sqlite] Bad db feature request

2016-06-29 Thread Roger Binns
On 29/06/16 09:45, Drago, William @ CSG - NARDA-MITEQ wrote:
> Aren't there things like that [checksums] already built in to the hard disk 
> controllers (CRC, Reed Solomon, etc.)?

They are at a different level and can only detect issues in what they
see.  For example SQLite can create a page of data, and then hand it off
to the C library which then hands it off to the kernel which then hands
it off to various filesystem drivers which then hand it off to various
block devices which then hand it off over a bus of some sort to the
storage.  If corruption happens at any point before getting to the
storage then the corrupted version is going to be considered correct by
the storage.

Having checksums at the SQLite level means that SQLite can itself verify
that what it wrote (and went through any number of other layers) is what
it gets back.  Short of extremely robust C libraries, operating systems,
drivers, and hardware, SQLite is the sensible place to add checksums.
The "Lite" bit guarantees that SQLite is not run on robust everything,
but usually on less reliable components.  That is why I am somewhat
disappointed the SQLite team doesn't see value in implementing the request.

Roger






Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread R Smith



On 2016/06/29 4:53 PM, Joe Pasquariello wrote:

On 6/29/2016 6:13 AM, R Smith wrote:
Very happy this is sorted for you. Note that Synchronous=OFF should 
only ever be used on a read-only DB (or at least one where writing 
only happens when you, the user/DB Admin, actively make changes via 
queries). For anything that should run autonomously, Synchronous 
should preferably be FULL and at least be ON.


There is a time-penalty to be paid for Synchronous=FULL, but if it 
isn't part of a user interface and there is no user to notice that a 
specific thread is lagging a second or two when writing, then it rarely matters. 


Okay, thanks. I thought it was generally preferable to use "lower" 
levels of synchronous. I am not using WAL, so I'll use FULL rather 
than NORMAL. Seems like I lucked out in that the table was corrupted, but 
recoverable.


Synchronous (from Greek, meaning more or less: At-Same-Time or In-Step) 
in very short computer terms means something like "Wait for the OS to do 
the actual writes before accepting that they are written", so the more 
Synchronous you do things, the more safe-ish it is - with the downside 
of a bit of extra time taken by waiting for the OS to commit the 
writes[1].


Is there a simple explanation as to why your query to find duplicates 
worked correctly, despite the corrupted index, whereas the one I had 
used did not? Does it mean that your query doesn't use the index?


It's simple really - there are 2 possible reasons:
1 - I did post another query before to demonstrate how grouping doesn't 
group together values that differ in case (if you haven't added NOCASE 
collations), so two such rows won't look like duplicates under a GROUP BY 
but can still make a UNIQUE constraint fail. E.g. Smith and SMITH will be 
in 2 different groupings in the query on the original table (if it did 
not have NOCASE specified), but will fail a UNIQUE constraint in 
the new table where you definitely specified NOCASE. (This is the only 
explanation if the DB was healthy, but that wasn't likely in this case - 
see the short sketch after this explanation.)


2 - More importantly, my Query specifically checked the rowid, not just 
the uniqueness of the groupings, which means the Query-Planner had to 
consult/walk the rowid index and so found those rows which were not 
showing up when using the other, normal indices you had (like the one 
that's broken and causing the integrity check to fail).


These are the reasons I suggested the Query - but please note there is 
nothing technically wrong with the query you found on StackOverflow - it 
just assumes correct case, encoding and valid indices - basically a 
valid working DB.

We did not assume such. That's the only difference.
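
The sketch referred to in point 1, for anyone who wants to see the collation 
effect in isolation - a small, untested C program (table and column names are 
made up):

  #include <stdio.h>
  #include "sqlite3.h"

  static int show(void *unused, int nCol, char **azVal, char **azCol){
    printf("%s  count=%s\n", azVal[0], azVal[1]);
    return 0;
  }

  int main(void){
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
      "CREATE TABLE t(name TEXT);"
      "INSERT INTO t VALUES('Smith'),('SMITH');", 0, 0, 0);
    puts("default (BINARY) collation - two groups:");
    sqlite3_exec(db, "SELECT name, count(*) FROM t GROUP BY name;",
                 show, 0, 0);
    puts("with NOCASE - one group of two:");
    sqlite3_exec(db,
      "SELECT name, count(*) FROM t GROUP BY name COLLATE NOCASE;",
      show, 0, 0);
    sqlite3_close(db);
    return 0;
  }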

Hope that makes it clear!
Ryan


[1] : I say safe-ish because there's a whole other world in which hard 
drives lie to the OS about what they have committed to platters and 
such, but it's a long story, varies from manufacturer to manufacturer, 
and in general doesn't need to be accounted for; if you use 
Synchronous=FULL, you should be fine barring only the worst of 
coincidences.  There have been some posts on this forum discussing it if 
you are more interested and feel like searching.




[sqlite] update or replace ...

2016-06-29 Thread Olivier Mascia
Dear all,

What's the expected behavior of statement "update or replace ..."? 
(http://sqlite.org/lang_update.html)

create table T(K integer primary key, V text);
update or replace T set V='data' where K=1;

Clearly it does nothing.
Does this fit the intended behavior?
Would that mean 'or replace' in the context of an update statement is a void 
operation?

-- 
Meilleures salutations, Met vriendelijke groeten, Best Regards,
Olivier Mascia, integral.be/om




Re: [sqlite] Bad db feature request

2016-06-29 Thread Drago, William @ CSG - NARDA-MITEQ
> boun...@mailinglists.sqlite.org] On Behalf Of Dominique Devienne
> Sent: Wednesday, June 29, 2016 10:52 AM
> To: SQLite mailing list
> Subject: Re: [sqlite] Bad db feature request
>
> On Wed, Jun 29, 2016 at 4:28 PM, Simon Slavin 
> wrote:
>
> I wish for the day SQLite has page checksums to detect any such random
> corruption.
> Yes, I know, it's a format change, and will likely slow things down a little, 
> but
> it's worth it IMHO.

Aren't there things like that already built in to the hard disk controllers 
(CRC, Reed Solomon, etc.)?

--
Bill Drago
Staff Engineer
L3 Narda-MITEQ
435 Moreland Road
Hauppauge, NY 11788
631-272-5947 / william.dr...@l-3com.com



Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread Simon Slavin

On 29 Jun 2016, at 3:53pm, Joe Pasquariello  wrote:

> Okay, thanks. I thought it was generally preferable to use "lower" levels of 
> synchronous.

No.  Other way around.

Two principles to guide you through the use of PRAGMAs:

A) The default settings are meant to be safe, probably safest.
B) If it makes something faster, it's dangerous in some way.  Otherwise it 
would be the default.  See (A).

So start off by leaving all PRAGMAs alone.  Only if you have performance 
problems should you think about changing them.  There are a couple of exceptions (e.g. 
journal_mode) but that's the way to bet.

Simon.


Re: [sqlite] Bad db feature request

2016-06-29 Thread Roger Binns
On 29/06/16 07:51, Dominique Devienne wrote:
> I wish for the day SQLite has page checksums to detect any such random
> corruption.

Agreed.  The SQLite team rejected doing so:

  http://www.sqlite.org/src/tktview/72b01a982a84f64d4284

> Yes, I know, it's a format change, and will likely slow things down a
> little, but it's worth it IMHO.

Note that it isn't as big a change as you think, and could be done
today.  SQLite already allows a portion of each page to be used for
other purposes, with the big user being encryption.

Roger





Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread Joe Pasquariello

On 6/29/2016 6:13 AM, R Smith wrote:
Very happy this is sorted for you. Note that Synchronous=OFF should 
only ever be used on a read-only DB (or at least one where writing 
only happens when you, the user/DB Admin, actively make changes via 
queries). For anything that should run autonomously, Synchronous should 
preferably be FULL and at least be ON.


There is a time-penalty to be paid for Synchronous=FULL, but if it 
isn't part of a user interface and there is no user to notice that a 
specific thread is lagging a second or two when writing, then it rarely matters. 


Okay, thanks. I thought it was generally preferable to use "lower" 
levels of synchronous. I am not using WAL, so I'll use FULL rather than 
NORMAL. Seems like I lucked out in that the table was corrupted, but 
recoverable. Is there a simple explanation as to why your query to find 
duplicates worked correctly, despite the corrupted index, whereas the 
one I had used did not? Does it mean that your query doesn't use the index?


Joe


Re: [sqlite] Bad db feature request

2016-06-29 Thread Dominique Devienne
On Wed, Jun 29, 2016 at 4:28 PM, Simon Slavin  wrote:

> I prefer your idea of opening the file and randomly overwriting a few
> bytes.  Of course some of them will just be values in unindexed fields, so
> there would be no damage SQLite could notice.


I wish for the day SQLite has page checksums to detect any such random
corruption.
Yes, I know, it's a format change, and will likely slow things down a
little, but it's worth it IMHO.
SQLite4, as you used to say Simon... --DD


Re: [sqlite] Bad db feature request

2016-06-29 Thread Simon Slavin

On 29 Jun 2016, at 10:17am, R Smith  wrote:

> I know I can write nonsense over the file, but I was hoping to be able to 
> cause specific common corruptions, like Invalid-Schema, Missing Index 
> entries, Missing pages etc. with this command. It need not be fancy, just 
> find the first table that can be corrupted in the DB, and mess with it enough 
> to fail an integrity check, but still work if possible.

Trouble is that whatever checking you develop might spot precisely the fault 
that the simulator introduces but not a slightly different one.  For instance 
if there's something special about the last row in a page it might spot all 
missing rows in the index except if the missing row is the last one on the page.

I prefer your idea of opening the file and randomly overwriting a few bytes.  
Of course some of them will just be values in unindexed fields, so there would 
be no damage SQLite could notice.

Simon.


Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread R Smith



On 2016/06/29 2:48 PM, Joe Pasquariello wrote:


starting from the original, corrupted DB...

integrity_check
row 18029 missing from index sqlite_autoindex_EventLog_1
row 18030 missing from index sqlite_autoindex_EventLog_1
row 18031 missing from index sqlite_autoindex_EventLog_1
row 18032 missing from index sqlite_autoindex_EventLog_1
row 18033 missing from index sqlite_autoindex_EventLog_1
row 18034 missing from index sqlite_autoindex_EventLog_1
row 18035 missing from index sqlite_autoindex_EventLog_1
wrong # of entries in index sqlite_autoindex_EventLog_1

delete from eventlog where rowid >=18029 and rowid <= 18035
(checked integrity again here, and got same result as above)

reindex
integrity_check
ok

vacuum
integrity_check
ok

Query for duplicates returns no records.

Thanks very much, Ryan and Simon. My application monitors wind 
turbines, and there are somewhat frequent power issues at the sites. 
At one particular site, I had frequent corruption on server crashes, 
but none since changing SYNCHRONOUS from OFF to NORMAL. This 
particular site still had SYNCHRONOUS OFF, but will now have NORMAL.


Very happy this is sorted for you. Note that Synchronous=OFF should only 
ever be used on a read-only DB (or at least one where writing only 
happens when you, the user/DB Admin, actively make changes via queries). 
For anything that should run autonomously, Synchronous should preferably 
be FULL and at least be ON.


There is a time-penalty to be paid for Synchronous=FULL, but if it isn't 
part of a user interface and there is no user to notice that a specific 
thread is lagging a second or two when writing, then it rarely matters.
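
In code the advice boils down to setting the pragma explicitly on every 
connection you open, rather than trusting the compiled-in default - a tiny, 
untested sketch:

  #include "sqlite3.h"

  /* Open a database and force synchronous to FULL on that connection. */
  int open_with_full_sync(const char *zPath, sqlite3 **pDb){
    int rc = sqlite3_open(zPath, pDb);
    if( rc==SQLITE_OK ){
      rc = sqlite3_exec(*pDb, "PRAGMA synchronous=FULL;", 0, 0, 0);
    }
    return rc;
  }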


Best,
Ryan



Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread Joe Pasquariello

On 6/29/2016 1:04 AM, R Smith wrote:


These are the duplicate records. This index is associated with a 
UNIQUE constraint on the original table. Is there a way to clean it up?


That means your DB is corrupted by some method. It's not bad yet, just 
missing Index entries.
Make a copy of the database file, then try to delete those rows and 
recreate all indices with:


DELETE FROM eventlog WHERE rowid IN 
(18029,18030,18031,18032,18033,18034,18035);

REINDEX;
PRAGMA integrity_check;

If you do not delete the rows, it can't re-index because the Unique 
Index constraint will fail since the rows are not unique.


If this worked, test all tables and queries. Be sure to have that 
backup at hand - once integrity is broken, the DB state is not secure 
until integrity check passes.



PS: Nice catch Simon - I did not even consider a broken index. I've 
never had an SQLite file go corrupt on me, even though using many 
SQLite DBs in all our systems, even as application file formats, so 
I'm always a bit oblivious to that possibility.




starting from the original, corrupted DB...

integrity_check
row 18029 missing from index sqlite_autoindex_EventLog_1
row 18030 missing from index sqlite_autoindex_EventLog_1
row 18031 missing from index sqlite_autoindex_EventLog_1
row 18032 missing from index sqlite_autoindex_EventLog_1
row 18033 missing from index sqlite_autoindex_EventLog_1
row 18034 missing from index sqlite_autoindex_EventLog_1
row 18035 missing from index sqlite_autoindex_EventLog_1
wrong # of entries in index sqlite_autoindex_EventLog_1

delete from eventlog where rowid >=18029 and rowid <= 18035
(checked integrity again here, and got same result as above)

reindex
integrity_check
ok

vacuum
integrity_check
ok

Query for duplicates returns no records.

Thanks very much, Ryan and Simon. My application monitors wind turbines, 
and there are somewhat frequent power issues at the sites. At one 
particular site, I had frequent corruption on server crashes, but none 
since changing SYNCHRONOUS from OFF to NORMAL. This particular site 
still had SYNCHRONOUS OFF, but will now have NORMAL.


Joe



Re: [sqlite] How to use "cursors" in c#

2016-06-29 Thread Keith Medcalf

Subject to the same caveats as normal.  It only returns the ROWID of the last 
insert on the connection.  If you are ABSOLUTELY SURE without a doubt that the 
last insert on the connection is the one that you want the rowid for, then it 
will work as you intend.  However, if you miscompute which insert is the last 
row inserted on the connection, you may get a result that is confusing. 
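
The same caveat expressed with the C API (which the C# wrappers ultimately 
call): read sqlite3_last_insert_rowid() immediately after the insert you care 
about, before anything else can insert on that connection.  Untested sketch; 
the table name is made up:

  #include "sqlite3.h"

  /* Insert one row and return its rowid, read right after the step so no
  ** other insert on this connection can get in between. */
  sqlite3_int64 insert_and_get_rowid(sqlite3 *db, const char *zName){
    sqlite3_stmt *pStmt = 0;
    sqlite3_int64 rowid = 0;
    sqlite3_prepare_v2(db, "INSERT INTO people(name) VALUES(?1)", -1,
                       &pStmt, 0);
    sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_TRANSIENT);
    if( sqlite3_step(pStmt)==SQLITE_DONE ){
      rowid = sqlite3_last_insert_rowid(db);
    }
    sqlite3_finalize(pStmt);
    return rowid;
  }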

> -Original Message-
> From: sqlite-users-boun...@mailinglists.sqlite.org [mailto:sqlite-users-
> boun...@mailinglists.sqlite.org] On Behalf Of Simon Slavin
> Sent: Tuesday, 28 June, 2016 11:59
> To: SQLite mailing list
> Subject: Re: [sqlite] How to use "cursors" in c#
> 
> 
> On 28 Jun 2016, at 4:56pm, R Smith  wrote:
> 
> > I am not clear on what happens for INSERT OR REPLACE results when a row
> gets updated in stead of inserted, but the documentation should have
> details.
> 
> In SQLite, the REPLACE operation really does delete the original and
> insert a new row.  So last_insert_rowid() will work correctly with it.
> 
> Simon.


[sqlite] Bad db feature request

2016-06-29 Thread R Smith

Hi SQLite devs,

In response to a recent forum post and many other posts, where corrupt 
SQLite files or Index integrity were the problem at hand, I was 
wondering if we could ask for an API function that would corrupt a DB 
for us.


I would like to use this to test all our systems' ability to catch bad 
DBs or survive them at least. I also maintain one of the SQLite DB 
management tools and would like to test it and make it more apt at 
detecting, handling and perhaps even solving some of the corruptions. (I 
know someone here made some recovery tools already).


A simple:
  int sqlite3_makecorrupt(sqlite3 *db, int corruptionType)

I know I can write nonsense over the file, but I was hoping to be able 
to cause specific common corruptions, like Invalid-Schema, Missing Index 
entries, Missing pages etc. with this command. It need not be fancy, 
just find the first table that can be corrupted in the DB, and mess with 
it enough to fail an integrity check, but still work if possible. (Or 
whatever check is appropriate to detecting that kind of corruption, if 
any). Also, making it part of the connection API, it means the file I am 
corrupting had to already be opened successfully and so must be a valid 
SQLite DB to start with.


A returned error code might indicate whether successfully corrupted (I 
might ask to corrupt by missing Index entries, and the DB might have no 
usable Indices). Other returned codes might be the usual SQLITE_BUSY, 
SQLITE_NOT_A_DB and the like. (This is the kind of things hard to check 
if I am just messing with the file from outside a connection).


Another possibility is to produce a file-copy (perhaps via the backup 
API) of the DB that gets corrupted instead of the DB itself - this is 
probably safer and lends itself better to testing environments where I 
can manufacture many different corruptions from the same DB file. Maybe 
add a parameter where I can suggest the new filename.


Perhaps force a new file name that clearly indicates it (to avoid 
accidents and to keep people with nefarious intent from messing with working 
DBs easily), such as:

  MyDB.sqlite.corrupted
(from MyDB.sqlite as the original or suggested name).


The one thing I'm not sure about, maybe you or others have thoughts on 
this, is whether the corruption should be random or repeatable.


I'm thinking this must already exist somewhat in the vast SQLite 
test-suite in some form.



Thanks,
Ryan

PS: It would probably be very useful too for embedded/phone type systems 
to test battery-fail or unplugged-cable type corruptions - though these 
are easy to simulate already, but if in the API, it could more easily be 
included in test scripts.




Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread Simon Slavin

On 29 Jun 2016, at 5:36am, Joe Pasquariello  wrote:

> These are the duplicate records. This index is associated with a UNIQUE 
> constraint on the original table. Is there a way to clean it up?

Unfortunately it's an automatic index and I hesitate to manipulate it manually 
in case it messes up something else automatic.  First, using the SQLite shell 
tool in interactive mode, try a simple way:

 ...

1) Make absolutely sure you have a backup copy of the database.
2) Execute the SQL command "VACUUM".
3) Test one of the SELECTs which was giving the wrong results.

If that doesn't fix the problem I would ...

1) Use .dump to dump your data to a text file.
2) Quit the shell tool.
3) Start it back up with another filename to create a new blank database.
4) Use .read to read the text file and execute it as commands
5) Test one of the SELECTs which was giving the wrong results.

If it works, delete the old database file and rename the new one.

On 29 Jun 2016, at 9:04am, R Smith  wrote:

> PS: Nice catch Simon - I did not even consider a broken index. I've never had 
> an SQLite file go corrupt on me


It was the only thing I could think of.  OP typing in diagnostic SELECT 
commands seemed to demonstrate enough weirdness that it was worth checking.  
Unfortunately it means that we have to wonder how the database got corrupted.

Simon.


Re: [sqlite] x64 binary

2016-06-29 Thread Noel Frankinet
Not really an answer, but it's easier to compile the SQLite source directly
with your files. Only one .c file, no dependencies, no 32- or 64-bit
problem, no installation problem.

Best wishes.
Noël

On 29 June 2016 at 01:05, BOSECKER Nancy  wrote:

> Hi,
>
> I have a very basic problem, maybe someone can help.
>
> I'm trying to use sqlite3.dll in an x64 application (VS 2008).
> So, I downloaded the x64 zip from the SQLite download page, unzipped it.
> I put the .dll, .lib, and .def files in my project's directory.
> I added the .lib to the Linker->Input -> Additional Dependencies in VS.
> I wrote a test snippet of code that calls sqlite3_close().
>
> When I compile, I get an unresolved external symbol on sqlite3_close().
>
> If I do the same thing with the x86 version of the binaries, and change my
> project target to Win32, it compiles as expected.
>
> What am I missing?
>
> Thx,
> Nancy
>
>
>
>
>



-- 
Noël Frankinet
Strategis sprl
0478/90.92.54


Re: [sqlite] UNIQUE constraint violation

2016-06-29 Thread R Smith



On 2016/06/29 6:36 AM, Joe Pasquariello wrote:



On 6/28/2016 4:52 PM, Simon Slavin wrote:

On 28 Jun 2016, at 11:22pm, Joe Pasquariello  wrote:

SELECT udatetime,typeof(udatetime),udatetime-1415000934 FROM eventlog 
where device like '%M14' and udatetime=1415000934

udatetime | typeof(udatetime) | udatetime-1415000934
1415000934 | integer | 0

PRAGMA integrity_check

integrity_check
row 18029 missing from index sqlite_autoindex_EventLog_1
row 18030 missing from index sqlite_autoindex_EventLog_1
row 18031 missing from index sqlite_autoindex_EventLog_1
row 18032 missing from index sqlite_autoindex_EventLog_1
row 18033 missing from index sqlite_autoindex_EventLog_1
row 18034 missing from index sqlite_autoindex_EventLog_1
row 18035 missing from index sqlite_autoindex_EventLog_1
wrong # of entries in index sqlite_autoindex_EventLog_1

These are the duplicate records. This index is associated with a 
UNIQUE constraint on the original table. Is there a way to clean it up?


That means your DB is corrupted by some method. It's not bad yet, just 
missing Index entries.
Make a copy of the database file, then try to delete those rows and 
recreate all indices with:


DELETE FROM eventlog WHERE rowid IN 
(18029,18030,18031,18032,18033,18034,18035);

REINDEX;
PRAGMA integrity_check;

If you do not delete the rows, it can't re-index because the Unique 
Index constraint will fail since the rows are not unique.


If this worked, test all tables and queries. Be sure to have that backup 
at hand - once integrity is broken, the DB state is not secure until 
integrity check passes.



PS: Nice catch Simon - I did not even consider a broken index. I've 
never had an SQLite file go corrupt on me, even though using many SQLite 
DBs in all our systems, even as application file formats, so I'm always 
a bit oblivious to that possibility.


