Re: [sqlite] unusual but trivially reproducible bug

2017-06-19 Thread Ketil Froyn
It's not a bug: you're setting up a table alias called "limit1" (or
"limit2"). The "AS" keyword is optional, so this is the same as:

select * from foo AS limit1;
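
The alias interpretation is easy to confirm programmatically; a quick sketch with Python's built-in sqlite3 module, using the same table as the report below:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE foo(id INTEGER NOT NULL PRIMARY KEY, bar INT);
    INSERT INTO foo VALUES (NULL, 1), (NULL, 2), (NULL, 3);
""")

# "limit1" parses as a table alias, so no LIMIT applies and all rows return.
all_rows = con.execute("SELECT * FROM foo limit1").fetchall()

# The alias really is usable, which confirms how the statement was parsed.
aliased = con.execute("SELECT limit1.bar FROM foo limit1").fetchall()

# With the space, LIMIT works as intended.
limited = con.execute("SELECT * FROM foo LIMIT 1").fetchall()

print(len(all_rows), len(aliased), len(limited))  # 3 3 1
```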

Ketil

On 18 June 2017 at 21:19, Robert Cousins  wrote:
> Summary:
>   Leaving out the space after the word 'limit' causes the limit
> clause to be ignored.
> I've reproduced it on version 3.19.2 2017-05-25 16:50:27
> edb4e819b0c058c7d74d27ebd14cc5ceb2bad6a6144a486a970182b7afe3f8b9
> A sample output is below.
> On one hand, this is a failure to catch a syntax error. On the other
> hand, the fact that it isn't caught is perhaps problematic and indicates
> an underlying parser issue.
> Thanx
>
> sqlite> create table foo(id Integer not null primary key, bar int);
> sqlite> insert into foo values (NULL,1),(NULL,2),(NULL,3);
> sqlite> select * from foo limit1;
> id  bar
> --  --
> 1   1
> 2   2
> 3   3
> sqlite> select * from foo limit2;
> id  bar
> --  --
> 1   1
> 2   2
> 3   3
> sqlite> select * from foo limit 1;
> id  bar
> --  --
> 1   1
> sqlite> select * from foo limit 2;
> id  bar
> --  --
> 1   1
> 2   2
> sqlite>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



-- 
-Ketil


Re: [sqlite] Outputting to CSV - row is not quoted when there are no spaces

2017-06-14 Thread Ketil Froyn
Dan, I'd recommend using a tool that actually understands CSV, and
letting it parse the input and create new, quoted output. I haven't seen
your sed and awk, but I'm pretty sure it's easy to find special
cases where the text includes commas, quotes or even newlines that will
break your output.

A simple stab at what I think you need would be something like this
python3 script:

$ cat quotecsv.py
#!/usr/bin/env python3
import csv
import sys

csv_in = csv.reader(sys.stdin, delimiter=',', quotechar='"')
csv_out = csv.writer(sys.stdout, delimiter=',', quotechar='"',
                     quoting=csv.QUOTE_ALL)
for row in csv_in:
    csv_out.writerow(row)
### That's it ###

Then you can pipe some CSV through this. Here's a sample line where
fields are quoted only where necessary, and the result after piping it
through the python script:

$ echo 'a,b,c,"d e","f, g"'
a,b,c,"d e","f, g"
$ echo 'a,b,c,"d e","f, g"' | ./quotecsv.py
"a","b","c","d e","f, g"

Cheers, Ketil

On 14 June 2017 at 07:46,   wrote:
> Thanks Richard, in the end I added the quotes using some SED and AWK as I'm
> using SQLite as part of a BASH script.
>
> Thanks
>
>
> On 2017-06-13 14:36, Richard Hipp wrote:
>>
>> SQLite does not provide that capability, that I recall.
>>
>> But surely it would not be too difficult for you to do your own custom
>> patch, or even to write a short program to output the data in the
>> precise format you desire?
>>
>> On 6/12/17, d...@dan.bz  wrote:
>>>
>>> Hi,
>>>
>>> When outputting to CSV with '.mode csv' is there a way that all rows can
>>> be quoted even if there are no spaces? For example, here is a 1 line
>>> from the output:
>>>
>>> spotify:track:5vlDIGBTQmlyfERBnJOnbJ,Kiso,Circles,100.019
>>>
>>> I would like it to output:
>>>
>>> "spotify:track:5vlDIGBTQmlyfERBnJOnbJ","Kiso","Circles",100.019
>>>
>>> Thanks!
>>>
>



-- 
-Ketil


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-16 Thread Ketil Froyn
This may be completely irrelevant, but what about storing your sqlite db on
a zfs file system (or another with similar capabilities) and creating a new
zfs snapshot whenever you need to?

I don't know how this'll fly in practice if you have a lot of snapshots or
if you need to create many very fast, for instance. I wouldn't be surprised
if something gets slow at some point. But I see that zfs apparently
supports 2^64 snapshots, and it's copy-on-write, so creating a snapshot
is a relatively lightweight operation, even if your db is big. I guess it
depends on your use. BTW, zfs supports writable snapshots, so you can have
branches/forks too.

Let us know how it goes if you try! :)
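
A command sketch of the idea, with hypothetical pool/dataset names (adjust to wherever the .db file lives):

```shell
# Hypothetical pool/dataset names -- adjust to your layout.
zfs snapshot tank/appdata@v1          # cheap, copy-on-write snapshot
# ... run more transactions against the database ...
zfs snapshot tank/appdata@v2

# Read an old version by cloning the snapshot (a writable "branch"):
zfs clone tank/appdata@v1 tank/appdata-v1-fork
sqlite3 /tank/appdata-v1-fork/app.db 'SELECT count(*) FROM some_table;'
```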

Ketil
On 15 May 2016 4:52 p.m., "Mikael"  wrote:

> Hi!
>
> Would there be any facility whereby after each transaction I do on a
> database or table, I could somehow make a snapshot so that at any future
> point in time, I could easily do a SELECT to a given version/snaphot?
>
>
> Any solution based purely on SQL would be extremely expensive I guess (e.g.
> introduce columns for snapshot index and deletedness).
>
> Implementing my own VFS would be a good way I guess, but also extremely
> complex.
>
> Thoughts?
>
> Thanks!
>


[sqlite] Pascal (almost) style BLOBs

2016-04-17 Thread Ketil Froyn
I'm not sure I understand why you can't specify the type in a separate
column, rather than prepending it to the blob? That seems like a more
flexible way to have access to the information.
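
As a sketch of that separate-column layout (hypothetical table and type-tag names), with Python's sqlite3 and struct standing in for the ADO.NET side:

```python
import sqlite3
import struct

con = sqlite3.connect(":memory:")
# 'kind' records the element type instead of a byte prepended to the blob.
con.execute("CREATE TABLE samples(id INTEGER PRIMARY KEY, kind TEXT, payload BLOB)")

payload = struct.pack("<4h", 1, 2, 3, 4)  # four 16-bit little-endian integers
con.execute("INSERT INTO samples(kind, payload) VALUES (?, ?)", ("i16", payload))

kind, blob = con.execute("SELECT kind, payload FROM samples").fetchone()
decoders = {"i16": lambda b: struct.unpack("<%dh" % (len(b) // 2), b)}
values = decoders[kind](blob)
print(values)  # (1, 2, 3, 4)
```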

Regards, Ketil
On 17 Apr 2016 2:02 p.m., "William Drago"  wrote:

> All,
>
> Any thoughts on using the first byte of a BLOB to indicate what kind of
> data that BLOB contains?
>
> For example:
> 0 = plain old bytes
> 1 = 16 bit integers
> 2 = 32 bit integers
> 3 = singles
> 4 = doubles, etc.
>
> I am forced to consider this approach because my function will be
> receiving blobs but will have no idea what those blobs are. However, I can
> ask the sender to prepend the blob with an indicator byte to provide the
> necessary information.
>
> In the past I have used comments in the table structure and even used
> custom types (e.g. i16BLOB, for a blob that contains 16 bit ints), but in
> this case I will not have access to that information. The data is coming to
> me in the form of an ADO.NET DataTable which does not contain such
> information.
>
> Of course tagging the blobs like this will increase the size of the
> database, but not by much, and if this "feature" is not clearly documented
> someone in the future will have a very hard time figuring out why the blobs
> don't make sense.
>
> Is there anything else I should be aware of?
>
> Thanks,
> Bill
>
>
>
>
>
>
>


[sqlite] Avoid duplicate sets with join table

2016-04-16 Thread Ketil Froyn
I have two tables and a join table, in principle like this:

CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT);
CREATE TABLE features (id INTEGER PRIMARY KEY, data TEXT UNIQUE);
CREATE TABLE records_features (id_r INTEGER, id_f INTEGER, ord INTEGER);

A record consists of one or more features, and a feature can exist in
multiple records. The "ord INTEGER" is there to keep track of the
order of the features. I want to avoid duplicate feature sets. We can
assume that the ord values will match for two sets.

So let's say there's a relationship stored in the DB:

INSERT INTO records (data) VALUES("record1");
=> 10
INSERT INTO features (data) VALUES("feature1");
=> 20
INSERT INTO features (data) VALUES("feature2");
=> 21
INSERT INTO features (data) VALUES("feature3");
=> 22
INSERT INTO records_features (id_r, id_f, ord) VALUES(10,20,0);
INSERT INTO records_features (id_r, id_f, ord) VALUES(10,21,1);
INSERT INTO records_features (id_r, id_f, ord) VALUES(10,22,2);

Later, if I come across a new record that has exactly features
"feature1", "feature2" and "feature3" (IDs 20,21 and 22), I don't want
to add a new record for this. So given a new set of feature IDs, how
can I best check if there's another record that has these exact
features before I insert it?

To be clear, a record with features "feature1", "feature2", "feature4"
would be ok. So would "feature1", "feature2". Subsets are ok, but not
exact duplicates.

One way could be to drop the join table and do a string concat of the
feature IDs in a UNIQUE column in the record table, ie:

CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT, feature_ids
TEXT UNIQUE);

INSERT INTO records (data, feature_ids) VALUES("record1", "20,21,22");

but I'd prefer to avoid manually having to parse the IDs out of a
string. It does appear to be a hairy way to solve what I need though.

A combination of the two would be better. The string "20,21,22" would
function as a unique key avoiding duplicates, and if I also keep the
join table I can still query the database without parsing strings. I
won't be doing many updates, so I'm not really concerned about editing
these strings.

But I'm sure there's a better way. Do I need to use a CTE for this?
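
A sketch of the string-key check done on the fly with GROUP_CONCAT, so nothing needs to be stored or parsed manually (IDs as above; note the ordering caveat in the comment):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT);
    CREATE TABLE features (id INTEGER PRIMARY KEY, data TEXT UNIQUE);
    CREATE TABLE records_features (id_r INTEGER, id_f INTEGER, ord INTEGER);
    INSERT INTO records (id, data) VALUES (10, 'record1');
    INSERT INTO features (id, data)
        VALUES (20, 'feature1'), (21, 'feature2'), (22, 'feature3');
    INSERT INTO records_features VALUES (10, 20, 0), (10, 21, 1), (10, 22, 2);
""")

candidate = [20, 21, 22]  # feature ids of the record about to be inserted
key = ",".join(str(i) for i in candidate)

# Build an ordered key per record and compare. Caveat: GROUP_CONCAT's order
# is not formally guaranteed; the ordered subquery makes it work in practice.
dup = con.execute("""
    SELECT id_r FROM (
        SELECT id_r, GROUP_CONCAT(id_f) AS fkey
        FROM (SELECT id_r, id_f FROM records_features ORDER BY id_r, ord)
        GROUP BY id_r
    ) WHERE fkey = ?
""", (key,)).fetchone()
print(dup)  # (10,) -> an exact duplicate already exists
```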

-Ketil


[sqlite] Using SQLite for storing photo's

2016-04-15 Thread Ketil Froyn
I would definitely store the pictures in separate files. The main reason is
compatibility: if you store them as blobs in sqlite, you have to extract
them first to view or edit them in another program. Are you sure you've
covered all your use cases with your program?

Ketil
On 14 Apr 2016 7:37 p.m., "Cecil Westerhof"  wrote:

> I am thinking about storing my photo's in SQLite. This has to be done in a
> blob of-course, but I was wondering if there are any caveats, or if anyone
> has helpful tips.
>
> One thing I was wondering: what is the best way to make a backup? I would
> think a normal dump is not very handy. It would probably better to store
> the pictures them-self in one file pro picture and only store the
> (meta-)data in SQL. What would be a good way?
>
> --
> Cecil Westerhof
>


[sqlite] Hex literals not working?

2015-04-16 Thread Ketil Froyn
Thanks, I should have searched the changelog!

On 16 April 2015 at 10:34, Zsbán Ambrus  wrote:
> On Thu, Apr 16, 2015 at 10:29 AM, Ketil Froyn  wrote:
>> Hexadecimal integer literals follow the C-language notation of
>> "0x" or "0X" followed
>> by hexadecimal digits. For example, 0x1234 means the same as 4660...
>>
>> Am I doing something wrong? Or is this a feature that is newer than
>> Ubuntu 14.04's bundled sqlite3, which is 3.8.2?
>
> See http://sqlite.org/changes.html which tells you that hexadecimal
> literals are available from sqlite version 3.8.6.
>
> -- ambrus



-- 
-Ketil


[sqlite] Hex literals not working?

2015-04-16 Thread Ketil Froyn
On https://sqlite.org/lang_expr.html under "Literal Values (Constants)", I read:

Hexadecimal integer literals follow the C-language notation of
"0x" or "0X" followed
by hexadecimal digits. For example, 0x1234 means the same as 4660...

However, this doesn't seem to work for me:

sqlite> select 12;
12
sqlite> select 0x12;
Error: unrecognized token: "0x12"

Am I doing something wrong? Or is this a feature that is newer than
Ubuntu 14.04's bundled sqlite3, which is 3.8.2?
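
For reference, on a build with hex-literal support the documented value can be confirmed from e.g. Python (assumes the linked library is 3.8.6 or newer, per the reply above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
print(sqlite3.sqlite_version)  # needs >= 3.8.6 for hex literals

value = con.execute("SELECT 0x1234").fetchone()[0]
print(value)  # 4660, matching the documentation
```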

-Ketil


[sqlite] Sqlite 2 and Windows 7

2015-04-10 Thread Ketil Froyn
On 10 Apr 2015 15:05, "Gabriel Tiste"  wrote:
>
> I updated the faulty column to an empty string and noticed that I could
select all records in that table. That must be a sign that something was
written to that column that sqlite could not parse.
>
> Question:
> Is there a way to dump the database without any validation or error
checking? Can I access the content in the sqlite file somehow to see what
really resides in that column that makes it corrupt?
>

A hex editor?

Ketil


[sqlite] Very poor SQLite performance when using Win8.1 + Intel RAID1

2015-03-25 Thread Ketil Froyn
Have you checked the page/block/record size of the file systems and RAID
systems, vs sqlite's page_size (which you can check or set with the
relevant pragma: https://www.sqlite.org/pragma.html)?

If sqlite has a smaller or different page size than the underlying storage,
a single sqlite write can result in a disk read, modify and write. That'll
cost you a lot of resources, and I wouldn't be surprised if that showed up
in FlushFileBuffers().
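
Checking or changing sqlite's page size is cheap to try; a sketch (8192 is just an example value, pick one matching the RAID stripe/SSD block size):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "t.db")
con = sqlite3.connect(path)
print(con.execute("PRAGMA page_size").fetchone()[0])  # build default, often 4096

# A new size only takes effect when the file is (re)built, hence the VACUUM.
con.execute("PRAGMA page_size = 8192")
con.execute("VACUUM")
new_size = con.execute("PRAGMA page_size").fetchone()[0]
print(new_size)  # 8192
```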

Cheers, Ketil
On 24 Mar 2015 16:44, "Rob van der Stel" 
wrote:

> Hello,
>
> Currently I am investigating a SQLite performance problem that started to
> occur when we switched from using Windows XP to Windows 8.1  for our
> applications.
> The following information is obtained to comparing two hardware identical
> systems one running Win XP the other running Win 8.1.
>
> *** HW specification System 1 and System 2
> --- HP RP5 Retail PC 5810
> --- Intel Celeron G1820 at 2.7 GHz
> --- 4096 MB DDR3 / 1333 MHz
> --- Intel ICH8R onboard RAID1 with 2 x Samsung 128 GB SSD
>
> *** SW specification System 1
> --- Windows XP
> --- Intel RAID driver 9.5.0.1037 02/10/2009
>
> *** SW specification System 2
> --- Windows 8.1
> --- Intel RAID driver 12.5.0.1066 18/03/2013
>
>
> =>> When running our applications the performance of SQLite write-access
> on System2 (Win 8.1) is at least 10 times slower than on System1 (Win XP).
> =>> Investigations have already shown that the Windows API
> FlushFileBuffers() plays an important part regarding this issue.
>
> The following describes a few comparison tests:
> --- SQLite 1000 x append in new database  *)  System 1: 1000 - 1100 msec, System 2: 1 - 11000 msec
> --- Binary file 1000 x {append + FlushFileBuffers}  *)  System 1: 220 - 230 msec, System 2: 2600 - 2700 msec
>
> *) The data size of each write is comparable (106 bytes). Final file sizes
> also match.
>
> Has anyone noticed such a dramatic SQLite performance deterioration under
> comparable circumstances? If so were you able to find the root cause of it
> and tackle the problem by making improvements such that both systems
> perform equally well again (other driver software ? other driver parameters
> ? ...)
>
> It is important that we retain the data integrity provisions that are
> required for our system (RAID1 and SQLite using FlushFileBuffers). Using
> the "SQLITE_NO_SYNC compile option" or the "PRAGMA synchronous = 0" is
> therefore no alternative.
>
> Regards,
> Rob
>
>
> This e-mail and any attachments contain material that is confidential for
> the sole use of the intended recipient. Any review, reliance or
> distribution by others or forwarding without express permission is strictly
> prohibited. If you are not the intended recipient, please contact the
> sender and delete all copies.
>
>
> This message has been scanned for malware by Websense. www.websense.com
>


[sqlite] What is wrong with this simple SQL?

2015-03-22 Thread Ketil Froyn
But both the patients table and the DIABETIC_ISSUES_LAST table have
columns called emis_number. Since your query turns out to be valid
despite not doing what you expected, sqlite doesn't know which of
those columns you're referring to. So it looks like the "ambiguous
column name" is in fact the correct error message.

Regards, Ketil

On 22 March 2015 at 15:15, Bart Smissaert  wrote:
> Sorry, that table did indeed not have a column named emis_number, my
> mistake.
> Still, the error message ambiguous column name doesn't seem quite right.
> Should that not also be no such column: emis_number?
>
> RBS
>
> On Sun, Mar 22, 2015 at 2:06 PM, Igor Tandetnik  wrote:
>
>> On 3/22/2015 8:50 AM, Bart Smissaert wrote:
>>
>>> select g.gp_name, d.emis_number from DIABETIC_ISSUES_LAST d
>>> inner join patients p on(d.emis_number = p.emis_number)
>>> inner join gp_table g on(p.usual_gp_index_number = g.gp_id)
>>> where d.emis_number not in(select DB.emis_number from DIABETICS DB)
>>>
>>> I get:
>>>
>>> no such column: DB.emis_number
>>>
>>
>> So, the table DIABETICS doesn't have a column named emis_number
>>
>>  This runs fine:
>>>
>>> select emis_number from DIABETIC_ISSUES_LAST
>>> where emis_number not in(select emis_number from DIABETICS)
>>>
>>
>> emis_number in the sub-select is DIABETIC_ISSUES_LAST.emis_number, not
>> DIABETICS.emis_number
>>
>>  So, how should I do this?
>>>
>>
>> First, you have to figure out *what* you are trying to do. In light for
>> the fact that DIABETICS doesn't have a column named emis_number, it's not
>> at all clear.
>> --
>> Igor Tandetnik
>>
>>



-- 
-Ketil


Re: [sqlite] Performance issue when copying data from one sqlite database to another

2014-10-14 Thread Ketil Froyn
Depends on how safe/robust you want the copying to be, but if you can
simply rerun if something goes wrong, you might look into stuff like:

pragma journal_mode = MEMORY;
pragma synchronous = off;

But make sure you understand the consequences first by reading about these
commands:

http://www.sqlite.org/pragma.html
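
Applied to the attach-and-copy pattern in the question below, the pragmas can be scoped to the destination database alone. A sketch with made-up table and path names:

```python
import os
import sqlite3
import tempfile

src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE t(time INTEGER, v TEXT);
    INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

dest_path = os.path.join(tempfile.mkdtemp(), "dest.db")
src.execute("ATTACH DATABASE ? AS dest", (dest_path,))

# Relax durability on the destination only -- acceptable when a failed
# copy can simply be rerun from scratch.
src.execute("PRAGMA dest.journal_mode = MEMORY")
src.execute("PRAGMA dest.synchronous = OFF")

src.execute("CREATE TABLE dest.t(time INTEGER, v TEXT)")
with src:  # one transaction for the whole copy, not one per row
    src.execute("INSERT INTO dest.t SELECT * FROM t WHERE time BETWEEN 1 AND 2")

copied = src.execute("SELECT count(*) FROM dest.t").fetchone()[0]
print(copied)  # 2
```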

Cheers, Ketil
On 14 Oct 2014 23:25, "Pontus Bergsten"  wrote:

> Hi,
> I have the following problem setup:
> We use sqlite3 in an embedded signal logger application. The "Main"
> in-memory signal signal database consists of some minor signal definition
> tables + two large tables (shards) with the actual signal data. The
> sharding technique is used in order to implement an efficient ringbuffer in
> sqlite.
>
> Now, from time to time in the application, it is desired to extract some
> signals in some specified time window from the "Main" database, and save
> the selected signals to another smaller "Dest" database on USB memory. The
> "Dest" database will have the same signal definition tables as "Main", but
> only one signal data table. No ringbuffer functionality, and hence no
> sharding, is needed for the "Dest" database.
> The actual copying is done by first creating the "Dest" database file with
> the required empty tables on USB, and then attach it to the "Main"
> in-memory database. Then the signal definitions and data is copied using a
> series of statements looking much like
>
> INSERT INTO Dest.TheTable (field1, ..., fieldN) SELECT field1, ..., fieldN
> FROM Main.TheTable WHERE time BETWEEN t1 AND t2
> And here is the performance problem: When the application is executed on
> Windows on a desktop computer, the copying works fine and the performance
> is fairly ok, even when saving to USB. However, when the same code is
> executed on the embedded system, the copying of data is extremely slow,
> even though the CPU load is very moderate. Profiling the thread that
> executes the sql-statements above, reveals that the thread is active in
> many very small steps, while waiting for the USB driver for very long time
> (compared to the active time), in between. During profiling the copy-thread
> only did useful work for about 5% of of the total time, the rest was
> waiting.
>
> Is there any technique that can be used for tuning the performance of
> sqlite3 in this scenario? For example, writing larger chunks of data to the
> "Dest" database?
>
> Regards,
> Pontus Bergsten
>
>


Re: [sqlite] pls dont send so many message - once a month is enough

2014-09-12 Thread Ketil Froyn
Clicking the mailto works, but it's not a proper reply-to. These two
messages have appeared as a separate thread in my mailbox, probably because
Larry's message didn't add the proper message-id reference. So if you reply
like this on a high volume list, people following the thread might miss
your post.

Cheers, Ketil
On 12 Sep 2014 13:35, "John McKown"  wrote:

> On Fri, Sep 12, 2014 at 1:16 AM, Larry Brasfield
>  wrote:
> > John McKown writes:
> >
> >>  The first option is to totally disable getting any emails at all. This
> >>  is often called NOMAIL mode. If you decide to do that, then you can,
> >>  at your convenience, go to this page:
> >>  http://sqlite.org:8080/cgi-bin/mailman/private/sqlite-users/ and
> >>  review the archives. Before you can browse them, you will need to put
> >>  in your email address and password. This option is really nice for
> >>  reading threads because the software can put the thread together for
> >>  you in a logical manner. Unfortunately, you cannot use the archive
> >>  page to reply to a message, or to originate a message.
> >
> > John, your kind reply is an inspiration.
> >
> > You, and the OP, may be pleased to know that under the right
> circumstances,
> > you can use the archive page to reply to a message.  On every message
> > displayed by the web interface to the archive, the 2nd line had an
> > underlined,
> > "mailto:; link.  For systems/accounts which are setup to handle that
> > "protocol",
> > clicking the link in most modern browsers will invoke the setup email
> > client,
> > with the correct addressee to reach the list.  (I have used that mailto:
> > link for
> > this message, in fact.)
> >
> > Best regards,
> > --
> > Larry Brasfield
>
> Ah. I see it, now. That cataract surgery is paying off! I didn't
> notice that the email address was highlighted and could be clicked
> upon.
>
> --
> There is nothing more pleasant than traveling and meeting new people!
> Genghis Khan
>
> Maranatha! <><
> John McKown
>


[sqlite] Logging and timing queries

2014-07-31 Thread Ketil Froyn
Hi,

In debugging and inspecting applications using sqlite, I find it would be
very useful to log some or all queries sent to the database, ideally also
with timing information on how long it took to gather and send the
response, so that it can be determined what queries are received and how
long each query actually takes. It's possible to do this at the application
level, but I think it would be very useful to do this using sqlite itself.
If detailed query logging could be enabled in the database with a pragma or
by using a debug version of the library somehow, it would let users and
developers get standardised details about what queries are run and how long
they take, even temporarily on a live system.

There are lots of situations where this could be useful, like switching on
and off debugging on the fly in live/production systems to identify
transient problems or bottlenecks, and debugging software using sqlite with
a live dataset without necessarily having to recreate the working
conditions in a controlled environment, and without recompiling, and
without developing/enabling support for sql debugging in the application.
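
For what it's worth, SQLite's C API already has a trace hook (sqlite3_trace / sqlite3_trace_v2) that some bindings expose, which covers the logging half of this without recompiling; timing still has to be wrapped by the caller. A sketch with Python's sqlite3:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")

log = []
# set_trace_callback invokes the function with each SQL statement as it runs.
con.set_trace_callback(log.append)

con.execute("CREATE TABLE t(x)")
start = time.perf_counter()
con.execute("INSERT INTO t VALUES (1)")
elapsed = time.perf_counter() - start  # timing still measured by the caller

print(log)  # the CREATE TABLE and INSERT statements, as executed
```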

I've used mysql-proxy in the past to debug mysql databases in this way, and
it was very effective. But since there's no network connection involved in
sqlite, this method can't be used to temporarily redirect the queries in
the same way.

Have I missed a way to do something like this in sqlite already, or would
you regard this as a feature request?

Thanks, Ketil