Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?

2014-12-08 Thread Max Vlasov
On Mon, Dec 8, 2014 at 5:56 PM, Roger Binns rog...@rogerbinns.com wrote:
 On 12/07/2014 04:43 PM, David Barrett wrote:
 so I'm curious if you can think of a way using the API (or any
 other way) to essentially nice the process by inserting a short
 sleep into whatever loop runs inside the VACUUM command.

 Using OS provided functionality will be the most reliable.  Other than
 that, a simple way is to sleep in the progress callback, although that
 will make I/O lumpy.
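The progress-callback approach quoted above can be sketched with Python's stdlib sqlite3 module (the sleep interval and opcode count below are arbitrary illustrations, not recommendations):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])
conn.commit()  # VACUUM cannot run inside an open transaction

calls = 0

def throttle():
    # Invoked every N VDBE opcodes; sleeping here paces the whole
    # statement, including VACUUM, at the cost of lumpy I/O.
    global calls
    calls += 1
    time.sleep(0.0001)  # the "nice" knob; tune to taste
    return 0            # returning non-zero would abort the statement

conn.set_progress_handler(throttle, 1000)  # callback every 1000 opcodes
conn.execute("VACUUM")
print("progress handler fired", calls, "times")
```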

I wonder whether I/O sleeping is possible in the first place. Correct
me if I'm wrong, but what we usually call sleeping concerns the CPU,
which already sleeps during most I/O operations while waiting for
rotating media to finish its slow tasks. As a consequence, the more
fragmented the data on disk, the less relative CPU time will be spent
reading and writing it. So one strategy might be to measure CPU
consumption over consecutive blocks of data: if it is 100% (or another
heuristically adequate value), never sleep (this probably means either
cached data or non-fragmented data on a fast disk); but when CPU
consumption drops significantly (much time spent waiting for I/O),
sleeping might indeed be needed.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Search for text in all tables

2014-12-05 Thread Max Vlasov
On Thu, Dec 4, 2014 at 10:44 AM, Baruch Burstein bmburst...@gmail.com wrote:

 Is it possible to somehow search for/replace a string in all columns of all
 tables?


Not exactly the answer to your question, but rather a method you
or others might use.
I once implemented a virtual table "allvalues" that outputs all
database values with the (hopefully self-explanatory) fields

  TableName, TableRowId, FieldName, Value

which also allows exploring unknown complex databases. Take for example
places.sqlite of Mozilla Firefox. If you want to explore where it
saves your visited sites, you can use the query

  SELECT DISTINCT TableName, FieldName FROM allvalues WHERE Value
LIKE '%http://%'

Sqlite's virtual tables are a great tool since, with a little effort
on the developer side, the newly created entity starts working as a
first-class citizen of the sqlite engine. Compare this to a quick hack
that dumps all raw data from a specific database to a specific medium.
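Lacking such a virtual table, a similar all-columns search can be approximated from the host language by walking sqlite_master; this is a sketch of the idea only, not the allvalues implementation (which is not shown in the thread), and it assumes ordinary rowid tables:

```python
import sqlite3

def find_value(conn, pattern):
    """Return (table, rowid, column, value) for every matching cell."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cols = [r[1] for r in conn.execute(f'PRAGMA table_info("{table}")')]
        for col in cols:
            q = f'SELECT rowid, "{col}" FROM "{table}" WHERE "{col}" LIKE ?'
            for rowid, value in conn.execute(q, (pattern,)):
                hits.append((table, rowid, col, value))
    return hits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (ref TEXT, note TEXT)")
conn.execute("INSERT INTO urls VALUES ('http://sqlite.org', 'home')")
print(find_value(conn, '%http://%'))
```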

Max


Re: [sqlite] Search for text in all tables

2014-12-05 Thread Max Vlasov
 I once implemented a virtual table allvalues that outputs all
 database values with (hope self-explaining) fields

 TableName, TableRowId, FieldName, Value

 Could you expand on how you coped with the underlying database
 changing, and how you mapped virtual table rowids to the actual
 database records?


This particular implementation was intended to be used as a
select-only wrapper, so it just iterates through every sqlite_master
table entry and every row of each table; I didn't support update and
insert. The rowid of this virtual table is a compound bit mask,
starting with table bits (enough to fit the number of tables in this
db), then field bits (enough for the maximum possible number of fields
in a table), and the rest is the rowid of the particular table. So in
theory this method won't work for tables containing large 64-bit ids,
where there are not enough bits left to encode the table number and
field number.
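The packing scheme described can be sketched as follows; the bit widths here are illustrative choices, not the actual implementation's:

```python
TABLE_BITS = 10   # enough for 1024 tables   (illustrative width)
FIELD_BITS = 10   # enough for 1024 columns  (illustrative width)
ROW_BITS = 63 - TABLE_BITS - FIELD_BITS  # keep the 64-bit sign bit clear

def pack(table_no, field_no, row_id):
    # Compound rowid layout: [table | field | row]. Fails exactly as the
    # post says when the real rowid needs more than ROW_BITS bits.
    assert row_id < (1 << ROW_BITS)
    return (table_no << (FIELD_BITS + ROW_BITS)) | (field_no << ROW_BITS) | row_id

def unpack(vrowid):
    return (vrowid >> (FIELD_BITS + ROW_BITS),
            (vrowid >> ROW_BITS) & ((1 << FIELD_BITS) - 1),
            vrowid & ((1 << ROW_BITS) - 1))

print(unpack(pack(3, 7, 123456)))
```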

Max


[sqlite] Is msvcrt coupling now unavoidable (3.8.7)?

2014-10-22 Thread Max Vlasov
Hi,

my static linking with Delphi for version 3.8.7 now complains about
_beginthreadex_/_endthreadex_.
A quick search shows that everywhere there's a recommendation to use these
functions instead of the CreateThread API if one plans to use the Visual C
run-time (msvcrt).

All my previous linking with sqlite was based on hand-made equivalent
functions for every static link to external (non-dll) functions, so my
programs never used msvcrt. Now I probably have no choice but either to
include msvcrt linking or to make my own de-wrapper: a function that
accepts the _beginthreadex_ arguments and calls CreateThread. Or could
sqlite introduce a define that keeps the VC run-time coupled by default,
but, when disabled, uses the CreateThread API?

Max


Re: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?

2014-10-22 Thread Max Vlasov
On Wed, Oct 22, 2014 at 4:50 PM, dave d...@ziggurat29.com wrote:


  -Original Message-
  From: sqlite-users-boun...@sqlite.org
  [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Max Vlasov
  Sent: Wednesday, October 22, 2014 5:25 AM
  To: General Discussion of SQLite Database
  Subject: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?
 ...
  my static linking with Delphi for version 3.8.7 now complains about
  _beginthreadex_/_endthreadex_.
 ...
 ...

 It's a good question; and it doesn't look like there is a great way of
 turning off that feature other than SQLITE_THREADSAFE=0, which you might
 not
 want for other reasons.


Thanks, dave, Clemens,

I also did some researching/fixing.

The sources contain some comments that contradict the current state of
the code; I'm not sure whether I should list them here.
Overall, if one doesn't want msvcrt coupling, then
SQLITE_MAX_WORKER_THREADS=0 should be among the compile options. In
that case no _beginthreadex/_endthreadex
linking will be required.
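Assuming that conclusion, the compile invocation would look along these lines (a sketch; only SQLITE_MAX_WORKER_THREADS=0 is the flag that matters here, the rest is context):

```shell
# Worker threads disabled: no _beginthreadex/_endthreadex references,
# so no msvcrt coupling for a statically linked Windows build.
gcc -c sqlite3.c -DSQLITE_MAX_WORKER_THREADS=0 -DSQLITE_THREADSAFE=1
```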

The wrapping calls mentioned by Clemens Ladisch work. I wrote them
independently, but they look char-by-char the same :). When I did PRAGMA
threads=4 and set breakpoints for a SELECT ... ORDER BY query, they were
visited and no visible errors appeared.

Max


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-06-03 Thread Max Vlasov
On Tue, May 27, 2014 at 10:49 PM, Richard Hipp d...@sqlite.org wrote:
 On Tue, May 27, 2014 at 10:50 AM, Richard Hipp d...@sqlite.org wrote:


 This time I build the 32-bit DLL using mingw instead of MSVC.  (MSVC was
 still used for the 64-bit DLL.)  So perhaps it will work correctly on
 WinXP.  Please let me know one way or the other.  Thanks.


I don't know what the difference is between the stock 3.8.4.3 DLL from
the site and the newly compiled one, but synthetic tests with memory
databases (no I/O involved) show that the new one is about 20% faster
(Intel Core i5-2310). If they're both MinGW-built, then probably some
switch differences between the builds (if there are any) give such a
difference.

Here is the schema and the timings. The first one is for
sqlite-dll-win32-x86-3080403.zip, the second one is for
sqlite-dll-win32-x86-201406021126.zip

The table in memory
  CREATE TABLE [TestTable] ([ID] Integer primary key, [IntValue]
INTEGER, [FloatValue] FLOAT, [StrValue] TEXT)

was populated  with 1000 pseudo-random rows using the same seed for
every test. The following queries all use cross join and differ in
types of data used (int, float, string)


Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2 )
62 msec
47 msec

Select Count(*) FROM (SELECT *,t1.FloatValue / t2.FloatValue as
divvalue FROM TestTable T1 Join TestTable T2 where abs(divvalue -
round(divvalue)) > 0.499)
453 msec
359 msec

Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
where (T1.IntValue * T2.IntValue) % 1789 = 56)
203 msec
187 msec

Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
where substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20)
= t2.strvalue)
499 msec
405 msec

Select Count(*) FROM (SELECT *,t1.FloatValue / t2.FloatValue as
divvalue FROM TestTable T1 Join TestTable T2 where abs(divvalue -
round(divvalue)) > 0.499 or (T1.IntValue * T2.IntValue) % 1789 = 56 or
substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20) =
t2.strvalue)
1124 msec
952 msec


Max


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-05-10 Thread Max Vlasov
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp d...@sqlite.org wrote:
 On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov max.vla...@gmail.com wrote:

 Is there a machine-readable (BNF or other) grammar as equivalent to
 the current syntax diagrams?

 An updated version of all-bnf.html has now been checked in at
 http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html


Thanks again for the updated grammar. I don't know whether it's used
for the production logic (probably it's just kept manually in sync
with it), but I wanted to mention a couple of things I noticed.

In the rule
  join-clause::=table-or-subquery [ join-operator
table-or-subquery join-constraint ]
the construct in [] should probably be made repeatable with *. At
least, without this addition it prevented parsing of multiple joins.

There are no rules described for initial-select and recursive-select,
so I guessed they are synonyms for select-stmt.
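The multi-join shape that the unrepeated rule cannot derive is perfectly legal SQL, as a quick check shows (the tables here are invented purely for the test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x);  INSERT INTO a VALUES (1);
    CREATE TABLE b (x);  INSERT INTO b VALUES (1);
    CREATE TABLE c (x);  INSERT INTO c VALUES (1);
""")
# Two join-operators in one join-clause: parsing this requires the
# [ join-operator table-or-subquery join-constraint ]* repetition.
count = conn.execute(
    "SELECT count(*) FROM a JOIN b USING (x) JOIN c USING (x)").fetchone()[0]
print(count)
```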

Max


Re: [sqlite] WITHOUT ROWID option

2014-05-09 Thread Max Vlasov
On Wed, May 7, 2014 at 6:31 PM, Richard Hipp d...@sqlite.org wrote:
 On Wed, May 7, 2014 at 9:00 AM, Marco Bambini ma...@sqlabs.net wrote:

 What is the best way to know if a table has been created with the WITHOUT
 ROWID option?



 (1) You could send SELECT rowid FROM table 

 (2) Run both PRAGMA index_list(table) 
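Method (1) from the quote can be sketched directly; it relies on "SELECT rowid" failing on a WITHOUT ROWID table (and assumes no user column is itself named rowid):

```python
import sqlite3

def has_rowid(conn, table):
    # A WITHOUT ROWID table has no rowid column, so selecting it fails.
    try:
        conn.execute(f'SELECT rowid FROM "{table}" LIMIT 1')
        return True
    except sqlite3.OperationalError:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plain (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE norowid (id INTEGER PRIMARY KEY) WITHOUT ROWID")
print(has_rowid(conn, "plain"), has_rowid(conn, "norowid"))
```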



Is there a way for a virtual table implementation to report that
there's no rowid support before first xRowId call takes place?

Max


[sqlite] CTE in views for older versions

2014-04-20 Thread Max Vlasov
Hi,
I noticed that an attempt to open a database containing a view

Create view ... with recursive ...

with older (non-CTE) versions of sqlite failed. The call to
open_v2 was ok, but any access, for example to PRAGMA encoding, led to
a "malformed database schema" error. Although it came as no big
surprise, one might expect the delayed handling behavior used for
virtual tables, so that the view doesn't trigger the incompatibility
until it is referenced in a query.

So probably either
  https://www.sqlite.org/lang_createview.html
or
  https://sqlite.org/lang_with.html

might contain a little sentence about such incompatibility.

Thanks,

Max


Re: [sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-18 Thread Max Vlasov
On Fri, Apr 18, 2014 at 8:08 PM, Andy Goth andrew.m.g...@gmail.com wrote:
 On 4/18/2014 12:29 AM, Max Vlasov wrote:

 So it seems that general queries allow automatic affinity selection
 while the bind API does not have a corresponding function. I know
 that I can analyze incoming data myself, but since general queries
 probably use a similar function, making some kind of
 sqlite_bind_auto should be no big deal. But probably this decision
 was deliberate.


 This was done for C compatibility.  A C++ wrapper for SQLite can have an
 sqlite_bind_auto() function which is overloaded for a variety of types,
 but it would in reality be multiple functions that have the same name
 but otherwise different type signatures.

I meant something different: a function accepting a text value, with
sqlite doing the affinity conversion according to how the value looks.
Actually it seems that all the problems I faced were related to the
column being typeless in the declaration. For integer-declared fields
sqlite correctly converts values to integer affinity even with the
sqlite3_bind_text function, so it makes no sense to force another kind
of detection when the type is declared. But if I stay with typeless
fields, I will definitely have problems with indexes on them, because
it seems the query planner relies on declared types and doesn't care
how many rows have a particular affinity. So it looks like I should
pre-scan some rows for type detection if I want automatic conversion
without knowing the type info in advance.
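The declared-type behaviour described above can be checked from Python, where every str parameter is bound as text (analogous to sqlite3_bind_text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE typed (v INTEGER)")   # declared affinity
conn.execute("CREATE TABLE untyped (v)")         # typeless column
conn.execute("INSERT INTO typed VALUES (?)", ("123",))
conn.execute("INSERT INTO untyped VALUES (?)", ("123",))

# INTEGER affinity converts the bound text; the typeless column keeps it.
t1 = conn.execute("SELECT typeof(v) FROM typed").fetchone()[0]
t2 = conn.execute("SELECT typeof(v) FROM untyped").fetchone()[0]
print(t1, t2)   # integer text
```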

Max


[sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-17 Thread Max Vlasov
Hi,

The problem was with my program that automatically converts xml data
into an sqlite table. It looks for an attribute and appends a column
if it does not exist, stating no particular type. All values were
appended with sqlite_bind_text. Everything was fine, but an index
created after this on a ParentId field, which in real life was integer
or null, actually appeared text-based. Sure enough, typeof for all
inserted fields appeared to be text.

On the other side for a typeless table
  CREATE TABLE t (value)

Query
  insert into t (value) values (123)
  insert into t (value) values ('text')
  insert into t (value) values (34.45)

finally makes
  select typeof (value) from t

returns
  integer
  text
  real

So it seems that general queries allow automatic affinity selection
while the bind API does not have a corresponding function. I know that
I can analyze incoming data myself, but since general queries probably
use a similar function, making some kind of sqlite_bind_auto should be
no big deal. But probably this decision was deliberate.

Max


Re: [sqlite] about the apparently arriving soon threads

2014-04-08 Thread Max Vlasov
On Tue, Apr 8, 2014 at 11:00 PM, big stone stonebi...@gmail.com wrote:
 Hi,

 I did experiment splitting my workload in 4 threads on my cpu i3-350m to
 see what are the scaling possibilities.

 Timing :
 1 cpu = 28 seconds
 2 cpu = 16 seconds
 3 cpu = 15 seconds
 4 cpu = 14 seconds


If the info at 
http://ark.intel.com/products/43529/Intel-Core-i3-350M-Processor-3M-Cache-2_26-GHz
is right, you have 2 cores, each having 2 threads. They're logical
cores, but not physical ones. My tests with multi-threading
benchmarks, including parallel quicksort, showed that a similar mobile
i3 processor rarely benefits beyond 2 threads; probably the
cache-coherence penalty is the cause. A desktop Intel Core i5-2310,
for example, is a different beast (4 cores/4 threads): 3 threads were
almost always x3 faster, and 4 threads a little less than x4.

It all still depends on the application. I had stopped believing a
2-threaded Atom would show x2 in any of my tests, until on one
graphical test it finally made it. But still, when the number of
threads is bigger than the number of cores, what you see is probably a
legacy of the HyperThreading hardware Intel started its
multi-threading with.
Max


Re: [sqlite] comma-separated string data

2014-04-07 Thread Max Vlasov
On Sun, Apr 6, 2014 at 10:23 PM, Dominique Devienne ddevie...@gmail.com wrote:


 If the answer to either question above is true, then a specialized
 vtable would be both more convenient and faster, no?


Hmm... If the logical peculiarity of the vtable approach (where
where-constrained queries might return more rows than a full-scan one)
is acceptable to sqlite (mentioned in my other post), then the where
expression might serve as parameters, so a hybrid might be possible
(also inspired by the recent discussion of creating user functions on
the fly). For example, a virtual table that accepts a Select statement
might look like

CREATE VIRTUAL TABLE vcommalist USING QueryVirtualizer('WITH RECURSIVE
 :commalist  ')

And the actual query using it might look like

SELECT * FROM vcommalist WHERE commalist='1,2,3,4,5'

This one serves more like a shortcut, but probably a broader version
is possible where the parameter to the virtual table is a
printf-formatted string, so one can dynamically customize things
general parameters can't, i.e., table names, output column names etc.

Multiple parameters would be great, but with the current state of
things the implementation still has to use some kind of workaround to
ensure correct results: it should always return a huge estimatedCost
from xBestIndex when the constraints that arrive don't contain at
least one required parameter (the WHERE clause lacks it), and a low
one when all parameters are provided. I think sqlite might as well
interpret an estimatedCost equal to  -1  as a ban on using this index.

Max


Re: [sqlite] comma-separated string data

2014-04-05 Thread Max Vlasov
On Fri, Apr 4, 2014 at 10:20 PM, peter korinis kori...@earthlink.net wrote:
 A data column in a link table contains comma-separated string data, where

 How do you 'parse' a table entry like: 4,66,51,3009,2,678, . to extract
 these values and use them in an SQL statement, perhaps a WHERE id='66'?



In similar cases I use my virtual table explained here:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html
Actually the table works more like a function, so only one instance is
required in the db to apply this trick.

Finally the db contains

CREATE VIRTUAL TABLE cmlist Using vtcommalist

and the query

SELECT value FROM cmlist WHERE commalist='45,56,78,125'

returns 4 rows

45
56
78
125

And this was a blank db containing only this virtual table, and the
query doesn't reference anything outside while still generating table
data. So the data for the virtual table is actually supplied at the
time of the query itself, from the WHERE clause.

This trick successfully works with joins and everything else.
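The stdlib sqlite3 module cannot register virtual tables, but for comparison the same comma-list expansion can be written as a recursive CTE (SQLite >= 3.8.3); this is an alternative sketch, not the vtcommalist implementation:

```python
import sqlite3

SPLIT = """
WITH RECURSIVE split(item, rest) AS (
    SELECT NULL, ? || ','
    UNION ALL
    SELECT substr(rest, 1, instr(rest, ',') - 1),
           substr(rest, instr(rest, ',') + 1)
    FROM split WHERE rest <> ''
)
SELECT item FROM split WHERE item IS NOT NULL
"""

conn = sqlite3.connect(":memory:")
values = [row[0] for row in conn.execute(SPLIT, ("45,56,78,125",))]
print(values)   # ['45', '56', '78', '125']
```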

Max


Re: [sqlite] Difference in virtual tables logic triggered between version 3.7.15.2 to 3.8.0

2014-04-05 Thread Max Vlasov
On Sat, Apr 5, 2014 at 11:48 PM, Max Vlasov max.vla...@gmail.com wrote:

 This works for an old version of sqlite (3.6.10), but today Dominique
 Devienne mentioned some doubt about this approach and I decided to
 test it with some data with a recent version of sqlite.  With 3.8.4.3
 the same join query produced two rows and Nulls for CommaList and
 Value



Actually I found a way to make it work as assumed. My xBestIndex call
now makes estimatedCost huge for the non-constrained guess and small
for the constrained one. Although it works, I'd be glad to know
whether there is a way to return "impossible" instead of "expensive".

I think I see why the query worked differently in different sqlite
versions. This trick by its nature interprets constrained data (when a
WHERE is supplied) as a superset of non-constrained data (it returns
no data when there's no WHERE), and this definitely violates the logic
sqlite relies upon. So in a (normal) world where a where-constrained
result is always smaller than a full-scan result, sqlite is free to
choose a full scan instead of a filtered scan for a virtual table
(probably it does so by default after 3.8.0 for this join query if the
provided estimatedCosts are equal).

Max


Re: [sqlite] Goodram C50 SSD slow in SQlite benchmark

2014-03-23 Thread Max Vlasov
On Mon, Mar 24, 2014 at 4:49 AM, Piotr Maliński riklau...@gmail.com wrote:
 I know it's bad. I'm trying to determine the cause of the difference, and
 if it's a feature of that SSD or a bug of some sort.

There was a very intensive discussion of a post labeled
"UPDATE/INSERTing 1-2k rows slower than expected". You can read it at
https://www.mail-archive.com/sqlite-users%40sqlite.org/msg58872.html .
I also ran various tests during that discussion. As far as I remember,
the general observation was that it's the hardware that usually says
"ok, I did this guaranteed-to-be-on-disk operation you asked for"
while actually caching it somewhere inside. And probably the many USB
controllers from a bunch of manufacturers are to blame. SATA
controllers on motherboards are usually less diversified, so more
likely to be accurate.

Also there's a setting in Windows for hard drives, "enable write
caching". If you find a similar setting in xubuntu, then probably
enabling it would make your sata connection on par with your usb
connection. But it's just a hypothesis; it's harder to make useful
tests with a sata connection due to physical and interface
limitations.

Max


Re: [sqlite] Using a customized collate function to mimic integer array type.

2014-03-21 Thread Max Vlasov
On Fri, Mar 21, 2014 at 8:06 PM, Ben Peng ben@gmail.com wrote:

 I guess I will have to take the longer route, namely define a customized
 comparison function and translate user input internally.


There's also a virtual table method, probably not so easy to wrap
one's head around, but it allows using comma-separated (or
other-delimiter) lists stored in fields, even in joins. See my reply
about it at
https://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html.

Max


Re: [sqlite] which of these is faster?

2014-03-14 Thread Max Vlasov
On Thu, Mar 13, 2014 at 11:06 PM, Richard Hipp d...@sqlite.org wrote:


 Once you do that, you'll see that the opcode sequence is only slightly
 different between the two.  They should both run at about the same speed.
 I doubt you'll be able to measure the difference.



Actually a comparatively long CTE (10,000,000 elements) for random
integer generation shows a difference: 20 vs 38 seconds. I suppose the
pure min() uses a linear search while the order-by one uses a
temporary b-tree (explain query plan also hints at this), unless
sqlite has some detection of the "order by ... limit 1" pattern that
redirects it to a linear search.
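The two forms and their plans are easy to put side by side (scaled down here; exact timings will vary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cte = ("WITH RECURSIVE nums(x) AS "
       "(SELECT 1 UNION ALL SELECT x + 1 FROM nums WHERE x < 100000) ")

# Both forms must agree on the answer; only their plans differ.
a = conn.execute(cte + "SELECT min(x) FROM nums").fetchone()[0]
b = conn.execute(cte + "SELECT x FROM nums ORDER BY x LIMIT 1").fetchone()[0]
print(a, b)

# Inspect the plan of the ORDER BY variant, the one that may sort:
for row in conn.execute("EXPLAIN QUERY PLAN " + cte +
                        "SELECT x FROM nums ORDER BY x LIMIT 1"):
    print(row[-1])
```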

Max


Re: [sqlite] which of these is faster?

2014-03-14 Thread Max Vlasov
On Fri, Mar 14, 2014 at 4:51 PM, Richard Hipp d...@sqlite.org wrote:


 In the original problem, there was already an index on the term for which
 the min() was requested.
.
 With your CTE-generated random integers, there is not an index on the
 values.  So SELECT min(x) FROM... does a linear search and SELECT x FROM
 ... ORDER BY x LIMIT 1 does a sort.


I see, my fault, I didn't notice the db was a concrete one.


[sqlite] Observations about CTE optimization

2014-03-10 Thread Max Vlasov
Many CTE queries are just mini-algorithms with iteration, where only
the last row is required. I wondered whether it's easy to do this
without the "order by ..." of the outer query (also mentioned in my
reply about the CTE sqrt). There's a solution, but the good news is
that one probably rarely needs such an optimization, thanks to the
effectiveness of sqlite's in-memory B-tree sort. Anyway, the results
are below.

A synthetic test of a query iterating until 1,000,000

with recursive
  testval(depth, val) as
   (
  select 1, 0
  UNION ALL
  select depth + 1, val + 1 from testval where depth <= 1000000
   )
select val from testval order by depth desc limit 1

Time:
  4 sec. 20 ms
Explain query plan
SCAN TABLE testval
COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)
SCAN SUBQUERY 1
USE TEMP B-TREE FOR ORDER BY


We add an endmark, and the stop condition moves into the recursive
select itself, so we get rid of order by completely

with recursive
  testval(endmark, depth, val) as
   (
  select 0, 1, 0
  UNION ALL
  select (case when depth < 1000000 then 0 else 1 end), depth + 1,
val + 1 from testval where endmark=0
   )
select val from testval where endmark=1

Time:
  2 sec 900 ms.
Explain query plan:
SCAN TABLE testval
COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)
SCAN SUBQUERY 1

For the modified CTE sqrt from the other post, the difference for
100,000 queries was 27 seconds vs 21 seconds. Not that much, but
sometimes such an optimization might make sense.
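Both variants can be run side by side and must agree; this sketch scales the iteration count down to 100,000 to stay quick (the post's figures used 1,000,000):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
N = 100000

# Variant 1: keep all rows, sort by depth, take the last.
sorted_form = conn.execute("""
    WITH RECURSIVE testval(depth, val) AS (
        SELECT 1, 0
        UNION ALL
        SELECT depth + 1, val + 1 FROM testval WHERE depth <= ?)
    SELECT val FROM testval ORDER BY depth DESC LIMIT 1""", (N,)).fetchone()[0]

# Variant 2: mark the final row instead, avoiding the ORDER BY.
endmark_form = conn.execute("""
    WITH RECURSIVE testval(endmark, depth, val) AS (
        SELECT 0, 1, 0
        UNION ALL
        SELECT (CASE WHEN depth < ? THEN 0 ELSE 1 END), depth + 1, val + 1
        FROM testval WHERE endmark = 0)
    SELECT val FROM testval WHERE endmark = 1""", (N,)).fetchone()[0]

print(sorted_form, endmark_form)
```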

Max


Re: [sqlite] RPAD/LPAD

2014-03-08 Thread Max Vlasov
On Sat, Mar 8, 2014 at 10:52 AM, Max Vlasov max.vla...@gmail.com wrote:
 On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne ddevie...@gmail.com 
 wrote:

 basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
 x)') would register a 2-arg function (register_function's argc-2)
 named $argv[0], which executes the following statement

 with args($argv[1],  $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
 select $argv[argc-1) from args;


 Dominique, your variant is even better than using numbered parameters.
 If you use the named ones supported by sqlite (:VVV), then sqlite will do
 the job of the textual replacement with the bind api itself
 ( 'printf(''%-*s'', :y, :x)' )

 The small problem in this case is that there are two ways (times) to
 check whether a named parameter exists in the expression.

No longer a problem here :) sqlite3_bind_parameter_name is available
at the time of registration, so the prepared statement knows
everything about the number and names of the parameters for full
checking.

I have a working prototype; some things are left to do, but this
confirms that sqlite is content with the contexts and everything.
Examples:
Examples:

SELECT RegisterExpressionFunction('myfunc', '45');
Select Myfunc()
45

SELECT RegisterExpressionFunction('mysum', 'x', ':x + :x');
select mysum(45)
90

SELECT RegisterExpressionFunction('mysumalt', '', '', '?1 + ?2');
select mysumalt(45, 67)
112

SELECT RegisterExpressionFunction('strconcat', 's1', 's2', ':s1||:s2');
Select strconcat('foo ', 'bar')
foo bar


Max


Re: [sqlite] Suggestion to add locate as a broader version of instr

2014-03-08 Thread Max Vlasov
On Sat, Mar 8, 2014 at 2:24 AM, big stone stonebi...@gmail.com wrote:
 Ooups !

 Thanks to the awesome  posts about RPAD/LPAD, I understood that I could
 already create a sqrt() function for SQLite3  in interpreted python.



Yes, that discussion was inspiring :)

Looking at your task, I also played with a CTE version of sqrt, based
on the guessing (Newton) approach from one of the answers at
  
http://stackoverflow.com/questions/3581528/how-is-the-square-root-function-implemented
The following query finally worked.

/*
  :value=12345
*/
with recursive
  sqrt(depth, val, guess) as
   (
  select 1, :value, Cast(:value as Float)/2
  UNION ALL
  select depth + 1, val as newval, ((guess + val/guess)/2) as
newguess from sqrt where abs(newguess - guess) > 1e-308 and depth < 100
   )
select guess from sqrt order by depth desc limit 1

but I could not overcome some peculiarities of float numbers, so
"depth < 100" here is for cases when the comparison fails to stop.
Also, for CTE queries in general I wonder whether there is another,
faster way to get the last row of the query (in natural execution
order), so that the "order by depth" can be replaced with something
else; I suspect the ordering here triggers temporary storage.

I tested this as an expression function implemented along the lines of
that thread, and the average speed is about 4000 sqrt operations per
second on a mobile Intel i3. Not so fast, but if one desperately needs
it, it would be ok.
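The query runs as-is from Python, with the :value parameter supplied through the API:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
# Newton's method as a recursive CTE; SQLite resolves the newguess
# alias in the WHERE clause (a documented SQLite extension).
guess = conn.execute("""
    WITH RECURSIVE sqrt(depth, val, guess) AS (
        SELECT 1, :value, CAST(:value AS FLOAT) / 2
        UNION ALL
        SELECT depth + 1, val AS newval, (guess + val / guess) / 2 AS newguess
        FROM sqrt WHERE abs(newguess - guess) > 1e-308 AND depth < 100
    )
    SELECT guess FROM sqrt ORDER BY depth DESC LIMIT 1""",
    {"value": 12345}).fetchone()[0]

print(guess, math.sqrt(12345))
```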

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 12:49 PM, Dominique Devienne ddevie...@gmail.com wrote:
 I think what SQLite lacks is a syntax to define custom function like
 it does for virtual tables. Something like:

 create function rpad(x, y) using scripty_module as return 
 PRINTF('%-*s',y,x);


Nice suggestion. This probably falls into the case where a small new
part is needed on the sqlite side and large DSL-specific features
depend on the side of the registered module. But you still need some
binary module for your functions to work.

How about a "macro" expression-substitution feature? Something that
could accept any expression compatible with sqlite as the function
body, to be translated directly into the corresponding vdbe
instructions.

For example,

Create function complexcase(x,y) as Case when x > 10 then y else y + 10 end

Select complexcase(myField1, myField*10) from t;

On the sqlite side, the engine can only syntax-check the create
function body; referencing it in an actual query would require actual
substitution, with the corresponding context-sensitive errors.

This would definitely be more costly in kB on the sqlite side than the
module approach, but it would create a more universal solution.

Just quick thoughts, probably there are plenty of obstacles here :)

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 6:39 PM, Clemens Ladisch clem...@ladisch.de wrote:
 Max Vlasov wrote:

 Nice suggestion. This probably falls into case when a small new part
 needed on sqlite side

 Actually, no change to SQLite itself would be needed.  It's possible
 to create an extension that provides a function that allows to register
 another function that executes a custom SQL expression:

  SELECT register_simple_function('rpad', 2, 'SELECT printf(''%-*s'', ?, ?)');



Brilliant :) this is a solution not requiring adding anything to
sqlite. I'd only prefer not using full-featured sql queries, but
rather an expression that is automatically wrapped in a SELECT without
FROM; otherwise they could create dimensions (columns, rows) where our
function strictly expects one result. And also support for numbered
parameters. So my example would be implemented as

SELECT register_function('complexcase', 2, 'Case when ?1 > 10 then ?2
else ?2 + 10 end');

It's interesting that such an approach makes this super-function
Pragma-like: it changes some internal state (reveals a new function)
and also optionally returns some result, for example 1 for success.
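The register-then-evaluate pattern discussed here can be mimicked from a host language. Below is a sketch in Python's stdlib sqlite3; the function name register_expression is illustrative, not the extension's actual API, and each call simply evaluates the stored expression with ?1..?N bound to the arguments:

```python
import sqlite3

def register_expression(conn, name, nargs, expr):
    """Register `name` so that each call evaluates the SQL expression
    `expr` (a sketch of the idea in this thread, not the extension)."""
    def impl(*args):
        # The expression is wrapped in a bare SELECT, so it can only
        # produce a single value, never rows/columns or side effects.
        return conn.execute("SELECT " + expr, args).fetchone()[0]
    conn.create_function(name, nargs, impl)

conn = sqlite3.connect(":memory:")
register_expression(conn, "rpad", 2, "printf('%-*s', ?2, ?1)")
register_expression(conn, "complexcase", 2,
                    "CASE WHEN ?1 > 10 THEN ?2 ELSE ?2 + 10 END")

print(conn.execute("SELECT rpad('ab', 5) || '|'").fetchone()[0])
print(conn.execute("SELECT complexcase(5, 1), complexcase(15, 1)").fetchone())
```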

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Sat, Mar 8, 2014 at 2:16 AM, Clemens Ladisch clem...@ladisch.de wrote:
 Eduardo Morras wrote:
 So, if a webapp that uses SQLite doesn't check it's input, functions that 
 renames SQLite internals can be injected

 SELECT register_simple_function('MAX', 1, 'DROP TABLE ?');

 Such a statement would not return a single column, so it wouldn't
 actually get executed.

 But it might be possible to execute something like PRAGMA evil = on,
 so this function probably should be secured like load_extension().


Absolute evil :) I already thought that introducing such a function
violates the common-sense assumption that the Select API is
side-effect free (in the context of database changes), since
register_function slightly violates this by messing with the namespace
context. Allowing non-Select queries might pose a damage risk, because
it would complain only after the damage is done (no SQLITE_ROW result
for an Update or Insert query, but the database has already changed by
that moment). That's also why I still think that constraining it to
the Select API, assuming an expression and automatically wrapping it
in a Select, is a must.

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne ddevie...@gmail.com wrote:

 basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
 x)') would register a 2-arg function (register_function's argc-2)
 named $argv[0], which executes the following statement

 with args($argv[1],  $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
 select $argv[argc-1) from args;


Dominique, your variant is even better than using numbered parameters.
If you use the named ones supported by sqlite (:VVV) then sqlite will do
the job of the textual replacement with the bind API itself
( 'printf(''%-*s'', :y, :x)' )

The small problem in this case is that there are two ways (times) to
check whether a named parameter exists in the expression: either when
the registered function is executed (and report an error about missing
parameters only then), or by making a test call with some test
parameters at the time of registration.

The latter allows revealing errors earlier, but it has no knowledge
about the expression, so it might fire strange run-time errors (division
by zero etc). If we omit execution and leave only binding, we might
miss the case when not all binding variables mentioned in the
expression are actually provided (example ... 'somefunc', 'x', 'y', ':x +
:y + :z')
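
As an aside, the named-parameter binding itself is easy to try from
Python's sqlite3 wrapper (a sketch; rpad here is an ordinary helper
function, not the proposed registered SQL function):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def rpad(conn, s, width):
    # printf('%-*s', width, s) left-justifies s in a field of the given width
    return conn.execute("SELECT printf('%-*s', :y, :x)",
                        {"x": s, "y": width}).fetchone()[0]

padded = rpad(conn, "ab", 5)
```

The dict-based binding is exactly the textual-replacement-free mechanism
described above: sqlite resolves :x and :y by name at bind time.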

Max


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Max Vlasov
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
est...@gmail.com wrote:

 Our main test case is TPCH, a standard DB benchmark. The lineitem table of
 TPCH contains 16 columns, which for 10M rows would require 160M xColumn
 callbacks, to pass it through the virtual table API. These callbacks are
 very expensive, especially when at the other end sits a VM (CPython or PyPy)
 handling them.


Ok, I'm not saying the performance improvement is impossible; I will
explain why I'm a little sceptical about it.

For every bulk insert we have a theoretical maximum we'd all be glad to
see sqlite perform at - the speed of simple file copying. Sqlite can't
be faster than that, but being on par is a good goal. This is not
possible when an insert also means modification of other parts of the
file, for example when there's an index involved. But let's forget
about that. Finally, when new data is added, sqlite should write a
number of database pages; the cost of this part is absolutely in the
hands of the media (driver) and OS (driver). But for every database
page write there's also a price to pay in CPU units, for the many
actions sqlite should perform before the value the developer provided
is translated into what actually appears on disk.

The illustration of the CPU price is the following example:

  CREATE TABLE t(Value)

On my ssd drive, multiple inserts (thousands) of the form

  insert into t (Value) values ('123456689...')  // this string contains many symbols, for example 1024

performed with the speed

  30 MB/Sec

but the query

  insert into t (Value) values (10)  // this is a small integer value

only

  3 MB/Sec
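
A rough sketch of this kind of measurement (using an in-memory database,
so absolute numbers will differ from the on-disk figures above):

```python
import sqlite3
import time

def insert_run(value, n=20000):
    """Insert n copies of value in one transaction; return (rowcount, seconds)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t(Value)")
    start = time.perf_counter()
    with conn:  # one transaction for all inserts
        conn.executemany("INSERT INTO t(Value) VALUES (?)",
                         ((value,) for _ in range(n)))
    elapsed = time.perf_counter() - start
    return conn.execute("SELECT count(*) FROM t").fetchone()[0], elapsed

rows_text, t_text = insert_run("1234567890" * 100)  # ~1000-character strings
rows_int, t_int = insert_run(10)                    # small integer values
```

Comparing bytes-written against elapsed time for the two runs gives the
MB/sec style figures quoted above.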

Both show almost full CPU load. Why such a difference? Because with the
latter query the system could do more than 30 MB of writes in 1 second,
but it has to wait for sqlite spending 10 seconds in preparations.
The former is better because the CPU cost of passing a large text value
to sqlite is comparatively low compared to the time spent in I/O
writing it to disk.

So the CPU price isn't avoidable, and notice that in this example it is
not the virtual table API, it is the bind API. I suppose that the price
we pay for CPU spent in the virtual table API is on par with the
average price paid in sqlite as a whole. This means that if I transform
the above queries into inserts from virtual tables, the final speed
difference will be similar. And this also means that for your
comparison tests (where you get a x3 difference), the CPU price sqlite
pays inside the bind api and in its code wrapping the xColumn call is
probably similar. The rest is the share your code pays.

Well, I know that there are differences in CPU architectures and
probably there are platforms where compiled code for the bind api and
the virtual table api behaves a little differently, making the costs
differ more. But imagine the hard task of fine tuning and refactoring
just to get a noticeable difference for a particular platform.


Max


Re: [sqlite] Virtual Table Functions

2014-03-01 Thread Max Vlasov
On Fri, Feb 28, 2014 at 10:14 PM, Dominique Devienne
ddevie...@gmail.com wrote:
 Can someone tell me how the statement below works?
 

 Thanks for any help on this. This is really puzzling to me. --DD


Very puzzling for me too
For any statement like this

  select * from blablabla(123)

sqlite (3.8.3.1) primarily reports a syntax error...

   near "(": syntax error

... before complaining about anything else.

I'd also be glad to see a shortcut where a virtual table is created just
for a select statement and dies automatically, but I suspect it is not
supported and that they used their own modified version of sqlite. I
hope the OP reads this post too and will have an answer for us, perhaps
providing more information regarding his setup.

Max


Re: [sqlite] Virtual table API performance

2014-03-01 Thread Max Vlasov
Hi,
thanks for explaining your syntax in another post. Now about virtual
tables if you don't mind.

On Fri, Feb 28, 2014 at 8:24 PM, Eleytherios Stamatogiannakis
est...@gmail.com wrote:

 If we load into SQLite, 

 create table newtable as select * from READCOMPRESSEDFILE('ctable.rc');

 it takes: 55 sec


 If we create an external program 

 it takes: 19 sec (~3x faster than using the virtual table API)



Looking at your numbers, as a user (and fan :) of virtual tables I
decided to do some tests.

I have a virtual table allvalues; it was designed to enumerate all
table values into one single virtual table, so finally it is a long
list of

  TableName, TableRowId, FieldName, Value

so you get the idea. As an example of what it may do, you may open
places.sqlite of the mozilla browser and do

  Select * from AllValues where Value Like '%sqlite.org%'

and see actual results even without knowing how they planned their schema.
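
The virtual table itself isn't published, but its effect can be
approximated with a plain generator over sqlite_master (a sketch; it
assumes ordinary rowid tables):

```python
import sqlite3

def all_values(conn):
    """Yield (TableName, TableRowId, FieldName, Value) for every value
    in every user table of the database."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
    for t in tables:
        # column names come from table_info; r[1] is the column name
        cols = [r[1] for r in conn.execute('PRAGMA table_info("%s")' % t)]
        for row in conn.execute('SELECT rowid, * FROM "%s"' % t):
            for col, val in zip(cols, row[1:]):
                yield (t, row[0], col, val)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookmarks(url TEXT, title TEXT)")
conn.execute("INSERT INTO bookmarks VALUES ('http://sqlite.org', 'SQLite')")
hits = [v for v in all_values(conn) if 'sqlite.org' in str(v[3])]
```

A real virtual table does the same enumeration inside the engine, which
is what makes the timing comparison below meaningful.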

Internally this virtual table simply uses general selects over all
other tables found in sqlite_master. This is a good (but probably not
the best) test for measuring virtual table performance, because

  SELECT * FROM AllValues

is equivalent to reading all conventional tables of this database.
Besides:
- the tool I use has a tweaker implemented with a VFS that allows
measuring speed and other characteristics of the query while
the query is in effect.
- I have an option that forces resetting the windows cache for the
database file when it is reopened. With it we exclude the windows
cache from consideration, so pure I/O reading is used. Btw, when you do
your comparison, it's very important to reset the system cache before
every measurement that involves I/O.


So I took a comparatively large (500 Mb) database consisting of
several small and one big table (Posts) and compared two queries.

(Query1)

  Select sum(length(Body) + length(Title)) from Posts

This one effectively reads the table data and uses
- length() to force sqlite reading texts that don't fit into single db page
- sum() to exclude accumulating results on my side from comparison, so
we have a single row, single column result from the work completely
done by sqlite.

(Query2)

  Select Sum(Length(Value)) from AllValues

This one performs basically the same but using the sqlite virtual table
api. It also touches other tables, but since they're small, we can
forget about them.

Query1 (General):
  Read: 540MB,
  Time: 24.2 sec,
  CPU Time: 6 Sec (25%)
  Speed: 22.31 MB/Sec

Query2 (Virtual):
  Read: 540MB,
  Time: 27.3 Sec,
  CPU Time: 13 sec (51%)
  Speed: 20 MB/Sec

In my particular test the noticeable difference is the extra CPU spent
with the virtual table. I assume this can be related to my own
implementation of this virtual table, since I have to retrieve and
store values temporarily somewhere and talk to sqlite. But this also
may shed light on your performance drop: if your virtual implementation
spends much time processing a value, you may finally get a big drop.

You may say that this test is not fair because it does not involve
creating a table from the values of a virtual table. Unfortunately I
can't create a good enough test comparing the Posts and AllValues
tables as sources, because the geometry of the tables is different
(Posts has more columns and fewer rows, AllValues fewer columns and
more rows). The closest approximation was possible when I created an
intermediate physical table containing the results from AllValues and
compared table creation from this table and from the virtual table. The
virtual one took longer, but the values - 56 seconds vs 43 seconds -
are not different enough to conclude anything.

I'm not sure my tests defend sqlite virtual tables sufficiently, but
currently I don't have evidence of significant inefficiency either.

Max


Re: [sqlite] Question about how sqlite recovers after a power loss

2014-02-26 Thread Max Vlasov
On Wed, Feb 26, 2014 at 12:33 PM, Fabrice Triboix
ftrib...@falcon-one.com wrote:

 And even then, that would not explain why the journal file lingers after 
 re-opening the database.


I remember asking a similar question. As far as I remember, the main
logical implication is that the journal file's presence is not a mark
that forces the database into some action. It's rather a supplement to
the error state of the database. So if the database is in an error
state and there's no journal file, that's bad. But the reverse is fine
and can appear in some scenarios.

Max


[sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
Hi,

Is there a machine-readable (BNF or other) grammar as equivalent to
the current syntax diagrams?
  http://www.sqlite.org/syntaxdiagrams.html

The only one a little similar I found is
  http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
  ( mentioned as art/syntax/all-bnf.html )

but it's pretty outdated (no CTE)

Thanks

Max


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
On Fri, Feb 21, 2014 at 4:47 PM, Richard Hipp d...@sqlite.org wrote:
 On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov max.vla...@gmail.com wrote:

 Is there a machine-readable (BNF or other) grammar as equivalent to

 Not that I am aware of.


I just noticed the file ( bubble-generator-data.tcl )
  
www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator-data.tcl?mimetype=text/plain
is it created by a human or by a machine? It looks like a good
candidate, but it might lack some essential information for parsing. If
it was generated by a machine, then the source of this generation might
be the next good candidate :)

Max


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp d...@sqlite.org wrote:
 On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov max.vla...@gmail.com wrote:


 The only one a little similar I found is
   http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
   ( mentioned as art/syntax/all-bnf.html )


 An updated version of all-bnf.html has now been checked in at
 http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html


Thanks a lot, that human-machine activity was much faster than the
writing of my reply post :)

My thoughts about parsing were about using it to recognize some states
of user queries in order to suggest additional tasks. In a sense any
query is a small database containing, for example:
- the list of datasets used (might be a quick hint nearby),
- a possible master-detail relationship (enabling a menu item for
showing two connected list views instead of the joined table)

I already tried to detect some states without parsing, but obviously
not much is possible without a full AST at hand.

Max


Re: [sqlite] Once again about random values appearance

2014-02-17 Thread Max Vlasov
On Mon, Feb 17, 2014 at 2:27 PM, Simon Slavin slav...@bigfraud.org wrote:


 On 17 Feb 2014, at 7:59am, Max Vlasov max.vla...@gmail.com wrote:

  So the nanosec example modified
 
  Select v-v from
  (
Select nanosec() as v from TestTable
  )
 
  ...shows non-zero values for the current (3.8.3) and for older (3.6.10)
  version.

 Erm ... that worries me a little.

 I don't know how SQLite handles sub-selects internally.  But the
 conventional way of doing them is a JOIN to a VIEW.  Which means that v-v
 should always equal zero.



EXPLAIN sheds a little light:
...
4  Function   0  NanoSec(0)
5  Function   0  NanoSec(0)
6  Subtract   2
7  ResultRow  1
8  Next       4
...
So there's probably no intermediate storage, for performance reasons.
Also the listing looks very self-explanatory, while a possible stateful
one would add more complexity to the VDBE code.



 By the way, when figuring out how to optimize this still it's worth noting
 that a parameterless function is rare in SQL.  It's rarely worth optimizing
 unnamed expressions because you rarely get the same unnamed expression
 (including parameters) used twice.


Simon, I see your point; next time my test function will depend on at
least a dozen very important parameters and will have a very, very,
very long name :)

Max


Re: [sqlite] Once again about random values appearance

2014-02-17 Thread Max Vlasov
On Mon, Feb 17, 2014 at 7:00 PM, RSmith rsm...@rsweb.co.za wrote:


 On 2014/02/17 09:59, Max Vlasov wrote:

 .
 So

Select nanosec() - nanosec() from ...

 returns non-zero values for most of the times, so there's no guarantee the
 user functions or any other functions will be called once for the step.//... 
 etc.


 Did you mark your nanosec function as SQLITE_DETERMINISTIC 
 http://www.sqlite.org/c3ref/c_deterministic.html?
 http://www.sqlite.org/c3ref/create_function.html

 Which, if not, it can and will very much return non-zero values.


Very interesting option; it did affect the results, they're now zero
for both cases. I also see this is quite a young option, listed as a
new feature of 3.8.3:

Added SQLITE_DETERMINISTIC as an optional bit in the 4th argument to
the sqlite3_create_function() and related interfaces, providing
applications with the ability to create new functions that can be
factored out of inner loops when they have constant arguments

So the query with this option

  Select nanosec() as v from TestTable where v<>v

always returns empty dataset.

But it seems this option still has something to explain since

  Select nanosec() - nanosec() from TestTable

returns always zeros while

  Select nanosec(), nanosec() from TestTable

returns different values for fields

Either nanosec() - nanosec() is much faster than the granularity of the
performance counter on average windows hardware, or they are handled as
different cases for some reason.
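
One way to observe the factoring directly is to count calls; a sketch
using Python's sqlite3 wrapper (the `deterministic` keyword needs
Python 3.8+ on top of SQLite 3.8.3+), with a counter standing in for
nanosec():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable(id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO TestTable VALUES (?)",
                 [(i,) for i in range(1, 101)])

calls = {"plain": 0, "det": 0}

def counter(key):
    def f():
        calls[key] += 1
        return calls[key]
    return f

conn.create_function("plainfn", 0, counter("plain"))
conn.create_function("detfn", 0, counter("det"), deterministic=True)

conn.execute("SELECT plainfn() FROM TestTable").fetchall()  # evaluated per row
conn.execute("SELECT detfn() FROM TestTable").fetchall()    # can be factored out
```

The non-deterministic function is invoked once per row, while the
deterministic one (all of its zero arguments being constant) can be
factored out of the loop and invoked fewer times.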

Max


Re: [sqlite] Once again about random values appearance

2014-02-16 Thread Max Vlasov
On Mon, Feb 17, 2014 at 1:22 AM, James K. Lowden
jklow...@schemamania.orgwrote:

 On Fri, 14 Feb 2014 08:32:02 +0400
 Max Vlasov max.vla...@gmail.com wrote:

  From: Max Vlasov max.vla...@gmail.com
  To: General Discussion of SQLite Database sqlite-users@sqlite.org
  Reply-To: General Discussion of SQLite Database
  sqlite-users@sqlite.org Date: Fri, 14 Feb 2014 08:32:02 +0400
  Subject: Re: [sqlite] Once again about random values appearance
 
  On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden
  jklow...@schemamania.orgwrote:
 
 select id, (select id from TestTable where id = abs(random() %
 100)) as rndid from TestTable where id=rndid
  
   On Thu, 13 Feb 2014 07:26:55 -0500
   Richard Hipp d...@sqlite.org wrote:
  
It is undefined behavior, subject to change depending the specific
version of SQLite, compile-time options, optimization settings,
and the whim of the query planner.
  
   It should be defined.
  
   In the above query, random() should be evaluated once.  In the SQL
   model the user provides inputs once, and the system evaluates them
   once.
  
  
  Once for the query or once for the row?

 Once for the query.

 As a user you have no control how the system evaluates your query.
 The evaluation may change over time with different implementations, but
 the semantics of the query do not.

 Not long ago on this list we discussed

 SELECT *, datetime('now') from T;

 and the behavior was that the datetime function was called per-row,
 resulting in different times on different rows.  It was changed, the
 rationale IIRC to be compatible with the SQL standard and other DBMSs.



Ok, I hope I found the topic, the title was
  racing with date('now') (was: Select with dates):
one of the links to the archive
  https://www.mail-archive.com/sqlite-users@sqlite.org/msg79456.html

CMIIW, but as I see it, the final modification was commented by Richard

  As a compromise, the current SQLite trunk causes 'now' to be exactly
  the same for all date and time functions within a single
  sqlite3_step() call.

But this is just for 'now' and the date-related functions. I wanted to
be sure, so I created a user function NanoSec() that returns
nanoseconds as calculated with QueryPerformanceCounter and
QueryPerformanceFrequency on Windows and clock_gettime(CLOCK_REALTIME,
...) on Linux. It seems it's not always real nanoseconds, but a value
that changes frequently enough to be different for adjacent VDBE
instructions of the sqlite engine.

So

  Select nanosec() - nanosec() from ...

returns non-zero values most of the time, so there's no guarantee that
user functions or any other functions will be called once per step.
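
The experiment is easy to repeat from Python, with
time.perf_counter_ns() standing in for the NanoSec() user function
described above:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable(id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO TestTable VALUES (?)",
                 [(i,) for i in range(1, 101)])

# a monotonic nanosecond counter, registered as an ordinary
# (non-deterministic) zero-argument user function
conn.create_function("nanosec", 0, time.perf_counter_ns)

values = [v for (v,) in conn.execute("SELECT nanosec() FROM TestTable")]
diffs = [d for (d,) in conn.execute(
    "SELECT nanosec() - nanosec() FROM TestTable")]
```

On most hardware many of the diffs come out non-zero, matching the
observation that nothing forces the two calls to be evaluated only once
per step.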

My original issue was commented on by Richard, and there's no other
argument I can post, because the ability to use an alias in the WHERE
clause is probably an sqlite-only feature, so you just can't reference
any standard about this.

But.. your first reply triggered another problem: the case where the
outer query uses an alias from the inner query, and here sqlite can be
compared with other engines. So the nanosec example modified

 Select v-v from
 (
   Select nanosec() as v from TestTable
 )

...shows non-zero values for the current (3.8.3) and for older (3.6.10)
versions. And here it would be interesting to know whether any standard
has something to say about the value of v in the outer query.

Max


[sqlite] Suggestion to add locate as a broader version of instr

2014-02-14 Thread Max Vlasov
Hi,

Some time ago, when there was no instr function, I looked at the Mysql
help pages and implemented a user function locate that allows searching
starting at a particular position in the string. In its two-parameter
form it was identical to instr, only the order of parameters was
reversed. As I see, the latest sqlite has only instr.

It's not a big deal, but I noticed that locate with three parameters
becomes convenient for recursive CTE queries, since it allows searching
sequentially through the string. For example - a little bulky in the
end, but I managed to do a comma-list-to-dataset query.
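
For illustration, here is a sketch of such a comma-list-to-dataset
query, with locate() supplied as a user function (1-based position, 0
when not found, like MySQL's LOCATE):

```python
import sqlite3

def locate(needle, haystack, start=1):
    # 1-based position of needle in haystack at or after start; 0 if absent
    return haystack.find(needle, start - 1) + 1

conn = sqlite3.connect(":memory:")
conn.create_function("locate", -1, locate)  # accept both 2- and 3-arg forms

items = [row[0] for row in conn.execute("""
    WITH RECURSIVE
      split(pos, nextpos) AS (
        SELECT 0, locate(',', :list, 1)
        UNION ALL
        SELECT nextpos, locate(',', :list, nextpos + 1)
        FROM split WHERE nextpos > 0
      )
    SELECT substr(:list, pos + 1,
                  CASE WHEN nextpos > 0 THEN nextpos - pos - 1
                       ELSE length(:list) - pos END) AS item
    FROM split
""", {"list": "alpha,beta,gamma"})]
```

The third parameter is what drives the recursion: each step resumes the
search just past the previous comma, which is awkward to express with
plain instr.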

I suppose implementing locate and implementing instr as a call to
locate would cost the developers probably no more than a hundred bytes
in the final binary.

Thanks

Max


[sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
Hi,

This was probably discussed and modified before, but I still cannot
understand some peculiarities with random column values.

The table
  Create table [TestTable] ([id] integer primary key)

populated with 100 default values (thanks to CTE, now done with a
single query):

with recursive
  autoinc(id) as
 (values(1) UNION ALL Select id+1 from autoinc where id < 100)
insert into TestTable select id from autoinc
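
For reference, the setup reproduces like this (a sketch via Python's
sqlite3 module):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TestTable(id INTEGER PRIMARY KEY);
    WITH RECURSIVE
      autoinc(id) AS
        (VALUES(1) UNION ALL SELECT id + 1 FROM autoinc WHERE id < 100)
    INSERT INTO TestTable SELECT id FROM autoinc;
""")
count, lo, hi = conn.execute(
    "SELECT count(*), min(id), max(id) FROM TestTable").fetchone()
```

The WITH-before-INSERT form needs SQLite 3.8.3 or later.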


So the following query

select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable where id=rndid

returns a single row (as expected), but this row contains different
values for id and rndid. I suppose this is because the rndid to be
shown is calculated once, but the one used for comparison is calculated
again and again. Is this correct behavior?

I also checked several dll versions and saw that this change appeared
somewhere between 3.6.13 and 3.6.16: .13 showed identical values, while
.16 showed different ones.

Max


Re: [sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
On Thu, Feb 13, 2014 at 4:26 PM, Richard Hipp d...@sqlite.org wrote:

 On Thu, Feb 13, 2014 at 4:45 AM, Max Vlasov max.vla...@gmail.com wrote:

  Hi,
 
  probably was discussed and modified before, but I still can not
 understand
  some peculiarities with random column values.
 

 It is undefined behavior, subject to change depending the specific version
 of SQLite, compile-time options, optimization settings, and the whim of the
 query planner.



Thanks, this makes sense.

Assuming there are time-related functions, user functions with
undefined values, etc., may I generalize this to: an expression is
guaranteed to have the same value in any context only if it always
returns the same output for the same input?

Interesting - the problem probably arises because the symbols in the
result columns are sometimes easy to interpret mentally as states to be
used further, but they behave more like macros to be substituted.

Max


Re: [sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden
jklow...@schemamania.orgwrote:

   select id, (select id from TestTable where id = abs(random() % 100))
   as rndid from TestTable where id=rndid

 On Thu, 13 Feb 2014 07:26:55 -0500
 Richard Hipp d...@sqlite.org wrote:

  It is undefined behavior, subject to change depending the specific
  version of SQLite, compile-time options, optimization settings, and
  the whim of the query planner.

 It should be defined.

 In the above query, random() should be evaluated once.  In the SQL
 model the user provides inputs once, and the system evaluates them
 once.


Once for the query or once for the row?
It is evaluated once for the query. To check, just remove the where:

  select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable

and we get plenty of rows with the same rndid as the right column.

I almost understood the logic behind this, but I just found a query
that surprised me once again. When I wrote it, I wanted to show that to
compare engines we have to remove the alias usage from WHERE, because
most engines forbid using it there. But the following one, which should
probably be compatible with other RDBMSes, still shows different column
values:

select id, rndid from
(
select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable
) where id=rndid

But I supposed the rndid in the outer query should have a known (fixed)
value, and the values passing the where clause should be shown as
identical.

Max


Re: [sqlite] latest sqlite 3 with Delphi 5 professional

2014-01-24 Thread Max Vlasov
On Thu, Jan 23, 2014 at 1:33 AM, dean gwilliam
mgbg25...@blueyonder.co.uk wrote:
 I'm just wondering what my options are here?
 Any advice much appreciated.
 ___

My two cents...

Historically I took the Aducom TDataSet-compatible classes
(http://www.aducom.com/cms/page.php?2 , author - Albert Drent) and
have used them ever since, but along the road I made many changes, so
I'm not sure I can recommend the current (latest) version. They had a
static variant implemented with msvcrt linked (maybe it has been
removed by now); to remove the dependency yourself you have to
implement the following functions:

_malloc,_realloc,_free,_memset,_strncmp,_memmove,_memcpy,_strlen,_qsort,_memcmp,_localtime

Some of them can be just calls to the existing Delphi RTL; for the
others a little work is needed. This allowed me, for example, to
monitor the number of memory requests different queries make.

BCC 5.5 (freely downloadable) compiles any version of sqlite3 to
object files linkable to Delphi 5 and later. The only drawback I
noticed is that for memory-intensive operations (memory databases) the
performance is twice as bad compared to the dll on the site
(probably VC compiled), but for databases on disk the difference is
small, since the I/O overhead compensates for it.

I don't know about DISQLite3, but one of the main performance issues
when using sqlite is that BDE was made for virtual access to data with
a moving cursor, and that's not possible with sqlite: you have a query
and you can only move forward. So the easiest approach is to load all
the data, but imagine this for a very large table - activating the
TDataset in this case may take a very long time (and space). One
approach is to instead load only all rowids of the table and request
record data on the fly based on that array.

So, if you want the best performance and don't need the borland
database components and controls, then your best bet is to use the
sqlite api or simple object wrappers around it. Otherwise, be aware
that the impedance mismatch between sqlite and BDE may cost you
performance penalties depending on the library you use.

Max


Re: [sqlite] latest sqlite 3 with Delphi 5 professional

2014-01-24 Thread Max Vlasov
On Fri, Jan 24, 2014 at 9:16 PM, Ralf Junker ralfjun...@gmx.de wrote:
 On 24.01.2014 10:06, Max Vlasov wrote:

 BCC 5.5 (freely downloadable) compiles any version of sqlite3 to
 object files linkable to Delphi 5 and later, the only drawback I

 Don't know about DISQLite3 , but one of the main performance issues


 DISQLite3 does _not_ show the performance issues you describe for your BCB
 5.5 compiled object files. Quite the opposite: DISQLite3 outperformed
 sqlite3.dll whenever I tested.

Don't take it personally :) I was just talking about C-originated code
that we have to compile into the OMF object file format, as long as
borland/codegear/embarcadero never supported the COFF format (CMIIW).
So BCC is our only choice for static linking (Intel compilers probably
still support OMF since Intel introduced it, but I did not try).

And when I talked about the x2 difference, it was about a pure memory
db having a thousand rows and queries making cross joins, taking about
6-20 seconds in total depending on the query. So no I/O involved,
purely CPU-intensive operations inside sqlite. To my own surprise, a
dll compiled with bcc 5.5 with the -O2 option (maximum optimization, as
I recall) made it two times slower than the VC dll (from the sqlite.org
site) compiled from the same version. So this is a synthetic test not
pretending to be general.

As for DISQLite3, I see from your site that it is a great library
with support for many Delphi versions and many db features. I looked
at the source; as I see, the dataset is unidirectional and processes
the query on request. I'm sure there are no performance penalties here.
Good job.

Max


[sqlite] Sqlite as a platform performance comparison tool

2014-01-06 Thread Max Vlasov
Hi,

A thought came to me: compare two computers of different platforms
(i.e. i386 vs ARM) using a uniform approach. We take two binaries of
the same sqlite version compiled with the best C compilers for both
platforms and compare the time spent on identical operations using
memory-based databases (to exclude I/O from the measurement).

So we could start with some comparatively large table loaded into a
:memory: database; the following operations would then make
joins/inserts etc., probably as a single, semicolon-delimited set of
queries. The sets are identical for both platforms and are probably
best handled with a single sqlite3_exec.
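
A sketch of such a harness (in Python rather than raw sqlite3_exec; the
script and the timing logic would be byte-identical on both platforms):

```python
import sqlite3
import time

SCRIPT = """
    CREATE TABLE t(a INTEGER, b TEXT);
    WITH RECURSIVE n(i) AS
      (VALUES(1) UNION ALL SELECT i + 1 FROM n WHERE i < 1000)
    INSERT INTO t SELECT i, printf('row-%d', i) FROM n;
    CREATE TABLE pairs AS
      SELECT x.a AS a1, y.a AS a2 FROM t AS x JOIN t AS y ON y.a = x.a + 1;
"""

def run_benchmark(script):
    # a :memory: database excludes I/O from the measurement
    conn = sqlite3.connect(":memory:")
    start = time.perf_counter()
    conn.executescript(script)
    return conn, time.perf_counter() - start

conn, elapsed = run_benchmark(SCRIPT)
pairs = conn.execute("SELECT count(*) FROM pairs").fetchone()[0]
```

Comparing the elapsed values from identical scripts on the two machines
gives the cross-platform figure the question is after.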

Is such a comparison correct? Or is there something that makes this
comparison invalid even for memory-based operations?

Thanks,

Max


Re: [sqlite] Virtual Table: misuse error on sqlite3_declare_vtab()

2013-10-04 Thread Max Vlasov
Simon,

I don't know what exactly is wrong in your particular case, but I'd
suggest setting debugger breakpoints in all of your x-handlers and
noticing the moment after which the calls cease (or you get an error
code).

Max


On Fri, Oct 4, 2013 at 6:07 PM, Simon turne...@gmail.com wrote:

 Hi there,

 I'm currently building my own virtual table implementation. I've built a
 dummy vtable that returns the string 1 to all queries for all columns
 but I'm having trouble getting started (getting a 'misuse' error). Can
 someone help me get in the right direction?


 

 1) I first create an sqlite3* object in :memory:. This one works fine.

 2) Allocate and init my derived sqlite3_module* object.
 2.1) Function pointers I use are Create,Destroy, Connect,Disconnect,
 Open,Close, BestIndex,Filter, Next,Eof, Column,Rowid.
 2.2) NULL pointers for Update, Begin,Sync,Commit,Rollback,
 FindFunction,Rename, Savepoint,Release,RollbackTo.
 2.3) *** ? *** Are there any other members not documented I'd need to init?

 3) I call sqlite3_create_module() with module name vtable1

 4) I call sqlite3_declare_vtab() with this statement: CREATE TABLE foo ( x
 integer, y integer );

 -- That one fails with ERROR 21 (lib misuse).

 


 Did I forget a step? Do them in wrong order? Anything obvious seems wrong
 to you? Where should I go to get more details and fix the issue?
 My current implementation is rather complex and involves C++ and templates
 a lot. Making a small example would be highly time-consuming, so I'd
 prefer to avoid going that way if possible.

 Anyone can tell me where I'd find a simple functionnal vtable hello
 world? The vtable documentation is great but is missing examples, imo.
 Personnally, I think the vtable documentation page could benefit from one
 example project throughout and develop it. Something like a basic CSV
 vtable would probably help the masses a lot! :)

 Thanks for your help and time,
 Simon



[sqlite] Why latest places.sqlite from firefox can't be opened by some older versions of sqlite

2013-08-31 Thread Max Vlasov
Hi,

I noticed that opening places.sqlite of my installation of Firefox
can't be made for example with sqlite 3.6.10, it says that file either
encrypted or invalid (everything ok for example with 3.7.15.2 and sure
firefox itself, it works :)).

This might be a quirk of my particular installation, so maybe several
years of Firefox upgrades made my db semi-valid or something. It's not
a big deal, just curiosity; probably Firefox uses some features that
make the SQLite 3 format incompatible with 3.6.* versions.

Max


Re: [sqlite] Why latest places.sqlite from firefox can't be opened by some older versions of sqlite

2013-08-31 Thread Max Vlasov
On Sat, Aug 31, 2013 at 10:14 PM, Richard Hipp d...@sqlite.org wrote:
 On Sat, Aug 31, 2013 at 2:03 PM, Max Vlasov max.vla...@gmail.com wrote:

 Hi,

 I noticed that opening places.sqlite of my installation of Firefox
 can't be made for example with sqlite 3.6.10, it says that file either
 encrypted or invalid

 Recent versions of Firefox use WAL mode (http://www.sqlite.org/wal.html)
 which is supported by SQLite version 3.7.0 and later.


Thanks, Richard, that's it. A small suggestion, if possible: the doc
mentioned by Slavin could contain an additional section after the main
table listing conditional format incompatibilities. WAL and the recent
partial index support could be mentioned there.
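For reference, the WAL/legacy distinction is visible right in the file header, so a tool can detect it without linking a newer SQLite library. A minimal sketch (Python for brevity; the helper name `is_wal` is mine): bytes 18 and 19 of the 100-byte header hold the file-format write/read versions, which WAL mode sets to 2.

```python
import os
import sqlite3
import tempfile

def is_wal(path):
    """Return True if the SQLite database at `path` uses the WAL journal.

    Bytes 18 and 19 of the 100-byte file header hold the file-format
    write/read version numbers: 1 for a legacy rollback-journal
    database, 2 for WAL.  A pre-3.7.0 library sees version 2 and
    reports the file as encrypted or invalid.
    """
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:16] != b"SQLite format 3\x00":
        raise ValueError("not an SQLite 3 database file")
    return header[18] == 2 or header[19] == 2

# Demonstration: one WAL database, one legacy database.
tmp = tempfile.mkdtemp()
wal_path = os.path.join(tmp, "wal.db")
con = sqlite3.connect(wal_path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()

legacy_path = os.path.join(tmp, "legacy.db")
con = sqlite3.connect(legacy_path)
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()

print(is_wal(wal_path), is_wal(legacy_path))
```

With a pre-3.7.0 SQLite this is a quick way to tell a WAL database apart from genuine corruption.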

Max


[sqlite] Group by in sqlite 3.8 works a little differently depending on the spaces at the end

2013-08-28 Thread Max Vlasov
Hi,

the following query  (notice the space at the end of the 3rd string)

Create table [TestTable] ([Title] TEXT);
Insert into TestTable (Title) VALUES ('simple text');
Insert into TestTable (Title) VALUES ('simple text');
Insert into TestTable (Title) VALUES ('simple text ');
select Trim(Title) as Title, Count(*) as Cnt FROM TestTable Group By Title;

produces two results

simple text    2
simple text    1

while all previous versions I tried produced a single one:

simple text    3

Speaking about the correct way, 3.8 seems right (technically the
strings are different), but I wonder why this apparently existed all
along and was never noticed. On the other hand, an old version of
MySQL also showed a single result.

Max


Re: [sqlite] Group by in sqlite 3.8 works a little differently depending on the spaces at the end

2013-08-28 Thread Max Vlasov
On Wed, Aug 28, 2013 at 5:11 PM, Igor Tandetnik i...@tandetnik.org wrote:
 On 8/28/2013 8:57 AM, Max Vlasov wrote:
 See the recent discussion at

 http://comments.gmane.org/gmane.comp.db.sqlite.general/83005

 It's not about trailing spaces, but about whether Title in GROUP BY resolves
 to mean the table column or the alias.

Thanks, Igor, sorry, didn't notice the original discussion

Max


Re: [sqlite] How to create connection life-time object?

2013-07-26 Thread Max Vlasov
Hi, Duơan


On Thu, Jul 25, 2013 at 2:39 PM, Duơan Paulovič paulo...@gisoft.cz wrote:

 Hello, is there a way to somehow set a connection life-time object?
 ...


 It would be fine to have something like:
 int sqlite3_set_lifetime_object(
   sqlite3 *db,  /*db connection*/
   const char *zObjectName,  /*utf8 name of object*/
   void *pObject,/*if NULL, object is removed*/
   void(*xDestroy)(void*)/*destructor*/
 );

 void * sqlite3_get_lifetime_object(
   sqlite3 *db,  /*db connection*/
   const char *zObjectName   /*utf8 name of object*/
 );




How about a temporary in-memory table just for the task of storing
your objects?

Your initialization code for a particular connection:
  Attach ':memory:' as MyObjectStorage
  Create table MyObjectStorage.[objects] (Name Text, Ptr Text)

Your code for inserting an object
  Insert into MyObjectStorage.[objects] (Name, Ptr) VALUES ('obj1',
'0x12345678')

This code will query the pointer
  select Ptr from MyObjectStorage.[objects] where Name='obj1'

The only convention here is the name of the attached db: no other
database (or instance of the same storage) should use this name.
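The scheme above can be sketched end-to-end. Python's sqlite3 stands in for the C API here, and the pointer is stored as text exactly as in the SQL above; the helper names (`init_storage`, `set_object`, `get_object`) are mine.

```python
import sqlite3

def init_storage(con):
    # A private in-memory database attached to this connection; it
    # lives and dies with the connection itself.
    con.execute("ATTACH ':memory:' AS MyObjectStorage")
    con.execute("CREATE TABLE MyObjectStorage.objects (Name TEXT, Ptr TEXT)")

def set_object(con, name, ptr):
    # Replace any previous entry under this name.
    con.execute("DELETE FROM MyObjectStorage.objects WHERE Name = ?", (name,))
    con.execute(
        "INSERT INTO MyObjectStorage.objects (Name, Ptr) VALUES (?, ?)",
        (name, ptr))

def get_object(con, name):
    row = con.execute(
        "SELECT Ptr FROM MyObjectStorage.objects WHERE Name = ?",
        (name,)).fetchone()
    return row[0] if row else None

con = sqlite3.connect(":memory:")
init_storage(con)
set_object(con, "obj1", "0x12345678")
print(get_object(con, "obj1"))  # -> 0x12345678
```

In C you would format the actual pointer value into the Ptr column and parse it back; the table itself disappears with the connection, which is the point of the trick.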

Max


Re: [sqlite] How to create connection life-time object?

2013-07-26 Thread Max Vlasov
On Fri, Jul 26, 2013 at 9:56 PM, Duơan Paulovič paulo...@gisoft.cz wrote:

 Thanks for suggestion, but:
 1.) one object is not linked to one connection


If you have your own memory management, it's not a problem since the scheme
I described is basically just a storage of pointers. To free or not to free
(if the pointer points to a disposable entity) is your decision at your
chosen time. But if you want automatic reference counting (so that a
pointer no longer referenced is automatically deallocated), I agree,
the proposal is not good. Probably using your own global structure
unrelated to sqlite is less pain.


 2.) object is not destroyed together with connection


The same, you're free to use the table just as pointers storage

Max


[sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
I've created a kind of triple-storage base with an SQLite db as the
container. Basically it's several tables implementing the
Object-Property-Value metaphor. There's only one field for data, so
thinking about generality I assumed that the type for the data field
should be TEXT or nothing, since most other types (let's forget about
blob for a moment) can be stored (and easily visualized) with this
field. But there are also indexes involved, and here comes the
problem. If I insert natural numbers as text I will get them sorted
non-naturally (1, 10, 2, 20). But we know that SQLite can accept any
data in any field, so I can change the declared type to INTEGER and
enjoy numeric order when numbers are added (1, 2, 10, 20). On the
other side, for real numbers the problem would still exist. So,
paradoxically, probably the best type for a universal field container
is REAL (or NUMERIC), since it will accept data of any type but has
the advantage of best sorting when reals or integers are involved.

Is this correct or I am missing something?
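The affinity behavior in question is easy to check; a small sketch (Python's sqlite3 used for convenience) inserts numeric and non-numeric strings into a REAL-affinity column. The numeric strings are converted on insert and sort numerically, while the non-numeric one stays text and sorts after all numbers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v REAL)")  # REAL affinity on the data column
for s in ("1", "10", "2", "something"):
    # every value arrives as text, as in a generic triple store
    con.execute("INSERT INTO t (v) VALUES (?)", (s,))

rows = con.execute("SELECT v, typeof(v) FROM t ORDER BY v").fetchall()
print(rows)
```

`typeof()` reports 'real' for the converted values and 'text' for the rest, so the stored types, not the inserted strings, drive the index order.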

Thanks,

Max


Re: [sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
On Tue, Jul 23, 2013 at 1:32 PM, Clemens Ladisch clem...@ladisch.de wrote:



 But REAL will sort the strings '1', '10', '2' wrong.


What do you mean by wrong?

The test

CREATE TABLE testtable (id integer primary key, value real);
insert into testtable (value) values ('1');
insert into testtable (value) values ('2');
insert into testtable (value) values ('10');
insert into testtable (value) values ('something');
CREATE INDEX [idx_Testable] ON [testtable] ([Value]);
SELECT * FROM testtable order by value;

will show

1, 2, 10, something
that's what I wanted (except for 'something', which is the odd one out)

if I change create to
CREATE TABLE testtable (id integer primary key, value);

then the order will be
1, 10, 2, something.
(undesired result)

Max


Re: [sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
On Tue, Jul 23, 2013 at 1:38 PM, Dan Kennedy danielk1...@gmail.com wrote:

 On 07/23/2013 02:52 PM, Max Vlasov wrote:

   So
 par adoxically probably the best type for universal field container is
 REAL
 (or NUMERIC) since it will accept data of any type, but has advantage of
 best sorting if reals or integers are involved.

 Is this correct or I am missing something?


 The only difference between INTEGER and REAL is that real
 values are converted to integers if this is possible without
 loss of data. In other respects they are the same. Both try
 to convert text values to numbers on insert.


So when building indexes it doesn't matter whether the column is REAL
or INTEGER: the comparison will include both the integer and the
fractional parts of values if they exist, right?

Max


Re: [sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
On Tue, Jul 23, 2013 at 10:09 PM, Petite Abeille
petite.abei...@gmail.comwrote:


 On Jul 23, 2013, at 9:52 AM, Max Vlasov max.vla...@gmail.com wrote:

  Basically it's several tables implementing Object-Propery-Value metaphor

 Hurray! The Entity–attribute–value (EAV) anti-pattern!


pattern, anti-pattern... I think the main principle should be whatever
works :)
As for my experience, this started when I wanted to extract the
information from the CIA Factbook data: 170 columns for more than 150
countries. Making it a triple storage, with SQLite's excellent
support, allowed very flexible data and queries, for example showing
the data about a single country as a long two-column table (property
name / property value), not to mention other comparison queries,
vertical or horizontal depending on the goal. Queries become bigger, I
admit, and usually involve an extra join, but they are still manageable.

Max


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-09 Thread Max Vlasov
Thanks, Richard, it worked. For cases where I want the benefits of
faster file-mapping I made some adjustments.

Btw, this was also for a small tool I use. It's similar to how
defragmenters show sectors while doing their job; in this case the
sectors are pages of the SQLite file. When a sector is read I draw a
pixel, and this works as an animation while a query is being executed.
For my query, which took 27 seconds with 3.7.16.1 and 9 seconds with
the 3.7.17 draft, the reading pattern is exactly the same: virtually
random, filling half of the area of a 32MB file. It looks like file
caching is very conservative in guessing what might be needed next,
while the file-mapping routines assume wider ranges. Sure, if I don't
reset the cache after either query, the repeated query is almost instant.

Max



On Mon, Apr 8, 2013 at 4:31 PM, Richard Hipp d...@sqlite.org wrote:

 On Mon, Apr 8, 2013 at 8:20 AM, Max Vlasov max.vla...@gmail.com wrote:

  Richard,
 
  It makes sense, but I see here some possibility of future confusion.
  Correct me if I'm wrong. Currently if I have a vfs that requires special
  preparations (for example, decompression), I have two choices, either
  provide V2 interface or emulate memory-mapping by allocating my own
 blocks
  of memory in xFetch and deallocating in xUnfetch. If future V4 IO
 routines
 

 You an implement xFetch to always return NULL:

 int xFetch(sqlite3_file *pNotUsed1, sqlite3_int64 notUsed2,
            int notUsed3, void **pp){
   *pp = 0;
   return SQLITE_OK;
 }

 Then SQLite will always fallback to doing plain old xRead and xWrite.



  introduce something new, one will not have the first option. So anyone in
  the future should be aware that there are two points where data can be
  needed and since one expects filling previously allocated block and
 another
  expects pointer to the data, the complexity of understanding will grow.
 Or
  is there a simple way to disable xFetch/xUnfetch on the VFS level?
 
  Max
 
 
 
 
 
  On Mon, Apr 8, 2013 at 3:33 PM, Richard Hipp d...@sqlite.org wrote:
 
   On Mon, Apr 8, 2013 at 6:12 AM, Max Vlasov max.vla...@gmail.com
 wrote:
  
But I also noticed that if I provide
version 2 of vfs, I won't get benefits of file-mapping
   
  
   That's how we implement backwards compatibility to legacy VFSes.
  
   --
   D. Richard Hipp
   d...@sqlite.org
 



 --
 D. Richard Hipp
 d...@sqlite.org



Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
On Sun, Apr 7, 2013 at 2:12 PM, Max Vlasov max.vla...@gmail.com wrote:




 On Thu, Apr 4, 2013 at 4:02 PM, Richard Hipp d...@sqlite.org wrote:

 We would like to encourage people to try out the new code and
 report both success and failure.



 Not particulary about this draft version, but about my experience with
 memory mapped files on Windows If you don't mind .
 ...

 I don't know whether such scenario is possible with sqlite.


Finally I did some tests and did not notice anything like that when
creating tables, probably because memory-mapping is not currently used
for inserting and updating, so the problem I described seems not to apply.

As for general queries, I have mixed feelings. At least one of my
queries took 9 seconds on the 3.7.17 draft instead of 27 seconds with
3.7.16.1, so the speed improvement can be very noticeable in some
cases. But as far as I can see, the VFS stopped working transparently
in this case. Shouldn't xRead (and probably xWrite) still be in the
chain of calls, just doing memcpy from file-mapping regions instead of
calling file routines? Otherwise many existing VFS filtering solutions
(encryption, compression) will no longer work when memory-mapping is on.

Max


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
On Mon, Apr 8, 2013 at 1:23 PM, Dan Kennedy danielk1...@gmail.com wrote:


 Right. But a VFS is not obliged to support the new xFetch() and
 xUnfetch() methods (used to read data from a memory mapped file).
 And if it doesn't, SQLite will use xRead() exclusively.

 It always uses xWrite() to write - whether mmap is enabled or not.


Great, what is the correct way of not providing the fetch procedures?
Maybe I did something wrong?
- If I call the original xFetch/xUnfetch from mine, no xRead is called;
- if I provide Nil as the function address, I get an Access violation
() so probably sqlite tries to call it anyway;
- if I return SQLITE_ERROR from xFetch/xUnfetch, I get an sqlite logic error.

Max


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
Dan, I see, this seems to work. But I also noticed that if I provide
version 2 of the vfs, I don't get the benefits of file-mapping (I
assume so because my query that took 9 seconds with file-mapping is
now back to 27 seconds). I'm not sure this is right: the original
xRead is very abstract by nature, so nobody ever expected a particular
implementation from sqlite, and if file mapping is on, sqlite could
use it internally even when the user provided ver. 2 of the io routines.

In other words, if sqlite doesn't expect special knowledge of
file-mapping from a vfs-free client, then it should not expect it from
a client that uses a vfs.

Max


On Mon, Apr 8, 2013 at 1:56 PM, Dan Kennedy danielk1...@gmail.com wrote:

 On 04/08/2013 04:40 PM, Max Vlasov wrote:

 On Mon, Apr 8, 2013 at 1:23 PM, Dan Kennedy danielk1...@gmail.com
 wrote:


 Right. But a VFS is not obliged to support the new xFetch() and
 xUnfetch() methods (used to read data from a memory mapped file).
 And if it doesn't, SQLite will use xRead() exclusively.

 It always uses xWrite() to write - whether mmap is enabled or not.


  Great, what is the correct way of not providing fetch procedures? Maybe
 I
 did something wrong?
 - If I call original xFetch/xUnfetch from mine - no xRead called,
 - if I provide Nil as the function address, then I get Access violation
 () so probably sqlite tries to call it anyway.
 - If return SQLITE_Error from xFetch, xUnfetch, I get sqlite logic error.


 Set the iVersion field of your sqlite3_io_methods struct to 2 (not 3).

 The idea is that if you have an existing VFS, it should keep working
 as is without any modifications. If you find this is not the case, it
 may be a bug.

 Dan.




Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
Richard,

It makes sense, but I see some possibility of future confusion here.
Correct me if I'm wrong. Currently, if I have a vfs that requires
special preparation (for example, decompression), I have two choices:
either provide the V2 interface or emulate memory-mapping by
allocating my own blocks of memory in xFetch and deallocating them in
xUnfetch. If future V4 IO routines introduce something new, one will
not have the first option. So anyone in the future should be aware
that there are two points where data can be requested, and since one
expects a previously allocated block to be filled and the other
expects a pointer to the data, the complexity of understanding will
grow. Or is there a simple way to disable xFetch/xUnfetch at the VFS level?

Max





On Mon, Apr 8, 2013 at 3:33 PM, Richard Hipp d...@sqlite.org wrote:

 On Mon, Apr 8, 2013 at 6:12 AM, Max Vlasov max.vla...@gmail.com wrote:

  But I also noticed that if I provide
  version 2 of vfs, I won't get benefits of file-mapping
 

 That's how we implement backwards compatibility to legacy VFSes.

 --
 D. Richard Hipp
 d...@sqlite.org


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-07 Thread Max Vlasov
On Thu, Apr 4, 2013 at 4:02 PM, Richard Hipp d...@sqlite.org wrote:

 By making use of memory-mapped I/O, the current trunk of SQLite (which will
 eventually become version 3.7.17 after much more refinement and testing)
 can be as much as twice as fast, on some platforms and under some
 workloads.  We would like to encourage people to try out the new code and
 report both success and failure.



Not particularly about this draft version, but about my experience
with memory-mapped files on Windows, if you don't mind.

When I worked with memory-mapped files on Windows two years ago, I
implemented a library for accessing files of virtually unlimited size
with a sliding-view approach. There was an interesting effect on the
system as a whole: when I wrote sequentially, at some point the entire
system became unresponsive. This is the important point: not just the
application writing to the file, but the whole system, so no Alt-Tab,
no blinking caret in other applications, and sometimes even no mouse
movement. I tried to report it on the MS forums (
http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/81dd029f-2f55-49f2-bd02-1a8ceb0373eb),
but it seems this wasn't noticed. I added a small Pascal procedure to
the forum topic showing the effect (it's still there); it can easily
be ported to any other language with direct access to the Windows API.

I just tried to reproduce this while writing this message. The machine
is 64-bit Windows with 4 GB of memory. I started the program writing
to the file up to 10 GB, and no surprise: at about 5-6 GB, Notepad
(another application) stopped responding to my key presses, the caret
stopped blinking, and Alt-Tab and the taskbar didn't work for about a
minute. So I could not do anything (!) on my computer for a minute or
so while another application did something using an official,
documented API.

I don't know whether such a scenario is possible with sqlite, only
that on Windows memory-mapped files are still a very special entity,
sometimes getting far more leeway than anything else regardless of the
permissions of the application using them. I should probably run some
sqlite-specific tests to find out whether this affects sqlite, but
first I wanted to share this information.


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-07 Thread Max Vlasov
I don't know what's wrong with the link; clicking the one from the
gmail thread works for me. Another way is to google [Unresponsive
system under some file-mapping related conditions]; the first result
is the thread link.
Max


On Sun, Apr 7, 2013 at 2:25 PM, Jean-Christophe Deschamps
j...@antichoc.netwrote:

 Hi Max,

 The link is dead. I'd like to know more about this so can you provide us
 with a live link?

 Thanks.




Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-07 Thread Max Vlasov
On Sun, Apr 7, 2013 at 3:55 PM, Chris Smith smitt...@gmail.com wrote:

 Possibly related:
 http://en.wikipedia.org/wiki/Thrashing_(computer_science)


That's an interesting direction. Surprisingly, if one queries
... site:microsoft.com Thrashing memory-mapped...
on Google, they would find a forum topic Memory mapped file
performance where the first contributor of answers is Igor Tandetnik,
and we know that Igor gives very valuable answers on this list too, so
he (having good knowledge of both win32 and sqlite) could also give
his expert opinion on this topic.

Max


Re: [sqlite] integrity_check out of memory

2013-01-23 Thread Max Vlasov
On Tue, Jan 22, 2013 at 12:33 PM, Dominique Pellé dominique.pe...@gmail.com
 wrote:

 Max Vlasov wrote:

  Hi,
 
  I found with a web search that this error (out of memory for PRAGMA
  integrity_check) appeared in the wild and there were some fixes
  related to it.

 It looks perhaps like the bug that was fixed in this checkin:

 http://www.sqlite.org/src/info/120c82d56e

 Can you try and confirm?


The trunk version works ok, no limits noticed.

Max


Re: [sqlite] integrity_check out of memory

2013-01-22 Thread Max Vlasov
Thanks, Dominique

I will try asap, but looking at the arithmetic:

  25 (the length of Page ? is never used)
  * 800 (the maximum I could get)
  = 20,000 (the limit mentioned in the ticket)

looks like this is a winner :)

Max



 Hi Max

 It looks perhaps like the bug that was fixed in this checkin:

 http://www.sqlite.org/src/info/120c82d56e

 Can you try and confirm?

 -- Dominique


Re: [sqlite] integrity_check out of memory

2013-01-21 Thread Max Vlasov
On Mon, Jan 21, 2013 at 9:30 PM, Klaas V klaasva...@yahoo.com wrote:

 Max wrote he's using an encrypted VFS. Perhaps the problem disappears when
 he decodes the file?


I tried the same with the decrypted file and on another computer with
much more memory. The same result. I can probably live with that; the
only thing I would like to know is whether the reported errors always
come most important first, so that if I see Page ... not used as the
first entry I can be sure there are no more serious errors, regardless
of the length of the output.

Max


[sqlite] integrity_check out of memory

2013-01-20 Thread Max Vlasov
Hi,

I found with a web search that this error (out of memory for PRAGMA
integrity_check) has appeared in the wild and there were some fixes
related to it.

Today I decided to run the check on a database that lives with its own
encrypted vfs and was created with an sqlite version older than the
current one, but the check itself was made with the latest version
(3.7.15.2). I got the same error (out of memory) for max values larger
than 800, for example PRAGMA integrity_check(801). If I provide a
number below or equal to 800, the result is only the (truncated) list
of Page .. is never used entries. What could go wrong? Can it be
related to inadequate code in my vfs? The reason I would like to see
the entries after 800 is that there could be more serious errors than
Page is never used.

Thanks,

Max


[sqlite] sqlite 3.7.13 download link

2012-07-08 Thread Max Vlasov
Hi,
Is everything ok with the amalgamation link for sqlite 3.7.13?
When I click on the link nothing happens; different browsers and
networks give the same result.

Just in case, here is the TCP/IP request on port 80:

www.sqlite.org
/download.html/sqlite-amalgamation-3071300.zip

The header is

HTTP/1.0 200 OK
Connection: close
Date: Sun, 08  Jul 2012 16:26:13 +
Last-Modified: Thu, 14  Jun 2012 17:02:52 +
Content-type: text/html
Content-length: 14439

plain html, no redirection

Max


Re: [sqlite] sqlite 3.7.13 download link

2012-07-08 Thread Max Vlasov
Ok, I see,

The problem is I sometimes reach pages from web search. For the query
[sqlite download], Google shows www.sqlite.org/download.html/ (with a
trailing slash). It's identical to the page without the slash, but due
to URL resolution rules every relative link on that page becomes
wrong. Other search engines like yandex.com or duckduckgo.com
correctly point to the link without the slash.

I think the problem is partly on the sqlite side: I noticed that the
sqlite.org web server only rarely reports page not found errors, so
it was OK for Google to accept the slashed version as the main one.

Max


On Sun, Jul 8, 2012 at 8:38 PM, Keith Medcalf kmedc...@dessus.com wrote:

  Is everything ok with amalgamation link for sqlite 3.7.13?
  www.sqlite.org/download.html/sqlite-amalgamation-3071300.zip

 Link should be:  http://www.sqlite.org/sqlite-amalgamation-3071300.zip




[sqlite] free list performance

2012-04-23 Thread Max Vlasov
Hi,

Some time ago I worked with a database repeating the same sequence of
actions multiply times. They're basically:
- create table
- populate table
- do some deletes with some criteria
- drop table

After about 20 iterations I started to notice the usual effects of
internal fragmentation (slowness in some usually quick operations, and
my VFS reporting large seeks). I assume this has something to do with
the way new pages are allocated from the free list. I narrowed it down
to a little test that reproduces it (tested with 3.7.10):

CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT)
Insert into TestTable Default Values /* do this 1,000,000 times */
Delete from TestTable where (Id/1000) % 2 = 0
Drop table TestTable

This test makes the db very fragmented after about 10 steps.

Until recently I thought the main source of internal fragmentation was
the nature of the data added, but apparently that is not the only one:
even if your data is sequential, if the free list is fragmented you
will probably end up with fragmented internal data. Would it be
possible to sort the free list automatically from time to time? Or
maybe some other solution, if that would cost too much?
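A scaled-down sketch of the repro (Python's sqlite3, with row counts reduced from the original 1,000,000 for speed) lets you watch the free list grow with PRAGMA freelist_count:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "frag.db")
con = sqlite3.connect(path)

def freelist_pages():
    # Number of unused pages currently on the database free list.
    return con.execute("PRAGMA freelist_count").fetchone()[0]

for step in range(3):  # the original report needed ~10-20 iterations
    con.execute(
        "CREATE TABLE TestTable (Id INTEGER PRIMARY KEY AUTOINCREMENT)")
    for _ in range(20000):  # scaled down from 1,000,000
        con.execute("INSERT INTO TestTable DEFAULT VALUES")
    # Free every other 1000-row band, leaving scattered holes.
    con.execute("DELETE FROM TestTable WHERE (Id/1000) % 2 = 0")
    con.execute("DROP TABLE TestTable")
    con.commit()
    print("step", step, "freelist pages:", freelist_pages())
```

Each subsequent table is then rebuilt from those scattered free pages, which is where the fragmentation comes from; measuring the actual on-disk layout would still require VFS instrumentation like the one described above.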

Thanks

Max


Re: [sqlite] free list performance

2012-04-23 Thread Max Vlasov
On Mon, Apr 23, 2012 at 3:35 PM, Simon Slavin slav...@bigfraud.org wrote:

 For the second one, the simplest way to do something like defragment the data 
 inside the file is to use the VACUUM command:

Simon, thanks, I see what VACUUM can do; sure, I use it frequently, as
many of us do. Ironically, when I noticed the slowness I ran a VACUUM
that alone took almost half an hour, because the db contained other
persistent data including big indexes. I think handling the free list
alone (for example, allowing it to be sorted with a pragma) would help
in cases like mine, but unfortunately would add more confusion.

Max


Re: [sqlite] Optimize Table Access Efficiency

2012-03-21 Thread Max Vlasov
On Wed, Mar 21, 2012 at 1:33 AM, Tim Morton t...@preservedwords.com wrote:

 So it seems the index is no help;



Are you sure you did as Simon explained?
Because my tests show the trick works... (sqlite 3.7.10)

Create table [TestTable] ([Id] integer primary key, [topics] Text,
[Definition] Text);
CREATE TABLE [SourceTable] ([Id] integer primary key, [Title] Text);

Insert into SourceTable (Title) VALUES ('bar')
Insert into SourceTable (Title) VALUES ('bar')
Insert into SourceTable (Title) VALUES ('foo bar')
Insert into SourceTable (Title) VALUES ('bar foo')

insert into TestTable (topics, definition) values ((Select Title from
SourceTable order by random() limit 1), 'abcdefghij ...')
-- definition is about 500 characters; repeated 100,000 times, so we
-- end up with about 100,000 records

select topics, id from TestTable
: SQLite read 103 MB (I have a special hack in my vfs to monitor how
much SQLite reads during a query)
: Time: 4.9 sec

...After adding the index

CREATE INDEX [idx_Trick] ON [TestTable] ([Topics], [Id])

select topics, id from TestTable order by topics, id
: SQLite read 2 MB
: Time: 1.3 sec

The timing was unaffected by the sqlite and system caches, since the
db was reopened with a special trick on Windows forcing it to drop the
cache for this file.
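That the second query is answered from the index alone can also be confirmed without any VFS instrumentation: EXPLAIN QUERY PLAN reports a covering index when the table b-tree is never touched. A small sketch using Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TestTable "
            "(Id INTEGER PRIMARY KEY, Topics TEXT, Definition TEXT)")
con.execute("CREATE INDEX idx_Trick ON TestTable (Topics, Id)")

# With the (Topics, Id) index present, the query below never needs to
# touch the table's own b-tree: the index alone covers both columns.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT Topics, Id FROM TestTable ORDER BY Topics, Id"
).fetchall()
for row in plan:
    print(row[-1])
```

The plan detail should mention `USING COVERING INDEX idx_Trick` (exact wording varies slightly between SQLite versions), which explains why only the small index pages are read.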

Max


Re: [sqlite] Optimize Table Access Efficiency

2012-03-20 Thread Max Vlasov
Hi, Tim

On Tue, Mar 20, 2012 at 6:21 PM, Tim Morton t...@preservedwords.com wrote:


 Is there a way to read only the part of the file that has the queried
 table/column thus saving time and hard dive grinding?


There is also a trick. You can create an index on the 'topics' column
and perform your first read in that column's order, without touching
'definition' at the same time. Since an index is just another b-tree,
usually a smaller one, you will end up reading a smaller part of the
file. Although there's a price to pay: your db will become larger.

Max


[sqlite] The number of parameters passed to user functions

2012-03-13 Thread Max Vlasov
Hi,

Is there a way to know the number of parameters passed to a user
function registered with an undefined argument count (nArg = -1 in the
sqlite3_create_function call)?

I could not find a function specific to this task and thought that I at
least can check sqlite3_values array members for zero for my implementation
of xFunc. But the problem is that for complex queries the undefined members
sometimes become not-null

Particularly I implemented locate function that in mysql syntax has two
variations, so I check whether third one exists or not.

The problem shows up with the following query. locate here has two
parameters, so the first call sees that there is no third one:
  select substr(Word, 1, 100) from tbl where locate('', Word) > 0

But after the first success the third parameter (sqlite3_values[2]) becomes
non-null, so my code decides that it is an explicitly passed third parameter
and fails. The value at this particular moment is 100, so it is probably some
data left over from the 'substr' part after it is used for the first time on a
'where' success.

The version of sqlite 3.7.10

So is there reliable way to find the number of parameters of such functions?

Thanks

Max


Re: [sqlite] The number of parameters passed to user functions

2012-03-13 Thread Max Vlasov
On Tue, Mar 13, 2012 at 3:16 PM, Jean-Christophe Deschamps j...@antichoc.net
 wrote:

 Why do you register the function with -1 in the first place?
 Register both 2- and 3-parms functions (if you have those two variations),
 both pointing to the same function code: argc is valid and let you decide
 what to do without wild guesses.


Thanks, Jean-Christophe and Simon

Seems like I translated the function prototype to Pascal without finding
out what the second, unnamed parameter is for. The page
http://sqlite.org/c3ref/create_function.html shows
  (*xFunc)(sqlite3_context*,int,sqlite3_value**)

and never explains the second parameter. I think a short description could
be useful.
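The same resolution is visible from Python's sqlite3 wrapper: register with nArg = -1 and the callback receives exactly the arguments that were passed, so the count (the role played by xFunc's int argc) is simply len(args). The locate implementation below is a hypothetical sketch following MySQL's LOCATE semantics.

```python
import sqlite3

def locate(*args):
    # len(args) plays the role of xFunc's argc parameter
    if len(args) == 2:
        needle, haystack = args
        start = 1
    elif len(args) == 3:
        needle, haystack, start = args
    else:
        raise ValueError("locate() takes 2 or 3 arguments")
    # MySQL LOCATE is 1-based; a result of 0 means "not found"
    return str(haystack).find(str(needle), int(start) - 1) + 1

conn = sqlite3.connect(":memory:")
conn.create_function("locate", -1, locate)   # -1 = variadic registration
two_arg = conn.execute("SELECT locate('b', 'abc')").fetchone()[0]
three_arg = conn.execute("SELECT locate('b', 'abcb', 3)").fetchone()[0]
print(two_arg, three_arg)   # 2 4
```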

Max


Re: [sqlite] A possible bug probably partially fixed before

2012-03-12 Thread Max Vlasov
On Fri, Mar 9, 2012 at 12:04 PM, Max Vlasov max.vla...@gmail.com wrote:

 So looks on the way from 3.6.10 to 3.7.10 something was really fixed
 related to this issue, but seems like not everything.


Don't know whether this is serious or a problem at all, but I spent some
time with such queries and narrowed the random part down to the following
(the table is still the same, with auto-incremented id from 1 to 1000):

Select  id,  (abs(random() % 1000)) as rndid
from TestTable
where id = rndid
order by id desc
limit 5

which in 3.7.10 returns results like

id  / rndid
442 / 84
441 / 134
440 / 135
439 / 805
438 / 971

Looks like rndid is evaluated only once for the comparison, but produces
different, independently generated random results in the output. I remember
there was a discussion about this, and different versions even behave
differently. I think that either rndid here should contain the value evaluated
for the where clause, or the comparison should be performed for every row
separately.

What also puzzles me is that another variation of the original query

Select
  id,  (abs(random() % (ToValue-FromValue + 1))) as actualrndid
from
  TestTable
left join
  (Select  1 as FromValue, 1000 as ToValue) StatTable
where id=actualrndid
order by id desc
limit 10

...always returns, in the left (id) column, random values from 800 to 900 (and
no other ranges), but if I change ToValue-FromValue to 999 (with no other
modification to the query), the id column starts returning values from
different parts of the 1..1000 range (the version is still the same, 3.7.10).
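A sketch to reproduce the narrowed query with Python's sqlite3 (the table setup follows the thread; note that referencing a result alias in WHERE is a nonstandard SQLite convenience). Whether any rows come back, and whether the two columns agree, depends on how the engine binds the alias — which is exactly the version-dependent behavior under discussion — so the output itself is not asserted here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (Id INTEGER PRIMARY KEY AUTOINCREMENT)")
conn.executemany("INSERT INTO TestTable (Id) VALUES (?)",
                 ((i,) for i in range(1, 1001)))

rows = conn.execute("""
    SELECT Id, abs(random() % 1000) AS rndid
    FROM TestTable
    WHERE Id = rndid
    ORDER BY Id DESC
    LIMIT 5
""").fetchall()

# Disagreeing pairs show random() being re-evaluated for the output column
# even though the WHERE clause already matched on some value of rndid.
for id_, rnd in rows:
    print(id_, rnd, id_ == rnd)
```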

Max


[sqlite] A possible bug probably partially fixed before

2012-03-09 Thread Max Vlasov
Hi,

I experimented with random numbers recently (with an older version of
sqlite) and notice a strange behavior, I tried to reproduce this with the
latest version and noticed that it was probably partially fixed, but still
returns wrong results for my query.

The simplest table possible

CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT
)

Let's fill it with numbers from 1 to 1000. For example by repeating
  INSERT INTO TestTable (id) values ((Select max(id) from TestTable) + 1)



The following query is the problem one

Select
  id,
  MinId,
  MaxId,
  abs(random() % (MaxId-MinId + 1)) as goodRangeRnd,
  (MinId + abs(random() % (MaxId-MinId + 1))) as actualrndid
from
  TestTable
left join
(Select
  (select Min(Id) from TestTable) as MinId,
  (select Max(Id) from TestTable) as MaxId
) StatTable
where
  id=actualrndid
order by
  id desc
limit
 10


So actualrndid here should produce random values between Min(Id) and
Max(Id), and the query should return the 10 rows with id prior or equal to
this value. But the results differ depending on the version.
(goodRangeRnd is here just for testing purposes, to show the difference from
the problematic actualrndid. It is the same as actualrndid but without the
(MinId +) term.)

3.6.10 -
 goodRangeRnd is good in every row (jumps in the full range (1..1000))
 actualrndid never gets higher than 900 (wrong)
 the query returns correct results if actualrndid is considered correct

3.7.10
 goodRangeRnd is good in every row
 actualrndid is good this time (jumps in the full range (1..1000))
 the query returns bad results similar to the version above, but this time
it is wrong if we take (id=actualrndid) into account. So for example,
there's id=911 and actualrndid=3 in the same row

So it looks like something related to this issue was really fixed on the way
from 3.6.10 to 3.7.10, but seemingly not everything.

Max


Re: [sqlite] A possible bug probably partially fixed before

2012-03-09 Thread Max Vlasov
On Fri, Mar 9, 2012 at 12:04 PM, Max Vlasov max.vla...@gmail.com wrote:

 3.6.10 -
 ... actualrndid never gets higher than 900 (wrong)


Sorry, correction, never gets lower than 900

Max


Re: [sqlite] Once again about file change counter

2012-03-06 Thread Max Vlasov
On Tue, Mar 6, 2012 at 12:22 PM, Roger Binns rog...@rogerbinns.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 05/03/12 23:41, Max Vlasov wrote:
  One of the recent thought I had was implementing time machine vfs. So
  on the developer level he opens the db either in general mode or
  history (read-only) mode providing the date (or version) for working
  with the db as it was in the past.

 I don't see why a VFS has to be used.  Surely the backup API is a far
 better solution for this?

 Only the app knows where the logical boundaries are during database change
 and use.  This could be as simple as one transaction, or it could be a
 group of transactions. You would want those history points to be well
 chosen.


Roger, I don't see a problem here. Any VFS will only be used by
applications/developers who know what the VFS was created for. So if this
VFS advertises itself as allowing out-of-transaction temporary
restoration, then any developer who sees that his internal logic is
compatible with it will be able to use it.

As for real-world examples, I admit they can be very specific, but
generally it's like adding a time domain without changing the schema. Imagine
some note-taking application. The developer would virtually only need to add
a Time-machine feature (requires more space) option to the New... dialog,
and, if a file of this kind is opened, enable a menu item Go to the date
(read-only)..., with only dozens of additional lines of code on his side.

Max


Re: [sqlite] Once again about file change counter

2012-03-06 Thread Max Vlasov
On Tue, Mar 6, 2012 at 2:37 PM, Dan Kennedy danielk1...@gmail.com wrote:

 On 03/06/2012 02:41 PM, Max Vlasov wrote:

  But for correct work this vfs should rely on the
 fact that file change counter will stay the same until the final write to
 the first sector.



 Do you just want some hook called when a transaction is committed
 and the file is completely updated? That might be the xSync() on
 the db file.


It's an interesting suggestion, thanks. I just thought that with xSync I
could even implement my own db change counter. In any case, the question
is: will it be called for a rollback?

Max


Re: [sqlite] Once again about file change counter

2012-03-06 Thread Max Vlasov
On Tue, Mar 6, 2012 at 4:00 PM, Dan Kennedy danielk1...@gmail.com wrote:

 On 03/06/2012 06:45 PM, Max Vlasov wrote:



 It's an interesting suggestion, thanks. I just thought that using xSync I
 even can only implement my own db change counter. In any case the question
 is will it be called for rollback?



 If this is a big deal, you probably could filter these states out
 by keeping an eye on the change-counter.


Ok, I already have plenty of information for field testing :)

Thanks
Max


Re: [sqlite] Once again about file change counter

2012-03-06 Thread Max Vlasov
On Tue, Mar 6, 2012 at 9:55 PM, Roger Binns rog...@rogerbinns.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 06/03/12 01:56, Max Vlasov wrote:
  I don't see a problem here.

 ... For example the user could use the app for 3 hours and perform what
 they
 consider to be 10 actions over that time, but behind the scenes that could
 be 300 transactions.  Offering 300 restore points would be very
 confusing to them.


Good point. I think that on the user side time is the right unit for this
task, and choosing the start of the day or the end of the day could be an
additional setting if necessary.



 Incidentally you could make a VFS intended for backups.  It could checksum
 pages being written and only save one copy per checksum.  Consequently
 every time you backup to it only changed pages would be stored/added to
 the backup file.


I planned it as a compact implementation, even compressing the history data. I
think that as a side effect this system would allow restoring an exact copy of
the db for a particular date. Or did you mean something else?

Max


[sqlite] Once again about file change counter

2012-03-05 Thread Max Vlasov
Hi,

there are some ideas for a VFS implementation that would require tracking the
file change counter. As I recall, I once asked about a specific API for working
with the file change counter, and it seems there is none. On the other
hand, this looks like a non-problem: until the db format changes
significantly, the location of this value in the file will stay unchanged.

One of the recent thoughts I had was implementing a "time machine" VFS: on
the developer level, one opens the db either in general mode or in history
(read-only) mode, providing the date (or version) for working with the db as
it was in the past. Technically, implementing such a VFS should not be a
problem. On every file write (sector write) we save the prior contents of
the location together with offset-size and version information (the file
change counter is good for this). The history mode, instead of reading the
main data, uses the saved history data and searches for the best fit to get
the requested sector contents. But for correct work this VFS has to rely on
the fact that the file change counter stays the same until the final write to
the first sector. And the bad thing here is that I would have to read this
value manually, so some conflict with the existing sqlite cache is possible.
So can a VFS in this case work in sync with sqlite, or is that too
complicated?
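A minimal in-memory sketch of the undo-log idea described above, with several simplifying assumptions of mine: whole-sector values of arbitrary length, a plain integer standing in for the file change counter, and no persistence. The real VFS would additionally have to detect counter changes made by other connections.

```python
class TimeMachineFile:
    """Undo-log sketch of the 'time machine' idea: before each sector
    overwrite, save the old contents tagged with the current version
    (the analogue of the file change counter)."""

    def __init__(self):
        self.sectors = {}   # sector index -> current contents
        self.history = {}   # sector index -> [(version, old_contents), ...]
        self.version = 1    # bumped on commit, like the change counter

    def write_sector(self, idx, data):
        old = self.sectors.get(idx)
        hist = self.history.setdefault(idx, [])
        # keep only the first pre-image per version: later writes in the
        # same transaction would otherwise shadow it
        if old is not None and (not hist or hist[-1][0] != self.version):
            hist.append((self.version, old))
        self.sectors[idx] = bytes(data)

    def commit(self):
        self.version += 1   # the real VFS would watch the change counter

    def read_sector_at(self, idx, version):
        # the contents as of `version` is the pre-image of the first
        # overwrite in any later version; otherwise the current contents
        for v, old in self.history.get(idx, []):
            if v > version:
                return old
        return self.sectors.get(idx)

f = TimeMachineFile()
f.write_sector(0, b"A"); f.commit()   # version 1 held "A"
f.write_sector(0, b"B"); f.commit()   # version 2 held "B"
f.write_sector(0, b"C")               # version 3 (current) holds "C"
print(f.read_sector_at(0, 1), f.read_sector_at(0, 2), f.read_sector_at(0, 3))
```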

Thanks,

Max


Re: [sqlite] opening places.sqlite from Firefox (10.0.1) profile qAdmin Cannot perform on closed dataset

2012-02-11 Thread Max Vlasov
On Sat, Feb 11, 2012 at 1:56 PM, Christoph Kukulies k...@kukulies.org wrote:

 I'm trying to open some Firefox files that are located in the profiles
 directory using Sqlite admin
 (http://sqliteadmin.orbmu2k.de/).

 I'm getting an error message:

 Cannot perform this operation on a closed dataset.


This admin tool is probably made with Delphi; this is an error message from the
BDE (Borland Database Engine) that has been around forever, so you should
probably address this to the developers of the tool. I suppose that for some
reason it could not open the db, but didn't detect the failure and so allowed
the database controls and objects to accept queries.

Max


[sqlite] Phrase in the docs

2012-02-10 Thread Max Vlasov
Hi,
working with sqlite and mysql, I noticed that they differ in their handling of
mixed types:
  Select '24' > 25
  Select 24 > 25
have the same results in MySQL but different results in sqlite.

Actually that is no news (my sqlite queries contained CAST(.. as INT) ), but I
decided to look at the docs and noticed the first sentence:

http://www.sqlite.org/datatype3.html

  ...If one operand has INTEGER, REAL or NUMERIC affinity and the other
operand has TEXT or NONE affinity then NUMERIC affinity is applied to the
other operand.
but below:
  ...Because column a has text affinity, numeric values on the
right-hand side of the comparisons are converted to text before the
comparison occurs.

Shouldn't the first sentence be fixed to end with ...then TEXT affinity is
applied to the other operand?
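A quick check of both documented behaviors with Python's sqlite3, assuming the comparison operator stripped by the archive above was > (table t is a hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# bare literals: no affinity is applied, and in SQLite's cross-type order
# any TEXT value sorts after any numeric value
text_vs_int = conn.execute("SELECT '24' > 25").fetchone()[0]
int_vs_int = conn.execute("SELECT 24 > 25").fetchone()[0]
print(text_vs_int, int_vs_int)   # 1 0 in SQLite; MySQL coerces and gives 0 0

conn.execute("CREATE TABLE t (a TEXT)")
conn.execute("INSERT INTO t VALUES ('24')")
# a column with TEXT affinity converts the numeric operand to text first
col_cmp = conn.execute("SELECT a > 25 FROM t").fetchone()[0]
print(col_cmp)   # '24' > '25' compared as text: 0
```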

Max


Re: [sqlite] Phrase in the docs

2012-02-10 Thread Max Vlasov
On Fri, Feb 10, 2012 at 4:53 PM, Richard Hipp d...@sqlite.org wrote:

 On Fri, Feb 10, 2012 at 6:11 AM, Max Vlasov max.vla...@gmail.com wrote:

  Hi,
  working with sqlite and mysql, noticed that they're different in regard
 of
  mixed types.
   Select '24'  25
   Select 24  25
  have the same results in MySql and different sqlite.
 

 In the statement:

   SELECT '25' > 25;

 There are no columns, only literals.  And hence no affinity is applied.




So if a string looks like a number, should it be treated as a number by
sqlite?

The following db and query return results bigger than 5; is this ok?

CREATE TABLE [testtable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[value] VARCHAR(200)
);
INSERT INTO testtable VALUES(4,'-3');
INSERT INTO testtable VALUES(5,'-2');
INSERT INTO testtable VALUES(11,'33');
INSERT INTO testtable VALUES(12,'44');
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('testtable',20);
SELECT value from testtable WHERE Value <= 5


Re: [sqlite] Phrase in the docs

2012-02-10 Thread Max Vlasov
On Fri, Feb 10, 2012 at 5:45 PM, Igor Tandetnik itandet...@mvps.org wrote:

 Value has TEXT affinity, 5 has none. So 5 is converted to '5', and then
 lexicographic comparisons are performed. It so happens that all strings in
 the Value column lexicographically precede '5'.

 If you wanted Value to be treated as an integer and compared numerically,
 why would you declare it as VARCHAR?
 --



Hmm, I never saw it mentioned that a literal has NONE affinity (select typeof(5)
returns integer).
As for the source of the problem, I have a table where a cell can contain
values (12) or ranges (12-15), so the type should be non-integer, at least
to stay mysql-compatible. One of my joins compares a subset that is
guaranteed to contain non-range values, so the WHERE clause safely compared
the values as integers. After querying in MySQL and sqlite I got different
results, which led to this narrowing.

The affinity section mentions the goal of better compatibility, but without
numeric literals being treated exactly like numeric fields, that
compatibility looks a little limited. I can live with that, but I
think the datatype doc page could also mention this difference.
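A sketch of the mixed values-and-ranges column described above (with hypothetical data), showing both the lexicographic surprise and the CAST workaround that restores the MySQL-like numeric comparison:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value VARCHAR(200))")  # holds '12' or '12-15'
conn.executemany("INSERT INTO t VALUES (?)",
                 [("-3",), ("12",), ("12-15",), ("33",)])

# TEXT affinity: 5 becomes '5', so even '33' precedes it lexicographically
as_text = conn.execute(
    "SELECT count(*) FROM t WHERE value <= 5").fetchone()[0]
# an explicit CAST restores the numeric comparison ('12-15' casts to 12)
as_int = conn.execute(
    "SELECT count(*) FROM t WHERE CAST(value AS INTEGER) <= 5").fetchone()[0]
print(as_text, as_int)   # 4 1
```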

Max


Re: [sqlite] Inserts get slower and slower

2012-02-08 Thread Max Vlasov
On Thu, Feb 9, 2012 at 2:08 AM, Steinar Midtskogen stei...@latinitas.org wrote:


 When I build my database from scratch using millions of inserts, one
 table causes problems.  Inserts get slower and slower.  I have about
 830,000 inserts for that table.  It gets to 300,000 pretty fast, but
 then it gets slower and slower, and eventually it will only do a few
 inserts per second, and I then I have to kill sqlite3 as it will run
 for hours if not days.



My guesses:
- Your unix_time values are not successive. In that case your first fast
results are due to the advantage of memory caching, and the following slowness
is the result of internal fragmentation.
- You have successive unix_time values, but when they go beyond the 32-bit
limit your software wraps around to a 32-bit value instead of going 64-bit.

Max


Re: [sqlite] About new ticket Per page/region checksums

2012-02-07 Thread Max Vlasov
On Sat, Feb 4, 2012 at 3:51 PM, Alexey Pechnikov pechni...@mobigroup.ru wrote:

 It's very important but there are some questions about
 http://www.sqlite.org/src/info/72b01a982a
 Some times ago DRH wrote that checksum calculation don't slow down
 SQLite significantly.
 But can be this realized in current SQLite 3.x branch? When checksum
 can help to restore
 damaged pages/database?



Very interesting discussion )

As a long-time fan of the VFS system of sqlite, I thought about implementing
this as a VFS extension. At first it seemed that changing the file format was
a necessity, but there is also another approach: the VFS could keep the
checksum data in a separate file ({dbname}.chk) and work with the existing
format, passing all read/write operations through unchanged and updating only
the records of the chk file. There are two different scenarios:
- The software that modifies the db opens it correctly (using this
VFS).
- Some other tool (not aware of this VFS) modifies the contents of the
db. In this case the VFS can track the file change counter (the one at offset
24) and compare the value saved in the chk file with the current one. A
mismatch could produce a warning, i.e. a recommendation to rebuild the
checksum db (although I don't know whether the VFS can report this through an
existing error code). The same would apply if the chk file is missing.
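Reading that counter from outside sqlite is straightforward; a minimal sketch (offset 24 holds a 4-byte big-endian integer per the database file format; the default rollback-journal mode is assumed, since under WAL the header copy in the main file is only updated at checkpoint time):

```python
import os
import sqlite3
import struct
import tempfile

def change_counter(path):
    # the file change counter lives at byte offset 24 of the db header,
    # stored as a 4-byte big-endian integer
    with open(path, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (x)")
    conn.commit()
    before = change_counter(path)
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()
    after = change_counter(path)
    conn.close()

print(before, after)   # the second commit bumps the counter
bumped = after > before
```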

Max


[sqlite] List of DB virtual modules

2012-01-23 Thread Max Vlasov
Hi,

Is there an easy way (without manually parsing the statements) to get the list
of modules used in all (if any) virtual table statements of the schema? For
any opened db I want to automatically enable the modules found (if I
recognize them) and warn if there are unsupported ones.

Thanks,

Max


Re: [sqlite] List of DB virtual modules

2012-01-23 Thread Max Vlasov
On Mon, Jan 23, 2012 at 5:36 PM, Richard Hipp d...@sqlite.org wrote:

 On Mon, Jan 23, 2012 at 8:06 AM, Max Vlasov max.vla...@gmail.com wrote:
 
  it there an easy way (without manual statements parsing) to get the list
 of
  modules used in all (if any) virtual tables statements of the schema? ...
 

 There is no way to do that right now, other than manually parsing the
 statements.


Thanks, Richard. It's interesting that some entities need support from
pragmas, while sqlite_master is so natural for working with tables that there
was never much need for a special pragma to obtain table properties.
Max


Re: [sqlite] Some pieces of the puzzle are coming together

2012-01-20 Thread Max Vlasov
On Fri, Jan 20, 2012 at 10:05 PM, John Elrick john.elr...@fenestra.com wrote:

 The change which results in a slow down occurred between 3.7.5.0 and
 3.7.6.0.


What about the difference in EXPLAIN output? Or just the outputs for this
prefix from both versions?

Max


Re: [sqlite] Collecting return values from sql commands using SQLite APIs.

2012-01-19 Thread Max Vlasov
On Thu, Jan 19, 2012 at 1:49 PM, bhaskarReddy uni...@gmail.com wrote:


 Hi Friends,

   Is it possible to collect the return value of a SQL command? For example,
 SELECT COUNT(*) FROM table; will return the number of records in a table. Is
 there any SQLite API to return this value?


It's the same API that returns values for
  SELECT * FROM table;
Only for Count(*) your result is one row with one column.
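For illustration, the same shape through Python's sqlite3 wrapper over that API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
# the aggregate comes back through the ordinary row-stepping API:
# one row with one column
(count,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(count)   # 3
```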

Max


Re: [sqlite] Some pieces of the puzzle are coming together

2012-01-19 Thread Max Vlasov
John, another suggestion

Can you test previous sqlite versions one by one with one of your problem
queries, going back toward older versions until the results are good again,
and send both the good and the bad EXPLAIN output here? I'm sure this will be
Greek to most of us :), but when Richard or Dan look at the difference in the
VDBE code, they might notice something.

Max

On Fri, Jan 20, 2012 at 1:17 AM, John Elrick john.elr...@fenestra.com wrote:

 One more useful comparison.  The following query is prepared and step is
 called with these results:




Re: [sqlite] msize in 3.7.10

2012-01-18 Thread Max Vlasov
On Tue, Jan 17, 2012 at 6:11 PM, Max Vlasov max.vla...@gmail.com wrote:

 Hi,
 I'm using static linking with Delphi and a new function required binding
 in 3.7.10 - msize (__msize). The problem here is that there's no official
 way to query the size of a memory block in Delphi memory manager, at least
 I'm not aware of one. Should I solve this anyway (for example by keeping my
 own list of blocks and their sizes) or is there some other solution
 (enabled/disable some define)?


Quoting and answering myself )

CMIIW, but it seems that HAVE_MALLOC_USABLE_SIZE can't control anything on
Windows. As I understand, it's a new define introduced in 3.7.10.

There are plenty of lines where #ifdef SQLITE_MALLOCSIZE allows for the
possibility that it either exists or not, but if I don't define anything,
these lines force HAVE_MALLOC_USABLE_SIZE to 1 on Windows:
// --
#if !defined(HAVE_MALLOC_USABLE_SIZE) && SQLITE_OS_WIN
# define HAVE_MALLOC_USABLE_SIZE 1
# define malloc_usable_size _msize
#endif
// --

But it looks like on other platforms the define is effective (because the
&& SQLITE_OS_WIN condition disables this forcing there), and the following
lines obey the setting afterwards:
//-
#ifdef HAVE_MALLOC_USABLE_SIZE
#include <malloc.h>
#define SQLITE_MALLOCSIZE(x) malloc_usable_size(x)
#else
#undef SQLITE_MALLOCSIZE
#endif
//-

Was it intentional or accidental?

Max


Re: [sqlite] msize in 3.7.10

2012-01-18 Thread Max Vlasov
On Wed, Jan 18, 2012 at 4:47 PM, Richard Hipp d...@sqlite.org wrote:

 Try using the trunk of the SQLite source tree with the SQLITE_WITHOUT_MSIZE
 preprocessor macro.



Richard, thanks, I will try, but I just tried to replace

#if !defined(HAVE_MALLOC_USABLE_SIZE) && SQLITE_OS_WIN
# define HAVE_MALLOC_USABLE_SIZE 1
# define malloc_usable_size _msize
#endif

...with

#if defined(HAVE_MALLOC_USABLE_SIZE) && SQLITE_OS_WIN
# define malloc_usable_size _msize
#endif

Isn't that a proper fix for HAVE_MALLOC_USABLE_SIZE to work on Windows? I
compiled it; it doesn't require _msize anymore and works.
Max


Re: [sqlite] msize in 3.7.10

2012-01-18 Thread Max Vlasov
On Wed, Jan 18, 2012 at 4:47 PM, Richard Hipp d...@sqlite.org wrote:

 On Tue, Jan 17, 2012 at 9:11 AM, Max Vlasov max.vla...@gmail.com wrote:

  Hi,
  I'm using static linking with Delphi and a new function required binding
 in
  3.7.10 - msize (__msize).
  ...

 Try using the trunk of the SQLite source tree with the SQLITE_WITHOUT_MSIZE
 preprocessor macro.



Richard, thanks. I tried the trunk and everything works; I also saw the
comment in the sources about the automatic usage of msize on Windows. So
probably having HAVE_MALLOC_USABLE_SIZE always defined on Windows is a
necessity.

Max


[sqlite] msize in 3.7.10

2012-01-17 Thread Max Vlasov
Hi,
I'm using static linking with Delphi, and a new function requires binding in
3.7.10: msize (__msize). The problem here is that there's no official way
to query the size of a memory block from the Delphi memory manager, at least
I'm not aware of one. Should I solve this anyway (for example by keeping my
own list of blocks and their sizes), or is there some other solution
(enabling/disabling some define)?

Thanks,

Max


Re: [sqlite] Some pieces of the puzzle are coming together

2012-01-16 Thread Max Vlasov
On Sun, Jan 15, 2012 at 3:35 AM, John Elrick john.elr...@fenestra.com wrote:

 On Fri, Jan 13, 2012 at 5:13 PM, Max Vlasov max.vla...@gmail.com wrote:

 5,008 calls to

 UPDATE RESPONSES SET
 RESPONSE_NAME = :RESPONSE_NAME,
 prelisted_value = :prelisted_value
 WHERE RESPONSE_OID = :RESPONSE_OID

 3.6.17: 382 ms
 3.7.9: 5,924 ms



John, I did a test reproducing your job with 8 inserts
  insert into responses default values

and then making 5000 calls in a transaction:

UPDATE RESPONSES SET RESPONSE_NAME = '12345', prelisted_value='12345'
WHERE RESPONSE_OID = abs(random() % 8)

Is this test ok for you (does it still show the bad performance)?

On my side it's about 800 ms for 3.7.9 static, with about 6000 memory calls
during the query. For 3.6.20 the number of calls is ~7000, and the time is
similar.
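A sketch of this benchmark in Python's sqlite3. The table layout is guessed from the thread, and the row count of 80,000 is my assumption (the literal "8" above may be an archive artifact); timings will of course differ from the C-level numbers quoted.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE responses ("
             "response_oid INTEGER PRIMARY KEY,"
             "response_name TEXT, prelisted_value TEXT)")
conn.executemany("INSERT INTO responses (response_oid) VALUES (?)",
                 ((i,) for i in range(80000)))

start = time.perf_counter()
with conn:   # wrap all 5,000 updates in a single transaction
    for _ in range(5000):
        conn.execute("UPDATE responses SET response_name = '12345', "
                     "prelisted_value = '12345' "
                     "WHERE response_oid = abs(random() % 80000)")
elapsed_ms = (time.perf_counter() - start) * 1000

touched = conn.execute("SELECT count(*) FROM responses "
                       "WHERE response_name = '12345'").fetchone()[0]
print(f"5000 updates in {elapsed_ms:.0f} ms, {touched} rows touched")
```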

Max


Re: [sqlite] Some pieces of the puzzle are coming together

2012-01-16 Thread Max Vlasov
On Mon, Jan 16, 2012 at 5:37 PM, John Elrick john.elr...@fenestra.com wrote:

  is this test ok for you (still showing bad performance)?
 
  On my side it's about 800 ms for 3.7.9 static and about 6000 memory calls
  during the query. For 3.6.20 the number of calls ~  7000, the time is
  similar.
 
 
 Your test was hard coded.  Mine uses bindings.  However, I'm not arguing
 that something odd is going on.  I'm just trying to understand what.


Ok, I did a test with bindings.
This was the query:
  UPDATE RESPONSES SET RESPONSE_NAME = ?, prelisted_value=? WHERE
RESPONSE_OID = ?
executed for 5000 rows, where both strings were 'abc' and the numbers were
random (existing) ids.

I noticed similar results when the data is system-cached (the second and
following runs in sqlite, even after reopening), but when I force a reset of
the system cache (the Windows cache, not sqlite's), the time is about 7-11
seconds. But I think a test db with test data would be more useful for
understanding. Can you upload it?

Max


Re: [sqlite] Some pieces of the puzzle are coming together

2012-01-16 Thread Max Vlasov
On Tue, Jan 17, 2012 at 5:05 AM, John Elrick john.elr...@fenestra.com wrote:

 
 I can try that approach with BCC.   I was concerned that the IDE would be
 getting inaccurate information from the .OBJ.  Crossing into a pre-complied
 library can be tricky.


As I recall, I could not make debug information compiled by bcc visible in
the Delphi IDE, but everything will be ok if you make your code compatible
with fpc/lazarus. Besides the fact that you will be able to step inside the
sqlite C code and inspect the data (gdb is used), you will probably make your
software cross-platform. Also, there is a library for lazarus/fpc
(developed by Ludo Brands and me) for inspecting the call stacks at which
memory blocks are allocated (similar to the Valgrind tool). By the way,
Ludo did a great job with the stack unwinding: I remember false positives
like the ones mentioned here, when calling chains were very strange, and he
had to implement a sophisticated stack analysis for many real-world examples.
Max

