[sqlite] Bug Report - Analyzer app

2013-11-02 Thread RSmith

Hi there,

I use the sqlite3_analyzer.exe app to get some data about tables (it's very useful 
by the way - thanks).

The newest version, downloaded a few days ago, gave me import errors for the produced file, so I tried loading it manually and 
got the SQL which follows for one of my small DB files. The object names in the VALUES lists seem to be unquoted, which is causing the 
failure for me:


/*
  Removed lots of non-importing stats above this line  
***
The entire text of this report can be sourced into any SQL database
engine for further analysis.  All of the text above is an SQL comment.
The data used to generate this report follows:
*/
BEGIN;
CREATE TABLE space_used(
   name clob,        -- Name of a table or index in the database file
   tblname clob, -- Name of associated table
   is_index boolean, -- TRUE if it is an index, false for a table
   nentry int,   -- Number of entries in the BTree
   leaf_entries int, -- Number of leaf entries
   payload int,  -- Total amount of data stored in this table or index
   ovfl_payload int, -- Total amount of data stored on overflow pages
   ovfl_cnt int, -- Number of entries that use overflow
   mx_payload int,   -- Maximum payload size
   int_pages int,    -- Number of interior pages used
   leaf_pages int,   -- Number of leaf pages used
   ovfl_pages int,   -- Number of overflow pages used
   int_unused int,   -- Number of unused bytes on interior pages
   leaf_unused int,  -- Number of unused bytes on primary pages
   ovfl_unused int,  -- Number of unused bytes on overflow pages
   gap_cnt int,  -- Number of gaps in the page layout
   compressed_size int  -- Total bytes stored on disk
);
INSERT INTO space_used 
VALUES(sqlite_master,sqlite_master,0,14,12,2473,0,0,534,1,3,0,898,520,0,3,4096);
INSERT INTO space_used 
VALUES(DBases,DBases,0,5,5,644,0,0,181,0,1,0,0,349,0,0,1024);
INSERT INTO space_used 
VALUES(sqlite_autoindex_DBases_1,DBases,1,5,5,64,0,0,14,0,1,0,0,937,0,0,1024);
INSERT INTO space_used 
VALUES(DBTables,DBTables,0,15,10,4458,0,0,923,1,6,0,977,1589,0,1,7168);
INSERT INTO space_used 
VALUES(DBHistory,DBHistory,0,0,0,0,0,0,0,0,1,0,0,1016,0,0,1024);
INSERT INTO space_used 
VALUES(DBSettings,DBSettings,0,7,7,266,0,0,98,0,1,0,0,715,0,0,1024);
INSERT INTO space_used 
VALUES(sqlite_autoindex_DBSettings_1,DBSettings,1,7,7,139,0,0,24,0,1,0,0,856,0,0,1024);
INSERT INTO space_used 
VALUES(Idx_DBases_DBName,DBases,1,5,5,64,0,0,14,0,1,0,0,937,0,0,1024);
INSERT INTO space_used 
VALUES(Idx_DBTables_DBID,DBTables,1,10,10,145,0,0,22,0,1,0,0,841,0,0,1024);
INSERT INTO space_used 
VALUES(Idx_DBHistory,DBHistory,1,0,0,0,0,0,0,0,1,0,0,1016,0,0,1024);
COMMIT;

---

Importing this gets the usual "no such column: sqlite_master" error, but it all 
works well if I add quotes everywhere.
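
For reference, here is a minimal hand-edited sketch (not the analyzer's actual 
output) showing how the first two rows import once the object names are quoted 
as string literals; the remaining rows follow the same pattern:

INSERT INTO space_used
VALUES('sqlite_master','sqlite_master',0,14,12,2473,0,0,534,1,3,0,898,520,0,3,4096);
INSERT INTO space_used
VALUES('DBases','DBases',0,5,5,644,0,0,181,0,1,0,0,349,0,0,1024);
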
If this is already fixed, or if there is something I'm doing wrong, kindly point 
me in the right direction.
Thank you




Re: [sqlite] Database gets locked for other processes

2013-11-02 Thread Israel Lins Albuquerque
I know the Qt SQLite driver well enough to say: maybe you aren't destroying the 
QSqlQuery object, or aren't simply calling its finish() function, so the 
statement stays open and holds the lock!

Sent via iPhone

 On 29/10/2013, at 10:51, Stephan Beal sgb...@googlemail.com wrote:
 
 On Tue, Oct 29, 2013 at 1:52 PM, Martin free...@rakor-net.de wrote:
 
 The program is running on Windows7.
 ...
 
 The program runs parallel on multiple machines all sharing the same
 SQLite-Database-file.
 
 
 Connecting multiple clients over a network share is a sure-fire way to
 corrupt your database. See the bottom half of this page:
 
 http://www.sqlite.org/whentouse.html
 
 as well as any number of threads in this mailing list archives regarding
 this topic.
 
 -- 
 - stephan beal
 http://wanderinghorse.net/home/stephan/
 http://gplus.to/sgbeal
 Since tyranny's the only guaranteed byproduct of those who insist on a
 perfect world, freedom will have to do. -- Bigby Wolf


[sqlite] Update and GROUP BY

2013-11-02 Thread Gert Van Assche
All, I have this table:

DROP TABLE T;
CREATE TABLE T (N, V, G);
INSERT INTO T VALUES('a', 1, 'x');
INSERT INTO T VALUES('b', 3, 'x');
INSERT INTO T VALUES('c', null, 'x');
INSERT INTO T VALUES('d', 80, 'y');
INSERT INTO T VALUES('e', null, 'y');
INSERT INTO T VALUES('f', 60, 'y');
INSERT INTO T VALUES('g', null, 'y');
INSERT INTO T VALUES('h', null, 'z');
INSERT INTO T VALUES('i', 111, 'z');

I would like, where N='c', to set V to the average for the group (G) this
record belongs to (so 'x').

Thus where N='c' I would get 2, where N='e' or 'g' it would be 70, and
where N='h' it would be 111.

I have no clue how to write this UPDATE statement. Could someone help me
out?

thanks


Gert


Re: [sqlite] Update and GROUP BY

2013-11-02 Thread Igor Tandetnik

On 11/2/2013 1:06 PM, Gert Van Assche wrote:

All, I have this table:

DROP TABLE T;
CREATE TABLE T (N, V, G);
INSERT INTO T VALUES('a', 1, 'x');
INSERT INTO T VALUES('b', 3, 'x');
INSERT INTO T VALUES('c', null, 'x');
INSERT INTO T VALUES('d', 80, 'y');
INSERT INTO T VALUES('e', null, 'y');
INSERT INTO T VALUES('f', 60, 'y');
INSERT INTO T VALUES('g', null, 'y');
INSERT INTO T VALUES('h', null, 'z');
INSERT INTO T VALUES('i', 111, 'z');

 I would like, where N='c', to set V to the average for the group (G) this
 record belongs to (so 'x').

 Thus where N='c' I would get 2, where N='e' or 'g' it would be 70, and
 where N='h' it would be 111.


I'm not sure I quite follow, but something like this perhaps:

update T set V = (select avg(V) from T t2 where T.G = t2.G)
where V is null;
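
A quick sanity check against the sample data above (just a sketch):

select N, V, G from T order by N;
-- expected after the update: c -> 2.0, e -> 70.0, g -> 70.0, h -> 111.0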

--
Igor Tandetnik



Re: [sqlite] Update and GROUP BY

2013-11-02 Thread Keith Medcalf

Can you write a SELECT which returns the data that you want?
Can you write a WHERE clause which selects the records you wish to update?

For example:

update T
   set V = (select avg(t2.V)
              from T as t2
             where t2.G = T.G)
 where V IS NULL;

assuming you only want to update the value of V in rows where V IS NULL ...
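
Following that two-step approach, the standalone SELECT might look like this
sketch, showing each NULL row next to the value the UPDATE would assign:

select N, G,
       (select avg(t2.V) from T as t2 where t2.G = T.G) as group_avg
  from T
 where V IS NULL;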

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
boun...@sqlite.org] On Behalf Of Gert Van Assche
Sent: Saturday, 2 November, 2013 11:07
To: sqlite-users
Subject: [sqlite] Update and GROUP BY

All, I have this table:

DROP TABLE T;
CREATE TABLE T (N, V, G);
INSERT INTO T VALUES('a', 1, 'x');
INSERT INTO T VALUES('b', 3, 'x');
INSERT INTO T VALUES('c', null, 'x');
INSERT INTO T VALUES('d', 80, 'y');
INSERT INTO T VALUES('e', null, 'y');
INSERT INTO T VALUES('f', 60, 'y');
INSERT INTO T VALUES('g', null, 'y');
INSERT INTO T VALUES('h', null, 'z');
INSERT INTO T VALUES('i', 111, 'z');

I would like, where N='c', to set V to the average for the group (G) this
record belongs to (so 'x').

Thus where N='c' I would get 2, where N='e' or 'g' it would be 70, and
where N='h' it would be 111.

I have no clue how to write this UPDATE statement. Could someone help me
out?

thanks


Gert


Re: [sqlite] Update and GROUP BY

2013-11-02 Thread Gert Van Assche
Igor, that is exactly what I need. I also understand how it is done now.

Thanks!

gert


2013/11/2 Igor Tandetnik i...@tandetnik.org

 On 11/2/2013 1:06 PM, Gert Van Assche wrote:

 All, I have this table:

 DROP TABLE T;
 CREATE TABLE T (N, V, G);
 INSERT INTO T VALUES('a', 1, 'x');
 INSERT INTO T VALUES('b', 3, 'x');
 INSERT INTO T VALUES('c', null, 'x');
 INSERT INTO T VALUES('d', 80, 'y');
 INSERT INTO T VALUES('e', null, 'y');
 INSERT INTO T VALUES('f', 60, 'y');
 INSERT INTO T VALUES('g', null, 'y');
 INSERT INTO T VALUES('h', null, 'z');
 INSERT INTO T VALUES('i', 111, 'z');

 I would like, where N='c', to set V to the average for the group (G) this
 record belongs to (so 'x').

 Thus where N='c' I would get 2, where N='e' or 'g' it would be 70, and
 where N='h' it would be 111.


 I'm not sure I quite follow, but something like this perhaps:

 update T set V = (select avg(V) from T t2 where T.G = t2.G)
 where V is null;

 --
 Igor Tandetnik



Re: [sqlite] Update and GROUP BY

2013-11-02 Thread Gert Van Assche
Keith, thanks for this. The statement Igor made is what I need.

gert


2013/11/2 Keith Medcalf kmedc...@dessus.com


 Can you write a SELECT which returns the data that you want?
 Can you write a WHERE clause which selects the records you wish to update?

 For example:

 update T
    set V = (select avg(t2.V)
               from T as t2
              where t2.G = T.G)
  where V IS NULL;

 assuming you only want to update the value of V in rows where V IS NULL ...

 -Original Message-
 From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
 boun...@sqlite.org] On Behalf Of Gert Van Assche
 Sent: Saturday, 2 November, 2013 11:07
 To: sqlite-users
 Subject: [sqlite] Update and GROUP BY
 
 All, I have this table:
 
 DROP TABLE T;
 CREATE TABLE T (N, V, G);
 INSERT INTO T VALUES('a', 1, 'x');
 INSERT INTO T VALUES('b', 3, 'x');
 INSERT INTO T VALUES('c', null, 'x');
 INSERT INTO T VALUES('d', 80, 'y');
 INSERT INTO T VALUES('e', null, 'y');
 INSERT INTO T VALUES('f', 60, 'y');
 INSERT INTO T VALUES('g', null, 'y');
 INSERT INTO T VALUES('h', null, 'z');
 INSERT INTO T VALUES('i', 111, 'z');
 
 I would like, where N='c', to set V to the average for the group (G) this
 record belongs to (so 'x').

 Thus where N='c' I would get 2, where N='e' or 'g' it would be 70, and
 where N='h' it would be 111.
 
 I have no clue how to write this UPDATE statement. Could someone help me
 out?
 
 thanks
 
 
 Gert


Re: [sqlite] Update and GROUP BY

2013-11-02 Thread Jim Callahan
Good question, Gert. Good solution, Igor, and I like Keith's formatting.

I thought the list might be interested in some of the statistical issues
involved in deciding whether this method of replacing null values is
appropriate for your data analysis, and in the alternatives that are available.

The statistical term for replacing missing values (a type of null value)
with a computed or selected value is "imputation".

The problem/solution presented on this list is an implementation of a type of
mean imputation. The statistical language R has an entire package devoted to
imputation (although, ironically, it doesn't have this exact method -- it
calculates the mean of an entire column without grouping, or performs a more
complex analysis; that may be because R experts know a way to add the
grouping).
http://cran.r-project.org/web/packages/imputation/imputation.pdf

The Wikipedia article, Imputation (statistics)
http://en.wikipedia.org/wiki/Imputation_(statistics)
points out some of the tradeoffs involved:

Another imputation technique involves replacing any missing value with the
mean of that variable for all other cases [records], which has the benefit
of not changing the sample mean for that variable. However, mean imputation
attenuates any correlations involving the variable(s) that are imputed.
This is because, in cases with imputation, there is guaranteed to be no
relationship between the imputed variable and any other measured variables.
Thus, mean imputation has some attractive properties for univariate
analysis but becomes problematic for multivariate analysis.
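
In SQL terms, whole-column mean imputation would look something like this
sketch (using the T(N, V, G) table from earlier in the thread); contrast it
with the per-group version Igor posted:

-- every NULL gets the same value: the average of V over all non-NULL rows,
-- ignoring the grouping column G entirely
update T
   set V = (select avg(V) from T)
 where V IS NULL;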

Regression imputation has the opposite problem of mean imputation. A
regression model is estimated to predict observed values of a variable
based on other variables, and that model is then used to impute values in
cases where that variable is missing. In other words, available information
for complete and incomplete cases is used to predict whether a value on a
specific variable is missing or not. Fitted values from the regression
model are then used to impute the missing values. The problem is that the
imputed data do not have an error term included in their estimation, thus
the estimates fit perfectly along the regression line without any residual
variance. This causes relationships to be over identified and suggest
greater precision in the imputed values than is warranted. The regression
model predicts the most likely value of missing data but does not supply
uncertainty about that value.

There is a lot more in the Wikipedia article, but this seemed like the most
relevant section.

HTH,
Jim Callahan


On Sat, Nov 2, 2013 at 2:23 PM, Igor Tandetnik i...@tandetnik.org wrote:

 On 11/2/2013 1:06 PM, Gert Van Assche wrote:

 All, I have this table:

 DROP TABLE T;
 CREATE TABLE T (N, V, G);
 INSERT INTO T VALUES('a', 1, 'x');
 INSERT INTO T VALUES('b', 3, 'x');
 INSERT INTO T VALUES('c', null, 'x');
 INSERT INTO T VALUES('d', 80, 'y');
 INSERT INTO T VALUES('e', null, 'y');
 INSERT INTO T VALUES('f', 60, 'y');
 INSERT INTO T VALUES('g', null, 'y');
 INSERT INTO T VALUES('h', null, 'z');
 INSERT INTO T VALUES('i', 111, 'z');

 I would like, where N='c', to set V to the average for the group (G) this
 record belongs to (so 'x').

 Thus where N='c' I would get 2, where N='e' or 'g' it would be 70, and
 where N='h' it would be 111.


 I'm not sure I quite follow, but something like this perhaps:

 update T set V = (select avg(V) from T t2 where T.G = t2.G)
 where V is null;

 --
 Igor Tandetnik




Re: [sqlite] possible join bug/regression with sqlite-3.8.1

2013-11-02 Thread Ferdinand Hübner
On Mon, Oct 28, 2013 at 9:44 PM, Richard Hipp d...@sqlite.org wrote:
 The fix for this problem has been checked in:

 http://www.sqlite.org/src/info/9aac4e588c

 This problem was actually introduced in 3.8.0 but it was unreachable until
 some additional enhancements were added in 3.8.1.  So 3.8.1 didn't actually
 cause the bug, it merely exposed it.

 This is a rather obscure problem, so we do not (currently) intend to rush
 out a patch release.  The trunk is stable and usable if anybody really
 needs a fix right away.

Thank you, Richard.
I compiled today's trunk and it seems to fix the problem.

Regards,
Ferdinand


Re: [sqlite] Second ORDER BY statement

2013-11-02 Thread Igor Tandetnik

On 11/2/2013 11:24 PM, SongbookDB wrote:

I'm using the following query to first display, in ascending order, any records
that have something in the Language column, then any rows where the Language
column is an empty string (FYI, I don't have NULLs in this column).
It works perfectly.

I'd now like to order the Language = '' rows by another column, Artist,
but cannot work out how to restructure the query to accommodate this.

SELECT * FROM
(SELECT *
FROM table1
WHERE Language != '' COLLATE NOCASE
ORDER BY Language COLLATE NOCASE)
UNION ALL
SELECT *
FROM table1
WHERE Language = '' COLLATE NOCASE
LIMIT 100


select * from table1
order by Language = '', Language collate nocase, Artist collate nocase;
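
If the 100-row cap from the original query is still wanted, it can simply be
appended (a sketch):

select * from table1
order by Language = '', Language collate nocase, Artist collate nocase
limit 100;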

--
Igor Tandetnik



Re: [sqlite] Second ORDER BY statement

2013-11-02 Thread SongbookDB
Not sure if this is the way to reply, but thank you, Igor.


On Sun, Nov 3, 2013 at 3:16 PM, Igor Tandetnik i...@tandetnik.org wrote:

 On 11/2/2013 11:24 PM, SongbookDB wrote:

 I'm using the following query to first display, in ascending order, any records
 that have something in the Language column, then any rows where the Language
 column is an empty string (FYI, I don't have NULLs in this column).
 It works perfectly.

 I'd now like to order the Language = '' rows by another column, Artist,
 but cannot work out how to restructure the query to accommodate this.

 SELECT * FROM
 (SELECT *
 FROM table1
 WHERE Language != '' COLLATE NOCASE
 ORDER BY Language COLLATE NOCASE)
 UNION ALL
 SELECT *
 FROM table1
 WHERE Language = '' COLLATE NOCASE
 LIMIT 100


 select * from table1
 order by Language = '', Language collate nocase, Artist collate nocase;

 --
 Igor Tandetnik





-- 
Shaun Thomson
Owner - SongbookDB