I've run into a phenomenon I don't understand where view nesting affects
types.
Seen in sqlite 3.7.6.3
sqlite-3.6.23 does NOT show this behavior.
After further reflection, I am going to go out on a limb and suggest
that the behavior described in the OP is a regression. The behavior
started
Hi all!
I have a silly question probably... maybe someone can help me.
I have a 33 MB database with a 100 MB log. And the log keeps growing...
So, I try to access database by sqlite3.exe only. It is version
3.7.6.3. I call PRAGMA wal_checkpoint(RESTART); it immediately
finishes with 0|8|8. But the WAL log file
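For what it's worth, the three numbers in that checkpoint result can be reproduced with a small Python sketch (database path and contents are invented here): the row reads busy|log|checkpointed, and 0|8|8 means the checkpoint was not blocked and every WAL frame was copied back into the main database. Note that a RESTART checkpoint does not shrink the -wal file on disk; it only arranges for the next writer to start again from its beginning, which would explain a checkpoint "succeeding" while the file stays large.

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database; the path and table are assumptions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.commit()

# One result row: busy | frames in WAL | frames checkpointed.
busy, log_frames, ckpt_frames = con.execute(
    "PRAGMA wal_checkpoint(RESTART)").fetchone()
print(busy, log_frames, ckpt_frames)

# The -wal file is still on disk at its old size: RESTART does not
# truncate it, it only lets the next writer overwrite it from the start.
print(os.path.getsize(path + "-wal") > 0)
```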
On Wed, Jun 22, 2011 at 9:25 PM, Rense Corten rcor...@gmail.com wrote:
CREATE TABLE table2 AS SELECT n1,n2 FROM (SELECT n1, n2 FROM table1
UNION SELECT n2 AS n1, n1 AS n2 FROM table1) WHERE (n1 < n2) GROUP BY
n1,n2;
This has the desired result on a small example, but when I try this on
my
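(Aside: the mail archive appears to have eaten a comparison operator in that WHERE clause. Assuming it was `n1 < n2`, the query keeps each unordered pair once in canonical orientation, which can be checked on a toy table; the data below is invented.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1(n1 INTEGER, n2 INTEGER)")
# Toy data: the pair (1,2) appears in both orientations.
con.executemany("INSERT INTO table1 VALUES (?,?)", [(1, 2), (2, 1), (1, 3)])

# With the stripped operator restored as n1 < n2: union both orientations,
# then keep only the (smaller, larger) form, deduplicated by GROUP BY.
con.execute("""
    CREATE TABLE table2 AS
    SELECT n1, n2 FROM (SELECT n1, n2 FROM table1
                        UNION
                        SELECT n2 AS n1, n1 AS n2 FROM table1)
    WHERE n1 < n2 GROUP BY n1, n2""")
rows = con.execute("SELECT n1, n2 FROM table2 ORDER BY n1, n2").fetchall()
print(rows)  # [(1, 2), (1, 3)]
```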
I am using sqlite3 (sqlite-3.7.0.1) with WAL enabled as storage for a
multiprocessing daemon (python). On doing a BULK insert of 0.5 million rows,
each of size 230 bytes, in batches of 500 where each batch is a transaction,
the following errors happen in other processes which perform (10) SELECTs and
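A minimal sketch of that write pattern, with one detail that often matters for "database is locked" errors from the other processes: a busy timeout, so a blocked connection retries for a while instead of failing immediately. Table and column names here are invented.

```python
import sqlite3

def bulk_insert(path, rows, batch=500):
    """Insert rows in batches, one transaction per batch (sketch).

    busy_timeout makes a connection blocked by another writer retry
    for up to 5 s instead of returning SQLITE_BUSY immediately.
    """
    con = sqlite3.connect(path, timeout=5.0)
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("PRAGMA busy_timeout=5000")
    con.execute("CREATE TABLE IF NOT EXISTS t(payload BLOB)")
    for i in range(0, len(rows), batch):
        with con:  # the with-block commits one batch as one transaction
            con.executemany("INSERT INTO t VALUES (?)", rows[i:i + batch])
    return con

con = bulk_insert(":memory:", [(b"x" * 230,) for _ in range(5000)])
count = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 5000
```

Readers in the other processes would want the same busy_timeout setting on their own connections.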
Hi
I've encountered a big problem which I failed to solve and I'm writing here
to get some advice.
But let's begin from the start. I use WAL journal mode in most of my
applications including WinCE 4/20 based devices. I know that it's not
supported, but using a simple trick it actually works with
I have an application for which, I have one process which is writing to
the database pretty frequently. I believe that I have enabled
shared_cache and read_uncommitted. How do I verify this?
In another process, I have a sqlite command line connected to the same
database, I
i finalize every statement right before closing the database, and every
finalize returns SQLITE_OK. but then closing returns SQLITE_BUSY.
On 22.06.2011 19:04, Pavel Ivanov wrote:
there are no statements open, as far as i can see.
Your eyes can miss something. Does sqlite3_close return
Natusch, Paul pnatu...@tycoint.com wrote:
In another process, I have a sqlite command line connected to the same
database, I periodically do select * from tableName;
This causes my write to get a busy response. How do I enable
shared_cache for the command line?
Shared cache works only for
Is the Hard Drive thrashing? Could be that most everything is being done in
swap files. Given the size of the table, a lot more RAM would help.
Tom
___
sqlite-users mailing list
sqlite-users@sqlite.org
On 23 Jun 2011, at 6:11am, Anoop K wrote:
Sqlite3 configuration
- 'PRAGMA synchronous=OFF'
Remove that and try it again.
Simon.
I had to set synchronous=OFF to achieve the required performance. By the way, I
did change it to 'NORMAL' and tried. The issues mentioned in the mail still
happened.
Some more info regarding the problem: at this high load, IO wait was about
50-60%.
Thanks
Anoop
On Thu, Jun 23, 2011 at 5:42 PM, Simon
On 23 Jun 2011, at 2:22pm, Anoop K wrote:
By the way I
did change it to 'NORMAL' and tried. The issues mentioned in the mail still
happened
Okay, that's useful information.
Simon.
On 06/23/2011 12:11 PM, Anoop K wrote:
I am using sqlite3(sqlite-3.7.0.1) with WAL enabled as storage for a
multiprocessing daemon(python). On doing a BULK insert of .5 million rows
each of size 230 bytes in batches of 500 where each batch is a transaction,
following errors happen in other
Hi, all!
Today i saw a curious thing: i store 440kb of wiki files in an sqlite3 db
and the db file is only 400kb.
HTF can that possibly be?
After poking around i found that the wiki files actually total 360kb (when i
added up their sizes manually, as opposed to using 'df' to get it), and the
On Jun 23, 2011, at 10:18 AM, Stephan Beal sgb...@googlemail.com wrote:
Hi, all!
Today i saw a curious thing: i store 440kb of wiki files in an sqlite3 db
and the db file is only 400kb.
HTF can that possibly be?
After poking around i found that the wiki files actually total 360kb
On 23 Jun 2011, at 16:22, Mr. Puneet Kishor wrote:
On Jun 23, 2011, at 10:18 AM, Stephan Beal sgb...@googlemail.com wrote:
Hi, all!
Today i saw a curious thing: i store 440kb of wiki files in an sqlite3 db
and the db file is only 400kb.
HTF can that possibly be?
After poking
On Jun 23, 2011, at 10:49 AM, Jean-Denis Muys jdm...@kleegroup.com wrote:
On 23 Jun 2011, at 16:22, Mr. Puneet Kishor wrote:
On Jun 23, 2011, at 10:18 AM, Stephan Beal sgb...@googlemail.com wrote:
Hi, all!
Today i saw a curious thing: i store 440kb of wiki files in an sqlite3
On Thu, Jun 23, 2011 at 4:49 PM, Jean-Denis Muys jdm...@kleegroup.comwrote:
Let me add two other drawbacks as well:
- incremental backups: now every time you change one small file, the whole
database needs to be backed up, needlessly increasing storage size and
backup time. This applies to
I am sorry I won't be able to share the complete program. Also it is quite some
work to extract that piece alone.
Some good news. As a last resort I downloaded the latest sqlite source,
compiled it and ran the tests. It never failed. The problem seems to be fixed
in 3.7.6.3.
But I am still curious to
Hello All,
Is there a way to do a GROUP BY query with a well-defined order of calls to
the aggregate function?
The problem is as follows. I have some objects in a database identified with
obj_id. For each object, there are some arrays, represented in the
normalized form like:
create table array
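The schema is truncated above, so the shape below is an assumption: one value per (obj_id, idx). Since SQLite does not promise the order in which rows reach an aggregate, one safe route (matching the "not reliably" reply later in the thread) is to do the ordered packing on the client from an ORDER BY query, as in this sketch:

```python
import sqlite3
from itertools import groupby

con = sqlite3.connect(":memory:")
# Assumed shape of the truncated schema: one element per (obj_id, idx).
con.execute("CREATE TABLE array(obj_id INTEGER, idx INTEGER, value TEXT)")
con.executemany("INSERT INTO array VALUES (?,?,?)",
                [(1, 2, "c"), (1, 1, "b"), (1, 0, "a"), (2, 0, "x")])

# Fetch rows in a guaranteed order, then group per object in Python,
# instead of relying on the order of aggregate-function callbacks.
rows = con.execute(
    "SELECT obj_id, value FROM array ORDER BY obj_id, idx").fetchall()
packed = {obj: [v for _, v in grp]
          for obj, grp in groupby(rows, key=lambda r: r[0])}
print(packed)  # {1: ['a', 'b', 'c'], 2: ['x']}
```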
On Wed, Jun 22, 2011 at 10:48 PM, Cyrille rssxpr...@free.fr wrote:
Alternatively, you could recompile SQLite.Interop.dll to use the
static CRT library (/MT).
Could you please let me know how to proceed to do this?
Download the source from
Thank you for these guidelines. Unfortunately, it seems that Visual
Studio is necessary and I just have the Express version. Can you confirm
that with the Express version, rebuilding is not possible?
Best regards,
Cyrille
On 23/06/2011 17:55, Random Coder wrote:
On Wed, Jun 22, 2011 at
On Wed, Jun 22, 2011 at 10:25:52 -0700, Rense Corten wrote:
CREATE TABLE table2 AS SELECT n1,n2 FROM (SELECT n1, n2 FROM table1
UNION SELECT n2 AS n1, n1 AS n2 FROM table1) WHERE (n1 < n2) GROUP BY
n1,n2;
Have you tried explaining it (prefix the whole query with explain query
plan and run it)?
On Thu, Jun 23, 2011 at 9:27 AM, Cyrille rssxpr...@free.fr wrote:
Thank you for these guidelines. Unfortunately, it seems that Visual
Studio is necessary and I just have the Express version. Can you confirm
that with the Express version, rebuilding is not possible?
Sorry, I'd think Visual C++
On 6/23/2011 11:37 AM, Jan Hudec wrote:
Is there a way to do group by query with well defined order of calls to the
aggregate function?
Not reliably.
So I want to denormalize the database by doing:
create temporary table packed_array (
obj_id integer primary key,
Wow, I'm almost embarrassed by how helpful you are, gentlemen. Thanks a lot.
As to RAM: I'm trying this on two different machines, one with 12 Gb
and one with 32 Gb RAM. I won't be able to get more in the near
future. Something that might be relevant is that in the case of the 32 Gb
machine, the
On 23 Jun 2011, at 7:20pm, Rense Corten wrote:
Here are the results of EXPLAIN QUERY PLAN SELECT n1, n2 FROM table1
WHERE n1 < n2 INTERSECT SELECT n2, n1 FROM table1 WHERE n2 < n1;
1|0|0|SCAN TABLE table1 (~437976176 rows)
2|0|0|SCAN TABLE table1 (~437976176 rows)
0|0|0|COMPOUND SUBQUERIES
I am currently testing with the C++ Express version. I thought
it was not possible to use it because it only allows access to the
SQLite.Interop.2010 project of the solution. However, I tried to
compile applying the settings you described. I am going to run some tests
and I will
Yes, I have run ANALYZE.
On Thu, Jun 23, 2011 at 11:56 AM, Simon Slavin slav...@bigfraud.org wrote:
On 23 Jun 2011, at 7:20pm, Rense Corten wrote:
Here are the results of EXPLAIN QUERY PLAN SELECT n1, n2 FROM table1
WHERE n1 < n2 INTERSECT SELECT n2, n1 FROM table1 WHERE n2 < n1;
1|0|0|SCAN
Any reason you can't add another field to your database?
0 = n1 equal to n2
1 = n1 < n2
2 = n1 > n2
Then create an index on that field.
Then your query plan would look like this:
sqlite> explain query plan select n1,n2 from table2 where flag = 1 intersect
select n2,n1 from table2 where flag = 2;
sele  order
Come to think of it, why don't you just write code to walk through the table
just once?
You can then print out your values just like you want.
Also...it sounds as though you might be able to insert your values in order.
i.e. always put the smaller value in the first field (I happen to have
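The "always put the smaller value in the first field" idea can be sketched like this (table and column names invented): store every pair in canonical (smaller, larger) orientation, and let a UNIQUE index drop the symmetric duplicate at insert time, so no later UNION/GROUP BY pass over the whole table is needed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1(n1 INTEGER, n2 INTEGER)")
# Toy data: (1,2) present in both orientations.
con.executemany("INSERT INTO table1 VALUES (?,?)", [(2, 1), (1, 2), (3, 1)])

# Canonicalize on the way in; ON CONFLICT IGNORE silently discards the
# symmetric duplicate. min()/max() with two arguments are SQLite's
# scalar (per-row) functions, not aggregates.
con.execute("""CREATE TABLE canon(
                   a INTEGER, b INTEGER,
                   UNIQUE(a, b) ON CONFLICT IGNORE)""")
con.execute("""INSERT INTO canon
               SELECT min(n1, n2), max(n1, n2) FROM table1""")
rows = con.execute("SELECT a, b FROM canon ORDER BY a, b").fetchall()
print(rows)  # [(1, 2), (1, 3)]
```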
On Thu, Jun 23, 2011 at 12:48:51 -0400, Igor Tandetnik wrote:
On 6/23/2011 11:37 AM, Jan Hudec wrote:
Is there a way to do group by query with well defined order of calls to the
aggregate function?
Not reliably.
So I want to denormalize the database by doing:
create temporary
On Thu, Jun 23, 2011 at 11:20:22 -0700, Rense Corten wrote:
Wow, I'm almost embarrassed by how helpful you are, gentlemen. Thanks a lot.
As to RAM: I'm trying this on two different machines, one with 12 Gb
and one with 32 Gb RAM. I won't be able to get more in the near
future. Something that
On Thu, Jun 23, 2011 at 8:37 AM, Jan Hudec b...@ucw.cz wrote:
Hello All,
Is there a way to do group by query with well defined order of calls to the
aggregate function?
The problem is as follows. I have some objects in a database identified with
obj_id. For each object, there are some
On 23 Jun 2011, at 9:31pm, Jan Hudec wrote:
You also need to make sure you are using
64-bit build of SQLite (32-bit build can only use between 2 and 3 GB and
that's not enough for such a huge database).
If you try to use a 32-bit compilation of SQLite to open a bigger database,
does
On 23 Jun 2011, at 7:20pm, Rense Corten wrote:
As to RAM: I'm trying this on two different machines, one with 12 Gb
and one with 32 Gb RAM. I won't be able to get more in the near
future. Something that might be relevant is that the case of the 32Gb
machine, the database is on an AFS.
Wait
Simon: no, AFS=Andrew File System :). Both my servers run Ubuntu
(64-bit). In case of the 12 Gb RAM machine, everything is done
locally. On the 32 Gb machine, I'm afraid I can't do that. And in fact
I've been using the command-line tool all along.
On Thu, Jun 23, 2011 at 2:45 PM, Simon Slavin
On 23 Jun 2011, at 10:56pm, Rense Corten wrote:
Simon: no, AFS=Andrew File System :). Both my servers run Ubuntu
(64-bit). In case of the 12 Gb RAM machine, everything is done
locally. On the 32 Gb machine, I'm afraid I can't do that. And in fact
I've been using the command-line tool all
On Thu, Jun 23, 2011 at 5:59 PM, Simon Slavin slav...@bigfraud.org wrote:
On 23 Jun 2011, at 10:56pm, Rense Corten wrote:
Simon: no, AFS=Andrew File System :). Both my servers run Ubuntu
(64-bit). In case of the 12 Gb RAM machine, everything is done
locally. On the 32 Gb machine, I'm
On Thu, Jun 23, 2011 at 10:20 PM, Rense Corten rcor...@gmail.com wrote:
Wow, I'm almost embarrassed by how helpful you are, gentlemen. Thanks a
lot.
As to RAM: I'm trying this on two different machines, one with 12 Gb
and one with 32 Gb RAM. I won't be able to get more in the near
future.
Hello,
My understanding is that an index is automatically created on any column
that is used in the primary key (or a composite index is created if the key
is composed of different columns). If this is correct then why don't I see
indexes for those in my table (I'm using SQLite Administrator and
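One likely explanation, checkable from the shell or a script (table names below are invented): an INTEGER PRIMARY KEY is an alias for the rowid and creates no separate index at all, while any other primary key is enforced by an automatic internal index named `sqlite_autoindex_...`, which some GUI tools simply don't display.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY is the rowid itself: no separate index is created.
con.execute("CREATE TABLE a(id INTEGER PRIMARY KEY, v TEXT)")
# Any other PRIMARY KEY is backed by an automatic internal index.
con.execute("CREATE TABLE b(name TEXT PRIMARY KEY, v TEXT)")

print(con.execute("PRAGMA index_list(a)").fetchall())  # []
names = [row[1] for row in con.execute("PRAGMA index_list(b)")]
print(names)  # ['sqlite_autoindex_b_1']
```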