Re: [sqlite] SQLite 3.7.8 version - insert query speed decreased

2011-09-27 Thread Roger Binns

On 09/27/2011 10:18 PM, YONGIL JANG wrote:
> Should I wait for a newer version?

You should provide sufficient information (ideally source code and data) so
that other people can reproduce what you are experiencing.

Roger


[sqlite] SQLite 3.7.8 version - insert query speed decreased

2011-09-27 Thread YONGIL JANG
Dear all,


I'm using SQLite 3.7.6.3 on my system.

I tried version 3.7.8 because the retrieval speed of DISTINCT and GROUP BY
queries has been improved.

Indeed, it shows improved performance in several parts of my application.


However, inserting 1,000 entries into SQLite is slower than with 3.7.6.3,
whether or not I use a transaction.

Some cases are two times slower than 3.7.6.3.


Are there any patches?

Should I wait for a newer version?


I can wait for a newer version, but I'd like to get this working as soon as
possible. ^^


Thank you.


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Richard Hipp
On Tue, Sep 27, 2011 at 7:16 PM, Tim Streater  wrote:

> On 27 Sep 2011 at 18:15, Richard Hipp  wrote:
>
> > On Tue, Sep 27, 2011 at 1:13 PM, Tim Streater 
> wrote:
> >
> >> The databases that get vacuumed tend to have a fair amount of traffic in
> >> and out. So it's good to compress them from time to time.
> >
> > Really?  Have you actually measured this to see if it makes a difference?
> > What happens if you never VACUUM?
>
> They'll get bigger and bigger. I imagine the users (if I ever have any)
> would start complaining.
>


You know that SQLite automatically reclaims and reuses space from rows you
DELETE and tables you DROP, right?

VACUUM repacks and defragments the database file.  The repacking might make
the file a little smaller, but probably not that much.  Defragmenting might
help performance, but again, probably not that much.  Hence I ask:  have you
actually measured the difference?
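A quick, rough way to measure is something like the following (a sketch only;
"app.db" and "sometable" are placeholders for your own file and table):

    ls -l app.db                                      # file size before
    time sqlite3 app.db 'SELECT count(*) FROM sometable;'   # a representative query
    time sqlite3 app.db 'VACUUM;'                     # how long the repack takes
    ls -l app.db                                      # file size after

Comparing the query timing before and after the VACUUM, on a copy of the real
database, would answer the defragmentation question too.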



>
> --
> Cheers  --  Tim
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Tim Streater
On 27 Sep 2011 at 18:15, Richard Hipp  wrote: 

> On Tue, Sep 27, 2011 at 1:13 PM, Tim Streater  wrote:
>
>> The databases that get vacuumed tend to have a fair amount of traffic in
>> and out. So it's good to compress them from time to time.
>
> Really?  Have you actually measured this to see if it makes a difference?
> What happens if you never VACUUM?

They'll get bigger and bigger. I imagine the users (if I ever have any) would 
start complaining.

--
Cheers  --  Tim


[sqlite] speeding up FTS4

2011-09-27 Thread Puneet Kishor
I have 

CREATE TABLE uris (
uri_id INTEGER PRIMARY KEY, 
uri TEXT,
uri_content TEXT,
downloaded_on DATETIME DEFAULT CURRENT_TIMESTAMP,
feed_history_id INTEGER
);

with 46608 rows

CREATE TABLE feed_history (
feed_history_id INTEGER PRIMARY KEY,
feed_id INTEGER,
scanned_on DATETIME DEFAULT CURRENT_TIMESTAMP
);

with 3276 rows

CREATE TABLE feeds (feed_id INTEGER PRIMARY KEY, feed_uri TEXT, project_id 
INTEGER);

with 79 rows

CREATE TABLE projects (project_id INTEGER PRIMARY KEY, project_name TEXT);

with 3 rows

CREATE INDEX idx_uris_downloaded_on ON uris (downloaded_on);
CREATE VIRTUAL TABLE fts_uri USING fts4 (uri_id, uri_content);

The database file is about 27 GB.


The following query takes (CPU Time: user 23.952698 sys 73.188765) and returns 
46608 rows:

SELECT u.uri_id uri_id, u.uri uri, u.downloaded_on, 
Snippet(fts_uri, '', '', '…', -1, 
64) snippet 
FROM fts_uri f 
JOIN uris u ON f.uri_id = u.uri_id 
JOIN feed_history fh ON u.feed_history_id = fh.feed_history_id 
JOIN feeds f ON fh.feed_id = f.feed_id 
JOIN projects p ON f.project_id = p.project_id 
WHERE p.project_id = 3
AND Datetime(u.downloaded_on) >= Datetime(p.project_start) 
AND fts_uri MATCH 'education,school' 
ORDER BY u.uri_id, downloaded_on DESC;


The EXPLAIN QUERY PLAN for the above query tells me

0|0|0|SCAN TABLE fts_uri AS f VIRTUAL TABLE INDEX 4: (~0 rows)
0|1|1|SEARCH TABLE uris AS u USING INTEGER PRIMARY KEY (rowid=?) (~1 rows)
0|2|2|SEARCH TABLE feed_history AS fh USING INTEGER PRIMARY KEY (rowid=?) 
(~1 rows)
0|3|3|SEARCH TABLE feeds AS f USING INTEGER PRIMARY KEY (rowid=?) (~1 rows)
0|4|4|SEARCH TABLE projects AS p USING INTEGER PRIMARY KEY (rowid=?) (~1 
rows)
0|0|0|USE TEMP B-TREE FOR ORDER BY


Is there anything I can do to speed this up?

--
Puneet Kishor


Re: [sqlite] Foreign key mismatch Error

2011-09-27 Thread Kees Nuyt
On Tue, 27 Sep 2011 14:58:06 +0400, Madhan Kumar
 wrote:

>Hello..
>  In sqlite (version 3.7.7.1),  I created two tables with foreign keys
>as shown below
>
>CREATE TABLE [*Employee*] (
>  [*EmpId*] INT NOT NULL,
>  [EmpName] VARCHAR2(50));
>
>CREATE TABLE [*Department*] (
>  [DeptID] INTEGER NOT NULL,
>  [DeptName] VARCHAR2(20),
>  [*EmpID*] INTEGER NOT NULL CONSTRAINT [FK_EMP_Dept] REFERENCES
>[Employee]([*EmpId*]));
>
>Using sqlite Expert Tool,
>I inserted a row in *Employee* Table - Success
>when i want to delete the same row from Employee table - gives error
>"foreign key mismatch".
>
>When i try to insert a row in *Department* Table, gives error "foreign key
>mismatch".
>
>I tried executing PRAGMA foreign_keys = ON
>and it returns "1"
>
>But still i am getting errors. Pls assist me to fix this issue.
>Thanks in advance.

I think you have the reference the wrong way around.
Employee should reference Department, the relation is n:1.

The way you have it now, Department references employee, which
means a department can only have one employee.
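For example, a minimal sketch of the reversed design (untested; the names are
only illustrative, and the parent key is declared PRIMARY KEY so the foreign
key references a uniquely indexed column):

    sqlite3 test.db <<'SQL'
    PRAGMA foreign_keys = ON;
    CREATE TABLE [Department] (
      [DeptID]   INTEGER PRIMARY KEY,
      [DeptName] VARCHAR2(20));
    CREATE TABLE [Employee] (
      [EmpId]   INTEGER PRIMARY KEY,
      [EmpName] VARCHAR2(50),
      [DeptID]  INTEGER NOT NULL
                CONSTRAINT [FK_Emp_Dept] REFERENCES [Department]([DeptID]));
    SQL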
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 23:11, Scott Hess wrote:

> On Tue, Sep 27, 2011 at 2:07 PM, Patrick Proniewski wrote:
> 
>> On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:
>>> gawk has fflush()
>> 
>> none of them is available out of the box on Mac OS X, or FreeBSD. gawk can
>> be installed, but I'd rather use my "while true" loop instead of installing
>> gawk.
>> 
> 
> Did you try it?

Nope, I don't have gawk, so I didn't even think about trying it.

>  On my Mac fflush() fixes it.

Indeed, thanks. So it's not specific to gawk; that's great news! My problem is 
solved.

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:44, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski :
> 
>>> Take a look at a utility called dstat.
> 
>> no, it's linux only.
> 
> But it is written in Python - so it should be relatively
> transportable.

And it relies on /proc; Mac OS X does not have /proc.

patpro



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:41, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski :
> 
>> That's what I do, but I think using a loop is ugly, and I would like to find 
>> a way
>> to feed data continuously into sqlite.
> 
> I can't see why you would want to do this more than once every minute
> - or do you?

The granularity I'm looking for is between 1 second and 10 seconds. Cron is not 
an option here.

> Why, exactly, do you want to do this anyway? I'm interested because I've
> done something similar.

I have a performance issue on a file server hooked to a RAID enclosure and 
exporting the corresponding volume via NFS.
The problem seems to be in the RAID itself, so I'm logging I/O performance 
during production to detect anomalies.
Sample: http://perso.univ-lyon2.fr/~pproniew/kbpt-2011-09-27-22.png (Bézier 
smoothing, 24 hours of data).
We will replace the storage in a few days, and this iostat logging will help 
compare performance before and after.

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Scott Hess
On Tue, Sep 27, 2011 at 2:07 PM, Patrick Proniewski wrote:

> On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:
> > gawk has fflush()
>
> none of them is available out of the box on Mac OS X, or FreeBSD. gawk can
> be installed, but I'd rather use my "while true" loop instead of installing
> gawk.
>

Did you try it?  On my Mac fflush() fixes it.

-scott


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:18, Gabor Grothendieck wrote:

> gawk has fflush()


On 27 sept. 2011, at 20:29, Roger Andersson wrote:

> stdbuf?
> unbuffer?


none of them is available out of the box on Mac OS X, or FreeBSD. gawk can be 
installed, but I'd rather use my "while true" loop instead of installing gawk.

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Paul Linehan
2011/9/27 Patrick Proniewski :

>> Take a look at a utility called dstat.

> no, it's linux only.


But it is written in Python - so it should be relatively
transportable. I've even
managed to modify the code myself - and if I can do it, anybody can!  8-)


Paul...


> patpro

-- 


Hmmm a "life": wonder where I can download one of those?


lineh...@tcd.ie

Mob: 00 353 86 864 5772


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread David Garfield
Patrick Proniewski writes:
> On 27 sept. 2011, at 20:14, David Garfield wrote:
> 
> > Any entry in a pipe could be buffering.  In a quick test here, awk is
> > buffering.  To find the buffering, try using the pieces up to a given
> > stage with " | cat " added at the end.  If this buffers, you've found
> > the problem.
> 
> as far as my understanding goes, the simple fact I don't have my last output 
> into a tty is enough to trigger buffering.

Actually, any program that uses stdout and doesn't explicitly prevent
buffering will get buffered.  Some programs disable block buffering.  Some
(like, probably, iostat) explicitly flush the buffers.  Some don't use
stdout.  The others get buffered.

> >  Unbuffered output is usually slower, so it is normally
> > done only to a terminal.  I think the only easy way to externally
> > disable the buffer is to wrap the program in a pseudo-tty.
> 
> apparently... not so easy by the way :)

Well, I think there are three choices for arbitrary programs:
1) Wrap it in a pseudo-tty.  I think I've seen a program to do this,
   but I don't remember where/what.
2) Override isatty() through an LD_PRELOAD.
3) Change the source, either to the program or to libc.
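For option 1, on systems without expect's unbuffer, the BSD script(1) utility
can serve as the pseudo-tty wrapper.  An untested sketch (insert.awk would
hold the awk program from the original pipeline):

    script -q /dev/null sh -c 'iostat -d -w 10 disk0 | awk -f insert.awk' |
    sqlite3 iostat.db

awk should then see a tty on its stdout and line-buffer, while script copies
the pty output straight through to the pipe.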

> > Alternatively, look for an option that lets you explicitly unbuffer.
> > (for instance, in perl, do: $| = 1; )
> 
> nothing in awk, but I could try sed instead (-l  Make output line 
> buffered)
> 
> regards,
> patpro

--David Garfield



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Paul Linehan
2011/9/27 Patrick Proniewski :

> That's what I do, but I think using a loop is ugly, and I would like to find 
> a way
> to feed data continuously into sqlite.


I can't see why you would want to do this more than once every minute
- or do you?

If not,
==
cron iostat > myfile.

Parse myfile - get data out. Insert into SQLite.

Delete myfile.
=== Should take a max of 5 seconds =

Then repeat cron for the next min, 2 mins 5/10/whatever...

Not elegant I know, but it will do the job.
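Roughly, as a sketch (paths, schedule and table layout purely illustrative):

    # crontab entry, e.g. once a minute:
    # * * * * * /usr/local/bin/iostat-snapshot.sh

    #!/bin/sh
    # iostat-snapshot.sh: take one iostat sample and turn it into one INSERT
    iostat -c 2 -d -w 1 disk0 | tail -1 > /tmp/iostat.latest
    awk '{print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' /tmp/iostat.latest |
    sqlite3 iostat.db
    rm -f /tmp/iostat.latest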

Why, exactly, do you want to do this anyway? I'm interested because I've
done something similar.


Paul...





> patpro

-- 


Hmmm a "life": wonder where I can download one of those?


lineh...@tcd.ie

Mob: 00 353 86 864 5772


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Paul Linehan
2011/9/27 Patrick Proniewski :

> That's what I do, but I think using a loop is ugly, and I would like to find a
> way to feed data continuously into sqlite.


cron


Paul...

> patpro


-- 


Hmmm a "life": wonder where I can download one of those?


lineh...@tcd.ie

Mob: 00 353 86 864 5772


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Scott Hess
On Tue, Sep 27, 2011 at 11:18 AM, Gabor Grothendieck <
ggrothendi...@gmail.com> wrote:

> On Tue, Sep 27, 2011 at 2:14 PM, David Garfield
>  wrote:
> > Any entry in a pipe could be buffering.  In a quick test here, awk is
> > buffering.  To find the buffering, try using the pieces up to a given
> > stage with " | cat " added at the end.  If this buffers, you've found
> > the problem.  Unbuffered output is usually slower, so it is normally
> > done only to a terminal.  I think the only easy way to externally
> > disable the buffer is to wrap the program in a pseudo-tty.
> > Alternatively, look for an option that lets you explicitly unbuffer.
> > (for instance, in perl, do: $| = 1; )
> >
>
> gawk has fflush()


To clarify, you need to add "fflush();" at the end of your awk command.
 iostat is flushing, and awk is flushing IFF the output is to a terminal.
 But if it's to a pipe, it's not flushing, so you need to do it manually.
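In the pipeline from the original post that would look something like this
(untested sketch):

    iostat -d -w 10 disk0 |
    awk '!/[a-zA-Z]/ {print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"; fflush();}' |
    sqlite3 iostat.db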

-scott


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Roger Andersson

 On 09/27/11 20:14, David Garfield wrote:

Any entry in a pipe could be buffering.  In a quick test here, awk is
buffering.  To find the buffering, try using the pieces up to a given
stage with " | cat " added at the end.  If this buffers, you've found
the problem.  Unbuffered output is usually slower, so it is normally
done only to a terminal.  I think the only easy way to externally
disable the buffer is to wrap the program in a pseudo-tty.
Alternatively, look for an option that lets you explicitly unbuffer.
(for instance, in perl, do: $| = 1; )

stdbuf?
unbuffer?

/Roger



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:14, David Garfield wrote:

> Any entry in a pipe could be buffering.  In a quick test here, awk is
> buffering.  To find the buffering, try using the pieces up to a given
> stage with " | cat " added at the end.  If this buffers, you've found
> the problem.

As far as my understanding goes, the simple fact that my last output doesn't 
go to a tty is enough to trigger buffering.


>  Unbuffered output is usually slower, so it is normally
> done only to a terminal.  I think the only easy way to externally
> disable the buffer is to wrap the program in a pseudo-tty.

apparently... not so easy by the way :)

> Alternatively, look for an option that lets you explicitly unbuffer.
> (for instance, in perl, do: $| = 1; )

Nothing in awk, but I could try sed instead (-l makes output line buffered), as in the sketch below.
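An untested sketch of that (BSD sed; the pattern assumes the three numeric
columns of iostat -d):

    iostat -d -w 10 disk0 |
    sed -l -n 's/^ *\([0-9.][0-9.]*\) *\([0-9.][0-9.]*\) *\([0-9.][0-9.]*\).*/INSERT INTO io VALUES(datetime("now","localtime"),\1,\2,\3);/p' |
    sqlite3 iostat.db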

regards,
patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Gabor Grothendieck
On Tue, Sep 27, 2011 at 2:14 PM, David Garfield
 wrote:
> Any entry in a pipe could be buffering.  In a quick test here, awk is
> buffering.  To find the buffering, try using the pieces up to a given
> stage with " | cat " added at the end.  If this buffers, you've found
> the problem.  Unbuffered output is usually slower, so it is normally
> done only to a terminal.  I think the only easy way to externally
> disable the buffer is to wrap the program in a pseudo-tty.
> Alternatively, look for an option that lets you explicitly unbuffer.
> (for instance, in perl, do: $| = 1; )
>

gawk has fflush()


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 20:04, Paul Linehan wrote:

> 2011/9/27 Patrick Proniewski :
> 
> 
>> I'm facing a challenging problem. I want to log some data into an SQLite3 DB.
>> Data comes from a system command (iostat) in an endless stream, one row every 
>> X seconds:
> 
> 
> Take a look at a utility called dstat. 

no, it's linux only.

patpro


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread David Garfield
Any entry in a pipe could be buffering.  In a quick test here, awk is
buffering.  To find the buffering, try using the pieces up to a given
stage with " | cat " added at the end.  If this buffers, you've found
the problem.  Unbuffered output is usually slower, so it is normally
done only to a terminal.  I think the only easy way to externally
disable the buffer is to wrap the program in a pseudo-tty.
Alternatively, look for an option that lets you explicitly unbuffer.
(for instance, in perl, do: $| = 1; )

--David Garfield

Patrick Proniewski writes:
> On 27 sept. 2011, at 18:31, Roger Andersson wrote:
> 
> > I do not know if tee makes any difference or if it's available on Mac?
> > http://unixhelp.ed.ac.uk/CGI/man-cgi?tee
> 
> tee is available, but no more luck here, as it won't allow to disable the 
> buffer.
> 
> 
> > iostat -d -w 10 disk0 | tee -a logfile
> > and then
> > tail -f logfile | awk '!/[a-zA-Z]/ {print "INSERT INTO io 
> > VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
> > sqlite3 iostat.db
> 
> same problem here ;)
> 
> patpro
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Paul Linehan
2011/9/27 Patrick Proniewski :


> I'm facing a challenging problem. I want to log some data into an SQLite3 DB.
> Data comes from a system command (iostat) in an endless stream, one row every X 
> seconds:


Take a look at a utility called dstat. I've twiddled with the source and
have its output go to a .csv. Basically, I've done what you want to do,
except with Oracle.

The plan then is to cron (or within Oracle - use the db scheduler) a job that
copies the data into the database - then at certain times delete down the
.csv file so that you're not continually rejecting records already in the db.

I have implemented this with Oracle - need to do a bit of work - but it's
part of what I believe *_should_* be easy -  i.e. put basic system metrics
directly into a database so that such data can be analysed over a long
period, rather than "Oh, what did iostat say yesterday?".

When I have it fully working with Oracle (XE 10), I plan to get it working
with SQLite - it should be reasonably easy using .csv and cron jobs.
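For the SQLite end, a minimal sketch of the .csv route with the sqlite3 shell
(file, table and column names are only illustrative, and the .csv is assumed
to have no header row):

    sqlite3 metrics.db <<'EOF'
    CREATE TABLE IF NOT EXISTS io(ts TEXT, kb_per_t REAL, tps REAL, mb_per_s REAL);
    .mode csv
    .import /var/log/iostat.csv io
    EOF

Truncating the .csv after a successful import avoids re-importing (and
rejecting) rows that are already in the database, as described above.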


Paul...

-- 


Hmmm a "life": wonder where I can download one of those?


lineh...@tcd.ie

Mob: 00 353 86 864 5772


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 18:31, Roger Andersson wrote:

> I do not know if tee makes any difference or if it's available on Mac?
> http://unixhelp.ed.ac.uk/CGI/man-cgi?tee

tee is available, but no more luck here, as it doesn't allow disabling the 
buffer.


> iostat -d -w 10 disk0 | tee -a logfile
> and then
> tail -f logfile | awk '!/[a-zA-Z]/ {print "INSERT INTO io 
> VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
> sqlite3 iostat.db

same problem here ;)

patpro


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Richard Hipp
On Tue, Sep 27, 2011 at 1:13 PM, Tim Streater  wrote:

>
> The databases that get vacuumed tend to have a fair amount of traffic in
> and out. So it's good to compress them from time to time.
>

Really?  Have you actually measured this to see if it makes a difference?
What happens if you never VACUUM?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Tim Streater
On 27 Sep 2011 at 12:23, Simon Slavin  wrote: 

> On 27 Sep 2011, at 10:49am, Tim Streater wrote:
>
>> On 27 Sep 2011 at 00:19, Simon Slavin  wrote:
>>
>>> I believe that VACUUM is one of the statements which counts as changing the
>>> schema, because it does its work by rewriting entire tables and/or indexes.
>>> So don't do a VACUUM when you're doing multi-process access.  Cut out the
>>> VACUUMs and see whether you still get this result code.
>>
>> Ah, thanks, that's a good clue. I can do some work in that area to ensure
>> that the VACUUMs are done at a quiet moment.
>
> You might not need VACUUM at all.  I might use it just before I make a copy of
> the database file for transfer or archive, if it was important to me that the
> file was as small as possible.  But I have quite a few databases I've never
> bothered using VACUUM on at all.  Most of them shrink only by small amounts
> and probably start growing immediately afterwards anyway.

The databases that get vacuumed tend to have a fair amount of traffic in and 
out. So it's good to compress them from time to time. Not doing that when there 
is traffic has ended up being a couple of lines of PHP and a couple of lines of 
JavaScript, so if that avoids the issue in future I shall be well pleased.

--
Cheers  --  Tim


Re: [sqlite] SQLite on armv7l GNU/Linux - JDBC issues -anysuggestions?!

2011-09-27 Thread Katie Blake
Hello again,

Just to let you know that I am running the same code on the same OS but with 
OpenJDK, and my issue has now magically disappeared! This confirms Christian's 
suspicion that the JVM was the root of the issue, I think.

Best Regards,

Katie,

-Original Message-
From: Katie Blake 
Sent: 26 September 2011 19:08
To: 'General Discussion of SQLite Database'
Subject: RE: [sqlite] SQLite on armv7l GNU/Linux - JDBC issues -anysuggestions?!

Thank you so much for this suggestion Michael.

I got JamVM for my platform and now see a different error - does anyone have 
any thoughts?!

( Once again the line in my code which causes the error is conn = 
DriverManager.getConnection("jdbc:sqlite:testdb.db"); )

Logging in to database
running in pure-java mode
running in pure-java mode
Got jdbc driver
java.lang.reflect.InvocationTargetException
   at java.lang.reflect.VMMethod.invoke(Native Method)
   at java.lang.reflect.Method.invoke(Method.java:327)
   at jamvm.java.lang.JarLauncher.main(JarLauncher.java:50)
Caused by: java.lang.reflect.InvocationTargetException
   at java.lang.reflect.VMMethod.invoke(Native Method)
   at java.lang.reflect.Method.invoke(Method.java:327)
   at 
org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
   at java.lang.reflect.VMMethod.invoke(Native Method)
   ...2 more
Caused by: java.lang.NullPointerException
   at org.sqlite.NestedDB$CausedSQLException.fillInStackTrace(NestedDB.java:649)
   at java.lang.Throwable.<init>(Throwable.java:161)
   at java.lang.Exception.<init>(Exception.java:78)
   at java.sql.SQLException.<init>(SQLException.java:76)
   at java.sql.SQLException.<init>(SQLException.java:113)
   at org.sqlite.NestedDB$CausedSQLException.<init>(NestedDB.java:626)
   at org.sqlite.NestedDB._open(NestedDB.java:63)
   at org.sqlite.DB.open(DB.java:86)
   at org.sqlite.Conn.open(Conn.java:140)
   at org.sqlite.Conn.<init>(Conn.java:57)
   at org.sqlite.JDBC.createConnection(JDBC.java:77)
   at org.sqlite.JDBC.connect(JDBC.java:64)
   at java.sql.DriverManager.getConnection(DriverManager.java:165)
   at java.sql.DriverManager.getConnection(DriverManager.java:204)
   at DataTest.main(DataTest.java:27)
   at java.lang.reflect.VMMethod.invoke(Native Method)
   ...5 more

Best Regards,

Katie

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Black, Michael (IS)
Sent: 26 September 2011 14:41
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite on armv7l GNU/Linux - JDBC issues -anysuggestions?!

I found an old post here that claims jamvm fixed their same (or similar?) 
assertion problem with that jdbc driver.



https://evolvis.org/pipermail/jalimo-info/2009-March/000299.html





Michael D. Black

Senior Scientist

NG Information Systems

Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Katie Blake [kbl...@sentec.co.uk]
Sent: Monday, September 26, 2011 6:39 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] SQLite on armv7l GNU/Linux - JDBC issues 
-anysuggestions?!

Christian! Hi !

What a small world it is :) Thank you for helping me out!

I have tried the Zentus driver, and have seen the same issue. I am now looking 
at getting http://www.ch-werner.de/javasqlite/ compiled for my platform - 
fingers crossed!

I don't see any difference if I run the test program with a preceding java 
-Djava.compiler=NONE ... am I doing something foolish?

Thanks so much again, very much hope that life is treating you well at the 
moment.

Katie,



Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Roger Andersson

 On 09/27/11 07:48, Patrick Proniewski wrote:

I thought I could easily pipe data into SQLite:

iostat -d -w 10 disk0 |\
awk '!/[a-zA-Z]/ {print "INSERT INTO io 
VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
sqlite3 iostat.db

but it won't work, because sqlite3 won't record any data until the iostat 
command ends. And of course, this iostat command will never end.
So I'm stuck with a working but very ugly script:

while true; do
  iostat -c 2 -d -w 10 disk0 |\
  tail -1 |\
  awk '!/[a-zA-Z]/ {print "INSERT INTO io 
VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\
  sqlite3 iostat.db
done

endless loop, forking iostat for 2 rows of data (-c 2), keeping only the last 
row because the first one is an artifact (tail -1).
I've tried various solutions with named pipes, file descriptors redirections… 
but none worked, because they all seem to require the data stream to end before 
feeding data into the DB.

Any idea?


I do not know if tee makes any difference or if it's available on Mac?
http://unixhelp.ed.ac.uk/CGI/man-cgi?tee

iostat -d -w 10 disk0 | tee -a logfile
and then
tail -f logfile | awk '!/[a-zA-Z]/ {print "INSERT INTO io 
VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |\

sqlite3 iostat.db

/Roger


[sqlite] RE: RE: Is SQLite supporting x64?

2011-09-27 Thread mmxida
Well, I'll check it out then. Thank you, Richard.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-bounces@sqlite.
org] On Behalf Of Richard Hipp
Sent: 26 September 2011 0:47
To: General Discussion of SQLite Database
Subject: Re: [sqlite] RE: Is SQLite supporting x64?

2011/9/25 mmxida 

> Roger, thanks for your reply; that almost clears my doubts. BUT, did 
> the SQLite development team already consider the 64-bit issue when they 
> began the project? Is all the C code independent of the 
> platform-specific API? As I know, a pointer is 64 bits long on 64-bit 
> Windows, as well as an int.
> How do they avoid the difference between the win32 and x64 in a 
> implementation level?
>

You can see at http://www.sqlite.org/checklists/3070800#c9 that we do 100%
MC/DC testing on SQLite for both Win32 and Win64 with no issues detected.



>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-bounces@sqlite.
> org] On Behalf Of Roger Binns
> Sent: 25 September 2011 16:44
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Is SQLite supporting x64?
>
> On 09/24/2011 10:00 PM, mmxida wrote:
> > In short, can the SQLite binary file work well also on 64-bit Windows
OS?
>
> The SQLite website download is a 32 bit DLL and will only work in 32 
> bit processes (that is independent of the OS being 32 or 64 bit).  
> However you can compile the code yourself (use the amalgamation) for 
> your 64 bit target and it will work just fine.
>
> In fact SQLite works quite happily on 32 bit and 64 bit platforms, in 
> either endianness and on the LP64 model (typically non-Windows) and P64 
> (Windows), as well as on numerous different CPUs and operating 
> systems.  The file format is always the same and does not depend on any 
> of those factors.
>
> Roger
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



--
D. Richard Hipp
d...@sqlite.org


[sqlite] RE: RE: Is SQLite supporting x64?

2011-09-27 Thread mmxida
Thanks for your information, Roger, it helps a lot.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-bounces@sqlite.
org] On Behalf Of Roger Binns
Sent: 26 September 2011 4:26
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] RE: Is SQLite supporting x64?


On 09/25/2011 02:09 AM, mmxida wrote:
> Did the SQLite development team already consider the 64-bit issue when 
> they began the project?

Yes.  The Unix world had 64 bit platforms way back in the mid-90s (e.g. DEC
Alpha).  Not to mention different endianness on different CPUs.

> Is all the C code independent of the platform-specific API?

In terms of APIs there is a layer called VFS that adapts to each platform
since each platform has different calls for things like opening files or
doing locking.  SQLite comes with VFS for Unix (POSIX) and Windows (which at
one point supported Windows 9x, Windows NT/2000/XP/7 and Windows CE).  If
you had a custom operating system you can make your own VFS.

  http://www.sqlite.org/c3ref/vfs.html

> As I know, a pointer is 64 bits long on 64-bit Windows, as well as an int.

Actually Windows uses the model known as LLP64 - only pointers are 64 bit
with int and long being 32 bit.  The 'long long' type is 64 bit.  Other 64
bit platforms are LP64 where in addition to pointers being 64 bit the long
type is as well.

  http://en.wikipedia.org/wiki/64-bit#64-bit_data_models

If you look at the start of the SQLite source you can see that it defines an
internal sqlite_int64 type for the cases where it needs a 64 bit type with
the declaration working no matter what the underlying data model is.

> How do they handle the difference between Win32 and x64 at the 
> implementation level?

The types required by each platform's API are used.  For example the Windows
API says to use HANDLE and DWORD for various things (e.g. open files), so that
is exactly what the SQLite code does.

Roger


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Black, Michael (IS)
I love C myself... this does what you want, I think.  It only ever runs 3 
processes.
And since you're apparently not worried about SQL errors, there is no need to 
use the SQLite library.
A couple of changes to match your iostat output are all that's needed.
I assume you know C (a rather large assumption, I must admit).

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
  FILE *pipe1, *pipe2;
  char buf[4096];
  char scan[4096];
  if (argc != 2) {
    fprintf(stderr, "Usage: %s [device]\n", argv[0]);
    exit(1);
  }
  /* read samples from iostat ... */
  //sprintf(buf, "iostat -d -w 10 %s", argv[1]);
  sprintf(buf, "iostat %s 2", argv[1]);
  pipe1 = popen(buf, "r");
  if (pipe1 == NULL) {
    perror("iostat");
    exit(1);
  }
  /* ... and write INSERT statements to an sqlite3 shell */
  pipe2 = popen("sqlite3 test.db", "w");
  if (pipe2 == NULL) {
    perror("sqlite3");
    exit(1);
  }
  /* scan format, e.g. "disk0 %lf %lf %lf": only lines starting with the device name match */
  sprintf(scan, "%s %%lf %%lf %%lf", argv[1]);
  while (fgets(buf, sizeof(buf), pipe1)) {
    char sql[4096];
    double d1, d2, d3;
    int n = sscanf(buf, scan, &d1, &d2, &d3);
    if (n == 3) {
      printf("%s %f %f %f\n", argv[1], d1, d2, d3);
      sprintf(sql, "insert into io values(datetime('now','localtime'),%f,%f,%f);", d1, d2, d3);
      fprintf(pipe2, "%s\n", sql);
      fflush(pipe2);   /* push each INSERT to sqlite3 immediately */
    }
  }
  pclose(pipe2);
  pclose(pipe1);
  return 0;
}


Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Patrick Proniewski [pat...@patpro.net]
Sent: Tuesday, September 27, 2011 6:10 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] feed "endless" data into sqlite, thru a shell script


On 27 sept. 2011, at 08:31, Baptiste Daroussin wrote:

> You don't need awk :)
>
> iostat -d -w 10 disk0 | while read a b c; do case $a in *[a-zA-Z]*)
> continue ;; *) sqlite3 iostat.db "INSERT INTO io
> VALUES(datetime('now', 'localtime'), \"$a\", \"$b\", \"$c\");" ;;
> esac; done


Ok, this forks less, but still, you can't get rid of the loop ;) (I love awk)

thanks,
patpro



Re: [sqlite] Problem with using WAL journal mode in embedded system (disk I/O error)

2011-09-27 Thread Richard Hipp
On Tue, Sep 27, 2011 at 8:14 AM, Black, Michael (IS)  wrote:

>
> Richard almost got it.  According to this link JFFS doesn't support
> MAP_SHARED.  Can another flag be used?
>

No.  The whole point of doing the mmap() is so that different processes
accessing the same database file can share some memory in order to
coordinate their activities.  MAP_SHARED is absolutely required in this
context.

If JFFS does not support MAP_SHARED, then it won't support WAL.

Except, if you only have a single process accessing the database file, then
you can set "PRAGMA locking_mode=EXCLUSIVE" and the shared memory will not
be used.
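A minimal sketch of that single-process setup (the file name is illustrative;
the locking mode is set first, before WAL is activated on the connection):

    sqlite3 /data/test.db <<'SQL'
    PRAGMA locking_mode=EXCLUSIVE;
    PRAGMA journal_mode=WAL;
    .tables
    SQL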



>
> http://lists.busybox.net/pipermail/uclibc/2006-November/016838.html
>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
>
>
>
>
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
> behalf of Richard Hipp [d...@sqlite.org]
> Sent: Monday, September 26, 2011 1:54 PM
> To: General Discussion of SQLite Database
> Subject: EXT :Re: [sqlite] Problem with using WAL journal mode in embedded
> system (disk I/O error)
>
>
> On Mon, Sep 26, 2011 at 2:48 PM, Korey Calmettes  >wrote:
>
> > Richard,
> >
> > Thanks for the tip of recreating the experiment in a separate directory.
> > However the same error occurs.
> >
> > sqlite> .tables
> > (5386) os_unix.c:28414: (22) mmap(/data/test.db-shm) -
> > (5386) statement aborts at 109: [SELECT name FROM sqlite_master WHERE
> > type IN ('table','view') AND name NOT LIKE 'sqlite_%' UNION ALL SELECT
> > name FROM sqlite_temp_master WHERE type IN ('table','view') ORDER BY 1]
> > disk I/O error
> > Error: disk I/O error
> > sqlite>
> >
> > Any additional ideas why mmap would be failing?
> >
> >
> Perhaps your kernel doesn't support mmap() on JFFS filesystems?  (That's
> just a guess.)
>
>
>
> >  Korey
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Richard Hipp
> > Sent: September 26, 2011 11:36 AM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] Problem with using WAL journal mode in embedded
> > system (disk I/O error)
> >
> > On Mon, Sep 26, 2011 at 2:21 PM, Korey Calmettes
> > wrote:
> >
> > > Thanks for responding so quickly.
> > >
> > > I performed the same general experiment.  Here is the output when the
> > > error occurred.
> > >
> > > sqlite> .tables
> > > (5386) os_unix.c:28414: (22) mmap(//test.db-shm) -
> > >
> >
> > Looks like a mmap() call is failing here:
> > http://www.sqlite.org/src/artifact/10e0c4dcdb?ln=4009-4011
> >
> > Looks like you are using a database in the root of the filesystem
> > ("/test.db").  Do you get the same error if you put the database file in
> > a directory someplace?
> >
> >
> > > (5386) statement aborts at 109: [SELECT name FROM sqlite_master WHERE
> > > type IN ('table','view') AND name NOT LIKE 'sqlite_%' UNION ALL SELECT
> >
> > > name FROM sqlite_temp_master WHERE type IN ('table','view') ORDER BY
> > > 1] disk I/O error
> > > Error: disk I/O error
> > > sqlite>
> > >
> > > Any ideas?
> > >
> > > Korey
> > >
> > >
> > > -Original Message-
> > > From: sqlite-users-boun...@sqlite.org
> > > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Richard Hipp
> > > Sent: September 26, 2011 8:38 AM
> > > To: General Discussion of SQLite Database
> > > Subject: Re: [sqlite] Problem with using WAL journal mode in embedded
> > > system (disk I/O error)
> > >
> > > On Mon, Sep 26, 2011 at 11:30 AM, Korey Calmettes
> > > wrote:
> > >
> > > > ~ # sqlite3 test.db
> > > > SQLite version 3.7.7.1 2011-06-28 17:39:05 Enter ".help" for
> > > > instructions Enter SQL statements terminated with a ";"
> > > > sqlite> .tables
> > > > test
> > > > sqlite> pragma journal_mode=wal;
> > > > wal
> > > > sqlite> .tables
> > > > Error: disk I/O error
> > > > sqlite>
> > > >
> > > >
> > > > Any ideas on what may be causing this error and any suggestions for
> > > > a work-around?
> > > >
> > >
> > > Type:
> > >
> > >.log stdout
> > >
> > > at the beginning of this session and run your experiment again please.
> > > The extra output might give some clues.
> > >
> > >
> > > --
> > > D. Richard Hipp
> > > d...@sqlite.org
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@sqlite.org
> > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@sqlite.org
> > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > >
> >
> >
> >
> > --
> > D. Richard Hipp
> > d...@sqlite.org
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin

Re: [sqlite] Problem with using WAL journal mode in embedded system (disk I/O error)

2011-09-27 Thread Black, Michael (IS)

Richard almost got it.  According to this link JFFS doesn't support 
MAP_SHARED.  Can another flag be used?

http://lists.busybox.net/pipermail/uclibc/2006-November/016838.html

Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Richard Hipp [d...@sqlite.org]
Sent: Monday, September 26, 2011 1:54 PM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Problem with using WAL journal mode in embedded 
system (disk I/O error)


On Mon, Sep 26, 2011 at 2:48 PM, Korey Calmettes wrote:

> Richard,
>
> Thanks for the tip of recreating the experiment in a separate directory.
> However the same error occurs.
>
> sqlite> .tables
> (5386) os_unix.c:28414: (22) mmap(/data/test.db-shm) -
> (5386) statement aborts at 109: [SELECT name FROM sqlite_master WHERE
> type IN ('table','view') AND name NOT LIKE 'sqlite_%' UNION ALL SELECT
> name FROM sqlite_temp_master WHERE type IN ('table','view') ORDER BY 1]
> disk I/O error
> Error: disk I/O error
> sqlite>
>
> Any additional ideas why mmap would be failing?
>
>
Perhaps your kernel doesn't support mmap() on JFFS filesystems?  (That's
just a guess.)



>  Korey
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Richard Hipp
> Sent: September 26, 2011 11:36 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Problem with using WAL journal mode in embedded
> system (disk I/O error)
>
> On Mon, Sep 26, 2011 at 2:21 PM, Korey Calmettes
> wrote:
>
> > Thanks for responding so quickly.
> >
> > I performed the same general experiment.  Here is the output when the
> > error occurred.
> >
> > sqlite> .tables
> > (5386) os_unix.c:28414: (22) mmap(//test.db-shm) -
> >
>
> Looks like a mmap() call is failing here:
> http://www.sqlite.org/src/artifact/10e0c4dcdb?ln=4009-4011
>
> Looks like you are using a database in the root of the filesystem
> ("/test.db").  Do you get the same error if you put the database file in
> a directory someplace?
>
>
> > (5386) statement aborts at 109: [SELECT name FROM sqlite_master WHERE
> > type IN ('table','view') AND name NOT LIKE 'sqlite_%' UNION ALL SELECT
>
> > name FROM sqlite_temp_master WHERE type IN ('table','view') ORDER BY
> > 1] disk I/O error
> > Error: disk I/O error
> > sqlite>
> >
> > Any ideas?
> >
> > Korey
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Richard Hipp
> > Sent: September 26, 2011 8:38 AM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] Problem with using WAL journal mode in embedded
> > system (disk I/O error)
> >
> > On Mon, Sep 26, 2011 at 11:30 AM, Korey Calmettes
> > wrote:
> >
> > > ~ # sqlite3 test.db
> > > SQLite version 3.7.7.1 2011-06-28 17:39:05 Enter ".help" for
> > > instructions Enter SQL statements terminated with a ";"
> > > sqlite> .tables
> > > test
> > > sqlite> pragma journal_mode=wal;
> > > wal
> > > sqlite> .tables
> > > Error: disk I/O error
> > > sqlite>
> > >
> > >
> > > Any ideas on what may be causing this error and any suggestions for
> > > a work-around?
> > >
> >
> > Type:
> >
> >.log stdout
> >
> > at the beginning of this session and run your experiment again please.
> > The extra output might give some clues.
> >
> >
> > --
> > D. Richard Hipp
> > d...@sqlite.org
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



--
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Simon Slavin

On 27 Sep 2011, at 1:07pm, Patrick Proniewski wrote:

> On 27 sept. 2011, at 13:44, Simon Slavin wrote:
> 
>> If you're using the OS X version, I don't think you need to run iostat as a 
>> continuous process.  Write a shell script with a timed loop which runs 
>> iostat without the '-w 10'.  So you could write a script which does
> 
> That's what I do, but I think using a loop is ugly, and I would like to find 
> a way to feed data continuously into sqlite. 

Can't be done using piping because of the problem pointed out earlier: the 
writing process's output is buffered until it fills up a block.  That is a good 
strategy for normal piping purposes and completely useless for anything that 
must be up-to-the-second.  I think the scripting solution will do better for you.

Simon.


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 13:44, Simon Slavin wrote:

> On 27 Sep 2011, at 12:03pm, Patrick Proniewski wrote:
> 
>> You're assuming I'm running Linux, but I'm running Mac OS X Server (or 
>> FreeBSD by the way), so no /proc here, and iostat is probably working 
>> differently too.
>> 
> 
> If you're using the OS X version, I don't think you need to run iostat as a 
> continuous process.  Write a shell script with a timed loop which runs iostat 
> without the '-w 10'.  So you could write a script which does

That's what I do, but I think using a loop is ugly, and I would like to find a 
way to feed data continuously into sqlite. 

patpro


Re: [sqlite] undefined symbol/reference: sqlite3_version. Please!

2011-09-27 Thread James Hartley
On Sun, Sep 25, 2011 at 7:04 PM, JFC Morfin  wrote:

> At 02:44 26/09/2011, Simon Slavin wrote:
>
>  On 26 Sep 2011, at 1:35am, JFC Morfin wrote:
>>
>> > I have copied the C program from 
>> > http://sqlite.org/quickstart.html and 
>> > tried to compile it.
>> > With BorlandC, TCC, and now MinGW.
>> > I obtain the very same error each time: "undefined symbol (or reference):
>> sqlite3_version".
>> >
>> > As I can see in the SQLite source, "SQLITE_API const char sqlite3_version[]
>> = SQLITE_VERSION;" appears at lines 693 and 110173 (#ifndef SQLITE_AMALGAMATION,
>> which is defined at line 21), so I do not understand the problem. Suspecting
>> it might be something related to the latest version, I looked in vain through
>> the list archives of the last two months.
>>
>
> Hi! Simon,
> Thank you for helping.
>
>
>  Did you include the amalgamation source code in with your program ?  I
>> don't mean just put it in the right folder, but also tell your compiler that
>> both the C code and the header are part of your project.
>>
>
>
> I think so: I entered
>
> #include "c:\sqlite\sqlite3.h"
> #include "c:\sqlite\sqlite3.c"
>
> at the beginning of the file.
>

This is wrong for a number of reasons.

The first thing you should do is study the vendor's documentation on project
files.  Usually, there is an additional file deposited into the directory
which explicitly lists all source code files which should be compiled &
linked into the final binary.  I suspect the author is not aware of this.

Using #include to merge all source files together is done to bypass project
files.  Sometimes this works, but most of the time this doesn't as the
errors cited above attest.

Including source files into other source files via #include is *not* the way
to define a project for a number of reasons.  Best practice in C/C++
programming shows that use of #include should be limited to header files
only.  The reason for this dictum is that all compilers have limits as to
how much code can be processed to create an object file.  Merging one source
code file into another may exceed what the compiler can translate in one
translation unit context.

A second reason for not merging source files together is that the scoping
landscape intended by the original authors may be compromised.  As an
example, consider global static variables defined within a single .c file.
These variables were initially intended to be global for only the file where
they were defined.  By including that file into another, the intended scope
is now inadvertently broadened.

"undefined symbol" denotes linker errors -- meaning that the linker is
unable to match the name of symbols needed in any given object file with the
symbols defined in other object files in the project.  I don't know the code
which is being compiled, but I suspect that the variable in question is
defined globally within the file.  One would think that including source
files together as naively done here would resolve this, but consider the
ramifications if this project is being compiled as C++ code which will
decorate names.  I suspect (without verification...) that the amalgamation
has some extern "C" constructs in either the header or source files.  This
signals the compiler to treat the code as C code (unmangled) as opposed to
decorated C++ code. While all code may very well compile, the linker cannot
merge all object files into a coherent binary because all encountered files
do not contain matching symbols.
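For comparison, the usual route is to compile the amalgamation as its own
translation unit and let the linker combine the object files.  A minimal
sketch with gcc/MinGW (file names are illustrative; on Unix-like systems
-ldl and -lpthread may also be needed on the final line):

    gcc -c sqlite3.c -o sqlite3.o        # compile the amalgamation once
    gcc -c quickstart.c -o quickstart.o  # the program from the quickstart page
    gcc quickstart.o sqlite3.o -o quickstart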


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Simon Slavin

On 27 Sep 2011, at 12:03pm, Patrick Proniewski wrote:

> You're assuming I'm running Linux, but I'm running Mac OS X Server (or 
> FreeBSD by the way), so no /proc here, and iostat is probably working 
> differently too.
> 

If you're using the OS X version, I don't think you need to run iostat as a 
continuous process.  Write a shell script with a timed loop which runs iostat 
without the '-w 10'.  So you could write a script which does

while true
do 
iostat -d disk0 | ...
sleep 10
done

You should be able to feed just the result of the single iostat output to 
sqlite3 somehow.
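Filling in that last step, an untested sketch along the lines of the pipeline
already posted:

    while true
    do
        iostat -d disk0 | tail -1 |
        awk '{print "INSERT INTO io VALUES(datetime(\"now\",\"localtime\"),"$1","$2","$3");"}' |
        sqlite3 iostat.db
        sleep 10
    done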

Simon.


Re: [sqlite] Foreign key mismatch Error

2011-09-27 Thread Simon Slavin

On 27 Sep 2011, at 11:58am, Madhan Kumar wrote:

>  In sqlite (version 3.7.7.1),  I created two tables with foreign keys
> as shown below
> 
> CREATE TABLE [*Employee*] (
>  [*EmpId*] INT NOT NULL,
>  [EmpName] VARCHAR2(50));
> 
> CREATE TABLE [*Department*] (
>  [DeptID] INTEGER NOT NULL,
>  [DeptName] VARCHAR2(20),
>  [*EmpID*] INTEGER NOT NULL CONSTRAINT [FK_EMP_Dept] REFERENCES
> [Employee]([*EmpId*]));
> 
> Using sqlite Expert Tool,
> I inserted a row in *Employee* Table - Success
> when i want to delete the same row from Employee table - gives error
> "foreign key mismatch".
> 
> When i try to insert a row in *Department* Table, gives error "foreign key
> mismatch".

Can you please show us the 'INSERT' and 'DELETE FROM' statements you are using 
for the records which give this problem ?  Ideally, starting from a blank 
database, INSERT a few records then try to DELETE one and show us the error you 
didn't expect to get.

If you can use the sqlite3 shell tool to demonstrate the problem that would be 
even better than using the 'Expert Tool' you are talking about.  But we'll try 
from the Expert Tool.

Simon.


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Simon Slavin

On 27 Sep 2011, at 10:49am, Tim Streater wrote:

> On 27 Sep 2011 at 00:19, Simon Slavin  wrote: 
> 
>> I believe that VACUUM is one of the statements which counts as changing the
>> schema, because it does its work by rewriting entire tables and/or indexes. 
>> So don't do a VACUUM when you're doing multi-process access.  Cut out the
>> VACUUMs and see whether you still get this result code.
> 
> Ah, thanks, that's a good clue. I can do some work in that area to ensure 
> that the VACUUMs are done at a quiet moment.

You might not need VACUUM at all.  I might use it just before I make a copy of 
the database file for transfer or archive, if it was important to me that the 
file was as small as possible.  But I have quite a few databases I've never 
bothered using VACUUM on at all.  Most of them shrink only by small amounts and 
probably start growing immediately afterwards anyway.
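
If you want a rough idea of whether a VACUUM would actually buy you anything,
the shell can tell you how many free pages are sitting in the file (the file
name below is just a placeholder); the reclaimable space is roughly
freelist_count times page_size:

  sqlite3 mydatabase.db 'PRAGMA freelist_count; PRAGMA page_size; PRAGMA page_count;'

If freelist_count is a tiny fraction of page_count, a VACUUM is unlikely to
shrink the file by much.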

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:31, Baptiste Daroussin wrote:

> You don't need awk :)
> 
> iostat -d -w 10 disk0 | while read a b c; do case $a in *[a-zA-Z]*)
> continue ;; *) sqlite3 iostat.db "INSERT INTO io
> VALUES(datetime('now', 'localtime'), \"$a\", \"$b\", \"$c\");" ;;
> esac; done


Ok, this forks less, but still, you can't get rid of the loop ;) (I love awk)

thanks,
patpro

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 08:02, Stephan Beal wrote:

> That's a tricky one, it seems. If you're not restricted to shell code, you
> could possibly do this using perl, PHP, or similar. You could open a pipe
> for iostat, read a line from the pipe, and feed that line to your db (not in
> the form of a raw text line but using the script language's sqlite3 API).
> Repeat until the pipe is eof or a signal is caught or whatever.

Changing languages could be an option, but I'd rather keep my ugly while loop 
than learn PERL :)

patpro
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Patrick Proniewski
On 27 sept. 2011, at 12:58, Simon Slavin wrote:

> On 27 Sep 2011, at 6:48am, Patrick Proniewski wrote:
> 
>> I've tried various solutions with named pipes, file descriptors 
>> redirections… but none worked, because they all seem to require the data 
>> stream to end before feeding data into the DB.
> 
> Most of your problems are caused because you're using iostat.  Can you 
> instead read the data directly out of /proc ?  Take a look at the end of the 
> 'man iostat' page for details.

You're assuming I'm running Linux, but I'm running Mac OS X Server (or FreeBSD 
by the way), so no /proc here, and iostat is probably working differently too.


patpro
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] feed "endless" data into sqlite, thru a shell script

2011-09-27 Thread Simon Slavin

On 27 Sep 2011, at 6:48am, Patrick Proniewski wrote:

> I've tried various solutions with named pipes, file descriptors redirections… 
> but none worked, because they all seem to require the data stream to end 
> before feeding data into the DB.

Most of your problems are caused because you're using iostat.  Can you instead 
read the data directly out of /proc ?  Take a look at the end of the 'man 
iostat' page for details.
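
On a Linux box that boils down to something like this (sketch only: sda and
the field positions come from the kernel's Documentation/iostats.txt, so
adjust them for your device and kernel):

  # fields 6 and 10 of /proc/diskstats are sectors read / sectors written
  awk '$3 == "sda" { print $6, $10 }' /proc/diskstats

and that pair of numbers can then be fed to sqlite3 in the same way as the
iostat output.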

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Foreign key mismatch Error

2011-09-27 Thread Madhan Kumar
Hello..
  In sqlite (version 3.7.7.1),  I created two tables with foreign keys
as shown below

CREATE TABLE [Employee] (
  [EmpId] INT NOT NULL,
  [EmpName] VARCHAR2(50));

CREATE TABLE [Department] (
  [DeptID] INTEGER NOT NULL,
  [DeptName] VARCHAR2(20),
  [EmpID] INTEGER NOT NULL CONSTRAINT [FK_EMP_Dept] REFERENCES
[Employee]([EmpId]));

Using sqlite Expert Tool,
I inserted a row in the *Employee* table - success.
When I try to delete the same row from the Employee table, I get the error
"foreign key mismatch".

When I try to insert a row into the *Department* table, I get the error
"foreign key mismatch".

I tried executing PRAGMA foreign_keys = ON
and it returns "1"

But still i am getting errors. Pls assist me to fix this issue.
Thanks in advance.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Tim Streater
On 27 Sep 2011 at 00:19, Simon Slavin  wrote: 

> On 26 Sep 2011, at 11:32pm, Tim Streater wrote:

>> But, very often the database on which the error occurs is :memory: - and I
>> guess each thread will have its own one of those.
>
> Are you doing concurrent access to your :memory: databases ?

Looking back through my logs to early July, there were three instances - and in 
each case a housekeeping script was doing a VACUUM. So that's a strong pointer. 
And in the case of :memory:, it may just appear (to my ill-informed eyes) to be 
related to :memory:. The statement is in fact:

  insert into mem.messages select * from main.messages where absid= some value;

Here, :memory: was attached as mem, and main was opened earlier but will be one 
of those that gets VACUUMed. I guess the select can then fail in the same way.

--
Cheers  --  Tim
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite shell's .import does not work with attached db's, add wiki note?

2011-09-27 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 09/26/2011 06:03 PM, yary wrote:
> I noticed that the sqlite shell won't ".import" into an attached database:

The cause of the error message is a test in the shell code.  It prepares this:

  SELECT * from 'tablename'

I have no idea why it puts single quotes around the table name.  If you do
that then database prefixes are not recognised.  The table name is quoted
correctly otherwise, so the extra quoting really makes no sense.

  sqlite> attach '/tmp/foo' as foo;
  sqlite> create table foo.bar(x);
  sqlite> select * from foo.bar;
  sqlite> select * from 'foo.bar';
  SQLError: no such table: foo.bar
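
Until the shell is fixed, one workaround sketch is to .import into a staging
table in the database you opened directly and then copy the rows across with
ordinary SQL, which does understand the database prefix (main.db, rows.csv
and the single column are just placeholders matching the toy table above):

  sqlite3 main.db <<'EOF'
  ATTACH '/tmp/foo' AS foo;
  CREATE TABLE staging(x);        -- same shape as foo.bar
  .separator ","
  .import rows.csv staging
  INSERT INTO foo.bar SELECT * FROM staging;
  DROP TABLE staging;
  EOF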

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iEYEARECAAYFAk6Bo8wACgkQmOOfHg372QQ0+QCgqCr2wB19UqboO346U+pnzF7g
tAcAoOHaEVZGnQequmfPZtXBsm7IEoXL
=ZICf
-END PGP SIGNATURE-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Still chasing "database schema has changed"

2011-09-27 Thread Tim Streater
On 27 Sep 2011 at 00:19, Simon Slavin  wrote: 

> On 26 Sep 2011, at 11:32pm, Tim Streater wrote:
>
>> For the most part it's going to be SELECT, INSERT, UPDATE, BEGIN, COMMIT.
>> Less often, there's VACUUM. Let me have a look at my logs - OK, when I got
>> the error today, there was a thread doing a VACUUM on the db which got the
>> error (within a few seconds, in terms of the log timestamps). The SQL
>> statement being done when the error was reported was in fact:
>
>>  update uids set uid_date='1317028381',rec_date='1317002026' where
>> uid='UID3335-1298893761'
>
> I believe that VACUUM is one of the statements which counts as changing the
> schema, because it does its work by rewriting entire tables and/or indexes. 
> So don't do a VACUUM when you're doing multi-process access.  Cut out the
> VACUUMs and see whether you still get this result code.

Ah, thanks, that's a good clue. I can do some work in that area to ensure that 
the VACUUMs are done at a quiet moment.

>> But, very often the database on which the error occurs is :memory: - and I
>> guess each thread will have its own one of those.
>
> Are you doing concurrent access to your :memory: databases ?

No, that's the odd part. But I'll check back through my logs.

Thanks for the help.

--
Cheers  --  Tim
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users