Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  Even in my use case I got a "Disk I/O error" after I reached 2GB of 
virtual memory.
Max, could you tell us the size of the memory-mapped file shown in the VMMap 
tool when you got the "Disk I/O error", and also check the value of Virtual 
Bytes in the Windows performance counter.

Il 10/09/2010 6.54, Max Vlasov ha scritto:
>> But as a side effect I got an unexpected result in another area: when I tried
>> to append 1,500,000 records to this 1.7G file, having 5G of free space on the
>> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
>> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
>> but this seems not to have been related to WAL)
>>
>>
> Now the problem is fully reproducible. A modified version of the steps:
>
> Windows 7 64bit Home Premium, sqlite 3.7.2 in dll
>
> 1. Create db with the table
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] VARCHAR(200)
> )
>
> 2. Open the db that should currently be in journal_mode=delete
>
> 3. Change journal_mode=WAL;
>
> 4. BEGIN TRANSACTION
>
> 5. Make 1,300,000 repeated queries
> INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
> of the string = 1152)
>
> 6. While the queries are executed, when the shm file grows to 11M
> (0xAC0000), the failure occurs with a Disk I/O error (both the result and
> extended error codes are 10 (SQLITE_IOERR)).
>
> There's a chance that there's something wrong with my program; can someone
> do a similar test on another Windows 64bit system?
>
> Thanks
>
> Max


-- 
Selea s.r.l.


Michele Pradella R&D


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* 
*http://www.selea.com*


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  Hi Shane, have you looked at the Virtual Bytes counter in the performance 
monitor? In my situation it is that counter that grows up to 2GB, and then 
the application crashes because the virtual address space is exhausted.
I'm using Windows XP Professional SP3.
I can reproduce the problem in my DB with this SQL statement:

DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value) AND (Source='source_name'))

Usually timestamp_value identifies one week of records, so I delete one week 
of records at a time.
The DELETE statements are inside a BEGIN; ... COMMIT; block, and I can have 
a lot of deletes, like this:
BEGIN;

DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value1) AND (Source='source_name1'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value2) AND (Source='source_name2'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value3) AND (Source='source_name3'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value4) AND (Source='source_name4'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value5) AND (Source='source_name5'))

..

COMMIT;

There could be even 50 DELETEs inside one BEGIN; COMMIT; pair.

My table has 3 indexes and has the size I told you yesterday.
We are using the default page size; I did not change it.

I create the table like this:
CREATE TABLE IF NOT EXISTS metadata (Id INTEGER PRIMARY KEY 
AUTOINCREMENT,DateTime INTEGER,Source TEXT,SensorName TEXT,SensorValue TEXT)


Il 09/09/2010 23.25, Shane Harrelson ha scritto:
> Michele-
>
> I've looked at trying to reproduce your issue on a 32-bit Windows XP
> system using the latest code, and could not.
>
> Even assuming the "worst case" of a 512 byte page size, starting with
> a 1.2gb DB file, deleting all the records would result in a WAL file
> of roughly the same size containing 2mil+ pages.  Each of the WAL
> pages has an 8 byte entry in the SHM file.  The SHM file uses 32k
> regions which can each hold 4k entries.   For 2mil+ entries, we would
> need around 500 regions.  The SHM file is what is memory mapped, with
> each 32k region being mapped into memory.  The Windows implementation
> uses an average 48k for each region, so at most we would use around
> 24mb of memory.
>
> I attempted to reproduce this by creating a 1.5gb DB, containing 17mil
> records, each 65 bytes long, using a page size of 512 bytes.  Starting
> with this DB in WAL mode, I deleted all records.   Maximum shared
> memory usage did not exceed 24mb.
>
> If you're sure you added the fix Dan indicated correctly into your
> build, then we're going to need more info on exactly what you're
> doing.  What Windows version are you using specifically?  What page
> size are you using?   What SQL queries are you executing?  Can you
> provide a short series of statements with the CLI to reproduce this?
>
> HTH.
> -Shane
>
>
>
>
> On Thu, Sep 9, 2010 at 11:36 AM, Michele Pradella
>   wrote:
>>   Hi Max, I got the problem in both situations:
>>
>>1. I have a journal_mode=DELETE database and I convert it to WAL.
>>2. I create a new database with WAL mode.
>>
>> I never checked the handles in the task manager, but I always see the
>> mapped files in vmmap growing. I think it's the same.
>> Anyway, the memory is wasted especially when I have a quite big
>> DB (about 1.2GB with about 17 million records) and I try to delete a
>> lot of records: in this situation I see the mapped files growing and
>> wasting a lot of memory (I reached the maximum 32bit Windows memory
>> limit, so my application crashed).
>>
>> PS: With this DB the "SELECT count(ID) FROM table_name" is very
>> slow... it takes minutes (with the sqlite shell)!
>>
>> Il 09/09/2010 17.04, Max Vlasov ha scritto:
>>> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy
>>> wrote:
>>>
 On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:

>Hi, do you have some news about the wasted memory? have you found the
> reason for the windows backend?
 Fixed here:

 http://www.sqlite.org/src/ci/f213e133f6


>>> Dan, don't know whether it is related, but I detected a memory leak in 3.7.2
>>> related to handle count increasing.
>>>
>>> Steps to reproduce
>>> (Windows XP SP3, sqlite3.7.2.dll compiled with bcc)
>>>
>>> 1. Create or use previous db with the table
>>> CREATE TABLE [TestTable] (
>>> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
>>> [Text] VARCHAR(200)
>>> )
>>>
>>> 2. Open the db that is currently in journal_mode=delete
>>>
>>> 3. Change journal_mode=WAL;
>>>
>>> 4. BEGIN TRANSACTION
>>>
>>> 5. Make 50,000 repeated queries
>>> INSERT INTO TestTable (Text) VALUES ("12345678912345 (in my case the
>>> length of this string was about 1100 bytes)
>>>
>>> 6. See, while the queries are processing, how the handle count in Task
>>> Manager increases (total about 14). The following commit does not help in
>>> decreasing the number to the start value.

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  Hi Max, I'm sure the leak is related to SQLite WAL, because if I 
disable WAL with journal_mode=DELETE I do not have the leak. Moreover, 
if I use the vmmap tool I see the size of the mapped files growing, and 
vmmap shows me that all the memory is used by the -shm file mapped many 
times. I'm using Windows XP 32bit, but even with Win7 64bit I got the 
same result.

Il 09/09/2010 22.21, Max Vlasov ha scritto:
> On Thu, Sep 9, 2010 at 7:16 PM, Dan Kennedy  wrote:
>
>>> Dan, don't know whether it is related, but I detected a memory leak in
>>> 3.7.2
>>> related to handle count increasing.
>> I think there will be one open handle for each 32KB of
>> shared-memory space in use. Or put another way, one open
>> handle for every 4000 pages in the WAL file. They should
>> all be closed when all connections to the database within
>> the process are closed.
>>
>>
>>
> Dan,
> thank you for your explanation, it makes perfect sense if we calculate the
> sizes and everything.
>
> I tried to test till the sizes Michele mentioned, this time on Win7 64 bit,
> I had almost perfect results, non-stopped appending till 1.7G and memory
> size was never more than 50MB, so I'd suggest to Michele to check the code,
> maybe there's a leak there not related to sqlite.
>
> But as a side effect I got an unexpected result in another area: when I tried
> to append 1,500,000 records to this 1.7G file, having 5G of free space on the
> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
> but this seems not to have been related to WAL)
>
> While I wrote this e-mail I tried to perform another test: 1,200,000
> appends worked perfectly, but the next series (without a prior commit) failed
> almost immediately with the same Disk I/O error. Free space is still 3.7G, so
> this was not related to the absence of free space. The size of the shm file
> is 0xAC0000, and the section starting at 0xAB8000 till the end is filled with
> zeros. Please let me know if anything from these files would be helpful; I
> keep them in the state they were in after the failure, I did not even perform
> a Commit and Close.
>
> Max




Re: [sqlite] upstreaming Chromium patches for file handle passing support

2010-09-09 Thread Paweł Hajdan, Jr.
On Tue, Aug 31, 2010 at 10:51, Richard Hipp  wrote:

> My suggestion is that you make a copy of the os_unix.c source file (call it
> chromium_vfs.c or anything else that you like) and apply your edits to that
> copy.  Since you started with an exact copy of the original os_unix.c source
> file, your copy will have all the internal utilities and mechanisms of
> os_unix.c.  You can use all those internal mechanisms as much as you want.
> Then compile your customized VFS separately and add it to SQLite at
> start-time using sqlite3_vfs_register().
>

I have done a proof-of-concept patch, attached to
https://bugs.webkit.org/show_bug.cgi?id=45416

Direct link:
https://bugs.webkit.org/attachment.cgi?id=66952&action=prettypatch

However, Dumitru Daniliuc (dumi) has noticed that we could still reuse large
portions of os_unix.c and only provide our own implementation for opening a
file (instead of calling "open" directly, it would call
"ChromiumBridge::databaseOpenFile"). Similarly for database deletion, etc.

Now that we have the code to talk about (see above), could you take another
look and see if it would be possible to avoid copy-pasting parts of
os_unix.c? I'm just trying to explore all possibilities we have - both
copying the code (done now), and re-using more of os_unix.c code.
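
For concreteness, here is a minimal sketch (not the actual Chromium patch) of
the shim approach dumi describes: copy the default VFS, override only xOpen,
and register the result with sqlite3_vfs_register(). The VFS name and the
bridge call mentioned in the comment are hypothetical.

/* Sketch: delegate everything to the default VFS except xOpen. */
#include <sqlite3.h>

static sqlite3_vfs *pDefault;   /* the real VFS we delegate to */
static sqlite3_vfs shimVfs;     /* our wrapper */

static int shimOpen(sqlite3_vfs *pVfs, const char *zName,
                    sqlite3_file *pFile, int flags, int *pOutFlags){
  (void)pVfs;
  /* A real port would obtain the file handle from the browser process here
  ** (something like ChromiumBridge::databaseOpenFile). This sketch just
  ** falls through to the default implementation. */
  return pDefault->xOpen(pDefault, zName, pFile, flags, pOutFlags);
}

int registerShimVfs(void){
  pDefault = sqlite3_vfs_find(0);            /* default VFS */
  shimVfs = *pDefault;                       /* copy all methods */
  shimVfs.zName = "chromium-shim";           /* hypothetical name */
  shimVfs.xOpen = shimOpen;                  /* override just xOpen */
  return sqlite3_vfs_register(&shimVfs, 0);  /* 0 = not the default */
}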

On Wed, Sep 1, 2010 at 21:33, Dan Kennedy  wrote:

> How do you know which journal file to open or where to create
> a master journal file if the client writes to the database?


It is hard for me to answer this question (I'm not very familiar with sqlite
yet). See the patch above - I think it preserves the semantics of current
code used in Chrome, just moves it to WebKit. Any feedback on whether that
code is right or wrong and how it can be improved would be appreciated.


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Max Vlasov
> But as a side effect I got an unexpected result in another area: when I tried
> to append 1,500,000 records to this 1.7G file, having 5G of free space on the
> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
> but this seems not to have been related to WAL)
>
>

Now the problem is fully reproducible. A modified version of the steps:

Windows 7 64bit Home Premium, sqlite 3.7.2 in dll

1. Create db with the table
CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Text] VARCHAR(200)
)

2. Open the db that should currently be in journal_mode=delete

3. Change journal_mode=WAL;

4. BEGIN TRANSACTION

5. Make 1,300,000 repeated queries
INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
of the string = 1152)

6. While the queries are executed, when the shm file grows to 11M
(0xAC0000), the failure occurs with a Disk I/O error (both the result and
extended error codes are 10 (SQLITE_IOERR)).
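
For reference, a minimal C sketch of the loop in steps 4-6 (the file name and
the literal string are stand-ins; it assumes the db and TestTable from step 1
already exist, and error handling is trimmed):

#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

int main(void){
  sqlite3 *db;
  char text[1153], sql[1400];
  int i, rc;

  memset(text, '9', 1152);            /* stand-in for the 1152-char string */
  text[1152] = '\0';

  sqlite3_open("test.db", &db);
  sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
  sqlite3_exec(db, "BEGIN TRANSACTION;", 0, 0, 0);
  snprintf(sql, sizeof(sql),
           "INSERT INTO TestTable (Text) VALUES ('%s');", text);
  for(i = 0; i < 1300000; i++){
    rc = sqlite3_exec(db, sql, 0, 0, 0);
    if( rc != SQLITE_OK ){            /* the failure reported above: rc==10 */
      printf("failed at insert %d: rc=%d\n", i, rc);
      break;
    }
  }
  sqlite3_exec(db, "COMMIT;", 0, 0, 0);
  sqlite3_close(db);
  return 0;
}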

There's a chance that there's something wrong with my program; can someone
do a similar test on another Windows 64bit system?

Thanks

Max


Re: [sqlite] In memory database and locking.

2010-09-09 Thread Hemant Shah



--- On Thu, 9/9/10, Kees Nuyt  wrote:

> From: Kees Nuyt 
> Subject: Re: [sqlite] In memory database and locking.
> To: sqlite-users@sqlite.org
> Date: Thursday, September 9, 2010, 11:48 AM
> On Wed, 8 Sep 2010 21:03:05 -0700 (PDT), Hemant Shah wrote:
> 
> > I forgot to mention that I have set pragma journal_mode to off
> > and for every insert I do prepare -> step -> finalize -> COMMIT.
> 
> That's a weird sequence.
> Because you use the same statement all the time, you only
> have to prepare() once, and then for every INSERT:
> bind(); step(); reset(); clear_bindings();
> 
> Also, COMMIT doesn't make sense without a BEGIN.
> For speed, wrap several (or even many) INSERTs in one
> BEGIN / COMMIT pair.
> Without BEGIN; COMMIT, every single INSERT is a
> transaction by itself.
> 

Where can I find a good example of this? I have not found a good example
of insert and select. I am used to DB2 and Oracle; this is the first time I am 
working with sqlite.
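
A minimal sketch of the pattern Kees describes, using the C API (the table
and column names here are made up for illustration; it assumes
sqlite3_prepare_v2 and omits most error checks):

#include <sqlite3.h>
#include <time.h>

void bulk_insert(sqlite3 *db, int n){
  sqlite3_stmt *stmt;
  int i;

  sqlite3_prepare_v2(db,
      "INSERT INTO packets(key, ts) VALUES (?1, ?2)", -1, &stmt, 0);

  sqlite3_exec(db, "BEGIN", 0, 0, 0);    /* one transaction for many rows */
  for(i = 0; i < n; i++){
    sqlite3_bind_int(stmt, 1, i);        /* bind this row's values */
    sqlite3_bind_int64(stmt, 2, (sqlite3_int64)time(0));
    sqlite3_step(stmt);                  /* returns SQLITE_DONE on success */
    sqlite3_reset(stmt);                 /* make the statement reusable */
    sqlite3_clear_bindings(stmt);
  }
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
  sqlite3_finalize(stmt);                /* once, at program exit */
}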

> Finalize at program exit (or not at all).
> -- 
>   (  Kees Nuyt
>   )
> c[_]


Hemant Shah
E-mail: hj...@yahoo.com


  


Re: [sqlite] In memory database and locking.

2010-09-09 Thread Hemant Shah


--- On Thu, 9/9/10, Black, Michael (IS)  wrote:

> From: Black, Michael (IS) 
> Subject: Re: [sqlite] In memory database and locking.
> To: "General Discussion of SQLite Database" 
> Date: Thursday, September 9, 2010, 9:16 AM
> I've never seen an application that
> would run faster in ANY database vs custom code. 
> Databases are for generic query problems...not the end-all
> to "store my data" when speed is a concern.
> 

I will see how sqlite works out; otherwise I will write my own hash code.


> I've pointed out a few times on this list where people are
> concerned for speed and showed things like 30X faster using
> your own.  I've done network data acquisition like this
> before and I'll guarantee that you will never keep up with a
> data burst on a gigabit network.  I don't even think
> winpcap can do that let alone a database.
> 
> I don't think you need to run your trigger every
> minute.  Just run it every insert.  I think the
> delete will be notably faster than the insert and you won't
> notice the difference vs running every 60 seconds.
> 
> What you will want to do is only do a commit every so
> often...I think you stated you're doing a commit every
> packet which would be dog slow.
> 

  I agree with you about committing every insert. I am going to experiment with 
different intervals and see where I get the best performance. Right now I have 
one process doing inserts, and every 5000 rows it does a delete.


> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Hemant Shah
> Sent: Thursday, September 09, 2010 8:57 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:Re: [sqlite] EXTERNAL: In memory database
> and locking.
> 
> How do I set up a trigger to run every minute?
> 
> I thought about writing hash code, but thought sqlite or
> another in-memory database would work. The in-memory database
> seems to keep up with the incoming traffic.
> 
> Hemant Shah
> 
> E-mail: hj...@yahoo.com
> 
> --- On Thu, 9/9/10, Black, Michael (IS) 
> wrote:
> 
> From: Black, Michael (IS) 
> Subject: Re: [sqlite] EXTERNAL: In memory database and
> locking.
> To: "General Discussion of SQLite Database" 
> Date: Thursday, September 9, 2010, 7:48 AM
> 
> Have you considered doing your cleanup during a trigger?
> I assume you're already using transactions for your
> inserts.  I wouldn't
> think it would be much slower doing it every insert as
> you'd be deleting
> a much smaller set every time.
> 
> This is really a LOT faster if you just hash your info and
> then
> periodically walk the hash table to delete old stuff.  A
> database is
> never going to keep up with a gigabit interface.
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Hemant Shah
> Sent: Wednesday, September 08, 2010 10:55 PM
> To: sqlite-users@sqlite.org
> Subject: EXTERNAL:[sqlite] In memory database and locking.
> 
> Folks,
> I am trying to write an application that reads packets from
> the network and inserts them into an sqlite database. I have a
> unique key which is a combination of a couple of columns. I want
> to find re-transmitted packets, so I rely on the fact that if I
> violate the unique key constraint then I have found a duplicate
> packet. Also, I just want to compare it with packets received
> within the last minute. One of the columns is a timestamp.
> I am using the C API and statically link sqlite 3.7.2 with my
> application.
> Here is what I am doing. When I start my application it creates
> the database and table and then forks two processes. One process
> reads packets from the network and inserts information about them
> into the database; if an insert fails then it has found a
> re-transmission, and it executes a select statement to get the
> information about the previous packet and prints information
> about both packets.
> The other process wakes up every 60 seconds and deletes all rows
> whose timestamp column is less than (current timestamp - 60).
> The timestamp is the number of seconds since the epoch.
> The first process is constantly inserting rows into the
> database, so the other process cannot delete any rows. When I
> use :memory: for the database I do not get any error, but it
> does not delete any rows, as the memory footprint of my program
> keeps on increasing. If I use a file for the database I get an
> error that the database is locked.
> Both of these processes are siblings and share the same database
> handle. When I read the documentation I found that an in-memory
> database always uses an EXCLUSIVE lock.
> How do I solve this problem?
> Thanks.
> 
> 
> Hemant Shah
> 
> E-mail: hj...@yahoo.com
> 
> 
>       

Re: [sqlite] Feature request: copying vacuum

2010-09-09 Thread Taras Glek
  On 09/01/2010 11:41 AM, Taras Glek wrote:
> Hi,
> Currently VACUUM takes care of sqlite-level fragmentation. 
> Unfortunately it does little for fs-level fragmentation since the same 
> file is being reused. It would be really beneficial for Mozilla 
> performance if we could get a vacuum/hotcopy mode.
>
> As I understand it, currently vacuum works by copying out the data to 
> a new file, then copying it back in and truncating the original db file.
>
> It would be really nice to instead do something like:
>
> copy the data to a new file
>
> swap the underlying filehandles to point at new file
>
> remove old file
>
> rename the new file to old name.
>
> This yields two benefits:
>
> A less fragmented db
>
> ~50% vacuum speedup since the data is only copied once
>
> Currently we can copy the data to a new file, but it is a pretty 
> invasive change to swap all of the current sqlite connections to the 
> new file. Things like prepared statements, etc need to be updated for 
> every single db consumer. Thus it would make sense to have this 
> feature on the sqlite side.
>
> Is this reasonable request?
>
> Thanks,
> Taras
I sent this last week; I wanted to address some issues that were raised 
about my proposal.

Sorry for the weird (and late!) reply: I'm subscribed via a digest (it would 
be so nice to have a gmane mirror), so I can't reply directly.

Richard Hipp:
> If other connections have the database file open while it is being vacuumed,
> then on unix the other connections will still be left open on the old
> unlinked version of the database file and will never see the new content.
> And on windows, the file swapping and renaming simply is not allowed while
> other connections have the database files open.
> The work around is to modify SQLite so that it is constantly closing and
> reopening the database files.  But that adds rather large overheads that
> seem likely to be much greater than any savings seen through a reduction in
> disk FS fragmentation.
>
>
I agree with both points. A copying VACUUM should specify that it does 
not support the multi-connection usecase. It should fail/abort if it detects 
another db connection (or have this mentioned in the documentation if this 
detection isn't possible).
The wins from avoiding disk fragmentation + copying less data are 
significant. Punting seems like a reasonable alternative to forcing 
sqlite to constantly close/open the db.


Jay A. Kreibich:

>You're also breaking transactional integrity.  You need the option of
>backing-out of the operation right up until the moment it works, and
>this procedure can't do that.  For example, if you lose power right
>after "remove old file", your database is no longer there.
>
You are right, my original sequence of events was flawed; it should be:

copy the data to a new file

swap the underlying filehandles to point at new file

rename the new file to the old name (this also removes the old file).

This takes care of the "remove old file" problem.
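
As a sketch of why the revised sequence is safe, assuming POSIX semantics
(rename(2) atomically replaces the target, which is what removes the old
file); the copy step is elided here and the paths are hypothetical:

#include <stdio.h>

int finish_copying_vacuum(const char *dbPath, const char *tmpPath){
  /* tmpPath is assumed to hold a complete, synced copy of the database.
  ** If we lose power before this call the old database is intact; after
  ** it, the new one is -- there is no window with neither. */
  if( rename(tmpPath, dbPath) != 0 ){
    perror("rename");
    return -1;
  }
  return 0;
}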

>You'd have to do that anyways, for all connections other than the one
>that issued the VACUUM command.  Coordinating FD swaps across multiple
>connections in the same process would be confusing enough-- there
>would be no possible way to do it across database connections in
>multiple processes.
As I said above, I think restricting this to single-connection usecases 
is reasonable.

>Maybe there would be some way to pre-populate the rollback journal
>with the full contents of the original database.  Then the file could
>be truncated before the copy-back procedure.  That would make it
>clear to the OS that it is free to allocate whatever file blocks it
>wants, hopefully in better patterns.  The copy back could also be
>done in very large chunks.
That sounds like a reasonable alternative to me. It does more IO than 
copy+rename, but I'm mostly interested in avoiding fragmentation here, 
and this solves that issue.

>On a personal level, I don't think it is worth it.  In the end, you're
>still hoping the OS and filesystem will make smart choices about block
>allocations.  An application shouldn't need to be spending a lot
>of time worrying about this level of filesystem performance.  No
>matter what, you're just hinting and setting up conditions that
>should allow the filesystem driver to do something smart and fast.
>It may, or it may not, actually do so.
I appreciate your feelings on the matter. But there are APIs (e.g. 
fallocate) that are specifically designed to minimize fragmentation. The 
underlying filesystem cannot be expected to guess every possible 
workload correctly.

Scott Hess:
> I agree with Jay - while it is tempting to have SQLite bite off
> optimizing this kind of thing, it's pretty far out of scope.  Next
> we'll be talking about running SQLite on raw partitions!  The recent
> addition of SQLITE_FCNTL_CHUNK_SIZE is probably about all the hinting
> you can producti

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Shane Harrelson
Michele-

I've looked at trying to reproduce your issue on a 32-bit Windows XP
system using the latest code, and could not.

Even assuming the "worst case" of a 512 byte page size, starting with
a 1.2gb DB file, deleting all the records would result in a WAL file
of roughly the same size containing 2mil+ pages.  Each of the WAL
pages has an 8 byte entry in the SHM file.  The SHM file uses 32k
regions which can each hold 4k entries.   For 2mil+ entries, we would
need around 500 regions.  The SHM file is what is memory mapped, with
each 32k region being mapped into memory.  The Windows implementation
uses an average 48k for each region, so at most we would use around
24mb of memory.
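
For anyone checking the numbers, a small C sketch of the arithmetic above
(the 1.2gb, 512-byte, 32k, 8-byte and 48k figures are from this thread; the
rounding differs slightly from the "around 500 regions" above):

#include <stdio.h>

int main(void){
  long long db_bytes   = 1200LL * 1024 * 1024;  /* ~1.2gb database          */
  long long page_size  = 512;                   /* assumed worst case       */
  long long wal_pages  = db_bytes / page_size;  /* ~2.4 million WAL pages   */
  long long per_region = 32768 / 8;             /* 4096 entries per 32k     */
                                                /* region (Dan's "4000")    */
  long long regions    = (wal_pages + per_region - 1) / per_region;
  long long mem_bytes  = regions * 48 * 1024;   /* ~48k average per region  */
  printf("%lld pages -> %lld regions -> ~%lld MB\n",
         wal_pages, regions, mem_bytes / (1024*1024));  /* 600 -> ~28 MB */
  return 0;
}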

I attempted to reproduce this by creating a 1.5gb DB, containing 17mil
records, each 65 bytes long, using a page size of 512 bytes.  Starting
with this DB in WAL mode, I deleted all records.   Maximum shared
memory usage did not exceed 24mb.

If you're sure you added the fix Dan indicated correctly into your
build, then we're going to need more info on exactly what you're
doing.  What Windows version are you using specifically?  What page
size are you using?   What SQL queries are you executing?  Can you
provide a short series of statements with the CLI to reproduce this?

HTH.
-Shane




On Thu, Sep 9, 2010 at 11:36 AM, Michele Pradella
 wrote:
>  Hi Max, I got the problem in both situations:
>
>   1. I have a journal_mode=DELETE database and I convert it to WAL.
>   2. I create a new database with WAL mode.
>
> I never checked the handles in the task manager, but I always see the
> mapped files in vmmap growing. I think it's the same.
> Anyway, the memory is wasted especially when I have a quite big
> DB (about 1.2GB with about 17 million records) and I try to delete a
> lot of records: in this situation I see the mapped files growing and
> wasting a lot of memory (I reached the maximum 32bit Windows memory
> limit, so my application crashed).
>
> PS: With this DB the "SELECT count(ID) FROM table_name" is very
> slow... it takes minutes (with the sqlite shell)!
>
> Il 09/09/2010 17.04, Max Vlasov ha scritto:
>> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy  wrote:
>>
>>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>>
   Hi, do you have some news about the wasted memory? have you found the
 reason for the windows backend?
>>> Fixed here:
>>>
>>>    http://www.sqlite.org/src/ci/f213e133f6
>>>
>>>
>> Dan, don't know whether it is related, but I detected a memory leak in 3.7.2
>> related to handle count increasing.
>>
>> Steps to reproduce
>> (Windows XP SP3, sqlite3.7.2.dll compiled with bcc)
>>
>> 1. Create or use previous db with the table
>> CREATE TABLE [TestTable] (
>> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
>> [Text] VARCHAR(200)
>> )
>>
>> 2. Open the db that is currently in journal_mode=delete
>>
>> 3. Change journal_mode=WAL;
>>
>> 4. BEGIN TRANSACTION
>>
>> 5. Make 50,000 repeated queries
>> INSERT INTO TestTable (Text) VALUES ("12345678912345 (in my case the
>> length of this string was about 1100 bytes)
>>
>> 6. See, while the queries are processing, how the handle count in Task
>> Manager increases (total about 14). The following commit does not help in
>> decreasing the number to the start value.
>>
>> I tried to look with Process Explorer; it seems there are many handles
>> titled "section".
>> Also I could not reproduce this when the db is already in WAL mode when
>> opened. Michele, can you tell us what is the mode when you initially open
>> db?
>>
>> Max
>
>


Re: [sqlite] Freelist walking and location of free blocks

2010-09-09 Thread David C. Smith
Got it - I think I understand.  I don't really care about free list pages,
just free blocks within a page.

So, deleting a second row from my small test database gave me the following
output:

Page Size is: 1024 (0400)
FreeListPage = 0
FreeListCount = 0
Going to offset: 1024
Freeblock = 964 (0D was at offset 0, so I read the next 2 bytes as described
in the Btree Page Header)
Start of offset: 1024, freeblock: 964
nextFreeBlock says 995
nextFreeBlock says 0
Hit 00 00, so stop
[964, 995, 0] (my free block list)

Thanks for the response, I also realized I had made notes from the SQLite
file format original page http://www.sqlite.org/fileformat.html - which
added to my confusion.

Quick question: do I have to read every page to find free blocks, or is there
a bigger linked list somewhere?

-Dave
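
For reference, a C sketch of the in-page freeblock walk described above, per
the b-tree page header layout in fileformat2.html ('page' is assumed to
already hold one complete page read from the file). Note that the freeblock
chain is per-page; the freelist at header offset 32 links whole free pages
only, so finding in-page free blocks does mean visiting each page.

#include <stdio.h>

/* big-endian 16-bit read */
static unsigned be16(const unsigned char *p){ return (p[0] << 8) | p[1]; }

void walk_freeblocks(const unsigned char *page){
  /* byte 0: page type; 0x0D is 13 decimal, a valid leaf table b-tree page */
  printf("page type: 0x%02X\n", page[0]);

  /* bytes 1-2: offset of the first freeblock on the page (0 = none) */
  unsigned off = be16(page + 1);
  while( off != 0 ){
    unsigned next = be16(page + off);      /* first 2 bytes: next freeblock */
    unsigned size = be16(page + off + 2);  /* next 2 bytes: freeblock size  */
    printf("freeblock at %u, size %u\n", off, size);
    off = next;                            /* 0 terminates the chain */
  }
}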

On Thu, Sep 9, 2010 at 4:22 PM, Richard Hipp  wrote:

>
>
> On Thu, Sep 9, 2010 at 4:03 PM, David C. Smith wrote:
>
>> All, I am trying to manually parse the freelist to recover deleted data.
>>  Yes, I googled for someone to have created this already :)
>>
>> I am getting data that I don't expect and wanted to see if anyone could
>> see
>> where I have a misunderstanding.  I am using
>> http://www.sqlite.org/fileformat2.html
>>
>> () = test data
>> read the database file
>>  read page size at offset 16 & 17 (1024 0x400)
>>  read first freelist trunk page at offset 32 to 35 ()
>>
>
> Your freelist is empty.  There are no unused pages in your database.
> Probably the two rows you deleted were small enough to fit on a single page,
> so no pages were released and no pages were added to the freelist.
>
>
>>  Go to that page (1024, page 1)
>>
>>  **> this is where I start having issues
>>  B-tree Page header format says that offset 0, size 1 should be a 2, 5, 10,
>> 13 or is in error.  I have a 0D.
>>  But, offset 1, 2 have 03 E3 (995) and that leads to 2019, the start of my
>> deleted data.
>>
>> I have not built a database with 2 deleted rows, so I am not sure how to
>> follow the free block list.
>>
>> Any help comments?
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Query suggestion?

2010-09-09 Thread Doug
Thank you, Gerry.

After some studying I now understand that the inner SELECT is executed 
for each outer row -- so trimming the outer result set early seems 
like a very good optimization idea.

Doug

> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Gerry Snyder
> Sent: Thursday, September 09, 2010 1:52 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Query suggestion?
> 
>   On 9/9/2010 11:32 AM, Doug wrote:
> > Thank you Igor.
> >
> > You've helped me before with what also turned out to be a similar
> > select referencing the same table twice.  I guess it's a concept
> > that I don't fully get.  If there is a name for this technique
> > I'll go Google and study up on it.
> >
> 
> You should be able to find some good info by googling "correlated
> subquery"
> 
> 
> 
> Gerry




Re: [sqlite] Freelist walking and location of free blocks

2010-09-09 Thread Richard Hipp
On Thu, Sep 9, 2010 at 4:03 PM, David C. Smith  wrote:

> All, I am trying to manually parse the freelist to recover deleted data.
>  Yes, I googled for someone to have created this already :)
>
> I am getting data that I don't expect and wanted to see if anyone could see
> where I have a misunderstanding.  I am using
> http://www.sqlite.org/fileformat2.html
>
> () = test data
> read the database file
>  read page size at offset 16 & 17 (1024 0x400)
>  read first freelist trunk page at offset 32 to 35 ()
>

Your freelist is empty.  There are no unused pages in your database.
Probably the two rows you deleted were small enough to fit on a single page,
so no pages were released and no pages were added to the freelist.


>  Go to that page (1024, page 1)
>
>  **> this is where I start having issues
>  B-tree Page header format says that offset 0, size 1 should be a 2, 5, 10,
> 13 or is in error.  I have a 0D.
>  But, offset 1, 2 have 03 E3 (995) and that leads to 2019, the start of my
> deleted data.
>
> I have not built a database with 2 deleted rows, so I am not sure how to
> follow the free block list.
>
> Any help comments?



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Max Vlasov
On Thu, Sep 9, 2010 at 7:16 PM, Dan Kennedy  wrote:

>
> >
> > Dan, don't know whether it is related, but I detected a memory leak in
> > 3.7.2
> > related to handle count increasing.
>
> I think there will be one open handle for each 32KB of
> shared-memory space in use. Or put another way, one open
> handle for every 4000 pages in the WAL file. They should
> all be closed when all connections to the database within
> the process are closed.
>
>
>
Dan,
thank you for your explanation, it makes perfect sense if we calculate the
sizes and everything.

I tried to test till the sizes Michele mentioned, this time on Win7 64 bit,
I had almost perfect results, non-stopped appending till 1.7G and memory
size was never more than 50MB, so I'd suggest to Michele to check the code,
maybe there's a leak there not related to sqlite.

But as a side effect I got an unexpected result in another area: when I tried
to append 1,500,000 records to this 1.7G file, having 5G of free space on the
disk, I got the error a user reported recently about win7 64bit, "Disk I/O
error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
but this seems not to have been related to WAL)

While I wrote this e-mail I tried to perform another test: 1,200,000
appends worked perfectly, but the next series (without a prior commit) failed
almost immediately with the same Disk I/O error. Free space is still 3.7G, so
this was not related to the absence of free space. The size of the shm file is
0xAC0000, and the section starting at 0xAB8000 till the end is filled with
zeros. Please let me know if anything from these files would be helpful; I
keep them in the state they were in after the failure, I did not even perform
a Commit and Close.

Max


[sqlite] Freelist walking and location of free blocks

2010-09-09 Thread David C. Smith
All, I am trying to manually parse the freelist to recover deleted data.
 Yes, I googled for someone to have created this already :)

I am getting data that I don't expect and wanted to see if anyone could see
where I have a misunderstanding.  I am using
http://www.sqlite.org/fileformat2.html

() = test data
read the database file
  read page size at offset 16 & 17 (1024 0x400)
  read first freelist trunk page at offset 32 to 35 ()
  Go to that page (1024, page 1)

  **> this is where I start having issues
  B-tree Page header format says that offset 0, size 1 should be a 2, 5, 10,
13 or is in error.  I have a 0D.
  But, offset 1, 2 have 03 E3 (995) and that leads to 2019, the start of my
deleted data.

I have not built a database with 2 deleted rows, so I am not sure how to
follow the free block list.

Any help comments?


Re: [sqlite] Query suggestion?

2010-09-09 Thread Gerry Snyder
  On 9/9/2010 11:32 AM, Doug wrote:
> Thank you Igor.
>
> You've helped me before with what also turned out to be a similar
> select referencing the same table twice.  I guess it's a concept
> that I don't fully get.  If there is a name for this technique
> I'll go Google and study up on it.
>

You should be able to find some good info by googling "correlated subquery"



Gerry


Re: [sqlite] Query suggestion?

2010-09-09 Thread Doug
Thank you Igor.  

You've helped me before with what also turned out to be a similar
select referencing the same table twice.  I guess it's a concept 
that I don't fully get.  If there is a name for this technique
I'll go Google and study up on it.

Doug


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Igor Tandetnik
> Sent: Thursday, September 09, 2010 10:59 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Query suggestion?
> 
> Doug  wrote:
> > I'm wondering if the SQL gurus here can give me some direction.  I
> have a
> > very simple stats table:
> >
> > CREATE TABLE StatData
> > (
> > StatID INTEGER NOT NULL,
> > Value REAL NOT NULL,
> > Date INTEGER NOT NULL
> > );
> >
> > I'd like to pull out the most recent date and associated value for
> each
> > StatID.
> 
> select StatID, Value, Date from StatData t1 where rowid =
> (select rowid from StatData t2 where t2.StatID = t1.StatID
>  order by Date desc limit 1);
> 
> -- or
> 
> select StatID, Value, Date from StatData t1 where Date =
> (select max(Date) from StatData t2 where t2.StatID = t1.StatID);
> 
> The first query is probably slightly faster, the second is easier on
> the eyes.
> 
> Igor Tandetnik
> 




Re: [sqlite] In memory database and locking.

2010-09-09 Thread Kees Nuyt
On Wed, 8 Sep 2010 21:03:05 -0700 (PDT), Hemant Shah
 wrote:

> I forgot to mention, that I have set pragma journal_mode to off
> and for every insert I do prepare -> step -> finalize -> COMMIT.

That's a weird sequence. 
Because you use the same statement all the time, you only
have to prepare() once, and then for every INSERT: 
bind(); step(); reset(); clear_bindings();

Also, COMMIT doesn't make sense without a BEGIN.
For speed, wrap several (or even many) INSERTs in one BEGIN
/ COMMIT pair.
Without BEGIN; COMMIT, every single INSERT is a transaction
by itself.

Finalize at program exit (or not at all).
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] Query suggestion?

2010-09-09 Thread Igor Tandetnik
Doug  wrote:
> I'm wondering if the SQL gurus here can give me some direction.  I have a
> very simple stats table:
> 
> CREATE TABLE StatData
> (
> StatID INTEGER NOT NULL,
> Value REAL NOT NULL,
> Date INTEGER NOT NULL
> );
> 
> I'd like to pull out the most recent date and associated value for each
> StatID.

select StatID, Value, Date from StatData t1 where rowid =
(select rowid from StatData t2 where t2.StatID = t1.StatID
 order by Date desc limit 1);

-- or

select StatID, Value, Date from StatData t1 where Date =
(select max(Date) from StatData t2 where t2.StatID = t1.StatID);

The first query is probably slightly faster, the second is easier on the eyes.

Igor Tandetnik



Re: [sqlite] issue on blob

2010-09-09 Thread Igor Tandetnik
Fabio Spadaro  wrote:
> 2010/9/9 Igor Tandetnik 
> 
>> Fabio Spadaro  wrote:
>>> Can you determine  in the blob's data
>>> what is buffered file type and what was its original length?
>> 
>> You retrieve the size of the BLOB field with sqlite3_column_bytes function,
>> or with length() function in SQL.
> 
> I use the DB-API 2.0 interface for SQLite databases for Python, and it seems
> that there is no equivalent function. Or am I wrong?

A quick glance at the documentation indeed suggests that there's no way to 
obtain the length of the BLOB without completely reading the BLOB itself. A 
.fetchone() method would give you a tuple with column values, including BLOBs, 
and you can use Python's built-in len() function to determine the length of the 
BLOB.

If this is undesirable, e.g. for performance reasons, you can run a query like 
this: select length(MyBlobColumn); 
-- 
Igor Tandetnik




[sqlite] Register For The 17th Annual Tcl/Tk Conference 2010

2010-09-09 Thread Andreas Kupries
17th Annual Tcl/Tk Conference (Tcl'2010)
http://www.tcl.tk/community/tcl2010/

October 11 - 15, 2010
Hilton Suites/Conference Center

Chicago/Oakbrook Terrace, Illinois, USA


Registration for the Conference is open at

http://www.tcl.tk/community/tcl2010/reg.html

To book a room at the conference hotel at reduced rates follow the
link on that page. Note that this offer expires on September 26. Book
early.


Our schedule can be found at

http://www.tcl.tk/community/tcl2010/schedule.html


We have special social activities

October 15, 2010 - Friday afternoon -- Tour of Fermilab

Fermi National Accelerator Laboratory,   http://www.fnal.gov/

Tour signup is at the conference, Wednesday, Oct 13.

A tour lasts about two hours and begins in Wilson Hall. Visitors view
the Laboratory from the 15th floor windows and visit various displays
located there. The tour moves to the Linear Accelerator building where
visitors see the Cockcroft-Walton, the components in the linear
accelerator gallery and the Main Control Room.  There is a docent for
every 20 people. Registration required since tours must be arranged in
advance.

October 15, 2010 - Friday evening 

Dinner-theater at the award-winning Drury Lane theater located next to
our hotel.

Special package price for dinner and theater at $44.00 per
person. Regularly $64 per person.

Please make reservation and payment at time of registration, or
by Wednesday Oct 13 latest (at the conference).


"Seven Brides for Seven Brothers"  http://www.drurylaneoakbrook.com/

Bill Jenkins is one of the newest and most exciting new directors on
the Chicago scene. As Chairman of the Department of Theatre and Dance
at Ball State University, he heads one of the nation's largest theatre
programs. After admiring his work with other Chicago area theatres, we
are very pleased for this opportunity to showcase his talent at Drury
Lane. Once Bill shared some of the fresh ideas and insights he had for
“Seven Brides for Seven Brothers”, we knew we would have a great
show for the holidays.


Conference Committee

Clif Flynt        Noumena Corp                         General Chair, Website Admin
Andreas Kupries   ActiveState Software Inc.            Program Chair
Cyndy Lilagan     Iomas Research, LLC                  Facilities Coordination
Brian Griffin     Mentor Graphics
Ron Fox           NSCL/FRIB Michigan State University
Arjen Markus      Deltares
Mike Doyle        Iomas Research, LLC
Gerald Lester     KnG Consulting, LLC
Donal Fellows     University of Manchester
Jeffrey Hobbs     ActiveState Software Inc.
Steve Landers     Digital Smarties
Kevin Kenny       GE Global Research Center
Larry Virden      Tcl FAQ Maintainer
Steve Redler IV   SR Technology

Contact Information tclconfere...@googlegroups.com


Tcl'2010 would like to thank those who are sponsoring the conference:

ActiveState Software Inc.
Buonacorsi Foundation
Mentor Graphics
Noumena Corp.
SR Technology
Tcl Community Association



[sqlite] Query suggestion?

2010-09-09 Thread Doug
I'm wondering if the SQL gurus here can give me some direction.  I have a
very simple stats table:

CREATE TABLE StatData
(
StatID INTEGER NOT NULL,
Value REAL NOT NULL,
Date INTEGER NOT NULL
);

I'd like to pull out the most recent date and associated value for each
StatID.

I initially thought of:

SELECT StatID, max(Date), max(Value) FROM StatData GROUP BY StatID

That would give me the most recent Date, but not the Value that corresponds
with that Date.  
None of the other aggregate functions seem appropriate either.

Thanks for any ideas.

Doug




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  Hi Max, I got the problem in both situations:

   1. I have a journal_mode=DELETE database and I convert it to WAL.
   2. I create a new database with WAL mode.

I never checked the handles in the task manager, but I always see the 
mapped files in vmmap growing. I think it's the same.
Anyway, the memory is wasted especially when I have a quite big 
DB (about 1.2GB with about 17 million records) and I try to delete a 
lot of records: in this situation I see the mapped files growing and 
wasting a lot of memory (I reached the maximum 32bit Windows memory limit, 
so my application crashed).

PS: With this DB the "SELECT count(ID) FROM table_name" is very 
slow... it takes minutes (with the sqlite shell)!

Il 09/09/2010 17.04, Max Vlasov ha scritto:
> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy  wrote:
>
>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>
>>>   Hi, do you have some news about the wasted memory? have you found the
>>> reason for the windows backend?
>> Fixed here:
>>
>>http://www.sqlite.org/src/ci/f213e133f6
>>
>>
> Dan, don't know whether it is related, but I detected a memory leak in 3.7.2
> related to handle count increasing.
>
> Steps to reproduce
> (Windows XP SP3, sqlite3.7.2.dll compiled with bcc)
>
> 1. Create or use previous db with the table
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] VARCHAR(200)
> )
>
> 2. Open the db that is currently in journal_mode=delete
>
> 3. Change journal_mode=WAL;
>
> 4. BEGIN TRANSACTION
>
> 5. Make 50,000 repeated queries
> INSERT INTO TestTable (Text) VALUES ("12345678912345 (in my case the
> length of this string was about 1100 bytes)
>
> 6. See, while the queries are processing, how the handle count in Task
> Manager increases (total about 14). The following commit does not help in
> decreasing the number to the start value.
>
> I tried to look with Process Explorer; it seems there are many handles
> titled "section".
> Also I could not reproduce this when the db is already in WAL mode when
> opened. Michele, can you tell us what is the mode when you initially open
> db?
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Dan Kennedy

On Sep 9, 2010, at 10:04 PM, Max Vlasov wrote:

> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy   
> wrote:
>
>>
>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>
>>> Hi, do you have some news about the wasted memory? have you found  
>>> the
>>> reason for the windows backend?
>>
>> Fixed here:
>>
>>  http://www.sqlite.org/src/ci/f213e133f6
>>
>>
>
> Dan, don't know whether it is related, but I detected a memory leak in 3.7.2
> related to handle count increasing.

I think there will be one open handle for each 32KB of
shared-memory space in use. Or put another way, one open
handle for every 4000 pages in the WAL file. They should
all be closed when all connections to the database within
the process are closed.
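
Spelled out, the figure follows from the layout discussed in this thread:
each WAL page has an 8-byte entry in the -shm file, and the shared memory is
mapped in 32KB regions, each needing its own handle on Windows. A quick
sketch:

#include <stdio.h>

int main(void){
  int region_bytes = 32 * 1024;          /* one mapped region = 1 handle */
  int entry_bytes  = 8;                  /* per WAL page                 */
  printf("%d pages per handle\n",
         region_bytes / entry_bytes);    /* 4096, i.e. Dan's ~4000       */
  return 0;
}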




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Max Vlasov
On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy  wrote:

>
> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>
> >  Hi, do you have some news about the wasted memory? have you found the
> > reason for the windows backend?
>
> Fixed here:
>
>   http://www.sqlite.org/src/ci/f213e133f6
>
>

Dan, don't know whether it is related, but I detected a memory leak in 3.7.2
related to handle count increasing.

Steps to reproduce
(Windows XP SP3, sqlite3.7.2.dll compiled with bcc)

1. Create or use previous db with the table
CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Text] VARCHAR(200)
)

2. Open the db that is currently in journal_mode=delete

3. Change journal_mode=WAL;

4. BEGIN TRANSACTION

5. Make 50,000 repeated queries
INSERT INTO TestTable (Text) VALUES ("12345678912345 (in my case the
length of this string was about 1100 bytes)

6. See, while the queries are processing, how the handle count in Task
Manager increases (total about 14). The following commit does not help in
decreasing the number to the start value.

I tried to look with Process Explorer; it seems there are many handles
titled "section".
Also I could not reproduce this when the db is already in WAL mode when
opened. Michele, can you tell us what is the mode when you initially open
db?

Max


Re: [sqlite] issue on blob

2010-09-09 Thread Fabio Spadaro
HI,

2010/9/9 Igor Tandetnik 

> Fabio Spadaro  wrote:
> > Can you determine  in the blob's data
> > what is buffered file type and what was its original length?
>
> You retrieve the size of the BLOB field with sqlite3_column_bytes function,
> or with length() function in SQL.
>
> I'm not familiar with the term "buffered file type". SQLite certainly
> doesn't store any such thing in the BLOB column. If you need to track some
> kind of metadata about your BLOB, store it yourself in additional columns in
> the same row.
> --
> Igor Tandetnik
>

I use the DB-API 2.0 interface for SQLite databases for Python, and it seems
that there is no equivalent function. Or am I wrong?

-- 
Fabio Spadaro
www.fabiospadaro.com


Re: [sqlite] In memory database and locking.

2010-09-09 Thread Black, Michael (IS)
I've never seen an application that would run faster in ANY database vs custom 
code.  Databases are for generic query problems...not the end-all to "store my 
data" when speed is a concern.

I've pointed out a few times on this list where people were concerned about speed 
and shown things like 30X speedups from using your own code.  I've done network data 
acquisition like this before, and I'll guarantee that you will never keep up 
with a data burst on a gigabit network.  I don't even think winpcap can do that, 
let alone a database.

I don't think you need to run your trigger every minute.  Just run it every 
insert.  I think the delete will be notably faster than the insert and you 
won't notice the difference vs running every 60 seconds.

What you will want to do is only do a commit every so often...I think you 
stated you're doing a commit every packet which would be dog slow.


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Hemant Shah
Sent: Thursday, September 09, 2010 8:57 AM
To: General Discussion of SQLite Database
Subject: EXTERNAL:Re: [sqlite] EXTERNAL: In memory database and locking.

How do I set up a trigger to run every minute?

I thought about writing hash code, but thought sqlite or another in-memory 
database would work. The in-memory database seems to keep up with the incoming 
traffic.

Hemant Shah

E-mail: hj...@yahoo.com

--- On Thu, 9/9/10, Black, Michael (IS)  wrote:

From: Black, Michael (IS) 
Subject: Re: [sqlite] EXTERNAL: In memory database and locking.
To: "General Discussion of SQLite Database" 
Date: Thursday, September 9, 2010, 7:48 AM

Have you considered doing your cleanup during a trigger?
I assume you're already using transactions for your inserts.  I wouldn't
think it would be much slower doing it every insert as you'd be deleting
a much smaller set every time.

This is really a LOT faster if you just hash your info and then
periodically walk the hash table to delete old stuff.  A database is
never going to keep up with a gigabit interface.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Hemant Shah
Sent: Wednesday, September 08, 2010 10:55 PM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] In memory database and locking.

Folks,
I am trying to write an application that reads packets from the network
and inserts them into an sqlite database. I have a unique key which is a
combination of a couple of columns. I want to find re-transmitted packets,
so I rely on the fact that if I violate the unique key constraint then I
have found a duplicate packet. Also, I just want to compare it with
packets received within the last minute. One of the columns is a timestamp.
I am using the C API and statically link sqlite 3.7.2 with my application.
Here is what I am doing. When I start my application it creates the
database and table and then forks two processes. One process reads
packets from the network and inserts information about them into the database;
if an insert fails then it has found a re-transmission, and it executes a
select statement to get the information about the previous packet and prints
information about both packets.
The other process wakes up every 60 seconds and deletes all rows whose
timestamp column is less than (current timestamp - 60). The timestamp
is the number of seconds since the epoch.
The first process is constantly inserting rows into the database, so the
other process cannot delete any rows. When I use :memory: for the database I
do not get any error, but it does not delete any rows, as the memory
footprint of my program keeps on increasing. If I use a file for the database
I get an error that the database is locked.
Both of these processes are siblings and share the same database handle. When
I read the documentation I found that an in-memory database always uses an
EXCLUSIVE lock.
How do I solve this problem?
Thanks.


Hemant Shah

E-mail: hj...@yahoo.com


      



  


Re: [sqlite] In memory database and locking.

2010-09-09 Thread Hemant Shah

--- On Thu, 9/9/10, Simon Slavin  wrote:

From: Simon Slavin 
Subject: Re: [sqlite] In memory database and locking.
To: "General Discussion of SQLite Database" 
Date: Thursday, September 9, 2010, 8:24 AM


On 9 Sep 2010, at 4:55am, Hemant Shah wrote:

> The other process wakes up every 60 seconds and deletes all rows whose 
> timestamp column is less than (current timestamp - 60). The timestamp is 
> the number of seconds since the epoch.

Do you do this using a single DELETE FROM command ?  Do you have an index on 
the timestamp column so it can find the appropriate rows quickly and easily ?


Yes, I run "DELETE FROM table WHERE instimestamp < deletetimestamp". I have 
an index on this column.

> The first process is constantly inserting rows into the database, so the 
> other process cannot delete any rows.

I would probably merge the processes.  Have the one that does the inserting 
issue the DELETE FROM command.  So it could do a DELETE after every INSERT.  Or 
better still, after every 100 INSERTs or something.

I think this might work. I will give it a try.
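
As an illustration of the merged approach, a minimal C sketch, assuming the
hypothetical packets table sketched earlier and a prepared, bound INSERT
statement; the cleanup runs in the same process, once every 100 inserts:

#include <sqlite3.h>

/* Sketch only: insert one packet; purge rows older than 60 seconds
** every 100 inserts, from the same process that inserts. */
static int n_inserts = 0;

int insert_packet(sqlite3 *db, sqlite3_stmt *ins)
{
    int rc = sqlite3_step(ins);   /* SQLITE_CONSTRAINT => retransmission found */
    sqlite3_reset(ins);
    if (++n_inserts % 100 == 0) {
        sqlite3_exec(db,
            "DELETE FROM packets"
            " WHERE instimestamp < strftime('%s','now') - 60",
            0, 0, 0);             /* one DELETE; uses the timestamp index */
    }
    return rc;
}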


Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Hemant Shah

E-mail: hj...@yahoo.com



  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] issue on blob

2010-09-09 Thread Igor Tandetnik
Fabio Spadaro  wrote:
> Can you determine from the blob's data
> what type of file was buffered into it, and what its original length was?

You retrieve the size of the BLOB field with the sqlite3_column_bytes function, 
or with the length() function in SQL.

I'm not familiar with the term "buffered file type". SQLite certainly doesn't 
store any such thing in the BLOB column. If you need to track some kind of 
metadata about your BLOB, store it yourself in additional columns in the same 
row.
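
For example, a minimal C fragment, assuming a statement such as
"SELECT data FROM t" whose sqlite3_step() has just returned SQLITE_ROW:

#include <sqlite3.h>

static void read_blob(sqlite3_stmt *stmt)
{
    const void *data = sqlite3_column_blob(stmt, 0);  /* pointer to the bytes */
    int nbytes = sqlite3_column_bytes(stmt, 0);       /* original length      */
    (void)data; (void)nbytes;                         /* use as needed        */
}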
-- 
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXTERNAL: In memory database and locking.

2010-09-09 Thread Hemant Shah
How do I set up a trigger to run every minute?

I thought about writing hash-table code, but thought SQLite or another
in-memory database would work. The in-memory database seems to keep up with
the incoming traffic.

Hemant Shah

E-mail: hj...@yahoo.com

--- On Thu, 9/9/10, Black, Michael (IS)  wrote:

From: Black, Michael (IS) 
Subject: Re: [sqlite] EXTERNAL: In memory database and locking.
To: "General Discussion of SQLite Database" 
Date: Thursday, September 9, 2010, 7:48 AM

Have you considered doing your cleanup during a trigger?
I assume you're already using transactions for your inserts.  I wouldn't
think it would be much slower doing it on every insert, as you'd be deleting
a much smaller set every time.

This is really a LOT faster if you just hash your info and then
periodically walk the hash table to delete old stuff.  A database is
never going to keep up with a gigabit interface.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Hemant Shah
Sent: Wednesday, September 08, 2010 10:55 PM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] In memory database and locking.

Folks,
I am trying to write an application that reads packets from the network
and inserts them into an SQLite database. I have a unique key which is a
combination of a couple of columns. I want to find retransmitted packets,
so I rely on the fact that if I violate the unique key constraint then I
have found a duplicate packet. Also, I only want to compare against
packets received within the last minute. One of the columns is a timestamp.
I am using the C API and statically link SQLite 3.7.2 with my application.
Here is what I am doing. When I start my application it creates the
database and table and then forks two processes. One process reads
packets from the network and inserts information about them into the
database; if an insert fails then it has found a retransmission, and it
executes a select statement to get the information about the previous
packet and prints information about both packets.
The other process wakes up every 60 seconds and deletes all rows whose
timestamp column is less than (current timestamp - 60). The timestamp
is the number of seconds since the epoch.
The first process is constantly inserting rows into the database, so the
other process cannot delete any rows. When I use :memory: for the
database I do not get any error, but it does not delete any rows, and
the memory footprint of my program keeps on increasing. If I use a file
for the database I get an error that the database is locked.
Both of these processes are siblings and share the same database handle.
When I read the documentation I found that an in-memory database always
uses an EXCLUSIVE lock.
How do I solve this problem?
Thanks.


Hemant Shah

E-mail: hj...@yahoo.com


      
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] issue on blob

2010-09-09 Thread Fabio Spadaro
Hi all.
Can you determine from a blob's data
what type of file was buffered into it, and what its original length was?
Obviously, storing the blob in a separate file would be too easy a solution,
and I do not like it.
Is there a workaround?

-- 
Fabio Spadaro
www.fabiospadaro.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] idle state and dead locks

2010-09-09 Thread Sam Carleton
I am having some issues with deadlocks.  Currently I have two
different processes accessing the sqlite db: one is an Apache module
using the dbd framework to manage connections, the other is a .Net
program using the standard .Net provider.

I just wanted to validate that both providers can open the DB for
read/write, and that when there are no open transactions, the other
process should not have a problem reading or writing.

I am 99.9% sure that the issue is in my code, because when one feature
is turned off the deadlock stops; I just wanted to double-check. :)

Sam
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] In memory database and locking.

2010-09-09 Thread Simon Slavin

On 9 Sep 2010, at 4:55am, Hemant Shah wrote:

> The other process wakes up every 60 seconds and deletes all rows whose 
> timestamp column is less than (current timestamp - 60). The timestamp is 
> the number of seconds since the epoch.

Do you do this using a single DELETE FROM command?  Do you have an index on 
the timestamp column so it can find the appropriate rows quickly and easily?

> The first process is constantly inserting rows into the database, so the 
> other process cannot delete any rows.

I would probably merge the processes.  Have the one that does the inserting 
issue the DELETE FROM command.  So it could do a DELETE after every INSERT.  Or 
better still, after every 100 INSERTs or something.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Detecting storage class from C

2010-09-09 Thread Simon Slavin

On 9 Sep 2010, at 7:13am, Max Vlasov wrote:

> But if you write something universal, also bear in mind that sqlite won't
> raise any exception; for example, if you try to read an integer but there's
> text stored there, you won't get an error, just zero as a result.

Two different situations can occur.  In one, you can rely on your own 
library/procedures being used to put data into the file.  In that case, as long 
as you've decided on your own standards and stuck to them, you're okay.  In the 
other, you have to reliably read files made by any application that can call the 
sqlite functions.
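
In the second situation, a minimal C sketch is to branch on the storage class
actually stored in each result column, rather than trusting the declared type:

#include <sqlite3.h>

static void handle_column(sqlite3_stmt *stmt, int i)
{
    switch (sqlite3_column_type(stmt, i)) {
        case SQLITE_INTEGER: /* read with sqlite3_column_int64(stmt, i)  */ break;
        case SQLITE_FLOAT:   /* read with sqlite3_column_double(stmt, i) */ break;
        case SQLITE_TEXT:    /* read with sqlite3_column_text(stmt, i)   */ break;
        case SQLITE_BLOB:    /* read with sqlite3_column_blob(stmt, i)   */ break;
        case SQLITE_NULL:    /* nothing stored in this column            */ break;
    }
}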

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXTERNAL: In memory database and locking.

2010-09-09 Thread Black, Michael (IS)
Have you considered doing your cleanup during a trigger?
I assume you're already using transactions for your inserts.  I wouldn't
think it would be much slower doing it on every insert, as you'd be deleting
a much smaller set every time.

This is really a LOT faster if you just hash your info and then
periodically walk the hash table to delete old stuff.  A database is
never going to keep up with a gigabit interface.
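
Something along these lines (a sketch only, reusing the hypothetical table and
column names from earlier in the digest); note that a trigger fires per
statement, not on a timer, so the purge happens as a side effect of each INSERT:

CREATE TRIGGER purge_old AFTER INSERT ON packets
BEGIN
  DELETE FROM packets WHERE instimestamp < NEW.instimestamp - 60;
END;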

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Hemant Shah
Sent: Wednesday, September 08, 2010 10:55 PM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] In memory database and locking.

Folks,
I am trying to write an application that reads packets from the network
and inserts them into an SQLite database. I have a unique key which is a
combination of a couple of columns. I want to find retransmitted packets,
so I rely on the fact that if I violate the unique key constraint then I
have found a duplicate packet. Also, I only want to compare against
packets received within the last minute. One of the columns is a timestamp.
I am using the C API and statically link SQLite 3.7.2 with my application.
Here is what I am doing. When I start my application it creates the
database and table and then forks two processes. One process reads
packets from the network and inserts information about them into the
database; if an insert fails then it has found a retransmission, and it
executes a select statement to get the information about the previous
packet and prints information about both packets.
The other process wakes up every 60 seconds and deletes all rows whose
timestamp column is less than (current timestamp - 60). The timestamp
is the number of seconds since the epoch.
The first process is constantly inserting rows into the database, so the
other process cannot delete any rows. When I use :memory: for the
database I do not get any error, but it does not delete any rows, and
the memory footprint of my program keeps on increasing. If I use a file
for the database I get an error that the database is locked.
Both of these processes are siblings and share the same database handle.
When I read the documentation I found that an in-memory database always
uses an EXCLUSIVE lock.
How do I solve this problem?
Thanks.


Hemant Shah

E-mail: hj...@yahoo.com


  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to inject primary keys which are set to autoincrement in sqlite??

2010-09-09 Thread Simon Davies
On 9 September 2010 12:22, tigermain  wrote:
>
> In MS SQL I would use
>
> SET IDENTITY_INSERT <table> ON
>
> How do I do something similar in SQLite? I am trying to upgrade a database
> and want to maintain the IDs from the original

You choose whether to include the key as part of the insert statement.

SQLite version 3.6.11
Enter ".help" for instructions
sqlite> create table tst( id integer primary key, data text );
sqlite> insert into tst( data ) values( 'd1' );             -- key assigned by sqlite
sqlite> insert into tst( data ) values( 'd2' );
sqlite>
sqlite> select * from tst;
1|d1
2|d2
sqlite>
sqlite> insert into tst( id, data ) values( 101, 'd101' );  -- key in insert statement
sqlite> insert into tst( id, data ) values( 102, 'd102' );
sqlite>
sqlite> select * from tst;
1|d1
2|d2
101|d101
102|d102
sqlite>
sqlite>
sqlite> insert into tst( data ) values( 'd103' );           -- key assigned by sqlite
sqlite>
sqlite> select * from tst;
1|d1
2|d2
101|d101
102|d102
103|d103
sqlite>
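
For the upgrade case in the question, the same idea extends to a bulk copy;
a sketch, with old_tst standing in for a hypothetical source table; selecting
the id column explicitly preserves the original keys:

sqlite> insert into tst( id, data ) select id, data from old_tst;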

>
> Thanks
> --

Regards,
Simon
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to inject primary keys which are set to autoincrement in sqlite??

2010-09-09 Thread tigermain

In MS SQL I would use

SET IDENTITY_INSERT <table> ON

How do I do something similar in SQLite? I am trying to upgrade a database
and want to maintain the IDs from the original.

Thanks
-- 
View this message in context: 
http://old.nabble.com/How-to-inject-primary-keys-which-are-set-to-autoincrement-in-sqlite---tp29647839p29647839.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  I have ported the changes of the fix 
http://www.sqlite.org/src/ci/f213e133f6 to my code (with some difficulty, 
because I ported them directly into my sqlite3.c source).
Anyway, I think the problem still exists: after a few tests I still see, 
with VMMap, the mapped file growing just as it did before the fix.
I think I ported the fix correctly, and the problem still exists. 
I'll do some other tests.
After applying this fix, do you no longer see the problem?

Il 09/09/2010 9.46, Michele Pradella ha scritto:
> OK, thank you. Today I'm going to port the difference to my source code
> and check whether the memory behavior is OK.
>
> Il 09/09/2010 9.37, Dan Kennedy ha scritto:
>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>
>>>Hi, do you have some news about the wasted memory? have you found the
>>> reason for the windows backend?
>> Fixed here:
>>
>>  http://www.sqlite.org/src/ci/f213e133f6
>>
>> Does the problem still show up for you using fossil tip?
>>
>>
>>
>>
>>
>>
>>> do you think it could be due to the windows implementation of the
>>> mmap?
>>>
>>> Il 02/09/2010 16.46, Richard Hipp ha scritto:
 On Thu, Sep 2, 2010 at 9:59 AM, Marcus Grimm>>> online.de>wrote:

> Michele Pradella wrote:
>> OK, I'll wait for the workaround.
>> I always use a BEGIN; COMMIT; transaction but often, after a
>> COMMIT; the
>> -wal file does not change in size; it seems it's not checkpointed.
>> Anyway, do you think that with WAL journal mode I should continue
>> to use
>> BEGIN; COMMIT; statements? Or not?
> as Richard mentioned, the wal mode is not intended to work well
> for bulk-insert kind of actions. You may try to split your insert
> cycles into smaller pieces.
>
> However, that might not help if you do sql statements which involve
> a huge implicit transaction, for example "CREATE INDEX .." on a
> huge table.
> At least on windows it can fail with IO error on a GB sized db.
>
 We are working on that problem.  In the meantime, your workaround
 is to
 switch to journal_mode=DELETE before creating large indices.


> Btw, I think the wal file doesn't shrink because sqlite doesn't
> truncate
> that file after completing the checkpoint. That's by design I guess.
>
 Correct.  The -wal file is deleted when the last connection to the
 database
 is closed.  But prior to that, the WAL file is kept open and is not
 truncated.  This is a performance optimization.  Most filesystems
 are faster
 at overwriting an existing file than they are at appending to the
 end of a
 file.  (Note the qualifier "Most" in the previous sentence.  There
 are
 exceptions to the rule.  We try to optimize for the common case.)


> Marcus
>
>
>
>> Il 02/09/2010 14.43, Richard Hipp ha scritto:
>>> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<
> michele.prade...@selea.com
 wrote:
 Hi,
 I found a strange behavior of the sqlite 3.7.2 with WAL journal
 mode.
 Yesterday I found my application DB with a -wal file of 1,5GB
 and a
> -shm
 file of a few MB (about 9MB) with a DB file of 1,2GB: in this
 situation I got the process memory wasted by the "mapped file" of
 the -shm
 file. It seems that the file is mapped many times in memory,
 so the
 process memory becomes 2GB and it can't allocate more memory. In
 that
 situation, operations made on the DB cause disk I/O errors,
 probably due
> to
 the wasted memory.

>>> By coincidence, the SQLite developers were just discussing this
>>> problem
>>> earlier this morning.  There are technical issues with windows
>>> that make
> a
>>> solution difficult.  We are trying to come up with a work-
>>> around.  (The
>>> problem you describe is specific to the windows backend and does
>>> not
> come up
>>> in unix.)
>>>
>>>
 I'm doing some other tests to reproduce the problem, but I think
 it
 could be when I have a lot of operations between a BEGIN; COMMIT;
 So is it OK to use BEGIN; COMMIT; with the WAL journal
 activated?
 Is there some kind of limit on the number of operations between
 a BEGIN;
 COMMIT; statement?

>>> SQLite will not checkpoint the journal until you commit your
> transaction.
>>> So if you leave the transaction open too long, the WAL file and
>>> the -shm
>>> file will grow excessively large.  WAL works best with many
>>> smaller
>>> transactions.  If you have one or two big transactions, then
>>> using a
>>> traditional rollback-journal mode works better.
>>>
>>>
>>>
 I tried to use PRAGMA wal_checkpoint; to try to resolve this
 situation,
 but it seems that command was ignored by sqlite because the -wal
 file does not change in size, and neither does the DB file.
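
Pulling the advice in this thread together, a minimal sketch of the intended
usage pattern (the table and index names are placeholders):

PRAGMA journal_mode=WAL;

BEGIN;
INSERT INTO t(x) VALUES (1);   -- ...a batch of modest size, not millions of rows
COMMIT;
PRAGMA wal_checkpoint;         -- takes effect only once no transaction is open

-- for one-off bulk work such as building an index on a huge table:
PRAGMA journal_mode=DELETE;
CREATE INDEX t_big ON t(x);
PRAGMA journal_mode=WAL;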

Re: [sqlite] "create table as" syntax

2010-09-09 Thread thomas veymont
>
> > hello,
> >
> > I'm trying to use the "CREATE TABLE AS" syntax to create a table and
> > insert
> > a default row at the same time.
> >
> > e.g :
> >
> > sqlite> CREATE TABLE test (x NUMERIC) AS (SELECT 25 AS x);
> > Error: near "AS": syntax error
> >
> > oops.
> >
> > this syntax is documented in http://www.sqlite.org/lang_createtable.html
> > sqlite version is 3.6.20
> >
> > Any idea? Thanks for any help,
>
>
> Dan Kennedy wrote:
>
> The diagram is meant to be interpreted such that a CREATE TABLE
> may have either a list of columns in parentheses or an "AS SELECT"
> clause. Not both.
>
 > http://www.sqlite.org/syntaxdiagrams.html#create-table-stmt
>

>
> Andy wrote:
> I believe you either specify the column definitions *OR* use the "AS"
> clause - i.e.
> CREATE TABLE test (x NUMERIC)
>
> *OR*
>
> CREATE TABLE test AS SELECT 25 AS x;
> Does this help?
>

Sure, it helped.
It works now with the correct syntax,
thanks to Dan & Andy.
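
For the record, a minimal sketch combining the two answers: declare the typed
column first and then insert the row, or let the SELECT define the column:

CREATE TABLE test (x NUMERIC);         -- declares the column and its affinity
INSERT INTO test (x) VALUES (25);      -- supplies the initial row

CREATE TABLE test2 AS SELECT 25 AS x;  -- single statement, column from SELECT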

Tom
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "create table as" syntax

2010-09-09 Thread Andy Gibbs
On Thursday, September 09, 2010 10:56 AM, thomas veymont wrote:

> hello,
>
> I'm trying to use the "CREATE TABLE AS" syntax to create a table and
> insert
> a default row at the same time.
>
> e.g :
>
> sqlite> CREATE TABLE test (x NUMERIC) AS (SELECT 25 AS x);
> Error: near "AS": syntax error
>
> oops.
>
> this syntax is documented in http://www.sqlite.org/lang_createtable.html
> sqlite version is 3.6.20
>
> Any idea? Thanks for any help,

I believe you either specify the column definitions *OR* use the "AS" 
clause - i.e.

CREATE TABLE test (x NUMERIC)

*OR*

CREATE TABLE test AS SELECT 25 AS x;


Does this help?

Andy





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "create table as" syntax

2010-09-09 Thread Dan Kennedy

On Sep 9, 2010, at 3:56 PM, thomas veymont wrote:

> hello,
>
> I'm trying to use the "CREATE TABLE AS" syntax to create a table and
> insert
> a default row at the same time.
>
> e.g :
>
> sqlite> CREATE TABLE test (x NUMERIC) AS (SELECT 25 AS x);
> Error: near "AS": syntax error
>
> oops.
>
> this syntax is documented in http://www.sqlite.org/lang_createtable.html
> sqlite version is 3.6.20

The diagram is meant to be interpreted such that a CREATE TABLE
may have either a list of columns in parentheses or an "AS SELECT"
clause. Not both.

   http://www.sqlite.org/syntaxdiagrams.html#create-table-stmt


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] "create table as" syntax

2010-09-09 Thread thomas veymont
hello,

I'm trying to use the "CREATE TABLE AS" syntax to create a table and insert
a default row at the same time.

e.g :

sqlite> CREATE TABLE test (x NUMERIC) AS (SELECT 25 AS x);
Error: near "AS": syntax error

oops.

this syntax is documented in http://www.sqlite.org/lang_createtable.html
sqlite version is 3.6.20

Any idea? Thanks for any help,
Tom
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  OK, thank you. Today I'm going to port the difference to my source code 
and check whether the memory behavior is OK.

Il 09/09/2010 9.37, Dan Kennedy ha scritto:
> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>
>>   Hi, do you have some news about the wasted memory? have you found the
>> reason for the windows backend?
> Fixed here:
>
> http://www.sqlite.org/src/ci/f213e133f6
>
> Does the problem still show up for you using fossil tip?
>
>
>
>
>
>
>> do you think it could be due to the windows implementation of the
>> mmap?
>>
>> Il 02/09/2010 16.46, Richard Hipp ha scritto:
>>> On Thu, Sep 2, 2010 at 9:59 AM, Marcus Grimm>> online.de>wrote:
>>>
 Michele Pradella wrote:
> OK, I'll wait for the workaround.
> I always use a BEGIN; COMMIT; transaction but often, after a
> COMMIT; the
> -wal file does not change in size; it seems it's not checkpointed.
> Anyway, do you think that with WAL journal mode I should continue
> to use
> BEGIN; COMMIT; statements? Or not?
 as Richard mentioned, the wal mode is not intended to work well
 for bulk-insert kind of actions. You may try to split your insert
 cycles into smaller pieces.

 However, that might not help if you do sql statements which involve
 a huge implicit transaction, for example "CREATE INDEX .." on a
 huge table.
 At least on windows it can fail with IO error on a GB sized db.

>>> We are working on that problem.  In the meantime, your workaround
>>> is to
>>> switch to journal_mode=DELETE before creating large indices.
>>>
>>>
 Btw, I think the wal file doesn't shrink because sqlite doesn't
 truncate
 that file after completing the checkpoint. That's by design I guess.

>>> Correct.  The -wal file is deleted when the last connection to the
>>> database
>>> is closed.  But prior to that, the WAL file is kept open and is not
>>> truncated.  This is a performance optimization.  Most filesystems
>>> are faster
>>> at overwriting an existing file than they are at appending to the
>>> end of a
>>> file.  (Note the qualifier "Most" in the previous sentence.  There
>>> are
>>> exceptions to the rule.  We try to optimize for the common case.)
>>>
>>>
 Marcus



> Il 02/09/2010 14.43, Richard Hipp ha scritto:
>> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<
 michele.prade...@selea.com
>>> wrote:
>>>Hi,
>>> I found a strange behavior of the sqlite 3.7.2 with WAL journal
>>> mode.
>>> Yesterday I found my application DB with a -wal file of 1,5GB
>>> and a
 -shm
>>> file of a few MB (about 9MB) with a DB file of 1,2GB: in this
>>> situation I got the process memory wasted by the "mapped file" of
>>> the -shm
>>> file. It seems that the file is mapped many times in memory,
>>> so the
>>> process memory becomes 2GB and it can't allocate more memory. In
>>> that
>>> situation, operations made on the DB cause disk I/O errors,
>>> probably due
 to
>>> the wasted memory.
>>>
>> By coincidence, the SQLite developers were just discussing this
>> problem
>> earlier this morning.  There are technical issues with windows
>> that make
 a
>> solution difficult.  We are trying to come up with a work-
>> around.  (The
>> problem you describe is specific to the windows backend and does
>> not
 come up
>> in unix.)
>>
>>
>>> I'm doing some other tests to reproduce the problem, but I think
>>> it
>>> could be when I have a lot of operations between a BEGIN; COMMIT;
>>> So is it OK to use BEGIN; COMMIT; with the WAL journal
>>> activated?
>>> Is there some kind of limit on the number of operations between
>>> a BEGIN;
>>> COMMIT; statement?
>>>
>> SQLite will not checkpoint the journal until you commit your
 transaction.
>> So if you leave the transaction open too long, the WAL file and
>> the -shm
>> file will grow excessively large.  WAL works best with many
>> smaller
>> transactions.  If you have one or two big transactions, then
>> using a
>> traditional rollback-journal mode works better.
>>
>>
>>
>>> I tried to use PRAGMA wal_checkpoint; to try to resolve this
>>> situation,
>>> but it seems that command was ignored by sqlite because the -wal
>>> file does
>>> not change in size, and neither does the DB file.
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

>>>
>>
>> -- 
>> Selea s.r.l.
>>
>>
>> Michele Pradella R&D
>>
>>
>> SELEA s.r.l.
>>
>> Via Aldo Moro 69
>> Italy - 46019 Cicognara (MN)
>> Tel +

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Dan Kennedy

On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:

>  Hi, do you have some news about the wasted memory? have you found the
> reason for the windows backend?

Fixed here:

   http://www.sqlite.org/src/ci/f213e133f6

Does the problem still show up for you using fossil tip?






> do you think it could be due to the windows implementation of the  
> mmap?
>
> Il 02/09/2010 16.46, Richard Hipp ha scritto:
>> On Thu, Sep 2, 2010 at 9:59 AM, Marcus Grimm> online.de>wrote:
>>
>>> Michele Pradella wrote:
   OK, I'll wait for the workaround.
 I always use a BEGIN; COMMIT; transaction but often, after a
 COMMIT; the
 -wal file does not change in size; it seems it's not checkpointed.
 Anyway, do you think that with WAL journal mode I should continue
 to use
 BEGIN; COMMIT; statements? Or not?
>>> as Richard mentioned, the wal mode is not intended to work well
>>> for bulk-insert kind of actions. You may try to split your insert
>>> cycles into smaller pieces.
>>>
>>> However, that might not help if you do sql statements which involve
>>> a huge implicit transaction, for example "CREATE INDEX .." on a  
>>> huge table.
>>> At least on windows it can fail with IO error on a GB sized db.
>>>
>> We are working on that problem.  In the meantime, your workaround  
>> is to
>> switch to journal_mode=DELETE before creating large indices.
>>
>>
>>> Btw, I think the wal file doesn't shrink because sqlite doesn't  
>>> truncate
>>> that file after completing the checkpoint. That's by design I guess.
>>>
>> Correct.  The -wal file is deleted when the last connection to the  
>> database
>> is closed.  But prior to that, the WAL file is kept open and is not
>> truncated.  This is a performance optimization.  Most filesystems  
>> are faster
>> at overwriting an existing file than they are at appending to the  
>> end of a
>> file.  (Note the qualifier "Most" in the previous sentence.  There  
>> are
>> exceptions to the rule.  We try to optimize for the common case.)
>>
>>
>>> Marcus
>>>
>>>
>>>

 Il 02/09/2010 14.43, Richard Hipp ha scritto:
> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<
>>> michele.prade...@selea.com
>> wrote:
>>   Hi,
>> I found a strange behavior of the sqlite 3.7.2 with WAL journal  
>> mode.
>> Yesterday I found my application DB with a -wal file of 1,5GB  
>> and a
>>> -shm
>> file of a few MB (about 9MB) with a DB file of 1,2GB: in this
>> situation I got the process memory wasted by the "mapped file" of
>> the -shm
>> file. It seems that the file is mapped many times in memory,
>> so the
>> process memory becomes 2GB and it can't allocate more memory. In
>> that
>> situation, operations made on the DB cause disk I/O errors,
>> probably due
>>> to
>> the wasted memory.
>>
> By coincidence, the SQLite developers were just discussing this  
> problem
> earlier this morning.  There are technical issues with windows  
> that make
>>> a
> solution difficult.  We are trying to come up with a work- 
> around.  (The
> problem you describe is specific to the windows backend and does  
> not
>>> come up
> in unix.)
>
>
>> I'm doing some other tests to reproduce the problem, but I think
>> it
>> could be when I have a lot of operations between a BEGIN; COMMIT;
>> So is it OK to use BEGIN; COMMIT; with the WAL journal
>> activated?
>> Is there some kind of limit on the number of operations between
>> a BEGIN;
>> COMMIT; statement?
>>
> SQLite will not checkpoint the journal until you commit your
>>> transaction.
> So if you leave the transaction open too long, the WAL file and  
> the -shm
> file will grow excessively large.  WAL works best with many  
> smaller
> transactions.  If you have one or two big transactions, then  
> using a
> traditional rollback-journal mode works better.
>
>
>
>> I tried to use PRAGMA wal_checkpoint; to try to resolve this
>> situation,
>> but it seems that command was ignored by sqlite because the -wal
>> file does
>> not change in size, and neither does the DB file.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>

>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>>
>
>
> -- 
> Selea s.r.l.
>
>
>Michele Pradella R&D
>
>
>SELEA s.r.l.
>
> Via Aldo Moro 69
> Italy - 46019 Cicognara (MN)
> Tel +39 0375 889091
> Fax +39 0375 889080
> *michele.prade...@selea.com* 
> *http://www.selea.com*
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users