Re: [sqlite] sqlite 2.0 database

2005-11-07 Thread R S
Are you by any chance trying to open the "-journal" file rather than the database itself?
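
If the file really is an (unencrypted) SQLite 2.x database, one usual
route -- a sketch, assuming both the legacy sqlite (2.x) shell and the
sqlite3 shell are installed, with hypothetical file names -- is to dump
with the old tool and pipe the SQL into the new one:

sqlite old.db .dump | sqlite3 new.db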


On 11/7/05, Manuel Enache <[EMAIL PROTECTED]> wrote:
>
> I have a sqlite 2.0 DB and I need the data within.
>
> I tried opening it with almost all the tools I could find.
>
> They all say: "file is encrypted or is not a database".
>
> I'm 90% sure that the DB is not encrypted.
>
> Is it possible that the DB may be too old for these tools? What tool
> should I use to open that DB and export the data?
>
> Thanks in advance!
>


Re: [sqlite] Size of INSERT and UPDATE in TRANSACTION

2005-10-27 Thread R S
It would be the other way around, no?
The larger the number of INSERTs within a transaction, the better the
performance.
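
For what it's worth, batching is plain SQL -- each COMMIT is one
synchronous disk write, so fewer, larger transactions generally win. A
sketch with a hypothetical table:

BEGIN;
INSERT INTO myTable VALUES (1, 'a');
INSERT INTO myTable VALUES (2, 'b');
-- ... thousands more INSERTs ...
COMMIT;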


On 10/27/05, Hannes Ricklefs <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> I was wondering if anyone has any experience with the number of INSERT and
> UPDATE statements in one TRANSACTION. For example, I have a situation
> where I have to do around 20,000 INSERTs in one TRANSACTION, so I am
> wondering whether there is any performance improvement if I split these
> up into smaller TRANSACTIONS of 1,000 each?
>
> Thanks,
> Hannes
>


Re: [sqlite] Very Slow delete times on larger databases, please help!

2005-10-26 Thread R S
In my case the DELETE happens reasonably quickly, but vacuuming takes
incredibly long. Any ideas why?
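
As a sketch of the drop-then-rebuild approach Mark describes below (the
index and table names are from his post; the DELETE predicate and the
indexed column are hypothetical):

DROP INDEX EXAM_PATIENT_ID_INDEX;
DELETE FROM EXAMINATIONS WHERE DATE < '2005-01-01';
CREATE INDEX EXAM_PATIENT_ID_INDEX ON EXAMINATIONS(PATIENT_ID);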


On 10/21/05, Allan, Mark <[EMAIL PROTECTED]> wrote:
>
>
> Thanks to both Christian Smith and John Stanton for your posts.
>
> > On Wed, 19 Oct 2005, Christian Smith wrote:
> > From the VDBE output you originally posted, you are doing a fair
> > amount of work for each deleted row:
> > - Index search to find the next row from EXAMINATIONS to delete
> > - Removing the row from 3 indexes on EXAMINATIONS
> > - (trigger) Remove related row in SPIRO_TEST from 2 indexes on
> > SPIRO_TEST
> > - (trigger) Remove related row in SPIRO_TEST
> > - Remove the row from EXAMINATIONS
> >
> > Check your cache size. If the above work is causing the 75-page-entry
> > cache to thrash, you're likely to hit worst-case performance, as the
> > thrashing pages may be being accessed in a cyclical fashion. I'm not
> > sure how likely it is that your page cache is not big enough. How big
> > is a row of data, typically?
>
> Our cache size is 75 pages of 8192 bytes = 600KB.
>
> The maximum size of an EXAMINATIONS record is about 500 bytes, 450 bytes
> of which is a varchar field. In the test example the average size of an
> EXAMINATIONS record is 60 bytes, as not much text is saved.
>
> The maximum size of a SPIRO_TEST record is about 5KB; these will vary
> from test to test, but for the test example the size of each SPIRO_TEST
> record is fixed at approx 1KB.
>
> Based on this I don't think that we should be thrashing the cache. I am
> however unsure how SQLite works here.
>
> > Also, a 60x slowdown is not unexpected. The PC version, while probably
> > having the same SQLite page cache size in the SQLite app itself, will
> > most likely be reading and writing to the OS's cache at memory-to-memory
> > copy speed most of the time, with synchronous writes only done when
> > needed. The embedded platform you're using probably writes straight to
> > FLASH, which is necessarily a synchronous operation if your OS doesn't
> > have a cache between your app and the FLASH FS. While flash writes are
> > low latency, they are also low bandwidth, and won't be within an order
> > of magnitude of the performance of a desktop PC writing to the OS
> > filesystem cache.
> >
> > Finally, you give no indication of the actual CPU speed of the embedded
> > platform. It's quite reasonable to assume a development PC could be an
> > order of magnitude faster on sheer integer throughput. I'm amazed how
> > slow my 50MHz microSPARC based SPARCclassic is. Such a platform would
> > not be much, if at all, slower than a modern embedded platform, and has
> > the benefit of gobs of RAM, but still runs the same code at least two
> > orders of magnitude slower than my Athlon XP 1700 based desktop. You
> > have to keep your performance expectations realistic. You are, after
> > all, running a complete, ACID-transaction, SQL relational database.
>
>
> The maximum CPU speed of our ARM7 chip is 71MHz.
>
> > Others have indicated that dropping indexes might help when deleting
> > or inserting records. However, have you tried simply not having
> > indexes at all? Would that cause unacceptable slowdown? Perhaps, for
> > the demo query from the original post, just keep the DATE index on
> > EXAMINATIONS, and use full table scans for queries based on EXAM_TYPE
> > and STATUS_FLAG. Truth is, given the small number of EXAM_TYPE and
> > STATUS_FLAG values (I presume), you're as well just doing table scans
> > when looking for specific exam types and statuses. Indexes only really
> > help when you have a large variation in values with few collisions.
> > Doing this will leave a single index update in addition to the actual
> > row removals, which should improve performance.
>
> I have tried permanently dropping the indexes on EXAM_TYPE and STATUS_FLAG
> and this gives some improvement in time. Indeed it does seem that the
> STATUS_FLAG index is worthless and in the initial version of the software we
> will have only 1 EXAM_TYPE (although this will increase for each module we
> release over the next few months).
>
> I have also tried the suggested method of dropping the
> EXAM_PATIENT_ID_INDEX index on the examinations table before delete and
> rebuilding it on completion. I cannot delete the remaining indexes as they
> are used during the delete operation and this slows the whole operation
> down.
>
> The latest changes have reduced the time to delete the same number of
> records from 3:45 minutes to 2:53 minutes. Still a long time to wait but any
> time saving is welcome, especially as the test is for a 50% full scenario so
> at 99% we can expect it to take 6 minutes.
>
> Thanks again for your help.
>
> If there are any other ideas on how we can optimise this further then
> please let me know.
>
>
> Mark
>

[sqlite] Temp table in which dir?

2005-10-25 Thread R S
Hi,
I programmatically create a TEMP table for my DB. Does the temp table reside
in the same directory as my database, or in another directory (/var/tmp, I
suppose)?
Thanks!
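
By default SQLite decides where temporary tables and files live, but the
3.x series exposes pragmas to control it -- a sketch, assuming your build
supports them (values illustrative):

PRAGMA temp_store = MEMORY;                -- keep temp tables in RAM
PRAGMA temp_store_directory = '/var/tmp';  -- or pick the directory for temp files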


Re: [sqlite] Is this query optimized?

2005-10-25 Thread R S
Actually I have indexes on both columnVal (a varchar) and insertionTime
(a bigint).


On 10/25/05, Martin Engelschalk <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> if the where clause in your subquery is not supported by an index (and I
> suspect this is the case), sqlite has to do a full table scan.
> Therefore, the execution time you observe is to be expected.
> I think that you can change your query to something like this (I didn't
> test it):
>
> select count(*) as totalCount
> from myTable
> where (insertionTime BETWEEN :beginTime and :endTime)
> and columnVal > '0'
> group by columnVal
> having totalCount > 10
>
> If you also create an index on insertionTime, your query should be fast.
>
> Martin
>
>
> R S schrieb:
>
> >Hi,
> >I am trying to use this query and notice that the execution time
> >increases linearly as the Table size increases.
> >
> >select totalCount from (select count(*) as totalCount from myTable where
> >(insertionTime BETWEEN :beginTime and :endTime) and columnVal > '0'
> >group by columnVal) where totalCount > 10;
> >
> >Diff between beginTime and endTime is always constant.
> >columnVal is a varchar.
> >aColumn is an integer.
> >
> >Thanks!
> >
> >
> >
>
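
For reference, the index Martin suggests would be created like this (the
index name is hypothetical):

create index idx_myTable_insertionTime on myTable(insertionTime);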


[sqlite] Is this query optimized?

2005-10-25 Thread R S
Hi,
I am trying to use this query and notice that the execution time increases
linearly as the Table size increases.

select totalCount from (select count(*) as totalCount from myTable where
(insertionTime BETWEEN :beginTime and :endTime) and columnVal > '0' group
by columnVal) where totalCount > 10;

Diff between beginTime and endTime is always constant.
columnVal is a varchar.
aColumn is an integer.

Thanks!


[sqlite] Receive error: database is full

2005-10-23 Thread R S
I receive the "database is full" error with return value 13 (SQLITE_FULL).
I checked my partition space and its usage is just 2% (Platform is Linux
using SQLite 3.2.2).
Also I am using Temp tables and periodically move data into my Main Table. I
wondered if my Temp Table is full because strace on my process gave me
messages like:
access("/var/tmp/sqlite_MyjXYCDJGFYkfnc-journal", F_OK) = -1 ENOENT (No such
file or directory)

However, I have some debug statements in my code which indicate that the
failure is during insertion directly into the Main Table (the commit phase
of that transaction).
Any ideas?
Thanks!


[sqlite] "or" in SQL Query converted to "IN"

2005-10-20 Thread R S
Is an "or" in a SQL query converted to "IN" by default? I am using 3.2.2.

I ran across this doc

http://www.sqlite.org/optoverview.html
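
The rewrite that page describes, as a sketch (table and values
hypothetical) -- a chain of equality ORs on one column can be treated
like an IN list:

select * from myTable where x = 1 or x = 2 or x = 3;
-- can be evaluated the same way as:
select * from myTable where x in (1, 2, 3);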


Re: [sqlite] Qn on moving data from Temp Table to Main Table

2005-10-09 Thread R S
Thanks Clark.

Another question: what if my Temp Table gets full (i.e. I run out of
memory)? Would I get a SQLITE_NOMEM error?

If I am worried about running out of memory would a better approach be to
write into a Table in another DB and then move the contents of this table
into my Main Table by attaching the 2 DBs?
Thanks
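
Clark's null-primary-key trick from the quoted message below, sketched
with hypothetical table and column names -- inserting null for recordNo
lets the main table assign fresh values:

insert into MainTable select null, col1, col2 from TempTable;
delete from TempTable;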


On 10/9/05, Clark Christensen <[EMAIL PROTECTED]> wrote:
>
> If you insert null into the main table as the value for recordNo (instead
> of the value from the temp table) when you transfer, you'll get a new
> recordNo value from main. Of course, that means you'll have to change
> "select * from temp" in your insert stmt to a select that explicitly names
> each column (insert into main select null, col1, col2... from temp;)
>
> -Clark
>
> ----- Original Message -----
> From: R S <[EMAIL PROTECTED]>
> To: sqlite-users@sqlite.org
> Sent: Saturday, October 08, 2005 15:24:28
> Subject: [sqlite] Qn on moving data from Temp Table to Main Table
>
> Hi,
> The Temp Table (obviously) mimics the Main Table. The Main Table has a
> column, recordNo, which is an integer primary key (which autoincrements).
> Now I periodically move data from the Temp into the Main Table (and then
> delete records from the Temp Table). It succeeds the first time, but fails
> subsequently because of duplicate recordNo values (I presume).
> Do I have to maintain the last recordNo value in the Main Table and pass
> it to the Temp Table so that it starts using those values during insertion
> to avoid a clash? Is there a better way of working around this problem?
> Thanks!
>
>
>
>


[sqlite] Qn on moving data from Temp Table to Main Table

2005-10-08 Thread R S
Hi,
The Temp Table (obviously) mimics the Main Table. The Main Table has a
column, recordNo, which is an integer primary key (which autoincrements). Now
I periodically move data from the Temp into the Main Table (and then delete
records from the Temp Table). It succeeds the first time, but fails
subsequently because of duplicate recordNo values (I presume).
Do I have to maintain the last recordNo value in the Main Table and pass it
to the Temp Table so that it starts using those values during insertion to
avoid a clash? Is there a better way of working around this problem?
Thanks!


Re: [sqlite] Re: rowId continuity?

2005-09-14 Thread R S
> I don't quite see why you need rowId to be contiguous to implement your
> paging. Can't you do something like this:
> 
> select * from mytable
> where rowId > :lastRowId
> order by rowId
> limit 100
> 
OK, I'll do it this way, but my concern was: are these rowids reused
(refilled) later? Also, what happens if the rowid exceeds U32?


[sqlite] rowId continuity?

2005-09-14 Thread R S
Hi,
I have a screen which displays rows of my table in a paged form (i.e. one
can do Back/Previous on them and navigate across pages). I use the rowId of
the top and bottom rows to query for the next/previous set of rows in the
page. However, I just read somewhere that the rowId need not be contiguous,
and this would imply that my paging mechanism is all wrong.

The rowId in my table is a bigint primary key. Any suggestions?
Thanks!


[sqlite] SQLite where clause parsing?

2005-09-06 Thread R S
Hi,
I have a million records in my DB. I turned .explain on and ran the query
below, which took a long time although I just want the last 100,000
records... (It ran much faster when my table had 100,000 records), so I
assume it is related to how I constructed the statement.

select columns from myTable WHERE (rowid > (select max(rowid) from myTable)
- 10) and many more conditions group by myTable.column1, myTable.column2

explain just returned the column headers?

Any pointers?
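
A hedged note on the shell here: the .explain dot-command only adjusts
output formatting; to see the plan you prefix the statement itself with
EXPLAIN, roughly:

sqlite> .explain on
sqlite> EXPLAIN select columns from myTable WHERE rowid > (select max(rowid) from myTable) - 10;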


[sqlite] What is this extraneous table?

2005-09-05 Thread R S
Hi,
I have a DB containing a table called MyTable, which is about 3GB in size
currently.
When my process is writing to it, it seems to create a file named something
like MyTable-vmhyimo41pq82sw63ip0, with an associated journal file. I seem
to have quite a few of these files. I cannot use the Command Line tool to
view this extraneous DB while the process is running, because it is locked.
Once the process dies, the file seems to have a large size, but doing a
.table or .schema returns nothing.
Please help.


[sqlite] Lock type during an update?

2005-08-24 Thread R S
I presume it is a PENDING lock? Does SQLite acquire a PENDING lock as
soon as it begins an UPDATE operation?


[sqlite] How do I view the execution plan of a SQL query?

2005-08-13 Thread R S
Thanks!


[sqlite] Re: Limitation on no. of Indices in a table?

2005-08-04 Thread R S
Also, the column is a tinyint. Guess that shouldn't matter, but just FYI...

On 8/4/05, R S <[EMAIL PROTECTED]> wrote:
> Hi,
>   I have a table with a large # of indices (almost as many as the # of
> columns in the table). SQLite works beautifully with most constraints,
> but mysteriously takes a long time for a query with constraints on the
> column whose index was created last. Are there any limitations on the
> max # of indices allowed for a table?
>


[sqlite] Limitation on no. of Indices in a table?

2005-08-04 Thread R S
Hi,
  I have a table with a large # of indices (almost as many as the # of
columns in the table). SQLite works beautifully with most constraints,
but mysteriously takes a long time for a query with constraints on the
column whose index was created last. Are there any limitations on the
max # of indices allowed for a table?


[sqlite] Qn on Transaction across 2 DBs

2005-08-03 Thread R S
Hi,
   I have a requirement where I need to update 2 tables in 2 separate
DBs within a transaction. Do I need to attach the second DB to the
first at the start of the transaction? As I understand it, SQLite commits
a transaction across a single DB only?
Thanks!
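
As a sketch of the attach-and-update pattern (file, table, and column
names hypothetical): once both databases are attached to one connection,
a single transaction can cover writes to both.

ATTACH DATABASE 'second.db' AS second;
BEGIN;
UPDATE main.t1 SET status = 1 WHERE id = 42;
UPDATE second.t2 SET status = 1 WHERE id = 42;
COMMIT;
DETACH DATABASE second;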


Re: [sqlite] Long retrieval times

2005-08-02 Thread R S
Thanks everyone for your response.

Dennis, that works great!
Jay, I presume you meant (x >= 'a'). This works great too!

Thank you again!


On 8/2/05, Dennis Cote <[EMAIL PROTECTED]> wrote:
> Thomas Briggs wrote:
> 
> >
> >
> >
> >
> >>However if I use something like:
> >>select * from myTable where column2!='';
> >>(Takes a long time to return).
> >>
> >>I guess because the column in that row isn't indexed? Any
> >>alternatives?
> >>
> >>
> >
> >   I don't believe that indices can be used to satisfy != conditions, so
> >even if the column is indexed, you always have to do a table scan to
> >find rows matching the criteria.  That can obviously take some time
> >depending on the size of the table being scanned.
> >
> >   -Tom
> >
> >
> >
> You can get the same effect as != using a union to combine the results
> of two queries that can use the index, one using <, and the other using >.
> 
> select * from myTable where column2 < ''
> union
> select * from myTable where column2 > ''
> 
> This produces the same results as != (even skipping rows where
> column2 is NULL), but uses the index to speed up locating the rows that
> meet the criterion.
> 
> I'm not sure if it really is faster in your application, but it may be
> much faster if there are many rows that equal the criterion and are
> skipped by using the index. If most rows are selected then there is
> probably little gain.
> 
> HTH
> Dennis Cote
>


[sqlite] No Timeout during commit?

2005-07-25 Thread R S
Hi,
   I have 2 processes accessing the DB, one reading and the other
writing. Often the process reading the DB can take a long time and can
block the other process from committing a bunch of records to the DB.
I noticed that when the reader process has the lock and the writer
process tries to commit a bunch of records to the DB, the writer
blocks forever till it obtains the lock. Is this intended? I could
also always try to commit a little later, and that approach works well
for my particular app. Can the behavior be changed?
Thanks.


[sqlite] Basic qns about TEMP table

2005-07-22 Thread R S
1) When I want to move/insert records from a TEMP table into my
regular table, can I do this within a transaction?

2) Suppose my Table/(and Temp Table) have a schema like

CREATE [TEMP] TABLE My[Temp]Test (
id integer primary key not null,
value int);

and I use insert into MyTest values(null, {intvalue}) to populate my DBs.
When I finally move records from my Temp Table into my Regular Table,
will the id be automatically sequential?
So if MyTest contained:
1, 1
2, 5
3, 6

and MyTempTest contained:
1, 7
2, 11
3, 44

after insertion would they look like:
1, 1
2, 5
3, 6
4, 7
5, 11
6, 44

Also, is insert into Table values(null, ...) the most efficient way of
inserting a record when the id is unique?

Thanks!


Re: [sqlite] Efficient record insertion techniques?

2005-07-20 Thread R S
OK, that improved my response time.
Now, what if I have queries which don't query the id field at all? Will
the response then be sluggish?


On 7/20/05, D. Richard Hipp <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-07-20 at 11:15 -0700, R S wrote:
> > Actually I deduced the file size is not really the problem.
> >
> > After reaching 6 million records, I ran a simple query asking for the
> > last 50 records from my app.
> > Using strace I figured out that the DB is actually running through
> > *all the records* to return me the last 50, and while doing that it
> > locked the DB! Maybe I constructed my query wrong...
> > It looks something like this:
> >
> > select fields from table order by id desc limit 50;
> >
> 
> Try putting an index on table.id.
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
>


Re: [sqlite] Efficient record insertion techniques?

2005-07-20 Thread R S
Actually I deduced the file size is not really the problem.

After reaching 6 million records, I ran a simple query asking for the
last 50 records from my app.
Using strace I figured out that the DB is actually running through *all
the records* to return me the last 50, and while doing that it locked the
DB! Maybe I constructed my query wrong...
It looks something like this:

select fields from table order by id desc limit 50;

Any ideas???

On 7/20/05, R S <[EMAIL PROTECTED]> wrote:
> Actually it is an ext3...and I guess the File Size limitation on ext3 is 4TB.
> 
> 
> On 7/20/05, F.W.A. van Leeuwen <[EMAIL PROTECTED]> wrote:
> > t the same as PARTIT
>


Re: [sqlite] Efficient record insertion techniques?

2005-07-19 Thread R S
Thanks Donald. Replies inline.


On 7/19/05, Griggs, Donald <[EMAIL PROTECTED]> wrote:
> I'm no expert, but I hope the following are accurate.
> 
> Regarding: 1) Should I use a transaction even for a single record?
>   You already are, since your inserts automatically become one-line
> transactions.
OK then I guess I need to batch them to improve performance. Temp
tables best way to go?
 
> 
> Re: 2) I open the DB, insert the record and close the DB for every record
> inserted. What if the process crashed before I closed the DB. Would the DB
> get locked? If so how do I unlock it?
> 
> You only need worry about crashing *during* the insert, and even then
> the sqlite journal capability should do glorious things when you next open
> the database, restoring it to a consistent state.
> 
> If you left the database open during your logging sessions then it should be
> much more efficient -- at least in terms of disk activity and CPU time.

My concern here is that leaving the DB open for long leaves the DB
file susceptible to corruption during power failures, spikes etc.
Opening, writing and closing reduces this window.

Could you clarify how the DB is unlocked if my app that obtained a
handle on it got killed before releasing the lock?


> 
> Re:  3) Six million and (not) counting:
>I don't really know, but are you perhaps on a FAT filesystem with a
> 2GByte filesize limit?
No. It's an ext2 :-(
When the DB got locked, the partition usage was just 6%!


> 
> 
> Donald Griggs
> 
> 
> 
> 
> -Original Message-
> From: R S [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, July 19, 2005 2:23 PM
> To: sqlite-users@sqlite.org
>


[sqlite] Efficient record insertion techniques?

2005-07-19 Thread R S
Hi,
  I wrote a real-time logging app that inserts logs from various Unix
machines in our lab into a SQLite database.
The insertions are not batched as of now; maybe batching itself would be
an optimization.
1) Should I use a transaction even for a single record?
2) I open the DB, insert the record, and close the DB for every record
inserted. What if the process crashes before I close the DB? Would
the DB get locked? If so, how do I unlock it?
3) I had the app running over the last week and noticed it worked well
till it reached 6 million records. The DB then suddenly got locked w/o
the app crashing or anything. I then killed the app and could run
queries via the Command Line tool. Restarting the app locked the DB
again.
Any ideas? :-(
Thanks!


Re: [sqlite] Reindexing a table in SQLite

2005-03-02 Thread R S
I periodically delete older records from my table (which contains
hundreds of thousands of entries). I want to reindex the table for
more efficient access.

Thanks!
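
The drop-and-recreate approach Kurt mentions below, as a sketch with
hypothetical index and column names (VACUUM additionally rebuilds the
whole file, which can reclaim space after bulk deletes):

DROP INDEX idx_mytable_date;
CREATE INDEX idx_mytable_date ON mytable(date);
VACUUM;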


On Wed, 02 Mar 2005 18:20:39 -0600, Kurt Welgehausen <[EMAIL PROTECTED]> wrote:
> You can drop an index and later create it again.
> Why do you think you need to do that?
> 
> Regards
>


[sqlite] Reindexing a table in SQLite

2005-03-02 Thread R S
Hi,
  How do I programmatically reindex a table in SQLite? Is it possible?
I did a search but didn't find much. I am using version 3.0.8
Thanks!


[sqlite] Support for ODBC?

2004-10-14 Thread R S
Hi,
  Can an application access data from SQLite via ODBC? Didn't see
documentation on the same.
Thanks!


Re: [sqlite] Is this an in-memory database too

2004-10-07 Thread R S
Another novice question:
Does this run in its own separate process, or is it directly linked into
the application requiring access to the stored data?
I don't want a crash in any of my embedded modules to crash the
database as well.


On Thu, 07 Oct 2004 07:44:43 -0400, D. Richard Hipp <[EMAIL PROTECTED]> wrote:
> 
> 
> R S wrote:
> > I was trying to find info for the same, but couldn't. Do let me know.
> >
> 
> SQLite normally operates off of disk.  It reads as little from the
> disk as it can get by with, but once it reads from the disk it
> caches information in memory in case it has to reuse it later.
> The size of the memory cache is limited by default to 2MB, but
> you can change this at runtime using a pragma.
> 
> If, however, you open the special database file named ":memory:"
> it will create an empty in-memory only database that you can use
> for temporary storage.
> 
> --
> D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
> 
>
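
The pragma Richard refers to, as a quick sketch (the value shown is
illustrative; cache_size is measured in database pages, not bytes):

PRAGMA cache_size = 4000;  -- per-connection page cache, in pages

And the in-memory variant he describes is opened by passing the special
filename ":memory:" wherever a database filename is expected.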


[sqlite] Is this an in-memory database too

2004-10-07 Thread R S
I was trying to find info for the same, but couldn't. Do let me know.