Re: [sqlite] Porting Sqlite to MQX OS: Question 2

2010-03-15 Thread Dan Kennedy

On Mar 16, 2010, at 5:22 AM, GeoffW wrote:

>
> Hello Dan
>
> Thanks for your useful input. To answer your questions.
>
>> Do you have any file-locking primitives provided by the OS?
> There are no file locking OS Primitives at all that I can use.
>
>> Do you have clients connecting to the database from multiple
>> processes? Or only multiple threads within the same process?
>
> My application doesn't have a concept of processes and threads as you
> might find in Windows.
> Think of it as a small embedded data-collection application. The
> platform has essentially one fixed application running, made up of,
> say, 10 different concurrent tasks.

Do the tasks share a heap? And do you have multiple connections
to the database (multiple calls to sqlite3_open() or sqlite3_open_v2())?

If you only have one connection, then you don't need any locking. Have
xCheckReservedLock() set its result variable to 0 for all calls.

If all tasks share a heap, maybe you can implement locking in-memory
using global variables in the VFS layer. Or, if you strictly use
shared-cache mode, you will not need any locking.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Attach a database dynamically if not already available within a trigger

2010-03-15 Thread andy
Thank you. Don't worry, readers, I'm not trying anything "007".

Rather, I have a variety of separate utilities that insert/update/
delete records across several databases, and I hoped to use triggers to
log changes to a single place.

By embedding the attach statement as part of the trigger (i.e. it becomes
part of the database specification), each utility/program does not have
to remember to attach the log database before it starts updating data,
i.e. the trigger commands are fully self-contained and less likely to
cause issues if a trigger is executed and the log database has not already
been attached (i.e. the updating program 'forgot').

thanks for the help btw!


On Mar 15, 2010, at 10:04 AM, Pavel Ivanov wrote:

>> How can I embed the 'attach" command within the trigger statement so
>> that the log database is attached if it is not already available?
>
> AFAIK, this is impossible.
>
> But why do you need that? If you write your own application you can
> always attach this database at the beginning. But if you want to spy
> after some third-party application I believe SQLite is not suitable
> for this kind of tasks...
>
>
> Pavel
>
> On Sun, Mar 14, 2010 at 10:50 PM, andy   
> wrote:
>> Hello
>> I am hoping someone can help with the following problem.
>>
>> - I want to create a log of all changes to tables in a database.
>> - I would like to keep the log table in a separate database to the
>> main data files
>> - I plan to use triggers to capture changes and insert them into the
>> log table.
>>
>> Question:
>> How can I embed the 'attach" command within the trigger statement so
>> that the log database is attached if it is not already available?
>>
>> i.e something like
>>
>> create trigger insert_data after insert on data
>> begin
>>if database logdb does not exist
>>attach logdb.db as logdb;
>>
>>insert into logdb.log values(.)
>> end;
>>
>>
>> I am also a little concerned about performance so I am not sure if
>> testing for the presence of the database for every change will add to
>> much overhead. Thoughts?
>> Thanks
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] regular expressions

2010-03-15 Thread Jean-Christophe Deschamps

>is anybody aware of a possibility to do s.th. like
>select * from table where field like '[A|a]%'

Unless SQLite was built with a non-standard compile option, and provided you
don't issue
 PRAGMA case_sensitive_like = 1;
LIKE is case-insensitive, so LIKE 'A%' is the same as LIKE 'a%'.

SQLite offers another filtering function: GLOB, which is case-sensitive
and accepts '*' and '?' wildcards (instead of LIKE's % and _) and a
regexp-style character set for selection '[a-z]' or exclusion '[^0-9]'.

select 'abcd123' glob '*[Cc]?[1-9]2*';
1
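
So the original request could be written, for instance, as follows (the
table name is made up here, since "table" itself is an SQL keyword):

select * from mytable where field glob '[Aa]*';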




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] regular expressions

2010-03-15 Thread Ralf Jantschek
Hi,

is anybody aware of a possibility to do something like
select * from table where field like '[A|a]%' 

Thanks
Ralf

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Attach a database dynamically if not already available within a trigger

2010-03-15 Thread Simon Slavin

On 15 Mar 2010, at 10:52pm, andy wrote:

> How can I embed the 'attach" command within the trigger statement so  
> that the log database is attached if it is not already available?

Why not just attach it outside the TRIGGER ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread Tim Romano
If you want to "combine two tables" by copying rows from TableB into 
TableA, and TableA has an autoincrementing primary key, there is no need 
to reference the PK column in the SQL statement:

insert into TableA(address, weight)
select address, weight from TableB

The approach above does indeed "bypass the primary key conflict" because 
it leaves it up to TableA to assign the PK value upon insert.  Now, I 
don't know what inflexible "program" you are using that doesn't give you 
control over which columns you want to select. Maybe your "program" will 
let you create a view on TableB?

create view MyView as select address, weight from TableB

and then you could

insert into TableA(address, weight)
select address, weight from MyView

Regards
Tim Romano


On 3/15/2010 9:32 AM, dravid11 wrote:
> Well the situation is that i am merging data of one table in data of another
> table using a program
> so it is going to add all values by it self .I am not actually using insert
> query in that case to select values to add.
>
>
> There is another scenario , what if inserting a data and i want to bypass
> the primary key conflict and just update rest of the values.
>
> again i am combing two tables together so it should take all columns .
>
>
>
> Tim Romano wrote:
>
>> If all you want to do is to insert a new row, do not mention the primary
>> key column name in the insert statement:
>>
>> INSERT INTO temp (address, weight)
>> values( "blah blah", 100)
>>
>> The autoincrementing primary key will be autoincremented.
>>
>> Regards
>> Tim Romano
>>
>>
>> On 3/15/2010 9:15 AM, dravid11 wrote:
>>  
>>> Hello !
>>> I have been trying to search for this solutions for days,yet did not find
>>> the solution.
>>>
>>>
>>> I want to write an insert query on a table. When there is a conflicting
>>> primary key
>>> then it should increment the primary key and insert the row .
>>>
>>> For example i want to run this query
>>> INSERT INTO temp VALUES("1","112","112");
>>>
>>> Here first column is auto increment primary Key.
>>> This query will add the record(20,112,112) in the table
>>> Now when i run the query as
>>>
>>> INSERT INTO temp VALUES("1","100","100");
>>>
>>> I want the result as (2,100,100)
>>>
>>> But it give unique constraint error.
>>> How can i do that ,i.e, duplicate primary key comes in a table insert the
>>> new record and change the primary key of the new record.
>>>
>>> I have used this query but did not work
>>>
>>> insert or replace INTO temp (tempID,Address,Weight)
>>> VALUES(new."tempID",new."Address",new."Weight") ;
>>> SELECT RAISE(IGNORE);
>>> END
>>>
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>>  
>
>
>
>
>
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Attach a database dynamically if not already available within a trigger

2010-03-15 Thread andy
Hello
I am hoping someone can help with the following problem.

- I want to create a log of all changes to tables in a database.
- I would like to keep the log table in a separate database to the  
main data files
- I plan to use triggers to capture changes and insert them into the  
log table.

Question:
How can I embed the 'attach' command within the trigger statement so
that the log database is attached if it is not already available?

i.e. something like

create trigger insert_data after insert on data
begin
if database logdb does not exist
attach logdb.db as logdb;

insert into logdb.log values(.)
end;


I am also a little concerned about performance, as I am not sure if
testing for the presence of the database for every change will add too
much overhead. Thoughts?
Thanks

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Scott Hess
On Mon, Mar 15, 2010 at 11:18 AM, Scott Hess  wrote:
> AFAICT, the operation to copy the pages back _is_ journaled, and the
> journal will get any pages which are overwritten in the front of the
> main database.  If the initial database has half of the pages used, it
> seems like the journal could range from almost nothing (if the used
> pages were mostly at the end) to almost half the database size (if the
> used pages were mostly at the front), so to be safe one would need 2x
> the final database size available.

As an aside, it occurs to me that one could implement an alternate
VACUUM which wrote the new database back to the main database file,
journalling the original versions of those pages.  The main database
would be locked for longer, but that might be a fair tradeoff for some
applications.

-scott
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Porting Sqlite to MQX OS: Question 2

2010-03-15 Thread GeoffW

Hello Dan

Thanks for your useful input. To answer your questions.

>Do you have any file-locking primitives provided by the OS?
There are no file locking OS Primitives at all that I can use.

>Do you have clients connecting to the database from multiple
>processes? Or only multiple threads within the same process?

My application doesn't have a concept of processes and threads as you might
find in Windows.
Think of it as a small embedded data-collection application. The platform
has essentially one fixed application running, made up of, say, 10 different
concurrent tasks. I would like to structure the program so that any of the
tasks can safely access the database. The frequency of reading or writing to
the database will be fairly low, so performance isn't a prime consideration.
However, integrity of the database is vital; I don't want to risk corruption
due to a broken or misunderstood file locking implementation.

On first reading, your message seemed to imply that the internal lock-state
variable (NONE, PENDING, RESERVED, EXCLUSIVE, etc.) that SQLite keeps for each
db file access is enough to make the locking safe, without needing an OS
lock/unlock primitive? I suspect I am mis-interpreting your comments here,
though?

Thanks for any further clarification on how to implement this.

Regards Geoff





Dan Kennedy-4 wrote:
> 
> 
> On Mar 15, 2010, at 10:04 PM, GeoffW wrote:
> 
>>
>> Hi Kees
>>
>> Thanks for the response and giving me a clue to look at dot locking,  
>> I had
>> missed the dot lock mechanism as I had concentrated mainly on the  
>> osWin.c
>> file. I cant find any documentation other than the source code on  
>> how this
>> low level file locking stuff is implemented.
>>
>> I am currently reading through the code to see if I can figure it  
>> out. The
>> unix.c module is horrendously complicated so I am struggling to make  
>> sense
>> of it.
> 
> It is more complicated. But the comments in it are often better than
> those in os_win.c. See the comments above unixLock(), at or around line
> 1208 of os_unix.c. Also here:
> 
>http://www.sqlite.org/lockingv3.html
> 
>> In simple terms I was thinking that when the db file is opened for  
>> writing,
>> I could use the filename with ".lock" appended, and open that new  
>> dummy
>> file, which is then used to determine if I have a lock on the original
>> database file or not ?
> 
> You can implement locking that way. But it would be a last
> resort. And is the sort of trick that could be prone to race
> conditions.
> 
> Do you have any file-locking primitives provided by the OS?
> 
> Do you have clients connecting to the database from multiple
> processes? Or only multiple threads within the same process?
> 
>> As the code says for dot locking there is really only a 2 state lock,
>> exclusive or no lock. I got lost by the complexity of the code, for  
>> vxworks
>> it seems to keep linked lists of filenames, not sure why I would  
>> need to do
>> that, also it does lots of other stuff that I didnt really  
>> understand for
>> example updating the file timestamp on the ".lock" file.
> 
> 2 locking states are all SQLite needs to work. An exclusive lock
> for writers and a shared lock for readers. The other locks are
> only required to increase concurrency between readers and the
> single writer.
> 
> If you handle RESERVED and PENDING locks in the same way as EXCLUSIVE
> locks, things will work fine, just you lose a little concurrency.
> 
> 
> 
> 
>> I was hoping Dr Hipp could manage to find a few minutes to write  
>> some notes
>> explaining what needs to be implemented re: dot locking and what can  
>> be
>> simplified and ignored.
>>
>> Thanks if anyone can fill in some details here.
>>
>> Geoff
>>
>>
>>
>> Kees Nuyt wrote:
>>>
>>> On Fri, 12 Mar 2010 02:22:31 -0800 (PST), GeoffW
>>>  wrote:
>>>

>>>> Hi
>>>>
>>>> Now I have got a half ported version of Sqlite up and running on MQX I
>>>> cant put off any longer the question of file locking and how I tackle
>>>> that ? For the moment i had made the xLock() and xUnlock() the same as
>>>> in osWin.c version but with the Windows lock() and unlock() function
>>>> calls commented out.
>>>>
>>>> The problem I have is that MQX's file system is just ANSI C, and it
>>>> lacks any lock/unlock functions.
>>>>
>>>> As Sqlite has been ported to so many diverse operating systems I am
>>>> hoping that this question of what to do for an ANSI only File system
>>>> has already been hit and a workaround defined ?
>>>>
>>>> I am pretty hazy on this topic of file locking so I would appreciate
>>>> some basic discussion and guidance. It might help if I explain my
>>>> application a little. It is an embedded system where we are just
>>>> running one application, but it is multithreaded. For my initial
>>>> porting work I have made SQLITE_THREADSAFE = 0 (so I probably dont
>>>> even need a file lock in the

Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Jay A. Kreibich
On Mon, Mar 15, 2010 at 11:18:32AM -0800, Scott Hess scratched on the wall:
> On Sun, Mar 14, 2010 at 5:18 PM, Jay A. Kreibich  wrote:

> > While I have not tested this, I was under the impression that the
> > journal file is very very small, as no modifications are made to the
> > database, other than the final low-level copy (which is not a
> > journaled operation).

> AFAICT, the operation to copy the pages back _is_ journaled, and the
> journal will get any pages which are overwritten in the front of the
> main database.

  Taking a closer look at the comments in the code, I believe you are
  correct.  That makes sense anyways, as there is still a need to be
  able to back-out.
  
  There are some comments in the code about how turning journaling
  off is slower than leaving it on, but I see now that those comments
  are for the temp database, not the original file.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Scott Hess
On Sun, Mar 14, 2010 at 5:18 PM, Jay A. Kreibich  wrote:
> On Sun, Mar 14, 2010 at 07:19:59PM -0400, Matthew L. Creech scratched on the 
> wall:
>> I have a SQLite database with one large table, and I'd like to shrink
>> the size of that table to free up space in the filesystem.  My problem
>> is that the database is (for example) 100 MB, and I have 80 MB of free
>> filesystem space.  I figured that I could DELETE, say, half of the
>> records from the table, then VACUUM, and the VACUUM would
>> [temporarily] need ~50 MB of free space for the journal (since that's
>> how much real data there is).
>>
>> Instead, I'm finding that it needs a full 100 MB for the journal, even
>> though once the VACUUM succeeds the resulting DB is only 50 MB.  As a
>> result, I'm stuck unable to shrink the database, since VACUUM fails
>> with a disk I/O error (out of space), seemingly no matter many entries
>> I remove ahead of time.  I know the space is being freed, since
>> "PRAGMA freelist_count" shows the expected numbers.  So presumably
>> this is just an artifact of the way VACUUM is implemented internally.
>
>  Are you sure it is the journal file that is growing too large?
>
>  VACUUM works by making a logical copy of the database from the
>  original database into a temp database.  This restructures the
>  database and recovers space.  The temp database is then copied back
>  to the original database using low-level page copy.  This low-level
>  copy then truncates the original database file, recovering
>  filesystem space.
>
>  This also means the total space required to VACUUM a database is:
>  [old database size] + [new database size] + [journal file]
>
>  While I have not tested this, I was under the impression that the
>  journal file is very very small, as no modifications are made to the
>  database, other than the final low-level copy (which is not a
>  journaled operation).

AFAICT, the operation to copy the pages back _is_ journaled, and the
journal will get any pages which are overwritten in the front of the
main database.  If the initial database has half of the pages used, it
seems like the journal could range from almost nothing (if the used
pages were mostly at the end) to almost half the database size (if the
used pages were mostly at the front), so to be safe one would need 2x
the final database size available.

I could be mis-reading the code.

-scott
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with DATABASE_LOCKED

2010-03-15 Thread Martin Engelschalk
Hello Igor,

Yes, sorry, I forgot to mention that; I do have write access. The
database file itself gets created.

Martin

Am 15.03.2010 20:02, schrieb Igor Tandetnik:
> Martin Engelschalk
>   wrote:
>
>> we experience a problem at a customer site, where the very first
>> statement the program executes on a newly created database failes
>> with a "database locked" error. The statement that failes is "PRAGMA
>> synchronous = OFF". The reason seems to be that the customer inists on
>> placing the database file on a network file system.
>>  
> Do you have write access to the directory where the database file is located? 
> SQLite needs to create a journal file there.
>
> Igor Tandetnik
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with DATABASE_LOCKED

2010-03-15 Thread Igor Tandetnik
Martin Engelschalk
 wrote: 
> we experience a problem at a customer site, where the very first
> statement the program executes on a newly created database failes
> with a "database locked" error. The statement that failes is "PRAGMA
> synchronous = OFF". The reason seems to be that the customer inists on
> placing the database file on a network file system.

Do you have write access to the directory where the database file is located? 
SQLite needs to create a journal file there.

Igor Tandetnik


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread D. Richard Hipp

On Mar 14, 2010, at 7:19 PM, Matthew L. Creech wrote:

> Hi,
>
> I have a SQLite database with one large table, and I'd like to shrink
> the size of that table to free up space in the filesystem.
>
> I'm finding that it needs a full 100 MB for the journal, even
> though once the VACUUM succeeds the resulting DB is only 50 MB.
>
> Any tips are appreciated.  Thanks!

PRAGMA journal_mode=OFF;

Be warned, though, that if your session crashes or you lose power, your
database will go corrupt if there is no journal.
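
So for this case the whole operation would be something like the following
(a sketch; as far as I know the setting only affects the connection that
issues it):

PRAGMA journal_mode = OFF;
VACUUM;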

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Can default column values use max() ?

2010-03-15 Thread Simon Slavin

On 14 Mar 2010, at 8:12pm, Max Vlasov wrote:

> I'm not sure I understood you correctly, but the query
> 
> INSERT INTO Books
> (Author, Title)
> VALUES
> ((Select Max(rowid) FROM Authors), "test")
> 
> just worked (tested it), sure you have to make id autoincrement to ensure
> Max(rowid) is actually the last author

I can do it in the INSERT instruction, but I was hoping to make that value a 
default.
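
As far as I can tell the DEFAULT clause only accepts constant values, so the
nearest thing might be a trigger (an untested sketch; the trigger name is
made up, and it assumes the Author column is simply left NULL on insert):

CREATE TRIGGER books_default_author AFTER INSERT ON Books
WHEN new.Author IS NULL
BEGIN
  UPDATE Books SET Author = (SELECT max(rowid) FROM Authors)
  WHERE rowid = new.rowid;
END;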

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Matthew L. Creech
On Mon, Mar 15, 2010 at 2:11 AM, Matthew L. Creech  wrote:
>
> I'll give this a try tomorrow on a real device with journaling off,
> and see how much space it uses in /tmp with journaling turned off.
>

I ran some tests on a real device with a real database, and got the
following results:



Test 1: 250,000 entries, normal VACUUM

Before the test:
- 27.9 MB database
- No journal
- 37.5 MB free disk space

By the time the journal file starts growing:
- 27.9 MB database
- Small journal
- ~15 MB free disk space

Toward the end of the operation (before failure):
- 27.9 MB database
- 14.6 MB journal
- < 2 MB free disk space

The VACUUM operation fails with a disk I/O error, presumably due to
running out of space.


Test 2: 250,000 entries, journal_mode = OFF, VACUUM

Before the test:
- 27.9 MB database
- 37.5 MB free disk space

Toward the end of the operation:
- 28.1 MB database
- 10.6 MB free disk space

The VACUUM operation succeeds.


Test 3: 250,000 entries trimmed to 100,000 entries, normal VACUUM

Before the test:
- 27.9 MB database
- No journal
- 37.5 MB free disk space

By the time the journal file starts growing:
- 27.9 MB database
- Small journal
- ~33 MB free disk space

Toward the end of the operation:
- 27.9 MB database
- ~28 MB journal
- ~5 MB free disk space

Afterward:
- 11.2 MB database
- 54.3 MB free disk space

The VACUUM operation succeeds.


Test 4: 250,000 entries trimmed to 100,000 entries, journal_mode = OFF, VACUUM

Before the test:
- 27.9 MB database
- 37.5 MB free disk space

Toward the end of the operation:
- 28.1 MB database
- 33.3 MB free disk space

The VACUUM operation succeeds.




I never did see any temporary files, but space was obviously being
taken up for a temp database, so I assume that SQLite opens a file
then unlink()s it or something like that.  It looks like in the normal
(journaled) case, the journal file consistently grows about as large
as the original database, but the extra disk space used up by the
temporary table is dependent on the new database size.

So Jay's estimate of disk usage:

 [old database size] + [new database size] + [journal file]

is correct, but for the normal VACUUM case, [journal file] is
basically equivalent to [old database size].  So it's really just:

 (2 * [old database size]) + [new database size]
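
(Checking that against Test 1 above: 2 * 27.9 MB plus a roughly unchanged
~28 MB result comes to about 84 MB in total, i.e. around 56 MB on top of the
existing file - more than the 37.5 MB that was free, hence the disk I/O
failure.)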

This means that to VACUUM a SQLite database of size X, you need at
least 2X of _additional_ free disk space available.  That seems rather
wasteful, just looking at it as a SQLite user.  Although
programmatically there may be reasons for it that I'm not aware of.

At any rate, I guess my immediate problem will have to be solved with
a one-off patch that disables journaling and does a VACUUM, and for
long-term usage I'll need to shrink the database capacity even more to
account for the 3x disk space usage if I need to VACUUM again in the
future.

Thanks

-- 
Matthew L. Creech
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Problem with DATABASE_LOCKED

2010-03-15 Thread Martin Engelschalk
Dear Sqlite users,

we experience a problem at a customer site, where the very first 
statement the program executes on a newly created database fails with a 
"database locked" error. The statement that fails is "PRAGMA 
synchronous = OFF". The reason seems to be that the customer insists on 
placing the database file on a network file system. However, other such 
scenarios work, because the app is single-threaded with one 
connection; no other processes access the database file.

The sqlite version is 3.2.5 (upgrading is not an option, unfortunately); 
I compiled using the amalgamation without special #defines.

The Machine is SUSE Linux Enterprise Server 10, Kernel: 
2.6.16.60-0.54.5, gcc version 4.1.2
The NFS Software used is limal-nfs-server (LiMaL NFS Server Library) 
Version 1.1.72, the customer also mentions a "Celerra NS-40 F" by "EMC", 
which seems to be an external storage system, and NFS version 4 protocol.

Has anyone any ideas how to overcome this? Thank you for any comments.

Martin

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3 (Version 2 .6.23) - Is the boolean operator "AND" no more available?

2010-03-15 Thread Max Vlasov
> I wonder if the operator "AND" (in capitals letters) is yet available and
> different from the basic term "and" (in lower letters).
>
>
Make sure you compiled the sources with SQLITE_ENABLE_FTS3_PARENTHESIS,
since, according to the docs:

SQLITE_ENABLE_FTS3_PARENTHESIS
This option modifies the query pattern parser in FTS3 such that it
supports operators AND and NOT (in addition to the usual OR and NEAR) and
also allows query expressions to contain nested parentheses.
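
With that option enabled, a query along these lines should then work (a
sketch; the table name is made up):

SELECT * FROM docs WHERE docs MATCH 'sqlite AND database';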

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Can default column values use max() ?

2010-03-15 Thread Max Vlasov
> When I enter a new book, I want the author to default to the last author in
> the database.  (For the sake of this example you can ignore tricks involving
> deleting authors and reusing ids.)
>


I'm not sure I understood you correctly, but the query

INSERT INTO Books
(Author, Title)
VALUES
((Select Max(rowid) FROM Authors), "test")

just worked (tested it), sure you have to make id autoincrement to ensure
Max(rowid) is actually the last author

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3 offsets() function, sorted list?

2010-03-15 Thread D. Richard Hipp

On Mar 15, 2010, at 1:42 PM, Nasron Cheong wrote:

> Are the results of the offsets() function used in the FTS3 module  
> sorted in
> any form? Can I rely on this always being the case?

Do not rely on the order of offsets returned by the offsets() function  
of FTS3.  I think they are currently sorted, but that might change in  
a subsequent release.

>
> Thanks.
>
> - Nasron
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] FTS3 offsets() function, sorted list?

2010-03-15 Thread Nasron Cheong
Are the results of the offsets() function used in the FTS3 module sorted in
any form? Can I rely on this always being the case?

Thanks.

- Nasron
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] BUG 3.6.23 CorruptE.test needs ifcapable oversize_cell_check

2010-03-15 Thread Noah Hart
(Resent to sqlite-users)

CorruptE.test gives different results with and without
SQLITE_ENABLE_OVERSIZE_CELL_CHECK.

Coding similar to the following from corrupt7.test needs to be added:

# Deliberately corrupt some of the cell offsets in the btree page
# on page 2 of the database.
#
# The error message is different depending on whether or not the
# SQLITE_ENABLE_OVERSIZE_CELL_CHECK compile-time option is engaged.
#
ifcapable oversize_cell_check {

Regards,

==
Noah Hart
Sr. Systems Analyst
Lipman Ins Admin, Inc.
510-796-4676 x266
==






___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] BUG: shell.c ATTACH parses filename poorly

2010-03-15 Thread Noah Hart
There may be a documentation bug

According to http://sqlite.org/lang_attach.html
the format of the attach command is 
ATTACH [DATABASE] <expr> AS <database-name>

Jay reports below that the filename can be an expression.

Looks like the documentation needs to be clarified.

REQUEST: Add an explanation to the Syntax Diagrams For SQLite showing
what the legal values are for the non-bolded bubbles.

Regards,

Noah

-Original Message-

> 
> I know that the work-a-round is to enter the filename in quotes.  
> 
> My question is why is the attach statement parsing for a column at
all.

  The format of the ATTACH command is:
 
    ATTACH [DATABASE] <expr> AS <database-name>

  It happens that the expression needs to be a text value for the
  command to work properly, but the parser doesn't know that.

  A bare character-string given as an expression is considered a
  column reference (consider SELECT or WHERE clauses).  However,
  the ATTACH command has no table/column context, so the column
  name is unresolved, i.e. no such column.
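
  For example (a sketch; the filename is made up):

    ATTACH DATABASE 'mydata.db' AS aux;   -- works: the quoted string is a text value
    ATTACH DATABASE mydata.db AS aux;     -- fails: the bare name is taken as a column reference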



-j


-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Porting Sqlite to MQX OS: Question 2

2010-03-15 Thread Dan Kennedy

On Mar 15, 2010, at 10:04 PM, GeoffW wrote:

>
> Hi Kees
>
> Thanks for the response and giving me a clue to look at dot locking,  
> I had
> missed the dot lock mechanism as I had concentrated mainly on the  
> osWin.c
> file. I cant find any documentation other than the source code on  
> how this
> low level file locking stuff is implemented.
>
> I am currently reading through the code to see if I can figure it  
> out. The
> unix.c module is horrendously complicated so I am struggling to make  
> sense
> of it.

It is more complicated. But the comments in it are often better than
those in os_win.c. See the comments above unixLock(), at or around line
1208 of os_unix.c. Also here:

   http://www.sqlite.org/lockingv3.html

> In simple terms I was thinking that when the db file is opened for  
> writing,
> I could use the filename with ".lock" appended, and open that new  
> dummy
> file, which is then used to determine if I have a lock on the original
> database file or not ?

You can implement locking that way. But it would be a last
resort. And is the sort of trick that could be prone to race
conditions.

Do you have any file-locking primitives provided by the OS?

Do you have clients connecting to the database from multiple
processes? Or only multiple threads within the same process?

> As the code says for dot locking there is really only a 2 state lock,
> exclusive or no lock. I got lost by the complexity of the code, for  
> vxworks
> it seems to keep linked lists of filenames, not sure why I would  
> need to do
> that, also it does lots of other stuff that I didnt really  
> understand for
> example updating the file timestamp on the ".lock" file.

2 locking states are all SQLite needs to work. An exclusive lock
for writers and a shared lock for readers. The other locks are
only required to increase concurrency between readers and the
single writer.

If you handle RESERVED and PENDING locks in the same way as EXCLUSIVE
locks, things will work fine, just you lose a little concurrency.




> I was hoping Dr Hipp could manage to find a few minutes to write  
> some notes
> explaining what needs to be implemented re: dot locking and what can  
> be
> simplified and ignored.
>
> Thanks if anyone can fill in some details here.
>
> Geoff
>
>
>
> Kees Nuyt wrote:
>>
>> On Fri, 12 Mar 2010 02:22:31 -0800 (PST), GeoffW
>>  wrote:
>>
>>>
>>> Hi
>>>
>>> Now I have got a half ported version of Sqlite up and running on  
>>> MQX I
>>> cant
>>> put off any longer the question of file locking and how I tackle  
>>> that ?
>>> For
>>> the moment i had made the xLock() and xUnlock() the same as in  
>>> osWin.c
>>> version but with the Windows lock() and unlock() function calls  
>>> commented
>>> out.
>>>
>>> The problem I have is that MQX's file system is just ANSI C, and  
>>> it lacks
>>> any lock/unlock functions.
>>>
>>> As Sqlite has been ported to so many diverse operating systems I  
>>> am hoping
>>> that this question of what to do for an ANSI only File system has  
>>> already
>>> been hit and a workaround defined ?
>>>
>>> I am pretty hazy on this topic of file locking so I would  
>>> appreciate some
>>> basic discussion and guidance. It might help if I explain my  
>>> application a
>>> little. It is an embedded system where we are just running one
>>> application,
>>> but it is multithreaded. For my initial porting work I have made
>>> SQLITE_THREADSAFE = 0 (so I probably dont even need a file lock in  
>>> the
>>> configuration I have at present ?)
>>> However I would of course like to make SQLITE_THREADSAFE = 1 so that
>>> multiple tasks can safely access a database.
>>
>> (Disclaimer: this is not my specialism, I hope it helps
>> nevertheless)
>>
>> SQLITE_THREADSAFE has effect on threads within the same
>> process (=task), especially when they use the same
>> connection. You need file locks for synchronisation between
>> different processes. If the filesystem doesn't provide them,
>> you could use the dot lockfile paradigm, which seems to be
>> associated with SQLITE_ENABLE_LOCKING_STYLE.
>>
>>> Thanks for any guidance
>>>
>>> Regards Geoff
>> -- 
>>  (  Kees Nuyt
>>  )
>> c[_]
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>
> -- 
> View this message in context: 
> http://old.nabble.com/Porting-Sqlite-to-MQX-OS%3A-Question-2-tp27874124p27905578.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [sqlite-dev] Poor SQLite indexing performance

2010-03-15 Thread Max Vlasov
> Also it's quite known that
> creating index after inserting all rows is much faster than creating
> index before that. So it can be even beneficial in inserting huge
> amount of rows somewhere in the middle of the work: first delete all
> indexes, then insert rows, then create indexes once more.
>

Pavel, please add some reference (for example from the docs or from drh);
my tests showed that for large data there is almost no difference in speed.
At first I thought that the algorithm was really different, but now I am
almost sure that building an index from scratch is just enumerating the
records while building the tree. CMIIW

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Porting Sqlite to MQX OS: Question 2

2010-03-15 Thread GeoffW

Hi Kees

Thanks for the response and giving me a clue to look at dot locking; I had
missed the dot-lock mechanism as I had concentrated mainly on the osWin.c
file. I can't find any documentation other than the source code on how this
low-level file locking stuff is implemented.

I am currently reading through the code to see if I can figure it out. The
unix.c module is horrendously complicated so I am struggling to make sense
of it.

In simple terms I was thinking that when the db file is opened for writing,
I could use the filename with ".lock" appended, and open that new dummy
file, which is then used to determine if I have a lock on the original
database file or not ?
As the code says, for dot locking there is really only a 2-state lock:
exclusive or no lock. I got lost in the complexity of the code; for vxworks
it seems to keep linked lists of filenames, and I am not sure why I would
need to do that. It also does lots of other stuff that I didn't really
understand, for example updating the file timestamp on the ".lock" file.

I was hoping Dr Hipp could manage to find a few minutes to write some notes
explaining what needs to be implemented re: dot locking and what can be
simplified and ignored.

Thanks if anyone can fill in some details here.

Geoff



Kees Nuyt wrote:
> 
> On Fri, 12 Mar 2010 02:22:31 -0800 (PST), GeoffW
>  wrote:
> 
>>
>>Hi
>>
>>Now I have got a half ported version of Sqlite up and running on MQX I
>>cant
>>put off any longer the question of file locking and how I tackle that ?
>>For
>>the moment i had made the xLock() and xUnlock() the same as in osWin.c
>>version but with the Windows lock() and unlock() function calls commented
>>out.
>>
>>The problem I have is that MQX's file system is just ANSI C, and it lacks
>>any lock/unlock functions.
>>
>>As Sqlite has been ported to so many diverse operating systems I am hoping
>>that this question of what to do for an ANSI only File system has already
>>been hit and a workaround defined ?
>>
>>I am pretty hazy on this topic of file locking so I would appreciate some
>>basic discussion and guidance. It might help if I explain my application a
>>little. It is an embedded system where we are just running one
>>application,
>>but it is multithreaded. For my initial porting work I have made
>>SQLITE_THREADSAFE = 0 (so I probably dont even need a file lock in the
>>configuration I have at present ?)
>>However I would of course like to make SQLITE_THREADSAFE = 1 so that
>>multiple tasks can safely access a database.
> 
> (Disclaimer: this is not my specialism, I hope it helps
> nevertheless)
> 
> SQLITE_THREADSAFE has effect on threads within the same
> process (=task), especially when they use the same
> connection. You need file locks for synchronisation between
> different processes. If the filesystem doesn't provide them,
> you could use the dot lockfile paradigm, which seems to be
> associated with SQLITE_ENABLE_LOCKING_STYLE.
> 
>>Thanks for any guidance
>>
>>Regards Geoff
> -- 
>   (  Kees Nuyt
>   )
> c[_]
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://old.nabble.com/Porting-Sqlite-to-MQX-OS%3A-Question-2-tp27874124p27905578.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread P Kishor
On Mon, Mar 15, 2010 at 9:18 AM, dravid11  wrote:
>
> As i told you before i am using this database inside an application which is
> merging/combining one source database table to destination database table.
>
> So in that case i am not writing the actual insert query.and in that case
> all columns are taken.
>
> The trigger inside the database should handle that when there is a duplicate
> primary key then it should auto increment this primary key and insert the
> new record with it .
>
>


I am not sure what help you want from the list. I mean, if you can't
change the application that is accessing the db, then use a different
application. If you can't change the INSERT query, how can you change
the "trigger inside the database"?

I also suggest you read the sqlite docs on INTEGER PRIMARY KEY.
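
In particular, if tempID is declared INTEGER PRIMARY KEY, then inserting
NULL for it makes SQLite assign a fresh key, so there is no conflict to
bypass (a sketch using the column names from the original post):

INSERT INTO temp (tempID, Address, Weight) VALUES (NULL, '100', '100');
-- or simply leave the key column out:
INSERT INTO temp (Address, Weight) VALUES ('100', '100');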


>
>
>
> P Kishor-3 wrote:
>>
>> On Mon, Mar 15, 2010 at 8:32 AM, dravid11  wrote:
>>>
>>> Well the situation is that i am merging data of one table in data of
>>> another
>>> table using a program
>>> so it is going to add all values by it self .I am not actually using
>>> insert
>>> query in that case to select values to add.
>>>
>>>
>>> There is another scenario , what if inserting a data and i want to bypass
>>> the primary key conflict and just update rest of the values.
>>>
>>
>> INSERT INTO t1 (not_a_pk1, not_a_pk2...)
>> SELECT not_a_pk1, not_a_pk2... FROM t2;
>>
>>
>> And, please don't use double quotes to quote your strings. Use single
>> quotes instead.
>>
>>
>>> again i am combing two tables together so it should take all columns .
>>>
>>>
>>>
>>> Tim Romano wrote:

 If all you want to do is to insert a new row, do not mention the primary
 key column name in the insert statement:

 INSERT INTO temp (address, weight)
 values( "blah blah", 100)

 The autoincrementing primary key will be autoincremented.

 Regards
 Tim Romano


 On 3/15/2010 9:15 AM, dravid11 wrote:
> Hello !
> I have been trying to search for this solutions for days,yet did not
> find
> the solution.
>
>
> I want to write an insert query on a table. When there is a conflicting
> primary key
> then it should increment the primary key and insert the row .
>
> For example i want to run this query
> INSERT INTO temp VALUES("1","112","112");
>
> Here first column is auto increment primary Key.
> This query will add the record(20,112,112) in the table
> Now when i run the query as
>
> INSERT INTO temp VALUES("1","100","100");
>
> I want the result as (2,100,100)
>
> But it give unique constraint error.
> How can i do that ,i.e, duplicate primary key comes in a table insert
> the
> new record and change the primary key of the new record.
>
> I have used this query but did not work
>
> insert or replace INTO temp (tempID,Address,Weight)
> VALUES(new."tempID",new."Address",new."Weight") ;
> SELECT RAISE(IGNORE);
> END
>

 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


>>>
>>> --
>>> View this message in context:
>>> http://old.nabble.com/On-conflicting-Primary-key-how-to-insert-record-and-increment-the-Key-tp27904087p27904288.html
>>> Sent from the SQLite mailing list archive at Nabble.com.
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>>
>>
>> --
>> Puneet Kishor
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [sqlite-dev] Poor SQLite indexing performance

2010-03-15 Thread Pavel Ivanov
First of all you should write to sqlite-users mailing list, not sqlite-dev.

> Why is the indexing so slow and bogs down as we proceed with insertions ?
> Any suggestions ?
> Also, how could I improve performance ?

I can't say exactly why performance with the index degrades so
significantly with the size of the table. Probably it's something to
do with the cache - try to increase its size. Also, it's well known that
creating an index after inserting all rows is much faster than creating
the index before that. So even when inserting a huge number of rows
somewhere in the middle of the work it can be beneficial to first drop
all indexes, then insert the rows, then create the indexes once more.

> Is there any way to have a UNIQUE
> field but disable indexing till the end?

How do you expect your uniqueness to be enforced? SQLite does that by
looking into the index - if the value is there then it is a duplicate;
if it is not there then it's unique and is inserted into the index for
further checks.

> However, as I am using hash values instead of filenames I need to deal with
> collisions. Hence, removing the UNIQUE field is risky as I cant detect
> collisions. But I am also unable to use it because of the poor indexing
> performance.

What's wrong with creating the unique index after all rows are inserted?
It's the same as declaring a UNIQUE field, although the check for
uniqueness is postponed until index creation, and you will be unable to
create the index if there are any non-unique values. But I don't know if
all this matters to you.
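
For example (a sketch; the index name is made up and the cache_size value is
only an illustration):

PRAGMA cache_size = 100000;   -- optional: a bigger page cache for the bulk load
BEGIN;
-- ... bulk INSERTs into EITable, created without the UNIQUE(Crc1,Crc2) clause ...
COMMIT;
CREATE UNIQUE INDEX EICrcIndex ON EITable(Crc1, Crc2);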


Pavel

On Fri, Mar 12, 2010 at 7:51 AM, rohan a  wrote:
> Hello,
>
> I am using an SQLite database for storing certain file properties. Initially
> I used a pathname (ex: D:\\1\2\file.txt) as a PRIMARY KEY. But as the
> database created was very large, I decided to use hashes of the pathname
> instead. This brought down the size occupied by the database on disk. I am
> using the C language interface provided.
>
> The table looks like this:
>
> CREATE TABLE IF NOT EXISTS EITable (Crc1 INTEGER, Crc2 INTEGER, Ctime
> INTEGER, Checksum INTEGER, UNIQUE(Crc1,Crc2));
>
> I generate 2 64-bit hashes of the pathname and store them into the database.
> With a combination of (Crc1,Crc2) being UNIQUE.
>
> The size of the database definitely came down using this method. However,
> the time taken for insertions becomes abnormally large and un-acceptable. It
> is quite fast initially but slows down drastically as the database gets
> filled with records.
>
> When I remove the UNIQUE field and create an INDEX after all INSERTions are
> completed the insertions are fast and complete quickly. I use the CREATE
> INDEX when all insertions are completed.
> However, as I am using hash values instead of filenames I need to deal with
> collisions. Hence, removing the UNIQUE field is risky as I cant detect
> collisions. But I am also unable to use it because of the poor indexing
> performance.
>
> Why is the indexing so slow and bogs down as we proceed with insertions ?
> Any suggestions ?
> Also, how could I improve performance ?Is there any way to have a UNIQUE
> field but disable indexing till the end?
>
> I am using Transactions and compiled statements for INSERT, SELECT etc. I
> timed each of the activities like generation of the hash, insertion, time
> taken for COMMIT. This indicated that the indexing is the culprit. Also
> moving indexing to the end improved performance.
>
> Any help/suggestions on this ?
>
> Thanks
>
> ___
> sqlite-dev mailing list
> sqlite-...@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-dev
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread dravid11

As I told you before, I am using this database inside an application which is
merging/combining one source database table into a destination database table.

So in that case I am not writing the actual insert query, and in that case
all columns are taken.

A trigger inside the database should handle this: when there is a duplicate
primary key, it should auto-increment the primary key and insert the
new record with it.





P Kishor-3 wrote:
> 
> On Mon, Mar 15, 2010 at 8:32 AM, dravid11  wrote:
>>
>> Well the situation is that i am merging data of one table in data of
>> another
>> table using a program
>> so it is going to add all values by it self .I am not actually using
>> insert
>> query in that case to select values to add.
>>
>>
>> There is another scenario , what if inserting a data and i want to bypass
>> the primary key conflict and just update rest of the values.
>>
> 
> INSERT INTO t1 (not_a_pk1, not_a_pk2...)
> SELECT not_a_pk1, not_a_pk2... FROM t2;
> 
> 
> And, please don't use double quotes to quote your strings. Use single
> quotes instead.
> 
> 
>> again i am combing two tables together so it should take all columns .
>>
>>
>>
>> Tim Romano wrote:
>>>
>>> If all you want to do is to insert a new row, do not mention the primary
>>> key column name in the insert statement:
>>>
>>> INSERT INTO temp (address, weight)
>>> values( "blah blah", 100)
>>>
>>> The autoincrementing primary key will be autoincremented.
>>>
>>> Regards
>>> Tim Romano
>>>
>>>
>>> On 3/15/2010 9:15 AM, dravid11 wrote:
 Hello !
 I have been trying to search for this solutions for days,yet did not
 find
 the solution.


 I want to write an insert query on a table. When there is a conflicting
 primary key
 then it should increment the primary key and insert the row .

 For example i want to run this query
 INSERT INTO temp VALUES("1","112","112");

 Here first column is auto increment primary Key.
 This query will add the record(20,112,112) in the table
 Now when i run the query as

 INSERT INTO temp VALUES("1","100","100");

 I want the result as (2,100,100)

 But it give unique constraint error.
 How can i do that ,i.e, duplicate primary key comes in a table insert
 the
 new record and change the primary key of the new record.

 I have used this query but did not work

 insert or replace INTO temp (tempID,Address,Weight)
 VALUES(new."tempID",new."Address",new."Weight") ;
 SELECT RAISE(IGNORE);
 END

>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>
>>
>> --
>> View this message in context:
>> http://old.nabble.com/On-conflicting-Primary-key-how-to-insert-record-and-increment-the-Key-tp27904087p27904288.html
>> Sent from the SQLite mailing list archive at Nabble.com.
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> 
> 
> 
> -- 
> Puneet Kishor
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://old.nabble.com/On-conflicting-Primary-key-how-to-insert-record-and-increment-the-Key-tp27904087p27904948.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Attach a database dynamically if not already available within a trigger

2010-03-15 Thread Pavel Ivanov
> How can I embed the 'attach" command within the trigger statement so
> that the log database is attached if it is not already available?

AFAIK, this is impossible.

But why do you need that? If you write your own application you can
always attach this database at the beginning. But if you want to spy
after some third-party application I believe SQLite is not suitable
for this kind of tasks...


Pavel

On Sun, Mar 14, 2010 at 10:50 PM, andy  wrote:
> Hello
> I am hoping someone can help with the following problem.
>
> - I want to create a log of all changes to tables in a database.
> - I would like to keep the log table in a separate database to the
> main data files
> - I plan to use triggers to capture changes and insert them into the
> log table.
>
> Question:
> How can I embed the 'attach" command within the trigger statement so
> that the log database is attached if it is not already available?
>
> i.e something like
>
> create trigger insert_data after insert on data
> begin
>        if database logdb does not exist
>                attach logdb.db as logdb;
>
>        insert into logdb.log values(.)
> end;
>
>
> I am also a little concerned about performance so I am not sure if
> testing for the presence of the database for every change will add to
> much overhead. Thoughts?
> Thanks
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How can I deal with hexadecimal numerical value?

2010-03-15 Thread Pavel Ivanov
If you need to insert an integer value then you'll have to convert it to
decimal notation yourself. It's not a big deal, right?
If you want to insert a blob or a text string with this byte/character in
it then you can use notation like this:
   select X'FF';
   insert into t values(X'FF');


Pavel

On Mon, Mar 15, 2010 at 3:51 AM, How can I deal with hex numerical
value?  wrote:
> sqlite doesn't support 0xFF in SQL, such as:
>    select 0xFF;
>    insert into t values(0xFF);
>
> Error: unrecognized token: "0xFF"
>
> But sometime, this is not convenient.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread P Kishor
On Mon, Mar 15, 2010 at 8:32 AM, dravid11  wrote:
>
> Well the situation is that i am merging data of one table in data of another
> table using a program
> so it is going to add all values by it self .I am not actually using insert
> query in that case to select values to add.
>
>
> There is another scenario , what if inserting a data and i want to bypass
> the primary key conflict and just update rest of the values.
>

INSERT INTO t1 (not_a_pk1, not_a_pk2...)
SELECT not_a_pk1, not_a_pk2... FROM t2;


And, please don't use double quotes to quote your strings. Use single
quotes instead.
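
For example, Tim's earlier statement with standard quoting would be:

INSERT INTO temp (address, weight) VALUES ('blah blah', 100);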


> again i am combing two tables together so it should take all columns .
>
>
>
> Tim Romano wrote:
>>
>> If all you want to do is to insert a new row, do not mention the primary
>> key column name in the insert statement:
>>
>> INSERT INTO temp (address, weight)
>> values( "blah blah", 100)
>>
>> The autoincrementing primary key will be autoincremented.
>>
>> Regards
>> Tim Romano
>>
>>
>> On 3/15/2010 9:15 AM, dravid11 wrote:
>>> Hello !
>>> I have been trying to search for this solutions for days,yet did not find
>>> the solution.
>>>
>>>
>>> I want to write an insert query on a table. When there is a conflicting
>>> primary key
>>> then it should increment the primary key and insert the row .
>>>
>>> For example i want to run this query
>>> INSERT INTO temp VALUES("1","112","112");
>>>
>>> Here first column is auto increment primary Key.
>>> This query will add the record(20,112,112) in the table
>>> Now when i run the query as
>>>
>>> INSERT INTO temp VALUES("1","100","100");
>>>
>>> I want the result as (2,100,100)
>>>
>>> But it give unique constraint error.
>>> How can i do that ,i.e, duplicate primary key comes in a table insert the
>>> new record and change the primary key of the new record.
>>>
>>> I have used this query but did not work
>>>
>>> insert or replace INTO temp (tempID,Address,Weight)
>>> VALUES(new."tempID",new."Address",new."Weight") ;
>>> SELECT RAISE(IGNORE);
>>> END
>>>
>>
>>
>
> --
> View this message in context: 
> http://old.nabble.com/On-conflicting-Primary-key-how-to-insert-record-and-increment-the-Key-tp27904087p27904288.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
>



-- 
Puneet Kishor
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread dravid11

Well, the situation is that I am merging the data of one table into another
table using a program,
so it is going to add all the values by itself. I am not actually using an
insert query in that case to select the values to add.


There is another scenario: what if, when inserting data, I want to bypass
the primary key conflict and just update the rest of the values?

Again, I am combining two tables together, so it should take all columns.



Tim Romano wrote:
> 
> If all you want to do is to insert a new row, do not mention the primary 
> key column name in the insert statement:
> 
> INSERT INTO temp (address, weight)
> values( "blah blah", 100)
> 
> The autoincrementing primary key will be autoincremented.
> 
> Regards
> Tim Romano
> 
> 
> On 3/15/2010 9:15 AM, dravid11 wrote:
>> Hello !
>> I have been searching for a solution to this for days, yet have not found
>> one.
>>
>>
>> I want to write an insert query on a table. When there is a conflicting
>> primary key
>> then it should increment the primary key and insert the row .
>>
>> For example i want to run this query
>> INSERT INTO temp VALUES("1","112","112");
>>
>> Here the first column is the auto-increment primary key.
>> This query will add the record (20,112,112) to the table.
>> Now when I run the query as
>>
>> INSERT INTO temp VALUES("1","100","100");
>>
>> I want the result as (2,100,100)
>>
>> But it gives a unique constraint error.
>> How can I do that, i.e., when a duplicate primary key comes into a table, insert the
>> new record and change the primary key of the new record?
>>
>> I have used this query but it did not work:
>>
>> insert or replace INTO temp (tempID,Address,Weight)
>> VALUES(new."tempID",new."Address",new."Weight") ;
>> SELECT RAISE(IGNORE);
>> END
>>
> 

-- 
View this message in context: 
http://old.nabble.com/On-conflicting-Primary-key-how-to-insert-record-and-increment-the-Key-tp27904087p27904288.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread Tim Romano
If all you want to do is to insert a new row, do not mention the primary 
key column name in the insert statement:

INSERT INTO temp (address, weight)
values( "blah blah", 100)

The autoincrementing primary key will be autoincremented.
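
A minimal sketch, assuming temp is declared with an INTEGER PRIMARY KEY
column (the thread does not show the actual schema):

   CREATE TABLE temp(tempID INTEGER PRIMARY KEY, Address TEXT, Weight INTEGER);
   INSERT INTO temp (Address, Weight) VALUES ('blah blah', 100);
   -- tempID is assigned automatically (1 for the first row, then 2, ...)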

Regards
Tim Romano


On 3/15/2010 9:15 AM, dravid11 wrote:
> Hello !
> I have been searching for a solution to this for days, yet have not found
> one.
>
>
> I want to write an insert query on a table. When there is a conflicting
> primary key
> then it should increment the primary key and insert the row .
>
> For example i want to run this query
> INSERT INTO temp VALUES("1","112","112");
>
> Here the first column is the auto-increment primary key.
> This query will add the record (20,112,112) to the table.
> Now when I run the query as
>
> INSERT INTO temp VALUES("1","100","100");
>
> I want the result as (2,100,100)
>
> But it gives a unique constraint error.
> How can I do that, i.e., when a duplicate primary key comes into a table, insert the
> new record and change the primary key of the new record?
>
> I have used this query but it did not work:
>
> insert or replace INTO temp (tempID,Address,Weight)
> VALUES(new."tempID",new."Address",new."Weight") ;
> SELECT RAISE(IGNORE);
> END
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] On conflicting Primary key how to insert record and increment the Key

2010-03-15 Thread dravid11

Hello ! 
I have been searching for a solution to this for days, yet have not found
one.


I want to write an insert query on a table. When there is a conflicting
primary key 
then it should increment the primary key and insert the row .

For example i want to run this query 
INSERT INTO temp VALUES("1","112","112");

Here the first column is the auto-increment primary key.
This query will add the record (20,112,112) to the table.
Now when I run the query as

INSERT INTO temp VALUES("1","100","100");

I want the result as (2,100,100)

But it gives a unique constraint error.
How can I do that, i.e., when a duplicate primary key comes into a table, insert the
new record and change the primary key of the new record?

I have used this query but it did not work:

insert or replace INTO temp (tempID,Address,Weight)
VALUES(new."tempID",new."Address",new."Weight") ;
SELECT RAISE(IGNORE);
END


-- 
View this message in context: 
http://old.nabble.com/On-conflicting-Primary-key-how-to-insert-record-and-increment-the-Key-tp27904087p27904087.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How can I deal with hexadecimal numerical value?

2010-03-15 Thread How can I deal with hex numerical value?
sqlite doesn't support 0xFF in SQL, such as:
select 0xFF;
insert into t values(0xFF);

Error: unrecognized token: "0xFF"

But sometimes, this is not convenient.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Attach a database dynamically if not already available within a trigger

2010-03-15 Thread andy
Hello
I am hoping someone can help with the following problem.

- I want to create a log of all changes to tables in a database.
- I would like to keep the log table in a separate database from the
main data files.
- I plan to use triggers to capture changes and insert them into the  
log table.

Question:
How can I embed the "attach" command within the trigger statement so
that the log database is attached if it is not already available?

i.e. something like

create trigger insert_data after insert on data
begin
        if database logdb does not exist
                attach logdb.db as logdb;

        insert into logdb.log values(.)
end;


I am also a little concerned about performance, so I am not sure if
testing for the presence of the database for every change will add too
much overhead. Thoughts?
Thanks

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3 (Version 2 .6.23) - Is the boolean operator "AND" no more available?

2010-03-15 Thread Dan Kennedy

On Mar 14, 2010, at 5:34 AM, gerard.jouannot wrote:

> Hello all SQLite users.
>
> I wonder if the operator "AND" (in capital letters) is still
> available and
> different from the basic term "and" (in lowercase letters).

Available only if FTS3 is compiled with this option:

   -DSQLITE_ENABLE_FTS3_PARENTHESIS

See:

   http://www.sqlite.org/fts3.html#section_2
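
With that option enabled, the enhanced query syntax accepts AND, OR, NOT
and parentheses, so a query against a hypothetical FTS3 table named docs
could look like:

   SELECT * FROM docs WHERE docs MATCH 'sqlite AND (database OR library)';

Without the option, "AND" would presumably just be matched as an ordinary
search term.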

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] VACUUM & journal size

2010-03-15 Thread Matthew L. Creech
On Sun, Mar 14, 2010 at 9:18 PM, Jay A. Kreibich  wrote:
>
>  Are you sure it is the journal file that is growing too large?
>
...
>
>  Now, if I'm following you correctly, the numbers you gave seem to
>  indicate that this should work... If the old database is 100MB and
>  the new database is 50MB and I'm saying the journal file is small,
>  then 80MB free before you start should be enough.
>

I'm sure that it's the journal file.  This is happening on an embedded
device in a flash-based filesystem which is rather slow, so I was
logged in via SSH and could see the journal file growing unusually
large as free space shrunk to zero.

To double check, I just tried a similar test on my desktop:

$ ls -l deadband.db
-rw-r--r-- 1 mlcreech mlcreech 85209088 Mar 15 01:35 deadband.db
$ ./sqlite3 deadband.db
SQLite version 3.6.22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> SELECT count() FROM val_table;
663552
sqlite> DELETE FROM val_table WHERE idx < 60;
sqlite> SELECT count() FROM val_table;
51840
sqlite> VACUUM;

In another shell, I've got a script monitoring the size of
deadband.db-journal every 100ms.  When I do the VACUUM, it shows:

Journal size: 4096 bytes
Journal size: 4096 bytes
Journal size: 7389184 bytes
Journal size: 14774272 bytes
Journal size: 22159360 bytes
Journal size: 2958 bytes
Journal size: 36929536 bytes
Journal size: 44314624 bytes
Journal size: 51699712 bytes
Journal size: 59080704 bytes
Journal size: 66658304 bytes
Journal size: 81235968 bytes
Journal size: 85393408 bytes

That last size is actually larger than the original database
(additional transaction metadata and what not, I guess).  After it's
done, though, the file size is appropriately smaller:

$ ls -l deadband.db
-rw-r--r-- 1 mlcreech mlcreech 6709248 Mar 15 01:43 deadband.db

I notice that when I've mostly emptied the database (as in this
example), the VACUUM completes quickly, roughly in proportion to how
many real entries are left.  But the journal size still follows a
near-linear growth throughout the operation, regardless.  So
presumably it's not really doing anything with the old data, but still
copying it over to the journal as it goes through the old DB
page-by-page or something.

You mentioned that it's creating a temp database, and looking at
sqlite3RunVacuum() I see 'vacuum_db' which seems to be just that.
However, I'm not sure where that data is actually going.  Setting
temp_store_directory to my current directory didn't actually generate
any files while the VACUUM was happening, that I could see.

>
>    PRAGMA journal_mode = OFF;
>
>  I would do this on a test system.  If the problem really is the
>  journal file, this should allow things to work.  If you still get a
>  space error, we're dealing with something else.
>

This could work as a last resort, although I'd hoped to find a better
way - these are embedded devices, so power failure or reboot during
the middle of this operation is a possibility.  For this one-time
case, though, I may be able to swing it if there's no other choice.
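
If I do end up going that route, I assume the sequence from the sqlite3
shell would be something along these lines (untested sketch):

   PRAGMA journal_mode = OFF;     -- no rollback journal, so no crash protection
   VACUUM;
   PRAGMA journal_mode = DELETE;  -- back to the default afterwards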

By the way, there's one single read/write filesystem in flash on these
systems, so in my case "/tmp" is no different than the directory that
the database & journal are stored in.  There's also not nearly enough
free memory to hold the small database, so I can't use temp_store =
MEMORY, unfortunately.

I'll give this a try tomorrow on a real device with journaling turned off,
and see how much space it uses in /tmp.

Thanks for the response!

-- 
Matthew L. Creech
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users