Re: [sqlite] Another .DUMP issue with v 3.18.0

2017-04-07 Thread Josh Hunsaker
On Fri, Apr 7, 2017 at 3:02 PM, Richard Hipp wrote:
> On 4/7/17, Tony Papadimitriou wrote:
>>
>> sql xxx.db "CREATE TABLE xxx(`time zone`)" "insert into xxx values('1')"
>> sql .dump xxx.db | sql
>>
>
> I'm unable to repro.
>

Is this possibly because the shell that Tony is using is evaluating `time zone`?

I can repro, but in my case, the obvious cause is that the `time zone`
portion gets evaluated to an empty string.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Massively multithreaded SQLite queries

2017-03-03 Thread Josh Hunsaker
On Fri, Mar 3, 2017 at 1:13 PM, Andrew Brown wrote:
>
> Any tips to handle massively multithreaded side by side chunked
> queries on the same database?

In my (limited) experience, it seems that multithreaded SQLite
acquires a lock on a shared, in-memory b-tree every time a
statement is prepared.  It might not be the database read itself,
but statement preparation that is causing threads to wait for
each other.  Your problem might be mitigated if you could compile
your queries in advance.
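
A minimal sketch of what compiling queries in advance might look like with the C
API (the table, column, and chunking scheme are hypothetical; error handling
omitted): prepare each statement once per thread, then bind/step/reset it for
every chunk instead of re-preparing.

#include <sqlite3.h>

/* Prepare once, reuse many times: only the single sqlite3_prepare_v2() call
   goes through statement compilation (the step suspected above of taking the
   shared lock); the loop just binds, steps, and resets the compiled statement. */
void scan_chunks(sqlite3 *db, int lo, int hi)
{
    sqlite3_stmt *stmt = 0;
    sqlite3_prepare_v2(db, "SELECT val FROM t WHERE id BETWEEN ?1 AND ?2",
                       -1, &stmt, 0);

    for (int start = lo; start < hi; start += 1000) {
        sqlite3_bind_int(stmt, 1, start);
        sqlite3_bind_int(stmt, 2, start + 999);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* consume sqlite3_column_int(stmt, 0) here */
        }
        sqlite3_reset(stmt);          /* ready for the next bind/step cycle */
    }
    sqlite3_finalize(stmt);
}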
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] fts5

2015-02-17 Thread Josh Wilson
I accidentally sent this message to the sqlite-dev mailing list and finally
found this thread to post it in the appropriate location:


So I was having a peruse of SQLite documentation and found this FTS5 branch
in the timeline.
http://www.sqlite.org/src/timeline?n=100=fts5

From what I gather, some of the main improvements include Okapi BM25 ranking
as a default ranking option and connecting into custom ranking functions.

https://github.com/neozenith/sqlite-okapi-bm25

I have spent some time forking that implementation (from 'rads') to allow for
weighted fields (BM25F), and it is great to see this implemented in this
changeset:
http://www.sqlite.org/src/info/1cd15a1759004d5d

Although I see no provision for the BM25+ lower-bounding fix, which addresses a
document with 1 of 10 tokens matching ranking the same as a document with 10 of
100 tokens matching.

Although both have 10% of their tokens matching, a document with 10 matching
terms should rank higher than a document with only 1.
http://en.wikipedia.org/wiki/Okapi_BM25#Modifications
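
For reference, my reading of that Wikipedia section is that BM25+ simply adds a
small constant delta (typically 1.0) to the term-frequency normalization, so
every matching term contributes at least IDF(q_i) * delta regardless of
document length:

\mathrm{score}(D,Q) = \sum_{q_i \in Q} \mathrm{IDF}(q_i) \cdot
  \left[ \frac{f(q_i,D)\,(k_1+1)}{f(q_i,D) + k_1\left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)} + \delta \right]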



Also, since there is work already being done on the FTS engine, I would like to
call attention to a thread I started some time ago regarding exposing token
positions (as opposed to byte positions) to allow for proximity ranking
algorithms.
http://sqlite.1065341.n5.nabble.com/Proximity-ranking-with-FTS-td76149.html

Otherwise, I am super excited about this branch's development.



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/fts5-tp77822p80578.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Proximity ranking with FTS

2014-06-17 Thread Josh Wilson
Let's say I have search phrase "TermA TermB"

matchinfo option 'p' would be 2. 

CREATE VIRTUAL TABLE t1 USING fts4(title, content);

matchinfo option 'c' returns 2 for the number of columns. 

Now consider sample data:

|1|""|"TermA"|
|2|"TermA TermB"|"TermA TermA"|
|3|"TermA TermA TermA"|"TermB"|

matchinfo option 'x' would have ('p' * 'c' * 3) 32-bit integers of data per row. 

But each of these is an aggregate piece of information. 
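
As a sketch of what those aggregates look like from a custom C ranking function
(layout as I read the FTS4 matchinfo documentation for the 'pcx' format; the
weighting itself is only a placeholder):

#include <sqlite3.h>

/* Sketch of a rank() SQL function reading a matchinfo(documents, 'pcx') blob.
   a[0] = number of phrases, a[1] = number of columns, then three aggregate
   counters per phrase/column pair -- note: no per-hit token positions here. */
static void rank_func(sqlite3_context *ctx, int nVal, sqlite3_value **apVal)
{
    const unsigned int *a = (const unsigned int *)sqlite3_value_blob(apVal[0]);
    unsigned int nPhrase = a[0], nCol = a[1];
    double score = 0.0;
    for (unsigned int p = 0; p < nPhrase; p++) {
        for (unsigned int c = 0; c < nCol; c++) {
            const unsigned int *x = &a[2 + 3 * (p * nCol + c)];
            /* x[0]: hits of phrase p in column c of this row
               x[1]: hits of phrase p in column c over all rows
               x[2]: number of rows whose column c contains phrase p */
            score += x[0];            /* placeholder weighting only */
        }
    }
    (void)nVal;
    sqlite3_result_double(ctx, score);
}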

For example, in the list of (p)hrase terms I'm interested in the one at index 
[0] :: "TermA", in (c)olumn [0], when matchinfo is looking at row [3]; there I'm 
going to need a list of 3 token positions. But if there were more matches I'm 
going to need N token positions. 

So if matchinfo had a 't' option, giving the total number of token hits within 
the row as an int, then we could have an option 'q' which would provide the 
following data:

// matchinfo here is the array of values returned by matchinfo();
// T_OFFSET and Q_OFFSET are the offsets of the proposed 't' and 'q' data.
int N = matchinfo[T_OFFSET];
for (int i = 0; i < N; i++)
{
    int phraseTerm    = matchinfo[Q_OFFSET + 3*i];
    int column        = matchinfo[Q_OFFSET + 3*i + 1];
    int tokenPosition = matchinfo[Q_OFFSET + 3*i + 2];
}

Again ideally this would be precomputed so matchinfo can maintain its speed in 
forming the BLOB. 

This is similar to how offsets() returns results, but the documentation says 
that offsets() is an order of magnitude slower, and I'm presuming it is running 
fts3tokenize() on the matched results and tokenising the data again. 

A quick win would be to make a token_offsets() function that uses the 
fts3tokenize() function to get the values we are after by tokenising the 
results. Technically it'd get the job done, but I'd like it to still have the 
speed matchinfo has, so the proximity ranking isn't waiting on tokenising 
documents all the time. 

So if it is to be precalculated it will have to be stored in a shadow table 
somewhere and also updated accordingly with FTS4 INSERT, UPDATE and DELETE 
actions. 

Regards

Josh



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Proximity-ranking-with-FTS-tp76149p76156.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Proximity ranking with FTS

2014-06-16 Thread Josh Wilson
Yeah I had thought about using the byte distance between words but you get
these instances:

[Example A]
|word1|10charword|word2|

[Example B]
|word1|3charword|4charword|3charword|word2|

By using byte distances, both of these score the same, where Example A
should score more highly.

But it would seem I can use the fts3_tokenizer somehow to get the token
positions or that this underlying value is available but just not stored in
an accessible manner.

I implemented OkapiBM25f [1] but was hoping to implement something like the
following proximity ranking [2] as it combines Bag-Of-Words ranking and
proximity ranking. Although that article proposes to precalculate the
distance pairs for all tokens, I'm happy to accept the TimeCost and
calculate on the fly as that SpaceCost won't be worth it.

[1] https://github.com/neozenith/sqlite-okapi-bm25
[2] http://infolab.stanford.edu/~theobald/pub/proximity-spire07.pdf



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Proximity-ranking-with-FTS-tp76149p76152.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Proximity ranking with FTS

2014-06-16 Thread Josh Wilson
Ok so I have  found the following guides:

http://www.sqlite.org/privatebranch.html

http://www.sqlite.org/src/doc/trunk/README.md

So as far as creating a private branch 'the proper way', this should be
sufficient; but as far as getting the token positions for FTS MATCH results
goes, any advice on the topic is appreciated.



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Proximity-ranking-with-FTS-tp76149p76150.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Proximity ranking with FTS

2014-06-16 Thread Josh Wilson
I know that sqlite FTS provides proximity operators with NEAR and I also know
that the querying tokenizer section says you can get a token position
(http://www.sqlite.org/fts3.html#section_8_2).

Although when using the offsets() function in FTS you get the byte position
as opposed to the token position.

I'd like to implement a proximity ranking function that ideally works like
so:

SELECT 
  docID, 
  rank(matchinfo(documents, 'pcxnal'), offsets(documents)) AS score 
FROM documents WHERE documents MATCH <query>
ORDER BY score DESC
LIMIT 10;

or as extra information within matchinfo().

As far as I can tell this information isn't easily accessible through the
API but shouldn't be too much of a stretch to be accessible.

I'm not averse to the idea of writing the code myself so before diving in,
would anyone familiar have any advice or pointers or is there an "intro to
contributing to sqlite" guide I need to read?



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Proximity-ranking-with-FTS-tp76149.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unicode61 Tokenizer

2014-06-15 Thread Josh Wilson
Each major release (and occasionally minor) we review third party libraries and 
what improvements have been released since we last looked. 

FTS and having international support for a fast search within our app has been 
a big push.  We are essentially a glorified database front end with a cushy 
form UI. We are trying to use FTS to help with autocompleting fields. 

Once we are happy that the latest version of the third-party library is stable, we 
build it and manually copy it into the repo. So third-party libraries aren't 
part of the build process except for being linked statically. This guarantees 
stability, as it is a frozen version of the third-party library. Although human 
error apparently creeps in when copying manually. 

> On 16 Jun 2014, at 1:52 pm, "Stadin, Benjamin [via SQLite]" 
> <ml-node+s1065341n76119...@n5.nabble.com> wrote:
> 
> You could create a fake framework, because it takes a while every time to 
> compile. 
> 
> Just wondering: What's your rationale to use Unicode61 in an iOS project? 
> Being able to sort based on the locale is a feature all our foreign customers 
> demand (and here in German as well). Of course nobody will complain until 
> they realize. 
>  
> Von: [hidden email] [[hidden email]] im Auftrag von Josh Wilson [[hidden 
> email]] 
> Gesendet: Montag, 16. Juni 2014 04:38 
> An: [hidden email] 
> Betreff: Re: [sqlite] Unicode61 Tokenizer 
> 
> Righteo thanks for the sanity check that it must be me at fault and that this 
> is indeed possible without ICU. 
> 
> I have a separate XCode project for rolling the latest SQLite amalgamation 
> and copy that built library out of the Derived Data folder into our main App 
> project. 
> 
> It would appear I kept copying an old file for v3.8.4.3 and not the actual 
> v3.8.5 I was modifying so no wonder there was no change. So after a `Clean` 
> and `Delete Derived Data` then build the resulting build worked. Rookie 
> mistake, sorry guys. 
> 
> I simply followed the ottersoftware approach to adding the defines at the 
> top of the sqlite3.c file after SQLITE_CORE and SQLITE_AMALGAMATION get 
> defined. 
> 
> #define SQLITE_ENABLE_FTS4 
> #define SQLITE_ENABLE_FTS3_PARENTHESIS 
> #define SQLITE_ENABLE_FTS4_UNICODE61 
> 
> This worked for me building against the iOS7.1 SDK (including 64bit 
> architecture build) for anyone else's future reference. 
> 
> 
> 
> 
> -- 
> View this message in context: 
> http://sqlite.1065341.n5.nabble.com/Unicode61-Tokenizer-tp76113p76118.html
> Sent from the SQLite mailing list archive at Nabble.com. 
> ___ 
> sqlite-users mailing list 
> [hidden email] 
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___ 
> sqlite-users mailing list 
> [hidden email] 
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 




--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Unicode61-Tokenizer-tp76113p76121.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unicode61 Tokenizer

2014-06-15 Thread Josh Wilson
Righteo thanks for the sanity check that it must be me at fault and that this
is indeed possible without ICU.

I have a separate XCode project for rolling the latest SQLite amalgamation
and copy that built library out of the Derived Data folder into our main App
project.

It would appear I kept copying an old file for v3.8.4.3 and not the actual
v3.8.5 I was modifying, so no wonder there was no change. After a `Clean`
and `Delete Derived Data`, the resulting build worked. Rookie
mistake, sorry guys.

I simply followed the ottersoftware approach to adding the defines at the
top of the sqlite3.c file after SQLITE_CORE and SQLITE_AMALGAMATION get
defined.

#define SQLITE_ENABLE_FTS4
#define SQLITE_ENABLE_FTS3_PARENTHESIS
#define SQLITE_ENABLE_FTS4_UNICODE61

This worked for me building against the iOS7.1 SDK (including 64bit
architecture build) for anyone else's future reference.




--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Unicode61-Tokenizer-tp76113p76118.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unicode61 Tokenizer

2014-06-15 Thread Josh Wilson
https://bitbucket.org/ottersoftware/fts-diacritic-marks

Ok, so the above project successfully includes a build of sqlite v3.7.15.2
without ICU, and the `unicode61` tokenizer works there.

So I tried the same #defines they used with v3.8.5 and still get 'unknown
tokenizer: unicode61'.

Has something happened between versions?



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Unicode61-Tokenizer-tp76113p76116.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unicode61 Tokenizer

2014-06-15 Thread Josh Wilson
Thanks Ben.

Ok that makes sense. Normally if you roll the source code yourself there are
no issues since you are taking responsibility for the code being delivered
to the AppStore.

I know that sometimes Apple insta-rejects based upon the symbols in your
binary. Collisions with private APIs make sense too.

From what I gather, though, the ICU library adds quite a footprint (~25MB),
which is kinda prohibitive relative to our app size.

http://sqlite.1065341.n5.nabble.com/unicode61-FTS-tokenizer-td62499.html

Dr Richard Hipp states in the above linked post that `unicode61` is
independent of ICU yet I keep getting the following error:

'unknown tokenizer: unicode61'

Which makes me think the rules for v6.1 of Unicode are hard-baked into the
code somewhere.

Worst case scenario you are saying that rolling a build of sqlite3+ICU won't
get rejected from Apple AND the `unicode61` tokenizer works? This is good
news.

I'd still like confirmation of how to get the ICU-less version working if it
is at all possible.



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Unicode61-Tokenizer-tp76113p76115.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Unicode61 Tokenizer

2014-06-15 Thread Josh Wilson
This might seem like a dumb question but I am getting conflicting
information from web sources.

https://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg1218086.html

I'm trying to enable unicode61 tokenizer for a build on iOS. Apple frowns
upon including your own libicu since they include their own
libicucore.dylib and restrict access to how it is used purely through their
NSString APIs.

The above link suggests I can simply compile sqlite with
-DSQLITE_ENABLE_FTS4_UNICODE61
but not needing -DSQLITE_ENABLE_ICU.

I have tried this with v3.8.5 and get errors that the unicode61 tokenizer
is not recognised. Which makes sense to me, since presumably sqlite relies
on ICU for its definition of 'what is unicode' (as it is a moving target,
it would seem).

So is it possible to have an ICU-less build that allows the unicode61
tokenizer?

Or should I look at writing a custom tokenizer that hooks into the allowed
NSString API?

Kind Regards
-- 

Josh Wilson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug: Memory leak using PRAGMA temp_store_directory

2012-05-01 Thread Josh Gibbs
I reported this a while ago and forgot about it until today, when I 
was doing some debugging and once again got the report of leaked memory.


I'm using the C amalgamation code from 3.7.10 with VStudio 2010, and 
always start up my databases by setting a temp directory to be used, in the 
form:


PRAGMA temp_store_directory = 'my_app_dir_dbtemp'

This is passed into the 'sqlite3_exec' function.

On exit, the program is reporting that line 15215 of sqlite.c is where 
the unfreed allocation occurred:

void *p = SQLITE_MALLOC( nByte );


The content of the memory contains my temp path.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INSERT OR IGNORE vs INSERT WHERE NOT EXISTS

2012-04-07 Thread Josh Gibbs
Thanks, but I can't do that because I'm batching up multiple writes in 
transactions to get performance.  The errors cause the whole transaction 
to need to be rolled back.


On 8/04/2012 11:20 a.m., Igor Tandetnik wrote:

Josh Gibbs<jgi...@imailds.com>  wrote:

The method that must be used is as follows:

CREATE TABLE test_table (property TEXT PRIMARY KEY, value TEXT);

SQLite: INSERT OR IGNORE INTO test_table VALUES('prop','val');
Postgres: INSERT INTO test_table SELECT 'prop','val' WHERE NOT EXISTS
(SELECT 1 FROM test_table WHERE property='prop');

Another option: use a plain vanilla INSERT, and simply ignore any 
constraint-related errors in your application.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] INSERT OR IGNORE vs INSERT WHERE NOT EXISTS

2012-04-07 Thread Josh Gibbs

Looking for some performance advice before I go testing this myself.

I'm porting some code that's currently running with SQLite as its DB 
engine over to postgres.  SQLite will still be an option so I need to 
maintain compatibility across both engines.  I've run into the common 
postgres problem of having no direct equivalent method to 'INSERT OR 
IGNORE'.


The method that must be used is as follows:

CREATE TABLE test_table (property TEXT PRIMARY KEY, value TEXT);

SQLite: INSERT OR IGNORE INTO test_table VALUES('prop','val');
Postgres: INSERT INTO test_table SELECT 'prop','val' WHERE NOT EXISTS 
(SELECT 1 FROM test_table WHERE property='prop');


The Postgres syntax works fine on SQLite, but I'm wondering if there's 
any major performance difference between the two.  'EXPLAIN'ing the two 
shows 23 steps for INSERT OR IGNORE, and 53 steps for the WHERE NOT 
EXISTS method, but without setting up a large test I can't determine if 
there's a significant difference between the two.


My options are to either switch on the DB type, or just go with the 
compatible query, but the performance will be the determining factor.


Thanks for any info that can be provided.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any thoughts on optimizations for this query?

2011-11-12 Thread Josh Gibbs

On 13/11/2011 3:45 a.m., Igor Tandetnik wrote:

Josh Gibbs<jgi...@imailds.com>  wrote:

To reiterate from my original question, if we don't add the order by then
the results come back at a very acceptable speed.  But of course we'd
then have to iterate through the results ourselves to siphon off the
top item set that we're after.  I'd really like the DB to do that for me.

Doing that in your application code is likely your best bet. It is possible to 
select top M out of N elements (when M is much smaller than N) much faster than 
sorting all N elements and then taking first M in order. But SQLite doesn't 
implement any such algorithm - it only knows how to sort the whole set.


Any chance of TOP being added to sqlite?  If I could get the results I'm 
after unsorted then I can deal with the sorting later.  Bringing the 
whole list out of the session to scan through is an overhead that's 
going to be even slower with the language I'm using to access this data 
(a CGI interface from web).
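
For what it's worth, a rough sketch (not from the thread) of the kind of top-M
selection Igor describes, done in application code over the unsorted GROUP BY
results; a heap would scale better, but a small sorted array is enough for
M = 50:

#include <stdio.h>
#include <string.h>

#define TOP_M 50

typedef struct { char email[256]; int total; } TopRow;
static TopRow top[TOP_M];
static int ntop = 0;

/* Offer one (email, total) row; keep only the TOP_M largest totals seen so far. */
static void consider(const char *email, int total)
{
    int i;
    if (ntop == TOP_M && total <= top[TOP_M - 1].total)
        return;                              /* smaller than the current minimum */
    i = (ntop < TOP_M) ? ntop : TOP_M - 1;   /* slot to fill (drops the old minimum) */
    while (i > 0 && top[i - 1].total < total) {
        top[i] = top[i - 1];                 /* shift smaller entries down */
        i--;
    }
    snprintf(top[i].email, sizeof top[i].email, "%s", email);
    top[i].total = total;
    if (ntop < TOP_M) ntop++;
}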



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any thoughts on optimizations for this query?

2011-11-12 Thread Josh Gibbs

On 13/11/2011 3:37 a.m., Igor Tandetnik wrote:

Josh Gibbs<jgi...@imailds.com>  wrote:

The timestamps are already integers.  We stumbled across that CAST operation
optimization purely by accident.  I don't remember what led to it, but
we found
that it gave a measurable performance boost casting the integer as an
integer.

This works by *suppressing* the index. When you are selecting a significant 
portion of all the records in the table, a linear scan works faster than an 
extra level of indirection through the index. But SQLite doesn't know 
beforehand how many records might end up being retrieved, and calculates the 
query plan based largely on the syntactical structure of the query (but see the 
ANALYZE command).

You can make it

WHERE +messages.TimeStamp BETWEEN 0 AND 99

The unary plus is a no-op except it prevents SQLite from using an index on the 
column.


Very interesting, thanks.  I'd asked on the list about this when we 
first came across it but no one ever responded to the question.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any thoughts on optimizations for this query?

2011-11-12 Thread Josh Gibbs

On 12/11/2011 5:02 p.m., Simon Slavin wrote:

On 12 Nov 2011, at 3:43am, Josh Gibbs wrote:


We are struggling to find a way to rework this query in a way
that performs efficiently for large data sets.

It's all about the indexes.


The problem with this query seems to be related to the aggregation
of the counts, which of course we can't add an index to.  The final order
by is where the performance suddenly takes the hit.


The goal is to find the top x most active senders of e-mails
within a date range.


Do you frequently do this for many of random date ranges ?  Or do you have a 
set of standard requirements, like 'Most popular posters today, this week, this 
month.' ?


We have some predefined selections, but we also have a self-selection
range.  We are trying to avoid creating pre-aggregated sets if it's at all
possible.


WHERE
CAST (messages.TimeStamp AS INTEGER) BETWEEN 0 AND 99

It's a little late at night, so forgive me.  What is this for ?  What formats 
are the TimeStamps without CASTing ?  Can you store the TimeStamps as integers, 
or even get rid of this entirely ?  Oh hold on ... this is just for testing and 
in real life you set it to a smaller span ?


The timestamps are already integers.  We stumbled across that CAST operation
optimization purely by accident.  I don't remember what led to it, but 
we found
that it gave a measurable performance boost casting the integer as an 
integer.

You might want to check into that one.

And yes, those numbers are just for testing.  Normally that range is an 
epoch time range spanning the requested time.  I left that in there because 
it is part of the query we are using, to be complete.


ORDER BY Total DESC

This is what you say is killing you.  Just noting it.


To reiterate from my original question, if we don't add the order by then
the results come back at a very acceptable speed.  But of course we'd
then have to iterate through the results ourselves to siphon off the
top item set that we're after.  I'd really like the DB to do that for me.


CREATE INDEX messages_timestamp_index ON messages (TimeStamp);

Because you do a CAST in your 'WHERE' clause, I don't know whether this index 
would actually be used.

As mentioned previously, the CAST makes the query quicker.  However, in our
tests right now we're using a smaller data set of around a million records of
messages/senders and not actually using the WHERE clause at all.  It doesn't
make much difference to the overall query time.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Any thoughts on optimizations for this query?

2011-11-11 Thread Josh Gibbs

Hi all,

We are struggling to find a way to rework this query in a way
that performs efficiently for large data sets.

The goal is to find the top x most active senders of e-mails
within a date range.

The killer of this query is the ORDER BY clause.  Without it
the results are quick and snappy.  Obviously to get the LIMIT
to provide useful results we have to use the ORDER.  It seems
so close to being able to work efficiently, if only it could be
performing the LIMIT/sort while it's collating the groups.

The magnitude of data we are working with is in the millions
for both message and sender records and this can take hours to
complete the query.

Any ideas would be greatly appreciated.

Thanks, Josh


SELECT
EMailAddress, COUNT(*) AS Total
FROM
   senders
INNER JOIN
messages ON messages.message_ID = senders.message_ID
INNER JOIN
email_addresses ON senders.email_address_ID = 
email_addresses.email_address_ID

WHERE
CAST (messages.TimeStamp AS INTEGER) BETWEEN 0 AND 99
GROUP BY
senders.email_address_ID
ORDER BY Total DESC
LIMIT 50

Table create statements:
CREATE TABLE messages (message_ID INTEGER PRIMARY KEY AUTOINCREMENT, 
TimeStamp INTEGER);

CREATE INDEX messages_timestamp_index ON messages (TimeStamp);

CREATE TABLE email_addresses (email_address_ID INTEGER PRIMARY KEY 
AUTOINCREMENT, EMailAddress TEXT UNIQUE);


CREATE TABLE senders (message_ID INTEGER, email_address_ID INTEGER 
DEFAULT NULL, FOREIGN KEY(message_ID) REFERENCES messages(message_ID) ON 
DELETE CASCADE);

CREATE INDEX senders_emailAddressID_index ON senders (email_address_ID);
CREATE UNIQUE INDEX senders_constraint_index ON senders (message_ID, 
email_address_ID);




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] PRAGMA temp_store_directory not releasing resources

2011-07-26 Thread Josh Gibbs
We've just introduced some memory leak detection into our
code and have discovered that this pragma call is not having its
resources cleaned up at shutdown.  It's not a critical leak since
it's only called once at program start, but it would be nice to
have the system report zero memory leaks on exit.

The following code will reproduce the leak:

 sqlite3 *pdb;
 int res = sqlite3_open("c:/temp/memleak.db", &pdb);
 if( res == SQLITE_OK )
 {
 sqlite3_exec(pdb, "PRAGMA temp_store_directory = 'c:/temp/';", 
NULL, NULL, NULL);
 sqlite3_close(pdb);
 }


Any ideas why this might be occurring, or is it an sqlite bug?

We're using 3.7.5 amalgamation on Win32 in this instance.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Query performance issue

2011-06-22 Thread Josh Gibbs
Hi all.  We're trying to get the following query working at a better 
speed and
I'm wondering if anyone has any ideas on optimizations we might be able 
to do.

The query groups e-mail addresses and gives us the total number of each
address seen within a given time range of messages, ordering from the 
highest
to lowest count.

The first time we run the query it's very slow, but speeds up with 
subsequent
runs.  That doesn't help in production because by the time it comes 
around to
running the report the cached info is well expired.

Table structure:
CREATE TABLE email_addresses (email_address_ID INTEGER PRIMARY KEY 
AUTOINCREMENT, EMailAddress TEXT UNIQUE);
CREATE TABLE messages (message_ID INTEGER PRIMARY KEY AUTOINCREMENT, 
TimeStamp INTEGER);
CREATE INDEX messages_timestamp_index ON messages (TimeStamp);
CREATE TABLE recipients (recipient_ID INTEGER PRIMARY KEY AUTOINCREMENT, 
message_ID INTEGER, email_address_ID INTEGER, FOREIGN KEY(message_ID) 
REFERENCES messages(message_ID) ON DELETE CASCADE);
CREATE INDEX recipients_emailAddressID_index ON recipients 
(email_address_ID);
CREATE UNIQUE INDEX recipients_constraint_index ON recipients 
(message_ID, email_address_ID);

Query:
SELECT email_addresses.EMailAddress, COUNT(*) as OrderColumn
FROM email_addresses,recipients,messages
WHERE email_addresses.email_address_ID = recipients.email_address_ID AND 
messages.message_ID = recipients.message_ID
AND CAST (messages.TimeStamp AS INTEGER) BETWEEN 0 AND 99
GROUP BY email_addresses.EMailAddress
ORDER BY
  OrderColumn
DESC

Table stats (these are just the relevant tables):

Messages: 942,279
Recipients: 531,186
Email_addresses: 226,337

DB size is just over 1Gb


On a side note, the CAST in there was a leftover from when the timestamp 
was a date field.
Taking the cast out slows the query down?  Explain suggests that the 
query has 2 additional
instructions to perform without the CAST.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] complex query

2011-02-24 Thread Josh Marell
I would say that couple should be a 3 column table, with coupleID,
partner1ID, partner2ID.  It looks like right now, you have just coupleID and
partnerID, that doubles the number of rows you have.

On Thu, Feb 24, 2011 at 7:02 AM, Igor Tandetnik  wrote:

> Aric Bills  wrote:
> > Using the results of this query, I'd like to identify cousins.  It should
> be
> > possible to define two people as cousins if they share a grandparent but
> > have no parents in common.  Defining the query above as the view
> > "grandparentage", I can come up with a query for people who share a
> common
> > grandparent as follows:
> >
> >SELECT
> >p.personid,
> >c.personid AS cousinid,
> >FROM
> >grandparentage AS p,
> >grandparentage AS c
> >WHERE
> >p.grandparentid = c.grandparentid
> >
> > What I'm not sure how to do is determine whether p.personid has any
> parents
> > in common with c.personid.
>
> and not exists (
>select 1 from parentage parent1, parentage parent2
>where parent1.parentid = parent2.parentid
>and parent1.personid = p.personid
>and parent2.personid = c.personid)
>
> --
> Igor Tandetnik
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up count(distinct col)

2011-02-07 Thread Josh Marell
I would create a tagList table (integer tagID, string tagName,
unique(tagName))

Before performing your batch of inserts, query the tagList table and generate
a map (keyed on tagName, with tagID as the value).  For each tag you want to
insert, see if it exists in the map.  If it doesn't, insert it into the tagList
table first, get the tagID of that new entry, and update your map.  If it does
exist, use the tagID stored in the map.  Either way, perform your insert with
the tagID instead of the tag text.

I chose a map (or some similar implementation), because the lookup should be
quick, and you won't need to query the DB for each new tag during your batch
of inserts.
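
A rough sketch of the insert-or-look-up step for the tagList table described
above, using the SQLite C API (not from the original post; assumes tagID is an
INTEGER PRIMARY KEY so last_insert_rowid() returns it). In the approach
described, the in-memory map would sit in front of this so the SELECT only runs
on a cache miss:

#include <sqlite3.h>

/* Returns the tagID for tagName, inserting a new row if needed; 0 on error. */
static sqlite3_int64 get_tag_id(sqlite3 *db, const char *tagName)
{
    sqlite3_stmt *sel = 0, *ins = 0;
    sqlite3_int64 id = 0;

    sqlite3_prepare_v2(db, "SELECT tagID FROM tagList WHERE tagName = ?1", -1, &sel, 0);
    sqlite3_bind_text(sel, 1, tagName, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(sel) == SQLITE_ROW) {
        id = sqlite3_column_int64(sel, 0);        /* tag already known */
    } else {
        sqlite3_prepare_v2(db, "INSERT INTO tagList(tagName) VALUES (?1)", -1, &ins, 0);
        sqlite3_bind_text(ins, 1, tagName, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(ins) == SQLITE_DONE)
            id = sqlite3_last_insert_rowid(db);   /* newly assigned tagID */
        sqlite3_finalize(ins);
    }
    sqlite3_finalize(sel);
    return id;
}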

On Mon, Feb 7, 2011 at 2:32 PM, Yuzem  wrote:

>
>
> Petite Abeille-2 wrote:
> >
> > Hey... sounds like IMdb :P
> >
> Yes, I'm coding a movie manager (http://yuzem.blogspot.com/p/figuritas-screenshots.html)
> that grabs the info from imdb.
>
>
> Petite Abeille-2 wrote:
> >
> > In any case, as mentioned by Michael Black, you might benefit greatly by
> > normalizing your table and indexing the foreign key:
> >
> > http://dev.alt.textdrive.com/browser/IMDB/IMDB.ddl#L401
> >
>
> I thought sqlite didn't handle foreign keys correctly, I am using triggers
> to automatically delete tags when a movie gets deleted.
> If I use a foreign key will it automatically remove the tag if the movie
> gets deleted?
> Anyway, to use integers in the "tags" table is very complicated because I
> will have to assign the corresponding number to each tag that I insert and
> I
> have to insert lots of keywords for every movie.
> Does sqlite has any function to convert a text string into an unique
> number?
>
> --
> View this message in context:
> http://old.nabble.com/Speed-up-count%28distinct-col%29-tp30864622p30867411.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Order of UNION query results

2011-01-23 Thread Josh Gibbs
Excellent and perfect solution to my problem.  Thanks Richard.


On 23/01/2011 2:16 a.m., Richard Hipp wrote:
> On Fri, Jan 21, 2011 at 6:53 PM, Josh Gibbs<jgi...@di.co.nz>  wrote:
>
>> Could someone please clarify for me if the resulting order of a UNION
>> query will come back with the left data first, then the right data in
>> the case
>> that no ordering has been defined for the query.
>>
>> My need is to have a parameter stored in a database, with an optional
>> overriding parameter which should take precedence, such as:
>>
>> select value from param_overrides where key='setting' UNION
>> select value from params where key='setting'
>>
> SELECT coalesce(
>  (SELECT value FROM param_overrides WHERE key='setting'),
>  (SELECT value FROM param WHERE key='setting')
> );
>
> This approach above has the advantage that it never evaluates the second
> query if the first query is successful.
>
>
>
>> I'd like the resulting recordset to always contain the override parameter
>> first if it exists so I can simply use that value.
>>
>> Thanks, Josh
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Order of UNION query results

2011-01-22 Thread Josh Gibbs
Could someone please clarify for me whether the resulting order of a UNION
query will come back with the left data first, then the right data, in 
the case
that no ordering has been defined for the query.

My need is to have a parameter stored in a database, with an optional
overriding parameter which should take precedence, such as:

select value from param_overrides where key='setting' UNION
select value from params where key='setting'

I'd like the resulting recordset to always contain the override parameter
first if it exists so I can simply use that value.

Thanks, Josh


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] View with values in 1st table replaced by values in second table

2011-01-14 Thread Josh Marell
Thank you both for the advice, I had never thought to join on the same table
using 3 different names like that, will have to keep that in mind!

On Thu, Jan 13, 2011 at 9:27 PM, Jay A. Kreibich <j...@kreibi.ch> wrote:

> On Thu, Jan 13, 2011 at 01:44:12PM -0600, Josh Marell scratched on the
> wall:
>
> > Schedule {
> > date TEXT UNIQUE NOT NULL
> > problem_set INTEGER
> > literature INTEGER
> > research INTEGER}
> >
> > Presenters {
> > p_id INTEGER PRIMARY KEY
> > short_name TEXT UNIQUE NOT NULL}
>
> > I am trying to create a view such that the output is the 4 columns in the
> > schedule table, except instead of the p_id being displayed, I want to
> > replace those values with the short_name.
> >
> > For any given date, 2 of the 3 categories is set to a p_id and the 3rd is
> > null.
>
>
> CREATE VIEW Schedule_names AS
>  SELECT s.date, p.short_name, l.short_name, r.short_name
>FROM  Schedule   AS s
>LEFT JOIN Presenters AS p ON ( s.problem_set = p.p_id )
>LEFT JOIN Presenters AS l ON ( s.literature  = l.p_id )
>LEFT JOIN Presenters AS r ON ( s.research= r.p_id );
>
>
>
>   -j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "Intelligence is like underwear: it is important that you have it,
>  but showing it to the wrong people has the tendency to make them
>  feel uncomfortable." -- Angela Johnson
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] View with values in 1st table replaced by values in second table

2011-01-13 Thread Josh Marell
Hi everyone.  I am new to this mailing list, so hopefully I will be able to
present my idea clearly to you:

I have two tables designed as:

Schedule {
date TEXT UNIQUE NOT NULL
problem_set INTEGER
literature INTEGER
research INTEGER}

Presenters {
p_id INTEGER PRIMARY KEY
short_name TEXT UNIQUE NOT NULL}

problem_set, literature, and research reference a p_id in the presenter
list.

I am trying to create a view such that the output is the 4 columns in the
schedule table, except instead of the p_id being displayed, I want to
replace those values with the short_name.

For any given date, 2 of the 3 categories is set to a p_id and the 3rd is
null.

Any ideas?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DELETE OR IGNORE statement?

2010-09-24 Thread Josh Gibbs
  What's the chance that 2 people with the same surname would have
the same problem in the same week...

I believe I just solved the same problem you are asking about yesterday
thanks to a query from Richard:

>>> CREATE TABLE Message (message_id INTEGER PRIMARY KEY AUTOINCREMENT,
>>> Subject TEXT);
>>> CREATE TABLE MessageRecipient (message_id INTEGER REFERENCES
>>> Message(message_id) ON DELETE CASCADE, recipient_ID INTEGER REFERENCES
>>> Recipient(recipient_id));
>>> CREATE TABLE Recipient (recipient_ID INTEGER PRIMARY KEY AUTOINCREMENT,
>>> name);
>>>
>>> I've tried creating a trigger after delete on MessageRecipient to remove the
>>> referenced Recipient, and this works if it's the only related item, however any
>>> other MessageRecipient relationship causes the delete to fail.  As there is no
>>> 'or ignore' for the delete statement, I can't get this to keep my data clean.
>
>  DELETE FROM recipient
>    WHERE recipient_id = old.recipient_id
>      AND NOT EXISTS(SELECT 1 FROM message_recipient
>                     WHERE recipient.recipient_id =
>                           message_recipient.recipient_id);
>


That SQL statement (with minor corrections) works within an AFTER DELETE
trigger.  The key references prevent deletion, and the trigger does the
cleanup when only one item is left over.

I struggled the same way you did at first thinking there would be a
DELETE OR IGNORE clause.

Hope this helps.

Josh




On 24/09/2010 1:59 a.m., Andy Gibbs wrote:
> On Thursday, September 23, 2010 3:36 PM, Simon Slavin wrote:
>
>> On 23 Sep 2010, at 2:13pm, Andy Gibbs wrote:
>>
>>> I've got a table with a primary key and then any number of additional
>>> tables
>>> with foreign keys that reference this primary key table with "ON DELETE
>>> RESTRICT"
>> I always worry when I see descriptions like this.  Those additional
>> tables: do they all have the same columns ?  If so, can you amalgamate
>> them all into one big table ?  Just insert one extra column saying what
>> kind of row this row is.
>>
>> Not only does this fix the problem you raised, but it means you don't
>> need to change your schema each time you encounter a new type of
>> information.
> Thanks for the suggestion, Simon.  If only it were that simple.
> Unfortunately, each of the foreign key tables are actually quite distinct in
> their purpose, so putting them all into one huge table would not be the
> right solution.
>
> The primary key is a timestamp (as an integer, i.e. number of seconds since
> some arbitrary epoch or other).  The primary key table holds then the
> "common" information on the "action" that has happened, i.e. timestamp, user
> name, and some other data.  The foreign key tables are all those that hold
> data for the particular actions that can be done, but really they are very
> very different from each other.
>
> Of course it would have been possible instead to merge the columns from the
> primary key table into each of the foreign key tables and not have the
> primary key table, but the really nice thing about keeping the common data
> it central, is that only one table needs to be queried e.g. to find out the
> which users have been making alterations to the system and when (this is one
> of the main design requirements).
>
> It seems to be a trade-off -- either the complexity is in the DELETE
> statement to keep the primary key table tidy or in the SELECT statement
> querying it.  If it has to be a choice, then the complexity has to be in the
> DELETE statement since this happens very infrequently.
>
> Cheers
> Andy
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trouble with constraints and triggers

2010-09-23 Thread Josh Gibbs

On 23/09/2010 11:52 p.m., Richard Hipp wrote:
>> Josh Gibbs<jgi...@imailds.com>  wrote:
>>> CREATE TABLE Message (message_id INTEGER PRIMARY KEY AUTOINCREMENT,
>>> Subject TEXT);
>>> CREATE TABLE MessageRecipient (message_id INTEGER REFERENCES
>>> Message(message_id) ON DELETE CASCADE, recipient_ID INTEGER REFERENCES
>>> Recipient(recipient_id));
>>> CREATE TABLE Recipient (recipient_ID INTEGER PRIMARY KEY AUTOINCREMENT,
>>> name);
>>>
>>> I've tried creating a trigger after delete on MessageRecipient to remove
>> the
>>> referenced Recipient, and this works if it's the only related item,
>>> however any
>>> other MessageRecipient relationship causes the delete to fail.  As there
>>> is no
>>> 'or ignore' for the delete statement, I can't get this to keep my data
>>> clean.
>
> DELETE FROM recipient
>   WHERE recipient_id = old.recipient_id
> AND NOT EXISTS(SELECT 1 FROM message_recipient
> WHERE recipient.recipient_id=
>   message_recipient.recipient_id);
>

This works perfectly, thanks.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trouble with constraints and triggers

2010-09-22 Thread Josh Gibbs
  On 23/09/2010 3:15 p.m., Igor Tandetnik wrote:
> Josh Gibbs<jgi...@imailds.com>  wrote:
>> CREATE TABLE Message (message_id INTEGER PRIMARY KEY AUTOINCREMENT,
>> Subject TEXT);
>> CREATE TABLE MessageRecipient (message_id INTEGER REFERENCES
>> Message(message_id) ON DELETE CASCADE, recipient_ID INTEGER REFERENCES
>> Recipient(recipient_id));
>> CREATE TABLE Recipient (recipient_ID INTEGER PRIMARY KEY AUTOINCREMENT,
>> name);
>>
>> I've tried creating a trigger after delete on MessageRecipient to remove the
>> referenced Recipient, and this works if it's the only related item,
>> however any
>> other MessageRecipient relationship causes the delete to fail.  As there
>> is no
>> 'or ignore' for the delete statement, I can't get this to keep my data
>> clean.
> You could do something like
>
> delete from Recipient where recipient_ID = old.recipient_ID and
>recipient_ID not in (select recipient_ID from MessageRecipient);
>

That was the last idea we had as well.  Trouble is, MessageRecipient 
contains hundreds of thousands of records.  Would that cripple the speed of 
the delete, or would the query apply the 'not in' to the sub-select using 
its index?

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Trouble with constraints and triggers

2010-09-22 Thread Josh Gibbs
  Hi all, I'm hoping someone can assist me with a problem I'm having
creating a cascading delete operation as well as a constraint.

This table is an example of the layout of my data:

CREATE TABLE Message (message_id INTEGER PRIMARY KEY AUTOINCREMENT, 
Subject TEXT);
CREATE TABLE MessageRecipient (message_id INTEGER REFERENCES 
Message(message_id) ON DELETE CASCADE, recipient_ID INTEGER REFERENCES 
Recipient(recipient_id));
CREATE TABLE Recipient (recipient_ID INTEGER PRIMARY KEY AUTOINCREMENT, 
name);

My goal is that when I remove an item from the Message table, all of the 
related
items in the MessageRecipient table are removed, and all items from the 
Recipient
table that no longer have references to the MessageRecipient are removed.

The tables as listed above fulfill the need for a Recipient to exist, 
prevent a Recipient from being removed if a MessageRecipient still points 
to them, and remove the MessageRecipient when the Message is deleted.

They do not, however, tidy up the Recipient table, and items are left 
orphaned when all Messages that refer to that recipient are removed.

I've tried creating a trigger after delete on MessageRecipient to remove the
referenced Recipient, and this works if it's the only related item, 
however any
other MessageRecipient relationship causes the delete to fail.  As there 
is no
'or ignore' for the delete statement, I can't get this to keep my data 
clean.

Any solutions to this conundrum would be greatly appreciated.

Thanks, Josh

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Josh
Good point. Thanks.
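
For reference, a minimal sketch of the backup-API pattern described in the
quoted message below, modeled on the example at
http://www.sqlite.org/backup.html (assumes `src` is an already-open connection;
error handling abbreviated):

#include <sqlite3.h>

/* Copy the main database of `src` into the file `zDest`.  Returns an SQLite
   error code. */
static int backup_db(sqlite3 *src, const char *zDest)
{
    sqlite3 *dst = 0;
    int rc = sqlite3_open(zDest, &dst);
    if (rc == SQLITE_OK) {
        sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
        if (b) {
            sqlite3_backup_step(b, -1);   /* -1 = copy all remaining pages */
            sqlite3_backup_finish(b);
        }
        rc = sqlite3_errcode(dst);
    }
    sqlite3_close(dst);
    return rc;
}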

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 09/10/2010 01:11 PM, Josh wrote:
>> A saw the backup API's but they looked overly complicated for my situation.
>
> How so?
>
> There is sample code at this link (see the second example specifically):
>
>  http://www.sqlite.org/backup.html
>
> It is at most 10 lines of code.
>
> The advantage of using the backup API is that it is guaranteed to be
> correct.  As your program grows over time, other things may access the
> database, disk errors could occur, contention etc, it will always get things
> right.
>
> Reinventing that wheel will take you more than 10 lines of code, and you are
> unlikely to do as much testing as SQLite does.
>
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.10 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
>
> iEYEARECAAYFAkyKpAUACgkQmOOfHg372QSDigCg2MTTsstinndl+VnyeuXh38Mu
> 0YcAnRQhuPq48yBoMoODYrv+JcgdghL9
> =M+nG
> -END PGP SIGNATURE-
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backing up SQLite file

2010-09-10 Thread Josh
Thanks so much, this helps. I saw the backup APIs but they looked overly 
complicated for my situation. A little clarification:

It should be noted that my program is a single thread, and I can assume no 
other programs or threads should be accessing my database file. Since 
SQLite auto commits transactions and I won't have any transactions open, 
I'm thinking there shouldn't be any reason I couldn't just copy the file 
without an exclusive lock, but it sounds like I may be missing something?

I'm assuming the BEGIN EXCLUSIVE will get a file lock. Will the BEGIN 
EXCLUSIVE statement block, waiting until it can get an exclusive lock, or 
if it fails to get an exclusive lock, will it immediately return to the 
caller (if called by sqlite3_get_table() for example)?

Are there other ways to lock or sync the database than this?

How would this work?:

runsql("sync database somehow?"); //do I need this?
runsql("BEGIN EXCLUSIVE");
copydatabasefile();
runsql("ROLLBACK");

Thanks!

Josh


> On Fri, Sep 10, 2010 at 12:09:58PM -0700, Josh scratched on the wall:
>> Hello all,
>>
>> I think this is a simple question...
>>
>> I am using the C api to open and read/write a SQLite database (ie.
>> sqlite3_open_v2() etc.). I would like to have a function in my program to
>> backup the database file (using the OSes copy command).
>
>  You might also be able to use the backup APIs.
>
>  See:  http://sqlite.org/c3ref/backup_finish.html
>
>> I can guarantee
>> that my program will not write to the database. Do I need to sync or lock
>> the database file before I do the copy command?
>
>  That would be a good idea.
>
>> I believe that as long as
>> I have no open write transactions the file should be fine to copy, is this
>> correct?
>
>  No, not exactly.  Transactions are normally lazy about getting locks.
>  To force the transaction to get the locks, issue the command:
>
>BEGIN EXCLUSIVE
>
>  If that works, you know nobody else can touch the database.  You're
>  then free to copy it.  Once the copy is done, you can rollback the
>  transaction.
>
>   -j
>
>
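
A rough C sketch of the lock/copy/rollback sequence Jay describes above
(copy_file() is a hypothetical OS-level helper; error handling trimmed):

#include <sqlite3.h>

/* Hypothetical helper: copies the file at zSrc to zDst; returns 0 on success. */
int copy_file(const char *zSrc, const char *zDst);

/* Lock the database, copy the underlying file, then release the lock. */
int backup_by_copy(sqlite3 *db, const char *zDbPath, const char *zDest)
{
    /* Take all locks up front; fails (e.g. SQLITE_BUSY) if another connection
       holds one. */
    int rc = sqlite3_exec(db, "BEGIN EXCLUSIVE", 0, 0, 0);
    if (rc != SQLITE_OK) return rc;
    int copied = copy_file(zDbPath, zDest);          /* OS-level file copy */
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);           /* nothing written; just drop the lock */
    return copied == 0 ? SQLITE_OK : SQLITE_ERROR;
}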
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Backing up SQLite file

2010-09-10 Thread Josh
Hello all,

I think this is a simple question...

I am using the C API to open and read/write a SQLite database (i.e. 
sqlite3_open_v2() etc.). I would like to have a function in my program to 
back up the database file (using the OS's copy command). I can guarantee 
that my program will not write to the database. Do I need to sync or lock 
the database file before I do the copy command? I believe that as long as 
I have no open write transactions the file should be fine to copy, is this 
correct? Is there a way to guarantee there are no open write transactions 
(I can guarantee there aren't any open transactions by going through my 
code, but I was just wondering if there is a way for SQLite to tell this 
as well)? Thanks for any thoughts.

Josh
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] commit transaction/savepoints on program crash

2010-08-09 Thread Josh
Thanks for the answers, but I guess I should have made my question more 
clear. I knew that you can commit every SQL statement individually; the 
question then becomes, how can I roll them back?

In other words, I'd like something like savepoint and rollback-to-savepoint, 
while not losing the transactions if there is a power failure. 
I know I can have either one of these, but can I have both!?

Josh

>> I'm new to the list and had a question. I know the default behavior for
>> savepoints (or any transactions) is that if they have not been committed, if
>> the program crashes, they are lost. Is there any way to have them committed 
>> by
>> default? Basically I *only* want the transaction rolled back in case of an
>> explicit rollback statement, not due to program crash/power failure, etc. 
>> Does
>> anyone know of a way of doing this? Thanks!
>
> Sure.  Don't define any transactions.  Just issue every command 
> separately.  That way SQLite automatically makes a little transaction 
> for each command, and once the command is finished it will automatically 
> COMMIT it.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] commit transaction/savepoints on program crash

2010-08-09 Thread Josh
Hello all,

I'm new to the list and had a question. I know the default behavior for 
savepoints (or any transactions) is that if they have not been committed and 
the program crashes, they are lost. Is there any way to have them committed by 
default? Basically I *only* want the transaction rolled back in case of an 
explicit rollback statement, not due to a program crash/power failure, etc. Does 
anyone know of a way of doing this? Thanks!

Josh
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] rtree segfault on Linux X86_64

2008-12-22 Thread Josh Purinton
I get a segfault using a particular rtree query. Here's the simplest way I
could find to reproduce it.

$ uname -a
Linux odysseus 2.6.18-6-xen-vserver-amd64 #1 SMP Fri Jun 6 07:07:31 UTC 2008
x86_64 GNU/Linux
$ sqlite3 -version
3.6.7
$ cat >bug.sql
create table foo (id integer primary key);
create virtual table bar using rtree (id, minX, maxX, minY, maxY);
insert into foo values (null);
insert into foo select null from foo;
insert into foo select null from foo;
insert into foo select null from foo;
insert into foo select null from foo;
insert into foo select null from foo;
insert into foo select null from foo;
delete from foo where id > 40; -- change to 39,38,37,... and it won't segfault
insert into bar select null,0,0,0,0 from foo;
select count(*)
  from bar b1, bar b2, foo s1
  where b1.minX <= b2.maxX
  and s1.id = b1.id;
$ sqlite3 < bug.sql
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trim everything that is entered into database

2008-09-10 Thread Josh Millstein



On 9/10/08 11:11 AM, "Dennis Cote" <[EMAIL PROTECTED]> wrote:

> Josh Millstein wrote:
>> On 9/9/08 11:46 AM, "Igor Tandetnik" <[EMAIL PROTECTED]> wrote:
>>> Josh Millstein <[EMAIL PROTECTED]>
>>> wrote:
>>>> Is there anyway to perform a trim to everything that is entered into
>>>> a table instead of trimming before I put data in?
>>> 
>>> update mytable set myfield=trim(myfield);
>>> 
>> 
>> Yeah, but can you do that automatically on each insert into the db?  Trim
>> the whitespace, that is?
> 
> Yes. Simply do these updates in triggers. You will need to add two
> triggers, one that executes after each insert, and one that executes
> after each update.
> 
>    create trigger mytab_in after insert on mytable
>    begin
>      update mytable
>         set myfield = trim(myfield)
>       where rowid = new.rowid;
>    end;
> 
>    create trigger mytab_up after update of myfield on mytable
>    begin
>      update mytable
>         set myfield = trim(myfield)
>       where rowid = new.rowid;
>    end;
> 
> Now your application can insert untrimmed data, but the database will
> only store trimmed data, and therefore you will only ever retrieve
> trimmed data.
> 
> HTH
> Dennis Cote

Awesome, thank you for the info.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
[EMAIL PROTECTED]
785-832-9154



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trim everything that is entered into database

2008-09-10 Thread Josh Millstein
Yes, automatically like using triggers.  That is exactly what I'm talking
about


On 9/9/08 2:03 PM, "Enrique Ramirez" <[EMAIL PROTECTED]> wrote:

> I'm guessing he means like automatically (IE using triggers).
> 
> Which also would be my answer (use triggers).
> 
> On Tue, Sep 9, 2008 at 1:52 PM, P Kishor <[EMAIL PROTECTED]> wrote:
>> On 9/9/08, Josh Millstein <[EMAIL PROTECTED]> wrote:
>>> Hello,
>>> 
>>>  Is there anyway to perform a trim to everything that is entered into a
>>> table
>>>  instead of trimming before I put data in?
>> 
>> aren't "perform a trim to everything that is entered into a table" and
>> "trimming before I put data in" the same actions?
>> 
>> 
>>> 
>>>  Thanks,
>>>  Josh
>>> 
>>> --
>>> 
>>> 
>>>  ___
>>>  sqlite-users mailing list
>>>  sqlite-users@sqlite.org
>>>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>> 
>> 
>> 
>> --
>> Puneet Kishor http://punkish.eidesis.org/
>> Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
>> Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
> 
> 

-- 
[EMAIL PROTECTED]
785-832-9154



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trim everything that is entered into database

2008-09-10 Thread Josh Millstein
The trim before I put data in was based on using a programming language and
not the DB language.  I want to do it all in SQL syntax.


On 9/9/08 12:52 PM, "P Kishor" <[EMAIL PROTECTED]> wrote:

> On 9/9/08, Josh Millstein <[EMAIL PROTECTED]> wrote:
>> Hello,
>> 
>>  Is there anyway to perform a trim to everything that is entered into a table
>>  instead of trimming before I put data in?
> 
> aren't "perform a trim to everything that is entered into a table" and
> "trimming before I put data in" the same actions?
> 
> 
>> 
>>  Thanks,
>>  Josh
>> 
>> --
>> 
>> 
>>  ___
>>  sqlite-users mailing list
>>  sqlite-users@sqlite.org
>>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
> 

-- 
[EMAIL PROTECTED]
785-832-9154



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trim everything that is entered into database

2008-09-10 Thread Josh Millstein

On 9/9/08 11:46 AM, "Igor Tandetnik" <[EMAIL PROTECTED]> wrote:

> Josh Millstein <[EMAIL PROTECTED]>
> wrote:
>> Is there anyway to perform a trim to everything that is entered into
>> a table instead of trimming before I put data in?
> 
> I'm not sure I understand the question. Are you perhaps thinking of
> 
> update mytable set myfield=trim(myfield);
> 
> Igor Tandetnik 
> 

Yeah, but can you do that automatically on each insert into the db?  Trim
the whitespace, that is?
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
[EMAIL PROTECTED]
785-832-9154



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Trim everything that is entered into database

2008-09-09 Thread Josh Millstein
Hello,

Is there any way to perform a trim on everything that is entered into a table
instead of trimming before I put data in?

Thanks,
Josh
-- 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] corrupt db vacuums clean on 3.2.7 but not 3.5.4 or 3.5.6

2008-03-11 Thread Josh Gibbs
Hi there,

We are having some problems with DB corruption occurring
when using 3.5.4.  I don't know the source of the corruption; however,
after extensive testing and updating to 3.5.6 in the hope of getting
some resolution to our problems, I accidentally ran an older build
of the command line tool and found that it was able to vacuum a
corrupt DB back to a working state.

Once the DB had been vacuumed with 3.2.7, it was then possible
to vacuum it with 3.5.6.

What's the chance of someone having a look at the DB that we
have with the problem and assisting with a patch so we can stay
on the 3.5.x code track?

Thanks, Josh

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Trying to determine if a column exists through a SQL Statement...

2006-05-12 Thread Josh

With MySQL I would simply do:

SHOW COLUMNS FROM `war3users` LIKE 'playerip';

Can I do something similar in SQLite ?
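
For reference, a sketch of one way to do a similar check from C using PRAGMA
table_info (not from the original thread; the table and column names are the
ones in the question):

#include <string.h>
#include <sqlite3.h>

/* Returns 1 if table war3users has a column named playerip, 0 otherwise. */
static int has_playerip_column(sqlite3 *db)
{
    sqlite3_stmt *st = 0;
    int found = 0;
    if (sqlite3_prepare_v2(db, "PRAGMA table_info(war3users)", -1, &st, 0) == SQLITE_OK) {
        while (sqlite3_step(st) == SQLITE_ROW) {
            /* column 1 of the table_info result is the column name */
            const char *name = (const char *)sqlite3_column_text(st, 1);
            if (name && strcmp(name, "playerip") == 0) { found = 1; break; }
        }
    }
    sqlite3_finalize(st);
    return found;
}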

Thanks!!
Josh


[sqlite] Newbie question: sqlite.exe command usage?

2004-11-17 Thread Josh Don
I have read http://www.sqlite.org/sqlite.html
 
I want this to work:
 
[C:\test.js]
WS=new ActiveXObject('WScript.Shell')
WS.Run('C:\\sqlite.exe "C:\\test.db" .read "C:\\query.txt" .output 
"C:\\OUT.txt" ') 
 
[C:\query.txt] contains...
select * from sqlite_master;
 
What's up?  
I have searched for links to sample command usage
and sample databases. please help.
 
[C:\whatever.js] If I try this, I lose the context of the previous settings...
WS=new ActiveXObject('WScript.Shell')
WS.Run('C:/sqlite.exe "C:/test.db"') 
WS.Run('.read "C:\\query.txt"')
WS.Run('.output "C:\\OUT.txt"')
 
I would love it if all settings could be read from a single command file...
(settings=command line options, path to database file, SQL ) 
 
[C:\test.js]
WS=new ActiveXObject('WScript.Shell')
WS.Run('C:\\sqlite.exe -input "C:\\commands.txt" -output "C:\\result.txt" ')
 
joshdon
brisbane australia 

 

