[sqlite] Limiting the size of a database?
Is it possible to tell SQLite to limit the size that a database may grow to? It would be useful for storage-constrained applications. - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Limiting the size of a database?
It would be helpful if the maximum size were configurable per database via a PRAGMA statement. That way you could ensure a non-critical database doesn't grow so large that it leaves no space for one that is more important.

On 5/7/07, [EMAIL PROTECTED] wrote:
Joe Wilson [EMAIL PROTECTED] wrote:
I wrote too soon: http://www.sqlite.org/cvstrac/chngview?cn=3941

+ /*
+ ** Maximum number of pages in one database file.
+ */
+ #ifndef SQLITE_MAX_PAGE_COUNT
+ #   define SQLITE_MAX_PAGE_COUNT 1073741823
+ #endif

This #define doesn't do anything yet. But check back in a few days and it might. -- D. Richard Hipp [EMAIL PROTECTED]
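For what it's worth, the page-count limit did eventually ship as a runtime setting, the max_page_count PRAGMA. A minimal sketch of using it to cap a database, here via Python's built-in sqlite3 module (the file name, table, and sizes are arbitrary placeholders):

```python
import os
import sqlite3
import tempfile

# Throwaway database file to demonstrate the cap.
path = os.path.join(tempfile.mkdtemp(), "capped.db")
conn = sqlite3.connect(path)

# Cap the database at 16 pages of 512 bytes each (8 KiB total).
# page_size must be set before the first page is written.
conn.execute("PRAGMA page_size = 512")
conn.execute("PRAGMA max_page_count = 16")
conn.execute("CREATE TABLE t (payload)")

# Inserting past the cap fails with SQLITE_FULL.
error_message = None
try:
    for _ in range(1000):
        conn.execute("INSERT INTO t VALUES (zeroblob(400))")
except sqlite3.OperationalError as e:
    error_message = str(e)
conn.close()
print(error_message)  # database or disk is full
```

Note the limit is per connection, not stored in the file, so every process that opens the database has to set it.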
[sqlite] sqlite3_total_changes() and multiple connections
Hi, I have multiple database connections open against the same database and I'm having problems with sqlite3_total_changes(). The docs state: "This function returns the total number of database rows that have been modified, inserted, or deleted since the database connection was created using sqlite3_open()." But it seems like only changes made through the connection I call the function on are counted. Is there any way to get the total number of changes made through all open connections?
Re: [sqlite] Re: sqlite3_total_changes() and multiple connections
> but it seems like only changes made through the connection I call the function on are counted.

This is correct. The documentation is misleading and should mention this limitation.

> Is there any way to get the total number of changes made through all opened connections?

Get the numbers for each connection, and add them up.

The connections are opened from different processes, so collecting the numbers isn't trivial. It would be nice if SQLite stored a change count in the database itself that was easily accessible.
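The per-connection behavior is easy to see from Python's built-in sqlite3 module, which exposes sqlite3_total_changes() as Connection.total_changes (the file path and table here are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)

# Connection a inserts three rows; connection b inserts one.
a.execute("CREATE TABLE t (x)")
a.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
a.commit()
b.execute("INSERT INTO t VALUES (4)")
b.commit()

# Each connection counts only its own changes: b reports 1,
# not 4, even though the file now holds four rows.
a_changes, b_changes = a.total_changes, b.total_changes
print(a_changes, b_changes)
a.close()
b.close()
```

Summing across processes would indeed need out-of-band coordination, e.g. each process writing its own counter into a shared table.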
Re: [sqlite] Atomically creating a database and bootstrapping its tables
No, you have to do that in your application. In pseudo code your create function could do this:

open the database file
begin an exclusive transaction (since you may need to write to initialize the tables)
read the user_version
if the user_version is zero
    create the tables (read the SQL script into a string and pass it to sqlite3_exec)
    set the user_version to a non-zero value
endif
commit the transaction

Only one process will succeed in getting the write lock when opening the transaction. That process will create the tables and set the user_version, so no other process will try to reinitialize the tables later.

That solves the problem of two processes bootstrapping the database, but is it possible for the second process to open the database while the first is still creating it (writing the header pages, the system tables, etc.) and see the database as corrupt because it isn't fully created yet? I imagine there is at least a small window between when the file is first created and when the file lock is acquired on it, but I'm wondering how big that window is (or whether it exists at all).
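The steps above can be sketched concretely; here is one version using Python's built-in sqlite3 module, with a placeholder table and version value:

```python
import os
import sqlite3
import tempfile

def open_and_bootstrap(path):
    # isolation_level=None turns off the wrapper's implicit
    # transactions so we can issue BEGIN EXCLUSIVE ourselves.
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("BEGIN EXCLUSIVE")  # only one process gets this lock
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    if version == 0:
        # First one in: create the schema (placeholder table).
        conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("PRAGMA user_version = 1")  # mark as initialized
    conn.execute("COMMIT")
    return conn

path = os.path.join(tempfile.mkdtemp(), "boot.db")
first = open_and_bootstrap(path)
second = open_and_bootstrap(path)  # sees user_version = 1, skips the init
(version,) = second.execute("PRAGMA user_version").fetchone()
print(version)  # 1
```

user_version lives in the database header and is updated transactionally, so a process that loses the race either waits on the lock or sees the fully initialized schema.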
[sqlite] Atomically creating a database and bootstrapping its tables
I have two processes trying to access a database for the first time at roughly the same time. I'm wondering if it's possible to atomically create a database and bootstrap it with some tables from one process so that the other process doesn't open the database either before SQLite has finished writing the system tables or the first process has finished the application specific bootstrapping. In general, does SQLite protect against the database being opened while it's being created so other processes don't open it assuming everything is in a good state? - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] sqlite3_prepare_v2 and multi-connection schema changes
I just switched to 3.3.10 and I'm playing around with sqlite3_prepare_v2. I have a database with two open connections, and each connection creates its own table. After the second connection creates its table, all statements created with sqlite3_prepare_v2 on the first connection fail with SQLITE_SCHEMA. I searched the mailing list and found this earlier discussion about schema changes from different connections: http://www.mail-archive.com/sqlite-users@sqlite.org/msg17823.html, but I'm wondering if sqlite3_prepare_v2 should try to reload the schema before returning the SQLITE_SCHEMA error. The new documentation states: "If the schema has changed in a way that makes the statement no longer valid, sqlite3_step() will still return SQLITE_SCHEMA. But unlike the legacy behavior, SQLITE_SCHEMA is now a fatal error. Calling sqlite3_prepare_v2() again will not make the error go away." This implies that SQLITE_SCHEMA is only returned when the schema change is fatal, i.e. the table has been removed. If v2 statements are going to continue returning an error for schema changes from another connection, can there at least be separate error codes for fatal vs. non-fatal schema changes?
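For anyone hitting this later: once v2 statements re-prepare themselves after a compatible schema change, the failure described above no longer surfaces. A sketch of the end-user effect through Python's built-in sqlite3 module (which uses the v2 prepare interface in current CPython; file and table names are arbitrary):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "schema.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)

a.execute("CREATE TABLE t1 (x)")
a.execute("INSERT INTO t1 VALUES (42)")
a.commit()

# A compatible schema change made through the other connection...
b.execute("CREATE TABLE t2 (y)")
b.commit()

# ...is absorbed by re-preparing behind the scenes instead of
# bubbling up as a fatal SQLITE_SCHEMA error.
(value,) = a.execute("SELECT x FROM t1").fetchone()
print(value)  # 42
a.close()
b.close()
```

A truly fatal change (e.g. the other connection dropping t1) would of course still fail, since no amount of re-preparing can make the statement valid again.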
Re: [sqlite] sqlite3_prepare_v2 and multi-connection schema changes
This should be fixed by checkin [3592]. Thanks for the quick fix!
[sqlite] Tokenizing in a trigger
Hi all, I have two tables:

CREATE TABLE data (id INTEGER PRIMARY KEY, data TEXT);
CREATE TABLE tokens (token TEXT, data INTEGER);

where tokens contains each data row broken down into multiple tokens (split on semicolons). I want to create a trigger so that on updates to data it will remove all tokens for the item from the tokens table, then re-tokenize and insert the new tokens into the tokens table. Is there any way to get a trigger to insert a variable number of rows based on a single row being updated? Is there a way to write a custom function to do this? I've been thinking of writing a virtual table that would take the input in a WHERE clause and dump the tokens out, one per row, but I'm hoping that there is a simpler way to do this. Thanks.
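Short of a trigger or a virtual table, one simple route is to keep the tokenization in application code and wrap the delete-and-reinsert in a single transaction. A hypothetical helper in Python's built-in sqlite3 module, using the schema above (the retokenize/update_data names and the split-on-semicolons rule are just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("CREATE TABLE tokens (token TEXT, data INTEGER)")

def retokenize(conn, row_id, text):
    """Replace all tokens for one data row with freshly split tokens."""
    with conn:  # one transaction covers the delete + inserts
        conn.execute("DELETE FROM tokens WHERE data = ?", (row_id,))
        conn.executemany(
            "INSERT INTO tokens (token, data) VALUES (?, ?)",
            [(tok.strip(), row_id) for tok in text.split(";") if tok.strip()],
        )

def update_data(conn, row_id, text):
    """Update a data row and keep its tokens in sync."""
    conn.execute("UPDATE data SET data = ? WHERE id = ?", (text, row_id))
    retokenize(conn, row_id, text)

conn.execute("INSERT INTO data VALUES (1, 'a;b;c')")
retokenize(conn, 1, "a;b;c")
update_data(conn, 1, "x;y")
tokens = [t for (t,) in conn.execute("SELECT token FROM tokens ORDER BY token")]
print(tokens)  # ['x', 'y']
```

The trade-off versus a trigger is that every writer must go through update_data, but you avoid re-entrancy issues from doing multi-row inserts inside a custom function.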
Re: [sqlite] Re: Re: Custom functions in GROUP BY?
On 9/28/06, Igor Tandetnik [EMAIL PROTECTED] wrote: Ron Stevens sqlite-Y9FGH9USQxS1Z/[EMAIL PROTECTED] wrote: The problem is that I can't produce a canonical representation of the entries in my database. Oftentimes some entries are subsets of others but are considered equal. It's possible for an entry to be a subset of two larger entries that aren't equal themselves and still be equal to each of the larger entries. Since your relation is not transitive, it is not an equivalence relation, and GROUP BY does not make sense for it. If you have A ~ B and A ~ C but B !~ C, how exactly do you expect these three rows to be grouped? This case is very uncommon in my data set, but when it does come up I would want A to show up in two groups, one with A and B and one with A and C.
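If overlapping groups are acceptable, you can sidestep GROUP BY entirely and use the custom predicate in a self-join, which naturally lets a row appear under more than one group leader. A sketch with a made-up non-transitive relation (prefix matching, registered as a custom function via Python's built-in sqlite3 module; all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (e TEXT)")
conn.executemany("INSERT INTO entries VALUES (?)", [("ab",), ("abc",), ("abd",)])

# Hypothetical non-transitive match: one string is a prefix of the other.
# 'ab' ~ 'abc' and 'ab' ~ 'abd', but 'abc' !~ 'abd'.
def related(x, y):
    return int(x.startswith(y) or y.startswith(x))

conn.create_function("related", 2, related)

# Self-join on the predicate: each longer entry "leads" a group, and a
# shorter entry like 'ab' appears as a member of both groups.
rows = conn.execute("""
    SELECT a.e AS leader, b.e AS member
    FROM entries a JOIN entries b ON related(a.e, b.e)
    WHERE length(a.e) >= length(b.e)
    ORDER BY leader, member
""").fetchall()
print(rows)
```

Be warned that a join with an opaque custom function can't use indexes, so this is O(n^2) comparisons over the table.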
[sqlite] Custom functions in GROUP BY?
I have a custom function that compares two text values and returns 1 if they're equal based on an algorithm that's not strictly text comparison and 0 if they don't match. I want to do a query that groups all rows that match using my custom function into a single group. Is that possible?
Re: [sqlite] Re: Custom functions in GROUP BY?
The problem is that I can't produce a canonical representation of the entries in my database. Oftentimes some entries are subsets of others but are considered equal. It's possible for an entry to be a subset of two larger entries that aren't equal themselves and still be equal to each of the larger entries. On 9/28/06, Igor Tandetnik [EMAIL PROTECTED] wrote: Ron Stevens sqlite-Y9FGH9USQxS1Z/[EMAIL PROTECTED] wrote: I have a custom function that compares two text values and returns 1 if they're equal based on an algorithm that's not strictly text comparison and 0 if they don't match. I want to do a query that groups all rows that match using my custom function into a single group. Is that possible? What you really need is a custom function that produces a canonical representation for all the strings in the same equivalence class under your equivalence relation. That is, a function CR(s) such that for every s1 and s2, s1 ~ s2 if and only if CR(s1) = CR(s2). Here '~' is your equivalence relation, '=' is the usual byte-wise comparison. E.g. if '~' is a case-insensitive comparison, then CR(s) could return s converted to all lowercase (or all uppercase). Once you have such a function, you can simply GROUP BY CR(fieldName). Igor Tandetnik
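Igor's CR() approach can be sketched end to end with Python's built-in sqlite3 module, taking his case-insensitive example as the equivalence relation (the table and column names are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (name TEXT)")
conn.executemany("INSERT INTO names VALUES (?)",
                 [("Alice",), ("ALICE",), ("bob",)])

# For case-insensitive equality, lowercasing is a valid canonical
# representation: s1 ~ s2 if and only if CR(s1) = CR(s2).
conn.create_function("CR", 1, lambda s: s.lower())

groups = conn.execute(
    "SELECT CR(name), count(*) FROM names GROUP BY CR(name) ORDER BY 1"
).fetchall()
print(groups)  # [('alice', 2), ('bob', 1)]
```

The catch, as the thread goes on to discuss, is that such a CR() only exists when '~' really is an equivalence relation (reflexive, symmetric, and transitive).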