Re: [sqlite] Library routine called out of sequence
On 11 Feb 2015, at 11:19pm, pablo Van pdvm2...@gmail.com wrote:

> I have a problem with my delphi's application when I want run it on other PC. I use ZeosLib in my application The message is : Library routine called out of sequence

What's happening is that SQLite relies on some calls being executed in a sequence, for instance

    _prepare() _step() _step() _step() ... _finalize()

and the application is trying to do something like

    _prepare() _step() _finalize() _step() _step() ...

If you wrote the Delphi application you might be able to fix it. If you didn't write it, and it works fine on one PC and not on the other /when they have the same data/, then you're going to have to contact the programmer to get it fixed.

Simon.

___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Fwd: SQLITE3 JOURNAL MODE
On 11 Feb 2015, at 2:59am, Keith Medcalf kmedc...@dessus.com wrote:

>> 1) Keep it on the boot volume, not a network or external drive.
>> The following things will make your journal less safe:
>> 1) PRAGMA data_store_directory to anywhere other than the boot volume
>
> I presume you mean local filesystem not boot volume.

Both are true. My logic is that if the boot volume stops working (or somehow gets unmounted) then the computer is going to stop working anyway, whereas another volume on the same disk might somehow get dismounted and that wouldn't stop the computer from working. But I suspect that the OP doesn't care about this.

Simon.
Re: [sqlite] Porting SQLite to another operating system (not supported out of the box)
On 10 Feb 2015, at 5:01pm, Clemens Ladisch clem...@ladisch.de wrote:

>> Janke, Julian wrote: In my opinion, this means, we must only write a VFS implementation for our target platform.
>
> What file API is there? It looks like a standard POSIX implementation.

My guess is that the best place to start would be https://www.sqlite.org/vfs.html including the examples linked in section 2.3. The OP should come back if he has more specific questions after that.

Simon.
Re: [sqlite] Fwd: SQLITE3 JOURNAL MODE
On 10 Feb 2015, at 9:32am, 博协软件-张涛 2718894...@qq.com wrote:

> WHERE A POWER LOSS OR SIGNAL 'KILL -9' HAPPEND IN LINUX SYSTEM, HOW CAN I MAKE SURE THE DB FILE HAVE THE fewest LOSS! At the same time, THE kernel CPU period IN 30MS~50MS OR LESS. I HAVE TRIED TO USE WAL JOURNAL MODE, IN THIS MODE, when POWER DOWN OR RESET, THE DB DATA IS OK?

WAL mode is neither more nor less safe than the original journal mode. To make your database more safe:

1) Keep it on the boot volume, not a network or external drive.
2) Set a long busy_timeout, perhaps several minutes (see PRAGMA busy_timeout).
3) PRAGMA synchronous = FULL

The following things will make your journal less safe:

1) PRAGMA data_store_directory to anywhere other than the boot volume
2) PRAGMA ignore_check_constraints = YES
3) PRAGMA journal_mode = MEMORY
4) PRAGMA journal_mode = OFF
5) PRAGMA synchronous = OFF
6) PRAGMA temp_store_directory to anywhere other than the boot volume

Do not do any of those things if you want your database to be as safe as possible.

Simon.
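[Editor's note: the durability settings recommended above take only a couple of statements to apply. A minimal sketch using Python's built-in sqlite3 module; the file path is an example, and 120000 ms is just one reasonable choice of "several minutes or so":]

```python
import os
import sqlite3
import tempfile

# In a real deployment this path would live on the boot volume.
path = os.path.join(tempfile.mkdtemp(), "safe.db")
conn = sqlite3.connect(path)

# Settings recommended in the post above:
conn.execute("PRAGMA busy_timeout = 120000")   # wait up to 2 minutes on locks
conn.execute("PRAGMA synchronous = FULL")      # fsync at every commit

# Read the settings back to confirm they took effect.
busy = conn.execute("PRAGMA busy_timeout").fetchone()[0]   # 120000
sync = conn.execute("PRAGMA synchronous").fetchone()[0]    # 2 == FULL
```

Note the unsafe pragmas from the list above (journal_mode = OFF, synchronous = OFF, and so on) are simply the things this sketch deliberately does not do.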
[sqlite] SQLCipher - Full Database Encryption for SQLite
https://www.zetetic.net/sqlcipher/

SQLCipher is an open source extension to SQLite that provides transparent 256-bit AES encryption of database files. It comes as free source for you to compile yourself, or you can buy pre-built binary libraries for numerous platforms, with support. The API extends SQLite by adding PRAGMAs and C functions to configure the encryption.

As a full database encryptor it encrypts the database file as a whole rather than individual rows or fields. It therefore isn't vulnerable to the attacks which easily penetrate most quick-and-easy encryption systems.

Simon.
Re: [sqlite] Bug in SQLite FLOAT values
On 7 Feb 2015, at 2:22pm, Abdul Aziz abduldblog...@gmail.com wrote:

> but in fields which were FLOAT were filling with junk values (after 6 decimal places, see in SENSOR_1) why?

Take a look at what happens when you try to write 1/13th in decimal: 0.076923076923076923076923... What about 7/9ths: 0.777... And pi: 3.14159265358979323846264338327950288 (I forget the rest). There's no way to write those numbers precisely in decimal. The first repeats after the sixth place. The second repeats after the first place. The third never repeats, but also never stops.

Similarly, there's no way to write some numbers precisely in binary, using just 0, 1 and a point. So if you convert some numbers into binary format and back, you don't get back precisely the number you put in.

Fields defined as FLOAT (in your Android API) have their values stored in binary format, so storing a number in such a field involves converting it to binary. If, instead, you define your fields as text, no conversion takes place because there's no need to turn the number into binary format.

(Explanation simplified with respect to affinities and deep maths.)

Simon.
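[Editor's note: the effect is easy to reproduce with any SQLite binding. A sketch in Python; the value 26.123456 is made up, standing in for the poster's sensor readings:]

```python
import sqlite3
from decimal import Decimal

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor (as_real REAL, as_text TEXT)")
conn.execute("INSERT INTO sensor VALUES (?, ?)", (26.123456, "26.123456"))

real_back, text_back = conn.execute(
    "SELECT as_real, as_text FROM sensor").fetchone()

# The TEXT column returns exactly the characters that went in.
# The REAL column returns the nearest binary64 to 26.123456 -- equal
# under float comparison, but its exact decimal expansion is NOT
# 26.123456, which is where the "junk" digits come from.
exact_expansion = Decimal(real_back)                 # long tail of digits
is_exact = (exact_expansion == Decimal("26.123456"))  # False
```

26.123456 is 26123456/10^6; the denominator is not a power of two, so no binary64 can represent it exactly, whichever binding you use.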
Re: [sqlite] equivalent for JOIN LATERAL
On 8 Feb 2015, at 10:52pm, Big Stone stonebi...@gmail.com wrote:

> I fall over this presentation of LATERAL, from postgresql guys. (look at pages 1 to 16)

For those like me who hadn't heard of it, here's a reference:

https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.3#LATERAL_JOIN

and from the documentation, section 7.2.1.5 of http://www.postgresql.org/docs/9.3/static/queries-table-expressions.html

> Does it exist in SQLITE ? If not, would it be possible [without] too much effort ?

I'm going to let other people answer the question.

Simon.
Re: [sqlite] Bug in SQLite FLOAT values
On 7 Feb 2015, at 5:57am, Abdul Aziz abduldblog...@gmail.com wrote:

> again I converted back to float this string formatted value, and printed into Log, I was clearly seeing values upto 6 decimal places, but after insertion into sqlite db, when after generation of sqlite db file, was getting values upto 11-12 decimal places!

SQLite itself would have converted your text (to six digits) and stored the resulting number. When you asked to retrieve your value, SQLite would have retrieved that number -- still at six digits. However, if you ask for the retrieved figure as a number (as opposed to a string) the programming language you use has to put the resulting number into a 'float' variable. And in doing this it would have to turn the number back into float format, which would introduce the extra 'garbage' digits. So yes, you can argue that there is a bug somewhere, but if there is one it's in the Android interface to SQLite, not in SQLite itself.

You can avoid this by asking for the retrieved figure as a string, not a number. Or by storing the value in a TEXT column instead of a REAL column, which is the solution you came up with. So I'm glad you found a solution.

It might be worth asking yourself why you are trimming your values to six digits and then saving the result as a number. It would make more sense to trim your values and then handle the number as a string from then onwards. Or to handle all the digits you have and convert to text at six digits just before you put the number on the display. Both of these would be more mathematically 'correct' than what you are doing.

Simon.
Re: [sqlite] Possible to get table size (in bytes)?
On 4 Feb 2015, at 6:49pm, Stephen Chrzanowski pontia...@gmail.com wrote:

> I can't be certain if a single page contains multiple types of data such as table AND index information.

In a SQLite file, each page is assigned to header information or to a specific table or index. No mixed use. However I agree with your conclusion: there's no way to get filesize information using just SQLite calls.

Simon.
Re: [sqlite] Truncation of floating point numbers in SQLite?
On 31 Jan 2015, at 11:14pm, James K. Lowden jklow...@schemamania.org wrote:

> That said, https://www.sqlite.org/datatype3.html says REAL is stored as an 8-byte IEEE floating point number. I can't see why a non-signalling NaN couldn't be stored and interpreted. You'd need some convention for collation, and a function like is_nan() would be helpful for WHERE clauses.

So, having established that NaN and -0 do not make the round trip from a C variable through a database and back into a C variable ... at least I think we have ...

There's a bit representation for storing NaN in a binary64. Presumably, if you have NaN in a C variable, it's using that representation in memory. Then you use sqlite3_bind_double() to get this into an SQLite variable, and use INSERT to get SQLite to put that into a database. Do we expect to be able to see that pattern by looking through the database file using hexdump? Because there's nothing in the SQLite documentation that says it can store values like NaN or -0.0. The documentation just says it can store numbers.

It should be possible to find out when the distinction between 0.0 and -0.0 gets lost.

Simon.
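[Editor's note: part of this is easy to check from a high-level binding. The sketch below uses Python's sqlite3 module, which calls sqlite3_bind_double() underneath; it suggests the NaN bit pattern never reaches the file at all, because SQLite turns a bound NaN into NULL. The fate of -0.0's sign bit is deliberately left unasserted, since the documentation makes no promise about it:]

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x REAL)")

# Bind NaN: SQLite converts a NaN double to NULL at bind time,
# so no NaN bit pattern is ever written to the database file.
conn.execute("INSERT INTO t VALUES (?)", (float("nan"),))
nan_back = conn.execute("SELECT x FROM t WHERE rowid = 1").fetchone()[0]
# nan_back is None, not NaN

# Bind -0.0: it compares equal to 0.0 either way; whether the sign
# bit survives storage is exactly the open question in this thread.
conn.execute("INSERT INTO t VALUES (?)", (-0.0,))
neg_back = conn.execute("SELECT x FROM t WHERE rowid = 2").fetchone()[0]
sign = math.copysign(1.0, neg_back)   # 1.0 or -1.0, version-dependent
```

So for NaN at least, the distinction is lost before the value ever hits disk; there is no pattern for hexdump to find.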
Re: [sqlite] database disk image is malformed error occurs more
On 30 Jan 2015, at 1:07pm, Mario M. Westphal m...@mwlabs.de wrote:

> What worries me more are the incidents where users see this problem happen several times, with a database kept on a local hard disk or SSD.

Just to make it clear: when corruption is reported, the corruption is not automatically fixed. The database will still be corrupt, and as the app continues it may notice the same corruption again and report it again. This is why I asked you whether you are noticing more corruption or are just continuing to use a corrupt database.

So yes, if the user continues to use the same database, they'll get more error messages. And if they restore a backup it might be a good idea to check whether that backup is corrupt. At least until you have tracked down the cause of your corruption and stopped it.

> that's really hard to tell, because unless SQLite has to access a corrupted section of the file during normal operation, or integrity_check() is run, a damaged database may behave perfectly normally for a long time...

One column of one row of one table may get corrupted. If that's the case then the database can be used for years without any problem being noticed. Theoretically PRAGMA integrity_check will notice it, however.

Simon.
Re: [sqlite] database disk image is malformed error occurs more
On 29 Jan 2015, at 7:04pm, Mario M. Westphal m...@mwlabs.de wrote:

> The diagnosis log of my application reports the output of integrity_check() already. I retrieved the log from the most recent error report. This is what my application has logged:
>
> *** IN DATABASE MAIN ***
> ON TREE PAGE 385120 CELL 24: INVALID PAGE NUMBER 151192068
> CORRUPTION DETECTED IN CELL 24 ON PAGE 385120
> CORRUPTION DETECTED IN CELL 25 ON PAGE 385120
> MULTIPLE USES FOR BYTE 1612 OF PAGE 385120
> FRAGMENTATION OF 30 BYTES REPORTED AS 0 ON PAGE 385120

Okay. First, stop doing VACUUM after this. You're not improving things and you may be making things worse.

Second, a corrupt database may remain corrupt. So we try to distinguish between (A) and (B):

A) Something corrupted my database, but that was just once and it has never happened again.
B) Something is continually corrupting my database.

So have you tried replacing that database with one which isn't corrupt and seeing whether the new 'clean' one somehow becomes corrupt?

Simon.
Re: [sqlite] Truncation of floating point numbers in SQLite?
On 28 Jan 2015, at 10:47pm, Donald Shepherd donald.sheph...@gmail.com wrote:

> This is a bit of a speculative question related to a problem I'm having - are there legal values of a C++ double that would get truncated when written into and read from an SQLite database?

In theory there should be no problem here. C doubles have 15 to 16 digits of precision. In SQLite databases, numbers which can't be stored as integers are stored as IEEE 754-2008 64-bit floating point numbers, sometimes known as 'binary64'. These give 15 to 17 digits of precision. My understanding is that it is possible to store every distinct C double value as a distinct binary64 value.

If it's the conversion that worries you, you can read the SQLite source code to find the programming used to encode and decode numbers into this format. I am not competent to read that source code and tell you definitely that it works for all C double values.

Simon.
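[Editor's note: a quick empirical check, not a proof, is to push a double through a binding and compare bit patterns on the way back. A sketch in Python, whose floats are C doubles:]

```python
import sqlite3
import struct

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x REAL)")

# A value whose mantissa has no short binary expansion.
original = 1.0 / 3.0
conn.execute("INSERT INTO t VALUES (?)", (original,))
restored = conn.execute("SELECT x FROM t").fetchone()[0]

# Compare the raw 8-byte IEEE encodings, not just ==, to show the
# round trip through the database is bit-for-bit exact.
same_bits = struct.pack(">d", original) == struct.pack(">d", restored)
```

One non-integral value proving exact is only suggestive, of course; the thread's later posts explain why the general claim holds (bind and column both traffic in the same 8-byte double).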
Re: [sqlite] Truncation of floating point numbers in SQLite?
On 29 Jan 2015, at 3:05am, James K. Lowden jklow...@schemamania.org wrote:

> We can make stronger statements than that, can't we? It's not like there's some mystery about it: database and memory share a single floating-point format. If the caller uses _bind_double and _column_double, he's storing and fetching a 64-bit double. SQLite's only job is to collect those 8 bytes, store them, and return them unmolested.

Ah well, the reason I didn't state this was that I didn't know it. Thanks for the information. I wonder what happens if you put SQLite on a computer with no native IEEE maths library.

Simon.
Re: [sqlite] database disk image is malformed error occurs more frequently...?
On 28 Jan 2015, at 3:15pm, Mario M. Westphal m...@mwlabs.de wrote:

> The damage is usually detected during "diagnosis" runs. This feature runs an "analyze" and a "vacuum" command in order to physically validate the database (and to optimize and compact it).

Please don't do that. Neither of those things does any diagnosis on the database. And they both make changes which can complicate damage or cover it up and let it happen again.

The command PRAGMA integrity_check checks your database for faults. That's the way to detect faults and tell whether there really is a problem with your database.

http://www.sqlite.org/pragma.html#pragma_integrity_check

Simon.
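[Editor's note: for anyone scripting this check, a minimal sketch in Python. On a healthy database the pragma returns the single row 'ok'; on a damaged one it returns a list of problems instead (capped at 100 messages by default):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a)")
conn.execute("INSERT INTO t VALUES (1)")

# The real diagnosis step: walks the whole file checking b-tree
# structure, cell validity, index consistency, etc.
result = conn.execute("PRAGMA integrity_check").fetchall()
healthy = (result == [("ok",)])
```

Unlike ANALYZE or VACUUM, this makes no changes to the file, so it is safe to run on a database you suspect is damaged.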
Re: [sqlite] Converting *.sqlite3 files
On 26 Jan 2015, at 7:26pm, Luke Niewiadomski lniewiadom...@thezerobase.com wrote:

> I am looking to translate *.sqlite3 files into *.csv, or similar format. Would you be willing to point me in the right direction? I appreciate any help on this. I am not technically apt enough to figure this out on my own.

You can download the SQLite shell tool from the download page. Here's the documentation:

http://www.sqlite.org/cli.html

You'll see a section called 'CSV Export' which shows how to do what you want (you can omit the last command, the '.system' one). Don't hesitate to post again if this isn't what you want.

Simon.
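[Editor's note: if running a script is easier than driving the shell tool interactively, the same export can be done with Python's standard library. This sketch invents a small table and writes it out; the table, filenames and columns are examples, not anything from the poster's database:]

```python
import csv
import os
import sqlite3
import tempfile

conn = sqlite3.connect(":memory:")          # stand-in for the .sqlite3 file
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ann", 34), ("Bob", 27)])

out_path = os.path.join(tempfile.mkdtemp(), "people.csv")
cur = conn.execute("SELECT name, age FROM people ORDER BY name")
with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])   # header row
    writer.writerows(cur)                              # data rows

exported = open(out_path).read().splitlines()
# exported == ['name,age', 'Ann,34', 'Bob,27']
```

The csv module handles quoting of commas and newlines inside values, which is the part that usually goes wrong in hand-rolled exports.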
Re: [sqlite] Invalid column prefix returned in SELECT with joined subquery
On 26 Jan 2015, at 2:04am, James K. Lowden jklow...@schemamania.org wrote:

> What should the name be that is returned to the user? Going back to my example,
>
> select columns from (select statement) as T
>
> each name in columns should be the shortest possible syntactically correct column reference.

While you're discussing possibilities and alternatives, what should be returned as the name for the following column?

SELECT 1*2 FROM myTable

Simon.
Re: [sqlite] database locked in PHP
On 24 Jan 2015, at 8:13pm, Lev leventel...@gmail.com wrote:

> I sometimes get a database locked error when I access the database by calling the execute() call. This is on PHP.

Have you set a timeout? Immediately after opening the connection use

http://php.net/manual/en/sqlite3.busytimeout.php

and set it to 60000 (60 seconds) or so.

Simon.
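[Editor's note: for readers not on PHP, the same idea in Python's sqlite3 module. PHP's busyTimeout() and the busy_timeout pragma both take milliseconds, while Python's connect() timeout argument takes seconds; 60 seconds here is just an example value:]

```python
import sqlite3

# Either mechanism makes SQLite retry instead of immediately raising
# "database is locked" when another connection holds a write lock.
conn = sqlite3.connect(":memory:", timeout=60)     # seconds
conn.execute("PRAGMA busy_timeout = 60000")        # milliseconds

timeout_ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]   # 60000
```

With the timeout set, a locked database only produces an error if the lock is still held after the full wait, which usually means a genuinely stuck writer rather than ordinary contention.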
Re: [sqlite] Streaming group by in union of already sorted tables
On 23 Jan 2015, at 4:59pm, Emmanouil Karvounis man...@di.uoa.gr wrote:

> tableA and tableB have both primary key on (c1, c2)
>
> explain query plan
> select c1, c2, count(*) from
> ( select c1, c2 from tableA union all select c1, c2 from tableB )
> group by c1, c2
>
> 2|0|0|SCAN TABLE tableA USING COVERING INDEX sqlite_autoindex_tableA_1
> 3|0|0|SCAN TABLE tableB USING COVERING INDEX sqlite_autoindex_tableB_1
> 1|0|0|COMPOUND SUBQUERIES 2 AND 3 (UNION ALL)
> 0|0|0|SCAN SUBQUERY 1
> 0|0|0|USE TEMP B-TREE FOR GROUP BY
>
> There is no reason to create a new temp B-tree when you can sequentially and in-synch scan the B-tree of tableA and of tableB and get the groups in one pass.

Sorry, but SQLite does not understand how the subquery (inside the brackets) is going to be used by the main query. It has to complete the subquery first and only then can it inspect the main query to find out how to optimize it. This is not a bug; there just isn't enough flexibility to do this the way you want.

One of the following may or may not be useful. You may be able to use

select c1, c2, myagg(*) from
( select c1, c2 from tableA group by c1, c2
  union all
  select c1, c2 from tableB group by c1, c2 )
group by c1, c2

Alternatively, is there a good reason for tableA and tableB not to be merged with, perhaps, an extra column indicating 'A' or 'B'? This would allow you to create an index and get your answer almost instantly. When you see two tables with the same columns it's often an indication that there should really be one table.

Simon.
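[Editor's note: the merged-table suggestion looks like this in practice. A sketch in Python with made-up data; the composite primary key gives the GROUP BY an index to walk instead of a materialised UNION ALL subquery:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One merged table instead of tableA/tableB, with a column naming the
# source; (c1, c2, src) as primary key keeps rows unique per source.
conn.execute("""CREATE TABLE merged (
    src  TEXT,
    c1   INTEGER,
    c2   INTEGER,
    prop INTEGER,
    PRIMARY KEY (c1, c2, src))""")
conn.executemany("INSERT INTO merged VALUES (?,?,?,?)",
                 [("A", 1, 1, 10), ("A", 1, 2, 11),
                  ("B", 1, 1, 20), ("B", 2, 1, 21)])

# Grouping on the leading columns of the index: rows arrive already
# clustered by (c1, c2), so groups close in one pass.
rows = conn.execute("""SELECT c1, c2, COUNT(*) FROM merged
                       GROUP BY c1, c2 ORDER BY c1, c2""").fetchall()
# rows == [(1, 1, 2), (1, 2, 1), (2, 1, 1)]
```

A `WHERE src = 'A'` clause recovers either original table when needed.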
Re: [sqlite] Streaming group by in union of already sorted tables
On 23 Jan 2015, at 4:15pm, Emmanouil Karvounis man...@di.uoa.gr wrote:

> In short, we have two tables that are already sorted on a combination of two fields

There is no such thing as a 'sorted table' in SQL. Each table is a set of rows and the rows have no order. If you want to make it easy for SQL to access a table's rows in a particular order, create an index or make that order the table's primary key (which is another way of making an index).

> select c1, c2, myagg(*) from
> ( select * from tableA union all select * from tableB )
> group by c1, c2;

This command tells SQL that you want to construct a list of every row of tableA and every row of tableB. In other words, if you have 300 rows in tableA and 500 rows in tableB, you are telling SQL to construct a new table of 800 rows. And because this table doesn't yet exist, it doesn't have any indexes, so it can't be searched quickly. Is that what you wanted?

Is there a good reason for needing this data in two separate tables rather than one for which you can create an index on (c1, c2)? Do the groups occur entirely within one table, or do you have to add the tables together before SQL can figure out the groups?

> where tableA, tableB have primary key (c1, c2) and their schema comprises 3 integers: c1, c2, and prop.

It might be worth testing with something like 'total(*)' just to make sure it isn't your own function which is causing the problems.

Simon.
Re: [sqlite] sqlite3 very slow even after creating without rowid
On 23 Jan 2015, at 3:16pm, Parakkal, Navin S (Software Engineer) navin.parak...@hp.com wrote:

> I also did another experiment. I created this table and did a vacuum and then the select count(*) in sqlite3 was around 2 mins. When I create an index manually after the table is loaded (imported from csv), select count(*) in sqlite3 was within 30 to 40 secs.

In the second case, to calculate count(*) SQLite was able to use the index you had created. Since this index was smaller than the table, SQLite was able to count the entries in it faster. The result would have been the same whenever the index had been created:

CREATE TABLE
.import
CREATE INDEX
time the 'select count(*) from hp_table1' command here

should yield pretty much the same result as

CREATE TABLE
CREATE INDEX
.import
time the 'select count(*) from hp_table1' command here

If you are using a table for which rows are INSERTed but never DELETEd, then you will get the same result almost instantly using

select max(rowid) from hp_table1

instead of counting the rows.

Simon.
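[Editor's note: the max(rowid) trick is easy to demonstrate at small scale. A sketch in Python reusing the thread's table name; for a freshly loaded, insert-only table, rowid counts up from 1, so max(rowid) equals the row count while being answered from one edge of the table's b-tree rather than a full scan:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hp_table1 (v TEXT)")   # name from the thread
conn.executemany("INSERT INTO hp_table1 (v) VALUES (?)",
                 [("row",)] * 1000)

# COUNT(*) must visit every entry; MAX(rowid) just descends to the
# rightmost leaf of the rowid b-tree.
count = conn.execute("SELECT COUNT(*) FROM hp_table1").fetchone()[0]
fast  = conn.execute("SELECT MAX(rowid) FROM hp_table1").fetchone()[0]
# count == fast == 1000
```

The equivalence breaks as soon as rows are deleted or rowids are assigned explicitly, which is exactly the caveat given in the post.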
Re: [sqlite] Shell tool question
On 21 Jan 2015, at 5:15pm, Richard Hipp d...@sqlite.org wrote:

> It appears to be, after a quick glance at the code. It uses a new stack frame for each level of recursion. So if you create a recursion loop, you'll overflow the stack and segfault.

That's great. Thanks for the fast answer. I will be careful not to create a loop.

As to why I need this ... I am using a language which has no SQLite library. It writes a script and then makes the shell tool run it. But the script it makes has a long standard part to it I wanted to leave in another text file. This lets me do it.

Simon.
[sqlite] Shell tool question
Quickie: In the Shell Tool, is '.read' recursive? In other words, if I use .read to read a script and I have .read in a script, will the shell tool finish both scripts correctly?

I ran a quick test and I know what I got, but I wanted to be reassured I wasn't just lucky.

Simon.
Re: [sqlite] excel vba use sqlite
On 22 Jan 2015, at 1:04am, YAN HONG YE yanhong...@mpsa.com wrote:

> I don't know how to use sqlite in EXCEL vba? Need I install sqlite connect driver?

http://lmgtfy.com/?q=sqlite+excel+vba

Simon.
Re: [sqlite] sqlite3 performance on select count very slow for 16 GB file
On 20 Jan 2015, at 12:12pm, Parakkal, Navin S (Software Engineer) navin.parak...@hp.com wrote:

> When I do a select count(*) on hp_table1 it takes more than 5 mins which is quite a huge time.

If this is a table for which rows are inserted but never deleted, then you will find that

SELECT max(rowid) FROM hp_table1

returns the same value almost immediately. Perhaps value-1, but whatever it is it'll be consistent.

> Now I create a index [snip] Now I do select count(*) on hp_table1, it takes around 15 secs. [...] The size of database is around 18GB (empty_database) with rows. Now I do a select count(*) on empty_table1 (actually it contains lots of rows like 100M+ records) and it takes more than 5 mins. 5 mins is too much of a time for us to bear. The customer wants the information within a minute.

What information? The number of rows in a table? That's rarely important information for a user. It looks more like the kind of information a database admin would want.

The problem is that, in an unusual aspect of how SQLite works, the total number of rows in a table is not stored anywhere. To calculate count(*) SQLite has to count every row in the table, which is what it did originally, taking 5 minutes. But a full index on the table has the same number of entries while involving less data, and SQLite knows that counting the entries in the index you created would be faster. So once you had created the index, it went through all the entries in that index instead, which took it just 15 seconds.

> How do we go about resolving this ie what should we do to create a table with sub minute access for 100-500 million.

Access is not the problem here. SQLite can access any row in a table that big in a few milliseconds. The problem here is the specific function of counting every row, which is something SQLite does not do quickly. Take a look at the 'max(rowid)' trick I showed above.
If that's no good for you, you can use TRIGGERs which add 1 for each INSERT and subtract 1 for each DELETE to keep track of the number of rows. You would create another table to keep the total counts in. This would decrease the time taken to return the counts at the cost of increasing the time taken to insert and delete rows.

Simon.
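[Editor's note: a sketch of that trigger arrangement; the table and column names are made up for illustration:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE big (v TEXT);

-- Side table holding one maintained count per tracked table.
CREATE TABLE row_counts (tbl TEXT PRIMARY KEY, n INTEGER);
INSERT INTO row_counts VALUES ('big', 0);

-- Keep the count in step: +1 per INSERT, -1 per DELETE.
CREATE TRIGGER big_ins AFTER INSERT ON big BEGIN
    UPDATE row_counts SET n = n + 1 WHERE tbl = 'big';
END;
CREATE TRIGGER big_del AFTER DELETE ON big BEGIN
    UPDATE row_counts SET n = n - 1 WHERE tbl = 'big';
END;
""")

conn.executemany("INSERT INTO big VALUES (?)", [("x",)] * 5)
conn.execute("DELETE FROM big WHERE rowid = 1")

# O(1) lookup instead of a full-table (or full-index) scan:
n = conn.execute("SELECT n FROM row_counts WHERE tbl = 'big'").fetchone()[0]
# n == 4
```

As the post says, the cost moves from the count query to every insert and delete, which now each carry an extra UPDATE.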
Re: [sqlite] Whish List for 2015 (David Barrett)
On 20 Jan 2015, at 12:23am, Klaas V klaasva...@yahoo.com wrote:

> On my whish-list for this list is the possibility to include attachments like screenshots and databases.

I've been active on several technical helplists over the years and from my experience I'll tell you what would happen. We'd get:

A) People posting a description of a problem and attaching big databases, expecting everyone reading to download their attachment, reproduce the problem and solve it for them. They won't try very hard to solve it themselves before posting.

B) People posting screenshots of various SQLite management applications, thinking that because they have SQLite in their name, bugs in the app should be fixed here.

C) People posting screenshots of error messages which mention SQLite, but which are actually caused by bugs in a program which uses SQLite, not SQLite itself.

The list would soon get so drowned in these things it would be useless for solving the slightly more technical problems we currently see. By presenting a small challenge to anyone who wants to get a problem solved here, we filter out the try once, give up, don't read the documentation, just ask for help brigade.

Simon.
Re: [sqlite] ORDER BY and LIMIT regression
On 19 Jan 2015, at 3:10pm, Richard Hipp d...@sqlite.org wrote:

> It is a very complex problem. In particular, the sample query works fine as long as the number of columns in the result set is not exactly 60. Adding or removing a single column of result gives the correct answer.

I would love to know the cause of that.

Simon.
Re: [sqlite] Fsync or fdatasync performs slowly
On 19 Jan 2015, at 5:38am, Wei, Catherine catherine@arris.com wrote:

> The time I'm looking at is the duration that function fsync or fdatasync executes, it's about 30ms. I don't know whether it's related to linux kernel or something related. I've tested it in another kind of set-top box and the duration of fsync is only about 5ms. The data being written is just an insert sql, very little. Every time after executing the insert sql, database transaction will be committed and fsync will be executed during the committing process.

The answer is that the computer you have that says 5ms is lying to you.

A normal hard disk in a normal computer rotates 5400 times a minute. This means it completes one rotation in about 10ms, and that means that waiting for the right part of the disk to pass under the read/write head takes an average of 5ms. Committing a transaction involves SQLite writing to the disk in a number of different places: it has to move the transaction from the journal file to the database file, which involves at least two read and two write commands, usually more than that. (I do not know enough about SQLite to know the proper numbers, but I'm sure they are more than that.) With an average latency of 5ms per access, this means that in the best possible case committing a transaction will take 20ms. This suggests that the timing of 30ms you're getting from your second computer is about right.

So why is the first computer lying to you? Well, it's to make itself seem faster. In this case when a 'write' command is issued to the disk drive it reports finished immediately, before the change has actually been made to disk. This doesn't matter when a computer is used for normal desktop purposes (web browser, word processor) because you might just lose the last few characters of a word processing file, but it matters a lot in a database like SQLite because writing one row to a database involves many changes, and if they're not consistent then the database may be corrupted.
So for good safety of data in your database your 30ms computer is the good computer and your 5ms computer is a bad computer.

It's worth noting that this long delay (30ms) is only for writing a transaction, not for each change to the database. So

BEGIN
INSERT ...
INSERT ...
INSERT ...
97 more of these ...
END

only involves one long update, not 100 long updates.

Simon.
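[Editor's note: the batching pattern above, sketched in Python with autocommit disabled so the BEGIN/END pair is explicit; 100 inserts share one commit, hence one fsync-class delay on a real disk:]

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so
# transaction boundaries are exactly the statements we issue.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (v INTEGER)")

conn.execute("BEGIN")
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("END")           # one commit for all 100 rows

inserted = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 100
```

Issuing the same 100 INSERTs without the BEGIN/END would commit, and therefore sync, once per row.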
Re: [sqlite] Changing Page_size fails sometimes, retry succeeds but is messy. Ideas?
On 19 Jan 2015, at 6:04pm, Andy (KU7T) k...@ku7t.org wrote:

> Any ideas what I am doing wrong? Do you have to exclusively lock or wait somewhere?

You have to run VACUUM immediately after setting the new page_size. See

http://www.sqlite.org/pragma.html#pragma_page_size

One normally only sets the page_size when creating a new database. VACUUM simulates writing a new database.

Simon.
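[Editor's note: the sequence in miniature; a sketch in Python against a throwaway file (512 is just an example size):]

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "resize.db")
conn = sqlite3.connect(path, isolation_level=None)  # VACUUM needs autocommit
conn.execute("CREATE TABLE t (v)")
conn.execute("INSERT INTO t VALUES (1)")

before = conn.execute("PRAGMA page_size").fetchone()[0]

# The pragma alone does nothing to an existing file; the new size only
# takes effect when VACUUM rebuilds the database.
conn.execute("PRAGMA page_size = 512")
conn.execute("VACUUM")
after = conn.execute("PRAGMA page_size").fetchone()[0]   # 512
```

Note VACUUM cannot run inside a transaction, which is why the connection is opened in autocommit mode here.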
Re: [sqlite] When to run ANALYZE
On 19 Jan 2015, at 4:11pm, Andy (KU7T) k...@ku7t.org wrote:

> ANALYZE
> ANALYZE sqlite_master
>
> to my app at timely intervals. We are also running in WAL mode. Do I need to do a WAL checkpoint (pragma wal_checkpoint) before running the index recalc or not?

You don't even need the ANALYZE at timely intervals. Just do an ANALYZE once, when you have realistic data in your tables. You don't need to do it again unless the /type/ of data you're storing changes. Changes to the data itself don't require another ANALYZE.

ANALYZE makes SQLite examine the columns for chunkiness. In other words, it tells the optimizer that your surname column has almost as many different values as it has rows, whereas the membershipType column only has two values: 'normal' and 'premium'. Unless you suddenly introduce three new types of membership there's no need to run ANALYZE again.

Simon.
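[Editor's note: a sketch of that one-off ANALYZE, reusing the post's surname/membership example (data invented). The gathered statistics land in the sqlite_stat1 table, which is what the query planner consults from then on:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (surname TEXT, membership_type TEXT)")
conn.execute("CREATE INDEX members_type ON members (membership_type)")
conn.executemany("INSERT INTO members VALUES (?, ?)",
                 [("Smith", "normal"), ("Jones", "normal"),
                  ("Brown", "premium")])

# Run once, with realistic data loaded; no need to repeat unless the
# *shape* of the data changes (e.g. new membership types appear).
conn.execute("ANALYZE")

stats = conn.execute(
    "SELECT idx, stat FROM sqlite_stat1 WHERE tbl = 'members'").fetchall()
```

Each stat string records how selective the index is, which is the "chunkiness" information described above.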
Re: [sqlite] Whish List for 2015
On 18 Jan 2015, at 8:29pm, David Barrett dbarr...@expensify.com wrote:

> SELECT createNewJSONCollationFunction( FooCollationSequence, foo );
> SELECT createNewJSONCollationFunction( BarCollationSequence, bar );

Those two might be okay, depending on what the quoted values are meant to be.

> CREATE INDEX ON testDataFoo ( json FooCollationSequence );
> CREATE INDEX ON testDataBar ( json BarCollationSequence );

Syntax wrong. The form is

CREATE INDEX indexname ON tablename (columnname COLLATE collatefunction)

Indexes need a name each. Inside the brackets goes the name of the column you're indexing on, then COLLATE, then the collation function to use (which can be your own). You cannot make the collation function take a parameter at this point.

> Before giving that a shot, however, I'm hoping someone here can give me a sense of whether that'll work? Specifically, if I'm generating these dynamic collation sequences on the fly, how does sqlite store the index on disk?

SQLite uses the simplest, most obvious way to do it: index entries are placed in the order your COLLATE function dictates at the moment each row is added to the table. If your COLLATE function changes after rows have been added to the index, SQLite does not understand the problem and can corrupt the index as it updates it on the assumption that the existing ordering is still correct.

Simon.
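[Editor's note: the corrected syntax with a registered collation, sketched in Python. The collation here ("rev", comparing strings by their reversal) is invented purely to show the mechanics; the warning above applies to it too, since the function must never change once rows exist:]

```python
import sqlite3

def reverse_collation(a, b):
    # Compare strings by their reversed forms; returns -1/0/1.
    a, b = a[::-1], b[::-1]
    return (a > b) - (a < b)

conn = sqlite3.connect(":memory:")
conn.create_collation("rev", reverse_collation)

conn.execute("CREATE TABLE testData (json TEXT)")
# Named index, column name, then COLLATE and the collation's name:
conn.execute("CREATE INDEX testData_rev ON testData (json COLLATE rev)")
conn.executemany("INSERT INTO testData VALUES (?)",
                 [("ba",), ("ab",), ("ca",)])

# An ORDER BY using the same collation can be served by that index.
ordered = [r[0] for r in conn.execute(
    "SELECT json FROM testData ORDER BY json COLLATE rev")]
# ordered == ['ba', 'ca', 'ab']  (sorted by reversed string)
```

There is indeed no slot for a per-index parameter: the collation is identified by name alone, so "dynamic" collations each need their own registered name.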
Re: [sqlite] Huge WAL log
On 16 Jan 2015, at 12:39pm, Jan Slodicka j...@resco.net wrote: Thanks to your post I discovered multiple-row inserts so that I now understand what you asked. Just a note that multiple-row inserts were added to SQLite relatively recently (2012-03-20 (3.7.11)) and, because SQLite does only database-level locking, its overhead for INSERTs is far less than that of SQL Server. It might be faster to use them but I would not expect it to be much faster than many inserts as part of one transaction. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
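The two approaches can be compared from Python's sqlite3 module (the table name is illustrative):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we can
# issue BEGIN/COMMIT ourselves.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

# Many single-row INSERTs inside one explicit transaction:
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, i * 2) for i in range(1000)])
con.execute("COMMIT")

# One multiple-row INSERT (requires SQLite 3.7.11 or later):
con.execute("INSERT INTO t VALUES (1, 2), (3, 4), (5, 6)")

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 1003
```

Either way, the point above stands: the big win is batching rows into one transaction, not the multi-row VALUES syntax itself.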
Re: [sqlite] Database corrupted 28 billion bytes
On 16 Jan 2015, at 11:18am, MikeD supp...@dotysoftware.com wrote: Just terminated sqlite3_analyzer. You can let it run. Overnight if need be. Its CPU usage will never increase much past what you've already seen and memory usage shouldn't be excessive. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP
On 16 Jan 2015, at 12:23pm, Jay Kreibich j...@kreibi.ch wrote: They can all be (re)defined, some just happen to have default functions: https://www.sqlite.org/lang_expr.html#like Might be worth noting here that there can be a danger in replacing the definitions of default functions. These functions might be used in your schema (e.g. for CHECK constraints) and may be overridden in your program before it handles data. Someone could then open the same database in another program (e.g. the Shell Tool) which instead had the default functions and use that to add data. If instead you define custom functions and use those in your schema, then when someone opens your database in another program it will have no definition for the function and will produce an error. I'm not saying that overriding functions is always bad, just reminding people of the danger. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
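A minimal sketch of the cross-connection hazard, using Python's sqlite3 module and a hypothetical is_valid_code() function inside a CHECK constraint (the shared-cache URI is just a way to get two connections onto one in-memory database):

```python
import sqlite3

# Hypothetical validation function used inside a CHECK constraint.
def is_valid_code(s):
    return 1 if isinstance(s, str) and s.isupper() else 0

# A shared in-memory database lets two connections see the same schema.
uri = "file:fndemo?mode=memory&cache=shared"
con = sqlite3.connect(uri, uri=True)
con.create_function("is_valid_code", 1, is_valid_code)
con.execute("CREATE TABLE codes (code TEXT CHECK (is_valid_code(code)))")
con.execute("INSERT INTO codes VALUES ('ABC')")  # passes the check

# A second connection that never registered the function cannot even
# run the INSERT: SQLite reports "no such function".
other = sqlite3.connect(uri, uri=True)
try:
    other.execute("INSERT INTO codes VALUES ('XYZ')")
    failed = False
except sqlite3.OperationalError:
    failed = True
```

This is the safe failure mode described above: a custom function in the schema makes other programs error out rather than silently bypass your checks.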
[sqlite] Shell tool improvement request
(following description simplified) I have a text file I wanted to .import into a table. The text file has two columns separated by a tab: a word and a number. It starts off like this: ! 32874624 " 239874242 # 98235252 $ 438743824 % 324872489 39854724 a 23498725 i 1298371287 as 23194874 at 3598725 and continues for another 13588392 (sic.) rows. I use the SQLite shell tool. I create an appropriate table in my database, set .separator to \t and use .import. The .import failed because the second line of the file contains a quote character. I'm okay with the fact that it fails: it's documented, or nearly documented. However, the failure message is this: Error: mytextfile.txt line 13588392: expected 2 columns of data but found 1 Naturally I spent some time looking near the end of the file to figure out what was wrong, when the actual error was in line 2 and the message should have read something like "still inside quote when file ends". It would be useful if this were explicitly checked for when .import reached the end of the file. An alternative would be for the .import command to include text telling you how many records were successfully understood before the error occurred. For my error this would have generated a message like the following: Error: mytextfile.txt line 13588392: after successfully reading 1 record, expected 2 columns of data but found 1 This would also have told me where to look for the problem in the file. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] System.Data.Sqlite, not using parameterized query, how to encode special characters
On 16 Jan 2015, at 7:06pm, Andy (KU7T) k...@ku7t.org wrote: If I just build the sql string myself, how can I solve encoding issues as mentioned above? There's only one character which should cause a problem with embedded strings: the apostrophe. And you escape this by doubling it. Illustration: INSERT INTO myTable VALUES ('I can't stand this.') will fail because the string ends at the 't' so the command can't be parsed. INSERT INTO myTable VALUES ('I can''t stand this.') will succeed, inserting a string containing, among other things, a single apostrophe. So write yourself a little routine which doubles apostrophes and you should be fine. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
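A sketch of such a routine in Python (bound parameters remain the safer choice wherever the interface supports them; this is only for the case described above where the SQL string must be built by hand):

```python
import sqlite3

def sql_quote(s):
    # Double every apostrophe, then wrap the whole thing in apostrophes.
    return "'" + s.replace("'", "''") + "'"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (t TEXT)")
con.execute("INSERT INTO myTable VALUES (" + sql_quote("I can't stand this.") + ")")
stored = con.execute("SELECT t FROM myTable").fetchone()[0]
```

The round trip preserves the single apostrophe: the doubling exists only in the SQL text, not in the stored value.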
Re: [sqlite] sqlite3 tool bug
On 16 Jan 2015, at 6:38pm, Dave Dyer ddyer-sql...@real-me.net wrote: I have a class of database for which using sqlite3 to create a copy via the pipe method fails. Using an explicit intermediate file seems to work ok. Which version of Windows are you using ? You can type 'ver' at the prompt. Can you reduce your database down to just a few lines which cause the problem, then post those lines here ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite3 tool bug
On 16 Jan 2015, at 9:01pm, Keith Medcalf kmedc...@dessus.com wrote: Not, at least, when your database contains string data with unusual characters that Windows feels like it should translate for you... I think that pretty much limits one to the 7-bit ASCII character set ... Why on earth would an operating system programmer bother to put any translation into piping ? Piping is really just a demonstration of how to use stdin and stdout. I'm mystified. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite3 tool bug
On 16 Jan 2015, at 10:27pm, Keith Medcalf kmedc...@dessus.com wrote: [snip] The long and the short of it is that the interprocess pipe in Windows connects to cooked channels because it never occurred to anyone at Microsoft that this was undesirable and irrational. Thanks for this long explanation which I found interesting and useful. And annoying in that it decreases my appreciation of Windows still further. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Huge WAL log
On 15 Jan 2015, at 3:44pm, Jan Slodicka j...@resco.net wrote: Index rebuild (using sqlite3 shell) took 123 sec. This would suggest that it might be better to run with deleted indexes and rebuild them at the end. That is as expected, and is standard advice for cases where you are adding huge numbers of rows. WAL size started at 65MB and slightly increased once upon a time ending finally at 177MB. Very good. Yes, that's better. I assume you set a journal_size_limit value. However, other information in your message suggests that you have a resource leak of some type somewhere. Especially, it should not take 12 minutes to insert 3.5M rows into a simple table with an index or two unless really long strings or blobs are involved. Unfortunately, I'm only really familiar with the C and PHP interfaces to SQLite. But in both of those you can check the result code of each API call to make sure it is SQLITE_OK. Are you able to do this with whatever interface you're using ? DB size increased by roughly 17-18K after each commit. This suggests that WAL needs 10x more memory than the DB itself. Very variable. Depends on whether the changes in one transaction change many different pages or change fewer different pages multiple times. At least, I think so. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] damaged database recovery
On 15 Jan 2015, at 8:24pm, Dave Dyer ddyer-sql...@real-me.net wrote: 1) the generic error 11 database corrupt could have been more specific. It would have been handy to know that the complaint was about duplicate indexes, and which index, or even which table was involved. 2) it wasn't possible to drop the index in question. Or even the whole table containing the index. If I could have dropped the offending index, I could have removed the duplicates and recreated the index without requiring major surgery. You might have checked the extended error code ... https://sqlite.org/rescode.html#extrc though I don't know what it would have told you about your particular database. When you write that it wasn't possible to drop the index in question what happened when you tried ? Were you using your own code or the SQLite shell tool ? My guess is that you actually have file-level corruption which just happened to corrupt data in an index page. Could have just as easily been a table page and you would have had more trouble recovering your data. A good approach for your situation might have been to use the SQLite shell tool to .dump your database to a text file, then to use .read to create a new database from those commands. But it may or may not have worked from your particular corrupt database. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
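The .dump/.read recovery route can be approximated in Python, whose sqlite3 module offers iterdump() to produce the same kind of SQL text as the shell tool's .dump; a healthy in-memory database stands in for the damaged file here:

```python
import sqlite3

# A healthy in-memory database stands in for the damaged file.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (a)")
src.execute("INSERT INTO t VALUES (1)")

# iterdump() yields SQL text equivalent to the shell tool's .dump output.
dump_sql = "\n".join(src.iterdump())

# Replaying it into a fresh database is the .read half of the recipe.
dst = sqlite3.connect(":memory:")
dst.executescript(dump_sql)
recovered = dst.execute("SELECT a FROM t").fetchall()
```

As the message says, against a genuinely corrupt file the dump step itself may fail partway through, but whatever it emits before failing is usually recoverable.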
Re: [sqlite] Saving PDF Files in SQlite
On 15 Jan 2015, at 8:24pm, John Payne j...@pde-usa.net wrote: How do I save PDF files in SQLIte? Read the bytes of the file and save them in a BLOB field. But I have to warn you ... since you say "I'm not a programmer": SQLite is a tool for programmers. It makes database facilities available to programmers. It's not designed for people who know only how to use programs. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
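A minimal sketch of the read-the-bytes-into-a-BLOB advice, in Python's sqlite3 module; the "PDF" here is a stand-in file created on the fly, and the table name is illustrative:

```python
import os
import sqlite3
import tempfile

# Create a stand-in file; a real PDF would be read the same way.
with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
    f.write(b"%PDF-1.4 stand-in content")
    path = f.name

with open(path, "rb") as f:
    data = f.read()  # the raw bytes of the file

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE documents (name TEXT, body BLOB)")
con.execute("INSERT INTO documents VALUES (?, ?)",
            (os.path.basename(path), data))
stored = con.execute("SELECT body FROM documents").fetchone()[0]
os.unlink(path)
```

Writing the BLOB back out to a file reverses the process byte-for-byte.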
Re: [sqlite] Best Practice: Storing Dates
On 14 Jan 2015, at 10:40pm, Baruch Burstein bmburst...@gmail.com wrote: Of course, this is just at the theoretical level. As you said, your app probably wouldn't need to worry about this. I think a previous poster had it right. If you need to do lots of maths with the timestamps, store unix epochs as INTEGERs (or Julian dates if you care only about date and not time). If, on the other hand, your database is read directly by a human a lot, store the dates as ISO format in TEXT. None of these formats have any problem-dates coming up, so we don't expect a y2k panic for any of them. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Huge WAL log
On 14 Jan 2015, at 3:36pm, Jan Slodicka j...@resco.net wrote: - WAL log size 7.490 GB Please repeat your tests but as the first command after opening your database file issue PRAGMA journal_size_limit = 100 With this change the WAL file may still grow to 7 GB while that particular transaction is being executed but should be reduced in size after the transaction is completed. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] database is locked
On 14 Jan 2015, at 5:30pm, Roman Fleysher roman.fleys...@einstein.yu.edu wrote: Is there a way to figure out what is happening? Clear the lock? We would need specifics of your system to answer this absolutely correctly. But you can try these things in turn until one of them works: Unmount the volume the database is on, then mount it again. Log out, then log in again. Reboot the computer. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Best Practice: Storing Dates
On 14 Jan 2015, at 5:53pm, Nigel Verity nigelver...@hotmail.com wrote: I generally just use a fixed-length 14-character string to store the date and time in YYYYMMDDHHMMSS format. It accommodates any time stamp across a 10,000 year timespan and also supports simple date/time comparisons and sorting. There is no problem with using that format. However I would advise you to make a note in your documentation, and/or to add comments to your code, saying what TimeZone these stamps are in. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] .NET System.Data.SQLite, how trap SQLITE_BUSY_SNAPSHOT (WAL) and retry
On 13 Jan 2015, at 10:53pm, Andreas Hofmann andreas.hofm...@ku7t.org wrote: Ideally, SQLite should retry itself, but I am not sure if this is supported. http://www.sqlite.org/c3ref/busy_timeout.html http://www.sqlite.org/pragma.html#pragma_busy_timeout SQLite supports it just fine. The question is whether System.Data.SQLite supports it. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] PK References One of Two Other Tables
On 11 Jan 2015, at 6:21pm, Rich Shepard rshep...@appl-ecosys.com wrote: I've only had a foreign key reference one specific table, not either of two tables You can't do the latter. Foreign keys can reference only one table. You could create yet another table, which just supplies primary keys, but it would seem that this would just duplicate a function of your 'items' table. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Error while executing query: no such column: t1.*B.Switch-Tower-Sector
On 8 Jan 2015, at 11:12pm, MikeSnow michael.sab...@gmail.com wrote: UPDATE t2 SET [*B.ANT_ORIENTATION] = (SELECT t2.ANT_ORIENTATION FROM t2 WHERE t2.[*SSID-CELLID-SECTOR] = t1.[*B.Switch-Tower-Sector]); You do not mention a specific row of t1, so it doesn't know what value you're talking about. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Index rownum
On 8 Jan 2015, at 1:43am, Max Vasilyev maxrea...@mail.ru wrote: I guessed that insertion and deletion could be an issue. We need to re-number keys (change a lot of Btree nodes) on each operation. Or at least on REINDEX command (I say not strictly, just as idea). If you need to manually set the numbers of your keys, then your key numbers are just variables and nothing to do with the internal organisation (e.g. Btree) of your database. So create an integer column for them and then you can do what you want with them: index them, UPDATE them, etc.. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Uncached performance
On 8 Jan 2015, at 10:04am, Максим Гумеров mgume...@gmail.com wrote: When I put it on HDD and try to make 10000 queries (extracting values for 10000 different keys) with some additional processing of extracted values, it takes about 4 seconds on my PC on any run except the first, with maybe half of that time being the additional processing mentioned; even when I perform every query 3 times in a row (making it 30000 queries), this time does not change. On the first run, though, 10000 queries take about 30 seconds! This suggests that you are filling up a cache, especially if your BLOBs are large. In terms of overall time, 10000 queries in 2 seconds is 5000 queries a second. If you are using a rotating hard disk then this is not unexpected, given that your disk probably rotates at 5,400 rpm, that gives an average latency of 5.55ms /per read/. CREATE TABLE global ( [key] VARCHAR (1024), value BLOB, level INTEGER NOT NULL, original_name VARCHAR (1024), id INTEGER PRIMARY KEY AUTOINCREMENT, parent_id REFERENCES global (id) ); SQLite will interpret VARCHAR (1024) as TEXT and will not truncate. I'm just warning you. value is a BLOB (and for 50% of keys is just NULL) Is there a difference between a key being present in your database with NULL value, and the key not being present ? And there are 3 separate indices: by level, key, and parent_id. An index which is only on level would probably be pointless. I suspect you're more likely to want to have that index include the key, too. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Should .dump preserve foreign_keys pragma?
On 8 Jan 2015, at 1:38pm, Niall O'Reilly niall.orei...@ucd.ie wrote: I'ld have expected the foreign_keys pragma setting to have been preserved. That makes sense in terms of how a sensible user would expect SQLite to behave. But unfortunately it's not what SQLite does. See section 2 of https://www.sqlite.org/foreignkeys.html I think that the reason is that FOREIGN KEYs were developed a long time after SQLite3. A choice was made that they should default to OFF to preserve backward compatibility. So yes, as your experiment shows, you have to execute PRAGMA foreign_keys = ON each time you open a database connection if you want the foreign keys to do their stuff. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
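The per-connection requirement is easy to demonstrate with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # must be repeated on every connection
con.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE child (pid REFERENCES parent(id))")
try:
    con.execute("INSERT INTO child VALUES (99)")  # no such parent row
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
```

Without the PRAGMA the same INSERT succeeds silently, which is the backward-compatible default the message describes.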
Re: [sqlite] Time Zone Conversions
On 6 Jan 2015, at 6:13pm, MikeSnow michael.sab...@gmail.com wrote: I am trying to create a CASE statement that converts time zones to UTC for db storage. For example, I have 3 columns, ReceiveDate, Timezone, UTC Datetime. 04/11/2014 2:00:00, EST, empty I would like to update UTC Datetime with logic based off Timezone col to store as UTC? Write code which puts those three strings together into one string in a set format, then use SQLite DateTime functions to produce that timestamp as a number. https://www.sqlite.org/lang_datefunc.html Perhaps use format 2013-10-07 04:23:19.120-04:00 By the way, if possible, store a datetime as number instead of a string. That way you can do searching and sorting. Only when needed turn it back into a string date again, bearing in mind which timezone the user will find most convenient. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Time Zone Conversions
On 7 Jan 2015, at 2:00pm, MikeSnow michael.sab...@gmail.com wrote: I am kind of new at this. So if I get you, I should concat the 3 columns to get one in the suggested format. But then how do you convert? Column_Time 2013-10-07 04:23:19.120-04:00 datetime(Column_Time, 'utc')? SELECT datetime(ReceiveDate || Timezone) might do what you want, provided Timezone holds a numeric offset like '-04:00'. Unfortunately I don't know of anything that can understand timezone initials like 'EST'. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
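What the datetime functions can do with a numeric offset suffix is shown below through Python's sqlite3 module; the timestamp is illustrative, using -05:00 (the offset 'EST' denotes), and the result comes back normalised to UTC:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A numeric offset suffix is understood; the result is normalised to UTC.
utc = con.execute(
    "SELECT datetime('2014-04-11 02:00:00-05:00')").fetchone()[0]
# utc is '2014-04-11 07:00:00'
```

Mapping 'EST' (or any other initials) to that numeric offset would have to happen in application code before the string reaches SQLite.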
Re: [sqlite] PHP: squelch warning and error messages
On 6 Jan 2015, at 9:47am, Swithun Crowe swit...@swithun.servebeer.com wrote: Hello, I know what this error/warning is. I *DO* error handling in my code. I just don't want the message printed on the webpage. You can configure PHP to log error messages, rather than display them. Just to explain to non-PHP users: PHP wasn't really designed for the purposes we put it to these days and sometimes its roots show. In default configuration PHP /always/ generates error messages when many of its built-in functions encounter errors, even if your code traps the error and handles it. One suppresses the error message using the '@' sign as Stephan described. For the original poster: standard lines at the top of all PHP files ... <?php error_reporting(E_ALL); ini_set('display_errors', '1'); ini_set('log_errors', '1'); Those are the values I use while I'm writing new PHP code. They ensure that all errors are spat out in many places so I can see them and debug them. When I switch that program to production, I change the parameters to suppress most errors. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite3_bind_text and strings that look like numbers
On 6 Jan 2015, at 7:49pm, Nelson, Erik - 2 erik.l.nel...@bankofamerica.com wrote: Is there any way to force the bind_text() to store the input text verbatim? https://www.sqlite.org/datatype3.html A column with TEXT affinity stores all data using storage classes NULL, TEXT or BLOB. If numerical data is inserted into a column with TEXT affinity it is converted into text form before being stored. [...] When text data is inserted into a NUMERIC column, the storage class of the text is converted to INTEGER or REAL (in order of preference) if such conversion is lossless and reversible. If your affinity really is TEXT and you really are binding a string, then SQLite should be binding the string, not some number. The thing that would explain what you're reporting is if the column's affinity is REAL or INTEGER. Can you open the database with a shell tool to make sure ? Another possibility is that everything is being stored fine, and that something is changing the type of the result just before you print it. To check this do something like SELECT id,length(id) FROM tester and see whether the length is what you'd expect it to be. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
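Both checks suggested above, typeof() and length(), can be run from Python's sqlite3 module to see the affinity rules in action (the table name echoes the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tester (t TEXT, n NUMERIC)")
con.execute("INSERT INTO tester VALUES (?, ?)", ("123", "123"))
con.execute("INSERT INTO tester VALUES (?, ?)", ("abc", "abc"))
# TEXT affinity keeps '123' as text; NUMERIC affinity converts it.
types = con.execute("SELECT typeof(t), typeof(n) FROM tester").fetchall()
length = con.execute(
    "SELECT length(t) FROM tester WHERE t = '123'").fetchone()[0]
```

If typeof() reports 'integer' where you bound a string, the column's declared type is giving it numeric affinity, which is the situation the message describes.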
[sqlite] VACUUM requires 6.7 times space ?
I have a database file which is 120GB in size. It consists of two huge tables and an index. Its journal_mode is DELETE. It is on a partition with 803GB of free space. By my calculations I have 6.7 times the amount of free space as the database is taking up. I use the SQLite shell tool version 3.7.9 to run VACUUM on it. The Shell tool bails out reporting CPU Time: user 2113.596836 sys 437.660032 Error: near line 5 : database or disk full. My understanding is that VACUUM can't take more than three times the current size of the database file. What does the above error mean under these circumstances ? I'm currently running PRAGMA integrity_check but I have no reason to believe that the database is corrupt, I'm just desperate. If nobody comes up with any suggestions my next tactic is to DROP the index, do the VACUUM, then rebuild the index. I suspect that all three of those operations will be overnight runs so again I'll write a script which does all three and use the shell tool's .read function. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] VACUUM requires 6.7 times space ?
On 5 Jan 2015, at 11:32am, Dan Kennedy danielk1...@gmail.com wrote: Probably running out of space wherever temp files are created. Oh, that makes a lot of sense. The free space on the boot volume for that system is only 37GB. Okay, I can move the file to another computer. Thanks for the fast and useful answer and to Igor for another possibility. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] VACUUM requires 6.7 times space ?
On 5 Jan 2015, at 2:43pm, Nelson, Erik - 2 erik.l.nel...@bankofamerica.com wrote: RSmith wrote on Monday, January 05, 2015 7:43 AM I haven't done this, but I seem to remember there was a way to tell SQLite where to make temp files, or override the system default at any rate - which may help. I ran into this a while ago- used pragma temp_store_directory That's what I'm trying now. Unfortunately the directory I'm trying to use has spaces in and is several folders down a hierarchy. I'm just hoping that the string I've used to set the path doesn't need spaces or slashes escaped. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is this date supposed to be less than or more than the other?
On 3 Jan 2015, at 1:12am, J Decker d3c...@gmail.com wrote: On Fri, Jan 2, 2015 at 4:44 PM, Simon Slavin slav...@bigfraud.org wrote: On 3 Jan 2015, at 12:12am, J Decker d3c...@gmail.com wrote: https://www.sqlite.org/datatype3.html /* lists DateTime as a distinct type */ I'm going to answer this in detail because you still don't understand what's happening and how to work with SQLite. SQLite does not have 'column types', it has 'column affinity'. This means that if you declare a column as NUMERIC but try to store a string in it SQLite will store the string rather than generating an error message and/or storing NULL. Remember this because it has an effect further down. No it doesn't. It says that if you try to define a column as DATETIME SQLite will understand it as you wanting a column with NUMERIC affinity. I know... but that first implies that it will work as a number, esp. if the content is a understood datetime format... No, it means that if you save a number in that column it will be understood as a number, and if you save a string in that column it will be understood as a string. Neither of them will be understood as anything to do with dates or times. SQLite ignores the fact that you originally specified DATETIME and acts exactly the same as if you'd originally specified NUMERIC. It will store a number or a string, and it will understand it as a number or a string, never anything to do with dates or times. which I was quite happy with, decided that maybe local-prefixed self-descriptive offset times could be nice... (not sure how -3:15 is handled... is that -3 + 45?... but that's well out of scope) '-3:15' is a string. It has a colon in it. Numbers don't have colons in them. okay I wrote invalid times. I thought the colon was optional in the timezone offset portion All the values you supplied will be understood as strings. 
So given the following values 2013-10-07 06:23:19.120 2013-10-07T01:23:19.120Z 2013-10-07 08:23:19.120-04:00 they will be understood to be in this order: 2013-10-07 06:23:19.120 2013-10-07 08:23:19.120-04:00 2013-10-07T01:23:19.120Z because they're being understood as strings. ('6' < '8' and ' ' < 'T'). It never even occurs to SQLite that they might be times or dates. If you need your values understood as timedate you /must/ use the timedate functions. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Client/Server Best Practices
On 3 Jan 2015, at 4:41am, Keith Medcalf kmedc...@dessus.com wrote: I do not believe that there is a way to specify COMMIT BUT MAINTAIN THE SHARED LOCK, (that is, to commit the changes only and un-escalate the lock back to a shared lock) which would be required in order for the loop semantics posited by the OP to work correctly. If it were possible to COMMIT BUT MAINTAIN THE SHARED LOCK then a thread could hog the database, never letting other threads/processes do their work. So I hope it's not possible to do it. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite login password
On 4 Jan 2015, at 12:56am, YAN HONG YE yanhong...@mpsa.com wrote: I want to know how to setting a loging sqlite passwd? When I input: Sqlite mtdb Command, the console will prompt like this: Pls input your password: http://www.sqlite.org/src/doc/trunk/ext/userauth/user-auth.txt It is only available in recent versions of SQLite. Since 2014-10-17. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite login password
On 4 Jan 2015, at 1:00am, Richard Hipp d...@sqlite.org wrote: (1) The OP says he used the sqlite command (version 2 of SQLite) not sqlite3. (2) SQLite has *never* given a prompt Pls input your password. That message is coming from third-party software. I agree that my answer has nothing to do with the situation he described. But it does answer the question he asked. Our problem is that the situation he described could not have taken place. Unless a third party wrote a utility they called 'Sqlite' which would, of course, be bad. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is this date supposed to be less than or more than the other?
On 3 Jan 2015, at 12:12am, J Decker d3c...@gmail.com wrote: https://www.sqlite.org/datatype3.html /* lists DateTime as a distinct type */ No it doesn't. It says that if you try to define a column as DATETIME SQLite will understand it as you wanting a column with NUMERIC affinity. Since on the datefunc page Formats 2 through 10 may be optionally followed by a timezone indicator of the form [+-]HH:MM or just Z. The date and time functions use UTC or zulu time internally, and so the Z suffix is a no-op. Any non-zero HH:MM suffix is subtracted from the indicated date and time in order to compute zulu time. For example, all of the following time strings are equivalent: 2013-10-07 08:23:19.120 2013-10-07T08:23:19.120Z 2013-10-07 04:23:19.120-04:00 2456572.84952685 Defines equivalency... I would have assumed that inequalities could also be done. Yep. That's equivalency for use in datetime functions. Not for datetime values since SQLite does not understand datetime values. To process those you need to convert your datetime into a string or a number. And since 'formats supported' are specified, one would assume that datetime columns There are no datetime columns. Because SQLite doesn't have datetime as a datatype. with supported strings would work with at least =, ==, <, >, <=, >=, != To do what you want, use the datetime function strftime('%s',value) to convert your stored time values into seconds since 1970. This will let you use the operators you have listed above. Here's an example: strftime('%s',receivedate) < strftime('%s',receivedate2) This use will recognise all the string formats you've listed above. Better still, store your datetime values as numbers of seconds to start with instead of inconsistent text strings. Then you can use those columns in indexes and do sorting and searching without having to use datetime functions. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
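A short demonstration of those strftime('%s', ...) comparisons, run through Python's sqlite3 module with the timestamps quoted in the message:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Equivalent instants compare equal once converted to epoch seconds ...
same = con.execute(
    "SELECT strftime('%s', '2013-10-07T08:23:19.120Z') "
    "     = strftime('%s', '2013-10-07 04:23:19.120-04:00')").fetchone()[0]
# ... and inequalities work across the mixed string formats too.
earlier = con.execute(
    "SELECT strftime('%s', '2013-10-07 06:23:19.120') "
    "     < strftime('%s', '2013-10-07 08:23:19.120-04:00')").fetchone()[0]
```

Compare this with plain string comparison of the same values, which (as shown in the earlier message) orders them by their characters instead of by the instants they name.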
Re: [sqlite] SQL newbie, how to implement a delete correctly.
On 30 Dec 2014, at 3:53pm, Jonathan Leslie j...@jonathanleslie.com wrote: Sorry, I wasn't clear. what if there are other child tables, say child02- childxx, and you don't know the names of the table, you want to search the entire database? SQLite has no command which will help it specifically find all the child rows of a row. You will have to use a set of SELECT commands like the one Hick posted. It would be possible to write some clever code which came up with this list. You would want to start off executing PRAGMA foreign_key_list(table-name) and looking at the results you get back from it. From those you could generate SELECT statements which found the child rows. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
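A sketch of that clever code in Python's sqlite3 module, with hypothetical table names echoing the thread; it walks every table's PRAGMA foreign_key_list output and builds the SELECTs that would find child rows for a given parent key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parent01 (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE child01 (pid REFERENCES parent01(id))")

# For every table, ask which foreign keys it declares, then build the
# SELECT that would locate its child rows for a given parent key.
selects = []
tables = [r[0] for r in
          con.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
for t in tables:
    for fk in con.execute("PRAGMA foreign_key_list(%s)" % t):
        ref_table, from_col, to_col = fk[2], fk[3], fk[4]
        selects.append("SELECT rowid FROM %s WHERE %s = ?" % (t, from_col))
```

Running each generated SELECT with the parent's key value then yields the child rows, whatever the child tables are called.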
Re: [sqlite] SQL newbie, how to implement a delete correctly.
On 30 Dec 2014, at 6:19pm, Jonathan Leslie j...@jonathanleslie.com wrote:
C:\jon> cat t2.lis | gawk "/.*/ { print \"delete from \" $3 \" where \" $4 }" | cut -d, -f1
delete from parent01 where VALUES(1417556005
delete from child01 where VALUES(1417626376
delete from child01 where VALUES(1417626391
delete from child01 where VALUES(1417703626
delete from child01 where VALUES(1417703753
delete from child01 where VALUES(1419259626
Wait ... you're doing all this just so you can delete the child rows ? Then why don't you just set the foreign keys' ON DELETE action to CASCADE and let SQLite do it all for you ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Bus error with Evolution 3.12.9 and SQLite 3.8.7.4
On 29 Dec 2014, at 1:09pm, Paul Menzel paulepan...@users.sourceforge.net wrote: using Debian Sid/unstable and upgrading from libsqlite3-0 3.8.7.2 to 3.8.7.4, Evolution 3.12.9 started to crash with a bus error [1]. Attachments don't work on this list. Please post long text on a web site somewhere or append it to your message. Can you tell what sqlite_ call was being made to cause the crash ? If it was executing a SQL command, can you tell us what the command is ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Client/Server Best Practices
On 25 Dec 2014, at 12:32pm, Rick Kelly rpke...@gci.net wrote: If a request encounters %SQLITE_BUSY or %SQLITE_LOCKED, I'm retrying up to 5 times with a 100ms sleep in between. Will this approach minimize %SQLITE_BUSY and %SQLITE_LOCKED situations and provide decent concurrency and long term stability? Do not do the retrying inside your own code. Instead set a busy_timeout and let SQLite handle all the retrying itself: https://www.sqlite.org/c3ref/busy_timeout.html https://www.sqlite.org/pragma.html#pragma_busy_timeout A timeout of a minute or two might seem strange, but if you think about what you'd want to happen if the database is busy it starts to make sense. If SQLite is still reporting SQLITE_BUSY or SQLITE_LOCKED after this you can treat it as an unrecoverable error indicating hardware or network problems. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
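In Python's sqlite3 wrapper, for instance, the same setting can be reached two ways. A sketch; the 30-second value is arbitrary.

```python
import sqlite3

# The connect() timeout installs a busy handler, so SQLite itself
# retries for up to 30 seconds while another connection holds a lock.
db = sqlite3.connect(":memory:", timeout=30)

# The same setting expressed as a pragma (value in milliseconds):
db.execute("PRAGMA busy_timeout = 30000")
```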
Re: [sqlite] COMMIT nested in SELECT returns unexpected results
On 24 Dec 2014, at 10:50am, Jim Carroll j...@carroll.com wrote: I understand that performing a SELECT and nested COMMIT on the same table is not supported in sqlite, but I would have expected a COMMIT on a separate table would not be a problem. Some test code in python however reveals that performing the COMMIT disrupts the SELECT statement, and causes duplicate data to be returned. If this is not a supported operation, would you mind pointing me to the docs so I can understand it better? All operations on a SQL database, whether read or write, must be performed inside a transaction. So theoretically if you performed a SELECT without having done a BEGIN first, SQLite could return an error. However, continually having to write BEGIN and END makes your code look messy so instead the programmers of SQLite have been kind to you. If SQLite notices you issuing a command and you haven't already started a transaction it automatically wraps your command in BEGIN and COMMIT. If either the BEGIN or COMMIT fail, the error result is returned as if it was an error from your command. However, I do not see why this, or anything else I know about SQLite, would lead to this in your output: (0,) (1,) (0,) (1,) (2,) I am mystified, Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] COMMIT nested in SELECT returns unexpected results
On 24 Dec 2014, at 10:50am, Jim Carroll j...@carroll.com wrote:

#!/usr/bin/env python
import sqlite3 as sq

Sorry, I may have just realised what's wrong. You are misusing your python library. See the documentation at https://docs.python.org/2/library/sqlite3.html You cannot always use .execute against the connection to the database. You need to create a Cursor to the connection first, and use .execute against that as documented in 11.13.3. Your code works fine here

db = sq.connect(':memory:')
db.execute('CREATE TABLE tbl (col INTEGER)')
db.execute('CREATE TABLE tbl2 (col INTEGER)')
db.executemany('INSERT INTO tbl (col) VALUES (?)', [(0,), (1,), (2,)])
db.commit()

because there is only ever one thing accessing the connection and, as noted in 11.13.7.1, you can do this as a shortcut. However, in your later code

for col in db.execute('SELECT col FROM tbl'):
    print(col)
    db.execute('INSERT INTO tbl2 VALUES (?)', col)
    db.commit()

you are trying to use the connection to the database for two cursors at the same time. One is the iteration through the SELECT, the other is the INSERT command. The python library can't handle this and gets confused. You should instead be doing something like

cursorSelect = db.cursor()
cursorInsert = db.cursor()
for col in cursorSelect.execute('SELECT col FROM tbl'):
    print(col)
    cursorInsert.execute('INSERT INTO tbl2 VALUES (?)', col)
db.commit()

I don't know python, and I cannot test the above code, but it might be enough to point you in the right direction for an eventual solution. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] COMMIT nested in SELECT returns unexpected
On 24 Dec 2014, at 6:39pm, Jim Carroll j...@carroll.com wrote: I actually tried this same idea yesterday, but it made no difference. Even manually creating cursors and executing all statements through them yielded the exact same problem. For simplicity, I kept the code sample short, but I've tried dozens of different ideas over the last two days to get to the bottom of this. Dammit; I had high hopes I'd solved it. I even spent time studying the _sqlite.c code base https://svn.python.org/projects/python/trunk/Modules/_sqlite/ to see if I could track the source of the problem -- but I'm coming up blank. I was getting hung up with trying to understand whether or not the concept being attempted was valid sqlite. The comments I'm hearing are that from the sqlite perspective, the concept SHOULD work, but that there may in fact be some sort of bug/feature in the pysqlite connector code? I feel that RSmith's post shows that the fault is not in SQLite itself. I suspect it's in the Python library but I don't know nearly enough Python to be able to tell for real. Could you try the same thing using this python library http://rogerbinns.github.io/apsw/ http://rogerbinns.github.io/apsw/download.html#source-and-binaries instead ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] COMMIT nested in SELECT returns unexpected
On 25 Dec 2014, at 3:17am, Keith Medcalf kmedc...@dessus.com wrote: Using a separate connection for the INSERT/COMMIT also will not work because it will not be able to get a write lock while the select is running. Should some part of the library then be producing an error which can be trapped ? Would correctly implementing the exception handling described here https://www.python.org/dev/peps/pep-0249/#exceptions have caught an error message of some sort ? The issue can be resolved by: (a) moving the commit out of the loop; (b) retrieving the whole resultset before running the loop; or, (c) changing the journal_mode to WAL (using an on-disk database) and using a separate connection for the INSERT/COMMIT in the loop. (d) Using the INSERT ... SELECT command rather than a loop. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
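Option (d) applied to the thread's example tables. A sketch showing that the whole read-and-write loop collapses into one statement, which sidesteps the open-cursor problem entirely:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tbl (col INTEGER)")
db.execute("CREATE TABLE tbl2 (col INTEGER)")
db.executemany("INSERT INTO tbl VALUES (?)", [(0,), (1,), (2,)])

# One statement, one transaction: nothing iterates while writing.
db.execute("INSERT INTO tbl2 SELECT col FROM tbl")
db.commit()
rows = db.execute("SELECT col FROM tbl2 ORDER BY col").fetchall()
print(rows)  # [(0,), (1,), (2,)]
```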
Re: [sqlite] Whish List for 2015
On 23 Dec 2014, at 3:26pm, Tony Papadimitriou to...@acm.org wrote: CREATE PROC sample AS ( SELECT table1.* FROM table1,...,tableN WHERE ... possibly complicated join ... AND last_name like :1 ORDER BY :2; ); @sample 'Smith%',tax_id would become: SELECT table1.* FROM table1,...,tableN WHERE ... possibly complicated join ... AND last_name like 'Smith%' ORDER BY last_name,tax_id; (If you don't like CREATE PROC, make it CREATE SCRIPT or something else, although I think PROC is good enough as it allows for possible future expansion with more capabilities -- wish lists for 2016 and beyond.) Instead of creating a new structure, have procedures stored in a table and use an EXECUTE command with the same syntax as a SELECT to execute them: EXECUTE command FROM script1 ORDER BY rowid; or EXECUTE command FROM scripttable WHERE script='monthlymaintenance' ORDER BY rowid; That way you can have your own code create and maintain scripts inside a database. The rules would require the entire script to be retrieved before the first line of the script is run to avoid problems with scripts which modify themselves. Not sure how parameters should be handled. The other way would, as someone else commented, be to incorporate stored Lua procedures. Lua would be an excellent language to use for this. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Whish List for 2015
On 21 Dec 2014, at 2:39pm, Stephen Chrzanowski pontia...@gmail.com wrote: SQLite4 may never be released, or if it is going to be, not for a few years yet. SQLite4 can be whatever gets released next. It may be nothing like SQLite4 is now. Next month one of the developer team may have some terrific idea which causes them to scrap the current system and start again. Alternatively Dr Hipp may already have a plan to release SQLite4 on January the first. We can't know. When you're wishing for the future, however, it's best to wish big, not for a tiny step-wise improvement. Don't wish for a system with a better journalling mode, wish for a system without 17 different journalling modes. Don't wish for faster locking, wish for a system that doesn't use locking. And hope most of all that Dr Hipp doesn't care what you want and is channelling Henry Ford: “If I had asked people what they wanted, they would have said faster horses.” (Note: This quote is famous but was probably never spoken by Mr Ford.) Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Whish List for 2015
On 21 Dec 2014, at 10:01pm, jonathon toki.kant...@gmail.com wrote: On 21/12/14 09:47, big stone wrote: that I hope may help end-user/students popularity [2] [2] http://db-engines.com/en/ranking_trend The methodology used by that site is tilted in favour of big data, and complex databases. Consequently, even if there are ten million SQLite databases for every non-SQLite database, SQLite won't rank very high. SQLite shines at anything that is created for one-time use and can then be deleted because it is no longer needed. Which is not to say that SQLite is not a good database engine. But for quick and dirty work, SQLite is more suitable than Oracle, MySQL, Microsoft SQL Server, or PostgreSQL. Also worth reminding people that (statistically speaking) almost no SQLite installations are in devices with permanent power-supplies and internet connectivity. SQLite is in your tablet, your mobile phone, your TV, your PVR/DVR, your games console, your car, your GPS device, and built into your web browser. Almost nobody who is running SQLite knows they run SQLite. This doesn't even cover SQLite's use on millions of embedded computers built into data-capture hardware like the device your courier uses to list their deliveries and capture signatures. You cannot run an Oracle server on these machines. (No slight to Oracle: that's not what Oracle is made for.) Nor would someone learning SQL want to run an Oracle server to do their coursework on. But they can fire up the sqlite shell tool and learn everything they need to about SQL, drawing less power than it takes to run Excel. And they do. So what do you think we'd see if we compared the number of real users of SQLite versus other database systems ? You'd prefer to count installations rather than users ? I can't even count the copies of SQLite running just on the gadgets in my home. The laptop I'm typing this on has at least four. Simon. 
___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ALTER TABLE .. ADD COLUMN .. IF NOT EXISTS ..?
On 16 Dec 2014, at 10:40pm, Nico Williams n...@cryptonector.com wrote: I have a habit of putting schema definitions in a file that's always safe to read and execute against a DB connection. This means that I DROP some things IF EXISTS and CREATE all things IF NOT EXISTS. But if I have to ALTER TABLE... there's no IF NOT EXISTS .. equivalent for ALTER TABLE. Funny that, or that I only just noticed this absence. Looking at other SQL databases I see that this is actually a common question/request, and it seems that where this is implemented it looks like this: ALTER TABLE [IF EXISTS] tbl ADD COLUMN col [IF NOT EXISTS] ..; If 'ALTER TABLE ... ADD COLUMN ...' fails it fails harmlessly, with its work already having been done. The ALTER command came along a long time after original SQL. By that time software could handle cases where a single SQL command failed without the software having to crash at that point. In other words a programmer could execute the ALTER command, and if it failed carry on regardless, or use that failure to skip over more code which set up initial values in the new column. I would value far more the ability to do ALTER TABLE ... DROP COLUMN ... in SQLite, difficult though it would be to implement in SQLite3. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
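Until then, the usual workaround is to consult PRAGMA table_info before altering. A sketch; the helper name is mine:

```python
import sqlite3

def add_column_if_missing(db, table, column, decl):
    """Poor man's ALTER TABLE .. ADD COLUMN .. IF NOT EXISTS."""
    cols = [row[1] for row in db.execute("PRAGMA table_info(%s)" % table)]
    if column not in cols:
        db.execute("ALTER TABLE %s ADD COLUMN %s %s" % (table, column, decl))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a INTEGER)")
add_column_if_missing(db, "t", "b", "TEXT")
add_column_if_missing(db, "t", "b", "TEXT")   # second call is a harmless no-op
```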
[sqlite] '.timer on' in the shell tool
Okay. I used '.timer on' in the shell tool. SQLite 3.7.13, if it matters. Here are two sample lines I got in response to different INSERT ... SELECT commands: CPU Time: user 880.710398 sys 353.260288 CPU Time: user 5073.001124 sys 11609.266484 The two commands were issued one after another on a computer which was otherwise idle. Question 1: What are the units ? Question 2: I would have expected consistency in that user time was always greater than system time. Or perhaps the other way around. Why is a different one greater for the two examples ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] '.timer on' in the shell tool
On 15 Dec 2014, at 9:20am, Donald Shepherd donald.sheph...@gmail.com wrote: - Units are seconds. - IIRC user time is time spent in SQLite code, sys time is time spent in system (OS) calls. Both can vary from run to run and (at least in my testing) sys time tends to vary based off system usage. If you want the best real time results, a later version of the command line also includes a real time that represents actual time elapsed but it also has updated query planning IIRC. Okay. That explains that. What I really needed was wall time, which I guess this version doesn't have. Not to worry. I don't need to know it, it would just have been interesting. Thanks, Donald. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Feature Request - RowCount
On 14 Dec 2014, at 11:08am, Jean-Christophe Deschamps j...@antichoc.net wrote: Without using slow triggers or changing the v3 file format there is still another possibility which could be implemented relatively easily. All it would need is a new pragma (or internal function) like pragma row_count=0/1 and some code. On invocation, the engine would create a hidden system table like sqlite_rowcount --similar to sqlite_sequence-- which would initially hold row counts for every table in the DB. Two obvious places: A) In that sqlite_sequence table you mentioned, as an additional column. Always up-to-date. B) In the tables prepared by SQLite ANALYZE. If you want the rowcount updated, do another ANALYZE. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
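Option B already works today: after ANALYZE, the first number in each sqlite_stat1.stat entry is the indexed table's row count as of that ANALYZE. A sketch in Python's sqlite3; the table and index names are mine.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (a INTEGER)")
db.execute("CREATE INDEX t_a ON t(a)")
db.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

db.execute("ANALYZE")
# stat is a text field; its first number is the row count when ANALYZE ran.
stats = {idx: stat for (tbl, idx, stat) in
         db.execute("SELECT tbl, idx, stat FROM sqlite_stat1")}
print(stats["t_a"].split()[0])  # 1000
```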
Re: [sqlite] Feature Request - RowCount
On 13 Dec 2014, at 12:38pm, Richard Hipp d...@sqlite.org wrote: Also, if there are indices available, SQLite attempts to count the smallest index (it has to guess at which is the smallest by looking at the number and declared datatypes of the columns), under the theory that a smaller index will involve less I/O. Would it not be faster to just count the number of pages each index takes up ? Uh ... no. Wow. You really don't like storing counts or sizes, do you ? To do better than this requires, as far as I know, an incompatible file format change and/or a performance hit for applications that do not use the feature. Can you tell us whether the problem exists in SQLite4 ? I know it uses a different format for indexes. I tried checking the documentation but didn't see an answer that didn't involve more work than I felt like doing. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Feature Request - RowCount
On 13 Dec 2014, at 7:46pm, James K. Lowden jklow...@schemamania.org wrote: Every DB Admin tool I've ever used proved to be more hindrance than help. They seem to be written by the moderately competent to help the novice, and run out of gas or fall over when faced with anything complex. [snip] My first question, then, is whether or not the rowcount is so interesting that it must be known before a table can be operated on. I suggest the answer is No. The relative approximate sizes of the tables is known to the admin in most cases and, when it is not, the information is readily discovered on a case-by-case basis. [snip] All true. Yet when I wrote my own DB Admin tool (suitable only for my own use, of no interest to anyone else) I included the same feature in it. When you click on a TABLE to select it the count(*) pops up along with information about the table's structure. I had no real idea why I put that in, it just seemed a natural thing to do. That said, I'm puzzled why rowcount isn't maintained and exposed in SQLite as part of a table's metadata, particularly when indexes/keys are present. The cost of maintaining a rowcount is small, in terms of computation and complexity. ISTM it is valuable information to the system itself in evaluating query-plan costs. It does seem that knowing count(*) would be a very good thing to know for evaluating query-plan costs. I hope SQLite4 stores it. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Online/Hot backup of WAL journalling mode database
On 12 Dec 2014, at 10:27am, Clemens Ladisch clem...@ladisch.de wrote: If you write your own backup tool that simply calls sqlite3_backup_step(b, -1), the entire database is copied in a single atomic transaction. OP's problem is that he runs several processes which are constantly (every few seconds) writing to the database he needs to copy. So any operation which locks the database for a long period would mean that some data was not captured. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Online/Hot backup of WAL journalling mode database
On 10 Dec 2014, at 10:40pm, Nick maill...@css-uk.net wrote: All the processes would have automatic checkpointing disabled. Just the backup process would perform the checkpoint. I don't know enough about the internals of SQLite to be sure, but various parts of me are concerned that this is a bad idea. I don't know what WAL mode would be like without checkpointing but there has to be a reason for checkpointing and disabling it between backups sounds bad. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] How to speed up database open
On 11 Dec 2014, at 11:51am, Paul de...@ukr.net wrote: I understand, that having them is a must for a decent performance. In my specific case I have millions of individual database files. This is one reason, among others, that I can't keep them open all the time. Just too many of them. These databases are being opened frequently. Let's say 500 times per second. In most cases, just to query a single row. Ironically, querying takes only a handful of microseconds, and most CPU time is spent reading same database structure over and over again. Can you please make some advice, what can be done to reduce this overhead? The problem with this is that it cannot be solved by SQLite's programmers because most of the time is taken by operating system calls. Merely opening a file (which you no doubt know is not done by sqlite3_open() but delayed until the first access) is a time-consuming procedure. Once SQLite has access to the data it is, as you have shown, very fast. You explain that you have millions of individual database files. Is that the only reason you can't open the database and keep it open, or are there others ? Also, do all these separate databases have the same tables with the same columns in ? My normal advice would be that before you start querying you merge your millions of separate database files into one big one. Judging by the degree of technical information in your question you don't need me to suggest ways of programming or scripting this, or of keeping a merged central copy up-to-date. The only question is whether it is appropriate to your circumstances. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
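If merging is appropriate, ATTACH makes it straightforward. A sketch under the assumption that every file carries the same single-table schema; all names here are invented.

```python
import sqlite3

def merge(target_path, source_paths):
    """Fold many small single-table databases into one big one."""
    db = sqlite3.connect(target_path)
    db.execute("CREATE TABLE IF NOT EXISTS t (k TEXT, v TEXT)")
    for path in source_paths:
        db.execute("ATTACH DATABASE ? AS src", (path,))
        db.execute("INSERT INTO t SELECT k, v FROM src.t")
        db.commit()              # DETACH must happen outside a transaction
        db.execute("DETACH DATABASE src")
    return db
```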
Re: [sqlite] replace many rows with one
On 10 Dec 2014, at 3:40pm, RSmith rsm...@rsweb.co.za wrote: INSERT INTO s2merged SELECT a, b, sum(theCount) FROM s2 GROUP BY a,b; Thanks to Martin, Hick and R for this solution. It was just what I was looking for. Not sure if your theCount field already contains totals or if it just has 1's... how did duplication happen? The existing rows contain totals. Or maybe I should call them subtotals. The data is being massaged from one format to another. I did a bunch of stuff when it was text files, then imported it into SQLite and did a bunch more on it as rows and columns. Eventually it'll end up in SQLite. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] Counting rows
In my table which had about 300 million (sic.) rows I did this SELECT count(*) FROM myTable; to count the number of rows. After half an hour it was still processing and I had to kill it. I know that the internal structure of a table means that this number isn't simple to produce. But is there really no faster way ? This table is going to have about six times that amount soon. I really can't count the rows in less than a few hours ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Counting rows
On 11 Dec 2014, at 3:58pm, Paul Sanderson sandersonforens...@gmail.com wrote: would count _rowid_ from mytable be quicker Hmm. Given that these tables have the normal use of rowid, and that rows in this table are only inserted, never deleted, I wonder whether SELECT max(rowid) FROM myTable would have given the right result, almost instantly. Can't check it now, but thanks for the idea, Paul. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
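A sketch of the idea, including the case that breaks it (data invented): max(rowid) only has to descend one edge of the B-tree, but it stops matching count(*) the moment any row is deleted.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE myTable (x TEXT)")
db.executemany("INSERT INTO myTable (x) VALUES (?)", [("a",), ("b",), ("c",)])

# Insert-only table: max(rowid) equals count(*) and is effectively instant.
print(db.execute("SELECT max(rowid) FROM myTable").fetchone()[0])  # 3

# One DELETE breaks the equivalence:
db.execute("DELETE FROM myTable WHERE rowid = 2")
print(db.execute("SELECT max(rowid) FROM myTable").fetchone()[0])  # still 3
print(db.execute("SELECT count(*) FROM myTable").fetchone()[0])    # 2
```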
Re: [sqlite] Counting rows
On 11 Dec 2014, at 4:39pm, Dominique Devienne ddevie...@gmail.com wrote: I have a little utility that connects to Oracle, and does a big UNION ALL query to get the counts of all my tables (82 currently): Yeah, it's easy in Oracle. The problem is that SQLite3 uses a tree to store its tables, and it does not store the total number of entries separately. So to count the number of rows in a table SQLite has to walk the entire tree: go up and down all the branches to find which rows exist, whether any have been deleted, etc.. SQLite4 uses a different file format and I understand it does not have this problem. Which doesn't help me at all right now. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] replace many rows with one
Dear folks, A little SQL question for you. The database file concerned is purely for data manipulation at the moment. I can do anything I like to it, even at the schema level, without inconveniencing anyone. I have a TABLE with about 300 million (sic.) entries in it, as follows: CREATE TABLE s2 (a TEXT, b TEXT, theCount INTEGER) There are numerous cases where two or more rows (up to a few thousand in some cases) have the same values for a and b. I would like to merge those rows into one row with a 'theCount' which is the total of all the merged rows. Presumably I do something like CREATE TABLE s2merged (a TEXT, b TEXT, theCount INTEGER) INSERT INTO s2merged SELECT DISTINCT ... FROM s2 and there'll be a TOTAL() in there somewhere. Or is it GROUP BY ? I can't seem to get the right phrasing. Also, given that this is the last operation I'll be doing on table s2, will it speed things up to create an index on s2 (a,b), or will the SELECT just spend the same time making its own temporary index ? Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
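As confirmed elsewhere in the thread, the phrasing being reached for is GROUP BY with sum(). A sketch with invented data:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE s2 (a TEXT, b TEXT, theCount INTEGER)")
db.executemany("INSERT INTO s2 VALUES (?,?,?)",
               [("x", "y", 2), ("x", "y", 3), ("p", "q", 1)])

db.execute("CREATE TABLE s2merged (a TEXT, b TEXT, theCount INTEGER)")
db.execute("INSERT INTO s2merged "
           "SELECT a, b, sum(theCount) FROM s2 GROUP BY a, b")
merged = db.execute("SELECT * FROM s2merged ORDER BY a").fetchall()
print(merged)  # [('p', 'q', 1), ('x', 'y', 5)]
```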
Re: [sqlite] seeking advice
On 9 Dec 2014, at 8:41pm, Rene Zaumseil r.zaums...@freenet.de wrote: Version 3: One table with time stamp, parameter id and parameter value - Is it working when all values change? - Is retrieving values for one parameter fast? That one. Versions 1 & 2 will both, technically, work, but they're abuse of how SQL should be used and will result in horrible code. The speed for retrieving all parameters will be bound by your programming language. SQLite will do its side of the job very quickly. And since columns have just affinity and not type, having some values INTEGER and other REAL will work fine. I will write and read the data on the same time. But writing should have priority. Multithread ? Multiprocess ? Neither are needed, but those are the things you need to decide on next. Also, do not forget to pick a journal mode and to set a busy_timeout. Ah, I see you already mentioned journal_mode. Good. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
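Version 3 as a schema might look like this; a sketch in which every name is invented. The (param_id, ts) index is what makes single-parameter retrieval fast.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE samples (
                  ts       REAL,      -- time stamp
                  param_id INTEGER,   -- which parameter this value belongs to
                  value               -- no type: INTEGER and REAL both fine
              )""")
db.execute("CREATE INDEX samples_param_ts ON samples(param_id, ts)")
db.executemany("INSERT INTO samples VALUES (?,?,?)",
               [(1.0, 7, 42), (2.0, 7, 42.5), (2.0, 8, 99)])

# The index satisfies this query directly.
series = db.execute("SELECT ts, value FROM samples "
                    "WHERE param_id = 7 ORDER BY ts").fetchall()
print(series)  # [(1.0, 42), (2.0, 42.5)]
```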
Re: [sqlite] Online/Hot backup of WAL journalling mode database
On 9 Dec 2014, at 8:57pm, Nick maill...@css-uk.net wrote: Environment is Linux with multiple (c. 4-6) processes accessing a single sqlite database named test.db. Backup: - New process started using cronjob to initiate application checkpoint until completion. - rsync diff the file test.db to another drive/location (specifically ignoring the -shm and -wal file). - exit process Restore: - rsync the file test.db from another drive/location. Will not be trustworthy if the database is being written to during the rsync operations. Recommend either of the following: A) Ensure all processes besides the backup process have the database closed while it is being copied. Establish some kind of semaphore so they can tell when it's safe to open the database again. B) Use the SQLite Backup API which was invented to do what you want. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Online/Hot backup of WAL journalling mode database
On 10 Dec 2014, at 12:30am, Nick maill...@css-uk.net wrote: That's interesting, Simon. I didn't expect the database not to be trustworthy. The database will be trustworthy at any instant. Your copy of it will be corrupt because the file will be changing while you are copying it. In WAL mode I thought the database file is only written to when checkpointing. Have I misunderstood this journaling mode? How do you intend to prevent your other processes from checkpointing while you take the backup ? You can disable checkpointing for your own connection to the database but not for the connections other processes have. Again I may have misunderstood the docs around the Backup API, does it not start again from the beginning copying pages if another process writes to the database during the process? In practice could it successfully backup a 2GB database that is being written to once a second? Not if the writing never stopped. But there's no way to take a copy of a file which is constantly being rewritten. rsync can't do it either. How can anything copy a file which is constantly being modified ? You can BEGIN EXCLUSIVE and then END once your backup is finished. That should prevent other processes writing to the file. You will have to deal with what happens if your BEGIN EXCLUSIVE times out, and you will have to put long timeouts in your other processes so they can handle the file being locked long enough for the entire copy to be taken. That's the only way I can think of to do it. And yes, it will prevent writing to the database while it's being copied. On the other hand, there's a different way to clone a database: log the changes. When something issues an INSERT/DELETE/UPDATE command, execute the command but also append a copy of that command to a text file somewhere. When you want to bring your backup copy up-to-date, take a copy of the log file, then execute all the commands in it to your out-of-date copy. 
You need a method of zeroing out the log file, or knowing where you got to on your last backup. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
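For reference, the Backup API mentioned earlier is exposed by Python 3.7+ as Connection.backup. A sketch, with the caveat the thread discusses built in:

```python
import sqlite3

def hot_backup(src_path, dst_path, pages=16):
    """Copy a live database using SQLite's Backup API."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    # Copying a few pages at a time lets readers in between steps, but a
    # write from another connection restarts the copy from the beginning,
    # so a file written every second may never finish this way.
    src.backup(dst, pages=pages)
    src.close()
    dst.close()
```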
Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail
On 8 Dec 2014, at 1:31pm, Gwendal Roué g...@pierlis.com wrote: We share the same conclusion. I even tried to decorate the update query with ORDER clauses, in a foolish attempt to reverse the ordering of row updates, and circumvent the issue. A way to solve this is to use REAL for page numbers instead of INTEGER. To insert a page between two existing ones, give it a number which is the mean of the two pages you're inserting it between. Every so often you can run a maintenance routine which renumbers all pages to integers. Alternatively, store your pages as a linked list. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
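The REAL-page-number trick in miniature; the schema here is invented for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pages (position REAL, content TEXT)")
db.executemany("INSERT INTO pages VALUES (?,?)",
               [(1.0, "first"), (2.0, "second")])

# Insert between two existing pages: its number is the mean of theirs.
db.execute("INSERT INTO pages VALUES (?,?)", ((1.0 + 2.0) / 2, "between"))
order = [r[0] for r in
         db.execute("SELECT content FROM pages ORDER BY position")]
print(order)  # ['first', 'between', 'second']

# The occasional maintenance pass: renumber back to plain integers.
for n, (rid,) in enumerate(
        db.execute("SELECT rowid FROM pages ORDER BY position").fetchall(), 1):
    db.execute("UPDATE pages SET position = ? WHERE rowid = ?", (float(n), rid))
db.commit()
```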
Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail
On 8 Dec 2014, at 3:05pm, Gwendal Roué g...@pierlis.com wrote: Why not an opt-in way to ask for deferred constraint checking. The key here is only to allow perfectly legit requests to run. With all the due respect to sqlite implementors and the wonderful design of sqlite. SQL-99 includes a syntax for deferred checking. We don't need to invent our own syntax with a PRAGMA. However, it is done when the constraint is defined rather than being something one can turn on or off. So you would need to think out whether you wanted row- or transaction-based checking when you define each constraint in the first place. Simon. ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
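SQLite does implement that SQL-99 syntax, but only for foreign key constraints, not for the UNIQUE index this report concerns. Still, a sketch shows the flavour of commit-time checking:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (pid INTEGER
        REFERENCES parent(id) DEFERRABLE INITIALLY DEFERRED);
""")

# Out-of-order inserts are legal: the check happens at COMMIT,
# by which time the parent row exists.
db.execute("BEGIN")
db.execute("INSERT INTO child VALUES (1)")   # parent 1 does not exist yet
db.execute("INSERT INTO parent VALUES (1)")
db.commit()                                  # passes
```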
Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?
On 9 Dec 2014, at 1:36am, David Barrett dbarr...@expensify.com wrote: *Re: Why VACUUM.* We vacuum weekly. This particular database is a rolling journal -- we are constantly adding new rows to the end of the table, and every week we truncate off the head of the journal to only keep 3M rows at the tail. Given that we're truncating the head, without vacuuming we'd be inserting the new rows at the front of the database with the old rows at the end -- and then each truncation would leave the database more and more fragmented. Granted, this is on SSDs so the fragmentation doesn't matter a *ton*, but it just adds up and gets worse over time. Anyway, agreed it's not the most important thing to do, but all things being equal I'd like to do it if I can to keep things clean and snappy. Okay. I have some great news for you. You can completely ignore VACUUMing without any time or space drawbacks. You're wasting your time and using up the life of your SSD for no advantage. Fragmentation ceases to become a problem when you move from rotating disks to SSD. SSD is a truly random access medium. It's no faster to access block b then block b+1 than it is block b then block b+1000. Two contiguous blocks used to be faster in rotating disks only because there is a physical read/write head and it will already be in the right place. SSDs have no read/write head. It's all solid state and accessing one block is no faster than another. Delete old rows and you'll release space. Insert new rows and they'll take up the space released. Don't worry about the internal 'neatness' of the file. Over a long series of operations you might see an extra block used from time to time. But it will be either zero or one extra block per table/index. No more than that. A messy internal file structure might niggle the OCD side of your nature but that's the only disadvantage. Also, SSD drives wear out fast. 
We don't have good figures yet for mass-produced drives (manufacturers introduce new models faster than the old ones wear out, so it's hard to gather stats), but typical figures show each single block failing after 2,000 to 3,000 write cycles. Your drive does something called 'wear levelling': it has a certain number of spare blocks and will automatically swap them in when the first blocks fail, but after that your drive is smoke. And VACUUM /thrashes/ a drive, doing huge amounts of reading and writing as it rebuilds tables and indexes. You don't want to do something like that on an SSD without a good reason.

So maybe VACUUM once every few years, or perhaps when you have another, more complicated maintenance routine which already takes up lots of time. But it doesn't really matter if you never VACUUM. (Which is entirely unlike my home, dammit.)

Simon.
Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?
On 8 Dec 2014, at 12:43am, David Barrett dbarr...@expensify.com wrote:

> Other alternatives we're considering are to fork and nice the process, or to call ioprio_set() directly, but I'm curious if there's a simpler way to do it. Thanks!

VACUUM does the same job (in a very different way) as copying all the TABLEs, then creating the VIEWs, INDEXes and TRIGGERs on the new tables. These can all be done using SQL statements. Had you considered creating a VACUUMed copy yourself? You could do one table/view/index/trigger at a time, and you could engineer a pause of a few seconds after every ten thousand rows are put into a table.

But I'm wondering why you need to VACUUM often enough that anything it does is a problem. It can save file space after deletion (before new data is put in to take up the released space), and it can increase speed, but the speed increase is small. It's not needed in normal use; it should be kept for a maintenance routine, perhaps once a month at most. If your users are putting in more data than they are deleting, VACUUM has no noticeable effect, and I know of SQLite databases which have been amended daily for years without ever once having been VACUUMed.

Simon.
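The throttled table-by-table copy described above might look something like this sketch (file names, chunk size, and pause length are all placeholders; triggers recreated here would fire during the copy, so a real run might create those last):

```python
import sqlite3
import time

def copy_compacted(src_path, dst_path, chunk=10_000, pause=2.0):
    """Copy a database into a fresh, compact file, sleeping between
    batches of rows to limit I/O pressure -- a 'manual VACUUM'."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    # Recreate schema objects (tables, indexes, views, triggers),
    # skipping SQLite's internal entries.
    for (sql,) in src.execute(
            "SELECT sql FROM sqlite_master "
            "WHERE sql IS NOT NULL AND name NOT LIKE 'sqlite_%'"):
        dst.execute(sql)
    # Copy each table's rows in chunks, pausing between chunks.
    tables = [r[0] for r in src.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
    for t in tables:
        cur = src.execute(f'SELECT * FROM "{t}"')
        while True:
            rows = cur.fetchmany(chunk)
            if not rows:
                break
            marks = ",".join("?" * len(rows[0]))
            dst.executemany(f'INSERT INTO "{t}" VALUES ({marks})', rows)
            dst.commit()
            time.sleep(pause)   # throttle: let other I/O through
    src.close()
    dst.close()
```

Unlike the real VACUUM, each batch here is its own short transaction, so other connections get a chance to read and write between batches.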
Re: [sqlite] How to Verify (know that they are correct) Queries in SQLite GUI Manager Firefox add-on?
On 8 Dec 2014, at 2:24am, Dwight Harvey dharv...@students.northweststate.edu wrote:

> I know very little and Databases are complex and intimidating. I figured out how to run queries but I don't know if they are correct/accurate, as in what I requested from the 'RUN' results? How do you 'VERIFY' your query results?

I bet if you read over your notes or your textbook you'll find that your instructor explained how to do this sometime during your instruction. But without knowing what your instructor intended you to do, we might be able to take some guesses.

If you could list every row of the tables in your query, you could look down them yourself and see what you think the result of the query should be. You can do that with

    SELECT * FROM dbo.employees

Obviously, in real life tables get huge and it's not practical to do this. That's why when you test your code you make up small dummy tables to check that things are working correctly. And you make up appropriate sample data to test for two kinds of errors:

false positive: returning a row you didn't mean to return
false negative: not returning a row you should have returned

Good luck with your course.

Simon.
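The dummy-table approach above can be sketched like this (the table, data, and query under test are invented for illustration):

```python
import sqlite3

# Build a tiny in-memory dummy table whose correct answer we know by hand.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
db.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("Ann",   "sales", 52000),   # should match the query below
    ("Bob",   "sales", 48000),   # should NOT match (salary too low)
    ("Carol", "it",    60000),   # should NOT match (wrong department)
])

# The query under test: sales staff earning at least 50,000.
rows = db.execute(
    "SELECT name FROM employees WHERE dept = 'sales' AND salary >= 50000"
).fetchall()
names = {r[0] for r in rows}

# False positive check: no row we didn't mean to return.
assert "Bob" not in names and "Carol" not in names
# False negative check: every row we expected is present.
assert "Ann" in names
```

Each dummy row exists to provoke one specific kind of mistake, so a wrong WHERE clause shows up immediately against data small enough to check by eye.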
Re: [sqlite] database is locked for SQLITE_BUSY
On 4 Dec 2014, at 5:36pm, Jonathan Moules j.mou...@hrwallingford.com wrote:

> Depending on the application, an end user likely won't see the error code, but instead just the error message.

SQLite is not a program; it's an API, intended for use by a programmer. Those error codes should not be reported to the end user: they are intended for the user of the API, the programmer. An end user is not expected to know what something like SQLITE_BUSY_SNAPSHOT means. What the programmer has their program do about them is the heart of the matter.

Simon.
Re: [sqlite] database is locked for SQLITE_BUSY
On 3 Dec 2014, at 2:20pm, Stephen Chrzanowski pontia...@gmail.com wrote:

> Although I think there is already an error result, one situation might be when the DB is in a read only state.

I just thought of the database /file/ being marked 'read-only'. But it turns out that there's a different SQLite result code for that situation.

Simon.
Re: [sqlite] database is locked for SQLITE_BUSY
On 3 Dec 2014, at 3:10pm, Hick Gunter h...@scigames.at wrote:

> SQLITE_BUSY means that some connection is BUSY with a write transaction and has locked the database file; presumably, it will be possible to write to the database when the current writer has finished, just not now or within the specified busy timeout. SQLITE_LOCKED otoh means that the calling application is in error and has specified two or more transactions whose table access modes are incompatible and whose table access orders differ. This situation is resolvable only if at least one involved transaction is rolled back.

This is very illuminating and far better information than I managed to find in the official SQLite documentation. It would be really helpful if something like this could be incorporated in an appropriate place.

Simon.
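The practical upshot of the distinction above is that SQLITE_BUSY is transient and worth retrying, while SQLITE_LOCKED signals an application-logic error. A rough sketch of the retry side, using Python's sqlite3 wrapper (which surfaces SQLITE_BUSY as an OperationalError reading "database is locked"); the function name and retry parameters are invented:

```python
import sqlite3
import time

def write_with_retry(path, sql, params=(), retries=5, delay=0.2):
    """Run one write statement, retrying while another writer holds the lock."""
    # timeout=1.0 asks SQLite to wait up to a second for a competing
    # writer (sqlite3_busy_timeout under the hood) before raising.
    conn = sqlite3.connect(path, timeout=1.0)
    try:
        for attempt in range(retries):
            try:
                with conn:                  # one transaction per attempt
                    conn.execute(sql, params)
                return True
            except sqlite3.OperationalError as e:
                if "locked" not in str(e):  # not a busy condition: re-raise
                    raise
                time.sleep(delay)           # another writer: back off, retry
        return False                        # still busy after all retries
    finally:
        conn.close()
```

No amount of retrying helps with SQLITE_LOCKED, by contrast: as the quoted explanation says, one of the conflicting transactions has to be rolled back and the access order fixed in the application.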