Re: [sqlite] New SQLite Forum established - this mailing list is deprecated
On Fri, 13 Mar 2020 11:22:46 -0400 Richard Hipp wrote:

> 4. Your email address is never displayed, even to subscribers.

While I have no illusions of altering the online trajectory of this ML, I'd like to point out that email addresses foster community.  Studies and experience both show that the incidental availability of email addresses invites the formation of new relationships through off-list communication.  Speaking for myself, that opportunity has yielded several new connections and even professional opportunities.  I value those relationships.  I'm sorry this forum will cease to be a place where new ones will develop.

--jkl
Re: [sqlite] Fwd: inserting new data only
On Tue, 25 Feb 2020 12:02:24 -0500 Przemek Klosowski wrote:

> and I want to avoid storing repetitive data, so that the database
> should contain
> 10:32 12
> 10:35 15
> 10:39 13
> 10:46 18
> where only the earliest time with the unchanging value is stored.

Be careful what you wish for.  Usually "avoid storing" is a proxy for some kind of intended use.  Unless it's infeasible, it's usually better to store everything, verbatim, as it arrives.  Then you can present it however you like, with nothing lost.

--jkl
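For illustration, a sketch of "store everything, present it however you like", assuming a hypothetical readings(t, value) table and an SQLite new enough (3.25+) to have window functions.  The query reports only the earliest row of each run of unchanged values:

    -- hypothetical table: every sample stored verbatim
    CREATE TABLE readings (t TEXT NOT NULL, value INTEGER NOT NULL);

    -- present only the rows where the value changes
    SELECT t, value
      FROM (SELECT t, value,
                   lag(value) OVER (ORDER BY t) AS prev
              FROM readings)
     WHERE prev IS NULL OR value <> prev;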
Re: [sqlite] "Standard SQL" ?
On Sun, 2 Feb 2020 10:05:11 +0100 Markus Winand wrote:

> When you say "many standards" do you mean the different releases
> those standards have?

Yes.

> IMHO, there is only one SQL standard, namely ISO/IEC 9075.  The
> current and technically only valid version is that of 2016 (even
> though an extension was added in 2019).

That's a defensible proposition, and it's one way to measure an implementation.  I think of standards more as accreting: the longer a feature has been standardized, and the more succeeding versions of the standard include it, the "more standard" it is.

--jkl
Re: [sqlite] Check constrain execution timing change? (Now a bug)
On Mon, 3 Feb 2020 10:45:50 +0100 Dominique Devienne wrote:

> On Sun, Feb 2, 2020 at 12:50 AM Richard Hipp wrote:
>
> > On 2/1/20, Thomas Kurz wrote:
> > > Does this mean there will be no possibility to prevent inserting
> > > a string into an integer column anymore?
> > >
> > > create table x (x integer check (typeof(x) == 'integer'));
> > > insert into x values ('1');
> > >
> > > --> will pass in future versions???
> >
> > I think that is what it means, yes.
>
> Wow... I haven't caught up on this thread, but that's really really
> bad IMHO ... the fact we can no longer do that would be a real shame.
> I wonder where this is coming from... --DD

It's a good thing, really!  The rule would be that the provided value is converted to the column's type before inserting.  If it can't be converted, it's still an error.  If it can, great.  I think you'll find that's the behavior of most SQL DBMS implementations.

After all, of what significance is the type of the provided argument?  Do you want to force applications to "pre-convert" values the DBMS can convert implicitly?  Do you want binding choices in the application to drive the datatype in the database, or do you want the database to enforce types?

--jkl
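A sketch of that rule, on the assumption that column affinity is applied before the CHECK constraint is evaluated (the change under discussion):

    CREATE TABLE x (x INTEGER CHECK (typeof(x) = 'integer'));

    INSERT INTO x VALUES ('1');    -- '1' is coerced to integer 1; the CHECK passes
    INSERT INTO x VALUES ('abc');  -- cannot be coerced; typeof() is 'text'; the CHECK fails
    INSERT INTO x VALUES (1.5);    -- cannot be converted losslessly; stays real; the CHECK fails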
Re: [sqlite] importing data to a table that has generated-columns
On Mon, 27 Jan 2020 12:00:52 -0700 "Keith Medcalf" wrote:

> If you mean importing into a pre-existing table as above that is
> declared with "check typeof(a) in ('integer', 'real')" then yes.  The
> check constraint is run before affinity is applied rather than after
> (unfortunately) ...

Thank you, Keith.  Yes, that's what I was referring to.

> > I imagine it's already being considered: if pragma table_info
> > included a column with the SQL for generated columns (and NULL
> > otherwise), the shell could use that instead of SELECT * to
> > determine the number of insertable columns.
>
> That data is already in the schema structures, as is the assigned
> affinity:

Good to know.  I realize of course that computed columns are a new feature.  Knowing that their "computed" property is easily ascertained, I'm optimistic that the shell's .import will one day avail itself of that information.

What's old is always new again.  ISTR when Microsoft SQL Server added computed columns, they also got in the way of bulk loading at first.  The rule for a while was that the bulk-copy utility (bcp) couldn't be used with such tables.

--jkl
Re: [sqlite] New word to replace "serverless"
On Mon, 27 Jan 2020 17:18:45 -0500 Richard Hipp wrote:

> But more recently, "serverless" has become a popular buzz-word that
> means "managed by my hosting provider rather than by me."  Many
> readers have internalized this new marketing-driven meaning for
> "serverless" and are hence confused when they see my claim that
> "SQLite is serverless".

It would be a mistake to discard the useful term "serverless" in favor of some invented term.  Any invented term would have to be defined, in which case you're back where you started.

As you know, no terminology has context-free meaning.  (My favorite example is time: "six o'clock" can mean several different things, depending on context.)  The fact that some users impose inappropriate context on "serverless" doesn't make that term less useful or descriptive or meaningful.

It seems to me the best course of action, to dispel the confusion, is simply to define the term in some prominent location.  You already do that at https://sqlite.org/serverless.html.  Maybe that page just needs a 3rd bullet point?

--jkl
Re: [sqlite] "Standard SQL" ?
On Thu, 30 Jan 2020 19:19:53 + Simon Slavin wrote:

> Have any of you been using this term for a meaning other than "Fully
> conforming to SQL:2019 (or whatever version you think current)"?  Do
> you have documentation somewhere?  Or are my suspicions correct and
> there's no such thing?

I often refer to "the SQL standard" as if there were only one.  For many purposes, there might as well be just one.

For example, some SQL DBMSs support "UPDATE FROM" to update one table from another.  If we can agree that's nonstandard SQL, then surely it's also "not standard SQL" and "is not defined by the SQL standard".  Similarly, last I checked, no SQL standard supported LIMIT for SELECT.

On the positive side, some parts of SQL haven't changed since the Late Bronze Age.  "SELECT *" still means all columns; "FROM" still takes a table argument, whether a table name, view, or expression.  "WHERE" operates on values "before" aggregation; "HAVING" on values "after" aggregation.  Any implementation that operates any other way does not implement standard SQL.

SQL is hardly unique in this regard.  We also refer to "the" C standard library, to "Posix", and to "the" C or C++ standard.  Like SQL, there are many versions of each, and (also like SQL) some parts are unchanged since the beginning while, just as usefully, some parts that weren't in the first standard haven't changed since they were introduced.

--jkl
Re: [sqlite] importing data to a table that has generated-columns
On Sun, 26 Jan 2020 12:01:32 -0700 "Keith Medcalf" wrote:

> Now that the table exists, use "SELECT * FROM " to determine
> the number of columns in the table (which will include computed
> always columns, if any).
...
> Otherwise, Richard *may* make some changes to the .import logic which
> *might* appear in the next release version.

I imagine it's already being considered: if pragma table_info included a column with the SQL for generated columns (and NULL otherwise), the shell could use that instead of SELECT * to determine the number of insertable columns.

I'm a little confused, though.  ISTR the shell does something clever with .import, because constraints that enforce numeric types are violated even when the data are numeric.  Is that simply because the shell uses sqlite3_bind_text for every column, and the system doesn't attempt to convert numeric text to a numeric type, regardless of the column's declared type?

--jkl
Re: [sqlite] Unexplained table bloat
On Sun, 12 Jan 2020 16:24:53 -0700 "Keith Medcalf" wrote:

> while one may be tempted to claim that "consistency is the hobgoblin
> of little minds"

You might have forgotten that the much overpraised Ralph Waldo specified "a foolish consistency".  He only meant: don't try to hang your hat on too small a peg.

> Doing this does not really do much since you still have to check the
> type on retrieval of the value anyway in order to know what to do
> with it.

That depends what "much" is.  The value of constraints used to enforce types is to reject from the database values outside the domain.  That not only simplifies application logic; it also preserves the logical consistency of the queries themselves.

If "year" is always an integer -- never NULL, never a string -- then avg(year) and count(year) are always correct.  But if the database contains for "year" a string like "it was a good one", or NULLs, they're both unreliable.

--jkl
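A throwaway sketch of the "year" argument (table and data invented):

    CREATE TABLE films (
        title TEXT NOT NULL,
        year  INTEGER NOT NULL CHECK (typeof(year) = 'integer')
    );

    INSERT INTO films VALUES ('M', 1931);                    -- accepted
    INSERT INTO films VALUES ('Intolerance', 'a good one');  -- rejected by the CHECK

    -- every surviving year is an integer, so these are trustworthy
    SELECT avg(year), count(year) FROM films;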
Re: [sqlite] Unexplained table bloat
On Sun, 12 Jan 2020 15:48:05 -0700 "Keith Medcalf" wrote:

> SQL is simply a Structured Query Language that can be overlaid on a
> variety of datastore models, one of which is the Relational model.

I think that's misleading, Keith, for a couple of reasons.

SQL was invented by IBM expressly and specifically for its "relational" product.  Its wordiness is the product of its roots in the "4th generation" languages in vogue at the time, the promise of which was to permit users to "write their own reports".  Hence the strict select-from-where syntax, meant to be so simple that untrained users could figure it out.

While it's true, as you say, that many pre-relational systems (and post-relational ones, later, feh) hung SQL on their shingles, those were never coherent implementations.  They were a way to say Yes! when the question was, "Do you support SQL?"

> Many (most in fact) datastores require that all instances of the same
> "column" in an "entity" be the same type

To the extent that SQL implements relational algebra/calculus, its utility and consistency *requires* that each column be of a defined type.  For example, if the query includes "where A between 0 and 1" and A is *not* a numeric value, then the query is reduced from rigorous first-order predicate logic to gibberish.

SQLite serves a particular niche very well.  The choice not to enforce type constraints for declared datatypes biases the system toward ease of insertion.  That has advantages in the SQLite problem domain, but the trade-off comes at a price: it makes SQLite *harder* to use in a rigorous way.

--jkl
Re: [sqlite] Best way to store key,value pairs
On Mon, 13 Jan 2020 10:37:57 -0800 Jens Alfke wrote:

> > On Jan 12, 2020, at 4:12 PM, James K. Lowden wrote:
> >
> > What is the motivation behind this advice?  It's completely
> > unnecessary.
>
> Thanks for your opinion, James!  I disagree.
> ...
> IMHO there are too many of them to do the usual SQL thing and add
> every header of every message to a table -- that multiplies the
> number of inserts and deletes by an order of magnitude.

Hi Jens,

I asked for your rationale; thanks for your answer.

So, basically, a normalized design requires too much use of INSERT?  You're making an efficiency argument here, or maybe an ease-of-implementation assertion.  For me, inserting one header row or 20 is the same coding effort (still need a loop).  I think transaction throughput would be about the same if COMMIT is applied only to whole messages.

The OP wanted some basic design advice; he didn't say a simple, straightforward design was too much work or too slow.  He in fact said,

> > > local small NNTP server

Given that, ISTM that textbook SQL 101 advice is in order.  JSON should wait until your assumptions are tested.

> > For all the complexity JSON adds to the design, it adds exactly no
> > power: precisely the same effects can be achieved without it.
>
> Well sure, but you could say the same thing about indexes, couldn't
> you?

No.  Perhaps I should have been more explicit about what "complexity" I was talking about.  I'm saying you've added a user-visible aspect, JSON, to the logical database design for him to cope with, but in no way made the database capable of representing something it otherwise couldn't.  Indexes are the opposite: invisible affordances that don't affect the database's logical design.

> Encoding headers as JSON is pretty simple if you've got a JSON
> encoder handy

Perhaps.  It's still introducing an extraneous technology to the user's problem domain.

> > I see no advantage to introducing JSON to a system with no external
> > use for it.
>
> Hm, a number of database companies (such as my employer, Couchbase)
> and their customers would disagree with you :)

Please don't take this personally, because I don't intend insult.  For years I worked on databases before Couchbase existed, and for which using Couchbase even today would be considered a joke.  DBMSs are used for all kinds of purposes by people well trained and not, to good effect and bad.  The number who don't understand the basic theory of what they're working with far exceeds those that do.  Half of them are below average, and the average isn't very high.  I'm sure you'll understand if popular opinion doesn't impress me.

I get why you would do it your way.  In your estimation, given your particular skills, you feel it would be easier to use the JSON API.  I didn't want to leave unchallenged the "6 of one, half a dozen of the other" impression that bringing JSON into the mix is just a matter of taste.  Simpler systems are better, I'm sure you'd agree.

--jkl
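For what it's worth, a minimal sketch of the textbook design I have in mind; the table and column names are invented:

    CREATE TABLE messages (
        msgid INTEGER PRIMARY KEY,
        body  TEXT NOT NULL
    );

    CREATE TABLE headers (
        msgid INTEGER NOT NULL REFERENCES messages(msgid),
        seq   INTEGER NOT NULL,      -- header order within the message
        name  TEXT    NOT NULL,
        value TEXT    NOT NULL,
        PRIMARY KEY (msgid, seq)
    );

    -- One message and its 20 headers are still one transaction:
    -- BEGIN; one INSERT into messages; one INSERT per header; COMMIT.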
Re: [sqlite] Best way to store key,value pairs
On Tue, 7 Jan 2020 17:11:45 -1000 Jens Alfke wrote:

> Consider encoding the headers as JSON and storing them in a single
> column.  SQLite has a JSON extension that makes it easy to access
> values from JSON data in a query.

What is the motivation behind this advice?  It's completely unnecessary.  For all the complexity JSON adds to the design, it adds exactly no power: precisely the same effects can be achieved without it.

I can understand the utility of using SQLite's JSON features where there's pre-existing JSON, especially if there's need to preserve it and reproduce it later.  I see no advantage to introducing JSON to a system with no external use for it.

--jkl
Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min
On Mon, 9 Dec 2019 22:02:07 -0500 Richard Damon wrote:

> If we assume that over-committing has been removed, then the fact
> that the fork succeeded is the promise that both processes have the
> right to access all of their address space.  Any page that is
> writable needs to have swap space reserved,

Yes, that's SOP in most systems, and not expensive.  The kernel need not write anything to swap; it just has to book the space in its swap account.

--jkl
Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min
On Sat, 7 Dec 2019 05:23:15 + Simon Slavin wrote:

> (Your operating system is allowed to do this.  Checking how much
> memory is available for every malloc takes too much time.)

Not really.  Consider that many (all?) operating systems before Linux that supported dynamic memory returned an error if the requested amount couldn't be supplied.  Some of those machines had 0.1% of the processing capacity, and yet managed to answer the question reasonably quickly.

Oversubscribed memory rather has its origins in the changed ratio of the speed of RAM to the speed of I/O, and in the price of RAM.  As RAM prices dropped, our machines got more RAM and the bigger applications that RAM supported.  As memory got faster, relatively, the disk (ipso facto) got slower.  Virtual memory -- the hallmark of the VAX, 4 decades ago -- has become infeasibly slow, both because the disk is relatively slower than it was and because more is being demanded of it to support today's big-memory applications.  Swapping in Firefox, at 1 GB of memory, who knows why, is a much bigger deal than Eight Megabytes and Constantly Swapping.

If too much paging makes the machine too slow (however measured), one solution is less paging.  One administrative lever is to constrain how much paging is possible by limiting the paging resource: swap space.  However, limiting swap space may leave the machine underutilized, because many applications allocate memory they never use.

Rather than prefer applications that use resources rationally, or administer machines to prevent thrashing, the best-effort, least-effort answer was lazy allocation, and its infamous gap-toothed cousin, the OOM.  Nothing technical mandates oversubscribed memory.  The problem, as ever, is not with the stars, but with ourselves.

--jkl
Re: [sqlite] Things you shouldn't assume when you store names
On Wed, 13 Nov 2019 15:37:15 -0600 Peter da Silva wrote:

> My last name has a space in it.  Don't get me started.

My phone number has dashes in it, two to be exact.  I don't remember the last website that accepted it verbatim.

We're pretty far from a database discussion.  It is a wonder, though, that Postel's Law is -- on the web built on the Internet he helped create -- observed almost entirely in the breach.

--jkl
Re: [sqlite] LEFT JOIN: ON vs. WHERE
On Mon, 28 Oct 2019 12:10:38 -0600 "Keith Medcalf" wrote:

> CREATE TABLE t1 (a, b);
> INSERT INTO t1 VALUES(1,2);
> INSERT INTO t1 VALUES(2,3);
> INSERT INTO t1 VALUES(1,3);
> CREATE TABLE t2 (b, c);
> INSERT INTO t2 VALUES(2,3);
> INSERT INTO t2 VALUES(3,4);
>
> select * from t1 left join t2 on t1.b == t2.b and t1.a == 1;
> 1|2|2|3
> 2|3||
> 1|3|3|4

Thank you, Keith.  What you're saying is that when ON applies to the outer table, it still constrains the JOIN.  The tuple

    2 3 3 4

would have been produced, but for the ON restriction t1.a = 1.

> This sort of thing is useful

I don't doubt it.  I can't remember ever having written a query like that (not knowing that's what it would do).  I would probably have expressed the giraffe-neck problem as

    select * from t1 left join (
        select * from t2 where b <> 1
    ) as t2
    on t1.b = t2.b

because usually, in the domains I worked with, the constraint on the inner table wasn't a function of the join criteria, but of some type or group or category column, or date.

--jkl
Re: [sqlite] Single or double quotes when defining alias?
On Fri, 25 Oct 2019 23:55:20 +0200 Thomas Kurz wrote:

> SELECT column1 AS 'c'
> --or--
> SELECT column2 AS "d"
>
> On the one hand, the name refers to a column or table identifier.

The SQL-92 standard refers to that kind of name as a "correlation name", and its BNF grammar designates a correlation name as a kind of identifier.  Therefore, syntactically, "d" is correct, because double-quotes are used to quote identifiers.

Which was news to me.  I've always used single-quotes for correlation names (on creation, never reference).  Not because they need quoting.  I never choose a correlation name that needs to be quoted; normally they're just 3 lower-case letters, at most.  I would quote them only to make them stand out for the syntax highlighter.  And who doesn't like pretty SQL?

--jkl
Re: [sqlite] LEFT JOIN: ON vs. WHERE
On Sun, 27 Oct 2019 15:09:03 -0700 Benjamin Asher wrote:

> My colleagues and I are trying to understand the role of ON
> constraints vs WHERE clauses in JOINs.

ON applies before JOIN.  WHERE applies after.  That's a loose interpretation, but IMO it's nevertheless a useful way to think about it.

It *mostly* doesn't matter.  ON was invented for outer joins.  In SQL-89, there was only the inner join.  There was no JOIN keyword.  You listed all the tables in FROM:

    FROM R, S
    WHERE R.key = S.key

and all was good with the world.

To support outer joins, vendors invented various syntaxes.  To unify them, SQL-92 defined JOIN and ON:

    FROM R JOIN S ON R.key = S.key

With that syntax, JOIN could be modified to be LEFT or RIGHT outer, and the inner table could be further restricted:

    FROM R LEFT OUTER JOIN S
    ON R.key = S.key AND S.col = 'foo'

That means: all rows from R, joined to rows in S for which col = 'foo'.  If no such S rows exist, every result row will be NULL for the S columns.  Compare with

    FROM R LEFT OUTER JOIN S ON R.key = S.key
    WHERE S.col = 'foo'

Here, R and S are joined, and the WHERE test is applied to the joined result.  If no such S rows exist, there will be no result rows at all.

When wrestling with this topic, it's useful to remember that ON doesn't constrain the outer table:

    FROM R LEFT OUTER JOIN S
    ON R.key = S.key AND R.col = 'foo'

Here, AND R.col = 'foo', while valid syntax (unfortunately), has no effect on which rows of R appear.  R is the outer table, and all rows of the outer table are always produced, no matter what ON says about it.  Perhaps the above should mean:

    FROM (select * from R WHERE R.col = 'foo') as R
    LEFT OUTER JOIN S ON R.key = S.key

but it does not.  Perfection remains, as ever, elusive.

--jkl
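A small runnable illustration of the difference, with invented tables (the join column is called id here):

    CREATE TABLE r (id INTEGER, col TEXT);
    CREATE TABLE s (id INTEGER, col TEXT);
    INSERT INTO r VALUES (1, 'foo'), (2, 'bar');
    INSERT INTO s VALUES (1, 'foo');

    -- restriction in ON: every row of r appears; unmatched rows get NULLs for s
    SELECT * FROM r LEFT JOIN s ON r.id = s.id AND s.col = 'foo';
    -- 1|foo|1|foo
    -- 2|bar||

    -- restriction in WHERE: applied after the join, so the NULL-extended row is gone
    SELECT * FROM r LEFT JOIN s ON r.id = s.id WHERE s.col = 'foo';
    -- 1|foo|1|foo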
Re: [sqlite] Date time input
On Mon, 7 Oct 2019 18:17:14 +0100 Simon Slavin wrote:

> Converting data to and from a convenient storage format is not the
> job of a DBMS.

While I have no quarrel with your specific point about date strings, this particular statement is too broad.  Arguably, data conversion is an inherent, central part of what the DBMS does.  Unless, that is, you're adept at interpreting IEEE floating point as 1s and 0s.

--jkl
Re: [sqlite] Date time input
On Tue, 8 Oct 2019 09:06:24 -0700 Jens Alfke wrote:

> I think the idea of a semi-official "SQLite++" has been floated here
> before

OK, but it needs a better name.  What better place than here to debate that?  ;-)

What's the opposite of "Lite"?  I don't know.  It's like asking for the opposite of "organic" milk.  What we have is "milk" and "organic milk".  In a just and rational world, we'd have "milk" and "industrial milk".  Milk is, after all, organic to start with.  Moving on...

We can't be fat-shaming and all, calling it "SQLfat" or "SQLobese" or somesuch.  "SQLoaded" seems intoxicated.  "SQLplus" isn't accurate; it's not more than SQL.

IMO something fanciful is called for.  I nominate "SQLippo".  After the hippopotamus, not liposuction, but it's mnemonic either way.  Mud, mud, glorious mud!

--jkl
Re: [sqlite] DBMS Normalization Query
On Wed, 2 Oct 2019 21:42:13 -0700 (MST) shivambhatele wrote:

> 1. When is the process of normalization used?
> 2. Boyce and Codd Normal Form
> 3. 1NF, 2NF, and 3NF

Good information on database theory is remarkably scarce on the web, and difficult for the beginner to distinguish from bad information on database theory.  And, actually, that's the case for almost any body of knowledge.  If you want to master a discipline, books are a better resource.

I have tried referring people to Wikipedia before.  It's not necessarily wrong; it's more a question of completeness and explication.  You just can't replace a book with a web page or pages.  I recommend

    An Introduction to Database Systems (8th Edition), 2003, by C.J. Date

It's useful to understand normalization in the context of relational algebra: by removing redundancy, you defend against anomalies that would otherwise result from mathematically correct operations.  So, well, more than one chapter to read.  :-)

--jkl
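To give the flavor of it, a tiny invented example: storing a department's name with every employee invites an update anomaly, which normalization removes.

    -- Unnormalized: dept_name is repeated in every employee row, so updating
    -- the name in one row and not another leaves the data contradicting itself.
    CREATE TABLE emp_flat (
        emp_name  TEXT,
        dept_id   INTEGER,
        dept_name TEXT
    );

    -- Normalized: each fact is stated exactly once.
    CREATE TABLE departments (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL
    );
    CREATE TABLE employees (
        emp_name TEXT NOT NULL,
        dept_id  INTEGER NOT NULL REFERENCES departments(dept_id)
    );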
Re: [sqlite] Programming methodology (was DEF CON (wasL A license plate of NULL))
On Tue, 13 Aug 2019 16:47:43 -0400 Richard Hipp wrote:

> On 8/13/19, Jose Isaias Cabrera wrote:
> >
> > I see all of you smart programmers using this
> > non-column matching behavior, and I ask myself why?
>
> Because that's the way Dennis Ritchie did it.  :-)

That's right.  Like many of a certain age, I learned C from K&R, and adopted Ritchie's style.

I'm reminded of Bjarne Stroustrup's complaint about the C++ standardization process.  He would ask those assembled to offer suggestions for how C++ could be made easier to use and more approachable for the novice.  That question was always met with silence.  If you want a lively discussion, he said, ask where the curly braces belong.

--jkl
Re: [sqlite] DEF CON (wasL A license plate of NULL)
On Mon, 12 Aug 2019 14:14:08 -0600 "Keith Medcalf" wrote:

> Perhaps I am just lazy but I see no point in engaging in extra work
> for no advantage

    #include <stdbool.h>

    bool is_true (bool tf) {
        if (tf == true) {
            return true;
        }
        return false;
    }

--jkl
Re: [sqlite] [EXTERNAL] SQLITE_BUSY (SQLITE_BUSY_SNAPSHOT) due to a simple SELECT query -- Documentation needed!
On Mon, 12 Aug 2019 12:02:33 +0200 Kira Backes wrote:

> I would change the error description for SQLITE_BUSY_SNAPSHOT so that
> it no longer says "read transaction" but instead says "read
> transaction or ongoing select statement".

While I agree the documentation could be clarified, that wouldn't be a particularly good change.  A "read transaction" is, exactly, an "ongoing select statement".  More precisely, the *execution* of a SELECT statement is a read transaction, whether or not preceded by BEGIN TRANSACTION.  The person reading the documentation is expected to know that.  It's defined by SQL, and has nothing to do with the SQLite implementation.

I think you actually got burned by multithreading.  Let's just say you're not the first.  ;-)

To the extent the documentation could be made clearer, I would suggest it focus on the interface and not the implementation.  Knowledge of locks, while interesting, should not be required to use transactions correctly or to understand SQLITE_BUSY_SNAPSHOT.

--jkl
Re: [sqlite] [EXTERNAL] sqlite3_exec without ubiqitous text conversions
On Wed, 31 Jul 2019 12:05:05 +0200 Olivier Mascia wrote:

> Nothing stops any piece of your own programming or anyone using the
> CLI to do:
>
> INSERT INTO "VALUES"(value_int) VALUES('something');

Not nothing, just nothing automatic.

    value_int INTEGER not NULL
        check( typeof(value_int) = 'integer' )

will do nicely.

--jkl
Re: [sqlite] Quirks of SQLite. Was: Version 3.29.0
On Thu, 11 Jul 2019 10:21:10 -0400 Richard Hipp wrote:

> If you have personally experienced some unusual or unexpected feature
> of SQLite that you think should be added to "quirks.html", please
> follow-up to this thread

Thank you for publishing this page.  I would suggest these additions:

1.  Integer division by zero is not an error.  It results in NULL.

2.  Update is not atomic.  Each row is written one at a time, and "intermediate" updates that (temporarily) violate UNIQUE constraints cause the update to fail, even if the constraint would be satisfied were the update carried to completion.

3.  A table of isolation levels in SQL terms (repeatable read, etc.).  Isolation is affected by WAL and BEGIN TRANSACTION.  SQLite differs in that way quite sharply from other DBMSs.

Of these, #2 is the most significant, because it's an unambiguous violation of the SQL standard.  I'm unaware of any other SQL implementation that enforces UPDATE constraints at a level invisible to the user.

--jkl
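Throwaway sketches of #1 and #2:

    -- #1: integer division by zero yields NULL, not an error.
    SELECT 1 / 0;               -- NULL

    -- #2: UNIQUE is enforced row by row during the UPDATE, so this can fail
    -- even though the finished result would satisfy the constraint.
    CREATE TABLE t (i INTEGER UNIQUE);
    INSERT INTO t VALUES (1), (2), (3);
    UPDATE t SET i = i + 1;     -- may fail: changing 1 to 2 collides with the existing 2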
Re: [sqlite] the sqlite3 documentation would be pretty good if it wasn't tragic...
On Sat, 22 Jun 2019 23:14:08 -0700 Ben Earhart wrote:

> can't be bothered to write example sql code

While I'm sure you're irritated, that criticism is misplaced.  You might want to take a step back.

Tools that work with standardized languages don't define the language they process.  You won't find many examples included with your C compiler or ODBC driver, either.  It's not a matter of can't be bothered.  It's a matter of choosing.

Anyone who's done a lot of writing, especially technical writing IMO, has the problem of deciding what to include and what to exclude.  Anyone who reads documentation appreciates the difficulty of finding the relevant information, and of skipping over what isn't relevant in case it (surprisingly) is relevant.  SQL examples in SQLite documentation, except where they illustrate some peculiar aspect of SQLite's SQL, would be only so much noise: they would hinder the job of understanding the grammar.  Necessarily, being examples, they would highlight only certain features of the grammar.

You may say examples would help the beginner.  But the reference manual is not a tutorial and not a user guide and not an introduction to SQL.  The beginner is well advised to consult those kinds of materials as a way to learn SQL, and come back to the SQLite manual to learn how to use SQLite.  Specifically.

I started learning SQL before Bill Gates discovered the Internet.  "Diving in" in those days meant going down to the bookstore at lunchtime to find out what there was to find out.  Still today, the best way to learn about something is to read about it from someone who wants to explain it to you.  CJ Date has sold 800,000 copies of his textbook, which in the technical book market is a runaway best seller.  There are dozens of others just as good but not as popular.

Avoid, if I may suggest, anything that promises to make SQL easy or implies that it's hard.  It's not hard.  But it may well be the only language you ever use that is grounded in math & logic.  It has a more-than-casual relationship to the Relational Model, itself based on set theory and first-order predicate logic.  It's worth your time to understand that, and you might as well work with an author who wants you to.

BTW, SQL is more standardized than some give it credit for.  While it's true that a given statement may be accepted by one DBMS and not another, a great swath of the language -- all the important parts -- works just fine.  It's quite rare to find two implementations that both accept a standard query and produce different results from the same data.

Have fun storming the castle.

--jkl
Re: [sqlite] Understanding the WITH clause
On Mon, 17 Jun 2019 20:46:41 -0400 Sam Carleton wrote:

> It is my view that peace is not something that can be defined with
> some words, rather it is a universal experience.

"But peace is not merely the absence of this tension, but the presence of justice."

    -- MLK, Letter from the Birmingham Jail

--jkl
Re: [sqlite] Should SQLite distinguish between +0.0 and -0.0 on output?
On Wed, 12 Jun 2019 18:40:19 -0400 Richard Hipp wrote:

> On 6/12/19, James K. Lowden wrote:
> > what kind of computation
> > would lead to a value in memory representing -0.0?
>
> 0.0/-1.0

Fine.  I suspect the reason -0.0 has never cropped up as an issue in my experience is that -0.0 == 0.0.  The existence of -0.0 never mattered because it was computationally irrelevant.

I couldn't tell from your reference to Wolfram whether or not you considered the "negative zero is not math" argument to be dispositive (so to speak).  If you're still considering rendering "-0.0" in the event the floating point unit happened to end up with "negative zero", are you also going to provide a way for users to detect the sign bit and "positivize" zero, such as through a SQLite function for signbit(3)?  If so, to what end?

IMO this whole discussion is a tempest in a teapot about angels dancing on the head of a pin.  I have yet to see anyone offer any advantage of treating -0.0 as anything other than 0.  Far more important is integer division by zero.  SQLite disguises it as NULL, making it undetectable and indistinguishable from genuinely missing information.

--jkl
Re: [sqlite] [SPAM?] Re: [EXTERNAL] Re: Should SQLite distinguish between +0.0 and -0.0 on output?
On Thu, 13 Jun 2019 10:44:52 -0400 Doug Currie wrote:

> > Except by the rules of IEEE (as I understand them)
> >
> > -0.0 < 0.0 is FALSE, so -0.0 is NOT "definitely left of true zero"
>
> Except that 0.0 is also an approximation to zero, not "true zero."

Sure, because floating point numbers are approximations of real numbers.

> Consider that 1/-0.0 is -inf whereas 1/0.0 is +inf

Consider too that both are mathematical nonsense.  Just reading over the documentation, it seems it's also detectable nonsense: fenv(3) describes FE_DIVBYZERO.

--jkl
Re: [sqlite] Should SQLite distinguish between +0.0 and -0.0 on output?
On Wed, 12 Jun 2019 12:57:22 -0400 Richard Hipp wrote:

> On 6/12/19, Thomas Kurz wrote:
> > > For an SQL engine, the next-best-thing to strict binary IEEE754 is
> > > not sloppy binary IEEE754, its probably strict decimal IEEE754.
> >
> > That would be a *really great* improvement!
>
> It would also be a file format change, rendering about 1e12 existing
> database files obsolete.

Is that necessarily the case?  Could blob types be "overloaded" somehow, such that existing code sees a blob and new code recognizes said blob as decimal numeric?

For example, if the first two bytes of the expansion area of the database header (at offset 72) were 0x0bcd (say), then record type 44 could represent a 128-bit decimal encoding [16 bytes = (44 - 12)/2].  Existing programs wouldn't examine the expansion area, and would interpret the field as a blob.  New programs would interpret type 44 as decimal or blob, depending on the CREATE TABLE definition.

--jkl
Re: [sqlite] Should SQLite distinguish between +0.0 and -0.0 on output?
On Wed, 12 Jun 2019 10:28:20 -0600 "Keith Medcalf" wrote:

> Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916
> 64 bit (AMD64)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> a = -0.0
> >>> b = 0.0
> >>> print(a,b)
> -0.0 0.0
>
> Really difficult hoops to jump through are they not?

    $ printf "%f\n" -0.0
    -0.000000

A poor choice of words.  What I meant was, what kind of computation would lead to a value in memory representing -0.0?  I don't remember ever coming across one.  Just because we can print -0.0 doesn't mean it's meaningful.

    #include <stdio.h>

    int main() {
        if( -0.0 == 0.0 ) {
            printf("all zeros are zero\n");
            return 0;
        }
        printf("minus zero is real\n");
        return 1;
    }

    make negzero && ./negzero
    cc negzero.c -o negzero
    all zeros are zero

--jkl
Re: [sqlite] Should SQLite distinguish between +0.0 and -0.0 on output?
On Wed, 12 Jun 2019 09:35:13 -0400 Richard Hipp wrote:

> Question: Should SQLite be enhanced to show -0.0 as "-0.0"?

No.

1.  Prior art.  I can't think of a single programming language that displays -0.0 without jumping through hoops.

2.  Math.  Negative zero is not a mathematical concept.

3.  Utility.  There is none.  The user is only inconvenienced.

Negative zero is an artifact of IEEE 754, not a feature.  By contrast, IEEE adopted 2s-complement signed integers specifically to avoid negative zero.  That's evidently harder to do with floating point.  ;-)  I see no advantage to exposing a technical mechanical artifact to the user.

--jkl
Re: [sqlite] New Information Schema Views
On Fri, 07 Jun 2019 01:43:18 -0600 "Keith Medcalf" wrote:

> -- Catalog Views using sqlite_master for SysObjects (Object Names)
> -- requires that the eval() extension function be loaded and available
> -- all TEXT columns in views have "collate nocase" attached to the
> -- output columns to ensure that where conditions on retrievals are
> -- not case sensitive
> -- Column Names in views defined so as to not conflict with keywords
> -- to ensure quoting when using the views is not required
> -- table/view/index name MUST NOT contain embedded semicolons (;)

I would like to play Omniscient Narrator for a moment and mention to our listening radio audience that this is not the first substantial contribution by the inimitable Mr. Medcalf.  The other one that I remember was a complete set of functions for math.h.  I'm sure there are others.

For users coming to SQLite and knowledgeable about SQL standards, INFORMATION_SCHEMA is the obvious place to look for schema metadata, much easier to query and remember than a variety of functions and pragmas and whatnot.  It's not beautiful, but then very little about SQL is beautiful.  What's important is that it works and is understandable and standard.

Thanks, Keith.  You just saved a lot of people a lot of work.

--jkl
Re: [sqlite] sqlite-users Digest, Vol 138, Issue 4
On Tue, 4 Jun 2019 11:26:12 -0700 "Doug" wrote:

> select songfile_id,dancename,dancegroupname from songfiletable where
> dancename like "Waltz";

What Shawn Wagner's answer shows you is that 'Waltz' is a string and "Waltz" is a column name, because in SQL double-quotes denote identifiers.  They don't denote strings, unlike as in, say, C.  The double-quote escape syntax lets you have odd column names with spaces and such:

    create table "The Blue Danube" (
        "Waltzing Matilda" text not NULL primary key
    );

If there's no column named "Waltz" in songfiletable, that's a bug IMO.

As a matter of style, what is songfiletable?  A set of songs, or a file, or a table?  Why not just "songs"?

    create table songs (
        id integer not null primary key,  -- probably not needed
        dance ... ,
        dance_group ... ,                 -- or just "group", but see next
    );

If songs have names and dances, and dances have groups, then dancegroupname belongs in another table, "dances".

HTH.

--jkl
Re: [sqlite] round function inconsistent
On Fri, 24 May 2019 13:10:49 + Jose Isaias Cabrera wrote:

> > Consider these two queries:
> >
> >     SELECT round(3.255,2);
> >     SELECT round(3.2548,2);
> >
> > Do you expect them to give different answers?
>
> 3.26
> 3.25

    sqlite> SELECT cast(100 * (0.005 + 3.2548) as
       ...> integer)/100.0;
    3.26

--jkl
Re: [sqlite] How to set access permissions to protect a database file?
On Sun, 26 May 2019 19:52:29 +0800 Adrian Ho wrote:

> Finally, create a "reading_room" script that your users will run:
>
> #!/usr/bin/env bash
>
> sudo -u reading_room /path/to/reading_room.tcl

This script is more efficient and portable:

    #! /bin/sh
    sudo -u reading_room /path/to/reading_room.tcl

--jkl
Re: [sqlite] SQL Features That SQLite Does Not Implement
On Wed, 22 May 2019 17:56:23 -0700 Jens Alfke wrote:

> > On May 22, 2019, at 3:55 PM, James K. Lowden wrote:
> >
> > I've always thought user-defined types were unnecessary except as a
> > convenience.
>
> User-defined types are quite important if you're doing fancy stuff in
> user-defined functions, where data that's stored in tables as blobs
> has an internal structure visible to those functions.  (JSON is a
> good example, and the reason why SQLite added its "subtypes" feature.)

I see.  So a user-defined type here isn't some combination of extant primitive types, but a blob that *could* be represented by extant types but is instead represented another way.

--jkl
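In SQLite terms, a rough sketch of the idea, assuming the JSON1 functions are available: the column is just text to the storage layer, but the functions see its internal structure.

    CREATE TABLE docs (body TEXT);   -- JSON stored as ordinary text
    INSERT INTO docs VALUES ('{"kind":"note","tags":["sql","types"]}');

    -- the functions interpret structure the storage layer knows nothing about
    SELECT json_extract(body, '$.kind')    FROM docs;   -- 'note'
    SELECT json_extract(body, '$.tags[1]') FROM docs;   -- 'types'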
Re: [sqlite] User Defined Types implementations ...
On Wed, 22 May 2019 21:16:04 -0600 "Keith Medcalf" wrote:

> Basically, when you declared something as a UDT you were giving a
> "blob" a type-domain.  Whenever you tried to do something with a UDT
> a "mangled function name" was generated that took that blob as the
> first argument and you merely implemented a bunch of functions with
> the appropriate names that DB2 would use whenever you referred to
> that UDT type, rather than using the builtin functions (this is
> similar to the way languages like Python implement classes).

I see.  First, declare a type U.  Then define a function, say,

    U max( U, U )

Now, if we have a table T( U u ), we can

    SELECT max(u) FROM T

to invoke our function instead of the built-in max().

What about operators, then, as long as we're talking C++?

    U operator<(U u)

If you could define equivalence and less-than, you could join on, sort by, or get the maximum.  In fact (speaking of Python), you get those for free if there's an implicit conversion to string.

--jkl
Re: [sqlite] SQL Features That SQLite Does Not Implement
On Wed, 22 May 2019 14:20:11 -0600 "Keith Medcalf" wrote:

> (such as was added to DB2 back in the late 80's early 90's, and
> which I do not think anyone else has implemented as nicely anywhere
> else)

That's an interesting aside.  It would make an interesting OT thread, if you're inclined to start it.  ;-)

I've always thought user-defined types were unnecessary except as a convenience.  There are few new primitive types; most user-defined types I can think of are "structures" -- sets of columns -- that one might like to name and constrain as a new type that may appear in many tables.  About the only primitive types I can imagine are mathematical: complex numbers or exact numeric representations.

--jkl
Re: [sqlite] SQLite with single writer on Windows network share
On Wed, 8 May 2019 21:36:43 +0200 Clemens Ladisch wrote:

> Andrew Moss wrote:
> > ... an SQLite database hosted on a windows network share (using
> > server 2012 R2 or later).  We are well aware this is not advisable
>
> There are three possible sources of network filesystem data
> corruption:
>
> 1) Bad locking implementations.  Some Unix-y network filesystem
>    products allow to disable locking (in a misguided attempt to
>    increase performance), or do not implement locking at all.

Although it doesn't solve anything, it's useful to understand that "bad" locking is not the whole story.  File operations under NFS -- by design -- do not agree with the semantics on a local filesystem.

The basic problem is that file I/O operations act on the kernel's filebuffer, not on what's on the disk.  Since all processes on one machine share the same kernel, they share the same filebuffer, and that filebuffer is, by definition, an accurate representation of the state of the filesystem.

With NFS, that's not true.  There are as many filebuffers as there are nodes using the fileserver, plus the fileserver's own.  If machine X updates the file, nothing informs machine Y.  If the second machine acts on stale information in its filebuffer -- boom! -- welcome to a corrupted file.

As a matter of fact, when machine X updates the file, nothing guarantees *any* data reach the server.  Data are not necessarily written until the file is closed.  Under SQLite, that could be a long time indeed!

Even if locks were honored with perfect fidelity, inconsistent filebuffers in different clients sharing the same file provide lots of opportunity for inconsistency.  A low-traffic system with a single writer and not much contention might not bump into it very often (or notice when it does!), but on NFS none of the ACID promises SQLite makes are actually in force.

You may read that NFSv4 solves locking problems and others.  It does not alter the basic consistency guarantee, though, that nothing is assuredly on the disk until the file is closed.  It does not inform other clients' kernels of changes to files they're sharing.  NFSv4 provides new functions to commit data to the disk and ascertain whether the current in-kernel image is up to date.  SQLite does not use those functions and, even if it did, the mishmash of implementations would make any guarantees tenuous at best.

If this sounds like an indictment of NFS, it's really not.  Ironically, back when NFS was being invented, the cognoscenti already knew that what the filesystem couldn't provide, database servers could, and would, and soon did.  Any problems with consistency, concurrency, locking, etc., were evidence that the process required a DBMS, not that the filesystem was insufficient to the purpose.  They were right: the inconsistent-information problem is better solved not by disseminating the information across N filebuffers, but by routing the information through a single one, on a shared DBMS server.

--jkl
Re: [sqlite] Custom collation of blobs
On Fri, 26 Apr 2019 10:36:34 -0700 Jens Alfke wrote:

> The only workaround I can think of is to define a custom function
> that converts a blob to a string and collate using those strings --
> e.g. `... ORDER BY collatable_blob(b)`.  But this requires expensive
> string conversions,

Your UDF need not convert to a string.  It could return an integer or, if that's not enough, couldn't it return a different blob whose bit order yields the correct ordering?

> and it doesn't work well with indexes.

If indexing is important, that's a clue that the blob is not your friend to start with.  It might be worthwhile to store the searchable components in "redundant" columns that can be readily indexed.

By analogy, if your blob is a PDF, it might be convenient to store the author and title in columns for search & display purposes.  If you write UDFs to pull out the pieces, you could invoke them in a trigger to keep the "redundant" columns consistent.

--jkl
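A sketch of that trigger idea; pdf_title() stands for a hypothetical application-defined function, and the table is invented:

    CREATE TABLE docs (
        doc   BLOB NOT NULL,
        title TEXT              -- redundant, extracted copy, kept so it can be indexed
    );
    CREATE INDEX docs_title ON docs(title);

    -- pdf_title() is a made-up UDF the application would have to register
    CREATE TRIGGER docs_ai AFTER INSERT ON docs
    BEGIN
        UPDATE docs SET title = pdf_title(new.doc) WHERE rowid = new.rowid;
    END;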
Re: [sqlite] Regarding sqlite3 reliability in using in service oriented architecture
On Fri, 26 Apr 2019 02:09:33 +0100 Lullaby Dayal wrote:

> We use sqlite3 in an embedded automotive system based on QNX
> hypervisor running multiple virtual machines.

Am I the only one who reads a sentence like that and thinks, "I don't want to drive that car"?  I hope the embedded automotive system is in the garage, and not under the hood.

--jkl
Re: [sqlite] Multiple Independent Database Instances
On Mon, 22 Apr 2019 21:25:31 + "Lee, Jason" wrote:

> I have a set of several million database files sitting on my
> filesystem.  Each thread will open a previously unprocessed database
> file, do some queries, close the database, and move on to the next
> unprocessed database file.

Fascinating.  One wonders what Feynman would have said.

Even with gobs of RAM and solid-state storage, I/O will quickly bottleneck, because the processor is 3 orders of magnitude faster than RAM and 6 orders faster than the disk.  Once you exhaust the I/O bus, it's exhausted.

I would build a pipeline, and let processes do the work.  Write a program to process a single database: open, query, output, close.  Then define a make(1) rule to convert one database into one output file.  Then run "make -j dd", where "dd" is the number of simultaneous processes (or "jobs").  I think you'll find ~10 processes is all you can sustain.

You could use the sqlite3 utility as your "program", but it's not very good at detecting errors and returning a nonzero return status to the OS.  Hence a bespoke program.  Also, you can get the data into binary form, suitable for concatenation into one big file for input into your numerical process.  That will go a lot faster.  Although there's some overhead to invoking a million processes, it's dwarfed by the I/O time.

The advantage of doing the work under make is that it's reusable and restartable.  If you bury the machine, you can kill make and restart it with a lower number of jobs.  If you find some databases are corrupt or incomplete, you can replace them, and make will reprocess only the new ones.  If you add other databases at a later time, make will process only those.  You can add subsequent steps, too; make won't start from square 1 unless it has to.

With millions of inputs, the odds are you will find problems.  Perfectly good input over a dataset that size has probably occurred before in recorded history, but not frequently.

I assume your millions of databases are not in a single directory; I'd guess you have 1000s of directories.  They offer convenient work partitions, which you might need; I have no idea how make will respond to a dependency tree with millions of nodes.

--jkl
Re: [sqlite] compressed sqlite3 database file?
On Thu, 11 Apr 2019 18:45:01 -0600 Warren Young wrote:

> Sure, but what *is* on the disk after a crash is always consistent
> with ZFS, so any decent database engine can recover.

It's been some years, but I saw a presentation about running Postgres on ZFS.  Every "victory" was a way to compensate for a feature of ZFS.  None of the problems surmounted are presented by normal filesystems.

The simple fact is that ZFS does nothing for the DBMS.  As you acknowledge, it does complicate matters.  And the features ZFS supplies slow down the DBMS by stealing I/O cycles that the DBMS could otherwise use productively.

> > It's really not the ideal substrate for a system that takes its
> > fsyncs seriously.
>
> You know, I've just realized that it's been a really long time since
> I've heard anyone seriously talk about running databases on raw
> storage.

I don't know what you mean by "raw" storage.  A storage substrate that really syncs when it's told to, and really did sync when it says it did, doesn't have to be a locally attached SATA drive.  It could be a SAN.  It just has to be a yankee at the end of a dirt road: simple and honest.  ;-)

> I've even heard of people successfully using ZFS snapshots to make
> live, continuous DB replications from one site to another for fast
> failover.

No doubt.  But absence of proof is not proof of absence: just because it seems to work doesn't mean it can be depended on.  If the DBMS developer can't vouch for it, it's a sure bet it's depending on at least one fortuitous coincidence.

> It calls into question how important, relatively speaking,
> lack of mediation is in system storage design.

We mustn't place much faith in common (a/k/a "best") practices, viz:

> Of course raw storage isn't the main alternative to ZFS.  It's
> LVM+md+XFS and similar lash-ups, which are even worse in this regard.

Exactly.  The simple understanding that a DBMS has many of the attributes of an operating system, and therefore needs direct access to the hardware, is lost on many (I would say most) system administrators and not a few database administrators.

Now if you'll excuse me, I have some kids to chase off my lawn.

--jkl
Re: [sqlite] Problem with SELECT by rowid after INSERT
On Fri, 12 Apr 2019 11:40:13 -0400 Jim Dossey wrote:

> CREATE TABLE "sessiond" (
>     "journal" VARCHAR(4)  DEFAULT '' NOT NULL,
>     "session" VARCHAR(16) DEFAULT '' NOT NULL,
>     "pid"     INTEGER     DEFAULT 0  NOT NULL,
>     rowid INTEGER PRIMARY KEY
> );

Although it has nothing to do with the problem you posed, at some point you might want to reconsider this design.

A row in this table is allowed to exist even if it contains no information.  The journal and session can be empty strings, and the pid 0 (invalid).  That, in fact, is the default.  I assume the rowid is meaningless, just a way to refer to the row from another table.

The table has the hallmark of an application design that inserts "empty records" and updates them.  That's a whole other ball of wax, and I don't want to critique an assumption.  But it's easy to see how that kind of design is inefficient and defeats features provided by the DBMS.

The DEFAULT '' NOT NULL construct is particularly pernicious, because NULL is useful in ways a zero-length string is not.  NULL can always be converted to something else with coalesce, and is usually obvious when interrogating the database interactively.  The same cannot be said of zero-length strings.

Instead of allowing any number of meaningless duplicate rows, why not let the DBMS do a little work for you?  For example:

    CREATE TABLE sessiond (
        journal VARCHAR(4)  NOT NULL,
        session VARCHAR(16) NOT NULL,
        pid     INTEGER     NOT NULL check (pid > 0),
        primary key( pid, journal, session )
    );

If the rowid actually helps, sure, use it.  If it's better as the primary key, that's fine; it's an arbitrary choice, logically speaking.  But the {pid, journal, session} tuple should be unique regardless.  If not, what do 2 identical rows represent, and how are they different?  (I don't need to know, but you do.)

When you need to track two indistinguishable events, the solution isn't to allow two undifferentiated rows with an arbitrary discriminator.  The solution is to add a "quantity" column, and count them.  It's easy to delete the row when quantity reaches zero, or cull zero-quantity rows on selection.

For extra credit, add CHECK constraints to enforce the lengths of journal and session.  With a table like that, you can prevent applications from inserting nonsense into the database.

I've used that strategy to good effect.  Once the developers have agreed they don't want garbage in the database, it's pretty rare, when an insert fails, for the programmer to say, "yeah, I meant to do that".  Nearly always, it's a bug.

--jkl
Re: [sqlite] Option to control implicit casting
On Thu, 11 Apr 2019 11:35:04 +1000 John McMahon wrote: > > SELECT x * y & ~1 AS even_numbered_area FROM squares; > > Suggestion: "Don't Do That", use database purely as a storage medium. You yourself don't really believe that! A disk is a storage medium. A file is an undifferentiated stream of bytes. A SQLite database is much more than that, as I'm sure you'll agree. Not least, it includes a query language with a passing resemblance to first order predicate logic and set theory. > You would seem to be working in an edge case environment, in which > case it is your responsibility to make the adjustments. Actually, the OP was illustrating how implicit conversion to double yields incorrect results instead of a domain error. I somehow doubt he has much use for 10^80 on a regular basis. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] compressed sqlite3 database file?
On Wed, 10 Apr 2019 15:14:59 -0600 Warren Young wrote: > On Apr 10, 2019, at 2:12 PM, Keith Medcalf > wrote: > > > > It is far cheaper and much more reliable to just buy some file > > storage space. > > If you're going to buy some more storage, you should put ZFS on it > then, too. :) That's interesting advice for a DBMS mailing list. ZFS has built-in transactions, of a sort. There's enough mediation in the filesystem to frustrate the efforts of the DBMS to make sure that what's committed in the transaction is, in fact, committed to the disk. It's really not the ideal substrate for a system that takes its fsyncs seriously. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] compressed sqlite3 database file?
On Wed, 10 Apr 2019 15:20:32 -0500 Peter da Silva wrote: > > Why would anyone fart about with added complication and the > > concomittant increased unreliability when storage is so damn cheap? > > Embedded systems and mobile devices. > > But of course those probably don't apply here. :) Are you saying my MB Pro isn't portable? If not, I have an Osborne Executive to sell you. I was going to tell the OP that the 1990s called, and they want their STAC floppies back. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Option to control implicit casting
On Mon, 8 Apr 2019 23:08:18 -0400 Joshua Thomas Wise wrote: > I propose there should be a compile-time option to disable all > implicit casting done within the SQL virtual machine. You can use SQLite in a "strict" way: write a CHECK constraint for every numerical column. Just don't do that for tables that are loaded by the .import command. As I reported here not long ago, .import rejects numeric literals. Apparently, the value is inserted as a string and rejected, instead of being converted to a number first. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
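The kind of per-column CHECK meant here, sketched on a hypothetical table; a value that cannot be converted to an integer stays text and fails the constraint:

    CREATE TABLE measurements (
        id    INTEGER PRIMARY KEY,
        value INTEGER NOT NULL CHECK (typeof(value) = 'integer')
    );
    INSERT INTO measurements (value) VALUES (42);      -- passes
    INSERT INTO measurements (value) VALUES ('oops');  -- CHECK constraint failed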
Re: [sqlite] Returning NULL or empty values when the SELECT does not satisfy all of the query
On Fri, 5 Apr 2019 18:54:18 + Jose Isaias Cabrera wrote: > Why does this work I don't know what "work" means, but I can explain the difference. With an outer join, JOIN and WHERE are not the same. In analyzing the query, we consider JOIN before WHERE. > select > a.*, b.* from t as a LEFT JOIN z as b on a.a = b.f > AND > a.idate = (select max(idate) from t where a = a.a) > AND > b.idate = (select max(idate) from z where f = a.a) > where a.a = 'p006' In an outer join, the outer table -- think "outer" as in a Venn diagram -- is the "preserved" table. All rows match, join criteria notwithstanding. This component has no effect: > a.idate = (select max(idate) from t where a = a.a) because "a" is the outer table, and so all rows match, join criteria notwithstanding. The inner table is a little different, and also different from WHERE in an inner join. On the inner table, JOIN restrictions can cause a row not to match that otherwise would. In your query: > b.idate = (select max(idate) from z where f = a.a) "b" rows that don't pass that test are rejected from the join. The "a" columns will appear (because all "a" rows do) and the "b" columns will be NULL. Only once the join is completed do we consider WHERE: > where a.a = 'p006' This restricts rows in the outer table. Now let's look at your other query. > select > a.*, b.* from t as a LEFT JOIN z as b on a.a = b.f The *only* join criterion is a.a = b.f. All rows passing that test are subjected to WHERE: all "a" rows (because outer table) and "b" rows with a matching "f". > where a.a = 'p006' OK, same as #1. > AND > a.idate = (select max(idate) from t where a = a.a) This further restricts the "a" table rows, unlike #1. > AND > b.idate = (select max(idate) from z where f = a.a) This restricts produced rows to those passing the test. Different from #1, no row passes this test for which b.idate is NULL. Whenever you disallow NULL on the inner table of an outer join, you effectively convert the join from outer to inner. You're asking for: 1. all rows in "a", whether or not they match a "b" row, provided 2. they do match a "b" row (because b.idate cannot be NULL) You can vote for anyone in either party from this list of Democrats. Because both AND clauses restrict the output, we can expect the 2nd query to produce fewer rows. If it doesn't, there's no need for an outer join for the data in their current state. By the way, the distinction of JOIN and WHERE is not a relational concept. It was added to SQL-92 in part to provide exactly the separation your question illustrates: how to express an outer join. Relationally, that's not needed. An outer join is nothing but an inner join + a union with the outer table. That's cumbersome to express in SQL, and anything cumbersome to express is hard to optimize, and query optimization was and is the unsolved technical challenge of SQL. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
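The equivalence claimed in that last paragraph, sketched with two hypothetical tables a(k, x) and b(k, y); the first SELECT is the inner join, the second restores the unmatched rows of the preserved table:

    -- Same result as:  SELECT a.*, b.* FROM a LEFT JOIN b ON a.k = b.k
    SELECT a.k, a.x, b.k, b.y
      FROM a JOIN b ON a.k = b.k
    UNION ALL
    SELECT a.k, a.x, NULL, NULL
      FROM a
     WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.k = a.k);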
Re: [sqlite] Remove row to insert new one on a full database
On Fri, 5 Apr 2019 15:45:10 +0300 Arthur Blondel wrote: > The data is always the same. That's why removing one row should be > enough to insert a new one. > My problem is that some times I need to remove many rows to add one > new one. SQLite *could* avoid that problem by pre-allocating space in the journal sufficient to permit a single row to be deleted. But it's not obvious to me that the complexity is worth it, given the size of disks these days and consequent rarity of the problem. If I were in your shoes, I'd consider maintaining a "dummy" file that's expendable in the event of a SQLITE_FULL error. Compute how much space SQLite needs to delete a row. Maybe double that for safety's sake. Create a file that size, and fill it with deadbeef just to be sure. Write functions to create and delete that file, because you'll want to do it consistently. When you encounter SQLITE_FULL, delete the file, do the deed, and re-create the file. If you can't recreate the file, you have an unrecoverable error, but an intact database. It's not a perfect solution. To guard against other processes seizing the space while you're trying to use it, you'd have to wall off the space, maybe with a loopback filesystem. But it'd get you further down the road than you are now. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
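A rough sketch of the ballast-file idea in C, using a hypothetical path and reserve size; the loopback-filesystem caveat above still applies:

    #include <sqlite3.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BALLAST "/var/lib/myapp/ballast"   /* hypothetical path */
    #define BALLAST_SIZE (1024 * 1024)         /* hypothetical reserve */

    /* Create (or re-create) the reserve file, written through to disk. */
    static int create_ballast(void)
    {
        char block[4096];
        FILE *f = fopen(BALLAST, "wb");
        if (!f) return -1;
        memset(block, 0xDB, sizeof block);     /* deadbeef-ish filler */
        for (size_t n = 0; n < BALLAST_SIZE; n += sizeof block)
            fwrite(block, 1, sizeof block, f);
        int ok = fflush(f) == 0 && fsync(fileno(f)) == 0;
        fclose(f);
        return ok ? 0 : -1;
    }

    /* Run a DELETE; on SQLITE_FULL, free the ballast, retry, restore it. */
    static int delete_row(sqlite3 *db, const char *sql)
    {
        int rc = sqlite3_exec(db, sql, 0, 0, 0);
        if (rc == SQLITE_FULL) {
            unlink(BALLAST);                   /* give the space back */
            rc = sqlite3_exec(db, sql, 0, 0, 0);
            if (create_ballast() != 0)
                rc = SQLITE_FULL;              /* unrecoverable, but the database is intact */
        }
        return rc;
    }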
Re: [sqlite] Returning NULL or empty values when the SELECT does not satisfy all of the query
On Fri, 5 Apr 2019 14:01:20 + Jose Isaias Cabrera wrote: > The owners of the business said that "there will never be...", The perfect opportunity for a CHECK constraint or to enforce a foreign key. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ANN: SQLite3 Decimal Extension
On Thu, 4 Apr 2019 11:21:41 -0400 Joshua Wise wrote: > > On the other hand, what table has a floating point number in its > > key? > > > > How do you even express the value of such a key for an exact > > match? > > Well I imagine it can be very useful for range queries. Imagine > Julian dates, coordinate points, rankings, etc. Julian dates are integers. The tm structure is all integers, too. I suppose you could store lat/lon as floating point. It's exactly the kind of data that calls out for a tm-like structure, though, because officially there are 60 minutes in a degree, and 60 seconds in a minute. Just as with time, the governing authorities use a non-decimal notation; decimal fractions of a degree are mere computational convenience. And, again, it's not part of the key. In financial analysis, range queries over large datasets are common. If it's not a range of dates, it's a range of returns/price/earning/capitalization over time. Yet Microsoft SQL Server never suggested we use anything other than IEEE to store the data. Perhaps that's because, more often than not, floating point data are manipulated as part of the query. If you're joining the table to itself to select price change over time to compute, say, variance, the absolute magnitude of the data are uninteresting. You find the stocks by date, subtract the prices and compute the variance, in IEEE format, of course, because that's what the CPU supports. Then you sort and filter the top quintile, or whatever. In such a case, the overhead of floating-point conversion will be significant: twice for every row, overhead that is nonexistent today. I'm skeptical of the claimed advantage. The downside is clear. If the advantage can be shown, its use would be specialized. OTOH, a complete BCD implementation would be ... interesting. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ANN: SQLite3 Decimal Extension
On Thu, 4 Apr 2019 17:30:29 +0200 Lifepillar wrote: > On 4 Apr 2019, at 17:15, James K. Lowden > wrote: > > On Wed, 3 Apr 2019 14:30:52 +0200 > > Lifepillar wrote: > >> SQLite3 Decimal is an extension implementing exact decimal > >> arithmetic for SQLite3. > > > > What does divide-by-zero yield? > > By default: > > sqlite> select decDiv(1,0); > Error: Division by zero > > You have the option to ignore the error, though, in which case you > get +Inf: > > sqlite> delete from decTraps where flag = 'Division by zero'; > sqlite> select decStr(decDiv(1,0)); > Infinity > > decTraps is a virtual table containing a list of flags which, when > set by some function, raise an error. The flags are from IEEE 754. Nice. Division by zero can be a problem in other DBMSs because, if it occurs, the query aborts, and you then have to search for the cases and decide what to do. By flagging and marking them, you make that task easier. Especially when the query runs for a long time.

> NULLs are avoided where other results make sense. For instance:
>
> sqlite> create table T (n blob);
> sqlite> select decStr(decSum(n)) from T; -- Sum of an empty set of values
> 0
> sqlite> select decStr(decAvg(n)) from T; -- Avg. of empty set of values
> NaN

Here, as you may know, you're in close touch with SQL's ambivalent ambiguity with NULL. Should the sum of nulls be 0? Should the count be 0? Just one example of how SQL is a fossil from the 1980s. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ANN: SQLite3 Decimal Extension
On Wed, 3 Apr 2019 14:30:52 +0200 Lifepillar wrote: > SQLite3 Decimal is an extension implementing exact decimal arithmetic > for SQLite3. It is currently unfinished and under development. ... > I welcome any feedback, from the super-technical to the > end-user oriented. There is no manual so far, but the code is mostly > documented. What does divide-by-zero yield? If NULL, no amount of exactitude will matter. If the library is based on math, on the other hand, that would be a boon to SQLite users. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ANN: SQLite3 Decimal Extension
On Wed, 3 Apr 2019 11:37:47 -0600 Warren Young wrote: > Put another way, your defaults are already so large that no > conceivable physical entity could build a computer big enough to > simultaneously contain every distinct state your data type represents. Exactly (as it were). Physical entities cannot be measured to more than 6 orders of magnitude. Finer than that, error takes over. What is the distance from New York to Los Angeles? We know it's 3,944 km. Do we know it's 3,944,000 meters? 3,944,000,000 mm? We do not. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ANN: SQLite3 Decimal Extension
On Wed, 3 Apr 2019 17:29:47 -0400 Richard Hipp wrote: > On 4/3/19, Joshua Wise wrote: > > From my naive understanding, memcmp() is used to efficiently > > compare long strings of bytes. But where in SQLite3 is it necessary > > to compare long strings of floating point numbers? I, of course, > > can imagine SQL queries plucking single floating point values from > > rows or indexes, but I can't imagine where the long strings would > > be. Could you enlighten me? > > Comparing keys in a btree search uses a lot of CPU cycles. If the > comparison can be done using memcmp() rather than some custom > function, the comparison goes much faster, which makes searching > btrees faster. On the other hand, what table has a floating point number in its key? How do you even express the value of such a key for an exact match? There is also a significant cost of converting to and from IEEE format. In my experience bulk-loading quantitative databases, I encountered many occasions in which parsing the input accounted for 50% of the computation. The limit wasn't network bandwidth or server speed, it was data conversion. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] UPSERT with multiple constraints
On Wed, 27 Mar 2019 23:59:47 +0100 Thomas Kurz wrote: > Sure. I have a table of items. Each item has a type, a name, and > properties A, B, C (and some more, but they're not relevant here). > > I want to enforce ... UNIQUE (type, name). ... > Furthermore, items of a certain type that have identical properties > A, B, C are also considered equal, regardless of their name: UNIQUE > (type, A, B, C). ... > Now when inserting an item that already exists (according to the > uniqueness definition above), the existing item should be updated > with the new name and A, B, C properties. IIUC, by "upsert" you mean that for a new row matching an existing row on {type, A, B, C}, instead of inserting the new row, you want to update the existing row with the new row's name. Unless, that is, the new row would then conflict with (i.e., match) a different row on {type, name}, in which case the update fails. If no row matches either criterion, you want to insert the row. So why not use SQL to do that as designed, instead of relying on the strange upsert?

    insert into T
    select 'type', 'name', 'a', 'b', 'c'
    where not exists ( select 1 from T
                       where type = 'type' and name = 'name'
                          or A = 'a' and B = 'b' and C = 'c' );

    update T
    set name = 'name'
    where A = 'a' and B = 'b' and C = 'c';

For efficiency you can check that the insert affected zero rows before updating, but that's not strictly necessary. If there's a possibility of other processes updating the database between the two statements, wrap them in a transaction. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
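Spelled out against the table described in the quoted message (column names are the OP's; types and values here are hypothetical), with the pair wrapped in a transaction as suggested:

    CREATE TABLE items (
        type TEXT NOT NULL,
        name TEXT NOT NULL,
        A TEXT, B TEXT, C TEXT,
        UNIQUE (type, name),
        UNIQUE (type, A, B, C)
    );

    BEGIN TRANSACTION;
    INSERT INTO items (type, name, A, B, C)
    SELECT 'widget', 'new-name', 'a', 'b', 'c'
     WHERE NOT EXISTS (SELECT 1 FROM items
                        WHERE (type = 'widget' AND name = 'new-name')
                           OR (type = 'widget'
                               AND A = 'a' AND B = 'b' AND C = 'c'));
    UPDATE items
       SET name = 'new-name'
     WHERE type = 'widget' AND A = 'a' AND B = 'b' AND C = 'c';
    COMMIT;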
Re: [sqlite] .import fails CHECK constraint on valid data
On Sun, 10 Mar 2019 17:04:46 -0400 "James K. Lowden" wrote: > Why does the .import command cause the CHECK constraint to fail, when > an ordinary INSERT does not? On Sun, 10 Mar 2019 14:12:33 -0700 Shawn Wagner wrote: > The check constraint is probably being evaluated (with t as a string) > before any type conversion to match the column affinity is done. Does anyone have a better answer? Isn't .import supposed to work like INSERT? If it doesn't, CHECK constraints for type safety are useless for tables that are loaded from files. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] printf() - Local decilmal separator
On Tue, 12 Mar 2019 16:08:24 + Alexandre Billon wrote: > 1st question : Is there a way to tell printf() to display the decimal > separator set in the OS ? > > For example, the decimal separator in my OS is set to comma (,), I > would like printf() to display the comma as the decimal separator. You may want to return the value in native form and use your C library to format it. Posix defines the single-quote character as a modifier that does just what you want: ''' (apostrophe) Decimal conversions (d, u, or i) or the integral portion of a floating point conversion (f or F) should be grouped and separated by thousands using the non-monetary separator returned by localeconv(3). --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
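In C that might look like the sketch below. The apostrophe flag is POSIX rather than ISO C, so it depends on the C library (glibc honors it); the radix character for %f comes from LC_NUMERIC once the locale is set:

    #include <locale.h>
    #include <stdio.h>

    int main(void)
    {
        setlocale(LC_ALL, "");        /* adopt the user's locale, e.g. one that
                                         uses ',' as the decimal separator */
        double value = 1234567.891;
        printf("%'.2f\n", value);     /* locale radix char, grouped thousands */
        return 0;
    }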
Re: [sqlite] Recursive CTE on tree with doubly linked items
On Mon, 11 Mar 2019 10:39:06 +0100 Jean-Luc Hainaut wrote: > Your implementation of trees is that of network databases at the > pointer-based physical level but definitely not relational. Try this: > > create table TREE( >ID integer not null primary key, >Parent integer references TREE on delete ... on update cascade); > -- Notice the absence of "not null" > create index XTREE on TREE(Parent); -- Only useful for large sets of > nodes > > That's all. Bravo! To the OP: this is the answer you want, whether you want it or not. > > I've a tree with doubly linked items. That's the root of your problem, as it were. It's hard to solve in SQL because you're trying to use SQL in a nonrelational way. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
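With that schema, walking the tree is a single recursive CTE; a sketch, assuming root nodes are the rows whose Parent is NULL:

    WITH RECURSIVE subtree(ID, Parent, depth) AS (
        SELECT ID, Parent, 0 FROM TREE WHERE Parent IS NULL
        UNION ALL
        SELECT t.ID, t.Parent, s.depth + 1
          FROM TREE AS t
          JOIN subtree AS s ON t.Parent = s.ID
    )
    SELECT * FROM subtree ORDER BY depth, ID;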
Re: [sqlite] Apparent power fail data loss in embedded use - SQLite newbie
On Tue, 12 Mar 2019 10:36:37 -0400 ted.goldbl...@gmail.com wrote: > The problem is basically that as part of a test, the customer wants > to power fail the device, and then to be able to recover (after the > device restarts) data that was generated/stored as close to the power > failure as possible. ... > Unfortunately, when doing this, it appears that significant amounts > of data that should be in the database are missing and/or corrupted, > and this doesn't appear particularly dependent on when the failure > occurs. Only 3 possibilities exist: 1. SQLite finished its transaction before the power cut, and the data were committed and do appear in the database later. 2. SQLite did not finish its transaction, and the transaction was rolled back as part of database initialization after power was restored. 3. The hardware or driver reported the data were written when they were not. We can dismiss as statistically insignificant possibility #4, a bug in SQLite, because of SQLite's excellent testing regimen and gigantic user base. If you can confirm that SQLite finished the transaction whose data the database does not reflect on restart, you really must suspect the driver or device. I don't know much about USB drives, but consumer grade hard drives *normally* lie about committed data for performance reasons. "It's easy to make it fast if it doesn't have to be right." USB devices face at least as much temptation to misrepresent their performance. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
[sqlite] .import fails CHECK constraint on valid data
$ sqlite3 db "create table T (t integer not null);" $ sqlite3 db "create table Tc (t integer not null check(typeof(t) = 'integer'));" $ echo 1 > dat $ sqlite3 db ".import 'dat' T" $ sqlite3 db ".import 'dat' Tc" dat:1: INSERT failed: CHECK constraint failed: Tc $ sqlite3 db "insert into Tc select * from T;" $ sqlite3 db "select * from Tc" 1 $ sqlite3 db "select typeof(t) from T" integer $ sqlite3 db "select typeof(t) from Tc" integer Why does the .import command cause the CHECK constraint to fail, when an ordinary INSERT does not? --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Equiv stmts, different explain plans
On Tue, 05 Mar 2019 13:58:06 -0700 "Keith Medcalf" wrote: > >The query requests no such thing. SQL makes no request or > >suggestion for how to execute a query. It simply describes a result. > >It's up to the implementation to determine how to produce that > >result. > > You are, of course, correct. However for the two queries given I do > not believe that any query planner currently in existence will > recognize that t1.c == 1 and t2.c == 1 implies that t1.c == t2.c. Thank you for the clarification, Keith. You may well be right about the state of the art. I fault SQL itself; if it implemented relational algebra correctly, there would be no duplicates from SELECT, and no need of DISTINCT. Perhaps then the transformation of FORALL to JOIN would be easier to infer. There is sometimes a tendency in this forum to use shorthand, to describe what SQLite does as what SQL does. It's useful for the users' sake to distinguish between the two, so as not to confuse the attainable with the attained. :-) --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Equiv stmts, different explain plans
On Mon, 04 Mar 2019 20:20:08 -0700 "Keith Medcalf" wrote: > In the first query the subselect that creates the list is independent. > In the second query the subselect that creates the list is correlated. Yes, and if it can be shown that the two queries are logically equivalent under relational algebra, then it's theoretically possible for the query planner to have arrived at the same plan in both cases. That is the only test that could support/deny the assertion that they could be rendered according to the same execution plan. > In the first query you have requested that the subquery be executed > to create the list for use by the IN operator. No. The query requests no such thing. SQL makes no request or suggestion for how to execute a query. It simply describes a result. It's up to the implementation to determine how to produce that result. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] [EXTERNAL] Equiv stmts, different explain plans
On Tue, 5 Mar 2019 08:13:32 + Hick Gunter wrote: > The second statement uses a *correlated* subquery as the RHS of an IN > expression. The QP needs to actually run this query for every record > of t1 that matches the condition t1.c == 1. I'm not sure what you mean by "needs", above. If you're describing the way the SQLite QP works, OK. If you're asserting that the QP or any QP must work that way, no, that's a common fallacy. The person writing the query may think of a correlated subquery that way; it's *logically* true. But the planner is free to execute the query however it chooses. In fact, SQLite explains in great detail when the optimizer will "flatten" a subquery into a join. > select * from t1 >    where c=1 and d in (select d from t2 where c=1); > select * from t1 >    where c=1 and d in (select d from t2 where t2.c=t1.c); Consider:

    select distinct t1.*
    from t1 join t2
    on t1.c = t2.c and t1.d = t2.d
    where t1.c = 1

Every existential quantification can be recast as a join. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is there a tool to convert `where`s to equivalent `join`s?
On Fri, 1 Mar 2019 13:44:57 +0530 Rocky Ji wrote: > So for learning sake, is there a tool that converts a query using > `WHERE` to a query (that yields identical results) using JOINs? Like > a English -> side-by-side translator. SQL-92 was introduced a long time ago. At that time, I don't remember any vendor offering any kind of automatic conversion tool. Nor do I know of one now. It's not that it couldn't be done. It's that it wouldn't help. Given a database schema and a query, it's no problem to separate join criteria from (relational) select criteria. But equivalent translations could be expressed several ways, and no machine-generated conversion would necessarily communicate the intention of the query any better. Put another way: how likely is a machine to better pose a query than the human being did in the first place? If automatic translation adds functionality, that's different. For example, Oracle had a tool to convert T-SQL to PL/SQL. While the output was no one's concept of beauty, it did have the property of running on Oracle. The most important contribution of SQL-92 to the SELECT statement, as I think Keith mentioned, wasn't separating JOIN from WHERE, but a clear, standardized outer-join syntax. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
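For the record, the mechanical part of the translation is only this; the two forms below are equivalent for a hypothetical inner join, and it was the outer join that needed new syntax:

    -- join criteria and restriction mixed in WHERE
    SELECT o.id, c.name
      FROM orders AS o, customers AS c
     WHERE o.cust_id = c.id
       AND o.status = 'open';

    -- SQL-92: join criteria in ON, restriction in WHERE
    SELECT o.id, c.name
      FROM orders AS o
      JOIN customers AS c ON o.cust_id = c.id
     WHERE o.status = 'open';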
Re: [sqlite] Checking differences in tables
On Tue, 12 Feb 2019 15:05:29 + Jose Isaias Cabrera wrote: > >SELECT * From t WHERE datestamp = "20190208" > > Ok, Simon, I'll bite; :-) Imagine this table: > > t (n INTEGER PRIMARY KEY, a, b, c, d, e, idate) That's better. > how would I find the differences in the fields based on the different > idate? select ... from t as now join t as then on now.idate = '20190208' and then.idate = '20190207' -- note single quotes and ... -- other things that match where ... --- things that don't match Can't be more specific than that when the question is "find the differences". --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Min/Max and skip-scan optimizations
On Mon, 4 Feb 2019 18:55:33 +0100 Gerlando Falauto wrote: > I remember reading ORDER BY is only allowed in > the outer query As Keith said, SQLite allows ORDER BY in subqueries. The SQL standard does not. Logically, ORDER BY makes sense only for the outer query. An SQL SELECT statement describes a set of results, possibly presented in a particular order. An "internal ORDER BY" describes neither selection criteria nor presentation order. Technically, ORDER BY takes a tabular result as input and produces a cursor as output. It need not sort the results; all that's required is that the cursor return successive rows in the prescribed order. SQLite extends ORDER BY with LIMIT. Because the combination affects more than just the order, it can be useful to use ORDER BY in a subquery. Now that window functions provide a (more convenient) standard way to produce row numbers, LIMIT is a bit of an anachronism but, for reasons of backwards compatibility, is unlikely to be removed. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
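For instance, the LIMIT-in-a-subquery idiom for top-N-per-group can now be written with a window function; a sketch over a hypothetical table readings(sensor, ts, value):

    SELECT sensor, ts, value
      FROM (SELECT sensor, ts, value,
                   row_number() OVER (PARTITION BY sensor
                                      ORDER BY ts DESC) AS rn
              FROM readings) AS r
     WHERE rn <= 3;     -- the 3 newest rows per sensor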
Re: [sqlite] SQLite slow when lots of tables
On Tue, 29 Jan 2019 12:00:49 +0100 (CET) mzz...@libero.it wrote: > what happens if I put all data in a single table and this table > become very huge (for example millions of rows)? Big tables are your friend, actually. A binary search on 1 million rows requires at most 20 operations. A binary search on 2 million rows requires at most 21 operations. A binary search on 10 million rows requires at most 24 operations. Provided there are appropriate indexes, the power of O(log n) means that as the data grow, they are incrementally easier to search. I had a friend who told me once that he was working on his 2nd million dollars. He gave up on the first million because it was too hard. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] SQLite slow when lots of tables
On Mon, 28 Jan 2019 16:28:41 + Simon Slavin wrote: > SQL is not designed to have a variable number of tables in a > database. All the optimization is done assuming that you will have a > low number of tables, and rarely create or drop tables. This. The table name should not be meaningful to your application; nothing in your application should conjure up a table name. It's an external factor your application is configured to use. If you're generating table names based on application data, you're insinuating data in the metadata. When Simon says "SQL is not designed" for that, he's referring to the fact that the schema is expected to be relatively stable because it *describes* the data. The system is designed to search for data in the tables, not among their names. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Is this rewrite of a query OK
On Fri, 25 Jan 2019 23:35:51 +0100 Cecil Westerhof wrote: > SELECT MIN(totalUsed) AS minimum > , MAX(totalUsed) AS maximum > , MAX(totalUsed) - MIN(totalUsed) AS range > FROM quotes > > But I did not like it because I repeated the MIN and MAX. There's no problem with either version. For what it's worth, I like your first one, above. It's shorter, and the meaning is not obscured by calling min & max twice. Computationally they should be the same; the DBMS will traverse "quotes" just once. If the repeated calculations were more complicated, to the point that burying them in a subquery made the whole thing shorter, then I would probably opt for something like the 2nd version, to avoid the possibility of errors in the repeated code. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Database locking problems
On Mon, 21 Jan 2019 18:12:25 -0500 Richard Damon wrote: > Some operations can be order of microseconds if the data resides in > cache, Thank you, I hadn't considered that. I was thinking that seek times on "spinning rust" -- which is the only economically feasible technology for large databases, which would be typical of a database with many concurrent users -- is still measured in milliseconds. And the larger the database, the lower the cache hit ratio. > SQLite is NOT intended to be used over a network No emphasis required. I didn't imply network access. I posited multple clients, of which there could be many on a large shared-logic machine. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Database locking problems
On Sun, 20 Jan 2019 21:51:19 + wrote: > > insert into t > > select :pid, nrows, N > > from (select 1 as N union select 2 union select 3) as cardinals > > cross join (select :pid, count(*) as nrows from t) as how_many; > > > > By using a single SQL statement, you avoid a user-defined > > transaction and any proprietary transaction qualifiers. > > There's too much back-and-forth between the database and my logic to > put it all into a single statement. Thus, transactions are > necessary. Transactions exist to allow multiple statements to become > an atomic unit, so eschewing them is basically the same thing as > admitting they don't work. I don't suggest you eschew user-defined transactions, and I don't admit they don't work. I suggest not relying on proprietary features. BEGIN IMMEDIATE may do what you want, but at the cost of not learning how to accomplish the same thing using standard SQL. In general, the formulation

    select ...
    do stuff
    insert ...

will get you in trouble in most SQL setups, because selected data are subject to change, whether or not the logic is in a user-defined transaction. If the DBMS is set up with serialized isolation, concurrency will be compromised because you're holding a transaction open across application logic. That's especially true when it would be most convenient: when "do stuff" involves user interaction, and said user might choose that moment for a coffee break. The most common solution is to check your assumptions at the door, otherwise known as "optimistic concurrency". At time of insert, make sure the previously obtained data still obtain, i.e., that nothing has changed meanwhile. If it has, the precondition for the update has been violated, and you start over. If it hasn't -- and, in practice, usually it hasn't -- you're good to go. In your case, you might be able to do something like

    select count(*) as nrows from t
    do stuff
    begin transaction
    insert into t select ... where nrows = (select count(*) from t)
    verify rows affected == 1
    insert into t select ... where nrows = (select count(*) from t)
    verify rows affected == 1
    insert into t select ... where nrows = (select count(*) from t)
    verify rows affected == 1
    commit

putting whatever the relevant test is in the WHERE clause. The test can be simplified in SQLite to just the first insert, because with SQLite's single-writer design, they're redundant once the transaction has begun to take effect (i.e., once other writers really are blocked). User-defined transactions weren't invented to deal with read-write-write errors. They were invented to enforce referential integrity. SQL has no syntax to update several related tables at the same time. If updates to different tables would leave the database in an inconsistent state, some mechanism is needed to convert those several SQL statements into a single all-or-nothing update. Thus were born user-defined transactions. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
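In terms of the C API, "verify rows affected == 1" is a check of sqlite3_changes() after each statement; a rough sketch, with the guarded INSERT passed in as text and error handling pared down:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Returns 0 on success, 1 if the precondition failed, -1 on error. */
    static int guarded_insert(sqlite3 *db, const char *sql)
    {
        char *errmsg = 0;
        if (sqlite3_exec(db, sql, 0, 0, &errmsg) != SQLITE_OK) {
            fprintf(stderr, "insert failed: %s\n", errmsg);
            sqlite3_free(errmsg);
            return -1;
        }
        if (sqlite3_changes(db) != 1) {
            /* Someone changed the table meanwhile: roll back, start over. */
            sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
            return 1;
        }
        return 0;
    }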
Re: [sqlite] Database locking problems
On Sun, 20 Jan 2019 17:01:25 -0700 "Keith Medcalf" wrote: > SQLite3 however has latencies on the order of microseconds Is that really true? Are there machines for which SQLite's throughput can be measured in transactions per millisecond? I think you're referring to the latency of the function-call overhead, as opposed to using a network interface. But since DBMSs are basically I/O machines, and the most interesting operations involve I/O, it's not clear to me why function-call overhead is a relevant measure. > [SQLite] does not have the same opportunity for opportunistic > behaviour as does a client/server database which may be serving > thousands of concurrent (but different) applications. That I think is the relevant measure. It's the number of clients, not latency, that makes SQLite's lock-the-whole database feasible. On a large machine supporting thousands of clients, the latency advantage would be overwhelmed by the concurrency disadvantage, depending of course on the rate of updates. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Variable Declaration
On Sat, 19 Jan 2019 12:01:34 -0700 "Keith Medcalf" wrote: > Microsoft took the OS/2 3.0 Beta 2 code and generated their OS/2 New > Technology. The "New Technology" part was considered to be a bit to > long, so Microsoft shortened it to NT, replaced the Presentation > Manager with with Windows layer, added the "Windows Subsystem" and > released it as Windows NT. The whole message was fun to read; I just wanted to correct this part. You'll remember Microsoft hired David Cutler from DEC to create Windows NT. AFAIK there was no OS/2 technology per se in NT. Some of the Windows API was shared across all three -- DOS/Windows, OS/2, and NT -- but the underlying OS functionality -- scheduling, memory model, I/O -- was utterly different. Of the 3, NT was the only one with demand-paged virtual memory and isolated per-process virtual addressing on the i386 platform. Notably, early versions of NT put the video driver in userspace, not in the kernel. That made processing more reliable and games slower. History has since shown that Microsoft, when faced with a choice between correct and fast, always chose fast. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Database locking problems
On Sat, 19 Jan 2019 08:07:42 -0500 Richard Hipp wrote: > The busy timeout is not working because you start out your transaction > using a read operation - the first SELECT statement - which gets a > read lock. Later when you go to COMMIT, this has to elevate to a > write lock. But SQLite sees that some other process has already > updated the database since you started your read. Another solution is to rely on atomicity in SQL: insert into t select :pid, nrows, N from (select 1 as N union select 2 union select 3) as cardinals cross join (select :pid, count(*) as nrows from t) as how_many; By using a single SQL statement, you avoid a user-defined transaction and any proprietary transaction qualifiers. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] SQLite 3.21.0 for z/OS UNIX - installation issues
On Fri, 4 Jan 2019 16:14:22 +0100 Mario Bezzi wrote:

> awk '($0 ~ /^CC = /) { $0 = "CC = xlc" }
> ($0 ~ /^CFLAGS = /) { $0 = "CFLAGS = -O2 -q32 -qfloat=ieee
> -qnolist" } ($0 ~ /^LDFLAGS = /) { $0 = "LDFLAGS = -q32" }
> ($0 ~ /^DEFS = /) { for (i = 1; i <= NF; i++) \
> { if ($i == "-DHAVE_POSIX_FALLOCATE=1") $i =
> "-DHAVE_POSIX_FALLOCATE=0 -D_XOPEN_SOURCE=600" } }
> (1) { print $0 }' Makefile
> echo "Makefile patched, original version saved as Makefile.original"
>

That might be more conventionally written, and easier to understand, thus:

    awk '
    /^CC = /      { $0 = "CC = xlc" }
    /^CFLAGS = /  { $0 = "CFLAGS = -O2 -q32 -qfloat=ieee -qnolist" }
    /^LDFLAGS = / { $0 = "LDFLAGS = -q32" }
    /^DEFS = /    { gsub(/-DHAVE_POSIX_FALLOCATE=1/, \
                         "-DHAVE_POSIX_FALLOCATE=0 -D_XOPEN_SOURCE=600") }
    { print }
    ' Makefile
    echo "Makefile patched, original version saved as Makefile.original"

--jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Using sqlite3_interrupt with a timeout
On Mon, 31 Dec 2018 14:25:41 -0700 "Keith Medcalf" wrote: > def run_query_with_timeout(db, query, timeout, whizround) > stmt = prepare(db, query) > create_thread A interrupt_function(db, stmt, timeout, > whizround) while sqlite3_step(stmt) == SQLITE_ROW > ... process the row ... > cancel_thread A > join_thread A /* make sure the thread is ended */ > sqlite3_finalize(stmt) If I don't want to use threads, can I call sqlite3_interrupt from a signal handler? --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Sample Employee database ported to SQLite from MySQL
On Thu, 20 Dec 2018 15:42:27 + Chris Locke wrote: > and a model number is a numeric number My phone's model number is VVX 500. > set the column affinity to the type of data Yes, and not everything that looks like a number is a number. Some things that start out looking like numbers change over time in ways that make them non-numeric. If the assigning body didn't intend the "number" as a quantity, it's not. Treating it as such will often come to tears. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Sample Employee database ported to SQLite from MySQL
On Wed, 19 Dec 2018 10:55:11 + Chris Locke wrote: > Fields with '_no' are read as 'number' and so should be a number. > OK, that doesn't always work for 'telephone_no' (they usually start > with a 0 Lots of numbers are labels that aren't meant to be calculated on. Item number, part number, model number, serial number, order number. Anything that needs to be distinguished and isn't worth naming. It's never a good idea to store such numbers as numerical types. There's always a potential loss of information, be it the leading zero or embedded '-' or multiple '.' characters. Unless the "number" is a quantity, for computational purposes it's text. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
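In schema terms, that is the difference between these two columns, sketched on a hypothetical parts table:

    CREATE TABLE parts (
        part_no  TEXT NOT NULL PRIMARY KEY    -- a label: '007-A' keeps its leading zero
            CHECK (part_no <> ''),
        quantity INTEGER NOT NULL             -- a quantity: arithmetic makes sense
            CHECK (quantity >= 0)
    );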
Re: [sqlite] Question about floating point
On Tue, 18 Dec 2018 17:34:29 -0500 Dennis Clarke wrote: > some serious reading and experiments are needed to get a good > handle on why numerical computation is as much art as it is science. > If we wander into the problem without sufficient study and VERY > careful consideration then we are doomed to repeat the errors of the > past. I think perhaps you left out "Numerical Methods for Scientists and Engineers", by Richard Hamming. :-) But when you boil it down, the answer is there is no answer, is there? The best advice is to understand where things can go wrong, and stay away from them. The truth is that any system for representing numbers is forced to represent some numbers approximately. We think "pen and paper" and the good old decimal system is the gold standard, but what of π? Even bankers, ever counting pennies, approximate to compute interest and averages. Little known fact: sometimes they compute interest on the basis of a 360-day year. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Question about floating point
On Mon, 17 Dec 2018 17:35:54 + Simon Slavin wrote: > On 17 Dec 2018, at 5:16pm, James K. Lowden > wrote: > > > IEEE > > double-precision floating point is accurate to within 15 decimal > > digits. > > First, the problem is not storage it's calculation. > > Second, the thread was started because a floating point calculation > in SQLite, exactly as it is run today, led to the following value: > > 211496.252 > > which is typical of such problems. What problem? Rounded to the number of significant digits -- 2 decimal places in the input -- the number is correct. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] add constant to INTEGER PRIMARY KEY
On Thu, 13 Dec 2018 16:06:04 -0700 "Keith Medcalf" wrote: > You cannot do that. The PRIMARY KEY is required to be unique at each > "step" along the way, For the OP's benefit, this is a longstanding, er, idiosyncrasy of SQLite. It does not conform to the SQL standard. The SQL rule is there are no "steps". Each statement is atomic; it is either executed entirely or not at all. The constraints on the table apply to the *user-visible* view of the data: before the update begins, and after it finishes. They do not apply during the time the DBMS is updating the data, however it chooses to do so. > Letting alone why anyone would want to do such a thing The following is legal in SQL for a unique column, update T set a = a +1 *regardless* of the values in "a". SQLite is the only DBMS I'm aware of that prevents the update if "a" is unique and holds consecutive values. That kind of update is neither absurd nor unusual. An obvious example is updating all records by 1 day (perhaps because originally misrecorded). --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Question about floating point
On Sat, 15 Dec 2018 01:24:18 -0800 Darren Duncan wrote: > If yours is a financial application then you should be using exact > numeric types only Color me skeptical. That very much depends on the application. IEEE double-precision floating point is accurate to within 15 decimal digits. The example given, > 211496.26 gives, safely, a margin of 6 orders of magnitude. If the application is taking sums of 100's of thousands of dollars, it will stay accurate to within a penny using floating point until there are millions of entries: 10^15 ÷ 10^8 = 10^7 I doubt the financial application exists that sums millions of entries AND cares about the last penny. I've seen advice about using integer arithmetic and implied decimal points in textbooks. It's convenient in languages like Cobol, that support it. In languages like C, floating point is too convenient -- and accurate -- to ignore. I'm sure banks have regulations and approved rounding algorithms. In decades of programming on Wall Street, though, we used floating point for everything. The only problems I remember involved matching results between systems when porting: the differences were insignificant, but because they were visible they had to be explained. That always took some work. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Question about floating point
On Sat, 15 Dec 2018 10:35:01 -0700 "Keith Medcalf" wrote: > And the propensity to apply intermediate rounding still exists. Yes. The only significant errors I've ever seen using floating point were not due to the computer, but to the programmer. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Corrupted database: On tree page 76852 cell 303: Rowid 18741471 out of order
On Wed, 5 Dec 2018 05:21:30 + Simon Slavin wrote: > On 5 Dec 2018, at 5:16am, Ryan Schmidt wrote: > > > https://kb.vmware.com/s/article/1008542 > > > > "VMware ESX acknowledges a write or read to a guest operating > > system only after that write or read is acknowledged by the > > hardware controller to ESX. Applications running inside virtual > > machines on ESX are afforded the same crash consistency guarantees > > as applications running on physical machines or physical disk > > controllers." > > Interesting. That paragraph is a well-written piece of text > explaining the opposite of what I thought. Maybe things have changed > in the past decade. VMware may well be doing the best it can on unreliable hardware. I believe it's common knowledge that consumer-grade hard drives lie when acknowledging writes: the acknowledgement is sent when the data are received into the device's write buffer, not after being written to disk. It's good for benchmarks. No one benchmarks data corruption. 'Twas ever thus: If you want a reliable database, use a reliable disk. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] sqlite3_bind_text() and WHERE x IN (?)
On Fri, 30 Nov 2018 23:25:48 +0900 Simon Walter wrote: > > SELECT id, data FROM val WHERE id IN ("1, 893, 121212"); ... > I have no idea yet if MySQL and/or PostgreSQL can handle this > scenario and how they do it. The important thing to understand about parameterized queries is that they are not a generalized macro system. Only data -- not metadata, not arbitrary strings -- can be parameterized. That's why your IN list can't be parameterized (except as individual elements) and why you can't say SELECT id, data FROM ? or SELECT id, ? FROM val as would occasionally be convenient. Other than string-slinging, the only generalized standard solution for your parameterized IN list, where the number of elements is variable, is to first insert the list into a table, then use IN or EXISTS against it. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
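A sketch of that last approach, reusing the OP's table and a hypothetical temp table: bind and insert the elements one at a time, then let the query refer to the temp table, which is private to the connection.

    CREATE TEMP TABLE wanted (id INTEGER PRIMARY KEY);

    -- one execution per element, each with a single bound parameter
    INSERT INTO wanted (id) VALUES (?);

    SELECT id, data
      FROM val
     WHERE id IN (SELECT id FROM temp.wanted);

    DROP TABLE temp.wanted;   -- or DELETE FROM it and reuse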
Re: [sqlite] Suitability for Macintosh OS 9.2?
On Mon, 19 Nov 2018 17:16:37 +0100 Dominique Devienne wrote: > Most of the SQLite code is platform agnostic, but at some point, it > must interface with the local filesystem on the local platform. While your first challenge will probably be to get a sufficiently modern C compiler for OS 9, Dominique's point will probably turn out to matter more. SQLite is very adaptable; it makes almost no assumptions about OS services. But on a non-Posix, non-Windows OS like yours, you'll have the work of supplying the missing pieces to allow SQLite to open a file, flush writes to disk, and so on. As they say, just a small matter of programming. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] forming sqlite3_statements using stored sql
On Fri, 2 Nov 2018 15:24:51 -0700 Jens Alfke wrote: > > On Nov 2, 2018, at 12:50 AM, Thomas Kurz > > wrote: > > > > My opinion is that the logic for database queries should be held > > together with the data. > > Why? The logic has to be in the application itself (where else would > it come from?) to copy into the database in the first place. So why > create a duplicate of it in the db file? The usual answer is that the query can be modified to produce the same logical result without recompiling the application. An unmodified application could use a newer database with a different schema, and still obtain the same results by calling the (new) procedure by the same name. In object-oriented terms, that's information hiding: the "internal" structure can change, while the access method continues to present the same interface. To a limited extent, views have the same effect. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
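The view version of that idea, with hypothetical names: the application always selects from active_accounts, and the base table can later be reorganized without touching the callers.

    CREATE TABLE accounts (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        closed INTEGER NOT NULL DEFAULT 0
    );
    CREATE VIEW active_accounts AS
        SELECT id, name FROM accounts WHERE closed = 0;

    -- Later, the table is split; the view keeps its name and shape:
    -- CREATE VIEW active_accounts AS
    --     SELECT a.id, a.name
    --       FROM accounts AS a
    --       JOIN account_status AS s ON s.id = a.id
    --      WHERE s.closed = 0;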
Re: [sqlite] forming sqlite3_statements using stored sql
On Thu, 1 Nov 2018 08:41:51 +0100 Clemens Ladisch wrote: > > It strikes me that this would be nicer if sqlite offered this as > > an intrinsic capability. > > How would SQLite know what the table and column names are? When the SQL is loaded, it can be parsed and analyzed. SQLite could, if it chose, prevent dropping a table referenced by the stored SQL, or at least report on missing/obsolete pieces. But it need not do any more than it currently does with views. > How would that API be different from exec("SELECT SQL From StmtSQL > WHERE StmtName = ?")+prepare? Perhaps the least intrusive change would be to support special syntax in the prepare function. If the first word of the SQL is EXEC instead of insert/update/delete/select, look up the name in the procedure table, substitute the retrieved text, and proceed per usual. Rather than inserting the SQL into a user-visible table, it might be preferable to support CREATE PROCEDURE. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Bug: float granularity breaking unique contraint?
On Thu, 1 Nov 2018 01:18:26 +0100 szmate1618 wrote: > But there seems to be an unintended > workaround > > DROP TABLE IF EXISTS TestReal;CREATE TABLE TestReal(A REAL > UNIQUE);INSERT INTO TestReal values (9223372036854775807);INSERT INTO > TestReal values (9223372036854775807 - 1);INSERT INTO TestReal values > (9223372036854775807 - 2);INSERT INTO TestReal values > (9223372036854775807 - 3); > > runs without any problems. On Wed, 31 Oct 2018 23:05:19 -0300 Bernardo Sulzbach wrote: > So if you are inserting integers > into a real column, you are going to store integers. However, when you > select from it they are presented as reals and mix up (looking as if > there were duplicates [...] I don't think that explanation holds water.

sqlite> DROP TABLE IF EXISTS TestReal;CREATE TABLE TestReal(A REAL UNIQUE);INSERT INTO TestReal values (9223372036854775807);INSERT INTO TestReal values (9223372036854775807 - 1);INSERT INTO TestReal values (9223372036854775807 - 2);INSERT INTO TestReal values (9223372036854775807 - 3);
sqlite> select cast(A as integer) from TestReal;
9223372036854775807
9223372036854775807
9223372036854775807
9223372036854775807
sqlite> select hex(A) from TestReal;
392E323237323033363835343738652B3138
392E323237323033363835343738652B3138
392E323237323033363835343738652B3138
392E323237323033363835343738652B3138
sqlite> select count(*), hex(A) from TestReal group by hex(A);
4|392E323237323033363835343738652B3138
sqlite> .schema TestReal
CREATE TABLE TestReal(A REAL UNIQUE);
sqlite>

Curiouser and curiouser. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Question about a query
On Tue, 9 Oct 2018 10:22:12 -0700 Jens Alfke wrote: > You could implement a custom query function to do this (custom > functions are quite simple, and there are examples online). http://www.schemamania.org/sql/sqlite/udf/ Been there, done that. :-) --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] SQLite Windows GUI alternative to Excel?
On Sat, 6 Oct 2018 21:21:38 +0100 Simon Slavin wrote: > There is never any point in this process when a manager looks at > what's being done with Excel and says "Okay we need to hire a > programmer to turn this into a proper App.". Hmm, there is such a point. I used to do work like that, and there were others in the firm who did, too. Still are. afaik. Modeling and prototyping get done by analysts with numerical and statistical tools: Excel, sure, but also SAS, Matlab, R, etc. Not infrequently, some sinister stew cooked up with a database or two added (or taken from). If the model/prototype proves useful, it will be reimplemented as a production application in a "real language" to keep better control of the computational result. The more users, the more control is needed, because fools^Wusers are so inventive. It's actually a gargantuan, industry-wide, systemic waste of time. If the core of the model could be be extracted from its GUI and inserted as a module in a production program, millions of man-hours could be saved every year. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Persistent snapshots and rollbacks
On Fri, 5 Oct 2018 17:39:57 +0200 Daniel Kraft wrote: > I need the ability to make multiple changes / commits to my SQLite > database but keep snapshots of previous states and potentially roll > back to those states later on. All of that needs to be persistent, > i.e. survive closing the database and restarting the process. After > some time, I can get rid of old snapshots (my process determines by > itself when and which snapshots can get discarded, it is not based on > some fixed TTL or something like that). "The totality of data in a data bank may be viewed as a collection of time-varying relations." -- E.F. Codd in "A Relational Model of Data for Large Shared Data Banks" You're not the first. Data change over time. SQL doesn't support data versions as a language feature, but you can implement it yourself in your database design. Add a "version" column to your table. Create views that (using a self-join) show only the latest version. Periodically purge old versions. Roll back by deleting new versions. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
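A sketch of that design, with a hypothetical table and a version number per row; the view does the self-join, rollback deletes the newer versions, and the purge keeps each row's newest version:

    CREATE TABLE doc (
        doc_id  TEXT    NOT NULL,
        version INTEGER NOT NULL,
        body    TEXT,
        PRIMARY KEY (doc_id, version)
    );

    CREATE VIEW doc_current AS
        SELECT d.doc_id, d.body
          FROM doc AS d
          JOIN (SELECT doc_id, max(version) AS version
                  FROM doc GROUP BY doc_id) AS m
            ON m.doc_id = d.doc_id AND m.version = d.version;

    -- Roll back to snapshot 41 by discarding everything newer:
    -- DELETE FROM doc WHERE version > 41;

    -- Purge snapshots older than 17, keeping each doc's newest row:
    -- DELETE FROM doc
    --  WHERE version < 17
    --    AND version < (SELECT max(version) FROM doc AS d2
    --                    WHERE d2.doc_id = doc.doc_id);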
[sqlite] using a custom memory allocator in TH3
After stumbling on SQLite's description of its memory allocators, I decided to write a one for testing that might be slow but is certainly robust.[1] I thought it might be interesting to hook it up to TH3, but I don't understand how to do that. The instructions say: 1. Generate the code 2. Link to your favorite SQLite library 3. Run Where do I call sqlite3_config(SQLITE_CONFIG_MALLOC, ... ) as explained in https://www.sqlite.org/c3ref/config.html ? --jkl [1] https://github.com/jklowden/mmalloc ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Please remove multiple warnings from compiler about optimisation, variable conversion, signed overflow and many more potential errors.
On Fri, 29 Sep 2017 16:55:05 -0400 Igor Korot wrote: > But then why not give it some default value ("0" maybe") and default > it to "1" only if needed during configure? Because complexity. It takes effort -- unnecessary effort -- to set up that default. That effort could introduce an error, whereas no effort *cannot* introduce an error. Less is more. The assumption on the part of the guideline authors seems to be that if something is undefined, it might have been overlooked, and the best way to make sure it's not overlooked is by ensuring there's always an explicit definition. That's a debatable proposition. The mere fact that something is defined in no wise ensures it is defined correctly. In this case, the tools themselves provide the definition. For those that do, the code compiles one way. For those that do not, another way. It's entirely automatic. How could supplying those definitions manually be an improvement? --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Does prepare do arithmetic?
On Mon, 29 May 2017 16:18:17 +0200 R Smith wrote: > Actually what Richard is saying is that you SHOULD do the arithmetic > yourself when you can, because you can boldly make assumptions about > the code that you write, but he, or the SQLite engine to be precise, > cannot be so bold, it may not be accurate in accepting that nothing > about the database will change from one evaluation to the next, so it > has to recompute - but you may optimise that computation out because > you KNOW your system and you KNOW you won't be changing column > collations or type affinities etc. in between steps. I wouldn't characterize his answer that way. He said that complex partial constants are computed once, and that recomputing simple constants wasn't measurably cheaper than storing them. From what they've measured, you're safe letting SQLite compute your constants. Even if it's doing a little extra work, it won't be noticeable. What was a little more surprising was why the little extra work is necessary. > calling sqlite3_column_text and thus causing the integer result to be > converted into a string I didn't know sqlite3_column_text converted the result. I thought the result was constant, and the function returned a string representation of it. Richard is saying that the column-retrieval functions change the type of the retrieved column for the current row. Because of that, and because that change-of-type action is under programmer control, SQLite must recompute each row anew. Even for "constant" columns. > 2*15-5+11/5<<3*1 In this case, two partials are stored, for efficiency, and the final operation, shift, is recomputed each time. Because the programmer can't affect the types of the partials, they can safely be memoized. Because the result of the final computation is a column -- whose type *can* be changed by the programmer -- it's recomputed for each row. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
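For reference, under SQLite's precedence rules the quoted expression groups as (2*15 - 5 + 11/5) << (3*1) -- the shift binds more loosely than + and -, and 11/5 is integer division -- which lines up with the two stored partials and the recomputed shift described above. A quick check in the sqlite3 shell:

    sqlite> SELECT 2*15-5+11/5<<3*1;
    216
    sqlite> SELECT (2*15 - 5 + 11/5) << (3*1);
    216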
Re: [sqlite] NOT NULL integer primary key
On Fri, 26 May 2017 18:04:14 +0200 (CEST) Eric wrote: > Why should the INSERT return an error? It is quite OK to, when > inserting a row, not specify a value for a NOT NULL column - as long > as the DDL has specified some way of constructing a value. The SQL in question is >>> insert into test values (null, 'row1'); There's a difference between not supplying a value, and specifying NULL: insert into test values (null, 'row1'); insert into test (data) values ('row1'); Specifying NULL for a non-null column is an error in every SQL DBMS with which I'm familiar, whether or not a default is defined. > > It's a documented feature, so it's not a bug. But it is decidedly > > odd. > > The DDL specifies, in SQLite's own unique way, that a value will be > provided, and the application author knows that a value will be > provided, so where on earth is the problem? I don't think it's > peculiar at all It's peculiar because it doesn't say what it does. It's nonstandard, and nothing in the text of the DDL suggests a default exists. SQL-92 includes syntax to define a default; it uses the word DEFAULT. It's a problem because the behavior is an exception to what the standard specifies, the opposite of what is plainly expressed, and contrary to what any normal SQL DBMS does. The SQLite user has to be aware that non-NULL primary key "integer" columns have a special, implicit property: that NULL is accepted on INSERT, and converted to an indeterminate value. Put simply, how does the syntax id integer not null primary key suggest that an inserted NULL will be converted to a value? If a magic value generated silently for a primary key isn't peculiar enough, consider that it behaves differently on INSERT and UPDATE. Try to set the same column to NULL with INSERT, get a value; with UPDATE, it's an error. How many datatypes have that property? One. > the application author knows As I said, it's not a bug because it's well documented. But if you haven't studied the SQLite documentation in detail, you might be forgiven for thinking that INT PRIMARY KEY and INTEGER PRIMARY KEY mean the same thing. I'm not so sure every application author knows, but it's a safe bet many find out the hard way. --jkl ___ sqlite-users mailing list sqlite-users@mailinglists.sqlite.org http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
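A minimal illustration of that behavior, using a throwaway table named test; the comments restate the documented rowid-alias behavior discussed above, not new claims:

    CREATE TABLE test (id INTEGER NOT NULL PRIMARY KEY, data TEXT);

    INSERT INTO test VALUES (NULL, 'row1');   -- accepted: the NULL is silently replaced by a generated rowid
    SELECT * FROM test;                       -- 1|row1

    UPDATE test SET id = NULL;                -- rejected: constraint violation on the very same column

    -- Only the exact spelling INTEGER PRIMARY KEY aliases the rowid;
    -- an INT PRIMARY KEY column is an ordinary column and gets no such conversion.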