Re: [sqlite] Example of implementing undo/redo using sqlite3?
On Aug 4, 2011, at 10:58 AM, Stephan Beal wrote:

> "Temporary triggers can be added to the database to record all changes into a (temporary) undo/redo log table. These changes can then be played back when the user presses the Undo and Redo buttons. Using this technique, an unlimited depth undo/redo implementation can be written in surprisingly little code."
>
> i'm wondering if anyone can point me to an example of implementing such a beast?

I used regular triggers to create an undoable database in some of my early k/v store testing work:

https://github.com/dustin/kvtest/blob/master/sqlite-base.cc#L160

--
dustin sallings

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
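The technique quoted above can be sketched in a few lines. This is a minimal illustration (the `kv` table, trigger, and log names are all invented), using Python's sqlite3 module; a real implementation would also log INSERTs and DELETEs and mark undo barriers between user actions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

# Temporary undo log and trigger: record the SQL that reverses each UPDATE.
conn.execute("CREATE TEMP TABLE undolog (seq INTEGER PRIMARY KEY, sql TEXT)")
conn.execute("""
    CREATE TEMP TRIGGER kv_undo_update AFTER UPDATE ON kv BEGIN
        INSERT INTO undolog (sql) VALUES (
            'UPDATE kv SET v = ' || quote(OLD.v) || ' WHERE k = ' || quote(OLD.k)
        );
    END
""")

conn.execute("INSERT INTO kv VALUES ('a', 'one')")
conn.execute("UPDATE kv SET v = 'two' WHERE k = 'a'")

# "Undo": replay the logged statements, most recent first.  (Replaying
# fires the trigger again, which is how a real implementation builds
# its redo log; the fetchall() snapshot keeps this loop finite.)
for (stmt,) in conn.execute("SELECT sql FROM undolog ORDER BY seq DESC").fetchall():
    conn.execute(stmt)

restored = conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0]
```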
Re: [sqlite] Will SQLite supports UnQL?
On Jul 31, 2011, at 12:58 PM, Simon Slavin wrote:

> These two go together. Multi-master replication (one example of which is a document store) is relatively easy. Datestamp every value (document) and whichever one has the latest date is the one you want.

I hear that a lot, but it makes me pretty uncomfortable. People like timestamps because there's some kind of implied causality. It's not there in reality, though, and this type of resolution can lead to harmful (i.e. data-loss) results. Just because something happened after something else doesn't mean that it had all the same knowledge that went into the first decision. And those are the best cases.

Lamport clocks (and the more general vector clocks) exist because they *explicitly* state causality. Say I have perfectly synchronized clocks and two applications running on machines immediately next to each other (trying to avoid the relativity argument[0]). Event A can occur that changes the data to a particular state and can be picked up by server one, but not server two. Server one and server two then both go to change the data at roughly the same time, but server two was slightly slower and its write came in last. Now server two is eating data *because* it's reacting more slowly. Your timestamp-automated conflict resolver favors slower machines that lag behind.

With explicit causality, you state that state B succeeds state A because you knew about state A, regardless of when you made your decision. If state B' tries to succeed state A without knowing about state B, that can happen on an isolated system, but it will introduce a conflict when that system learns that something had already done this. Now we have two successors for state A, and only your application's conflict resolver can make sense of what state the document should be in. (Note that in the case of CouchDB, if state B and state B' were the same, this would be recognized as not a conflict, but that's rather a special case.)
[0]: I always try to avoid the relativity argument, but when you're dealing with systems that are far apart, whether an event happened before another event is entirely up to the perspective of the observer. My CouchDB on Mars might see and react to an event hours before we can observe it on Earth. About one (Earth) year later, Earth might observe and react to such an event hours before Mars can. In both cases, an event coming from the other direction would have flipped its arrival order between the two. While that may seem like a silly thing to discuss, the same thing happens locally. The theoretical floor on ping time between the east and west coasts of the US is about 18 milliseconds. In practice, you'll get something closer to 40ms. Do you know how much stuff happens within 40ms? From the perspective of the east coast, anything happening on the east coast will appear to occur considerably sooner than anything happening at "the same time" on the west coast.

--
dustin sallings
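The explicit-causality argument above can be made concrete with a toy vector clock (all server names invented for illustration): a state descends from another only if its clock has seen every event in the other's clock, and two states that have each seen events the other hasn't are a conflict only the application can resolve.

```python
def merge(a, b):
    """Combine two vector clocks: take the max event count per node."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def descends(a, b):
    """True if the state carrying clock `a` has seen everything in clock `b`."""
    return all(a.get(k, 0) >= n for k, n in b.items())

state_a  = {"server1": 1}                 # the original event A
state_b  = {"server1": 1, "server2": 1}   # server2 changed it, knowing about A
state_b2 = {"server1": 1, "server3": 1}   # server3 changed it independently

explicit_successor = descends(state_b, state_a)   # B explicitly succeeds A
conflict = (not descends(state_b, state_b2)
            and not descends(state_b2, state_b))  # B vs B': app must decide
```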
[sqlite] backup api database is locked?
I've been having a somewhat unexpected condition where I get "database is locked" when using the SQLite backup API (.backup from the sqlite3 command). This is with version 3.7.2 and using WAL. My impression was that the backup API would yield to other connections and eventually finish. I seem to have misunderstood. Is there a recommendation for something I can do to ensure my backups can actually finish (other than just retrying continuously and indefinitely)?

--
dustin sallings
Re: [sqlite] WAL for production use
On Feb 2, 2011, at 8:14, Duquette, William H (318K) wrote:

> In SQLite 3.7.4/3.7.5, does WAL seem to be stable enough for production use?

I'm using it very, very heavily right now.

--
dustin sallings
Re: [sqlite] How can we put check constraints on a column
On Jan 20, 2011, at 20:51, Sunil Bhardwaj wrote:

> How can we put check constraints on a column in sqlite, while creating a table?
>
> We want to restrict values for a column to a range, say '1 to 10'.

The standard SQL way works. Are you experiencing otherwise? I did a Google search for "sql check constraint", pasted in the first result, and it worked as expected.

--
dustin sallings
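For reference, the standard syntax in question looks like this (a sketch via Python's sqlite3 module; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A CHECK constraint restricting the column to the range 1..10.
conn.execute("CREATE TABLE t (n INTEGER CHECK (n BETWEEN 1 AND 10))")

conn.execute("INSERT INTO t VALUES (5)")       # in range: accepted
try:
    conn.execute("INSERT INTO t VALUES (11)")  # out of range: rejected
    violated = False
except sqlite3.IntegrityError:
    violated = True

rows = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```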
Re: [sqlite] WAL on a separate filesystem?
On Jan 19, 2011, at 11:01, Simon Slavin wrote:

> If you do something special to keep your journal file in a different place, these other sqlite3 applications won't find it. So they'll just find a corrupt database file, and are less likely to be able to figure out how to restore to a COMMIT point or a SAVEPOINT.

I understand this concern and think it's a valid point. I can assume for the purposes of this usage that only tools I provide will be used to access the DB (I ship a sqlite3 binary since I'm using WAL, and I've got users on CentOS, which ships sqlite 1.2 for all I know).

I'm a bit of a proxy for this question. I wrote software that uses SQLite under some pretty high volumes, and I have a user wanting to split stuff up across multiple filesystems. I already have the ability to do data partitioning in the application, but the user wants to separate the WAL out as well.

This isn't a question so much about value judgment (I've already argued that some, though mentioning maintenance tools is helpful there, too). It comes down to whether the reliability of SQLite itself would be reduced if a WAL existed on a different partition -- whether there are any assumptions WAL makes that would be invalid across a filesystem boundary.

--
dustin sallings
Re: [sqlite] WAL on a separate filesystem?
On Jan 19, 2011, at 2:35, Richard Hipp wrote:

> No. The WAL has to be in the same directory as the original database. Otherwise, the process that tries to recover from a crash or power failure won't know where to find the WAL file.

I understand how it's opened. This is more about how the file is used. If I could convince SQLite to open the WAL in a location other than alongside the main db, would this cause reliability problems?

--
dustin sallings
[sqlite] WAL on a separate filesystem?
Is it possible to have a DB's WAL exist on a separate filesystem without violating any assumptions that would lead to reliability problems?

--
dustin sallings
Re: [sqlite] Using WAL?
On Dec 30, 2010, at 21:57, Sachin Gupta wrote:

> We don't want read operations to get affected because write operations are in progress (and vice versa).

Have you determined that my suggestion can't meet your performance requirements?

> The way that Oracle handles this is using Read Committed; can the same thing be achieved using SQLite for an in-memory database?

Which Oracle in-memory database are you referring to?

--
dustin sallings
Re: [sqlite] Using WAL?
On Dec 30, 2010, at 19:21, Sachin Gupta wrote:

> Q. Well, since WAL cannot be used with an in-memory database, which other method do you all suggest one should use for providing better concurrency? Can using Shared Cache Mode be of use?

http://www.sqlite.org/sharedcache.html

In general, don't just try to have a bunch of threads throwing stuff at the same data structure and expect awesome goodness to fall out. Can you structure your application so that there's just one thread holding the state, and you pass messages in and out to retrieve and mutate state? I imagine this would be the shortest path to success and definitely the easiest to prove correct.

--
dustin sallings
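The one-thread-holds-the-state structure suggested above can be sketched like this (the message protocol and names are invented for illustration): a single worker thread owns the connection, and other threads send get/put requests through queues instead of touching the database directly.

```python
import queue
import sqlite3
import threading

def db_worker(requests):
    # The connection is created and used only in this thread.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    while True:
        op, payload, reply = requests.get()
        if op == "stop":
            break
        if op == "put":
            conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", payload)
        elif op == "get":
            row = conn.execute("SELECT v FROM kv WHERE k = ?",
                               (payload,)).fetchone()
            reply.put(row[0] if row else None)

requests = queue.Queue()
worker = threading.Thread(target=db_worker, args=(requests,))
worker.start()

# Any thread can now mutate/read state via messages, never via the connection.
requests.put(("put", ("a", "1"), None))
reply = queue.Queue()
requests.put(("get", "a", reply))
value = reply.get()

requests.put(("stop", None, None))
worker.join()
```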
Re: [sqlite] Using WAL?
On Dec 30, 2010, at 10:42, Sachin Gupta wrote:

> This is for setting the journal mode. But has the WAL mode been set properly? How can we confirm this?

Simon answered this question.

> Also, can WAL mode be used for in-memory databases? I am planning to use it in my application.

I don't think this makes much sense. Where would the log live? The answers to all of your questions are in the docs here:

http://sqlite.org/pragma.html#pragma_journal_mode

--
dustin sallings
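Confirming the mode is just a matter of reading back what the pragma returns, since `PRAGMA journal_mode=WAL` replies with the mode actually in effect; an in-memory database cannot switch to WAL and reports its own mode instead (a sketch; the file path is made up):

```python
import os
import sqlite3
import tempfile

# In-memory databases can't use WAL; the pragma reports the real mode.
mem = sqlite3.connect(":memory:")
mem_mode = mem.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# An on-disk database accepts it and reports 'wal'.
path = os.path.join(tempfile.mkdtemp(), "t.db")
disk = sqlite3.connect(path)
disk_mode = disk.execute("PRAGMA journal_mode=WAL").fetchone()[0]
```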
Re: [sqlite] Just compiled SQLite in Visual Studio
On Nov 30, 2010, at 18:04, Igor Tandetnik wrote:

> It is true, however, that it's fairly easy to transform any C program into a program that's both valid C and C++, by applying straightforward modifications - tightening type safety and such.

Except there are a lot of areas where C advanced in C99 that C++ can't deal with in current versions:

    item_info info = { .nvalue = 1 };   /* designated initializers */

    struct { int n; char blah[]; };     /* flexible array members */

    char blah[somevariable];            /* variable-length arrays */

The ability to sprintf a size_t or any general 64-bit number. (And a few other things that are really convenient, but have got in my way in attempts to port C code to C++.)

--
dustin sallings
Re: [sqlite] When is corruption acceptable?
On Nov 19, 2010, at 12:17, Pavel Ivanov wrote:

>> 1. When is it acceptable for sqlite to leave a corrupt database that can't be used?
>
> It's never acceptable. SQLite is specifically written and tested to manage "out of disk space" errors appropriately without database corruption.

Yes, this is the argument I've been making, and it kind of comes back to me when stuff is corrupt. :/ Then people ask me why I don't just make something better. While I do think I could build something that fits our application a little better, my confidence in sqlite is higher than in whatever I could come up with over the weekend.

>> I've got the data moved off to the side after it broke.
>
> Did you move the journal along with the database? If not, you made a big mistake. If it's not deleted yet and you didn't change anything in the database, you can close all connections to it, put the journal nearby, and run the command line utility again. After that, the integrity check should be okay.

I didn't actually do the move, but I've been told all of the files came together while the application was shut down. There is no journal. I've read the shell history and found that it appears someone copied the files out from under the application while it was still running. This copy is what was corrupt. We later removed the original and replaced it with the copy.

So, yay sqlite, confidence restored (now just wait for the data to be).

>> 2. Is there any way to recover the data that didn't get corrupt (which should be lots)?
>
> If there's no way to restore the journal then you can only try the .dump command from the command line utility.

Thank you for the suggestion. This looks like it's going to be helpful to get a lot of the data out.

--
Dustin Sallings
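Python's sqlite3 module exposes the same idea as the shell's .dump command via Connection.iterdump(); against a lightly damaged file this can salvage whatever rows are still readable. A sketch against a healthy in-memory database (the table is made up):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3)])

# Equivalent in spirit to `.dump` piped into a fresh database: emit the
# schema and data as SQL text, then replay it into a new file.
dump_sql = "\n".join(src.iterdump())
dst = sqlite3.connect(":memory:")
dst.executescript(dump_sql)

recovered = dst.execute("SELECT count(*) FROM t").fetchone()[0]
```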
[sqlite] When is corruption acceptable?
I have an application that was writing to a sqlite db until we ran out of disk space. The app failed to store data, commit, etc... and then was killed. I've got the data moved off to the side after it broke. The integrity check says this:

    *** in database main ***
    Main freelist: 231854296 of 5 pages missing from overflow list starting at 7184967
    On tree page 187114 cell 0: 4 of 5 pages missing from overflow list starting at 5920373
    Page 5510841 is never used
    Page 5985431 is never used
    Page 6127423 is never used
    Page 6324952 is never used
    Page 7156369 is never used
    Page 7179495 is never used
    Page 7184966 is never used
    Page 7184968 is never used

-- clearly, overflow broke a bit. I can do a count(*) on the table, but I can't grab all of the rows due to overflow. It seems a bit... delicate at this point, which leads to two questions:

1. When is it acceptable for sqlite to leave a corrupt database that can't be used?
2. Is there any way to recover the data that didn't get corrupt (which should be lots)?

--
Dustin Sallings
Re: [sqlite] programmatic way of determining fragmentation?
On Oct 22, 2010, at 15:12, Max Vlasov wrote:

> As for your initial question, I think fragmentation evaluation is possible with the help of a VFS. I'd keep a total sum of the absolute difference between consequent read offsets for the xRead operation. In this case, if some xRead request reads 1024 bytes at offset 4096 and the next xRead reads something at 5120 (4096 + 1024), this will add 0 to the sum, but if the next read is at 8192, this will add 3072 (= 8192 - (4096 + 1024)). If this is implemented, you will be able to see how this value changes for some of your SELECTs and maybe evaluate it on a per-record basis. If it's more than some threshold value, perform a VACUUM.

This sounds like a *really* awesome idea. I know exactly what operations I'm doing that *shouldn't* generally seek, and I can keep some stats on that. Thanks a lot.

--
Dustin Sallings
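The metric Max describes reduces to a small fold over the (offset, length) pairs a VFS would see in xRead. A sketch of just the arithmetic, with made-up read traces (a real version would live in a VFS shim and feed it actual offsets):

```python
def seek_cost(reads):
    """Sum of the distances between where each read starts and where the
    previous read ended.  `reads` is a list of (offset, length) pairs in
    the order issued; 0 means perfectly sequential I/O."""
    cost, expected = 0, None
    for offset, length in reads:
        if expected is not None:
            cost += abs(offset - expected)
        expected = offset + length
    return cost

sequential = [(0, 1024), (1024, 1024), (2048, 1024)]
scattered  = [(0, 1024), (8192, 1024), (1024, 1024)]

sequential_cost = seek_cost(sequential)  # each read starts where the last ended
scattered_cost = seek_cost(scattered)    # jumps forward and back: large cost
```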
Re: [sqlite] When does "begin transaction" fail?
On Oct 22, 2010, at 14:56, Simon Slavin wrote:

> Are you asking for circumstances where a 'BEGIN' could issue an error message?

This one. When might "begin" return an error code and fail to begin a transaction?

--
Dustin Sallings
[sqlite] When does "begin transaction" fail?
Assuming I'm not in a transaction already, what could possibly cause me to fail to start a transaction? I have reason to believe that at some point a transaction was attempted but failed to start. I do not currently have any data around this failure (it happened a couple of times with my software on someone else's machine), but I'd like to know what would make it happen as I start getting a bit defensive in the area.

--
Dustin Sallings
Re: [sqlite] programmatic way of determining fragmentation?
On Oct 21, 2010, at 10:05, Pavel Ivanov wrote:

> I think it's not related to fragmentation, but to the fill percentage of b-tree pages. I guess your reconstructed table is much smaller in total size than your initial one. Also, does changing cache_size change the above numbers?

Interesting. We have been using sqlite3_analyzer on some tests and finding that we can get our dbs very fragmented. It doesn't report fill percentage as far as I can tell. We'll play around with cache_size to see if that does anything useful for us in the meantime.

> What size do these tables have?

About 2.2GB with about 4 million rows.

> What does the bottleneck appear to be in the 3-hour query execution? Is it disk thrashing?

Yes. I've tried different strategies in the past. Vacuum and the rebuild both seem to help quite a bit. I don't understand the file layout all that well right now, so I don't completely understand how the index is traversed.

--
Dustin Sallings
Re: [sqlite] programmatic way of determining fragmentation?
On Oct 21, 2010, at 9:27, Simon Slavin wrote:

> Have you actually demonstrated this? In other words, do you have an operation that's really 'too slow', but after a VACUUM it's fast enough?

Yes. Select * from a table took just slightly under three hours. Select * from a reconstructed table (insert into ... select from) in a new database took 57 seconds.

--
Dustin Sallings
Re: [sqlite] programmatic way of determining fragmentation?
On Oct 21, 2010, at 7:52, Roger Binns wrote:

> You'll need to read the docs on the file format:
>
> http://www.sqlite.org/fileformat.html
> http://www.sqlite.org/fileformat2.html
>
> From that you can determine a measure of how bad the fragmentation is, and your code can be quick and crude.

I've seen this, and that's my last resort. I figured I'd ask because it feels like something someone would have done other than sqlite3_analyzer. Thanks.

--
Dustin Sallings
Re: [sqlite] programmatic way of determining fragmentation?
On Oct 21, 2010, at 1:00, Kees Nuyt wrote:

> PRAGMA page_count; and PRAGMA freelist_count; will give you some info, but not as much as sqlite3_analyzer. It might be enough in your case.

Those provide some info, but not the specific info I'm having problems with right now. I have too many non-sequential pages, and it's making my application run a couple of orders of magnitude slower than a fresh DB.

> Running a full analysis takes quite some time; you might as well start a VACUUM periodically. I would suggest running it with half the frequency you intend to run the analysis. That way the time spent on this kind of housekeeping will be about the same.

We're not measuring it taking too long at this point. We have a lot of application-specific optimizations we could perform if we had a bit more information.

> And you could consider
> PRAGMA auto_vacuum = 0 | NONE | 1 | FULL | 2 | INCREMENTAL;

The problem with auto_vacuum is that it's documented to make the problem worse.

--
Dustin Sallings
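For what it's worth, the two pragmas Kees mentions do give a cheap free-space signal even if they say nothing about page ordering (a sketch; the table and the 25% threshold are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 500,) for _ in range(1000)])
conn.execute("DELETE FROM t")   # freed pages go to the freelist; the file
conn.commit()                   # doesn't shrink without a VACUUM

page_count = conn.execute("PRAGMA page_count").fetchone()[0]
freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]
needs_vacuum = freelist / page_count > 0.25  # invented threshold
```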
[sqlite] programmatic way of determining fragmentation?
I realize sqlite3_analyzer will give me some good data for general use, but I'd like to be able to do this from within my app. Does anyone have a lib-like thing I can use, or an internal sqlite API that can help me out here?

Mostly, I want to have an idea of how fragmented I am. My app can tremendously wreck a sqlite table right now, to the point where a reconstruction (vacuum, etc...) is the difference between three hours and one minute. I'd like to know when things are getting bad.

If no such thing exists, I can build it, but I would imagine it's been done before.

--
Dustin Sallings
Re: [sqlite] Many many tables... performance wise?
On Oct 15, 2010, at 17:32, Scott Hess wrote:

> Having a table with an owner_id, key, and value, with a unique index on (owner_id, key), will probably be more efficient than having a separate table per owner. Also, it will be easier to code safely, because bind parameters don't work on table names (I'm assuming you're using dynamic table names in this case - if so, watch out for johny drop tables).

...but there will also be a unique index on rowid, which will get large and need to be maintained. I'm concerned that this alone could be limiting me somewhat.

I have a similar application with a single table that I'd like to split into more based on an identifier that appears in the table. All of my operations are limited to one of these identifiers (though it's not indexed; the lookup is always by rowid). Occasionally, I want to delete all records based on an ID. Bobby Tables is not relevant to my application, as I know how to do my bindings properly and have no confusion between data types (this is an integer) or between user data and executable code.

As a single table, I can easily have many tens of millions of rows. Splitting it into 1,024 tables by a specific ID, I'd expect each index to be smaller and (at the very least) I'd have a far easier time deleting a large chunk all at once.

I do intend to do some experimentation here, though it'd be helpful to have some more detailed pointers as to why the intuition is wrong here.

--
Dustin Sallings
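For comparison, the single-table shape Scott describes looks like this (table and column names invented); the per-owner bulk delete is a single bound-parameter statement rather than a DROP TABLE per owner:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        owner_id INTEGER NOT NULL,
        key      TEXT NOT NULL,
        value    BLOB,
        UNIQUE (owner_id, key)
    )
""")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)",
                 [(1, "a", b"x"), (1, "b", b"y"), (2, "a", b"z")])

# Deleting everything for one owner is one statement, no dynamic table names.
conn.execute("DELETE FROM items WHERE owner_id = ?", (1,))

remaining = conn.execute("SELECT count(*) FROM items").fetchone()[0]
```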
Re: [sqlite] When do I need SQLITE_THREADSAFE?
On Oct 5, 2010, at 23:50, Michele Pradella wrote:

> I checked the sqlite3.c code for SQLITE_THREADSAFE. I can't find any difference between SQLITE_THREADSAFE=1 and SQLITE_THREADSAFE=2. I found differences only between SQLITE_THREADSAFE=0 and SQLITE_THREADSAFE>0. Have I missed something?

Look for bCoreMutex and bFullMutex. bFullMutex is in use when SQLITE_THREADSAFE=1, and bCoreMutex is in use for 1 or 2.

--
Dustin Sallings
Re: [sqlite] When do I need SQLITE_THREADSAFE?
On Oct 4, 2010, at 14:46, Jay A. Kreibich wrote:

> If you're treating the threads independently, each with their own database connections, you should be safe with =2 ("multithread"). That provides less protection than =1 ("serialized"), but it is also faster. Continued from above:
>
> When compiled with SQLITE_THREADSAFE=2, SQLite can be used in a multithreaded program so long as no two threads attempt to use the same database connection at the same time.

I did read that, but I didn't quite understand what global state would be accessed between otherwise independent threads. Reading the code makes that a bit clearer. I was mainly wondering whether there was a difference between having two entirely independent threads accessing two entirely independent databases, and =2 being for concurrent access to a single database only. It sounds like the answer is ``just do it anyway.'' Thanks.

--
Dustin Sallings
[sqlite] When do I need SQLITE_THREADSAFE?
I've read the documentation, but one thing was unclear: do I need SQLITE_THREADSAFE=2 (or 1) when I am using sqlite from two different threads entirely independently, each in a single-threaded manner? That is, no information sharing between them in my application. Specifically, is there any global state that will conflict?

--
Dustin Sallings
Re: [sqlite] how to use raise() in a trigger?
On Sep 28, 2010, at 15:02, David Ventimiglia wrote:

> CREATE TRIGGER <trigger> AFTER INSERT ON <table> WHEN EXISTS
> (select * from <other-table>)
> BEGIN
>     RAISE(ROLLBACK);
> END

try something like this:

    create trigger <trigger> after insert on <table>
    begin
        select raise(rollback, 'message') where exists (select * from <other-table>);
    end

--
Dustin Sallings
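A runnable version of that shape (all names invented; note that RAISE(ROLLBACK, ...) requires an error-message argument, and the connection here is in autocommit mode so each insert stands alone):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
conn.executescript("""
    CREATE TABLE banned (name TEXT);
    CREATE TABLE users (name TEXT);
    INSERT INTO banned VALUES ('mallory');

    -- RAISE lives inside a SELECT in the trigger body; the WHEN clause
    -- decides whether the trigger fires at all.
    CREATE TRIGGER no_banned AFTER INSERT ON users
    WHEN EXISTS (SELECT 1 FROM banned WHERE name = NEW.name)
    BEGIN
        SELECT RAISE(ROLLBACK, 'banned user');
    END;
""")

conn.execute("INSERT INTO users VALUES ('alice')")        # allowed
try:
    conn.execute("INSERT INTO users VALUES ('mallory')")  # rolled back
    blocked = False
except sqlite3.Error:
    blocked = True

surviving = conn.execute("SELECT count(*) FROM users").fetchone()[0]
```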
Re: [sqlite] processing a sql script from the C API?
On Sep 27, 2010, at 23:20, Richard Hipp wrote:

> See http://www.sqlite.org/c3ref/complete.html

Thank you very much. That was exactly what I needed. Sorry to have overlooked that.

--
Dustin Sallings
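The same check is exposed in Python as sqlite3.complete_statement(), which knows that semicolons inside a trigger's BEGIN...END body don't end the statement. A sketch of the accumulate-and-test loop (the script is made up):

```python
import sqlite3

script = """\
CREATE TABLE t (x);
CREATE TRIGGER tr AFTER INSERT ON t BEGIN
    DELETE FROM t WHERE x IS NULL;
END;
"""

# Accumulate lines until sqlite says we have a complete statement.
statements, buf = [], ""
for line in script.splitlines(keepends=True):
    buf += line
    if sqlite3.complete_statement(buf):
        statements.append(buf.strip())
        buf = ""
```

The trigger comes out as one statement even though its body contains a semicolon, which is exactly the failure mode of naive split-on-';' parsing.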
[sqlite] processing a sql script from the C API?
I've got a program that uses SQLite and has a couple of callouts to do pre- and post-initialization (setting pragmas, mostly). Originally, I was just parsing for ';' characters and sending each piece through with execute. Recently, I wanted to set up a couple of temporary triggers (which naturally contain ';' characters) in one of these scripts, and that approach fell apart.

Is there any way to run a sql script, or perhaps hand sqlite a chunk of text and ask ``is this a query yet?''

--
Dustin Sallings
Re: [sqlite] usleep bug?
On Sep 27, 2010, at 5:43, Simon Slavin wrote:

> I think this differs for different platforms. For instance:
>
> http://www.manpagez.com/man/3/usleep/
>
> makes no mention of that fault.

It kind of does, indirectly. It says it's built on nanosleep(), and links to that doc, which says this in its errors section:

    [EINVAL] rqtp specified a nanosecond value less than zero or greater than or equal to 1000 million.

I didn't actually run into a problem with it; I just noticed that its result isn't checked, and the documentation I was reading defined it to fail.

> Since the time for sleeping is an integer number of seconds, it's probably simplest just to replace this call with one to sleep(10). sleep() is defined in POSIX.1, so I believe it should be available in a library for every platform SQLite is expected to run on.

I think that makes sense. It also means you don't need a comment describing that it's intended to sleep for exactly ten seconds. :)

--
dustin sallings
[sqlite] usleep bug?
I happened to notice this code in the output of a git grep somewhere:

    usleep(1000); /* wait 10 sec and try the lock again */

usleep is documented to fail with EINVAL if its argument is greater than 1,000,000. I'm not sure whether that takes effect on all platforms, but it could very well reduce this to a quick function call that just sets errno if usleep fails.

--
dustin sallings
[sqlite] ATTACH and WAL
I have a database setup with a sort of central db and four other databases that are ATTACHed. When I enable a journal mode of WAL, I only see one wal and shm file (for the one I actually opened). I would expect to see one for each attached database, as that would be consistent with the older file-based journal types. Is it expected that I would only have one wal and shm file in this configuration?

--
Dustin Sallings
Re: [sqlite] SQLite file Validation
On May 19, 2010, at 12:46, seandakid wrote:

> Thanks for the quick reply, Dustin. That was my concern as well... it might create more issues than it will solve.
>
> One of the devs suggested this code example:
>
> int makeSQLtight(const TCHAR* update);

I'm all for developer laziness, but holistically. This is doing lots of work to asymptotically approach "safe", with the side effect of encouraging unsafe practices.

This conversation came up a few times on reddit a month or so ago. People brought up things like mysql_real_escape (that is, something (unfortunately) widely used and allegedly well-tested). Rather than stopping at assuming it was wrong, I just did a google search for exploits within it. There were lots.

It's not worth it. If you do things right, bad results become impossible. If you do things wrong, you'll never get to a solution, regardless of how quick it appears. :)

--
Dustin Sallings
Re: [sqlite] SQLite file Validation
On May 19, 2010, at 12:24, seandakid wrote:

> Question: Do you think that instead of getting them to go back throughout their code, it is feasible to create a function that just eliminates the ; and replaces it with a ,? And if so, any suggested code?

On one hand, you have something that is safer and more efficient (probably tons faster, depending on your app, since you'd be able to reuse statements). On the other, you have something that will increase your technical debt and give you more places to hide bugs (with the false hope that you can figure out the difference between code and data "magically" in a new layer).

--
Dustin Sallings
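The safer-and-faster option is just bound parameters, which never let data be parsed as SQL in the first place (a sketch; the table and hostile string are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

# With a bound parameter, this string is stored verbatim as data;
# it is never interpreted as SQL, so no escaping layer is needed.
hostile = "x'); DROP TABLE notes; --"
conn.execute("INSERT INTO notes VALUES (?)", (hostile,))

stored = conn.execute("SELECT body FROM notes").fetchone()[0]
table_exists = conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE name = 'notes'").fetchone()[0]
```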
[sqlite] dealing with IO errors
I'm using sqlite from the C API on top of EC2, where EBS will sometimes give me IO errors. For the most part this is OK and my application can deal with them, but I'm not entirely sure *how* to deal with them correctly in all cases. My usage is pretty simple:

    begin transaction;
    -- do a bunch of inserts (on conflict replace, so many are updates in practice)
    commit;

Whenever an insert fails, I can requeue it to happen at a later time. Right now, when a commit fails, I just keep retrying it until it succeeds. The types of problems that could lead me to fail to commit should theoretically also leave me unable to rollback. My app must assume that these errors in the IO layer will eventually heal. What's the best course of action to keep my application running and do my best to keep my on-disk data sane?

(A variation of this was also posted to SO and may have some useful commentary around it: <http://stackoverflow.com/questions/2842752/how-should-i-deal-with-sqlite-errors>)

--
Dustin Sallings
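The retry pattern described above looks roughly like this (a sketch, not the actual application code; the attempt count and delay are invented). The one refinement over retrying forever is falling back to a ROLLBACK attempt so the connection is usable again once the I/O layer heals:

```python
import sqlite3
import time

def commit_with_retry(conn, attempts=5, delay=0.01):
    """Retry COMMIT a few times; if it never succeeds, try a ROLLBACK so
    the connection isn't left wedged mid-transaction."""
    for _ in range(attempts):
        try:
            conn.commit()
            return True
        except sqlite3.OperationalError:
            time.sleep(delay)
    try:
        conn.rollback()
    except sqlite3.OperationalError:
        pass  # the I/O layer may be too sick even to roll back
    return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
ok = commit_with_retry(conn)
rows = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```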