On Mon, Apr 30, 2001 at 09:50:25PM -0500, William A. Rowe, Jr. wrote:
> From: "Greg Stein" <[EMAIL PROTECTED]>
> Sent: Monday, April 30, 2001 8:49 PM
>
> > -1. Please back out.
>
> I'd rather push forward and address your other concerns...
Sorry. In this case, that would not be prudent. Please back out.

The situation is that the SDBM is ten *years* old. It doesn't have bugs. It
simply is not something that I worry about. These kinds of changes are
destabilizing, in an area where I think we can't afford it.

>...
> I'm well aware of this. That's why I posited the question to the list
> about the cache invalidation semantic. The other option, of course, is to
> drop cache entirely in a shared open.

The latter is the only option (short of a synchronized / distributed cache
thingameebob). The mtime isn't going to be fine-grained enough to give you
proper invalidation. Two changes within the same second will cause data
corruption.

> Please note we continue to lock the file for it's lifetime iff the user
> doesn't pass the new APR_SHARELOCK flag. See the snippets I grabbed from
> the commit log below.

Sure, but I believe the existing change needs to be reverted, then we can
move forward in *teeny* pieces. I *really* am hesitant to get near this
code. If we change it, then we should only do so in a narrow, limited
fashion.

As a starter, the error handling needs to be fixed. That whole
apr_sdbm_error_get() isn't needed now that you've changed everything to
return a status. That will clear up the ioerr() macros, and the numerous
"goto error" lines. As a result, that will simplify error recovery in the
functions, which is very important to ensure that we unlock when necessary.

Then we need the function for cache invalidation, which you have in the
posted patch. Lastly, we would have a function/macro to lock on entry (if
needed), and then another to invalidate/unlock on exit. The mechanism for
inval/unlock needs to be tightly bound (see the sketch in the P.S. below).

> > I am mildly against the concept, in general, for SDBM (since people can
> > turn to other DBM implementations), but if you come up with a patch
> > that solves the issues, then we could certainly discuss it.
>
> I'm not proposing we provide the most optimized solution in the world.

Understood. Invalidating the cache would be an adequate solution, and I'm
okay with that. But the patch needs to be reverted, then we can start with
small bits to ensure absolute correctness.

Example? I found a bug in your latest addition. A failure in chkpage()
could have resulted in returning APR_SUCCESS. (this was the checkin
*before* the locking changes; I haven't reviewed the locking change because
of the glaring cache issue; who knows what is there? I don't want to have
to worry about it)

>...
> > There are some other issues with this patch, but it's kind of moot...
>
> Bring them on. Again, I prefer to work forward, since I mentioned in my
> other

Sorry, but we'll revert and start over. Get the fundamentals right, then
add in the bits we need. With this code, that is much better than jumping
ten steps and hoping that we get the previous nine filled back in.

> comment, the auth_dbm, auth_digest, and soon ssl will need at least a
> bare-bones cross-process dbm.

sdbm is already cross-process. It arbitrates access to the DBM quite fine.
You're simply trying to make the locking more fine-grained. That's okay,
but not at the expense of trust in this code.

> I'd prefer to dump caching entirely for APR_SHARELOCK access, rather than
> revert to the prior code, but if we can find a better way I'd like that.
> Dumping caching can't be worse than closing up and reopening an sdbm on
> every request.

Dumping the cache will be a *lot* better than close/reopen.

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
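
P.S. To make the lock/invalidate pairing concrete, here is a bare-bones
sketch of the shape I have in mind. Everything in it (the toy_* names, the
struct fields) is illustrative only -- this is not the code in the tree,
just the pattern: take the lock on entry, and treat "invalidate the cache"
and "drop the lock" as a single, bound step on exit.

    /* Illustrative types only -- not apr_sdbm_t. */
    typedef int toy_status_t;
    #define TOY_SUCCESS 0

    typedef struct {
        int shared;      /* opened for shared (per-operation) locking? */
        long pagbno;     /* block number of the cached page (-1 = none) */
        long dirbno;     /* block number of the cached dir block (-1 = none) */
    } toy_sdbm_t;

    /* Lock on entry: only per-operation when the db was opened shared;
     * otherwise the lock is held for the dbm's whole lifetime. */
    static toy_status_t toy_lock(toy_sdbm_t *db)
    {
        if (db->shared) {
            /* take the file lock here */
        }
        return TOY_SUCCESS;
    }

    /* Invalidate + unlock on exit, as one tightly bound step: once the
     * lock drops, another process may rewrite any page we have cached. */
    static toy_status_t toy_unlock(toy_sdbm_t *db)
    {
        if (db->shared) {
            db->pagbno = -1;    /* forget the cached page */
            db->dirbno = -1;    /* forget the cached dir block */
            /* release the file lock here */
        }
        return TOY_SUCCESS;
    }

    /* Every public entry point then has the same shape. */
    static toy_status_t toy_fetch(toy_sdbm_t *db)
    {
        toy_status_t status;

        if ((status = toy_lock(db)) != TOY_SUCCESS)
            return status;

        /* ... look up the page, copy out the datum ... */

        return toy_unlock(db);
    }

    int main(void)
    {
        toy_sdbm_t db = { 1, -1, -1 };
        return (int)toy_fetch(&db);
    }

With that pairing in place, a function can't forget the invalidation: the
only way to drop the lock is through the routine that also throws away the
cached page.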
