Re: secrets and lies
> Please enlighten me: who bullshitted you Americans into believing that one needs a license to use software?

Since you asked, that would be MAI Systems Corporation in 1993, in a lawsuit against Peak Computer, Inc. See http://www.law.berkeley.edu/journals/btlj/articles/10_1/Nicholson/html/text.html for a discussion of the case and its implications. The issue of "ephemeral copies" is currently a hot topic in US copyright law, and is likely to be decided explicitly by statute in the near-ish future. This being US copyright law, the issue is likely to be decided the wrong way--just one more reason to avoid proprietary commercial software.

> Or that software is patentable?

Nobody has mentioned software patents in this thread but you, as far as I have seen; perhaps bringing up a completely new topic in a "move this discussion somewhere else" message isn't wise.
DFSG and DJB (was Re: secrets and lies)
> > Allowing patches is necessary, but it's not sufficient.
>
> Debian's Free Software Guidelines has a similar clause, and I see no other clause that DJB's licence conflicts with. If I go by your statement, why is qmail listed under the non-free section?

Ability to distribute binaries built from modified source would seem to be the key issue. From DFSG section 4:

    The license must explicitly permit distribution of software built from modified source code.

(As a note of personal preference, I think allowing only "you can distribute the pristine source plus patches" is a ridiculous concession, and I don't consider software with such a license to be "free" in the liberated sense at all. But my personal preference isn't especially relevant to this discussion.)
Re: Qmail is *NOT* reliable with ReiserFS
> It is DJB's view that all directory operations (creating, removing, linking, etc.) should be synchronous, just like BSD does.

For the record, FFS with soft-updates does not guarantee synchronous directory operations; you have to open and fsync() the file you just moved to be sure the operation has been committed to disk. See http://mail-index.netbsd.org/current-users/2000/06/19/0011.html for a little more information.

Based on the patch, it sounds like ReiserFS agrees with FFS+softupdates in semantics; that is, if you want to ensure that a directory operation has completed, you open and fsync the directory entry you care about. This behavior is different from ext2fs, where you have to open and fsync the directory containing the entry you care about.
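The difference between the two durability models can be sketched in user-space code. This is a hedged illustration of the semantics described above, not a guarantee about any particular filesystem or kernel version; which calls are actually required depends on the filesystem you run:

```python
# Sketch of the two models: FFS+softupdates/ReiserFS-style (fsync the file
# you care about) vs. ext2fs-style (also fsync the containing directory).
# Assumed semantics; verify against your own filesystem's documentation.
import os

def durable_create_ffs_style(path):
    """FFS+softupdates model: fsync on the file itself is assumed to
    commit both the data and the directory entry."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.fsync(fd)
    finally:
        os.close(fd)

def durable_create_ext2_style(path):
    """ext2fs model: fsync the file, then separately open and fsync the
    directory containing the entry."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.fsync(fd)                      # file data and inode
    finally:
        os.close(fd)
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)                   # the directory entry itself
    finally:
        os.close(dirfd)
```

A mailer that wants to be portable across both models has to do the ext2fs-style dance, since it is a superset of the other.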
Re: Qmail is *NOT* reliable with ReiserFS
Apologies for not catching this in my first reply to Bruce's message.

> There is also the discussion of ordered meta-data updates (OMDU) vs unordered (UMDU). Linux (with the exception of newer journalled file systems) does UMDU. With OMDU, the file meta-data (inode, indirect blocks, etc) is written in an ordered fashion, typically before the data. This means FWIR that you can have good meta-data pointing to bad data in the case of a crash. With UMDU, you can have bad meta-data but good data, which is something that a fsck will detect.

You have OMDU backwards. Any sane ordered write scheme will write out a block X before writing out a block (inode or directory entry) which points to block X. FFS, with or without soft updates, should never encounter a case where an inode points to bad data. (Of course, if your disk controller reorders write operations you'll lose no matter what. Unfortunately, you have to choose both your hardware and your software somewhat carefully if you really care about filesystem consistency.)

Linux ext2fs has no write ordering whatsoever. If the system goes down uncleanly, you can get metadata pointing to bad data or data not pointed to by metadata. A recently created file might exist but contain blocks from an old copy of /etc/shadow instead of the data you wrote to it. It's really ugly. fsck cannot correct all of the possible problems which can arise, no matter how clever or thorough it is. People have tried to justify this state of affairs in lots of ways, but the only potentially correct and convincing justification is, "who cares?" Which is great unless you're one of the (admittedly, relatively few) people who does care.

Note that write ordering is different from synchronous vs. asynchronous operations. Write ordering is about filesystem consistency, which is mostly irrelevant to qmail's operation because of the way qmail works.
ext2fs is also a little odd with respect to synchronous operations (as discussed in my last piece of mail), but it's certainly possible to work around that.
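The ordering rule ("write block X before writing anything that points to block X") has a familiar user-space analogue: make the data durable before publishing a pointer to it, so a crash can never leave good metadata pointing at bad data. A minimal sketch, with illustrative file names:

```python
# User-space analogue of ordered metadata updates: fsync the data first,
# then atomically publish the "pointer" (here, a rename).  The file names
# are illustrative, not anything qmail actually uses.
import os

def publish(directory: str, data: bytes) -> None:
    tmp = os.path.join(directory, "data.tmp")
    final = os.path.join(directory, "data")
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)              # step 1: the data blocks are on disk
    finally:
        os.close(fd)
    os.rename(tmp, final)         # step 2: only now point at the data
```

If the machine crashes between the two steps, a reader sees either the old file or nothing under the final name, never the final name pointing at garbage.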
Re: Qmail is *NOT* reliable with ReiserFS
> > For the record, FFS with soft-updates does not guarantee synchronous directory operations; you have to open and fsync() the file you just moved to be sure the operation has been committed to disk. See http://mail-index.netbsd.org/current-users/2000/06/19/0011.html for a little more information.
>
> Then I was confused. I assumed FFS was like UFS on Solaris, where you can "feel" the synchronous directory operations by doing a "rm -rf" of anything larger than a few files.

Soft updates are a recent thing. UFS on Solaris does not have them. Without soft updates, FFS does have synchronous directory operations, and yes, you will feel the resultant performance limitations.

> If ReiserFS behaved identically to FFS+softupdates, it would not need any qmail patches.

I can't really address this issue; I don't know qmail well enough.

> Which to me seems to be a more logical mode of operations: if you want the file data sync'd to disk, call fsync on the file; if you want the directory, fsync the directory.

Perhaps. There are arguments for either model being simplest, and history should not be ignored when picking between the two. The Single Unix Spec v2 also appears to mandate the FFS model, for those who care about that standard:

    The fsync() function forces all currently queued I/O operations associated with the file indicated by file descriptor fildes to the synchronised I/O completion state. All I/O operations are completed as defined for synchronised I/O file integrity completion.

and:

    synchronised I/O file integrity completion - Identical to a synchronised I/O data integrity completion with the addition that all file attributes relative to the I/O operation (including access time, modification time, status change time) will be successfully transferred prior to returning to the calling process.

and:

    synchronised I/O data integrity completion - [...] The write is complete only when the data specified in the write request is successfully transferred and all file system information required to retrieve the data is successfully transferred.
Re: Does someone knows what is this about?
> ORBS also lists tarpitting people, although as spam relays they are unusable, too. Anybody clueful enough to do tarpitting should block relaying.

There exist sites which do not have a nice block of IP addresses which describes all of their valid mail relay users. For such sites, tarpitting is a much better solution than relay blocking. MIT is one of them (many of its mail relay users are customers of random outside ISPs), and has had numerous problems with ORBS as a result.

> No. You obviously do not see my point. ORBS's job is to list open relays. It does that, and it's good at it too. It also does not enforce this policy on anybody.

That's fine, but you personally have been making normative statements like "No, don't get used to being listed on ORBS" and the one I quoted above.
Re: The current status of IETF drafts concerning bare linefeeds
> Yup. The problem with bare linefeeds is simple: their interpretation is ambiguous on a Unix machine.

This is an oversimplification. Unix machines are perfectly capable of interpreting bare LFs in whatever way the spec might say they should. There is a practical problem because MTA and MUA implementations like to allow the use of standard Unix tools to process messages; this can be considered an implementation issue, although it's a big one.

> Given the differing interpretations of bare linefeeds and carriage-returns, they must be disallowed by the SMTP specification,

Certainly, given that sendmail munges them (or at least used to), the spec should disallow them, and the DRUMS draft does.

> and they must not be accepted by SMTP clients or servers.

This is a matter of philosophy. The IETF tendency on such matters is to regularly mandate that servers must accept invalid data as long as the data can be interpreted. I don't agree with this philosophy, at least for the initial deployment of a protocol ("be liberal in what you accept" increases robustness now at the expense of encouraging interoperability problems later), but it is somewhat defensible.

The DRUMS draft should probably become clearer about the proper interpretation of a bare LF before it becomes a standard, if it ever does. (Given that it has much more precisely specified grammar rules than RFC 822 does, I would love to see it become a standard, but I don't know how likely that is.)
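A server taking the strict "must not accept" position has to detect bare LFs in the raw octet stream before any line-splitting convenience functions get a chance to hide them. A minimal sketch of such a check (not taken from any draft; the function name is my own):

```python
def find_bare_linefeeds(data: bytes):
    """Return the offsets of LF bytes not preceded by CR in a raw SMTP
    payload.  A strict server could reject a DATA payload for which this
    list is non-empty, rather than guessing at an interpretation."""
    offsets = []
    for i, byte in enumerate(data):
        if byte == 0x0A and (i == 0 or data[i - 1] != 0x0D):
            offsets.append(i)
    return offsets
```

The important design point is that the scan operates on bytes: reading the stream through a tool that already normalizes line endings (as many standard Unix text utilities do) is exactly how the ambiguity sneaks in.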
Re: compile error
> Dan wrote, in 1996: ``In case anyone's curious: I use void main() because it shuts gcc up.''

Of course, a modern version of gcc (I just tested 2.8.1) will warn about "void main()" even if you don't give it warning flags. (I asked for this to be the case, back in 1996 when Dan said that; I can't remember whether the maintainers had already made the change in the development sources or if they did so in response to my asking.)

As to Russell Nelson's assertion that "int main" is a gratuitous innovation in C, I think that he's confused. "void" didn't even exist in early C, and the semantics of the return value from main() were probably in place long before void was added. I don't have any references to back up my beliefs, though.
Re: Patches revisited
> So, since you think you can do better, what would you do differently? Split the page up? That would waste people's time. Add more information? I'm fine with that -- "send code", as they say.

There's always the approach of "one big page with an index at the top where the index links point to anchors within the page." (I'm not saying there's a natural way to split the page up, but you can split the page up without increasing latency, although it won't be quite the same as splitting it up the normal way.)

In another message, you wrote:

> If I was making a distribution, *I* would reserve some UIDs < 100. The chances that you'll run into a conflict in an upgrade are very small. I'd include a program to check for a conflict, and reassign the existing UID.

Of course, once a site starts making use of network filesystems, it doesn't matter what UIDs are "reserved" by new operating systems, and no simple tool can make reassigning UIDs easy. Eventually someone is going to feel some pain. It happens.
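The conflict checker mentioned above is easy to sketch for the local-passwd-file case (which is precisely the case where it is of limited help, per the network-filesystem caveat). The function names here are hypothetical, not from any actual distribution tool:

```python
# Hypothetical sketch of a UID-conflict checker: report whether a UID a
# distribution wants to reserve is already taken in the local passwd
# database.  Does nothing for UIDs used only on network filesystems.
import pwd

def uid_in_use(uid: int) -> bool:
    """True if some local passwd entry already owns this UID."""
    try:
        pwd.getpwuid(uid)
        return True
    except KeyError:
        return False

def free_uid(start: int, stop: int):
    """First locally unused UID in [start, stop), or None.  A real
    reassignment tool would then also have to chown every file owned
    by the displaced user -- the genuinely hard part."""
    for uid in range(start, stop):
        if not uid_in_use(uid):
            return uid
    return None
```

Note that `pwd.getpwuid` consults only the local account database (via NSS), so files on an NFS mount owned by a remote-only UID are invisible to this check.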
Re: daemontools binaries (was Re: binaries)
> the daemontools binaries are included, they are, like all DJB software other than Qmail itself, under PD (not GPL).

Public domain would mean you can do anything you want with it. You can't; in particular, you are not allowed to distribute derivative works other than precompiled var-qmail packages without Dan's permission.

> Don't ask me how I know that, maybe an old discussion here, but I do know that there are no LICENSE readme files in the packages or on DJB's site.

If you download some source code and there is no license, you can't assume that the source is in the public domain or that you have permission to do anything covered by the exclusive rights given under copyright law. As it turns out, there is a license among Dan's web pages. See: ftp://koobera.math.uic.edu/www/qmail/dist.html
Re: daemontools binaries (was Re: binaries)
> For *qmail*. See the Subject of this message.

Yeah, sorry about that. Some of the reasoning in my message remains valid (lack of a license is not an indication of public domain status), but of course the specific facts were irrelevant.
Re: Getting qmail
> Dan's anonftpd chroots itself, and there's no way out. Crackers simply cannot break authentication because there *is* no authentication. Anybody can download only the files in the ftpd directory. Anything else is less secure.

But giving Dan's anonftpd the binary label "secure" and anything different the binary label "insecure" seems misleading to me. Dan's anonftpd is still subject to stream modification of the file you request, for instance (not that I particularly expect an ftpd to solve this problem; it's hard), and your use of "less secure" really seems to mean "potentially less secure."

> Isn't it about time we had an ftp server which cooperated with a visual ftp client by giving it an easily parsed listing format?

If this format is intended for programs and not users, wouldn't it be better to provide it under a new query? The description of LIST in RFC 959 says about the information returned:

    Since the information on a file may vary widely from system to system, this information may be hard to use automatically in a program, but may be quite useful to a human user.
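For what it's worth, Dan has published a candidate for such a machine-readable listing, EPLF ("easily parsed list format"): each line is a `+`, a comma-separated list of facts, a tab, and the file name. A parser sketch based on my understanding of that format (the fact letters shown are the ones I recall; check the actual spec before relying on this):

```python
def parse_eplf_line(line: str):
    """Parse one EPLF-style line: '+', comma-separated facts, TAB, name.

    Facts as I understand them: 'r' = retrievable via RETR, '/' = a
    directory you can CWD into, 's<bytes>' = size, 'm<secs>' = mtime.
    Unknown facts are ignored, which is the intended extension model.
    """
    if not line.startswith("+") or "\t" not in line:
        raise ValueError("not an EPLF line")
    facts_part, name = line[1:].split("\t", 1)
    info = {"name": name.rstrip("\r\n")}
    for fact in facts_part.split(","):
        if fact == "r":
            info["retrievable"] = True
        elif fact == "/":
            info["directory"] = True
        elif fact.startswith("s"):
            info["size"] = int(fact[1:])
        elif fact.startswith("m"):
            info["mtime"] = int(fact[1:])
    return info
```

The contrast with parsing `/bin/ls` output is the point: nothing here depends on locale, column widths, or guessing which field is the file name.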
Re: file names = inodes : why?
Mark Delany [EMAIL PROTECTED] wrote:

> Possibly. What do you propose? The current method guarantees a unique file name first time, every time. Since it's needed for every new mail, you want it to be efficient, right?

Not a very good argument. If some other technique gets a unique filename the first time with probability 1-epsilon, then the performance difference will be negligible. Having to handle more conditions, though undesirable, does not imply significantly reduced performance.
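The "1-epsilon" alternative can be sketched concretely: pick a probably-unique name and let O_EXCL catch the rare collision. The naming scheme below is illustrative only, not qmail's actual method:

```python
# Sketch of an epsilon-probability-of-retry scheme: O_CREAT|O_EXCL makes
# the kernel arbitrate uniqueness, so a collision just means "loop once
# more".  The name components here are illustrative, not qmail's.
import os, time, random

def create_unique(directory: str) -> str:
    """Create a new empty file with a unique name; return its path."""
    while True:
        name = "%d.%d.%d" % (time.time(), os.getpid(),
                             random.getrandbits(32))
        path = os.path.join(directory, name)
        try:
            fd = os.open(path,
                         os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
            os.close(fd)      # O_EXCL guarantees we did not reuse a name
            return path
        except FileExistsError:
            continue          # the epsilon case: just try another name
```

The retry branch is the "more conditions" being discussed: it costs a little code, but since it almost never executes, it costs essentially no time.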