On Aug 9, 2006, at 16:47, Randall Gellens wrote:

At 10:46 AM -0400 7/19/06, Gerard wrote:

For what it's worth, I just had an mbox of 12 MB on one account. Not extremely large by any standard. Using 'qpopper', the download failed, as it usually does with a large mbox near or over 10 MB. I immediately changed to 'popa3d' in the inetd.conf file and restarted it. When I then attempted to download the email messages, it proceeded without incident.

I firmly believe that there is an inherent flaw in qpopper that has gone unfixed for quite some time. It does not affect everyone, or perhaps every OS, but it is there. Unfortunately, the developers do not seem interested in getting to the root of the problem. A simple Google search will turn up others who have experienced the exact same sort of problem that I have. It is even mentioned in the 'fetchmail' manual.

I'm not aware of any such bugs. I'm very happy to look into them, but I'd appreciate it if anyone who experiences this and suspects a Qpopper bug could help by reproducing it with Qpopper tracing and/or kernel tracing using truss(1), ktrace(1), or whatever is used on your platform.

To enable tracing in Qpopper:

1. Do a 'make clean'.
2. Re-run ./configure, adding '--enable-debugging'.
3. Edit the inetd.conf line for Qpopper, adding '-d' or '-t <tracefile-path>'.
4. Send inetd (or xinetd) a HUP signal.

(Steps 3 and 4 are only needed if you use inetd (or xinetd). In standalone mode, you can add '-d' or '-t <tracefile-path>' to the command line directly.)

(In either standalone or inetd mode, if you use a configuration file you can add 'set debug' or 'set tracefile = <tracefile>' to either a global or user-specific configuration file instead of steps 3 and 4.)
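The steps above might look like this in practice. The popper binary path and trace-file location are illustrative assumptions; substitute whatever your installation uses:

```sh
# Rebuild Qpopper with tracing support
make clean
./configure --enable-debugging
make

# Example inetd.conf entry with tracing to a file
# (binary path and trace path are placeholders):
#   pop3 stream tcp nowait root /usr/local/sbin/popper popper -t /var/log/qpopper.trace

# Tell inetd to re-read its configuration
kill -HUP "$(pgrep inetd)"
```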

This causes detailed tracing to be written to the syslog or to the file specified as 'tracefile'.

While I see the symptoms frequently, every time I chase them down it's the client machine dropping the connections. I have a number of users with mailboxes in the 50 MB range; some are approaching 500 MB. Typically the users just don't wait for the download to complete. They have no appreciation for the download time. However, those symptoms might have different causes with different configurations.

For example, I use home_dir_mail and home_dir_misc (patched to make them work). Having thousands of dynamic files in one directory takes a lot of time on a busy server just for manipulating the inode structure, so I keep my home directories spread out among 100 different directories to avoid that issue. Also, I use fast_update, as rewriting a large mailbox can take enough time to cause a client to time out. We do require our users to delete mail after downloading, but some don't. Even so, this approach seems to give good results.
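Spreading home directories across 100 subdirectories, as described above, can be sketched with a simple hash. The bucket count matches the setup described, but the cksum-based scheme and the /home/NN layout here are illustrative assumptions, not anything Qpopper itself provides:

```sh
#!/bin/sh
# Map a username to one of 100 home-directory buckets so that no
# single directory accumulates thousands of entries.
# The cksum-based hash and /home/NN layout are assumptions.
user="${1:-alice}"
bucket=$(printf '%s' "$user" | cksum | awk '{ print $1 % 100 }')
printf '/home/%02d/%s\n' "$bucket" "$user"
```

Running it with a username prints a path such as /home/NN/alice; because the hash is deterministic, the same user always lands in the same bucket.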

If you find a mailbox that consistently has problems, telnet to the server and issue the commands by hand. That eliminates the client timeouts and you can see how long it takes. It will also succeed if timeouts were the cause.
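A by-hand session against the POP3 port can look like this. The host name, user name, and sizes are placeholders; `C:` and `S:` mark client and server lines, and the commands are the standard ones from RFC 1939:

```
C: telnet pop.example.com 110
S: +OK POP3 server ready
C: USER alice
S: +OK
C: PASS secret
S: +OK mailbox locked and ready
C: STAT
S: +OK 3 12582912
C: RETR 1
S: +OK 4194304 octets
S: ...message text...
C: QUIT
S: +OK
```

If RETR on the large mailbox completes here but fails from the mail client, that points at a client-side timeout rather than a server bug.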
