Anyone know why APR_TCP_NODELAY_INHERITED is not defined in apr.hw?
Or why ap_sock_disable_nagle is not defined for Windows (looks like this was omitted unintentionally).
Windows is exhibiting predictable performance problems because Nagle is enabled. The following patch fixes it.
Also, it's not part of the nagle problem but it looks to me like APR_O_NONBLOCK_INHERITED was inadvertently left out of apr.hw also.
Allan

-------------------------------------------------------------------
Index: server/mpm_common.c
===================================================================
RCS file: /home/cvs/httpd-2.0/server/mpm_common.c,v
retrieving revision 1.105
diff -u -d -b -r1.105 mpm_common.c
--- server/mpm_common.c	20 Mar 2003 21:50:40 -0000	1.105
+++ server/mpm_common.c	9 Apr 2003 20:13:41 -0000
@@ -302,7 +302,7 @@
 }
 #endif /* AP_MPM_WANT_PROCESS_CHILD_STATUS */
 
-#if defined(TCP_NODELAY) && !defined(MPE) && !defined(TPF) && !defined(WIN32)
+#if defined(TCP_NODELAY) && !defined(MPE) && !defined(TPF)
 void ap_sock_disable_nagle(apr_socket_t *s)
 {
     /* The Nagle algorithm says that we should delay sending partial
Index: srclib/apr/include/apr.hw
===================================================================
RCS file: /home/cvs/apr/include/apr.hw,v
retrieving revision 1.113
diff -u -d -b -r1.113 apr.hw
--- srclib/apr/include/apr.hw	23 Mar 2003 02:25:02 -0000	1.113
+++ srclib/apr/include/apr.hw	9 Apr 2003 20:13:41 -0000
@@ -342,6 +342,14 @@
  */
 #define APR_CHARSET_EBCDIC 0
 
+/* Is the TCP_NODELAY socket option inherited from listening sockets?
+*/
+#define APR_TCP_NODELAY_INHERITED 1
+
+/* Is the O_NONBLOCK flag inherited from listening sockets?
+*/
+#define APR_O_NONBLOCK_INHERITED 1
+
 /* Typedefs that APR needs. */
 typedef unsigned char apr_byte_t;