Re: Talking about proxy workers

2010-08-09 Thread Paul Fee
Rainer Jung wrote:

 Minor additions inside.
 
 On 06.08.2010 14:49, Plüm, Rüdiger, VF-Group wrote:


 -Original Message-
 From: Paul Fee
 Sent: Freitag, 6. August 2010 14:44
 To: dev@httpd.apache.org
 Subject: Re: Talking about proxy workers


 Also, is it possible to set up these three reuse styles for a
 forward proxy?


 1: No reuse, close the connection after this request.

 Yes, this is the default.

 2: Reuse connection, but only for the client that caused its creation.

 No.
 
 Even if you configure pooled connections like in the example given in 3,
 the connections are returned to the pool after each request/response
 cycle. They are not directly associated with the client connection.
 
 But: if the MPM is prefork, the client connection is handled by a single
 process which doesn't handle any other requests during the life of the
 client connection. Since pools are process local, in this case the pool
 will always return the same connection (the only connection in the
 pool). Note that this pooled connection will not be closed when the
 client connection is closed. It can live longer or shorter than the
 client connection and you can't tie their lifetime together.
 
 Whether the proxy operates in forward or reverse mode doesn't matter; it
 only matters how the pool (aka worker) is configured. See 3.
 
 3: Pool connection for reuse by any client.

 Yes, but this is needed separately for every origin server you forward
 to:

 <Proxy http://www.frequentlyused.com/>
 # Set an arbitrary parameter to trigger the creation of a worker
 ProxySet keepalive=on
 </Proxy>
 
 Pools are associated with workers, and workers are identified by origin
 URL. In the case of a reverse proxy you often only have a few origin
 servers, so pooling works fine. In the case of a forward proxy you often
 have an enormous number of origin servers, each only visited every now
 and then, so using persistent connections is less effective. It would
 only make sense if we could optionally tie together client
 connections and origin server connections.
 
 Regards,
 
 Rainer

I'm using the worker MPM, so connection sharing between clients can happen.

As you've pointed out, pooling works well for reverse proxies as there are 
few backends and the hit rate is high.  For forward proxies, there are 
numerous destinations and the pool hit rate will be low.  The pool has a 
cost due to multi-threaded access to a single data structure; I presume 
locks protect the connection pool, and locks can limit scalability.

I'm wondering if pools should be restricted to the reverse proxy case.  
Forward proxies would couple the proxy-origin server connection to the 
client side connection.  Since connections cannot be shared, there's no 
need for locking.  We'd lose the opportunity to share, but since the 
probability of a pool hit by another client is low, that loss should be 
acceptable.

Essentially, I'm asking if it would make sense to implement 2: Reuse 
connection, but only for the client that caused its creation.  This could 
be a configurable proxy worker setting.
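For concreteness, a purely hypothetical configuration for such a per-client reuse mode might look like the following. Note that `reuse=client` is an invented parameter name used only to illustrate the proposal; it is not an existing httpd directive or ProxySet option.

```apache
<Proxy http://www.frequentlyused.com/>
    # Hypothetical: keep the backend connection alive, but tie it to the
    # client connection that created it (no cross-client sharing, no lock).
    ProxySet keepalive=on reuse=client
</Proxy>
```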

Thanks,
Paul


Re: svn commit: r983618 - in /apr/apr/trunk: network_io/unix/sockets.c test/testsock.c

2010-08-09 Thread Joe Orton
This fixes a slow memory leak in mod_proxy FYI.  The sockaddr passed to 
apr_socket_connect() is allocated out of worker->cp->pool.  When a new 
backend connection is created, core_create_conn extracts the address 
from that socket to the conn_rec and it gets duped in that pool again.

On Mon, Aug 09, 2010 at 12:51:29PM -, jor...@apache.org wrote:
 Author: jorton
 Date: Mon Aug  9 12:51:29 2010
 New Revision: 983618
 
 URL: http://svn.apache.org/viewvc?rev=983618&view=rev
 Log:
 * network_io/unix/sockets.c (apr_socket_connect): Copy the remote
   address by value rather than by reference.  This ensures that the
   sockaddr object returned by apr_socket_addr_get is allocated from
   the same pool as the socket object itself, as apr_socket_accept
   does; avoiding any potential lifetime mismatches.
 
 * test/testsock.c (test_get_addr): Enhance test case to cover this.
 
 Modified:
 apr/apr/trunk/network_io/unix/sockets.c
 apr/apr/trunk/test/testsock.c
 
 Modified: apr/apr/trunk/network_io/unix/sockets.c
 URL: 
 http://svn.apache.org/viewvc/apr/apr/trunk/network_io/unix/sockets.c?rev=983618&r1=983617&r2=983618&view=diff
 ==============================================================================
 --- apr/apr/trunk/network_io/unix/sockets.c (original)
 +++ apr/apr/trunk/network_io/unix/sockets.c Mon Aug  9 12:51:29 2010
 @@ -387,10 +387,13 @@ apr_status_t apr_socket_connect(apr_sock
  /* A real remote address was passed in.  If the unspecified
   * address was used, the actual remote addr will have to be
   * determined using getpeername() if required. */
 -/* ### this should probably be a structure copy + fixup as per
 - * _accept()'s handling of local_addr */
 -sock->remote_addr = sa;
  sock->remote_addr_unknown = 0;
 +
 +/* Copy the address structure details in. */
 +sock->remote_addr->sa = sa->sa;
 +sock->remote_addr->salen = sa->salen;
 +/* Adjust ipaddr_ptr et al. */
 +apr_sockaddr_vars_set(sock->remote_addr, sa->family, sa->port);
  }
 
  if (sock->local_addr->port == 0) {
 
 Modified: apr/apr/trunk/test/testsock.c
 URL: 
 http://svn.apache.org/viewvc/apr/apr/trunk/test/testsock.c?rev=983618&r1=983617&r2=983618&view=diff
 ==============================================================================
 --- apr/apr/trunk/test/testsock.c (original)
 +++ apr/apr/trunk/test/testsock.c Mon Aug  9 12:51:29 2010
 @@ -334,8 +334,11 @@ static void test_get_addr(abts_case *tc,
  apr_status_t rv;
  apr_socket_t *ld, *sd, *cd;
  apr_sockaddr_t *sa, *ca;
 +apr_pool_t *subp;
  char *a, *b;
  
 +APR_ASSERT_SUCCESS(tc, "create subpool", apr_pool_create(&subp, p));
 +
  ld = setup_socket(tc);
  
  APR_ASSERT_SUCCESS(tc,
 @@ -343,7 +346,7 @@ static void test_get_addr(abts_case *tc,
apr_socket_addr_get(&sa, APR_LOCAL, ld));
  
  rv = apr_socket_create(&cd, sa->family, SOCK_STREAM,
 -   APR_PROTO_TCP, p);
 +   APR_PROTO_TCP, subp);
  APR_ASSERT_SUCCESS(tc, "create client socket", rv);
  
 APR_ASSERT_SUCCESS(tc, "enable non-block mode",
 @@ -369,7 +372,7 @@ static void test_get_addr(abts_case *tc,
  }
  
  APR_ASSERT_SUCCESS(tc, "accept connection",
 -   apr_socket_accept(&sd, ld, p));
 +   apr_socket_accept(&sd, ld, subp));
  
  {
  /* wait for writability */
 @@ -389,18 +392,38 @@ static void test_get_addr(abts_case *tc,
  
  APR_ASSERT_SUCCESS(tc, "get local address of server socket",
 apr_socket_addr_get(&sa, APR_LOCAL, sd));
 -
  APR_ASSERT_SUCCESS(tc, "get remote address of client socket",
 apr_socket_addr_get(&ca, APR_REMOTE, cd));
 -
 -a = apr_psprintf(p, "%pI", sa);
 -b = apr_psprintf(p, "%pI", ca);
  
 +/* Test that the pool of the returned sockaddr objects exactly
 + * match the socket. */
 +ABTS_PTR_EQUAL(tc, subp, sa->pool);
 +ABTS_PTR_EQUAL(tc, subp, ca->pool);
 +
 +/* Check equivalence. */
 +a = apr_psprintf(p, "%pI fam=%d", sa, sa->family);
 +b = apr_psprintf(p, "%pI fam=%d", ca, ca->family);
  ABTS_STR_EQUAL(tc, a, b);
 +
 +/* Check pool of returned sockaddr, as above. */
 +APR_ASSERT_SUCCESS(tc, "get local address of client socket",
 +   apr_socket_addr_get(&sa, APR_LOCAL, cd));
 +APR_ASSERT_SUCCESS(tc, "get remote address of server socket",
 +   apr_socket_addr_get(&ca, APR_REMOTE, sd));
 +
 +/* Check equivalence. */
 +a = apr_psprintf(p, "%pI fam=%d", sa, sa->family);
 +b = apr_psprintf(p, "%pI fam=%d", ca, ca->family);
 +ABTS_STR_EQUAL(tc, a, b);
 +
 +ABTS_PTR_EQUAL(tc, subp, sa->pool);
 +ABTS_PTR_EQUAL(tc, subp, ca->pool);
 
  apr_socket_close(cd);
  apr_socket_close(sd);
  apr_socket_close(ld);
 +
 +apr_pool_destroy(subp);
  }
  
  static void 

RE: svn commit: r983618 - in /apr/apr/trunk: network_io/unix/sockets.c test/testsock.c

2010-08-09 Thread Plüm, Rüdiger, VF-Group
 

 -Original Message-
 From: Joe Orton 
 Sent: Montag, 9. August 2010 15:14
 To: dev@httpd.apache.org
 Subject: Re: svn commit: r983618 - in /apr/apr/trunk: 
 network_io/unix/sockets.c test/testsock.c
 
 This fixes a slow memory leak in mod_proxy FYI.  The sockaddr passed to 
 apr_socket_connect() is allocated out of worker->cp->pool.  When a new 
 backend connection is created, core_create_conn extracts the address 
 from that socket to the conn_rec and it gets duped in that pool again.

Many thanks for the heads up Joe. I guess this is the root cause for
PR49713.

Regards

Rüdiger



[PATCH 49559] Admin-supplied Diffie-Hellman parameters for DHE connections

2010-08-09 Thread Erwann ABALEA
Hello,

I wrote and posted this patch several weeks ago, this is just a
message to eventually open a discussion for its approval or rejection.

-- 
Erwann ABALEA erwann.aba...@keynectis.com
Département R&D
KEYNECTIS