Re: [squid-dev] [PATCH] Fix HttpRequest object leaks in Squid 4.0.x

2016-03-09 Thread Amos Jeffries
On 10/03/2016 5:41 p.m., Nathan Hoad wrote:
> Hello,
> 
> The attached patch fixes a rather profuse memory leak in the Squid
> 4.0.x series where under certain conditions, HttpRequest objects would
> get lost. I can provide more information here if requested,
> specifically a Valgrind trace and configuration to reproduce this.
> 
> I suggest that a better long-term solution would be to make the
> members private and add setter methods that ensure the correct
> unlocking occurs. Alternatively, a smart pointer such as
> Http::HttpRequestPointer could be used. If people have any
> recommendations here, let me know and I can work towards this.

We should always store HttpMsg hierarchy objects using their Pointers,
regardless of whether setters are used.

These HttpMsg/HttpRequest/HttpReply classes are in a long, drawn-out
transition right now, which is leading to this leak amongst other issues.
Any help you can provide converting object members to Pointer is very welcome.
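
For illustration, a rough sketch of what such a conversion looks like for a
single member (the class and member names here are illustrative only, not a
patch against the real sources; RefCount<> is the reference-counting smart
pointer Squid already ships, and the Pointer typedefs are built on it):

    class SomeRequestHolder
    {
    public:
        // before: HttpRequest *request;   // manual HTTPMSGLOCK/HTTPMSGUNLOCK
        RefCount<HttpRequest> request;     // after: reference counting is automatic
    };

    // Assignment no longer needs the explicit unlock/lock pair:
    //     holder.request = newRequest;
    // and the previous object is released when its last Pointer goes away.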

Thank you. I have applied your fix for now to get trunk stability back.

> 
> Thank you,
> 
> Nathan.
> 
> P.S. I have two more patches that fix less severe memory leaks that
> are quite distinct from one another. Should I submit them as a single
> patch, or multiple patches? At least one of the patches will involve
> some discussion on how to improve it. They are not large.
> 

If they are distinct, then separate patches are better than combined. No
reason to hold up a leak fix because something else needs discussion.

Amos

_______________________________________________
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] Fix HttpRequest object leaks in Squid 4.0.x

2016-03-09 Thread Nathan Hoad
Hello,

The attached patch fixes a rather profuse memory leak in the Squid
4.0.x series where under certain conditions, HttpRequest objects would
get lost. I can provide more information here if requested,
specifically a Valgrind trace and configuration to reproduce this.

I suggest that a better long-term solution would be to make the
members private and add setter methods that ensure the correct
unlocking occurs. Alternatively, a smart pointer such as
Http::HttpRequestPointer could be used. If people have any
recommendations here, let me know and I can work towards this.
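
To make the setter idea concrete, a minimal sketch (setRequest() is a
hypothetical method, not existing code; it reuses the HTTPMSGLOCK and
HTTPMSGUNLOCK macros that the attached patch also uses):

    // With the member private, every assignment funnels through here, so the
    // previously stored object is always released before it can be lost.
    void
    AccessLogEntry::setRequest(HttpRequest *newRequest)
    {
        HTTPMSGUNLOCK(request);   // unlocks and clears the old request, if any
        request = newRequest;
        if (request)
            HTTPMSGLOCK(request); // take a reference to the new one
    }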

Thank you,

Nathan.

P.S. I have two more patches that fix less severe memory leaks that
are quite distinct from one another. Should I submit them as a single
patch, or multiple patches? At least one of the patches will involve
some discussion on how to improve it. They are not large.
Ensure that any previous HttpRequest objects on AccessLogEntry are unlocked, so that they do not leak memory.

With this change, all current assignments to al->request and
al->adapted_request are guarded by either a nullptr check, or by
HTTPMSGUNLOCK()'ing the previous request object first.

This is submitted on behalf of Bloomberg L.P.

=== modified file 'src/client_side.cc'
--- src/client_side.cc	2016-02-13 07:51:20 +
+++ src/client_side.cc	2016-03-10 03:59:42 +
@@ -446,6 +446,7 @@
     }

     if (request) {
+        HTTPMSGUNLOCK(al->adapted_request);
         al->adapted_request = request;
         HTTPMSGLOCK(al->adapted_request);
     }
@@ -2820,6 +2821,7 @@
         acl_checklist->al->tcpClient = clientConnection;
         acl_checklist->al->cache.port = port;
         acl_checklist->al->cache.caddr = log_addr;
+        HTTPMSGUNLOCK(acl_checklist->al->request);
         acl_checklist->al->request = request;
         HTTPMSGLOCK(acl_checklist->al->request);
         acl_checklist->nonBlockingCheck(httpsSslBumpAccessCheckDone, this);

_______________________________________________
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[squid-dev] [PATCH] shared_memory_locking

2016-03-09 Thread Alex Rousskov
Hello,

The attached patch adds a "shared_memory_locking" configuration
directive to control mlock(2).

Locking shared memory at startup avoids SIGBUS crashes when the kernel runs
out of RAM at runtime. This was discussed in the "[RFC] Fix shared memory
initialization, cleanup. Ensure its usability." thread, archived at
http://lists.squid-cache.org/pipermail/squid-dev/2015-December/004112.html
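
For readers unfamiliar with the failure mode, here is a small standalone
sketch of the underlying mechanism (plain POSIX calls, not Squid's actual
shared-memory code): mmap(2) of a shared segment usually succeeds even when
RAM is short, while mlock(2) forces the pages to be resident up front and
fails loudly at startup instead.

    // Illustration only: map a shared segment, then lock it so that a later
    // shortage of RAM cannot turn into a SIGBUS while touching the pages.
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        const char *name = "/demo-shm";   // illustrative segment name
        const size_t size = 1 << 20;      // 1 MB

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, size) != 0)
            return perror("shm_open/ftruncate"), 1;

        void *mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
            return perror("mmap"), 1;     // rarely fails on "optimistic" kernels

        // The step the new directive controls: with locking on, an
        // under-provisioned box fails here, at startup (this needs privileges
        // or a high RLIMIT_MEMLOCK), instead of dying with SIGBUS later.
        if (mlock(mem, size) != 0)
            return perror("mlock"), 1;

        munmap(mem, size);
        close(fd);
        shm_unlink(name);
        return 0;
    }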

Why not enable locking by default? Unfortunately, locking requires
privileges and/or much-higher-than-default RLIMIT_MEMLOCK limits. Thus,
requiring locked memory by default is likely to cause too many
complaints, especially since Squid has not required that before. Even
"make check" will not work in most environments (without making our unit
tests even less representative)! Thus, the default is off, at least for now.

As we gain more experience, we may enable locking by default while
making default locking failures non-fatal and warning about significant
[accumulated] locking delays. The proposed code was written to make
those possible future changes easier, but I am not volunteering to
implement them at this time.


Thank you,

Alex.
Added shared_memory_locking configuration directive to control mlock(2).

Locking shared memory at startup avoids SIGBUS crashes when the kernel runs
out of RAM at runtime. Why not enable it by default? Unfortunately,
locking requires privileges and/or much-higher-than-default
RLIMIT_MEMLOCK limits. Thus, requiring locked memory by default is
likely to cause too many complaints, especially since Squid has not
required that before. The default is off, at least for now.

As we gain more experience, we may try to enable locking by default
while making default locking failures non-fatal and warning about
significant [accumulated] locking delays.

=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h	2016-01-15 06:47:59 +
+++ src/SquidConfig.h	2016-03-01 05:22:45 +
@@ -55,40 +55,41 @@ public:
     int n_configured;
     /// number of disk processes required to support all cache_dirs
     int n_strands;
 };
 #define INDEXSD(i) (Config.cacheSwap.swapDirs[i].getRaw())
 }

 /// the representation of the configuration. POD.
 class SquidConfig
 {
 public:
     struct {
         /* These should be for the Store::Root instance.
          * this needs pluggable parsing to be done smoothly.
          */
         int highWaterMark;
         int lowWaterMark;
     } Swap;

     YesNoNone memShared; ///< whether the memory cache is shared among workers
+    YesNoNone shmLocking; ///< shared_memory_locking
     size_t memMaxSize;

     struct {
         int64_t min;
         int pct;
         int64_t max;
     } quickAbort;
     int64_t readAheadGap;
     RemovalPolicySettings *replPolicy;
     RemovalPolicySettings *memPolicy;
 #if USE_HTTP_VIOLATIONS
     time_t negativeTtl;
 #endif
     time_t maxStale;
     time_t negativeDnsTtl;
     time_t positiveDnsTtl;
     time_t shutdownLifetime;
     time_t backgroundPingRate;

     struct {

=== modified file 'src/cf.data.pre'
--- src/cf.data.pre	2016-02-19 17:19:25 +
+++ src/cf.data.pre	2016-03-09 21:34:21 +
@@ -402,40 +402,73 @@ LOC: Config.cpuAffinityMap
 DEFAULT: none
 DEFAULT_DOC: Let operating system decide.
 DOC_START
 	Usage: cpu_affinity_map process_numbers=P1,P2,... cores=C1,C2,...
 
 	Sets 1:1 mapping between Squid processes and CPU cores. For example,
 
 	cpu_affinity_map process_numbers=1,2,3,4 cores=1,3,5,7
 
 	affects processes 1 through 4 only and places them on the first
 	four even cores, starting with core #1.
 
 	CPU cores are numbered starting from 1. Requires support for
 	sched_getaffinity(2) and sched_setaffinity(2) system calls.
 
 	Multiple cpu_affinity_map options are merged.
 
 	See also: workers
 DOC_END
 
+NAME: shared_memory_locking
+TYPE: YesNoNone
+COMMENT: on|off
+LOC: Config.shmLocking
+DEFAULT: off
+DOC_START
+	Whether to ensure that all required shared memory is available by
+	"locking" that shared memory into RAM when Squid starts. The
+	alternative is faster startup time followed by slightly slower
+	performance and, if not enough RAM is actually available during
+	runtime, mysterious crashes.
+
+	SMP Squid uses many shared memory segments. These segments are
+	brought into Squid memory space using an mmap(2) system call. During
+	Squid startup, the mmap() call often succeeds regardless of whether
+	the system has enough RAM. In general, Squid cannot tell whether the
+	kernel applies this "optimistic" memory allocation policy (but
+	popular modern kernels usually use it).
+
+	Later, if Squid attempts to actually access the mapped memory
+	regions beyond what the kernel is willing to allocate, the
+	"optimistic" kernel simply kills Squid kid with a SIGBUS signal.
+	Some of the memory limits enforced by the kernel are currently
+	poorly understood: We do not know how to detect and check them. This
+	option ensures that the mapped memory will be available. 
+
+	This option may have