Hello,
The attached patch adds a "shared_memory_locking" configuration
directive to control mlock(2).
Locking shared memory at startup avoids SIGBUS crashes when the kernel
runs out of RAM at runtime. This was discussed in the "[RFC] Fix
shared memory initialization, cleanup. Ensure its usability." thread,
archived at
http://lists.squid-cache.org/pipermail/squid-dev/2015-December/004112.html
Why not enable locking by default? Unfortunately, locking requires
privileges and/or much-higher-than-default RLIMIT_MEMLOCK limits.
Requiring locked memory by default is therefore likely to cause too
many complaints, especially since Squid has not required it before.
Even "make check" would not work in most environments (without making
our unit tests even less representative)! Thus, the default is off, at
least for now.
As we gain more experience, we may enable locking by default while
making default locking failures non-fatal and warning about significant
[accumulated] locking delays. The proposed code was written to make
those possible future changes easier, but I am not volunteering to
implement them at this time.
Thank you,
Alex.
Added shared_memory_locking configuration directive to control mlock(2).
Locking shared memory at startup avoids SIGBUS crashes when the kernel
runs out of RAM at runtime. Why not enable it by default? Unfortunately,
locking requires privileges and/or much-higher-than-default
RLIMIT_MEMLOCK limits. Thus, requiring locked memory by default is
likely to cause too many complaints, especially since Squid has not
required that before. The default is off, at least for now.
As we gain more experience, we may try to enable locking by default
while making default locking failures non-fatal and warning about
significant [accumulated] locking delays.
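With the patch applied, enabling the directive would be a one-line
squid.conf change (per the cf.data.pre documentation below):

```
# squid.conf: require all shared memory segments to be locked into RAM
# at startup (default is off; needs privileges such as CAP_IPC_LOCK or
# a raised RLIMIT_MEMLOCK)
shared_memory_locking on
```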
=== modified file 'src/SquidConfig.h'
--- src/SquidConfig.h 2016-01-15 06:47:59 +
+++ src/SquidConfig.h 2016-03-01 05:22:45 +
@@ -55,40 +55,41 @@ public:
     int n_configured;
     /// number of disk processes required to support all cache_dirs
     int n_strands;
 };
 #define INDEXSD(i) (Config.cacheSwap.swapDirs[i].getRaw())
 }
 /// the representation of the configuration. POD.
 class SquidConfig
 {
 public:
     struct {
         /* These should be for the Store::Root instance.
          * this needs pluggable parsing to be done smoothly.
          */
         int highWaterMark;
         int lowWaterMark;
     } Swap;
     YesNoNone memShared; ///< whether the memory cache is shared among workers
+    YesNoNone shmLocking; ///< shared_memory_locking
     size_t memMaxSize;
     struct {
         int64_t min;
         int pct;
         int64_t max;
     } quickAbort;
     int64_t readAheadGap;
     RemovalPolicySettings *replPolicy;
     RemovalPolicySettings *memPolicy;
 #if USE_HTTP_VIOLATIONS
     time_t negativeTtl;
 #endif
     time_t maxStale;
     time_t negativeDnsTtl;
     time_t positiveDnsTtl;
     time_t shutdownLifetime;
     time_t backgroundPingRate;
     struct {
=== modified file 'src/cf.data.pre'
--- src/cf.data.pre 2016-02-19 17:19:25 +
+++ src/cf.data.pre 2016-03-09 21:34:21 +
@@ -402,40 +402,73 @@ LOC: Config.cpuAffinityMap
 DEFAULT: none
 DEFAULT_DOC: Let operating system decide.
 DOC_START
 	Usage: cpu_affinity_map process_numbers=P1,P2,... cores=C1,C2,...
 	Sets 1:1 mapping between Squid processes and CPU cores. For example,
 	    cpu_affinity_map process_numbers=1,2,3,4 cores=1,3,5,7
 	affects processes 1 through 4 only and places them on the first
 	four even cores, starting with core #1.
 	CPU cores are numbered starting from 1. Requires support for
 	sched_getaffinity(2) and sched_setaffinity(2) system calls.
 	Multiple cpu_affinity_map options are merged.
 	See also: workers
 DOC_END
+NAME: shared_memory_locking
+TYPE: YesNoNone
+COMMENT: on|off
+LOC: Config.shmLocking
+DEFAULT: off
+DOC_START
+ Whether to ensure that all required shared memory is available by
+ "locking" that shared memory into RAM when Squid starts. The
+ alternative is faster startup time followed by slightly slower
+ performance and, if not enough RAM is actually available during
+ runtime, mysterious crashes.
+
+ SMP Squid uses many shared memory segments. These segments are
+ brought into Squid memory space using an mmap(2) system call. During
+ Squid startup, the mmap() call often succeeds regardless of whether
+ the system has enough RAM. In general, Squid cannot tell whether the
+ kernel applies this "optimistic" memory allocation policy (but
+ popular modern kernels usually use it).
+
+ Later, if Squid attempts to actually access the mapped memory
+ regions beyond what the kernel is willing to allocate, the
+ "optimistic" kernel simply kills Squid kid with a SIGBUS signal.
+ Some of the memory limits enforced by the kernel are currently
+ poorly understood: We do not know how to detect and check them. This
+ option ensures that the mapped memory will be available.
+
+ This option may have