Tim Armstrong has uploaded a new patch set (#15).

Change subject: IMPALA-3203: Part 2: per-core free lists in buffer pool
......................................................................
IMPALA-3203: Part 2: per-core free lists in buffer pool

Add per-core lists of clean pages and free pages to enable allocation
of buffers without contention on shared locks in the common case. This
is implemented with an additional layer of abstraction in
"BufferAllocator", which tracks all memory (free buffers and clean
pages) that is not in use but has not been released to the OS. The old
BufferAllocator is renamed to SystemAllocator.

See "Spilled Page Mgmt" and "MMap Allocator & Scalable Free Lists" in
https://goo.gl/0zuy97 for a high-level summary of how this fits into
the buffer pool design.

The guts of the new code is BufferAllocator::AllocateInternal(), which
progresses through several strategies for allocating memory.

Misc changes:
* Enforce an upper limit on buffer size to reduce the number of free
  lists required.
* Add additional allocation counters.
* Slightly reorganise the MemTracker GC functions to use lambdas and
  clarify the order in which they should be called. Also add a target
  memory value so that they don't need to free *all* of the memory in
  the system.
* Fix an accounting bug in the buffer pool where it didn't evict dirty
  pages before reclaiming a clean page.

Performance:
We will need to validate the performance of the system under high
query concurrency before this is used as part of query execution. The
benchmark in Part 1 provided some evidence that this approach of a
list per core should scale well to many cores.

Testing:
Added buffer-allocator-test to test the free list resizing algorithm
directly. Added a test to buffer-pool-test to exercise the various new
memory reclamation code paths that are now possible. Also ran
buffer-pool-test under two different faked-out NUMA setups: one with
no NUMA and another with three NUMA nodes. buffer-pool-test,
suballocator-test, and buffered-tuple-stream-v2-test provide some
further basic coverage.
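For reviewers unfamiliar with the design: the fallback chain that AllocateInternal() walks through can be sketched roughly as below. This is an illustrative toy, not Impala's actual code; the class name BufferAllocatorSketch, the use of plain ids instead of real buffers, and the exact strategy ordering are assumptions based only on the description above.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy sketch of a per-core allocation fallback chain:
// 1) reuse a buffer from the current core's free list (no contention),
// 2) evict a clean page on the same core and reuse its buffer,
// 3) steal from other cores' free lists,
// 4) fall back to a fresh system allocation.
struct CoreArena {
  std::vector<int64_t> free_buffers;  // stand-in for per-size free lists
  std::vector<int64_t> clean_pages;   // stand-in for the clean-page list
};

class BufferAllocatorSketch {
 public:
  explicit BufferAllocatorSketch(int num_cores) : arenas_(num_cores) {}

  // Returns a buffer id, trying the cheapest source first.
  int64_t Allocate(int core) {
    CoreArena& local = arenas_[core];
    if (!local.free_buffers.empty()) {           // strategy 1
      int64_t b = local.free_buffers.back();
      local.free_buffers.pop_back();
      return b;
    }
    if (!local.clean_pages.empty()) {            // strategy 2
      int64_t b = local.clean_pages.back();
      local.clean_pages.pop_back();
      return b;
    }
    for (CoreArena& other : arenas_) {           // strategy 3
      if (!other.free_buffers.empty()) {
        int64_t b = other.free_buffers.back();
        other.free_buffers.pop_back();
        return b;
      }
    }
    return next_system_buffer_++;                // strategy 4
  }

  void Free(int core, int64_t buffer) {
    arenas_[core].free_buffers.push_back(buffer);
  }

  void AddCleanPage(int core, int64_t buffer) {
    arenas_[core].clean_pages.push_back(buffer);
  }

 private:
  std::vector<CoreArena> arenas_;
  int64_t next_system_buffer_ = 1000;  // ids >= 1000 mean "from the system"
};
```

The real allocator additionally has to handle per-size free lists, NUMA-aware core grouping, and memory accounting; the sketch only shows the ordering of the strategies.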
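The "GC functions as lambdas with a target memory value" change can likewise be sketched as follows. Again, this is a hypothetical illustration of the idea, not the MemTracker API: GcTrackerSketch, AddGcFunction, and GcMemory are invented names, and each registered lambda is assumed to return how many bytes it actually freed.

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Toy sketch: GC functions are lambdas invoked in registration order,
// each asked to free only enough memory to reach a target, instead of
// unconditionally freeing everything they hold.
class GcTrackerSketch {
 public:
  // Each GC function receives the number of bytes it should try to free
  // and returns the number of bytes it actually freed.
  void AddGcFunction(std::function<int64_t(int64_t)> fn) {
    gc_fns_.push_back(std::move(fn));
  }

  // Runs GC functions in order, stopping once consumption <= target_bytes.
  // Returns the resulting consumption.
  int64_t GcMemory(int64_t target_bytes) {
    for (auto& fn : gc_fns_) {
      if (consumption_ <= target_bytes) break;
      consumption_ -= fn(consumption_ - target_bytes);
    }
    return consumption_;
  }

  void Consume(int64_t bytes) { consumption_ += bytes; }
  int64_t consumption() const { return consumption_; }

 private:
  std::vector<std::function<int64_t(int64_t)>> gc_fns_;
  int64_t consumption_ = 0;
};
```

The point of the target value is visible in the loop: a GC function is only asked for the shortfall, so caches and free lists are trimmed rather than emptied.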
Future system and unit tests will validate this further before it is
used for query execution (see IMPALA-3200). Ran an initial version of
IMPALA-4114, the ported BufferedBlockMgr tests, against this. The
randomised stress test revealed some accounting bugs, which are fixed.
I'll post those tests as a follow-on patch.

Change-Id: I612bd1cd0f0e87f7d8186e5bedd53a22f2d80832
---
M be/src/benchmarks/free-lists-benchmark.cc
M be/src/common/init.cc
M be/src/runtime/bufferpool/CMakeLists.txt
A be/src/runtime/bufferpool/buffer-allocator-test.cc
M be/src/runtime/bufferpool/buffer-allocator.cc
M be/src/runtime/bufferpool/buffer-allocator.h
M be/src/runtime/bufferpool/buffer-pool-counters.h
M be/src/runtime/bufferpool/buffer-pool-internal.h
M be/src/runtime/bufferpool/buffer-pool-test.cc
M be/src/runtime/bufferpool/buffer-pool.cc
M be/src/runtime/bufferpool/buffer-pool.h
M be/src/runtime/bufferpool/free-list-test.cc
M be/src/runtime/bufferpool/free-list.h
M be/src/runtime/bufferpool/suballocator-test.cc
M be/src/runtime/bufferpool/suballocator.h
A be/src/runtime/bufferpool/system-allocator.cc
A be/src/runtime/bufferpool/system-allocator.h
M be/src/runtime/disk-io-mgr.cc
M be/src/runtime/disk-io-mgr.h
M be/src/runtime/exec-env.cc
M be/src/runtime/mem-tracker.cc
M be/src/runtime/mem-tracker.h
A be/src/testutil/cpu-util.h
A be/src/testutil/rand-util.h
M be/src/util/cpu-info.cc
M be/src/util/cpu-info.h
26 files changed, 1,519 insertions(+), 314 deletions(-)

git pull ssh://gerrit.cloudera.org:29418/Impala-ASF refs/changes/14/6414/15

--
To view, visit http://gerrit.cloudera.org:8080/6414
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: newpatchset
Gerrit-Change-Id: I612bd1cd0f0e87f7d8186e5bedd53a22f2d80832
Gerrit-PatchSet: 15
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: Tim Armstrong <tarmstr...@cloudera.com>
Gerrit-Reviewer: Dan Hecht <dhe...@cloudera.com>
Gerrit-Reviewer: Taras Bobrovytsky <tbobrovyt...@cloudera.com>
Gerrit-Reviewer: Tim Armstrong <tarmstr...@cloudera.com>