When running a set of (mostly window function) queries concurrently on a single drillbit with 8GB of max direct memory, we are seeing a continuous increase in direct memory allocation.
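For reference, the 8GB cap is the drillbit's direct memory limit, which ends up as the JVM's -XX:MaxDirectMemorySize. In our setup it is set in conf/drill-env.sh with roughly the stock settings (the value below is just how our cluster happens to be configured):

  # conf/drill-env.sh -- passed to the drillbit JVM as -XX:MaxDirectMemorySize
  DRILL_MAX_DIRECT_MEMORY="8G"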
We repeat the following steps multiple times:
- we launch an "iteration" of tests that runs all the queries in a random order, 10 queries at a time
- after the iteration finishes, we wait for a couple of minutes to give Drill time to release the memory held by the finishing fragments

Using Drill's memory logger ("drill.allocator") we were able to take snapshots of how memory is used internally by Netty (the logback snippet we use to enable it is at the bottom of this mail). We focused only on the number of allocated chunks: if we take this number and multiply it by 16MB (Netty's chunk size), we get approximately the same value reported by Drill's direct memory allocation.

Here is a graph that shows the evolution of the number of allocated chunks over a 500-iteration run (I'm working on improving the plots): http://bit.ly/1JL6Kp3

In this specific case, Drill was allocating ~2GB of direct memory after the first iteration, and this number kept rising after each iteration, up to ~6GB. We suspect this is what caused one of our previous runs to crash the JVM.

If we only look at the log lines between iterations (when Drill's reported memory usage is below 10MB), every allocated chunk is at most 2% used. At some point we end up with 288 nearly empty chunks (288 x 16MB ≈ 4.5GB of direct memory backing almost nothing), yet the next iteration causes even more chunks to be allocated! Is this expected?

PS: I am running more tests and will update this thread with more information.

--
Abdelhakim Deneche
Software Engineer
<http://www.mapr.com/>
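PPS: for anyone who wants to reproduce the allocator snapshots, this is roughly the logger we enable in conf/logback.xml. It is a sketch based on our setup: the appender name ("FILE" here) should match whatever your logback.xml already defines, and the level can be lowered or raised depending on how verbose you want the dumps:

  <logger name="drill.allocator" additivity="false">
    <level value="trace" />
    <appender-ref ref="FILE" />
  </logger>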