All of War and Peace is only 3MB.
Let people document however they want. Don't over-optimize for problems
that have never occurred.
On Fri, Mar 3, 2017 at 3:19 PM, Kunal Khatua wrote:
> It might be, in case someone begins to dump a massive design doc into the
> comment field
It might be, in case someone begins to dump a massive design doc into the
comment field for a view's JSON.
I'm also not sure how this information would be consumed. If it is through
the CLI, we either rely on the SQLLine shell to trim the output, or don't
worry about it at all. I'm assuming we'd also
+1 on John's suggestion.
On Fri, Mar 3, 2017 at 6:24 AM, John Omernik wrote:
> So your node has 32G of RAM, yet you are allowing Drill to use 36G. I would
> change your settings to 8GB of heap and 22GB of direct memory. See if
> this helps with your issues. Also, are you
It looks like you are trying to query a Hive table (backed by an HBase
table) from Drill. Can you try querying the same table from Hive itself? I
would also log in to HBase and check whether the underlying table exists or
not.
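To make the above concrete, the same table can be checked from each layer in turn. This is only a sketch of the suggested diagnosis; `mydb.mytable` and the HBase table name are placeholders, not names from this thread:

```shell
#!/bin/sh
# Sketch: verify the table outside Drill first (placeholder names).
# These commands require a node with the Hive and HBase clients installed.

# 1. Query the same table directly from Hive.
hive -e "SELECT * FROM mydb.mytable LIMIT 5;"

# 2. Check that the underlying HBase table exists.
echo "exists 'mytable'" | hbase shell
```

If step 1 fails, the problem is in the Hive/HBase layer rather than in Drill.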
On Thu, Mar 2, 2017 at 2:14 AM, Khurram Faraaz wrote:
So your node has 32G of RAM, yet you are allowing Drill to use 36G. I would
change your settings to 8GB of heap and 22GB of direct memory. See if
this helps with your issues. Also, are you using a distributed filesystem?
If so, you may want to allow even more free RAM, i.e. 8GB of heap and
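The arithmetic behind this advice can be sanity-checked with a small script. The 32G node size and the 8G/22G split come from the thread; the script itself is only an illustration (in drill-env.sh the corresponding settings would be DRILL_HEAP="8G" and DRILL_MAX_DIRECT_MEMORY="22G"):

```shell
#!/bin/sh
# Illustrative check: Drill heap + direct memory should fit in node RAM.
# With the original settings (16G heap + 20G direct = 36G on a 32G node),
# the node is oversubscribed; John's split (8G + 22G = 30G) fits.
NODE_RAM_GB=32
HEAP_GB=8
DIRECT_GB=22

TOTAL_GB=$((HEAP_GB + DIRECT_GB))
if [ "$TOTAL_GB" -le "$NODE_RAM_GB" ]; then
  echo "fits: ${TOTAL_GB}G of ${NODE_RAM_GB}G"
else
  echo "oversubscribed: ${TOTAL_GB}G of ${NODE_RAM_GB}G"
fi
```

Leaving some RAM unassigned also matters, since the OS and filesystem cache need headroom.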
Hi,
Please find our configuration details:
Number of Nodes : 4
RAM/Node : 32GB
Core/Node : 8
DRILL_MAX_DIRECT_MEMORY="20G"
DRILL_HEAP="16G"
And all other variables are set to default.
Since we have tried some of the settings suggested above but are still
facing this issue frequently, kindly
Can you help me understand what "local to the cluster" means in the context
of a 5-node cluster? In the plan, the files are all file://. Are the files
replicated to each node? Is it a common shared filesystem? Do all 5 nodes
have equal access to the 10 files? I wonder if using a local FS in a
I did not change the default values used by Drill.
Are you talking about changing planner.memory_limit
and planner.memory.max_query_memory_per_node?
If there is any other debugging I can do, please suggest.
Regards
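For reference, the two options named above can be inspected and changed from SQLLine without restarting Drill. This is a hedged sketch: the sqlline path and ZooKeeper connect string below are assumptions for a typical install, and the 4GB value is only an example, not a recommendation from this thread:

```shell
#!/bin/sh
# Sketch: inspect and raise per-query memory from SQLLine (requires a
# running Drill cluster; path and zk string are install-specific).
/opt/drill/bin/sqlline -u "jdbc:drill:zk=zk1:2181" <<'EOF'
-- Show current planner memory options.
SELECT * FROM sys.options WHERE name LIKE 'planner.memory%';
-- Example: raise the per-node, per-query memory cap to 4GB (bytes).
ALTER SYSTEM SET `planner.memory.max_query_memory_per_node` = 4294967296;
EOF
```

ALTER SYSTEM persists the setting cluster-wide; ALTER SESSION would scope it to the current connection instead.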
On Fri, Mar 3, 2017 at 5:14 PM, Nitin Pawar wrote:
>
How much memory have you set for the planner?
On Fri, Mar 3, 2017 at 5:06 PM, PROJJWAL SAHA wrote:
> Hello all,
>
> I am querying select * from dfs.xxx where yyy (filter condition)
>
> I am using the dfs storage plugin that comes out of the box with Drill on a
> 1GB file, local to