Thanks Mich,
Unfortunately we have many insert queries.
Are there any other ways?
Thanks,
Ravi
On Wed, Aug 31, 2016 at 9:45 PM, Mich Talebzadeh wrote:
Try this
hive.limit.optimize.fetch.max
- Default Value: 5
- Added In: Hive 0.8.0
Maximum number of rows allowed for a smaller subset of data for simple
LIMIT, if it is a fetch query. Insert queries are not restricted by this
limit.
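To make the setting above concrete: limit optimization in Hive is controlled by a small family of session properties. A minimal sketch of setting them in a Hive session (property names are from the Hive configuration docs; the values shown are illustrative, not recommendations):

```sql
-- Enable the LIMIT optimization for simple fetch queries
SET hive.limit.optimize.enable=true;
-- Cap the number of rows sampled for such queries (illustrative value)
SET hive.limit.optimize.fetch.max=50000;
```

As noted above, this only restricts simple fetch queries with LIMIT; INSERT queries are not affected by it.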
HTH
Dr Mich Talebzadeh
Hi guys,
Vlad: good suggestion, however in my case it's a 5-second query (when it
works).
Gopal: thanks for the explanation of the effect logging can have on the
execution path. Somewhat counter-intuitive, I must say, and as you can
imagine a tad more challenging to debug, when debugging influences
Hi Community,
Many users run ad-hoc Hive queries on our platform.
Some rogue queries managed to fill up the HDFS space, causing mainstream
queries to fail.
We want to limit the data generated by these ad-hoc queries.
We are aware of the strict param, which limits the data being scanned, but it is
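One way to cap the data written by ad-hoc users, independent of any Hive setting, is an HDFS space quota on the directories those queries write into. A sketch using the standard HDFS admin commands (the directory path and quota size are placeholders for your own layout):

```shell
# Cap the ad-hoc output directory at 1 TB of raw (replicated) space
hdfs dfsadmin -setSpaceQuota 1t /user/hive/adhoc

# Inspect current usage against the quota
hdfs dfs -count -q /user/hive/adhoc

# Remove the quota later if needed
hdfs dfsadmin -clrSpaceQuota /user/hive/adhoc
```

Note that the space quota counts raw bytes after replication, so with a replication factor of 3 a 1 TB quota allows roughly 333 GB of logical data.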
No, I am not using dynamic partitioning.
On Wed, Aug 31, 2016 at 4:00 AM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:
> In hive 1.2.1 the automatic estimation of buffer size happens only if
> column count is >1000.
> You need https://issues.apache.org/jira/browse/HIVE-11807 for
One possible cause: on long-running queries your terminal session gets
disconnected and the client process terminates (making it appear as if the
query hangs).
When debug messages are on, they keep the terminal session alive, thereby
allowing your query to complete.
I'm not sure if this is your
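If a dropped terminal session is indeed the cause, running the client detached from the session avoids the problem entirely. A sketch (the script name and session name are placeholders):

```shell
# Run the Hive client so it survives the SSH session ending
nohup hive -f long_query.hql > query.log 2>&1 &

# Alternatively, run inside tmux (or screen) and reattach after reconnecting
tmux new-session -s hivequery 'hive -f long_query.hql'
```

Either approach decouples the client's lifetime from the terminal, so the query completes whether or not the connection drops.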