I tried both of the following with STS (the Spark Thrift Server), but neither works for me:

Starting STS with --hiveconf hive.limit.optimize.fetch.max=50

and

Setting common.max_count in Zeppelin

Without such a limit, a query that returns a huge number of rows can cause
the driver to OOM and make STS unusable. Any workarounds or thoughts?
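
For context, I start STS roughly like this (the install path is a generic placeholder, not my exact command):

  $SPARK_HOME/sbin/start-thriftserver.sh \
    --hiveconf hive.limit.optimize.fetch.max=50

Zeppelin then connects to it over JDBC (jdbc:hive2://<host>:10000 by default).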


On Tue, Aug 2, 2016 at 7:29 AM Mich Talebzadeh <mich.talebza...@gmail.com>
wrote:

> I don't think it really works, and the setting is vague. Is it rows, blocks, or network?
>
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 2 August 2016 at 12:09, Chanh Le <giaosu...@gmail.com> wrote:
>
>> Hi Ayan,
>> You mean common.max_count = 1000, the maximum number of SQL result rows to
>> display (to prevent browser overload)? It is a common property for all
>> connections.
>>
>>
>>
>>
>> It is already set by default in Zeppelin, but I think it doesn't work with Hive.
>>
>>
>> DOC: http://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/interpreter/jdbc.html
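>>
>> For reference, in Zeppelin it is just a property on the jdbc interpreter,
>> roughly like this (values are examples only):
>>
>>   common.max_count = 1000
>>   default.driver   = org.apache.hive.jdbc.HiveDriver
>>   default.url      = jdbc:hive2://<sts-host>:10000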
>>
>>
>> On Aug 2, 2016, at 6:03 PM, ayan guha <guha.a...@gmail.com> wrote:
>>
>> Zeppelin already has a param for jdbc
>> On 2 Aug 2016 19:50, "Mich Talebzadeh" <mich.talebza...@gmail.com> wrote:
>>
>>> OK, I have already set mine up:
>>>
>>>   <property>
>>>     <name>hive.limit.optimize.fetch.max</name>
>>>     <value>50000</value>
>>>     <description>
>>>       Maximum number of rows allowed for a smaller subset of data for
>>>       simple LIMIT, if it is a fetch query. Insert queries are not
>>>       restricted by this limit.
>>>     </description>
>>>   </property>
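>>>
>>> If editing hive-site.xml is not convenient, the same conf can usually also be
>>> set per session, e.g. from beeline or a Zeppelin paragraph (a sketch; whether
>>> STS honours this particular Hive setting is worth verifying):
>>>
>>>   SET hive.limit.optimize.fetch.max=50000;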
>>>
>>> I am surprised that yours was missing. What did you set it to?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> On 2 August 2016 at 10:18, Chanh Le <giaosu...@gmail.com> wrote:
>>>
>>>> I tried and it works perfectly.
>>>>
>>>> Regards,
>>>> Chanh
>>>>
>>>>
>>>> On Aug 2, 2016, at 3:33 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
>>>> wrote:
>>>>
>>>> OK
>>>>
>>>> Try that
>>>>
>>>> Another, more tedious, way is to create views in Hive on top of the tables
>>>> and put a LIMIT in those views.
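>>>>
>>>> For example (hypothetical names; pick whatever cap suits you):
>>>>
>>>>   CREATE VIEW tableA_limited AS
>>>>   SELECT * FROM tableA LIMIT 50000;
>>>>
>>>> Then point users at the view instead of the base table.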
>>>>
>>>> But try that parameter first and see whether it does anything.
>>>>
>>>> HTH
>>>>
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> On 2 August 2016 at 09:13, Chanh Le <giaosu...@gmail.com> wrote:
>>>>
>>>>> Hi Mich,
>>>>> I use the Spark Thrift Server; basically it acts like Hive.
>>>>>
>>>>> I see that there is a property in Hive:
>>>>>
>>>>> hive.limit.optimize.fetch.max
>>>>>
>>>>>    - Default Value: 50000
>>>>>    - Added In: Hive 0.8.0
>>>>>
>>>>> Maximum number of rows allowed for a smaller subset of data for simple
>>>>> LIMIT, if it is a fetch query. Insert queries are not restricted by this
>>>>> limit.
>>>>>
>>>>>
>>>>> Is that related to the problem?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Aug 2, 2016, at 2:55 PM, Mich Talebzadeh <mich.talebza...@gmail.com>
>>>>> wrote:
>>>>>
>>>>> This is a classic problem on any RDBMS.
>>>>>
>>>>> Set a limit on the number of rows returned, say a maximum of 50K rows,
>>>>> through JDBC.
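>>>>>
>>>>> On the client side, plain JDBC can also ask the driver to cap the result,
>>>>> e.g. Statement.setMaxRows. A sketch (host, table and credentials are made up,
>>>>> and whether the Hive/Spark JDBC driver fully honours setMaxRows is worth
>>>>> verifying):
>>>>>
>>>>>   import java.sql.*;
>>>>>
>>>>>   public class CappedQuery {
>>>>>     public static void main(String[] args) throws Exception {
>>>>>       // The HiveServer2 JDBC driver must be on the classpath.
>>>>>       Class.forName("org.apache.hive.jdbc.HiveDriver");
>>>>>       try (Connection conn = DriverManager.getConnection(
>>>>>                "jdbc:hive2://sts-host:10000/default", "user", "");
>>>>>            Statement st = conn.createStatement()) {
>>>>>         st.setMaxRows(50000); // ask the driver for at most 50K rows
>>>>>         try (ResultSet rs = st.executeQuery("SELECT * FROM tableA")) {
>>>>>           while (rs.next()) {
>>>>>             // process each row here
>>>>>           }
>>>>>         }
>>>>>       }
>>>>>     }
>>>>>   }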
>>>>>
>>>>> What is your JDBC connection going to? Meaning which RDBMS if any?
>>>>>
>>>>> HTH
>>>>>
>>>>> Dr Mich Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> On 2 August 2016 at 08:41, Chanh Le <giaosu...@gmail.com> wrote:
>>>>>
>>>>>> Hi everyone,
>>>>>> I set up STS and use Zeppelin to query data through a JDBC connection.
>>>>>> A problem we are facing is that users often forget to put a LIMIT in the
>>>>>> query, which hangs the cluster:
>>>>>>
>>>>>> SELECT * FROM tableA;
>>>>>>
>>>>>> Is there any way to configure a default limit?
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Chanh
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
