[
https://issues.apache.org/jira/browse/HIVE-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12697999#action_12697999
]
Zheng Shao commented on HIVE-372:
---------------------------------
The patch is mainly for fixing the long-query problem - e.g. the 80-way union query and
this deeply nested UDF query.
The improved error messages are kind of a side effect - overall we are seeing more
error messages (just see all the changed error messages in the diff).
We could also do a more thorough fix of the grammar, but that would potentially
take much longer, and it may not be a high priority compared with other to-do
items.
> Nested UDFs cause _very_ high memory usage when processing query
> ----------------------------------------------------------------
>
> Key: HIVE-372
> URL: https://issues.apache.org/jira/browse/HIVE-372
> Project: Hadoop Hive
> Issue Type: Bug
> Components: Query Processor
> Environment: Fedora Linux, 10x Amazon EC2 (Large Instance w/ 8GB Ram)
> Reporter: Steve Corona
> Attachments: HIVE-372.1.patch, HIVE-372.2.patch
>
>
> When nesting UDFs, the Hive Query processor takes a large amount of
> time+memory to process the query. For example, I ran something along the
> lines of:
> select trim( trim( trim(trim( trim( trim( trim( trim( trim(column)))))))))
> from test_table;
> This query needs 10GB+ of memory to process before it'll launch the job. The
> amount of memory increases exponentially with each nested UDF.
> Obviously, I am using trim() here as a simple example that triggers the
> same problem. In my actual use case I had a bunch of nested
> regexp_replace calls.
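For illustration, a query along the lines the reporter describes - nested regexp_replace calls rather than trim - would hit the same blow-up before the job is launched. The column name and patterns below are hypothetical; only test_table comes from the original example:

    select regexp_replace(
             regexp_replace(
               regexp_replace(column, '\\s+', ' '),
               '[^a-zA-Z0-9 ]', ''),
             '^ +| +$', '')
    from test_table;

Each additional level of nesting appears to multiply the work done while processing the expression, which matches the exponential memory growth described above.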