[
https://issues.apache.org/jira/browse/HIVE-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12770135#action_12770135
]
Namit Jain commented on HIVE-900:
---------------------------------
Instead of relying on the mapper to copy each file to the distributed cache,
can we rely on the Hive client (ExecDriver) to do that?
From the work, the client knows the tasks that need to be executed on the map
side. Before submitting the job, it can execute that work.
ExecMapper would then need to change so that it copies from only that portion
of the work instead of executing the whole work.
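A rough sketch of what this could look like, using Hadoop's DistributedCache
from the client side. The class and the smallTableFiles argument here are only
illustrative; the real list of files would come from the map-side portion of
the work that ExecDriver already holds:

  import java.net.URI;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.JobConf;

  public class MapJoinCacheSketch {

    // Client side (ExecDriver): ship each small-table file through the
    // DistributedCache before the job is submitted, so each file is copied
    // to a node once instead of being read from the same HDFS blocks by
    // every mapper at startup.
    public static void addSmallTableFiles(JobConf job, URI[] smallTableFiles) {
      for (URI file : smallTableFiles) {
        DistributedCache.addCacheFile(file, job);
      }
    }

    // Mapper side (ExecMapper): read the small table from the local copies
    // that the DistributedCache has already placed on the node, instead of
    // executing the whole map-side work.
    public static Path[] localSmallTableFiles(JobConf job) throws Exception {
      return DistributedCache.getLocalCacheFiles(job);
    }
  }

The addCacheFile calls would have to happen in ExecDriver before the job is
submitted (before JobClient.submitJob), since the cache contents are fixed at
submission time.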
> Map-side join failed if there are large number of mappers
> ---------------------------------------------------------
>
> Key: HIVE-900
> URL: https://issues.apache.org/jira/browse/HIVE-900
> Project: Hadoop Hive
> Issue Type: Improvement
> Reporter: Ning Zhang
> Assignee: Ning Zhang
>
> Map-side join is efficient when joining a huge table with a small table: each
> mapper reads the small table into main memory and does the join locally.
> However, if too many mappers are generated for the map join, a large number
> of mappers will simultaneously send requests to read the same block of the
> small table. Hadoop currently has an upper limit on the number of requests
> for the same block (250?). If that limit is reached, a BlockMissingException
> is thrown, which causes a lot of mappers to be killed. Retrying does not
> solve the problem but worsens it.