[ https://issues.apache.org/jira/browse/HIVE-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546072#comment-14546072 ]
Jason Dere commented on HIVE-10711:
-----------------------------------

In your scenario, the user basically has bad settings - the user has told Hive to use more memory than it has available. Since the point of this fix is to prevent the user from hitting out of memory, I think HIVEHASHTABLEFOLLOWBYGBYMAXMEMORYUSAGE is safer. Ideally this isn't a behavior we want to be relying on for performance. If the user is unhappy that Hive didn't use enough memory for hash tables, they can check their logs and settings and realize the settings were bad.

> Tez HashTableLoader attempts to allocate more memory than available when HIVECONVERTJOINNOCONDITIONALTASKTHRESHOLD exceeds process max mem
> ------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-10711
>                 URL: https://issues.apache.org/jira/browse/HIVE-10711
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Jason Dere
>            Assignee: Jason Dere
>     Attachments: HIVE-10711.1.patch, HIVE-10711.2.patch
>
>
> Tez HashTableLoader bases its memory allocation on HIVECONVERTJOINNOCONDITIONALTASKTHRESHOLD. If this value is larger than the process max memory, this can result in the HashTableLoader trying to use more memory than is available to the process.
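For illustration only, a minimal sketch of the capping behavior being discussed, assuming a hypothetical clampHashTableMemory helper and an arbitrary 0.55 heap fraction (neither is the actual HIVE-10711 patch): the configured noconditionaltask size is bounded by a fraction of the process max memory, so a misconfigured threshold cannot drive the HashTableLoader past what the task JVM actually has.

{code:java}
// Sketch only: cap the configured map-join memory at a fraction of the JVM max heap
// so a bad HIVECONVERTJOINNOCONDITIONALTASKTHRESHOLD cannot exceed process memory.
// Class name, method name, and the 0.55 fraction are illustrative assumptions,
// not the HIVE-10711 patch itself.
public class HashTableMemorySketch {

  static long clampHashTableMemory(long configuredThreshold) {
    long processMaxMemory = Runtime.getRuntime().maxMemory(); // roughly the task JVM's -Xmx
    long usable = (long) (processMaxMemory * 0.55);           // leave headroom for the rest of the task
    return Math.min(configuredThreshold, usable);
  }

  public static void main(String[] args) {
    // e.g. user configured a 10 GB noconditionaltask size on a much smaller container
    long configured = 10L * 1024 * 1024 * 1024;
    System.out.println("memory granted to hash tables: " + clampHashTableMemory(configured));
  }
}
{code}

With settings like the above (10 GB requested on a small container), the loader would receive the capped value rather than the raw threshold, which matches the intent described in the comment: protect against out of memory first, and let the user discover the bad setting from their logs and configuration.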