[ https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550340#comment-16550340 ]
Hive QA commented on HIVE-19937:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932285/HIVE-19937.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14679 tests executed

*Failed tests:*
{noformat}
org.apache.hive.minikdc.TestJdbcWithMiniKdc.testTokenAuth (batchId=263)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12720/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12720/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12720/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932285 - PreCommit-HIVE-Build

> Intern fields in MapWork on deserialization
> -------------------------------------------
>
>                 Key: HIVE-19937
>                 URL: https://issues.apache.org/jira/browse/HIVE-19937
>             Project: Hive
>          Issue Type: Improvement
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>            Priority: Major
>         Attachments: HIVE-19937.1.patch, HIVE-19937.2.patch, HIVE-19937.3.patch, HIVE-19937.4.patch, post-patch-report.html, report.html
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from
> being thrown. However, cloning comes at the cost of storing a duplicate
> {{JobConf}} object for each Spark task. These objects can take up a
> significant amount of memory, so we should intern their fields so that Spark
> tasks running in the same JVM don't store duplicate copies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
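
For illustration only, the sketch below shows the kind of interning the description is asking for; it is a hypothetical helper, not the attached patch, and the class/method names are made up for the example. String-valued fields are passed through a JVM-wide weak interner once, on the deserialization path, so cloned per-task copies of {{MapWork}}/{{JobConf}} data end up sharing one instance of each repeated value instead of holding duplicates.

{code:java}
// Hypothetical helper, not the HIVE-19937 patch itself: dedupe repeated
// string values after a plan/conf object has been deserialized.
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

import java.util.Map;

public final class FieldInternUtil {

  // A weak interner keeps one canonical copy of each live string but still
  // lets unreferenced strings be garbage collected.
  private static final Interner<String> STRINGS = Interners.newWeakInterner();

  private FieldInternUtil() {}

  public static String intern(String s) {
    return s == null ? null : STRINGS.intern(s);
  }

  // Replace every value in a deserialized string->string map (e.g. per-path
  // properties) with its canonical interned instance.
  public static void internValues(Map<String, String> props) {
    if (props == null) {
      return;
    }
    for (Map.Entry<String, String> e : props.entrySet()) {
      e.setValue(intern(e.getValue()));
    }
  }
}
{code}

Interning at deserialization time keeps the per-task cost to a single pass over the fields; Hadoop ships a comparable weak-interning utility ({{org.apache.hadoop.util.StringInterner}}) that could serve the same purpose.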