[jira] [Commented] (HIVE-8508) UT: fix bucketsort_insert tests - related to SMBMapJoinOperator

2014-12-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242455#comment-14242455
 ] 

Xuefu Zhang commented on HIVE-8508:
---

+1 pending on tests.

 UT: fix bucketsort_insert tests - related to SMBMapJoinOperator
 ---

 Key: HIVE-8508
 URL: https://issues.apache.org/jira/browse/HIVE-8508
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Thomas Friedrich
Assignee: Chinna Rao Lalam
 Attachments: HIVE-8508.1-spark.patch


 The 5 tests
 bucketsortoptimize_insert_2
 bucketsortoptimize_insert_4
 bucketsortoptimize_insert_6
 bucketsortoptimize_insert_7
 bucketsortoptimize_insert_8
 all fail with the same NPE in SMBMapJoinOperator: the order object is null:
 // fetch the first group for all small table aliases
 for (byte pos = 0; pos < order.length; pos++) {
   if (pos != posBigTable) {
     fetchNextGroup(pos);
   }
 }
 Daemon Thread [Executor task launch worker-3] (Suspended (exception NullPointerException))
 SMBMapJoinOperator.processOp(Object, int) line: 258
 FilterOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
 FilterOperator.processOp(Object, int) line: 137
 TableScanOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
 TableScanOperator.processOp(Object, int) line: 95
 MapOperator(Operator<T>).forward(Object, ObjectInspector) line: 799
 MapOperator.process(Writable) line: 536
 SparkMapRecordHandler.processRow(Object, Object) line: 139
 HiveMapFunctionResultList.processNextRecord(Tuple2<BytesWritable,BytesWritable>) line: 47
 HiveMapFunctionResultList.processNextRecord(Object) line: 28
 HiveBaseFunctionResultList$ResultIterator.hasNext() line: 108
 Wrappers$JIteratorWrapper<A>.hasNext() line: 41
 Iterator$class.foreach(Iterator, Function1) line: 727
 Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
 RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
 RDD$$anonfun$foreach$1.apply(Object) line: 760
 SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
 SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
 ResultTask<T,U>.runTask(TaskContext) line: 61
 ResultTask<T,U>(Task<T>).run(long) line: 56
 Executor$TaskRunner.run() line: 182
 ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
 ThreadPoolExecutor$Worker.run() line: 615
 Thread.run() line: 745
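As a hypothetical illustration (not Hive's actual code), the quoted loop fetches the first group for every small-table alias while skipping the big table; when order was never initialized on the Spark executor, the order.length dereference is exactly where the NPE above fires. The class and method names below are made up for this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the quoted SMBMapJoinOperator loop. In the failing
// tests, `order` was null on the executor, so `order.length` threw the NPE.
public class SmbLoopSketch {
    // alias order for the join; initialized here, unlike in the bug
    static byte[] order = {0, 1, 2};
    static int posBigTable = 0;   // position of the big table among the aliases

    // collect the small-table positions that would be fetched
    static List<Byte> smallTablePositions() {
        List<Byte> fetched = new ArrayList<>();
        for (byte pos = 0; pos < order.length; pos++) {
            if (pos != posBigTable) {
                fetched.add(pos); // stands in for fetchNextGroup(pos)
            }
        }
        return fetched;
    }

    public static void main(String[] args) {
        System.out.println(smallTablePositions()); // prints [1, 2]
    }
}
```

The sketch shows why only the small-table aliases are touched: the big table's rows are streamed, while each small table must be positioned at its first key group before the join can start.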
 There is also an NPE in FileSinkOperator: the FileSystem object fs is null:
 // in recent hadoop versions, use deleteOnExit to clean tmp files.
 if (isNativeTable) {
   autoDelete = fs.deleteOnExit(fsp.outPaths[0]);
 }
 Daemon Thread [Executor task launch worker-1] (Suspended (exception NullPointerException))
 FileSinkOperator.createBucketFiles(FileSinkOperator$FSPaths) line: 495
 FileSinkOperator.closeOp(boolean) line: 925
 FileSinkOperator(Operator<T>).close(boolean) line: 582
 SelectOperator(Operator<T>).close(boolean) line: 594
 SMBMapJoinOperator(Operator<T>).close(boolean) line: 594
 DummyStoreOperator(Operator<T>).close(boolean) line: 594
 FilterOperator(Operator<T>).close(boolean) line: 594
 TableScanOperator(Operator<T>).close(boolean) line: 594
 MapOperator(Operator<T>).close(boolean) line: 594
 SparkMapRecordHandler.close() line: 175
 HiveMapFunctionResultList.closeRecordProcessor() line: 57
 HiveBaseFunctionResultList$ResultIterator.hasNext() line: 122
 Wrappers$JIteratorWrapper<A>.hasNext() line: 41
 Iterator$class.foreach(Iterator, Function1) line: 727
 Wrappers$JIteratorWrapper<A>(AbstractIterator<A>).foreach(Function1<A,U>) line: 1157
 RDD$$anonfun$foreach$1.apply(Iterator<T>) line: 760
 RDD$$anonfun$foreach$1.apply(Object) line: 760
 SparkContext$$anonfun$runJob$3.apply(TaskContext, Iterator<T>) line: 1118
 SparkContext$$anonfun$runJob$3.apply(Object, Object) line: 1118
 ResultTask<T,U>.runTask(TaskContext) line: 61
 ResultTask<T,U>(Task<T>).run(long) line: 56
 Executor$TaskRunner.run() line: 182
 ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
 ThreadPoolExecutor$Worker.run() line: 615
 Thread.run() line: 745
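As a hedged sketch of the second failure (again not Hive's code): FileSinkOperator calls deleteOnExit on a Hadoop FileSystem handle, and a null fs means that handle was never obtained during operator initialization on the executor. The stand-in below uses java.io.File.deleteOnExit, which expresses the same cleanup idea, plus the kind of null guard that avoids the crash; all names here are invented for illustration:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch: register a temporary output file for JVM-exit cleanup,
// guarding against the null handle that caused the NPE in the stack above.
public class FsGuardSketch {
    // returns false instead of throwing when the handle is missing
    static boolean registerTmpCleanup(File out) {
        if (out == null) {
            return false;         // guard: nothing to register
        }
        out.deleteOnExit();       // JVM removes the file when it exits
        return true;
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("hive-sketch", ".tmp");
        System.out.println(registerTmpCleanup(tmp));  // prints true
        System.out.println(registerTmpCleanup(null)); // prints false
    }
}
```

In the real operator the fix is initialization order rather than a guard: the FileSystem must be obtained before close-time cleanup runs, which is what the Spark-branch patch addresses.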



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8508) UT: fix bucketsort_insert tests - related to SMBMapJoinOperator

2014-12-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242544#comment-14242544
 ] 

Hive QA commented on HIVE-8508:
---



{color:red}Overall{color}: -1, at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12686573/HIVE-8508.1-spark.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 7260 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join22
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/518/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/518/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-518/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12686573 - PreCommit-HIVE-SPARK-Build


[jira] [Commented] (HIVE-8508) UT: fix bucketsort_insert tests - related to SMBMapJoinOperator

2014-12-11 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242646#comment-14242646
 ] 

Xuefu Zhang commented on HIVE-8508:
---

Committed to Spark branch. Thanks, Chinna.

 UT: fix bucketsort_insert tests - related to SMBMapJoinOperator
 ---

 Key: HIVE-8508
 URL: https://issues.apache.org/jira/browse/HIVE-8508
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Thomas Friedrich
Assignee: Chinna Rao Lalam
 Fix For: spark-branch

 Attachments: HIVE-8508.1-spark.patch




