[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272795#comment-14272795 ]

Ashutosh Chauhan commented on HIVE-9104:
----------------------------------------

[~vikram.dixit] This should be backported to 0.14.1, since it's a correctness issue.

> windowing.q failed when mapred.reduce.tasks is set to larger than one
> ----------------------------------------------------------------------
>
>                 Key: HIVE-9104
>                 URL: https://issues.apache.org/jira/browse/HIVE-9104
>             Project: Hive
>          Issue Type: Sub-task
>          Components: PTF-Windowing
>    Affects Versions: 0.14.0
>            Reporter: Chao
>            Assignee: Chao
>             Fix For: 0.15.0
>
>         Attachments: HIVE-9104.2.patch, HIVE-9104.patch
>
>
> Test {{windowing.q}} is actually not enabled in the Spark branch - in the test configurations it is listed as {{windowing.q.q}}.
> I just ran this test, and the query
> {code}
> -- 12. testFirstLastWithWhere
> select p_mfgr, p_name, p_size,
> rank() over(distribute by p_mfgr sort by p_name) as r,
> sum(p_size) over (distribute by p_mfgr sort by p_name rows between current row and current row) as s2,
> first_value(p_size) over w1 as f,
> last_value(p_size, false) over w1 as l
> from part
> where p_mfgr = 'Manufacturer#3'
> window w1 as (distribute by p_mfgr sort by p_name rows between 2 preceding and 2 following);
> {code}
> failed with the following exception:
> {noformat}
> java.lang.RuntimeException: Hive Runtime Error while closing operators: null
>   at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.close(SparkReduceRecordHandler.java:446)
>   at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.closeRecordProcessor(HiveReduceFunctionResultList.java:58)
>   at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:108)
>   at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$2.apply(AsyncRDDActions.scala:115)
>   at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$2.apply(AsyncRDDActions.scala:115)
>   at org.apache.spark.SparkContext$$anonfun$30.apply(SparkContext.scala:1390)
>   at org.apache.spark.SparkContext$$anonfun$30.apply(SparkContext.scala:1390)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>   at org.apache.spark.scheduler.Task.run(Task.scala:56)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.NoSuchElementException
>   at java.util.ArrayDeque.getFirst(ArrayDeque.java:318)
>   at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFFirstValue$FirstValStreamingFixedWindow.terminate(GenericUDAFFirstValue.java:290)
>   at org.apache.hadoop.hive.ql.udf.ptf.WindowingTableFunction.finishPartition(WindowingTableFunction.java:413)
>   at org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.finishPartition(PTFOperator.java:337)
>   at org.apache.hadoop.hive.ql.exec.PTFOperator.closeOp(PTFOperator.java:95)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:598)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:610)
>   at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.close(SparkReduceRecordHandler.java:431)
>   ... 15 more
> {noformat}
> We need to find out:
> - Since which commit this test started failing, and
> - Why it fails
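For context on the root cause: {{java.util.ArrayDeque.getFirst()}} throws {{NoSuchElementException}} when the deque is empty, whereas {{peekFirst()}} returns {{null}}. The snippet below is only a standalone sketch of that JDK behaviour (it is not Hive code, and the variable name {{valueChain}} is illustrative); it suggests {{FirstValStreamingFixedWindow.terminate()}} reached an empty internal deque for some partition.

{code}
import java.util.ArrayDeque;
import java.util.NoSuchElementException;

public class EmptyDequeDemo {
  public static void main(String[] args) {
    // An empty deque, standing in for the per-partition window buffer
    // (the name valueChain is illustrative only, not a claim about Hive internals).
    ArrayDeque<Integer> valueChain = new ArrayDeque<>();

    // peekFirst() is the non-throwing accessor: it returns null on an empty deque.
    System.out.println("peekFirst: " + valueChain.peekFirst());

    try {
      // getFirst() throws NoSuchElementException on an empty deque, which is the
      // root cause shown in the trace above (ArrayDeque.getFirst).
      valueChain.getFirst();
    } catch (NoSuchElementException e) {
      System.out.println("getFirst() on an empty deque threw NoSuchElementException");
    }
  }
}
{code}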
[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272722#comment-14272722 ]

Hive QA commented on HIVE-9104:
-------------------------------

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12691483/HIVE-9104.2.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6747 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2327/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2327/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2327/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12691483 - PreCommit-HIVE-TRUNK-Build
[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272334#comment-14272334 ]

Chao commented on HIVE-9104:
----------------------------

[~xuefuz] OK, will do.
[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272299#comment-14272299 ]

Xuefu Zhang commented on HIVE-9104:
-----------------------------------

[~csun] Could you add a test case, perhaps one in which the same query runs with multiple reducers? It can be in the same .q file.
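(Illustration only: such a multi-reducer case in the same .q file could look roughly like the sketch below, reusing the failing query from the description with {{mapred.reduce.tasks}} forced above one. This is a hypothetical sketch, not necessarily the test the patch adds.)

{code}
-- hypothetical follow-up case: same first_value/last_value query, forced onto multiple reducers
set mapred.reduce.tasks=4;

select p_mfgr, p_name, p_size,
  first_value(p_size) over w1 as f,
  last_value(p_size, false) over w1 as l
from part
where p_mfgr = 'Manufacturer#3'
window w1 as (distribute by p_mfgr sort by p_name rows between 2 preceding and 2 following);
{code}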
[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272290#comment-14272290 ]

Harish Butani commented on HIVE-9104:
--------------------------------------

+1, thanks for tracking this down.
[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272187#comment-14272187 ]

Xuefu Zhang commented on HIVE-9104:
-----------------------------------

+1. The code looks reasonable to me. However, it would be great if [~rhbutani] or someone else familiar with this part of the code could take a look.
[jira] [Commented] (HIVE-9104) windowing.q failed when mapred.reduce.tasks is set to larger than one
[ https://issues.apache.org/jira/browse/HIVE-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269370#comment-14269370 ]

Hive QA commented on HIVE-9104:
-------------------------------

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12690674/HIVE-9104.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6732 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_covar_samp
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2291/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2291/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2291/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12690674 - PreCommit-HIVE-TRUNK-Build