[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-19, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Resolution: Fixed
Fix Version/s: 2.2.0, 2.1.0
Status: Resolved  (was: Patch Available)

The test failures are unrelated.
Committed to master and branch-2.1.
Thanks to [~ctang.ma] for reviewing the code.

> SMB MapJoin failed for Hive on Spark when kerberized
> 
>
> Key: HIVE-14015
> URL: https://issues.apache.org/jira/browse/HIVE-14015
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Fix For: 2.1.0, 2.2.0
>
> Attachments: HIVE-14015.1.patch, HIVE-14015.2.patch
>
>
> java.io.IOException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token 
> can be issued only with kerberos or web authentication
> It can be reproduced as follows:
> 1) Prepare sample data:
> a=1
> while [[ $a -lt 100 ]]; do echo $a; let a=$a+1; done > data
> 2) Prepare the source Hive table:
> CREATE TABLE `s`(`c` string);
> load data local inpath 'data' into table s;
> 3) Prepare the bucketed table:
> set hive.enforce.bucketing=true;
> set hive.enforce.sorting=true;
> CREATE TABLE `t`(`c` string) CLUSTERED BY (c) SORTED BY (c) INTO 5 BUCKETS;
> insert into t select * from s;
> 4) Reproduce the issue:
> SET hive.execution.engine=spark;
> SET hive.auto.convert.sortmerge.join = true;
> SET hive.auto.convert.sortmerge.join.bigtable.selection.policy = org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSelectorForAutoSMJ;
> SET hive.auto.convert.sortmerge.join.noconditionaltask = true;
> SET hive.optimize.bucketmapjoin = true;
> SET hive.optimize.bucketmapjoin.sortedmerge = true;
> select * from t join t t1 on t.c=t1.c;
> The stack trace is as follows:
> {noformat}
> Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, ychencdh571-2.vpc.cloudera.com): java.lang.RuntimeException: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"c":"13"}
>   at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:154)
>   at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
>   at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>   at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:95)
>   at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
>   at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120)
>   at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2003)
>   at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2003)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>   at org.apache.spark.scheduler.Task.run(Task.scala:89)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"c":"13"}
>   at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:507)
>   at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7454)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:542)
>   at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getDelegationToken(AuthorizationProviderProxyClientProtocol.java:662)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> {noformat}

[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-19, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Attachment: HIVE-14015.2.patch


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-17, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Attachment: (was: HIVE-14015.1.patch)


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-17, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Attachment: HIVE-14015.1.patch


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-17, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Status: Open  (was: Patch Available)


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-17, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Status: Patch Available  (was: Open)


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-16, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Attachment: HIVE-14015.1.patch


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-16, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Attachment: (was: HIVE-14015.1.patch)


[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-16, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Description: updated; the full description, including the complete stack trace, is quoted in the first message above.

[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-15, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Status: Patch Available  (was: Open)

Patch 1 fixes the issue by putting mapreduce.job.credentials.binary into the JobConf.
Code review needed.
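As a rough illustration of that approach (a sketch only, not the actual patch code): the idea is to copy the token-file location that the container already has in its environment into the JobConf used by the SMB small-table local work, so downstream HDFS clients reuse the existing delegation tokens instead of asking the NameNode to issue a new one. The property name mapreduce.job.credentials.binary and the constant UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION are real Hadoop identifiers; the helper class and method below are hypothetical.
{noformat}
// Hedged sketch, not the HIVE-14015 patch code.
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.security.UserGroupInformation;

public final class CredentialConfSketch {
  private CredentialConfSketch() {}

  // Hypothetical helper: mirror HADOOP_TOKEN_FILE_LOCATION from the
  // process environment into mapreduce.job.credentials.binary, so code
  // that builds HDFS clients from this JobConf picks up the token file
  // instead of requesting a new delegation token (which fails without
  // direct Kerberos authentication).
  public static void propagateTokenFile(JobConf jobConf) {
    String tokenFile = System.getenv(UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION);
    if (tokenFile != null) {
      jobConf.set("mapreduce.job.credentials.binary", tokenFile);
    }
  }
}
{noformat}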



[jira] [Updated] (HIVE-14015) SMB MapJoin failed for Hive on Spark when kerberized

2016-06-15, Yongzhi Chen (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14015:

Attachment: HIVE-14015.1.patch

The SMB MapredLocalWork needs to set HADOOP_TOKEN_FILE_LOCATION in the JobConf.
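For background (a sketch under stated assumptions, not part of the patch): HADOOP_TOKEN_FILE_LOCATION names a file of serialized credentials that a child process can load into its UserGroupInformation. Credentials.readTokenStorageFile and UserGroupInformation.addCredentials are real Hadoop APIs; the wiring around them below is illustrative.
{noformat}
// Hedged sketch of the consuming side: load the delegation tokens named
// by HADOOP_TOKEN_FILE_LOCATION into the current user, so later HDFS
// calls authenticate with those tokens rather than requesting new ones.
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public final class TokenLoadSketch {
  public static void main(String[] args) throws IOException {
    String tokenFile = System.getenv(UserGroupInformation.HADOOP_TOKEN_FILE_LOCATION);
    if (tokenFile == null) {
      System.out.println("No HADOOP_TOKEN_FILE_LOCATION in the environment");
      return;
    }
    // Deserialize the token storage file and attach its tokens and
    // secrets to the current UGI.
    Credentials creds = Credentials.readTokenStorageFile(new File(tokenFile), new Configuration());
    UserGroupInformation.getCurrentUser().addCredentials(creds);
  }
}
{noformat}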




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)