Re: Review Request 34666: HIVE-9152 - Dynamic Partition Pruning [Spark Branch]

2015-07-02 Thread chengxiang li

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34666/#review90197
---



ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkRemoveDynamicPruningBySize.java
 (line 59)
https://reviews.apache.org/r/34666/#comment143202

The statistics data should be quite inaccurate after filter and group, as 
it is computed based on estimation at compile time. I think threshold 
verification on inaccurate data is unacceptable, as it means the 
threshold may not work at all.
We may check this threshold in SparkPartitionPruningSinkOperator at runtime instead.
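
For illustration, a minimal sketch of what such a runtime guard could look like
(class, field, and config names here are hypothetical, not part of the patch):

    // Illustrative only: count the bytes actually produced by the pruning
    // sink at runtime and stop accepting once a configured cap is exceeded,
    // instead of trusting compile-time statistics.
    public class RuntimePruningGuard {
      private final long maxDataSize;  // e.g. read from a Hive config property
      private long bytesWritten = 0;

      public RuntimePruningGuard(long maxDataSize) {
        this.maxDataSize = maxDataSize;
      }

      /** Returns false once the pruning output exceeds the threshold. */
      public boolean accept(int recordSizeInBytes) {
        bytesWritten += recordSizeInBytes;
        return bytesWritten <= maxDataSize;
      }
    }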



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java (line 396)
https://reviews.apache.org/r/34666/#comment143199

Why do we need a List for table/column name/partition key here? Do we support 
multiple PartitionPruningSinkOperators inside a single operator tree?



ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkPartitionPruningSinkOperator.java
 (line 61)
https://reviews.apache.org/r/34666/#comment143203

When the appended data size exceeds its capacity, DataOutputBuffer expands 
its byte array by creating a new array of twice the size and copying the old one 
into it. An estimated initial byte array size should be able to avoid most of 
the array copies.
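
As a minimal sketch of the pre-sizing idea (the estimate below is a made-up
placeholder; the real value would come from the compile-time size estimate):

    import org.apache.hadoop.io.DataOutputBuffer;

    public class BufferPresizeSketch {
      public static void main(String[] args) throws Exception {
        // The no-arg constructor starts with a small backing array; every
        // overflow allocates a new array of twice the size and copies the
        // old contents across. Pre-sizing avoids most of those copies.
        int estimatedBytes = 4 * 1024;  // hypothetical output-size estimate
        DataOutputBuffer buffer = new DataOutputBuffer(estimatedBytes);
        buffer.writeLong(42L);          // early writes no longer trigger regrowth
      }
    }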


- chengxiang li


On May 26, 2015, 4:28 p.m., Chao Sun wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34666/
 ---
 
 (Updated May 26, 2015, 4:28 p.m.)
 
 
 Review request for hive, chengxiang li and Xuefu Zhang.
 
 
 Bugs: HIVE-9152
 https://issues.apache.org/jira/browse/HIVE-9152
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Tez implemented dynamic partition pruning in HIVE-7826. This is a nice 
 optimization and we should implement the same in HOS.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 43c53fc 
   itests/src/test/resources/testconfiguration.properties 2a5f7e3 
   metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 0f86117 
   metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp a0b34cb 
   metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h 55e0385 
   metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp 749c97a 
   metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py 
 4cc54e8 
   ql/if/queryplan.thrift c8dfa35 
   ql/src/gen/thrift/gen-cpp/queryplan_types.h ac73bc5 
   ql/src/gen/thrift/gen-cpp/queryplan_types.cpp 19d4806 
   
 ql/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/OperatorType.java
  e18f935 
   ql/src/gen/thrift/gen-php/Types.php 7121ed4 
   ql/src/gen/thrift/gen-py/queryplan/ttypes.py 53c0106 
   ql/src/gen/thrift/gen-rb/queryplan_types.rb c2c4220 
   ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java 9867739 
   ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 91e8a02 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveSparkClientFactory.java 
 21398d8 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkUtilities.java 
 e6c845c 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorSparkPartitionPruningSinkOperator.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java 
 1de7e40 
   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 9d5730d 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java ea5efe5 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkDynamicPartitionPruningOptimization.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/SparkRemoveDynamicPruningBySize.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SparkMapJoinResolver.java
  8e56263 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
 5f731d7 
   
 ql/src/java/org/apache/hadoop/hive/ql/optimizer/spark/SparkPartitionPruningSinkDesc.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkProcContext.java 
 447f104 
   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/GenSparkUtils.java 
 e27ce0d 
   
 ql/src/java/org/apache/hadoop/hive/ql/parse/spark/OptimizeSparkProcContext.java
  f7586a4 
   ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkCompiler.java 
 19aae70 
   
 ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkPartitionPruningOptimizer.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/parse/spark/SparkPartitionPruningSinkOperator.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java 05a5841 
   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java aa291b9 
   

[jira] [Created] (HIVE-11166) HiveHBaseTableOutputFormat can't call getFileExtension(JobConf jc, boolean isCompressed, HiveOutputFormat<?, ?> hiveOutputFormat)

2015-07-02 Thread meiyoula (JIRA)
meiyoula created HIVE-11166:
---

 Summary: HiveHBaseTableOutputFormat can't call 
getFileExtension(JobConf jc, boolean isCompressed, HiveOutputFormat<?, ?> hiveOutputFormat)
 Key: HIVE-11166
 URL: https://issues.apache.org/jira/browse/HIVE-11166
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler, Spark
Reporter: meiyoula


 I created an HBase table with HBaseStorageHandler in the JDBCServer of Spark, 
then executed an *insert into* SQL statement, and a ClassCastException occurred.
{quote}
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
1 in stage 3.0 failed 4 times, most recent failure: Lost task 1.3 in stage 3.0 
(TID 12, vm-17): java.lang.ClassCastException: 
org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat cannot be cast to 
org.apache.hadoop.hive.ql.io.HiveOutputFormat
at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.outputFormat$lzycompute(hiveWriterContainers.scala:72)
at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.outputFormat(hiveWriterContainers.scala:71)
at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.getOutputName(hiveWriterContainers.scala:91)
at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.initWriters(hiveWriterContainers.scala:115)
at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.executorSideSetup(hiveWriterContainers.scala:84)
at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable.org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1(InsertIntoHiveTable.scala:112)
at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:93)
at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:93)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:197)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
{quote}

It's because of the Spark code below. For an HBase table, the outputFormat is 
HiveHBaseTableOutputFormat, which is not an instance of HiveOutputFormat.
{quote}
@transient private lazy val outputFormat =
  conf.value.getOutputFormat.asInstanceOf[HiveOutputFormat[AnyRef, Writable]]
val extension = Utilities.getFileExtension(conf.value,
  fileSinkConf.getCompressed, outputFormat)
{quote}
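
In Java terms, the defensive pattern that avoids this kind of failure is an
instanceof guard before the downcast (a sketch under assumed types, not the
actual Spark fix):

{code}
import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
import org.apache.hadoop.mapred.OutputFormat;

public class OutputFormatGuard {
  // Only downcast when the runtime type really implements HiveOutputFormat;
  // HiveHBaseTableOutputFormat, per the stack trace above, does not.
  static HiveOutputFormat<?, ?> asHiveOutputFormat(OutputFormat<?, ?> raw) {
    if (raw instanceof HiveOutputFormat) {
      return (HiveOutputFormat<?, ?>) raw;
    }
    return null;  // caller must fall back to the generic OutputFormat path
  }
}
{code}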




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11167) Dynamic partition - hive-spark failed for date type

2015-07-02 Thread Ratan Kumar Nath (JIRA)
Ratan Kumar Nath created HIVE-11167:
---

 Summary: Dynamic partition - hive-spark failed for date type
 Key: HIVE-11167
 URL: https://issues.apache.org/jira/browse/HIVE-11167
 Project: Hive
  Issue Type: Bug
Reporter: Ratan Kumar Nath
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


new to Hive and couldn't build successfully

2015-07-02 Thread baishaoqi
Dear developers:
I am new to Hive. I have a task that requires compiling Hive and making some 
modifications (very few) for my own use.
But when I type the following commands:

  $ svn co http://svn.apache.org/repos/asf/hive/trunk hive
  $ cd hive
  $ mvn clean install -Phadoop-2,dist

the build does not succeed:
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 26:58 min
[INFO] Finished at: 2015-07-02T16:21:52+08:00
[INFO] Final Memory: 203M/3304M
[INFO] 
[ERROR] Failed to execute goal on project hive-hcatalog-pig-adapter: Could not 
resolve dependencies for project 
org.apache.hive.hcatalog:hive-hcatalog-pig-adapter:jar:1.2.0-SNAPSHOT: Could 
not transfer artifact joda-time:joda-time:jar:2.2 from/to central 
(https://repo.maven.apache.org/maven2): GET request of: 
joda-time/joda-time/2.2/joda-time-2.2.jar from central failed: SSL peer shut 
down incorrectly - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hive-hcatalog-pig-adapter

  

2015-07-02


baishaoqi 

[jira] [Created] (HIVE-11168) Followup on HIVE-9917, set the default value of hive.int.timestamp.conversion.in.seconds to true

2015-07-02 Thread Aihua Xu (JIRA)
Aihua Xu created HIVE-11168:
---

 Summary: Followup on HIVE-9917, set the default value of 
hive.int.timestamp.conversion.in.seconds to true
 Key: HIVE-11168
 URL: https://issues.apache.org/jira/browse/HIVE-11168
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 2.0.0
Reporter: Aihua Xu






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36139: HIVE-11130 Refactoring the code so that HiveTxnManager interface will support lock/unlock table/database object

2015-07-02 Thread Chao Sun

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36139/#review90282
---

Ship it!


Ship It!

- Chao Sun


On July 2, 2015, 8:28 p.m., Aihua Xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36139/
 ---
 
 (Updated July 2, 2015, 8:28 p.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-11130 Refactoring the code so that HiveTxnManager interface will support 
 lock/unlock table/database object
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 8bcf860 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java 7e93387 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java 2dd0c7d 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java 
 eccb8d1 
 
 Diff: https://reviews.apache.org/r/36139/diff/
 
 
 Testing
 ---
 
 Test has been done. This work will be followed by HIVE-10984 to solve the 
 explicit lock table issue.
 
 
 Thanks,
 
 Aihua Xu
 




RE: problems running spark tests

2015-07-02 Thread Li, Rui
Other guys also run into this on Mac.
The spark binary is downloaded to itests/thirdparty and then unpacked and 
copied to itests/qtest-spark/target/spark. Maybe you can manually do the 
process and check if anything goes wrong.

Cheers,
Rui Li

-Original Message-
From: Sergey Shelukhin [mailto:ser...@hortonworks.com] 
Sent: Friday, July 03, 2015 6:32 AM
To: dev@hive.apache.org
Subject: Re: problems running spark tests

I was able to get the tests to run with the parameter Hari suggested, on a 
different (Linux) machine.
However, on my Mac laptop, the bin/ part of spark directory is not regenerated. 
I guess I will do the usual shamanic dances like nuking the maven repo, 
re-cloning the code, etc., next time I need it. If that doesn’t work I might 
file a bug or revive this thread.

On 15/7/2, 11:40, Szehon Ho sze...@cloudera.com wrote:

This works for me..

mvn test -Dtest=TestSparkCliDriver -Dqfile=join1.q -Phadoop-2 For 
multiple tests you might need to add quotes around the comma-separated 
list.

I haven't seen that error, did you run from itests directory?  There 
are some steps in pom to copy over the spark scripts needed to run, 
that look like they were skipped as that script is not available in your run.

Thanks
Szehon

On Thu, Jul 2, 2015 at 10:31 AM, Sergey Shelukhin 
ser...@hortonworks.com
wrote:

 Hi. I am trying to run TestSparkCliDriver.

 1) Spark tests do not appear to support specifying a query like other
 tests; when I run mvn test -Phadoop-2 -Dtest=TestSparkCliDriver tests run,
 but with mvn test -Phadoop-2 -Dtest=TestSparkCliDriver -Dqfile=foo.q,bar.q,..
 test just instantly succeeds w/o running any queries. Is there some other
 way to specify those?

 2) When I run all the test, they fail with the below exception
 I’ve done a full regular build (mvn clean install … in root and then
 itests). Are more steps necessary?
 The itests/qtest-spark/../../itests/qtest-spark/target/spark directory
 exists and has bunch of stuff, but bin/ subdirectory that it tries to run
 from is indeed empty.

 2015-07-02 10:11:58,678 ERROR [main]: spark.SparkTask
 (SessionState.java:printError(987)) - Failed to execute spark task, with
 exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:57)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
 at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:127)
 at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:101)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1672)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1431)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1212)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1063)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:840)
 at org.apache.hadoop.hive.cli.TestSparkCliDriver.<clinit>(TestSparkCliDriver.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
 at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)

[jira] [Created] (HIVE-11178) Connecting to Hive via Beeline using Kerberos keytab

2015-07-02 Thread Ghanshyam Malu (JIRA)
Ghanshyam Malu created HIVE-11178:
-

 Summary: Connecting to Hive via Beeline using Kerberos keytab
 Key: HIVE-11178
 URL: https://issues.apache.org/jira/browse/HIVE-11178
 Project: Hive
  Issue Type: Wish
  Components: Clients
Reporter: Ghanshyam Malu


Is it possible to connect to Hive via Beeline using a (Kerberos) keytab file, 
similar to the approach used for JDBC described at 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-UsingKerberoswithaPre-AuthenticatedSubject

PS: Beeline does support connecting to a Kerberos-secured Hive server with a 
username and password, but I am looking for a way to connect with a keytab 
file. 
http://doc.mapr.com/display/MapR40x/Configuring+Hive+on+a+Secure+Cluster#ConfiguringHiveonaSecureCluster-UsingBeelinewithKerberos





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36156: HIVE-11053: Add more tests for HIVE-10844[Spark Branch]

2015-07-02 Thread lun gao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36156/
---

Review request for hive and chengxiang li.


Bugs: HIVE-11053
https://issues.apache.org/jira/browse/HIVE-11053


Repository: hive-git


Description
---

Add some test cases for self union, self join, CTE, and repeated sub-queries to 
verify the job of combining equivalent works in HIVE-10844.


Diffs
-

  ql/src/test/queries/clientpositive/Dynamic_RDD_Cache.q f5b40ea 
  ql/src/test/results/clientpositive/spark/Dynamic_RDD_Cache.q.out f52100a 

Diff: https://reviews.apache.org/r/36156/diff/


Testing
---


Thanks,

lun gao



[jira] [Created] (HIVE-11179) HIVE should allow custom converting from HivePrivilegeObjectDesc to privilegeObject for different authorizers

2015-07-02 Thread Dapeng Sun (JIRA)
Dapeng Sun created HIVE-11179:
-

 Summary: HIVE should allow custom converting from 
HivePrivilegeObjectDesc to privilegeObject for different authorizers
 Key: HIVE-11179
 URL: https://issues.apache.org/jira/browse/HIVE-11179
 Project: Hive
  Issue Type: Improvement
Reporter: Dapeng Sun
Assignee: Dapeng Sun


HIVE should allow custom conversion from HivePrivilegeObjectDesc to 
privilegeObject for different authorizers.

There is a case in Apache Sentry: Sentry supports URI and server-level 
privileges, but on the Hive side, 
{{AuthorizationUtils.getHivePrivilegeObject(privSubjectDesc)}} is used to do 
the conversion, and the code in {{getHivePrivilegeObject()}} only handles the 
cases for table and database:
{noformat}
privSubjectDesc.getTable() ? HivePrivilegeObjectType.TABLE_OR_VIEW :
HivePrivilegeObjectType.DATABASE;
{noformat}

A solution is to move this method to {{HiveAuthorizer}}, so that a custom 
authorizer could enhance it.
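
A rough sketch of the proposed shape (all types below are simplified 
stand-ins for the real Hive classes, only to show the override point):

{code}
enum ObjectType { TABLE_OR_VIEW, DATABASE, URI, SERVER }

class PrivilegeObjectDesc {
  boolean table;
  String uri;  // non-null for URI-scoped privileges
}

abstract class BaseAuthorizer {
  // Default behavior, as in AuthorizationUtils today: table or database only.
  ObjectType toObjectType(PrivilegeObjectDesc d) {
    return d.table ? ObjectType.TABLE_OR_VIEW : ObjectType.DATABASE;
  }
}

class SentryStyleAuthorizer extends BaseAuthorizer {
  @Override
  ObjectType toObjectType(PrivilegeObjectDesc d) {
    if (d.uri != null) {
      return ObjectType.URI;  // a scope the default logic cannot express
    }
    return super.toObjectType(d);
  }
}
{code}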



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11170) port parts of HIVE-11015 to master for ease of future merging

2015-07-02 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-11170:
---

 Summary: port parts of HIVE-11015 to master for ease of future 
merging
 Key: HIVE-11170
 URL: https://issues.apache.org/jira/browse/HIVE-11170
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin


That patch changes how IOContext is created (file structure) and adds tests; I 
will merge non-LLAP parts of it now, so it's easier to merge later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: new to Hive and couldn't build successfully

2015-07-02 Thread Alan Gates
Hive no longer uses svn.  The official code is now in git.  So the below 
should be:


git clone https://git-wip-us.apache.org/repos/asf/hive.git
cd hive
mvn clean install -Phadoop-2 # you don't need dist unless you're building 
the package


Alan.

baishaoqi mailto:baisha...@kuaidigroup.com
July 2, 2015 at 2:56
Dear developers:
I am new to Hive. I have a task that requires compiling Hive and making 
some modifications (very few) for my own use.

But when I type the following commands:

$ svn co http://svn.apache.org/repos/asf/hive/trunk hive
$ cd hive
$ mvn clean install -Phadoop-2,dist

the build does not succeed:
[INFO] BUILD FAILURE
[INFO] 


[INFO] Total time: 26:58 min
[INFO] Finished at: 2015-07-02T16:21:52+08:00
[INFO] Final Memory: 203M/3304M
[INFO] 

[ERROR] Failed to execute goal on project hive-hcatalog-pig-adapter: 
Could not resolve dependencies for project 
org.apache.hive.hcatalog:hive-hcatalog-pig-adapter:jar:1.2.0-SNAPSHOT: 
Could not transfer artifact joda-time:joda-time:jar:2.2 from/to 
central (https://repo.maven.apache.org/maven2): GET request of: 
joda-time/joda-time/2.2/joda-time-2.2.jar from central failed: SSL 
peer shut down incorrectly - [Help 1]

[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with 
the -e switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, 
please read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

[ERROR]
[ERROR] After correcting the problems, you can resume the build with 
the command

[ERROR] mvn goals -rf :hive-hcatalog-pig-adapter



2015-07-02


baishaoqi


[jira] [Created] (HIVE-11172) Vectorization wrong results for aggregate query with where clause without group by

2015-07-02 Thread Yi Zhang (JIRA)
Yi Zhang created HIVE-11172:
---

 Summary: Vectorization wrong results for aggregate query with 
where clause without group by
 Key: HIVE-11172
 URL: https://issues.apache.org/jira/browse/HIVE-11172
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 0.14.0
Reporter: Yi Zhang
Priority: Critical


create table testvec(id int, dt int, greg_dt string) stored as orc;

insert into table testvec
values 
(1,20150330, '2015-03-30'),
(2,20150301, '2015-03-01'),
(3,20150502, '2015-05-02'),
(4,20150401, '2015-04-01'),
(5,20150313, '2015-03-13'),
(6,20150314, '2015-03-14'),
(7,20150404, '2015-04-04');



hive> select dt, greg_dt from testvec where id=5;
OK
20150313    2015-03-13
Time taken: 4.435 seconds, Fetched: 1 row(s)


hive> set hive.vectorized.execution.enabled=true;
hive> set hive.map.aggr;
hive.map.aggr=true

hive> select max(dt), max(greg_dt) from testvec where id=5;

OK
20150313    2015-03-30

hive> set hive.vectorized.execution.enabled=false;
hive> select max(dt), max(greg_dt) from testvec where id=5;
OK
20150313    2015-03-13



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


problems running spark tests

2015-07-02 Thread Sergey Shelukhin
Hi. I am trying to run TestSparkCliDriver.

1) Spark tests do not appear to support specifying a query like other
tests; when I run mvn test -Phadoop-2 -Dtest=TestSparkCliDriver tests run,
but with 
mvn test -Phadoop-2 -Dtest=TestSparkCliDriver -Dqfile=foo.q,bar.q,.. test
just instantly succeeds w/o running any queries. Is there some other way
to specify those?

2) When I run all the tests, they fail with the below exception.
I’ve done a full regular build (mvn clean install … in root and then
itests). Are more steps necessary?
The itests/qtest-spark/../../itests/qtest-spark/target/spark directory
exists and has a bunch of stuff, but the bin/ subdirectory that it tries to run
from is indeed empty.

2015-07-02 10:11:58,678 ERROR [main]: spark.SparkTask
(SessionState.java:printError(987)) - Failed to execute spark task, with
exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:57)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:127)
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:101)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1672)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1431)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1212)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1063)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:840)
at org.apache.hadoop.hive.cli.TestSparkCliDriver.<clinit>(TestSparkCliDriver.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.io.IOException: Cannot run program "[snip]/itests/qtest-spark/../../itests/qtest-spark/target/spark/bin/spark-submit": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.hive.spark.client.SparkClientImpl.startDriver(SparkClientImpl.java:415)
at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:94)
at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:91)
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:65)
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
... 33 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 39 more





Re: Review Request 36139: HIVE-11130 Refactoring the code so that HiveTxnManager interface will support lock/unlock table/database object

2015-07-02 Thread Chao Sun

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36139/#review90252
---



ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java (line 124)
https://reviews.apache.org/r/36139/#comment143261

Could you add more documentation for the added methods (e.g., what each 
method is for, in more detail, and what each of the parameters means)?



ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java (line 89)
https://reviews.apache.org/r/36139/#comment143263

long line.



ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java (line 169)
https://reviews.apache.org/r/36139/#comment143260

This is not only used for unlocking a database; maybe change the error 
message.



ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java (line 174)
https://reviews.apache.org/r/36139/#comment143264

this looks exactly the same as DDLTask#getHiveObject. Is there a way to get 
rid of the latter?



ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java (line 189)
https://reviews.apache.org/r/36139/#comment143262

long line.


- Chao Sun


On July 2, 2015, 5:24 p.m., Aihua Xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36139/
 ---
 
 (Updated July 2, 2015, 5:24 p.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-11130 Refactoring the code so that HiveTxnManager interface will support 
 lock/unlock table/database object
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
 8bcf860d1f4b783352e84c3b9c988e061ab0b751 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java 
 2dd0c7df2a91c7e9f7e8b9825725ccddd463262b 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java 
 eccb8d1d5ae7a41f90800fe8d62875978c98b074 
 
 Diff: https://reviews.apache.org/r/36139/diff/
 
 
 Testing
 ---
 
 Test has been done. This work will be followed by HIVE-10984 to solve the 
 explicit lock table issue.
 
 
 Thanks,
 
 Aihua Xu
 




[jira] [Created] (HIVE-11171) Join reordering algorithm might introduce projects between joins

2015-07-02 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-11171:
--

 Summary: Join reordering algorithm might introduce projects 
between joins
 Key: HIVE-11171
 URL: https://issues.apache.org/jira/browse/HIVE-11171
 Project: Hive
  Issue Type: Bug
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez


Join reordering algorithm might introduce projects between joins which causes 
multijoin optimization in SemanticAnalyzer to not kick in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11169) LLAP: more out file changes compared to master

2015-07-02 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-11169:
---

 Summary: LLAP: more out file changes compared to master
 Key: HIVE-11169
 URL: https://issues.apache.org/jira/browse/HIVE-11169
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36139: HIVE-11130 Refactoring the code so that HiveTxnManager interface will support lock/unlock table/database object

2015-07-02 Thread Aihua Xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36139/
---

Review request for hive.


Repository: hive-git


Description
---

HIVE-11130 Refactoring the code so that HiveTxnManager interface will support 
lock/unlock table/database object


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
8bcf860d1f4b783352e84c3b9c988e061ab0b751 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java 
2dd0c7df2a91c7e9f7e8b9825725ccddd463262b 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java 
eccb8d1d5ae7a41f90800fe8d62875978c98b074 

Diff: https://reviews.apache.org/r/36139/diff/


Testing
---

Test has been done. This work will be followed by HIVE-10984 to solve the 
explicit lock table issue.


Thanks,

Aihua Xu



Re: problems running spark tests

2015-07-02 Thread Hari Subramaniyan
Can you try running with -Dspark.query.files instead of -Dqfile from the itests 
directory?

Thanks
Hari

 On Jul 2, 2015, at 10:32 AM, Sergey Shelukhin ser...@hortonworks.com 
 wrote:
 
 Hi. I am trying to run TestSparkCliDriver.
 
 1) Spark tests do not appear to support specifying a query like other
 tests; when I run mvn test -Phadoop-2 -Dtest=TestSparkCliDriver tests run,
 but with 
 mvn test -Phadoop-2 -Dtest=TestSparkCliDriver -Dqfile=foo.q,bar.q,.. test
 just instantly succeeds w/o running any queries. Is there some other way
 to specify those?
 
 2) When I run all the test, they fail with the below exception
 I’ve done a full regular build (mvn clean install … in root and then
 itests). Are more steps necessary?
 The itests/qtest-spark/../../itests/qtest-spark/target/spark directory
 exists and has bunch of stuff, but bin/ subdirectory that it tries to run
 from is indeed empty.
 
 2015-07-02 10:11:58,678 ERROR [main]: spark.SparkTask
 (SessionState.java:printError(987)) - Failed to execute spark task, with
 exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:57)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
 at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:127)
 at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:101)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1672)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1431)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1212)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1063)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:840)
 at org.apache.hadoop.hive.cli.TestSparkCliDriver.<clinit>(TestSparkCliDriver.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
 at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.io.IOException: Cannot run program "[snip]/itests/qtest-spark/../../itests/qtest-spark/target/spark/bin/spark-submit": error=2, No such file or directory
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
 at org.apache.hive.spark.client.SparkClientImpl.startDriver(SparkClientImpl.java:415)
 at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:94)
 at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
 at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:91)
 at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:65)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
 ... 33 more
 Caused by: java.io.IOException: error=2, No such file or directory
 at java.lang.UNIXProcess.forkAndExec(Native Method)

Re: problems running spark tests

2015-07-02 Thread Szehon Ho
This works for me..

mvn test -Dtest=TestSparkCliDriver -Dqfile=join1.q -Phadoop-2
For multiple tests you might need to add quotes around the comma-separated
list.

I haven't seen that error, did you run from itests directory?  There are
some steps in pom to copy over the spark scripts needed to run, that look
like they were skipped as that script is not available in your run.

Thanks
Szehon

On Thu, Jul 2, 2015 at 10:31 AM, Sergey Shelukhin ser...@hortonworks.com
wrote:

 Hi. I am trying to run TestSparkCliDriver.

 1) Spark tests do not appear to support specifying a query like other
 tests; when I run mvn test -Phadoop-2 -Dtest=TestSparkCliDriver tests run,
 but with
 mvn test -Phadoop-2 -Dtest=TestSparkCliDriver -Dqfile=foo.q,bar.q,.. test
 just instantly succeeds w/o running any queries. Is there some other way
 to specify those?

 2) When I run all the test, they fail with the below exception
 I’ve done a full regular build (mvn clean install … in root and then
 itests). Are more steps necessary?
 The itests/qtest-spark/../../itests/qtest-spark/target/spark directory
 exists and has bunch of stuff, but bin/ subdirectory that it tries to run
 from is indeed empty.

 2015-07-02 10:11:58,678 ERROR [main]: spark.SparkTask
 (SessionState.java:printError(987)) - Failed to execute spark task, with
 exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:57)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
 at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:127)
 at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:101)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1672)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1431)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1212)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1063)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:840)
 at org.apache.hadoop.hive.cli.TestSparkCliDriver.<clinit>(TestSparkCliDriver.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
 at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.io.IOException: Cannot run program "[snip]/itests/qtest-spark/../../itests/qtest-spark/target/spark/bin/spark-submit": error=2, No such file or directory
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
 at org.apache.hive.spark.client.SparkClientImpl.startDriver(SparkClientImpl.java:415)
 at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:94)
 at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
 at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:91)
 at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:65)

[jira] [Created] (HIVE-11173) Fair ordering of fragments in wait queue

2015-07-02 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-11173:


 Summary: Fair ordering of fragments in wait queue
 Key: HIVE-11173
 URL: https://issues.apache.org/jira/browse/HIVE-11173
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran


Wait queue ordering is currently based on the following
1) Finishable or not
2) Vertex parallelism
3) Attempt start time

This ordering has issues when 2 or more queries are executed in 
parallel: all map tasks get scheduled first, even though reduce tasks of 
some DAGs are in a finishable state. This makes the first submitted query 
slower even though it could proceed to completion.

Add fair ordering to the wait queue comparator to take DAG submission time into 
account. For the above scenario, if we take DAG submission time into account, 
the first submitted task will proceed to completion.
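
As a sketch of the comparator idea (the interface and getters below are 
illustrative stand-ins, not the actual LLAP classes):

{code}
import java.util.Comparator;

interface Fragment {
  long getDagSubmissionTime();
  boolean isFinishable();
  long getAttemptStartTime();
}

class FairWaitQueueOrder {
  // Older DAGs first, so a running query's finishable reducers are not
  // starved by a newer query's map tasks; within a DAG, finishable
  // fragments first, then earlier attempts.
  static final Comparator<Fragment> COMPARATOR = Comparator
      .comparingLong(Fragment::getDagSubmissionTime)
      .thenComparing((Fragment f) -> !f.isFinishable())
      .thenComparingLong(Fragment::getAttemptStartTime);
}
{code}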



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11174) Hive does not treat floating point signed zeros as equal (-0.0 should equal 0.0 according to IEEE floating point spec)

2015-07-02 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-11174:
-

 Summary: Hive does not treat floating point signed zeros as equal 
(-0.0 should equal 0.0 according to IEEE floating point spec) 
 Key: HIVE-11174
 URL: https://issues.apache.org/jira/browse/HIVE-11174
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 1.2.0
Reporter: Lenni Kuff
Priority: Critical


Hive does not treat floating point signed zeros as equal (-0.0 should equal 
0.0).  This is because Hive uses Double.compareTo(), which states:
0.0d is considered by this method to be greater than -0.0d

http://docs.oracle.com/javase/7/docs/api/java/lang/Double.html#compareTo(java.lang.Double)

The IEEE 754 floating point spec specifies that signed -0.0 and 0.0 should be 
treated as equal. From the Wikipedia article 
(https://en.wikipedia.org/wiki/Signed_zero#Comparisons):
bq. negative zero and positive zero should compare as equal with the usual 
(numerical) comparison operators


How to reproduce:
{code}

select 1 where 0.0=-0.0;
Returns no results.

select 1 where -0.0<0.0;
Returns 1
{code}
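
The underlying Java behavior is easy to demonstrate in a standalone snippet 
(not Hive code; the expected output is in the comments):

{code}
public class SignedZeroDemo {
  public static void main(String[] args) {
    System.out.println(0.0d == -0.0d);                       // true: IEEE 754 equality
    System.out.println(Double.compare(0.0d, -0.0d));         // positive: 0.0 "greater" than -0.0
    System.out.println(Double.valueOf(0.0d).equals(-0.0d));  // false: bit-level comparison
  }
}
{code}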



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36139: HIVE-11130 Refactoring the code so that HiveTxnManager interface will support lock/unlock table/database object

2015-07-02 Thread Aihua Xu


 On July 2, 2015, 5:36 p.m., Chao Sun wrote:
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java, line 
  174
  https://reviews.apache.org/r/36139/diff/1/?file=998352#file998352line174
 
  this looks exactly the same as DDLTask#getHiveObject. Is there a way to 
  get rid of the latter?

Moved to HiveLockObject as a static method so that it can be used by both.


- Aihua


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36139/#review90252
---


On July 2, 2015, 5:24 p.m., Aihua Xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36139/
 ---
 
 (Updated July 2, 2015, 5:24 p.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-11130 Refactoring the code so that HiveTxnManager interface will support 
 lock/unlock table/database object
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
 8bcf860d1f4b783352e84c3b9c988e061ab0b751 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java 
 2dd0c7df2a91c7e9f7e8b9825725ccddd463262b 
   ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java 
 eccb8d1d5ae7a41f90800fe8d62875978c98b074 
 
 Diff: https://reviews.apache.org/r/36139/diff/
 
 
 Testing
 ---
 
 Test has been done. This work will be followed by HIVE-10984 to solve the 
 explicit lock table issue.
 
 
 Thanks,
 
 Aihua Xu
 




Re: Review Request 34059: HIVE-10673 Dynamically partitioned hash join for Tez

2015-07-02 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34059/
---

(Updated July 2, 2015, 7:46 p.m.)


Review request for hive, Matt McCline and Vikram Dixit Kumaraswamy.


Changes
---

Fixing failure in tez_smb_1.q - the big table position in 
CommonMergeJoinOperator and the ReduceWork were different; they need to be 
consistent for the merge join to work properly.


Bugs: HIVE-10673
https://issues.apache.org/jira/browse/HIVE-10673


Repository: hive-git


Description
---

Reduce-side hash join (using MapJoinOperator), where the Tez inputs to the 
reducer are unsorted.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 6d0cf15 
  itests/src/test/resources/testconfiguration.properties 441b278 
  ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java 15cafdd 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java d7f1b42 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KeyValuesAdapter.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KeyValuesFromKeyValue.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/KeyValuesFromKeyValues.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/ReduceRecordProcessor.java 
545d7c6 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/ReduceRecordSource.java 
7d79e87 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java 
e9bd44a 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedBatchUtil.java 
3780113 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/VectorMapJoinCommonOperator.java
 4c8c4b1 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java 
5a87bd6 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java 4d84f0f 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkMapJoinProc.java 
bca91dd 
  ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezProcContext.java adc31ae 
  ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java 11c1df6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezWork.java 6db8220 
  ql/src/java/org/apache/hadoop/hive/ql/plan/BaseWork.java a342738 
  ql/src/java/org/apache/hadoop/hive/ql/plan/CommonMergeJoinDesc.java f9c34cb 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java fb3c4a3 
  ql/src/java/org/apache/hadoop/hive/ql/plan/MapJoinDesc.java cee9100 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ReduceWork.java a78a92e 
  ql/src/test/queries/clientpositive/tez_dynpart_hashjoin_1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/tez_dynpart_hashjoin_2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/tez_vector_dynpart_hashjoin_1.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/tez_vector_dynpart_hashjoin_2.q 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/tez_dynpart_hashjoin_1.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/tez_dynpart_hashjoin_2.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/tez_vector_dynpart_hashjoin_1.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/tez/tez_vector_dynpart_hashjoin_2.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/34059/diff/


Testing
---

q-file tests added


Thanks,

Jason Dere



Review Request 36143: HIVE-11172

2015-07-02 Thread Hari Sankar Sivarama Subramaniyan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36143/
---

Review request for hive and Matt McCline.


Repository: hive-git


Description
---

Vectorization wrong results for aggregate query with where clause without group 
by


Diffs
-

  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxString.txt 7e0dda6 
  ql/src/test/queries/clientpositive/vector_aggregate_without_gby.q 
PRE-CREATION 
  ql/src/test/results/clientpositive/vector_aggregate_without_gby.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/36143/diff/


Testing
---


Thanks,

Hari Sankar Sivarama Subramaniyan



Re: Review Request 36139: HIVE-11130 Refactoring the code so that HiveTxnManager interface will support lock/unlock table/database object

2015-07-02 Thread Aihua Xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36139/
---

(Updated July 2, 2015, 8:28 p.m.)


Review request for hive.


Repository: hive-git


Description
---

HIVE-11130 Refactoring the code so that HiveTxnManager interface will support 
lock/unlock table/database object


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 8bcf860 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java 7e93387 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java 2dd0c7d 
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManagerImpl.java eccb8d1 

Diff: https://reviews.apache.org/r/36139/diff/


Testing
---

Test has been done. This work will be followed by HIVE-10984 to solve the 
explicit lock table issue.


Thanks,

Aihua Xu



[jira] [Created] (HIVE-11175) create function using jar does not work with sql std authorization

2015-07-02 Thread Olaf Flebbe (JIRA)
Olaf Flebbe created HIVE-11175:
--

 Summary: create function using jar does not work with sql std 
authorization
 Key: HIVE-11175
 URL: https://issues.apache.org/jira/browse/HIVE-11175
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 1.2.0
Reporter: Olaf Flebbe
 Fix For: 2.0.0


{{create function xxx as 'xxx' using jar 'file://foo.jar'}} 

fails with an error saying that access to the local foo.jar resource requires 
ADMIN privileges. The same happens for HDFS (DFS_URI).

The problem is that the semantic analysis enforces the ADMIN privilege for 
write access, but the jar is clearly an input, not an output. 

Patch and test case appended.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-11176) Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to [Ljava.lang.Object;

2015-07-02 Thread Soundararajan Velu (JIRA)
Soundararajan Velu created HIVE-11176:
-

 Summary: Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to 
[Ljava.lang.Object;
 Key: HIVE-11176
 URL: https://issues.apache.org/jira/browse/HIVE-11176
 Project: Hive
  Issue Type: Bug
  Components: Hive, Tez
Affects Versions: 1.2.0, 1.0.0
 Environment: Hive 1.2 and TEz 0.7
Reporter: Soundararajan Velu
Priority: Critical


Unreachable code: 
hive/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardStructObjectInspector.java


// With Data
@Override
@SuppressWarnings("unchecked")
public Object getStructFieldData(Object data, StructField fieldRef) {
  if (data == null) {
    return null;
  }
  // We support both List<Object> and Object[]
  // so we have to do differently.
  boolean isArray = !(data instanceof List);
  if (!isArray && !(data instanceof List)) {
    return data;
  }

The if condition above is self-contradictory: isArray is defined as 
!(data instanceof List), so !isArray && !(data instanceof List) is always 
false and the code section cannot be reached.

This causes a lot of class cast exceptions while using Tez and ORC file 
formats.

Changed the code to:

boolean isArray = data.getClass().isArray();
if (!isArray && !(data instanceof List)) {
  return data;
}

Even then, LazyStructs get passed as fields, causing downstream cast exceptions 
like LazyStruct cannot be cast to Text, etc.

So I changed the method to something like this:

// With Data
@Override
@SuppressWarnings("unchecked")
public Object getStructFieldData(Object data, StructField fieldRef) {
  if (data == null) {
    return null;
  }
  if (data instanceof LazyBinaryStruct) {
    data = ((LazyBinaryStruct) data).getFieldsAsList();
  }
  // We support both List<Object> and Object[]
  // so we have to do differently.
  boolean isArray = data.getClass().isArray();
  if (!isArray && !(data instanceof List)) {
    return data;
  }

This is causing ArrayIndexOutOfBoundsException and other typecast exceptions 
in object inspectors.

Please help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35902: HIVE-11103 Add banker's rounding BROUND UDF

2015-07-02 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35902/#review90283
---



ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java (line 756)
https://reviews.apache.org/r/35902/#comment143291

If you're going to add a vectorized version of the function, can you add a 
q-file test to test the vectorized bround()?



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/MathExpr.java 
(line 38)
https://reviews.apache.org/r/35902/#comment143290

Can you add a JUnit test somewhere to test that the behavior of 
MathExpr.bround(x) == RoundUtils.bround(x, 0)?
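
Something along these lines would do (a sketch only; the exact test class 
location and imports are left to the patch author):

    import static org.junit.Assert.assertEquals;

    import org.apache.hadoop.hive.ql.exec.vector.expressions.MathExpr;
    import org.apache.hadoop.hive.ql.udf.generic.RoundUtils;
    import org.junit.Test;

    public class TestBRoundConsistency {
      @Test
      public void testMathExprBRoundMatchesRoundUtils() {
        // Half-way cases are where banker's rounding differs from plain round().
        double[] inputs = {2.5, 3.5, -2.5, -3.5, 1.49, 1.51, 0.0};
        for (double x : inputs) {
          assertEquals(RoundUtils.bround(x, 0), MathExpr.bround(x), 0.0);
        }
      }
    }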


- Jason Dere


On June 26, 2015, 2:19 a.m., Alexander Pivovarov wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35902/
 ---
 
 (Updated June 26, 2015, 2:19 a.m.)
 
 
 Review request for hive and Jason Dere.
 
 
 Bugs: HIVE-11103
 https://issues.apache.org/jira/browse/HIVE-11103
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-11103 Add banker's rounding BROUND UDF
 
 
 Diffs
 -
 
   ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java 
 6485a2ac5f12dbdba7bdf4d17ba18ad054c6f73b 
   common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java 
 a8215f29aed3a0399ec274cc311a3c92e0cca55b 
   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java 
 fabc21e2092561cbf98c35a406e4ee40e71fe1de 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/BRoundWithNumDigitsDoubleToDouble.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalUtil.java
  ef800596deed612b525ed3371b196f275ad88e09 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncBRoundWithNumDigitsDecimalToDecimal.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FuncRoundWithNumDigitsDecimalToDecimal.java
  9f3e8a3fcacb17990c6644a67cf587ae9948adad 
   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/MathExpr.java 
 aef923e2c362a8d15b8dcc3467aef01a862c205c 
   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBRound.java 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFRound.java 
 963e4a87c417798f95bb1490a4275339a61e869c 
   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/RoundUtils.java 
 0b389a5783fa2cf6643919c411ee57a7ed873d84 
   ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFBRound.java 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/udf_bround.q PRE-CREATION 
   ql/src/test/results/clientpositive/show_functions.q.out 
 5de4ffcd1ace477af026b83fb7bfb8068fc192b3 
   ql/src/test/results/clientpositive/udf_bround.q.out PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/35902/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Alexander Pivovarov
 




[jira] [Created] (HIVE-11177) CLONE - LLAP: more spark out file changes compared to master

2015-07-02 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-11177:
---

 Summary: CLONE - LLAP: more spark out file changes compared to 
master
 Key: HIVE-11177
 URL: https://issues.apache.org/jira/browse/HIVE-11177
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: llap






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: problems running spark tests

2015-07-02 Thread Sergey Shelukhin
I was able to get the tests to run with the parameter Hari suggested, on a
different (Linux) machine.
However, on my Mac laptop, the bin/ part of spark directory is not
regenerated. 
I guess I will do the usual shamanic dances like nuking the maven repo,
re-cloning the code, etc., next time I need it. If that doesn’t work I
might file a bug or revive this thread.

On 15/7/2, 11:40, Szehon Ho sze...@cloudera.com wrote:

This works for me..

mvn test -Dtest=TestSparkCliDriver -Dqfile=join1.q -Phadoop-2
For multiple tests you might need to add quotes around the comma-separated
list.

I haven't seen that error, did you run from itests directory?  There are
some steps in pom to copy over the spark scripts needed to run, that look
like they were skipped as that script is not available in your run.

Thanks
Szehon

On Thu, Jul 2, 2015 at 10:31 AM, Sergey Shelukhin ser...@hortonworks.com
wrote:

 Hi. I am trying to run TestSparkCliDriver.

 1) Spark tests do not appear to support specifying a query like other
 tests; when I run mvn test -Phadoop-2 -Dtest=TestSparkCliDriver tests run,
 but with mvn test -Phadoop-2 -Dtest=TestSparkCliDriver -Dqfile=foo.q,bar.q,..
 test just instantly succeeds w/o running any queries. Is there some other way
 to specify those?

 2) When I run all the test, they fail with the below exception
 I’ve done a full regular build (mvn clean install … in root and then
 itests). Are more steps necessary?
 The itests/qtest-spark/../../itests/qtest-spark/target/spark directory
 exists and has bunch of stuff, but bin/ subdirectory that it tries to run
 from is indeed empty.

 2015-07-02 10:11:58,678 ERROR [main]: spark.SparkTask
 (SessionState.java:printError(987)) - Failed to execute spark task, with
 exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:57)
 at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
 at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:127)
 at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:101)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1672)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1431)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1212)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1063)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1053)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:840)
 at org.apache.hadoop.hive.cli.TestSparkCliDriver.<clinit>(TestSparkCliDriver.java:59)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)
 at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:35)
 at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:24)
 at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:11)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
 at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
 at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:26)
 at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262)
 at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: java.io.IOException: Cannot run program "[snip]/itests/qtest-spark/../../itests/qtest-spark/target/spark/bin/spark-submit": error=2, No such file or directory
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)