[jira] [Commented] (HIVE-4009) CLI Tests fail randomly due to MapReduce LocalJobRunner race condition

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222347#comment-14222347
 ] 

Hive QA commented on HIVE-4009:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682533/HIVE-4009.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6651 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1874/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1874/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682533 - PreCommit-HIVE-TRUNK-Build

 CLI Tests fail randomly due to MapReduce LocalJobRunner race condition
 --

 Key: HIVE-4009
 URL: https://issues.apache.org/jira/browse/HIVE-4009
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-4009-0.patch, HIVE-4009.patch


 Hadoop has a race condition, MAPREDUCE-5001, which causes tests to fail 
 randomly when using LocalJobRunner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8909) Hive doesn't correctly read Parquet nested types

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222359#comment-14222359
 ] 

Hive QA commented on HIVE-8909:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682975/HIVE-8909.6.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6681 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1875/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1875/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1875/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682975 - PreCommit-HIVE-TRUNK-Build

 Hive doesn't correctly read Parquet nested types
 

 Key: HIVE-8909
 URL: https://issues.apache.org/jira/browse/HIVE-8909
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: Ryan Blue
Assignee: Ryan Blue
 Attachments: HIVE-8909-1.patch, HIVE-8909-2.patch, HIVE-8909.2.patch, 
 HIVE-8909.3.patch, HIVE-8909.4.patch, HIVE-8909.5.patch, HIVE-8909.6.patch, 
 parquet-test-data.tar.gz


 Parquet's Avro and Thrift object models don't produce the same Parquet type 
 representation for lists and maps that Hive does. In the Parquet community, 
 we've defined what should be written, along with backward-compatibility rules 
 for existing data written by parquet-avro and parquet-thrift, in PARQUET-113. 
 We need to implement those rules in the Hive Converter classes.
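 For illustration, a minimal sketch of the difference (field names and types 
 are my own, summarizing the PARQUET-113 rules rather than the patch itself):
 {noformat}
 // Standard 3-level list representation defined in PARQUET-113:
 optional group my_list (LIST) {
   repeated group list {
     optional binary element (UTF8);
   }
 }

 // Legacy 2-level form written by parquet-avro, which the
 // backward-compatibility rules must still read:
 optional group my_list (LIST) {
   repeated binary array (UTF8);
 }
 {noformat}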



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8917) HIVE-5679 adds two thread safety problems

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222375#comment-14222375
 ] 

Hive QA commented on HIVE-8917:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682946/HIVE-8917.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 6651 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1876/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1876/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1876/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682946 - PreCommit-HIVE-TRUNK-Build

 HIVE-5679 adds two thread safety problems
 -

 Key: HIVE-8917
 URL: https://issues.apache.org/jira/browse/HIVE-8917
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Sergey Shelukhin
 Attachments: HIVE-8917.patch


 HIVE-5679 adds two static {{SimpleDateFormat}} objects, and 
 {{SimpleDateFormat}} is not thread-safe. These should be converted to thread 
 locals.
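 For illustration, a minimal sketch of the thread-local conversion (class name, 
 field name, and date pattern are hypothetical, not taken from the attached 
 patch):
 {code}
 import java.text.SimpleDateFormat;
 import java.util.Date;

 public final class SafeDateFormatter {
   // Each thread lazily gets its own SimpleDateFormat instance, so no
   // instance is ever shared across threads.
   private static final ThreadLocal<SimpleDateFormat> FORMAT =
       new ThreadLocal<SimpleDateFormat>() {
         @Override
         protected SimpleDateFormat initialValue() {
           return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
         }
       };

   public static String format(Date d) {
     // Safe under concurrency: FORMAT.get() is a per-thread instance.
     return FORMAT.get().format(d);
   }
 }
 {code}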



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8774) CBO: enable groupBy index

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222376#comment-14222376
 ] 

Hive QA commented on HIVE-8774:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682950/HIVE-8774.6.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1877/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1877/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1877/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-1877/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java'
Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target shims/scheduler/target 
packaging/target hbase-handler/target testutils/target jdbc/target 
metastore/target itests/target itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
hcatalog/hcatalog-pig-adapter/target accumulo-handler/target hwi/target 
common/target common/src/gen service/target contrib/target serde/target 
beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1641190.

At revision 1641190.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682950 - PreCommit-HIVE-TRUNK-Build

 CBO: enable groupBy index
 -

 Key: HIVE-8774
 URL: https://issues.apache.org/jira/browse/HIVE-8774
 Project: Hive
  Issue Type: Improvement
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
 HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch


 Right now, even when a groupby index is built, CBO is not able to use it. In 
 this patch, we are trying to make CBO use the groupby index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8936) Add SORT_QUERY_RESULTS for join tests that do not guarantee order

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222378#comment-14222378
 ] 

Hive QA commented on HIVE-8936:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682954/HIVE-8936.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1878/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1878/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1878/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-1878/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1641191.

At revision 1641191.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682954 - PreCommit-HIVE-TRUNK-Build

 Add SORT_QUERY_RESULTS for join tests that do not guarantee order
 -

 Key: HIVE-8936
 URL: https://issues.apache.org/jira/browse/HIVE-8936
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chao
Assignee: Chao
Priority: Minor
 Attachments: HIVE-8936.patch


 Since join doesn't impose ordering, we should add {{SORT_QUERY_RESULTS}} for 
 the following tests:
 {noformat}
decimal_join.q
filter_join_breaktask.q
join1.q
join10.q
join11.q
join12.q
join13.q
join14.q
join17.q
join19.q
join2.q
join3.q
join4.q
join5.q
join8.q
join9.q
join_rc.q
join_thrift.q
join_vc.q
louter_join_ppr.q
mapjoin_decimal.q
mapjoin_mapjoin.q
ppd_join.q
ppd_join2.q
ppd_join3.q
ppd_outer_join1.q
ppd_outer_join2.q
ppd_outer_join4.q
router_join_ppr.q
temp_table_join1.q
tez_join_tests.q
tez_joins_explain.q
 {noformat}
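 For reference, the directive is just a comment at the top of a qfile; a 
 minimal sketch (the table and query are illustrative, not from the patch):
 {code}
 -- SORT_QUERY_RESULTS

 -- With the directive above, the test harness sorts the actual results
 -- before diffing them against the expected output, so nondeterministic
 -- row order from the join no longer fails the test.
 SELECT t1.key, t2.value
 FROM src t1 JOIN src t2 ON (t1.key = t2.key);
 {code}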



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8895) bugs in mergejoin

2014-11-23 Thread cw (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cw updated HIVE-8895:
-
Attachment: HIVE-8895.2.patch

Add test case.

 bugs in mergejoin
 -

 Key: HIVE-8895
 URL: https://issues.apache.org/jira/browse/HIVE-8895
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0, 0.14.0, 0.13.1
Reporter: cw
Priority: Minor
  Labels: patch
 Attachments: HIVE-8895.1.patch, HIVE-8895.2.patch


 I got an IndexOutOfBoundsException with a SQL query in Hive 0.13.1, but it 
 runs well on Hive 0.11. Here is an example SQL query that triggers the exception.
 {code}
 create table test_join_1(a string, b string);
 create table test_join_2(a string, b string);
 -- got an IndexOutOfBoundsException error
 explain 
 select * from
 (
 SELECT a a, b b
 FROM test_join_1
 )t1
 join 
 (
 SELECT a a, b b
 FROM test_join_1
 )t2
 on  t1.a = t2.a
 and t1.a = t2.b
 join
 (
 select a from test_join_2
 )t3 on t1.a = t3.a;
 {code}
 And here is part of the stack trace:
 {code}
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.rangeCheck(ArrayList.java:604)
 at java.util.ArrayList.get(ArrayList.java:382)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.mergeJoins(SemanticAnalyzer.java:7403)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.mergeJoinTree(SemanticAnalyzer.java:7616)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8946)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9220)
 ...
 {code}
 But the SQL below runs well.
 {code}
 explain select * from
 (
 SELECT a a, b b
 FROM test_join_1
 )t1
 join 
 (
 SELECT a a, b b
 FROM test_join_1
 )t2
 on  t1.a = t2.a
 and t2.a = t2.b
 join
 (
 select a from test_join_2
 )t3 on t1.a = t3.a;
 {code}
 I didn't quite understand the details of mergejoin, but I noticed that the 
 patch in HIVE-5556 edited SemanticAnalyzer.java with the change below:
 {code}
 -if ((targetCondn == null) || (nodeCondn.size() != targetCondn.size())) {
 -  return -1;
 +if ( targetCondn == null ) {
 +  return new ObjectPair(-1, null);
 +}
 {code}
 Maybe it's a good idea to revert the logic of the 'if' statement to what it was before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8895) bugs in mergejoin

2014-11-23 Thread cw (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cw updated HIVE-8895:
-
Status: Patch Available  (was: Open)

 bugs in mergejoin
 -

 Key: HIVE-8895
 URL: https://issues.apache.org/jira/browse/HIVE-8895
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1, 0.13.0, 0.14.0
Reporter: cw
Priority: Minor
  Labels: patch
 Attachments: HIVE-8895.1.patch, HIVE-8895.2.patch


 I got an IndexOutOfBoundsException with a SQL query in Hive 0.13.1, but it 
 runs well on Hive 0.11. Here is an example SQL query that triggers the exception.
 {code}
 create table test_join_1(a string, b string);
 create table test_join_2(a string, b string);
 -- got an IndexOutOfBoundsException error
 explain 
 select * from
 (
 SELECT a a, b b
 FROM test_join_1
 )t1
 join 
 (
 SELECT a a, b b
 FROM test_join_1
 )t2
 on  t1.a = t2.a
 and t1.a = t2.b
 join
 (
 select a from test_join_2
 )t3 on t1.a = t3.a;
 {code}
 And here is part of the stack trace:
 {code}
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.rangeCheck(ArrayList.java:604)
 at java.util.ArrayList.get(ArrayList.java:382)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.mergeJoins(SemanticAnalyzer.java:7403)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.mergeJoinTree(SemanticAnalyzer.java:7616)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8946)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9220)
 ...
 {code}
 But the SQL below runs well.
 {code}
 explain select * from
 (
 SELECT a a, b b
 FROM test_join_1
 )t1
 join 
 (
 SELECT a a, b b
 FROM test_join_1
 )t2
 on  t1.a = t2.a
 and t2.a = t2.b
 join
 (
 select a from test_join_2
 )t3 on t1.a = t3.a;
 {code}
 I didn't quite understand the details of mergejoin, but I noticed that the 
 patch in HIVE-5556 edited SemanticAnalyzer.java with the change below:
 {code}
 -if ((targetCondn == null) || (nodeCondn.size() != targetCondn.size())) {
 -  return -1;
 +if ( targetCondn == null ) {
 +  return new ObjectPair(-1, null);
 +}
 {code}
 Maybe it's a good idea to revert the logic of the 'if' statement to what it was before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8936) Add SORT_QUERY_RESULTS for join tests that do not guarantee order

2014-11-23 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8936:
--
Attachment: HIVE-8936.patch

This is strange. The patch applies cleanly to the latest trunk; I'm not sure 
why the build is complaining. Reattaching the same patch.

 Add SORT_QUERY_RESULTS for join tests that do not guarantee order
 -

 Key: HIVE-8936
 URL: https://issues.apache.org/jira/browse/HIVE-8936
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chao
Assignee: Chao
Priority: Minor
 Attachments: HIVE-8936.patch, HIVE-8936.patch


 Since join doesn't impose ordering, we should add {{SORT_QUERY_RESULTS}} for 
 the following tests:
 {noformat}
decimal_join.q
filter_join_breaktask.q
join1.q
join10.q
join11.q
join12.q
join13.q
join14.q
join17.q
join19.q
join2.q
join3.q
join4.q
join5.q
join8.q
join9.q
join_rc.q
join_thrift.q
join_vc.q
louter_join_ppr.q
mapjoin_decimal.q
mapjoin_mapjoin.q
ppd_join.q
ppd_join2.q
ppd_join3.q
ppd_outer_join1.q
ppd_outer_join2.q
ppd_outer_join4.q
router_join_ppr.q
temp_table_join1.q
tez_join_tests.q
tez_joins_explain.q
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222390#comment-14222390
 ] 

Xuefu Zhang commented on HIVE-8834:
---

Patch looks good to me. It's fine for now to use a wrapper for job info and 
stage info.

[~vanzin], could you please take a look at the latest patch again? BTW, why 
are these two classes (SparkJobInfo and SparkStageInfo) not declared as 
serializable? Does it make sense to do so? I looked at the class definitions 
and found nothing non-serializable; all members are just numbers or strings.

 enable job progress monitoring of Remote Spark Context [Spark Branch]
 -

 Key: HIVE-8834
 URL: https://issues.apache.org/jira/browse/HIVE-8834
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Rui Li
  Labels: Spark-M3
 Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
 HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch


 We should enable job progress monitoring in Remote Spark Context; the Spark 
 job progress info should fit into SparkJobStatus. SPARK-2321 supplies a new 
 Spark progress API, which should make this task easier.
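 For illustration, a minimal sketch (my own, not the attached patch) of polling 
 that progress API through the Java status tracker; the method names follow the 
 Spark 1.2-era API as I understand it:
 {code}
 import org.apache.spark.SparkJobInfo;
 import org.apache.spark.SparkStageInfo;
 import org.apache.spark.api.java.JavaSparkContext;

 public final class JobProgressPoller {
   public static void printProgress(JavaSparkContext sc) {
     for (int jobId : sc.statusTracker().getActiveJobIds()) {
       SparkJobInfo job = sc.statusTracker().getJobInfo(jobId);
       if (job == null) {
         continue; // job info may already have been cleaned up
       }
       for (int stageId : job.stageIds()) {
         SparkStageInfo stage = sc.statusTracker().getStageInfo(stageId);
         if (stage != null) {
           System.out.println(stage.name() + ": " + stage.numCompletedTasks()
               + "/" + stage.numTasks() + " tasks completed");
         }
       }
     }
   }
 }
 {code}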



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hive-0.14 - Build # 742 - Still Failing

2014-11-23 Thread Apache Jenkins Server
Changes for Build #696
[rohini] PIG-4186: Fix e2e run against new build of pig and some enhancements 
(rohini)


Changes for Build #697

Changes for Build #698

Changes for Build #699

Changes for Build #700

Changes for Build #701

Changes for Build #702

Changes for Build #703
[daijy] HIVE-8484: HCatalog throws an exception if Pig job is of type 'fetch' 
(Lorand Bendig via Daniel Dai)


Changes for Build #704
[gunther] HIVE-8781: Nullsafe joins are busted on Tez (Gunther Hagleitner, 
reviewed by Prasanth J)


Changes for Build #705
[gunther] HIVE-8760: Pass a copy of HiveConf to hooks (Gunther Hagleitner, 
reviewed by Gopal V)


Changes for Build #706
[thejas] HIVE-8772 : zookeeper info logs are always printed from beeline with 
service discovery mode (Thejas Nair, reviewed by Vaibhav Gumashta)


Changes for Build #707
[gunther] HIVE-8782: HBase handler doesn't compile with hadoop-1 (Jimmy Xiang, 
reviewed by Xuefu and Sergey)


Changes for Build #708

Changes for Build #709
[thejas] HIVE-8785 : HiveServer2 LogDivertAppender should be more selective for 
beeline getLogs (Thejas Nair, reviewed by Gopal V)


Changes for Build #710
[vgumashta] HIVE-8764: Windows: HiveServer2 TCP SSL cannot recognize localhost 
(Vaibhav Gumashta reviewed by Thejas Nair)


Changes for Build #711
[gunther] HIVE-8768: CBO: Fix filter selectivity for 'in clause'  '' (Laljo 
John Pullokkaran via Gunther Hagleitner)


Changes for Build #712
[gunther] HIVE-8794: Hive on Tez leaks AMs when killed before first dag is run 
(Gunther Hagleitner, reviewed by Gopal V)


Changes for Build #713
[gunther] HIVE-8798: Some Oracle deadlocks not being caught in TxnHandler (Alan 
Gates via Gunther Hagleitner)


Changes for Build #714
[gunther] HIVE-8800: Update release notes and notice for hive .14 (Gunther 
Hagleitner, reviewed by Prasanth J)

[gunther] HIVE-8799: boatload of missing apache headers (Gunther Hagleitner, 
reviewed by Thejas M Nair)


Changes for Build #715
[gunther] Preparing for release 0.14.0


Changes for Build #716
[gunther] Preparing for release 0.14.0

[gunther] Preparing for release 0.14.0


Changes for Build #717

Changes for Build #718

Changes for Build #719

Changes for Build #720
[gunther] HIVE-8811: Dynamic partition pruning can result in NPE during query 
compilation (Gunther Hagleitner, reviewed by Gopal V)


Changes for Build #721
[gunther] HIVE-8805: CBO skipped due to SemanticException: Line 0:-1 Both left 
and right aliases encountered in JOIN 'avg_cs_ext_discount_amt' (Laljo John 
Pullokkaran via Gunther Hagleitner)

[sershe] HIVE-8715 : Hive 14 upgrade scripts can fail for statistics if 
database was created using auto-create
 ADDENDUM (Sergey Shelukhin, reviewed by Ashutosh Chauhan and Gunther 
Hagleitner)


Changes for Build #722

Changes for Build #723

Changes for Build #724
[gunther] HIVE-8845: Switch to Tez 0.5.2 (Gunther Hagleitner, reviewed by Gopal 
V)


Changes for Build #725
[sershe] HIVE-8295 : Add batch retrieve partition objects for metastore direct 
sql (Selina Zhang and Sergey Shelukhin, reviewed by Ashutosh Chauhan)


Changes for Build #726

Changes for Build #727
[gunther] HIVE-8873: Switch to calcite 0.9.2 (Gunther Hagleitner, reviewed by 
Gopal V)


Changes for Build #728
[thejas] HIVE-8830 : hcatalog process don't exit because of non daemon thread 
(Thejas Nair, reviewed by Eugene Koifman, Sushanth Sowmyan)


Changes for Build #729

Changes for Build #730

Changes for Build #731

Changes for Build #732

Changes for Build #733

Changes for Build #734

Changes for Build #735

Changes for Build #736
[sershe] HIVE-8876 : incorrect upgrade script for Oracle (13-14) (Sergey 
Shelukhin, reviewed by Ashutosh Chauhan)


Changes for Build #737

Changes for Build #738
[gunther] HIVE-: Mapjoin with LateralViewJoin generates wrong plan in Tez 
(Prasanth J via Gunther Hagleitner)


Changes for Build #739

Changes for Build #740
[cws] HIVE-8933. Check release builds for SNAPSHOT dependencies


Changes for Build #741

Changes for Build #742



No tests ran.

The Apache Jenkins build system has built Hive-0.14 (build #742)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-0.14/742/ to view 
the results.

[jira] [Commented] (HIVE-8848) data loading from text files or text file processing doesn't handle nulls correctly

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222419#comment-14222419
 ] 

Hive QA commented on HIVE-8848:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682953/HIVE-8848.01.patch

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 6651 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_gby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_semijoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_simple_select
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_udf_udaf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_union
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_views
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_windowing
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_limit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_semijoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_simple_select
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_union
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_views
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_windowing
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1879/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1879/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1879/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682953 - PreCommit-HIVE-TRUNK-Build

 data loading from text files or text file processing doesn't handle nulls 
 correctly
 ---

 Key: HIVE-8848
 URL: https://issues.apache.org/jira/browse/HIVE-8848
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-8848.01.patch, HIVE-8848.patch


 I am not sure how nulls are supposed to be stored in text tables, but after 
 loading some data with null or NULL strings, or \x00 characters, we get a 
 bunch of annoying logging from LazyPrimitive saying the data is not in INT 
 format and was converted to null, with the data being null (the literal 
 string null, I assume from the code).
 Either the load should store them as nulls, or there should be some defined 
 way to load nulls.
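 For context, a minimal sketch of the existing way to map a string to NULL in 
 text tables (table name illustrative; this is background, not the fix in the 
 attached patch):
 {code}
 -- LazySimpleSerDe treats the string configured via
 -- serialization.null.format (default \N) as SQL NULL when reading.
 ALTER TABLE text_table
 SET SERDEPROPERTIES ('serialization.null.format' = '\N');
 {code}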



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8936) Add SORT_QUERY_RESULTS for join tests that do not guarantee order

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222426#comment-14222426
 ] 

Brock Noland commented on HIVE-8936:


[~xuefuz] the git mirror is two days behind svn and ptest uses svn.

 Add SORT_QUERY_RESULTS for join tests that do not guarantee order
 -

 Key: HIVE-8936
 URL: https://issues.apache.org/jira/browse/HIVE-8936
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chao
Assignee: Chao
Priority: Minor
 Attachments: HIVE-8936.patch, HIVE-8936.patch


 Since join doesn't impose ordering, we should add {{SORT_QUERY_RESULTS}} for 
 the following tests:
 {noformat}
decimal_join.q
filter_join_breaktask.q
join1.q
join10.q
join11.q
join12.q
join13.q
join14.q
join17.q
join19.q
join2.q
join3.q
join4.q
join5.q
join8.q
join9.q
join_rc.q
join_thrift.q
join_vc.q
louter_join_ppr.q
mapjoin_decimal.q
mapjoin_mapjoin.q
ppd_join.q
ppd_join2.q
ppd_join3.q
ppd_outer_join1.q
ppd_outer_join2.q
ppd_outer_join4.q
router_join_ppr.q
temp_table_join1.q
tez_join_tests.q
tez_joins_explain.q
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8909) Hive doesn't correctly read Parquet nested types

2014-11-23 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8909:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

 Hive doesn't correctly read Parquet nested types
 

 Key: HIVE-8909
 URL: https://issues.apache.org/jira/browse/HIVE-8909
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: Ryan Blue
Assignee: Ryan Blue
 Fix For: 0.15.0

 Attachments: HIVE-8909-1.patch, HIVE-8909-2.patch, HIVE-8909.2.patch, 
 HIVE-8909.3.patch, HIVE-8909.4.patch, HIVE-8909.5.patch, HIVE-8909.6.patch, 
 parquet-test-data.tar.gz


 Parquet's Avro and Thrift object models don't produce the same Parquet type 
 representation for lists and maps that Hive does. In the Parquet community, 
 we've defined what should be written, along with backward-compatibility rules 
 for existing data written by parquet-avro and parquet-thrift, in PARQUET-113. 
 We need to implement those rules in the Hive Converter classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8909) Hive doesn't correctly read Parquet nested types

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222435#comment-14222435
 ] 

Brock Noland commented on HIVE-8909:


Thank you so much!! I have committed this to trunk!

 Hive doesn't correctly read Parquet nested types
 

 Key: HIVE-8909
 URL: https://issues.apache.org/jira/browse/HIVE-8909
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.1
Reporter: Ryan Blue
Assignee: Ryan Blue
 Fix For: 0.15.0

 Attachments: HIVE-8909-1.patch, HIVE-8909-2.patch, HIVE-8909.2.patch, 
 HIVE-8909.3.patch, HIVE-8909.4.patch, HIVE-8909.5.patch, HIVE-8909.6.patch, 
 parquet-test-data.tar.gz


 Parquet's Avro and Thrift object models don't produce the same Parquet type 
 representation for lists and maps that Hive does. In the Parquet community, 
 we've defined what should be written, along with backward-compatibility rules 
 for existing data written by parquet-avro and parquet-thrift, in PARQUET-113. 
 We need to implement those rules in the Hive Converter classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8917) HIVE-5679 adds two thread safety problems

2014-11-23 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8917:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Thank you! I have committed this to trunk!

 HIVE-5679 adds two thread safety problems
 -

 Key: HIVE-8917
 URL: https://issues.apache.org/jira/browse/HIVE-8917
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Sergey Shelukhin
 Fix For: 0.15.0

 Attachments: HIVE-8917.patch


 HIVE-5679 adds two static {{SimpleDateFormat}} objects, and 
 {{SimpleDateFormat}} is not thread-safe. These should be converted to thread 
 locals.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8936) Add SORT_QUERY_RESULTS for join tests that do not guarantee order

2014-11-23 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8936:
--
Attachment: HIVE-8936.1.patch

Patch #1 rebased with latest trunk.

 Add SORT_QUERY_RESULTS for join tests that do not guarantee order
 -

 Key: HIVE-8936
 URL: https://issues.apache.org/jira/browse/HIVE-8936
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chao
Assignee: Chao
Priority: Minor
 Attachments: HIVE-8936.1.patch, HIVE-8936.patch, HIVE-8936.patch


 Since join doesn't impose ordering, we should add {{SORT_QUERY_RESULTS}} for 
 the following tests:
 {noformat}
decimal_join.q
filter_join_breaktask.q
join1.q
join10.q
join11.q
join12.q
join13.q
join14.q
join17.q
join19.q
join2.q
join3.q
join4.q
join5.q
join8.q
join9.q
join_rc.q
join_thrift.q
join_vc.q
louter_join_ppr.q
mapjoin_decimal.q
mapjoin_mapjoin.q
ppd_join.q
ppd_join2.q
ppd_join3.q
ppd_outer_join1.q
ppd_outer_join2.q
ppd_outer_join4.q
router_join_ppr.q
temp_table_join1.q
tez_join_tests.q
tez_joins_explain.q
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-8951) Spark remote context doesn't work with local-cluster [Spark Branch]

2014-11-23 Thread Xuefu Zhang (JIRA)
Xuefu Zhang created HIVE-8951:
-

 Summary: Spark remote context doesn't work with local-cluster 
[Spark Branch]
 Key: HIVE-8951
 URL: https://issues.apache.org/jira/browse/HIVE-8951
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang


{code}
14/11/23 10:42:15 INFO Worker: Spark home: /home/xzhang/apache/spark
14/11/23 10:42:15 INFO AppClient$ClientActor: Connecting to master 
spark://xzdt.local:55151...
14/11/23 10:42:15 INFO Master: Registering app Hive on Spark
14/11/23 10:42:15 INFO Master: Registered app Hive on Spark with ID 
app-20141123104215-
14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: Connected to Spark cluster 
with app ID app-20141123104215-
14/11/23 10:42:15 INFO NettyBlockTransferService: Server created on 41676
14/11/23 10:42:15 INFO BlockManagerMaster: Trying to register BlockManager
14/11/23 10:42:15 INFO BlockManagerMasterActor: Registering block manager 
xzdt.local:41676 with 265.0 MB RAM, BlockManagerId(driver, xzdt.local, 41676)
14/11/23 10:42:15 INFO BlockManagerMaster: Registered BlockManager
14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in 
use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:194)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
at 
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1676)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1667)
at 
org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:204)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
at 
org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
at 
org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.init(SparkContext.scala:267)
at 
org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:61)
at 
org.apache.hive.spark.client.RemoteDriver.init(RemoteDriver.java:106)
at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:362)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:353)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
org.eclipse.jetty.server.Server@4c9fd062: java.net.BindException: Address 
already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at 

[jira] [Commented] (HIVE-8951) Spark remote context doesn't work with local-cluster [Spark Branch]

2014-11-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222460#comment-14222460
 ] 

Xuefu Zhang commented on HIVE-8951:
---

[~vanzin], could you please take a look to see if there is a problem with RSC 
or whether I did something obviously wrong? This is going to block HIVE-8795, 
which in turn blocks the completion of the Hive-Spark integration. Thanks.

 Spark remote context doesn't work with local-cluster [Spark Branch]
 ---

 Key: HIVE-8951
 URL: https://issues.apache.org/jira/browse/HIVE-8951
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang

 {code}
 14/11/23 10:42:15 INFO Worker: Spark home: /home/xzhang/apache/spark
 14/11/23 10:42:15 INFO AppClient$ClientActor: Connecting to master 
 spark://xzdt.local:55151...
 14/11/23 10:42:15 INFO Master: Registering app Hive on Spark
 14/11/23 10:42:15 INFO Master: Registered app Hive on Spark with ID 
 app-20141123104215-
 14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: Connected to Spark 
 cluster with app ID app-20141123104215-
 14/11/23 10:42:15 INFO NettyBlockTransferService: Server created on 41676
 14/11/23 10:42:15 INFO BlockManagerMaster: Trying to register BlockManager
 14/11/23 10:42:15 INFO BlockManagerMasterActor: Registering block manager 
 xzdt.local:41676 with 265.0 MB RAM, BlockManagerId(driver, xzdt.local, 
 41676)
 14/11/23 10:42:15 INFO BlockManagerMaster: Registered BlockManager
 14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
 for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
 14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
 SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already 
 in use
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:174)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
   at 
 org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
   at 
 org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
   at 
 org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at org.eclipse.jetty.server.Server.doStart(Server.java:293)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:194)
   at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
   at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
   at 
 org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1676)
   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
   at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1667)
   at 
 org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:204)
   at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
   at 
 org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
   at 
 org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
   at scala.Option.foreach(Option.scala:236)
   at org.apache.spark.SparkContext.init(SparkContext.scala:267)
   at 
 org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:61)
   at 
 org.apache.hive.spark.client.RemoteDriver.init(RemoteDriver.java:106)
   at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:362)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:353)
   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
 org.eclipse.jetty.server.Server@4c9fd062: java.net.BindException: Address 
 already in use
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:174)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
   at 
 

[jira] [Updated] (HIVE-8951) Spark remote context doesn't work with local-cluster [Spark Branch]

2014-11-23 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8951:
--
Description: 
What I did:
{code}

{code}
Exceptions seen:
{code}
14/11/23 10:42:15 INFO Worker: Spark home: /home/xzhang/apache/spark
14/11/23 10:42:15 INFO AppClient$ClientActor: Connecting to master 
spark://xzdt.local:55151...
14/11/23 10:42:15 INFO Master: Registering app Hive on Spark
14/11/23 10:42:15 INFO Master: Registered app Hive on Spark with ID 
app-20141123104215-
14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: Connected to Spark cluster 
with app ID app-20141123104215-
14/11/23 10:42:15 INFO NettyBlockTransferService: Server created on 41676
14/11/23 10:42:15 INFO BlockManagerMaster: Trying to register BlockManager
14/11/23 10:42:15 INFO BlockManagerMasterActor: Registering block manager 
xzdt.local:41676 with 265.0 MB RAM, BlockManagerId(driver, xzdt.local, 41676)
14/11/23 10:42:15 INFO BlockManagerMaster: Registered BlockManager
14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in 
use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:194)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
at 
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1676)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1667)
at 
org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:204)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
at 
org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
at 
org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.init(SparkContext.scala:267)
at 
org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:61)
at 
org.apache.hive.spark.client.RemoteDriver.init(RemoteDriver.java:106)
at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:362)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:353)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
org.eclipse.jetty.server.Server@4c9fd062: java.net.BindException: Address 
already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 

[jira] [Updated] (HIVE-8951) Spark remote context doesn't work with local-cluster [Spark Branch]

2014-11-23 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8951:
--
Description: 
What I did:
{code}
set spark.home=/home/xzhang/apache/spark;
set spark.master=local-cluster[2,1,2048];
set hive.execution.engine=spark; 
set spark.executor.memory=2g;
set spark.serializer=org.apache.spark.serializer.KryoSerializer;
set spark.io.compression.codec=org.apache.spark.io.LZFCompressionCodec;
select name, avg(value) as v from dec group by name order by v;
{code}
Exceptions seen:
{code}
14/11/23 10:42:15 INFO Worker: Spark home: /home/xzhang/apache/spark
14/11/23 10:42:15 INFO AppClient$ClientActor: Connecting to master 
spark://xzdt.local:55151...
14/11/23 10:42:15 INFO Master: Registering app Hive on Spark
14/11/23 10:42:15 INFO Master: Registered app Hive on Spark with ID 
app-20141123104215-
14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: Connected to Spark cluster 
with app ID app-20141123104215-
14/11/23 10:42:15 INFO NettyBlockTransferService: Server created on 41676
14/11/23 10:42:15 INFO BlockManagerMaster: Trying to register BlockManager
14/11/23 10:42:15 INFO BlockManagerMasterActor: Registering block manager 
xzdt.local:41676 with 265.0 MB RAM, BlockManagerId(driver, xzdt.local, 41676)
14/11/23 10:42:15 INFO BlockManagerMaster: Registered BlockManager
14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in 
use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:194)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
at 
org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1676)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1667)
at 
org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:204)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
at 
org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
at 
org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.init(SparkContext.scala:267)
at 
org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:61)
at 
org.apache.hive.spark.client.RemoteDriver.init(RemoteDriver.java:106)
at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:362)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:353)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
org.eclipse.jetty.server.Server@4c9fd062: java.net.BindException: Address 
already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:174)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 

[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Attachment: HIVE-8774.7.patch

I cannot figure out why it cannot be applied with p1; it applies correctly on 
my laptop. Anyway, regenerating it as a p0 patch.

 CBO: enable groupBy index
 -

 Key: HIVE-8774
 URL: https://issues.apache.org/jira/browse/HIVE-8774
 Project: Hive
  Issue Type: Improvement
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
 HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch, HIVE-8774.7.patch


 Right now, even when a groupby index is built, CBO is not able to use it. In 
 this patch, we are trying to make CBO use the groupby index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Open  (was: Patch Available)

 CBO: enable groupBy index
 -

 Key: HIVE-8774
 URL: https://issues.apache.org/jira/browse/HIVE-8774
 Project: Hive
  Issue Type: Improvement
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
 HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch, HIVE-8774.7.patch


 Right now, even when a groupby index is built, CBO is not able to use it. In 
 this patch, we are trying to make CBO use the groupby index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8774) CBO: enable groupBy index

2014-11-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-8774:
--
Status: Patch Available  (was: Open)

 CBO: enable groupBy index
 -

 Key: HIVE-8774
 URL: https://issues.apache.org/jira/browse/HIVE-8774
 Project: Hive
  Issue Type: Improvement
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-8774.1.patch, HIVE-8774.2.patch, HIVE-8774.3.patch, 
 HIVE-8774.4.patch, HIVE-8774.5.patch, HIVE-8774.6.patch, HIVE-8774.7.patch


 Right now, even when a groupby index is built, CBO is not able to use it. In 
 this patch, we are trying to make it use the groupby index that we build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8875) hive.optimize.sort.dynamic.partition should be turned off for ACID

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222478#comment-14222478
 ] 

Hive QA commented on HIVE-8875:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682967/HIVE-8875.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6651 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization_acid
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1881/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1881/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1881/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682967 - PreCommit-HIVE-TRUNK-Build

 hive.optimize.sort.dynamic.partition should be turned off for ACID
 --

 Key: HIVE-8875
 URL: https://issues.apache.org/jira/browse/HIVE-8875
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-8875.patch


 Turning this on causes ACID inserts, updates, and deletes to produce 
 non-optimal plans with extra reduce phases.
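 A minimal sketch of the session-level workaround until the fix lands; the 
 config name comes from the issue title, and its default may differ by release:
 {code}
 -- disable the optimization for ACID workloads in the current session
 set hive.optimize.sort.dynamic.partition=false;
 {code}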



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8944) TestCompactor fails with IncompatibleClassChangeError

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222519#comment-14222519
 ] 

Hive QA commented on HIVE-8944:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12682997/HIVE-8944.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6681 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1883/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1883/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1883/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12682997 - PreCommit-HIVE-TRUNK-Build

 TestCompactor fails with IncompatibleClassChangeError
 -

 Key: HIVE-8944
 URL: https://issues.apache.org/jira/browse/HIVE-8944
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Alan Gates
 Attachments: HIVE-8944.patch


 While investigating long build times, I found this; the test had been 
 running for hours.
 {noformat}
 Exception in thread Thread-185 java.lang.IncompatibleClassChangeError: 
 Found interface org.apache.hadoop.mapred.JobContext, but class was expected
   at 
 org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorOutputCommitter.abortJob(CompactorMR.java:716)
   at 
 org.apache.hadoop.mapred.OutputCommitter.abortJob(OutputCommitter.java:255)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:471)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28147: HIVE-7073:Implement Binary in ParquetSerDe

2014-11-23 Thread Mohit Sabharwal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28147/#review62744
---



data/files/parquet_types.txt
https://reviews.apache.org/r/28147/#comment104827

I think this is a bit confusing, since the 0b prefix gives the impression 
that the data is read in binary format, whereas it is actually read as a 
string.

I think we can either write (preferably non-ASCII) binary data instead (for 
example, see data/files/string.txt) or, alternatively, write it legibly in 
hex, like 68656c6c6f (hello), and convert it to binary using unhex() in the 
INSERT OVERWRITE query. What do you think?



ql/src/test/queries/clientpositive/parquet_types.q
https://reviews.apache.org/r/28147/#comment104828

If we write hex format (like 68656c6c6f) in parquet_types.q, we can just 
use unhex() to convert it to binary:

INSERT OVERWRITE TABLE parquet_types
SELECT cint, ctinyint, csmallint, cfloat, cdouble, cstring1, t, cchar, 
cvarchar, unhex(cbinary), m1, l1, st1 FROM parquet_types_staging;



ql/src/test/queries/clientpositive/parquet_types.q
https://reviews.apache.org/r/28147/#comment104830

Instead of select * from parquet_types..., since the cbinary column may have 
unprintable characters, you can pass it through hex() to make it legible:

SELECT cint, ctinyint, csmallint, cfloat, cdouble, cstring1, t, cchar, 
cvarchar, hex(cbinary), m1, l1, st1 FROM parquet_types;



ql/src/test/queries/clientpositive/parquet_types.q
https://reviews.apache.org/r/28147/#comment104829

No need to unhex here...

Can just be:

 SELECT cchar, LENGTH(cchar), cvarchar, LENGTH(cvarchar), cbinary FROM 
parquet_types
 
Or you can pass it through hex() if the original data has unprintable 
characters:

 SELECT cchar, LENGTH(cchar), cvarchar, LENGTH(cvarchar), hex(cbinary) FROM 
parquet_types
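
For the record, a tiny sketch of the hex/unhex round trip being proposed, 
using the staging table the test already defines (values illustrative):

-- unhex('68656c6c6f') yields the bytes of 'hello'; hex() reverses it
SELECT hex(unhex('68656c6c6f')) FROM parquet_types_staging LIMIT 1;
-- returns '68656C6C6F'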


- Mohit Sabharwal


On Nov. 21, 2014, 8:53 a.m., cheng xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28147/
 ---
 
 (Updated Nov. 21, 2014, 8:53 a.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 This patch includes:
 1. binary support for ParquetHiveSerde
 2. related test cases both in unit and ql test
 
 
 Diffs
 -
 
   data/files/parquet_types.txt d342062 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
  472de8f 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
  d5aae3b 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
 4effe73 
   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestParquetSerDe.java 
 8ac7864 
   ql/src/test/queries/clientpositive/parquet_types.q 22585c3 
   ql/src/test/results/clientpositive/parquet_types.q.out 275897c 
 
 Diff: https://reviews.apache.org/r/28147/diff/
 
 
 Testing
 ---
 
 related UT and QL tests passed
 
 
 Thanks,
 
 cheng xu
 




[jira] [Commented] (HIVE-7073) Implement Binary in ParquetSerDe

2014-11-23 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222535#comment-14222535
 ] 

Mohit Sabharwal commented on HIVE-7073:
---

Thanks [~Ferd]. Left a couple more comments on RB to make the test more readable.

 Implement Binary in ParquetSerDe
 

 Key: HIVE-7073
 URL: https://issues.apache.org/jira/browse/HIVE-7073
 Project: Hive
  Issue Type: Sub-task
Reporter: David Chen
Assignee: Ferdinand Xu
 Attachments: HIVE-7073.1.patch, HIVE-7073.2.patch, HIVE-7073.3.patch, 
 HIVE-7073.patch


 The ParquetSerDe currently does not support the BINARY data type. This ticket 
 is to implement the BINARY data type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8950) Add support in ParquetHiveSerde to create table schema from a parquet file

2014-11-23 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222550#comment-14222550
 ] 

Ashish Kumar Singh commented on HIVE-8950:
--

[~brocknoland], [~rdblue] and [~szehon] you guys might be interested in 
reviewing this.

 Add support in ParquetHiveSerde to create table schema from a parquet file
 --

 Key: HIVE-8950
 URL: https://issues.apache.org/jira/browse/HIVE-8950
 Project: Hive
  Issue Type: Improvement
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: HIVE-8950.patch


 PARQUET-76 and PARQUET-47 ask for creating Parquet-backed tables without 
 having to specify the column names and types. As Parquet files store their 
 schema in the footer, it is possible to generate the Hive schema from a 
 Parquet file's metadata. This will improve the usability of Parquet-backed 
 tables.
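 For illustration, the kind of DDL this would enable; the syntax below is 
 hypothetical and not necessarily what the patch implements:
 {code}
 -- hypothetical: column names and types inferred from the Parquet footer
 CREATE TABLE events
 STORED AS PARQUET
 LOCATION '/data/events';
 {code}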



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8944) TestCompactor fails with IncompatibleClassChangeError

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222556#comment-14222556
 ] 

Brock Noland commented on HIVE-8944:


HIVE-8828 has not yet been reviewed and will likely need at least one change 
given it's a large patch. Let's commit this one first.

 TestCompactor fails with IncompatibleClassChangeError
 -

 Key: HIVE-8944
 URL: https://issues.apache.org/jira/browse/HIVE-8944
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Alan Gates
 Attachments: HIVE-8944.patch


 While investigating long build times, I found this; the test had been 
 running for hours.
 {noformat}
 Exception in thread Thread-185 java.lang.IncompatibleClassChangeError: 
 Found interface org.apache.hadoop.mapred.JobContext, but class was expected
   at 
 org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorOutputCommitter.abortJob(CompactorMR.java:716)
   at 
 org.apache.hadoop.mapred.OutputCommitter.abortJob(OutputCommitter.java:255)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:471)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8828) Remove hadoop 20 shims

2014-11-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-8828:
---
Status: Open  (was: Patch Available)

 Remove hadoop 20 shims
 --

 Key: HIVE-8828
 URL: https://issues.apache.org/jira/browse/HIVE-8828
 Project: Hive
  Issue Type: Task
  Components: Shims
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-8828.1.patch, HIVE-8828.2.patch, HIVE-8828.3.patch, 
 HIVE-8828.4.patch, HIVE-8828.5.patch, HIVE-8828.6.patch, HIVE-8828.patch


 CLEAR LIBRARY CACHE
 See : [mailing list discussion | 
 http://mail-archives.apache.org/mod_mbox/hive-dev/201410.mbox/%3CCABgNGzfSB5VGTecONg0GgLCDdLLFfzLuZvP%2BGSBc0i0joqf3fg%40mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8828) Remove hadoop 20 shims

2014-11-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-8828:
---
Status: Patch Available  (was: Open)

 Remove hadoop 20 shims
 --

 Key: HIVE-8828
 URL: https://issues.apache.org/jira/browse/HIVE-8828
 Project: Hive
  Issue Type: Task
  Components: Shims
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-8828.1.patch, HIVE-8828.2.patch, HIVE-8828.3.patch, 
 HIVE-8828.4.patch, HIVE-8828.5.patch, HIVE-8828.6.patch, HIVE-8828.patch


 CLEAR LIBRARY CACHE
 See : [mailing list discussion | 
 http://mail-archives.apache.org/mod_mbox/hive-dev/201410.mbox/%3CCABgNGzfSB5VGTecONg0GgLCDdLLFfzLuZvP%2BGSBc0i0joqf3fg%40mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-8896) expose (hadoop/tez) job ids in API

2014-11-23 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang resolved HIVE-8896.
---
Resolution: Duplicate

Closing this as a dupe. Feel free to reopen if it in fact is not.

 expose (hadoop/tez) job ids in API
 --

 Key: HIVE-8896
 URL: https://issues.apache.org/jira/browse/HIVE-8896
 Project: Hive
  Issue Type: Improvement
  Components: Clients
Reporter: André Kelpe

 In many cases it would be very useful to be able to map the hadoop/tez jobs 
 back to the query that was executed/is currently being executed. Especially 
 when hive queries are run within a bigger process, the ability to get the job 
 ids and query for counters is very beneficial to projects embedding hive. 
 I saw that cloudera's hue parses the logs produced by hive in order to 
 get to the job ids. That seems rather brittle and can easily break whenever 
 the log format changes. Exposing the job ids in the API would make it a lot 
 easier to build integrations like hue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8828) Remove hadoop 20 shims

2014-11-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-8828:
---
Attachment: HIVE-8828.6.patch

A couple more methods ... 

 Remove hadoop 20 shims
 --

 Key: HIVE-8828
 URL: https://issues.apache.org/jira/browse/HIVE-8828
 Project: Hive
  Issue Type: Task
  Components: Shims
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-8828.1.patch, HIVE-8828.2.patch, HIVE-8828.3.patch, 
 HIVE-8828.4.patch, HIVE-8828.5.patch, HIVE-8828.6.patch, HIVE-8828.patch


 CLEAR LIBRARY CACHE
 See : [mailing list discussion | 
 http://mail-archives.apache.org/mod_mbox/hive-dev/201410.mbox/%3CCABgNGzfSB5VGTecONg0GgLCDdLLFfzLuZvP%2BGSBc0i0joqf3fg%40mail.gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 27895: Remove Hadoop 20 shims

2014-11-23 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27895/
---

(Updated Nov. 24, 2014, 1:10 a.m.)


Review request for hive and Thejas Nair.


Changes
---

ZK related changes.


Bugs: HIVE-8828
https://issues.apache.org/jira/browse/HIVE-8828


Repository: hive-git


Description
---

Remove Hadoop 20 shims


Diffs (updated)
-

  beeline/src/test/org/apache/hive/beeline/ProxyAuthTest.java 95146e9 
  common/src/java/org/apache/hadoop/hive/common/FileUtils.java 95e8d7c 
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java fafd78e 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HiveClientCache.java
 ffa648d 
  hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/Security.java 
39ef86e 
  itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/MiniHiveKdc.java 
9bf5e1f 
  
itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdc.java
 3e46bed 
  
itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestMiniHiveKdc.java 
9d69952 
  
itests/hive-unit-hadoop2/src/test/java/org/apache/hadoop/hive/ql/security/TestStorageBasedMetastoreAuthorizationProviderWithACL.java
 9982195 
  
itests/hive-unit-hadoop2/src/test/java/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java
 b2bdafa 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/StorageBasedMetastoreTestBase.java
 1d8ac24 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestClientSideAuthorizationProvider.java
 f474d83 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestMetastoreAuthorizationProvider.java
 3bde2fc 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/TestStorageBasedMetastoreAuthorizationDrops.java
 c7b27a6 
  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/thrift/TestZooKeeperTokenStore.java
 faa51af 
  itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 3f47749 
  jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java cfac55b 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
0224629 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
fecae97 
  
metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
 0e1fafc 
  metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java 
ef1eee2 
  metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java c99ce5f 
  pom.xml b9b27f9 
  ql/pom.xml fa6c6d9 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 42e1e20 
  ql/src/java/org/apache/hadoop/hive/ql/exec/ArchiveUtils.java f834ad5 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 56fd5a0 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SecureCmdDoAs.java 832f84f 
  ql/src/java/org/apache/hadoop/hive/ql/exec/errors/TaskLogProcessor.java 
12433ca 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 18e40b3 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java d0c022b 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java 0e326cf 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java 
e5fce14 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java 65a0090 
  ql/src/java/org/apache/hadoop/hive/ql/hooks/HookContext.java 260444f 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java 7d0ca50 
  ql/src/java/org/apache/hadoop/hive/ql/io/merge/MergeFileTask.java 4c2843c 
  ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanTask.java 
75e83b8 
  
ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/truncate/ColumnTruncateTask.java
 51a2cc6 
  
ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultAuthenticator.java 
d68d19d 
  ql/src/java/org/apache/hadoop/hive/ql/security/ProxyUserAuthenticator.java 
95a98fe 
  ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 2806bd1 
  ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestTezSessionState.java 
63687eb 
  ql/src/test/queries/clientpositive/alter_numbuckets_partitioned_table.q 
5dda4c0 
  ql/src/test/queries/clientpositive/alter_numbuckets_partitioned_table2.q 
acc028b 
  ql/src/test/queries/clientpositive/alter_numbuckets_partitioned_table2_h23.q 
d814304 
  ql/src/test/queries/clientpositive/alter_numbuckets_partitioned_table_h23.q 
a039925 
  ql/src/test/queries/clientpositive/archive.q a928a81 
  ql/src/test/queries/clientpositive/archive_corrupt.q cc9801d 
  ql/src/test/queries/clientpositive/archive_excludeHadoop20.q 90757f2 
  ql/src/test/queries/clientpositive/auto_join14.q b282fb9 
  ql/src/test/queries/clientpositive/auto_join14_hadoop20.q 235b7c1 
  ql/src/test/queries/clientpositive/combine2.q 615986d 
  ql/src/test/queries/clientpositive/combine2_hadoop20.q 9a9782a 
  ql/src/test/queries/clientpositive/combine2_win.q f6090bb 
  

[jira] [Commented] (HIVE-8948) TestStreaming is flaky

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222567#comment-14222567
 ] 

Hive QA commented on HIVE-8948:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683007/HIVE-8948.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6681 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1884/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1884/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1884/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12683007 - PreCommit-HIVE-TRUNK-Build

 TestStreaming is flaky
 --

 Key: HIVE-8948
 URL: https://issues.apache.org/jira/browse/HIVE-8948
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-8948.patch


 TestStreaming seems to fail in one test or another about 1 in 50 times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-23 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222573#comment-14222573
 ] 

Rui Li commented on HIVE-8834:
--

[~xuefuz] - for the serializable mark, there's some discussion in SPARK-2321.

 enable job progress monitoring of Remote Spark Context [Spark Branch]
 -

 Key: HIVE-8834
 URL: https://issues.apache.org/jira/browse/HIVE-8834
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Rui Li
  Labels: Spark-M3
 Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
 HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch


 We should enable job progress monitoring in Remote Spark Context; the Spark 
 job progress info should fit into SparkJobStatus. SPARK-2321 supplies a new 
 Spark progress API, which should make this task easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-8905) Servlet classes signer information does not match [Spark branch]

2014-11-23 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li reassigned HIVE-8905:
---

Assignee: Chengxiang Li

 Servlet classes signer information does not match [Spark branch] 
 -

 Key: HIVE-8905
 URL: https://issues.apache.org/jira/browse/HIVE-8905
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M3

 {noformat}
 2014-11-18 02:36:04,168 DEBUG spark.HttpFileServer 
 (Logging.scala:logDebug(63)) - HTTP file server started at: 
 http://10.203.137.143:46436
 2014-11-18 02:36:04,172 ERROR session.TestSparkSessionManagerImpl 
 (TestSparkSessionManagerImpl.java:run(127)) - Error executing 'Session thread 
 5'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark 
 client.
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:122)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl$SessionThread.run(TestSparkSessionManagerImpl.java:112)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.SecurityException: class 
 javax.servlet.FilterRegistration's signer information does not match signer 
 information of other classes in the same package
   at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
   at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:136)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:129)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:98)
   at 
 org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:96)
   at 
 org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:87)
   at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:67)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:60)
   at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:49)
   at org.apache.spark.ui.SparkUI.init(SparkUI.scala:60)
   at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:150)
   at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:105)
   at org.apache.spark.SparkContext.init(SparkContext.scala:237)
   at 
 org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:58)
   at 
 org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.init(LocalHiveSparkClient.java:107)
   at 
 org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.getInstance(LocalHiveSparkClient.java:69)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:52)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:53)
   ... 3 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8905) Servlet classes signer information does not match [Spark branch]

2014-11-23 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8905:

Status: Patch Available  (was: Open)

 Servlet classes signer information does not match [Spark branch] 
 -

 Key: HIVE-8905
 URL: https://issues.apache.org/jira/browse/HIVE-8905
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M3
 Attachments: HIVE-8905.1-spark.patch


 {noformat}
 2014-11-18 02:36:04,168 DEBUG spark.HttpFileServer 
 (Logging.scala:logDebug(63)) - HTTP file server started at: 
 http://10.203.137.143:46436
 2014-11-18 02:36:04,172 ERROR session.TestSparkSessionManagerImpl 
 (TestSparkSessionManagerImpl.java:run(127)) - Error executing 'Session thread 
 5'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark 
 client.
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:122)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl$SessionThread.run(TestSparkSessionManagerImpl.java:112)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.SecurityException: class 
 javax.servlet.FilterRegistration's signer information does not match signer 
 information of other classes in the same package
   at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
   at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:136)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:129)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:98)
   at 
 org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:96)
   at 
 org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:87)
   at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:67)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:60)
   at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:49)
   at org.apache.spark.ui.SparkUI.init(SparkUI.scala:60)
   at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:150)
   at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:105)
   at org.apache.spark.SparkContext.init(SparkContext.scala:237)
   at 
 org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:58)
   at 
 org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.init(LocalHiveSparkClient.java:107)
   at 
 org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.getInstance(LocalHiveSparkClient.java:69)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:52)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:53)
   ... 3 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8905) Servlet classes signer information does not match [Spark branch]

2014-11-23 Thread Chengxiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li updated HIVE-8905:

Attachment: HIVE-8905.1-spark.patch

 Servlet classes signer information does not match [Spark branch] 
 -

 Key: HIVE-8905
 URL: https://issues.apache.org/jira/browse/HIVE-8905
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M3
 Attachments: HIVE-8905.1-spark.patch


 {noformat}
 2014-11-18 02:36:04,168 DEBUG spark.HttpFileServer 
 (Logging.scala:logDebug(63)) - HTTP file server started at: 
 http://10.203.137.143:46436
 2014-11-18 02:36:04,172 ERROR session.TestSparkSessionManagerImpl 
 (TestSparkSessionManagerImpl.java:run(127)) - Error executing 'Session thread 
 5'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark 
 client.
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:122)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl$SessionThread.run(TestSparkSessionManagerImpl.java:112)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.SecurityException: class 
 javax.servlet.FilterRegistration's signer information does not match signer 
 information of other classes in the same package
   at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
   at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:136)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:129)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:98)
   at 
 org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:96)
   at 
 org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:87)
   at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:67)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:60)
   at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:49)
   at org.apache.spark.ui.SparkUI.init(SparkUI.scala:60)
   at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:150)
   at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:105)
   at org.apache.spark.SparkContext.init(SparkContext.scala:237)
   at 
 org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:58)
   at 
 org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.init(LocalHiveSparkClient.java:107)
   at 
 org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.getInstance(LocalHiveSparkClient.java:69)
   at 
 org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:52)
   at 
 org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:53)
   ... 3 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-23 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-8834:
-
Attachment: HIVE-8834.5-spark.patch

Sorry, I missed some commits in the last patch; updated it.

 enable job progress monitoring of Remote Spark Context [Spark Branch]
 -

 Key: HIVE-8834
 URL: https://issues.apache.org/jira/browse/HIVE-8834
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Rui Li
  Labels: Spark-M3
 Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
 HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch, HIVE-8834.5-spark.patch


 We should enable job progress monitoring in Remote Spark Context; the Spark 
 job progress info should fit into SparkJobStatus. SPARK-2321 supplies a new 
 Spark progress API, which should make this task easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8951) Spark remote context doesn't work with local-cluster [Spark Branch]

2014-11-23 Thread yuemeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222583#comment-14222583
 ] 

yuemeng commented on HIVE-8951:
---

Hi, when I run a SQL query to test Hive on Spark, it seems it can't switch to 
the Spark engine. Does Hive on Spark work well now?

 Spark remote context doesn't work with local-cluster [Spark Branch]
 ---

 Key: HIVE-8951
 URL: https://issues.apache.org/jira/browse/HIVE-8951
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang

 What I did:
 {code}
 set spark.home=/home/xzhang/apache/spark;
 set spark.master=local-cluster[2,1,2048];
 set hive.execution.engine=spark; 
 set spark.executor.memory=2g;
 set spark.serializer=org.apache.spark.serializer.KryoSerializer;
 set spark.io.compression.codec=org.apache.spark.io.LZFCompressionCodec;
 select name, avg(value) as v from dec group by name order by v;
 {code}
 Exceptions seen:
 {code}
 14/11/23 10:42:15 INFO Worker: Spark home: /home/xzhang/apache/spark
 14/11/23 10:42:15 INFO AppClient$ClientActor: Connecting to master 
 spark://xzdt.local:55151...
 14/11/23 10:42:15 INFO Master: Registering app Hive on Spark
 14/11/23 10:42:15 INFO Master: Registered app Hive on Spark with ID 
 app-20141123104215-
 14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: Connected to Spark 
 cluster with app ID app-20141123104215-
 14/11/23 10:42:15 INFO NettyBlockTransferService: Server created on 41676
 14/11/23 10:42:15 INFO BlockManagerMaster: Trying to register BlockManager
 14/11/23 10:42:15 INFO BlockManagerMasterActor: Registering block manager 
 xzdt.local:41676 with 265.0 MB RAM, BlockManagerId(driver, xzdt.local, 
 41676)
 14/11/23 10:42:15 INFO BlockManagerMaster: Registered BlockManager
 14/11/23 10:42:15 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
 for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
 14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
 SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already 
 in use
 java.net.BindException: Address already in use
   at sun.nio.ch.Net.bind0(Native Method)
   at sun.nio.ch.Net.bind(Net.java:174)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
   at 
 org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
   at 
 org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
   at 
 org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at org.eclipse.jetty.server.Server.doStart(Server.java:293)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:194)
   at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
   at org.apache.spark.ui.JettyUtils$$anonfun$2.apply(JettyUtils.scala:204)
   at 
 org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1676)
   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
   at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1667)
   at 
 org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:204)
   at org.apache.spark.ui.WebUI.bind(WebUI.scala:102)
   at 
 org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
   at 
 org.apache.spark.SparkContext$$anonfun$10.apply(SparkContext.scala:267)
   at scala.Option.foreach(Option.scala:236)
   at org.apache.spark.SparkContext.init(SparkContext.scala:267)
   at 
 org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:61)
   at 
 org.apache.hive.spark.client.RemoteDriver.init(RemoteDriver.java:106)
   at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:362)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:353)
   at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
   at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 14/11/23 10:42:20 WARN AbstractLifeCycle: FAILED 
 org.eclipse.jetty.server.Server@4c9fd062: java.net.BindException: Address 
 already in use
 

[jira] [Commented] (HIVE-7292) Hive on Spark

2014-11-23 Thread yuemeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222599#comment-14222599
 ] 

yuemeng commented on HIVE-7292:
---

I am very interested in Hive on Spark and tried to use it. When I built it 
(downloaded from https://github.com/apache/hive.git, spark branch) with the 
Maven command mvn package -DskipTests -Phadoop-2 -Pdist, it gave me errors 
like: 
[ERROR] 
/home/ym/hive-on-spark/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkJobStatus.java:[22,24]
 cannot find symbol
[ERROR] symbol:   class JobExecutionStatus
[ERROR] location: package org.apache.spark
[ERROR] 
/home/ym/hive-on-spark/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkJobStatus.java:[33,10]
 cannot find symbol
[ERROR] symbol:   class JobExecutionStatus
[ERROR] location: interface 
org.apache.hadoop.hive.ql.exec.spark.status.SparkJobStatus
[ERROR] 
/home/ym/hive-on-spark/hive/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkJobMonitor.java:[31,24]
 cannot find symbol
[ERROR] symbol:   class JobExecutionStatus
Can you tell me why?

 Hive on Spark
 -

 Key: HIVE-7292
 URL: https://issues.apache.org/jira/browse/HIVE-7292
 Project: Hive
  Issue Type: Improvement
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
  Labels: Spark-M1, Spark-M2, Spark-M3, Spark-M4, Spark-M5
 Attachments: Hive-on-Spark.pdf


 Spark as an open-source data analytics cluster computing framework has gained 
 significant momentum recently. Many Hive users already have Spark installed 
 as their computing backbone. To take advantage of Hive, they still need to 
 have either MapReduce or Tez on their cluster. This initiative will provide 
 users a new alternative so that they can consolidate their backend. 
 Secondly, providing such an alternative further increases Hive's adoption, as 
 it exposes Spark users to a viable, feature-rich, de facto standard SQL tool 
 on Hadoop.
 Finally, allowing Hive to run on Spark also has performance benefits. Hive 
 queries, especially those involving multiple reducer stages, will run faster, 
 thus improving the user experience as Tez does.
 This is an umbrella JIRA which will cover many coming subtasks. The design doc 
 will be attached here shortly, and will be on the wiki as well. Feedback from 
 the community is greatly appreciated!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-7292) Hive on Spark

2014-11-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222600#comment-14222600
 ] 

Xuefu Zhang commented on HIVE-7292:
---

[~yuemeng], you can try removing the org/apache/spark folder in your local 
Maven repo to see if that fixes it.

 Hive on Spark
 -

 Key: HIVE-7292
 URL: https://issues.apache.org/jira/browse/HIVE-7292
 Project: Hive
  Issue Type: Improvement
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
  Labels: Spark-M1, Spark-M2, Spark-M3, Spark-M4, Spark-M5
 Attachments: Hive-on-Spark.pdf


 Spark as an open-source data analytics cluster computing framework has gained 
 significant momentum recently. Many Hive users already have Spark installed 
 as their computing backbone. To take advantage of Hive, they still need to 
 have either MapReduce or Tez on their cluster. This initiative will provide 
 users a new alternative so that they can consolidate their backend. 
 Secondly, providing such an alternative further increases Hive's adoption, as 
 it exposes Spark users to a viable, feature-rich, de facto standard SQL tool 
 on Hadoop.
 Finally, allowing Hive to run on Spark also has performance benefits. Hive 
 queries, especially those involving multiple reducer stages, will run faster, 
 thus improving the user experience as Tez does.
 This is an umbrella JIRA which will cover many coming subtasks. The design doc 
 will be attached here shortly, and will be on the wiki as well. Feedback from 
 the community is greatly appreciated!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8944) TestCompactor fails with IncompatibleClassChangeError

2014-11-23 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-8944:
---
   Resolution: Fixed
Fix Version/s: 0.15.0
   Status: Resolved  (was: Patch Available)

Thanks Alan! I have committed this to trunk!

 TestCompactor fails with IncompatibleClassChangeError
 -

 Key: HIVE-8944
 URL: https://issues.apache.org/jira/browse/HIVE-8944
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Alan Gates
 Fix For: 0.15.0

 Attachments: HIVE-8944.patch


 While investigating long build times, I found this; the test had been 
 running for hours.
 {noformat}
 Exception in thread Thread-185 java.lang.IncompatibleClassChangeError: 
 Found interface org.apache.hadoop.mapred.JobContext, but class was expected
   at 
 org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorOutputCommitter.abortJob(CompactorMR.java:716)
   at 
 org.apache.hadoop.mapred.OutputCommitter.abortJob(OutputCommitter.java:255)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:471)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8374) schematool fails on Postgres versions < 9.2

2014-11-23 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222608#comment-14222608
 ] 

Gunther Hagleitner commented on HIVE-8374:
--

Yes, I think this should be backported to 0.14.1. [~sershe] can you do the 
honors?

 schematool fails on Postgres versions < 9.2
 ---

 Key: HIVE-8374
 URL: https://issues.apache.org/jira/browse/HIVE-8374
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Reporter: Mohit Sabharwal
Assignee: Mohit Sabharwal
 Fix For: 0.15.0

 Attachments: HIVE-8374.1.patch, HIVE-8374.2.patch, HIVE-8374.3.patch, 
 HIVE-8374.patch


 The upgrade script for HIVE-5700 creates a UDF with language 'plpgsql',
 which is available by default only for Postgres 9.2+.
 For older Postgres versions, the language must be explicitly created,
 otherwise schematool fails with the error:
 {code}
 Error: ERROR: language plpgsql does not exist
   Hint: Use CREATE LANGUAGE to load the language into the database. 
 (state=42704,code=0)
 {code}
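 A minimal sketch of the manual workaround on Postgres < 9.2: create the 
 language in the metastore database (superuser privileges assumed) before 
 running schematool:
 {code}
 CREATE LANGUAGE plpgsql;
 {code}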



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8099) IN operator for partition column fails when the partition column type is DATE

2014-11-23 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222612#comment-14222612
 ] 

Gunther Hagleitner commented on HIVE-8099:
--

[~ashutoshc] backport to .14?

 IN operator for partition column fails when the partition column type is DATE
 -

 Key: HIVE-8099
 URL: https://issues.apache.org/jira/browse/HIVE-8099
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0, 0.13.1
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.15.0

 Attachments: HIVE-8099-ppr-fix.patch, HIVE-8099.1.patch, 
 HIVE-8099.2.patch, HIVE-8099.3.patch, HIVE-8099.4.patch


 Test table DLL:
 {code}
 CREATE TABLE testTbl(col1 string) PARTITIONED BY (date_prt date);
 {code}
 The following query used to work fine in Hive 0.12, as the constant types are 
 'string' and the partition column type is considered 'string' throughout 
 planning and optimization (including partition pruning).
 {code}
 SELECT * FROM testTbl WHERE date_prt IN ('2014-08-09', '2014-08-08'); 
 {code}
 In trunk the above query fails with:
 {code}
 Line 1:33 Wrong arguments ''2014-08-08'': The arguments for IN should be the 
 same type! Types are: {date IN (string, string)}
 {code}
 HIVE-6642 changed SemanticAnalyzer.java to consider the partition type given 
 in the table definition instead of the hardcoded 'string' type (modified [Hive 0.12 
 code|https://github.com/apache/hive/blob/branch-0.12/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L7778]).
  So we changed the query as follows to get past the above error:
 {code}
 SELECT * FROM testTbl WHERE date_prt IN (CAST('2014-08-09' AS DATE), 
 CAST('2014-08-08' AS DATE)); 
 {code}
 Now the query goes past the error in SemanticAnalyzer, but hits the same issue 
 (the default 'string' type for partition columns) in the partition pruning 
 optimization (related code 
 [here|https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartExprEvalUtils.java#L110]).
  
 {code}
 14/09/14 20:07:20 ERROR ql.Driver: FAILED: SemanticException 
 MetaException(message:The arguments for IN should be the same type! Types 
 are: {string IN (date, date)})
 {code}
 We need to change the partition pruning code to consider the partition column 
 as the type given in the table definition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5664) Drop cascade database fails when the db has any tables with indexes

2014-11-23 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222613#comment-14222613
 ] 

Gunther Hagleitner commented on HIVE-5664:
--

[~ashutoshc] back port to .14?

 Drop cascade database fails when the db has any tables with indexes
 ---

 Key: HIVE-5664
 URL: https://issues.apache.org/jira/browse/HIVE-5664
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.14.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.15.0

 Attachments: HIVE-5664.1.patch.txt, HIVE-5664.2.patch.txt, 
 HIVE-5664.3.patch.txt


 {code}
 CREATE DATABASE db2; 
 USE db2; 
 CREATE TABLE tab1 (id int, name string); 
 CREATE INDEX idx1 ON TABLE tab1(id) as 'COMPACT' with DEFERRED REBUILD IN 
 TABLE tab1_indx; 
 DROP DATABASE db2 CASCADE;
 {code}
 The last DDL fails with the following error:
 {code}
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. Database does not exist: db2
 Hive.log has following exception
 2013-10-27 20:46:16,629 ERROR exec.DDLTask (DDLTask.java:execute(434)) - 
 org.apache.hadoop.hive.ql.metadata.HiveException: Database does not exist: db2
 at 
 org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3473)
 at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:231)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1441)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1219)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1047)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:915)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: NoSuchObjectException(message:db2.tab1_indx table not found)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1376)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:103)
 at com.sun.proxy.$Proxy7.get_table(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:890)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:660)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:652)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
 at com.sun.proxy.$Proxy8.dropDatabase(Unknown Source)
 at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:284)
 at 
 org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:3470)
 ... 18 more
 {code}
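 Until this is fixed, a workaround sketch is to drop the index explicitly 
 before the cascade drop:
 {code}
 USE db2;
 DROP INDEX idx1 ON tab1;
 DROP DATABASE db2 CASCADE;
 {code}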



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5631) Index creation on a skew table fails

2014-11-23 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222614#comment-14222614
 ] 

Gunther Hagleitner commented on HIVE-5631:
--

[~ashutoshc] sounds like it'd be good to port this to .14 too.

 Index creation on a skew table fails
 

 Key: HIVE-5631
 URL: https://issues.apache.org/jira/browse/HIVE-5631
 Project: Hive
  Issue Type: Bug
  Components: Indexing
Affects Versions: 0.12.0, 0.13.0, 0.14.0
Reporter: Venki Korukanti
Assignee: Venki Korukanti
 Fix For: 0.15.0

 Attachments: HIVE-5631.1.patch.txt, HIVE-5631.2.patch.txt, 
 HIVE-5631.3.patch.txt, HIVE-5631.4.patch.txt, HIVE-5631.5.patch.txt


 REPRO STEPS:
 create database skewtest;
 use skewtest;
 create table skew (id bigint, acct string) skewed by (acct) on ('CC','CH');
 create index skew_indx on table skew (id) as 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 The last DDL fails with the following error.
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask. 
 InvalidObjectException(message:Invalid skew column [acct])
 When creating a table, Hive has sanity tests to make sure the columns have 
 proper names and the skewed columns are subset of the table columns. Here we 
 fail because index table has skewed column info. Index tables's skewed 
 columns include {acct} and the columns are {id, _bucketname, _offsets}. As 
 the skewed column {acct} is not part of the table columns Hive throws the 
 exception.
 The reason why Index table got skewed column info even though its definition 
 has no such info is: When creating the index table a deep copy of the base 
 table's StorageDescriptor (SD) (in this case 'skew') is made. And in that 
 copied SD, index specific parameters are set and unrelated parameters are 
 reset. Here skewed column info is not reset (there are few other params that 
 are not reset). That's why the index table contains the skewed column info.
 Fix: Instead of deep copying the base table StorageDescriptor, create a new 
 one from gathered info. This way it avoids the index table to inherit 
 unnecessary properties in SD from base table.
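 For illustration, a minimal sketch of that fix, assuming the Thrift-generated 
 metastore API classes; the helper class and the variable names (baseSd, 
 indexCols) are hypothetical, not the attached patch:
 {code}
 import java.util.List;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.StorageDescriptor;

 public class IndexSdBuilder {
   // Build the index table's SD from scratch so it only carries the fields we
   // explicitly copy; the base table's skewed-column info is never inherited.
   static StorageDescriptor buildIndexSd(StorageDescriptor baseSd,
                                         List<FieldSchema> indexCols) {
     StorageDescriptor sd = new StorageDescriptor();
     sd.setCols(indexCols);                    // e.g. {id, _bucketname, _offsets}
     sd.setInputFormat(baseSd.getInputFormat());
     sd.setOutputFormat(baseSd.getOutputFormat());
     sd.setSerdeInfo(baseSd.getSerdeInfo());
     // Deliberately no setSkewedInfo(...): the index table is not skewed.
     return sd;
   }
 }
 {code}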



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8905) Servlet classes signer information does not match [Spark branch]

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222617#comment-14222617
 ] 

Hive QA commented on HIVE-8905:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683252/HIVE-8905.1-spark.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7182 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/411/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/411/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-411/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12683252 - PreCommit-HIVE-SPARK-Build

 Servlet classes signer information does not match [Spark branch] 
 -

 Key: HIVE-8905
 URL: https://issues.apache.org/jira/browse/HIVE-8905
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M3
 Attachments: HIVE-8905.1-spark.patch


 {noformat}
 2014-11-18 02:36:04,168 DEBUG spark.HttpFileServer (Logging.scala:logDebug(63)) - HTTP file server started at: http://10.203.137.143:46436
 2014-11-18 02:36:04,172 ERROR session.TestSparkSessionManagerImpl (TestSparkSessionManagerImpl.java:run(127)) - Error executing 'Session thread 5'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:122)
   at org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl$SessionThread.run(TestSparkSessionManagerImpl.java:112)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.SecurityException: class javax.servlet.FilterRegistration's signer information does not match signer information of other classes in the same package
   at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
   at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:136)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:129)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:98)
   at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:96)
   at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:87)
   at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:67)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:60)
   at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:49)
   at org.apache.spark.ui.SparkUI.init(SparkUI.scala:60)
   at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:150)
   at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:105)
   at org.apache.spark.SparkContext.init(SparkContext.scala:237)
   at 
 

[jira] [Commented] (HIVE-8905) Servlet classes signer information does not match [Spark branch]

2014-11-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222621#comment-14222621
 ] 

Xuefu Zhang commented on HIVE-8905:
---

+1

 Servlet classes signer information does not match [Spark branch] 
 -

 Key: HIVE-8905
 URL: https://issues.apache.org/jira/browse/HIVE-8905
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M3
 Attachments: HIVE-8905.1-spark.patch


 {noformat}
 2014-11-18 02:36:04,168 DEBUG spark.HttpFileServer (Logging.scala:logDebug(63)) - HTTP file server started at: http://10.203.137.143:46436
 2014-11-18 02:36:04,172 ERROR session.TestSparkSessionManagerImpl (TestSparkSessionManagerImpl.java:run(127)) - Error executing 'Session thread 5'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:122)
   at org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl$SessionThread.run(TestSparkSessionManagerImpl.java:112)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.SecurityException: class javax.servlet.FilterRegistration's signer information does not match signer information of other classes in the same package
   at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
   at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:136)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:129)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:98)
   at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:96)
   at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:87)
   at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:67)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:60)
   at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:49)
   at org.apache.spark.ui.SparkUI.init(SparkUI.scala:60)
   at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:150)
   at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:105)
   at org.apache.spark.SparkContext.init(SparkContext.scala:237)
   at org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:58)
   at org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.init(LocalHiveSparkClient.java:107)
   at org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.getInstance(LocalHiveSparkClient.java:69)
   at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:52)
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:53)
   ... 3 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6914) parquet-hive cannot write nested map (map value is map)

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222623#comment-14222623
 ] 

Hive QA commented on HIVE-6914:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683010/HIVE-6914.3.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6682 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parquet_map_of_maps
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1885/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1885/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1885/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12683010 - PreCommit-HIVE-TRUNK-Build

 parquet-hive cannot write nested map (map value is map)
 ---

 Key: HIVE-6914
 URL: https://issues.apache.org/jira/browse/HIVE-6914
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.12.0, 0.13.0
Reporter: Tongjie Chen
Assignee: Sergio Peña
  Labels: parquet, serialization
 Attachments: HIVE-6914.1.patch, HIVE-6914.1.patch, HIVE-6914.2.patch, 
 HIVE-6914.3.patch, NestedMap.parquet


 // table schema (identical for both plain text version and parquet version)
 hive> desc text_mmap;
 m map<string,map<string,string>>
 // sample nested map entry
 {level1:{level2_key1:value1,level2_key2:value2}}
 The following query will fail:
 insert overwrite table parquet_mmap select * from text_mmap;
 Caused by: parquet.io.ParquetEncodingException: This should be an ArrayWritable or MapWritable: org.apache.hadoop.hive.ql.io.parquet.writable.BinaryWritable@f2f8106
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:85)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:118)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:80)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:82)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:55)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
 at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:115)
 at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
 at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
 at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:77)
 at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:90)
 at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:622)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
 ... 9 more
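 For context, a standalone sketch (not the attached patch) of the recursive 
 dispatch the writer needs: a map value that is itself a map must be recursed 
 into rather than handed to the primitive path that raised the exception above.
 {code}
 import java.util.Map;
 import org.apache.hadoop.io.ArrayWritable;
 import org.apache.hadoop.io.MapWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;

 public class NestedWritableWalker {
   // Dispatch on the runtime container type and recurse; only true leaves
   // (Text, IntWritable, ...) fall through to the primitive branch.
   static void writeValue(Writable value) {
     if (value instanceof MapWritable) {
       for (Map.Entry<Writable, Writable> e : ((MapWritable) value).entrySet()) {
         writeValue(e.getKey());
         writeValue(e.getValue()); // the value may itself be a map
       }
     } else if (value instanceof ArrayWritable) {
       for (Writable element : ((ArrayWritable) value).get()) {
         writeValue(element);
       }
     } else {
       System.out.println("leaf: " + value);
     }
   }

   public static void main(String[] args) {
     MapWritable inner = new MapWritable();
     inner.put(new Text("level2_key1"), new Text("value1"));
     MapWritable outer = new MapWritable();
     outer.put(new Text("level1"), inner);
     writeValue(outer); // prints each key and leaf value of the nested map
   }
 }
 {code}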



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8905) Servlet classes signer information does not match [Spark branch]

2014-11-23 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-8905:
--
   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

Patch committed to Spark branch. Thanks, Chengxiang.

 Servlet classes signer information does not match [Spark branch] 
 -

 Key: HIVE-8905
 URL: https://issues.apache.org/jira/browse/HIVE-8905
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Chengxiang Li
  Labels: Spark-M3
 Fix For: spark-branch

 Attachments: HIVE-8905.1-spark.patch


 {noformat}
 2014-11-18 02:36:04,168 DEBUG spark.HttpFileServer (Logging.scala:logDebug(63)) - HTTP file server started at: http://10.203.137.143:46436
 2014-11-18 02:36:04,172 ERROR session.TestSparkSessionManagerImpl (TestSparkSessionManagerImpl.java:run(127)) - Error executing 'Session thread 5'
 org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:55)
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:122)
   at org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl$SessionThread.run(TestSparkSessionManagerImpl.java:112)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: java.lang.SecurityException: class javax.servlet.FilterRegistration's signer information does not match signer information of other classes in the same package
   at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
   at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:136)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:129)
   at org.eclipse.jetty.servlet.ServletContextHandler.init(ServletContextHandler.java:98)
   at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:96)
   at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:87)
   at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:67)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:60)
   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:60)
   at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:49)
   at org.apache.spark.ui.SparkUI.init(SparkUI.scala:60)
   at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:150)
   at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:105)
   at org.apache.spark.SparkContext.init(SparkContext.scala:237)
   at org.apache.spark.api.java.JavaSparkContext.init(JavaSparkContext.scala:58)
   at org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.init(LocalHiveSparkClient.java:107)
   at org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.getInstance(LocalHiveSparkClient.java:69)
   at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:52)
   at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:53)
   ... 3 more
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8828) Remove hadoop 20 shims

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222625#comment-14222625
 ] 

Hive QA commented on HIVE-8828:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683249/HIVE-8828.6.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1886/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1886/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1886/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-1886/source-prep.txt
+ [[ true == \t\r\u\e ]]
+ rm -rf ivy maven
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target shims/scheduler/target 
packaging/target hbase-handler/target testutils/target jdbc/target 
metastore/target itests/target itests/hcatalog-unit/target 
itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target 
itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/core/target 
hcatalog/streaming/target hcatalog/server-extensions/target 
hcatalog/hcatalog-pig-adapter/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target accumulo-handler/target hwi/target 
common/target common/src/gen contrib/target service/target serde/target 
beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target 
ql/src/test/results/clientpositive/parquet_map_of_maps.q.out 
ql/src/test/queries/clientpositive/parquet_map_of_maps.q
+ svn update
U    shims/0.20S/src/main/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
U    shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
U    shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java
U    shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
U    ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1641312.

Updated to revision 1641312.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12683249 - PreCommit-HIVE-TRUNK-Build

 Remove hadoop 20 shims
 --

 Key: HIVE-8828
 URL: https://issues.apache.org/jira/browse/HIVE-8828
 Project: Hive
  Issue Type: Task
  Components: Shims
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-8828.1.patch, HIVE-8828.2.patch, HIVE-8828.3.patch, 
 HIVE-8828.4.patch, HIVE-8828.5.patch, HIVE-8828.6.patch, HIVE-8828.patch


 CLEAR LIBRARY CACHE
 See : [mailing list discussion | 
 

[jira] [Created] (HIVE-8952) Exception handling should be improved in DataWritableWriter

2014-11-23 Thread Brock Noland (JIRA)
Brock Noland created HIVE-8952:
--

 Summary: Exception handling should be improved in 
DataWritableWriter
 Key: HIVE-8952
 URL: https://issues.apache.org/jira/browse/HIVE-8952
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland


1) In the {{write(final ArrayWritable record)}} method we should log the 
stack trace as well as the message.

2) We should pass the caught exception into the arguments of the 
RuntimeException.

3) Instead of throwing RuntimeException we should throw 
IllegalArgumentException, since it carries some additional semantics (i.e. the 
argument to this method was the problem); see the sketch below.
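
A sketch of all three changes together (hedged: {{LOG}} and {{writeData}} are 
placeholder names here, not necessarily the class's actual members):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.ArrayWritable;

public class SafeWriter {
  private static final Log LOG = LogFactory.getLog(SafeWriter.class);

  public void write(final ArrayWritable record) {
    try {
      writeData(record);
    } catch (RuntimeException e) {
      // 1) Log the stack trace, not just the message.
      LOG.error("Failed to write record: " + e.getMessage(), e);
      // 2) Chain the caught exception as the cause, and
      // 3) throw IllegalArgumentException: the record argument is the problem.
      throw new IllegalArgumentException("Could not write record " + record, e);
    }
  }

  private void writeData(ArrayWritable record) {
    // placeholder for the actual Parquet serialization path
  }
}
{code}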



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6914) parquet-hive cannot write nested map (map value is map)

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222631#comment-14222631
 ] 

Brock Noland commented on HIVE-6914:


I committed the {{NestedMap.parquet}} file but it looks like the {{.q.out}} 
file needs to be updated due to an additional drop table.

 parquet-hive cannot write nested map (map value is map)
 ---

 Key: HIVE-6914
 URL: https://issues.apache.org/jira/browse/HIVE-6914
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.12.0, 0.13.0
Reporter: Tongjie Chen
Assignee: Sergio Peña
  Labels: parquet, serialization
 Attachments: HIVE-6914.1.patch, HIVE-6914.1.patch, HIVE-6914.2.patch, 
 HIVE-6914.3.patch, NestedMap.parquet


 // table schema (identical for both plain text version and parquet version)
 hive> desc text_mmap;
 m map<string,map<string,string>>
 // sample nested map entry
 {level1:{level2_key1:value1,level2_key2:value2}}
 The following query will fail:
 insert overwrite table parquet_mmap select * from text_mmap;
 Caused by: parquet.io.ParquetEncodingException: This should be an ArrayWritable or MapWritable: org.apache.hadoop.hive.ql.io.parquet.writable.BinaryWritable@f2f8106
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:85)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:118)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:80)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeData(DataWritableWriter.java:82)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:55)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
 at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
 at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:115)
 at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
 at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
 at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:77)
 at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:90)
 at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:622)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
 at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
 ... 9 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28147: HIVE-7073:Implement Binary in ParquetSerDe

2014-11-23 Thread cheng xu


 On Nov. 23, 2014, 10:59 p.m., Mohit Sabharwal wrote:
  data/files/parquet_types.txt, lines 1-3
  https://reviews.apache.org/r/28147/diff/3/?file=772138#file772138line1
 
  I think this is bit confusing, since the 0b prefix gives the impression 
  that data is read in binary format, whereas it is actually getting read as 
  a string.
  
  I think we can either write (preferably non-ascii) binary data instead 
  (for example, see: data/files/string.txt) OR alternatively, we could write 
  it legibly in hex, like 68656c6c6f (hello) and convert it to binary using 
  unhex() in the INSERT OVERWRITE query. What do you think ?

I encoded some Chinese words (non-ASCII) and used the hex function to convert 
them into a string like B4F3CAFDBEDD (some Chinese words).


 On Nov. 23, 2014, 10:59 p.m., Mohit Sabharwal wrote:
  ql/src/test/queries/clientpositive/parquet_types.q, line 48
  https://reviews.apache.org/r/28147/diff/3/?file=772143#file772143line48
 
  No need to unhex here...
  
  Can just be:
  
   SELECT cchar, LENGTH(cchar), cvarchar, LENGTH(cvarchar), cbinary FROM 
  parquet_types
   
  Or you can pass it through hex() if original data has unprintable 
  characters:
  
   SELECT cchar, LENGTH(cchar), cvarchar, LENGTH(cvarchar), hex(cbinary) 
  FROM parquet_types

I think the statement SELECT cint, ctinyint, csmallint, cfloat, cdouble, 
cstring1, t, cchar, cvarchar, hex(cbinary), m1, l1, st1 FROM parquet_types; 
covers the binary case, so there is no need to check cbinary again.


- cheng


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28147/#review62744
---


On Nov. 21, 2014, 8:53 a.m., cheng xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28147/
 ---
 
 (Updated Nov. 21, 2014, 8:53 a.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 This patch includes:
 1. binary support for ParquetHiveSerde
 2. related test cases both in unit and ql test
 
 
 Diffs
 -
 
   data/files/parquet_types.txt d342062 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
  472de8f 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
  d5aae3b 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
 4effe73 
   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestParquetSerDe.java 
 8ac7864 
   ql/src/test/queries/clientpositive/parquet_types.q 22585c3 
   ql/src/test/results/clientpositive/parquet_types.q.out 275897c 
 
 Diff: https://reviews.apache.org/r/28147/diff/
 
 
 Testing
 ---
 
 related UT and QL tests passed
 
 
 Thanks,
 
 cheng xu
 




Re: Review Request 28147: HIVE-7073:Implement Binary in ParquetSerDe

2014-11-23 Thread cheng xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28147/
---

(Updated Nov. 24, 2014, 4:16 a.m.)


Review request for hive.


Changes
---

summary:
1. add some non-ascii data into the test cases
2. regenerate the output


Repository: hive-git


Description
---

This patch includes:
1. binary support for ParquetHiveSerde
2. related test cases both in unit and ql test


Diffs (updated)
-

  data/files/parquet_types.txt d342062 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
 472de8f 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
 d5aae3b 
  ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
4effe73 
  ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestParquetSerDe.java 
8ac7864 
  ql/src/test/queries/clientpositive/parquet_types.q 22585c3 
  ql/src/test/results/clientpositive/parquet_types.q.out 275897c 

Diff: https://reviews.apache.org/r/28147/diff/


Testing
---

related UT and QL tests passed


Thanks,

cheng xu



[jira] [Commented] (HIVE-7073) Implement Binary in ParquetSerDe

2014-11-23 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222637#comment-14222637
 ] 

Ferdinand Xu commented on HIVE-7073:


Thanks [~mohitsabharwal] for your review. I have updated the patch according to 
your comments in the RB entry.

 Implement Binary in ParquetSerDe
 

 Key: HIVE-7073
 URL: https://issues.apache.org/jira/browse/HIVE-7073
 Project: Hive
  Issue Type: Sub-task
Reporter: David Chen
Assignee: Ferdinand Xu
 Attachments: HIVE-7073.1.patch, HIVE-7073.2.patch, HIVE-7073.3.patch, 
 HIVE-7073.patch


 The ParquetSerDe currently does not support the BINARY data type. This ticket 
 is to implement the BINARY data type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-860) Persistent distributed cache

2014-11-23 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-860:
--
Attachment: HIVE-860.2.patch

Reattaching the patch to see the CI result. Last time the CI failed to start due 
to the following error:

rsync: write failed on 
/data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-1868/succeeded/TestCliDriver-avro_add_column.q-orc_wide_table.q-query_with_semi.q-and-12-more/hive.log:
 No space left on device (28)

 Persistent distributed cache
 

 Key: HIVE-860
 URL: https://issues.apache.org/jira/browse/HIVE-860
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.12.0
Reporter: Zheng Shao
Assignee: Ferdinand Xu
 Fix For: 0.15.0

 Attachments: HIVE-860.1.patch, HIVE-860.2.patch, HIVE-860.2.patch, 
 HIVE-860.patch, HIVE-860.patch, HIVE-860.patch, HIVE-860.patch, 
 HIVE-860.patch, HIVE-860.patch, HIVE-860.patch, HIVE-860.patch, 
 HIVE-860.patch, HIVE-860.patch, HIVE-860.patch


 DistributedCache is shared across multiple jobs if the HDFS file name is the 
 same.
 We need to make sure Hive puts the same file into the same location every time 
 and does not overwrite it if the file content is the same.
 We can achieve 2 different results:
 A1. Files added with the same name, timestamp, and md5 in the same session 
 will have a single copy in the distributed cache.
 A2. Files added with the same name, timestamp, and md5 will have a single 
 copy in the distributed cache.
 A2 has a bigger benefit in sharing but raises the question of when Hive 
 should clean it up in HDFS. A sketch of such content-addressed placement is below.
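 For illustration only, a minimal sketch of content-addressed placement under an 
 assumed staging prefix (/tmp/hive-cache is hypothetical, and a real version would 
 use HDFS paths): two adds of an identical file resolve to the same location, so 
 nothing is overwritten or duplicated.
 {code}
 import java.io.InputStream;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.security.MessageDigest;

 public class CacheKey {
   // Same name + mtime + md5 => same target path, so a later job can reuse
   // the already-uploaded copy instead of re-adding it.
   static String cachePath(Path file) throws Exception {
     MessageDigest md5 = MessageDigest.getInstance("MD5");
     try (InputStream in = Files.newInputStream(file)) {
       byte[] buf = new byte[8192];
       for (int n; (n = in.read(buf)) != -1; ) {
         md5.update(buf, 0, n);
       }
     }
     StringBuilder hex = new StringBuilder();
     for (byte b : md5.digest()) {
       hex.append(String.format("%02x", b));
     }
     long mtime = Files.getLastModifiedTime(file).toMillis();
     return "/tmp/hive-cache/" + file.getFileName() + "-" + mtime + "-" + hex;
   }

   public static void main(String[] args) throws Exception {
     System.out.println(cachePath(Paths.get(args[0])));
   }
 }
 {code}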



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7313) Allow session-level temp-tables to be marked as in-memory tables

2014-11-23 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-7313:
--
Affects Version/s: 0.14.0

 Allow session-level temp-tables to be marked as in-memory tables
 

 Key: HIVE-7313
 URL: https://issues.apache.org/jira/browse/HIVE-7313
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.14.0
Reporter: Gopal V
  Labels: InMemory, Performance

 When the hadoop-2.3 shims are in action, APIs that can pin small tables 
 into memory are available.
 Any session with an in-memory table can create HDFS in-memory cache pools with 
 default caching semantics and add its files to the cache pool.
 Example code implementing the behaviour was prototyped for the Tez 
 Application Master, but the AM does not have enough information to determine 
 the cache policies; a sketch of the pinning calls follows below.
 https://github.com/rajeshbalamohan/hdfs-cache-tool/blob/master/src/main/java/org/apache/hadoop/hdfs/tools/HDFSCache.java#L74
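 A minimal sketch of the HDFS caching calls involved, assuming fs.defaultFS 
 points at HDFS; the pool name and the path argument are hypothetical:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
 import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

 public class PinTable {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
     // Create a session-scoped pool with default caching semantics ...
     dfs.addCachePool(new CachePoolInfo("hive-session-pool"));
     // ... and pin the temp table's directory into it.
     dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
         .setPool("hive-session-pool")
         .setPath(new Path(args[0]))
         .build());
   }
 }
 {code}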



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-7313) Allow session-level temp-tables to be marked as in-memory tables

2014-11-23 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V reassigned HIVE-7313:
-

Assignee: Gopal V

 Allow session-level temp-tables to be marked as in-memory tables
 

 Key: HIVE-7313
 URL: https://issues.apache.org/jira/browse/HIVE-7313
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.14.0
Reporter: Gopal V
Assignee: Gopal V
  Labels: InMemory, Performance

 When the hadoop-2.3 shims are in action, APIs that can pin small tables 
 into memory are available.
 Any session with an in-memory table can create HDFS in-memory cache pools with 
 default caching semantics and add its files to the cache pool.
 Example code implementing the behaviour was prototyped for the Tez 
 Application Master, but the AM does not have enough information to determine 
 the cache policies.
 https://github.com/rajeshbalamohan/hdfs-cache-tool/blob/master/src/main/java/org/apache/hadoop/hdfs/tools/HDFSCache.java#L74



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8834) enable job progress monitoring of Remote Spark Context [Spark Branch]

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222657#comment-14222657
 ] 

Hive QA commented on HIVE-8834:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683254/HIVE-8834.5-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 7182 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl.testMultiSessionMultipleUse
org.apache.hadoop.hive.ql.exec.spark.session.TestSparkSessionManagerImpl.testSingleSessionMultipleUse
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/412/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/412/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-412/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12683254 - PreCommit-HIVE-SPARK-Build

 enable job progress monitoring of Remote Spark Context [Spark Branch]
 -

 Key: HIVE-8834
 URL: https://issues.apache.org/jira/browse/HIVE-8834
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Chengxiang Li
Assignee: Rui Li
  Labels: Spark-M3
 Attachments: HIVE-8834.1-spark.patch, HIVE-8834.2-spark.patch, 
 HIVE-8834.3-spark.patch, HIVE-8834.4-spark.patch, HIVE-8834.5-spark.patch


 We should enable job progress monitoring in Remote Spark Context; the Spark job 
 progress info should fit into SparkJobStatus. SPARK-2321 supplies a new Spark 
 progress API, which should make this task easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8952) Exception handling should be improved in DataWritableWriter

2014-11-23 Thread Ashish Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222679#comment-14222679
 ] 

Ashish Kumar Singh commented on HIVE-8952:
--

[~brocknoland] If no one has started working on it, can I take a stab at this?

 Exception handling should be improved in DataWritableWriter
 ---

 Key: HIVE-8952
 URL: https://issues.apache.org/jira/browse/HIVE-8952
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland

 1) In the {{write(final ArrayWritable record)}} method we should log 
 the stack trace as well as the message.
 2) We should pass the caught exception into the arguments of the 
 RuntimeException.
 3) Instead of throwing RuntimeException we should throw 
 IllegalArgumentException, since it carries some additional semantics (i.e. the 
 argument to this method was the problem).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8952) Exception handling should be improved in DataWritableWriter

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222683#comment-14222683
 ] 

Brock Noland commented on HIVE-8952:


Yep, feel free!

 Exception handling should be improved in DataWritableWriter
 ---

 Key: HIVE-8952
 URL: https://issues.apache.org/jira/browse/HIVE-8952
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland

 1) In the {{write(final ArrayWritable record)}} method we should log 
 the stack trace as well as the message.
 2) We should pass the caught exception into the arguments of the 
 RuntimeException.
 3) Instead of throwing RuntimeException we should throw 
 IllegalArgumentException, since it carries some additional semantics (i.e. the 
 argument to this method was the problem).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8950) Add support in ParquetHiveSerde to create table schema from a parquet file

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222686#comment-14222686
 ] 

Brock Noland commented on HIVE-8950:


Could you add some unit tests for {{ColInfoFromParquetFile}}? For example the 
{{*.parquet}} files in https://github.com/apache/hive/tree/trunk/data/files

 Add support in ParquetHiveSerde to create table schema from a parquet file
 --

 Key: HIVE-8950
 URL: https://issues.apache.org/jira/browse/HIVE-8950
 Project: Hive
  Issue Type: Improvement
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: HIVE-8950.patch


 PARQUET-76 and PARQUET-47 ask for creating Parquet-backed tables without 
 having to specify the column names and types. As Parquet files store their schema 
 in the footer, it is possible to generate a Hive schema from a Parquet file's 
 metadata. This will improve the usability of Parquet-backed tables; a sketch of 
 reading the footer schema follows below.
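 A sketch of reading that footer schema with the parquet-hadoop API of this era 
 (the class name here is hypothetical; each top-level field of the MessageType 
 would map to a candidate Hive column name/type):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import parquet.hadoop.ParquetFileReader;
 import parquet.hadoop.metadata.ParquetMetadata;
 import parquet.schema.MessageType;

 public class PrintParquetSchema {
   public static void main(String[] args) throws Exception {
     // Read only the footer; the file schema lives in its FileMetaData block.
     ParquetMetadata footer =
         ParquetFileReader.readFooter(new Configuration(), new Path(args[0]));
     MessageType schema = footer.getFileMetaData().getSchema();
     System.out.println(schema); // candidate source for the Hive column list
   }
 }
 {code}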



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8950) Add support in ParquetHiveSerde to create table schema from a parquet file

2014-11-23 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222687#comment-14222687
 ] 

Brock Noland commented on HIVE-8950:


[~spena] has a bunch of experience with Parquet as well.

 Add support in ParquetHiveSerde to create table schema from a parquet file
 --

 Key: HIVE-8950
 URL: https://issues.apache.org/jira/browse/HIVE-8950
 Project: Hive
  Issue Type: Improvement
Reporter: Ashish Kumar Singh
Assignee: Ashish Kumar Singh
 Attachments: HIVE-8950.patch


 PARQUET-76 and PARQUET-47 ask for creating Parquet-backed tables without 
 having to specify the column names and types. As Parquet files store their schema 
 in the footer, it is possible to generate a Hive schema from a Parquet file's 
 metadata. This will improve the usability of Parquet-backed tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 28147: HIVE-7073:Implement Binary in ParquetSerDe

2014-11-23 Thread Mohit Sabharwal

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28147/#review62754
---

Ship it!


Thanks for the changes!

- Mohit Sabharwal


On Nov. 24, 2014, 4:16 a.m., cheng xu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28147/
 ---
 
 (Updated Nov. 24, 2014, 4:16 a.m.)
 
 
 Review request for hive.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 This patch includes:
 1. binary support for ParquetHiveSerde
 2. related test cases both in unit and ql test
 
 
 Diffs
 -
 
   data/files/parquet_types.txt d342062 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
  472de8f 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
  d5aae3b 
   
 ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
 4effe73 
   ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestParquetSerDe.java 
 8ac7864 
   ql/src/test/queries/clientpositive/parquet_types.q 22585c3 
   ql/src/test/results/clientpositive/parquet_types.q.out 275897c 
 
 Diff: https://reviews.apache.org/r/28147/diff/
 
 
 Testing
 ---
 
 related UT and QL tests passed
 
 
 Thanks,
 
 cheng xu
 




[jira] [Commented] (HIVE-7073) Implement Binary in ParquetSerDe

2014-11-23 Thread Mohit Sabharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222694#comment-14222694
 ] 

Mohit Sabharwal commented on HIVE-7073:
---

+1 (non-binding)

 Implement Binary in ParquetSerDe
 

 Key: HIVE-7073
 URL: https://issues.apache.org/jira/browse/HIVE-7073
 Project: Hive
  Issue Type: Sub-task
Reporter: David Chen
Assignee: Ferdinand Xu
 Attachments: HIVE-7073.1.patch, HIVE-7073.2.patch, HIVE-7073.3.patch, 
 HIVE-7073.patch


 The ParquetSerDe currently does not support the BINARY data type. This ticket 
 is to implement the BINARY data type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8466) nonReserved keywords can not be used as table alias

2014-11-23 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222704#comment-14222704
 ] 

Navis commented on HIVE-8466:
-

[~ashutoshc] My bad, I missed those. I tested on MySQL and confirmed 
that reserved keywords are not allowed as aliases. I think some (most) keywords 
would be safe to use as aliases, but allowing them would introduce more confusion. 
[~cwsteinbach], what do you think?

 nonReserved keywords can not be used as table alias
 ---

 Key: HIVE-8466
 URL: https://issues.apache.org/jira/browse/HIVE-8466
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.12.0, 0.13.0, 0.14.0, 0.13.1
Reporter: cw
Assignee: Navis
Priority: Minor
 Attachments: HIVE-8466.1.patch, HIVE-8466.2.patch.txt, 
 HIVE-8466.3.patch.txt


 There is a small mistake in the patch for HIVE-2906. See the change in 
 FromClauseParser.g:
 -: tabname=tableName (ts=tableSample)? (KW_AS? alias=identifier)?
 --> ^(TOK_TABREF $tabname $ts? $alias?)
 +: tabname=tableName (props=tableProperties)? (ts=tableSample)? (KW_AS? alias=Identifier)?
 +-> ^(TOK_TABREF $tabname $props? $ts? $alias?)
 With 'identifier' changed to 'Identifier' we cannot use nonReserved 
 keywords as table aliases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-4766) Support HS2 client login timeout when the thrift thread max# is reached

2014-11-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14222715#comment-14222715
 ] 

Hive QA commented on HIVE-4766:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12683061/HIVE-4766.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6681 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hive.hcatalog.streaming.TestStreaming.testInterleavedTransactionBatchCommits
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1887/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1887/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1887/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12683061 - PreCommit-HIVE-TRUNK-Build

 Support HS2 client login timeout when the thrift thread max# is reached
 ---

 Key: HIVE-4766
 URL: https://issues.apache.org/jira/browse/HIVE-4766
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Affects Versions: 0.10.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
  Labels: TODOC15
 Fix For: 0.15.0, 0.14.1

 Attachments: HIVE-4766.patch


 HiveServer2 client (beeline) hangs at login if the Thrift max thread# has 
 been reached. This is because the server crashes due to a defect in the 
 currently used Thrift 0.9.0. When Hive is upgraded to a newer version of Thrift 
 (say Thrift 1.0), HS2 should support a client login timeout instead of the 
 current hang.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)