[jira] [Commented] (HIVE-14718) spark download is not ignored to a sufficient degree

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472865#comment-15472865
 ] 

Hive QA commented on HIVE-14718:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827471/HIVE-14718.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10530 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1130/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1130/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1130/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827471 - PreCommit-HIVE-MASTER-Build

> spark download is not ignored to a sufficient degree
> 
>
> Key: HIVE-14718
> URL: https://issues.apache.org/jira/browse/HIVE-14718
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14718.patch
>
>
> If one builds without skipSparkTests, and then runs mvn install w/o clean 
> with skipSparkTests, the download.sh generation script will not run, since 
> it's inside a profile that is disabled by this property; however, the 
> leftover one in target will still run because the execution of the script in 
> qtest project is not similarly protected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-14717:
---
Attachment: HIVE-14717.01.patch

[~Ferd] [~Ke Jia] Uploading a patch for you to check. I thought it would be 
easier to discuss when you take a look at the change. Let me know if there are 
any concerns.

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
> Attachments: HIVE-14717.01.patch
>
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.546 sec - 
> in org.apache.hive.beeline.TestClientCommandHookFactory
> Running org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - 
> in org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Running org.apache.hive.beeline.TestTableOutputFormat
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - 
> in org.apache.hive.beeline.TestTableOutputFormat
> Results :
> Tests run: 44, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:03 min
> [INFO] Finished at: 2016-09-07T10:57:28-07:00
> [INFO] Final Memory: 65M/949M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project hive-beeline: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/vihang/work/src/upstream/hive/beeline && 
> /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:MaxPermSize=512m -jar 
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefirebooter8133887423099901737.jar
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire6528142441948588259tmp
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire_05570572112194455658tmp
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> {noformat}
> My guess is this is related to recent fix for HIVE-13589 but needs more 
> investigation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14482) Add and drop table partition is not audit logged in HMS

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472706#comment-15472706
 ] 

Hive QA commented on HIVE-14482:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827468/HIVE-14482.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10530 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1129/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1129/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1129/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827468 - PreCommit-HIVE-MASTER-Build

> Add and drop table partition is not audit logged in HMS
> ---
>
> Key: HIVE-14482
> URL: https://issues.apache.org/jira/browse/HIVE-14482
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: HIVE-14482.patch
>
>
> When running:
> {code}
> ALTER TABLE test DROP PARTITION (b=140);
> {code}
> I only see the following in the HMS log:
> {code}
> 2016-08-08 23:12:34,081 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,082 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,082 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,094 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154094 duration=13 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,095 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,095 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_partitions_by_expr : 
> db=default tbl=test
> 2016-08-08 23:12:34,096 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_partitions_by_expr 
> : db=default tbl=test
> 2016-08-08 23:12:34,112 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  start=1470723154095 end=1470723154112 duration=17 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,172 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,173 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,173 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,186 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154186 duration=14 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,186 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,187 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,187 INFO  

[jira] [Commented] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472685#comment-15472685
 ] 

Vihang Karajgaonkar commented on HIVE-14717:


Do we support the ./beeline -n user -p --property-file  syntax? From what I 
saw in the code, --property-file will load all the properties from the file 
and use them to connect, ignoring any other parameters. So I am not sure 
./beeline -n user -p --property-file  is a valid command.

If your concern is that CommandLine will consume the argument after -p as the 
password value, then that is not a concern if we make the argument value 
optional.
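
For illustration only, here is a minimal standalone sketch of the "optional 
argument value" idea with Apache Commons CLI. This is not Beeline's actual 
option setup; the OptionalPasswordDemo class and the option wiring below are 
invented for the example.
{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

// Hypothetical demo, not Beeline code: "-p" is declared with an optional
// argument value, so a following registered option such as "--property-file"
// is not consumed as the password.
public class OptionalPasswordDemo {
  public static void main(String[] args) throws ParseException {
    Options options = new Options();
    options.addOption(Option.builder("n").hasArg().desc("user name").build());
    options.addOption(Option.builder("p").hasArg().optionalArg(true)
        .desc("password (value may be omitted)").build());
    options.addOption(Option.builder().longOpt("property-file").hasArg()
        .desc("properties file").build());

    CommandLine cli = new DefaultParser().parse(options, args);
    // When "-p" is given without a value, getOptionValue returns null and the
    // caller can fall back to prompting for the password interactively.
    String pass = cli.getOptionValue("p");
    System.out.println(pass == null ? "prompt for password" : "password supplied inline");
  }
}
{code}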

This is what I tried just now and it works as expected:

{noformat}
vihang-MBP:bin vihang$ ./beeline -n hive -p --property-file ~/beeline.properties
Connecting to jdbc:hive2://localhost:1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
16/09/07 21:03:38 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.2.0-SNAPSHOT by Apache Hive
0: jdbc:hive2://localhost:1>
{noformat}


Also, this works as expected:

{noformat}
vihang-MBP:bin vihang$ ./beeline -u "jdbc:hive2://localhost:1" -n hive -p 
-e "show tables;"
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/vihang/work/src/upstream/hive/packaging/target/apache-hive-2.2.0-SNAPSHOT-bin/apache-hive-2.2.0-SNAPSHOT-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:1
Enter password for jdbc:hive2://localhost:1: 
Connected to: Apache Hive (version 2.2.0-SNAPSHOT)
Driver: Hive JDBC (version 2.2.0-SNAPSHOT)
16/09/07 21:10:02 [main]: WARN jdbc.HiveConnection: Request to set autoCommit 
to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
OK
+----------------+
|    tab_name    |
+----------------+
| dummy          |
| likes          |
| longkeyvalues  |
| mydummy        |
| names          |
| s3dummy        |
| s3dummy_ext    |
| s3dummybucket  |
| s3dummypart    |
| s3dummyskewed  |
| src            |
| src2           |
| test123        |
+----------------+
13 rows selected (0.162 seconds)
Beeline version 2.2.0-SNAPSHOT by Apache Hive
Closing: 0: jdbc:hive2://localhost:1
vihang-MBP:bin vihang$
{noformat}

Let me submit a patch for this so that you can take a closer look; that way it 
will be easier to review and discuss. Thanks for your input, [~Ferd].

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests 

[jira] [Commented] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Ke Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472659#comment-15472659
 ] 

Ke Jia commented on HIVE-14717:
---

Hi [~vihangk1] [~Ferd], thanks for your suggestion. Before HIVE-13589, Beeline 
worked with the following cases:
{noformat}
1. beeline -u url -n user -p pass
2. beeline -u url -n user
3. beeline -u url -p pass
4. beeline -u url
5. beeline
{noformat}
When the password is null, the code block below sets it to an empty string.
{noformat}
private String constructCmd(String url, String user, String pass, String driver, boolean stripPasswd) {
  String com = "!connect "
      + url + " "
      + (user == null || user.length() == 0 ? "''" : user) + " ";
  if (stripPasswd) {
    com += PASSWD_MASK + " ";
  } else {
    com += (pass == null || pass.length() == 0 ? "''" : pass) + " ";
  }
  com += (driver == null ? "" : driver);
  return com;
}
{noformat}
Beeline prompts the user to enter a password when the password is null, in the 
following code. However, because the code above has already assigned the empty 
string to the password, Beeline will never prompt the user for a password.
{noformat}
if (password == null) {
  password = beeLine.getConsoleReader().readLine("Enter password for " + url + ": ",
      new Character('*'));
}
{noformat}
In the HIVE-13589 patch, we added the check (password == null || 
password.length() == 0) in the following code.
{noformat}
if (password == null || password.length() == 0) {
  password = beeLine.getConsoleReader().readLine("Enter password for " + url + ": ",
      new Character('*'));
}
{noformat}
So, after HIVE-13589, if the password is null or an empty string, Beeline 
prompts the user to enter a password; this means that for cases 2 and 4, 
Beeline will now ask the user for a password. If we want to keep prompting the 
user for a password, I think we still need the (password == null || 
password.length() == 0) check even if we adopt the optional -p argument, and 
the same issue would then occur again. What do you think?

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.546 sec - 
> in org.apache.hive.beeline.TestClientCommandHookFactory
> Running org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - 
> in org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Running org.apache.hive.beeline.TestTableOutputFormat
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - 
> in org.apache.hive.beeline.TestTableOutputFormat
> Results :
> Tests run: 44, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:03 min
> [INFO] Finished at: 2016-09-07T10:57:28-07:00
> [INFO] Final Memory: 65M/949M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project hive-beeline: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/vihang/work/src/upstream/hive/beeline && 
> /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:MaxPermSize=512m -jar 
> 

[jira] [Updated] (HIVE-14159) sorting of tuple array using multiple field[s]

2016-09-07 Thread Simanchal Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simanchal Das updated HIVE-14159:
-
Description: 
Problem Statement:

When we work with complex data structures such as Avro, we often encounter an 
array that contains multiple tuples, where each tuple has a struct schema.

Suppose the struct schema is like the one below:
{noformat}
{
"name": "employee",
"type": [{
"type": "record",
"name": "Employee",
"namespace": "com.company.Employee",
"fields": [{
"name": "empId",
"type": "int"
}, {
"name": "empName",
"type": "string"
}, {
"name": "age",
"type": "int"
}, {
"name": "salary",
"type": "double"
}]
}]
}

{noformat}
Then, while running our Hive query, the complex array looks like an array of 
employee objects.
{noformat}
Example: 
//(array<Employee>)

Array[Employee(100,Foo,20,20990),Employee(500,Boo,30,50990),Employee(700,Harry,25,40990),Employee(100,Tom,35,70990)]

{noformat}
When implementing day-to-day business use cases, we encounter problems such as 
sorting a tuple array by specific field[s] like empId, name, or salary, in ASC 
or DESC order.


Proposal:

I have developed a UDF 'sort_array_by' which sorts a tuple array by one or 
more fields, in ASC or DESC order as provided by the user; the default is 
ascending order.
{noformat}
Example:
1.Select 
sort_array_by(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Salary","ASC");
output: 
array[struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(500,Boo,30,50990),struct(100,Tom,35,70990)]

2.Select 
sort_array_by(array[struct(100,Foo,20,20990),struct(500,Boo,30,80990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","ASC");
output: 
array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]

3.Select 
sort_array_by(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","Age","ASC");
output: 
array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]
{noformat}
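
As an aside, the multi-field ascending sort described above can be illustrated 
with plain Java comparator chaining. The sketch below is not the sort_array_by 
UDF implementation, and the Employee class in it is invented for the example.
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Invented illustration of sorting tuples by several fields in ascending
// order; this is not the sort_array_by UDF code.
public class MultiFieldSortDemo {
  static final class Employee {
    final int empId; final String empName; final int age; final double salary;
    Employee(int empId, String empName, int age, double salary) {
      this.empId = empId; this.empName = empName; this.age = age; this.salary = salary;
    }
    @Override public String toString() {
      return "Employee(" + empId + "," + empName + "," + age + "," + salary + ")";
    }
  }

  public static void main(String[] args) {
    List<Employee> employees = new ArrayList<>(List.of(
        new Employee(100, "Foo", 20, 20990),
        new Employee(500, "Boo", 30, 50990),
        new Employee(700, "Harry", 25, 40990),
        new Employee(100, "Tom", 35, 70990)));

    // Sort by empName, then salary, ascending; use Comparator.reversed() for DESC.
    employees.sort(Comparator.comparing((Employee e) -> e.empName)
        .thenComparingDouble(e -> e.salary));
    employees.forEach(System.out::println);
  }
}
{code}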

  was:
Problem Statement:

When we are working with complex structure of data like avro.
Most of the times we are encountering array contains multiple tuples and each 
tuple have struct schema.

Suppose here struct schema is like below:
{noformat}
{
"name": "employee",
"type": [{
"type": "record",
"name": "Employee",
"namespace": "com.company.Employee",
"fields": [{
"name": "empId",
"type": "int"
}, {
"name": "empName",
"type": "string"
}, {
"name": "age",
"type": "int"
}, {
"name": "salary",
"type": "double"
}]
}]
}

{noformat}
Then while running our hive query complex array looks like array of employee 
objects.
{noformat}
Example: 
//(array>)

Array[Employee(100,Foo,20,20990),Employee(500,Boo,30,50990),Employee(700,Harry,25,40990),Employee(100,Tom,35,70990)]

{noformat}
When we are implementing business use cases day to day life we are encountering 
problems like sorting a tuple array by specific field[s] like 
empId,name,salary,etc by ASC or DESC order.


Proposal:

I have developed a udf 'sort_array_by' which will sort a tuple array by one or 
more fields in ASC or DESC order provided by user ,default is ascending order .
{noformat}
Example:
1.Select 
sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Salary","ASC");
output: 
array[struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(500,Boo,30,50990),struct(100,Tom,35,70990)]

2.Select 
sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,80990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","ASC");
output: 
array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]

3.Select 

[jira] [Commented] (HIVE-14716) Duplicate pom.xml entries for mockito

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472562#comment-15472562
 ] 

Hive QA commented on HIVE-14716:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827445/HIVE-14716.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10462 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1128/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1128/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1128/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827445 - PreCommit-HIVE-MASTER-Build

> Duplicate pom.xml entries for mockito
> -
>
> Key: HIVE-14716
> URL: https://issues.apache.org/jira/browse/HIVE-14716
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Trivial
> Attachments: HIVE-14716.01.patch
>
>
> When you build beeline, there is a warning saying that there are duplicate 
> pom.xml entries for the mockito dependency.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hive:hive-beeline:jar:2.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 126, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 137, column 17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14715) Hive throws NumberFormatException with query with Null value

2016-09-07 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-14715:

Status: Patch Available  (was: Open)

> Hive throws NumberFormatException with query with Null value
> 
>
> Key: HIVE-14715
> URL: https://issues.apache.org/jira/browse/HIVE-14715
> Project: Hive
>  Issue Type: Bug
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-14715.1.patch
>
>
> A java.lang.NumberFormatException is thrown with the following reproduction:
> set hive.cbo.enable=false;
> CREATE TABLE `paqtest`(
> `c1` int,
> `s1` string,
> `s2` string,
> `bn1` bigint)
> ROW FORMAT SERDE
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
> STORED AS INPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
> OUTPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> insert into paqtest values (58, '', 'ABC', 0);
> SELECT
> 'PM' AS cy,
> c1,
> NULL AS iused,
> NULL AS itp,
> s2,
> NULL AS cvg,
> NULL AS acavg,
> sum(bn1) AS cca
> FROM paqtest
> WHERE (s1 IS NULL OR length(s1) = 0)
> GROUP BY 'Pricing mismatch', c1, NULL, NULL, s2, NULL, NULL;
> The stack is like the following:
> java.lang.NumberFormatException: ABC
> GroupByOperator.process(Object, int) line: 773
> ExecReducer.reduce(Object, Iterator, OutputCollector, Reporter) line: 236 
> ReduceTask.runOldReducer(JobConf, TaskUmbilicalProtocol, TaskReporter, 
> RawKeyValueIterator, RawComparator, Class, Class) 
> line: 444 
> ReduceTask.run(JobConf, TaskUmbilicalProtocol) line: 392  
> LocalJobRunner$Job$ReduceTaskRunnable.run() line: 319 
> Executors$RunnableAdapter.call() line: 471 
> It works fine when hive.cbo.enable = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14715) Hive throws NumberFormatException with query with Null value

2016-09-07 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-14715:

Attachment: HIVE-14715.1.patch

The NumberFormatException is thrown because of mismatched columns.
In the genGroupByPlanReduceSinkOperator method, getReduceKeysForReduceSink is 
called to build the reduce keys from grpByExprs by removing duplicate columns, 
so reduceKeys.size() <= grpByExprs.size().
The reduce values are internal columns that follow grpByExprs, so to read their 
values the position should start from grpByExprs.size().
Patch 1 with the fix is attached.
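
To make the offset mismatch concrete, here is a small standalone illustration 
in plain Java. It is not the actual Hive planner code; the OffsetDemo class, 
the variable names, and the row layout are invented for the example.
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

// Invented illustration of the indexing bug described above, not Hive code.
// The GROUP BY list contains repeated constants, the reduce keys are the
// de-duplicated group-by expressions, and the value columns are laid out
// after ALL group-by expressions.
public class OffsetDemo {
  public static void main(String[] args) {
    List<String> grpByExprs = Arrays.asList(
        "'Pricing mismatch'", "c1", "NULL", "NULL", "s2", "NULL", "NULL");
    List<String> reduceKeys = new ArrayList<>(new LinkedHashSet<>(grpByExprs)); // duplicates removed

    // Row layout: [group-by expression values...][aggregation values...]
    List<String> row = Arrays.asList("Pricing mismatch", "58", null, null, "ABC", null, null, "0");

    // Wrong: reading values right after the de-duplicated keys lands on "ABC",
    // which later fails to parse as a number (the reported NumberFormatException).
    System.out.println("wrong column: " + row.get(reduceKeys.size()));
    // Right: values start after all group-by expressions.
    System.out.println("sum(bn1) operand: " + row.get(grpByExprs.size()));
  }
}
{code}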

> Hive throws NumberFormatException with query with Null value
> 
>
> Key: HIVE-14715
> URL: https://issues.apache.org/jira/browse/HIVE-14715
> Project: Hive
>  Issue Type: Bug
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
> Attachments: HIVE-14715.1.patch
>
>
> A java.lang.NumberFormatException is thrown with the following reproduction:
> set hive.cbo.enable=false;
> CREATE TABLE `paqtest`(
> `c1` int,
> `s1` string,
> `s2` string,
> `bn1` bigint)
> ROW FORMAT SERDE
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
> STORED AS INPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
> OUTPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> insert into paqtest values (58, '', 'ABC', 0);
> SELECT
> 'PM' AS cy,
> c1,
> NULL AS iused,
> NULL AS itp,
> s2,
> NULL AS cvg,
> NULL AS acavg,
> sum(bn1) AS cca
> FROM paqtest
> WHERE (s1 IS NULL OR length(s1) = 0)
> GROUP BY 'Pricing mismatch', c1, NULL, NULL, s2, NULL, NULL;
> The stack is like the following:
> java.lang.NumberFormatException: ABC
> GroupByOperator.process(Object, int) line: 773
> ExecReducer.reduce(Object, Iterator, OutputCollector, Reporter) line: 236 
> ReduceTask.runOldReducer(JobConf, TaskUmbilicalProtocol, TaskReporter, 
> RawKeyValueIterator, RawComparator, Class, Class) 
> line: 444 
> ReduceTask.run(JobConf, TaskUmbilicalProtocol) line: 392  
> LocalJobRunner$Job$ReduceTaskRunnable.run() line: 319 
> Executors$RunnableAdapter.call() line: 471 
> It works fine when hive.cbo.enable = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-13589) beeline - support prompt for password with '-u' option

2016-09-07 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu reopened HIVE-13589:
-

Reverted in 63fdb51 since it breaks the TestHiveCli test and the backwards 
compatibility mentioned in HIVE-14717.

> beeline - support prompt for password with '-u' option
> --
>
> Key: HIVE-13589
> URL: https://issues.apache.org/jira/browse/HIVE-13589
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Thejas M Nair
>Assignee: Ke Jia
> Fix For: 2.2.0
>
> Attachments: HIVE-13589.1.patch, HIVE-13589.2.patch, 
> HIVE-13589.3.patch, HIVE-13589.4.patch, HIVE-13589.5.patch, HIVE-13589.6.patch
>
>
> Specifying the connection string using command-line options in beeline is 
> convenient, as it gets saved in the shell command history and is easy to 
> retrieve from there.
> However, specifying the password on the command line is not secure, as it 
> gets displayed on screen and saved in the history.
> It should be possible to specify '-p' without an argument to make beeline 
> prompt for password.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14608) LLAP: slow scheduling due to LlapTaskScheduler not removing nodes on kill

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14608:

   Resolution: Fixed
Fix Version/s: 2.1.1
   2.2.0
   Status: Resolved  (was: Patch Available)

Committed to branches. Thanks for the review!

> LLAP: slow scheduling due to LlapTaskScheduler not removing nodes on kill 
> --
>
> Key: HIVE-14608
> URL: https://issues.apache.org/jira/browse/HIVE-14608
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Critical
> Fix For: 2.2.0, 2.1.1
>
> Attachments: HIVE-14608.patch
>
>
> See comments; this can result in a slowdown esp. if some critical task gets 
> unlucky.
> {noformat}
>   public void workerNodeRemoved(ServiceInstance serviceInstance) {
>  // FIXME: disabling this for now
> // instanceToNodeMap.remove(serviceInstance.getWorkerIdentity());
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14710) unify DB product type treatment in directsql and txnhandler

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14710:

   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> unify DB product type treatment in directsql and txnhandler
> ---
>
> Key: HIVE-14710
> URL: https://issues.apache.org/jira/browse/HIVE-14710
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.2.0
>
> Attachments: HIVE-14710.01.patch, HIVE-14710.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472439#comment-15472439
 ] 

Ferdinand Xu commented on HIVE-14717:
-

Hi [~vihangk1], thank you for this suggestion. There's a major issue, as 
discussed in HIVE-13589, e.g.:
{noformat}
beeline -u user -p --property-file 
{noformat}

Beeline will treat the string after the -p option as a password, since 
"--property-file" is not an option key defined in BeeLine (it is defined in 
BeeLineOpts instead). So we have to put the -p option in the last place to 
avoid this case. One possible remedy is to bypass the password check if the 
user name is not specified. Any thoughts? [~Jk_Self] [~vihangk1]

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.546 sec - 
> in org.apache.hive.beeline.TestClientCommandHookFactory
> Running org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - 
> in org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Running org.apache.hive.beeline.TestTableOutputFormat
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - 
> in org.apache.hive.beeline.TestTableOutputFormat
> Results :
> Tests run: 44, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:03 min
> [INFO] Finished at: 2016-09-07T10:57:28-07:00
> [INFO] Final Memory: 65M/949M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project hive-beeline: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/vihang/work/src/upstream/hive/beeline && 
> /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:MaxPermSize=512m -jar 
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefirebooter8133887423099901737.jar
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire6528142441948588259tmp
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire_05570572112194455658tmp
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> {noformat}
> My guess is this is related to recent fix for HIVE-13589 but needs more 
> investigation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14608) LLAP: slow scheduling due to LlapTaskScheduler not removing nodes on kill

2016-09-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472433#comment-15472433
 ] 

Sergey Shelukhin commented on HIVE-14608:
-

It mostly applies to reducers, actually. The main problem is as indicated in 
the description: for whatever reason we don't remove nodes from the node list 
when they die. The per-node assignment explicitly checks the active set to get 
around that (?), but the other path doesn't... so reducers with no location 
preference can potentially be sent to dead nodes.

> LLAP: slow scheduling due to LlapTaskScheduler not removing nodes on kill 
> --
>
> Key: HIVE-14608
> URL: https://issues.apache.org/jira/browse/HIVE-14608
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Critical
> Attachments: HIVE-14608.patch
>
>
> See comments; this can result in a slowdown esp. if some critical task gets 
> unlucky.
> {noformat}
>   public void workerNodeRemoved(ServiceInstance serviceInstance) {
>  // FIXME: disabling this for now
> // instanceToNodeMap.remove(serviceInstance.getWorkerIdentity());
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14644) use metastore information on the read path appropriately

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14644:

Status: Patch Available  (was: Open)

> use metastore information on the read path appropriately
> 
>
> Key: HIVE-14644
> URL: https://issues.apache.org/jira/browse/HIVE-14644
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14644.nogen.patch, HIVE-14644.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14644) use metastore information on the read path appropriately

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14644:

Attachment: HIVE-14644.patch
HIVE-14644.nogen.patch

The patches... still need to test some more.

> use metastore information on the read path appropriately
> 
>
> Key: HIVE-14644
> URL: https://issues.apache.org/jira/browse/HIVE-14644
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14644.nogen.patch, HIVE-14644.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14637) edit or split MoveTask to commit job results to metastore

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14637:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the feature branch

> edit or split MoveTask to commit job results to metastore
> -
>
> Key: HIVE-14637
> URL: https://issues.apache.org/jira/browse/HIVE-14637
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14637.nogen.patch, HIVE-14637.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14532) Enable qtests from IDE - eclipse

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472406#comment-15472406
 ] 

Hive QA commented on HIVE-14532:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827446/HIVE-14532.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10462 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1127/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1127/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1127/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827446 - PreCommit-HIVE-MASTER-Build

> Enable qtests from IDE - eclipse
> 
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch, 
> HIVE-14532.3.patch, HIVE-14532.4.patch
>
>
> with HIVE-1 applied, I've played around with executing qtests from 
> eclipse... after the patch it seemed ok; I've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> I think the last step is not required... but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}; other qfiles may or may not work, but will at least 
> have some chance of being usable.
> To my big surprise, {{alter_concatenate_indexed_table.q}} also passed, even 
> though it contains relative file references, and I suspected that it would 
> have issues with that.
> note: I have the datanucleus plugin installed... and I use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14718) spark download is not ignored to a sufficient degree

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14718:

Description: 
If one builds without skipSparkTests, and then runs mvn install w/o clean with 
skipSparkTests, the download.sh generation script will not run, since it's 
inside a profile that is disabled by this property; however, the leftover one 
in target will still run because the execution of the script in qtest project 
is not similarly protected.


  was:
If one builds without skipSparkTests, and then runs mvn install w/o clean with 
skipSparkTests, the download.sh generation script will not run, since it's 
generated in a profile that's disabled by the property; however, the leftover 
one in target will still run because the execution of the script in qtest 
project is not similarly protected.



> spark download is not ignored to a sufficient degree
> 
>
> Key: HIVE-14718
> URL: https://issues.apache.org/jira/browse/HIVE-14718
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14718.patch
>
>
> If one builds without skipSparkTests, and then runs mvn install w/o clean 
> with skipSparkTests, the download.sh generation script will not run, since 
> it's inside a profile that is disabled by this property; however, the 
> leftover one in target will still run because the execution of the script in 
> qtest project is not similarly protected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14718) spark download is not ignored to a sufficient degree

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14718:

Status: Patch Available  (was: Open)

[~spena] does this make sense? I did what other pom files appear to do; I'm 
not intimately familiar with Hive pom files anymore :)

> spark download is not ignored to a sufficient degree
> 
>
> Key: HIVE-14718
> URL: https://issues.apache.org/jira/browse/HIVE-14718
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14718.patch
>
>
> If one builds without skipSparkTests, and then runs mvn install w/o clean 
> with skipSparkTests, the download.sh generation script will not run, since 
> it's generated in a profile that's disabled by the property; however, the 
> leftover one in target will still run because the execution of the script in 
> qtest project is not similarly protected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14718) spark download is not ignored to a sufficient degree

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14718:

Description: 
If one builds without skipSparkTests, and then runs mvn install w/o clean with 
skipSparkTests, the download.sh generation script will not run, since it's 
generated in a profile that's disabled by the property; however, the leftover 
one in target will still run because the execution of the script in qtest 
project is not similarly protected.


> spark download is not ignored to a sufficient degree
> 
>
> Key: HIVE-14718
> URL: https://issues.apache.org/jira/browse/HIVE-14718
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14718.patch
>
>
> If one builds without skipSparkTests, and then runs mvn install w/o clean 
> with skipSparkTests, the download.sh generation script will not run, since 
> it's generated in a profile that's disabled by the property; however, the 
> leftover one in target will still run because the execution of the script in 
> qtest project is not similarly protected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14718) spark download is not ignored to a sufficient degree

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14718:

Attachment: HIVE-14718.patch

> spark download is not ignored to a sufficient degree
> 
>
> Key: HIVE-14718
> URL: https://issues.apache.org/jira/browse/HIVE-14718
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14718.patch
>
>
> If one builds without skipSparkTests, and then runs mvn install w/o clean 
> with skipSparkTests, the download.sh generation script will not run, since 
> it's generated in a profile that's disabled by the property; however, the 
> leftover one in target will still run because the execution of the script in 
> qtest project is not similarly protected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14715) Hive throws NumberFormatException with query with Null value

2016-09-07 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen reassigned HIVE-14715:
---

Assignee: Yongzhi Chen

> Hive throws NumberFormatException with query with Null value
> 
>
> Key: HIVE-14715
> URL: https://issues.apache.org/jira/browse/HIVE-14715
> Project: Hive
>  Issue Type: Bug
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>
> A java.lang.NumberFormatException is thrown with the following reproduction:
> set hive.cbo.enable=false;
> CREATE TABLE `paqtest`(
> `c1` int,
> `s1` string,
> `s2` string,
> `bn1` bigint)
> ROW FORMAT SERDE
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
> STORED AS INPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
> OUTPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> insert into paqtest values (58, '', 'ABC', 0);
> SELECT
> 'PM' AS cy,
> c1,
> NULL AS iused,
> NULL AS itp,
> s2,
> NULL AS cvg,
> NULL AS acavg,
> sum(bn1) AS cca
> FROM paqtest
> WHERE (s1 IS NULL OR length(s1) = 0)
> GROUP BY 'Pricing mismatch', c1, NULL, NULL, s2, NULL, NULL;
> The stack is like the following:
> java.lang.NumberFormatException: ABC
> GroupByOperator.process(Object, int) line: 773
> ExecReducer.reduce(Object, Iterator, OutputCollector, Reporter) line: 236 
> ReduceTask.runOldReducer(JobConf, TaskUmbilicalProtocol, TaskReporter, 
> RawKeyValueIterator, RawComparator, Class, Class) 
> line: 444 
> ReduceTask.run(JobConf, TaskUmbilicalProtocol) line: 392  
> LocalJobRunner$Job$ReduceTaskRunnable.run() line: 319 
> Executors$RunnableAdapter.call() line: 471 
> It works fine when hive.cbo.enable = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14686) Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS ... AS"

2016-09-07 Thread Fan Yunbo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472336#comment-15472336
 ] 

Fan Yunbo commented on HIVE-14686:
--

[~ruili] I agree with you, and thanks for your advice.
I made a change:
set the command type as soon as the value of command_type is decided;
updated the related test output files.
The test failures seem unrelated.


> Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS 
> ... AS"
> --
>
> Key: HIVE-14686
> URL: https://issues.apache.org/jira/browse/HIVE-14686
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14686.1.patch, HIVE-14686.2.patch, 
> HIVE-14686.3.patch
>
>
> See the query: 
> {code}
> create table if not exists DST as select * from SRC;
> {code}
> if the table DST doesn't exist, SessionState.get().getHiveOperation() will 
> return HiveOperation.CREATETABLE_AS_SELECT;
> But if the table DST already exists, it will return HiveOperation.CREATETABLE;
> It really causes trouble for anyone who determines the operation type from 
> SessionState.get().getHiveOperation().
> The reason I found is that the function analyzeCreateTable in 
> SemanticAnalyzer.java returns null and does not set the correct command type 
> if the table already exists.
> Here is the related code:
> {code}
> // check for existence of table
> if (ifNotExists) {
>   try {
> Table table = getTable(qualifiedTabName, false);
> if (table != null) { // table exists
>   return null;
> }
>   } catch (HiveException e) {
> // should not occur since second parameter to getTableWithQN is false
> throw new IllegalStateException("Unxpected Exception thrown: " + 
> e.getMessage(), e);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14159) sorting of tuple array using multiple field[s]

2016-09-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472299#comment-15472299
 ] 

Lefty Leverenz commented on HIVE-14159:
---

Doc note:  The *sort_array_by* UDF should be documented in the wiki for release 
2.2.0.  Added a TODOC2.2 label.

* [Hive Operators and UDFs -- Collection Functions | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-CollectionFunctions]

Also, "sort_array_field" should be changed to "sort_array_by" in three examples 
in this issue's description.

> sorting of tuple array using multiple field[s]
> --
>
> Key: HIVE-14159
> URL: https://issues.apache.org/jira/browse/HIVE-14159
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Simanchal Das
>Assignee: Simanchal Das
>  Labels: TODOC2.2, patch
> Fix For: 2.2.0
>
> Attachments: HIVE-14159.1.patch, HIVE-14159.2.patch, 
> HIVE-14159.3.patch, HIVE-14159.4.patch
>
>
> Problem Statement:
> When we are working with complex structure of data like avro.
> Most of the times we are encountering array contains multiple tuples and each 
> tuple have struct schema.
> Suppose here struct schema is like below:
> {noformat}
> {
>   "name": "employee",
>   "type": [{
>   "type": "record",
>   "name": "Employee",
>   "namespace": "com.company.Employee",
>   "fields": [{
>   "name": "empId",
>   "type": "int"
>   }, {
>   "name": "empName",
>   "type": "string"
>   }, {
>   "name": "age",
>   "type": "int"
>   }, {
>   "name": "salary",
>   "type": "double"
>   }]
>   }]
> }
> {noformat}
> Then while running our hive query complex array looks like array of employee 
> objects.
> {noformat}
> Example: 
>   //(array>)
>   
> Array[Employee(100,Foo,20,20990),Employee(500,Boo,30,50990),Employee(700,Harry,25,40990),Employee(100,Tom,35,70990)]
> {noformat}
> When implementing day-to-day business use cases we run into problems such as 
> sorting a tuple array by specific field[s] like empId, empName, salary, etc. 
> in ASC or DESC order.
> Proposal:
> I have developed a UDF 'sort_array_by' which sorts a tuple array by one 
> or more fields in the ASC or DESC order provided by the user; the default is 
> ascending order.
> {noformat}
> Example:
>   1.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Salary","ASC");
>   output: 
> array[struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(500,Boo,30,50990),struct(100,Tom,35,70990)]
>   
>   2.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,80990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","ASC");
>   output: 
> array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]
>   3.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","Age","ASC");
>   output: 
> array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]
> {noformat}
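
To make the intended ordering concrete, here is a plain-Java sketch of sorting struct-like rows by one or more named fields (illustration only; the real UDF operates on Hive ObjectInspectors, and none of these class or method names come from the patch):

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of multi-field sorting, ascending by default,
// analogous to the proposed sort_array_by UDF.
public class SortByFieldsSketch {

  static Map<String, Object> emp(int id, String name, int age, double salary) {
    Map<String, Object> m = new HashMap<String, Object>();
    m.put("empId", id);
    m.put("empName", name);
    m.put("age", age);
    m.put("salary", salary);
    return m;
  }

  @SuppressWarnings({"unchecked", "rawtypes"})
  static void sortBy(List<Map<String, Object>> rows, final List<String> fields, boolean asc) {
    Comparator<Map<String, Object>> cmp = new Comparator<Map<String, Object>>() {
      public int compare(Map<String, Object> a, Map<String, Object> b) {
        for (String f : fields) {                           // compare field by field
          int c = ((Comparable) a.get(f)).compareTo(b.get(f));
          if (c != 0) {
            return c;
          }
        }
        return 0;
      }
    };
    Collections.sort(rows, asc ? cmp : Collections.reverseOrder(cmp));
  }

  public static void main(String[] args) {
    List<Map<String, Object>> employees = new ArrayList<Map<String, Object>>(Arrays.asList(
        emp(100, "Foo", 20, 20990), emp(500, "Boo", 30, 50990),
        emp(700, "Harry", 25, 40990), emp(100, "Tom", 35, 70990)));
    sortBy(employees, Arrays.asList("salary"), true);       // like sort_array_by(..., "Salary", "ASC")
    System.out.println(employees);
  }
}
{code}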



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14159) sorting of tuple array using multiple field[s]

2016-09-07 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-14159:
--
Labels: TODOC2.2 patch  (was: patch)

> sorting of tuple array using multiple field[s]
> --
>
> Key: HIVE-14159
> URL: https://issues.apache.org/jira/browse/HIVE-14159
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Simanchal Das
>Assignee: Simanchal Das
>  Labels: TODOC2.2, patch
> Fix For: 2.2.0
>
> Attachments: HIVE-14159.1.patch, HIVE-14159.2.patch, 
> HIVE-14159.3.patch, HIVE-14159.4.patch
>
>
> Problem Statement:
> When we are working with complex data structures such as Avro, 
> most of the time we encounter an array that contains multiple tuples, and each 
> tuple has a struct schema.
> Suppose here struct schema is like below:
> {noformat}
> {
>   "name": "employee",
>   "type": [{
>   "type": "record",
>   "name": "Employee",
>   "namespace": "com.company.Employee",
>   "fields": [{
>   "name": "empId",
>   "type": "int"
>   }, {
>   "name": "empName",
>   "type": "string"
>   }, {
>   "name": "age",
>   "type": "int"
>   }, {
>   "name": "salary",
>   "type": "double"
>   }]
>   }]
> }
> {noformat}
> Then, while running our Hive query, the complex array looks like an array of employee 
> objects.
> {noformat}
> Example: 
>   // (array<struct<empId:int,empName:string,age:int,salary:double>>)
>   
> Array[Employee(100,Foo,20,20990),Employee(500,Boo,30,50990),Employee(700,Harry,25,40990),Employee(100,Tom,35,70990)]
> {noformat}
> When implementing day-to-day business use cases we run into problems such as 
> sorting a tuple array by specific field[s] like empId, empName, salary, etc. 
> in ASC or DESC order.
> Proposal:
> I have developed a UDF 'sort_array_by' which sorts a tuple array by one 
> or more fields in the ASC or DESC order provided by the user; the default is 
> ascending order.
> {noformat}
> Example:
>   1.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Salary","ASC");
>   output: 
> array[struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(500,Boo,30,50990),struct(100,Tom,35,70990)]
>   
>   2.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,80990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","ASC");
>   output: 
> array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]
>   3.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","Age","ASC");
>   output: 
> array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14591) HS2 is shut down unexpectedly during the startup time

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472255#comment-15472255
 ] 

Hive QA commented on HIVE-14591:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827444/HIVE-14591.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10440 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
TestMiniTezCliDriver-update_orig_table.q-vectorization_limit.q-explainanalyze_3.q-and-17-more
 - did not produce a TEST-*.xml file
TestSparkNegativeCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1126/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1126/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1126/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827444 - PreCommit-HIVE-MASTER-Build

> HS2 is shut down unexpectedly during the startup time
> -
>
> Key: HIVE-14591
> URL: https://issues.apache.org/jira/browse/HIVE-14591
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14591.1.patch
>
>
> If there is issue with Zookeeper (e.g. connection issues), then it takes HS2 
> some time to connect. During this time, Ambari could issue health checks 
> against HS2 and the CloseSession call will trigger the shutdown of HS2, which 
> is not expected. That triggering should happen only when the HS2 has been 
> deregistered with Zookeeper, not during the startup time when HS2 is not 
> registered with ZK yet.
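
A minimal sketch of that guard, with invented field and method names purely to illustrate the intended behavior (not the actual patch):

{code}
// Hypothetical sketch: CloseSession may only trigger a shutdown once HS2 has
// been registered with and then deregistered from ZooKeeper, never while
// startup/registration is still in progress.
public class ShutdownGuardSketch {
  private volatile boolean registeredWithZooKeeper = false;
  private volatile boolean deregisteredWithZooKeeper = false;

  void onZooKeeperRegistered()   { registeredWithZooKeeper = true; }
  void onZooKeeperDeregistered() { deregisteredWithZooKeeper = true; }

  /** Called from CloseSession handling once no sessions remain. */
  boolean shouldTriggerShutdown() {
    // During startup we are not registered yet, so never shut down here.
    return registeredWithZooKeeper && deregisteredWithZooKeeper;
  }
}
{code}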



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472244#comment-15472244
 ] 

Vihang Karajgaonkar commented on HIVE-14717:


I investigated this more. I think we can satisfy the requirement in HIVE-13589 
without breaking backwards compatibility by making the password option's argument 
"optional" using the OptionBuilder. Since we are only widening the scope of the 
option, backwards compatibility is preserved. So {noformat}beeline -u url -n user 
-p pass{noformat}, {noformat}beeline -u url -p -n user{noformat} and 
{noformat}beeline -u url -n user -p{noformat} will all work.

If the user provides -p on the command line, they can either supply the password 
on the command line as before or omit it, in which case Beeline will prompt for 
the password. The following cases should connect successfully without prompting 
for a password (assuming the default authentication mode), so the behavior stays 
backwards compatible.

{noformat}
beeline -u url -n user -p pass
beeline -u url -p pass -n user
beeline -u url
beeline -u url -n user
beeline -u url -p password
beeline -u 
"jdbc:hive2://localhost:1/default;user=username;password=password"
beeline -u "jdbc:hive2://"
{noformat}

In case the user does not want the password to be seen on the command line or in 
the command history, they can connect in any of the following ways, and Beeline 
will prompt for the password in each case:

{noformat}
beeline -u url -n user -p
beeline -u url -p -n user
beeline -p -u url -n user
beeline -u url -p
beeline -u url -p -n user -e "show tables;"
{noformat}

I have done some testing and it is working as expected. [~Ferd], let me know if 
you have any thoughts.
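
A rough sketch of the idea with Apache Commons CLI (a standalone illustration under stated assumptions, not the actual Beeline code; Beeline's own option wiring and prompting will differ):

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.OptionBuilder;
import org.apache.commons.cli.Options;

// Hypothetical sketch: give -p an *optional* argument so that both
// "-p secret" and a bare "-p" parse; a bare -p then triggers a prompt.
public class PasswordOptionSketch {
  public static void main(String[] args) throws Exception {
    Options options = new Options();
    options.addOption(OptionBuilder
        .hasOptionalArg()                       // the value may be omitted
        .withArgName("password")
        .withDescription("the password to connect as")
        .create('p'));

    CommandLine cl = new GnuParser().parse(options, args, true);
    if (cl.hasOption('p')) {
      String pass = cl.getOptionValue('p');
      if (pass == null && System.console() != null) {
        // -p given without a value: prompt instead of failing
        pass = new String(System.console().readPassword("Enter password: "));
      }
      System.out.println("got a password of length " + (pass == null ? 0 : pass.length()));
    }
  }
}
{code}

With an optional argument, "-p pass" keeps today's behavior, while a bare "-p" falls back to prompting.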

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.546 sec - 
> in org.apache.hive.beeline.TestClientCommandHookFactory
> Running org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - 
> in org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Running org.apache.hive.beeline.TestTableOutputFormat
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - 
> in org.apache.hive.beeline.TestTableOutputFormat
> Results :
> Tests run: 44, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:03 min
> [INFO] Finished at: 2016-09-07T10:57:28-07:00
> [INFO] Final Memory: 65M/949M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project hive-beeline: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/vihang/work/src/upstream/hive/beeline && 
> /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:MaxPermSize=512m -jar 
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefirebooter8133887423099901737.jar
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire6528142441948588259tmp
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire_05570572112194455658tmp
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of 

[jira] [Updated] (HIVE-14039) HiveServer2: Make the usage of server with JDBC thrift serde enabled, backward compatible for older clients

2016-09-07 Thread Ziyang Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ziyang Zhao updated HIVE-14039:
---
Attachment: (was: HIVE-14039.4.patch)

> HiveServer2: Make the usage of server with JDBC thrift serde enabled, 
> backward compatible for older clients
> ---
>
> Key: HIVE-14039
> URL: https://issues.apache.org/jira/browse/HIVE-14039
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC
>Affects Versions: 2.0.1
>Reporter: Vaibhav Gumashta
>Assignee: Ziyang Zhao
> Attachments: HIVE-14039.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14039) HiveServer2: Make the usage of server with JDBC thrift serde enabled, backward compatible for older clients

2016-09-07 Thread Ziyang Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ziyang Zhao updated HIVE-14039:
---
Attachment: HIVE-14039.5.patch

> HiveServer2: Make the usage of server with JDBC thrift serde enabled, 
> backward compatible for older clients
> ---
>
> Key: HIVE-14039
> URL: https://issues.apache.org/jira/browse/HIVE-14039
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, JDBC
>Affects Versions: 2.0.1
>Reporter: Vaibhav Gumashta
>Assignee: Ziyang Zhao
> Attachments: HIVE-14039.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13555) Add nullif udf

2016-09-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472230#comment-15472230
 ] 

Lefty Leverenz commented on HIVE-13555:
---

Now would be better, because it's likely to be forgotten later.  Just make sure 
you include version information that links to this JIRA issue -- copy the way 
it's done for other UDFs.

> Add nullif udf
> --
>
> Key: HIVE-13555
> URL: https://issues.apache.org/jira/browse/HIVE-13555
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13555.1.patch, HIVE-13555.2.patch, 
> HIVE-13555.2.patch
>
>
> {{nullif(exp1, exp2)}} is shorthand for: {{case when exp1 = exp2 then null 
> else exp1}}
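
For reference, the semantics of that shorthand boil down to something like this plain-Java sketch (illustration only, not the UDF implementation):

{code}
public class NullIfSketch {
  // Mirrors CASE WHEN exp1 = exp2 THEN NULL ELSE exp1 END: a null on either
  // side makes the comparison non-true, so exp1 is returned unchanged.
  static Object nullif(Object exp1, Object exp2) {
    if (exp1 != null && exp1.equals(exp2)) {
      return null;
    }
    return exp1;
  }

  public static void main(String[] args) {
    System.out.println(nullif("a", "a"));  // null
    System.out.println(nullif("a", "b"));  // a
  }
}
{code}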



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14482) Add and drop table partition is not audit logged in HMS

2016-09-07 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated HIVE-14482:

Status: Patch Available  (was: Open)

Added audit log for add, drop and rename partitions
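
A minimal sketch of what such an audit entry could look like (standalone illustration; the real HMS wiring uses its existing audit helper, and the logger name is taken from the log in the description below, while everything else is an assumption):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: emit an audit line for a partition drop in the same
// "ugi=... ip=... cmd=..." shape as the existing get_table audit entries.
public class PartitionAuditSketch {
  private static final Logger AUDIT_LOG =
      LoggerFactory.getLogger("org.apache.hadoop.hive.metastore.HiveMetaStore.audit");

  static void logAuditEvent(String ugi, String ip, String cmd) {
    AUDIT_LOG.info("ugi={}\tip={}\tcmd={}", ugi, ip, cmd);
  }

  public static void main(String[] args) {
    // e.g. for: ALTER TABLE test DROP PARTITION (b=140)
    logAuditEvent("hive", "xx.xx.xxx.xxx",
        "source:xx.xx.xxx.xxx drop_partitions_req : db=default tbl=test part=[b=140]");
  }
}
{code}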

> Add and drop table partition is not audit logged in HMS
> ---
>
> Key: HIVE-14482
> URL: https://issues.apache.org/jira/browse/HIVE-14482
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: HIVE-14482.patch
>
>
> When running:
> {code}
> ALTER TABLE test DROP PARTITION (b=140);
> {code}
> I only see the following in the HMS log:
> {code}
> 2016-08-08 23:12:34,081 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,082 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,082 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,094 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154094 duration=13 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,095 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,095 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_partitions_by_expr : 
> db=default tbl=test
> 2016-08-08 23:12:34,096 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_partitions_by_expr 
> : db=default tbl=test
> 2016-08-08 23:12:34,112 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  start=1470723154095 end=1470723154112 duration=17 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,172 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,173 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,173 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,186 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154186 duration=14 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,186 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,187 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,187 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,199 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154199 duration=13 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,203 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,215 INFO  org.apache.hadoop.hive.metastore.ObjectStore: 
> [pool-4-thread-2]: JDO filter pushdown cannot be used: Filtering is supported 
> only on partition keys of type string
> 2016-08-08 23:12:34,226 ERROR org.apache.hadoop.hdfs.KeyProviderCache: 
> [pool-4-thread-2]: Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a keyProvider !!
> 2016-08-08 23:12:34,239 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: dropPartition() will move partition-directories to 
> trash-directory.
> 2016-08-08 23:12:34,239 INFO  hive.metastore.hivemetastoressimpl: 
> [pool-4-thread-2]: deleting  
> hdfs://:8020/user/hive/warehouse/default/test/b=140
> 2016-08-08 23:12:34,247 INFO  org.apache.hadoop.fs.TrashPolicyDefault: 
> [pool-4-thread-2]: Moved: 
> 'hdfs://:8020/user/hive/warehouse/default/test/b=140' to trash at: 
> hdfs://:8020/user/hive/.Trash/Current/user/hive/warehouse/default/test/b=140
> 

[jira] [Updated] (HIVE-14482) Add and drop table partition is not audit logged in HMS

2016-09-07 Thread Eric Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Lin updated HIVE-14482:

Attachment: HIVE-14482.patch

> Add and drop table partition is not audit logged in HMS
> ---
>
> Key: HIVE-14482
> URL: https://issues.apache.org/jira/browse/HIVE-14482
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Eric Lin
>Assignee: Eric Lin
>Priority: Minor
> Attachments: HIVE-14482.patch
>
>
> When running:
> {code}
> ALTER TABLE test DROP PARTITION (b=140);
> {code}
> I only see the following in the HMS log:
> {code}
> 2016-08-08 23:12:34,081 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,082 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,082 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,094 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154094 duration=13 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,095 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,095 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_partitions_by_expr : 
> db=default tbl=test
> 2016-08-08 23:12:34,096 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_partitions_by_expr 
> : db=default tbl=test
> 2016-08-08 23:12:34,112 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  start=1470723154095 end=1470723154112 duration=17 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,172 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,173 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,173 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,186 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154186 duration=14 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,186 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,187 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: 2: source:xx.xx.xxx.xxx get_table : db=default tbl=test
> 2016-08-08 23:12:34,187 INFO  
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: 
> ugi=hive ip=xx.xx.xxx.xxxcmd=source:xx.xx.xxx.xxx get_table : db=default 
> tbl=test
> 2016-08-08 23:12:34,199 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  end=1470723154199 duration=13 
> from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 
> retryCount=0 error=false>
> 2016-08-08 23:12:34,203 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: 
> [pool-4-thread-2]:  from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
> 2016-08-08 23:12:34,215 INFO  org.apache.hadoop.hive.metastore.ObjectStore: 
> [pool-4-thread-2]: JDO filter pushdown cannot be used: Filtering is supported 
> only on partition keys of type string
> 2016-08-08 23:12:34,226 ERROR org.apache.hadoop.hdfs.KeyProviderCache: 
> [pool-4-thread-2]: Could not find uri with key 
> [dfs.encryption.key.provider.uri] to create a keyProvider !!
> 2016-08-08 23:12:34,239 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: 
> [pool-4-thread-2]: dropPartition() will move partition-directories to 
> trash-directory.
> 2016-08-08 23:12:34,239 INFO  hive.metastore.hivemetastoressimpl: 
> [pool-4-thread-2]: deleting  
> hdfs://:8020/user/hive/warehouse/default/test/b=140
> 2016-08-08 23:12:34,247 INFO  org.apache.hadoop.fs.TrashPolicyDefault: 
> [pool-4-thread-2]: Moved: 
> 'hdfs://:8020/user/hive/warehouse/default/test/b=140' to trash at: 
> hdfs://:8020/user/hive/.Trash/Current/user/hive/warehouse/default/test/b=140
> 2016-08-08 23:12:34,247 INFO  

[jira] [Commented] (HIVE-14217) Druid integration

2016-09-07 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472141#comment-15472141
 ] 

Ashutosh Chauhan commented on HIVE-14217:
-

+1

> Druid integration
> -
>
> Key: HIVE-14217
> URL: https://issues.apache.org/jira/browse/HIVE-14217
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14217.01.patch, HIVE-14217.02.patch, 
> HIVE-14217.03.patch, HIVE-14217.04.patch, HIVE-14217.05.patch, 
> HIVE-14217.06.patch
>
>
> Allow Hive to query data in Druid



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14217) Druid integration

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472096#comment-15472096
 ] 

Hive QA commented on HIVE-14217:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827435/HIVE-14217.06.patch

{color:green}SUCCESS:{color} +1 due to 17 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10476 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1125/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1125/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1125/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827435 - PreCommit-HIVE-MASTER-Build

> Druid integration
> -
>
> Key: HIVE-14217
> URL: https://issues.apache.org/jira/browse/HIVE-14217
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14217.01.patch, HIVE-14217.02.patch, 
> HIVE-14217.03.patch, HIVE-14217.04.patch, HIVE-14217.05.patch, 
> HIVE-14217.06.patch
>
>
> Allow Hive to query data in Druid



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14608) LLAP: slow scheduling due to LlapTaskScheduler not removing nodes on kill

2016-09-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15472061#comment-15472061
 ] 

Gopal V commented on HIVE-14608:


[~sershe]: I think the original loop was written with the assumption that 
.canAcceptTask() can flip to true at some point during the loop.

That is probably a bad assumption now that we are running things in 
milliseconds - LGTM +1.

> LLAP: slow scheduling due to LlapTaskScheduler not removing nodes on kill 
> --
>
> Key: HIVE-14608
> URL: https://issues.apache.org/jira/browse/HIVE-14608
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Critical
> Attachments: HIVE-14608.patch
>
>
> See comments; this can result in a slowdown esp. if some critical task gets 
> unlucky.
> {noformat}
>   public void workerNodeRemoved(ServiceInstance serviceInstance) {
>  // FIXME: disabling this for now
> // instanceToNodeMap.remove(serviceInstance.getWorkerIdentity());
> {noformat}
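
As a sketch of the direction discussed above (simplified bookkeeping with invented types and names; the real scheduler is considerably more involved):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: when a worker instance goes away, actually remove it
// from the scheduler's node map instead of leaving the removal FIXME'd out,
// so dead nodes stop being offered to new tasks.
public class NodeRemovalSketch {
  static final class NodeInfo {
    volatile boolean disabled;
  }

  private final ConcurrentMap<String, NodeInfo> instanceToNodeMap =
      new ConcurrentHashMap<String, NodeInfo>();

  public void workerNodeRemoved(String workerIdentity) {
    NodeInfo node = instanceToNodeMap.remove(workerIdentity);  // no longer commented out
    if (node != null) {
      node.disabled = true;  // stop scheduling on this node immediately
    }
  }
}
{code}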



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471971#comment-15471971
 ] 

Vihang Karajgaonkar commented on HIVE-14717:


Looks like the TestHiveCli tests are not completing after the patch for 
HIVE-13589. The tests connect Beeline in embedded mode, which doesn't use a 
password, and because of that patch the tests try to prompt for a password and 
hang until the Maven process terminates. [~Ferd] [~Ke Jia] I think we should 
revert this patch because it is breaking tests. It can also potentially break 
backwards compatibility for users who already have scripts that assume beeline 
-u "jdbc:hive2://localhost:1" connects to an unsecured cluster. What do 
you think?

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.546 sec - 
> in org.apache.hive.beeline.TestClientCommandHookFactory
> Running org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - 
> in org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Running org.apache.hive.beeline.TestTableOutputFormat
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - 
> in org.apache.hive.beeline.TestTableOutputFormat
> Results :
> Tests run: 44, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:03 min
> [INFO] Finished at: 2016-09-07T10:57:28-07:00
> [INFO] Final Memory: 65M/949M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project hive-beeline: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/vihang/work/src/upstream/hive/beeline && 
> /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:MaxPermSize=512m -jar 
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefirebooter8133887423099901737.jar
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire6528142441948588259tmp
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire_05570572112194455658tmp
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> {noformat}
> My guess is this is related to recent fix for HIVE-13589 but needs more 
> investigation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471903#comment-15471903
 ] 

Hive QA commented on HIVE-12812:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827425/HIVE-12812.patch

{color:green}SUCCESS:{color} +1 due to 54 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10462 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[root_dir_external_table]
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[list_bucket_dml_2]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1124/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1124/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1124/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827425 - PreCommit-HIVE-MASTER-Build

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12812.patch
>
>
> When the union remove optimization is enabled, a union query with an aggregate 
> function writes its subquery intermediate results to subdirectories, which 
> require mapred.input.dir.recursive to be enabled in order to be fetched. This 
> property is not defined by default in Hive and is often overlooked by users, 
> which causes the query to fail in a way that is hard to debug.
> So we need to set mapred.input.dir.recursive to true whenever the union remove 
> optimization is enabled.
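
A tiny sketch of that coupling in plain Hadoop configuration terms (illustrative only; the actual fix lives in Hive's query planning, not in user code):

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: whenever the union-remove optimization is on, force
// recursive input-directory listing so the subdirectories written by the
// union branches are actually picked up.
public class UnionRemoveConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    boolean unionRemove = conf.getBoolean("hive.optimize.union.remove", false);
    if (unionRemove) {
      conf.setBoolean("mapred.input.dir.recursive", true);
    }
    System.out.println("mapred.input.dir.recursive = "
        + conf.getBoolean("mapred.input.dir.recursive", false));
  }
}
{code}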



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14715) Hive throws NumberFormatException with query with Null value

2016-09-07 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-14715:

Description: 
A java.lang.NumberFormatException will be thrown with the following reproduction:
set hive.cbo.enable=false;
CREATE TABLE `paqtest`(
`c1` int,
`s1` string,
`s2` string,
`bn1` bigint)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';

insert into paqtest values (58, '', 'ABC', 0);

SELECT
'PM' AS cy,
c1,
NULL AS iused,
NULL AS itp,
s2,
NULL AS cvg,
NULL AS acavg,
sum(bn1) AS cca
FROM paqtest
WHERE (s1 IS NULL OR length(s1) = 0)
GROUP BY 'Pricing mismatch', c1, NULL, NULL, s2, NULL, NULL;

The stack trace looks like the following:
java.lang.NumberFormatException: ABC
GroupByOperator.process(Object, int) line: 773  
ExecReducer.reduce(Object, Iterator, OutputCollector, Reporter) line: 236   
ReduceTask.runOldReducer(JobConf, TaskUmbilicalProtocol, TaskReporter, 
RawKeyValueIterator, RawComparator, Class, Class) line: 
444   
ReduceTask.run(JobConf, TaskUmbilicalProtocol) line: 392
LocalJobRunner$Job$ReduceTaskRunnable.run() line: 319   
Executors$RunnableAdapter.call() line: 471   

It works fine when hive.cbo.enable = true


  was:
The java.lang.NumberFormatException will throw with following reproduce:
set hive.cbo.enable=false;
CREATE TABLE `paqtest`(
`c1` int,
`s1` string,
`s2` string,
`bn1` bigint)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';

insert into paqtest values (58, '', 'ABC', 0);

SELECT
'Pricing mismatch' AS category,
c1,
NULL AS itemtype_used,
NULL AS acq_itemtype,
s2,
NULL AS currency_used_avg,
NULL AS acq_items_avg,
sum(bn1) AS cca
FROM paqtest
WHERE (s1 IS NULL OR length(s1) = 0)
GROUP BY 'Pricing mismatch', c1, NULL, NULL, s2, NULL, NULL;

The stack like following:
java.lang.NumberFormatException: ABC
GroupByOperator.process(Object, int) line: 773  
ExecReducer.reduce(Object, Iterator, OutputCollector, Reporter) line: 236   
ReduceTask.runOldReducer(JobConf, TaskUmbilicalProtocol, TaskReporter, 
RawKeyValueIterator, RawComparator, Class, Class) line: 
444   
ReduceTask.run(JobConf, TaskUmbilicalProtocol) line: 392
LocalJobRunner$Job$ReduceTaskRunnable.run() line: 319   
Executors$RunnableAdapter.call() line: 471   

It works fine when hive.cbo.enable = true



> Hive throws NumberFormatException with query with Null value
> 
>
> Key: HIVE-14715
> URL: https://issues.apache.org/jira/browse/HIVE-14715
> Project: Hive
>  Issue Type: Bug
>Reporter: Yongzhi Chen
>
> A java.lang.NumberFormatException will be thrown with the following reproduction:
> set hive.cbo.enable=false;
> CREATE TABLE `paqtest`(
> `c1` int,
> `s1` string,
> `s2` string,
> `bn1` bigint)
> ROW FORMAT SERDE
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
> STORED AS INPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
> OUTPUTFORMAT
> 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
> insert into paqtest values (58, '', 'ABC', 0);
> SELECT
> 'PM' AS cy,
> c1,
> NULL AS iused,
> NULL AS itp,
> s2,
> NULL AS cvg,
> NULL AS acavg,
> sum(bn1) AS cca
> FROM paqtest
> WHERE (s1 IS NULL OR length(s1) = 0)
> GROUP BY 'Pricing mismatch', c1, NULL, NULL, s2, NULL, NULL;
> The stack trace looks like the following:
> java.lang.NumberFormatException: ABC
> GroupByOperator.process(Object, int) line: 773
> ExecReducer.reduce(Object, Iterator, OutputCollector, Reporter) line: 236 
> ReduceTask.runOldReducer(JobConf, TaskUmbilicalProtocol, TaskReporter, 
> RawKeyValueIterator, RawComparator, Class, Class) 
> line: 444 
> ReduceTask.run(JobConf, TaskUmbilicalProtocol) line: 392  
> LocalJobRunner$Job$ReduceTaskRunnable.run() line: 319 
> Executors$RunnableAdapter.call() line: 471 
> It works fine when hive.cbo.enable = true



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14716) Duplicate pom.xml entries for mockito

2016-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471812#comment-15471812
 ] 

Sergio Peña commented on HIVE-14716:


LGTM +1

> Duplicate pom.xml entries for mockito
> -
>
> Key: HIVE-14716
> URL: https://issues.apache.org/jira/browse/HIVE-14716
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Trivial
> Attachments: HIVE-14716.01.patch
>
>
> When you build beeline there is a warning which says there are duplicate 
> pom.xml entries for mockito dependency.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hive:hive-beeline:jar:2.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 126, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 137, column 17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14011) MessageFactory is not pluggable

2016-09-07 Thread Sravya Tirukkovalur (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471759#comment-15471759
 ] 

Sravya Tirukkovalur commented on HIVE-14011:


[~mohitsabharwal] Would it be possible for you or someone else to review this fix?

> MessageFactory is not pluggable
> ---
>
> Key: HIVE-14011
> URL: https://issues.apache.org/jira/browse/HIVE-14011
> Project: Hive
>  Issue Type: Bug
>Reporter: Sravya Tirukkovalur
> Attachments: HIVE-14011.patch
>
>
> Property "hcatalog.message.factory.impl.json" is available to use a custom 
> message factory implementation. Although it is not pluggable as 
> MessageFatcory is hardcoded to use JSONMessageFactory.
> https://github.com/apache/hive/blob/26b5c7b56a4f28ce3eabc0207566cce46b29b558/hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/messaging/MessageFactory.java#L39
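
A bare-bones sketch of the pluggability being requested (the property name comes from the description; the default class name and everything else here is illustrative, not the actual fix):

{code}
import java.util.Properties;

// Hypothetical sketch: look the factory class up from configuration and
// instantiate it reflectively, instead of hardcoding JSONMessageFactory.
public class PluggableFactorySketch {
  static Object loadMessageFactory(Properties conf) throws Exception {
    String clazz = conf.getProperty(
        "hcatalog.message.factory.impl.json",
        "org.apache.hive.hcatalog.messaging.json.JSONMessageFactory");
    return Class.forName(clazz).newInstance();
  }

  public static void main(String[] args) throws Exception {
    Properties conf = new Properties();
    // Plug in any class to demonstrate the lookup; a real deployment would
    // point this at a custom MessageFactory implementation.
    conf.setProperty("hcatalog.message.factory.impl.json", "java.lang.StringBuilder");
    System.out.println(loadMessageFactory(conf).getClass().getName());
  }
}
{code}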



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14532) Enable qtests from IDE - eclipse

2016-09-07 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14532:

Status: Patch Available  (was: Open)

> Enable qtests from IDE - eclipse
> 
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch, 
> HIVE-14532.3.patch, HIVE-14532.4.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14532) Enable qtests from IDE - eclipse

2016-09-07 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14532:

Attachment: HIVE-14532.4.patch

> Enable qtests from IDE - eclipse
> 
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch, 
> HIVE-14532.3.patch, HIVE-14532.4.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-14716) Duplicate pom.xml entries for mockito

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471738#comment-15471738
 ] 

Vihang Karajgaonkar edited comment on HIVE-14716 at 9/7/16 8:44 PM:


Hi [~spena] Can you please review? Its a trivial change.


was (Author: vihangk1):
Hi [~spena] Can you please review? Its is trivial change.

> Duplicate pom.xml entries for mockito
> -
>
> Key: HIVE-14716
> URL: https://issues.apache.org/jira/browse/HIVE-14716
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Trivial
> Attachments: HIVE-14716.01.patch
>
>
> When you build beeline there is a warning which says there are duplicate 
> pom.xml entries for mockito dependency.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hive:hive-beeline:jar:2.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 126, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 137, column 17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14716) Duplicate pom.xml entries for mockito

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-14716:
---
Status: Patch Available  (was: Open)

> Duplicate pom.xml entries for mockito
> -
>
> Key: HIVE-14716
> URL: https://issues.apache.org/jira/browse/HIVE-14716
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Trivial
> Attachments: HIVE-14716.01.patch
>
>
> When you build beeline there is a warning which says there are duplicate 
> pom.xml entries for mockito dependency.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hive:hive-beeline:jar:2.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 126, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 137, column 17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14716) Duplicate pom.xml entries for mockito

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-14716:
---
Attachment: HIVE-14716.01.patch

Hi [~spena] Can you please review? Its is trivial change.

> Duplicate pom.xml entries for mockito
> -
>
> Key: HIVE-14716
> URL: https://issues.apache.org/jira/browse/HIVE-14716
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Trivial
> Attachments: HIVE-14716.01.patch
>
>
> When you build beeline there is a warning which says there are duplicate 
> pom.xml entries for mockito dependency.
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hive:hive-beeline:jar:2.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 126, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.mockito:mockito-all:jar -> duplicate declaration of version 
> (?) @ org.apache.hive:hive-beeline:[unknown-version], 
> /Users/vihang/work/src/upstream/hive/beeline/pom.xml, line 137, column 17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14591) HS2 is shut down unexpectedly during the startup time

2016-09-07 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-14591:
--
Status: Patch Available  (was: Open)

> HS2 is shut down unexpectedly during the startup time
> -
>
> Key: HIVE-14591
> URL: https://issues.apache.org/jira/browse/HIVE-14591
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14591.1.patch
>
>
> If there is issue with Zookeeper (e.g. connection issues), then it takes HS2 
> some time to connect. During this time, Ambari could issue health checks 
> against HS2 and the CloseSession call will trigger the shutdown of HS2, which 
> is not expected. That triggering should happen only when the HS2 has been 
> deregistered with Zookeeper, not during the startup time when HS2 is not 
> registered with ZK yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14591) HS2 is shut down unexpectedly during the startup time

2016-09-07 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-14591:
--
Status: Open  (was: Patch Available)

> HS2 is shut down unexpectedly during the startup time
> -
>
> Key: HIVE-14591
> URL: https://issues.apache.org/jira/browse/HIVE-14591
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14591.1.patch
>
>
> If there is issue with Zookeeper (e.g. connection issues), then it takes HS2 
> some time to connect. During this time, Ambari could issue health checks 
> against HS2 and the CloseSession call will trigger the shutdown of HS2, which 
> is not expected. That triggering should happen only when the HS2 has been 
> deregistered with Zookeeper, not during the startup time when HS2 is not 
> registered with ZK yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14591) HS2 is shut down unexpectedly during the startup time

2016-09-07 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-14591:
--
Attachment: HIVE-14591.1.patch

> HS2 is shut down unexpectedly during the startup time
> -
>
> Key: HIVE-14591
> URL: https://issues.apache.org/jira/browse/HIVE-14591
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: HIVE-14591.1.patch
>
>
> If there is issue with Zookeeper (e.g. connection issues), then it takes HS2 
> some time to connect. During this time, Ambari could issue health checks 
> against HS2 and the CloseSession call will trigger the shutdown of HS2, which 
> is not expected. That triggering should happen only when the HS2 has been 
> deregistered with Zookeeper, not during the startup time when HS2 is not 
> registered with ZK yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14591) HS2 is shut down unexpectedly during the startup time

2016-09-07 Thread Tao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Li updated HIVE-14591:
--
Attachment: (was: BUG-64741.1.patch)

> HS2 is shut down unexpectedly during the startup time
> -
>
> Key: HIVE-14591
> URL: https://issues.apache.org/jira/browse/HIVE-14591
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
>
> If there is issue with Zookeeper (e.g. connection issues), then it takes HS2 
> some time to connect. During this time, Ambari could issue health checks 
> against HS2 and the CloseSession call will trigger the shutdown of HS2, which 
> is not expected. That triggering should happen only when the HS2 has been 
> deregistered with Zookeeper, not during the startup time when HS2 is not 
> registered with ZK yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14717) Beeline tests failing

2016-09-07 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-14717:
--

Assignee: Vihang Karajgaonkar

> Beeline tests failing
> -
>
> Key: HIVE-14717
> URL: https://issues.apache.org/jira/browse/HIVE-14717
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Critical
>
> If you run mvn clean test in beeline you see the following errors:
> {noformat}
> ---
>  T E S T S
> ---
> Running org.apache.hive.beeline.cli.TestHiveCli
> Running org.apache.hive.beeline.TestBeelineArgParsing
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.584 sec - 
> in org.apache.hive.beeline.TestBeelineArgParsing
> Running org.apache.hive.beeline.TestBeeLineHistory
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.198 sec - 
> in org.apache.hive.beeline.TestBeeLineHistory
> Running org.apache.hive.beeline.TestBeeLineOpts
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.338 sec - 
> in org.apache.hive.beeline.TestBeeLineOpts
> Running org.apache.hive.beeline.TestBufferedRows
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - 
> in org.apache.hive.beeline.TestBufferedRows
> Running org.apache.hive.beeline.TestClientCommandHookFactory
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.546 sec - 
> in org.apache.hive.beeline.TestClientCommandHookFactory
> Running org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.379 sec - 
> in org.apache.hive.beeline.TestIncrementalRowsWithNormalization
> Running org.apache.hive.beeline.TestTableOutputFormat
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec - 
> in org.apache.hive.beeline.TestTableOutputFormat
> Results :
> Tests run: 44, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 01:03 min
> [INFO] Finished at: 2016-09-07T10:57:28-07:00
> [INFO] Final Memory: 65M/949M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
> project hive-beeline: ExecutionException The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/vihang/work/src/upstream/hive/beeline && 
> /Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:MaxPermSize=512m -jar 
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefirebooter8133887423099901737.jar
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire6528142441948588259tmp
>  
> /Users/vihang/work/src/upstream/hive/beeline/target/surefire/surefire_05570572112194455658tmp
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> {noformat}
> My guess is this is related to recent fix for HIVE-13589 but needs more 
> investigation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14591) HS2 is shut down unexpectedly during the startup time

2016-09-07 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471700#comment-15471700
 ] 

Vaibhav Gumashta commented on HIVE-14591:
-

+1.

Thanks for the patch [~taoli-hwx].

> HS2 is shut down unexpectedly during the startup time
> -
>
> Key: HIVE-14591
> URL: https://issues.apache.org/jira/browse/HIVE-14591
> Project: Hive
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
> Attachments: BUG-64741.1.patch
>
>
> If there is issue with Zookeeper (e.g. connection issues), then it takes HS2 
> some time to connect. During this time, Ambari could issue health checks 
> against HS2 and the CloseSession call will trigger the shutdown of HS2, which 
> is not expected. That triggering should happen only when the HS2 has been 
> deregistered with Zookeeper, not during the startup time when HS2 is not 
> registered with ZK yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14710) unify DB product type treatment in directsql and txnhandler

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471698#comment-15471698
 ] 

Hive QA commented on HIVE-14710:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827426/HIVE-14710.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10462 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1123/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1123/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1123/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827426 - PreCommit-HIVE-MASTER-Build

> unify DB product type treatment in directsql and txnhandler
> ---
>
> Key: HIVE-14710
> URL: https://issues.apache.org/jira/browse/HIVE-14710
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14710.01.patch, HIVE-14710.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13555) Add nullif udf

2016-09-07 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471691#comment-15471691
 ] 

Zoltan Haindrich commented on HIVE-13555:
-

[~leftylev] should I add this...or will it be added later - when 2.2.0 is 
released?

> Add nullif udf
> --
>
> Key: HIVE-13555
> URL: https://issues.apache.org/jira/browse/HIVE-13555
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13555.1.patch, HIVE-13555.2.patch, 
> HIVE-13555.2.patch
>
>
> {{nullif(exp1, exp2)}} is shorthand for: {{case when exp1 = exp2 then null 
> else exp1}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14532) Enable qtests from IDE - eclipse

2016-09-07 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14532:

Attachment: HIVE-14532.3.patch

> Enable qtests from IDE - eclipse
> 
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch, 
> HIVE-14532.3.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14532) Enable qtests from IDE - eclipse

2016-09-07 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14532:

Attachment: (was: HIVE-14532.3.patch)

> Enable qtests from IDE - eclipse
> 
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14532) Enable qtests from IDE - eclipse

2016-09-07 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14532:

Summary: Enable qtests from IDE - eclipse  (was: Enable qtests from IDE)

> Enable qtests from IDE - eclipse
> 
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch, 
> HIVE-14532.3.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14532) Enable qtests from IDE

2016-09-07 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471648#comment-15471648
 ] 

Zoltan Haindrich commented on HIVE-14532:
-

I've found a few idea-related issues about a week ago, and I haven't worked on 
it since...although eclipse support looks fine to me. I also wanted to add idea 
support in this patch.

I think that idea "cheats" a bit during project creation...
* if I import command-line generated project files: it seems a bit unworkable...
* importing as a pom.xml project: it looks fine at first...but cleaning the 
project and rebuilding surfaces similar issues to the ones I found when I first 
built these things with eclipse

Not sure what to do now...but because this patch adds the necessary properties 
to run qtests from the IDE and enables eclipse support, I think it would be ok to 
add this and open another ticket to address idea project file creation/import.

I think full idea support is not far away...but someone more familiar with idea 
might have better chances with it ;)

> Enable qtests from IDE
> --
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14532) Enable qtests from IDE

2016-09-07 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14532:

Attachment: HIVE-14532.3.patch

patch#3 - the eclipse output dir is in a separate folder {{target/eclipse}}, to 
reduce interference between maven and eclipse

> Enable qtests from IDE
> --
>
> Key: HIVE-14532
> URL: https://issues.apache.org/jira/browse/HIVE-14532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Minor
> Attachments: HIVE-14532.1.patch, HIVE-14532.2.patch, 
> HIVE-14532.3.patch
>
>
> with HIVE-1 applied; i've played around with executing qtest-s from 
> eclipse...after the patch seemed ok; i've checked it with:
> {code}
> git clean -dfx
> mvn package install eclipse:eclipse -Pitests -DskipTests
> mvn -q test -Pitests -Dtest=TestCliDriver -Dqfile=combine2.q
> {code}
> the last step I think is not required...but I bootstrapped and checked my 
> project integrity this way.
> After this I was able to execute {{TestCliDriver}} from eclipse using 
> {{-Dqfile=combine.q}}, other qfiles may or may not work...but will have at 
> least some chances to be usable.
> For my biggest surprise {{alter_concatenate_indexed_table.q}} also 
> passed...which contains relative file references - and I suspected that it 
> will have issues with that..
> note: I've the datanucleus plugin installed...and i use it when I need to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14217) Druid integration

2016-09-07 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14217:
---
Attachment: HIVE-14217.06.patch

> Druid integration
> -
>
> Key: HIVE-14217
> URL: https://issues.apache.org/jira/browse/HIVE-14217
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14217.01.patch, HIVE-14217.02.patch, 
> HIVE-14217.03.patch, HIVE-14217.04.patch, HIVE-14217.05.patch, 
> HIVE-14217.06.patch
>
>
> Allow Hive to query data in Druid



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14713) LDAP Authentication Provider should be covered with unit tests

2016-09-07 Thread Illya Yalovyy (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471595#comment-15471595
 ] 

Illya Yalovyy commented on HIVE-14713:
--

None of the failed tests look relevant. According to the test results history, all of 
them were failing both before and after this build.

> LDAP Authentication Provider should be covered with unit tests
> --
>
> Key: HIVE-14713
> URL: https://issues.apache.org/jira/browse/HIVE-14713
> Project: Hive
>  Issue Type: Test
>  Components: Authentication, Tests
>Affects Versions: 2.1.0
>Reporter: Illya Yalovyy
>Assignee: Illya Yalovyy
> Attachments: HIVE-14713.1.patch
>
>
> Currently LdapAuthenticationProviderImpl class is not covered with unit 
> tests. To make this class testable some minor refactoring will be required.
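As a rough illustration of the kind of refactoring meant here (the interface and class names below are hypothetical, not necessarily the ones used in the patch), the directory lookup can be hidden behind a small factory so a unit test can substitute a fake and never needs a live LDAP server:

{code}
// Hypothetical sketch of making an authentication provider unit-testable by
// injecting the directory-search dependency instead of creating it internally.
public class LdapTestabilitySketch {
    interface DirSearch {
        boolean userExists(String user);
    }

    interface DirSearchFactory {
        DirSearch getInstance(String user, String password);
    }

    static final class AuthProvider {
        private final DirSearchFactory factory;

        AuthProvider(DirSearchFactory factory) {
            this.factory = factory;
        }

        boolean authenticate(String user, String password) {
            // Bind as the user, then check that the entry exists in the directory.
            return factory.getInstance(user, password).userExists(user);
        }
    }

    public static void main(String[] args) {
        // In a unit test the factory is a stub, so no real LDAP server is needed.
        DirSearchFactory fake = (user, password) -> u -> "alice".equals(u);
        AuthProvider provider = new AuthProvider(fake);
        System.out.println(provider.authenticate("alice", "secret")); // true
        System.out.println(provider.authenticate("bob", "secret"));   // false
    }
}
{code}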



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14713) LDAP Authentication Provider should be covered with unit tests

2016-09-07 Thread Illya Yalovyy (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471594#comment-15471594
 ] 

Illya Yalovyy commented on HIVE-14713:
--

None of the failed tests look relevant. According to the test results history, all of 
them were failing both before and after this build.

> LDAP Authentication Provider should be covered with unit tests
> --
>
> Key: HIVE-14713
> URL: https://issues.apache.org/jira/browse/HIVE-14713
> Project: Hive
>  Issue Type: Test
>  Components: Authentication, Tests
>Affects Versions: 2.1.0
>Reporter: Illya Yalovyy
>Assignee: Illya Yalovyy
> Attachments: HIVE-14713.1.patch
>
>
> Currently LdapAuthenticationProviderImpl class is not covered with unit 
> tests. To make this class testable some minor refactoring will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2016-09-07 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-12812:
---
Status: Patch Available  (was: Open)

mapred.input.dir.recursive itself is not a Hive property, and it is often 
overlooked when a user runs queries (e.g. union) that need it to be enabled in 
order to process intermediate data files.
This patch enables the property by default. The change is only needed for MR, 
since Tez does something similar in its TezCompiler and Spark does not need it.
For example, the following query returns nothing if the property is not 
enabled:
{code}
set hive.compute.query.using.stats=false;
set hive.optimize.union.remove=true;
select sum(salary) from sample_07 union all select sum(salary) from sample_08; 
-- returns nothing, wrong

--
set mapred.input.dir.recursive=true;
select sum(salary) from sample_07 union all select sum(salary) from sample_08; 
-- returns two rows
39282210
40679820
{code}

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.0, 1.2.1
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12812.patch
>
>
> When union remove optimization is enabled, union query with aggregate 
> function writes its subquery intermediate results to subdirs which needs 
> mapred.input.dir.recursive to be enabled in order to be fetched. This 
> property is not defined by default in Hive and often ignored by user, which 
> causes the query failure and is hard to be debugged.
> So we need set mapred.input.dir.recursive to true whenever union remove 
> optimization is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14451) Vectorization: Add byRef mode for borrowed Strings in VectorDeserializeRow

2016-09-07 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14451:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Vectorization: Add byRef mode for borrowed Strings in VectorDeserializeRow
> --
>
> Key: HIVE-14451
> URL: https://issues.apache.org/jira/browse/HIVE-14451
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Matt McCline
> Fix For: 2.2.0
>
> Attachments: HIVE-14451.01.patch, HIVE-14451.02.patch, 
> HIVE-14451.03.patch, HIVE-14451.04.patch
>
>
> In a majority of cases, when using the OptimizedHashMap, the references to 
> the byte[] are immutable. 
> The hashmap result always allocates on boundary conditions, but never mutates 
> a previous buffer.
> Copying Strings out of the hashtable is entirely wasteful and it would be 
> easy to know when the currentBytes is a borrowed slice from the original 
> input.
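A tiny, self-contained Java sketch of the copy-vs-reference distinction being described; the StringColumn class and its setVal/setRef methods are illustrative stand-ins, not Hive's actual vector classes:

{code}
// Illustrative only: a string column can either copy the bytes it is given or
// keep a reference to the caller's buffer when that buffer is known to stay
// immutable for the lifetime of the row (the "borrowed slice" case).
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ByRefStringSketch {
    static final class StringColumn {
        byte[] bytes;
        int start;
        int length;

        void setVal(byte[] src, int start, int length) {   // copy semantics
            this.bytes = Arrays.copyOfRange(src, start, start + length);
            this.start = 0;
            this.length = length;
        }

        void setRef(byte[] src, int start, int length) {   // by-reference semantics
            this.bytes = src;
            this.start = start;
            this.length = length;
        }
    }

    public static void main(String[] args) {
        byte[] hashTableSlice = "borrowed-value".getBytes(StandardCharsets.UTF_8);
        StringColumn column = new StringColumn();
        // When the hash table never mutates a previously returned buffer, the
        // reference is enough and the copy can be skipped entirely.
        column.setRef(hashTableSlice, 0, hashTableSlice.length);
        System.out.println(new String(column.bytes, column.start, column.length,
                StandardCharsets.UTF_8));
    }
}
{code}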



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14451) Vectorization: Add byRef mode for borrowed Strings in VectorDeserializeRow

2016-09-07 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-14451:

Fix Version/s: 2.2.0

> Vectorization: Add byRef mode for borrowed Strings in VectorDeserializeRow
> --
>
> Key: HIVE-14451
> URL: https://issues.apache.org/jira/browse/HIVE-14451
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Matt McCline
> Fix For: 2.2.0
>
> Attachments: HIVE-14451.01.patch, HIVE-14451.02.patch, 
> HIVE-14451.03.patch, HIVE-14451.04.patch
>
>
> In a majority of cases, when using the OptimizedHashMap, the references to 
> the byte[] are immutable. 
> The hashmap result always allocates on boundary conditions, but never mutates 
> a previous buffer.
> Copying Strings out of the hashtable is entirely wasteful and it would be 
> easy to know when the currentBytes is a borrowed slice from the original 
> input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14451) Vectorization: Add byRef mode for borrowed Strings in VectorDeserializeRow

2016-09-07 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471496#comment-15471496
 ] 

Matt McCline commented on HIVE-14451:
-

Committed to master.

> Vectorization: Add byRef mode for borrowed Strings in VectorDeserializeRow
> --
>
> Key: HIVE-14451
> URL: https://issues.apache.org/jira/browse/HIVE-14451
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Matt McCline
> Attachments: HIVE-14451.01.patch, HIVE-14451.02.patch, 
> HIVE-14451.03.patch, HIVE-14451.04.patch
>
>
> In a majority of cases, when using the OptimizedHashMap, the references to 
> the byte[] are immutable. 
> The hashmap result always allocates on boundary conditions, but never mutates 
> a previous buffer.
> Copying Strings out of the hashtable is entirely wasteful and it would be 
> easy to know when the currentBytes is a borrowed slice from the original 
> input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-14452) Vectorization: BinarySortableDeserializeRead should delegate buffer copies to VectorDeserializeRow

2016-09-07 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline resolved HIVE-14452.
-
Resolution: Duplicate

Not a duplicate -- but fixed with HIVE-14451.

> Vectorization: BinarySortableDeserializeRead should delegate buffer copies to 
> VectorDeserializeRow
> --
>
> Key: HIVE-14452
> URL: https://issues.apache.org/jira/browse/HIVE-14452
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Gopal V
>Assignee: Matt McCline
>
> Since the VectorDeserializeRow already calls a setVal(), the copy inside the 
> lower layer is entirely wasteful.
> {code}
> BinarySortableSerDe.deserializeText(
>   inputByteBuffer, columnSortOrderIsDesc[fieldIndex], tempText);
> {code}
> With HIVE-14451, the copies can be avoided for some scenarios and retained 
> for others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14710) unify DB product type treatment in directsql and txnhandler

2016-09-07 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14710:

Attachment: HIVE-14710.01.patch

The same patch for HiveQA; looks like it was ignored

> unify DB product type treatment in directsql and txnhandler
> ---
>
> Key: HIVE-14710
> URL: https://issues.apache.org/jira/browse/HIVE-14710
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14710.01.patch, HIVE-14710.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14710) unify DB product type treatment in directsql and txnhandler

2016-09-07 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471477#comment-15471477
 ] 

Sergey Shelukhin commented on HIVE-14710:
-

hmm, it seems like METASTORE_DIRECT_SQL_MAX_ELEMENTS_IN_CLAUSE was added, which 
duplicates METASTORE_DIRECT_SQL_PARTITION_BATCH_SIZE. The latter has a value that 
makes the code detect the appropriate batch size; it seems strange to rely 
on another config setting for that - if the user wanted to configure a value, 
they could just set it directly instead of having the code detect it.

> unify DB product type treatment in directsql and txnhandler
> ---
>
> Key: HIVE-14710
> URL: https://issues.apache.org/jira/browse/HIVE-14710
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14710.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2016-09-07 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-12812:
---
Attachment: HIVE-12812.patch

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12812.patch
>
>
> When union remove optimization is enabled, union query with aggregate 
> function writes its subquery intermediate results to subdirs which needs 
> mapred.input.dir.recursive to be enabled in order to be fetched. This 
> property is not defined by default in Hive and often ignored by user, which 
> causes the query failure and is hard to be debugged.
> So we need set mapred.input.dir.recursive to true whenever union remove 
> optimization is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14710) unify DB product type treatment in directsql and txnhandler

2016-09-07 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471460#comment-15471460
 ] 

Alan Gates commented on HIVE-14710:
---

+1

> unify DB product type treatment in directsql and txnhandler
> ---
>
> Key: HIVE-14710
> URL: https://issues.apache.org/jira/browse/HIVE-14710
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-14710.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HIVE-14714) Finishing Hive on Spark causes "java.io.IOException: Stream closed"

2016-09-07 Thread Gabor Szadovszky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-14714 started by Gabor Szadovszky.
---
> Finishing Hive on Spark causes "java.io.IOException: Stream closed"
> ---
>
> Key: HIVE-14714
> URL: https://issues.apache.org/jira/browse/HIVE-14714
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.1.0
>Reporter: Gabor Szadovszky
>Assignee: Gabor Szadovszky
>
> After executing a hive command with Spark, finishing the beeline session or
> even switching the engine causes an IOException. In the case below, Ctrl-D was
> used to finish the session, but "!quit" or even "set hive.execution.engine=mr;"
> causes the same issue.
> From HS2 log:
> {code}
> 2016-09-06 16:15:12,291 WARN  org.apache.hive.spark.client.SparkClientImpl: 
> [HiveServer2-Handler-Pool: Thread-106]: Timed out shutting down remote 
> driver, interrupting...
> 2016-09-06 16:15:12,291 WARN  org.apache.hive.spark.client.SparkClientImpl: 
> [Driver]: Waiting thread interrupted, killing child process.
> 2016-09-06 16:15:12,296 WARN  org.apache.hive.spark.client.SparkClientImpl: 
> [stderr-redir-1]: Error in redirector thread.
> java.io.IOException: Stream closed
> at 
> java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:272)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.readLine(BufferedReader.java:317)
> at java.io.BufferedReader.readLine(BufferedReader.java:382)
> at 
> org.apache.hive.spark.client.SparkClientImpl$Redirector.run(SparkClientImpl.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14159) sorting of tuple array using multiple field[s]

2016-09-07 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-14159:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

Committed to master:

{noformat}
% g log -1 --stat
commit 6e76ee3aef2210b2a1efa20d92ac997800cfcb75
Author: Carl Steinbach 
Date:   Wed Sep 7 11:28:35 2016 -0700

HIVE-14159 : sorting of tuple array using multiple field[s] (Simanchal Das 
via Carl Steinbach)

 itests/src/test/resources/testconfiguration.properties                    |   1 +
 itests/src/test/resources/testconfiguration.properties.orig               |   8 +-
 ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java          |   1 +
 .../org/apache/hadoop/hive/ql/udf/generic/GenericUDFSortArrayByField.java | 202 ++
 .../apache/hadoop/hive/ql/udf/generic/TestGenericUDFSortArrayByField.java | 228 
 ql/src/test/queries/clientnegative/udf_sort_array_by_wrong1.q             |   2 +
 ql/src/test/queries/clientnegative/udf_sort_array_by_wrong2.q             |   2 +
 ql/src/test/queries/clientnegative/udf_sort_array_by_wrong3.q             |  16 ++
 ql/src/test/queries/clientpositive/udf_sort_array_by.q                    | 136 
 ql/src/test/results/beelinepositive/show_functions.q.out                  |   1 +
 ql/src/test/results/clientnegative/udf_sort_array_by_wrong1.q.out         |   1 +
 ql/src/test/results/clientnegative/udf_sort_array_by_wrong2.q.out         |   1 +
 ql/src/test/results/clientnegative/udf_sort_array_by_wrong3.q.out         |  37 
 ql/src/test/results/clientpositive/show_functions.q.out                   |   1 +
 ql/src/test/results/clientpositive/udf_sort_array_by.q.out                | 401 +++
 15 files changed, 1036 insertions(+), 2 deletions(-)
{noformat}

> sorting of tuple array using multiple field[s]
> --
>
> Key: HIVE-14159
> URL: https://issues.apache.org/jira/browse/HIVE-14159
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Simanchal Das
>Assignee: Simanchal Das
>  Labels: patch
> Fix For: 2.2.0
>
> Attachments: HIVE-14159.1.patch, HIVE-14159.2.patch, 
> HIVE-14159.3.patch, HIVE-14159.4.patch
>
>
> Problem Statement:
> When we are working with complex data structures like Avro,
> most of the time we encounter an array that contains multiple tuples, and each 
> tuple has a struct schema.
> Suppose here struct schema is like below:
> {noformat}
> {
>   "name": "employee",
>   "type": [{
>   "type": "record",
>   "name": "Employee",
>   "namespace": "com.company.Employee",
>   "fields": [{
>   "name": "empId",
>   "type": "int"
>   }, {
>   "name": "empName",
>   "type": "string"
>   }, {
>   "name": "age",
>   "type": "int"
>   }, {
>   "name": "salary",
>   "type": "double"
>   }]
>   }]
> }
> {noformat}
> Then, while running our hive query, the complex array looks like an array of employee 
> objects.
> {noformat}
> Example: 
>   //(array<Employee>)
>   
> Array[Employee(100,Foo,20,20990),Employee(500,Boo,30,50990),Employee(700,Harry,25,40990),Employee(100,Tom,35,70990)]
> {noformat}
> When implementing day-to-day business use cases, we encounter problems like 
> sorting a tuple array by specific field[s] such as empId, name, salary, etc. 
> in ASC or DESC order.
> Proposal:
> I have developed a udf 'sort_array_by' which will sort a tuple array by one 
> or more fields in ASC or DESC order as provided by the user; the default is 
> ascending order.
> {noformat}
> Example:
>   1.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Salary","ASC");
>   output: 
> array[struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(500,Boo,30,50990),struct(100,Tom,35,70990)]
>   
>   2.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,80990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","ASC");
>   output: 
> array[struct(500,Boo,30,50990),struct(500,Boo,30,80990),struct(100,Foo,20,20990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)]
>   3.Select 
> sort_array_field(array[struct(100,Foo,20,20990),struct(500,Boo,30,50990),struct(700,Harry,25,40990),struct(100,Tom,35,70990)],"Name","Salary","Age","ASC");
>   output: 
> 

[jira] [Commented] (HIVE-5867) JDBC driver and beeline should support executing an initial SQL script

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15471130#comment-15471130
 ] 

Hive QA commented on HIVE-5867:
---



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827380/HIVE-5867.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10454 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcDriver2.testMetaDataGetColumns
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1122/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1122/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1122/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827380 - PreCommit-HIVE-MASTER-Build

> JDBC driver and beeline should support executing an initial SQL script
> --
>
> Key: HIVE-5867
> URL: https://issues.apache.org/jira/browse/HIVE-5867
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Prasad Mujumdar
>Assignee: Jianguo Tian
> Attachments: HIVE-5867.1.patch
>
>
> HiveCLI support the .hiverc script that is executed at the start of the 
> session. This is helpful for things like registering UDFs, session specific 
> configs etc.
> This functionality is missing for beeline and JDBC clients. It would be 
> useful for JDBC driver to support an init script with SQL statements that's 
> automatically executed after connection. The script path can be specified via 
> JDBC connection URL. For example 
> {noformat}
> jdbc:hive2://localhost:1/default;initScript=/home/user1/scripts/init.sql
> {noformat}
> This can be added to Beeline's command line option like "-i 
> /home/user1/scripts/init.sql"
> To help transition from HiveCLI to Beeline, we can keep the default init 
> script as $HOME/.hiverc
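For illustration, a short Java sketch of how the proposed URL parameter would be used from a JDBC client. The {{initScript}} parameter is only the proposal in this issue, not an existing option, and the example assumes the Hive JDBC driver is on the classpath and HS2 is listening on its default port 10000:

{code}
// Sketch only: connect with the proposed initScript URL parameter so that
// init.sql runs right after the session is opened.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InitScriptSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://localhost:10000/default;"
                + "initScript=/home/user1/scripts/init.sql";   // proposed parameter
        try (Connection conn = DriverManager.getConnection(url, "user1", "");
             Statement stmt = conn.createStatement()) {
            // UDF registrations and session-level configs from init.sql would
            // already be applied by the time the first statement runs.
            stmt.execute("SHOW TABLES");
        }
    }
}
{code}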



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14713) LDAP Authentication Provider should be covered with unit tests

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470955#comment-15470955
 ] 

Hive QA commented on HIVE-14713:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827375/HIVE-14713.1.patch

{color:green}SUCCESS:{color} +1 due to 13 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10519 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1121/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1121/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1121/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827375 - PreCommit-HIVE-MASTER-Build

> LDAP Authentication Provider should be covered with unit tests
> --
>
> Key: HIVE-14713
> URL: https://issues.apache.org/jira/browse/HIVE-14713
> Project: Hive
>  Issue Type: Test
>  Components: Authentication, Tests
>Affects Versions: 2.1.0
>Reporter: Illya Yalovyy
>Assignee: Illya Yalovyy
> Attachments: HIVE-14713.1.patch
>
>
> Currently LdapAuthenticationProviderImpl class is not covered with unit 
> tests. To make this class testable some minor refactoring will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14542) VirtualColumn::equals() should use object equality

2016-09-07 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470883#comment-15470883
 ] 

Eugene Koifman commented on HIVE-14542:
---

Test failures are not related.
[~gopalv] could you please review?

> VirtualColumn::equals() should use object equality
> --
>
> Key: HIVE-14542
> URL: https://issues.apache.org/jira/browse/HIVE-14542
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor, Transactions
>Affects Versions: 0.14.0
>Reporter: Gopal V
>Assignee: Eugene Koifman
>Priority: Minor
> Attachments: HIVE-14542.3.patch, HIVE-14542.4.patch, 
> HIVE-14542.5.patch, HIVE-14542.6.patch, HIVE-14542.patch, virtual-columns.png
>
>
> The VirtualColumn() constructor is private and is only called to initialize 5 
> static objects.
> !virtual-columns.png!
> There's no reason for VirtualColumn::equals() to do a deep type inspection 
> for each access of a complex type like ROW__ID.
> {code}
>   else if(vc.equals(VirtualColumn.ROWID)) {
> if(ctx.getIoCxt().getRecordIdentifier() == null) {
>   vcValues[i] = null;
> }
> {code}
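A short, hypothetical Java sketch of the point being made: when every instance comes from a fixed set of statics behind a private constructor, equals() can be plain reference equality instead of a deep comparison. The class below is illustrative, not the real VirtualColumn:

{code}
// Illustrative sketch: identity-based equals() for a closed set of static instances.
public final class VirtualColumnSketch {
    public static final VirtualColumnSketch ROWID =
            new VirtualColumnSketch("ROW__ID");
    public static final VirtualColumnSketch FILENAME =
            new VirtualColumnSketch("INPUT__FILE__NAME");

    private final String name;

    private VirtualColumnSketch(String name) {   // private: only the statics exist
        this.name = name;
    }

    @Override
    public boolean equals(Object other) {
        return this == other;                    // no deep type inspection needed
    }

    @Override
    public int hashCode() {
        return System.identityHashCode(this);
    }

    @Override
    public String toString() {
        return name;
    }

    public static void main(String[] args) {
        VirtualColumnSketch vc = ROWID;
        System.out.println(vc.equals(ROWID));    // true
        System.out.println(vc.equals(FILENAME)); // false
    }
}
{code}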



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14626) Support Trash in Truncate Table

2016-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470814#comment-15470814
 ] 

Sergio Peña commented on HIVE-14626:


Looks good [~ctang.ma]
+1

> Support Trash in Truncate Table
> ---
>
> Key: HIVE-14626
> URL: https://issues.apache.org/jira/browse/HIVE-14626
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>Priority: Minor
> Attachments: HIVE-14626.1.patch, HIVE-14626.patch
>
>
> Currently Truncate Table (or Partition) is implemented using 
> FileSystem.delete and then recreate the directory, so
> 1. it does not support HDFS Trash
> 2. if the table/partition directory is initially encryption protected, after 
> being deleted and recreated, it is no more protected.
> The new implementation is to clean the contents of directory using 
> multi-threaded trashFiles. If Trash is enabled and has a lower encryption 
> level than the data directory, the files under it will be deleted. Otherwise, 
> they will be Trashed
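For illustration, a minimal Java sketch of the trash-or-delete decision the last sentence describes; the enum and helper below are hypothetical, not the patch's actual code:

{code}
// Hypothetical sketch: files are moved to Trash only when the Trash location's
// encryption is at least as strong as the data directory's; otherwise moving
// them would weaken their protection, so they are deleted instead.
public class TruncateTrashSketch {
    enum EncryptionStrength { NONE, WEAK, STRONG }

    static boolean canTrash(EncryptionStrength trashDir, EncryptionStrength dataDir) {
        return trashDir.ordinal() >= dataDir.ordinal();
    }

    public static void main(String[] args) {
        // Encrypted table, unencrypted trash: fall back to delete.
        System.out.println(canTrash(EncryptionStrength.NONE, EncryptionStrength.STRONG));   // false
        // Same (or stronger) encryption on the trash side: safe to trash.
        System.out.println(canTrash(EncryptionStrength.STRONG, EncryptionStrength.STRONG)); // true
    }
}
{code}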



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14626) Support Trash in Truncate Table

2016-09-07 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470792#comment-15470792
 ] 

Chaoyu Tang commented on HIVE-14626:


The failed tests are not related to this patch; they are pre-existing (aged) failures.

> Support Trash in Truncate Table
> ---
>
> Key: HIVE-14626
> URL: https://issues.apache.org/jira/browse/HIVE-14626
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>Priority: Minor
> Attachments: HIVE-14626.1.patch, HIVE-14626.patch
>
>
> Currently Truncate Table (or Partition) is implemented using 
> FileSystem.delete and then recreate the directory, so
> 1. it does not support HDFS Trash
> 2. if the table/partition directory is initially encryption protected, after 
> being deleted and recreated, it is no more protected.
> The new implementation is to clean the contents of directory using 
> multi-threaded trashFiles. If Trash is enabled and has a lower encryption 
> level than the data directory, the files under it will be deleted. Otherwise, 
> they will be Trashed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-5867) JDBC driver and beeline should support executing an initial SQL script

2016-09-07 Thread Jianguo Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianguo Tian updated HIVE-5867:
---
Status: Patch Available  (was: Open)

> JDBC driver and beeline should support executing an initial SQL script
> --
>
> Key: HIVE-5867
> URL: https://issues.apache.org/jira/browse/HIVE-5867
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Prasad Mujumdar
>Assignee: Jianguo Tian
> Attachments: HIVE-5867.1.patch
>
>
> HiveCLI support the .hiverc script that is executed at the start of the 
> session. This is helpful for things like registering UDFs, session specific 
> configs etc.
> This functionality is missing for beeline and JDBC clients. It would be 
> useful for JDBC driver to support an init script with SQL statements that's 
> automatically executed after connection. The script path can be specified via 
> JDBC connection URL. For example 
> {noformat}
> jdbc:hive2://localhost:1/default;initScript=/home/user1/scripts/init.sql
> {noformat}
> This can be added to Beeline's command line option like "-i 
> /home/user1/scripts/init.sql"
> To help transition from HiveCLI to Beeline, we can keep the default init 
> script as $HOME/.hiverc



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-5867) JDBC driver and beeline should support executing an initial SQL script

2016-09-07 Thread Jianguo Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianguo Tian updated HIVE-5867:
---
Attachment: HIVE-5867.1.patch

> JDBC driver and beeline should support executing an initial SQL script
> --
>
> Key: HIVE-5867
> URL: https://issues.apache.org/jira/browse/HIVE-5867
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Prasad Mujumdar
>Assignee: Jianguo Tian
> Attachments: HIVE-5867.1.patch
>
>
> HiveCLI support the .hiverc script that is executed at the start of the 
> session. This is helpful for things like registering UDFs, session specific 
> configs etc.
> This functionality is missing for beeline and JDBC clients. It would be 
> useful for JDBC driver to support an init script with SQL statements that's 
> automatically executed after connection. The script path can be specified via 
> JDBC connection URL. For example 
> {noformat}
> jdbc:hive2://localhost:1/default;initScript=/home/user1/scripts/init.sql
> {noformat}
> This can be added to Beeline's command line option like "-i 
> /home/user1/scripts/init.sql"
> To help transition from HiveCLI to Beeline, we can keep the default init 
> script as $HOME/.hiverc



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-5867) JDBC driver and beeline should support executing an initial SQL script

2016-09-07 Thread Jianguo Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianguo Tian updated HIVE-5867:
---
Attachment: (was: HIVE-5867.1.patch)

> JDBC driver and beeline should support executing an initial SQL script
> --
>
> Key: HIVE-5867
> URL: https://issues.apache.org/jira/browse/HIVE-5867
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Prasad Mujumdar
>Assignee: Jianguo Tian
>
> HiveCLI support the .hiverc script that is executed at the start of the 
> session. This is helpful for things like registering UDFs, session specific 
> configs etc.
> This functionality is missing for beeline and JDBC clients. It would be 
> useful for JDBC driver to support an init script with SQL statements that's 
> automatically executed after connection. The script path can be specified via 
> JDBC connection URL. For example 
> {noformat}
> jdbc:hive2://localhost:1/default;initScript=/home/user1/scripts/init.sql
> {noformat}
> This can be added to Beeline's command line option like "-i 
> /home/user1/scripts/init.sql"
> To help transition from HiveCLI to Beeline, we can keep the default init 
> script as $HOME/.hiverc



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14713) LDAP Authentication Provider should be covered with unit tests

2016-09-07 Thread Illya Yalovyy (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470777#comment-15470777
 ] 

Illya Yalovyy commented on HIVE-14713:
--

CR: https://reviews.apache.org/r/51694/


> LDAP Authentication Provider should be covered with unit tests
> --
>
> Key: HIVE-14713
> URL: https://issues.apache.org/jira/browse/HIVE-14713
> Project: Hive
>  Issue Type: Test
>  Components: Authentication, Tests
>Affects Versions: 2.1.0
>Reporter: Illya Yalovyy
>Assignee: Illya Yalovyy
> Attachments: HIVE-14713.1.patch
>
>
> Currently LdapAuthenticationProviderImpl class is not covered with unit 
> tests. To make this class testable some minor refactoring will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14626) Support Trash in Truncate Table

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470771#comment-15470771
 ] 

Hive QA commented on HIVE-14626:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827362/HIVE-14626.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10453 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1120/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1120/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1120/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827362 - PreCommit-HIVE-MASTER-Build

> Support Trash in Truncate Table
> ---
>
> Key: HIVE-14626
> URL: https://issues.apache.org/jira/browse/HIVE-14626
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>Priority: Minor
> Attachments: HIVE-14626.1.patch, HIVE-14626.patch
>
>
> Currently Truncate Table (or Partition) is implemented using 
> FileSystem.delete and then recreate the directory, so
> 1. it does not support HDFS Trash
> 2. if the table/partition directory is initially encryption protected, after 
> being deleted and recreated, it is no more protected.
> The new implementation is to clean the contents of directory using 
> multi-threaded trashFiles. If Trash is enabled and has a lower encryption 
> level than the data directory, the files under it will be deleted. Otherwise, 
> they will be Trashed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14713) LDAP Authentication Provider should be covered with unit tests

2016-09-07 Thread Illya Yalovyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illya Yalovyy updated HIVE-14713:
-
Status: Patch Available  (was: Open)

> LDAP Authentication Provider should be covered with unit tests
> --
>
> Key: HIVE-14713
> URL: https://issues.apache.org/jira/browse/HIVE-14713
> Project: Hive
>  Issue Type: Test
>  Components: Authentication, Tests
>Affects Versions: 2.1.0
>Reporter: Illya Yalovyy
>Assignee: Illya Yalovyy
> Attachments: HIVE-14713.1.patch
>
>
> Currently LdapAuthenticationProviderImpl class is not covered with unit 
> tests. To make this class testable some minor refactoring will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14713) LDAP Authentication Provider should be covered with unit tests

2016-09-07 Thread Illya Yalovyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illya Yalovyy updated HIVE-14713:
-
Attachment: HIVE-14713.1.patch

> LDAP Authentication Provider should be covered with unit tests
> --
>
> Key: HIVE-14713
> URL: https://issues.apache.org/jira/browse/HIVE-14713
> Project: Hive
>  Issue Type: Test
>  Components: Authentication, Tests
>Affects Versions: 2.1.0
>Reporter: Illya Yalovyy
>Assignee: Illya Yalovyy
> Attachments: HIVE-14713.1.patch
>
>
> Currently LdapAuthenticationProviderImpl class is not covered with unit 
> tests. To make this class testable some minor refactoring will be required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14626) Support Trash in Truncate Table

2016-09-07 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14626:
---
Attachment: HIVE-14626.1.patch

[~spena] I revised the patch. It cleans the directory contents only when the directory is 
encryption-zone protected. For the other cases, we trash/delete the directory 
followed by recreation, as you suggested.
I have attached the new patch here and also uploaded it to RB for review. Thanks. 

> Support Trash in Truncate Table
> ---
>
> Key: HIVE-14626
> URL: https://issues.apache.org/jira/browse/HIVE-14626
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>Priority: Minor
> Attachments: HIVE-14626.1.patch, HIVE-14626.patch
>
>
> Currently Truncate Table (or Partition) is implemented using 
> FileSystem.delete and then recreate the directory, so
> 1. it does not support HDFS Trash
> 2. if the table/partition directory is initially encryption protected, after 
> being deleted and recreated, it is no more protected.
> The new implementation is to clean the contents of directory using 
> multi-threaded trashFiles. If Trash is enabled and has a lower encryption 
> level than the data directory, the files under it will be deleted. Otherwise, 
> they will be Trashed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14706) Lineage information not set properly

2016-09-07 Thread Vimal Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vimal Sharma updated HIVE-14706:

Priority: Critical  (was: Major)

> Lineage information not set properly
> 
>
> Key: HIVE-14706
> URL: https://issues.apache.org/jira/browse/HIVE-14706
> Project: Hive
>  Issue Type: Bug
>Reporter: Vimal Sharma
>Priority: Critical
>
> I am trying to fetch column level lineage after a CTAS query in a Post 
> Execution hook in Hive. Below are the queries:
> {code}
> create table t1(id int, name string);
> create table t2 as select * from t1;
> {code}
> The lineage information is retrieved using the following sample piece of code:
> {code}
> lInfo = hookContext.getLinfo()
> for(Map.Entry<DependencyKey, Dependency> e : 
> lInfo.entrySet()) {
> System.out.println("Col Lineage Key : " + e.getKey());
> System.out.println("Col Lineage Value: " + e.getValue());
> }
> {code}
> The Dependency field (i.e. Col Lineage Value) is coming in as null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11414) Fix OOM in MapTask with many input partitions with RCFile

2016-09-07 Thread Wang Zhiqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470173#comment-15470173
 ] 

Wang Zhiqiang commented on HIVE-11414:
--

We met the same issue when processing too many small RCFiles. Is anyone 
working on this?
Setting mapred.min.split.size.per.node to a small value can work around it, but we 
want to set mapred.min.split.size.per.node to a bigger default value (e.g. 32M) to 
reduce map overhead.

> Fix OOM in MapTask with many input partitions with RCFile
> -
>
> Key: HIVE-11414
> URL: https://issues.apache.org/jira/browse/HIVE-11414
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats, Serializers/Deserializers
>Affects Versions: 0.11.0, 0.12.0, 0.14.0, 0.13.1, 1.2.0
>Reporter: Zheng Shao
>Priority: Minor
>
> MapTask hit OOM in the following situation in our production environment:
> * src: 2048 partitions, each with 1 file of about 2MB using RCFile format
> * query: INSERT OVERWRITE TABLE tgt SELECT * FROM src
> * Hadoop version: Both on CDH 4.7 using MR1 and CDH 5.4.1 using YARN.
> * MapTask memory Xmx: 1.5GB
> By analyzing the heap dump using jhat, we realized that the problem is:
> * A single mapper is processing many partitions (because of 
> CombineHiveInputFormat)
> * Each input path (equivalent to a partition here) constructs its own SerDe
> * Each SerDe does its own caching of the deserialized object (and tries to 
> reuse it), but never releases it (in this case, 
> serde2.columnar.ColumnarSerDeBase has a field cachedLazyStruct which can take 
> a lot of space - pretty much the last N rows of a file, where N is the number 
> of rows in a columnar block).
> * This problem may exist in other SerDes as well, but columnar file formats 
> are affected the most because they need a bigger cache for the last N rows 
> instead of just one row.
> Proposed solution:
> * Remove cachedLazyStruct in serde2.columnar.ColumnarSerDeBase.  The cost 
> saving of not recreating a single object is too small compared to the cost of 
> processing N rows.
> Alternative solutions:
> * We could also free up the whole SerDe after processing a block/file.  The 
> problem with that is that the input splits may contain multiple blocks/files 
> that map to the same SerDe, and recreating a SerDe is a much bigger change to 
> the code.
> * We could also move the SerDe creation/free-up to the place where the input 
> file changes.  But that requires a much bigger change to the code.
> * We could also add a "cleanup()" method to the SerDe interface that releases 
> the cached object, but that change is not backward compatible with the many 
> SerDes that people have written.
> * We could make cachedLazyStruct in serde2.columnar.ColumnarSerDeBase a weakly 
> referenced object, but that feels like overkill.
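
A schematic of the memory pattern being described - deliberately not Hive's ColumnarSerDeBase, just a toy that contrasts the current per-SerDe cache with the proposed no-cache version and the weak-reference alternative:

{code}
import java.lang.ref.WeakReference;

/** Toy classes only; the real code lives in serde2.columnar.ColumnarSerDeBase. */
public class SerDeCacheSketch {

  /** Current pattern: one cached row object per SerDe, held for the SerDe's lifetime. */
  static class CachingSerDe {
    private Object cachedRow;                           // plays the role of cachedLazyStruct

    Object deserialize(byte[] block) {
      if (cachedRow == null) {
        cachedRow = new byte[block.length];             // stand-in for a columnar row batch
      }
      return cachedRow;                                 // reused, but never released
    }
  }

  /** Proposed fix: allocate per call and let the GC reclaim it. */
  static class NonCachingSerDe {
    Object deserialize(byte[] block) {
      return new byte[block.length];
    }
  }

  /** One of the alternatives mentioned: keep the cache, but only weakly. */
  static class WeaklyCachingSerDe {
    private WeakReference<Object> cachedRow = new WeakReference<Object>(null);

    Object deserialize(byte[] block) {
      Object row = cachedRow.get();
      if (row == null) {
        row = new byte[block.length];
        cachedRow = new WeakReference<Object>(row);     // cleared by the GC under memory pressure
      }
      return row;
    }
  }
}
{code}

With 2048 partitions assigned to one mapper, the first pattern keeps roughly 2048 cached row batches alive at once, which is what exhausts the 1.5GB heap in the scenario above.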



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14686) Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS ... AS"

2016-09-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470166#comment-15470166
 ] 

Hive QA commented on HIVE-14686:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12827337/HIVE-14686.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10453 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
TestHiveCli - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1119/testReport
Console output: 
https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1119/console
Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1119/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12827337 - PreCommit-HIVE-MASTER-Build

> Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS 
> ... AS"
> --
>
> Key: HIVE-14686
> URL: https://issues.apache.org/jira/browse/HIVE-14686
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14686.1.patch, HIVE-14686.2.patch, 
> HIVE-14686.3.patch
>
>
> See the query:
> {code}
> create table if not exists DST as select * from SRC;
> {code}
> If the table DST doesn't exist, SessionState.get().getHiveOperation() will 
> return HiveOperation.CREATETABLE_AS_SELECT, but if the table DST already 
> exists, it will return HiveOperation.CREATETABLE.
> This causes trouble for anyone who determines the operation type from 
> SessionState.get().getHiveOperation().
> The reason is that analyzeCreateTable in SemanticAnalyzer.java returns null 
> without setting the correct command type when the table already exists.
> Here is the related code:
> {code}
> // check for existence of table
> if (ifNotExists) {
>   try {
>     Table table = getTable(qualifiedTabName, false);
>     if (table != null) { // table exists
>       return null;
>     }
>   } catch (HiveException e) {
>     // should not occur since second parameter to getTableWithQN is false
>     throw new IllegalStateException("Unxpected Exception thrown: " +
>         e.getMessage(), e);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14686) Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS ... AS"

2016-09-07 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14686:
-
Status: Patch Available  (was: Open)

Set the command type as soon as it is determined; update the test output files 
accordingly.
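
In toy form, the idea is to record the operation before the ifNotExists early return rather than after it; the following only illustrates that ordering and is not the actual patch:

{code}
// Toy illustration of the fix idea (not the actual HIVE-14686 patch): record
// the operation as soon as it is known, before any early return, so callers
// that later ask for it see CREATETABLE_AS_SELECT even when the table exists
// and the statement becomes a no-op.
public class CommandTypeSketch {

  enum Operation { CREATETABLE, CREATETABLE_AS_SELECT }

  static Operation recorded = Operation.CREATETABLE;   // the value the bug report sees today

  /** Returns null to signal "nothing to do", as analyzeCreateTable does for IF NOT EXISTS. */
  static Object analyzeCtas(boolean ifNotExists, boolean tableExists) {
    // Record the operation first ...
    recorded = Operation.CREATETABLE_AS_SELECT;

    // ... so this early return no longer leaves the default behind.
    if (ifNotExists && tableExists) {
      return null;
    }
    return new Object();                               // stand-in for building the CreateTableDesc
  }

  public static void main(String[] args) {
    analyzeCtas(true, true);
    System.out.println(recorded);                      // prints CREATETABLE_AS_SELECT
  }
}
{code}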

> Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS 
> ... AS"
> --
>
> Key: HIVE-14686
> URL: https://issues.apache.org/jira/browse/HIVE-14686
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14686.1.patch, HIVE-14686.2.patch, 
> HIVE-14686.3.patch
>
>
> See the query:
> {code}
> create table if not exists DST as select * from SRC;
> {code}
> If the table DST doesn't exist, SessionState.get().getHiveOperation() will 
> return HiveOperation.CREATETABLE_AS_SELECT, but if the table DST already 
> exists, it will return HiveOperation.CREATETABLE.
> This causes trouble for anyone who determines the operation type from 
> SessionState.get().getHiveOperation().
> The reason is that analyzeCreateTable in SemanticAnalyzer.java returns null 
> without setting the correct command type when the table already exists.
> Here is the related code:
> {code}
> // check for existence of table
> if (ifNotExists) {
>   try {
>     Table table = getTable(qualifiedTabName, false);
>     if (table != null) { // table exists
>       return null;
>     }
>   } catch (HiveException e) {
>     // should not occur since second parameter to getTableWithQN is false
>     throw new IllegalStateException("Unxpected Exception thrown: " +
>         e.getMessage(), e);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14686) Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS ... AS"

2016-09-07 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14686:
-
Attachment: HIVE-14686.3.patch

> Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS 
> ... AS"
> --
>
> Key: HIVE-14686
> URL: https://issues.apache.org/jira/browse/HIVE-14686
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14686.1.patch, HIVE-14686.2.patch, 
> HIVE-14686.3.patch
>
>
> See the query:
> {code}
> create table if not exists DST as select * from SRC;
> {code}
> If the table DST doesn't exist, SessionState.get().getHiveOperation() will 
> return HiveOperation.CREATETABLE_AS_SELECT, but if the table DST already 
> exists, it will return HiveOperation.CREATETABLE.
> This causes trouble for anyone who determines the operation type from 
> SessionState.get().getHiveOperation().
> The reason is that analyzeCreateTable in SemanticAnalyzer.java returns null 
> without setting the correct command type when the table already exists.
> Here is the related code:
> {code}
> // check for existence of table
> if (ifNotExists) {
>   try {
>     Table table = getTable(qualifiedTabName, false);
>     if (table != null) { // table exists
>       return null;
>     }
>   } catch (HiveException e) {
>     // should not occur since second parameter to getTableWithQN is false
>     throw new IllegalStateException("Unxpected Exception thrown: " +
>         e.getMessage(), e);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14686) Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS ... AS"

2016-09-07 Thread Fan Yunbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fan Yunbo updated HIVE-14686:
-
Status: Open  (was: Patch Available)

> Get unexpected command type when execute query "CREATE TABLE IF NOT EXISTS 
> ... AS"
> --
>
> Key: HIVE-14686
> URL: https://issues.apache.org/jira/browse/HIVE-14686
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Fan Yunbo
>Assignee: Fan Yunbo
> Fix For: 2.2.0
>
> Attachments: HIVE-14686.1.patch, HIVE-14686.2.patch, 
> HIVE-14686.3.patch
>
>
> See the query:
> {code}
> create table if not exists DST as select * from SRC;
> {code}
> If the table DST doesn't exist, SessionState.get().getHiveOperation() will 
> return HiveOperation.CREATETABLE_AS_SELECT, but if the table DST already 
> exists, it will return HiveOperation.CREATETABLE.
> This causes trouble for anyone who determines the operation type from 
> SessionState.get().getHiveOperation().
> The reason is that analyzeCreateTable in SemanticAnalyzer.java returns null 
> without setting the correct command type when the table already exists.
> Here is the related code:
> {code}
> // check for existence of table
> if (ifNotExists) {
>   try {
>     Table table = getTable(qualifiedTabName, false);
>     if (table != null) { // table exists
>       return null;
>     }
>   } catch (HiveException e) {
>     // should not occur since second parameter to getTableWithQN is false
>     throw new IllegalStateException("Unxpected Exception thrown: " +
>         e.getMessage(), e);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14712) Insert fails when using union all

2016-09-07 Thread BALAMURUGAN (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BALAMURUGAN updated HIVE-14712:
---
Affects Version/s: 1.2.1

> Insert fails when using union all
> -
>
> Key: HIVE-14712
> URL: https://issues.apache.org/jira/browse/HIVE-14712
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.2.1
>Reporter: BALAMURUGAN
>
> HUE version * 2.6.1-3485
> hdp 2.3.4
> insert overwrite table  trn_operation
> Select code , opcode from
> (
>   select code , opcode from master
>   union all
>  select code, opcode from master
> )as base;
> With UNION the insert succeeds, but with UNION ALL it fails with the error below.
> ERROR : Failed with exception checkPaths
> at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2491)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2905)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1659)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:298)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14712) Insert fails when using union all

2016-09-07 Thread BALAMURUGAN (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BALAMURUGAN updated HIVE-14712:
---
Description: 
HUE version * 2.6.1-3485
hdp 2.3.4


insert overwrite table  trn_operation
Select code , opcode from
(
  select code , opcode from master
  union all
 select code, opcode from master
)as base;


With UNION the insert succeeds, but with UNION ALL it fails with the error below.


ERROR : Failed with exception checkPaths

at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2491)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2905)
at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1659)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:298)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)

  was:
HUE version * 2.6.1-3485
hdp 2.3.4


insert overwrite trn_operation
Select code , opcode from
(
  select code , opcode from master
  union all
 select code, opcode from master
)as base;


With UNION the insert succeeds, but with UNION ALL it fails with the error below.


ERROR : Failed with exception checkPaths

at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2491)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2905)
at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1659)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:298)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)


> Insert fails when using union all
> -
>
> Key: HIVE-14712
> URL: https://issues.apache.org/jira/browse/HIVE-14712
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Reporter: BALAMURUGAN
>
> HUE version * 2.6.1-3485
> hdp 2.3.4
> insert overwrite table  trn_operation
> Select code , opcode from
> (
>   select code , opcode from master
>   union all
>  select code, opcode from master
> )as base;
> With UNION the insert succeeds, but with UNION ALL it fails with the error below.
> ERROR : Failed with exception checkPaths
> at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2491)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2905)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1659)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:298)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14712) Insert fails when using union all

2016-09-07 Thread BALAMURUGAN (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BALAMURUGAN updated HIVE-14712:
---
Description: 
HUE version * 2.6.1-3485
hdp 2.3.4


insert overwrite trn_operation
Select code , opcode from
(
  select code , opcode from master
  union all
 select code, opcode from master
)as base;


With UNION the insert succeeds, but with UNION ALL it fails with the error below.


ERROR : Failed with exception checkPaths

at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2491)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2905)
at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1659)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:298)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)

> Insert fails when using union all
> -
>
> Key: HIVE-14712
> URL: https://issues.apache.org/jira/browse/HIVE-14712
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Reporter: BALAMURUGAN
>
> HUE version * 2.6.1-3485
> hdp 2.3.4
> insert overwrite trn_operation
> Select code , opcode from
> (
>   select code , opcode from master
>   union all
>  select code, opcode from master
> )as base;
> With UNION the insert succeeds, but with UNION ALL it fails with the error below.
> ERROR : Failed with exception checkPaths
> at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2491)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2905)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1659)
>   at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:298)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14698) Metastore: Datanucleus MSSQLServerAdapter generates incorrect syntax for OFFSET-FETCH clause

2016-09-07 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-14698:

Attachment: HIVE-14698.2.patch

> Metastore: Datanucleus MSSQLServerAdapter generates incorrect syntax for 
> OFFSET-FETCH clause
> 
>
> Key: HIVE-14698
> URL: https://issues.apache.org/jira/browse/HIVE-14698
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14698.1.patch, HIVE-14698.2.patch
>
>
> See the bug description here: 
> https://github.com/datanucleus/datanucleus-rdbms/issues/110. 
> In ObjectStore#listStorageDescriptorsWithCD, we set a range on the query. For 
> MSSQLServer version >= 12, this results in an OFFSET-FETCH clause generated by 
> the MSSQLServerAdapter (provided by DataNucleus).
> I'll attach a short-term workaround for Hive here, and once DN has the fix we 
> can upgrade and remove the short-term fix from Hive.
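
For context, the range in question is a plain JDO range; a sketch of the call shape (not Hive's ObjectStore code) follows. Note that T-SQL only accepts OFFSET ... FETCH after an ORDER BY, so any generated statement of that form needs an ordering attached; the field name used here is hypothetical:

{code}
import java.util.List;

import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class RangeQuerySketch {
  /** Fetches the first n candidate objects; the store adapter decides how to page. */
  @SuppressWarnings("unchecked")
  public static List<Object> firstN(PersistenceManager pm, Class<?> candidate, long n) {
    Query query = pm.newQuery(candidate);
    query.setOrdering("this.id ascending");   // hypothetical field, just to have an ORDER BY
    query.setRange(0, n);                     // translated to the adapter's paging clause
                                              // (OFFSET/FETCH on SQL Server 2012+)
    return (List<Object>) query.execute();
  }
}
{code}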



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13112) Expose Lineage information in case of CTAS

2016-09-07 Thread Vimal Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469722#comment-15469722
 ] 

Vimal Sharma commented on HIVE-13112:
-

[~rhbutani] Can you please verify the issue and help? I have created a JIRA for 
it at https://issues.apache.org/jira/browse/HIVE-14706

> Expose Lineage information in case of CTAS
> --
>
> Key: HIVE-13112
> URL: https://issues.apache.org/jira/browse/HIVE-13112
> Project: Hive
>  Issue Type: Bug
>  Components: lineage
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 2.1.0
>
> Attachments: HIVE-13112.01.patch
>
>
> This is not happening because lineage is exposed by MoveTask, which checks for 
> a LoadTableDesc; in the CTAS case the table is created after the MoveTask runs.
> The proposed solution is to add a flag to CreateTableDesc to track a CTAS 
> operation, and have the DDLTask expose the lineage when this flag is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13555) Add nullif udf

2016-09-07 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469691#comment-15469691
 ] 

Lefty Leverenz commented on HIVE-13555:
---

Doc note:  The nullif UDF should be documented in the wiki for release 2.2.0.

* [Hive Operators and UDFs -- Conditional Functions | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ConditionalFunctions]

Added a TODOC2.2 label.

> Add nullif udf
> --
>
> Key: HIVE-13555
> URL: https://issues.apache.org/jira/browse/HIVE-13555
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13555.1.patch, HIVE-13555.2.patch, 
> HIVE-13555.2.patch
>
>
> {{nullif(exp1, exp2)}} is shorthand for: {{case when exp1 = exp2 then null 
> else exp1 end}}
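
A quick usage sketch over Hive JDBC once the UDF is available; the connection URL, credentials, and the requests/response_ms table and column are placeholders:

{code}
// NULLIF(a, b) behaves like CASE WHEN a = b THEN NULL ELSE a END.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NullIfExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement();
         // Turn sentinel -1 values into NULL so they drop out of AVG().
         ResultSet rs = stmt.executeQuery(
             "SELECT AVG(nullif(response_ms, -1)) FROM requests")) {
      while (rs.next()) {
        System.out.println(rs.getDouble(1));
      }
    }
  }
}
{code}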



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-5867) JDBC driver and beeline should support executing an initial SQL script

2016-09-07 Thread Jianguo Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianguo Tian updated HIVE-5867:
---
Attachment: HIVE-5867.1.patch

> JDBC driver and beeline should support executing an initial SQL script
> --
>
> Key: HIVE-5867
> URL: https://issues.apache.org/jira/browse/HIVE-5867
> Project: Hive
>  Issue Type: Improvement
>  Components: Clients, JDBC
>Reporter: Prasad Mujumdar
>Assignee: Jianguo Tian
> Attachments: HIVE-5867.1.patch
>
>
> HiveCLI supports the .hiverc script, which is executed at the start of the 
> session. This is helpful for things like registering UDFs, session-specific 
> configs, etc.
> This functionality is missing for Beeline and JDBC clients. It would be 
> useful for the JDBC driver to support an init script with SQL statements that 
> is automatically executed after connecting. The script path can be specified 
> via the JDBC connection URL. For example:
> {noformat}
> jdbc:hive2://localhost:1/default;initScript=/home/user1/scripts/init.sql
> {noformat}
> The same can be exposed as a Beeline command-line option, e.g. "-i 
> /home/user1/scripts/init.sql".
> To help the transition from HiveCLI to Beeline, we can keep the default init 
> script as $HOME/.hiverc.
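
A client-side sketch of the requested behaviour - read a script and run each statement right after connecting. The naive split on ';' and the connection details are simplifying assumptions, and the initScript URL parameter itself is the proposal above, not something this sketch adds to the driver:

{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InitScriptRunner {
  /** Runs each statement from the script over an already-open connection. */
  public static void runInitScript(Connection conn, String scriptPath) throws Exception {
    String script = new String(Files.readAllBytes(Paths.get(scriptPath)), StandardCharsets.UTF_8);
    try (Statement stmt = conn.createStatement()) {
      // Naive split on ';' -- good enough for simple .hiverc-style scripts,
      // not for general SQL with embedded semicolons.
      for (String sql : script.split(";")) {
        sql = sql.trim();
        if (!sql.isEmpty() && !sql.startsWith("--")) {
          stmt.execute(sql);                 // e.g. ADD JAR ..., SET ..., CREATE TEMPORARY FUNCTION ...
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "")) {
      runInitScript(conn, System.getProperty("user.home") + "/.hiverc");
    }
  }
}
{code}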



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13555) Add nullif udf

2016-09-07 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13555:
--
Labels: TODOC2.2  (was: )

> Add nullif udf
> --
>
> Key: HIVE-13555
> URL: https://issues.apache.org/jira/browse/HIVE-13555
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-13555.1.patch, HIVE-13555.2.patch, 
> HIVE-13555.2.patch
>
>
> {{nullif(exp1, exp2)}} is shorthand for: {{case when exp1 = exp2 then null 
> else exp1 end}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

