[jira] [Commented] (HIVE-14750) Vectorization: isNull and isRepeating bugs
[ https://issues.apache.org/jira/browse/HIVE-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489608#comment-15489608 ]

Matt McCline commented on HIVE-14750:
-------------------------------------

And, need to write new unit tests.

> Vectorization: isNull and isRepeating bugs
> ------------------------------------------
>
>                 Key: HIVE-14750
>                 URL: https://issues.apache.org/jira/browse/HIVE-14750
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Matt McCline
>            Assignee: Matt McCline
>            Priority: Critical
>         Attachments: HIVE-14750.01.patch
>
> Various bugs in VectorUDAF* templates.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
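[Editorial note] The isNull/isRepeating bugs referenced above concern Hive's vectorized column representation, where a batch column carries flat `values`/`isNull` arrays plus `isRepeating` and `noNulls` flags. A minimal Python model of the invariant the VectorUDAF templates must respect (class and function names are illustrative, not Hive's actual code):

```python
# Minimal model of a Hive-style vectorized column: when is_repeating is
# true, only element 0 of `values` and `is_null` is meaningful, so an
# aggregator must consult is_null[0] rather than scanning the arrays.
class LongColumnVector:
    def __init__(self, values, is_null, is_repeating, no_nulls):
        self.values = values
        self.is_null = is_null
        self.is_repeating = is_repeating
        self.no_nulls = no_nulls

def vector_sum(col, batch_size):
    """Sum a batch, honoring the repeating/null flags."""
    if col.is_repeating:
        # Bug pattern: skipping the is_null[0] check here silently
        # counts a repeated NULL as a repeated value.
        if col.no_nulls or not col.is_null[0]:
            return col.values[0] * batch_size
        return 0
    total = 0
    for i in range(batch_size):
        if col.no_nulls or not col.is_null[i]:
            total += col.values[i]
    return total
```

A repeating column whose single entry is NULL must contribute nothing to the aggregate, which is exactly the case a missing `is_null[0]` check gets wrong.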
[jira] [Updated] (HIVE-14750) Vectorization: isNull and isRepeating bugs
[ https://issues.apache.org/jira/browse/HIVE-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-14750:
--------------------------------
    Status: Patch Available  (was: Open)

Still looking for problems -- but give the current patch a run on Hive QA.

> Vectorization: isNull and isRepeating bugs
> ------------------------------------------
>
>                 Key: HIVE-14750
>                 URL: https://issues.apache.org/jira/browse/HIVE-14750
[jira] [Updated] (HIVE-14750) Vectorization: isNull and isRepeating bugs
[ https://issues.apache.org/jira/browse/HIVE-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-14750:
--------------------------------
    Attachment: HIVE-14750.01.patch

> Vectorization: isNull and isRepeating bugs
> ------------------------------------------
>
>                 Key: HIVE-14750
>                 URL: https://issues.apache.org/jira/browse/HIVE-14750
[jira] [Commented] (HIVE-13589) beeline - support prompt for password with '-u' option
[ https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489602#comment-15489602 ]

Hive QA commented on HIVE-13589:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12828379/HIVE-13589.8.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10507 tests executed

*Failed tests:*
{noformat}
TestBeeLineWithArgs - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.beeline.TestBeelinePasswordOption.testPromptPasswordOptionLast
org.apache.hive.beeline.TestBeelinePasswordOption.testPromptPasswordOptionMiddle
org.apache.hive.beeline.TestBeelinePasswordOption.testPromptPasswordOptionStart
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1180/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1180/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1180/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12828379 - PreCommit-HIVE-MASTER-Build

> beeline - support prompt for password with '-u' option
> ------------------------------------------------------
>
>                 Key: HIVE-13589
>                 URL: https://issues.apache.org/jira/browse/HIVE-13589
>             Project: Hive
>          Issue Type: Bug
>          Components: Beeline
>            Reporter: Thejas M Nair
>            Assignee: Ke Jia
>             Fix For: 2.2.0
>
>         Attachments: HIVE-13589.1.patch, HIVE-13589.2.patch, HIVE-13589.3.patch, HIVE-13589.4.patch, HIVE-13589.5.patch, HIVE-13589.6.patch, HIVE-13589.7.patch, HIVE-13589.8.patch
>
> Specifying the connection string using command-line options in beeline is
> convenient, as it gets saved in shell command history and is easy to
> retrieve from there.
> However, specifying the password on the command line is not secure, as it
> gets displayed on screen and saved in the history.
> It should be possible to specify '-p' without an argument to make beeline
> prompt for the password.
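[Editorial note] The behavior the issue asks for -- `-p` given with no value triggers a hidden password prompt -- can be sketched in a few lines. This is an illustrative Python model of the option handling, not beeline's actual Java parser; the function name and argv handling are assumptions:

```python
import getpass

def resolve_password(argv, prompt=getpass.getpass):
    """If -p is given with a value, use it; if -p appears with no value
    (or is followed by another option), prompt without echoing.
    Illustrative sketch of the requested beeline behavior."""
    if "-p" not in argv:
        return None
    i = argv.index("-p")
    nxt = argv[i + 1] if i + 1 < len(argv) else None
    if nxt is None or nxt.startswith("-"):
        return prompt("Enter password: ")
    return nxt
```

The `prompt` parameter is injectable so the interactive path can be exercised without a terminal; `getpass.getpass` keeps the typed password off the screen and out of shell history, which is the whole point of the request.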
[jira] [Updated] (HIVE-13878) Vectorization: Column pruning for Text vectorization
[ https://issues.apache.org/jira/browse/HIVE-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-13878:
--------------------------------
    Fix Version/s: 2.2.0

> Vectorization: Column pruning for Text vectorization
> ----------------------------------------------------
>
>                 Key: HIVE-13878
>                 URL: https://issues.apache.org/jira/browse/HIVE-13878
>             Project: Hive
>          Issue Type: Bug
>          Components: Vectorization
>    Affects Versions: 2.1.0
>            Reporter: Gopal V
>            Assignee: Matt McCline
>             Fix For: 2.2.0
>
>         Attachments: HIVE-13878.04.patch, HIVE-13878.05.patch, HIVE-13878.06.patch, HIVE-13878.07.patch, HIVE-13878.08.patch, HIVE-13878.09.patch, HIVE-13878.091.patch, HIVE-13878.092.patch, HIVE-13878.1.patch, HIVE-13878.2.patch, HIVE-13878.3.patch
>
> Column pruning in TextFile vectorization does not work with Vector SerDe
> settings due to LazySimple deser codepath issues.
[jira] [Updated] (HIVE-13878) Vectorization: Column pruning for Text vectorization
[ https://issues.apache.org/jira/browse/HIVE-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-13878:
--------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Vectorization: Column pruning for Text vectorization
> ----------------------------------------------------
>
>                 Key: HIVE-13878
>                 URL: https://issues.apache.org/jira/browse/HIVE-13878
[jira] [Commented] (HIVE-13878) Vectorization: Column pruning for Text vectorization
[ https://issues.apache.org/jira/browse/HIVE-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489561#comment-15489561 ]

Matt McCline commented on HIVE-13878:
-------------------------------------

Patch #092 committed to master. Thank you Gopal for determining the problems, writing the first code versions, and reviewing the code.

> Vectorization: Column pruning for Text vectorization
> ----------------------------------------------------
>
>                 Key: HIVE-13878
>                 URL: https://issues.apache.org/jira/browse/HIVE-13878
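[Editorial note] Column pruning for delimited text means the deserializer should only materialize the columns the query actually projects, skipping parse work on the rest. A toy Python model of the idea (this is not Hive's LazySimpleSerDe; the function and its signature are illustrative):

```python
def deserialize_row(line, included, delimiter="\x01"):
    """Split a delimited text row but materialize only the projected
    column positions; all others stay None so no conversion work is
    spent on them. Toy model of text-column pruning."""
    fields = line.split(delimiter)
    return [fields[i] if i in included else None
            for i in range(len(fields))]
```

In the real codepath the savings come from skipping per-field type conversion and object creation for unprojected columns, which is what the vectorized text reader failed to do under the Vector SerDe settings.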
[jira] [Updated] (HIVE-14749) Insert overwrite directory deleted acl permissions
[ https://issues.apache.org/jira/browse/HIVE-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xiqing li updated HIVE-14749:
-----------------------------
    Priority: Blocker  (was: Major)

> Insert overwrite directory deleted acl permissions
> --------------------------------------------------
>
>                 Key: HIVE-14749
>                 URL: https://issues.apache.org/jira/browse/HIVE-14749
>             Project: Hive
>          Issue Type: Bug
>          Components: CLI
>    Affects Versions: 0.13.0
>         Environment: CentOS 6.6, Hive 0.13.0
>            Reporter: xiqing li
>            Priority: Blocker
>
> When I run a Hive 'INSERT OVERWRITE' DML statement, the ACL on the target
> HDFS directory is overwritten; in other words, the ACL entries are deleted.
[jira] [Updated] (HIVE-13878) Vectorization: Column pruning for Text vectorization
[ https://issues.apache.org/jira/browse/HIVE-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-13878:
--------------------------------
    Attachment: HIVE-13878.092.patch

Remove trailing white space (tests not rerun).

> Vectorization: Column pruning for Text vectorization
> ----------------------------------------------------
>
>                 Key: HIVE-13878
>                 URL: https://issues.apache.org/jira/browse/HIVE-13878
[jira] [Commented] (HIVE-13589) beeline - support prompt for password with '-u' option
[ https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489445#comment-15489445 ]

Ferdinand Xu commented on HIVE-13589:
-------------------------------------

Hi [~Jk_Self], thank you for your updates. Can you please create a review board for your patch?

> beeline - support prompt for password with '-u' option
> ------------------------------------------------------
>
>                 Key: HIVE-13589
>                 URL: https://issues.apache.org/jira/browse/HIVE-13589
[jira] [Commented] (HIVE-14741) Incorrect results on boolean col when vectorization is ON
[ https://issues.apache.org/jira/browse/HIVE-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489441#comment-15489441 ]

Amruth S commented on HIVE-14741:
---------------------------------

The issue is probably not specific to boolean; I think it is related to the use of CASE/IF on primitive columns that contain nulls.

Some more observations:

{noformat}
select sum(if((bool_col), 1, 0)) from bool_vect_issue;
708206

select sum(if((bool_col == True), 1, 0)) from bool_vect_issue;
697966

select sum(if((bool_col is null), 1, 0)) from bool_vect_issue;
868512

select sum(if(coalesce(bool_col, false), 1, 0)) from bool_vect_issue;
231

select a.x, count(*) from (select bool_col as x from bool_vect_issue) a group by a.x;
NULL    868512
true    231

select a.x, count(*) from (select if(bool_col, true, false) x from bool_vect_issue) a group by a.x;
false   160537
true    708206
{noformat}

> Incorrect results on boolean col when vectorization is ON
> ---------------------------------------------------------
>
>                 Key: HIVE-14741
>                 URL: https://issues.apache.org/jira/browse/HIVE-14741
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 2.0.0, 2.1.0
>            Reporter: Amruth S
>              Labels: orc, vectorization
>         Attachments: 00_0
>
> I have attached the ORC part file on which the issue is manifesting.
> It has just one boolean column (lots of nulls, 231 trues; verified using the
> ORC file dump utility).
> 1) Create an external table on the attached part file:
> CREATE EXTERNAL TABLE bool_vect_issue (
>   `bool_col` BOOLEAN)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> LOCATION
>   '';
> 2)
> set hive.vectorized.execution.enabled = true;
> select sum(if((bool_col), 1, 0)) from bool_vect_issue;
> gives 708206
> 3)
> set hive.vectorized.execution.enabled = false;
> select sum(if((bool_col), 1, 0)) from bool_vect_issue;
> gives 231
> The issue seems to have the same impact as HIVE-12435.
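[Editorial note] The discrepancy above comes down to SQL's three-valued logic: `IF(NULL, 1, 0)` must take the ELSE branch, so NULL booleans contribute 0, and the vectorized path was evidently treating them as true. A toy Python model with `None` standing in for SQL NULL (the functions are illustrative, not Hive's implementation):

```python
def sql_if(cond, then_val, else_val):
    # SQL three-valued logic: a NULL (None) condition is not true,
    # so IF falls through to the ELSE branch.
    return then_val if cond is True else else_val

def sum_if(col):
    """sum(if(cond, 1, 0)) over a column of True/False/None."""
    return sum(sql_if(v, 1, 0) for v in col)

def coalesce(v, default):
    return default if v is None else v
```

On a toy column, `sum_if` and `sum_if` over `coalesce(bool_col, false)` agree, as the non-vectorized results above do (231 vs 231); the buggy path instead returned the true-plus-null count.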
[jira] [Updated] (HIVE-13589) beeline - support prompt for password with '-u' option
[ https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferdinand Xu updated HIVE-13589:
--------------------------------
    Status: Patch Available  (was: Reopened)

Trigger the Jenkins job.

> beeline - support prompt for password with '-u' option
> ------------------------------------------------------
>
>                 Key: HIVE-13589
>                 URL: https://issues.apache.org/jira/browse/HIVE-13589
[jira] [Commented] (HIVE-14743) ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
[ https://issues.apache.org/jira/browse/HIVE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489338#comment-15489338 ]

Hive QA commented on HIVE-14743:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12828372/HIVE-14743.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10546 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_viewjoins]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1179/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1179/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1179/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12828372 - PreCommit-HIVE-MASTER-Build

> ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
> ---------------------------------------------------------------------
>
>                 Key: HIVE-14743
>                 URL: https://issues.apache.org/jira/browse/HIVE-14743
[jira] [Commented] (HIVE-13589) beeline - support prompt for password with '-u' option
[ https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489293#comment-15489293 ]

Ke Jia commented on HIVE-13589:
-------------------------------

[~vihangk1], thanks for your comments and suggestions. I updated the patch and uploaded it. Please help review it, thanks.

> beeline - support prompt for password with '-u' option
> ------------------------------------------------------
>
>                 Key: HIVE-13589
>                 URL: https://issues.apache.org/jira/browse/HIVE-13589
[jira] [Updated] (HIVE-13589) beeline - support prompt for password with '-u' option
[ https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ke Jia updated HIVE-13589:
--------------------------
    Attachment: HIVE-13589.8.patch

> beeline - support prompt for password with '-u' option
> ------------------------------------------------------
>
>                 Key: HIVE-13589
>                 URL: https://issues.apache.org/jira/browse/HIVE-13589
[jira] [Commented] (HIVE-14749) Insert overwrite directory deleted acl permissions
[ https://issues.apache.org/jira/browse/HIVE-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489280#comment-15489280 ]

xiqing li commented on HIVE-14749:
----------------------------------

{noformat}
> dfs -ls /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604;
drwxr-xr-x+  - fin-abc fin-abc  0 2015-09-02 15:09 /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604/1
drwxr-xr-x   - fin-abc hdfs     0 2016-05-12 16:00 /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604/2
drwxr-xr-x   - fin-abc hdfs     0 2016-05-12 15:53 /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604/4
drwxr-xr-x   - fin-abc hdfs     0 2016-05-12 15:47 /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604/5

hive> INSERT OVERWRITE TABLE abc_fct_reso_list_fvp partition(hq_month_code = '201604', hq_driv_code = '1')
    > SELECT '100' MODE_CODE,
    >        'a' YEAR_MONTH,
    >        'b' BELONG_ABC_DEPT_ID,
    >        'c' BELONG_ABC_DEPT_CODE,
    >        'c' BELONG_ABC_DEPT_TYPE,
    >        'e' FUNC_CODE,
    >        'f' RESO_CODE,
    >        'g' BILL_AMT,
    >        5 FROM_FLAG,
    >        from_unixtime(unix_timestamp(), 'yyyy-MM-dd HH:mm:ss') LOAD_TM,
    >        '' SYS_NAME,
    >        NULL FZ_COL
    > FROM default.dual A;

> dfs -ls /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604;
drwxr-xr-x   - fin-abc fin-abc  0 2015-09-02 16:09 /result/fin_abc/fin_abc/fact/abc_fct_reso_list_fvp/201604/1
{noformat}

The plus sign (the ACL marker) disappeared.

> Insert overwrite directory deleted acl permissions
> --------------------------------------------------
>
>                 Key: HIVE-14749
>                 URL: https://issues.apache.org/jira/browse/HIVE-14749
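[Editorial note] The likely mechanism behind the vanished "+" is that INSERT OVERWRITE removes the target directory and recreates it, so metadata attached to the old inode (extended ACL entries on HDFS) does not survive. The same effect can be demonstrated with plain permission bits on a local filesystem; this is an illustrative Python sketch, not Hive's actual overwrite logic:

```python
import os
import shutil
import stat
import tempfile

def overwrite_by_recreate(path):
    # Mimics delete-then-recreate: metadata set on the old
    # directory inode is lost with it.
    shutil.rmtree(path)
    os.mkdir(path)  # fresh inode with default (umask-derived) permissions

def overwrite_in_place(path):
    # Clearing only the contents keeps the directory inode, so its
    # permission bits (and, on HDFS, its ACL entries) survive.
    for name in os.listdir(path):
        target = os.path.join(path, name)
        if os.path.isdir(target):
            shutil.rmtree(target)
        else:
            os.remove(target)

def mode_of(path):
    return stat.S_IMODE(os.stat(path).st_mode)
```

Overwriting in place (or re-applying the saved ACL after recreation) is the behavior one would want from the fix.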
[jira] [Commented] (HIVE-14731) Use Tez cartesian product edge in Hive (unpartitioned case only)
[ https://issues.apache.org/jira/browse/HIVE-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489167#comment-15489167 ]

Hive QA commented on HIVE-14731:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12828365/HIVE-14731.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10518 tests executed

*Failed tests:*
{noformat}
TestMiniLlapCliDriver-schema_evol_orc_acid_table_update.q-tez_schema_evolution.q-tez_join.q-and-27-more - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynamic_partition_pruning]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3]
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1177/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1177/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1177/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12828365 - PreCommit-HIVE-MASTER-Build

> Use Tez cartesian product edge in Hive (unpartitioned case only)
> ----------------------------------------------------------------
>
>                 Key: HIVE-14731
>                 URL: https://issues.apache.org/jira/browse/HIVE-14731
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zhiyuan Yang
>            Assignee: Zhiyuan Yang
>         Attachments: HIVE-14731.1.patch, HIVE-14731.2.patch, HIVE-14731.3.patch
>
> Given that a cartesian product edge is available in Tez now (see TEZ-3230),
> let's integrate it into Hive on Tez. This allows us to have more than one
> reducer in cross-product queries.
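[Editorial note] The point of the cartesian product edge is that the cross product A x B can be split into a grid of rectangles, each handled by its own reducer, instead of funneling everything through one. A rough Python sketch of the grid split (illustrative of the idea only; not the Tez edge's actual scheduling):

```python
def grid_tasks(n_left, n_right, parallelism_left, parallelism_right):
    """Partition the cross product of n_left x n_right rows into a grid
    of tasks; each task joins one chunk of each side, so tasks can run
    in parallel and together cover every row pair exactly once."""
    def chunks(n, parts):
        size = -(-n // parts)  # ceiling division
        return [(i, min(i + size, n)) for i in range(0, n, size)]
    return [(left_chunk, right_chunk)
            for left_chunk in chunks(n_left, parallelism_left)
            for right_chunk in chunks(n_right, parallelism_right)]
```

Each side's rows are broadcast only to the tasks in its row/column of the grid, which bounds duplication while letting the reducer count grow with data size.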
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489157#comment-15489157 ]

Rui Li commented on HIVE-14412:
-------------------------------

Thank you guys for your input. As Xuefu said, the standard name is TIMESTAMP WITH TIME ZONE. Most popular DBs support this data type, e.g. Oracle, DB2, PostgreSQL. I'll see whether we can handle the spaces in the type name.

I haven't looked at ORC/vectorization support yet, but I think it should be straightforward to implement because the new type is basically just a timestamp plus an extra integer. We can do it in follow-up tasks.

> Add a timezone-aware timestamp
> ------------------------------
>
>                 Key: HIVE-14412
>                 URL: https://issues.apache.org/jira/browse/HIVE-14412
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, HIVE-14412.3.patch, HIVE-14412.4.patch
>
> Java's Timestamp stores the time elapsed since the epoch. While that is by
> itself unambiguous, ambiguity arises when we parse a string into a timestamp
> or convert a timestamp to a string, causing problems like HIVE-14305.
> To solve the issue, I think we should make the timestamp aware of its time zone.
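[Editorial note] The ambiguity described in the issue is easy to see with Python's stdlib `datetime`: one epoch instant is unambiguous, but its string form depends on the zone it is rendered in, so a bare TIMESTAMP string cannot round-trip across zones while a TIMESTAMP WITH TIME ZONE value carries its offset with it:

```python
from datetime import datetime, timezone, timedelta

# One unambiguous instant: the Unix epoch.
instant = datetime.fromtimestamp(0, tz=timezone.utc)

# The same instant renders as different wall-clock strings per zone.
utc_str = instant.isoformat()
ist = timezone(timedelta(hours=5, minutes=30))  # e.g. India Standard Time
ist_str = instant.astimezone(ist).isoformat()

# Because the offset travels with the string, parsing either form
# recovers the identical instant -- the property the new type buys.
```

Both strings parse back to epoch second 0, which is exactly what a zone-less timestamp string cannot guarantee.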
[jira] [Updated] (HIVE-14743) ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
[ https://issues.apache.org/jira/browse/HIVE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14743:
--------------------------------
    Status: Patch Available  (was: Open)

Need code review.

> ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
> ---------------------------------------------------------------------
>
>                 Key: HIVE-14743
>                 URL: https://issues.apache.org/jira/browse/HIVE-14743
>             Project: Hive
>          Issue Type: Bug
>          Components: HBase Handler
>    Affects Versions: 1.0.0
>            Reporter: Yongzhi Chen
>            Assignee: Yongzhi Chen
>         Attachments: HIVE-14743.1.patch
>
> The stack:
> {noformat}
> 2016-09-13T09:38:49,972 ERROR [186b4545-65b5-4bfc-bc8e-3e14e251bb12 main]
> exec.Task: Job Submission failed with exception
> 'java.lang.ArrayIndexOutOfBoundsException(1)'
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.createFilterScan(HiveHBaseTableInputFormat.java:224)
>   at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplitsInternal(HiveHBaseTableInputFormat.java:492)
>   at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:449)
>   at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:370)
>   at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:466)
>   at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:356)
>   at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:546)
>   at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:329)
>   at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:320)
>   at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
>   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
> {noformat}
> Repro:
> {noformat}
> CREATE TABLE HBASE_TABLE_TEST_1(
>   cvalue string,
>   pk string,
>   ccount int)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.hbase.HBaseSerDe'
> STORED BY
>   'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
>   'hbase.columns.mapping'='cf:val,:key,cf2:count',
>   'hbase.scan.cache'='500',
>   'hbase.scan.cacheblocks'='false',
>   'serialization.format'='1')
> TBLPROPERTIES (
>   'hbase.table.name'='hbase_table_test_1',
>   'serialization.null.format'='');
> CREATE VIEW VIEW_HBASE_TABLE_TEST_1 AS SELECT
>   hbase_table_test_1.cvalue, hbase_table_test_1.pk, hbase_table_test_1.ccount
> FROM hbase_table_test_1 WHERE hbase_table_test_1.ccount IS NOT NULL;
> CREATE TABLE HBASE_TABLE_TEST_2(
>   cvalue string,
>   pk string,
>   ccount int)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.hbase.HBaseSerDe'
> STORED BY
>   'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
>   'hbase.columns.mapping'='cf:val,:key,cf2:count',
>   'hbase.scan.cache'='500',
>   'hbase.scan.cacheblocks'='false',
>   'serialization.format'='1')
> TBLPROPERTIES (
>   'hbase.table.name'='hbase_table_test_2',
>   'serialization.null.format'='');
> CREATE VIEW VIEW_HBASE_TABLE_TEST_2 AS SELECT
>   hbase_table_test_2.cvalue, hbase_table_test_2.pk, hbase_table_test_2.ccount
> FROM hbase_table_test_2 WHERE hbase_table_test_2.pk >= '3-h-0' AND
>   hbase_table_test_2.pk <= '3-h-g' AND hbase_table_test_2.ccount IS NOT NULL;
> set hive.auto.convert.join=false;
> SELECT p.cvalue cvalue
> FROM `VIEW_HBASE_TABLE_TEST_1` `p`
> LEFT OUTER JOIN `VIEW_HBASE_TABLE_TEST_2` `A1`
>   ON `p`.cvalue = `A1`.cvalue
> LEFT OUTER JOIN `VIEW_HBASE_TABLE_TEST_1` `A2`
>   ON `p`.cvalue = `A2`.cvalue;
> {noformat}
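[Editorial note] The stack points at `createFilterScan` indexing position 1 of an array of predicate children, which blows up when the pushed-down row-key filter does not have the expected shape. A defensive alternative can be sketched generically: derive the scan's start/stop keys from however many row-key bounds are present instead of assuming exactly two. This Python sketch is illustrative only and is not Hive's HBase handler code:

```python
def key_range(conditions):
    """Derive (start, stop) scan keys from row-key comparisons given as
    (operator, key) pairs, e.g. (">=", "3-h-0"). Tolerates zero, one,
    or many bounds instead of indexing a fixed-size array."""
    start, stop = None, None
    for op, key in conditions:
        if op in (">=", ">"):
            start = key if start is None else max(start, key)
        elif op in ("<=", "<"):
            stop = key if stop is None else min(stop, key)
        elif op == "=":
            start, stop = key, key
    return start, stop
```

With a single bound or no pushable bound at all, the range simply stays open on that side rather than raising an index error.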
[jira] [Updated] (HIVE-14743) ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
[ https://issues.apache.org/jira/browse/HIVE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-14743:
--------------------------------
    Attachment: HIVE-14743.1.patch

> ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
> ---------------------------------------------------------------------
>
>                 Key: HIVE-14743
>                 URL: https://issues.apache.org/jira/browse/HIVE-14743
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14743) ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
[ https://issues.apache.org/jira/browse/HIVE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489054#comment-15489054 ] Yongzhi Chen commented on HIVE-14743: - The ArrayIndexOutOfBoundsException is thrown at line: String colType = jobConf.get(serdeConstants.LIST_COLUMN_TYPES).split(",")[iKey]; Hive code is flexible with the separator for LIST_COLUMN_TYPES (columns.types): for example, the metastore uses ":", while ReduceKeyTableDesc uses ",". In this repro query, there are two styles, and the ArrayIndexOutOfBoundsException is thrown when trying to split a ":"-separated string using ",". Patch 1 fixes the issue by using TypeInfoUtils.getTypeInfosFromTypeString, which can handle all the separators. > ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs > - > > Key: HIVE-14743 > URL: https://issues.apache.org/jira/browse/HIVE-14743 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Affects Versions: 1.0.0 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen > > The stack: > {noformat} > 2016-09-13T09:38:49,972 ERROR [186b4545-65b5-4bfc-bc8e-3e14e251bb12 main] > exec.Task: Job Submission failed with exception > 'java.lang.ArrayIndexOutOfBoundsException(1)' > java.lang.ArrayIndexOutOfBoundsException: 1 > at > org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.createFilterScan(HiveHBaseTableInputFormat.java:224) > at > org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplitsInternal(HiveHBaseTableInputFormat.java:492) > at > org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:449) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:370) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:466) > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:356) > at >
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:546) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:329) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:320) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) > at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) > at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575) > at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570) > {noformat} > Repro: > {noformat} > CREATE TABLE HBASE_TABLE_TEST_1( > cvalue string , > pk string, > ccount int ) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.hbase.HBaseSerDe' > STORED BY > 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' > WITH SERDEPROPERTIES ( > 'hbase.columns.mapping'='cf:val,:key,cf2:count', > 'hbase.scan.cache'='500', > 'hbase.scan.cacheblocks'='false', > 'serialization.format'='1') > TBLPROPERTIES ( > 'hbase.table.name'='hbase_table_test_1', > 'serialization.null.format'='' ); > CREATE VIEW VIEW_HBASE_TABLE_TEST_1 AS SELECT > hbase_table_test_1.cvalue,hbase_table_test_1.pk,hbase_table_test_1.ccount > FROM hbase_table_test_1 WHERE hbase_table_test_1.ccount IS NOT NULL; > CREATE TABLE HBASE_TABLE_TEST_2( > cvalue string , > pk string , >ccount int ) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.hbase.HBaseSerDe' > STORED BY > 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' > WITH SERDEPROPERTIES ( > 'hbase.columns.mapping'='cf:val,:key,cf2:count', > 'hbase.scan.cache'='500', > 'hbase.scan.cacheblocks'='false', > 'serialization.format'='1') > TBLPROPERTIES ( > 
'hbase.table.name'='hbase_table_test_2', > 'serialization.null.format'=''); > CREATE VIEW VIEW_HBASE_TABLE_TEST_2 AS SELECT > hbase_table_test_2.cvalue,hbase_table_test_2.pk,hbase_table_test_2.ccount > FROM hbase_table_test_2 WHERE hbase_table_test_2.pk >='3-h-0' AND > hbase_table_test_2.pk <= '3-h-g' AND hbase_table_test_2.ccount IS NOT > NULL; > set hive.auto.convert.join=false; > SELECT p.cvalue cvalue > FROM `VIEW_HBASE_TABLE_TEST_1` `p` > LEFT OUTER JOIN `VIEW_HBASE_TABLE_TEST_2` `A1` > ON `p`.cvalue = `A1`.cvalue > LEFT OUTER JOIN `VIEW_HBASE_TABLE_TEST_1` `A2` > ON `p`.cvalue = `A2`.cvalue; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
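The failure mode Yongzhi describes can be reproduced in isolation. The sketch below is a simplified illustration, not Hive's actual code: parseTypeNames is a hypothetical stand-in for TypeInfoUtils.getTypeInfosFromTypeString and only handles flat primitive type lists, whereas Hive's real parser also copes with nested types (struct, map, array) that contain their own delimiters.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal reproduction of the HIVE-14743 failure mode. The column-type
// string may be ":"- or ","-separated depending on where it came from, so
// code that splits on "," alone sees a single element and indexes past it.
public class ColumnTypeSplitSketch {

    // Buggy approach: assumes "," is always the separator.
    static String typeAtBuggy(String columnTypes, int iKey) {
        return columnTypes.split(",")[iKey]; // AIOOBE for ":"-separated input
    }

    // Separator-agnostic approach (hypothetical stand-in for
    // TypeInfoUtils.getTypeInfosFromTypeString). A regex split is only
    // adequate for flat primitive type lists, as shown here.
    static List<String> parseTypeNames(String columnTypes) {
        List<String> names = new ArrayList<>();
        for (String t : columnTypes.split("[,:]")) {
            names.add(t.trim());
        }
        return names;
    }

    public static void main(String[] args) {
        String metastoreStyle = "string:string:int"; // ":"-separated, as from the metastore
        try {
            typeAtBuggy(metastoreStyle, 1);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught: " + e); // the exception from the stack trace above
        }
        System.out.println(parseTypeNames(metastoreStyle));      // [string, string, int]
        System.out.println(parseTypeNames("string,string,int")); // [string, string, int]
    }
}
```

Either separator style yields the same three type names, which is the property the patch relies on.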
[jira] [Commented] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15489016#comment-15489016 ] Hive QA commented on HIVE-14579: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828360/HIVE-14579.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 10547 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_udf] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin2] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin4] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin5] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf4] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_floor] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_all_types] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_math_funcs] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_udf] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_math_funcs] org.apache.hadoop.hive.cli.TestCompareCliDriver.testCliDriver[vectorized_math_funcs] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[update_all_types] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_math_funcs] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_udf] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vectorized_math_funcs] 
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_math_funcs] org.apache.hadoop.hive.ql.udf.TestUDFDateFormatGranularity.testTimestampToTimestampWithGranularity org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1176/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1176/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1176/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 23 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12828360 - PreCommit-HIVE-MASTER-Build > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch, HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14540) Add support in ptest to create batches for non qfile tests
[ https://issues.apache.org/jira/browse/HIVE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-14540: -- Attachment: HIVE-14540.03.patch Re-uploading for jenkins. > Add support in ptest to create batches for non qfile tests > -- > > Key: HIVE-14540 > URL: https://issues.apache.org/jira/browse/HIVE-14540 > Project: Hive > Issue Type: Sub-task > Components: Testing Infrastructure >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Fix For: 2.2.0 > > Attachments: HIVE-14540.01.patch, HIVE-14540.02.patch, > HIVE-14540.03.patch, HIVE-14540.03.patch > > > From run 790: > Reported runtime by junit: 17 hours > Reported runtime by ptest: 34 hours > A lot of time is wasted spinning up mvn test for each individual test, which > otherwise takes less than 1 second. These tests could end up taking 20-30 > seconds. Combined with HIVE-14539 - 60-70s. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14731) Use Tez cartesian product edge in Hive (unpartitioned case only)
[ https://issues.apache.org/jira/browse/HIVE-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhiyuan Yang updated HIVE-14731: Attachment: HIVE-14731.3.patch Modified jsonexplain to support XPROD_EDGE and overwrote some XPROD_EDGE-related qtest output files. > Use Tez cartesian product edge in Hive (unpartitioned case only) > > > Key: HIVE-14731 > URL: https://issues.apache.org/jira/browse/HIVE-14731 > Project: Hive > Issue Type: Bug >Reporter: Zhiyuan Yang >Assignee: Zhiyuan Yang > Attachments: HIVE-14731.1.patch, HIVE-14731.2.patch, > HIVE-14731.3.patch > > > Given cartesian product edge is available in Tez now (see TEZ-3230), let's > integrate it into Hive on Tez. This allows us to have more than one reducer > in cross product queries. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14579: - Attachment: HIVE-14579.patch Reuploading the same patch as it did not get picked up. > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Prasanth Jayachandran > Attachments: HIVE-14579.patch, HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14579: - Assignee: Jesus Camacho Rodriguez (was: Prasanth Jayachandran) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch, HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-14579: Assignee: Prasanth Jayachandran (was: Jesus Camacho Rodriguez) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Prasanth Jayachandran > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14579: - Assignee: Jesus Camacho Rodriguez (was: Prasanth Jayachandran) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488810#comment-15488810 ] Prasanth Jayachandran commented on HIVE-14579: -- Resubmitted it for ptest2 to pick up again after restart. > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14579: - Status: Patch Available (was: Open) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Prasanth Jayachandran > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-14579: Assignee: Prasanth Jayachandran (was: Jesus Camacho Rodriguez) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Prasanth Jayachandran > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488806#comment-15488806 ] Hive QA commented on HIVE-14579: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828349/HIVE-14579.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1169/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1169/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1169/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Tests exited with: InterruptedException: null {noformat} This message is automatically generated. ATTACHMENT ID: 12828349 - PreCommit-HIVE-MASTER-Build > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14579: - Status: Open (was: Patch Available) Cancelling patch as I restarted ptest2. > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14680) retain consistent splits /during/ (as opposed to across) LLAP failures on top of HIVE-14589
[ https://issues.apache.org/jira/browse/HIVE-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488746#comment-15488746 ] Hive QA commented on HIVE-14680: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828315/HIVE-14680.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10547 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1168/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1168/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1168/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12828315 - PreCommit-HIVE-MASTER-Build > retain consistent splits /during/ (as opposed to across) LLAP failures on top > of HIVE-14589 > --- > > Key: HIVE-14680 > URL: https://issues.apache.org/jira/browse/HIVE-14680 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-14680.01.patch, HIVE-14680.patch > > > see HIVE-14589. > Basic idea (spent about 7 minutes thinking about this based on RB comment ;)) > is to return locations for all slots to HostAffinitySplitLocationProvider, > the missing slots being inactive locations (based solely on the last slot > actually present). For the splits mapped to these locations, fall back via > different hash functions, or some sort of probing. > This still doesn't handle all the cases, namely when the last slots are gone > (consistent hashing is supposed to be good for this?); however for that we'd > need more involved coordination between nodes or a central updater to > indicate the number of nodes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
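The fallback idea in the HIVE-14680 description (keep all slots in the location list, and have splits mapped to inactive slots fall back via probing) can be illustrated in a few lines. This is a sketch under assumed semantics, not Hive's HostAffinitySplitLocationProvider, and linear probing is only one of the probing schemes the description mentions.

```java
import java.util.function.IntPredicate;

// Illustrative sketch: the slot count stays fixed during a failure, so a
// split whose primary slot is alive keeps its usual location, and only
// splits hashed onto an inactive slot probe forward deterministically.
public class SlotProbingSketch {

    static int chooseSlot(int splitHash, int numSlots, IntPredicate isActive) {
        int primary = Math.floorMod(splitHash, numSlots);
        for (int i = 0; i < numSlots; i++) {
            int candidate = (primary + i) % numSlots;
            if (isActive.test(candidate)) {
                return candidate;
            }
        }
        throw new IllegalStateException("no active slots");
    }

    public static void main(String[] args) {
        boolean[] active = {true, true, false, true}; // slot 2 is down
        IntPredicate isActive = s -> active[s];
        System.out.println(chooseSlot(1, 4, isActive)); // primary slot alive: stays at 1
        System.out.println(chooseSlot(2, 4, isActive)); // primary slot dead: probes to 3
    }
}
```

Because every split whose primary slot is still alive is unaffected, placements stay consistent during (as opposed to across) the failure, which is the property this jira is after.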
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14579: --- Attachment: HIVE-14579.patch > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14579.patch > > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488694#comment-15488694 ] Jesus Camacho Rodriguez commented on HIVE-14579: {{floor(<timestamp> to <unit>)}} is equivalent to {{date_trunc(<unit>, <timestamp>)}}. https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
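The equivalence with PostgreSQL's date_trunc means flooring a timestamp to a unit zeroes out every field finer than that unit. A plain java.time sketch of that truncation semantics (this is not Hive's UDF implementation; floorTo is a hypothetical helper):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

// Sketch of "floor(<timestamp> to <unit>)" / date_trunc(<unit>, <timestamp>):
// zero out everything finer than the requested unit. truncatedTo() only
// supports units up to DAYS, so MONTHS and YEARS are handled explicitly.
public class DateFloorSketch {

    static LocalDateTime floorTo(LocalDateTime ts, ChronoUnit unit) {
        switch (unit) {
            case MONTHS:
                return ts.truncatedTo(ChronoUnit.DAYS).withDayOfMonth(1);
            case YEARS:
                return ts.truncatedTo(ChronoUnit.DAYS).withDayOfYear(1);
            default:
                return ts.truncatedTo(unit); // DAYS, HOURS, MINUTES, SECONDS, ...
        }
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2016, 9, 13, 9, 38, 49);
        System.out.println(floorTo(ts, ChronoUnit.HOURS));  // 2016-09-13T09:00
        System.out.println(floorTo(ts, ChronoUnit.MONTHS)); // 2016-09-01T00:00
        System.out.println(floorTo(ts, ChronoUnit.YEARS));  // 2016-01-01T00:00
    }
}
```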
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14579: --- Status: Patch Available (was: In Progress) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-14579 started by Jesus Camacho Rodriguez. -- > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14579) Add support for date extract and floor
[ https://issues.apache.org/jira/browse/HIVE-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14579: --- Summary: Add support for date extract and floor (was: Add extract udf) > Add support for date extract and floor > -- > > Key: HIVE-14579 > URL: https://issues.apache.org/jira/browse/HIVE-14579 > Project: Hive > Issue Type: Sub-task > Components: UDF >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > > https://www.postgresql.org/docs/9.1/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14744) Improve Hive ptest to execute tests per branch automatically and without pre-configurations
[ https://issues.apache.org/jira/browse/HIVE-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488636#comment-15488636 ] Siddharth Seth commented on HIVE-14744: --- In addition, the option to upload a partial properties file, which overrides values specified in an existing one. e.g. select different branch, select java version, disable certain tests, etc. > Improve Hive ptest to execute tests per branch automatically and without > pre-configurations > --- > > Key: HIVE-14744 > URL: https://issues.apache.org/jira/browse/HIVE-14744 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > > This task is meant to improve the way Hive PTest executes all test per branch. > Currently, when a new branch in Hive is created, someone with admin rights > needs to create a new job configuration on Jenkins and the PTest server to > allow tests on this branch. > We should remove this human interaction from Jenkins and ptest, and allow any > committer to test their patches on any branch they specify in the file > attached automatically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488638#comment-15488638 ] Jason Dere commented on HIVE-14412: --- Check if there are any issues with TypeInfoParser/PrimitiveObjectInspectorUtils. When I did interval, I wasn't sure if the spaces would be problematic internally within Hive, so I used underscores. In the Hive SQL parser you should be able to have "TIMESTAMP WITH TIME ZONE" be recognized as a synonym for TIMESTAMPTZ when parsing type names, but internally (serde params, describe table) the type name will currently show up using the type name that was registered in PrimitiveObjectInspectorUtils. > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
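The ambiguity the issue body describes is easy to demonstrate with java.time: the same wall-clock string denotes different instants depending on which zone the reader assumes, and carrying the zone with the value removes the ambiguity. This sketch is independent of Hive's proposed TIMESTAMPTZ type; the zones chosen are arbitrary examples.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Same string, two assumed zones, two different points on the timeline.
// A zone-qualified value (the timezone-aware timestamp proposed in the
// jira) round-trips to and from text without ambiguity.
public class TimestampZoneSketch {
    public static void main(String[] args) {
        LocalDateTime local = LocalDateTime.parse("2016-09-13T09:38:49");

        ZonedDateTime inShanghai = local.atZone(ZoneId.of("Asia/Shanghai"));
        ZonedDateTime inLosAngeles = local.atZone(ZoneId.of("America/Los_Angeles"));

        // The two readings disagree about the instant.
        System.out.println(inShanghai.toInstant().equals(inLosAngeles.toInstant())); // false

        // With the zone attached, the text form is unambiguous.
        System.out.println(inShanghai); // 2016-09-13T09:38:49+08:00[Asia/Shanghai]
    }
}
```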
[jira] [Issue Comment Deleted] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-14412: -- Comment: was deleted (was: Check if there are any issues with TypeInfoParser/PrimitiveObjectInspectorUtils. When i did interval I wasn't sure about if the spaces would be problematic internally within Hive and used underscores. In the Hive SQL parser you should be able to have "TIMESTAMP WITH TIME ZONE" be recognized as a synonym for TIMESTAMPTZ when parsing type names, but internally (serde params, describe table) the type name will currently show up using the type name that was registered in PrimitiveObjectInspectorUtils.) > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14747) Remove JAVA paths from profiles by sending them from ptest-client
[ https://issues.apache.org/jira/browse/HIVE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488633#comment-15488633 ] Siddharth Seth commented on HIVE-14747: --- The manual build step in jenkins could be enhanced to include additional options. Anyway - that's unrelated to this jira. Does the java path need configuration if Java7 isn't supported. Maybe to test with Java9 in the near future. It'll still require a manual install on all nodes. Uploading the entire properties file, or an overriding properties file would also help with this. > Remove JAVA paths from profiles by sending them from ptest-client > - > > Key: HIVE-14747 > URL: https://issues.apache.org/jira/browse/HIVE-14747 > Project: Hive > Issue Type: Sub-task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña > > Hive ptest uses some properties files per branch that contain information > about how to execute the tests. > This profile includes JAVA paths to build and execute the tests. We should > get rid of these by passing such information from Jenkins to the > ptest-server. In case a profile needs a different java version, then we can > create a specific Jenkins job for that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14739) Replace runnables directly added to runtime shutdown hooks to avoid deadlock
[ https://issues.apache.org/jira/browse/HIVE-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14739: - Attachment: HIVE-14739.4.patch This patch introduced a change to ptest2 module which is not required (unwanted dependency to hive breaking the build). Fixed it and committed. Uploading the relevant patch. > Replace runnables directly added to runtime shutdown hooks to avoid deadlock > > > Key: HIVE-14739 > URL: https://issues.apache.org/jira/browse/HIVE-14739 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Prasanth Jayachandran > Fix For: 2.2.0 > > Attachments: HIVE-14739.1.patch, HIVE-14739.2.patch, > HIVE-14739.3.patch, HIVE-14739.4.patch > > > [~deepesh] reported that a deadlock can occur when running queries through > hive cli. [~cnauroth] analyzed it and reported that hive adds shutdown hooks > directly to java Runtime which may execute in non-deterministic order causing > deadlocks with hadoop's shutdown hooks. In one case, hadoop shutdown locked > FileSystem#Cache and FileSystem.close whereas hive shutdown hook locked > FileSystem.close and FileSystem#Cache order causing a deadlock. > Hive and Hadoop has ShutdownHookManager that runs the shutdown hooks in > deterministic order based on priority. We should use that to avoid deadlock > throughout the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
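The fix discussed in HIVE-14739 replaces direct Runtime.getRuntime().addShutdownHook() registrations with a manager that runs hooks in deterministic priority order, as Hadoop's ShutdownHookManager does. A minimal sketch of that idea (class, method names, and priorities here are illustrative, not Hive's or Hadoop's actual implementation):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of a priority-ordered shutdown hook manager. Hooks registered
// directly with java.lang.Runtime run in a non-deterministic order; ordering
// them by an explicit priority removes the lock-order ambiguity that caused
// the FileSystem#Cache / FileSystem.close deadlock described above.
public class OrderedShutdown {
    private static final class Hook {
        final String name;
        final Runnable body;
        final int priority;
        Hook(String name, Runnable body, int priority) {
            this.name = name; this.body = body; this.priority = priority;
        }
    }

    private final List<Hook> hooks = new ArrayList<>();

    public synchronized void addShutdownHook(String name, Runnable body, int priority) {
        hooks.add(new Hook(name, body, priority));
    }

    // Runs all hooks single-threaded, highest priority first, so the relative
    // order of cleanup actions is always the same. Returns execution order.
    public synchronized List<String> runHooks() {
        hooks.sort(Comparator.comparingInt((Hook h) -> h.priority).reversed());
        List<String> executed = new ArrayList<>();
        for (Hook h : hooks) {
            h.body.run();
            executed.add(h.name);
        }
        return executed;
    }

    public static void main(String[] args) {
        OrderedShutdown mgr = new OrderedShutdown();
        mgr.addShutdownHook("close-filesystem-cache", () -> {}, 10);
        mgr.addShutdownHook("flush-session-state", () -> {}, 30);
        System.out.println(mgr.runHooks()); // [flush-session-state, close-filesystem-cache]
    }
}
```

Because both Hive's and Hadoop's cleanup would go through one ordered list, the lock-acquisition order between hooks no longer depends on thread scheduling.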
[jira] [Updated] (HIVE-14726) delete statement fails when spdo is on
[ https://issues.apache.org/jira/browse/HIVE-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-14726: -- Component/s: Transactions > delete statement fails when spdo is on > -- > > Key: HIVE-14726 > URL: https://issues.apache.org/jira/browse/HIVE-14726 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer, Transactions >Affects Versions: 2.1.0 >Reporter: Deepesh Khandelwal >Assignee: Ashutosh Chauhan > Fix For: 2.2.0 > > Attachments: HIVE-14726.1.patch, HIVE-14726.2.patch, HIVE-14726.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14739) Replace runnables directly added to runtime shutdown hooks to avoid deadlock
[ https://issues.apache.org/jira/browse/HIVE-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14739: - Resolution: Fixed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) Test failures look unrelated to me. Committed patch to master. Thanks Chris and Sid for the reviews! > Replace runnables directly added to runtime shutdown hooks to avoid deadlock > > > Key: HIVE-14739 > URL: https://issues.apache.org/jira/browse/HIVE-14739 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Prasanth Jayachandran > Fix For: 2.2.0 > > Attachments: HIVE-14739.1.patch, HIVE-14739.2.patch, > HIVE-14739.3.patch > > > [~deepesh] reported that a deadlock can occur when running queries through > hive cli. [~cnauroth] analyzed it and reported that hive adds shutdown hooks > directly to java Runtime which may execute in non-deterministic order causing > deadlocks with hadoop's shutdown hooks. In one case, hadoop shutdown locked > FileSystem#Cache and FileSystem.close whereas hive shutdown hook locked > FileSystem.close and FileSystem#Cache order causing a deadlock. > Hive and Hadoop has ShutdownHookManager that runs the shutdown hooks in > deterministic order based on priority. We should use that to avoid deadlock > throughout the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14744) Improve Hive ptest to execute tests per branch automatically and without pre-configurations
[ https://issues.apache.org/jira/browse/HIVE-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488535#comment-15488535 ] Siddharth Seth commented on HIVE-14744: --- I think that's a good list to get started on. Specifically HIVE-14734, HIVE-14745, HIVE-14746. A couple of things which I would like to add (short term ptest enhancements), and these are independent of the goal to run per-branch without changes. 1) Ability to provide a full profile file - which allows users to exclude specific tests, change how tests are batched, etc. 2) Allow batch-exec.vm and source-prep.vm to be picked up from a path, rather than the copies present in the initialized tomcat instance. This would allow some ptest enhancements without a restart. It also allows for far more advanced usage. ptest doesn't need to be restricted to Hive only. > Improve Hive ptest to execute tests per branch automatically and without > pre-configurations > --- > > Key: HIVE-14744 > URL: https://issues.apache.org/jira/browse/HIVE-14744 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > > This task is meant to improve the way Hive PTest executes all test per branch. > Currently, when a new branch in Hive is created, someone with admin rights > needs to create a new job configuration on Jenkins and the PTest server to > allow tests on this branch. > We should remove this human interaction from Jenkins and ptest, and allow any > committer to test their patches on any branch they specify in the file > attached automatically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14747) Remove JAVA paths from profiles by sending them from ptest-client
[ https://issues.apache.org/jira/browse/HIVE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488537#comment-15488537 ] Sergio Peña commented on HIVE-14747: Currently, ptest does have one branch with different java settings. This would be something new to do. Btw, java7 vs java8 has many order issues on qtests. If we try to run all tests with java7, then you will see a lot of q-test failures due to that. > Remove JAVA paths from profiles by sending them from ptest-client > - > > Key: HIVE-14747 > URL: https://issues.apache.org/jira/browse/HIVE-14747 > Project: Hive > Issue Type: Sub-task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña > > Hive ptest uses some properties files per branch that contain information > about how to execute the tests. > This profile includes JAVA paths to build and execute the tests. We should > get rid of these by passing such information from Jenkins to the > ptest-server. In case a profile needs a different java version, then we can > create a specific Jenkins job for that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-3776) support PIVOT in hive
[ https://issues.apache.org/jira/browse/HIVE-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488527#comment-15488527 ] Dan F commented on HIVE-3776: - If you look at the Quora example, it uses functionality that is not in collect_set() or collect_list(). The Brickhouse function collects a map if given two arguments. > support PIVOT in hive > - > > Key: HIVE-3776 > URL: https://issues.apache.org/jira/browse/HIVE-3776 > Project: Hive > Issue Type: New Feature > Components: Query Processor >Reporter: Namit Jain >Assignee: Namit Jain > > It is a fairly well understood feature in databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14739) Replace runnables directly added to runtime shutdown hooks to avoid deadlock
[ https://issues.apache.org/jira/browse/HIVE-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488526#comment-15488526 ] Hive QA commented on HIVE-14739: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828311/HIVE-14739.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10545 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching org.apache.hive.spark.client.TestSparkClient.testJobSubmission {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1167/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1167/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1167/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12828311 - PreCommit-HIVE-MASTER-Build > Replace runnables directly added to runtime shutdown hooks to avoid deadlock > > > Key: HIVE-14739 > URL: https://issues.apache.org/jira/browse/HIVE-14739 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Prasanth Jayachandran > Attachments: HIVE-14739.1.patch, HIVE-14739.2.patch, > HIVE-14739.3.patch > > > [~deepesh] reported that a deadlock can occur when running queries through > hive cli. [~cnauroth] analyzed it and reported that hive adds shutdown hooks > directly to java Runtime which may execute in non-deterministic order causing > deadlocks with hadoop's shutdown hooks. In one case, hadoop shutdown locked > FileSystem#Cache and FileSystem.close whereas hive shutdown hook locked > FileSystem.close and FileSystem#Cache order causing a deadlock. > Hive and Hadoop has ShutdownHookManager that runs the shutdown hooks in > deterministic order based on priority. We should use that to avoid deadlock > throughout the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-3776) support PIVOT in hive
[ https://issues.apache.org/jira/browse/HIVE-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488513#comment-15488513 ] Ruslan Dautkhanov commented on HIVE-3776: - Wonder why he suggested to use COLLECT() from Brickhouse? Hive has collect_set() and collect_list() that would do the same: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-Built-inAggregateFunctions(UDAF) > support PIVOT in hive > - > > Key: HIVE-3776 > URL: https://issues.apache.org/jira/browse/HIVE-3776 > Project: Hive > Issue Type: New Feature > Components: Query Processor >Reporter: Namit Jain >Assignee: Namit Jain > > It is a fairly well understood feature in databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14746) Remove branch and repositories from profiles by sending them from ptest-client
[ https://issues.apache.org/jira/browse/HIVE-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488512#comment-15488512 ] Siddharth Seth commented on HIVE-14746: --- We may want to associate the workDir with the branch being used - so that there's not a lot of rebases, etc. happening. > Remove branch and repositories from profiles by sending them from ptest-client > -- > > Key: HIVE-14746 > URL: https://issues.apache.org/jira/browse/HIVE-14746 > Project: Hive > Issue Type: Sub-task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña > > Hive ptest uses some properties files per branch that contain information > about how to execute the tests. > This profile includes the branch name and repository URLs used to fetch the > branch code. We should get rid of these by detecting the branch from the > jenkins-execute-build.sh script, and send the information directly to > ptest-server as command line parameters. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14747) Remove JAVA paths from profiles by sending them from ptest-client
[ https://issues.apache.org/jira/browse/HIVE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488507#comment-15488507 ] Siddharth Seth commented on HIVE-14747: --- Would this be a manual trigger for precommit? e.g. Try a patch against java7 would be a manual trigger? > Remove JAVA paths from profiles by sending them from ptest-client > - > > Key: HIVE-14747 > URL: https://issues.apache.org/jira/browse/HIVE-14747 > Project: Hive > Issue Type: Sub-task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña > > Hive ptest uses some properties files per branch that contain information > about how to execute the tests. > This profile includes JAVA paths to build and execute the tests. We should > get rid of these by passing such information from Jenkins to the > ptest-server. In case a profile needs a different java version, then we can > create a specific Jenkins job for that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14747) Remove JAVA paths from profiles by sending them from ptest-client
[ https://issues.apache.org/jira/browse/HIVE-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-14747: -- Summary: Remove JAVA paths from profiles by sending them from ptest-client (was: Remove JAVA patsh from profiles by sending them from ptest-client) > Remove JAVA paths from profiles by sending them from ptest-client > - > > Key: HIVE-14747 > URL: https://issues.apache.org/jira/browse/HIVE-14747 > Project: Hive > Issue Type: Sub-task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña > > Hive ptest uses some properties files per branch that contain information > about how to execute the tests. > This profile includes JAVA paths to build and execute the tests. We should > get rid of these by passing such information from Jenkins to the > ptest-server. In case a profile needs a different java version, then we can > create a specific Jenkins job for that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14680) retain consistent splits /during/ (as opposed to across) LLAP failures on top of HIVE-14589
[ https://issues.apache.org/jira/browse/HIVE-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488492#comment-15488492 ] Sergey Shelukhin commented on HIVE-14680: - It's a golden-file-style test. It tests that the distributions fall within parameters that were determined by running the test, looking at results, and deciding that they are ok; then encoding that as acceptable results. > retain consistent splits /during/ (as opposed to across) LLAP failures on top > of HIVE-14589 > --- > > Key: HIVE-14680 > URL: https://issues.apache.org/jira/browse/HIVE-14680 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-14680.01.patch, HIVE-14680.patch > > > see HIVE-14589. > Basic idea (spent about 7 minutes thinking about this based on RB comment ;)) > is to return locations for all slots to HostAffinitySplitLocationProvider, > the missing slots being inactive locations (based solely on the last slot > actually present). For the splits mapped to these locations, fall back via > different hash functions, or some sort of probing. > This still doesn't handle all the cases, namely when the last slots are gone > (consistent hashing is supposed to be good for this?); however for that we'd > need more involved coordination between nodes or a central updater to > indicate the number of nodes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
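The probing idea sketched in the HIVE-14680 description — keep inactive slots in place and fall back deterministically when a split hashes to one — can be illustrated roughly as follows. Class and method names are invented for the sketch; this is not the actual HostAffinitySplitLocationProvider:

```java
import java.util.List;

// Sketch: every slot (active or not) keeps its position in the location list,
// and a split that hashes to an inactive slot falls back by re-hashing with an
// attempt counter until it lands on an active one. Splits whose first-choice
// slot is still active are unaffected by failures elsewhere, which is the
// "consistent during failures" property the jira is after.
public class SlotAffinity {
    // locations.get(i) == null marks an inactive slot whose position is retained.
    public static int locationFor(String splitKey, List<String> locations) {
        int n = locations.size();
        // Bounded probe; a real implementation would need a guaranteed-terminating
        // fallback (e.g. a scan) rather than giving up.
        for (int attempt = 0; attempt < n * 4; attempt++) {
            int slot = Math.floorMod((splitKey + "#" + attempt).hashCode(), n);
            if (locations.get(slot) != null) {
                return slot;
            }
        }
        throw new IllegalStateException("no active slot found");
    }
}
```

As the description notes, this still degrades when trailing slots disappear entirely, since the list length itself then changes; that is where consistent hashing or a central node-count updater would come in.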
[jira] [Comment Edited] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488427#comment-15488427 ] Xuefu Zhang edited comment on HIVE-14412 at 9/13/16 9:29 PM: - I think the standard type name is TIMESTAMP WITH TIME ZONE. However, I'm not sure how hard to change the grammar to handle spaces in type name. It seems doable. It's fine to leave this for the future, but it's better not to introduce a nonstandard name. We can have TIMESTAMPTZ for now. [~mmccline], I believe what you mentioned can be done incrementally. was (Author: xuefuz): I think the standard type name is TIMESTAMP WITH TIMEZONE. However, I'm not sure how hard to change the grammar to handle spaces in type name. It seems doable. [~mmccline], I believe what you mentioned can be done incrementally. > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14680) retain consistent splits /during/ (as opposed to across) LLAP failures on top of HIVE-14589
[ https://issues.apache.org/jira/browse/HIVE-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488455#comment-15488455 ] Siddharth Seth commented on HIVE-14680: --- Does the silly test test anything? Otherwise may as well delete it. Maybe add a non-silly test. > retain consistent splits /during/ (as opposed to across) LLAP failures on top > of HIVE-14589 > --- > > Key: HIVE-14680 > URL: https://issues.apache.org/jira/browse/HIVE-14680 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-14680.01.patch, HIVE-14680.patch > > > see HIVE-14589. > Basic idea (spent about 7 minutes thinking about this based on RB comment ;)) > is to return locations for all slots to HostAffinitySplitLocationProvider, > the missing slots being inactive locations (based solely on the last slot > actually present). For the splits mapped to these locations, fall back via > different hash functions, or some sort of probing. > This still doesn't handle all the cases, namely when the last slots are gone > (consistent hashing is supposed to be good for this?); however for that we'd > need more involved coordination between nodes or a central updater to > indicate the number of nodes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488427#comment-15488427 ] Xuefu Zhang commented on HIVE-14412: I think the standard type name is TIMESTAMP WITH TIMEZONE. However, I'm not sure how hard to change the grammar to handle spaces in type name. It seems doable. [~mmccline], I believe what you mentioned can be done incrementally. > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
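The ambiguity the HIVE-14412 description refers to is easy to demonstrate: the same timestamp text denotes different instants depending on which time zone interprets it, which is exactly what a TIMESTAMP WITH TIME ZONE type would pin down. A small illustration using java.time (the zones chosen are arbitrary):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

// A bare timestamp string has no single epoch value until a zone is chosen,
// which is why string<->timestamp round-trips can disagree (HIVE-14305).
public class TimestampAmbiguity {
    public static Instant interpret(String ts, String zone) {
        return LocalDateTime.parse(ts).atZone(ZoneId.of(zone)).toInstant();
    }

    public static void main(String[] args) {
        Instant utc = interpret("2016-01-01T00:00:00", "UTC");
        Instant la  = interpret("2016-01-01T00:00:00", "America/Los_Angeles");
        // Same text, two different points on the timeline, eight hours apart
        // (Los Angeles is UTC-8 in January).
        System.out.println(la.getEpochSecond() - utc.getEpochSecond()); // 28800
    }
}
```

A timezone-aware value carries the zone with it, so the instant it denotes is the same no matter where it is parsed or printed.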
[jira] [Commented] (HIVE-3776) support PIVOT in hive
[ https://issues.apache.org/jira/browse/HIVE-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488425#comment-15488425 ] Dan F commented on HIVE-3776: - https://www.quora.com/Is-there-a-way-to-transpose-data-in-Hive/answer/John-Martinez-41 says to use the COLLECT function from https://github.com/klout/brickhouse. > support PIVOT in hive > - > > Key: HIVE-3776 > URL: https://issues.apache.org/jira/browse/HIVE-3776 > Project: Hive > Issue Type: New Feature > Components: Query Processor >Reporter: Namit Jain >Assignee: Namit Jain > > It is a fairly well understood feature in databases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14739) Replace runnables directly added to runtime shutdown hooks to avoid deadlock
[ https://issues.apache.org/jira/browse/HIVE-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488409#comment-15488409 ] Siddharth Seth commented on HIVE-14739: --- +1 > Replace runnables directly added to runtime shutdown hooks to avoid deadlock > > > Key: HIVE-14739 > URL: https://issues.apache.org/jira/browse/HIVE-14739 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Prasanth Jayachandran > Attachments: HIVE-14739.1.patch, HIVE-14739.2.patch, > HIVE-14739.3.patch > > > [~deepesh] reported that a deadlock can occur when running queries through > hive cli. [~cnauroth] analyzed it and reported that hive adds shutdown hooks > directly to java Runtime which may execute in non-deterministic order causing > deadlocks with hadoop's shutdown hooks. In one case, hadoop shutdown locked > FileSystem#Cache and FileSystem.close whereas hive shutdown hook locked > FileSystem.close and FileSystem#Cache order causing a deadlock. > Hive and Hadoop has ShutdownHookManager that runs the shutdown hooks in > deterministic order based on priority. We should use that to avoid deadlock > throughout the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488387#comment-15488387 ] Aihua Xu commented on HIVE-14251: - Those tests are not related. > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch, HIVE-14251.6.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
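For illustration, here is a toy version of the common-type resolution at stake in HIVE-14251. The tiny type lattice below is invented for the example and is not Hive's actual FunctionRegistry logic; the point is that the result type must be folded pairwise across every UNION ALL branch rather than taken from the last branch:

```java
import java.util.List;
import java.util.Map;

// Toy common-type resolution: numeric types widen to the larger numeric,
// while a date and a numeric only share "string" as a common textual form.
public class CommonType {
    private static final Map<String, Integer> NUMERIC_RANK =
        Map.of("int", 1, "double", 2);

    public static String commonType(String a, String b) {
        if (a.equals(b)) return a;
        if (NUMERIC_RANK.containsKey(a) && NUMERIC_RANK.containsKey(b)) {
            return NUMERIC_RANK.get(a) >= NUMERIC_RANK.get(b) ? a : b;
        }
        return "string"; // incompatible families fall back to text
    }

    // Fold over every branch type. Taking only the last branch's type (the
    // reported bug) would yield "double" for (date, int, double) and turn the
    // date column's values into NULLs, as in the jira's repro query.
    public static String resolve(List<String> branchTypes) {
        return branchTypes.stream().reduce(CommonType::commonType).orElseThrow();
    }
}
```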
[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488310#comment-15488310 ] Hive QA commented on HIVE-14251: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828310/HIVE-14251.6.patch {color:green}SUCCESS:{color} +1 due to 15 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10544 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1166/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1166/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1166/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12828310 - PreCommit-HIVE-MASTER-Build > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch, HIVE-14251.6.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14680) retain consistent splits /during/ (as opposed to across) LLAP failures on top of HIVE-14589
[ https://issues.apache.org/jira/browse/HIVE-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-14680: Attachment: HIVE-14680.01.patch Added a silly test > retain consistent splits /during/ (as opposed to across) LLAP failures on top > of HIVE-14589 > --- > > Key: HIVE-14680 > URL: https://issues.apache.org/jira/browse/HIVE-14680 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-14680.01.patch, HIVE-14680.patch > > > see HIVE-14589. > Basic idea (spent about 7 minutes thinking about this based on RB comment ;)) > is to return locations for all slots to HostAffinitySplitLocationProvider, > the missing slots being inactive locations (based solely on the last slot > actually present). For the splits mapped to these locations, fall back via > different hash functions, or some sort of probing. > This still doesn't handle all the cases, namely when the last slots are gone > (consistent hashing is supposed to be good for this?); however for that we'd > need more involved coordination between nodes or a central updater to > indicate the number of nodes -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14744) Improve Hive ptest to execute tests per branch automatically and without pre-configurations
[ https://issues.apache.org/jira/browse/HIVE-14744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488136#comment-15488136 ] Sergio Peña commented on HIVE-14744: [~sseth] [~prasanth_j] [~ashutoshc] I created these subtasks so that we can automate ptest without doing manual configurations every time a new hive version is released. Would you help me identify other subtasks to do that? Also, do you think the ones I created are good enough to make this work? > Improve Hive ptest to execute tests per branch automatically and without > pre-configurations > --- > > Key: HIVE-14744 > URL: https://issues.apache.org/jira/browse/HIVE-14744 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > > This task is meant to improve the way Hive PTest executes all test per branch. > Currently, when a new branch in Hive is created, someone with admin rights > needs to create a new job configuration on Jenkins and the PTest server to > allow tests on this branch. > We should remove this human interaction from Jenkins and ptest, and allow any > committer to test their patches on any branch they specify in the file > attached automatically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14734) Detect ptest profile and submit to ptest-server from jenkins-execute-build.sh
[ https://issues.apache.org/jira/browse/HIVE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-14734: --- Issue Type: Sub-task (was: Task) Parent: HIVE-14744 > Detect ptest profile and submit to ptest-server from jenkins-execute-build.sh > - > > Key: HIVE-14734 > URL: https://issues.apache.org/jira/browse/HIVE-14734 > Project: Hive > Issue Type: Sub-task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-14734.patch > > > NO PRECOMMIT TESTS > Currently, to execute tests on a new branch, a manual process must be done: > 1. Create a new Jenkins job with the new branch name > 2. Create a patch to jenkins-submit-build.sh with the new branch > 3. Create a profile properties file on the ptest master with the new branch > This jira will attempt to automate steps 1 and 2 by detecting the branch > profile from a patch to test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14734) Detect ptest profile and submit to ptest-server from jenkins-execute-build.sh
[ https://issues.apache.org/jira/browse/HIVE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-14734: --- Summary: Detect ptest profile and submit to ptest-server from jenkins-execute-build.sh (was: Allow jenkins ptest job to execute tests on branch dynamically) > Detect ptest profile and submit to ptest-server from jenkins-execute-build.sh > - > > Key: HIVE-14734 > URL: https://issues.apache.org/jira/browse/HIVE-14734 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-14734.patch > > > NO PRECOMMIT TESTS > Currently, to execute tests on a new branch, a manual process must be done: > 1. Create a new Jenkins job with the new branch name > 2. Create a patch to jenkins-submit-build.sh with the new branch > 3. Create a profile properties file on the ptest master with the new branch > This jira will attempt to automate steps 1 and 2 by detecting the branch > profile from a patch to test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14739) Replace runnables directly added to runtime shutdown hooks to avoid deadlock
[ https://issues.apache.org/jira/browse/HIVE-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14739: - Attachment: HIVE-14739.3.patch Thanks [~cnauroth] for the review! This patch fixes the build. > Replace runnables directly added to runtime shutdown hooks to avoid deadlock > > > Key: HIVE-14739 > URL: https://issues.apache.org/jira/browse/HIVE-14739 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Prasanth Jayachandran > Attachments: HIVE-14739.1.patch, HIVE-14739.2.patch, > HIVE-14739.3.patch > > > [~deepesh] reported that a deadlock can occur when running queries through > hive cli. [~cnauroth] analyzed it and reported that hive adds shutdown hooks > directly to java Runtime which may execute in non-deterministic order causing > deadlocks with hadoop's shutdown hooks. In one case, hadoop shutdown locked > FileSystem#Cache and FileSystem.close whereas hive shutdown hook locked > FileSystem.close and FileSystem#Cache order causing a deadlock. > Hive and Hadoop has ShutdownHookManager that runs the shutdown hooks in > deterministic order based on priority. We should use that to avoid deadlock > throughout the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-14251: Attachment: HIVE-14251.6.patch Update the .q.out file for one test case union_null.q of spark. > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch, HIVE-14251.6.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
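The bug described above amounts to taking the last branch's type instead of folding a common type across all UNION ALL branches. A hedged sketch of the intended resolution follows; the type names and the widening table here are illustrative, not Hive's actual FunctionRegistry logic:

```java
// Hypothetical sketch of resolving a common type for UNION ALL.
public class UnionTypeResolver {
    // Pairwise common type: identical types stay; int widens to double;
    // anything else falls back to string, which can represent all values.
    static String commonType(String a, String b) {
        if (a.equals(b)) return a;
        if ((a.equals("int") && b.equals("double")) || (a.equals("double") && b.equals("int"))) {
            return "double";
        }
        return "string";
    }

    // Common type over all branches: a left fold, so every branch
    // participates. The reported bug is equivalent to returning
    // types[types.length - 1] here, which coerces date values to double
    // and yields NULLs.
    public static String commonTypeForUnion(String... types) {
        String result = types[0];
        for (int i = 1; i < types.length; i++) {
            result = commonType(result, types[i]);
        }
        return result;
    }
}
```

For the query in the description (date, int, double branches), the fold resolves to string, so the date values survive, rather than resolving to double and returning NULL for c1.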
[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487918#comment-15487918 ] Hive QA commented on HIVE-14251: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828291/HIVE-14251.5.patch {color:green}SUCCESS:{color} +1 due to 15 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10515 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_null] org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1165/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1165/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1165/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12828291 - PreCommit-HIVE-MASTER-Build > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487701#comment-15487701 ] Aihua Xu commented on HIVE-14251: - Sure. Will update the wiki when the change is in. > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-14251: Status: Patch Available (was: In Progress) Rebase to the latest with the same change. > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-14251: Status: In Progress (was: Patch Available) > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14251) Union All of different types resolves to incorrect data
[ https://issues.apache.org/jira/browse/HIVE-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-14251: Attachment: HIVE-14251.5.patch > Union All of different types resolves to incorrect data > --- > > Key: HIVE-14251 > URL: https://issues.apache.org/jira/browse/HIVE-14251 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14251.1.patch, HIVE-14251.2.patch, > HIVE-14251.3.patch, HIVE-14251.4.patch, HIVE-14251.5.patch > > > create table src(c1 date, c2 int, c3 double); > insert into src values ('2016-01-01',5,1.25); > select * from > (select c1 from src union all > select c2 from src union all > select c3 from src) t; > It will return NULL for the c1 values. Seems the common data type is resolved > to the last c3 which is double. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14474) Create datasource in Druid from Hive
[ https://issues.apache.org/jira/browse/HIVE-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487685#comment-15487685 ] Hive QA commented on HIVE-14474: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828278/HIVE-14474.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 10547 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic1] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic2] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_intervals] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_timeseries] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_topn] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[druid_address] org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[druid_buckets] org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[druid_datasource] org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[druid_external] org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[druid_location] org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[druid_partitions] org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1164/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1164/console Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1164/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 17 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12828278 - PreCommit-HIVE-MASTER-Build > Create datasource in Druid from Hive > > > Key: HIVE-14474 > URL: https://issues.apache.org/jira/browse/HIVE-14474 > Project: Hive > Issue Type: Sub-task > Components: Druid integration >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14474.patch > > > We want to extend the DruidStorageHandler to support CTAS queries. > We need to implement a DruidOutputFormat that can create Druid segments from > the output of the Hive query and store them directly in Druid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487655#comment-15487655 ] Matt McCline commented on HIVE-14412: - And, SerializedWrite/DeserializationRead and ORC? > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487650#comment-15487650 ] Matt McCline commented on HIVE-14412: - What about vectorization support?? > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
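The ambiguity motivating this issue is easy to demonstrate with java.time (used here directly for illustration, not Hive's Timestamp): the same zone-less timestamp string maps to different instants depending on the zone assumed at parse time, which is exactly what a timezone-aware timestamp type would eliminate by carrying the zone with the value.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Parse a zone-less timestamp string as wall-clock time in a given zone.
// Two sessions with different zones get different instants for the same
// string -- the ambiguity described in the issue.
public class TimestampAmbiguity {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    public static Instant parseIn(String text, String zone) {
        return LocalDateTime.parse(text, FMT).atZone(ZoneId.of(zone)).toInstant();
    }
}
```

Midnight on 2016-01-01 parsed in UTC and in America/Los_Angeles differ by eight hours (PST is UTC-8 in January), so round-tripping through strings between sessions in different zones silently shifts the value.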
[jira] [Resolved] (HIVE-14742) Hive on spark throws NPE exception for union all query
[ https://issues.apache.org/jira/browse/HIVE-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu resolved HIVE-14742. - Resolution: Duplicate Assignee: (was: Aihua Xu) Actually it has been fixed by HIVE-9570. > Hive on spark throws NPE exception for union all query > --- > > Key: HIVE-14742 > URL: https://issues.apache.org/jira/browse/HIVE-14742 > Project: Hive > Issue Type: Bug > Components: Spark >Affects Versions: 2.0.0 >Reporter: Aihua Xu > > {noformat} > create table foo (fooId string, fooData string) partitioned by (fooPartition > string) stored as parquet; > insert into foo partition (fooPartition = '1') values ('1', '1'), ('2', '2'); > set hive.execution.engine=spark; > select * from ( > select > fooId as myId, > fooData as myData > from foo where fooPartition = '1' > union all > select > fooId as myId, > fooData as myData > from foo where fooPartition = '3' > ) allData; > {noformat} > Error while compiling statement: FAILED: NullPointerException null -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14474) Create datasource in Druid from Hive
[ https://issues.apache.org/jira/browse/HIVE-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14474: --- Attachment: (was: HIVE-14474.patch) > Create datasource in Druid from Hive > > > Key: HIVE-14474 > URL: https://issues.apache.org/jira/browse/HIVE-14474 > Project: Hive > Issue Type: Sub-task > Components: Druid integration >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14474.patch > > > We want to extend the DruidStorageHandler to support CTAS queries. > We need to implement a DruidOutputFormat that can create Druid segments from > the output of the Hive query and store them directly in Druid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14474) Create datasource in Druid from Hive
[ https://issues.apache.org/jira/browse/HIVE-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14474: --- Attachment: HIVE-14474.patch > Create datasource in Druid from Hive > > > Key: HIVE-14474 > URL: https://issues.apache.org/jira/browse/HIVE-14474 > Project: Hive > Issue Type: Sub-task > Components: Druid integration >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14474.patch > > > We want to extend the DruidStorageHandler to support CTAS queries. > We need to implement a DruidOutputFormat that can create Druid segments from > the output of the Hive query and store them directly in Druid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14474) Create datasource in Druid from Hive
[ https://issues.apache.org/jira/browse/HIVE-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14474: --- Attachment: HIVE-14474.patch > Create datasource in Druid from Hive > > > Key: HIVE-14474 > URL: https://issues.apache.org/jira/browse/HIVE-14474 > Project: Hive > Issue Type: Sub-task > Components: Druid integration >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14474.patch > > > We want to extend the DruidStorageHandler to support CTAS queries. > We need to implement a DruidOutputFormat that can create Druid segments from > the output of the Hive query and store them directly in Druid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487429#comment-15487429 ] Sergio Peña commented on HIVE-14412: I don't know if we support spaces in type names, but I think we should try to support the standard type name, since being more standard is something we have been aiming for in Hive. How do other DB systems handle the syntax for this? > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14474) Create datasource in Druid from Hive
[ https://issues.apache.org/jira/browse/HIVE-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14474: --- Status: Patch Available (was: In Progress) > Create datasource in Druid from Hive > > > Key: HIVE-14474 > URL: https://issues.apache.org/jira/browse/HIVE-14474 > Project: Hive > Issue Type: Sub-task > Components: Druid integration >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14474.patch > > > We want to extend the DruidStorageHandler to support CTAS queries. > We need to implement a DruidOutputFormat that can create Druid segments from > the output of the Hive query and store them directly in Druid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-14474) Create datasource in Druid from Hive
[ https://issues.apache.org/jira/browse/HIVE-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-14474 started by Jesus Camacho Rodriguez. -- > Create datasource in Druid from Hive > > > Key: HIVE-14474 > URL: https://issues.apache.org/jira/browse/HIVE-14474 > Project: Hive > Issue Type: Sub-task > Components: Druid integration >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > We want to extend the DruidStorageHandler to support CTAS queries. > We need to implement a DruidOutputFormat that can create Druid segments from > the output of the Hive query and store them directly in Druid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14734) Allow jenkins ptest job to execute tests on branch dynamically
[ https://issues.apache.org/jira/browse/HIVE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15487424#comment-15487424 ] Sergio Peña commented on HIVE-14734: [~prasanth_j] I submitted the patch to RB. I recommend applying the patch in your local branch and reviewing both the {{jenkins-execute-build.sh}} and {{jenkins-common.sh}} files; I think that will make the change easier to understand than reading the diff. > Allow jenkins ptest job to execute tests on branch dynamically > -- > > Key: HIVE-14734 > URL: https://issues.apache.org/jira/browse/HIVE-14734 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-14734.patch > > > NO PRECOMMIT TESTS > Currently, to execute tests on a new branch, a manual process must be done: > 1. Create a new Jenkins job with the new branch name > 2. Create a patch to jenkins-submit-build.sh with the new branch > 3. Create a profile properties file on the ptest master with the new branch > This jira will attempt to automate steps 1 and 2 by detecting the branch > profile from a patch to test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14734) Allow jenkins ptest job to execute tests on branch dynamically
[ https://issues.apache.org/jira/browse/HIVE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-14734: --- Attachment: HIVE-14734.patch > Allow jenkins ptest job to execute tests on branch dynamically > -- > > Key: HIVE-14734 > URL: https://issues.apache.org/jira/browse/HIVE-14734 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-14734.patch > > > NO PRECOMMIT TESTS > Currently, to execute tests on a new branch, a manual process must be done: > 1. Create a new Jenkins job with the new branch name > 2. Create a patch to jenkins-submit-build.sh with the new branch > 3. Create a profile properties file on the ptest master with the new branch > This jira will attempt to automate steps 1 and 2 by detecting the branch > profile from a patch to test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14734) Allow jenkins ptest job to execute tests on branch dynamically
[ https://issues.apache.org/jira/browse/HIVE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-14734: --- Attachment: (was: HIVE-14734) > Allow jenkins ptest job to execute tests on branch dynamically > -- > > Key: HIVE-14734 > URL: https://issues.apache.org/jira/browse/HIVE-14734 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-14734.patch > > > NO PRECOMMIT TESTS > Currently, to execute tests on a new branch, a manual process must be done: > 1. Create a new Jenkins job with the new branch name > 2. Create a patch to jenkins-submit-build.sh with the new branch > 3. Create a profile properties file on the ptest master with the new branch > This jira will attempt to automate steps 1 and 2 by detecting the branch > profile from a patch to test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14726) delete statement fails when spdo is on
[ https://issues.apache.org/jira/browse/HIVE-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14726: Resolution: Fixed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Jesus for review! > delete statement fails when spdo is on > -- > > Key: HIVE-14726 > URL: https://issues.apache.org/jira/browse/HIVE-14726 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 2.1.0 >Reporter: Deepesh Khandelwal >Assignee: Ashutosh Chauhan > Fix For: 2.2.0 > > Attachments: HIVE-14726.1.patch, HIVE-14726.2.patch, HIVE-14726.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14741) Incorrect results on boolean col when vectorization is ON
[ https://issues.apache.org/jira/browse/HIVE-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amruth S updated HIVE-14741: Labels: orc vectorization (was: ) > Incorrect results on boolean col when vectorization is ON > - > > Key: HIVE-14741 > URL: https://issues.apache.org/jira/browse/HIVE-14741 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0, 2.1.0 >Reporter: Amruth S > Labels: orc, vectorization > Attachments: 00_0 > > > I have attached the ORC part file on which the issue manifests. It has > just one boolean column (lots of nulls, 231 trues; verified using the orc file > dump utility) > 1) Create an external table on the attached part file > CREATE EXTERNAL TABLE bool_vect_issue ( > `bool_col` BOOLEAN) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > ''; > 2) > set hive.vectorized.execution.enabled = true; > select sum(if((bool_col) , 1, 0)) from bool_vect_issue; > gives > 708206 > 3) > set hive.vectorized.execution.enabled = false; > select sum(if((bool_col) , 1, 0)) from bool_vect_issue; > gives > 231 > The issue seems to have the same impact as HIVE-12435 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14741) Incorrect results on boolean col when vectorization is ON
[ https://issues.apache.org/jira/browse/HIVE-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amruth S updated HIVE-14741: Attachment: 00_0 > Incorrect results on boolean col when vectorization is ON > - > > Key: HIVE-14741 > URL: https://issues.apache.org/jira/browse/HIVE-14741 > Project: Hive > Issue Type: Bug >Affects Versions: 2.0.0, 2.1.0 >Reporter: Amruth S > Attachments: 00_0 > > > I have attached the ORC part file on which the issue is manifesting. It has > just one boolean column (lot of nulls, 231=trues : verified using orc file > dump utility) > 1) Create external table on the part file attached > CREATE EXTERNAL TABLE bool_vect_issue ( > `bool_col` BOOLEAN) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > ''; > 2) > set hive.vectorized.execution.enabled = true; > select sum(if((bool_col) , 1, 0)) from bool_vect_issue; > gives > 708206 > 3) > set hive.vectorized.execution.enabled = false; > select sum(if((bool_col) , 1, 0)) from bool_vect_issue; > gives > 231 > The issue seem to have the same impact as HIVE-12435 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
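The inflated count in the repro above is characteristic of ignoring vector metadata flags. A standalone sketch of how a vectorized sum(if(bool_col, 1, 0)) must honor isRepeating and the null flags follows; the field names mirror Hive's LongColumnVector conventions, but this is an illustration, not Hive code:

```java
// Sketch of summing a boolean column vector. When isRepeating is set,
// only entry 0 of the arrays is meaningful; reading the whole array in
// that case is exactly the kind of bug that inflates the count.
public class BoolVectorSum {
    public static long sum(long[] vector, boolean[] isNull, boolean noNulls,
                           boolean isRepeating, int size) {
        if (isRepeating) {
            if (!noNulls && isNull[0]) return 0;          // all rows null
            return vector[0] == 1 ? size : 0;             // all rows same value
        }
        long total = 0;
        for (int i = 0; i < size; i++) {
            // if(NULL, 1, 0) takes the else branch, so null rows add 0.
            if ((noNulls || !isNull[i]) && vector[i] == 1) {
                total++;
            }
        }
        return total;
    }
}
```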
[jira] [Commented] (HIVE-13878) Vectorization: Column pruning for Text vectorization
[ https://issues.apache.org/jira/browse/HIVE-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486647#comment-15486647 ] Matt McCline commented on HIVE-13878: - Test failures are unrelated. > Vectorization: Column pruning for Text vectorization > > > Key: HIVE-13878 > URL: https://issues.apache.org/jira/browse/HIVE-13878 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Matt McCline > Attachments: HIVE-13878.04.patch, HIVE-13878.05.patch, > HIVE-13878.06.patch, HIVE-13878.07.patch, HIVE-13878.08.patch, > HIVE-13878.09.patch, HIVE-13878.091.patch, HIVE-13878.1.patch, > HIVE-13878.2.patch, HIVE-13878.3.patch > > > Column pruning in TextFile vectorization does not work with Vector SerDe > settings due to LazySimple deser codepath issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13878) Vectorization: Column pruning for Text vectorization
[ https://issues.apache.org/jira/browse/HIVE-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486631#comment-15486631 ] Hive QA commented on HIVE-13878: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12828181/HIVE-13878.091.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10546 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats0] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char] org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1163/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/1163/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-1163/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12828181 - PreCommit-HIVE-MASTER-Build > Vectorization: Column pruning for Text vectorization > > > Key: HIVE-13878 > URL: https://issues.apache.org/jira/browse/HIVE-13878 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Matt McCline > Attachments: HIVE-13878.04.patch, HIVE-13878.05.patch, > HIVE-13878.06.patch, HIVE-13878.07.patch, HIVE-13878.08.patch, > HIVE-13878.09.patch, HIVE-13878.091.patch, HIVE-13878.1.patch, > HIVE-13878.2.patch, HIVE-13878.3.patch > > > Column pruning in TextFile vectorization does not work with Vector SerDe > settings due to LazySimple deser codepath issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14734) Allow jenkins ptest job to execute tests on branch dynamically
[ https://issues.apache.org/jira/browse/HIVE-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486621#comment-15486621 ] Prasanth Jayachandran commented on HIVE-14734: -- [~spena] Can you please upload the patch to RB, or upload it with a .patch extension (I can use a Chrome extension to view the patch)? > Allow jenkins ptest job to execute tests on branch dynamically > -- > > Key: HIVE-14734 > URL: https://issues.apache.org/jira/browse/HIVE-14734 > Project: Hive > Issue Type: Task > Components: Hive, Testing Infrastructure >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: HIVE-14734 > > > NO PRECOMMIT TESTS > Currently, to execute tests on a new branch, a manual process must be done: > 1. Create a new Jenkins job with the new branch name > 2. Create a patch to jenkins-submit-build.sh with the new branch > 3. Create a profile properties file on the ptest master with the new branch > This jira will attempt to automate steps 1 and 2 by detecting the branch > profile from a patch to test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14412) Add a timezone-aware timestamp
[ https://issues.apache.org/jira/browse/HIVE-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486593#comment-15486593 ] Rui Li commented on HIVE-14412: --- I've drafted a summary of the proposal in Google Doc and linked it here. Suggestions and feedback are welcome! > Add a timezone-aware timestamp > -- > > Key: HIVE-14412 > URL: https://issues.apache.org/jira/browse/HIVE-14412 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-14412.1.patch, HIVE-14412.2.patch, > HIVE-14412.3.patch, HIVE-14412.4.patch > > > Java's Timestamp stores the time elapsed since the epoch. While it's by > itself unambiguous, ambiguity comes when we parse a string into timestamp, or > convert a timestamp to string, causing problems like HIVE-14305. > To solve the issue, I think we should make timestamp aware of timezone. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14663) Change ptest java language version to 1.7, other version changes and fixes
[ https://issues.apache.org/jira/browse/HIVE-14663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14663: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to master > Change ptest java language version to 1.7, other version changes and fixes > -- > > Key: HIVE-14663 > URL: https://issues.apache.org/jira/browse/HIVE-14663 > Project: Hive > Issue Type: Sub-task >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Fix For: 2.2.0 > > Attachments: HIVE-14663.01.patch, HIVE-14663.2.patch, > HIVE-14663.2.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)