[jira] [Commented] (HIVE-17839) Cannot generate thrift definitions in standalone-metastore.

2017-10-23 Thread Harish Jaiprakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216358#comment-16216358
 ] 

Harish Jaiprakash commented on HIVE-17839:
--

Thanks, [~alangates]. Tried the patch, thrift files are being regenerated now.

> Cannot generate thrift definitions in standalone-metastore.
> ---
>
> Key: HIVE-17839
> URL: https://issues.apache.org/jira/browse/HIVE-17839
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Jaiprakash
>Assignee: Alan Gates
> Attachments: HIVE-17839.patch
>
>
> mvn clean install -Pthriftif -Dthrift.home=... does not regenerate the thrift 
> sources. This is after the https://issues.apache.org/jira/browse/HIVE-17506 
> fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17458) VectorizedOrcAcidRowBatchReader doesn't handle 'original' files

2017-10-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17458:
--
Attachment: HIVE-17458.06.patch

> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---
>
> Key: HIVE-17458
> URL: https://issues.apache.org/jira/browse/HIVE-17458
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17458.01.patch, HIVE-17458.02.patch, 
> HIVE-17458.03.patch, HIVE-17458.04.patch, HIVE-17458.05.patch, 
> HIVE-17458.06.patch
>
>
> VectorizedOrcAcidRowBatchReader will not be used for original files.  This 
> will likely look like a perf regression when converting a table from non-acid 
> to acid until it runs through a major compaction.
> With Load Data support, if large files are added via Load Data, the read ops 
> will not vectorize until major compaction.  
> There is no reason why this should be the case.  Just like 
> OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other 
> files in the logical tranche/bucket and calculate the offset for the RowBatch 
> of the split.  (Presumably getRecordReader().getRowNumber() works the same in 
> vector mode).
> In this case we don't even need OrcSplit.isOriginal() - the reader can infer 
> it from file path... which in particular simplifies 
> OrcInputFormat.determineSplitStrategies()
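
A minimal sketch of the offset computation described above, assuming the caller
already has the list of "original" files for the logical bucket in the same
order OrcRawRecordMerger would use. The helper below is illustrative only (not
the actual patch) and relies only on the ORC footer row count:

{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;

public class OriginalFileOffsets {
  // Rows of an "original" file logically start after all rows of the original
  // files that precede it in the same bucket, so its starting row id is the sum
  // of the preceding files' row counts (error handling/reader cleanup omitted).
  public static long rowOffset(Path splitFile, List<Path> bucketFilesInOrder,
      Configuration conf) throws IOException {
    long offset = 0;
    for (Path f : bucketFilesInOrder) {
      if (f.equals(splitFile)) {
        return offset;
      }
      Reader reader = OrcFile.createReader(f, OrcFile.readerOptions(conf));
      offset += reader.getNumberOfRows();   // read from the ORC footer only
    }
    throw new IllegalStateException(splitFile + " not found in its bucket");
  }
}
{code}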



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16603) Enforce foreign keys to refer to primary keys or unique keys

2017-10-23 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-16603:
--
Labels: TODOC3.0  (was: )

> Enforce foreign keys to refer to primary keys or unique keys
> 
>
> Key: HIVE-16603
> URL: https://issues.apache.org/jira/browse/HIVE-16603
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-16603.patch
>
>
> Follow-up on HIVE-16575.
> Currently we do not enforce foreign keys to refer to primary keys or unique 
> keys (as opposed to PostgreSQL and others); we should do that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17473) implement workload management pools

2017-10-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216331#comment-16216331
 ] 

Lefty Leverenz commented on HIVE-17473:
---

Okay, thanks Sergey.

> implement workload management pools
> ---
>
> Key: HIVE-17473
> URL: https://issues.apache.org/jira/browse/HIVE-17473
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 3.0.0
>
> Attachments: HIVE-17473.01.patch, HIVE-17473.03.patch, 
> HIVE-17473.04.patch, HIVE-17473.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17832) Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in metastore

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216324#comment-16216324
 ] 

Hive QA commented on HIVE-17832:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893593/HIVE17832.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 11317 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=101)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=204)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=221)
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
 (batchId=269)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7448/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7448/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7448/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12893593 - PreCommit-HIVE-Build

> Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in 
> metastore
> --
>
> Key: HIVE-17832
> URL: https://issues.apache.org/jira/browse/HIVE-17832
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
> Fix For: 3.0.0
>
> Attachments: HIVE17832.1.patch, HIVE17832.2.patch
>
>
> hive.metastore.disallow.incompatible.col.type.changes, when set to true, will 
> disallow incompatible column type changes through alter table.  But this 
> parameter is not modifiable in HMS.  If HMS is not embedded into HS2, the 
> value cannot be changed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15104) Hive on Spark generate more shuffle data than hive on mr

2017-10-23 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-15104:
--
Attachment: HIVE-15104.10.patch

Update to address review comments. Also changed the default switch back to 
false.

> Hive on Spark generate more shuffle data than hive on mr
> 
>
> Key: HIVE-15104
> URL: https://issues.apache.org/jira/browse/HIVE-15104
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1
>Reporter: wangwenli
>Assignee: Rui Li
> Attachments: HIVE-15104.1.patch, HIVE-15104.10.patch, 
> HIVE-15104.2.patch, HIVE-15104.3.patch, HIVE-15104.4.patch, 
> HIVE-15104.5.patch, HIVE-15104.6.patch, HIVE-15104.7.patch, 
> HIVE-15104.8.patch, HIVE-15104.9.patch, TPC-H 100G.xlsx
>
>
> The same SQL, running on the Spark and MR engines, will generate different 
> sizes of shuffle data.
> I think this is because Hive on MR serializes only part of the HiveKey, while 
> Hive on Spark, which uses Kryo, serializes the full HiveKey object.
> What is your opinion?
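
Purely as an illustration of the serialization difference described above (not
the fix in the attached patches), a custom Kryo serializer for HiveKey could
write just the used key bytes instead of the whole object graph; the
HiveKeySerializer name is made up here and the shuffle hash-code handling is
omitted:

{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

import org.apache.hadoop.hive.ql.io.HiveKey;

// Writes only the used portion of the key's byte array rather than letting
// Kryo serialize every field of the HiveKey/BytesWritable object.
public class HiveKeySerializer extends Serializer<HiveKey> {
  @Override
  public void write(Kryo kryo, Output output, HiveKey key) {
    output.writeInt(key.getLength(), true);               // varint length
    output.writeBytes(key.getBytes(), 0, key.getLength());
  }

  @Override
  public HiveKey read(Kryo kryo, Input input, Class<HiveKey> type) {
    int len = input.readInt(true);
    byte[] bytes = input.readBytes(len);
    HiveKey key = new HiveKey();
    key.set(bytes, 0, len);                               // BytesWritable#set
    return key;
  }
}
{code}

Such a serializer would be registered with something like
kryo.register(HiveKey.class, new HiveKeySerializer()).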



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-4616) Simple reconnection support for jdbc2

2017-10-23 Thread Lingfeng Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lingfeng Su updated HIVE-4616:
--
Description: 
 jdbc:hive2://localhost:1/db2;autoReconnect=true

simple reconnection on TransportException. If hiveserver2 has not been 
shutdown, session could be reused.

  was:
jdbc:hive2://localhost:1/db2;autoReconnect=true

simple reconnection on TransportException. If hiveserver2 has not been 
shutdown, session could be reused.


> Simple reconnection support for jdbc2
> -
>
> Key: HIVE-4616
> URL: https://issues.apache.org/jira/browse/HIVE-4616
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-4616.3.patch.txt, HIVE-4616.4.patch.txt, 
> HIVE-4616.5.patch.txt, HIVE-4616.D10953.1.patch, HIVE-4616.D10953.2.patch
>
>
>  jdbc:hive2://localhost:1/db2;autoReconnect=true
> simple reconnection on TransportException. If hiveserver2 has not been 
> shutdown, session could be reused.
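
If the proposed autoReconnect flag lands, client usage might look like the
sketch below. This is only an illustration of the behavior described in this
ticket (not released driver behavior), and the host, port, user and database
are placeholders:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AutoReconnectExample {
  public static void main(String[] args) throws Exception {
    // autoReconnect=true: on a transport exception the driver re-opens the
    // connection and, if HiveServer2 was not restarted, reuses the session.
    String url = "jdbc:hive2://<host>:<port>/db2;autoReconnect=true";
    try (Connection conn = DriverManager.getConnection(url, "<user>", "");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT 1")) {
      while (rs.next()) {
        System.out.println(rs.getInt(1));
      }
    }
  }
}
{code}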



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17877) HoS: combine equivalent DPP sink works

2017-10-23 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-17877:
--
Status: Patch Available  (was: Open)

> HoS: combine equivalent DPP sink works
> --
>
> Key: HIVE-17877
> URL: https://issues.apache.org/jira/browse/HIVE-17877
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-17877.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17877) HoS: combine equivalent DPP sink works

2017-10-23 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-17877:
--
Component/s: Spark

> HoS: combine equivalent DPP sink works
> --
>
> Key: HIVE-17877
> URL: https://issues.apache.org/jira/browse/HIVE-17877
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-17877.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17877) HoS: combine equivalent DPP sink works

2017-10-23 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216249#comment-16216249
 ] 

Rui Li commented on HIVE-17877:
---

Uploaded a PoC patch. Here are the main changes:
# Before combining, each {{SparkPartitionPruningSinkDesc}} can target only one 
column in one map work. After combining, the remaining 
{{SparkPartitionPruningSinkDesc}} holds the columns and map works from the 
other equivalent {{SparkPartitionPruningSinkDesc}}s.
# Two {{SparkPartitionPruningSinkDesc}}s are equivalent if they have the same 
TableDesc.
# When we combine two equivalent works, if they contain DPP sinks, we'll merge 
the DPP sinks. Suppose we merge DPP1 and DPP2, which have target map works Map1 
and Map2 respectively. First we add the target column/work of DPP2 to DPP1. 
Then we update Map2 so that it knows it'll be pruned by DPP1 instead of DPP2, 
i.e. we update the {{eventSource}} maps and tmp path.
# Currently {{CombineEquivalentWorkResolver}} doesn't handle leaf works. With 
the patch, it handles leaf works if all leaf operators in the leaf works are 
DPP sinks.
# Currently {{SparkPartitionPruningSinkOperator}} writes the target column name 
into the output file. Since it can now have multiple target columns, it first 
writes the number of columns and then writes all the target column names. To 
make the column names unique, the target map work ID is prepended to each 
column name.
# When {{SparkDynamicPartitionPruner}} reads the file, it reads in all the 
column names and finds the {{SourceInfo}} whose name is among them (a sketch of 
this file format follows after this list).
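
A rough sketch of the file format change from points 5 and 6, using plain
DataOutput/DataInput; the class and method names below are illustrative, not
the actual patch:

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DppSinkFileFormat {
  // Writer side (conceptually SparkPartitionPruningSinkOperator): write the
  // column count, then each unique column name. Uniqueness comes from
  // prepending the target map work id, e.g. "Map 1:ds".
  public static void writeTargetColumns(DataOutput out, List<String> uniqueNames)
      throws IOException {
    out.writeInt(uniqueNames.size());
    for (String col : uniqueNames) {
      out.writeUTF(col);
    }
    // ... the pruning values follow in the real file ...
  }

  // Reader side (conceptually SparkDynamicPartitionPruner): read every column
  // name and keep those that match a registered SourceInfo.
  public static List<String> readTargetColumns(DataInput in) throws IOException {
    int n = in.readInt();
    List<String> cols = new ArrayList<>(n);
    for (int i = 0; i < n; i++) {
      cols.add(in.readUTF());
    }
    return cols;
  }
}
{code}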

> HoS: combine equivalent DPP sink works
> --
>
> Key: HIVE-17877
> URL: https://issues.apache.org/jira/browse/HIVE-17877
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-17877.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17232) "No match found" Compactor finds a bucket file thinking it's a directory

2017-10-23 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-17232:
--
Status: Patch Available  (was: Open)

>  "No match found"  Compactor finds a bucket file thinking it's a directory
> --
>
> Key: HIVE-17232
> URL: https://issues.apache.org/jira/browse/HIVE-17232
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Steve Yeom
> Attachments: HIVE-17232.01.patch
>
>
> {noformat}
> 2017-08-02T12:38:11,996  WARN [main] compactor.CompactorMR: Found a 
> non-bucket file that we thought matched the bucket pattern! 
> file:/Users/ekoifman/dev/hiv\
> erwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands2-1501702264311/warehouse/acidtblpart/p=1/delta_013_013_/bucket_1
>  Matcher=java\
> .util.regex.Matcher[pattern=^[0-9]{6} region=0,12 lastmatch=]
> 2017-08-02T12:38:11,996  INFO [main] mapreduce.JobSubmitter: Cleaning up the 
> staging area 
> file:/tmp/hadoop/mapred/staging/ekoifman1723152463/.staging/job_lo\
> cal1723152463_0183
> 2017-08-02T12:38:11,997 ERROR [main] compactor.Worker: Caught exception while 
> trying to compact 
> id:1,dbname:default,tableName:ACIDTBLPART,partName:null,stat\
> e:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestTxnId:0.
>   Marking failed to avoid repeated failures, java.lang.IllegalStateException: 
> \
> No match found
> at java.util.regex.Matcher.group(Matcher.java:536)
> at java.util.regex.Matcher.group(Matcher.java:496)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorInputFormat.addFileToMap(CompactorMR.java:577)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorInputFormat.getSplits(CompactorMR.java:549)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:330)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:322)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:198)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.launchCompactionJob(CompactorMR.java:320)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:275)
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:166)
> at 
> org.apache.hadoop.hive.ql.TestTxnCommands2.runWorker(TestTxnCommands2.java:1138)
> at 
> org.apache.hadoop.hive.ql.TestTxnCommands2.updateDeletePartitioned(TestTxnCommands2.java:894)
> {noformat}
> the stack trace points to 1st runWorker() in updateDeletePartitioned() though 
> the test run was TestTxnCommands2WithSplitUpdateAndVectorization



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17232) "No match found" Compactor finds a bucket file thinking it's a directory

2017-10-23 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-17232:
--
Attachment: HIVE-17232.01.patch

>  "No match found"  Compactor finds a bucket file thinking it's a directory
> --
>
> Key: HIVE-17232
> URL: https://issues.apache.org/jira/browse/HIVE-17232
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Steve Yeom
> Attachments: HIVE-17232.01.patch
>
>
> {noformat}
> 2017-08-02T12:38:11,996  WARN [main] compactor.CompactorMR: Found a 
> non-bucket file that we thought matched the bucket pattern! 
> file:/Users/ekoifman/dev/hiv\
> erwgit/ql/target/tmp/org.apache.hadoop.hive.ql.TestTxnCommands2-1501702264311/warehouse/acidtblpart/p=1/delta_013_013_/bucket_1
>  Matcher=java\
> .util.regex.Matcher[pattern=^[0-9]{6} region=0,12 lastmatch=]
> 2017-08-02T12:38:11,996  INFO [main] mapreduce.JobSubmitter: Cleaning up the 
> staging area 
> file:/tmp/hadoop/mapred/staging/ekoifman1723152463/.staging/job_lo\
> cal1723152463_0183
> 2017-08-02T12:38:11,997 ERROR [main] compactor.Worker: Caught exception while 
> trying to compact 
> id:1,dbname:default,tableName:ACIDTBLPART,partName:null,stat\
> e:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestTxnId:0.
>   Marking failed to avoid repeated failures, java.lang.IllegalStateException: 
> \
> No match found
> at java.util.regex.Matcher.group(Matcher.java:536)
> at java.util.regex.Matcher.group(Matcher.java:496)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorInputFormat.addFileToMap(CompactorMR.java:577)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorInputFormat.getSplits(CompactorMR.java:549)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:330)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:322)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:198)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.launchCompactionJob(CompactorMR.java:320)
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:275)
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:166)
> at 
> org.apache.hadoop.hive.ql.TestTxnCommands2.runWorker(TestTxnCommands2.java:1138)
> at 
> org.apache.hadoop.hive.ql.TestTxnCommands2.updateDeletePartitioned(TestTxnCommands2.java:894)
> {noformat}
> the stack trace points to 1st runWorker() in updateDeletePartitioned() though 
> the test run was TestTxnCommands2WithSplitUpdateAndVectorization



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17877) HoS: combine equivalent DPP sink works

2017-10-23 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-17877:
--
Attachment: HIVE-17877.1.patch

> HoS: combine equivalent DPP sink works
> --
>
> Key: HIVE-17877
> URL: https://issues.apache.org/jira/browse/HIVE-17877
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-17877.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17193) HoS: don't combine map works that are targets of different DPPs

2017-10-23 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216227#comment-16216227
 ] 

Rui Li commented on HIVE-17193:
---

Hi [~stakiar], I meant we can compare DPP sink works the same way we compare 
other works. If two DPP works have the same operator tree, they will have the 
same output. I'll provide a PoC patch and more details in HIVE-17877.

> HoS: don't combine map works that are targets of different DPPs
> ---
>
> Key: HIVE-17193
> URL: https://issues.apache.org/jira/browse/HIVE-17193
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
>
> Suppose {{srcpart}} is partitioned by {{ds}}. The following query can trigger 
> the issue:
> {code}
> explain
> select * from
>   (select srcpart.ds,srcpart.key from srcpart join src on srcpart.ds=src.key) 
> a
> join
>   (select srcpart.ds,srcpart.key from srcpart join src on 
> srcpart.ds=src.value) b
> on a.key=b.key;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17778) Add support for custom counters in trigger expression

2017-10-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-17778:
-
Attachment: HIVE-17778.5.patch

Addressed review comments. Added tests for multi-insert and union all, fixed 
failing tests.

> Add support for custom counters in trigger expression
> -
>
> Key: HIVE-17778
> URL: https://issues.apache.org/jira/browse/HIVE-17778
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17778.1.patch, HIVE-17778.2.patch, 
> HIVE-17778.3.patch, HIVE-17778.4.patch, HIVE-17778.5.patch
>
>
> HIVE-17508 only supports limited counters. This ticket is to extend it to 
> support custom counters (counters that are not supported by execution engine 
> will be dropped).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-12719) As a hive user, I am facing issues using permanent UDAF's.

2017-10-23 Thread Elizabeth Turpin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216079#comment-16216079
 ] 

Elizabeth Turpin commented on HIVE-12719:
-

It has been almost 2 years since this bug was reported, and I am experiencing 
the same issue on both versions 1.2.1 and 2.1.1.  I am curious what other 
watchers on this ticket have done as a workaround?

> As a hive user, I am facing issues using permanent UDAF's.
> --
>
> Key: HIVE-12719
> URL: https://issues.apache.org/jira/browse/HIVE-12719
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Surbhit
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17841) implement applying the resource plan

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17841:

Attachment: HIVE-17841.01.patch

Fixing the tests. I still need to add the new tests.

> implement applying the resource plan
> 
>
> Key: HIVE-17841
> URL: https://issues.apache.org/jira/browse/HIVE-17841
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17841.01.patch, HIVE-17841.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17812) Move remaining classes that HiveMetaStore depends on

2017-10-23 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216009#comment-16216009
 ] 

Vihang Karajgaonkar edited comment on HIVE-17812 at 10/23/17 11:04 PM:
---

Hi [~alangates] I did a first pass on the patch and added some comments in the 
github review. I went over your comment regarding backwards incompatibility 
above after I submitted the review comments so some of them might seem 
redundant.

bq. Is this avoidable? Yes, I could move events and listeners in a bigger patch 
along with HiveMetaStore. However, other changes are going to break listeners 
and events anyway. Namely, the change from HiveConf -> Conf (which is not 
avoidable). Also, if we do split the metastore into a separate TLP in the 
future it will change class names, which will also obviously impact 
implementations of listener. We should look into what it will take to build a 
shim that would support existing listeners. This would need to live in the 
metastore module rather than standalone-metastore, since it will need to 
reference HiveConf.

The change from {{HMSHandler}} to {{IHMSHandler}} in the events makes sense to 
me from a design point of view. But I think it is unclear whether it is worth 
breaking backwards compatibility. I am not sure I understand why the 
HiveConf -> Conf change would have broken compatibility. Do the events use the 
conf object? Based on what I looked at (I didn't look at all the events, only a 
sample of the important ones), I didn't see the conf being used. If we change 
the class names, we might still be able to provide a shim layer to preserve 
compatibility with respect to the public API, as long as we don't modify the 
method signatures. What do you think?



was (Author: vihangk1):
Hi [~alangates] I did a first pass on the patch and added some comments in the 
github review. I went over your comment regarding backwards incompatibility 
above after I submitted the review comments so some of them might seem 
redundant.

bq. Is this avoidable? Yes, I could move events and listeners in a bigger patch 
along with HiveMetaStore. However, other changes are going to break listeners 
and events anyway. Namely, the change from HiveConf -> Conf (which is not 
avoidable). Also, if we do split the metastore into a separate TLP in the 
future it will change class names, which will also obviously impact 
implementations of listener.
We should look into what it will take to build a shim that would support 
existing listeners. This would need to live in the metastore module rather than 
standalone-metastore, since it will need to reference HiveConf.

The change from {{HMSHandler}} to {{IHMSHandler}} in the events makes sense to 
me from a design point of view. But I think it is unclear whether it is worth 
breaking backwards compatibility. I am not sure I understand why the 
HiveConf -> Conf change would have broken compatibility. Do the events use the 
conf object? Based on what I looked at (I didn't look at all the events, only a 
sample of the important ones), I didn't see the conf being used. If we change 
the class names, we might still be able to provide a shim layer to preserve 
compatibility with respect to the public API, as long as we don't modify the 
method signatures. What do you think?


> Move remaining classes that HiveMetaStore depends on 
> -
>
> Key: HIVE-17812
> URL: https://issues.apache.org/jira/browse/HIVE-17812
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>  Labels: pull-request-available
> Attachments: HIVE-17812.2.patch, HIVE-17812.3.patch, HIVE-17812.patch
>
>
> There are several remaining pieces that need moved before we can move 
> HiveMetaStore itself.  These include NotificationListener and 
> implementations, Events, AlterHandler, and a few other miscellaneous pieces.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17812) Move remaining classes that HiveMetaStore depends on

2017-10-23 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216009#comment-16216009
 ] 

Vihang Karajgaonkar commented on HIVE-17812:


Hi [~alangates] I did a first pass on the patch and added some comments in the 
github review. I went over your comment regarding backwards incompatibility 
above after I submitted the review comments so some of them might seem 
redundant.

bq. Is this avoidable? Yes, I could move events and listeners in a bigger patch 
along with HiveMetaStore. However, other changes are going to break listeners 
and events anyway. Namely, the change from HiveConf -> Conf (which is not 
avoidable). Also, if we do split the metastore into a separate TLP in the 
future it will change class names, which will also obviously impact 
implementations of listener.
We should look into what it will take to build a shim that would support 
existing listeners. This would need to live in the metastore module rather than 
standalone-metastore, since it will need to reference HiveConf.

The change from {{HMSHandler}} to {{IHMSHandler}} in the events makes sense to 
me from a design point of view. But I think it is unclear whether it is worth 
breaking backwards compatibility. I am not sure I understand why the 
HiveConf -> Conf change would have broken compatibility. Do the events use the 
conf object? Based on what I looked at (I didn't look at all the events, only a 
sample of the important ones), I didn't see the conf being used. If we change 
the class names, we might still be able to provide a shim layer to preserve 
compatibility with respect to the public API, as long as we don't modify the 
method signatures. What do you think?


> Move remaining classes that HiveMetaStore depends on 
> -
>
> Key: HIVE-17812
> URL: https://issues.apache.org/jira/browse/HIVE-17812
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>  Labels: pull-request-available
> Attachments: HIVE-17812.2.patch, HIVE-17812.3.patch, HIVE-17812.patch
>
>
> There are several remaining pieces that need moved before we can move 
> HiveMetaStore itself.  These include NotificationListener and 
> implementations, Events, AlterHandler, and a few other miscellaneous pieces.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler

2017-10-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16216000#comment-16216000
 ] 

Xuefu Zhang commented on HIVE-17684:


We don't see this issue often, possibly because our settings are conservative. 
Because of the dynamic nature of GC and the possibility of different tasks 
running concurrently in an executor, completely avoiding this problem might be 
very hard.

When we do have memory issues while loading the hash map into memory, it's 
usually because the plan was wrong and the map join isn't the right choice. 
For this, I think it might make sense to keep track of the size of the hash 
map when it's written to disk. If it goes beyond a threshold (such as the value 
of noconditional.size), fail the task right away rather than failing later when 
loading the table into memory.
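
A minimal sketch of that threshold idea, assuming a made-up guard class; the
actual threshold would be derived from something like the noconditional task
size setting:

{code}
// Track how many bytes of the small table have been written while building the
// map-join hash table and fail fast once a configured limit is exceeded,
// instead of failing much later when the table is loaded into memory.
public class SpillSizeGuard {
  private final long maxSmallTableBytes;   // e.g. from the noconditional task size
  private long bytesWritten;

  public SpillSizeGuard(long maxSmallTableBytes) {
    this.maxSmallTableBytes = maxSmallTableBytes;
  }

  public void recordWrite(long numBytes) {
    bytesWritten += numBytes;
    if (bytesWritten > maxSmallTableBytes) {
      throw new IllegalStateException("Small table spilled " + bytesWritten
          + " bytes, over the map join limit of " + maxSmallTableBytes
          + " bytes; this join should not have been converted to a map join");
    }
  }
}
{code}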

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -
>
> Key: HIVE-17684
> URL: https://issues.apache.org/jira/browse/HIVE-17684
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of 
> the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect 
> scenarios where the small table is taking too much space in memory, in which 
> case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler works by using the {{MemoryMXBean}} and uses the following logic 
> to estimate how much memory the {{HashMap}} is consuming: 
> {{MemoryMXBean#getHeapMemoryUsage().getUsed() / 
> MemoryMXBean#getHeapMemoryUsage().getMax()}}
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be 
> inaccurate. The value returned by this method includes all reachable and 
> unreachable memory on the heap, so there may be a bunch of garbage data, and 
> the JVM just hasn't taken the time to reclaim it all. This can lead to 
> intermittent failures of this check even though a simple GC would have 
> reclaimed enough space for the process to continue working.
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run 
> in a dedicated container, so a Hive Task could assume it created most of the 
> data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks 
> running in a single executor, each doing different things.
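
For reference, the heuristic described above boils down to the check sketched
below (a simplified illustration, not the actual MapJoinMemoryExhaustionHandler
code):

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageCheck {
  private static final MemoryMXBean MEMORY_BEAN = ManagementFactory.getMemoryMXBean();

  // Returns true if used/max heap exceeds the configured threshold (e.g. 0.90
  // or 0.55). getUsed() also counts unreachable-but-uncollected garbage, which
  // is why this can fire even though a GC would have freed enough space.
  public static boolean overThreshold(double maxMemoryUsage) {
    MemoryUsage heap = MEMORY_BEAN.getHeapMemoryUsage();
    double fraction = (double) heap.getUsed() / heap.getMax();
    return fraction > maxMemoryUsage;
  }
}
{code}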



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17781) Map MR settings to Tez settings via DeprecatedKeys

2017-10-23 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17781:

   Resolution: Fixed
Fix Version/s: 2.2.1
   2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Committed to {{master}}, {{branch-2}}, and {{branch-2.2}}. Thanks, [~cdrome].

> Map MR settings to Tez settings via DeprecatedKeys
> --
>
> Key: HIVE-17781
> URL: https://issues.apache.org/jira/browse/HIVE-17781
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration, Tez
>Affects Versions: 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Chris Drome
> Fix For: 3.0.0, 2.4.0, 2.2.1
>
> Attachments: HIVE-17781.1.patch, HIVE-17781.2-branch-2.2.patch, 
> HIVE-17781.2-branch-2.patch, HIVE-17781.2.patch
>
>
> Here's one that [~cdrome] and [~thiruvel] worked on:
> We found that certain Hadoop Map/Reduce settings that are set in site config 
> files do not take effect in Hive jobs, because the Tez site configs do not 
> contain the same settings.
> In Yahoo's case, the problem was that, at the time, there was no mapping 
> between {{MRJobConfig.COMPLETED_MAPS_FOR_REDUCE_SLOWSTART}} and 
> {{TEZ_SHUFFLE_VERTEX_MANAGER_MAX_SRC_FRACTION}}. There were situations where 
> significant capacity on production clusters was being used up doing nothing, 
> while waiting for slow tasks to complete. This would have been avoided, were 
> the mappings in place.
> Tez provides a {{DeprecatedKeys}} utility class, to help map MR settings to 
> Tez settings. Hive should use this to ensure that the mappings are in sync.
> (Note to self: YHIVE-883)
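
As an illustration of the kind of mapping that was missing (a hand-rolled
one-off, not the DeprecatedKeys API itself), the slow-start setting could be
carried over like this; the property names are, to the best of my knowledge,
the corresponding MR and Tez keys:

{code}
import org.apache.hadoop.conf.Configuration;

public class MrToTezSlowstart {
  // Copy the MR slow-start fraction onto the Tez ShuffleVertexManager setting
  // when the Tez key is not already set. DeprecatedKeys generalizes this so
  // site-config MR settings carry over to their Tez equivalents automatically.
  public static void copySlowstart(Configuration mrConf, Configuration tezConf) {
    String mrValue = mrConf.get("mapreduce.job.reduce.slowstart.completedmaps");
    if (mrValue != null
        && tezConf.get("tez.shuffle-vertex-manager.max-src-fraction") == null) {
      tezConf.set("tez.shuffle-vertex-manager.max-src-fraction", mrValue);
    }
  }
}
{code}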



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17425) Change MetastoreConf.ConfVars internal members to be private

2017-10-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215989#comment-16215989
 ] 

ASF GitHub Bot commented on HIVE-17425:
---

Github user asfgit closed the pull request at:

https://github.com/apache/hive/pull/264


> Change MetastoreConf.ConfVars internal members to be private
> 
>
> Key: HIVE-17425
> URL: https://issues.apache.org/jira/browse/HIVE-17425
> Project: Hive
>  Issue Type: Task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17425.2.patch, HIVE-17425.patch
>
>
> MetastoreConf's dual use of metastore keys and Hive keys is causing confusion 
> for developers.  We should make the relevant members private and provide 
> getter methods with comments on when it is appropriate to use them.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17425) Change MetastoreConf.ConfVars internal members to be private

2017-10-23 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17425:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Patch committed.  Thanks Vihang for the review.

> Change MetastoreConf.ConfVars internal members to be private
> 
>
> Key: HIVE-17425
> URL: https://issues.apache.org/jira/browse/HIVE-17425
> Project: Hive
>  Issue Type: Task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17425.2.patch, HIVE-17425.patch
>
>
> MetastoreConf's dual use of metastore keys and Hive keys is causing confusion 
> for developers.  We should make the relevant members private and provide 
> getter methods with comments on when it is appropriate to use them.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17425) Change MetastoreConf.ConfVars internal members to be private

2017-10-23 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-17425:
--
Labels: pull-request-available  (was: )

> Change MetastoreConf.ConfVars internal members to be private
> 
>
> Key: HIVE-17425
> URL: https://issues.apache.org/jira/browse/HIVE-17425
> Project: Hive
>  Issue Type: Task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17425.2.patch, HIVE-17425.patch
>
>
> MetastoreConf's dual use of metastore keys and Hive keys is causing confusion 
> for developers.  We should make the relevant members private and provide 
> getter methods with comments on when it is appropriate to use them.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17878) create table like table doesn't preserve ACID properties

2017-10-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17878:
--
Component/s: Transactions

> create table like table doesn't preserve ACID properties
> 
>
> Key: HIVE-17878
> URL: https://issues.apache.org/jira/browse/HIVE-17878
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>
> Discovered while looking at HIVE-17750.
> The new table is not transactional; I think table properties in general may 
> not be propagated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15016:

Status: Patch Available  (was: In Progress)

patch-7: fix some additional test failures.

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, 
> HIVE-15016.7.patch, HIVE-15016.patch, Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15016:

Affects Version/s: 3.0.0

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, 
> HIVE-15016.7.patch, HIVE-15016.patch, Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15016:

Attachment: HIVE-15016.7.patch

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, 
> HIVE-15016.7.patch, HIVE-15016.patch, Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15016:

Status: In Progress  (was: Patch Available)

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, HIVE-15016.patch, 
> Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17765) expose Hive keywords

2017-10-23 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215898#comment-16215898
 ] 

Thejas M Nair commented on HIVE-17765:
--

+1


> expose Hive keywords 
> -
>
> Key: HIVE-17765
> URL: https://issues.apache.org/jira/browse/HIVE-17765
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17765.01.patch, HIVE-17765.02.patch, 
> HIVE-17765.03.patch, HIVE-17765.nogen.patch, HIVE-17765.patch
>
>
> This could be useful e.g. for BI tools (via ODBC/JDBC drivers) to decide on 
> SQL capabilities of Hive



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17765) expose Hive keywords

2017-10-23 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair reassigned HIVE-17765:


Assignee: Sergey Shelukhin  (was: Thejas M Nair)

> expose Hive keywords 
> -
>
> Key: HIVE-17765
> URL: https://issues.apache.org/jira/browse/HIVE-17765
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17765.01.patch, HIVE-17765.02.patch, 
> HIVE-17765.03.patch, HIVE-17765.nogen.patch, HIVE-17765.patch
>
>
> This could be useful e.g. for BI tools (via ODBC/JDBC drivers) to decide on 
> SQL capabilities of Hive



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17878) create table like table doesn't preserve ACID properties

2017-10-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215891#comment-16215891
 ] 

Sergey Shelukhin commented on HIVE-17878:
-

cc [~ekoifman]

> create table like table doesn't preserve ACID properties
> 
>
> Key: HIVE-17878
> URL: https://issues.apache.org/jira/browse/HIVE-17878
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> Discovered while looking at HIVE-17750.
> The new table is not transactional; I think table properties in general may 
> not be propagated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17834) Fix flaky triggers test

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215889#comment-16215889
 ] 

Hive QA commented on HIVE-17834:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893572/HIVE-17834.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 11315 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a]
 (batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=204)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=221)
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
 (batchId=269)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7447/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7447/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7447/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12893572 - PreCommit-HIVE-Build

> Fix flaky triggers test
> ---
>
> Key: HIVE-17834
> URL: https://issues.apache.org/jira/browse/HIVE-17834
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17834.1.patch, HIVE-17834.2.patch
>
>
> https://issues.apache.org/jira/browse/HIVE-12631?focusedCommentId=16209803=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16209803



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17698) FileSinkDesk.getMergeInputDirName() uses stmtId=0

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17698:

Attachment: HIVE-17698.patch

> FileSinkDesk.getMergeInputDirName() uses stmtId=0
> -
>
> Key: HIVE-17698
> URL: https://issues.apache.org/jira/browse/HIVE-17698
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17698.patch, HIVE-17698.patch
>
>
> this is certainly wrong for multi statement txn but may also affect writes 
> from Union All queries if these are made to follow full Acid convention
> _return new Path(root, AcidUtils.deltaSubdir(txnId, txnId, 0));_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-14731) Use Tez cartesian product edge in Hive (unpartitioned case only)

2017-10-23 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang updated HIVE-14731:

Attachment: HIVE-14731.23.patch

Rebase again...

> Use Tez cartesian product edge in Hive (unpartitioned case only)
> 
>
> Key: HIVE-14731
> URL: https://issues.apache.org/jira/browse/HIVE-14731
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: HIVE-14731.1.patch, HIVE-14731.10.patch, 
> HIVE-14731.11.patch, HIVE-14731.12.patch, HIVE-14731.13.patch, 
> HIVE-14731.14.patch, HIVE-14731.15.patch, HIVE-14731.16.patch, 
> HIVE-14731.17.patch, HIVE-14731.18.patch, HIVE-14731.19.patch, 
> HIVE-14731.2.patch, HIVE-14731.20.patch, HIVE-14731.21.patch, 
> HIVE-14731.22.patch, HIVE-14731.23.patch, HIVE-14731.3.patch, 
> HIVE-14731.4.patch, HIVE-14731.5.patch, HIVE-14731.6.patch, 
> HIVE-14731.7.patch, HIVE-14731.8.patch, HIVE-14731.9.patch
>
>
> Given cartesian product edge is available in Tez now (see TEZ-3230), let's 
> integrate it into Hive on Tez. This allows us to have more than one reducer 
> in cross product queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17673) JavaUtils.extractTxnId() etc

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17673:

Attachment: HIVE-17673.patch

> JavaUtils.extractTxnId() etc
> 
>
> Key: HIVE-17673
> URL: https://issues.apache.org/jira/browse/HIVE-17673
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Minor
> Attachments: HIVE-17673.patch, HIVE-17673.patch
>
>
> these should be in AcidUtils



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17748) ReplCopyTask doesn't support multi-file CopyWork

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17748:

Attachment: HIVE-17748.patch

> ReplCopyTask doesn't support multi-file CopyWork
> 
>
> Key: HIVE-17748
> URL: https://issues.apache.org/jira/browse/HIVE-17748
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17748.patch, HIVE-17748.patch
>
>
> has 
> {noformat}
>   Path fromPath = work.getFromPaths()[0];
>   toPath = work.getToPaths()[0];
> {noformat}
> should this throw if from/to paths have > 1 element?
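
To make the question concrete, two obvious options are sketched below
(illustrative only, not the attached patch): reject multi-path work up front,
or iterate over all from/to pairs:

{code}
import org.apache.hadoop.fs.Path;

public class CopyWorkPaths {
  // Option 1: make the single-path assumption explicit instead of silently
  // ignoring extra entries.
  static void validateSinglePath(Path[] fromPaths, Path[] toPaths) {
    if (fromPaths.length != 1 || toPaths.length != 1) {
      throw new IllegalArgumentException(
          "Expected exactly one from/to path, got "
          + fromPaths.length + "/" + toPaths.length);
    }
  }

  // Option 2: handle every pair, assuming the arrays are parallel.
  interface PairCopier { void copy(Path from, Path to) throws Exception; }

  static void copyAll(Path[] fromPaths, Path[] toPaths, PairCopier copier)
      throws Exception {
    if (fromPaths.length != toPaths.length) {
      throw new IllegalArgumentException("Mismatched from/to path counts");
    }
    for (int i = 0; i < fromPaths.length; i++) {
      copier.copy(fromPaths[i], toPaths[i]);
    }
  }
}
{code}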



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17750) add a flag to automatically create most tables as MM

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17750:

Status: Patch Available  (was: Open)

A small patch; most of the size is the test output.
[~hagleitn] can you review?

> add a flag to automatically create most tables as MM 
> -
>
> Key: HIVE-17750
> URL: https://issues.apache.org/jira/browse/HIVE-17750
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17750.patch
>
>
> After merge we are going to do another round of gap identification... similar 
> to HIVE-14990.
> However the approach used there is a huge PITA. It'd be much better to make 
> tables MM by default at create time, not pretend they are MM at check time, 
> from the perspective of spurious error elimination.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17832) Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in metastore

2017-10-23 Thread Janaki Lahorani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215873#comment-16215873
 ] 

Janaki Lahorani commented on HIVE-17832:


Thanks [~sershe].  I would like to understand the reasons behind such a design 
decision.

I think making this parameter modifiable at session level is beneficial for the 
following reasons.
1.  If a user made a mistake in defining the table, then the user doesn't have 
an option to change the definition.  The user needs to drop the table and 
recreate it.  That can be a bit tedious if the table definition is quite big.
2.  This is a DDL operation that is most likely not exposed to the end user.  
The setup of tables is typically handled by an administrator.
3.  HMS embedded with HS2 is also an accepted configuration.  If the parameter 
is not modifiable for standalone HMS, then it shouldn't be modifiable for 
embedded HMS either, which is not the case today.  You see a lot of qfile 
tests that modify this parameter.
4.  Please refer to the following comment from [~ashutoshc] in JIRA HIVE-12320 
(cut and pasted for your reference):
_Ashutosh Chauhan added a comment - 03/Nov/15 10:05
Its better to be strict (default this to true) than leaving potential to 
corrupt data. This is only a safeguard since behavior is governed by a config 
param. An advance user who know what they are doing can still set this to false 
to potentially do dangerous schema changes.
_
I don't think the implication was that the HMS also has to be restarted after 
changing the parameter value.

I would be very grateful if you could elaborate on why you feel this was by 
design, and on the benefits of having such a design.

Thanks,
Janaki.

> Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in 
> metastore
> --
>
> Key: HIVE-17832
> URL: https://issues.apache.org/jira/browse/HIVE-17832
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
> Fix For: 3.0.0
>
> Attachments: HIVE17832.1.patch, HIVE17832.2.patch
>
>
> hive.metastore.disallow.incompatible.col.type.changes, when set to true, will 
> disallow incompatible column type changes through alter table.  But this 
> parameter is not modifiable in HMS.  If HMS is not embedded into HS2, the 
> value cannot be changed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17458) VectorizedOrcAcidRowBatchReader doesn't handle 'original' files

2017-10-23 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215870#comment-16215870
 ] 

Eugene Koifman edited comment on HIVE-17458 at 10/23/17 9:23 PM:
-

On disabling LLAP cache 
{noformat}
[2:07 PM] Sergey Shelukhin: OrcSplit.canUseLlapIo()
[2:07 PM] Sergey Shelukhin: in general, LlapAwareSplit
[2:07 PM] Sergey Shelukhin: is the cleanest way
[2:09 PM] Sergey Shelukhin: LlapRecordReader.create() is another place where 
one could check, on lower level
[2:09 PM] Sergey Shelukhin: and return null
{noformat}

The current impl of canUseLlapIo() looks like it will disable LlapIo for 
"original" acid reads


was (Author: ekoifman):
On disabling LLAP cache 
{noformat}
[2:07 PM] Sergey Shelukhin: OrcSplit.canUseLlapIo()
[2:07 PM] Sergey Shelukhin: in general, LlapAwareSplit
[2:07 PM] Sergey Shelukhin: is the cleanest way
[2:09 PM] Sergey Shelukhin: LlapRecordReader.create() is another place where 
one could check, on lower level
[2:09 PM] Sergey Shelukhin: and return null
{noformat}

> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---
>
> Key: HIVE-17458
> URL: https://issues.apache.org/jira/browse/HIVE-17458
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17458.01.patch, HIVE-17458.02.patch, 
> HIVE-17458.03.patch, HIVE-17458.04.patch, HIVE-17458.05.patch
>
>
> VectorizedOrcAcidRowBatchReader will not be used for original files.  This 
> will likely look like a perf regression when converting a table from non-acid 
> to acid until it runs through a major compaction.
> With Load Data support, if large files are added via Load Data, the read ops 
> will not vectorize until major compaction.  
> There is no reason why this should be the case.  Just like 
> OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other 
> files in the logical tranche/bucket and calculate the offset for the RowBatch 
> of the split.  (Presumably getRecordReader().getRowNumber() works the same in 
> vector mode).
> In this case we don't even need OrcSplit.isOriginal() - the reader can infer 
> it from file path... which in particular simplifies 
> OrcInputFormat.determineSplitStrategies()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17750) add a flag to automatically create most tables as MM

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17750:

Attachment: HIVE-17750.patch

One of the tables in the new test that should probably remain ACID becomes 
insert-only due to HIVE-17878. When that is fixed, the test output will be 
updated.


> add a flag to automatically create most tables as MM 
> -
>
> Key: HIVE-17750
> URL: https://issues.apache.org/jira/browse/HIVE-17750
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17750.patch
>
>
> After merge we are going to do another round of gap identification... similar 
> to HIVE-14990.
> However the approach used there is a huge PITA. It'd be much better to make 
> tables MM by default at create time, not pretend they are MM at check time, 
> from the perspective of spurious error elimination.
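
A rough sketch of what the flag would enable (the property name below is my 
assumption about the patch in flight, not a confirmed config name):
{code}
-- hypothetical flag: make plain CREATE TABLE default to MM (insert-only transactional)
SET hive.create.as.insert.only=true;

CREATE TABLE t_auto_mm (key INT, value STRING) STORED AS ORC;

-- intended to behave as if the table had been created with explicit
--   TBLPROPERTIES ("transactional"="true", "transactional_properties"="insert_only")
{code}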



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17458) VectorizedOrcAcidRowBatchReader doesn't handle 'original' files

2017-10-23 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215870#comment-16215870
 ] 

Eugene Koifman commented on HIVE-17458:
---

On disabling LLAP cache 
{noformat}
[2:07 PM] Sergey Shelukhin: OrcSplit.canUseLlapIo()
[2:07 PM] Sergey Shelukhin: in general, LlapAwareSplit
[2:07 PM] Sergey Shelukhin: is the cleanest way
[2:09 PM] Sergey Shelukhin: LlapRecordReader.create() is another place where 
one could check, on lower level
[2:09 PM] Sergey Shelukhin: and return null
{noformat}

> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---
>
> Key: HIVE-17458
> URL: https://issues.apache.org/jira/browse/HIVE-17458
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17458.01.patch, HIVE-17458.02.patch, 
> HIVE-17458.03.patch, HIVE-17458.04.patch, HIVE-17458.05.patch
>
>
> VectorizedOrcAcidRowBatchReader will not be used for original files.  This 
> will likely look like a perf regression when converting a table from non-acid 
> to acid until it runs through a major compaction.
> With Load Data support, if large files are added via Load Data, the read ops 
> will not vectorize until major compaction.  
> There is no reason why this should be the case.  Just like 
> OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other 
> files in the logical tranche/bucket and calculate the offset for the RowBatch 
> of the split.  (Presumably getRecordReader().getRowNumber() works the same in 
> vector mode).
> In this case we don't even need OrcSplit.isOriginal() - the reader can infer 
> it from file path... which in particular simplifies 
> OrcInputFormat.determineSplitStrategies()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17471) Vectorization: Enable hive.vectorized.row.identifier.enabled to true by default

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17471:

Attachment: HIVE-17471.01.patch

Looks like HiveQA was broken

> Vectorization: Enable hive.vectorized.row.identifier.enabled to true by 
> default
> ---
>
> Key: HIVE-17471
> URL: https://issues.apache.org/jira/browse/HIVE-17471
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17471.01.patch, HIVE-17471.patch
>
>
> We set it disabled in https://issues.apache.org/jira/browse/HIVE-17116 
> "Vectorization: Add infrastructure for vectorization of ROW__ID struct"
> But forgot to turn it on to true by default in Teddy's ACID ROW__ID work... 
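
For context, a minimal query that exercises the vectorized ROW__ID path once the 
flag defaults to true (the table name is illustrative and assumed to be a full 
ACID ORC table):
{code}
SET hive.vectorized.execution.enabled=true;
SET hive.vectorized.row.identifier.enabled=true;  -- the flag this JIRA flips to true by default

-- ROW__ID is the virtual struct column (transaction/write id, bucket id, row id) on ACID tables
SELECT ROW__ID, key FROM acid_orc_table WHERE key > 100;
{code}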



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17425) Change MetastoreConf.ConfVars internal members to be private

2017-10-23 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215854#comment-16215854
 ] 

Vihang Karajgaonkar commented on HIVE-17425:


Thanks [~alangates] for the explanation related to the {{Metrics}} class use-case. 
It seems like we are doing it this way because we want to fall back to two 
different hive config names if it is not defined, unlike most other cases where 
there is only one hive config name to fall back to. It may be okay for now, 
but I think we will need to add support for falling back to multiple hive varnames 
as more and more configurations in Hive get deprecated and new ones are 
introduced to replace them. 

For now, the patch looks good +1

> Change MetastoreConf.ConfVars internal members to be private
> 
>
> Key: HIVE-17425
> URL: https://issues.apache.org/jira/browse/HIVE-17425
> Project: Hive
>  Issue Type: Task
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-17425.2.patch, HIVE-17425.patch
>
>
> MetastoreConf's dual use of metastore keys and Hive keys is causing confusion 
> for developers.  We should make the relevant members private and provide 
> getter methods with comments on when it is appropriate to use them.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17832) Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in metastore

2017-10-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215807#comment-16215807
 ] 

Sergey Shelukhin commented on HIVE-17832:
-

I think it is by design that this cannot be changed. It is a correctness 
setting that should only be settable by an administrator.

> Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in 
> metastore
> --
>
> Key: HIVE-17832
> URL: https://issues.apache.org/jira/browse/HIVE-17832
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
> Fix For: 3.0.0
>
> Attachments: HIVE17832.1.patch, HIVE17832.2.patch
>
>
> hive.metastore.disallow.incompatible.col.type.changes when set to true, will 
> disallow incompatible column type changes through alter table.  But, this 
> parameter is not modifiable in HMS.  If HMS is not embedded into HS2, the 
> value cannot be changed.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17832) Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in metastore

2017-10-23 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17832:
---
Attachment: HIVE17832.2.patch

> Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in 
> metastore
> --
>
> Key: HIVE-17832
> URL: https://issues.apache.org/jira/browse/HIVE-17832
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
> Fix For: 3.0.0
>
> Attachments: HIVE17832.1.patch, HIVE17832.2.patch
>
>
> hive.metastore.disallow.incompatible.col.type.changes when set to true, will 
> disallow incompatible column type changes through alter table.  But, this 
> parameter is not modifiable in HMS.  If HMS is not embedded into HS2, the 
> value cannot be changed.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17832) Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in metastore

2017-10-23 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17832:
---
Description: hive.metastore.disallow.incompatible.col.type.changes when set 
to true, will disallow incompatible column type changes through alter table.  
But, this parameter is not modifiable in HMS.  If HMS is not embedded into HS2, 
the value cannot be changed.(was: 
hive.metastore.disallow.incompatible.col.type.changes when set to true, will 
disallow incompatible column type changes through alter table.  But, this 
parameter is set system wide, and changing it requires restart of HMS.  The 
default value of this parameter is true.  User can set the parameter to false 
and change the column type through alter if this can be modified within a 
session.)

> Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in 
> metastore
> --
>
> Key: HIVE-17832
> URL: https://issues.apache.org/jira/browse/HIVE-17832
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
> Fix For: 3.0.0
>
> Attachments: HIVE17832.1.patch
>
>
> hive.metastore.disallow.incompatible.col.type.changes when set to true, will 
> disallow incompatible column type changes through alter table.  But, this 
> parameter is not modifiable in HMS.  If HMS is not embedded into HS2, the 
> value cannot be changed.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17832) Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in metastore

2017-10-23 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17832:
---
Summary: Allow hive.metastore.disallow.incompatible.col.type.changes to be 
changed in metastore  (was: Allow 
hive.metastore.disallow.incompatible.col.type.changes to be changed within a 
session)

> Allow hive.metastore.disallow.incompatible.col.type.changes to be changed in 
> metastore
> --
>
> Key: HIVE-17832
> URL: https://issues.apache.org/jira/browse/HIVE-17832
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
> Fix For: 3.0.0
>
> Attachments: HIVE17832.1.patch
>
>
> hive.metastore.disallow.incompatible.col.type.changes when set to true, will 
> disallow incompatible column type changes through alter table.  But, this 
> parameter is set system wide, and changing it requires restart of HMS.  The 
> default value of this parameter is true.  User can set the parameter to false 
> and change the column type through alter if this can be modified within a 
> session.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17858) MM - some union cases are broken

2017-10-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17858:

Attachment: HIVE-17858.01.patch

Updated.

> MM - some union cases are broken
> 
>
> Key: HIVE-17858
> URL: https://issues.apache.org/jira/browse/HIVE-17858
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: mm-gap-1
> Attachments: HIVE-17858.01.patch, HIVE-17858.patch
>
>
> mm_all test no longer runs on LLAP; if it's executed in LLAP, one can see 
> that some union cases no longer work.
> Queries on partunion_mm, skew_dp_union_mm produce no results.
> I'm not sure what part of "integration" broke it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17874) Parquet vectorization fails on tables with complex columns when there are no projected columns

2017-10-23 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-17874:
---
Attachment: HIVE-17874.03.patch

The tests are working for me locally. I added {{--SORT_QUERY_RESULTS}} to make 
sure these are not flaky failures. Attaching the patch one more time.

> Parquet vectorization fails on tables with complex columns when there are no 
> projected columns
> --
>
> Key: HIVE-17874
> URL: https://issues.apache.org/jira/browse/HIVE-17874
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.2.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-17874.01-branch-2.patch, HIVE-17874.01.patch, 
> HIVE-17874.02.patch, HIVE-17874.03.patch
>
>
> When a parquet table contains an unsupported type like {{Map}}, {{LIST}} or 
> {{UNION}}, simple queries like {{select count(*) from table}} fail with an 
> {{unsupported type exception}} even though the vectorized reader doesn't really 
> need to read the complex type into batches.
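
A minimal reproduction along the lines of the description might look like this 
(table and column names are illustrative):
{code}
CREATE TABLE parquet_with_map (id INT, props MAP<STRING,STRING>) STORED AS PARQUET;

SET hive.vectorized.execution.enabled=true;

-- no complex column is projected, so the vectorized reader should not need to read props,
-- yet before this fix the query fails with an "unsupported type" exception
SELECT COUNT(*) FROM parquet_with_map;
{code}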



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17839) Cannot generate thrift definitions in standalone-metastore.

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215723#comment-16215723
 ] 

Hive QA commented on HIVE-17839:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893551/HIVE-17839.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 11315 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=158)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=204)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=221)
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
 (batchId=269)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpRetryOnServerIdleTimeout 
(batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7446/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7446/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7446/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12893551 - PreCommit-HIVE-Build

> Cannot generate thrift definitions in standalone-metastore.
> ---
>
> Key: HIVE-17839
> URL: https://issues.apache.org/jira/browse/HIVE-17839
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Jaiprakash
>Assignee: Alan Gates
> Attachments: HIVE-17839.patch
>
>
> mvn clean install -Pthriftif -Dthrift.home=... does not regenerate the thrift 
> sources. This is after the https://issues.apache.org/jira/browse/HIVE-17506 
> fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17458) VectorizedOrcAcidRowBatchReader doesn't handle 'original' files

2017-10-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17458:
--
Attachment: HIVE-17458.05.patch

patch 5 fixes some tests

> VectorizedOrcAcidRowBatchReader doesn't handle 'original' files
> ---
>
> Key: HIVE-17458
> URL: https://issues.apache.org/jira/browse/HIVE-17458
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17458.01.patch, HIVE-17458.02.patch, 
> HIVE-17458.03.patch, HIVE-17458.04.patch, HIVE-17458.05.patch
>
>
> VectorizedOrcAcidRowBatchReader will not be used for original files.  This 
> will likely look like a perf regression when converting a table from non-acid 
> to acid until it runs through a major compaction.
> With Load Data support, if large files are added via Load Data, the read ops 
> will not vectorize until major compaction.  
> There is no reason why this should be the case.  Just like 
> OrcRawRecordMerger, VectorizedOrcAcidRowBatchReader can look at the other 
> files in the logical tranche/bucket and calculate the offset for the RowBatch 
> of the split.  (Presumably getRecordReader().getRowNumber() works the same in 
> vector mode).
> In this case we don't even need OrcSplit.isOriginal() - the reader can infer 
> it from file path... which in particular simplifies 
> OrcInputFormat.determineSplitStrategies()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17856) MM tables - IOW is not ACID compliant

2017-10-23 Thread Steve Yeom (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215702#comment-16215702
 ] 

Steve Yeom commented on HIVE-17856:
---

FYI.
As [~sershe] mentioned, I am now focusing on IOW on MM tables with the test cases 
used for the ACID IOW patch, checking the comments and applying possible fixes. 

> MM tables - IOW is not ACID compliant
> -
>
> Key: HIVE-17856
> URL: https://issues.apache.org/jira/browse/HIVE-17856
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Steve Yeom
>  Labels: mm-gap-1
>
> The following tests were removed from mm_all during "integration"... I should 
> never have allowed such a manner of integration.
> MM logic should have been kept intact until ACID logic could catch up. Alas, 
> here we are.
> {noformat}
> drop table iow0_mm;
> create table iow0_mm(key int) tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow0_mm select key from intermediate;
> insert into table iow0_mm select key + 1 from intermediate;
> select * from iow0_mm order by key;
> insert overwrite table iow0_mm select key + 2 from intermediate;
> select * from iow0_mm order by key;
> drop table iow0_mm;
> drop table iow1_mm; 
> create table iow1_mm(key int) partitioned by (key2 int)  
> tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow1_mm partition (key2)
> select key as k1, key from intermediate union all select key as k1, key from 
> intermediate;
> insert into table iow1_mm partition (key2)
> select key + 1 as k1, key from intermediate union all select key as k1, key 
> from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key from intermediate union all select key + 4 as k1, 
> key from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key + 3 from intermediate union all select key + 2 as 
> k1, key + 2 from intermediate;
> select * from iow1_mm order by key, key2;
> drop table iow1_mm;
> {noformat}
> {noformat}
> drop table simple_mm;
> create table simple_mm(key int) stored as orc tblproperties 
> ("transactional"="true", "transactional_properties"="insert_only");
> insert into table simple_mm select key from intermediate;
> -insert overwrite table simple_mm select key from intermediate;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17873) External LLAP client: allow same handleID to be used more than once

2017-10-23 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-17873:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master

> External LLAP client: allow same handleID to be used more than once
> ---
>
> Key: HIVE-17873
> URL: https://issues.apache.org/jira/browse/HIVE-17873
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 3.0.0
>
> Attachments: HIVE-17873.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17834) Fix flaky triggers test

2017-10-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-17834:
-
Attachment: HIVE-17834.2.patch

Missed the configs in the previous test.

> Fix flaky triggers test
> ---
>
> Key: HIVE-17834
> URL: https://issues.apache.org/jira/browse/HIVE-17834
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-17834.1.patch, HIVE-17834.2.patch
>
>
> https://issues.apache.org/jira/browse/HIVE-12631?focusedCommentId=16209803=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16209803



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17617) Rollup of an empty resultset should contain the grouping of the empty grouping set

2017-10-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215629#comment-16215629
 ] 

Ashutosh Chauhan commented on HIVE-17617:
-

*Desc objects are only supposed to contain configuration for runtime operators, not 
runtime logic; that is supposed to go into the runtime operator. emitSummaryRow() 
thus belongs in the GroupBy operator class. Can you move it there?
The rest looks good. +1

> Rollup of an empty resultset should contain the grouping of the empty 
> grouping set
> --
>
> Key: HIVE-17617
> URL: https://issues.apache.org/jira/browse/HIVE-17617
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Attachments: HIVE-17617.01.patch, HIVE-17617.03.patch, 
> HIVE-17617.04.patch, HIVE-17617.05.patch, HIVE-17617.06.patch, 
> HIVE-17617.07.patch, HIVE-17617.07.patch
>
>
> running
> {code}
> drop table if exists tx1;
> create table tx1 (a integer,b integer,c integer);
> select  sum(c),
> grouping(b)
> fromtx1
> group by rollup (b);
> {code}
> returns 0 rows; however 
> according to the standard:
> The empty grouping set is regarded as the shortest such initial sublist. 
> For example, “ROLLUP ( (A, B), (C, D) )”
> is equivalent to “GROUPING SETS ( (A, B, C, D), (A, B), () )”.
> so I think the totals row (the grouping for {{()}}) should be present - psql 
> returns it.
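
For reference, a sketch of the expected result under the standard's expansion 
(my reading, assuming ROLLUP (b) expands to GROUPING SETS ((b), ())): even with 
tx1 empty, the () grouping set should contribute one totals row.
{code}
-- expected output for the query in the description on an empty tx1:
--   sum(c) = NULL, grouping(b) = 1   (the totals row for the empty grouping set)
SELECT sum(c), grouping(b)
FROM tx1
GROUP BY ROLLUP (b);
{code}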



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215599#comment-16215599
 ] 

Ashutosh Chauhan commented on HIVE-15016:
-

Yeah, no reason for having different dependency versions in different modules. Let's 
have the same (new) version in all of them.

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, HIVE-15016.patch, 
> Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back in Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17867) Exception in windowing functions with TIMESTAMP WITH LOCAL TIME ZONE type

2017-10-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215596#comment-16215596
 ] 

Ashutosh Chauhan commented on HIVE-17867:
-

+1
Seems like one of the tests needs a golden file update too.

> Exception in windowing functions with TIMESTAMP WITH LOCAL TIME ZONE type
> -
>
> Key: HIVE-17867
> URL: https://issues.apache.org/jira/browse/HIVE-17867
> Project: Hive
>  Issue Type: Bug
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-17867.patch
>
>
> The following query where column {{ts}} is of type {{TIMESTAMP WITH LOCAL 
> TIME ZONE}}:
> {code}
> select ts, i, sum(f) over (partition by i order by ts)
> from over10k_2
> limit 100;
> {code}
> fails with the following stacktrace:
> {code}
> org.apache.hadoop.hive.ql.parse.SemanticException: Failed to breakup 
> Windowing invocations into Groups. At least 1 group must only depend on input 
> columns. Also check for circular dependencies.
> Underlying error: Primitive type TIMESTAMPLOCALTZ not supported in Value 
> Boundary expression
> at 
> org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:97)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:13508)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:9912)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:9871)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10784)
> ...
> {code}
> The list of supported types for boundary expressions in PTFTranslator needs 
> to be updated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17868) Make queries in spark_local_queries.q have deterministic output

2017-10-23 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215591#comment-16215591
 ] 

Andrew Sherman commented on HIVE-17868:
---

Test failures are unrelated to my change. In addition, I checked that the test I 
changed did run. 
Thanks [~xuefuz] for reviewing. 
[~stakiar] can you take a look and push if you approve? Thanks

> Make queries in spark_local_queries.q have deterministic output
> ---
>
> Key: HIVE-17868
> URL: https://issues.apache.org/jira/browse/HIVE-17868
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17868.1.patch
>
>
> Add 'order by' to queries so that output is always the same
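
A sketch of the kind of change involved (the query below is illustrative, not 
taken from spark_local_queries.q):
{code}
-- before: row order can vary between runs/engines, so the .q.out diff is flaky
SELECT key, value FROM src WHERE key < 10;

-- after: output order is deterministic
SELECT key, value FROM src WHERE key < 10 ORDER BY key, value;
{code}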



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17193) HoS: don't combine map works that are targets of different DPPs

2017-10-23 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215590#comment-16215590
 ] 

Sahil Takiar commented on HIVE-17193:
-

{quote} The drawback is we'll lose some optimization opportunities - actually 
I'm not sure whether it's possible that two target map works share the same DPP 
in current implementation. {quote} As far as I know, this isn't possible. A DPP 
subtree can only be used to prune a single target {{MapWork}} - although that 
is something we want to change in HIVE-17178

{quote} Two DPP works can be considered equivalent as long as they output same 
records. {quote} I'm not sure how this would work; you don't know what a DPP 
work will output until the query actually starts to run.

I think a good fix here would be to just implement HIVE-17178 (I'm not sure, 
but this may be the same as HIVE-17877). If two DPP sinks are completely 
equivalent (same source table, filters, operations, etc.), but they only differ 
by the value of {{Target Work}}, then I think we should be able to combine them 
into a single DPP tree, with multiple target works. The value of the target 
work shouldn't change the value of the data that is written by a DPP subtree, 
so if the subtrees are equivalent, we can combine them. The main work will be 
to change the DPP code so that there can be multiple Target Works. 

> HoS: don't combine map works that are targets of different DPPs
> ---
>
> Key: HIVE-17193
> URL: https://issues.apache.org/jira/browse/HIVE-17193
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
>
> Suppose {{srcpart}} is partitioned by {{ds}}. The following query can trigger 
> the issue:
> {code}
> explain
> select * from
>   (select srcpart.ds,srcpart.key from srcpart join src on srcpart.ds=src.key) 
> a
> join
>   (select srcpart.ds,srcpart.key from srcpart join src on 
> srcpart.ds=src.value) b
> on a.key=b.key;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17473) implement workload management pools

2017-10-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215586#comment-16215586
 ] 

Sergey Shelukhin commented on HIVE-17473:
-

This can be covered as part of the parent JIRA

> implement workload management pools
> ---
>
> Key: HIVE-17473
> URL: https://issues.apache.org/jira/browse/HIVE-17473
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 3.0.0
>
> Attachments: HIVE-17473.01.patch, HIVE-17473.03.patch, 
> HIVE-17473.04.patch, HIVE-17473.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215582#comment-16215582
 ] 

Aihua Xu commented on HIVE-15016:
-

[~ashutoshc] Yeah, llap-server is referring to an older version. I'm working on 
it and will upload a new patch. It seems there is no reason to have a different 
netty version for llap-server alone compared to the other modules, right?

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, HIVE-15016.patch, 
> Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back in Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17868) Make queries in spark_local_queries.q have deterministic output

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215570#comment-16215570
 ] 

Hive QA commented on HIVE-17868:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893545/HIVE-17868.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 11315 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[date_2] (batchId=79)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=158)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=204)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=221)
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
 (batchId=269)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=228)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=228)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7445/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7445/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7445/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12893545 - PreCommit-HIVE-Build

> Make queries in spark_local_queries.q have deterministic output
> ---
>
> Key: HIVE-17868
> URL: https://issues.apache.org/jira/browse/HIVE-17868
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17868.1.patch
>
>
> Add 'order by' to queries so that output is always the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1

2017-10-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215565#comment-16215565
 ] 

Ashutosh Chauhan commented on HIVE-15016:
-

[~aihuaxu] Seems like there is a netty version mismatch in the llap-server 
module.

> Run tests with Hadoop 3.0.0-beta1
> -
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Aihua Xu
> Attachments: HIVE-15016.2.patch, HIVE-15016.3.patch, 
> HIVE-15016.4.patch, HIVE-15016.5.patch, HIVE-15016.6.patch, HIVE-15016.patch, 
> Hadoop3Upstream.patch
>
>
> Hadoop 3.0.0-alpha1 was released back in Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17743) Add InterfaceAudience and InterfaceStability annotations for Thrift generated APIs

2017-10-23 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215559#comment-16215559
 ] 

Aihua Xu commented on HIVE-17743:
-

The change looks good to me. +1.

> Add InterfaceAudience and InterfaceStability annotations for Thrift generated 
> APIs
> --
>
> Key: HIVE-17743
> URL: https://issues.apache.org/jira/browse/HIVE-17743
> Project: Hive
>  Issue Type: Sub-task
>  Components: Thrift API
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-17743.1.patch, HIVE-17743.2.patch
>
>
> The Thrift generated files don't have {{InterfaceAudience}} or 
> {{InterfaceStability}} annotations on them, mainly because all the files are 
> auto-generated.
> We should add some code that auto-tags all the Java Thrift generated files 
> with these annotations. This way even when they are re-generated, they still 
> contain the annotations.
> We should be able to do this using the 
> {{com.google.code.maven-replacer-plugin}} similar to what we do in 
> {{standalone-metastore/pom.xml}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17038) invalid result when CAST-ing to DATE

2017-10-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215551#comment-16215551
 ] 

Ashutosh Chauhan commented on HIVE-17038:
-

Michael, can you please add your patch here so that we get a CI run on it? See 
https://cwiki.apache.org/confluence/display/Hive/Hive+PreCommit+Patch+Testing

> invalid result when CAST-ing to DATE
> 
>
> Key: HIVE-17038
> URL: https://issues.apache.org/jira/browse/HIVE-17038
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Hive
>Affects Versions: 1.2.1
>Reporter: Jim Hopper
>
> when casting incorrect date literals to DATE data type hive returns wrong 
> values instead of NULL.
> {code}
> SELECT CAST('2017-02-31' AS DATE);
> SELECT CAST('2017-04-31' AS DATE);
> {code}
> Some examples below where it really can produce weird results:
> {code}
> select *
>   from (
> select cast('2017-07-01' as date) as dt
> ) as t
> where t.dt = '2017-06-31';
> select *
>   from (
> select cast('2017-07-01' as date) as dt
> ) as t
> where t.dt = cast('2017-06-31' as date);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-14112) Join a HBase mapped big table shouldn't convert to MapJoin

2017-10-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215537#comment-16215537
 ] 

Ashutosh Chauhan commented on HIVE-14112:
-

This is just a workaround. The proper way to fix this is to give the storage 
handler a way to report the estimated size of a table to the compiler, which can 
then use that info for planning. In fact, that interface already exists 
({{org.apache.hadoop.hive.ql.metadata.InputEstimator}}) and is used in 
{{SimpleFetchOptimizer}}. Let's make use of this interface for join algorithm 
selection in the compiler as well.

> Join a HBase mapped big table shouldn't convert to MapJoin
> --
>
> Key: HIVE-14112
> URL: https://issues.apache.org/jira/browse/HIVE-14112
> Project: Hive
>  Issue Type: Bug
>  Components: StorageHandler
>Affects Versions: 1.2.0, 1.1.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Minor
> Attachments: HIVE-14112.1.patch
>
>
> Two tables, {{hbasetable_risk_control_defense_idx_uid}} is HBase mapped table:
> {noformat}
> [root@dev01 ~]# hadoop fs -du -s -h 
> /hbase/data/tandem/hbase-table-risk-control-defense-idx-uid
> 3.0 G  9.0 G  /hbase/data/tandem/hbase-table-risk-control-defense-idx-uid
> [root@dev01 ~]# hadoop fs -du -s -h /user/hive/warehouse/openapi_invoke_base
> 6.6 G  19.7 G  /user/hive/warehouse/openapi_invoke_base
> {noformat}
> The smaller table is 3.0 G, which is greater than both 
> _hive.mapjoin.smalltable.filesize_ and 
> _hive.auto.convert.join.noconditionaltask.size_. When joining these tables, Hive 
> still auto-converts the join to a map join:
> {noformat}
> hive> select count(*) from hbasetable_risk_control_defense_idx_uid t1 join 
> openapi_invoke_base t2 on (t1.key=t2.merchantid);
> Query ID = root_2016062809_9f9d3f25-857b-412c-8a75-3d9228bd5ee5
> Total jobs = 1
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; 
> support was removed in 8.0
> Execution log at: 
> /tmp/root/root_2016062809_9f9d3f25-857b-412c-8a75-3d9228bd5ee5.log
> 2016-06-28 09:22:10   Starting to launch local task to process map join;  
> maximum memory = 1908932608
> {noformat} 
> The root cause is that Hive uses 
> {{/user/hive/warehouse/hbasetable_risk_control_defense_idx_uid}} as the table 
> location, but that directory is empty, so Hive auto-converts the join to a map 
> join. My opinion is to set the right location when mapping an HBase table.
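
For reference, a sketch of the settings involved (the values below are 
illustrative): even with thresholds far below the 3.0 G HBase table, the 
conversion still happens because the size check sees the empty warehouse 
directory.
{code}
SET hive.auto.convert.join=true;
SET hive.mapjoin.smalltable.filesize=25000000;              -- 25 MB
SET hive.auto.convert.join.noconditionaltask.size=10000000; -- 10 MB

-- both thresholds are far below 3.0 G, yet the join is still converted to a map join
SELECT count(*)
FROM hbasetable_risk_control_defense_idx_uid t1
JOIN openapi_invoke_base t2 ON (t1.key = t2.merchantid);
{code}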



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17869) unix_timestamp(string date, string pattern) UDF does not verify date is valid

2017-10-23 Thread Carter Shanklin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215532#comment-16215532
 ] 

Carter Shanklin commented on HIVE-17869:


Seems to be the same as HIVE-17038 which has a PR

[~ashutoshc] can anyone have a look at the PR over there and move that one 
along?

> unix_timestamp(string date, string pattern) UDF does not verify date is valid
> -
>
> Key: HIVE-17869
> URL: https://issues.apache.org/jira/browse/HIVE-17869
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 1.2.1
>Reporter: Brian Goerlitz
>
> unix_timestamp(string date, string pattern) returns a value in situations 
> which would be expected to return 0 (fail):
> {noformat}
> hive> -- Date does not exist
> > select unix_timestamp('2017/02/29', '/MM/dd');
> OK
> 1488326400
> Time taken: 0.317 seconds, Fetched: 1 row(s)
> hive> -- Date does not exist
> > select from_unixtime(unix_timestamp('2017/02/29', '/MM/dd'));
> OK
> 2017-03-01 00:00:00
> Time taken: 0.28 seconds, Fetched: 1 row(s)
> hive> -- Date in wrong format
> > select unix_timestamp('2017/02/29', 'MM/dd/');
> OK
> -55950393600
> Time taken: 0.303 seconds, Fetched: 1 row(s)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17839) Cannot generate thrift definitions in standalone-metastore.

2017-10-23 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17839:
--
Attachment: (was: HIVE-17839.patch)

> Cannot generate thrift definitions in standalone-metastore.
> ---
>
> Key: HIVE-17839
> URL: https://issues.apache.org/jira/browse/HIVE-17839
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Jaiprakash
>Assignee: Alan Gates
> Attachments: HIVE-17839.patch
>
>
> mvn clean install -Pthriftif -Dthrift.home=... does not regenerate the thrift 
> sources. This is after the https://issues.apache.org/jira/browse/HIVE-17506 
> fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17839) Cannot generate thrift definitions in standalone-metastore.

2017-10-23 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17839:
--
Attachment: HIVE-17839.patch

> Cannot generate thrift definitions in standalone-metastore.
> ---
>
> Key: HIVE-17839
> URL: https://issues.apache.org/jira/browse/HIVE-17839
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Jaiprakash
>Assignee: Alan Gates
> Attachments: HIVE-17839.patch
>
>
> mvn clean install -Pthriftif -Dthrift.home=... does not regenerate the thrift 
> sources. This is after the https://issues.apache.org/jira/browse/HIVE-17506 
> fix.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17635) Add unit tests to CompactionTxnHandler and use PreparedStatements for queries

2017-10-23 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215468#comment-16215468
 ] 

Andrew Sherman commented on HIVE-17635:
---

Test failures are unconnected to this change, so this is ready to push. Can 
you take a look, [~stakiar]?

> Add unit tests to CompactionTxnHandler and use PreparedStatements for queries
> -
>
> Key: HIVE-17635
> URL: https://issues.apache.org/jira/browse/HIVE-17635
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17635.1.patch, HIVE-17635.2.patch, 
> HIVE-17635.3.patch, HIVE-17635.4.patch, HIVE-17635.6.patch
>
>
> It is better for jdbc code that runs against the HMS database to use 
> PreparedStatements. Convert CompactionTxnHandler queries to use 
> PreparedStatement and add tests to TestCompactionTxnHandler to test these 
> queries, and improve code coverage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17845) insert fails if target table columns are not lowercase

2017-10-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17845:

   Resolution: Fixed
Fix Version/s: (was: 2.3.0)
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Naresh!

> insert fails if target table columns are not lowercase
> --
>
> Key: HIVE-17845
> URL: https://issues.apache.org/jira/browse/HIVE-17845
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-17845.patch
>
>
> eg., 
> INSERT INTO TABLE EMP(ID,NAME) select * FROM SRC;
> FAILED: SemanticException 1:27 '[ID,NAME]' in insert schema specification are 
> not found among regular columns of default.EMP nor dynamic partition 
> columns.. Error encountered near token 'NAME'
> Whereas below insert is successful:
> INSERT INTO TABLE EMP(id,name) select * FROM SRC;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17870) Update NoDeleteRollingFileAppender to use Log4j2 api

2017-10-23 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215444#comment-16215444
 ] 

Aihua Xu commented on HIVE-17870:
-

[~prasadns14] Such an appender can be configured in the log4j properties so that 
log4j can use it to perform logging. But I didn't investigate whether any existing 
appender has the same functionality as NoDeleteRollingFileAppender. Do you want 
to take a look?

> Update NoDeleteRollingFileAppender to use Log4j2 api
> 
>
> Key: HIVE-17870
> URL: https://issues.apache.org/jira/browse/HIVE-17870
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>
> NoDeleteRollingFileAppender is still using the log4j v1 API. Since we have 
> already moved to log4j2 in Hive, we should update it to use the log4j v2 API as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17868) Make queries in spark_local_queries.q have deterministic output

2017-10-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215443#comment-16215443
 ] 

Xuefu Zhang edited comment on HIVE-17868 at 10/23/17 4:53 PM:
--

Makes sense, [~asherman]. Thanks for the explanation.

+1


was (Author: xuefuz):
Makes sense, [~asherman]. Thanks for the explanation.

> Make queries in spark_local_queries.q have deterministic output
> ---
>
> Key: HIVE-17868
> URL: https://issues.apache.org/jira/browse/HIVE-17868
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17868.1.patch
>
>
> Add 'order by' to queries so that output is always the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17868) Make queries in spark_local_queries.q have deterministic output

2017-10-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215443#comment-16215443
 ] 

Xuefu Zhang commented on HIVE-17868:


Makes sense, [~asherman]. Thanks for the explanation.

> Make queries in spark_local_queries.q have deterministic output
> ---
>
> Key: HIVE-17868
> URL: https://issues.apache.org/jira/browse/HIVE-17868
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17868.1.patch
>
>
> Add 'order by' to queries so that output is always the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17868) Make queries in spark_local_queries.q have deterministic output

2017-10-23 Thread Andrew Sherman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-17868:
--
Status: Patch Available  (was: Open)

This is a small change

> Make queries in spark_local_queries.q have deterministic output
> ---
>
> Key: HIVE-17868
> URL: https://issues.apache.org/jira/browse/HIVE-17868
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17868.1.patch
>
>
> Add 'order by' to queries so that output is always the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17868) Make queries in spark_local_queries.q have deterministic output

2017-10-23 Thread Andrew Sherman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-17868:
--
Attachment: HIVE-17868.1.patch

> Make queries in spark_local_queries.q have deterministic output
> ---
>
> Key: HIVE-17868
> URL: https://issues.apache.org/jira/browse/HIVE-17868
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-17868.1.patch
>
>
> Add 'order by' to queries so that output is always the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17847) Exclude net.hydromatic:aggdesigner-algorithm jar as compile and runtime dependency

2017-10-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17847:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Exclude net.hydromatic:aggdesigner-algorithm jar as compile and runtime 
> dependency
> --
>
> Key: HIVE-17847
> URL: https://issues.apache.org/jira/browse/HIVE-17847
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 3.0.0
>
> Attachments: HIVE-17847.patch
>
>
> Hive doesn't use this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17259) Hive JDBC does not recognize UNIONTYPE columns

2017-10-23 Thread Pierre Villard (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215333#comment-16215333
 ] 

Pierre Villard commented on HIVE-17259:
---

Not sure I understand why I get:
{noformat}
error: a/jdbc/src/java/org/apache/hive/jdbc/JdbcColumn.java: No such file or 
directory
{noformat}

> Hive JDBC does not recognize UNIONTYPE columns
> --
>
> Key: HIVE-17259
> URL: https://issues.apache.org/jira/browse/HIVE-17259
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, JDBC
> Environment: Hive 1.2.1000.2.6.1.0-129
> Beeline version 1.2.1000.2.6.1.0-129 by Apache Hive
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: HIVE-17259.patch
>
>
> Hive JDBC does not recognize UNIONTYPE columns.
> I've an external table backed by an avro schema containing a union type field.
> {noformat}
> "name" : "value",
> "type" : [ "int", "string", "null" ]
> {noformat}
> When describing the table I've:
> {noformat}
> describe test_table;
> +---+---+--+--+
> | col_name  |   data_type 
>   | comment  |
> +---+---+--+--+
> | description   | string  
>   |  |
> | name  | string  
>   |  |
> | value | uniontype  
>   |  |
> +---+---+--+--+
> {noformat}
> When doing a select query over the data using the Hive CLI, it works:
> {noformat}
> hive> select value from test_table;
> OK
> {0:10}
> {0:10}
> {0:9}
> {0:9}
> ...
> {noformat}
> But when using beeline, it fails:
> {noformat}
> 0: jdbc:hive2://> select * from test_table;
> Error: Unrecognized column type: UNIONTYPE (state=,code=0)
> {noformat}
> By applying the patch provided with this JIRA, the command succeeds and 
> returns the expected output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16603) Enforce foreign keys to refer to primary keys or unique keys

2017-10-23 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215323#comment-16215323
 ] 

Jesus Camacho Rodriguez commented on HIVE-16603:


[~leftylev], yes, probably this should be documented within the constraints 
documentation.

> Enforce foreign keys to refer to primary keys or unique keys
> 
>
> Key: HIVE-16603
> URL: https://issues.apache.org/jira/browse/HIVE-16603
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: 3.0.0
>
> Attachments: HIVE-16603.patch
>
>
> Follow-up on HIVE-16575.
> Currently we do not enforce foreign keys to refer to primary keys or unique 
> keys (as opposed to PostgreSQL and others); we should do that.
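
For illustration, the constraint DDL this enforcement applies to looks roughly 
like the following (a sketch; the table and constraint names are made up, and 
RELY/validation options are omitted):
{code}
CREATE TABLE dim_customer (
  id INT,
  name STRING,
  PRIMARY KEY (id) DISABLE NOVALIDATE);

-- with the enforcement in place, the foreign key below must reference a primary key
-- or unique key column of dim_customer
CREATE TABLE fact_sales (
  customer_id INT,
  amount DECIMAL(10,2),
  CONSTRAINT fk_sales_customer FOREIGN KEY (customer_id)
    REFERENCES dim_customer(id) DISABLE NOVALIDATE);
{code}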



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17259) Hive JDBC does not recognize UNIONTYPE columns

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215312#comment-16215312
 ] 

Hive QA commented on HIVE-17259:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893490/HIVE-17259.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7444/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7444/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7444/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-10-23 15:36:21.638
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-7444/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-10-23 15:36:21.641
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 623ecaa HIVE-17368 : DBTokenStore fails to connect in Kerberos 
enabled remote HMS environment (Vihang Karajgaonkar, reviewed by Aihua Xu and 
Janaki Lahorani)
+ git clean -f -d
Removing standalone-metastore/src/gen/org/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 623ecaa HIVE-17368 : DBTokenStore fails to connect in Kerberos 
enabled remote HMS environment (Vihang Karajgaonkar, reviewed by Aihua Xu and 
Janaki Lahorani)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-10-23 15:36:22.854
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/jdbc/src/java/org/apache/hive/jdbc/JdbcColumn.java: No such file or 
directory
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12893490 - PreCommit-HIVE-Build

> Hive JDBC does not recognize UNIONTYPE columns
> --
>
> Key: HIVE-17259
> URL: https://issues.apache.org/jira/browse/HIVE-17259
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, JDBC
> Environment: Hive 1.2.1000.2.6.1.0-129
> Beeline version 1.2.1000.2.6.1.0-129 by Apache Hive
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: HIVE-17259.patch
>
>
> Hive JDBC does not recognize UNIONTYPE columns.
> I've an external table backed by an avro schema containing a union type field.
> {noformat}
> "name" : "value",
> "type" : [ "int", "string", "null" ]
> {noformat}
> When describing the table I've:
> {noformat}
> describe test_table;
> +---+---+--+--+
> | col_name  |   data_type 
>   | comment  |
> +---+---+--+--+
> | description   | string  
>   |  |
> | name  | string  
>   |  |
> | value | uniontype  
>   |  |
> +---+---+--+--+
> {noformat}
> When doing a select query over the data using the Hive CLI, it works:
> {noformat}
> hive> select value from test_table;
> OK
> {0:10}
> {0:10}
> {0:9}
> {0:9}
> ...
> {noformat}
> But when using beeline, it fails:
> {noformat}
> 0: jdbc:hive2://> select * from test_table;
> 

[jira] [Updated] (HIVE-17259) Hive JDBC does not recognize UNIONTYPE columns

2017-10-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-17259:

Status: Patch Available  (was: Open)

> Hive JDBC does not recognize UNIONTYPE columns
> --
>
> Key: HIVE-17259
> URL: https://issues.apache.org/jira/browse/HIVE-17259
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, JDBC
> Environment: Hive 1.2.1000.2.6.1.0-129
> Beeline version 1.2.1000.2.6.1.0-129 by Apache Hive
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: HIVE-17259.patch
>
>
> Hive JDBC does not recognize UNIONTYPE columns.
> I've an external table backed by an avro schema containing a union type field.
> {noformat}
> "name" : "value",
> "type" : [ "int", "string", "null" ]
> {noformat}
> When describing the table I've:
> {noformat}
> describe test_table;
> +---+---+--+--+
> | col_name  |   data_type 
>   | comment  |
> +---+---+--+--+
> | description   | string  
>   |  |
> | name  | string  
>   |  |
> | value | uniontype  
>   |  |
> +---+---+--+--+
> {noformat}
> When doing a select query over the data using the Hive CLI, it works:
> {noformat}
> hive> select value from test_table;
> OK
> {0:10}
> {0:10}
> {0:9}
> {0:9}
> ...
> {noformat}
> But when using beeline, it fails:
> {noformat}
> 0: jdbc:hive2://> select * from test_table;
> Error: Unrecognized column type: UNIONTYPE (state=,code=0)
> {noformat}
> By applying the patch provided with this JIRA, the command succeeds and 
> returns the expected output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16601) Display Session Id and Query Name / Id in Spark UI

2017-10-23 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215254#comment-16215254
 ] 

Xuefu Zhang commented on HIVE-16601:


The new screenshot looks great! +1 on that. I didn't review the code, but I 
think it's fine since other folks have reviewed it.

> Display Session Id and Query Name / Id in Spark UI
> --
>
> Key: HIVE-16601
> URL: https://issues.apache.org/jira/browse/HIVE-16601
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16601.1.patch, HIVE-16601.2.patch, 
> HIVE-16601.3.patch, HIVE-16601.4.patch, HIVE-16601.5.patch, 
> HIVE-16601.6.patch, HIVE-16601.7.patch, HIVE-16601.8.patch, Spark UI 
> Applications List.png, Spark UI Jobs List.png
>
>
> We should display the session id for each HoS Application Launched, and the 
> Query Name / Id and Dag Id for each Spark job launched. Hive-on-MR does 
> something similar via the {{mapred.job.name}} parameter. The query name is 
> displayed in the Job Name of the MR app.
> The changes here should also allow us to leverage the config 
> {{hive.query.name}} for HoS.
> This should help with debuggability of HoS applications. The Hive-on-Tez UI 
> does something similar.
> Related issues for Hive-on-Tez: HIVE-12357, HIVE-12523
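
A hypothetical usage sketch once this lands (the config name is taken from the description above; the query itself is illustrative):

{code}
-- Name the query so the Spark UI job description is recognizable.
SET hive.query.name=nightly_report;
SELECT count(*) FROM src;
{code}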



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17877) HoS: combine equivalent DPP sink works

2017-10-23 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li reassigned HIVE-17877:
-


> HoS: combine equivalent DPP sink works
> --
>
> Key: HIVE-17877
> URL: https://issues.apache.org/jira/browse/HIVE-17877
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rui Li
>Assignee: Rui Li
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-3776) support PIVOT in hive

2017-10-23 Thread Raimondas Berniunas (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16215066#comment-16215066
 ] 

Raimondas Berniunas commented on HIVE-3776:
---

Is there any date when the _pivot_ / _transpose_ functionality will be officially 
available in Hive?

Thank you in advance!

r.

> support PIVOT in hive
> -
>
> Key: HIVE-3776
> URL: https://issues.apache.org/jira/browse/HIVE-3776
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
>
> It is a fairly well understood feature in databases.
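
Until native PIVOT support lands, the usual workaround is conditional aggregation; a short illustrative sketch with hypothetical table and columns:

{code}
-- Pivot the month values into columns by hand.
SELECT city,
       sum(CASE WHEN month = 'Jan' THEN sales ELSE 0 END) AS jan_sales,
       sum(CASE WHEN month = 'Feb' THEN sales ELSE 0 END) AS feb_sales
FROM monthly_sales
GROUP BY city;
{code}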



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-17866) MetaStoreEventListener - AlterTableEvent should include the partitions affected by the ALTER TABLE statement

2017-10-23 Thread Daniel del Castillo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel del Castillo reassigned HIVE-17866:
--

Assignee: Daniel del Castillo

> MetaStoreEventListener - AlterTableEvent should include the partitions 
> affected by the ALTER TABLE statement
> 
>
> Key: HIVE-17866
> URL: https://issues.apache.org/jira/browse/HIVE-17866
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.3.0
>Reporter: Daniel del Castillo
>Assignee: Daniel del Castillo
>Priority: Minor
>
> Extend {{AlterTableEvent}} to include the set of partitions that have been 
> modified by {{HiveAlterHandle}} during the execution of {{alterTable()}}, 
> e.g. if the statement includes the {{CASCADE}} option.
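
A minimal sketch of a listener that would benefit from the proposed change (the {{getModifiedPartitions()}} accessor is hypothetical; everything else uses the existing MetaStoreEventListener API):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.MetaStoreEventListener;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.events.AlterTableEvent;

public class AuditingListener extends MetaStoreEventListener {

  public AuditingListener(Configuration conf) {
    super(conf);
  }

  @Override
  public void onAlterTable(AlterTableEvent event) throws MetaException {
    // Available today: only the old and new table objects.
    String table = event.getNewTable().getTableName();
    // Proposed by this JIRA (hypothetical accessor):
    // List<Partition> touched = event.getModifiedPartitions();
    System.out.println("ALTER TABLE observed for " + table);
  }
}
{code}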



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17259) Hive JDBC does not recognize UNIONTYPE columns

2017-10-23 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated HIVE-17259:
--
Attachment: HIVE-17259.patch

> Hive JDBC does not recognize UNIONTYPE columns
> --
>
> Key: HIVE-17259
> URL: https://issues.apache.org/jira/browse/HIVE-17259
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, JDBC
> Environment: Hive 1.2.1000.2.6.1.0-129
> Beeline version 1.2.1000.2.6.1.0-129 by Apache Hive
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: HIVE-17259.patch
>
>
> Hive JDBC does not recognize UNIONTYPE columns.
> I've an external table backed by an avro schema containing a union type field.
> {noformat}
> "name" : "value",
> "type" : [ "int", "string", "null" ]
> {noformat}
> When describing the table I've:
> {noformat}
> describe test_table;
> +---+---+--+--+
> | col_name  |   data_type 
>   | comment  |
> +---+---+--+--+
> | description   | string  
>   |  |
> | name  | string  
>   |  |
> | value | uniontype  
>   |  |
> +---+---+--+--+
> {noformat}
> When doing a select query over the data using the Hive CLI, it works:
> {noformat}
> hive> select value from test_table;
> OK
> {0:10}
> {0:10}
> {0:9}
> {0:9}
> ...
> {noformat}
> But when using beeline, it fails:
> {noformat}
> 0: jdbc:hive2://> select * from test_table;
> Error: Unrecognized column type: UNIONTYPE (state=,code=0)
> {noformat}
> By applying the patch provided with this JIRA, the command succeeds and 
> returns the expected output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17259) Hive JDBC does not recognize UNIONTYPE columns

2017-10-23 Thread Pierre Villard (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated HIVE-17259:
--
Attachment: (was: HIVE-17259.patch)

> Hive JDBC does not recognize UNIONTYPE columns
> --
>
> Key: HIVE-17259
> URL: https://issues.apache.org/jira/browse/HIVE-17259
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, JDBC
> Environment: Hive 1.2.1000.2.6.1.0-129
> Beeline version 1.2.1000.2.6.1.0-129 by Apache Hive
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> Hive JDBC does not recognize UNIONTYPE columns.
> I've an external table backed by an avro schema containing a union type field.
> {noformat}
> "name" : "value",
> "type" : [ "int", "string", "null" ]
> {noformat}
> When describing the table I've:
> {noformat}
> describe test_table;
> +---+---+--+--+
> | col_name  |   data_type 
>   | comment  |
> +---+---+--+--+
> | description   | string  
>   |  |
> | name  | string  
>   |  |
> | value | uniontype  
>   |  |
> +---+---+--+--+
> {noformat}
> When doing a select query over the data using the Hive CLI, it works:
> {noformat}
> hive> select value from test_table;
> OK
> {0:10}
> {0:10}
> {0:9}
> {0:9}
> ...
> {noformat}
> But when using beeline, it fails:
> {noformat}
> 0: jdbc:hive2://> select * from test_table;
> Error: Unrecognized column type: UNIONTYPE (state=,code=0)
> {noformat}
> By applying the patch provided with this JIRA, the command succeeds and 
> returns the expected output.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16198) Vectorize GenericUDFIndex for ARRAY

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214752#comment-16214752
 ] 

Hive QA commented on HIVE-16198:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12859876/HIVE-16198.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7442/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7442/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7442/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-10-23 07:09:13.308
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-7442/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-10-23 07:09:13.310
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 623ecaa HIVE-17368 : DBTokenStore fails to connect in Kerberos 
enabled remote HMS environment (Vihang Karajgaonkar, reviewed by Aihua Xu and 
Janaki Lahorani)
+ git clean -f -d
Removing itests/src/test/resources/testconfiguration.properties.orig
Removing ql/src/test/queries/clientpositive/vectorization_parquet_projection.q
Removing 
ql/src/test/results/clientpositive/spark/vectorization_parquet_projection.q.out
Removing 
ql/src/test/results/clientpositive/vectorization_parquet_projection.q.out
Removing standalone-metastore/src/gen/org/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 623ecaa HIVE-17368 : DBTokenStore fails to connect in Kerberos 
enabled remote HMS environment (Vihang Karajgaonkar, reviewed by Aihua Xu and 
Janaki Lahorani)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-10-23 07:09:14.430
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java:63
error: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java:
 patch does not apply
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java:319
error: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java: 
patch does not apply
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java:39
error: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java: patch 
does not apply
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_complex_join.q.out:236
error: ql/src/test/results/clientpositive/llap/vector_complex_join.q.out: patch 
does not apply
error: patch failed: 
ql/src/test/results/clientpositive/vector_complex_join.q.out:214
error: ql/src/test/results/clientpositive/vector_complex_join.q.out: patch does 
not apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12859876 - PreCommit-HIVE-Build

> Vectorize GenericUDFIndex for ARRAY
> ---
>
> Key: HIVE-16198
> URL: https://issues.apache.org/jira/browse/HIVE-16198
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF, Vectorization
>Reporter: Teddy Choi
>Assignee: Teddy Choi
> Attachments: HIVE-16198.1.patch, HIVE-16198.2.patch, 
> HIVE-16198.3.patch
>
>
> Vectorize GenericUDFIndex for array data type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17193) HoS: don't combine map works that are targets of different DPPs

2017-10-23 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214737#comment-16214737
 ] 

Rui Li edited comment on HIVE-17193 at 10/23/17 6:57 AM:
-

Hi [~kellyzly],
bq. how to compare the result of dpp work in the period of physical plan?
We can compare the DPP works the same way as we compare other works, i.e. if 
two works have the same operator tree and each operator has an equivalent 
counterpart, then the two works can be combined.


was (Author: lirui):
Hi [~kellyzly],
bq. how to compare the result of dpp work in the period of physical plan?
We can compare the DPP works the same way as we compare other works, i.e. if 
two works have the same operator tree and all the each operator has an 
equivalent counterpart, then the two works can be combined.
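
A minimal sketch of the equivalence check described above, assuming hypothetical class and method names (not actual Hive APIs):

{code}
// Two works are combinable when their operator trees have the same shape and
// each pair of corresponding operators is equivalent.
import java.util.List;

class OpNode {
  String operatorId;        // e.g. "SEL", "GBY", "SparkPartitionPruningSink"
  String signature;         // serialized descriptor: keys, expressions, target column, ...
  List<OpNode> children;

  OpNode(String operatorId, String signature, List<OpNode> children) {
    this.operatorId = operatorId;
    this.signature = signature;
    this.children = children;
  }
}

class WorkComparator {
  /** Returns true if the two operator trees are pairwise equivalent. */
  static boolean logicallyEquals(OpNode a, OpNode b) {
    if (!a.operatorId.equals(b.operatorId) || !a.signature.equals(b.signature)) {
      return false;
    }
    if (a.children.size() != b.children.size()) {
      return false;
    }
    for (int i = 0; i < a.children.size(); i++) {
      if (!logicallyEquals(a.children.get(i), b.children.get(i))) {
        return false;
      }
    }
    return true;
  }
}
{code}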

> HoS: don't combine map works that are targets of different DPPs
> ---
>
> Key: HIVE-17193
> URL: https://issues.apache.org/jira/browse/HIVE-17193
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
>
> Suppose {{srcpart}} is partitioned by {{ds}}. The following query can trigger 
> the issue:
> {code}
> explain
> select * from
>   (select srcpart.ds,srcpart.key from srcpart join src on srcpart.ds=src.key) 
> a
> join
>   (select srcpart.ds,srcpart.key from srcpart join src on 
> srcpart.ds=src.value) b
> on a.key=b.key;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17193) HoS: don't combine map works that are targets of different DPPs

2017-10-23 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214737#comment-16214737
 ] 

Rui Li commented on HIVE-17193:
---

Hi [~kellyzly],
bq. how to compare the result of dpp work in the period of physical plan?
We can compare the DPP works the same way as we compare other works, i.e. if 
two works have the same operator tree and each operator has an equivalent 
counterpart, then the two works can be combined.

> HoS: don't combine map works that are targets of different DPPs
> ---
>
> Key: HIVE-17193
> URL: https://issues.apache.org/jira/browse/HIVE-17193
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
>
> Suppose {{srcpart}} is partitioned by {{ds}}. The following query can trigger 
> the issue:
> {code}
> explain
> select * from
>   (select srcpart.ds,srcpart.key from srcpart join src on srcpart.ds=src.key) 
> a
> join
>   (select srcpart.ds,srcpart.key from srcpart join src on 
> srcpart.ds=src.value) b
> on a.key=b.key;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17830) dbnotification fails to work with rdbms other than postgres

2017-10-23 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214731#comment-16214731
 ] 

Daniel Dai commented on HIVE-17830:
---

Ok so "SET @@session.sql_mode=ANSI_QUOTES" will be required, right? Last time I 
read the code, it seems prepareTxn will be invoked every time we created a new 
ObjectStore. However, I must miss somewhere as otherwise, we will never hit the 
sql syntax error.
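
For reference, a minimal MySQL illustration of the behavior under discussion (a sketch, not taken from the patch):

{code}
-- Default sql_mode: double quotes delimit string literals, so this returns the
-- literal text "NEXT_EVENT_ID" rather than the column value.
SELECT "NEXT_EVENT_ID" FROM NOTIFICATION_SEQUENCE;

-- With ANSI_QUOTES set on the session, double quotes delimit identifiers and
-- the statement behaves like the PostgreSQL/standard form used by the patch.
SET @@session.sql_mode = 'ANSI_QUOTES';
SELECT "NEXT_EVENT_ID" FROM "NOTIFICATION_SEQUENCE" FOR UPDATE;
{code}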

> dbnotification fails to work with rdbms other than postgres
> ---
>
> Key: HIVE-17830
> URL: https://issues.apache.org/jira/browse/HIVE-17830
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: anishek
>Assignee: Daniel Dai
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17830.0.patch, HIVE-17830.1.patch
>
>
> as part of HIVE-17721 we had changed the direct sql to acquire the lock for 
> postgres as
> {code}
> select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" for update;
> {code}
> however this breaks other databases and we have to use different sql 
> statements for different databases 
> for postgres use
> {code}
> select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" for update;
> {code}
> for SQLServer 
> {code}
> select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" with (updlock);
> {code}
> for other databases 
> {code}
> select NEXT_EVENT_ID from NOTIFICATION_SEQUENCE for update;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17193) HoS: don't combine map works that are targets of different DPPs

2017-10-23 Thread liyunzhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214728#comment-16214728
 ] 

liyunzhang commented on HIVE-17193:
---

[~lirui]:
{quote}
1. The simplest solution is, if the DPP works' IDs (tracked by the target map 
works) are different, then we consider the target map works are different and 
don't combine them.
2. Another solution is we walk the parent tasks first, and combine equivalent 
DPP works. Two DPP works can be considered equivalent as long as they output 
same records.
{quote}
For #1, it can be implemented on top of the current code. For #2, how do we 
compare the results of DPP works during physical planning? Do you mean directly 
comparing the estimated data size (Statistics: Num rows: 58 Data size: 5812)?

{code}
 Map 9 
Map Operator Tree:
TableScan
  alias: src
  Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
  Filter Operator
predicate: value is not null (type: boolean)
Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
Select Operator
  expressions: value (type: string)
  outputColumnNames: _col0
  Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
  Select Operator
expressions: _col0 (type: string)
outputColumnNames: _col0
Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
Group By Operator
  keys: _col0 (type: string)
  mode: hash
  outputColumnNames: _col0
  Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
  Spark Partition Pruning Sink Operator
Target column: ds (string)
partition key expr: ds
Statistics: Num rows: 58 Data size: 5812 Basic 
stats: COMPLETE Column stats: NONE
target work: Map 5
{code}


{code}
  Map 8 
Map Operator Tree:
TableScan
  alias: src
  Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
  Filter Operator
predicate: key is not null (type: boolean)
Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
Select Operator
  expressions: key (type: string)
  outputColumnNames: _col0
  Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
  Select Operator
expressions: _col0 (type: string)
outputColumnNames: _col0
Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
Group By Operator
  keys: _col0 (type: string)
  mode: hash
  outputColumnNames: _col0
  Statistics: Num rows: 58 Data size: 5812 Basic stats: 
COMPLETE Column stats: NONE
  Spark Partition Pruning Sink Operator
Target column: ds (string)
partition key expr: ds
Statistics: Num rows: 58 Data size: 5812 Basic 
stats: COMPLETE Column stats: NONE
target work: Map 1

{code}


> HoS: don't combine map works that are targets of different DPPs
> ---
>
> Key: HIVE-17193
> URL: https://issues.apache.org/jira/browse/HIVE-17193
> Project: Hive
>  Issue Type: Bug
>Reporter: Rui Li
>Assignee: Rui Li
>
> Suppose {{srcpart}} is partitioned by {{ds}}. The following query can trigger 
> the issue:
> {code}
> explain
> select * from
>   (select srcpart.ds,srcpart.key from srcpart join src on srcpart.ds=src.key) 
> a
> join
>   (select srcpart.ds,srcpart.key from srcpart join src on 
> srcpart.ds=src.value) b
> on a.key=b.key;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17830) dbnotification fails to work with rdbms other than postgres

2017-10-23 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214723#comment-16214723
 ] 

anishek commented on HIVE-17830:


Thanks [~daijy] for this patch. A quick question.

Looking at the code which sets ANSI_QUOTES, it is in *MetastoreDirectSql.java*:
{code}
public void prepareTxn() throws MetaException {
  if (dbType != DatabaseProduct.MYSQL) return;
  try {
    assert pm.currentTransaction().isActive(); // must be inside tx together with queries
    executeNoResult("SET @@session.sql_mode=ANSI_QUOTES");
  } catch (SQLException sqlEx) {
    throw new MetaException("Error setting ansi quotes: " + sqlEx.getMessage());
  }
}
{code}


Here we are setting the sql_mode only for the *session* and not *globally*. I 
just ran the statement below on a MySQL server without modifying the sql_mode:

{code}
mysql> select "NEXT_EVENT_ID" from NOTIFICATION_SEQUENCE;
+---+
| NEXT_EVENT_ID |
+---+
| NEXT_EVENT_ID |
+---+
1 row in set (0.00 sec)
{code}

Since we use connection pooling, depending on which connection is used to 
execute the above statement we will get different results, won't we? Maybe I am 
missing something here.

cc [~thejas]
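
For what it is worth, a quick way to inspect both scopes on whichever pooled connection you land on (illustrative only, not part of the patch):

{code}
-- Shows whether ANSI_QUOTES is active for this session versus the server-wide
-- default.
SELECT @@session.sql_mode AS session_mode, @@global.sql_mode AS global_mode;
{code}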


> dbnotification fails to work with rdbms other than postgres
> ---
>
> Key: HIVE-17830
> URL: https://issues.apache.org/jira/browse/HIVE-17830
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: anishek
>Assignee: Daniel Dai
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17830.0.patch, HIVE-17830.1.patch
>
>
> as part of HIVE-17721 we had changed the direct sql to acquire the lock for 
> postgres as
> {code}
> select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" for update;
> {code}
> however this breaks other databases and we have to use different sql 
> statements for different databases 
> for postgres use
> {code}
> select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" for update;
> {code}
> for SQLServer 
> {code}
> select "NEXT_EVENT_ID" from "NOTIFICATION_SEQUENCE" with (updlock);
> {code}
> for other databases 
> {code}
> select NEXT_EVENT_ID from NOTIFICATION_SEQUENCE for update;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16198) Vectorize GenericUDFIndex for ARRAY

2017-10-23 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214720#comment-16214720
 ] 

Colin Ma commented on HIVE-16198:
-

Hi [~teddy.choi], [~mmccline], because of the problem described in HIVE-17133, I 
rebased the patch on Hive 2.3.0 with some minor changes. To evaluate the 
performance improvement, the following table is used:
{code}
hive> describe temperature_orc_5g;
   t_date  string   
 
   citystring   
 
   temperaturesarray
hive> show tblproperties temperature_orc_5g;
   COLUMN_STATS_ACCURATE   {"BASIC_STATS":"true"}
   numFiles   20
   numRows 1
   rawDataSize   241
   totalSize   1793960785
{code}
Tested with Hive on Spark, using the SQL {color:#59afe1}select city, 
avg(temperatures\[0\]), avg(temperatures\[5\]) from temperature_orc_5g where 
temperatures\[2\] > 20 group by city limit 10{color}, the results are:
|| ||Disable vectorization||Enable vectorization||
|execution time|{color:#d04437}34s{color}|{color:#14892c}26s{color}|
Specifically, the detailed time cost for the same task, which processes 
15154763 rows, is shown in the following table:
|| ||Disable vectorization||Enable vectorization||
|Time with RecordReader|{color:#d04437}8.9s{color}|{color:#14892c}5.9s{color}|
|Time with filter operator|{color:#d04437}3.1s{color}|{color:#14892c}0.1s{color}|
|Time with groupBy and followup operators|10.8s|11.5s|
I think the improvement is obvious. Do you know why the patch hasn't been 
committed yet? Thanks.
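
A hypothetical repro of the benchmark described above (the table and query are taken from the comment; the set command shows how vectorization was presumably toggled between the two runs):

{code}
-- Flip to false for the non-vectorized baseline run.
SET hive.vectorized.execution.enabled=true;

SELECT city, avg(temperatures[0]), avg(temperatures[5])
FROM temperature_orc_5g
WHERE temperatures[2] > 20
GROUP BY city
LIMIT 10;
{code}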

> Vectorize GenericUDFIndex for ARRAY
> ---
>
> Key: HIVE-16198
> URL: https://issues.apache.org/jira/browse/HIVE-16198
> Project: Hive
>  Issue Type: Sub-task
>  Components: UDF, Vectorization
>Reporter: Teddy Choi
>Assignee: Teddy Choi
> Attachments: HIVE-16198.1.patch, HIVE-16198.2.patch, 
> HIVE-16198.3.patch
>
>
> Vectorize GenericUDFIndex for array data type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17874) Parquet vectorization fails on tables with complex columns when there are no projected columns

2017-10-23 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214719#comment-16214719
 ] 

Ferdinand Xu commented on HIVE-17874:
-

Hi [~vihangk1], can you help check the failed test cases? 

> Parquet vectorization fails on tables with complex columns when there are no 
> projected columns
> --
>
> Key: HIVE-17874
> URL: https://issues.apache.org/jira/browse/HIVE-17874
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.2.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-17874.01-branch-2.patch, HIVE-17874.01.patch, 
> HIVE-17874.02.patch
>
>
> When a parquet table contains an unsupported type like {{Map}}, {{LIST}} or 
> {{UNION}}, simple queries like {{select count(*) from table}} fail with an 
> {{unsupported type exception}} even though the vectorized reader doesn't 
> really need to read the complex type into batches.
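
A hypothetical repro of the failure described above (table name and columns are illustrative):

{code}
-- Parquet table with a complex column that is never projected by the query.
CREATE TABLE parquet_complex (id INT, tags MAP<STRING, STRING>)
STORED AS PARQUET;

SET hive.vectorized.execution.enabled=true;

-- Before the fix this fails with an "unsupported type" error even though no
-- complex column needs to be read into the batch.
SELECT count(*) FROM parquet_complex;
{code}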



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17874) Parquet vectorization fails on tables with complex columns when there are no projected columns

2017-10-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214690#comment-16214690
 ] 

Hive QA commented on HIVE-17874:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12893482/HIVE-17874.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 11317 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_parquet_projection]
 (batchId=42)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=158)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_parquet_projection]
 (batchId=121)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] 
(batchId=243)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=204)
org.apache.hadoop.hive.ql.io.parquet.TestVectorizedColumnReader.testNullSplitForParquetReader
 (batchId=262)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=221)
org.apache.hadoop.hive.ql.parse.authorization.plugin.sqlstd.TestOperation2Privilege.checkHiveOperationTypeMatch
 (batchId=269)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7441/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7441/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7441/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12893482 - PreCommit-HIVE-Build

> Parquet vectorization fails on tables with complex columns when there are no 
> projected columns
> --
>
> Key: HIVE-17874
> URL: https://issues.apache.org/jira/browse/HIVE-17874
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.2.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-17874.01-branch-2.patch, HIVE-17874.01.patch, 
> HIVE-17874.02.patch
>
>
> When a parquet table contains an unsupported type like {{Map}}, {{LIST}} or 
> {{UNION}}, simple queries like {{select count(*) from table}} fail with an 
> {{unsupported type exception}} even though the vectorized reader doesn't 
> really need to read the complex type into batches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)