[jira] [Updated] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark

2016-04-10 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-13293:
--
Attachment: HIVE-13293.1.patch

I have tried both splitting the task and caching the RDD, and chose the latter 
here because it's simpler and also works for queries that have only one 
ShuffleMapStage. Regarding performance, the two solutions were roughly 
equivalent in my local tests. I used DISK_ONLY as the storage level, which I 
think is good enough for performance while avoiding extra memory overhead.
Lifeng, could you help test the patch with your data set? Thanks.
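A minimal, self-contained sketch of the idea, assuming the plain Spark Java API 
(the class name, variables, and sample data below are illustrative only; this is 
not the attached patch). The shuffle output is persisted with DISK_ONLY so that 
the sampling pass used to compute the range-partition boundaries and the actual 
sort both read the cached data instead of recomputing the upstream stage:

{code}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;
import scala.Tuple2;

import java.util.Arrays;

public class CachedSortSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("cached-sort-sketch").setMaster("local[2]");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      // Stand-in for the output of the query's ShuffleMapStage.
      JavaPairRDD<Integer, String> shuffled = sc.parallelize(Arrays.asList(3, 1, 2))
          .mapToPair(i -> new Tuple2<>(i, "row-" + i));

      // Persist to local disk so the sampling pass and the real sort both read
      // the cached data instead of recomputing the upstream stages; DISK_ONLY
      // avoids adding executor memory pressure.
      shuffled.persist(StorageLevel.DISK_ONLY());

      // sortByKey() first samples the RDD to compute partition boundaries and
      // then runs the actual sort, i.e. two jobs over the same cached data.
      System.out.println(shuffled.sortByKey().collect());
    }
  }
}
{code}

Without the persist() call, the sampling job and the sort job would each recompute 
the work upstream of the shuffle, which is the kind of overhead described in the issue.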

> Query occurs performance degradation after enabling parallel order by for 
> Hive on Spark
> ---
>
> Key: HIVE-13293
> URL: https://issues.apache.org/jira/browse/HIVE-13293
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Lifeng Wang
>Assignee: Rui Li
> Attachments: HIVE-13293.1.patch
>
>
> I used TPCx-BB to do some performance testing on the Hive on Spark engine and 
> found that query 10 has a performance degradation when parallel order by is enabled.
> It seems that the sampling costs much time before the real query runs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13293) Query occurs performance degradation after enabling parallel order by for Hive on Spark

2016-04-10 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HIVE-13293:
--
Status: Patch Available  (was: Open)

> Query occurs performance degradation after enabling parallel order by for 
> Hive on Spark
> ---
>
> Key: HIVE-13293
> URL: https://issues.apache.org/jira/browse/HIVE-13293
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0
>Reporter: Lifeng Wang
>Assignee: Rui Li
> Attachments: HIVE-13293.1.patch
>
>
> I used TPCx-BB to do some performance testing on the Hive on Spark engine and 
> found that query 10 has a performance degradation when parallel order by is enabled.
> It seems that the sampling costs much time before the real query runs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13432) ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7

2016-04-10 Thread Qiuzhuang Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234480#comment-15234480
 ] 

Qiuzhuang Lian commented on HIVE-13432:
---

For more info, I also see a regression in the ACID feature for the DELETE 
statement:

16/04/11 12:38:43 WARN shims.HadoopShimsSecure: Can't fetch tasklog: 
TaskLogServlet is not supported in MR2 mode.

Task with the most failures(4): 
-
Task ID:
  task_1458819387386_23814_r_03

URL:
  
http://nn209003:8088/taskdetails.jsp?jobid=job_1458819387386_23814=task_1458819387386_23814_r_03
-
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row (tag=0) 
{"key":{"reducesinkkey0":{"transactionid":0,"bucketid":43,"rowid":0}},"value":null}
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:257)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row (tag=0) 
{"key":{"reducesinkkey0":{"transactionid":0,"bucketid":43,"rowid":0}},"value":null}
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245)
... 7 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:759)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:97)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)
... 7 more


16/04/11 12:38:43 ERROR exec.Task: 
Task with the most failures(4): 
-
Task ID:
  task_1458819387386_23814_r_03

URL:
  
http://nn209003:8088/taskdetails.jsp?jobid=job_1458819387386_23814=task_1458819387386_23814_r_03
-
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
processing row (tag=0) 
{"key":{"reducesinkkey0":{"transactionid":0,"bucketid":43,"rowid":0}},"value":null}
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:257)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row (tag=0) 
{"key":{"reducesinkkey0":{"transactionid":0,"bucketid":43,"rowid":0}},"value":null}
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245)
... 7 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:759)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:97)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)

> ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7
> ---
>
> Key: HIVE-13432
> URL: https://issues.apache.org/jira/browse/HIVE-13432
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
> Environment: Hadoop 2.6.2+Hive 1.2.1
>Reporter: Qiuzhuang Lian
>Assignee: Matt McCline
>
> After initiating Hive ACID ORC table compaction, the CompactorMR job throws an 
> exception:
> Error: java.lang.ArrayIndexOutOfBoundsException: 7
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1968)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> 

[jira] [Commented] (HIVE-13432) ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7

2016-04-10 Thread Qiuzhuang Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234463#comment-15234463
 ] 

Qiuzhuang Lian commented on HIVE-13432:
---

Hi Matt,

I built Hive from the git branch 2.0 and can see the patch for 12984 in it; I tried compaction 
again on the same ACID table but still see the same error:

, kind: TIMESTAMP
, kind: INT
] innerStructSubtype -1
  at 
org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2092)
  at 
org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2518)
  at 
org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2098)
  at 
org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2518)
  at 
org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:210)
  at 
org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:662)
  at 
org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:212)
  at 
org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:512)
  at 
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1870)
  at 
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:575)
  at 
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:554)
  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/04/11 11:54:21 INFO mapreduce.Job:  map 100% reduce 0%
16/04/11 11:54:21 INFO mapreduce.Job: Job job_1458819387386_23797 failed with 
state FAILED due to: Task failed task_1458819387386_23797_m_01
Job failed as tasks failed. failedMaps:1 failedReduces:0

16/04/11 11:54:21 INFO mapreduce.Job: Counters: 14
  Job Counters 
Failed map tasks=9
Killed map tasks=9
Launched map tasks=18
Other local map tasks=8
Data-local map tasks=4
Rack-local map tasks=6
Total time spent by all maps in occupied slots (ms)=405068
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=202534
Total vcore-seconds taken by all map tasks=202534
Total megabyte-seconds taken by all map tasks=414789632
  Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
16/04/11 11:54:21 ERROR compactor.Worker: Caught exception while trying to 
compact 
id:80,dbname:lqz,tableName:my_acid_orc_table,partName:null,state:,type:MAJOR,runAs:null,tooManyAborts:false,highestTxnId:0.
  Marking clean to avoid repeated failures, java.io.IOException: Job failed!
  at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
  at 
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.launchCompactionJob(CompactorMR.java:247)
  at 
org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:213)
  at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:164)

16/04/11 11:54:34 INFO txn.AcidCompactionHistoryService: History reaper reaper 
ran for 0seconds.  isAliveCounter=-2147483642

Please let me know if you need more info for this issue. 

Regards,
Qiuzhuang

> ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7
> ---
>
> Key: HIVE-13432
> URL: https://issues.apache.org/jira/browse/HIVE-13432
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
> Environment: Hadoop 2.6.2+Hive 1.2.1
>Reporter: Qiuzhuang Lian
>Assignee: Matt McCline
>
> After initiating Hive ACID ORC table compaction, the CompactorMR job throws an 
> exception:
> Error: java.lang.ArrayIndexOutOfBoundsException: 7
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1968)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1969)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:69)
>   at 
> 

[jira] [Commented] (HIVE-13410) PerfLog metrics scopes not closed if there are exceptions on HS2

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234455#comment-15234455
 ] 

Hive QA commented on HIVE-13410:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797767/HIVE-13410.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 9942 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.daemon.impl.TestLlapDaemonProtocolServerImpl.test
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testSimpleTable
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccessWithReadOnly
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.api.repl.commands.TestCommands.org.apache.hive.hcatalog.api.repl.commands.TestCommands
org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7542/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7542/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7542/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 31 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797767 - PreCommit-HIVE-TRUNK-Build

> PerfLog metrics scopes not closed if there are exceptions on HS2
> 
>
> Key: HIVE-13410
> URL: https://issues.apache.org/jira/browse/HIVE-13410
> Project: Hive
>  Issue Type: Bug
>  Components: Diagnosability
>Affects Versions: 2.0.0
>Reporter: Szehon Ho
>Assignee: Szehon Ho
> Attachments: HIVE-13410.2.patch, HIVE-13410.3.patch, 
> HIVE-13410.4.patch, HIVE-13410.4.patch, HIVE-13410.patch
>
>
> If there are errors, the HS2 PerfLog API scopes are not closed. Then there 
> are 
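A minimal, self-contained sketch of the usual fix pattern for this kind of leak; 
the names below are illustrative only and are not Hive's actual PerfLogger API. 
The point is simply that the scope is ended in a finally block, so an exception on 
the HS2 code path cannot leave it open:

{code}
import java.util.HashMap;
import java.util.Map;

public class ScopeSafetySketch {
  // Stand-in for the per-query metric/PerfLog scopes that must be closed.
  static final Map<String, Long> openScopes = new HashMap<>();

  static void begin(String name) { openScopes.put(name, System.nanoTime()); }

  static void end(String name) {
    Long start = openScopes.remove(name);
    if (start != null) {
      System.out.println(name + " closed after " + (System.nanoTime() - start) + " ns");
    }
  }

  static void runOperation() { throw new RuntimeException("simulated HS2 failure"); }

  public static void main(String[] args) {
    begin("compile");
    try {
      runOperation();                       // may throw
    } catch (RuntimeException e) {
      System.out.println("caught: " + e.getMessage());
    } finally {
      end("compile");                       // scope is closed even on failure
    }
    System.out.println("open scopes left: " + openScopes.size());  // prints 0
  }
}
{code}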

[jira] [Commented] (HIVE-13432) ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7

2016-04-10 Thread Qiuzhuang Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234424#comment-15234424
 ] 

Qiuzhuang Lian commented on HIVE-13432:
---

I will build Hive 2.1/2.0.1 and let you know whether it works. Many thanks.

> ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7
> ---
>
> Key: HIVE-13432
> URL: https://issues.apache.org/jira/browse/HIVE-13432
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
> Environment: Hadoop 2.6.2+Hive 1.2.1
>Reporter: Qiuzhuang Lian
>Assignee: Matt McCline
>
> After initiating Hive ACID ORC table compaction, the CompactorMR job throws an 
> exception:
> Error: java.lang.ArrayIndexOutOfBoundsException: 7
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1968)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1969)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:69)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:539)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:466)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1308)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:512)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:491)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> As a result, we see the Hadoop exception stack:
> 297 failed with state FAILED due to: Task failed 
> task_1458819387386_11297_m_08
> Job failed as tasks failed. failedMaps:1 failedReduces:0
> 2016-04-06 11:30:57,891 INFO  [dn209006-27]: mapreduce.Job 
> (Job.java:monitorAndPrintJob(1392)) - Counters: 14
>   Job Counters 
> Failed map tasks=16
> Killed map tasks=7
> Launched map tasks=23
> Other local map tasks=13
> Data-local map tasks=6
> Rack-local map tasks=4
> Total time spent by all maps in occupied slots (ms)=412592
> Total time spent by all reduces in occupied slots (ms)=0
> Total time spent by all map tasks (ms)=206296
> Total vcore-seconds taken by all map tasks=206296
> Total megabyte-seconds taken by all map tasks=422494208
>   Map-Reduce Framework
> CPU time spent (ms)=0
> Physical memory (bytes) snapshot=0
> Virtual memory (bytes) snapshot=0
> 2016-04-06 11:30:57,891 ERROR [dn209006-27]: compactor.Worker 
> (Worker.java:run(176)) - Caught exception while trying to compact 
> lqz.my_orc_acid_table.  Marking clean to avoid repeated failures, 
> java.io.IOException: Job failed!
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:186)
>   at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:162)
> 2016-04-06 11:30:57,894 ERROR [dn209006-27]: txn.CompactionTxnHandler 
> (CompactionTxnHandler.java:markCleaned(327)) - Expected to remove at least 
> one row from completed_txn_components when marking compaction entry as clean!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-12649) Hive on Spark will resubmitted application when not enough resouces to launch yarn application master

2016-04-10 Thread JoneZhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JoneZhang resolved HIVE-12649.
--
   Resolution: Resolved
Fix Version/s: 2.0.0
   1.3.0

> Hive on Spark will resubmitted application when not enough resouces to launch 
> yarn application master
> -
>
> Key: HIVE-12649
> URL: https://issues.apache.org/jira/browse/HIVE-12649
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.1, 1.2.1
>Reporter: JoneZhang
>Assignee: Xuefu Zhang
> Fix For: 1.3.0, 2.0.0
>
>
> Hive on Spark estimates the reducer number when the query does not set the 
> reduce number, which causes an application to be submitted. The application will 
> stay pending if the YARN queue's resources are insufficient.
> So there can be more than one pending application, probably because 
> there is more than one estimate call. The failure is soft, so it doesn't 
> prevent subsequent processing. We can make that a hard failure.
> That code is found in 
> at 
> org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:112)
> at 
> org.apache.hadoop.hive.ql.optimizer.spark.SetSparkReducerParallelism.process(SetSparkReducerParallelism.java:115)
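A minimal, self-contained sketch of the "make it a hard failure" idea in plain Java; 
the class and method names below are hypothetical and do not correspond to Hive's 
actual SparkUtilities or SetSparkReducerParallelism code. It only illustrates 
acquiring the session once and aborting subsequent estimate calls after a failure, 
instead of letting each call submit another application:

{code}
public class HardFailureSketch {
  private static Object sparkSession;  // stands in for the shared Spark session
  private static boolean failed;       // remembers that creation already failed

  static synchronized Object getOrCreateSession() {
    if (failed) {
      throw new IllegalStateException("Spark session creation already failed; aborting query");
    }
    if (sparkSession == null) {
      try {
        sparkSession = submitYarnApplication();   // may fail when the queue is full
      } catch (RuntimeException e) {
        failed = true;                            // hard failure: no retry per estimate call
        throw e;
      }
    }
    return sparkSession;
  }

  private static Object submitYarnApplication() {
    throw new IllegalStateException("YARN queue has no resources for the application master");
  }

  public static void main(String[] args) {
    for (int call = 1; call <= 3; call++) {       // three reducer-estimation calls
      try {
        getOrCreateSession();
      } catch (IllegalStateException e) {
        System.out.println("estimate call " + call + ": " + e.getMessage());
      }
    }
  }
}
{code}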



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13430) Pass error message to failure hook

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234392#comment-15234392
 ] 

Hive QA commented on HIVE-13430:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797192/HIVE-13430.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 9947 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_15
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_views
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_column_names_with_leading_and_trailing_spaces
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_mat_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_script_env_var1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_insert_overwrite_local_directory_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_update_tmp_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_char_cast
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_15
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_parquet
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_shufflejoin
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_dyn_part_max
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_minimr_broken_pipe
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
org.apache.hadoop.hive.ql.security.TestAuthorizationPreEventListener.testListener
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.testOutputFormat
org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7541/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7541/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7541/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 39 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797192 - PreCommit-HIVE-TRUNK-Build

> Pass error message to failure hook
> --
>
> Key: HIVE-13430
> URL: https://issues.apache.org/jira/browse/HIVE-13430
> Project: Hive

[jira] [Commented] (HIVE-11959) add simple test case for TestTableIterable

2016-04-10 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234371#comment-15234371
 ] 

Thejas M Nair commented on HIVE-11959:
--

Pushed to master and branch-1.
Thanks for the review, [~ashutoshc]!
I had forgotten about this one!


> add simple test case for TestTableIterable
> --
>
> Key: HIVE-11959
> URL: https://issues.apache.org/jira/browse/HIVE-11959
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-11959.1.patch
>
>
> Adding a test case to TableIterable which was introduced in HIVE-11407



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11959) add simple test case for TestTableIterable

2016-04-10 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-11959:
-
   Resolution: Fixed
Fix Version/s: 2.1.0
   1.3.0
   Status: Resolved  (was: Patch Available)

> add simple test case for TestTableIterable
> --
>
> Key: HIVE-11959
> URL: https://issues.apache.org/jira/browse/HIVE-11959
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-11959.1.patch
>
>
> Adding a test case to TableIterable which was introduced in HIVE-11407



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13424) Refactoring the code to pass a QueryState object rather than HiveConf object

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234337#comment-15234337
 ] 

Hive QA commented on HIVE-13424:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797733/HIVE-13424.3.patch

{color:green}SUCCESS:{color} +1 due to 14 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 374 failed/errored test(s), 9983 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_2_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_alter_merge_stats_orc
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join_filters
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join_nulls
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_13
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_15
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket_map_join_tez1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket_map_join_tez2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucketpruning1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_gby
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_gby_empty
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_limit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_semijoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_simple_select
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_stats
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_subq_exists
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_subq_in
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_subq_not_in
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_udf_udaf
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_union
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_views
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_windowing
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_column_names_with_leading_and_trailing_spaces
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_constprog_dpp
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_constprog_semijoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_correlationoptimizer1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_count
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_create_merge_compressed
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cross_join

[jira] [Updated] (HIVE-12959) LLAP: Add task scheduler timeout when no nodes are alive

2016-04-10 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-12959:
-
Attachment: HIVE-12959.6.patch

Addressed [~sseth]'s review comments.

> LLAP: Add task scheduler timeout when no nodes are alive
> 
>
> Key: HIVE-12959
> URL: https://issues.apache.org/jira/browse/HIVE-12959
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-12959.1.patch, HIVE-12959.2.patch, 
> HIVE-12959.3.patch, HIVE-12959.5.patch, HIVE-12959.6.patch
>
>
> When there are no LLAP daemons running, the task scheduler should have a timeout 
> to fail the query instead of waiting forever. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13380) Decimal should have lower precedence than double in type hierachy

2016-04-10 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234315#comment-15234315
 ] 

Jason Dere commented on HIVE-13380:
---

I have some concerns about backward compatibility, though [~gopalv] has also argued 
that some of these kinds of changes may be acceptable for the 2.x line. We would 
definitely have to call out this change in the docs/release notes.
I'm surprised there aren't more test fixes due to this change.
+1
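To illustrate the comparison semantics at stake, a small self-contained Java 
example (illustrative only, not Hive code): if decimal wins the precedence, the 
double operand is widened to decimal and its binary representation error becomes 
visible; if double wins, the decimal operand is narrowed to double and the values 
compare equal.

{code}
import java.math.BigDecimal;

public class DecimalDoublePrecedence {
  public static void main(String[] args) {
    double d = 0.1;                          // binary floating point, inexact
    BigDecimal dec = new BigDecimal("0.1");  // exact decimal literal

    // Decimal takes precedence: widen the double to decimal. The binary
    // representation error of 0.1 shows up and the comparison fails.
    System.out.println(new BigDecimal(d).compareTo(dec) == 0);      // false

    // Double takes precedence: narrow the decimal to double. The two values
    // compare equal, which is closer to what users expect from 0.1 = 0.1.
    System.out.println(Double.compare(dec.doubleValue(), d) == 0);  // true
  }
}
{code}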

> Decimal should have lower precedence than double in type hierachy
> -
>
> Key: HIVE-13380
> URL: https://issues.apache.org/jira/browse/HIVE-13380
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-13380.2.patch, HIVE-13380.4.patch, 
> HIVE-13380.5.patch, HIVE-13380.patch
>
>
> Currently it's the other way round. Also, decimal should be lower than float.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12033) Move TestCliDriver/TestNegativeCliDriver out of ANT and make it debugable

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234300#comment-15234300
 ] 

Hive QA commented on HIVE-12033:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12765052/HIVE-12033.1-spark.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/1041/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/1041/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-1041/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-SPARK-Build-1041/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z spark ]]
+ [[ -d apache-git-source-source ]]
+ [[ ! -d apache-git-source-source/.git ]]
+ [[ ! -d apache-git-source-source ]]
+ cd apache-git-source-source
+ git fetch origin
From https://github.com/apache/hive
   4ac966c..bcbc415  branch-1   -> origin/branch-1
   b2b61da..8528c63  branch-2.0 -> origin/branch-2.0
   79c1c69..8f6b28a  llap   -> origin/llap
   4f9194d..010157e  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 078dbac HIVE-12205: Unify metric collection for local and remote 
spark client. (Chinna via Chengxiang)
+ git clean -f -d
+ git checkout spark
Already on 'spark'
+ git reset --hard origin/spark
HEAD is now at 078dbac HIVE-12205: Unify metric collection for local and remote 
spark client. (Chinna via Chengxiang)
+ git merge --ff-only origin/spark
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12765052 - PreCommit-HIVE-SPARK-Build

> Move TestCliDriver/TestNegativeCliDriver out of ANT and make it debugable
> -
>
> Key: HIVE-12033
> URL: https://issues.apache.org/jira/browse/HIVE-12033
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Affects Versions: 1.2.1
>Reporter: Sergio Peña
>Assignee: Sergio Peña
>Priority: Minor
> Attachments: HIVE-12033.1-spark.patch, HIVE-12033.1.patch
>
>
> The ANT auto-generated test sources make the TestCliDriver code a little 
> complicated to debug with IntelliJ and Eclipse; remote debugging is currently the 
> best way to do it.
> There should be a way to move off the ANT auto-generated source plug-in 
> and make TestCliDriver easily debuggable from current IDEs such as IntelliJ and 
> Eclipse.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13316) Upgrade to Calcite 1.7

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234298#comment-15234298
 ] 

Hive QA commented on HIVE-13316:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797717/HIVE-13316.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 80 failed/errored test(s), 9694 tests 
executed
*Failed tests:*
{noformat}
TestCliDriver-authorization_1.q-groupby7_map_multi_single_reducer.q-bucketsortoptimize_insert_1.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-auto_join30.q-unionall_unbalancedppd.q-lock1.q-and-12-more - did 
not produce a TEST-*.xml file
TestCliDriver-auto_join9.q-udf_double.q-insert_into_with_schema.q-and-12-more - 
did not produce a TEST-*.xml file
TestCliDriver-cp_mj_rc.q-masking_disablecbo_1.q-decimal_3.q-and-12-more - did 
not produce a TEST-*.xml file
TestCliDriver-cte_4.q-filter_join_breaktask.q-input43.q-and-12-more - did not 
produce a TEST-*.xml file
TestCliDriver-gby_star.q-udf_regexp_replace.q-load_dyn_part2.q-and-12-more - 
did not produce a TEST-*.xml file
TestCliDriver-groupby_sort_test_1.q-skewjoinopt15.q-decimal_precision2.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-index_bitmap_rc.q-constprog_dpp.q-load_nonpart_authsuccess.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-input17.q-bucket_map_join_tez1.q-ppd_random.q-and-12-more - did 
not produce a TEST-*.xml file
TestCliDriver-sample_islocalmode_hook_use_metadata.q-udf_bitwise_shiftleft.q-decimal_6.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-timestamp_literal.q-smb_mapjoin9.q-smb_join_partition_key.q-and-12-more
 - did not produce a TEST-*.xml file
TestCliDriver-udf_decode.q-update_orig_table.q-join44.q-and-12-more - did not 
produce a TEST-*.xml file
TestCliDriver-union_remove_1.q-mapjoin_mapjoin.q-constantPropWhen.q-and-12-more 
- did not produce a TEST-*.xml file
TestCliDriver-vector_udf1.q-join16.q-insert_overwrite_local_directory_1.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join30.q-vector_data_types.q-tez_join.q-and-12-more - 
did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_13.q-tez_self_join.q-orc_vectorization_ppd.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-cte_4.q-orc_merge5.q-vectorization_limit.q-and-12-more - 
did not produce a TEST-*.xml file
TestSparkCliDriver-skewjoinopt15.q-bucketmapjoin3.q-udf_percentile.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_excludeHadoop20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_archive_multi
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constprog3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constprog_semijoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_genericudf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join42
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_masking_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_num_op_type_conv
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_offset_limit_ppd_optimizer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_constant_expr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_outer_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_udf_col
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_union_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rand_partitionpruner3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_semijoin4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin_25
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_table_access_keys_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_hour
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_minute
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_parse_url
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_second
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_elt
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorization_short_regress
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_self_join

[jira] [Commented] (HIVE-11615) Create test for max thrift message setting

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234292#comment-15234292
 ] 

Ashutosh Chauhan commented on HIVE-11615:
-

+1

> Create test for max thrift message setting
> --
>
> Key: HIVE-11615
> URL: https://issues.apache.org/jira/browse/HIVE-11615
> Project: Hive
>  Issue Type: Test
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-11615.1.patch
>
>
> Create a test case for HIVE-8680



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11806) Create test for HIVE-11174

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234290#comment-15234290
 ] 

Ashutosh Chauhan commented on HIVE-11806:
-

+1

> Create test for HIVE-11174
> --
>
> Key: HIVE-11806
> URL: https://issues.apache.org/jira/browse/HIVE-11806
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.2.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
> Attachments: HIVE-11806.1.patch
>
>
> We are lacking tests for HIVE-11174. Adding one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11959) add simple test case for TestTableIterable

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234288#comment-15234288
 ] 

Ashutosh Chauhan commented on HIVE-11959:
-

+1

> add simple test case for TestTableIterable
> --
>
> Key: HIVE-11959
> URL: https://issues.apache.org/jira/browse/HIVE-11959
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-11959.1.patch
>
>
> Adding a test case to TableIterable which was introduced in HIVE-11407



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12019) Create unit test for HIVE-10732

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234286#comment-15234286
 ] 

Ashutosh Chauhan commented on HIVE-12019:
-

+1

> Create unit test for HIVE-10732
> ---
>
> Key: HIVE-12019
> URL: https://issues.apache.org/jira/browse/HIVE-12019
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-12019.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12033) Move TestCliDriver/TestNegativeCliDriver out of ANT and make it debugable

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234285#comment-15234285
 ] 

Ashutosh Chauhan commented on HIVE-12033:
-

[~spena] This will make life a lot easier for debugging. Let's try to get this 
in.

> Move TestCliDriver/TestNegativeCliDriver out of ANT and make it debugable
> -
>
> Key: HIVE-12033
> URL: https://issues.apache.org/jira/browse/HIVE-12033
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Affects Versions: 1.2.1
>Reporter: Sergio Peña
>Assignee: Sergio Peña
>Priority: Minor
> Attachments: HIVE-12033.1-spark.patch, HIVE-12033.1.patch
>
>
> The ANT auto-generated test sources make the TestCliDriver code a little 
> complicated to debug with IntelliJ and Eclipse; remote debugging is currently the 
> best way to do it.
> There should be a way to move off the ANT auto-generated source plug-in 
> and make TestCliDriver easily debuggable from current IDEs such as IntelliJ and 
> Eclipse.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12041) Add unit test for HIVE-9386

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234281#comment-15234281
 ] 

Ashutosh Chauhan commented on HIVE-12041:
-

+1

> Add unit test for HIVE-9386
> ---
>
> Key: HIVE-12041
> URL: https://issues.apache.org/jira/browse/HIVE-12041
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.0, 1.1.0, 1.1.1, 1.2.1
>Reporter: Wei Zheng
>Assignee: Wei Zheng
> Attachments: HIVE-12041.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12342) Set default value of hive.optimize.index.filter to true

2016-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12342:

Status: Patch Available  (was: Open)

> Set default value of hive.optimize.index.filter to true
> ---
>
> Key: HIVE-12342
> URL: https://issues.apache.org/jira/browse/HIVE-12342
> Project: Hive
>  Issue Type: Task
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12342.1.patch, HIVE-12342.2.patch, 
> HIVE-12342.3.patch, HIVE-12342.patch
>
>
> This configuration governs predicate pushdown (PPD) for the storage layer. When 
> applicable, it always helps. It should be on by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12342) Set default value of hive.optimize.index.filter to true

2016-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12342:

Status: Open  (was: Patch Available)

> Set default value of hive.optimize.index.filter to true
> ---
>
> Key: HIVE-12342
> URL: https://issues.apache.org/jira/browse/HIVE-12342
> Project: Hive
>  Issue Type: Task
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12342.1.patch, HIVE-12342.2.patch, 
> HIVE-12342.3.patch, HIVE-12342.patch
>
>
> This configuration governs predicate pushdown (PPD) for the storage layer. When 
> applicable, it always helps. It should be on by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12342) Set default value of hive.optimize.index.filter to true

2016-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12342:

Attachment: HIVE-12342.3.patch

> Set default value of hive.optimize.index.filter to true
> ---
>
> Key: HIVE-12342
> URL: https://issues.apache.org/jira/browse/HIVE-12342
> Project: Hive
>  Issue Type: Task
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12342.1.patch, HIVE-12342.2.patch, 
> HIVE-12342.3.patch, HIVE-12342.patch
>
>
> This configuration governs predicate pushdown (PPD) for the storage layer. When 
> applicable, it always helps. It should be on by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13240) GroupByOperator: Drop the hash aggregates when closing operator

2016-04-10 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234278#comment-15234278
 ] 

Gopal V commented on HIVE-13240:


Failed tests have NameNode errors

{code}
2016-04-10T00:47:48,456 WARN  
[org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@773c7b73[]]:
 namenode.NameNodeResourceChecker 
(NameNodeResourceChecker.java:isResourceAvailable(89)) - Space available on 
volume '/dev/xvde1' is 0, which is below the configured reserved amount 
104857600
2016-04-10T00:47:48,456 WARN  
[org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@773c7b73[]]:
 namenode.FSNamesystem (FSNamesystem.java:run(5159)) - NameNode low on 
available disk space. Entering safe mode.
2016-04-10T00:47:48,456 INFO  
[org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@773c7b73[]]:
 hdfs.StateChange (FSNamesystem.java:reportStatus(6003)) - STATE* Safe mode is 
ON.
Resources are low on NN. Please add or free up more resources then turn off 
safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
leave" to turn safe mode off.
2016-04-10T00:47:54,612 WARN  [LeaseRenewer:hiveptest@localhost:35563[]]: 
hdfs.LeaseRenewer (LeaseRenewer.java:run(458)) - Failed to renew lease for 
[DFSClient_NONMAPREDUCE_623273627_1] for 30 seconds.  Will retry shortly ...
org.apache.hadoop.ipc.RemoteException: Cannot renew lease for 
DFSClient_NONMAPREDUCE_623273627_1. Name node is in safe mode.
Resources are low on NN. Please add or free up more resources then turn off 
safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
leave" to turn safe mode off.
{code}

> GroupByOperator: Drop the hash aggregates when closing operator
> ---
>
> Key: HIVE-13240
> URL: https://issues.apache.org/jira/browse/HIVE-13240
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.3.0, 1.2.1, 2.0.0
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-13240.03.patch, HIVE-13240.1.patch, 
> HIVE-13240.2.patch
>
>
> GroupByOperator holds onto the hash aggregates accumulated when the plan is 
> cached.
> Drop the hashAggregates in case of an error while forwarding to the next 
> operator.
> Added for PTF, TopN and all GroupBy cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13084) Vectorization add support for PROJECTION Multi-AND/OR

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234244#comment-15234244
 ] 

Hive QA commented on HIVE-13084:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797695/HIVE-13084.05.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7538/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7538/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7538/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-7538/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 010157e HIVE-13420 : Clarify HS2 WebUI Query 'Elapsed Time' 
(Szehon, reviewed by Aihua Xu and Mohit Sabharwal)
+ git clean -f -d
Removing 
ql/src/test/queries/clientpositive/insert_values_orig_table_use_metadata.q
Removing 
ql/src/test/results/clientpositive/insert_values_orig_table_use_metadata.q.out
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 010157e HIVE-13420 : Clarify HS2 WebUI Query 'Elapsed Time' 
(Szehon, reviewed by Aihua Xu and Mohit Sabharwal)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797695 - PreCommit-HIVE-TRUNK-Build

> Vectorization add support for PROJECTION Multi-AND/OR
> -
>
> Key: HIVE-13084
> URL: https://issues.apache.org/jira/browse/HIVE-13084
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Rajesh Balamohan
>Assignee: Matt McCline
> Attachments: HIVE-13084.01.patch, HIVE-13084.02.patch, 
> HIVE-13084.03.patch, HIVE-13084.04.patch, HIVE-13084.05.patch, 
> vector_between_date.q
>
>
> When there is a case statement in a group by, Hive throws an "unable to 
> vectorize" exception.
> E.g., a query just to demonstrate the problem:
> {noformat}
> explain select l_partkey, case when l_commitdate between '2015-06-30' AND 
> '2015-07-06' THEN '2015-06-30' END as wk from lineitem_test_l_shipdate_ts 
> group by l_partkey, case when l_commitdate between '2015-06-30' AND 
> '2015-07-06' THEN '2015-06-30' END;
> org.apache.hadoop.hive.ql.metadata.HiveException: Could not vectorize 
> expression: org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc
> Vertex dependency in root stage
> Reducer 2 <- Map 1 (SIMPLE_EDGE)
> Stage-0
>   Fetch Operator
> limit:-1
> Stage-1
>   Reducer 2
>   File Output Operator [FS_7]
> Group By Operator [GBY_5] (rows=888777234 width=108)
>   Output:["_col0","_col1"],keys:KEY._col0, KEY._col1
> <-Map 1 [SIMPLE_EDGE]
>   SHUFFLE [RS_4]
> PartitionCols:_col0, _col1
> Group By Operator [GBY_3] 

[jira] [Commented] (HIVE-13341) Stats state is not captured correctly: differentiate load table and create table

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234242#comment-15234242
 ] 

Hive QA commented on HIVE-13341:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797640/HIVE-13341.04.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 478 failed/errored test(s), 9970 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_allcolref_in_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_file_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_numbuckets_partitioned_table2_h23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_numbuckets_partitioned_table_h23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_clusterby_sortby
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_skewed_table
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_not_sorted
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_table_serde2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_analyze_table_null_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_output_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_SortUnionTransposeRule
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_cross_product_check_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_outer_join_ppr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnStatsUpdateForStatsOptimizer_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_column_names_with_leading_and_trailing_spaces
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_columnstats_partlvl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_combine2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_alter_list_bucketing_table1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_like
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_like_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_skewed_table1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas

[jira] [Updated] (HIVE-13473) upgrade Apache Directory Server version

2016-04-10 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HIVE-13473:

Attachment: (was: HIVE-13473.1.patch)

> upgrade Apache Directory Server version
> ---
>
> Key: HIVE-13473
> URL: https://issues.apache.org/jira/browse/HIVE-13473
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-13473.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13473) upgrade Apache Directory Server version

2016-04-10 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HIVE-13473:

Attachment: HIVE-13473.2.patch

> upgrade Apache Directory Server version
> ---
>
> Key: HIVE-13473
> URL: https://issues.apache.org/jira/browse/HIVE-13473
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-13473.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13473) upgrade Apache Directory Server version

2016-04-10 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HIVE-13473:

Release Note: HIVE-13473 Upgrade Apache Directory Server version
  Status: Patch Available  (was: Open)

[~ashutoshc]

> upgrade Apache Directory Server version
> ---
>
> Key: HIVE-13473
> URL: https://issues.apache.org/jira/browse/HIVE-13473
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-13473.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13473) upgrade Apache Directory Server version

2016-04-10 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HIVE-13473:

Attachment: HIVE-13473.1.patch

> upgrade Apache Directory Server version
> ---
>
> Key: HIVE-13473
> URL: https://issues.apache.org/jira/browse/HIVE-13473
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-13473.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234237#comment-15234237
 ] 

Gabor Liptak commented on HIVE-10293:
-

https://issues.apache.org/jira/browse/HIVE-13473

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-10293.1.patch
>
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234232#comment-15234232
 ] 

Ashutosh Chauhan commented on HIVE-10293:
-

+1 
Can you create another jira for LDAP minor version change since I assume we 
will need that for this to work.

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-10293.1.patch
>
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HIVE-10293:

Release Note: HIVE-10293 Add travis build configuration
  Status: Patch Available  (was: Open)

I removed the LDAP minor version change from this patch (as I didn't get 
successful test run with it).

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-10293.1.patch
>
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HIVE-10293:

Attachment: HIVE-10293.1.patch

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HIVE-10293.1.patch
>
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13287) Add logic to estimate stats for IN operator

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234222#comment-15234222
 ] 

Ashutosh Chauhan commented on HIVE-13287:
-

+1

> Add logic to estimate stats for IN operator
> ---
>
> Key: HIVE-13287
> URL: https://issues.apache.org/jira/browse/HIVE-13287
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-13287.01.patch, HIVE-13287.02.patch, 
> HIVE-13287.patch
>
>
> Currently, the IN operator is handled by the default case, which simply reduces 
> the input rows to half. This may lead to wrong estimates for the number of rows 
> produced by Filter operators.
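
For illustration, one common heuristic estimates IN selectivity from the number of constants in the IN list and the column's distinct-value count; the sketch below shows that heuristic only and is not necessarily what the attached patch implements:

{code}
// Hypothetical illustration of an IN selectivity heuristic; not taken from the patch.
public class InSelectivitySketch {
  // rows: input row count, ndv: distinct values of the column, inListSize: constants in the IN list
  static long estimateRows(long rows, long ndv, int inListSize) {
    if (ndv <= 0) {
      return rows / 2; // fall back to the old default-case behavior
    }
    double selectivity = Math.min(1.0, (double) inListSize / ndv);
    return (long) Math.ceil(rows * selectivity);
  }

  public static void main(String[] args) {
    // e.g. 1,000,000 rows, 500 distinct values, col IN (3 constants) -> ~6,000 rows
    System.out.println(estimateRows(1_000_000L, 500L, 3));
  }
}
{code}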



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234219#comment-15234219
 ] 

Ashutosh Chauhan commented on HIVE-13472:
-

+1 pending tests

> Replace primitive wrapper's valueOf method with parse* method to avoid 
> unnecessary boxing/unboxing
> --
>
> Key: HIVE-13472
> URL: https://issues.apache.org/jira/browse/HIVE-13472
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Attachments: HIVE-13472.0.patch
>
>
> There are lots of calls to the primitive wrappers' valueOf methods which should 
> be replaced with parse* methods.
> For example, Integer.valueOf(String) returns an Integer but 
> Integer.parseInt(String) returns a primitive int, so we can avoid 
> unnecessary boxing/unboxing by replacing some of them.
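
A minimal illustration of the kind of change the description refers to (illustrative only, not taken from the patch):

{code}
// Illustrative only, not taken from the patch.
public class ParseVsValueOf {
  public static void main(String[] args) {
    String s = "42";

    // Integer.valueOf returns a boxed Integer; assigning it to an int forces unboxing.
    int boxedThenUnboxed = Integer.valueOf(s);

    // Integer.parseInt returns a primitive int directly, so no boxing/unboxing happens.
    int parsed = Integer.parseInt(s);

    System.out.println(boxedThenUnboxed + parsed);
  }
}
{code}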



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234217#comment-15234217
 ] 

Ashutosh Chauhan commented on HIVE-10293:
-

Hive tests take more than 10 hours to run. We have to split them up so that 
they fall under the Travis CI limits. We can take that up later. Let's get this in 
for now. Can you upload your patch here?

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-10293:

Assignee: Gabor Liptak

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234186#comment-15234186
 ] 

Hive QA commented on HIVE-13458:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797813/HIVE-13458.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 34 failed/errored test(s), 9913 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-tez_joins_explain.q-vector_decimal_aggregate.q-vector_groupby_mapjoin.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_decimal_2.q-schema_evol_text_fetchwork_table.q-constprog_semijoin.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.ql.TestTxnCommands2.testDeleteIn
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
org.apache.hadoop.hive.ql.TestTxnCommands2.testOrcNoPPD
org.apache.hadoop.hive.ql.TestTxnCommands2.testUpdateMixedCase
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccessWithReadOnly
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropDatabase
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7536/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7536/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7536/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 34 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797813 - PreCommit-HIVE-TRUNK-Build

> Heartbeater doesn't fail query when heartbeat fails
> ---
>
> Key: HIVE-13458
> URL: https://issues.apache.org/jira/browse/HIVE-13458
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: 

[jira] [Commented] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234169#comment-15234169
 ] 

Gabor Liptak commented on HIVE-10293:
-

Even after splitting the tests in two, we will still run into the 50-minute limit 
...

https://travis-ci.org/gliptak/hive/builds/122060116

> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Priority: Minor
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10293) enabling travis-ci build?

2016-04-10 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234101#comment-15234101
 ] 

Gabor Liptak commented on HIVE-10293:
-

After enabling the tests, the build (job?) timed out at 50 minutes (a limit hardcoded by 
travis-ci.org ...)

https://travis-ci.org/apache/hive/builds/121990748
https://docs.travis-ci.com/user/customizing-the-build/#Build-Timeouts

My recommendation would be to enable only the install step for now, with further 
enhancements as a follow-up JIRA.


> enabling travis-ci build?
> -
>
> Key: HIVE-10293
> URL: https://issues.apache.org/jira/browse/HIVE-10293
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Reporter: Gabor Liptak
>Priority: Minor
>
> I would like to contribute a .travis.yml for Hive.
> In particular, this would allow contributors working through Github, to 
> validate their own commits on their own branches.
> Please comment.
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing

2016-04-10 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-13472:
--
Attachment: HIVE-13472.0.patch

> Replace primitive wrapper's valueOf method with parse* method to avoid 
> unnecessary boxing/unboxing
> --
>
> Key: HIVE-13472
> URL: https://issues.apache.org/jira/browse/HIVE-13472
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Attachments: HIVE-13472.0.patch
>
>
> There are lots of calls to the primitive wrappers' valueOf methods which should 
> be replaced with parse* methods.
> For example, Integer.valueOf(String) returns an Integer but 
> Integer.parseInt(String) returns a primitive int, so we can avoid 
> unnecessary boxing/unboxing by replacing some of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing

2016-04-10 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-13472:
--
Attachment: (was: HIVE-13472.0.patch)

> Replace primitive wrapper's valueOf method with parse* method to avoid 
> unnecessary boxing/unboxing
> --
>
> Key: HIVE-13472
> URL: https://issues.apache.org/jira/browse/HIVE-13472
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>
> There are lots of calls to the primitive wrappers' valueOf methods which should 
> be replaced with parse* methods.
> For example, Integer.valueOf(String) returns an Integer but 
> Integer.parseInt(String) returns a primitive int, so we can avoid 
> unnecessary boxing/unboxing by replacing some of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13425) Fix partition addition in MSCK REPAIR TABLE command

2016-04-10 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HIVE-13425:
--
Status: Patch Available  (was: Open)

> Fix partition addition in MSCK REPAIR TABLE command
> ---
>
> Key: HIVE-13425
> URL: https://issues.apache.org/jira/browse/HIVE-13425
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HIVE-13425.1.patch
>
>
> I create a table with the following HiveQL.
> {code}
> hive> create table example (name string) partitioned by (id int);
> {code}
> Then I make some directories for this table in HDFS.
> {code}
> [root@hadoop ~]# hdfs dfs -ls -R /user/hive/warehouse/example
> drwxr-xr-x   - root hadoop  0 2016-04-05 22:21 
> /user/hive/warehouse/example/id=1
> drwxr-xr-x   - root hadoop  0 2016-04-05 22:22 
> /user/hive/warehouse/example/id=1/id=2
> -rw-r--r--   1 root hadoop  8 2016-04-05 22:22 
> /user/hive/warehouse/example/id=1/id=2/example.txt
> {code}
> Next, I executed the MSCK REPAIR TABLE command, which added a partition. The 
> result was as follows.
> {code}
> [root@hadoop ~]# hive -e 'msck repair table example'
> OK
> Partitions not in metastore:  example:id=1/id=2
> Repair: Added partition to metastore example:id=1/id=2
> Time taken: 1.243 seconds, Fetched: 2 row(s)
> [root@hadoop ~]# hive -e 'show partitions example'
> OK
> id=2
> {code}
> "id=1" should be a partition, but "id=2" was added. I will fix this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13425) Fix partition addition in MSCK REPAIR TABLE command

2016-04-10 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HIVE-13425:
--
Attachment: HIVE-13425.1.patch

> Fix partition addition in MSCK REPAIR TABLE command
> ---
>
> Key: HIVE-13425
> URL: https://issues.apache.org/jira/browse/HIVE-13425
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.0.0
>Reporter: Shinichi Yamashita
>Assignee: Shinichi Yamashita
>Priority: Minor
> Attachments: HIVE-13425.1.patch
>
>
> I create a table with the following HiveQL.
> {code}
> hive> create table example (name string) partitioned by (id int);
> {code}
> Then I make some directories for this table in HDFS.
> {code}
> [root@hadoop ~]# hdfs dfs -ls -R /user/hive/warehouse/example
> drwxr-xr-x   - root hadoop  0 2016-04-05 22:21 
> /user/hive/warehouse/example/id=1
> drwxr-xr-x   - root hadoop  0 2016-04-05 22:22 
> /user/hive/warehouse/example/id=1/id=2
> -rw-r--r--   1 root hadoop  8 2016-04-05 22:22 
> /user/hive/warehouse/example/id=1/id=2/example.txt
> {code}
> Next, I executed the MSCK REPAIR TABLE command, which added a partition. The 
> result was as follows.
> {code}
> [root@hadoop ~]# hive -e 'msck repair table example'
> OK
> Partitions not in metastore:  example:id=1/id=2
> Repair: Added partition to metastore example:id=1/id=2
> Time taken: 1.243 seconds, Fetched: 2 row(s)
> [root@hadoop ~]# hive -e 'show partitions example'
> OK
> id=2
> {code}
> "id=1" should be a partition, but "id=2" was added. I will fix this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13429) Tool to remove dangling scratch dir

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234068#comment-15234068
 ] 

Hive QA commented on HIVE-13429:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797803/HIVE-13429.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 9970 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.metastore.TestMetaStoreAuthorization.testMetaStoreAuthorization
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testConnection
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeTokenAuth
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testProxyAuth
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7535/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7535/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7535/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797803 - PreCommit-HIVE-TRUNK-Build

> Tool to remove dangling scratch dir
> ---
>
> Key: HIVE-13429
> URL: https://issues.apache.org/jira/browse/HIVE-13429
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Attachments: HIVE-13429.1.patch, HIVE-13429.2.patch, 
> HIVE-13429.3.patch, HIVE-13429.4.patch
>
>
> We have seen in some cases that users leave the scratch dir behind, which 
> eventually eats up HDFS storage. This can happen when the VM restarts and Hive 
> gets no chance to run its shutdown hook. This applies to both HiveCli and 
> HiveServer2. Here we provide an external tool to clear dead scratch dirs as 
> needed.
> We need a way to identify which scratch dir is in use, and we rely on the HDFS 
> write lock for that. Here is how the HDFS write lock works:
> 1. An HDFS client opens an HDFS file for write and only closes it at the time of 
> shutdown
> 2. A cleanup process can try to open the HDFS file for write. If the client holding 
> this file is still running, we will get an exception. Otherwise, we know the 
> client is dead
> 3. If the HDFS client dies without closing the HDFS file, the NN will reclaim the 
> lease after 10 min, i.e., the HDFS file held by the dead client is writable 
> again after 10 min
> So here is how we remove a dangling scratch directory in Hive:
> 1. HiveCli/HiveServer2 opens a well-named lock file in the scratch directory and 
> only closes it when we are about to drop the scratch directory
> 2. A command line tool, cleardanglingscratchdir, checks every scratch 
> directory and tries to open the lock file for write. If it does not get an exception, 
> the owner is dead and we can safely remove the scratch directory
> 3. The 10 min window means it is possible that a HiveCli/HiveServer2 is dead but 
> we still cannot reclaim its scratch directory for another 10 min, but this 
> should be tolerable
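
A minimal sketch of the liveness check described in step 2 above, assuming the HDFS lease behavior outlined in the description; the lock file name, path, and class name here are hypothetical and do not come from the attached patches:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch; not the actual cleardanglingscratchdir implementation.
public class ScratchDirLivenessSketch {
  // Returns true if the lock file can be opened for write, i.e. the original
  // HiveCli/HiveServer2 no longer holds the HDFS write lease and the dir is dead.
  static boolean isScratchDirDead(FileSystem fs, Path lockFile) {
    try {
      fs.append(lockFile).close(); // fails while the owner still holds the lease
      return true;
    } catch (IOException e) {
      return false; // owner is still alive, or its lease has not expired yet
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path lockFile = new Path("/tmp/hive/someuser/some-session-id/inuse.lck"); // hypothetical path
    if (isScratchDirDead(fs, lockFile)) {
      fs.delete(lockFile.getParent(), true); // remove the dangling scratch directory
    }
  }
}
{code}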



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing

2016-04-10 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234036#comment-15234036
 ] 

Kousuke Saruta commented on HIVE-13472:
---

[~ashutoshc] Could you review this patch? Thanks.

> Replace primitive wrapper's valueOf method with parse* method to avoid 
> unnecessary boxing/unboxing
> --
>
> Key: HIVE-13472
> URL: https://issues.apache.org/jira/browse/HIVE-13472
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Attachments: HIVE-13472.0.patch
>
>
> There are lots of calls to the primitive wrappers' valueOf methods which should 
> be replaced with parse* methods.
> For example, Integer.valueOf(String) returns an Integer but 
> Integer.parseInt(String) returns a primitive int, so we can avoid 
> unnecessary boxing/unboxing by replacing some of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing

2016-04-10 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-13472:
--
Status: Patch Available  (was: Open)

> Replace primitive wrapper's valueOf method with parse* method to avoid 
> unnecessary boxing/unboxing
> --
>
> Key: HIVE-13472
> URL: https://issues.apache.org/jira/browse/HIVE-13472
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Attachments: HIVE-13472.0.patch
>
>
> There are lots of calls to the primitive wrappers' valueOf methods which should 
> be replaced with parse* methods.
> For example, Integer.valueOf(String) returns an Integer but 
> Integer.parseInt(String) returns a primitive int, so we can avoid 
> unnecessary boxing/unboxing by replacing some of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13472) Replace primitive wrapper's valueOf method with parse* method to avoid unnecessary boxing/unboxing

2016-04-10 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-13472:
--
Attachment: HIVE-13472.0.patch

> Replace primitive wrapper's valueOf method with parse* method to avoid 
> unnecessary boxing/unboxing
> --
>
> Key: HIVE-13472
> URL: https://issues.apache.org/jira/browse/HIVE-13472
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.1.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Attachments: HIVE-13472.0.patch
>
>
> There are lots of calls to the primitive wrappers' valueOf methods which should 
> be replaced with parse* methods.
> For example, Integer.valueOf(String) returns an Integer but 
> Integer.parseInt(String) returns a primitive int, so we can avoid 
> unnecessary boxing/unboxing by replacing some of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13240) GroupByOperator: Drop the hash aggregates when closing operator

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15234022#comment-15234022
 ] 

Hive QA commented on HIVE-13240:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797623/HIVE-13240.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 49 failed/errored test(s), 9918 tests 
executed
*Failed tests:*
{noformat}
TestMiniTezCliDriver-auto_join30.q-vector_data_types.q-tez_join.q-and-12-more - 
did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_acid3.q-vector_decimal_trailing.q-lvj_mapjoin.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_interval_2.q-bucket3.q-vectorization_7.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_minimr_broken_pipe
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.ql.exec.tez.TestHostAffinitySplitLocationProvider.testOrcSplitsLocationAffinity
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.concurrencyFalse
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testDDLExclusive
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testDelete
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testLockTimeout
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testRollback
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testSingleReadPartition
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testSingleWriteTable
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testUpdate
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testWriteDynamicPartition
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.api.repl.commands.TestCommands.org.apache.hive.hcatalog.api.repl.commands.TestCommands
org.apache.hive.hcatalog.mapreduce.TestHCatPartitionPublish.org.apache.hive.hcatalog.mapreduce.TestHCatPartitionPublish
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp

[jira] [Updated] (HIVE-13420) Clarify HS2 WebUI Query 'Elapsed TIme'

2016-04-10 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-13420:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Ran those tests and they are not reproducible.

Committed to master, thanks guys for the review.

> Clarify HS2 WebUI Query 'Elapsed TIme'
> --
>
> Key: HIVE-13420
> URL: https://issues.apache.org/jira/browse/HIVE-13420
> Project: Hive
>  Issue Type: Sub-task
>  Components: Diagnosability
>Affects Versions: 2.0.0
>Reporter: Szehon Ho
>Assignee: Szehon Ho
> Attachments: Elapsed Time.png, HIVE-13420.2.patch, 
> HIVE-13420.2.patch, HIVE-13420.patch, Patched UI.2.png, Patched UI.png
>
>
> Today the "Queries" section of the WebUI shows SQLOperations that are not 
> closed.
> Elapsed time is thus a bit confusing: people might take it to mean query 
> runtime, but actually it is the time since the operation was opened.  The query 
> may be finished while the operation is not yet closed.  Perhaps another timer 
> column is needed showing the runtime of the query to reduce this confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13432) ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233975#comment-15233975
 ] 

Matt McCline commented on HIVE-13432:
-

I ported a number of commits from master to branch-2.0, including:

HIVE-12894 Detect whether ORC is reading from ACID table correctly for Schema 
Evolution (Matt McCline, reviewed by Prasanth J and Eugene Koifman)

which may fix this issue.

> ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7
> ---
>
> Key: HIVE-13432
> URL: https://issues.apache.org/jira/browse/HIVE-13432
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
> Environment: Hadoop 2.6.2+Hive 1.2.1
>Reporter: Qiuzhuang Lian
>
> After initiating HIVE ACID ORC table compaction, the CompactorMR job throws 
> exception:
> Error: java.lang.ArrayIndexOutOfBoundsException: 7
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.(TreeReaderFactory.java:1968)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.(TreeReaderFactory.java:1969)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:69)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.(RecordReaderImpl.java:202)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:539)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.(OrcRawRecordMerger.java:183)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.(OrcRawRecordMerger.java:466)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1308)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:512)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:491)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> As a result, we see hadoop exception stack,
> 297 failed with state FAILED due to: Task failed 
> task_1458819387386_11297_m_08
> Job failed as tasks failed. failedMaps:1 failedReduces:0
> 2016-04-06 11:30:57,891 INFO  [dn209006-27]: mapreduce.Job 
> (Job.java:monitorAndPrintJob(1392)) - Counters: 14
>   Job Counters 
> Failed map tasks=16
> Killed map tasks=7
> Launched map tasks=23
> Other local map tasks=13
> Data-local map tasks=6
> Rack-local map tasks=4
> Total time spent by all maps in occupied slots (ms)=412592
> Total time spent by all reduces in occupied slots (ms)=0
> Total time spent by all map tasks (ms)=206296
> Total vcore-seconds taken by all map tasks=206296
> Total megabyte-seconds taken by all map tasks=422494208
>   Map-Reduce Framework
> CPU time spent (ms)=0
> Physical memory (bytes) snapshot=0
> Virtual memory (bytes) snapshot=0
> 2016-04-06 11:30:57,891 ERROR [dn209006-27]: compactor.Worker 
> (Worker.java:run(176)) - Caught exception while trying to compact 
> lqz.my_orc_acid_table.  Marking clean to avoid repeated failures, 
> java.io.IOException: Job failed!
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:186)
>   at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:162)
> 2016-04-06 11:30:57,894 ERROR [dn209006-27]: txn.CompactionTxnHandler 
> (CompactionTxnHandler.java:markCleaned(327)) - Expected to remove at least 
> one row from completed_txn_components when marking compaction entry as clean!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-13432) ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-13432:
---

Assignee: Matt McCline

> ACID ORC CompactorMR job throws java.lang.ArrayIndexOutOfBoundsException: 7
> ---
>
> Key: HIVE-13432
> URL: https://issues.apache.org/jira/browse/HIVE-13432
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Affects Versions: 1.2.1
> Environment: Hadoop 2.6.2+Hive 1.2.1
>Reporter: Qiuzhuang Lian
>Assignee: Matt McCline
>
> After initiating HIVE ACID ORC table compaction, the CompactorMR job throws 
> exception:
> Error: java.lang.ArrayIndexOutOfBoundsException: 7
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.(TreeReaderFactory.java:1968)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.(TreeReaderFactory.java:1969)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2368)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:69)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.(RecordReaderImpl.java:202)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:539)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.(OrcRawRecordMerger.java:183)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.(OrcRawRecordMerger.java:466)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1308)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:512)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:491)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> As a result, we see hadoop exception stack,
> 297 failed with state FAILED due to: Task failed 
> task_1458819387386_11297_m_08
> Job failed as tasks failed. failedMaps:1 failedReduces:0
> 2016-04-06 11:30:57,891 INFO  [dn209006-27]: mapreduce.Job 
> (Job.java:monitorAndPrintJob(1392)) - Counters: 14
>   Job Counters 
> Failed map tasks=16
> Killed map tasks=7
> Launched map tasks=23
> Other local map tasks=13
> Data-local map tasks=6
> Rack-local map tasks=4
> Total time spent by all maps in occupied slots (ms)=412592
> Total time spent by all reduces in occupied slots (ms)=0
> Total time spent by all map tasks (ms)=206296
> Total vcore-seconds taken by all map tasks=206296
> Total megabyte-seconds taken by all map tasks=422494208
>   Map-Reduce Framework
> CPU time spent (ms)=0
> Physical memory (bytes) snapshot=0
> Virtual memory (bytes) snapshot=0
> 2016-04-06 11:30:57,891 ERROR [dn209006-27]: compactor.Worker 
> (Worker.java:run(176)) - Caught exception while trying to compact 
> lqz.my_orc_acid_table.  Marking clean to avoid repeated failures, 
> java.io.IOException: Job failed!
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
>   at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:186)
>   at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:162)
> 2016-04-06 11:30:57,894 ERROR [dn209006-27]: txn.CompactionTxnHandler 
> (CompactionTxnHandler.java:markCleaned(327)) - Expected to remove at least 
> one row from completed_txn_components when marking compaction entry as clean!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12887) Handle ORC schema on read with fewer columns than file schema (after Schema Evolution changes)

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233968#comment-15233968
 ] 

Matt McCline commented on HIVE-12887:
-

Tried to port to branch-2.0 but got error 
org.apache.hadoop.hive.ql.metadata.HiveException: Changing SerDe (from 
OrcSerde) is not supported for table default.orc_partitioned. File format may 
be incompatible

Some other commit is needed, too.

> Handle ORC schema on read with fewer columns than file schema (after Schema 
> Evolution changes)
> --
>
> Key: HIVE-12887
> URL: https://issues.apache.org/jira/browse/HIVE-12887
> Project: Hive
>  Issue Type: Bug
>  Components: ORC
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-12887.01.patch, HIVE-12887.02.patch
>
>
> Exception caused by reading after column removal.
> {code}
> Caused by: java.lang.IndexOutOfBoundsException: Index: 10, Size: 10
>   at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>   at java.util.ArrayList.get(ArrayList.java:429)
>   at java.util.Collections$UnmodifiableList.get(Collections.java:1309)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.(TreeReaderFactory.java:2053)
>   at 
> org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2481)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.(RecordReaderImpl.java:216)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:598)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.(OrcRawRecordMerger.java:179)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.(OrcRawRecordMerger.java:222)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.(OrcRawRecordMerger.java:442)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1285)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1165)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:249)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10729) Query failed when select complex columns from joinned table (tez map join only)

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233964#comment-15233964
 ] 

Matt McCline commented on HIVE-10729:
-

Committed to branch-2.0 also.

> Query failed when select complex columns from joinned table (tez map join 
> only)
> ---
>
> Key: HIVE-10729
> URL: https://issues.apache.org/jira/browse/HIVE-10729
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.0
>Reporter: Selina Zhang
>Assignee: Matt McCline
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-10729.03.patch, HIVE-10729.04.patch, 
> HIVE-10729.05.patch, HIVE-10729.1.patch, HIVE-10729.2.patch
>
>
> When a map join happens, if the projection columns include complex data types, 
> the query will fail. 
> Steps to reproduce:
> {code:sql}
> hive> set hive.auto.convert.join;
> hive.auto.convert.join=true
> hive> desc foo;
> a array<int>
> hive> select * from foo;
> [1,2]
> hive> desc src_int;
> key   int
> value string
> hive> select * from src_int where key=2;
> 2    val_2
> hive> select * from foo join src_int src  on src.key = foo.a[1];
> {code}
> Query will fail with stack trace
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryArray cannot be cast to 
> [Ljava.lang.Object;
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector.getList(StandardListObjectInspector.java:111)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:314)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serializeField(LazySimpleSerDe.java:262)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.doSerialize(LazySimpleSerDe.java:246)
>   at 
> org.apache.hadoop.hive.serde2.AbstractEncodingAwareSerDe.serialize(AbstractEncodingAwareSerDe.java:50)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:692)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:644)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:676)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:754)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:386)
>   ... 23 more
> {noformat}
> Similar error when projection columns include a map:
> {code:sql}
> hive> CREATE TABLE test (a INT, b MAP<INT, STRING>) STORED AS ORC;
> hive> INSERT OVERWRITE TABLE test SELECT 1, MAP(1, "val_1", 2, "val_2") FROM 
> src LIMIT 1;
> hive> select * from src join test where src.key=test.a;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13111) Fix timestamp / interval_day_time wrong results with HIVE-9862

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13111:

Fix Version/s: 2.0.1

> Fix timestamp / interval_day_time wrong results with HIVE-9862 
> ---
>
> Key: HIVE-13111
> URL: https://issues.apache.org/jira/browse/HIVE-13111
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-13111.01.patch, HIVE-13111.02.patch, 
> HIVE-13111.03.patch, HIVE-13111.04.patch, HIVE-13111.05.patch, 
> HIVE-13111.06.patch, HIVE-13111.07.patch
>
>
> Fix timestamp / interval_day_time issues discovered when testing the 
> Vectorized Text patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10729) Query failed when select complex columns from joinned table (tez map join only)

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-10729:

Fix Version/s: 2.0.1

> Query failed when select complex columns from joinned table (tez map join 
> only)
> ---
>
> Key: HIVE-10729
> URL: https://issues.apache.org/jira/browse/HIVE-10729
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.0
>Reporter: Selina Zhang
>Assignee: Matt McCline
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-10729.03.patch, HIVE-10729.04.patch, 
> HIVE-10729.05.patch, HIVE-10729.1.patch, HIVE-10729.2.patch
>
>
> When map join happens, if projection columns include complex data types, 
> query will fail. 
> Steps to reproduce:
> {code:sql}
> hive> set hive.auto.convert.join;
> hive.auto.convert.join=true
> hive> desc foo;
> a array<int>
> hive> select * from foo;
> [1,2]
> hive> desc src_int;
> key   int
> value string
> hive> select * from src_int where key=2;
> 2    val_2
> hive> select * from foo join src_int src  on src.key = foo.a[1];
> {code}
> Query will fail with stack trace
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryArray cannot be cast to 
> [Ljava.lang.Object;
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector.getList(StandardListObjectInspector.java:111)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:314)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serializeField(LazySimpleSerDe.java:262)
>   at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.doSerialize(LazySimpleSerDe.java:246)
>   at 
> org.apache.hadoop.hive.serde2.AbstractEncodingAwareSerDe.serialize(AbstractEncodingAwareSerDe.java:50)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:692)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:644)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:676)
>   at 
> org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:754)
>   at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.process(MapJoinOperator.java:386)
>   ... 23 more
> {noformat}
> Similar error when projection columns include a map:
> {code:sql}
> hive> CREATE TABLE test (a INT, b MAP<INT, STRING>) STORED AS ORC;
> hive> INSERT OVERWRITE TABLE test SELECT 1, MAP(1, "val_1", 2, "val_2") FROM 
> src LIMIT 1;
> hive> select * from src join test where src.key=test.a;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13111) Fix timestamp / interval_day_time wrong results with HIVE-9862

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233962#comment-15233962
 ] 

Matt McCline commented on HIVE-13111:
-

Committed to branch-2.0 also.

> Fix timestamp / interval_day_time wrong results with HIVE-9862 
> ---
>
> Key: HIVE-13111
> URL: https://issues.apache.org/jira/browse/HIVE-13111
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-13111.01.patch, HIVE-13111.02.patch, 
> HIVE-13111.03.patch, HIVE-13111.04.patch, HIVE-13111.05.patch, 
> HIVE-13111.06.patch, HIVE-13111.07.patch
>
>
> Fix timestamp / interval_day_time issues discovered when testing the 
> Vectorized Text patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13296) Add vectorized Q test with complex types showing count(*) etc work correctly

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13296:

Fix Version/s: 2.0.1

> Add vectorized Q test with complex types showing count(*) etc work correctly
> 
>
> Key: HIVE-13296
> URL: https://issues.apache.org/jira/browse/HIVE-13296
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-13296.01.patch
>
>
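
As a rough illustration of what such a vectorized q test could cover (the table name, 
columns, and queries below are hypothetical, not the committed test):

{code:sql}
CREATE TABLE complex_t (id INT, tags ARRAY<STRING>, props MAP<STRING,STRING>) STORED AS ORC;
SET hive.vectorized.execution.enabled = true;
EXPLAIN SELECT COUNT(*) FROM complex_t;
SELECT COUNT(*) FROM complex_t;
{code}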




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13310) Vectorized Projection Comparison Number Column to Scalar broken for !noNulls and selectedInUse

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233959#comment-15233959
 ] 

Matt McCline commented on HIVE-13310:
-

Committed to branch-2.0 also.

> Vectorized Projection Comparison Number Column to Scalar broken for !noNulls 
> and selectedInUse
> --
>
> Key: HIVE-13310
> URL: https://issues.apache.org/jira/browse/HIVE-13310
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-13310.01.patch, HIVE-13310.02.patch
>
>
> LongColEqualLongScalar.java
> LongColGreaterEqualLongScalar.java
> LongColGreaterLongScalar.java
> LongColLessEqualLongScalar.java
> LongColLessLongScalar.java
> LongColNotEqualLongScalar.java
> LongScalarEqualLongColumn.java
> LongScalarGreaterEqualLongColumn.java
> LongScalarGreaterLongColumn.java
> LongScalarLessEqualLongColumn.java
> LongScalarLessLongColumn.java
> LongScalarNotEqualLongColumn.java
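
The listed classes all share the same projection pattern, and the summary calls out the 
!noNulls plus selectedInUse combination. Below is a simplified, illustrative sketch of 
that handling (names and signatures are not the actual template-generated Hive code):

{code:java}
// Simplified sketch of the null/selection handling a vectorized
// column-to-scalar comparison must implement.
public final class LongColEqualLongScalarSketch {
  public void evaluate(long[] vector, boolean[] isNull, boolean noNulls,
                       boolean selectedInUse, int[] selected, int n,
                       long scalar, long[] output, boolean[] outputIsNull) {
    if (noNulls) {
      if (selectedInUse) {
        for (int j = 0; j != n; j++) {
          int i = selected[j];                     // visit only the selected rows
          output[i] = vector[i] == scalar ? 1 : 0;
        }
      } else {
        for (int i = 0; i != n; i++) {
          output[i] = vector[i] == scalar ? 1 : 0;
        }
      }
    } else if (selectedInUse) {
      // The !noNulls && selectedInUse combination from the summary: nulls must be
      // propagated per selected row, not assumed to be dense from 0..n.
      for (int j = 0; j != n; j++) {
        int i = selected[j];
        outputIsNull[i] = isNull[i];
        output[i] = vector[i] == scalar ? 1 : 0;
      }
    } else {
      for (int i = 0; i != n; i++) {
        outputIsNull[i] = isNull[i];
        output[i] = vector[i] == scalar ? 1 : 0;
      }
    }
  }
}
{code}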



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13296) Add vectorized Q test with complex types showing count(*) etc work correctly

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233961#comment-15233961
 ] 

Matt McCline commented on HIVE-13296:
-

Committed to branch-2.0 also.

> Add vectorized Q test with complex types showing count(*) etc work correctly
> 
>
> Key: HIVE-13296
> URL: https://issues.apache.org/jira/browse/HIVE-13296
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-13296.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13313) TABLESAMPLE ROWS feature broken for Vectorization

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13313:

Fix Version/s: 2.0.1

> TABLESAMPLE ROWS feature broken for Vectorization
> -
>
> Key: HIVE-13313
> URL: https://issues.apache.org/jira/browse/HIVE-13313
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-13313.01.patch
>
>
> For vectorization, the ROWS clause is ignored, causing many more rows than 
> requested to be returned.
> SELECT * FROM source TABLESAMPLE(10 ROWS);
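
A quick way to see the effect is to compare the output with vectorization toggled; this 
is an illustrative sketch reusing the source table from the description above:

{code:sql}
SET hive.vectorized.execution.enabled = false;
SELECT * FROM source TABLESAMPLE(10 ROWS);   -- returns 10 rows
SET hive.vectorized.execution.enabled = true;
SELECT * FROM source TABLESAMPLE(10 ROWS);   -- returned far more than 10 rows before the fix
{code}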



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13310) Vectorized Projection Comparison Number Column to Scalar broken for !noNulls and selectedInUse

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13310:

Fix Version/s: 2.0.1

> Vectorized Projection Comparison Number Column to Scalar broken for !noNulls 
> and selectedInUse
> --
>
> Key: HIVE-13310
> URL: https://issues.apache.org/jira/browse/HIVE-13310
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-13310.01.patch, HIVE-13310.02.patch
>
>
> LongColEqualLongScalar.java
> LongColGreaterEqualLongScalar.java
> LongColGreaterLongScalar.java
> LongColLessEqualLongScalar.java
> LongColLessLongScalar.java
> LongColNotEqualLongScalar.java
> LongScalarEqualLongColumn.java
> LongScalarGreaterEqualLongColumn.java
> LongScalarGreaterLongColumn.java
> LongScalarLessEqualLongColumn.java
> LongScalarLessLongColumn.java
> LongScalarNotEqualLongColumn.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13313) TABLESAMPLE ROWS feature broken for Vectorization

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233958#comment-15233958
 ] 

Matt McCline commented on HIVE-13313:
-

Committed to branch-2.0 also.

> TABLESAMPLE ROWS feature broken for Vectorization
> -
>
> Key: HIVE-13313
> URL: https://issues.apache.org/jira/browse/HIVE-13313
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-13313.01.patch
>
>
> For vectorization, the ROWS clause is ignored, causing many more rows than 
> requested to be returned.
> SELECT * FROM source TABLESAMPLE(10 ROWS);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13263) Vectorization: Unable to vectorize regexp_extract/regexp_replace " Udf: GenericUDFBridge, is not supported"

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233957#comment-15233957
 ] 

Matt McCline commented on HIVE-13263:
-

Committed to branch-2.0 also.

> Vectorization: Unable to vectorize regexp_extract/regexp_replace " Udf: 
> GenericUDFBridge, is not supported"
> ---
>
> Key: HIVE-13263
> URL: https://issues.apache.org/jira/browse/HIVE-13263
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-13263.01.patch, HIVE-13263.02.patch
>
>
> Add regexp_extract to the UDFs we bridge to.
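
For context, a query of this shape (table and column names are illustrative only) would 
not vectorize before this fix, per the error quoted in the summary:

{code:sql}
SET hive.vectorized.execution.enabled = true;
-- regexp_extract is a bridged UDF; the vectorizer previously rejected it
SELECT regexp_extract(url, 'id=([0-9]+)', 1) FROM weblogs;
{code}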



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13263) Vectorization: Unable to vectorize regexp_extract/regexp_replace " Udf: GenericUDFBridge, is not supported"

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13263:

Fix Version/s: 2.0.1

> Vectorization: Unable to vectorize regexp_extract/regexp_replace " Udf: 
> GenericUDFBridge, is not supported"
> ---
>
> Key: HIVE-13263
> URL: https://issues.apache.org/jira/browse/HIVE-13263
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 1.3.0, 2.1.0, 2.0.1
>
> Attachments: HIVE-13263.01.patch, HIVE-13263.02.patch
>
>
> Add regexp_extract to the UDFs we bridge to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233956#comment-15233956
 ] 

Matt McCline commented on HIVE-9862:


Committed to branch-2.0 also.

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch, HIVE-9862.09.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('9999-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> 9999-12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9862) Vectorized execution corrupts timestamp values

2016-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-9862:
---
Fix Version/s: 2.0.1

> Vectorized execution corrupts timestamp values
> --
>
> Key: HIVE-9862
> URL: https://issues.apache.org/jira/browse/HIVE-9862
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 1.0.0
>Reporter: Nathan Howell
>Assignee: Matt McCline
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-9862.01.patch, HIVE-9862.02.patch, 
> HIVE-9862.03.patch, HIVE-9862.04.patch, HIVE-9862.05.patch, 
> HIVE-9862.06.patch, HIVE-9862.07.patch, HIVE-9862.08.patch, HIVE-9862.09.patch
>
>
> Timestamps in the future (year 2250?) and before ~1700 are silently corrupted 
> in vectorized execution mode. Simple repro:
> {code}
> hive> DROP TABLE IF EXISTS test;
> hive> CREATE TABLE test(ts TIMESTAMP) STORED AS ORC;
> hive> INSERT INTO TABLE test VALUES ('9999-12-31 23:59:59');
> hive> SET hive.vectorized.execution.enabled = false;
> hive> SELECT MAX(ts) FROM test;
> 9999-12-31 23:59:59
> hive> SET hive.vectorized.execution.enabled = true;
> hive> SELECT MAX(ts) FROM test;
> 1816-03-30 05:56:07.066277376
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12894) Detect whether ORC is reading from ACID table correctly for Schema Evolution

2016-04-10 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233954#comment-15233954
 ] 

Matt McCline commented on HIVE-12894:
-

Committed to branch-2.0 also.

> Detect whether ORC is reading from ACID table correctly for Schema Evolution
> 
>
> Key: HIVE-12894
> URL: https://issues.apache.org/jira/browse/HIVE-12894
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, ORC
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 2.1.0, 2.0.1
>
> Attachments: HIVE-12894.01.patch, HIVE-12894.02.patch, 
> HIVE-12894.03.patch
>
>
> Set a configuration variable with the 'transactional' property to indicate 
> that the table is ACID.
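
For reference, an ACID table here means one created with the 'transactional' table 
property; a minimal example (table name and columns are illustrative):

{code:sql}
CREATE TABLE acid_example (id INT, msg STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
{code}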



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13408) Issue appending HIVE_QUERY_ID without checking if the prefix already exists

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233934#comment-15233934
 ] 

Hive QA commented on HIVE-13408:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797185/HIVE-13408.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7533/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7533/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7533/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-7533/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 0ebd4d1 HIVE-13434 : BaseSemanticAnalyzer.unescapeSQLString 
doesn't unescape \u style character literals. (Kousuke Saruta via Ashutosh 
Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 0ebd4d1 HIVE-13434 : BaseSemanticAnalyzer.unescapeSQLString 
doesn't unescape \u style character literals. (Kousuke Saruta via Ashutosh 
Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797185 - PreCommit-HIVE-TRUNK-Build

> Issue appending HIVE_QUERY_ID without checking if the prefix already exists
> ---
>
> Key: HIVE-13408
> URL: https://issues.apache.org/jira/browse/HIVE-13408
> Project: Hive
>  Issue Type: Bug
>  Components: Shims
>Affects Versions: 2.0.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-13408.1.patch, HIVE-13408.2.patch
>
>
> {code}
> We are resetting the hadoop caller context to HIVE_QUERY_ID:HIVE_QUERY_ID:
> {code}
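
One way to avoid the doubled prefix is to check for it before appending. This is only a 
sketch; the class, constant, and method names below are illustrative, not the actual 
Hive shim code:

{code:java}
public final class CallerContextUtil {
  private static final String HIVE_QUERY_ID = "HIVE_QUERY_ID";

  // Return a caller context tagged exactly once with the query id.
  static String withQueryId(String existingContext, String queryId) {
    // Already tagged: return it unchanged instead of producing
    // HIVE_QUERY_ID:HIVE_QUERY_ID:... by appending blindly.
    if (existingContext != null && existingContext.startsWith(HIVE_QUERY_ID + ":")) {
      return existingContext;
    }
    return HIVE_QUERY_ID + ":" + queryId;
  }
}
{code}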



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13342) Improve logging in llap decider for llap

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233930#comment-15233930
 ] 

Hive QA commented on HIVE-13342:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797592/HIVE-13342.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7532/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7532/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7532/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-7532/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   226f4d6..6ceda3d  branch-2.0 -> origin/branch-2.0
+ git reset --hard HEAD
HEAD is now at 0ebd4d1 HIVE-13434 : BaseSemanticAnalyzer.unescapeSQLString 
doesn't unescape \u style character literals. (Kousuke Saruta via Ashutosh 
Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 0ebd4d1 HIVE-13434 : BaseSemanticAnalyzer.unescapeSQLString 
doesn't unescape \u style character literals. (Kousuke Saruta via Ashutosh 
Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797592 - PreCommit-HIVE-TRUNK-Build

> Improve logging in llap decider for llap
> 
>
> Key: HIVE-13342
> URL: https://issues.apache.org/jira/browse/HIVE-13342
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-13342.1.patch, HIVE-13342.2.patch
>
>
> Currently we do not log our decisions with respect to llap: are we running 
> everything in llap mode or only parts of the plan? We need more logging. 
> Also, if llap mode is "all" but for some reason the work cannot run in llap 
> mode, we should fail and throw an exception advising the user to change the 
> mode to "auto".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13287) Add logic to estimate stats for IN operator

2016-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233923#comment-15233923
 ] 

Hive QA commented on HIVE-13287:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797587/HIVE-13287.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 46 failed/errored test(s), 9943 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_union_type_chk
org.apache.hadoop.hive.cli.TestPerfCliDriver.testPerfCliDriver_query17
org.apache.hadoop.hive.cli.TestPerfCliDriver.testPerfCliDriver_query29
org.apache.hadoop.hive.cli.TestPerfCliDriver.testPerfCliDriver_query46
org.apache.hadoop.hive.cli.TestPerfCliDriver.testPerfCliDriver_query89
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_multi_single_reducer3
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testListener
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.hbase.TestHBaseImport.org.apache.hadoop.hive.metastore.hbase.TestHBaseImport
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropView
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.api.repl.commands.TestCommands.org.apache.hive.hcatalog.api.repl.commands.TestCommands
org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles
org.apache.hive.spark.client.TestSparkClient.testCounters
org.apache.hive.spark.client.TestSparkClient.testErrorJob
org.apache.hive.spark.client.TestSparkClient.testJobSubmission
org.apache.hive.spark.client.TestSparkClient.testMetricsCollection
org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7531/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7531/console
Test logs: