[jira] [Updated] (HIVE-10954) AggregateStatsCache duplicated in HBaseMetastore

2015-06-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-10954:
--
Attachment: HIVE-10954.patch

 AggregateStatsCache duplicated in HBaseMetastore
 

 Key: HIVE-10954
 URL: https://issues.apache.org/jira/browse/HIVE-10954
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: hbase-metastore-branch
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10954.patch


 With the latest merge of trunk into hbase-metastore, the hbase branch now 
 includes two copies of AggregateStatsCache. This is because the class was moved 
 from hbase to the metastore in general. We need to remove the hbase-specific 
 copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10906) Value based UDAF function without orderby expression throws NPE

2015-06-08 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577551#comment-14577551
 ] 

Alan Gates commented on HIVE-10906:
---

This seems like it should be ported to branch-1, since it's a crash issue.  If 
you agree, I'll port it.

 Value based UDAF function without orderby expression throws NPE
 ---

 Key: HIVE-10906
 URL: https://issues.apache.org/jira/browse/HIVE-10906
 Project: Hive
  Issue Type: Sub-task
  Components: PTF-Windowing
Reporter: Aihua Xu
Assignee: Aihua Xu
 Fix For: 2.0.0

 Attachments: HIVE-10906.2.patch, HIVE-10906.patch


 The following query throws an NPE.
 {noformat}
 select key, value, min(value) over (partition by key range between unbounded 
 preceding and current row) from small;
 FAILED: NullPointerException null
 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver 
 (SessionState.java:printError(957)) - FAILED: NullPointerException null
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293)
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281)
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 {noformat}
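The trace shows the NPE arising inside WindowingSpec.validateValueBoundary while validating the value-based (RANGE) frame. A minimal sketch of the kind of null guard that would turn the crash into a clear error (the method and message here are illustrative assumptions, not the actual Hive code or the attached patch):

```java
import java.util.List;

// Sketch: validate the ORDER BY expressions backing a value-based (RANGE)
// window frame. Without the null/empty guard, dereferencing the list NPEs,
// matching the reported stack trace.
class WindowValidator {
    // Returns an error message, or null when the boundary is valid.
    static String validateValueBoundary(List<String> orderByExprs) {
        if (orderByExprs == null || orderByExprs.isEmpty()) {
            return "Range-based window frame requires an ORDER BY expression";
        }
        if (orderByExprs.size() > 1) {
            return "Range-based window frame supports only one ORDER BY expression";
        }
        return null; // valid boundary
    }
}
```

With this shape of check, the failing query above would report a SemanticException-style message instead of `FAILED: NullPointerException null`.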



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10866) Throw error when client try to insert into bucketed table

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577681#comment-14577681
 ] 

Hive QA commented on HIVE-10866:




{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738389/HIVE-10866.1.patch

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 9004 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into_with_schema2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4212/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4212/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4212/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738389 - PreCommit-HIVE-TRUNK-Build

 Throw error when client try to insert into bucketed table
 -

 Key: HIVE-10866
 URL: https://issues.apache.org/jira/browse/HIVE-10866
 Project: Hive
  Issue Type: Improvement
Affects Versions: 1.2.0, 1.3.0
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-10866.1.patch


 Currently, Hive does not support appends (insert into) to bucketed tables; see 
 open jira HIVE-3608. When inserting into such a table, the data will be 
 corrupted and no longer fit for bucketmapjoin. 
 We need to find a way to prevent clients from inserting into such tables.
 Reproduce:
 {noformat}
 CREATE TABLE IF NOT EXISTS buckettestoutput1( 
 data string 
 )CLUSTERED BY(data) 
 INTO 2 BUCKETS 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 CREATE TABLE IF NOT EXISTS buckettestoutput2( 
 data string 
 )CLUSTERED BY(data) 
 INTO 2 BUCKETS 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 set hive.enforce.bucketing = true; 
 set hive.enforce.sorting=true;
 insert into table buckettestoutput1 select code from sample_07 where 
 total_emp  134354250 limit 10;
 After this first insert, I did:
 set hive.auto.convert.sortmerge.join=true; 
 set hive.optimize.bucketmapjoin = true; 
 set hive.optimize.bucketmapjoin.sortedmerge = true; 
 set hive.auto.convert.sortmerge.join.noconditionaltask=true;
 0: jdbc:hive2://localhost:1 select * from buckettestoutput1 a join 
 buckettestoutput2 b on (a.data=b.data);
 +---+---+
 | data  | data  |
 +---+---+
 +---+---+
 So select works fine. 
 Second insert:
 0: jdbc:hive2://localhost:1 insert into table buckettestoutput1 select 
 code from sample_07 where total_emp = 134354250 limit 10;
 No rows affected (61.235 seconds)
 Then select:
 0: jdbc:hive2://localhost:1 select * from buckettestoutput1 a join 
 buckettestoutput2 b on (a.data=b.data);
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10141]: Bucketed table metadata is not correct. Fix the metadata or don't use 
 bucketed mapjoin, by setting hive.enforce.bucketmapjoin to false. The number 
 of buckets for table buckettestoutput1 is 2, whereas the number of files is 4 
 (state=42000,code=10141)
 0: jdbc:hive2://localhost:1
 {noformat}
 Inserting into an empty table or partition is fine, but after inserting into a 
 non-empty one (the second insert in the reproduction above), the bucketmapjoin 
 throws an error. We should not let the second insert succeed. 
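The error text above states the failing invariant directly: 2 declared buckets versus 4 files on disk. A rough sketch of that consistency condition (illustrative names, not Hive's actual implementation of error 10141):

```java
// Sketch of the bucket-consistency check behind error 10141: a bucketed
// table is only safe for bucket map join when the file count matches the
// declared bucket count. Each INSERT INTO appends a fresh set of bucket
// files, so after a second insert a 2-bucket table holds 4 files and the
// check fails -- hence the proposal to reject the second insert up front.
class BucketCheck {
    static boolean isConsistent(int declaredBuckets, int actualFiles) {
        return actualFiles == declaredBuckets;
    }
}
```

Failing the insert at compile time, rather than failing the later join, keeps the table's metadata and layout in agreement.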



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10533) CBO (Calcite Return Path): Join to MultiJoin support for outer joins

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577541#comment-14577541
 ] 

Hive QA commented on HIVE-10533:




{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738373/HIVE-10533.02.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9004 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_transform_acid
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4211/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4211/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4211/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738373 - PreCommit-HIVE-TRUNK-Build

 CBO (Calcite Return Path): Join to MultiJoin support for outer joins
 

 Key: HIVE-10533
 URL: https://issues.apache.org/jira/browse/HIVE-10533
 Project: Hive
  Issue Type: Sub-task
  Components: CBO
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez
 Attachments: HIVE-10533.01.patch, HIVE-10533.02.patch, 
 HIVE-10533.patch


 CBO return path: auto_join7.q can be used to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10966) direct SQL for stats has a cast exception on some databases

2015-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-10966:

Attachment: HIVE-10966.patch

[~sushanth] can you take a look? Both for mainline, and 1.2?

 direct SQL for stats has a cast exception on some databases
 ---

 Key: HIVE-10966
 URL: https://issues.apache.org/jira/browse/HIVE-10966
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10966.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10960) LLAP: Allow finer control of startup options for LLAP

2015-06-08 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-10960:
---
Attachment: HIVE-10960.1.patch

 LLAP: Allow finer control of startup options for LLAP
 -

 Key: HIVE-10960
 URL: https://issues.apache.org/jira/browse/HIVE-10960
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Gopal V
Assignee: Gopal V
 Fix For: llap

 Attachments: HIVE-10960.1.patch


 Allow the customization of the Slider settings during startup.
 The current steps involve hand-editing JSON files after creating the Slider 
 package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10963) Hive throws NPE rather than meaningful error message when window is missing

2015-06-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-10963:

Description: 
{{select sum(salary) over w1 from emp;}} throws an NPE rather than a meaningful 
error message about the missing window definition.

Also, once the NPE issue is fixed, the error message should give the window 
name rather than the class name.
{noformat}
org.apache.hadoop.hive.ql.parse.SemanticException: Window Spec 
org.apache.hadoop.hive.ql.parse.WindowingSpec$WindowSpec@7954e1de refers to an 
unknown source
{noformat}

  was:
{{select sum(salary) over w1 from emp;}} throws NPE rather than meaningful 
error message like missing window.




 Hive throws NPE rather than meaningful error message when window is missing
 ---

 Key: HIVE-10963
 URL: https://issues.apache.org/jira/browse/HIVE-10963
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 1.3.0
Reporter: Aihua Xu
Assignee: Aihua Xu

 {{select sum(salary) over w1 from emp;}} throws an NPE rather than a meaningful 
 error message about the missing window definition.
 Also, once the NPE issue is fixed, the error message should give the window 
 name rather than the class name.
 {noformat}
 org.apache.hadoop.hive.ql.parse.SemanticException: Window Spec 
 org.apache.hadoop.hive.ql.parse.WindowingSpec$WindowSpec@7954e1de refers to 
 an unknown source
 {noformat}
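The unhelpful `WindowSpec@7954e1de` in the message above is just the default `Object.toString` (class name plus identity hash). A sketch of the kind of override that would surface the window name instead (an illustrative stand-in class, not the actual `org.apache.hadoop.hive.ql.parse.WindowingSpec$WindowSpec`):

```java
// Sketch: carry the source window name in the spec and use it in
// toString(), turning "WindowSpec@7954e1de refers to an unknown source"
// into "Window w1 refers to an unknown source".
class WindowSpec {
    private final String sourceId; // e.g. "w1"; null when the window is unnamed

    WindowSpec(String sourceId) {
        this.sourceId = sourceId;
    }

    @Override
    public String toString() {
        return sourceId != null ? "Window " + sourceId : "unnamed window";
    }
}
```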



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10963) Hive throws NPE rather than meaningful error message when window is missing

2015-06-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-10963:

Attachment: HIVE-10963.patch

 Hive throws NPE rather than meaningful error message when window is missing
 ---

 Key: HIVE-10963
 URL: https://issues.apache.org/jira/browse/HIVE-10963
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 1.3.0
Reporter: Aihua Xu
Assignee: Aihua Xu
 Attachments: HIVE-10963.patch


 {{select sum(salary) over w1 from emp;}} throws an NPE rather than a meaningful 
 error message about the missing window definition.
 Also, once the NPE issue is fixed, the error message should give the window 
 name rather than the class name.
 {noformat}
 org.apache.hadoop.hive.ql.parse.SemanticException: Window Spec 
 org.apache.hadoop.hive.ql.parse.WindowingSpec$WindowSpec@7954e1de refers to 
 an unknown source
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10956) HS2 leaks HMS connections

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577679#comment-14577679
 ] 

Sergey Shelukhin commented on HIVE-10956:
-

Couldn't tid be reused? Overall I don't quite understand the mechanism of the 
leak - don't threadlocals for dead threads get GCed?

 HS2 leaks HMS connections
 -

 Key: HIVE-10956
 URL: https://issues.apache.org/jira/browse/HIVE-10956
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.3.0

 Attachments: HIVE-10956.1.patch


 HS2 uses a threadlocal to cache the HMS client in the Hive class. When the 
 thread dies, the HMS client is not closed, so the connection to the HMS is leaked.
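The leak mechanism described above can be modeled in a few lines (a sketch with stand-in classes — `Client`, `openConnections` — not actual Hive or Thrift code): a ThreadLocal lazily creates a closable client per thread, and unless the thread explicitly closes it before exiting, the remote end keeps seeing an open connection even after the thread is gone.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal model of the leak: a ThreadLocal caches a per-thread "client"
// that owns an external connection. The ThreadLocal entry for a dead
// thread may eventually be GCed, but close() is never called, so the
// server-side connection count never goes back down.
class LeakDemo {
    static final AtomicInteger openConnections = new AtomicInteger();

    static class Client {
        Client() { openConnections.incrementAndGet(); }
        void close() { openConnections.decrementAndGet(); }
    }

    static final ThreadLocal<Client> cache = ThreadLocal.withInitial(Client::new);

    // Worker closes its cached client before exiting: no leak.
    static int runWorkerWithClose() {
        Thread t = new Thread(() -> {
            Client c = cache.get();
            try { /* use c */ } finally { c.close(); cache.remove(); }
        });
        t.start();
        join(t);
        return openConnections.get();
    }

    // Worker exits without closing: the connection stays open.
    static int runWorkerWithoutClose() {
        Thread t = new Thread(() -> cache.get()); // creates, never closes
        t.start();
        join(t);
        return openConnections.get();
    }

    private static void join(Thread t) {
        try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }
}
```

This is also why the later comments debate finalizers versus reimplementing the threadlocal: the question is who calls `close()` once the owning thread can no longer do it.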



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-10956) HS2 leaks HMS connections

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577679#comment-14577679
 ] 

Sergey Shelukhin edited comment on HIVE-10956 at 6/8/15 7:20 PM:
-

Couldn't tid be reused? Overall I don't quite understand the mechanism of the 
leak - don't threadlocals for dead threads get GCed? Maybe a finalizer can be 
added to take care of connections in that case.


was (Author: sershe):
Couldn't tid be reused? Overall I don't quite understand the mechanism of the 
leak - don't threadlocals for dead threads get GCed?

 HS2 leaks HMS connections
 -

 Key: HIVE-10956
 URL: https://issues.apache.org/jira/browse/HIVE-10956
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.3.0

 Attachments: HIVE-10956.1.patch


 HS2 uses a threadlocal to cache the HMS client in the Hive class. When the 
 thread dies, the HMS client is not closed, so the connection to the HMS is leaked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-10960) LLAP: Allow finer control of startup options for LLAP

2015-06-08 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V resolved HIVE-10960.

  Resolution: Fixed
Release Note: Committed to branch

 LLAP: Allow finer control of startup options for LLAP
 -

 Key: HIVE-10960
 URL: https://issues.apache.org/jira/browse/HIVE-10960
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Gopal V
Assignee: Gopal V
 Fix For: llap

 Attachments: HIVE-10960.1.patch


 Allow the customization of the Slider settings during startup.
 The current steps involve hand-editing JSON files after creating the Slider 
 package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10964) patch testing infrastructure needs to support branch-1

2015-06-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-10964:
--
Attachment: HIVE-10964.patch

[~spena] I think this is all that needs to change on the script side.

 patch testing infrastructure needs to support branch-1
 --

 Key: HIVE-10964
 URL: https://issues.apache.org/jira/browse/HIVE-10964
 Project: Hive
  Issue Type: Task
  Components: Testing Infrastructure
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-10964.patch


 The infrastructure supports testing on different branches.  
 We need to add branch-1 as an option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10938) All the analyze table statements are failing on encryption testing framework

2015-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577609#comment-14577609
 ] 

Pengcheng Xiong commented on HIVE-10938:


[~spena], yes, I can still reproduce this on the current master. You can just 
save the commands in a q file. If you run it with TestCliDriver, you will get
{code}
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: -1
  Processor Tree:
TableScan
  alias: unencryptedtable
  Statistics: Num rows: 2 Data size: 22 Basic stats: COMPLETE Column 
stats: NONE
  Select Operator
expressions: key (type: string), value (type: string)
outputColumnNames: _col0, _col1
Statistics: Num rows: 2 Data size: 22 Basic stats: COMPLETE Column 
stats: NONE
ListSink
{code}
This is correct.
However, if you run it with TestEncryptedHDFSCliDriver, you will get
{code}
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: -1
  Processor Tree:
TableScan
  alias: unencryptedtable
  Statistics: Num rows: 1 Data size: 24 Basic stats: COMPLETE Column 
stats: NONE
  Select Operator
expressions: key (type: string), value (type: string)
outputColumnNames: _col0, _col1
Statistics: Num rows: 1 Data size: 24 Basic stats: COMPLETE Column 
stats: NONE
ListSink
{code}
This is not correct. If you look into the details, you will find that the 
analyze table statement never works. Thanks.

 All the analyze table statements are failing on encryption testing framework
 

 Key: HIVE-10938
 URL: https://issues.apache.org/jira/browse/HIVE-10938
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong

 To reproduce, in a recent q-test environment, create a q file:
 {code}
 drop table IF EXISTS unencryptedTable;
 create table unencryptedTable(key string, value string);
 insert into table unencryptedTable values
 ('501', 'val_501'),
 ('502', 'val_502');
 analyze table unencryptedTable compute statistics;
 explain select * from unencryptedTable;
 {code}
 Then run with TestEncryptedHDFSCliDriver.
 The analyze table statement will generate a MapRed task and a StatsTask. The 
 MapRed task fails silently without generating the stats (e.g., numRows for the 
 table), so the following StatsTask cannot read any results. This fails not 
 only for encrypted tables but also for non-encrypted ones, as shown above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10954) AggregateStatsCache duplicated in HBaseMetastore

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577687#comment-14577687
 ] 

Hive QA commented on HIVE-10954:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738395/HIVE-10954.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4213/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4213/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4213/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-4213/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   8256e59..785ec8e  llap   - origin/llap
+ git reset --hard HEAD
HEAD is now at 4d59230 HIVE-10929: In Tez mode,dynamic partitioning query with 
union all fails at moveTask,Invalid partition key  values (Vikram Dixit K 
reviewed by Gunther Hagleitner)
+ git clean -f -d
Removing ql/src/test/queries/clientnegative/insertinto_nonemptybucket.q
Removing ql/src/test/results/clientnegative/insertinto_nonemptybucket.q.out
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 4d59230 HIVE-10929: In Tez mode,dynamic partitioning query with 
union all fails at moveTask,Invalid partition key  values (Vikram Dixit K 
reviewed by Gunther Hagleitner)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738395 - PreCommit-HIVE-TRUNK-Build

 AggregateStatsCache duplicated in HBaseMetastore
 

 Key: HIVE-10954
 URL: https://issues.apache.org/jira/browse/HIVE-10954
 Project: Hive
  Issue Type: Task
  Components: Metastore
Affects Versions: hbase-metastore-branch
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10954.patch


 With the latest merge of trunk into hbase-metastore, the hbase branch now 
 includes two copies of AggregateStatsCache. This is because the class was moved 
 from hbase to the metastore in general. We need to remove the hbase-specific 
 copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-10965:

Attachment: HIVE-10965.patch

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-10965:

Fix Version/s: 2.0.0
   1.2.1
   1.3.0

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10964) patch testing infrastructure needs to support branch-1

2015-06-08 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577706#comment-14577706
 ] 

Sergio Peña commented on HIVE-10964:


+1

I created the new job at 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BRANCH_1-Build/
I added the branch1-mr2.properties and branch1-mr1.properties files to the 
Jenkins instance as well.

I am not sure if I need to restart the Jenkins server so that it detects the 
new properties files. I will wait until the other tests finish, and then 
restart it.

 patch testing infrastructure needs to support branch-1
 --

 Key: HIVE-10964
 URL: https://issues.apache.org/jira/browse/HIVE-10964
 Project: Hive
  Issue Type: Task
  Components: Testing Infrastructure
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-10964.patch


 The infrastructure supports testing on different branches.  
 We need to add branch-1 as an option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10938) All the analyze table statements are failing on encryption testing framework

2015-06-08 Thread Sergio Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577504#comment-14577504
 ] 

Sergio Peña commented on HIVE-10938:


I tried to reproduce this, but TestEncryptedHDFSCliDriver seems to get stuck in 
a loop or something; it never finishes.
Have you had this problem with the current master branch?

 All the analyze table statements are failing on encryption testing framework
 

 Key: HIVE-10938
 URL: https://issues.apache.org/jira/browse/HIVE-10938
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong

 To reproduce, in a recent q-test environment, create a q file:
 {code}
 drop table IF EXISTS unencryptedTable;
 create table unencryptedTable(key string, value string);
 insert into table unencryptedTable values
 ('501', 'val_501'),
 ('502', 'val_502');
 analyze table unencryptedTable compute statistics;
 explain select * from unencryptedTable;
 {code}
 Then run with TestEncryptedHDFSCliDriver.
 The analyze table statement will generate a MapRed task and a StatsTask. The 
 MapRed task fails silently without generating the stats (e.g., numRows for the 
 table), so the following StatsTask cannot read any results. This fails not 
 only for encrypted tables but also for non-encrypted ones, as shown above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10907) Hive on Tez: Classcast exception in some cases with SMB joins

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577506#comment-14577506
 ] 

Sergey Shelukhin commented on HIVE-10907:
-

+1. [~sushanth] is this ok for 1.2?

 Hive on Tez: Classcast exception in some cases with SMB joins
 -

 Key: HIVE-10907
 URL: https://issues.apache.org/jira/browse/HIVE-10907
 Project: Hive
  Issue Type: Bug
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-10907.1.patch, HIVE-10907.2.patch, 
 HIVE-10907.3.patch, HIVE-10907.4.patch


 In cases where there is a mix of map-side work and reduce-side work, we get a 
 ClassCastException because the code assumes homogeneity. We need to fix this 
 properly; for now this is a workaround.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10938) All the analyze table statements are failing on encryption testing framework

2015-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577613#comment-14577613
 ] 

Pengcheng Xiong commented on HIVE-10938:


[~spena], more information: I actually deployed a real encrypted environment 
myself over the weekend. In the real environment, the analyze table statement 
works. Thus, the problem is related to the TestEncryptedHDFSCliDriver 
framework. Thanks.

 All the analyze table statements are failing on encryption testing framework
 

 Key: HIVE-10938
 URL: https://issues.apache.org/jira/browse/HIVE-10938
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong

 To reproduce, in a recent q-test environment, create a q file:
 {code}
 drop table IF EXISTS unencryptedTable;
 create table unencryptedTable(key string, value string);
 insert into table unencryptedTable values
 ('501', 'val_501'),
 ('502', 'val_502');
 analyze table unencryptedTable compute statistics;
 explain select * from unencryptedTable;
 {code}
 Then run with TestEncryptedHDFSCliDriver.
 The analyze table statement will generate a MapRed task and a StatsTask. The 
 MapRed task fails silently without generating the stats (e.g., numRows for the 
 table), so the following StatsTask cannot read any results. This fails not 
 only for encrypted tables but also for non-encrypted ones, as shown above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10937) LLAP: make ObjectCache for plans work properly in the daemon

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577696#comment-14577696
 ] 

Sergey Shelukhin commented on HIVE-10937:
-

[~sseth] [~gopalv] perhaps you can take a look... I'm not sure who else is 
familiar with this code.

 LLAP: make ObjectCache for plans work properly in the daemon
 

 Key: HIVE-10937
 URL: https://issues.apache.org/jira/browse/HIVE-10937
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: llap

 Attachments: HIVE-10937.patch


 There's a perf hit otherwise, especially when the planner creates 1009 
 reducers of 4 MB each.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10956) HS2 leaks HMS connections

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577724#comment-14577724
 ] 

Sergey Shelukhin commented on HIVE-10956:
-

Well, it's a tradeoff... the leak will definitely be fixed by the finalizer. I 
don't think we are under pressure to close the connection ASAP. It's a much 
simpler fix than the attached patch, which reimplements the threadlocal. As much 
as I dislike Java, and GC in particular, I'd rather not do that.
Other patches welcome :)

 HS2 leaks HMS connections
 -

 Key: HIVE-10956
 URL: https://issues.apache.org/jira/browse/HIVE-10956
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.3.0

 Attachments: HIVE-10956.1.patch


 HS2 uses a threadlocal to cache the HMS client in the Hive class. When the 
 thread dies, the HMS client is not closed, so the connection to the HMS is leaked.
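The leak pattern described above can be reproduced in isolation. This is a minimal, self-contained sketch, not Hive's actual code: FakeMetaStoreClient stands in for the real HMS client, and the ThreadLocal cache is analogous to the one in Hive's Hive class. When the owning thread exits without calling close(), nothing else ever closes the cached client, so the underlying connection would leak.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for the HMS client; not Hive's actual API.
class FakeMetaStoreClient {
    final AtomicBoolean closed = new AtomicBoolean(false);
    void close() { closed.set(true); }
}

public class ThreadLocalLeakDemo {
    // Per-thread cache, analogous to the ThreadLocal used in Hive's Hive class.
    static final ThreadLocal<FakeMetaStoreClient> CACHE = new ThreadLocal<>();

    static FakeMetaStoreClient leakFromDeadThread() throws InterruptedException {
        final FakeMetaStoreClient[] holder = new FakeMetaStoreClient[1];
        Thread t = new Thread(() -> {
            FakeMetaStoreClient c = new FakeMetaStoreClient();
            CACHE.set(c);   // cached for this thread only
            holder[0] = c;
            // thread exits here without calling c.close()
        });
        t.start();
        t.join();           // thread is dead; its ThreadLocal entry is unreachable
        return holder[0];   // the client was never closed -> leaked connection
    }

    public static void main(String[] args) throws InterruptedException {
        FakeMetaStoreClient c = leakFromDeadThread();
        System.out.println("closed after thread death: " + c.closed.get()); // false
    }
}
```

A finalizer (as discussed in the comments) would eventually close such a client at GC time; an explicit close on thread exit closes it deterministically.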



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10853) Create ExplainTask in ATS hook through ExplainWork

2015-06-08 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577544#comment-14577544
 ] 

Alan Gates commented on HIVE-10853:
---

Should this be pushed to branch-1 as well?

 Create ExplainTask in ATS hook through ExplainWork
 --

 Key: HIVE-10853
 URL: https://issues.apache.org/jira/browse/HIVE-10853
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Pengcheng Xiong
 Fix For: 2.0.0

 Attachments: HIVE-10853.01.patch, HIVE-10853.02.patch


 Right now the ExplainTask is created directly. That's fragile and can lead to 
 issues like HIVE-10829.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10944) Fix HS2 for Metrics

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577566#comment-14577566
 ] 

Sergey Shelukhin commented on HIVE-10944:
-

+1 from my side, one nit can be fixed on commit; the others' feedback needs to 
be addressed :)

 Fix HS2 for Metrics
 ---

 Key: HIVE-10944
 URL: https://issues.apache.org/jira/browse/HIVE-10944
 Project: Hive
  Issue Type: Bug
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10944.2.patch, HIVE-10944.3.patch, HIVE-10944.patch


 Some issues with initializing the new HS2 metrics:
 1.  Metrics is not working properly in HS2 due to wrong init checks.
 2.  If not enabled, JVMPauseMonitor logs trash to the HS2 logs, as it wasn't 
 checking whether metrics was enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10853) Create ExplainTask in ATS hook through ExplainWork

2015-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577564#comment-14577564
 ] 

Pengcheng Xiong commented on HIVE-10853:


[~alangates], I just talked with [~ashutoshc]. As this is an improvement and we 
assume that branch-1 is a maintenance branch, we think it is OK not to push it 
to branch-1. Thanks.

 Create ExplainTask in ATS hook through ExplainWork
 --

 Key: HIVE-10853
 URL: https://issues.apache.org/jira/browse/HIVE-10853
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Pengcheng Xiong
 Fix For: 2.0.0

 Attachments: HIVE-10853.01.patch, HIVE-10853.02.patch


 Right now the ExplainTask is created directly. That's fragile and can lead to 
 issues like HIVE-10829.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577689#comment-14577689
 ] 

Sergey Shelukhin commented on HIVE-10965:
-

[~pxiong] can you take a look?
[~sushanth] ok for 1.2?

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10956) HS2 leaks HMS connections

2015-06-08 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577731#comment-14577731
 ] 

Jimmy Xiang commented on HIVE-10956:


The new patch does not use this threadlocal any more.

 HS2 leaks HMS connections
 -

 Key: HIVE-10956
 URL: https://issues.apache.org/jira/browse/HIVE-10956
 Project: Hive
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 1.3.0

 Attachments: HIVE-10956.1.patch


 HS2 uses a threadlocal to cache the HMS client in the Hive class. When the 
 thread dies, the HMS client is not closed, so the connection to the HMS is leaked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577792#comment-14577792
 ] 

Sergey Shelukhin commented on HIVE-10965:
-

[~pxiong] do you know why it would request stats with 0 columns?

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8931) Test TestAccumuloCliDriver is not completing

2015-06-08 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577798#comment-14577798
 ] 

Josh Elser commented on HIVE-8931:
--

Thanks for the review [~daijy]!

bq. The downside is we are going to maintain qtest-accumulo/pom.xml going 
forward, but I cannot think of a better solution

Yeah, that was the conclusion that I came to. I'm all ears if someone has a 
suggestion for something better to do, but I couldn't come up with anything.

 Test TestAccumuloCliDriver is not completing
 

 Key: HIVE-8931
 URL: https://issues.apache.org/jira/browse/HIVE-8931
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Josh Elser
 Fix For: 1.2.1

 Attachments: HIVE-8931.001.patch, HIVE-8931.002.patch, 
 HIVE-8931.003.patch


 Tests are taking 3 hours due to {{TestAccumuloCliDriver}} not finishing.
 Logs:
 http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1848/failed/TestAccumuloCliDriver/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10964) patch testing infrastructure needs to support branch-1

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577846#comment-14577846
 ] 

Hive QA commented on HIVE-10964:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738405/HIVE-10964.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9003 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4214/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4214/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4214/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738405 - PreCommit-HIVE-TRUNK-Build

 patch testing infrastructure needs to support branch-1
 --

 Key: HIVE-10964
 URL: https://issues.apache.org/jira/browse/HIVE-10964
 Project: Hive
  Issue Type: Task
  Components: Testing Infrastructure
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-10964.patch


 The infrastructure has the ability to support testing on different branches.  
 We need to add branch-1 as an option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10685) Alter table concatenate operator will cause duplicate data

2015-06-08 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577942#comment-14577942
 ] 

Prasanth Jayachandran commented on HIVE-10685:
--

[~FanTn] Thanks for the patch. I just updated the patch so that the precommit 
test can apply it cleanly. I also made another minor change to move the stripe 
index increment out of the condition. I will commit the patch if the precommit 
test runs cleanly.

 Alter table concatenate operator will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.1.patch, HIVE-10685.patch


 The Orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; the size of each file is about 2.1 to 3.2 GB.
 After executing the command: ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My Hive version is 1.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10906) Value based UDAF function without orderby expression throws NPE

2015-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577959#comment-14577959
 ] 

Ashutosh Chauhan commented on HIVE-10906:
-

I am not sure. [~aihuaxu] has done quite a bit of work in this area in the past 
few weeks, and I am not sure how much of it is in branch-1. So whether this 
particular bug will be in branch-1 or not, I don't know. [~aihuaxu], do you know 
any better?

 Value based UDAF function without orderby expression throws NPE
 ---

 Key: HIVE-10906
 URL: https://issues.apache.org/jira/browse/HIVE-10906
 Project: Hive
  Issue Type: Sub-task
  Components: PTF-Windowing
Reporter: Aihua Xu
Assignee: Aihua Xu
 Fix For: 2.0.0

 Attachments: HIVE-10906.2.patch, HIVE-10906.patch


 The following query throws an NPE.
 {noformat}
 select key, value, min(value) over (partition by key range between unbounded 
 preceding and current row) from small;
 FAILED: NullPointerException null
 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver 
 (SessionState.java:printError(957)) - FAILED: NullPointerException null
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293)
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281)
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10959) Templeton launcher job should reconnect to the running child job on task retry

2015-06-08 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HIVE-10959:
--
Attachment: HIVE-10959.patch

Attaching the patch.

 Templeton launcher job should reconnect to the running child job on task retry
 --

 Key: HIVE-10959
 URL: https://issues.apache.org/jira/browse/HIVE-10959
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.15.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HIVE-10959.patch


 Currently, the Templeton launcher kills all child jobs (jobs tagged with the 
 parent job's id) upon task retry. 
 Upon launcher task retry, Templeton should instead reconnect to the running 
 job and continue tracking its progress that way. 
 This logic cannot be used for all job kinds (e.g. for jobs that are driven 
 from the client side, like regular Hive). However, for MapReduceV2, and 
 possibly Tez and Hive on Tez, this should be the default.
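The reconnect-instead-of-kill behavior described above can be sketched as follows. All names here (ChildJob, onTaskRetry, submitNewChild, the tag map) are hypothetical stand-ins for illustration, not WebHCat's or YARN's actual API: on retry, the launcher first looks for a still-running job carrying the parent's tag and attaches to it; only when none is running does it submit a fresh child.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical handle for a child job; stands in for a YARN/MapReduce application record.
class ChildJob {
    final String id;
    boolean running;
    ChildJob(String id, boolean running) { this.id = id; this.running = running; }
    void kill() { running = false; }
}

// Illustrative launcher-retry logic: reconnect to a running child job
// (found via the parent job's tag) instead of killing it and resubmitting.
public class LauncherRetry {
    static final Map<String, List<ChildJob>> JOBS_BY_TAG = new HashMap<>();

    static ChildJob onTaskRetry(String parentTag) {
        for (ChildJob j : JOBS_BY_TAG.getOrDefault(parentTag, List.of())) {
            if (j.running) {
                return j; // reconnect: keep tracking the existing child job
            }
        }
        return submitNewChild(parentTag); // nothing running: start fresh
    }

    static ChildJob submitNewChild(String parentTag) {
        ChildJob j = new ChildJob(parentTag + "-child-" + System.nanoTime(), true);
        JOBS_BY_TAG.computeIfAbsent(parentTag, k -> new ArrayList<>()).add(j);
        return j;
    }
}
```

This only works for job kinds whose execution lives entirely in the cluster; client-driven jobs cannot simply be re-attached, which matches the caveat in the description.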



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8931) Test TestAccumuloCliDriver is not completing

2015-06-08 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577794#comment-14577794
 ] 

Daniel Dai commented on HIVE-8931:
--

The downside is we are going to maintain qtest-accumulo/pom.xml going forward, 
but I cannot think of a better solution. +1 to fix unit tests which we 
definitely have to.

 Test TestAccumuloCliDriver is not completing
 

 Key: HIVE-8931
 URL: https://issues.apache.org/jira/browse/HIVE-8931
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Josh Elser
 Fix For: 1.2.1

 Attachments: HIVE-8931.001.patch, HIVE-8931.002.patch, 
 HIVE-8931.003.patch


 Tests are taking 3 hours due to {{TestAccumuloCliDriver}} not finishing.
 Logs:
 http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1848/failed/TestAccumuloCliDriver/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10966) direct SQL for stats has a cast exception on some databases

2015-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577862#comment-14577862
 ] 

Ashutosh Chauhan commented on HIVE-10966:
-

+1

 direct SQL for stats has a cast exception on some databases
 ---

 Key: HIVE-10966
 URL: https://issues.apache.org/jira/browse/HIVE-10966
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10966.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10966) direct SQL for stats has a cast exception on some databases

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577894#comment-14577894
 ] 

Hive QA commented on HIVE-10966:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738424/HIVE-10966.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4216/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4216/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4216/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-hwi ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/hwi/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-hwi ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/hwi/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-hwi ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-hwi ---
[INFO] 
[INFO] 
[INFO] Building Hive ODBC 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-odbc ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-odbc ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-odbc ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-odbc ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/odbc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] 
[INFO] Building Hive Shims Aggregator 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-shims-aggregator ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
hive-shims-aggregator ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ 
hive-shims-aggregator ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ 
hive-shims-aggregator ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/shims/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] 

[jira] [Commented] (HIVE-10685) Alter table concatenate operator will cause duplicate data

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577936#comment-14577936
 ] 

Hive QA commented on HIVE-10685:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12732760/HIVE-10685.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4218/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4218/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4218/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-4218/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   7db3fb3..d038bd8  branch-1   -> origin/branch-1
   77b2c20..f534590  branch-1.2 -> origin/branch-1.2
   a802104..7ae1d0b  master -> origin/master
+ git reset --hard HEAD
HEAD is now at a802104 HIVE-8931: Test TestAccumuloCliDriver is not completing 
(Josh Elser via Daniel Dai
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
+ git reset --hard origin/master
HEAD is now at 7ae1d0b HIVE-10910 : Alter table drop partition queries in 
encrypted zone failing to remove data from HDFS (Eugene Koifman, reviewed by 
Gunther)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12732760 - PreCommit-HIVE-TRUNK-Build

 Alter table concatenate operator will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.patch


 The Orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; the size of each file is about 2.1 to 3.2 GB.
 After executing the command: ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My Hive version is 1.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10685) Alter table concatenate operator will cause duplicate data

2015-06-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-10685:
-
Attachment: HIVE-10685.1.patch

 Alter table concatenate operator will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.1.patch, HIVE-10685.patch


 The Orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; the size of each file is about 2.1 to 3.2 GB.
 After executing the command: ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My Hive version is 1.1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10968) Windows: analyze json table via beeline failed throwing Class org.apache.hive.hcatalog.data.JsonSerDe not found

2015-06-08 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-10968:
-
Attachment: HIVE-10968.1.patch

cc-ing [~thejas] for review.

 Windows: analyze json table via beeline failed throwing Class 
 org.apache.hive.hcatalog.data.JsonSerDe not found
 ---

 Key: HIVE-10968
 URL: https://issues.apache.org/jira/browse/HIVE-10968
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
 Environment: Windows
Reporter: Takahiko Saito
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 1.2.1

 Attachments: HIVE-10968.1.patch


 NO PRECOMMIT TESTS
 Run the following via beeline:
 {noformat}0: jdbc:hive2://localhost:10001> analyze table all100kjson compute 
 statistics;
 15/06/05 20:44:11 INFO log.PerfLogger: PERFLOG method=parse 
 from=org.apache.hadoop.hive.ql.Driver
 15/06/05 20:44:11 INFO parse.ParseDriver: Parsing command: analyze table 
 all100kjson compute statistics
 15/06/05 20:44:11 INFO parse.ParseDriver: Parse Completed
 15/06/05 20:44:11 INFO log.PerfLogger: /PERFLOG method=parse 
 start=1433537051075 end=1433537051077 duration=2 from=org.
 apache.hadoop.hive.ql.Driver
 15/06/05 20:44:11 INFO log.PerfLogger: PERFLOG method=semanticAnalyze 
 from=org.apache.hadoop.hive.ql.Driver
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Invoking analyze on 
 original query
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Starting Semantic 
 Analysis
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Completed phase 1 
 of Semantic Analysis
[jira] [Updated] (HIVE-10921) Change trunk pom version to reflect the branch-1 split

2015-06-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-10921:

Attachment: HIVE-10921.01.patch

 Change trunk pom version to reflect the branch-1 split
 --

 Key: HIVE-10921
 URL: https://issues.apache.org/jira/browse/HIVE-10921
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 2.0.0

 Attachments: HIVE-10921.01.patch, HIVE-10921.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577808#comment-14577808
 ] 

Pengcheng Xiong commented on HIVE-10965:


I am not sure, but I would start from the query that requests stats with 0 
columns. If I remember correctly, [~ashutoshc] committed a patch to deal with 
a similar issue (empty partition or empty columns) several weeks ago. He may 
have a better answer. Thanks.

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch








[jira] [Commented] (HIVE-10685) Alter table concatenate operator will cause duplicate data

2015-06-08 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577915#comment-14577915
 ] 

Prasanth Jayachandran commented on HIVE-10685:
--

LGTM, +1

 Alter table concatenate operator will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.patch


 The Orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; each file is about 2.1 ~ 3.2 GB.
 After executing the command ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My Hive version is 1.1.0.





[jira] [Updated] (HIVE-10685) Alter table concatenate operator will cause duplicate data

2015-06-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-10685:
-
Affects Version/s: 1.2.1
   1.3.0
   0.14.0
   1.0.0
   1.2.0

 Alter table concatenate operator will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.patch


 The Orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; each file is about 2.1 ~ 3.2 GB.
 After executing the command ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My Hive version is 1.1.0.





[jira] [Commented] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577789#comment-14577789
 ] 

Pengcheng Xiong commented on HIVE-10965:


LGTM. Thanks!

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch








[jira] [Commented] (HIVE-10598) Vectorization borks when column is added to table.

2015-06-08 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577796#comment-14577796
 ] 

Matt McCline commented on HIVE-10598:
-

orc_int_type_promotion is failing because the new checks are too strict: 
requiring partitions to have an exact subset of types...

{noformat}
2015-06-08 13:32:18,140 INFO  [main]: physical.Vectorizer 
(Vectorizer.java:validateInputFormatAndSchemaEvolution(586)) - Could not 
vectorize partition 
pfile:/Users/mmccline/HIVE-10598/itests/qtest/target/warehouse/src_part_orc/ds=2008-04-08.
  The first type names int:string do not match the other first type names 
bigint:str
{noformat}
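The comment above suggests the fix is to relax the exact-type-name comparison so that legal widenings still vectorize. A minimal sketch of that idea, as a hedged illustration only (the class and method names here are invented, not Hive's actual Vectorizer code):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative check: a partition column type is considered compatible
// with the table column type when it matches exactly or is a legal
// integer widening promotion (e.g. int -> bigint), rather than
// requiring identical type names as the strict check does.
public class TypePromotionCheck {
    // Integer types ordered from narrowest to widest.
    private static final List<String> INT_WIDENING =
        Arrays.asList("tinyint", "smallint", "int", "bigint");

    public static boolean compatible(String partType, String tableType) {
        if (partType.equals(tableType)) {
            return true;  // exact match always fine
        }
        int from = INT_WIDENING.indexOf(partType);
        int to = INT_WIDENING.indexOf(tableType);
        // Only allow widening within the integer family, never narrowing.
        return from >= 0 && to >= 0 && from <= to;
    }
}
```

Under such a check the failing case above (partition {{int:string}} vs table {{bigint:string}}) would be accepted, while a narrowing like bigint-to-int would still be rejected.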

 Vectorization borks when column is added to table.
 --

 Key: HIVE-10598
 URL: https://issues.apache.org/jira/browse/HIVE-10598
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Mithun Radhakrishnan
Assignee: Matt McCline
 Attachments: HIVE-10598.01.patch


 Consider the following table definition:
 {code:sql}
 create table foobar ( foo string, bar string ) partitioned by (dt string) 
 stored as orc;
 alter table foobar add partition( dt='20150101' ) ;
 {code}
 Say the partition has the following data:
 {noformat}
 1 one 20150101
 2 two 20150101
 3 three   20150101
 {noformat}
 If a new column is added to the table-schema (and the partition continues to 
 have the old schema), vectorized read from the old partitions fail thus:
 {code:sql}
 alter table foobar add columns( goo string );
 select count(1) from foobar;
 {code}
 {code:title=stacktrace}
 java.lang.Exception: java.lang.RuntimeException: Error creating a batch
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
 Caused by: java.lang.RuntimeException: Error creating a batch
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:114)
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:52)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:84)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:42)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.createValue(HadoopShimsSecure.java:156)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.createValue(MapTask.java:180)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: No type entry 
 found for column 3 in map {4=Long}
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.addScratchColumnsToBatch(VectorizedRowBatchCtx.java:632)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.createVectorizedRowBatch(VectorizedRowBatchCtx.java:343)
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:112)
   ... 14 more
 {code}





[jira] [Commented] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577932#comment-14577932
 ] 

Hive QA commented on HIVE-10965:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738419/HIVE-10965.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4217/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4217/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4217/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-hwi ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/hwi/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-hwi ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/hwi/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-hwi ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-hwi ---
[INFO] 
[INFO] 
[INFO] Building Hive ODBC 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-odbc ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-odbc ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-odbc ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-odbc ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/odbc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] 
[INFO] Building Hive Shims Aggregator 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-shims-aggregator ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
hive-shims-aggregator ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ 
hive-shims-aggregator ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ 
hive-shims-aggregator ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/shims/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] 

[jira] [Updated] (HIVE-10962) Merge master to Spark branch 6/7/2015 [Spark Branch]

2015-06-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-10962:
---
Attachment: HIVE-10962.1-spark.patch

 Merge master to Spark branch 6/7/2015 [Spark Branch]
 

 Key: HIVE-10962
 URL: https://issues.apache.org/jira/browse/HIVE-10962
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
 Attachments: HIVE-10962.1-spark.patch








[jira] [Updated] (HIVE-10962) Merge master to Spark branch 6/7/2015 [Spark Branch]

2015-06-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-10962:
---
Attachment: (was: HIVE-10962.1-spark.branch)

 Merge master to Spark branch 6/7/2015 [Spark Branch]
 

 Key: HIVE-10962
 URL: https://issues.apache.org/jira/browse/HIVE-10962
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
 Attachments: HIVE-10962.1-spark.patch








[jira] [Assigned] (HIVE-10967) add mapreduce.job.tags to sql std authorization config whitelist

2015-06-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair reassigned HIVE-10967:


Assignee: Thejas M Nair

 add mapreduce.job.tags to sql std authorization config whitelist
 

 Key: HIVE-10967
 URL: https://issues.apache.org/jira/browse/HIVE-10967
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10967.1.patch


 mapreduce.job.tags is set by oozie for HiveServer2 actions.
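 As a stop-gap sketch, the SQL standard authorization whitelist can typically be 
 extended through configuration rather than code; the property below is the 
 standard append knob (verify the name against your Hive version; the value is a 
 regex):
 {noformat}
 <!-- hive-site.xml: append a regex of extra parameters that SQL standard
      authorization should allow clients to set -->
 <property>
   <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
   <value>mapreduce\.job\.tags</value>
 </property>
 {noformat}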





[jira] [Updated] (HIVE-10967) add mapreduce.job.tags to sql std authorization config whitelist

2015-06-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-10967:
-
Description: mapreduce.job.tags is set by oozie for HiveServer2 actions.

 add mapreduce.job.tags to sql std authorization config whitelist
 

 Key: HIVE-10967
 URL: https://issues.apache.org/jira/browse/HIVE-10967
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
 Attachments: HIVE-10967.1.patch


 mapreduce.job.tags is set by oozie for HiveServer2 actions.





[jira] [Updated] (HIVE-10967) add mapreduce.job.tags to sql std authorization config whitelist

2015-06-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-10967:
-
Attachment: HIVE-10967.1.patch

 add mapreduce.job.tags to sql std authorization config whitelist
 

 Key: HIVE-10967
 URL: https://issues.apache.org/jira/browse/HIVE-10967
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10967.1.patch


 mapreduce.job.tags is set by oozie for HiveServer2 actions.





[jira] [Comment Edited] (HIVE-10921) Change trunk pom version to reflect the branch-1 split

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577769#comment-14577769
 ] 

Sergey Shelukhin edited comment on HIVE-10921 at 6/8/15 8:22 PM:
-

I ended up making some more changes wherever 1.3 was mentioned. Attaching the 
final patch.
The 2.0 sql files are all just copies of the respective 1.3 files.


was (Author: sershe):
I ended up making some more changes wherever 2.0 was mentioned. Attaching the 
final patch

 Change trunk pom version to reflect the branch-1 split
 --

 Key: HIVE-10921
 URL: https://issues.apache.org/jira/browse/HIVE-10921
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 2.0.0

 Attachments: HIVE-10921.01.patch, HIVE-10921.patch








[jira] [Commented] (HIVE-10685) Alter table concatenate operator will cause duplicate data

2015-06-08 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577916#comment-14577916
 ] 

Prasanth Jayachandran commented on HIVE-10685:
--

Pending precommit tests.

 Alter table concatenate operator will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.patch


 The Orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; each file is about 2.1 ~ 3.2 GB.
 After executing the command ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My Hive version is 1.1.0.





[jira] [Commented] (HIVE-10965) direct SQL for stats fails in 0-column case

2015-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577938#comment-14577938
 ] 

Ashutosh Chauhan commented on HIVE-10965:
-

Patch looks good. I would also suggest adding this check on the client side 
too, i.e., in HiveMetaStoreClient::getAggrColStatsFor(), so that we don't make 
a trip to the metastore in such cases just to get a null object.
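A minimal sketch of the kind of client-side guard being suggested. The class and method shape below only loosely follows the name in the comment; the real Hive signature differs, so treat every name here as an assumption:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical early-return guard: if the caller asks for aggregate
// column stats with no columns (or no partitions), skip the metastore
// round trip entirely and return an empty result instead of null.
public class AggrStatsGuard {
    public static Map<String, Object> getAggrColStatsFor(
            List<String> colNames, List<String> partNames) {
        if (colNames == null || colNames.isEmpty()
                || partNames == null || partNames.isEmpty()) {
            return Collections.emptyMap();  // nothing to fetch
        }
        // ... otherwise fall through to the real metastore client call ...
        throw new UnsupportedOperationException("metastore call elided in this sketch");
    }
}
```

The point of the guard is purely to avoid the wasted round trip; behavior for non-empty inputs is unchanged.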

 direct SQL for stats fails in 0-column case
 ---

 Key: HIVE-10965
 URL: https://issues.apache.org/jira/browse/HIVE-10965
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 1.3.0, 1.2.1, 2.0.0

 Attachments: HIVE-10965.patch








[jira] [Updated] (HIVE-10968) Windows: analyze json table via beeline failed throwing Class org.apache.hive.hcatalog.data.JsonSerDe not found

2015-06-08 Thread Takahiko Saito (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takahiko Saito updated HIVE-10968:
--
Fix Version/s: 1.2.1

 Windows: analyze json table via beeline failed throwing Class 
 org.apache.hive.hcatalog.data.JsonSerDe not found
 ---

 Key: HIVE-10968
 URL: https://issues.apache.org/jira/browse/HIVE-10968
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
 Environment: Windows
Reporter: Takahiko Saito
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 1.2.1


 Run the following via beeline:
 {noformat}0: jdbc:hive2://localhost:10001> analyze table all100kjson compute 
 statistics;
 15/06/05 20:44:11 INFO log.PerfLogger: PERFLOG method=parse 
 from=org.apache.hadoop.hive.ql.Driver
 15/06/05 20:44:11 INFO parse.ParseDriver: Parsing command: analyze table 
 all100kjson compute statistics
 15/06/05 20:44:11 INFO parse.ParseDriver: Parse Completed
 15/06/05 20:44:11 INFO log.PerfLogger: /PERFLOG method=parse 
 start=1433537051075 end=1433537051077 duration=2 from=org.
 apache.hadoop.hive.ql.Driver
 15/06/05 20:44:11 INFO log.PerfLogger: PERFLOG method=semanticAnalyze 
 from=org.apache.hadoop.hive.ql.Driver
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Invoking analyze on 
 original query
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Starting Semantic 
 Analysis
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Completed phase 1 
 of Semantic Analysis
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Get metadata for 
 source tables
 15/06/05 20:44:11 INFO metastore.HiveMetaStore: 5: get_table : db=default 
 tbl=all100kjson
 15/06/05 20:44:11 INFO HiveMetaStore.audit: ugi=hadoopqa
 ip=unknown-ip-addr  cmd=get_table : db=default tbl=a
 ll100kjson
 15/06/05 20:44:11 INFO metastore.HiveMetaStore: 5: get_table : db=default 
 tbl=all100kjson
 15/06/05 20:44:11 INFO HiveMetaStore.audit: ugi=hadoopqa
 ip=unknown-ip-addr  cmd=get_table : db=default tbl=a
 ll100kjson
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Get metadata for 
 subqueries
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Get metadata for 
 destination tables
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Completed getting 
 MetaData in Semantic Analysis
 15/06/05 20:44:11 INFO common.FileUtils: Creating directory if it doesn't 
 exist: hdfs://dal-hs211:8020/user/hcat/tests/d
 ata/all100kjson/.hive-staging_hive_2015-06-05_20-44-11_075_4520028480897676073-5
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Set stats 
 collection dir : hdfs://dal-hs211:8020/user/hcat/tes
 ts/data/all100kjson/.hive-staging_hive_2015-06-05_20-44-11_075_4520028480897676073-5/-ext-1
 15/06/05 20:44:11 INFO ppd.OpProcFactory: Processing for TS(5)
 15/06/05 20:44:11 INFO log.PerfLogger: PERFLOG method=partition-retrieving 
 from=org.apache.hadoop.hive.ql.optimizer.ppr
 .PartitionPruner
 15/06/05 20:44:11 INFO log.PerfLogger: /PERFLOG method=partition-retrieving 
 start=1433537051345 end=1433537051345 durat
 ion=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner
 15/06/05 20:44:11 INFO metastore.HiveMetaStore: 5: get_indexes : db=default 
 tbl=all100kjson
 15/06/05 20:44:11 INFO HiveMetaStore.audit: ugi=hadoopqa
 ip=unknown-ip-addr  cmd=get_indexes : db=default tbl
 =all100kjson
 15/06/05 20:44:11 INFO metastore.HiveMetaStore: 5: get_indexes : db=default 
 tbl=all100kjson
 15/06/05 20:44:11 INFO HiveMetaStore.audit: ugi=hadoopqa
 ip=unknown-ip-addr  cmd=get_indexes : db=default tbl
 =all100kjson
 15/06/05 20:44:11 INFO physical.NullScanTaskDispatcher: Looking for table 
 scans where optimization is applicable
 15/06/05 20:44:11 INFO physical.NullScanTaskDispatcher: Found 0 null table 
 scans
 15/06/05 20:44:11 INFO physical.NullScanTaskDispatcher: Looking for table 
 scans where optimization is applicable
 15/06/05 20:44:11 INFO physical.NullScanTaskDispatcher: Found 0 null table 
 scans
 15/06/05 20:44:11 INFO physical.NullScanTaskDispatcher: Looking for table 
 scans where optimization is applicable
 15/06/05 20:44:11 INFO physical.NullScanTaskDispatcher: Found 0 null table 
 scans
 15/06/05 20:44:11 INFO physical.Vectorizer: Validating MapWork...
 15/06/05 20:44:11 INFO physical.Vectorizer: Input format: 
 org.apache.hadoop.mapred.TextInputFormat, doesn't provide vect
 orized input
 15/06/05 20:44:11 INFO parse.ColumnStatsSemanticAnalyzer: Completed plan 
 generation
 15/06/05 20:44:11 INFO ql.Driver: Semantic Analysis Completed
 15/06/05 20:44:11 INFO log.PerfLogger: /PERFLOG method=semanticAnalyze 
 start=1433537051077 end=1433537051367 duration=2
 90 from=org.apache.hadoop.hive.ql.Driver
 15/06/05 

[jira] [Commented] (HIVE-10963) Hive throws NPE rather than meaningful error message when window is missing

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577868#comment-14577868
 ] 

Hive QA commented on HIVE-10963:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738408/HIVE-10963.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4215/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4215/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4215/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-hwi ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/hwi/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-hwi ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/hwi/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/hwi/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-hwi ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-hwi ---
[INFO] 
[INFO] 
[INFO] Building Hive ODBC 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-odbc ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-odbc ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-odbc ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-odbc ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/odbc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/odbc/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] 
[INFO] Building Hive Shims Aggregator 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-shims-aggregator ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
hive-shims-aggregator ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ 
hive-shims-aggregator ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ 
hive-shims-aggregator ---
[INFO] Executing tasks

main:
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp
   [delete] Deleting directory 
/data/hive-ptest/working/apache-github-source-source/shims/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/warehouse
[mkdir] Created dir: 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp/conf
 [copy] Copying 11 files to 
/data/hive-ptest/working/apache-github-source-source/shims/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] 

[jira] [Commented] (HIVE-10959) Templeton launcher job should reconnect to the running child job on task retry

2015-06-08 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577914#comment-14577914
 ] 

Ivan Mitic commented on HIVE-10959:
---

RR: https://reviews.apache.org/r/35226/

 Templeton launcher job should reconnect to the running child job on task retry
 --

 Key: HIVE-10959
 URL: https://issues.apache.org/jira/browse/HIVE-10959
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.15.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HIVE-10959.patch


 Currently, Templeton launcher kills all child jobs (jobs tagged with the 
 parent job's id) upon task retry. 
 Upon templeton launcher task retry, templeton should reconnect to the running 
 job and continue tracking its progress that way. 
 This logic cannot be used for all job kinds (e.g. for jobs that are driven by 
 the client side like regular hive). However, for MapReduceV2, and possibly 
 Tez and HiveOnTez, this should be the default.
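 The decision described above can be sketched as a small policy function; the 
 job-kind names and method below are illustrative only, not Templeton's actual 
 code:
 {noformat}
 // Hypothetical policy: on launcher-task retry, reconnect to an
 // already-running child job (found via the parent job id tag) only for
 // job kinds whose execution is not driven from the client side;
 // otherwise fall back to the current kill-and-relaunch behavior.
 public class RetryPolicy {
     public enum JobKind { MAPREDUCE_V2, TEZ, HIVE_ON_TEZ, CLIENT_DRIVEN }

     public static boolean shouldReconnect(JobKind kind, boolean childStillRunning) {
         if (!childStillRunning) {
             return false;  // nothing to reconnect to; relaunch
         }
         switch (kind) {
             case MAPREDUCE_V2:
             case TEZ:
             case HIVE_ON_TEZ:
                 return true;   // server-side jobs: safe to re-track
             default:
                 return false;  // e.g. regular client-driven Hive actions
         }
     }
 }
 {noformat}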





[jira] [Assigned] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates reassigned HIVE-10952:
-

Assignee: Alan Gates

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch


 This section of alter1.q fails:
 create table alter1(a int, b int);
 describe extended alter1;
 Exception:
 {code}
 Trying to fetch a non-existent storage descriptor from hash 
 iNVRGkfwwQDGK9oX0fo9XA==
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1765)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getTableName(DDLSemanticAnalyzer.java:1807)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDescribeTable(DDLSemanticAnalyzer.java:1985)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:318)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1128)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1176)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1065)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1055)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1069)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1043)
 at 
 org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:139)
 at 
 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter1(TestCliDriver.java:123)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at junit.framework.TestCase.runTest(TestCase.java:176)
 at junit.framework.TestCase.runBare(TestCase.java:141)
 at junit.framework.TestResult$1.protect(TestResult.java:122)
 at junit.framework.TestResult.runProtected(TestResult.java:142)
 at junit.framework.TestResult.run(TestResult.java:125)
 at junit.framework.TestCase.run(TestCase.java:129)
 at junit.framework.TestSuite.runTest(TestSuite.java:255)
 at junit.framework.TestSuite.run(TestSuite.java:250)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch 
 table alter1. java.lang.RuntimeException: Woh, bad!  Trying to fetch a 
 non-existent storage descriptor from hash iNVRGkfwwQDGK9oX0fo9XA==^M
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1121)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1068)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1055)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1747)
 {code}
 The partitioned counterpart alter2.q passes.





[jira] [Reopened] (HIVE-10685) Alter table concatenate oparetor will cause duplicate data

2015-06-08 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reopened HIVE-10685:
--

 Alter table concatenate oparetor will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.patch


 The orders table has 15 rows and is stored as ORC. 
 {noformat}
 hive> select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; each file is about 2.1 ~ 3.2 GB.
 After executing the command ALTER TABLE orders CONCATENATE; the table now has 
 1530115000 rows.
 My hive version is 1.1.0.
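A row count that jumps after ALTER TABLE ... CONCATENATE suggests input rows being counted more than once. As a hedged toy illustration (plain Python, not Hive's ORC merge code), one way such duplication can arise is when merged output coexists with source files that were never removed from the table directory:

```python
# Toy model of a file merge. If the merged file is written but the source
# files are left behind, a later scan reads every row twice.
def concatenate(files, remove_sources=True):
    merged = [row for f in files for row in f]
    table_dir = [merged]
    if not remove_sources:
        table_dir.extend(files)      # buggy path: sources left in the directory
    return table_dir

files = [[1, 2], [3, 4, 5]]
ok = sum(len(f) for f in concatenate(files))
buggy = sum(len(f) for f in concatenate(files, remove_sources=False))
print(ok, buggy)   # 5 10
```

Whether this is the actual root cause in HIVE-10685 is not established here; the sketch only shows the failure shape the report describes.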





[jira] [Updated] (HIVE-10962) Merge master to Spark branch 6/7/2015 [Spark Branch]

2015-06-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-10962:
---
Attachment: HIVE-10962.1-spark.branch

 Merge master to Spark branch 6/7/2015 [Spark Branch]
 

 Key: HIVE-10962
 URL: https://issues.apache.org/jira/browse/HIVE-10962
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
 Attachments: HIVE-10962.1-spark.branch








[jira] [Updated] (HIVE-10963) Hive throws NPE rather than meaningful error message when window is missing

2015-06-08 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-10963:

Description: 
{{select sum(salary) over w1 from emp;}} throws an NPE rather than a meaningful 
error message like missing window.



  was:
{{select sum(salary) over w1 from emp;}} throws NPE rather than missing 
window.




 Hive throws NPE rather than meaningful error message when window is missing
 ---

 Key: HIVE-10963
 URL: https://issues.apache.org/jira/browse/HIVE-10963
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 1.3.0
Reporter: Aihua Xu
Assignee: Aihua Xu

 {{select sum(salary) over w1 from emp;}} throws an NPE rather than a meaningful 
 error message like missing window.





[jira] [Commented] (HIVE-10866) Throw error when client try to insert into bucketed table

2015-06-08 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577489#comment-14577489
 ] 

Yongzhi Chen commented on HIVE-10866:
-

Throw an error message at execution time for inserts into a non-empty bucketed 
table/partition. This fix prevents generating copy_1 and copy_2 files in the 
bucketed folders.

 Throw error when client try to insert into bucketed table
 -

 Key: HIVE-10866
 URL: https://issues.apache.org/jira/browse/HIVE-10866
 Project: Hive
  Issue Type: Improvement
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-10866.1.patch


 Currently, Hive does not support appends (insert into) on bucketed tables; see 
 open JIRA HIVE-3608. When inserting into such a table, the data will be 
 corrupted and no longer fit for bucketmapjoin. 
 We need to find a way to prevent clients from inserting into such tables.
 Reproduce:
 {noformat}
 CREATE TABLE IF NOT EXISTS buckettestoutput1( 
 data string 
 )CLUSTERED BY(data) 
 INTO 2 BUCKETS 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 CREATE TABLE IF NOT EXISTS buckettestoutput2( 
 data string 
 )CLUSTERED BY(data) 
 INTO 2 BUCKETS 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 set hive.enforce.bucketing = true; 
 set hive.enforce.sorting=true;
 insert into table buckettestoutput1 select code from sample_07 where 
 total_emp > 134354250 limit 10;
 After this first insert, I did:
 set hive.auto.convert.sortmerge.join=true; 
 set hive.optimize.bucketmapjoin = true; 
 set hive.optimize.bucketmapjoin.sortedmerge = true; 
 set hive.auto.convert.sortmerge.join.noconditionaltask=true;
 0: jdbc:hive2://localhost:1 select * from buckettestoutput1 a join 
 buckettestoutput2 b on (a.data=b.data);
 +---+---+
 | data  | data  |
 +---+---+
 +---+---+
 So select works fine. 
 Second insert:
 0: jdbc:hive2://localhost:1 insert into table buckettestoutput1 select 
 code from sample_07 where total_emp = 134354250 limit 10;
 No rows affected (61.235 seconds)
 Then select:
 0: jdbc:hive2://localhost:1 select * from buckettestoutput1 a join 
 buckettestoutput2 b on (a.data=b.data);
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10141]: Bucketed table metadata is not correct. Fix the metadata or don't use 
 bucketed mapjoin, by setting hive.enforce.bucketmapjoin to false. The number 
 of buckets for table buckettestoutput1 is 2, whereas the number of files is 4 
 (state=42000,code=10141)
 0: jdbc:hive2://localhost:1
 {noformat}
 Inserting into an empty table or partition is fine, but after inserting into a 
 non-empty one (the second insert in the repro above), bucketmapjoin 
 throws an error. We should not let the second insert succeed. 





[jira] [Updated] (HIVE-10866) Throw error when client try to insert into bucketed table

2015-06-08 Thread Yongzhi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-10866:

Attachment: HIVE-10866.1.patch

Need code review.

 Throw error when client try to insert into bucketed table
 -

 Key: HIVE-10866
 URL: https://issues.apache.org/jira/browse/HIVE-10866
 Project: Hive
  Issue Type: Improvement
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
 Attachments: HIVE-10866.1.patch







[jira] [Commented] (HIVE-10906) Value based UDAF function without orderby expression throws NPE

2015-06-08 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578071#comment-14578071
 ] 

Aihua Xu commented on HIVE-10906:
-

I checked branch-1 and it has the same issue as well. 

 Value based UDAF function without orderby expression throws NPE
 ---

 Key: HIVE-10906
 URL: https://issues.apache.org/jira/browse/HIVE-10906
 Project: Hive
  Issue Type: Sub-task
  Components: PTF-Windowing
Reporter: Aihua Xu
Assignee: Aihua Xu
 Fix For: 2.0.0

 Attachments: HIVE-10906.2.patch, HIVE-10906.patch


 The following query throws NPE.
 {noformat}
 select key, value, min(value) over (partition by key range between unbounded 
 preceding and current row) from small;
 FAILED: NullPointerException null
 2015-06-03 13:48:09,268 ERROR [main]: ql.Driver 
 (SessionState.java:printError(957)) - FAILED: NullPointerException null
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateValueBoundary(WindowingSpec.java:293)
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateWindowFrame(WindowingSpec.java:281)
 at 
 org.apache.hadoop.hive.ql.parse.WindowingSpec.validateAndMakeEffective(WindowingSpec.java:155)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:11965)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8910)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8868)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9713)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9606)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10079)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:327)
 at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10090)
 at 
 org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:208)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1124)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1172)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1061)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 {noformat}
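Per the stack trace, the NPE originates in WindowingSpec.validateValueBoundary, which a value-based (RANGE) frame reaches even when the window has no ORDER BY expressions to validate against. A hedged sketch of the guard a fix might add, in plain Python with hypothetical names rather than Hive's Java:

```python
# Hedged sketch (not Hive source): a value-based RANGE frame is defined
# relative to ORDER BY expressions, so validation must report their absence
# instead of dereferencing a null order spec.
class SemanticException(Exception):
    pass

def validate_value_boundary(order_exprs):
    if not order_exprs:   # guard replacing the NullPointerException
        raise SemanticException(
            "Value-based RANGE window requires an ORDER BY expression")
    if len(order_exprs) > 1:
        raise SemanticException(
            "Value-based RANGE window supports only a single ORDER BY column")
    return order_exprs[0]

print(validate_value_boundary(["salary"]))   # salary
try:
    validate_value_boundary(None)            # shape of the failing query
except SemanticException as e:
    print(e)
```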





[jira] [Updated] (HIVE-10969) Test autogen_colalias failing on trunk

2015-06-08 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-10969:

Attachment: HIVE-10969.patch

[~sershe] can you take a quick look?

 Test autogen_colalias failing on trunk
 --

 Key: HIVE-10969
 URL: https://issues.apache.org/jira/browse/HIVE-10969
 Project: Hive
  Issue Type: Test
  Components: Tests
Affects Versions: 2.0.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-10969.patch


 Seems like HIVE-10728 didn't have the right golden file updates.





[jira] [Commented] (HIVE-10969) Test autogen_colalias failing on trunk

2015-06-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578091#comment-14578091
 ] 

Sergey Shelukhin commented on HIVE-10969:
-

+1

 Test autogen_colalias failing on trunk
 --

 Key: HIVE-10969
 URL: https://issues.apache.org/jira/browse/HIVE-10969
 Project: Hive
  Issue Type: Test
  Components: Tests
Affects Versions: 2.0.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-10969.patch


 Seems like HIVE-10728 didn't have the right golden file updates.





[jira] [Commented] (HIVE-10963) Hive throws NPE rather than meaningful error message when window is missing

2015-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578056#comment-14578056
 ] 

Ashutosh Chauhan commented on HIVE-10963:
-

Couple of comments on RB.

 Hive throws NPE rather than meaningful error message when window is missing
 ---

 Key: HIVE-10963
 URL: https://issues.apache.org/jira/browse/HIVE-10963
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Affects Versions: 1.3.0
Reporter: Aihua Xu
Assignee: Aihua Xu
 Attachments: HIVE-10963.patch


 {{select sum(salary) over w1 from emp;}} throws an NPE rather than a meaningful 
 error message like missing window.
 Also, once the NPE issue is fixed, the error message should give the right 
 window name rather than the classname.
 {noformat}
 org.apache.hadoop.hive.ql.parse.SemanticException: Window Spec 
 org.apache.hadoop.hive.ql.parse.WindowingSpec$WindowSpec@7954e1de refers to 
 an unknown source
 {noformat}
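The failure mode above can be sketched briefly. This is a hedged toy in plain Python, not Hive's analyzer; resolve_window and its arguments are illustrative names. Resolving `over w1` requires looking the name up in the query's WINDOW clause, and a missed lookup should produce an error naming the window, not an NPE or a class name:

```python
# Hedged sketch (not Hive source): named-window resolution with a guard
# that reports the missing window by name instead of failing with an NPE.
class SemanticException(Exception):
    pass

def resolve_window(name, window_clause):
    spec = window_clause.get(name)
    if spec is None:   # guard replacing the NullPointerException
        raise SemanticException(f"Window '{name}' is not defined")
    return spec

print(resolve_window("w1", {"w1": "partition by dept"}))   # partition by dept
try:
    resolve_window("w1", {})   # `select sum(salary) over w1` with no WINDOW clause
except SemanticException as e:
    print(e)
```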





[jira] [Commented] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578103#comment-14578103
 ] 

Alan Gates commented on HIVE-10952:
---

I don't think this is the right fix.  We need to make sure that the alter 
changes the table in the cache so that future readers get the right version.

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10952-1.patch







[jira] [Commented] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578039#comment-14578039
 ] 

Alan Gates commented on HIVE-10952:
---

When I run this against a stand-alone HBase I don't see the error.  I can 
reproduce it when I run it in the qfile.

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch







[jira] [Commented] (HIVE-10944) Fix HS2 for Metrics

2015-06-08 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578052#comment-14578052
 ] 

Szehon Ho commented on HIVE-10944:
--

Sure, will fix that nit.  [~lskuff] can you take a look to see if it addresses 
your comments?  Thanks.

 Fix HS2 for Metrics
 ---

 Key: HIVE-10944
 URL: https://issues.apache.org/jira/browse/HIVE-10944
 Project: Hive
  Issue Type: Bug
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10944.2.patch, HIVE-10944.3.patch, HIVE-10944.patch


 Some issues with initializing the new HS2 metrics:
 1.  Metrics are not working properly in HS2 due to wrong init checks.
 2.  If not enabled, JVMPauseMonitor logs trash to the HS2 logs, as it wasn't 
 checking whether metrics were enabled.





[jira] [Updated] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-10952:
--
Attachment: HIVE-10952-1.patch

[~alangates], alter1.q can be fixed with the attached patch. The reason is that 
the cached table is changed by alter table, so the storage descriptor hash is no 
longer the same as the old table's.
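The HBase metastore deduplicates storage descriptors by a content hash, and the "non-existent storage descriptor" error fits a table pointing at a hash that was never stored. The toy below (plain Python; SdStore and sd_hash are hypothetical names, not the HBase metastore API) shows why an alter must re-register the new descriptor, which is what Alan's concern about updating the cache amounts to:

```python
import hashlib

# Toy model of a content-addressed storage-descriptor store.
def sd_hash(sd):
    return hashlib.md5(repr(sorted(sd.items())).encode()).hexdigest()

class SdStore:
    def __init__(self):
        self._by_hash = {}

    def put(self, sd):
        h = sd_hash(sd)
        self._by_hash[h] = dict(sd)
        return h

    def get(self, h):
        if h not in self._by_hash:
            raise RuntimeError("Trying to fetch a non-existent storage "
                               f"descriptor from hash {h}")
        return self._by_hash[h]

store = SdStore()
table = {"sd_hash": store.put({"cols": ("a", "b"), "format": "text"})}

# Buggy alter: recompute the hash of the changed SD without storing it.
new_sd = {"cols": ("a", "b", "c"), "format": "text"}
table["sd_hash"] = sd_hash(new_sd)          # hash of an SD never put()
try:
    store.get(table["sd_hash"])             # describe -> RuntimeError
except RuntimeError as e:
    print("bug:", e)

table["sd_hash"] = store.put(new_sd)        # correct alter re-registers the SD
print(store.get(table["sd_hash"])["cols"])  # ('a', 'b', 'c')
```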

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10952-1.patch


 This section of alter1.q fail:
 create table alter1(a int, b int);
 describe extended alter1;
 Exception:
 {code}
 Trying to fetch a non-existent storage descriptor from hash 
 iNVRGkfwwQDGK9oX0fo9XA==^M
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1765)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getTableName(DDLSemanticAnalyzer.java:1807)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDescribeTable(DDLSemanticAnalyzer.java:1985)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:318)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1128)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1176)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1065)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1055)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1069)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1043)
 at 
 org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:139)
 at 
 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter1(TestCliDriver.java:123)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at junit.framework.TestCase.runTest(TestCase.java:176)
 at junit.framework.TestCase.runBare(TestCase.java:141)
 at junit.framework.TestResult$1.protect(TestResult.java:122)
 at junit.framework.TestResult.runProtected(TestResult.java:142)
 at junit.framework.TestResult.run(TestResult.java:125)
 at junit.framework.TestCase.run(TestCase.java:129)
 at junit.framework.TestSuite.runTest(TestSuite.java:255)
 at junit.framework.TestSuite.run(TestSuite.java:250)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch 
 table alter1. java.lang.RuntimeException: Woh, bad!  Trying to fetch a 
 non-existent storage descriptor from hash iNVRGkfwwQDGK9oX0fo9XA==^M
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1121)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1068)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1055)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1747)
 {code}
 The partitioned counterpart alter2.q passes.



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HIVE-6991) History not able to disable/enable after session started

2015-06-08 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578076#comment-14578076
 ] 

Jimmy Xiang commented on HIVE-6991:
---

+1. You tested it on a live cluster and it works for both disable and enable, 
right?

 History not able to disable/enable after session started
 

 Key: HIVE-6991
 URL: https://issues.apache.org/jira/browse/HIVE-6991
 Project: Hive
  Issue Type: Bug
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-6991.1.patch, HIVE-6991.2.patch, HIVE-6991.patch


 By default history is disabled. After the session has started, enabling history 
 with the command set hive.session.history.enabled=true does not work.
 I think this will help with this user query:
 http://mail-archives.apache.org/mod_mbox/hive-user/201404.mbox/%3ccajqy7afapa_pjs6buon0o8zyt2qwfn2wt-mtznwfmurav_8...@mail.gmail.com%3E
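The behavior described is what you would see if the history logger consults a flag captured once at session construction. A minimal sketch (Python, purely illustrative names, not Hive's actual classes) of the pattern, with the fix of re-reading the flag on every command:

```python
# Sketch of the reported behavior and fix. A session that caches the
# history flag at startup ignores later "set" commands; consulting the
# live config on each command makes mid-session toggling work.
# All names here are illustrative, not Hive's actual classes.

class Session:
    def __init__(self, conf):
        self.conf = conf      # live config map, mutated by "set" commands
        self.history = []

    def run(self, command):
        # Fix: check the current config value on every command instead
        # of a value cached at session construction time.
        if self.conf.get("hive.session.history.enabled") == "true":
            self.history.append(command)
        return "ok"

conf = {"hive.session.history.enabled": "false"}
s = Session(conf)
s.run("select 1")                                # history disabled, not recorded
conf["hive.session.history.enabled"] = "true"    # enabled mid-session
s.run("select 2")                                # now recorded
```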



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10685) Alter table concatenate oparetor will cause duplicate data

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578108#comment-14578108
 ] 

Hive QA commented on HIVE-10685:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738446/HIVE-10685.1.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9004 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_stats_counter
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4219/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4219/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4219/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738446 - PreCommit-HIVE-TRUNK-Build

 Alter table concatenate oparetor will cause duplicate data
 --

 Key: HIVE-10685
 URL: https://issues.apache.org/jira/browse/HIVE-10685
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0, 1.0.0, 1.2.0, 1.1.0, 1.3.0, 1.2.1
Reporter: guoliming
Assignee: guoliming
 Fix For: 1.2.0, 1.1.0

 Attachments: HIVE-10685.1.patch, HIVE-10685.patch


 Orders table has 15 rows and stored as ORC. 
 {noformat}
 hive select count(*) from orders;
 OK
 15
 Time taken: 37.692 seconds, Fetched: 1 row(s)
 {noformat}
 The table contains 14 files; the size of each file is about 2.1 ~ 3.2 GB.
 After executing the command ALTER TABLE orders CONCATENATE;
 the table now has 1530115000 rows.
 My hive version is 1.1.0.
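For illustration, a minimal sketch (Python, not Hive's actual ORC merge code) of how a file-level concatenate can double-count rows: if the merged output lands in the table directory without the source files being replaced, a reader listing the directory sees every row twice.

```python
# Toy model of a table as a list of files, each file a list of rows.
# Purely illustrative; not how Hive implements ALTER TABLE CONCATENATE.

def concatenate_buggy(files):
    merged = [row for f in files for row in f]
    # Bug: the source files are left in place alongside the merged file,
    # so subsequent reads count every row twice.
    return files + [merged]

def concatenate_fixed(files):
    merged = [row for f in files for row in f]
    # The merged file replaces the sources.
    return [merged]

table = [[1, 2], [3], [4, 5]]            # three files, five rows total
rows = lambda fs: sum(len(f) for f in fs)

print(rows(concatenate_fixed(table)))    # 5: row count preserved
print(rows(concatenate_buggy(table)))    # 10: rows duplicated
```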



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10967) add mapreduce.job.tags to sql std authorization config whitelist

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578169#comment-14578169
 ] 

Hive QA commented on HIVE-10967:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738453/HIVE-10967.1.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9004 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4220/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4220/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4220/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738453 - PreCommit-HIVE-TRUNK-Build

 add mapreduce.job.tags to sql std authorization config whitelist
 

 Key: HIVE-10967
 URL: https://issues.apache.org/jira/browse/HIVE-10967
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10967.1.patch


 mapreduce.job.tags is set by oozie for HiveServer2 actions.
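For reference, the SQL standard authorization whitelist is a pattern match over parameter names, so allowing a parameter means making it match the configured pattern. A minimal sketch (the pattern below is illustrative only, not Hive's actual default for hive.security.authorization.sqlstd.confwhitelist):

```python
import re

# Illustrative allow-list check in the style of SQL standard
# authorization: a parameter may be set at runtime only if its name
# fully matches the whitelist pattern. The pattern here is a toy
# stand-in, not Hive's real default whitelist.
WHITELIST = re.compile(r"hive\.exec\..*|mapreduce\.job\.tags")

def may_set(param):
    return WHITELIST.fullmatch(param) is not None

print(may_set("mapreduce.job.tags"))   # True: needed by Oozie-launched actions
print(may_set("hive.metastore.uris"))  # False: not on this illustrative list
```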



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10962) Merge master to Spark branch 6/7/2015 [Spark Branch]

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578085#comment-14578085
 ] 

Hive QA commented on HIVE-10962:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738449/HIVE-10962.1-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 7943 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.initializationError
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_bitmap_auto_partitioned
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithURL
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/873/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/873/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-873/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738449 - PreCommit-HIVE-SPARK-Build

 Merge master to Spark branch 6/7/2015 [Spark Branch]
 

 Key: HIVE-10962
 URL: https://issues.apache.org/jira/browse/HIVE-10962
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
 Attachments: HIVE-10962.1-spark.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578058#comment-14578058
 ] 

Alan Gates commented on HIVE-10952:
---

Ok, figured it out.  You have to include the alter table set serdeproperties to 
get it to fail.

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch


 This section of alter1.q fails:
 create table alter1(a int, b int);
 describe extended alter1;
 Exception:
 {code}
 Trying to fetch a non-existent storage descriptor from hash 
 iNVRGkfwwQDGK9oX0fo9XA==^M
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1765)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getTableName(DDLSemanticAnalyzer.java:1807)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDescribeTable(DDLSemanticAnalyzer.java:1985)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:318)
 at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1128)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1176)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1065)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1055)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at 
 org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:311)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1069)
 at 
 org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1043)
 at 
 org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:139)
 at 
 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter1(TestCliDriver.java:123)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at junit.framework.TestCase.runTest(TestCase.java:176)
 at junit.framework.TestCase.runBare(TestCase.java:141)
 at junit.framework.TestResult$1.protect(TestResult.java:122)
 at junit.framework.TestResult.runProtected(TestResult.java:142)
 at junit.framework.TestResult.run(TestResult.java:125)
 at junit.framework.TestCase.run(TestCase.java:129)
 at junit.framework.TestSuite.runTest(TestSuite.java:255)
 at junit.framework.TestSuite.run(TestSuite.java:250)
 at 
 org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
 at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
 at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch 
 table alter1. java.lang.RuntimeException: Woh, bad!  Trying to fetch a 
 non-existent storage descriptor from hash iNVRGkfwwQDGK9oX0fo9XA==^M
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1121)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1068)
 at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1055)
 at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer$QualifiedNameUtil.getAttemptTableName(DDLSemanticAnalyzer.java:1747)
 {code}
 The partitioned counterpart alter2.q passes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578112#comment-14578112
 ] 

Daniel Dai commented on HIVE-10952:
---

Sure, I don't mean this is the right fix, just to share what I've found so far.

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10952-1.patch


 This section of alter1.q fails:
 create table alter1(a int, b int);
 describe extended alter1;
 The partitioned counterpart alter2.q passes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10415) hive.start.cleanup.scratchdir configuration is not taking effect

2015-06-08 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578081#comment-14578081
 ] 

Jimmy Xiang commented on HIVE-10415:


+1. Ok, this setting is off by default.

 hive.start.cleanup.scratchdir configuration is not taking effect
 

 Key: HIVE-10415
 URL: https://issues.apache.org/jira/browse/HIVE-10415
 Project: Hive
  Issue Type: Bug
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Attachments: HIVE-10415.patch


 This configuration hive.start.cleanup.scratchdir is not taking effect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10952) Describe a non-partitioned table fail

2015-06-08 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578111#comment-14578111
 ] 

Alan Gates commented on HIVE-10952:
---

Ok, I think I see the issue.  We aren't being aggressive enough about flushing 
the cache.  With each new SQL statement we should ensure the caches are flushed 
to force a read from HBase, as another thread or server may have changed the 
table in the meantime.
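A minimal sketch of that flushing policy (Python, illustrative names only, not the actual HBase metastore classes): clear the cache at the start of each statement so the first read of each object goes back to the backing store.

```python
# Toy cache-over-backing-store model. begin_statement() drops everything
# cached, so each SQL statement rereads from the backing store (a dict
# standing in for HBase here) and sees concurrent modifications.

class CachingStore:
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}

    def begin_statement(self):
        # Another thread or server may have altered the table; drop the
        # cached copies so the next read hits the backing store.
        self.cache.clear()

    def get_table(self, name):
        if name not in self.cache:
            self.cache[name] = self.backing[name]
        return self.cache[name]

hbase = {"alter1": {"serde": "default"}}
store = CachingStore(hbase)
store.begin_statement()
store.get_table("alter1")                 # caches the "default" version
hbase["alter1"] = {"serde": "custom"}     # concurrent alter table
store.begin_statement()                   # new statement flushes the cache
print(store.get_table("alter1")["serde"]) # custom
```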

 Describe a non-partitioned table fail
 -

 Key: HIVE-10952
 URL: https://issues.apache.org/jira/browse/HIVE-10952
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Reporter: Daniel Dai
Assignee: Alan Gates
 Fix For: hbase-metastore-branch

 Attachments: HIVE-10952-1.patch


 This section of alter1.q fails:
 create table alter1(a int, b int);
 describe extended alter1;

[jira] [Commented] (HIVE-10967) add mapreduce.job.tags to sql std authorization config whitelist

2015-06-08 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578038#comment-14578038
 ] 

Jason Dere commented on HIVE-10967:
---

+1

 add mapreduce.job.tags to sql std authorization config whitelist
 

 Key: HIVE-10967
 URL: https://issues.apache.org/jira/browse/HIVE-10967
 Project: Hive
  Issue Type: Bug
  Components: Authorization, SQLStandardAuthorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-10967.1.patch


 mapreduce.job.tags is set by oozie for HiveServer2 actions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10969) Test autogen_colalias failing on trunk

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578247#comment-14578247
 ] 

Hive QA commented on HIVE-10969:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738466/HIVE-10969.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9004 tests executed
*Failed tests:*
{noformat}
org.apache.hive.beeline.TestSchemaTool.testSchemaInit
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4221/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4221/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4221/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738466 - PreCommit-HIVE-TRUNK-Build

 Test autogen_colalias failing on trunk
 --

 Key: HIVE-10969
 URL: https://issues.apache.org/jira/browse/HIVE-10969
 Project: Hive
  Issue Type: Test
  Components: Tests
Affects Versions: 2.0.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-10969.patch


 Seems like HIVE-10728 didn't have the right golden file updates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10959) Templeton launcher job should reconnect to the running child job on task retry

2015-06-08 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578231#comment-14578231
 ] 

Ivan Mitic commented on HIVE-10959:
---

Thanks for reviewing Thejas!

Child jobs are tagged with the parent's job id. So even if there is more than 
one job, we should be able to find them all when we query for child jobs (I 
know this works for hive/pig jobs which spawn more than one MR job - I tested 
this). I assume a user could do the wrong thing here by not carrying the tag 
forward explicitly, but I would argue that is not supported.

In this patch I log a warning if we detect more than one child job in the MR 
case. Another, possibly better, way to handle this is to say that reconnect is 
not supported in this case and let the regular code path handle it (kill all 
child jobs and relaunch). Let me know what you think.
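The tag-based lookup and the fallback described above can be sketched like this (Python, with stand-in job records rather than Hadoop's real job API):

```python
# Illustrative reconnect-on-retry logic: child jobs carry the launcher's
# job id as a tag. On task retry, reconnect if exactly one tagged child
# is found; otherwise fall back to the existing kill-and-relaunch path.

def find_children(cluster, parent_id):
    return [j for j in cluster if parent_id in j["tags"]]

def on_task_retry(cluster, parent_id):
    children = find_children(cluster, parent_id)
    if len(children) == 1:
        return ("reconnect", children[0]["id"])
    # Zero or multiple children: don't guess which job to track;
    # kill any children and relaunch as before.
    return ("relaunch", None)

cluster = [
    {"id": "job_2", "tags": {"job_1"}},   # child tagged with parent id
    {"id": "job_3", "tags": set()},       # unrelated job
]
print(on_task_retry(cluster, "job_1"))    # ('reconnect', 'job_2')
```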

 Templeton launcher job should reconnect to the running child job on task retry
 --

 Key: HIVE-10959
 URL: https://issues.apache.org/jira/browse/HIVE-10959
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.15.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HIVE-10959.patch


 Currently, Templeton launcher kills all child jobs (jobs tagged with the 
 parent job's id) upon task retry. 
 Upon templeton launcher task retry, templeton should reconnect to the running 
 job and continue tracking its progress that way. 
 This logic cannot be used for all job kinds (e.g. for jobs that are driven by 
 the client side like regular hive). However, for MapReduceV2, and possibly 
 Tez and HiveOnTez, this should be the default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10855) Make HIVE-10568 work with Spark [Spark Branch]

2015-06-08 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-10855:
---
Attachment: HIVE-10855.1-spark.patch

Attached the same patch for another run with latest branch.

 Make HIVE-10568 work with Spark [Spark Branch]
 --

 Key: HIVE-10855
 URL: https://issues.apache.org/jira/browse/HIVE-10855
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Rui Li
 Attachments: HIVE-10855.1-spark.patch, HIVE-10855.1-spark.patch


 HIVE-10568 only works with Tez. It's good to make it also work for Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10855) Make HIVE-10568 work with Spark [Spark Branch]

2015-06-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578297#comment-14578297
 ] 

Xuefu Zhang commented on HIVE-10855:


Hi [~lirui], thanks for the information. Spark branch is still good, especially 
after today's merge. You can use it for faster precommit test if you wish. 
However, do let me know if you run into any problem with it. Working on master 
is fine too.

 Make HIVE-10568 work with Spark [Spark Branch]
 --

 Key: HIVE-10855
 URL: https://issues.apache.org/jira/browse/HIVE-10855
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Rui Li
 Attachments: HIVE-10855.1-spark.patch, HIVE-10855.1-spark.patch


 HIVE-10568 only works with Tez. It's good to make it also work for Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10855) Make HIVE-10568 work with Spark [Spark Branch]

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578337#comment-14578337
 ] 

Hive QA commented on HIVE-10855:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738490/HIVE-10855.1-spark.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 7943 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.initializationError
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_count_distinct
org.apache.hive.jdbc.TestSSL.testSSLVersion
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/874/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/874/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738490 - PreCommit-HIVE-SPARK-Build

 Make HIVE-10568 work with Spark [Spark Branch]
 --

 Key: HIVE-10855
 URL: https://issues.apache.org/jira/browse/HIVE-10855
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Rui Li
 Attachments: HIVE-10855.1-spark.patch, HIVE-10855.1-spark.patch


 HIVE-10568 only works with Tez. It's good to make it also work for Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10880) The bucket number is not respected in insert overwrite.

2015-06-08 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578293#comment-14578293
 ] 

Xuefu Zhang commented on HIVE-10880:


[~ychena], thanks for working on this. Looking at the patch, I'm not confident 
that I understand the root cause of the problem or how your patch addresses it. 
From the problem description, I originally thought it was a problem of setting 
the right number of reducers. However, your patch does not seem to take that 
direction. Instead, it appears to add the missing buckets by creating empty 
files. I'm not sure this fixes the root cause. In general, the rows should be 
relatively evenly distributed across the buckets, so missing or empty bucket 
files should be rare rather than normal.

Could you please share your thoughts on this?
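For context, bucketing routes each row to hash(clustering column) mod numBuckets; a toy sketch (not Hive's actual ObjectInspector hashing) of why, with evenly distributed data, the bucket count should be reflected in the number of files and empty buckets should be uncommon:

```python
# Toy bucketing: route each row to a bucket by hashing its clustering
# value modulo the bucket count, as the enforce-bucketing path should.
# Illustrative only; Hive uses its own hash functions, not Python's.

def split_into_buckets(rows, num_buckets):
    buckets = [[] for _ in range(num_buckets)]
    for r in rows:
        buckets[hash(r) % num_buckets].append(r)
    return buckets

data = ["firstinsert%d" % i for i in range(1, 9)]   # eight rows
buckets = split_into_buckets(data, 2)
print(len(buckets))                       # 2: one file per bucket
print(sum(len(b) for b in buckets))       # 8: no row lost or duplicated
```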

 The bucket number is not respected in insert overwrite.
 ---

 Key: HIVE-10880
 URL: https://issues.apache.org/jira/browse/HIVE-10880
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
Priority: Blocker
 Attachments: HIVE-10880.1.patch, HIVE-10880.2.patch, 
 HIVE-10880.3.patch


 When hive.enforce.bucketing is true, the bucket number defined in the table 
 is no longer respected in current master and 1.2. This is a regression.
 Reproduce:
 {noformat}
 CREATE TABLE IF NOT EXISTS buckettestinput( 
 data string 
 ) 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 CREATE TABLE IF NOT EXISTS buckettestoutput1( 
 data string 
 )CLUSTERED BY(data) 
 INTO 2 BUCKETS 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 CREATE TABLE IF NOT EXISTS buckettestoutput2( 
 data string 
 )CLUSTERED BY(data) 
 INTO 2 BUCKETS 
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
 Then I inserted the following data into the buckettestinput table
 firstinsert1 
 firstinsert2 
 firstinsert3 
 firstinsert4 
 firstinsert5 
 firstinsert6 
 firstinsert7 
 firstinsert8 
 secondinsert1 
 secondinsert2 
 secondinsert3 
 secondinsert4 
 secondinsert5 
 secondinsert6 
 secondinsert7 
 secondinsert8
 set hive.enforce.bucketing = true; 
 set hive.enforce.sorting=true;
 insert overwrite table buckettestoutput1 
 select * from buckettestinput where data like 'first%';
 set hive.auto.convert.sortmerge.join=true; 
 set hive.optimize.bucketmapjoin = true; 
 set hive.optimize.bucketmapjoin.sortedmerge = true; 
 select * from buckettestoutput1 a join buckettestoutput2 b on (a.data=b.data);
 Error: Error while compiling statement: FAILED: SemanticException [Error 
 10141]: Bucketed table metadata is not correct. Fix the metadata or don't use 
 bucketed mapjoin, by setting hive.enforce.bucketmapjoin to false. The number 
 of buckets for table buckettestoutput1 is 2, whereas the number of files is 1 
 (state=42000,code=10141)
 {noformat}
 The related debug information related to insert overwrite:
 {noformat}
 0: jdbc:hive2://localhost:1> insert overwrite table buckettestoutput1 
 0: jdbc:hive2://localhost:1> select * from buckettestinput where data like 'first%';
 INFO  : Number of reduce tasks determined at compile time: 2
 INFO  : In order to change the average load for a reducer (in bytes):
 INFO  :   set hive.exec.reducers.bytes.per.reducer=number
 INFO  : In order to limit the maximum number of reducers:
 INFO  :   set hive.exec.reducers.max=number
 INFO  : In order to set a constant number of reducers:
 INFO  :   set mapred.reduce.tasks=number
 INFO  : Job running in-process (local Hadoop)
 INFO  : 2015-06-01 11:09:29,650 Stage-1 map = 86%,  reduce = 100%
 INFO  : Ended Job = job_local107155352_0001
 INFO  : Loading data to table default.buckettestoutput1 from 
 file:/user/hive/warehouse/buckettestoutput1/.hive-staging_hive_2015-06-01_11-09-28_166_3109203968904090801-1/-ext-1
 INFO  : Table default.buckettestoutput1 stats: [numFiles=1, numRows=4, 
 totalSize=52, rawDataSize=48]
 No rows affected (1.692 seconds)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10958) Centos: TestMiniTezCliDriver.testCliDriver_mergejoin fails

2015-06-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578312#comment-14578312
 ] 

Ashutosh Chauhan commented on HIVE-10958:
-

+1

 Centos: TestMiniTezCliDriver.testCliDriver_mergejoin fails
 --

 Key: HIVE-10958
 URL: https://issues.apache.org/jira/browse/HIVE-10958
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong
 Attachments: HIVE-10958.01.patch


 Centos: TestMiniTezCliDriver.testCliDriver_mergejoin fails due to the 
 statement set mapred.reduce.tasks = 18;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10841) [WHERE col is not null] does not work sometimes for queries with many JOIN statements

2015-06-08 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578345#comment-14578345
 ] 

Alexander Pivovarov commented on HIVE-10841:


[~jpullokkaran], thank you for HIVE-10841.1.patch.
It changes 2 files:
- SemanticAnalyzer.java
- OpProcFactory.java

I tried the fix in SemanticAnalyzer.java only. It solves the issue with my test 
query.
So it looks like the fix in SemanticAnalyzer.java is enough to resolve the issue.

Why do we need the fixes in OpProcFactory.java? Should we open a separate JIRA for 
them?

2. Looks like we need to rerun a bunch of Cli, Tez, and Spark tests...

 [WHERE col is not null] does not work sometimes for queries with many JOIN 
 statements
 -

 Key: HIVE-10841
 URL: https://issues.apache.org/jira/browse/HIVE-10841
 Project: Hive
  Issue Type: Bug
  Components: Query Planning, Query Processor
Affects Versions: 0.13.0, 0.14.0, 0.13.1, 1.2.0, 1.3.0
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
 Attachments: HIVE-10841.1.patch, HIVE-10841.patch


 The result from the following SELECT query is 3 rows but it should be 1 row.
 I checked it in MySQL - it returned 1 row.
 To reproduce the issue in Hive
 1. prepare tables
 {code}
 drop table if exists L;
 drop table if exists LA;
 drop table if exists FR;
 drop table if exists A;
 drop table if exists PI;
 drop table if exists acct;
 create table L as select 4436 id;
 create table LA as select 4436 loan_id, 4748 aid, 4415 pi_id;
 create table FR as select 4436 loan_id;
 create table A as select 4748 id;
 create table PI as select 4415 id;
 create table acct as select 4748 aid, 10 acc_n, 122 brn;
 insert into table acct values(4748, null, null);
 insert into table acct values(4748, null, null);
 {code}
 2. run SELECT query
 {code}
 select
   acct.ACC_N,
   acct.brn
 FROM L
 JOIN LA ON L.id = LA.loan_id
 JOIN FR ON L.id = FR.loan_id
 JOIN A ON LA.aid = A.id
 JOIN PI ON PI.id = LA.pi_id
 JOIN acct ON A.id = acct.aid
 WHERE
   L.id = 4436
   and acct.brn is not null;
 {code}
 the result is 3 rows
 {code}
 10122
 NULL  NULL
 NULL  NULL
 {code}
 but it should be 1 row
 {code}
 10122
 {code}
 2.1 explain select ... output for hive-1.3.0 MR
 {code}
 STAGE DEPENDENCIES:
   Stage-12 is a root stage
   Stage-9 depends on stages: Stage-12
   Stage-0 depends on stages: Stage-9
 STAGE PLANS:
   Stage: Stage-12
 Map Reduce Local Work
   Alias - Map Local Tables:
 a 
   Fetch Operator
 limit: -1
 acct 
   Fetch Operator
 limit: -1
 fr 
   Fetch Operator
 limit: -1
 l 
   Fetch Operator
 limit: -1
 pi 
   Fetch Operator
 limit: -1
   Alias - Map Local Operator Tree:
 a 
   TableScan
 alias: a
 Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column 
 stats: NONE
 Filter Operator
   predicate: id is not null (type: boolean)
   Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE 
 Column stats: NONE
   HashTable Sink Operator
 keys:
   0 _col5 (type: int)
   1 id (type: int)
   2 aid (type: int)
 acct 
   TableScan
 alias: acct
 Statistics: Num rows: 3 Data size: 31 Basic stats: COMPLETE 
 Column stats: NONE
 Filter Operator
   predicate: aid is not null (type: boolean)
   Statistics: Num rows: 2 Data size: 20 Basic stats: COMPLETE 
 Column stats: NONE
   HashTable Sink Operator
 keys:
   0 _col5 (type: int)
   1 id (type: int)
   2 aid (type: int)
 fr 
   TableScan
 alias: fr
 Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column 
 stats: NONE
 Filter Operator
   predicate: (loan_id = 4436) (type: boolean)
   Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE 
 Column stats: NONE
   HashTable Sink Operator
 keys:
   0 4436 (type: int)
   1 4436 (type: int)
   2 4436 (type: int)
 l 
   TableScan
 alias: l
 Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE Column 
 stats: NONE
 Filter Operator
   predicate: (id = 4436) (type: boolean)
   Statistics: Num rows: 1 Data size: 4 Basic stats: COMPLETE 
 Column stats: NONE
   HashTable Sink Operator
 keys:
   0 4436 (type: int)
 

[jira] [Commented] (HIVE-10855) Make HIVE-10568 work with Spark [Spark Branch]

2015-06-08 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578263#comment-14578263
 ] 

Rui Li commented on HIVE-10855:
---

Hi [~xuefuz], thanks for taking care of this. We need to solve HIVE-10903 first 
to get the test output right, which I'm working on.
Just to make sure: is the Spark branch abandoned? Should patches for HoS target 
master directly?

 Make HIVE-10568 work with Spark [Spark Branch]
 --

 Key: HIVE-10855
 URL: https://issues.apache.org/jira/browse/HIVE-10855
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Xuefu Zhang
Assignee: Rui Li
 Attachments: HIVE-10855.1-spark.patch, HIVE-10855.1-spark.patch


 HIVE-10568 only works with Tez. It's good to make it also work for Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2015-06-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-6411:
-
Labels:   (was: TODOC14)

 Support more generic way of using composite key for HBaseHandler
 

 Key: HIVE-6411
 URL: https://issues.apache.org/jira/browse/HIVE-6411
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-6411.1.patch.txt, HIVE-6411.10.patch.txt, 
 HIVE-6411.11.patch.txt, HIVE-6411.2.patch.txt, HIVE-6411.3.patch.txt, 
 HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, HIVE-6411.6.patch.txt, 
 HIVE-6411.7.patch.txt, HIVE-6411.8.patch.txt, HIVE-6411.9.patch.txt


 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and 
 keyOI with those. 
 Initial implementation is based on factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2015-06-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14576663#comment-14576663
 ] 

Lefty Leverenz commented on HIVE-6411:
--

[~amains12] documented this in the wiki so I'm removing the TODOC14 label.  
Thanks Andrew!

* [HBaseIntegration -- Complex Composite Row Keys and HBaseKeyFactory | 
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-ComplexCompositeRowKeysandHBaseKeyFactory]

Note that the doc has a TODO (so maybe the TODOC14 label should be restored):

bq.  hbase.composite.key.factory should be the fully qualified class name of 
a class implementing HBaseKeyFactory. See SampleHBaseKeyFactory2 for a fixed 
length example in the same package. This class must be on your classpath in 
order for the above example to work. TODO: place these in an accessible place; 
they're currently only in test code.

 Support more generic way of using composite key for HBaseHandler
 

 Key: HIVE-6411
 URL: https://issues.apache.org/jira/browse/HIVE-6411
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.14.0

 Attachments: HIVE-6411.1.patch.txt, HIVE-6411.10.patch.txt, 
 HIVE-6411.11.patch.txt, HIVE-6411.2.patch.txt, HIVE-6411.3.patch.txt, 
 HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, HIVE-6411.6.patch.txt, 
 HIVE-6411.7.patch.txt, HIVE-6411.8.patch.txt, HIVE-6411.9.patch.txt


 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and 
 keyOI with those. 
 Initial implementation is based on factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10944) Fix HS2 for Metrics

2015-06-08 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-10944:
-
Attachment: HIVE-10944.3.patch

Updating the patch to address the latest review comments.

 Fix HS2 for Metrics
 ---

 Key: HIVE-10944
 URL: https://issues.apache.org/jira/browse/HIVE-10944
 Project: Hive
  Issue Type: Bug
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10944.2.patch, HIVE-10944.3.patch, HIVE-10944.patch


 Some issues with initializing the new HS2 metrics:
 1.  Metrics is not working properly in HS2 due to wrong init checks.
 2.  If metrics is not enabled, JVMPauseMonitor logs trash to the HS2 logs because 
 it wasn't checking whether metrics was enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10598) Vectorization borks when column is added to table.

2015-06-08 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-10598:

Attachment: HIVE-10598.01.patch

This is not the whole solution -- it is just phase 1.

Still need to:
1) Make VectorMapOperator stop using the MapOperator super class and do simpler 
processing of partitions itself.
2) Make use of the table column information to default missing columns to null.
3) Do type conversion.
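The idea behind item 2 can be sketched very simply: when an old partition's schema lacks a column the table now has, the reader can mark that column's vector as null for every row in the batch instead of failing. This is a hypothetical illustration; the names and the flat boolean array stand in for Hive's actual vectorized column representation.

```java
import java.util.Arrays;

public class NullColumnSketch {
    static final int BATCH_SIZE = 1024;

    // For a column that exists in the table schema but not in the old
    // partition: flag every row in the batch as null.
    static boolean[] missingColumnAsNulls() {
        boolean[] isNull = new boolean[BATCH_SIZE];
        Arrays.fill(isNull, true);
        return isNull;
    }

    static boolean allTrue(boolean[] xs) {
        for (boolean x : xs) if (!x) return false;
        return true;
    }

    public static void main(String[] args) {
        boolean[] isNull = missingColumnAsNulls();
        System.out.println("all rows null for missing column: " + allTrue(isNull));
    }
}
```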

 Vectorization borks when column is added to table.
 --

 Key: HIVE-10598
 URL: https://issues.apache.org/jira/browse/HIVE-10598
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Mithun Radhakrishnan
Assignee: Matt McCline
 Attachments: HIVE-10598.01.patch


 Consider the following table definition:
 {code:sql}
 create table foobar ( foo string, bar string ) partitioned by (dt string) 
 stored as orc;
 alter table foobar add partition( dt='20150101' ) ;
 {code}
 Say the partition has the following data:
 {noformat}
 1 one 20150101
 2 two 20150101
 3 three   20150101
 {noformat}
 If a new column is added to the table schema (and the partition continues to 
 have the old schema), vectorized reads from the old partitions fail as follows:
 {code:sql}
 alter table foobar add columns( goo string );
 select count(1) from foobar;
 {code}
 {code:title=stacktrace}
 java.lang.Exception: java.lang.RuntimeException: Error creating a batch
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
 Caused by: java.lang.RuntimeException: Error creating a batch
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:114)
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:52)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:84)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:42)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.createValue(HadoopShimsSecure.java:156)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.createValue(MapTask.java:180)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: No type entry 
 found for column 3 in map {4=Long}
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.addScratchColumnsToBatch(VectorizedRowBatchCtx.java:632)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.createVectorizedRowBatch(VectorizedRowBatchCtx.java:343)
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:112)
   ... 14 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10815) Let HiveMetaStoreClient Choose MetaStore Randomly

2015-06-08 Thread Nemon Lou (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14576724#comment-14576724
 ] 

Nemon Lou commented on HIVE-10815:
--

[~thiruvel]   Thanks for the reply.
I find that only the first connection uses the fixed order.
There is already a mechanism that promotes a random MetaStore URI on reconnect.
(See promoteRandomMetaStoreURI() in the reconnect method for details.)
So shuffling at the creation phase is enough to make different clients connect to 
different MetaStores.
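The creation-phase shuffle described above amounts to randomizing the configured URI list once per client, with reconnect-time promotion handling failover afterwards. A minimal sketch of that idea follows; the class and method names are illustrative, not Hive's actual code.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UriShuffleSketch {
    // Shuffle a copy of the configured URIs so each client starts from a
    // random metastore; the original configured order is left untouched.
    static List<URI> shuffledUris(List<URI> configured) {
        List<URI> copy = new ArrayList<>(configured);
        Collections.shuffle(copy);
        return copy;
    }

    public static void main(String[] args) {
        List<URI> uris = List.of(
            URI.create("thrift://ms1:9083"),
            URI.create("thrift://ms2:9083"),
            URI.create("thrift://ms3:9083"));
        // Different clients get different starting points, spreading load.
        System.out.println("connection order: " + shuffledUris(uris));
    }
}
```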

 Let HiveMetaStoreClient Choose MetaStore Randomly
 -

 Key: HIVE-10815
 URL: https://issues.apache.org/jira/browse/HIVE-10815
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2, Metastore
Affects Versions: 1.2.0
Reporter: Nemon Lou
Assignee: Nemon Lou
 Attachments: HIVE-10815.patch


 Currently HiveMetaStoreClient uses a fixed order to choose MetaStore URIs 
 when multiple metastores are configured.
  Choosing a MetaStore randomly would be good for load balancing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-2599) Support Composit/Compound Keys with HBaseStorageHandler

2015-06-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14576667#comment-14576667
 ] 

Lefty Leverenz commented on HIVE-2599:
--

Doc note:  [~amains12] documented this in the HBase Integration wikidoc, so I'm 
removing the TODOC13 label.  Thanks Andrew!

* [HBase Integration -- Simple Composite Row Keys | 
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration#HBaseIntegration-SimpleCompositeRowKeys]

 Support Composit/Compound Keys with HBaseStorageHandler
 ---

 Key: HIVE-2599
 URL: https://issues.apache.org/jira/browse/HIVE-2599
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.8.0
Reporter: Hans Uhlig
Assignee: Swarnim Kulkarni
 Fix For: 0.13.0

 Attachments: HIVE-2599.1.patch.txt, HIVE-2599.2.patch.txt, 
 HIVE-2599.2.patch.txt, HIVE-2599.3.patch.txt, HIVE-2599.4.patch.txt


 It would be really nice for Hive to be able to understand composite keys from 
 an underlying HBase schema. Currently we have to store key fields twice to make 
 the data available both in the key and as columns. I noticed John Sichi mentioned 
 in HIVE-1228 that this would be a separate issue, but I can't find any follow-up. 
 How feasible is this in the HBaseStorageHandler?
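To make the request concrete: a composite HBase row key typically packs several logical fields into one key, and a handler that understands the layout could expose them as separate Hive columns instead of requiring them to be duplicated in value columns. A purely hypothetical sketch of splitting a delimited composite key, with names of my own invention:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class CompositeKeySketch {
    // Split a delimited composite row key such as "US|2015|order42" into its
    // logical fields, which a storage handler could then surface as struct
    // members of the key column.
    static List<String> splitKey(String rowKey, String delimiter) {
        // Pattern.quote makes regex metacharacters like '|' safe to use
        // as literal delimiters.
        return Arrays.asList(rowKey.split(Pattern.quote(delimiter)));
    }

    public static void main(String[] args) {
        List<String> fields = splitKey("US|2015|order42", "|");
        System.out.println(fields); // [US, 2015, order42]
    }
}
```

Real composite keys are often fixed-width or binary-packed rather than delimited, which is why a pluggable factory (as later done in HIVE-6411) is more general than any single parsing scheme.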



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-2599) Support Composit/Compound Keys with HBaseStorageHandler

2015-06-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-2599:
-
Labels:   (was: TODOC13)

 Support Composit/Compound Keys with HBaseStorageHandler
 ---

 Key: HIVE-2599
 URL: https://issues.apache.org/jira/browse/HIVE-2599
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.8.0
Reporter: Hans Uhlig
Assignee: Swarnim Kulkarni
 Fix For: 0.13.0

 Attachments: HIVE-2599.1.patch.txt, HIVE-2599.2.patch.txt, 
 HIVE-2599.2.patch.txt, HIVE-2599.3.patch.txt, HIVE-2599.4.patch.txt


 It would be really nice for Hive to be able to understand composite keys from 
 an underlying HBase schema. Currently we have to store key fields twice to make 
 the data available both in the key and as columns. I noticed John Sichi mentioned 
 in HIVE-1228 that this would be a separate issue, but I can't find any follow-up. 
 How feasible is this in the HBaseStorageHandler?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10307) Support to use number literals in partition column

2015-06-08 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-10307:
--
Labels:   (was: TODOC1.2)

 Support to use number literals in partition column
 --

 Key: HIVE-10307
 URL: https://issues.apache.org/jira/browse/HIVE-10307
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
 Fix For: 1.2.0

 Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
 HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
 HIVE-10307.6.patch, HIVE-10307.patch


 Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
 literals with postfix like Y, S, L, or BD appended to the number. These 
 literals work in most Hive queries, but do not when they are used as 
 partition column value. For a partitioned table like:
 create table partcoltypenum (key int, value string) partitioned by (tint 
 tinyint, sint smallint, bint bigint);
 insert into partcoltypenum partition (tint=100Y, sint=1S, 
 bint=1000L) select key, value from src limit 30;
 Queries like select, describe, and drop partition do not work. For example,
 select * from partcoltypenum where tint=100Y and sint=1S and 
 bint=1000L;
 does not return any rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10598) Vectorization borks when column is added to table.

2015-06-08 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577019#comment-14577019
 ] 

Hive QA commented on HIVE-10598:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12738311/HIVE-10598.01.patch

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 9002 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autogen_colalias
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_int_type_promotion
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testVectorization
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testVectorizationWithAcid
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testVectorizationWithBuckets
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4210/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/4210/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-4210/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12738311 - PreCommit-HIVE-TRUNK-Build

 Vectorization borks when column is added to table.
 --

 Key: HIVE-10598
 URL: https://issues.apache.org/jira/browse/HIVE-10598
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Mithun Radhakrishnan
Assignee: Matt McCline
 Attachments: HIVE-10598.01.patch


 Consider the following table definition:
 {code:sql}
 create table foobar ( foo string, bar string ) partitioned by (dt string) 
 stored as orc;
 alter table foobar add partition( dt='20150101' ) ;
 {code}
 Say the partition has the following data:
 {noformat}
 1 one 20150101
 2 two 20150101
 3 three   20150101
 {noformat}
 If a new column is added to the table schema (and the partition continues to 
 have the old schema), vectorized reads from the old partitions fail as follows:
 {code:sql}
 alter table foobar add columns( goo string );
 select count(1) from foobar;
 {code}
 {code:title=stacktrace}
 java.lang.Exception: java.lang.RuntimeException: Error creating a batch
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
 Caused by: java.lang.RuntimeException: Error creating a batch
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:114)
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:52)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:84)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.createValue(CombineHiveRecordReader.java:42)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.createValue(HadoopShimsSecure.java:156)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.createValue(MapTask.java:180)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: No type entry 
 found for column 3 in map {4=Long}
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.addScratchColumnsToBatch(VectorizedRowBatchCtx.java:632)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.createVectorizedRowBatch(VectorizedRowBatchCtx.java:343)
   at 
 org.apache.hadoop.hive.ql.io.orc.VectorizedOrcInputFormat$VectorizedOrcRecordReader.createValue(VectorizedOrcInputFormat.java:112)
   ... 14 more
 {code}



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HIVE-10165) Improve hive-hcatalog-streaming extensibility and support updates and deletes.

2015-06-08 Thread Elliot West (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14576753#comment-14576753
 ] 

Elliot West commented on HIVE-10165:


I've removed the implementation section from the issue description as it is now 
outdated and more accurately described in the comments.

 Improve hive-hcatalog-streaming extensibility and support updates and deletes.
 --

 Key: HIVE-10165
 URL: https://issues.apache.org/jira/browse/HIVE-10165
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 1.2.0
Reporter: Elliot West
Assignee: Elliot West
  Labels: streaming_api
 Attachments: HIVE-10165.0.patch, HIVE-10165.4.patch, 
 HIVE-10165.5.patch, HIVE-10165.6.patch, mutate-system-overview.png


 h3. Overview
 I'd like to extend the 
 [hive-hcatalog-streaming|https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest]
  API so that it also supports the writing of record updates and deletes in 
 addition to the already supported inserts.
 h3. Motivation
 We have many Hadoop processes outside of Hive that merge changed facts into 
 existing datasets. Traditionally we achieve this by: reading in a 
 ground-truth dataset and a modified dataset, grouping by a key, sorting by a 
 sequence and then applying a function to determine inserted, updated, and 
 deleted rows. However, in our current scheme we must rewrite all partitions 
 that may potentially contain changes. In practice the number of mutated 
 records is very small when compared with the records contained in a 
 partition. This approach results in a number of operational issues:
 * Excessive amount of write activity required for small data changes.
 * Downstream applications cannot robustly read these datasets while they are 
 being updated.
 * Due to the scale of the updates (hundreds of partitions), the scope for 
 contention is high. 
 I believe we can address this problem by instead writing only the changed 
 records to a Hive transactional table. This should drastically reduce the 
 amount of data that we need to write and also provide a means for managing 
 concurrent access to the data. Our existing merge processes can read and 
 retain each record's {{ROW_ID}}/{{RecordIdentifier}} and pass this through to 
 an updated form of the hive-hcatalog-streaming API which will then have the 
 required data to perform an update or insert in a transactional manner. 
 h3. Benefits
 * Enables the creation of large-scale dataset merge processes  
 * Opens up Hive transactional functionality in an accessible manner to 
 processes that operate outside of Hive.
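The classify step in the merge scheme described above (group by key, sort by sequence, then decide what changed) can be sketched for a single key as follows. This is an illustrative sketch of the general technique, not the proposed API; all names here are hypothetical.

```java
public class MergeSketch {
    enum Op { INSERT, UPDATE, DELETE, NONE }

    // base: the existing ground-truth record for this key, or null if absent.
    // change: the incoming record for this key, or null if removed upstream.
    static Op classify(String base, String change) {
        if (base == null && change != null) return Op.INSERT;
        if (base != null && change == null) return Op.DELETE;
        if (base != null && !base.equals(change)) return Op.UPDATE;
        return Op.NONE; // unchanged rows need not be rewritten at all
    }

    public static void main(String[] args) {
        System.out.println(classify(null, "new"));    // INSERT
        System.out.println(classify("old", null));    // DELETE
        System.out.println(classify("old", "new"));   // UPDATE
        System.out.println(classify("same", "same")); // NONE
    }
}
```

Only the INSERT/UPDATE/DELETE rows would be handed to the extended streaming API (with the retained RecordIdentifier for updates and deletes), which is what keeps the write volume proportional to the change set rather than the partition size.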



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10307) Support to use number literals in partition column

2015-06-08 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14576764#comment-14576764
 ] 

Lefty Leverenz commented on HIVE-10307:
---

Good docs, thanks [~ctang.ma]!  I'm removing the TODOC1.2 label.

Here are the doc links:

# [Configuration Properties – hive.typecheck.on.insert | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.typecheck.on.insert]
# [DDL – Alter Partition | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterPartition]
# [DDL – Rename Partition | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RenamePartition]
# [DDL – Describe Partition | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-DescribePartition]
# [DML – Inserting data into Hive Tables from queries | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingdataintoHiveTablesfromqueries]

By the way, links in JIRA comments are enclosed in square brackets, with | 
separating the link text from the URL.  The configuration parameter's URL has 
this format:  
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.typecheck.on.insert
 .

 Support to use number literals in partition column
 --

 Key: HIVE-10307
 URL: https://issues.apache.org/jira/browse/HIVE-10307
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 1.0.0
Reporter: Chaoyu Tang
Assignee: Chaoyu Tang
  Labels: TODOC1.2
 Fix For: 1.2.0

 Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
 HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
 HIVE-10307.6.patch, HIVE-10307.patch


 Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
 literals with postfix like Y, S, L, or BD appended to the number. These 
 literals work in most Hive queries, but do not when they are used as 
 partition column value. For a partitioned table like:
 create table partcoltypenum (key int, value string) partitioned by (tint 
 tinyint, sint smallint, bint bigint);
 insert into partcoltypenum partition (tint=100Y, sint=1S, 
 bint=1000L) select key, value from src limit 30;
 Queries like select, describe, and drop partition do not work. For example,
 select * from partcoltypenum where tint=100Y and sint=1S and 
 bint=1000L;
 does not return any rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

