[jira] [Commented] (HIVE-6380) Specify jars/files when creating permanent UDFs

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898921#comment-13898921
 ] 

Hive QA commented on HIVE-6380:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628089/HIVE-6380.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5104 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin6
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1278/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1278/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628089

 Specify jars/files when creating permanent UDFs
 ---

 Key: HIVE-6380
 URL: https://issues.apache.org/jira/browse/HIVE-6380
 Project: Hive
  Issue Type: Sub-task
  Components: UDF
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-6380.1.patch


 Need a way for a permanent UDF to reference jars/files.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6203) Privileges of role granted indirectly to user are not applied

2014-02-12 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6203:


Status: Patch Available  (was: Open)

Preliminary test

 Privileges of role granted indirectly to user are not applied
 

 Key: HIVE-6203
 URL: https://issues.apache.org/jira/browse/HIVE-6203
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-6203.1.patch.txt


 For example, 
 {noformat}
 create role r1;
 create role r2;
 grant select on table eq to role r1;
 grant role r1 to role r2;
 grant role r2 to user admin;
 select * from eq limit 5;
 {noformat}
 admin -> r2 -> r1 -> SELECT on table eq,
 but user admin fails to access table eq.
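The expected behavior is a transitive closure over the role-grant graph: a privilege granted to a role should apply to any user who holds that role directly or through a chain of role-to-role grants. A minimal sketch of that resolution (hypothetical names, not Hive's actual authorization code):

```python
# Sketch of transitive role resolution (illustrative only, not Hive's code).
from collections import deque

def effective_roles(principal, role_grants):
    """BFS over the grant graph: all roles reachable from principal."""
    seen = set()
    queue = deque(role_grants.get(principal, []))
    while queue:
        role = queue.popleft()
        if role not in seen:
            seen.add(role)
            queue.extend(role_grants.get(role, []))
    return seen

def has_privilege(principal, priv, table, role_grants, role_privs):
    """True if any directly or indirectly held role carries the privilege."""
    return any((priv, table) in role_privs.get(r, set())
               for r in effective_roles(principal, role_grants))

# The repro above: SELECT on eq -> r1; r1 -> r2; r2 -> admin.
grants = {"admin": ["r2"], "r2": ["r1"]}
privs = {"r1": {("SELECT", "eq")}}
assert has_privilege("admin", "SELECT", "eq", grants, privs)
```

The bug report amounts to Hive checking only directly granted roles instead of the full reachable set.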



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6203) Privileges of role granted indirectly to user are not applied

2014-02-12 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6203:


Attachment: HIVE-6203.1.patch.txt

 Privileges of role granted indirectly to user are not applied
 

 Key: HIVE-6203
 URL: https://issues.apache.org/jira/browse/HIVE-6203
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-6203.1.patch.txt


 For example, 
 {noformat}
 create role r1;
 create role r2;
 grant select on table eq to role r1;
 grant role r1 to role r2;
 grant role r2 to user admin;
 select * from eq limit 5;
 {noformat}
 admin -> r2 -> r1 -> SELECT on table eq,
 but user admin fails to access table eq.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6402) Improve Hive behavior when deleting data with misconfigured Trash

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899017#comment-13899017
 ] 

Hive QA commented on HIVE-6402:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628287/HIVE-6402.2.patch

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 5086 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_format_loc
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_13_managed_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_14_managed_location_over_existing
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_17_part_managed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_20_part_managed_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_update
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapreduce1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_date2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udf_isnull_isnotnull
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_ppd_key_range
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_pushdown
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_single_sourced_multi_insert
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_ppd_key_ranges
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_archive2
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_archive_insert2
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_archive_insert4
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_archive_multi2
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_archive_multi4
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_archive_multi6
org.apache.hadoop.hive.metastore.TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext
org.apache.hcatalog.mapreduce.TestHCatMultiOutputFormat.org.apache.hcatalog.mapreduce.TestHCatMultiOutputFormat
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testAlterTablePartRelocateFail
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testAlterTableRelocateFail
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1281/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1281/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628287

 Improve Hive behavior when deleting data with misconfigured Trash
 ---

 Key: HIVE-6402
 URL: https://issues.apache.org/jira/browse/HIVE-6402
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-6402.2.patch, HIVE-6402.patch


 Today, if HDFS Trash is enabled but misconfigured, drop db/table/partition may 
 succeed while the data is not actually dropped. It seems dropping the data was 
 not considered very important by design.
 This is confusing behavior; the user should at least be notified when 
 this is happening.
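The proposed behavior, surfacing a warning instead of silently leaving the data in place, can be sketched as follows (hypothetical helper names; `move_to_trash` stands in for the boolean result of Hadoop's `Trash.moveToTrash()`):

```python
import warnings

def drop_data(path, trash_enabled, move_to_trash):
    """Drop table/partition data, warning when a configured Trash fails.

    move_to_trash is a callable returning True on success, mirroring the
    boolean result of Hadoop's Trash.moveToTrash().
    """
    if not trash_enabled:
        return "deleted"  # plain delete path
    if move_to_trash(path):
        return "trashed"
    # Trash is enabled but misconfigured: today the drop silently "succeeds".
    # The proposal is to at least notify the user that data was left behind.
    warnings.warn(f"Trash is enabled but moving {path} to Trash failed; "
                  "data was NOT removed", RuntimeWarning)
    return "left-in-place"
```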



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6412) SMB join on Decimal columns causes caste exception in JoinUtil.computeKeys

2014-02-12 Thread Remus Rusanu (JIRA)
Remus Rusanu created HIVE-6412:
--

 Summary: SMB join on Decimal columns causes caste exception in 
JoinUtil.computeKeys
 Key: HIVE-6412
 URL: https://issues.apache.org/jira/browse/HIVE-6412
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Priority: Critical


Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to 
org.apache.hadoop.hive.common.type.HiveDecimal
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:49)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:27)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:281)
at 
org.apache.hadoop.hive.ql.exec.JoinUtil.computeKeys(JoinUtil.java:143)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.next(SMBMapJoinOperator.java:809)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.nextHive(SMBMapJoinOperator.java:771)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.setupContext(SMBMapJoinOperator.java:710)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.setUpFetchContexts(SMBMapJoinOperator.java:538)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.processOp(SMBMapJoinOperator.java:248)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)


Repro:
create table vsmb_bucket_1(key decimal(9,0), value decimal(38,10)) 
  CLUSTERED BY (key) 
  SORTED BY (key) INTO 1 BUCKETS 
  STORED AS ORC;
create table vsmb_bucket_2(key decimal(19,3), value decimal(28,0)) 
  CLUSTERED BY (key) 
  SORTED BY (key) INTO 1 BUCKETS 
  STORED AS ORC;
  
insert into table vsmb_bucket_1 
  select cast(cint as decimal(9,0)) as key, 
cast(cfloat as decimal(38,10)) as value 
  from alltypesorc limit 2;
insert into table vsmb_bucket_2 
  select cast(cint as decimal(19,3)) as key, 
cast(cfloat as decimal(28,0)) as value 
  from alltypesorc limit 2;

set hive.optimize.bucketmapjoin = true;
set hive.optimize.bucketmapjoin.sortedmerge = true;
set hive.auto.convert.sortmerge.join.noconditionaltask = true;
set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

explain
select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
b.key;
select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
b.key;
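The failure mode above, an object inspector handed the Writable wrapper where it expects the underlying value type, can be illustrated generically (toy stand-in classes, not Hive's actual types):

```python
class HiveDecimal:          # stand-in for the plain value type
    def __init__(self, value):
        self.value = value

class HiveDecimalWritable:  # stand-in for the serialization wrapper
    def __init__(self, value):
        self._held = HiveDecimal(value)
    def get_hive_decimal(self):
        return self._held

def get_primitive_writable_object(obj):
    """Mimics JavaHiveDecimalObjectInspector: expects the raw HiveDecimal.
    Passing the Writable wrapper instead is the reported ClassCastException."""
    if not isinstance(obj, HiveDecimal):
        raise TypeError(f"{type(obj).__name__} cannot be cast to HiveDecimal")
    return HiveDecimalWritable(obj.value)

def compute_key(obj):
    # One possible defensive fix: unwrap the Writable before the inspector.
    if isinstance(obj, HiveDecimalWritable):
        obj = obj.get_hive_decimal()
    return get_primitive_writable_object(obj)
```

The SMB merge path apparently feeds the Writable wrapper straight through JoinUtil.computeKeys while the object inspector casts to HiveDecimal, hence the exception.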



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Review Request 17737: Add DECIMAL support to vectorized group by operator

2014-02-12 Thread Remus Rusanu


 On Feb. 10, 2014, 9:58 p.m., Jitendra Pandey wrote:
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxDecimal.txt, line 53
  https://reviews.apache.org/r/17737/diff/1/?file=470006#file470006line53
 
  Should we initialize isNull to true? It seems it will always be false 
  otherwise.

It is initialized to true explicitly @407 in public void 
reset(AggregationBuffer agg). This pattern is repeated in all aggregates; the 
aggregate structures are always explicitly initialized before use. I agree it 
should at least be documented in a comment, and I have done so.
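The pattern being discussed, buffer fields getting their working values in reset() rather than at construction, can be sketched as (hypothetical, modeled loosely on the min/max aggregate buffer):

```python
class MinMaxAggregationBuffer:
    """Aggregation buffer whose state is only valid after reset() is called.

    isNull is deliberately not given a meaningful value in the constructor;
    by convention the engine always calls reset() before the first aggregate.
    """
    def __init__(self):
        self.is_null = None   # uninitialized on purpose; set by reset()
        self.value = None

    def reset(self):
        self.is_null = True   # the explicit initialization the review discusses
        self.value = None

    def aggregate(self, v):
        # Min semantics: take the first value, then keep the smaller one.
        if self.is_null or v < self.value:
            self.is_null = False
            self.value = v
```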


- Remus


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17737/#review33964
---


On Feb. 5, 2014, 11:04 a.m., Remus Rusanu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/17737/
 ---
 
 (Updated Feb. 5, 2014, 11:04 a.m.)
 
 
 Review request for hive, Eric Hanson and Jitendra Pandey.
 
 
 Bugs: HIVE-6344
 https://issues.apache.org/jira/browse/HIVE-6344
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Implements Decimal aggregate operators, decimal vector hash key wrapper, 
 extends vectorizer to support decimal in GBY.
 
 
 Diffs
 -
 
   ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java 1b76fc9 
   common/src/java/org/apache/hadoop/hive/common/type/Decimal128.java 2e0f058 
   common/src/java/org/apache/hadoop/hive/common/type/HiveDecimal.java 29c5168 
   common/src/java/org/apache/hadoop/hive/common/type/UnsignedInt128.java 
 fb3c346 
   common/src/java/org/apache/hive/common/util/Decimal128FastBuffer.java 
 PRE-CREATION 
   ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxDecimal.txt 
 PRE-CREATION 
   ql/src/gen/vectorization/UDAFTemplates/VectorUDAFVarDecimal.txt 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorHashKeyWrapper.java 
 f083d86 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorHashKeyWrapperBatch.java
  e978110 
   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
 f5ab731 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java 
 f513188 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/VectorExpressionWriter.java
  e5c3aa4 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/VectorExpressionWriterFactory.java
  a242fef 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFAvgDecimal.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFSumDecimal.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
 ad96fa5 
   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFAverage.java 
 1a00800 
   
 ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorGroupByOperator.java
  a2b45f8 
   
 ql/src/test/org/apache/hadoop/hive/ql/exec/vector/util/FakeVectorRowBatchFromObjectIterables.java
  c8eaea1 
   
 serde/src/test/org/apache/hadoop/hive/serde2/io/TestHiveDecimalWritable.java 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/17737/diff/
 
 
 Testing
 ---
 
 New unit tests added, manually tested decimal GBY queries
 
 
 Thanks,
 
 Remus Rusanu
 




[jira] [Updated] (HIVE-6412) SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys

2014-02-12 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6412:
---

Summary: SMB join on Decimal columns causes cast exception in 
JoinUtil.computeKeys  (was: SMB join on Decimal columns causes caste exception 
in JoinUtil.computeKeys)

 SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys
 -

 Key: HIVE-6412
 URL: https://issues.apache.org/jira/browse/HIVE-6412
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Priority: Critical

 Caused by: java.lang.ClassCastException: 
 org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to 
 org.apache.hadoop.hive.common.type.HiveDecimal
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:49)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:27)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:281)
 at 
 org.apache.hadoop.hive.ql.exec.JoinUtil.computeKeys(JoinUtil.java:143)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.next(SMBMapJoinOperator.java:809)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.nextHive(SMBMapJoinOperator.java:771)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.setupContext(SMBMapJoinOperator.java:710)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.setUpFetchContexts(SMBMapJoinOperator.java:538)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.processOp(SMBMapJoinOperator.java:248)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
 Repro:
 create table vsmb_bucket_1(key decimal(9,0), value decimal(38,10)) 
   CLUSTERED BY (key) 
   SORTED BY (key) INTO 1 BUCKETS 
   STORED AS ORC;
 create table vsmb_bucket_2(key decimal(19,3), value decimal(28,0)) 
   CLUSTERED BY (key) 
   SORTED BY (key) INTO 1 BUCKETS 
   STORED AS ORC;
   
 insert into table vsmb_bucket_1 
   select cast(cint as decimal(9,0)) as key, 
 cast(cfloat as decimal(38,10)) as value 
   from alltypesorc limit 2;
 insert into table vsmb_bucket_2 
   select cast(cint as decimal(19,3)) as key, 
 cast(cfloat as decimal(28,0)) as value 
   from alltypesorc limit 2;
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.auto.convert.sortmerge.join.noconditionaltask = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
 b.key;
 select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
 b.key;



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6345) Add DECIMAL support to vectorized JOIN operators

2014-02-12 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-6345:
---

Attachment: HIVE-6345.2.patch

 Add DECIMAL support to vectorized JOIN operators
 

 Key: HIVE-6345
 URL: https://issues.apache.org/jira/browse/HIVE-6345
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-6345.2.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 18002: Add DECIMAL support to vectorized JOIN operators and vectorized aggregates

2014-02-12 Thread Remus Rusanu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18002/
---

Review request for hive, Eric Hanson and Jitendra Pandey.


Bugs: HIVE-6344 and HIVE-6345
https://issues.apache.org/jira/browse/HIVE-6344
https://issues.apache.org/jira/browse/HIVE-6345


Repository: hive-git


Description
---

See HIVE-6344 and HIVE-6345


Diffs
-

  ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java 1b76fc9 
  common/src/java/org/apache/hadoop/hive/common/type/Decimal128.java 2e0f058 
  common/src/java/org/apache/hadoop/hive/common/type/UnsignedInt128.java 
fb3c346 
  common/src/java/org/apache/hive/common/util/Decimal128FastBuffer.java 
PRE-CREATION 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFAvg.txt cb94145 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMax.txt 2b0364c 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxDecimal.txt 
PRE-CREATION 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFMinMaxString.txt 36f483e 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFSum.txt 3573997 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFVar.txt 7c0e58f 
  ql/src/gen/vectorization/UDAFTemplates/VectorUDAFVarDecimal.txt PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java
 d1a75df 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExpressionDescriptor.java
 d9855c1 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorHashKeyWrapper.java 
f083d86 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorHashKeyWrapperBatch.java
 e978110 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java 
036f080 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
7141d63 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatchCtx.java 
d409d44 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFAvgDecimal.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFSumDecimal.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFAverage.java 
1a00800 
  
ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorGroupByOperator.java
 a2b45f8 
  
ql/src/test/org/apache/hadoop/hive/ql/exec/vector/util/FakeVectorRowBatchFromObjectIterables.java
 c8eaea1 
  ql/src/test/queries/clientpositive/vector_decimal_aggregate.q PRE-CREATION 
  ql/src/test/queries/clientpositive/vector_decimal_mapjoin.q PRE-CREATION 
  ql/src/test/results/clientpositive/vector_decimal_aggregate.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/vector_decimal_mapjoin.q.out PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.java 
008fda3 
  serde/src/test/org/apache/hadoop/hive/serde2/io/TestHiveDecimalWritable.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/18002/diff/


Testing
---

Manual, new units, new .q/.out


Thanks,

Remus Rusanu



[jira] [Commented] (HIVE-6345) Add DECIMAL support to vectorized JOIN operators

2014-02-12 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899057#comment-13899057
 ] 

Remus Rusanu commented on HIVE-6345:


https://reviews.apache.org/r/18002/

 Add DECIMAL support to vectorized JOIN operators
 

 Key: HIVE-6345
 URL: https://issues.apache.org/jira/browse/HIVE-6345
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-6345.2.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6344) Add DECIMAL support to vectorized group by operator

2014-02-12 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899059#comment-13899059
 ] 

Remus Rusanu commented on HIVE-6344:


The patch uploaded to HIVE-6345 contains the fix for this as well, since many 
of the needed changes are common.

 Add DECIMAL support to vectorized group by operator
 ---

 Key: HIVE-6344
 URL: https://issues.apache.org/jira/browse/HIVE-6344
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-6344.1.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6345) Add DECIMAL support to vectorized JOIN operators

2014-02-12 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899062#comment-13899062
 ] 

Remus Rusanu commented on HIVE-6345:


The patch contains the fix for HIVE-6344 as well, including review feedback.
This patch adds a fast (allocation-free) cast between HiveDecimalWritable and 
Decimal128, in both directions.
It also includes the negative-constant-folding fix that Jitendra sent me in a 
patch.
I could not fix vectorized SMB join on decimals because of HIVE-6412.
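An allocation-free conversion typically reuses a caller-provided scratch buffer instead of allocating a fresh object per value. A generic sketch of that idea (hypothetical API, not the actual HiveDecimalWritable/Decimal128 code):

```python
import struct

class ScratchBuffer:
    """Preallocated byte buffer reused across conversions, in the spirit of
    the Decimal128FastBuffer file added by this patch (hypothetical API)."""
    def __init__(self, size=16):
        self.data = bytearray(size)

def serialize_into(value, scratch):
    """Write an integer 'decimal' into the scratch buffer; no new allocation."""
    struct.pack_into(">q", scratch.data, 0, value)
    return scratch.data

def deserialize_from(scratch):
    """Read the value back out of the same reused buffer."""
    return struct.unpack_from(">q", scratch.data, 0)[0]
```

In a tight per-row loop, reusing one buffer avoids the garbage-collection pressure that per-value allocation would cause.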

 Add DECIMAL support to vectorized JOIN operators
 

 Key: HIVE-6345
 URL: https://issues.apache.org/jira/browse/HIVE-6345
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
 Attachments: HIVE-6345.2.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-4996) unbalanced calls to openTransaction/commitTransaction

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899066#comment-13899066
 ] 

Hive QA commented on HIVE-4996:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628142/HIVE-4996.4.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5084 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1282/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1282/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628142

 unbalanced calls to openTransaction/commitTransaction
 -

 Key: HIVE-4996
 URL: https://issues.apache.org/jira/browse/HIVE-4996
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
 Environment: hiveserver1  Java HotSpot(TM) 64-Bit Server VM (build 
 20.6-b01, mixed mode)
Reporter: wangfeng
Assignee: Szehon Ho
Priority: Critical
  Labels: hive, metastore
 Attachments: HIVE-4996.1.patch, HIVE-4996.2.patch, HIVE-4996.3.patch, 
 HIVE-4996.4.patch, HIVE-4996.patch, hive-4996.path

   Original Estimate: 504h
  Remaining Estimate: 504h

 When we used HiveServer1 based on Hive 0.10.0, we found the following 
 exception thrown:
 FAILED: Error in metadata: MetaException(message:java.lang.RuntimeException: 
 commitTransaction was called but openTransactionCalls = 0. This probably 
 indicates that there are unbalanced calls to 
 openTransaction/commitTransaction)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 help
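The invariant behind the error message can be sketched with a nesting counter (a simplification of the metastore's transaction bookkeeping, not the actual ObjectStore code):

```python
class TransactionTracker:
    """Nested-transaction counter: commitTransaction without a matching
    openTransaction reproduces the 'unbalanced calls' RuntimeException."""
    def __init__(self):
        self.open_calls = 0

    def open_transaction(self):
        self.open_calls += 1

    def commit_transaction(self):
        if self.open_calls <= 0:
            raise RuntimeError(
                "commitTransaction was called but openTransactionCalls = 0. "
                "This probably indicates that there are unbalanced calls to "
                "openTransaction/commitTransaction")
        self.open_calls -= 1
```

Any code path that commits once more than it opened (for example, an error path that commits after an outer caller already did) trips the check.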



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5690) Support subquery for single sourced multi query

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899139#comment-13899139
 ] 

Hive QA commented on HIVE-5690:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628149/HIVE-5690.4.patch.txt

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 5087 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin7
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_subquery_multiple_cols_in_select
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1283/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1283/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628149

 Support subquery for single sourced multi query
 ---

 Key: HIVE-5690
 URL: https://issues.apache.org/jira/browse/HIVE-5690
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: D13791.1.patch, HIVE-5690.2.patch.txt, 
 HIVE-5690.3.patch.txt, HIVE-5690.4.patch.txt


 A single-sourced multi-insert query is very useful for various ETL processes, 
 but it does not allow subqueries to be included. For example:
 {noformat}
 explain from src 
 insert overwrite table x1 select * from (select distinct key,value) b order 
 by key
 insert overwrite table x2 select * from (select distinct key,value) c order 
 by value;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6413) Use of relative paths for data load in .q files breaks on Windows

2014-02-12 Thread Remus Rusanu (JIRA)
Remus Rusanu created HIVE-6413:
--

 Summary: Use of relative paths for data load in .q files breaks on 
Windows
 Key: HIVE-6413
 URL: https://issues.apache.org/jira/browse/HIVE-6413
 Project: Hive
  Issue Type: Test
Reporter: Remus Rusanu


Eg. partition_type_check.q:

FAILED: SemanticException Line 2:23 Invalid path ''../../data/files/T1.txt'': 
Relative path in absolute URI: file:E:/HW/project/hive-monarch/data/files/T1.txt

This happens because the path is constructed in 
LoadSemanticAnalyzer.initializeFromUri by appending the user.dir system 
property: path = new Path(System.getProperty("user.dir"), fromPath).toString(). 
The resulting path is missing the leading / in front of the drive letter.

This was fixed in the past with HIVE-3126 (change 
39fbb41e3e96858391646c0e20897e848616e8e2) but was reverted by HIVE-6048 
(change c6f3bcccda986498ecb1e8070594961203038a8b).
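The malformed URI can be reproduced in isolation: a Windows drive-letter path like E:/... needs a leading slash to form a valid file URI, while POSIX absolute paths already carry one. A sketch with plain strings (so it runs anywhere; not the actual LoadSemanticAnalyzer code):

```python
def naive_file_uri(path):
    # What the broken concatenation effectively produces: 'file:E:/...'
    return "file:" + path.replace("\\", "/")

def fixed_file_uri(path):
    """Insert the leading slash a drive-letter path needs in a file URI."""
    p = path.replace("\\", "/")
    if len(p) >= 2 and p[1] == ":":   # Windows drive path, e.g. E:/...
        p = "/" + p
    return "file://" + p              # yields file:///E:/... on Windows
```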



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Incompatibilities between metadata types and actual values read by the Parquet input format

2014-02-12 Thread Remus Rusanu
While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, in disagreement with the row object 
inspectors. I thought that was fine and worked my way around it. But I now see 
that the issue triggers failures in other places, e.g. in aggregates:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
... 9 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
cannot be cast to java.lang.Short
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
... 15 more


My test is below (I'm writing a test .q for HIVE-5998, but the repro does not 
involve vectorization):

create table if not exists alltypes_parquet (
  cint int,
  ctinyint tinyint,
  csmallint smallint,
  cfloat float,
  cdouble double,
  cstring1 string) stored as parquet;

insert overwrite table alltypes_parquet
  select cint,
ctinyint,
csmallint,
cfloat,
cdouble,
cstring1
  from alltypesorc;

explain select * from alltypes_parquet limit 10;
select * from alltypes_parquet limit 10;

explain select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;
select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;

Before opening a JIRA, I thought I would ask: perhaps this is a known issue 
and a solution is already in the works?

Thanks,
~Remus




[jira] [Commented] (HIVE-5944) SQL std auth - authorize show all roles, create role, drop role

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899204#comment-13899204
 ] 

Hive QA commented on HIVE-5944:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628296/HIVE-5944.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5089 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_merge
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1284/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1284/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628296

 SQL std auth - authorize show all roles, create role, drop role
 ---

 Key: HIVE-5944
 URL: https://issues.apache.org/jira/browse/HIVE-5944
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-5944.1.patch, HIVE-5944.2.patch, HIVE-5944.3.patch, 
 HIVE-5944.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 Only the superuser should be allowed to perform show all roles, create role, 
 and drop role.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-4115) Introduce cube abstraction in hive

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899209#comment-13899209
 ] 

Hive QA commented on HIVE-4115:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12586709/HIVE-4115.D10689.4.patch

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1286/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1286/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1286/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java'
Reverted 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java'
Reverted 'ql/src/test/results/clientnegative/authorization_role_cycles1.q.out'
Reverted 'ql/src/test/results/clientnegative/authorization_role_cycles2.q.out'
Reverted 'ql/src/test/results/clientpositive/authorization_1_sql_std.q.out'
Reverted 
'ql/src/test/results/clientpositive/authorization_set_show_current_role.q.out'
Reverted 'ql/src/test/results/clientpositive/authorization_role_grant1.q.out'
Reverted 'ql/src/test/queries/clientnegative/authorization_role_cycles1.q'
Reverted 'ql/src/test/queries/clientnegative/authorization_role_cycles2.q'
Reverted 
'ql/src/test/queries/clientpositive/authorization_set_show_current_role.q'
Reverted 'ql/src/test/queries/clientpositive/authorization_role_grant1.q'
Reverted 'ql/src/test/queries/clientpositive/authorization_1_sql_std.q'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizer.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAccessController.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerImpl.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java'
Reverted 
'ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAccessControlException.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target testutils/target jdbc/target metastore/target 
itests/target itests/hcatalog-unit/target itests/test-serde/target 
itests/qtest/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target 
hcatalog/server-extensions/target hcatalog/core/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen 
service/target contrib/target serde/target beeline/target odbc/target 
cli/target ql/dependency-reduced-pom.xml ql/target 
ql/src/test/results/clientnegative/authorization_show_roles_no_admin.q.out 
ql/src/test/results/clientnegative/authorization_create_role_no_admin.q.out 
ql/src/test/results/clientnegative/authorization_drop_role_no_admin.q.out 
ql/src/test/results/clientpositive/authorization_set_show_current_role.q.out.orig
 ql/src/test/queries/clientnegative/authorization_show_roles_no_admin.q 
ql/src/test/queries/clientnegative/authorization_create_role_no_admin.q 
ql/src/test/queries/clientnegative/authorization_drop_role_no_admin.q
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1567657.

At revision 1567657.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12586709

 Introduce cube abstraction in hive

Re: Incompatibilities between metadata types and actual values read by the Parquet input format

2014-02-12 Thread Brock Noland
Hi,

Looks like a bug to me. Can you open a JIRA?

Brock


On Wed, Feb 12, 2014 at 9:25 AM, Remus Rusanu rem...@microsoft.com wrote:

  While working on HIVE-5998 I noticed that the ParquetRecordReader
 returns IntWritable for all 'int like' types, which does not match the row
 object inspectors. I thought that was fine and worked my way around it, but
 I now see that the issue triggers failures in other places, e.g. in aggregates:



 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime
 Error while processing row
 {cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}

 at
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)

 at
 org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)

 ... 8 more

 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
 java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be
 cast to java.lang.Short

 at
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)

 at
 org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)

 at
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)

 at
 org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)

 at
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)

 at
 org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)

 at
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)

 ... 9 more

 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable
 cannot be cast to java.lang.Short

 at
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)

 at
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)

 at
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)

 at
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)

 at
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)

 at
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)

 at
 org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)

 at
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)

 at
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)

 at
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)

 ... 15 more





 My test is (I'm writing a test .q from HIVE-5998, but the repro does not
 involve vectorization):



 create table if not exists alltypes_parquet (

   cint int,

   ctinyint tinyint,

   csmallint smallint,

   cfloat float,

   cdouble double,

   cstring1 string) stored as parquet;



 insert overwrite table alltypes_parquet

   select cint,

 ctinyint,

 csmallint,

 cfloat,

 cdouble,

 cstring1

   from alltypesorc;



 explain select * from alltypes_parquet limit 10;

 select * from alltypes_parquet limit 10;



 explain select ctinyint,

   max(cint),

   min(csmallint),

   count(cstring1),

   avg(cfloat),

   stddev_pop(cdouble)

   from alltypes_parquet

   group by ctinyint;

 select ctinyint,

   max(cint),

   min(csmallint),

   count(cstring1),

   avg(cfloat),

   stddev_pop(cdouble)

   from alltypes_parquet

   group by ctinyint;



  Before opening a JIRA, I thought I would ask: perhaps this is a known
 issue and a solution is already in the works?



 Thanks,

 ~Remus








-- 
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Commented] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf

2014-02-12 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899240#comment-13899240
 ] 

Brock Noland commented on HIVE-6037:


Looks like something got messed up in antlr.

{noformat}
Begin query: ppd_join3.q NoViableAltException(20@[146:1: selectExpression : ( 
expression | tableAllColumns );]) at 
org.antlr.runtime.DFA.noViableAlt(DFA.java:158) at 
org.antlr.runtime.DFA.predict(DFA.java:116) at 
org.apache.hadoop.hive.ql.parse.HiveParser_SelectClauseParser.selectExpression(HiveP
{noformat}

 Synchronize HiveConf with hive-default.xml.template and support show conf
 -

 Key: HIVE-6037
 URL: https://issues.apache.org/jira/browse/HIVE-6037
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037.1.patch.txt, 
 HIVE-6037.2.patch.txt, HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, 
 HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt


 see HIVE-5879



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6329) Support column level encryption/decryption

2014-02-12 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899255#comment-13899255
 ] 

Brock Noland commented on HIVE-6329:


Hi,

It looks like this makes some changes to the init() method? I think this will 
impact existing Hive Serdes. Is it possible to make this change without 
changing the init() method?

 Support column level encryption/decryption
 --

 Key: HIVE-6329
 URL: https://issues.apache.org/jira/browse/HIVE-6329
 Project: Hive
  Issue Type: New Feature
  Components: Security, Serializers/Deserializers
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6329.1.patch.txt, HIVE-6329.2.patch.txt, 
 HIVE-6329.3.patch.txt, HIVE-6329.4.patch.txt, HIVE-6329.5.patch.txt


 We have been receiving some requirements for encryption recently, but Hive does 
 not support it. Before the full implementation via HIVE-5207, this might be 
 useful for some cases.
 {noformat}
 hive> create table encode_test(id int, name STRING, phone STRING, address 
 STRING) 
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
  WITH SERDEPROPERTIES ('column.encode.indices'='2,3', 
 'column.encode.classname'='org.apache.hadoop.hive.serde2.Base64WriteOnly') 
 STORED AS TEXTFILE;
 OK
 Time taken: 0.584 seconds
 hive> insert into table encode_test select 
 100,'navis','010-0000-0000','Seoul, Seocho' from src tablesample (1 rows);
 ..
 OK
 Time taken: 5.121 seconds
 hive> select * from encode_test;
 OK
 100   navis MDEwLTAwMDAtMDAwMA==  U2VvdWwsIFNlb2Nobw==
 Time taken: 0.078 seconds, Fetched: 1 row(s)
 hive> 
 {noformat}
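The values in the SELECT output above are plain Base64 of the column text — 'Seoul, Seocho' encodes to U2VvdWwsIFNlb2Nobw==. A minimal sketch of that per-column encode step (a stand-in for illustration, not the actual Base64WriteOnly serde code; the class and method names here are invented):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64ColumnDemo {
    // Stand-in for the column-level transform: the configured columns are
    // Base64-encoded before being written, and reads return the encoded form.
    static String encodeColumn(String value) {
        return Base64.getEncoder()
                     .encodeToString(value.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // The address column from the session above:
        System.out.println(encodeColumn("Seoul, Seocho")); // U2VvdWwsIFNlb2Nobw==
        // The phone column:
        System.out.println(encodeColumn("010-0000-0000")); // MDEwLTAwMDAtMDAwMA==
    }
}
```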



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6414) ParquetInputFormat provides data values that do not match the object inspectors

2014-02-12 Thread Remus Rusanu (JIRA)
Remus Rusanu created HIVE-6414:
--

 Summary: ParquetInputFormat provides data values that do not match 
the object inspectors
 Key: HIVE-6414
 URL: https://issues.apache.org/jira/browse/HIVE-6414
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu


While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, which does not match the row object 
inspectors. I thought that was fine and worked my way around it, but I now 
see that the issue triggers failures in other places, e.g. in aggregates:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
... 9 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
cannot be cast to java.lang.Short
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
... 15 more


My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
involve vectorization):

create table if not exists alltypes_parquet (
  cint int,
  ctinyint tinyint,
  csmallint smallint,
  cfloat float,
  cdouble double,
  cstring1 string) stored as parquet;

insert overwrite table alltypes_parquet
  select cint,
ctinyint,
csmallint,
cfloat,
cdouble,
cstring1
  from alltypesorc;

explain select * from alltypes_parquet limit 10;
select * from alltypes_parquet limit 10;

explain select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;
select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


RE: Incompatibilities between metadata types and actual values read by the Parquet input format

2014-02-12 Thread Remus Rusanu
Done.
https://issues.apache.org/jira/browse/HIVE-6414

Thanks,
~Remus

From: Brock Noland [mailto:br...@cloudera.com]
Sent: Wednesday, February 12, 2014 6:15 PM
To: Remus Rusanu
Cc: dev@hive.apache.org
Subject: Re: Incompatibilities between metadata types and actual values read by 
the Parquet input format

Hi,

Looks like a bug to me. Can you open a JIRA?

Brock

On Wed, Feb 12, 2014 at 9:25 AM, Remus Rusanu 
rem...@microsoft.commailto:rem...@microsoft.com wrote:
While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, which does not match the row object 
inspectors. I thought that was fine and worked my way around it, but I now 
see that the issue triggers failures in other places, e.g. in aggregates:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
... 9 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
cannot be cast to java.lang.Short
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
... 15 more


My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
involve vectorization):

create table if not exists alltypes_parquet (
  cint int,
  ctinyint tinyint,
  csmallint smallint,
  cfloat float,
  cdouble double,
  cstring1 string) stored as parquet;

insert overwrite table alltypes_parquet
  select cint,
ctinyint,
csmallint,
cfloat,
cdouble,
cstring1
  from alltypesorc;

explain select * from alltypes_parquet limit 10;
select * from alltypes_parquet limit 10;

explain select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;
select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;

Before opening a JIRA, I thought I would ask: perhaps this is a known issue 
and a solution is already in the works?

Thanks,
~Remus





--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org


[jira] [Updated] (HIVE-6414) ParquetInputFormat provides data values that do not match the object inspectors

2014-02-12 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-6414:
---

Description: 
While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, which does not match the row object 
inspectors. I thought that was fine and worked my way around it, but I now 
see that the issue triggers failures in other places, e.g. in aggregates:

{noformat}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
... 9 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
cannot be cast to java.lang.Short
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
... 15 more
{noformat}

My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
involve vectorization):

create table if not exists alltypes_parquet (
  cint int,
  ctinyint tinyint,
  csmallint smallint,
  cfloat float,
  cdouble double,
  cstring1 string) stored as parquet;

insert overwrite table alltypes_parquet
  select cint,
ctinyint,
csmallint,
cfloat,
cdouble,
cstring1
  from alltypesorc;

explain select * from alltypes_parquet limit 10;
select * from alltypes_parquet limit 10;

explain select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;
select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;

  was:
While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, which does not match the row object 
inspectors. I thought that was fine and worked my way around it, but I now 
see that the issue triggers failures in other places, e.g. in aggregates:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 

[jira] [Updated] (HIVE-6414) ParquetInputFormat provides data values that do not match the object inspectors

2014-02-12 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-6414:
---

Description: 
While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, in disaccord with the row object 
inspectors. I though fine, and I worked my way around it. But I see now that 
the issue trigger failuers in other places, eg. in aggregates:

{noformat}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
... 9 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
cannot be cast to java.lang.Short
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
... 15 more
{noformat}

My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
involve vectorization):

{noformat}
create table if not exists alltypes_parquet (
  cint int,
  ctinyint tinyint,
  csmallint smallint,
  cfloat float,
  cdouble double,
  cstring1 string) stored as parquet;

insert overwrite table alltypes_parquet
  select cint,
ctinyint,
csmallint,
cfloat,
cdouble,
cstring1
  from alltypesorc;

explain select * from alltypes_parquet limit 10;
select * from alltypes_parquet limit 10;

explain select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;
select ctinyint,
  max(cint),
  min(csmallint),
  count(cstring1),
  avg(cfloat),
  stddev_pop(cdouble)
  from alltypes_parquet
  group by ctinyint;
{noformat}

  was:
While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
IntWritable for all 'int like' types, which does not match the row object 
inspectors. I thought that was fine and worked my way around it, but I now 
see that the issue triggers failures in other places, e.g. in aggregates:

{noformat}
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error 
while processing row 
{cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
to java.lang.Short
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 

[jira] [Commented] (HIVE-6414) ParquetInputFormat provides data values that do not match the object inspectors

2014-02-12 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899273#comment-13899273
 ] 

Brock Noland commented on HIVE-6414:


FYI [~jcoffey] [~xuefuz]

 ParquetInputFormat provides data values that do not match the object 
 inspectors
 ---

 Key: HIVE-6414
 URL: https://issues.apache.org/jira/browse/HIVE-6414
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu

 While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
 IntWritable for all 'int like' types, in disaccord with the row object 
 inspectors. I thought that was fine and worked my way around it, but I now 
 see that the issue triggers failures in other places, e.g. in aggregates:
 {noformat}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row 
 {cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
 ... 8 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
 to java.lang.Short
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
 ... 9 more
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
 cannot be cast to java.lang.Short
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
 ... 15 more
 {noformat}
 My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
 involve vectorization):
 {noformat}
 create table if not exists alltypes_parquet (
   cint int,
   ctinyint tinyint,
   csmallint smallint,
   cfloat float,
   cdouble double,
   cstring1 string) stored as parquet;
 insert overwrite table alltypes_parquet
   select cint,
 ctinyint,
 csmallint,
 cfloat,
 cdouble,
 cstring1
   from alltypesorc;
 explain select * from alltypes_parquet limit 10; select * from 
 alltypes_parquet limit 10;
 explain select ctinyint,
   max(cint),
   min(csmallint),
   count(cstring1),
   avg(cfloat),
   stddev_pop(cdouble)
   from alltypes_parquet
   group by ctinyint;
 select ctinyint,
   max(cint),
   min(csmallint),
   count(cstring1),
   avg(cfloat),
   stddev_pop(cdouble)
   from alltypes_parquet
   group by ctinyint;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6414) ParquetInputFormat provides data values that do not match the object inspectors

2014-02-12 Thread Justin Coffey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899284#comment-13899284
 ] 

Justin Coffey commented on HIVE-6414:
-

I'll investigate.

 ParquetInputFormat provides data values that do not match the object 
 inspectors
 ---

 Key: HIVE-6414
 URL: https://issues.apache.org/jira/browse/HIVE-6414
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Assignee: Justin Coffey

 While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
 IntWritable for all 'int like' types, in disaccord with the row object 
 inspectors. I thought that was fine and worked my way around it, but I now 
 see that the issue triggers failures in other places, e.g. in aggregates:
 {noformat}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row 
 {cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
 ... 8 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
 to java.lang.Short
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
 ... 9 more
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
 cannot be cast to java.lang.Short
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
 ... 15 more
 {noformat}
 My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
 involve vectorization):
 {noformat}
 create table if not exists alltypes_parquet (
   cint int,
   ctinyint tinyint,
   csmallint smallint,
   cfloat float,
   cdouble double,
   cstring1 string) stored as parquet;
 insert overwrite table alltypes_parquet
   select cint,
 ctinyint,
 csmallint,
 cfloat,
 cdouble,
 cstring1
   from alltypesorc;
 explain select * from alltypes_parquet limit 10; select * from 
 alltypes_parquet limit 10;
 explain select ctinyint,
   max(cint),
   min(csmallint),
   count(cstring1),
   avg(cfloat),
   stddev_pop(cdouble)
   from alltypes_parquet
   group by ctinyint;
 select ctinyint,
   max(cint),
   min(csmallint),
   count(cstring1),
   avg(cfloat),
   stddev_pop(cdouble)
   from alltypes_parquet
   group by ctinyint;
 {noformat}





[jira] [Assigned] (HIVE-6414) ParquetInputFormat provides data values that do not match the object inspectors

2014-02-12 Thread Justin Coffey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Coffey reassigned HIVE-6414:
---

Assignee: Justin Coffey

 ParquetInputFormat provides data values that do not match the object 
 inspectors
 ---

 Key: HIVE-6414
 URL: https://issues.apache.org/jira/browse/HIVE-6414
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Assignee: Justin Coffey

 While working on HIVE-5998 I noticed that the ParquetRecordReader returns 
 IntWritable for all 'int like' types, in disaccord with the row object 
 inspectors. I thought that was fine and worked my way around it, but I now 
 see that the issue triggers failures in other places, e.g. in aggregates:
 {noformat}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row 
 {cint:528534767,ctinyint:31,csmallint:4963,cfloat:31.0,cdouble:4963.0,cstring1:cvLH6Eat2yFsyy7p}
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:534)
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:177)
 ... 8 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast 
 to java.lang.Short
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:808)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:87)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
 ... 9 more
 Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable 
 cannot be cast to java.lang.Short
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaShortObjectInspector.get(JavaShortObjectInspector.java:41)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:671)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.compare(ObjectInspectorUtils.java:631)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.merge(GenericUDAFMin.java:109)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMin$GenericUDAFMinEvaluator.iterate(GenericUDAFMin.java:96)
 at 
 org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:183)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:641)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:838)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:735)
 at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:803)
 ... 15 more
 {noformat}
 My test is (I'm writing a test .q from HIVE-5998, but the repro does not 
 involve vectorization):
 {noformat}
 create table if not exists alltypes_parquet (
   cint int,
   ctinyint tinyint,
   csmallint smallint,
   cfloat float,
   cdouble double,
   cstring1 string) stored as parquet;
 insert overwrite table alltypes_parquet
   select cint,
 ctinyint,
 csmallint,
 cfloat,
 cdouble,
 cstring1
   from alltypesorc;
 explain select * from alltypes_parquet limit 10; select * from 
 alltypes_parquet limit 10;
 explain select ctinyint,
   max(cint),
   min(csmallint),
   count(cstring1),
   avg(cfloat),
   stddev_pop(cdouble)
   from alltypes_parquet
   group by ctinyint;
 select ctinyint,
   max(cint),
   min(csmallint),
   count(cstring1),
   avg(cfloat),
   stddev_pop(cdouble)
   from alltypes_parquet
   group by ctinyint;
 {noformat}





[jira] [Commented] (HIVE-6403) uncorrelated subquery is failing with auto.convert.join=true

2014-02-12 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899299#comment-13899299
 ] 

Harish Butani commented on HIVE-6403:
-

What I see is that for the multi-insert with SubQuery case:
- the first child will be a ReduceSink; the other child is converted to a 
FileSink by the time it gets to CommonJoinTaskDispatcher.
- so for the multi-insert SubQuery case, checking the first child is still OK.

But yes, the check that the other children are FileSinks should be done in 
CommonJoinTaskDispatcher#getPosition.

Beyond the above point, I'm sorry, I don't follow what else you are proposing. 
Could you please elaborate?
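The shape check discussed above can be sketched as follows. The operator classes here are stand-ins that only mirror the discussion; they are not Hive's real operator-tree API:

```java
import java.util.List;

// Illustrative stand-ins for the operator-tree shape discussed above;
// these are not Hive's actual operator classes.
public class GetPositionSketch {
    interface Op { }
    static class ReduceSink implements Op { }
    static class FileSink implements Op { }

    // True when the multi-insert shape holds: the first child is the
    // ReduceSink branch and every remaining child has already been
    // converted to a FileSink.
    static boolean isConvertibleShape(List<Op> children) {
        if (children.isEmpty() || !(children.get(0) instanceof ReduceSink)) {
            return false;
        }
        for (Op child : children.subList(1, children.size())) {
            if (!(child instanceof FileSink)) {
                return false;
            }
        }
        return true;
    }
}
```

Placing this kind of check where the big-table position is chosen would reject trees that do not match the expected shape instead of failing later with an IndexOutOfBoundsException.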

 uncorrelated subquery is failing with auto.convert.join=true
 

 Key: HIVE-6403
 URL: https://issues.apache.org/jira/browse/HIVE-6403
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Harish Butani
 Attachments: HIVE-6403.1.patch


 While fixing HIVE-5690, I found that the query in subquery_multiinsert.q 
 does not work with hive.auto.convert.join=true: 
 {noformat}
 set hive.auto.convert.join=true;
 hive> explain
  from src b 
  INSERT OVERWRITE TABLE src_4 
select * 
where b.key in 
 (select a.key 
  from src a 
  where b.value = a.value and a.key > '9'
 ) 
  INSERT OVERWRITE TABLE src_5 
select *  
where b.key not in  ( select key from src s1 where s1.key > '2') 
order by key 
  ;
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
   at java.util.ArrayList.get(ArrayList.java:411)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinLocalWork(MapJoinProcessor.java:149)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genLocalWorkForMapJoin(MapJoinProcessor.java:256)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.genMapJoinOpAndLocalWork(MapJoinProcessor.java:248)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.convertTaskToMapJoinTask(CommonJoinTaskDispatcher.java:191)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:481)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
   at 
 org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
   at 
 org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:100)
   at 
 org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:290)
   at 
 org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:216)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9167)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:64)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:446)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:346)
   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1056)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1099)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:687)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:626)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 

[jira] [Commented] (HIVE-6254) sql standard auth - use admin option specified in grant/revoke role statement

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899300#comment-13899300
 ] 

Hive QA commented on HIVE-6254:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628263/HIVE-6254.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5086 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1287/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1287/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628263

 sql standard auth - use admin option specified in grant/revoke role statement
 -

 Key: HIVE-6254
 URL: https://issues.apache.org/jira/browse/HIVE-6254
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6254.1.patch, HIVE-6254.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 DDLSemanticAnalyzer ignores the admin option specified in the query, and is 
 always setting it to true.





[jira] [Updated] (HIVE-5944) SQL std auth - authorize show all roles, create role, drop role

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5944:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 SQL std auth - authorize show all roles, create role, drop role
 ---

 Key: HIVE-5944
 URL: https://issues.apache.org/jira/browse/HIVE-5944
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-5944.1.patch, HIVE-5944.2.patch, HIVE-5944.3.patch, 
 HIVE-5944.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 Only superuser should be allowed to perform show all roles, create role, drop 
 role .





[jira] [Updated] (HIVE-6254) sql standard auth - use admin option specified in grant/revoke role statement

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6254:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 sql standard auth - use admin option specified in grant/revoke role statement
 -

 Key: HIVE-6254
 URL: https://issues.apache.org/jira/browse/HIVE-6254
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-6254.1.patch, HIVE-6254.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 DDLSemanticAnalyzer ignores the admin option specified in the query, and is 
 always setting it to true.





[jira] [Resolved] (HIVE-5952) SQL std auth - authorize grant/revoke roles

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-5952.


   Resolution: Fixed
Fix Version/s: 0.13.0

This is fixed via HIVE-5944

 SQL std auth - authorize grant/revoke roles
 ---

 Key: HIVE-5952
 URL: https://issues.apache.org/jira/browse/HIVE-5952
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 User should be allowed to grant/revoke a role only if the user is SUPERUSER 
 or has admin privileges for the role.





[jira] [Created] (HIVE-6415) Disallow transform clause in std auth mode

2014-02-12 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-6415:
--

 Summary: Disallow transform clause in std auth mode
 Key: HIVE-6415
 URL: https://issues.apache.org/jira/browse/HIVE-6415
 Project: Hive
  Issue Type: Task
  Components: Authorization
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan








[jira] [Updated] (HIVE-6415) Disallow transform clause in std auth mode

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6415:
---

Attachment: HIVE-6415.patch

 Disallow transform clause in std auth mode
 --

 Key: HIVE-6415
 URL: https://issues.apache.org/jira/browse/HIVE-6415
 Project: Hive
  Issue Type: Task
  Components: Authorization
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6415.patch








Review Request 18020: Disallows transform clause in std sql mode

2014-02-12 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18020/
---

Review request for hive.


Bugs: HIVE-6415
https://issues.apache.org/jira/browse/HIVE-6415


Repository: hive


Description
---

Disallows transform clause in std sql mode


Diffs
-

  
trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/DisallowTransformHook.java
 PRE-CREATION 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 1567674 
  trunk/ql/src/test/queries/clientnegative/authorization_disallow_transform.q 
PRE-CREATION 
  
trunk/ql/src/test/results/clientnegative/authorization_disallow_transform.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/18020/diff/
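The hook added by the diff above (DisallowTransformHook) can be illustrated with a sketch. A real implementation would walk the parsed AST through Hive's hook interfaces; the string scan and message below are only stand-ins to show the intent of rejecting TRANSFORM under the restricted mode:

```java
import java.util.Locale;

// Illustrative-only sketch of rejecting the transform clause; not the
// actual DisallowTransformHook implementation from the patch.
public class DisallowTransformSketch {
    // Returns an error message when the query text appears to use a
    // TRANSFORM clause, null otherwise.
    static String check(String query) {
        if (query.toUpperCase(Locale.ROOT).contains("TRANSFORM")) {
            return "transform clause is disallowed in this authorization mode";
        }
        return null;
    }
}
```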


Testing
---

Added new tests


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-6415) Disallow transform clause in std auth mode

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6415:
---

Status: Patch Available  (was: Open)

 Disallow transform clause in std auth mode
 --

 Key: HIVE-6415
 URL: https://issues.apache.org/jira/browse/HIVE-6415
 Project: Hive
  Issue Type: Task
  Components: Authorization
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6415.patch








[jira] [Updated] (HIVE-6415) Disallow transform clause in std auth mode

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6415:
---

Issue Type: Sub-task  (was: Task)
Parent: HIVE-5837

 Disallow transform clause in std auth mode
 --

 Key: HIVE-6415
 URL: https://issues.apache.org/jira/browse/HIVE-6415
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6415.patch








[jira] [Commented] (HIVE-6167) Allow user-defined functions to be qualified with database name

2014-02-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899383#comment-13899383
 ] 

Jason Dere commented on HIVE-6167:
--

So DOT isn't normally allowed as part of an identifier.  I think this would 
only be possible if the dot was included as part of a quoted name, which seems 
like a bit of an unusual case.  If this case needs to be supported, it may be 
better handled as a separate issue.

 Allow user-defined functions to be qualified with database name
 ---

 Key: HIVE-6167
 URL: https://issues.apache.org/jira/browse/HIVE-6167
 Project: Hive
  Issue Type: Sub-task
  Components: UDF
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-6167.1.patch, HIVE-6167.2.patch, HIVE-6167.3.patch, 
 HIVE-6167.4.patch


 Function names in Hive are currently unqualified and there is a single 
 namespace for all function names. This task would allow users to define 
 temporary UDFs (and eventually permanent UDFs) with a database name, such as:
 CREATE TEMPORARY FUNCTION userdb.myfunc 'myudfclass';
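The name resolution this proposes can be sketched as below. The class and method are hypothetical (not Hive's actual FunctionRegistry), and the quoted-identifier edge case discussed in the comments is deliberately left out:

```java
// Hypothetical sketch of resolving a possibly database-qualified function
// name; not Hive's real FunctionRegistry code.
public class FunctionNameSketch {
    // Splits "userdb.myfunc" on its first dot into {database, function};
    // an unqualified name falls back to the current database. Quoted
    // identifiers that themselves contain dots are not handled here.
    static String[] qualify(String name, String currentDb) {
        int dot = name.indexOf('.');
        if (dot < 0) {
            return new String[] { currentDb, name };
        }
        return new String[] { name.substring(0, dot), name.substring(dot + 1) };
    }
}
```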





[jira] [Commented] (HIVE-6398) MapRedTask.configureDebugVariablesForChildJVM mixes HIVE_CHILD_CLIENT_DEBUG_OPTS and HIVE_MAIN_CLIENT_DEBUG_OPTS in env check

2014-02-12 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899391#comment-13899391
 ] 

Ashutosh Chauhan commented on HIVE-6398:


+1

 MapRedTask.configureDebugVariablesForChildJVM mixes 
 HIVE_CHILD_CLIENT_DEBUG_OPTS and HIVE_MAIN_CLIENT_DEBUG_OPTS in env check
 -

 Key: HIVE-6398
 URL: https://issues.apache.org/jira/browse/HIVE-6398
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Trivial
 Attachments: HIVE-6398.1.patch


 @328:
  assert environmentVariables.containsKey(HIVE_CHILD_CLIENT_DEBUG_OPTS) &&
environmentVariables.get(HIVE_MAIN_CLIENT_DEBUG_OPTS) != null : 
 HIVE_CHILD_CLIENT_DEBUG_OPTS
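The bug quoted above is that containsKey checks one constant while get reads the other, so the two halves of the assertion can disagree. A Hive-independent sketch of the fixed invariant (Map stands in for the environment-variable map; the helper is illustrative, not the patched method):

```java
import java.util.Map;

// Illustrative sketch of the corrected check; not the actual
// MapRedTask.configureDebugVariablesForChildJVM code.
public class DebugOptsCheck {
    static final String HIVE_CHILD_CLIENT_DEBUG_OPTS = "HIVE_CHILD_CLIENT_DEBUG_OPTS";

    // Both the containsKey and the get use the *child* constant; mixing
    // in HIVE_MAIN_CLIENT_DEBUG_OPTS on the get() side was the reported
    // bug.
    static boolean childDebugOptsPresent(Map<String, String> env) {
        return env.containsKey(HIVE_CHILD_CLIENT_DEBUG_OPTS)
                && env.get(HIVE_CHILD_CLIENT_DEBUG_OPTS) != null;
    }
}
```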





[jira] [Updated] (HIVE-5595) Implement vectorized SMB JOIN

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5595:
---

Fix Version/s: 0.13.0

 Implement vectorized SMB JOIN
 -

 Key: HIVE-5595
 URL: https://issues.apache.org/jira/browse/HIVE-5595
 Project: Hive
  Issue Type: Sub-task
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5595.1.patch, HIVE-5595.2.patch, HIVE-5595.3.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Vectorized implementation of SMB Map Join.





Re: Review Request 16951: HIVE-6109: Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-02-12 Thread Sushanth Sowmyan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16951/#review34302
---

Ship it!


Looks good, thanks for all the changes! :)

- Sushanth Sowmyan


On Feb. 6, 2014, 7:19 p.m., Satish Mittal wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16951/
 ---
 
 (Updated Feb. 6, 2014, 7:19 p.m.)
 
 
 Review request for hive and Sushanth Sowmyan.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 - Attaching the patch that implements the functionality to support custom 
 location for external tables in dynamic partitioning.
 
 
 Diffs
 -
 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatConstants.java
  2ee50b3 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FileOutputCommitterContainer.java
  a5ae1be 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FosterStorageHandler.java
  288b7a3 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatFileUtil.java
  PRE-CREATION 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatOutputFormat.java
  78e77e8 
   
 hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/OutputJobInfo.java
  b63bdc2 
   
 hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/HCatMapReduceTest.java
  77bdb9d 
   
 hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatDynamicPartitioned.java
  d8b69c2 
   
 hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatExternalDynamicPartitioned.java
  36c7945 
 
 Diff: https://reviews.apache.org/r/16951/diff/
 
 
 Testing
 ---
 
 - Added unit test.
 - Tested the functionality through a sample MR program that uses 
 HCatOutputFormat interface configured with the new custom dynamic location.
 
 
 Thanks,
 
 Satish Mittal
 




[jira] [Commented] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning

2014-02-12 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899410#comment-13899410
 ] 

Sushanth Sowmyan commented on HIVE-6109:


+1 to the updated patch, looks good to me.

 Support customized location for EXTERNAL tables created by Dynamic 
 Partitioning
 ---

 Key: HIVE-6109
 URL: https://issues.apache.org/jira/browse/HIVE-6109
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Satish Mittal
Assignee: Satish Mittal
 Attachments: HIVE-6109.1.patch.txt, HIVE-6109.2.patch.txt, 
 HIVE-6109.3.patch.txt, HIVE-6109.pdf


 Currently, when dynamic partitions are created by HCatalog, the underlying 
 directories for the partitions are created in a fixed 'Hive-style' format, 
 i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an 
 external table, the user should be able to control the format of the 
 directories created for dynamic partitions.





[jira] [Commented] (HIVE-5504) OrcOutputFormat honors compression properties only from within hive

2014-02-12 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899412#comment-13899412
 ] 

Sushanth Sowmyan commented on HIVE-5504:


The error reported by the precommit test seems to be unrelated to this fix.

 OrcOutputFormat honors  compression  properties only from within hive
 -

 Key: HIVE-5504
 URL: https://issues.apache.org/jira/browse/HIVE-5504
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.11.0, 0.12.0
Reporter: Venkat Ranganathan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-5504.patch


 When we import data into a HCatalog table created with the following storage  
 description
 .. stored as orc tblproperties (orc.compress=SNAPPY) 
 the resultant orc file still uses the default zlib compression
 It looks like HCatOutputFormat is ignoring the tblproperties specified.   
 show tblproperties shows that the table indeed has the properties properly 
 saved.
 An insert/select into the table does produce a resulting ORC file that honors 
 the table property.
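The expected behavior is simple to state: the writer should consult the table's properties before falling back to the default codec. A hedged Python sketch of that lookup (the function name is hypothetical; ZLIB is ORC's documented default):

```python
def orc_compression(table_properties, default="ZLIB"):
    # Honor 'orc.compress' from the table properties if set;
    # otherwise fall back to ORC's default codec.
    return table_properties.get("orc.compress", default)

orc_compression({"orc.compress": "SNAPPY"})  # 'SNAPPY' (what the report expects)
orc_compression({})                          # 'ZLIB'
```

The reported bug amounts to HCatOutputFormat always taking the fallback path, ignoring the saved properties.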



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6256) add batch dropping of partitions to Hive metastore (as well as to dropTable)

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899420#comment-13899420
 ] 

Hive QA commented on HIVE-6256:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628292/HIVE-6256.06.patch

{color:green}SUCCESS:{color} +1 5086 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1289/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1289/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628292

 add batch dropping of partitions to Hive metastore (as well as to dropTable)
 

 Key: HIVE-6256
 URL: https://issues.apache.org/jira/browse/HIVE-6256
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Attachments: HIVE-6256.01.patch, HIVE-6256.02.patch, 
 HIVE-6256.03.patch, HIVE-6256.04.patch, HIVE-6256.05.patch, 
 HIVE-6256.06.patch, HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, 
 HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, 
 HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, HIVE-6256.patch


 Metastore drop partitions call drops one partition; when many are being 
 dropped this can be slow. Partitions could be dropped in batch instead, if 
 multiple are dropped via one command. Drop table can also use that.
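The batching idea can be sketched in a few lines of Python (this is an illustration of the concept, not the metastore API; `drop_batch` stands in for a single metastore call):

```python
def drop_partitions(partitions, drop_batch, batch_size=100):
    # One metastore call per batch instead of one call per partition.
    calls = 0
    for i in range(0, len(partitions), batch_size):
        drop_batch(partitions[i:i + batch_size])
        calls += 1
    return calls

dropped = []
n_calls = drop_partitions(list(range(250)), dropped.extend, batch_size=100)
# 250 partitions dropped in 3 batched calls instead of 250 single drops
```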



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6407) Test authorization_revoke_table_priv.q is failing on trunk

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899423#comment-13899423
 ] 

Hive QA commented on HIVE-6407:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628356/HIVE-6407.1.patch

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1291/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1291/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-1291/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java'
Reverted 
'metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java'
Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java'
Reverted 'metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java'
Reverted 
'metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java'
Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py'
Reverted 'metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py'
Reverted 
'metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote'
Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp'
Reverted 'metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp'
Reverted 'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h'
Reverted 'metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h'
Reverted 
'metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp'
Reverted 'metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb'
Reverted 'metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb'
Reverted 
'metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java'
Reverted 'metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php'
Reverted 'metastore/src/gen/thrift/gen-php/metastore/Types.php'
Reverted 'metastore/if/hive_metastore.thrift'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/plan/DropTableDesc.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/ArchiveUtils.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target testutils/target jdbc/target metastore/target 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsExpr.java
 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsRequest.java
 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/RequestPartsSpec.java
 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DropPartitionsResult.java
 itests/target itests/hcatalog-unit/target itests/test-serde/target 
itests/qtest/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target 
hcatalog/server-extensions/target hcatalog/core/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen 
contrib/target service/target serde/target 

[jira] [Commented] (HIVE-4996) unbalanced calls to openTransaction/commitTransaction

2014-02-12 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899432#comment-13899432
 ] 

Szehon Ho commented on HIVE-4996:
-

I see that authorization_revoke_table_priv is broken for all recent pre-commit 
builds. I ran the minimr test without hitting the issue; it seems to be flaky.

 unbalanced calls to openTransaction/commitTransaction
 -

 Key: HIVE-4996
 URL: https://issues.apache.org/jira/browse/HIVE-4996
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
 Environment: hiveserver1  Java HotSpot(TM) 64-Bit Server VM (build 
 20.6-b01, mixed mode)
Reporter: wangfeng
Assignee: Szehon Ho
Priority: Critical
  Labels: hive, metastore
 Attachments: HIVE-4996.1.patch, HIVE-4996.2.patch, HIVE-4996.3.patch, 
 HIVE-4996.4.patch, HIVE-4996.patch, hive-4996.path

   Original Estimate: 504h
  Remaining Estimate: 504h

 when we used hiveserver1 based on hive-0.10.0, we found the following 
 exception thrown:
 FAILED: Error in metadata: MetaException(message:java.lang.RuntimeException: 
 commitTransaction was called but openTransactionCalls = 0. This probably 
 indicates that there are unbalanced calls to openTransaction/commitTransaction)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 help



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Review Request 18025: Implement vectorized support for COALESCE conditional expression

2014-02-12 Thread Jitendra Pandey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18025/
---

Review request for hive and Eric Hanson.


Bugs: HIVE-5759
https://issues.apache.org/jira/browse/HIVE-5759


Repository: hive-git


Description
---

Implement vectorized support for COALESCE conditional expression


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/BytesColumnVector.java 
f1eef14 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/ColumnVector.java 0a8811f 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/DecimalColumnVector.java 
d0d8597 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/DoubleColumnVector.java 
cb23129 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/LongColumnVector.java 
aa05b19 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
7141d63 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/VectorCoalesce.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
21fe8ca 
  ql/src/test/queries/clientpositive/vector_coalesce.q PRE-CREATION 
  ql/src/test/results/clientpositive/vector_coalesce.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/18025/diff/


Testing
---


Thanks,

Jitendra Pandey



[jira] [Commented] (HIVE-5759) Implement vectorized support for COALESCE conditional expression

2014-02-12 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899436#comment-13899436
 ] 

Jitendra Nath Pandey commented on HIVE-5759:


Review board entry: https://reviews.apache.org/r/18025/

 Implement vectorized support for COALESCE conditional expression
 

 Key: HIVE-5759
 URL: https://issues.apache.org/jira/browse/HIVE-5759
 Project: Hive
  Issue Type: Sub-task
Reporter: Eric Hanson
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-5759.1.patch


 Implement full, end-to-end support for COALESCE in vectorized mode, including 
 new VectorExpression class(es), VectorizationContext translation to a 
 VectorExpression, and unit tests for these, as well as end-to-end ad hoc 
 testing. An end-to-end .q test is recommended.
 This is lower priority than IF and CASE but it is still a fairly popular 
 expression.
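The semantics a vectorized COALESCE must implement can be sketched as follows. This is a plain-Python illustration of columnar evaluation with null masks, not Hive's VectorExpression API; the representation (parallel value and null arrays) mirrors how Hive's column vectors track nulls:

```python
def vector_coalesce(columns, null_masks):
    # For each row, take the value of the first input column whose
    # null mask is False (i.e. whose value is non-null) at that row.
    n = len(columns[0])
    out, out_null = [None] * n, [True] * n
    for col, mask in zip(columns, null_masks):
        for i in range(n):
            if out_null[i] and not mask[i]:
                out[i], out_null[i] = col[i], False
    return out, out_null

cols = [[1, None, None], [7, 8, None]]
masks = [[False, True, True], [False, False, True]]
vector_coalesce(cols, masks)  # ([1, 8, None], [False, False, True])
```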



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Reopened] (HIVE-5952) SQL std auth - authorize grant/revoke roles

2014-02-12 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair reopened HIVE-5952:
-


One part of this issue was not addressed as part of HIVE-5944: if the user has 
admin privileges, then the user should be able to grant/revoke.


 SQL std auth - authorize grant/revoke roles
 ---

 Key: HIVE-5952
 URL: https://issues.apache.org/jira/browse/HIVE-5952
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 User should be allowed to grant/revoke a role only if the user is SUPERUSER 
 or has admin privileges for the role.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-4996) unbalanced calls to openTransaction/commitTransaction

2014-02-12 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-4996:
--

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks, Szehon, for the patch.

 unbalanced calls to openTransaction/commitTransaction
 -

 Key: HIVE-4996
 URL: https://issues.apache.org/jira/browse/HIVE-4996
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
 Environment: hiveserver1  Java HotSpot(TM) 64-Bit Server VM (build 
 20.6-b01, mixed mode)
Reporter: wangfeng
Assignee: Szehon Ho
Priority: Critical
  Labels: hive, metastore
 Fix For: 0.13.0

 Attachments: HIVE-4996.1.patch, HIVE-4996.2.patch, HIVE-4996.3.patch, 
 HIVE-4996.4.patch, HIVE-4996.patch, hive-4996.path

   Original Estimate: 504h
  Remaining Estimate: 504h

 when we used hiveserver1 based on hive-0.10.0, we found the following 
 exception thrown:
 FAILED: Error in metadata: MetaException(message:java.lang.RuntimeException: 
 commitTransaction was called but openTransactionCalls = 0. This probably 
 indicates that there are unbalanced calls to openTransaction/commitTransaction)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 help



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6412) SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys

2014-02-12 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-6412:
--

Description: 
{code}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to 
org.apache.hadoop.hive.common.type.HiveDecimal
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:49)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:27)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:281)
at 
org.apache.hadoop.hive.ql.exec.JoinUtil.computeKeys(JoinUtil.java:143)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.next(SMBMapJoinOperator.java:809)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.nextHive(SMBMapJoinOperator.java:771)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.setupContext(SMBMapJoinOperator.java:710)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.setUpFetchContexts(SMBMapJoinOperator.java:538)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.processOp(SMBMapJoinOperator.java:248)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
{code}

Repro:
{code}
create table vsmb_bucket_1(key decimal(9,0), value decimal(38,10)) 
  CLUSTERED BY (key) 
  SORTED BY (key) INTO 1 BUCKETS 
  STORED AS ORC;
create table vsmb_bucket_2(key decimal(19,3), value decimal(28,0)) 
  CLUSTERED BY (key) 
  SORTED BY (key) INTO 1 BUCKETS 
  STORED AS ORC;
  
insert into table vsmb_bucket_1 
  select cast(cint as decimal(9,0)) as key, 
cast(cfloat as decimal(38,10)) as value 
  from alltypesorc limit 2;
insert into table vsmb_bucket_2 
  select cast(cint as decimal(19,3)) as key, 
cast(cfloat as decimal(28,0)) as value 
  from alltypesorc limit 2;

set hive.optimize.bucketmapjoin = true;
set hive.optimize.bucketmapjoin.sortedmerge = true;
set hive.auto.convert.sortmerge.join.noconditionaltask = true;
set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

explain
select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
b.key;
select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
b.key;
{code}

  was:
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to 
org.apache.hadoop.hive.common.type.HiveDecimal
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:49)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:27)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:281)
at 
org.apache.hadoop.hive.ql.exec.JoinUtil.computeKeys(JoinUtil.java:143)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.next(SMBMapJoinOperator.java:809)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.nextHive(SMBMapJoinOperator.java:771)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.setupContext(SMBMapJoinOperator.java:710)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.setUpFetchContexts(SMBMapJoinOperator.java:538)
at 
org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.processOp(SMBMapJoinOperator.java:248)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)


Repro:
create table vsmb_bucket_1(key decimal(9,0), value decimal(38,10)) 
  CLUSTERED BY (key) 
  SORTED BY (key) INTO 1 BUCKETS 
  STORED AS ORC;
create table vsmb_bucket_2(key decimal(19,3), value decimal(28,0)) 
  CLUSTERED BY (key) 
  SORTED BY (key) INTO 1 BUCKETS 
  STORED AS ORC;
  
insert into table vsmb_bucket_1 
  select cast(cint as decimal(9,0)) as key, 
cast(cfloat as decimal(38,10)) as value 
  from alltypesorc limit 2;
insert into table vsmb_bucket_2 
  select cast(cint as decimal(19,3)) as key, 
cast(cfloat as decimal(28,0)) as 

[jira] [Assigned] (HIVE-6412) SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys

2014-02-12 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-6412:
-

Assignee: Xuefu Zhang

 SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys
 -

 Key: HIVE-6412
 URL: https://issues.apache.org/jira/browse/HIVE-6412
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Assignee: Xuefu Zhang
Priority: Critical

 {code}
 Caused by: java.lang.ClassCastException: 
 org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to 
 org.apache.hadoop.hive.common.type.HiveDecimal
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:49)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:27)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:281)
 at 
 org.apache.hadoop.hive.ql.exec.JoinUtil.computeKeys(JoinUtil.java:143)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.next(SMBMapJoinOperator.java:809)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.nextHive(SMBMapJoinOperator.java:771)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.setupContext(SMBMapJoinOperator.java:710)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.setUpFetchContexts(SMBMapJoinOperator.java:538)
 at 
 org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.processOp(SMBMapJoinOperator.java:248)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
 {code}
 Repro:
 {code}
 create table vsmb_bucket_1(key decimal(9,0), value decimal(38,10)) 
   CLUSTERED BY (key) 
   SORTED BY (key) INTO 1 BUCKETS 
   STORED AS ORC;
 create table vsmb_bucket_2(key decimal(19,3), value decimal(28,0)) 
   CLUSTERED BY (key) 
   SORTED BY (key) INTO 1 BUCKETS 
   STORED AS ORC;
   
 insert into table vsmb_bucket_1 
   select cast(cint as decimal(9,0)) as key, 
 cast(cfloat as decimal(38,10)) as value 
   from alltypesorc limit 2;
 insert into table vsmb_bucket_2 
   select cast(cint as decimal(19,3)) as key, 
 cast(cfloat as decimal(28,0)) as value 
   from alltypesorc limit 2;
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.auto.convert.sortmerge.join.noconditionaltask = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
 b.key;
 select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = 
 b.key;
 {code}
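The repro joins two tables whose keys are declared with different decimal precision and scale (decimal(9,0) vs. decimal(19,3)), so the join keys must be brought to a common representation before the sorted-merge comparison. As a conceptual sketch only (this is not Hive's fix, which concerns the object inspector returning a Writable where a HiveDecimal is expected), Python's decimal module can show the normalization idea:

```python
from decimal import Decimal

def common_key(value, scale):
    # Rescale a decimal join key to a shared scale so keys coming from
    # columns with different declared precision/scale compare uniformly.
    return value.quantize(Decimal(1).scaleb(-scale))

a = common_key(Decimal("42"), 3)      # key from a decimal(9,0) column
b = common_key(Decimal("42.000"), 3)  # key from a decimal(19,3) column
a == b  # after normalization the merge can compare them directly
```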



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-4996) unbalanced calls to openTransaction/commitTransaction

2014-02-12 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899468#comment-13899468
 ] 

Szehon Ho commented on HIVE-4996:
-

Thanks Xuefu for the review.

 unbalanced calls to openTransaction/commitTransaction
 -

 Key: HIVE-4996
 URL: https://issues.apache.org/jira/browse/HIVE-4996
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0, 0.11.0, 0.12.0
 Environment: hiveserver1  Java HotSpot(TM) 64-Bit Server VM (build 
 20.6-b01, mixed mode)
Reporter: wangfeng
Assignee: Szehon Ho
Priority: Critical
  Labels: hive, metastore
 Fix For: 0.13.0

 Attachments: HIVE-4996.1.patch, HIVE-4996.2.patch, HIVE-4996.3.patch, 
 HIVE-4996.4.patch, HIVE-4996.patch, hive-4996.path

   Original Estimate: 504h
  Remaining Estimate: 504h

 when we used hiveserver1 based on hive-0.10.0, we found the following 
 exception thrown:
 FAILED: Error in metadata: MetaException(message:java.lang.RuntimeException: 
 commitTransaction was called but openTransactionCalls = 0. This probably 
 indicates that there are unbalanced calls to openTransaction/commitTransaction)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 help



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6407) Test authorization_revoke_table_priv.q is failing on trunk

2014-02-12 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6407:


Status: Open  (was: Patch Available)

 Test authorization_revoke_table_priv.q is failing on trunk
 --

 Key: HIVE-6407
 URL: https://issues.apache.org/jira/browse/HIVE-6407
 Project: Hive
  Issue Type: Sub-task
  Components: Tests
Reporter: Ashutosh Chauhan
Assignee: Thejas M Nair
 Attachments: HIVE-6407.1.patch, HIVE-6407.2.patch, HIVE-6407.patch


 Seems like the -- SORT_BEFORE_DIFF directive is required for the test.
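The effect of sorting output before diffing can be sketched in Python (an illustration of the idea behind the qtest directive, not the test harness itself): since Hive does not guarantee row order without an ORDER BY, comparing sorted lines keeps nondeterministic ordering from failing the golden-file diff.

```python
def diff_sorted(actual, expected):
    # Compare query output to the expected output after sorting lines,
    # so row order cannot cause a spurious diff.
    return sorted(actual.splitlines()) == sorted(expected.splitlines())

diff_sorted("b\ta\nc\td", "c\td\nb\ta")  # True: same rows, different order
diff_sorted("a\nb", "a\nc")              # False: genuinely different rows
```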



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6407) Test authorization_revoke_table_priv.q is failing on trunk

2014-02-12 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6407:


Attachment: HIVE-6407.2.patch

HIVE-6407.2.patch - patch rebased to resolve conflicts with test files.

 Test authorization_revoke_table_priv.q is failing on trunk
 --

 Key: HIVE-6407
 URL: https://issues.apache.org/jira/browse/HIVE-6407
 Project: Hive
  Issue Type: Sub-task
  Components: Tests
Reporter: Ashutosh Chauhan
Assignee: Thejas M Nair
 Attachments: HIVE-6407.1.patch, HIVE-6407.2.patch, HIVE-6407.patch


 Seems like the -- SORT_BEFORE_DIFF directive is required for the test.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6407) Test authorization_revoke_table_priv.q is failing on trunk

2014-02-12 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6407:


Status: Patch Available  (was: Open)

 Test authorization_revoke_table_priv.q is failing on trunk
 --

 Key: HIVE-6407
 URL: https://issues.apache.org/jira/browse/HIVE-6407
 Project: Hive
  Issue Type: Sub-task
  Components: Tests
Reporter: Ashutosh Chauhan
Assignee: Thejas M Nair
 Attachments: HIVE-6407.1.patch, HIVE-6407.2.patch, HIVE-6407.patch


 Seems like the -- SORT_BEFORE_DIFF directive is required for the test.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6416) Vectorized mathematical functions for decimal type.

2014-02-12 Thread Jitendra Nath Pandey (JIRA)
Jitendra Nath Pandey created HIVE-6416:
--

 Summary: Vectorized mathematical functions for decimal type.
 Key: HIVE-6416
 URL: https://issues.apache.org/jira/browse/HIVE-6416
 Project: Hive
  Issue Type: Sub-task
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey


Vectorized mathematical functions for decimal type.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HIVE-5952) SQL std auth - authorize grant/revoke roles

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-5952.


Resolution: Fixed

I will open another jira for that.

 SQL std auth - authorize grant/revoke roles
 ---

 Key: HIVE-5952
 URL: https://issues.apache.org/jira/browse/HIVE-5952
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

   Original Estimate: 48h
  Remaining Estimate: 48h

 User should be allowed to grant/revoke a role only if the user is SUPERUSER 
 or has admin privileges for the role.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6410) Allow output serializations separators to be set for HDFS path as well.

2014-02-12 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899494#comment-13899494
 ] 

Xuefu Zhang commented on HIVE-6410:
---

Obviously, this is a dupe of https://issues.apache.org/jira/browse/HIVE-5672. 
Please close this one and take HIVE-5672 instead. Thanks.

 Allow output serializations separators to be set for HDFS path as well.
 ---

 Key: HIVE-6410
 URL: https://issues.apache.org/jira/browse/HIVE-6410
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu

 HIVE-3682 adds functionality for users to set serialization constants for 
 'insert overwrite local directory'. The same functionality should be 
 available for an HDFS path as well. The suggested workaround is to create a 
 table with the required format and insert into that table, which forces users 
 to know the schema of the result and create the table ahead of time. Though 
 that works, it would be good to have the same functionality for loading into a 
 directory as well.
 I'm planning to add the same functionality in 'insert overwrite directory' in 
 this jira.
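What "setting the output serialization separator" means can be sketched in Python. This is an illustration of delimited-row writing, not Hive's serde code; the function name is hypothetical, and '\x01' (Ctrl-A) is Hive's documented default field delimiter:

```python
import io

def write_rows(rows, field_sep="\x01"):
    # Serialize result rows with a caller-chosen field separator, the way
    # ROW FORMAT DELIMITED FIELDS TERMINATED BY ... would for a directory write.
    buf = io.StringIO()
    for row in rows:
        buf.write(field_sep.join(str(v) for v in row) + "\n")
    return buf.getvalue()

write_rows([(1, "a"), (2, "b")], field_sep=",")  # '1,a\n2,b\n'
```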



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6167) Allow user-defined functions to be qualified with database name

2014-02-12 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899510#comment-13899510
 ] 

Ashutosh Chauhan commented on HIVE-6167:


OK, if need be that could be done in a follow-up. +1

 Allow user-defined functions to be qualified with database name
 ---

 Key: HIVE-6167
 URL: https://issues.apache.org/jira/browse/HIVE-6167
 Project: Hive
  Issue Type: Sub-task
  Components: UDF
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-6167.1.patch, HIVE-6167.2.patch, HIVE-6167.3.patch, 
 HIVE-6167.4.patch


 Function names in Hive are currently unqualified and there is a single 
 namespace for all function names. This task would allow users to define 
 temporary UDFs (and eventually permanent UDFs) with a database name, such as:
 CREATE TEMPORARY FUNCTION userdb.myfunc 'myudfclass';
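Name resolution for such qualified functions can be sketched in a few lines of Python (an illustration of the lookup rule, not Hive's FunctionRegistry; the 'default' session database is assumed):

```python
def resolve_function(name, current_db="default"):
    # Split an optionally db-qualified function name like 'userdb.myfunc'
    # into (db, function); unqualified names fall back to the session db.
    db, sep, func = name.rpartition(".")
    return (db, func) if sep else (current_db, func)

resolve_function("userdb.myfunc")  # ('userdb', 'myfunc')
resolve_function("myfunc")         # ('default', 'myfunc')
```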



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6256) add batch dropping of partitions to Hive metastore (as well as to dropTable)

2014-02-12 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899507#comment-13899507
 ] 

Ashutosh Chauhan commented on HIVE-6256:


+1

 add batch dropping of partitions to Hive metastore (as well as to dropTable)
 

 Key: HIVE-6256
 URL: https://issues.apache.org/jira/browse/HIVE-6256
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Attachments: HIVE-6256.01.patch, HIVE-6256.02.patch, 
 HIVE-6256.03.patch, HIVE-6256.04.patch, HIVE-6256.05.patch, 
 HIVE-6256.06.patch, HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, 
 HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, 
 HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, HIVE-6256.patch


 Metastore drop partitions call drops one partition; when many are being 
 dropped this can be slow. Partitions could be dropped in batch instead, if 
 multiple are dropped via one command. Drop table can also use that.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Work logged] (HIVE-6378) HCatClient::createTable() doesn't allow SerDe class to be specified

2014-02-12 Thread Karl D. Gierach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6378?focusedWorklogId=15938page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-15938
 ]

Karl D. Gierach logged work on HIVE-6378:
-

Author: Karl D. Gierach
Created on: 12/Feb/14 19:59
Start Date: 12/Feb/14 19:58
Worklog Time Spent: 4m 
  Work Description: completed patch, awaiting committer review.

Issue Time Tracking
---

Worklog Id: (was: 15938)
Time Spent: 4m
Remaining Estimate: 3h 56m  (was: 4h)

 HCatClient::createTable() doesn't allow SerDe class to be specified
 ---

 Key: HIVE-6378
 URL: https://issues.apache.org/jira/browse/HIVE-6378
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Karl D. Gierach
Assignee: Karl D. Gierach
  Labels: patch
 Fix For: 0.13.0

 Attachments: HIVE-6378-1.patch

   Original Estimate: 4h
  Time Spent: 4m
  Remaining Estimate: 3h 56m

 Recreating HCATALOG-641 under Hive, since HCatalog was moved into Hive.
 With respect to HCATALOG-641, a patch was originally provided (but not 
 committed), so this work will consist of simply re-basing the original patch 
 to the current trunk and the latest released version.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6356) Dependency injection in hbase storage handler is broken

2014-02-12 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899520#comment-13899520
 ] 

Sergey Shelukhin commented on HIVE-6356:


addendum looks good to me

 Dependency injection in hbase storage handler is broken
 ---

 Key: HIVE-6356
 URL: https://issues.apache.org/jira/browse/HIVE-6356
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6356.1.patch.txt, HIVE-6356.2.patch.txt, 
 HIVE-6356.addendum.00.patch


 Dependent jars for hbase are not added to tmpjars, which is caused by the 
 change of a method signature (TableMapReduceUtil.addDependencyJars).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6356) Dependency injection in hbase storage handler is broken

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6356:
---

Status: Open  (was: Patch Available)

[~navis] Can you reupload the patch with correct name so Hive QA picks it up.

 Dependency injection in hbase storage handler is broken
 ---

 Key: HIVE-6356
 URL: https://issues.apache.org/jira/browse/HIVE-6356
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6356.1.patch.txt, HIVE-6356.2.patch.txt, 
 HIVE-6356.addendum.00.patch


 Dependent jars for hbase are not added to tmpjars, which is caused by the 
 change of a method signature (TableMapReduceUtil.addDependencyJars).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6378) HCatClient::createTable() doesn't allow SerDe class to be specified

2014-02-12 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899527#comment-13899527
 ] 

Thejas M Nair commented on HIVE-6378:
-

[~avandana] or [~mithun] Would you be able to review this patch ?


 HCatClient::createTable() doesn't allow SerDe class to be specified
 ---

 Key: HIVE-6378
 URL: https://issues.apache.org/jira/browse/HIVE-6378
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Karl D. Gierach
Assignee: Karl D. Gierach
  Labels: patch
 Fix For: 0.13.0

 Attachments: HIVE-6378-1.patch

   Original Estimate: 4h
  Time Spent: 4m
  Remaining Estimate: 3h 56m

 Recreating the HCATALOG-641 under HIVE, since HCATALOG was moved into HIVE.
 With respect to HCATALOG-641, a patch was originally provided (but not 
 committed), so this work will consist of simply re-basing the original patch 
 to the current trunk and the latest released version.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6406) Introduce immutable-table table property and if set, disallow insert-into

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899537#comment-13899537
 ] 

Hive QA commented on HIVE-6406:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628308/HIVE-6406.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5091 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1293/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1293/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628308

 Introduce immutable-table table property and if set, disallow insert-into
 -

 Key: HIVE-6406
 URL: https://issues.apache.org/jira/browse/HIVE-6406
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore, Query Processor, Thrift API
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6406.patch


 As part of HIVE-6405's attempt to make HCatalog and Hive behave in similar 
 ways with regards to immutable tables, this is a companion task to introduce 
 the notion of an immutable table, wherein all tables are not immutable by 
 default, and have this be a table property. If this property is set for a 
 table, and we attempt to write to a table that already has data (or a 
 partition), disallow INSERT INTO it from Hive (if the destination directory 
 is non-empty). This property being set will allow hive to mimic HCatalog's 
 current immutable-table property.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6406) Introduce immutable-table table property and if set, disallow insert-into

2014-02-12 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899558#comment-13899558
 ] 

Brock Noland commented on HIVE-6406:


It seems like is_ and _table are noise?

 Introduce immutable-table table property and if set, disallow insert-into
 -

 Key: HIVE-6406
 URL: https://issues.apache.org/jira/browse/HIVE-6406
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore, Query Processor, Thrift API
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6406.patch


 As part of HIVE-6405's attempt to make HCatalog and Hive behave in similar 
 ways with regards to immutable tables, this is a companion task to introduce 
 the notion of an immutable table, wherein all tables are not immutable by 
 default, and have this be a table property. If this property is set for a 
 table, and we attempt to write to a table that already has data (or a 
 partition), disallow INSERT INTO it from Hive (if the destination directory 
 is non-empty). This property being set will allow hive to mimic HCatalog's 
 current immutable-table property.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6417) sql std auth - new users in admin role config should get added

2014-02-12 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6417:
---

 Summary: sql std auth - new users in admin role config should get 
added
 Key: HIVE-6417
 URL: https://issues.apache.org/jira/browse/HIVE-6417
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair


If the metastore is started with hive.users.in.admin.role=user1, then user1 is 
added to the admin role in the metastore.
If the value is then changed to hive.users.in.admin.role=user2, user2 should 
also get added to the role in the metastore. Right now, if the admin role 
already exists, new users don't get added.
A work-around is for user1 to add user2 to the admin role using a grant role 
statement.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6418) MapJoinRowContainer has large memory overhead in typical cases

2014-02-12 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-6418:
--

 Summary: MapJoinRowContainer has large memory overhead in typical 
cases
 Key: HIVE-6418
 URL: https://issues.apache.org/jira/browse/HIVE-6418
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6418) MapJoinRowContainer has large memory overhead in typical cases

2014-02-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-6418:
---

Attachment: HIVE-6418.WIP.patch

First cut.
Introduces an alternative container that is basically backed by an array. 
Initially it just stores the context and all the un-deserialized writables.
On access, it deserializes the writables. It knows the row count at that point 
and can determine the row length from the first deserialized row (assuming it's 
the same for all rows), so the array represents a matrix with this row length.
For the simple case of one row, it also serves as a list, so it can return 
itself as that row. Otherwise it returns a read-only sublist.
This works for Tez, because Tez doesn't have to serialize/deserialize the 
hashtable. I am not sure the lazy part can be made to work for MR with its 
extra stage, probably not, so MR uses the old container.

WIP:
Need to get rid of the index stored in each row; unless rowCount is made a 
short it will round to 8 bytes, I presume, and it's really useless.
Also need to run more tests; I have only run some Tez tests so far.
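The layout described above (serialized rows kept as raw bytes until first access, then inflated into a flat array treated as a rowCount x rowLength matrix) can be sketched roughly as follows. This is a simplified illustration with hypothetical names and a toy int-based "SerDe", not the actual patch code:

```java
import java.nio.ByteBuffer;
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;

/** Toy sketch of a lazily-deserialized row container (hypothetical names). */
class LazyRowContainer {
    private final List<byte[]> serialized = new ArrayList<>();
    private Object[] flat;      // rowCount * rowLength cells once inflated
    private int rowLength = -1; // taken from the first row, assumed uniform

    void addSerializedRow(byte[] bytes) {
        if (flat != null) throw new IllegalStateException("already inflated");
        serialized.add(bytes);
    }

    /** Returns row i as a read-only view into the flat matrix. */
    List<Object> row(int i) {
        inflate();
        final int off = i * rowLength;
        final int len = rowLength;
        return new AbstractList<Object>() {
            @Override public Object get(int j) { return flat[off + j]; }
            @Override public int size() { return len; }
        };
    }

    int rowCount() {
        return flat == null ? serialized.size() : flat.length / rowLength;
    }

    // Toy "deserialization": each row is a sequence of 4-byte ints.
    private void inflate() {
        if (flat != null) return;
        rowLength = serialized.get(0).length / 4;
        flat = new Object[serialized.size() * rowLength];
        int k = 0;
        for (byte[] b : serialized) {
            ByteBuffer buf = ByteBuffer.wrap(b);
            for (int j = 0; j < rowLength; j++) flat[k++] = buf.getInt();
        }
        serialized.clear(); // raw bytes no longer needed
    }
}

public class LazyRowContainerDemo {
    public static void main(String[] args) {
        LazyRowContainer c = new LazyRowContainer();
        c.addSerializedRow(ByteBuffer.allocate(8).putInt(1).putInt(2).array());
        c.addSerializedRow(ByteBuffer.allocate(8).putInt(3).putInt(4).array());
        System.out.println(c.row(1));      // first access triggers inflation; prints [3, 4]
        System.out.println(c.rowCount());  // prints 2
    }
}
```

The memory win comes from storing one Object[] per hashtable value instead of a list of per-row lists, at the cost of deserializing only on first access.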

 MapJoinRowContainer has large memory overhead in typical cases
 --

 Key: HIVE-6418
 URL: https://issues.apache.org/jira/browse/HIVE-6418
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-6418.WIP.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6418) MapJoinRowContainer has large memory overhead in typical cases

2014-02-12 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899600#comment-13899600
 ] 

Sergey Shelukhin commented on HIVE-6418:


Oh, MJRC became an interface, and I renamed the original one to Eager 
because I couldn't come up with a good interface name otherwise.

 MapJoinRowContainer has large memory overhead in typical cases
 --

 Key: HIVE-6418
 URL: https://issues.apache.org/jira/browse/HIVE-6418
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-6418.WIP.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6418) MapJoinRowContainer has large memory overhead in typical cases

2014-02-12 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899598#comment-13899598
 ] 

Sergey Shelukhin commented on HIVE-6418:


[~gopalv] [~ashutoshc] fyi

 MapJoinRowContainer has large memory overhead in typical cases
 --

 Key: HIVE-6418
 URL: https://issues.apache.org/jira/browse/HIVE-6418
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-6418.WIP.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6406) Introduce immutable-table table property and if set, disallow insert-into

2014-02-12 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899604#comment-13899604
 ] 

Sushanth Sowmyan commented on HIVE-6406:


Fair enough, I'd agree. I retained the is_ because there was already a 
parameter called is_archived, and I was trying to maintain style. The 
_table I didn't think about, but it can be removed as well. I'll regenerate 
the patch with that change.

 Introduce immutable-table table property and if set, disallow insert-into
 -

 Key: HIVE-6406
 URL: https://issues.apache.org/jira/browse/HIVE-6406
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore, Query Processor, Thrift API
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6406.patch


 As part of HIVE-6405's attempt to make HCatalog and Hive behave in similar 
 ways with regards to immutable tables, this is a companion task to introduce 
 the notion of an immutable table, wherein all tables are not immutable by 
 default, and have this be a table property. If this property is set for a 
 table, and we attempt to write to a table that already has data (or a 
 partition), disallow INSERT INTO it from Hive (if the destination directory 
 is non-empty). This property being set will allow hive to mimic HCatalog's 
 current immutable-table property.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6256) add batch dropping of partitions to Hive metastore (as well as to dropTable)

2014-02-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6256:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sergey!

 add batch dropping of partitions to Hive metastore (as well as to dropTable)
 

 Key: HIVE-6256
 URL: https://issues.apache.org/jira/browse/HIVE-6256
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-6256.01.patch, HIVE-6256.02.patch, 
 HIVE-6256.03.patch, HIVE-6256.04.patch, HIVE-6256.05.patch, 
 HIVE-6256.06.patch, HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, 
 HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, 
 HIVE-6256.nogen.patch, HIVE-6256.nogen.patch, HIVE-6256.patch


 Metastore drop partitions call drops one partition; when many are being 
 dropped this can be slow. Partitions could be dropped in batch instead, if 
 multiple are dropped via one command. Drop table can also use that.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6406) Introduce immutable-table table property and if set, disallow insert-into

2014-02-12 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6406:
---

Attachment: HIVE-6406.2.patch

Attached updated patch.

 Introduce immutable-table table property and if set, disallow insert-into
 -

 Key: HIVE-6406
 URL: https://issues.apache.org/jira/browse/HIVE-6406
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore, Query Processor, Thrift API
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-6406.2.patch, HIVE-6406.patch


 As part of HIVE-6405's attempt to make HCatalog and Hive behave in similar 
 ways with regards to immutable tables, this is a companion task to introduce 
 the notion of an immutable table, wherein all tables are not immutable by 
 default, and have this be a table property. If this property is set for a 
 table, and we attempt to write to a table that already has data (or a 
 partition), disallow INSERT INTO it from Hive (if the destination directory 
 is non-empty). This property being set will allow hive to mimic HCatalog's 
 current immutable-table property.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6233) JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode

2014-02-12 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6233:
-

Component/s: WebHCat

 JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode
 --

 Key: HIVE-6233
 URL: https://issues.apache.org/jira/browse/HIVE-6233
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Attachments: HIVE-6233.patch


 JOBS testsuite performs operations with two users test.user.name and 
 test.other.user.name. In Kerberos secure mode it should kinit as the 
 respective user.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6378) HCatClient::createTable() doesn't allow SerDe class to be specified

2014-02-12 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899622#comment-13899622
 ] 

Mithun Radhakrishnan commented on HIVE-6378:


Yes, Thejas, I'll have a look at this.

I've had an internal patch for exactly this for a while now. I haven't had the 
time to post it for review. Thanks for working on this, [~kgierach]. I'll 
comment shortly.

 HCatClient::createTable() doesn't allow SerDe class to be specified
 ---

 Key: HIVE-6378
 URL: https://issues.apache.org/jira/browse/HIVE-6378
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Karl D. Gierach
Assignee: Karl D. Gierach
  Labels: patch
 Fix For: 0.13.0

 Attachments: HIVE-6378-1.patch

   Original Estimate: 4h
  Time Spent: 4m
  Remaining Estimate: 3h 56m

 Recreating the HCATALOG-641 under HIVE, since HCATALOG was moved into HIVE.
 With respect to HCATALOG-641, a patch was originally provided (but not 
 committed), so this work will consist of simply re-basing the original patch 
 to the current trunk and the latest released version.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-5989) Hive metastore authorization check is not threadsafe

2014-02-12 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899630#comment-13899630
 ] 

Sushanth Sowmyan commented on HIVE-5989:


[~thejas], could I please get a review on this? I'm not certain if this is 
affected by any of your newer patches, but this is a pretty important bug at 
scale.

 Hive metastore authorization check is not threadsafe
 

 Key: HIVE-5989
 URL: https://issues.apache.org/jira/browse/HIVE-5989
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Security
Affects Versions: 0.11.0, 0.12.0, 0.12.1
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
Priority: Critical
 Attachments: HIVE-5989.patch, SleepyAP.patch


 Metastore-side authorization has a couple of pretty important threadsafety 
 bugs in it:
 a) The HiveMetastoreAuthenticated instantiated by the 
 AuthorizationPreEventListener is static. This is a premature optimization and 
 incorrect, as it will result in Authenticator implementations that store 
 state potentially giving an incorrect result, and this bug very much exists 
 with the DefaultMetastoreAuthenticator.
 b) It assumes HMSHandler.getHiveConf() is itself going to be thread-safe, 
 which it is not. HMSHandler.getConf() is the appropriate thread-safe 
 equivalent.
 The effect of this bug is that if there are two users that are concurrently 
 running jobs on the metastore, we might :
 a) Allow a user to do something they didn't have permission to, because the 
 other person did. (Security hole)
 b) Disallow a user from doing something they should have permission to (More 
 common - annoying and can cause job failures)
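The sharing bug in (a) can be illustrated with a minimal, self-contained sketch (hypothetical class names, not the actual Hive types): a stateful authenticator held in a static field leaks one thread's identity to every other thread, while a per-thread (or per-listener-instance) authenticator does not:

```java
/** Toy stand-in for a stateful authenticator (hypothetical, not Hive's API). */
class StatefulAuthenticator {
    private String user; // mutable state makes sharing across threads unsafe
    void setUser(String u) { user = u; }
    String currentUser() { return user; }
}

public class SharedAuthenticatorDemo {
    // Buggy pattern: one static instance shared by all request threads.
    static final StatefulAuthenticator SHARED = new StatefulAuthenticator();
    // Safer pattern: an instance per thread, so identities cannot leak.
    static final ThreadLocal<StatefulAuthenticator> PER_THREAD =
            ThreadLocal.withInitial(StatefulAuthenticator::new);

    public static void main(String[] args) throws InterruptedException {
        Thread other = new Thread(() -> {
            SHARED.setUser("alice");
            PER_THREAD.get().setUser("alice");
        });
        other.start();
        other.join();
        // The main thread never authenticated, yet the shared instance now
        // reports the other thread's identity -> wrong authorization result.
        System.out.println(SHARED.currentUser());           // prints "alice"
        System.out.println(PER_THREAD.get().currentUser()); // prints "null"
    }
}
```

The same reasoning applies to (b): a getter that returns shared mutable configuration must be replaced with its thread-safe equivalent before per-request state can be trusted.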



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6325) Enable using multiple concurrent sessions in tez

2014-02-12 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899629#comment-13899629
 ] 

Vikram Dixit K commented on HIVE-6325:
--

The MinimrCliDriver tests pass on my setup. For revoke_table_priv, there is a 
minor difference in the output that has been affecting the runs of many jiras 
and is unrelated to this one.

 Enable using multiple concurrent sessions in tez
 

 Key: HIVE-6325
 URL: https://issues.apache.org/jira/browse/HIVE-6325
 Project: Hive
  Issue Type: Improvement
  Components: Tez
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-6325.1.patch, HIVE-6325.2.patch, HIVE-6325.3.patch, 
 HIVE-6325.4.patch, HIVE-6325.5.patch


 We would like to enable multiple concurrent sessions in tez via hive server 
 2. This will enable users to make efficient use of the cluster when it has 
 been partitioned using yarn queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6378) HCatClient::createTable() doesn't allow SerDe class to be specified

2014-02-12 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899627#comment-13899627
 ] 

Mithun Radhakrishnan commented on HIVE-6378:


Oh, hang on... I thought it looked familiar. :)) HCATALOG-641.

[~thejas], I can't +1 this if it's my patch, right? ;]

 HCatClient::createTable() doesn't allow SerDe class to be specified
 ---

 Key: HIVE-6378
 URL: https://issues.apache.org/jira/browse/HIVE-6378
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Karl D. Gierach
Assignee: Karl D. Gierach
  Labels: patch
 Fix For: 0.13.0

 Attachments: HIVE-6378-1.patch

   Original Estimate: 4h
  Time Spent: 4m
  Remaining Estimate: 3h 56m

 Recreating the HCATALOG-641 under HIVE, since HCATALOG was moved into HIVE.
 With respect to HCATALOG-641, a patch was originally provided (but not 
 committed), so this work will consist of simply re-basing the original patch 
 to the current trunk and the latest released version.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6383) Newly added tests in TestJdbcDriver2 from HIVE-4395 is not running

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899638#comment-13899638
 ] 

Hive QA commented on HIVE-6383:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628318/HIVE-6383.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5093 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_revoke_table_priv
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1294/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1294/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628318

 Newly added tests in TestJdbcDriver2 from HIVE-4395 is not running
 --

 Key: HIVE-6383
 URL: https://issues.apache.org/jira/browse/HIVE-6383
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Reporter: Navis
Assignee: Prasad Mujumdar
Priority: Minor
 Attachments: HIVE-6383.1.patch


 Newly added tests are not marked with the @Test annotation and do not seem to 
 be running. In my attempt, after adding the annotation, testFetchFirstQuery 
 failed. [~prasadm] Could you check this?



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6233) JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode

2014-02-12 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899635#comment-13899635
 ] 

Eugene Koifman commented on HIVE-6233:
--

Can the test-multi-users target be run on a non-secure cluster and still 
test something meaningful?
If so, then the changes look good. If not, I'm concerned that moving tests out 
of the default test target will make them run less often during the normal dev 
cycle...


 JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode
 --

 Key: HIVE-6233
 URL: https://issues.apache.org/jira/browse/HIVE-6233
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Attachments: HIVE-6233.patch


 JOBS testsuite performs operations with two users test.user.name and 
 test.other.user.name. In Kerberos secure mode it should kinit as the 
 respective user.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6233) JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode

2014-02-12 Thread Deepesh Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899646#comment-13899646
 ] 

Deepesh Khandelwal commented on HIVE-6233:
--

Thanks [~ekoifman] for the review! Yes, the test can be run on non-secure 
clusters as well and does provide value in testing multi-user scenarios for 
validating user/group permissions.

 JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode
 --

 Key: HIVE-6233
 URL: https://issues.apache.org/jira/browse/HIVE-6233
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Attachments: HIVE-6233.patch


 JOBS testsuite performs operations with two users test.user.name and 
 test.other.user.name. In Kerberos secure mode it should kinit as the 
 respective user.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6072) With HCatalog refactoring, Hadoop_HBase e2e will fail

2014-02-12 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6072:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 With HCatalog refactoring, Hadoop_HBase e2e will fail
 -

 Key: HIVE-6072
 URL: https://issues.apache.org/jira/browse/HIVE-6072
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler, HCatalog
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-6072.1.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6072) With HCatalog refactoring, Hadoop_HBase e2e will fail

2014-02-12 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-6072:
---

Fix Version/s: 0.13.0

 With HCatalog refactoring, Hadoop_HBase e2e will fail
 -

 Key: HIVE-6072
 URL: https://issues.apache.org/jira/browse/HIVE-6072
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler, HCatalog
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 0.13.0

 Attachments: HIVE-6072.1.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6072) With HCatalog refactoring, Hadoop_HBase e2e will fail

2014-02-12 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899655#comment-13899655
 ] 

Sushanth Sowmyan commented on HIVE-6072:


Committed. Thanks, Hari!

 With HCatalog refactoring, Hadoop_HBase e2e will fail
 -

 Key: HIVE-6072
 URL: https://issues.apache.org/jira/browse/HIVE-6072
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler, HCatalog
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 0.13.0

 Attachments: HIVE-6072.1.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6233) JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode

2014-02-12 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899666#comment-13899666
 ] 

Eugene Koifman commented on HIVE-6233:
--

+1

 JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode
 --

 Key: HIVE-6233
 URL: https://issues.apache.org/jira/browse/HIVE-6233
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Attachments: HIVE-6233.patch


 JOBS testsuite performs operations with two users test.user.name and 
 test.other.user.name. In Kerberos secure mode it should kinit as the 
 respective user.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6419) Update people page as discussed on dev@

2014-02-12 Thread Brock Noland (JIRA)
Brock Noland created HIVE-6419:
--

 Summary: Update people page as discussed on dev@
 Key: HIVE-6419
 URL: https://issues.apache.org/jira/browse/HIVE-6419
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-6419.patch





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Re: Update People Page?

2014-02-12 Thread Brock Noland
Here is a patch, all I need is a +1

https://issues.apache.org/jira/browse/HIVE-6419




On Tue, Feb 11, 2014 at 12:43 PM, Owen O'Malley omal...@apache.org wrote:

 +1 to fixing the page. Maybe we should move emeritus members to the bottom
 of the page?

 .. Owen


 On Tue, Feb 11, 2014 at 9:58 AM, Brock Noland br...@cloudera.com wrote:

  On Tue, Feb 11, 2014 at 3:35 AM, Lefty Leverenz leftylever...@gmail.com
  wrote:
 
   +1
  
 Additionally I noted that HW as two different capitalizations. I am
 a
   touch OCD. :)
  
   Hooray!  So glad I'm not the only one.  Let's call it perfectionism,
 not
   OCD.  (And make that has -- which I didn't notice until cut--paste.
 :)
  
 
  Ugh :)
 
 
Now how about links on the company names, can we add more of them?
Impossible to link every one because Doc of the Bay has no website,
 it's
   just an alias for yours truly.
  
 
  I like the website links as well. I think we can add them as well.
 
  Brock
 
 
  
  
   On Mon, Feb 10, 2014 at 10:48 PM, Carl Steinbach 
 cwsteinb...@gmail.com
   wrote:
  
Sounds good to me!
On Feb 10, 2014 6:21 PM, Thejas Nair the...@hortonworks.com
 wrote:
   
 Hortonworks is the right capitalization. Yes, please update it.
 Raghotham is still at facebook (AFAIK).





 On Mon, Feb 10, 2014 at 5:40 PM, Brock Noland br...@cloudera.com
wrote:

  Hi,
 
  While creating the new version of the website I noted that the
  people
 page
  is fairly out of date. For example based on LinkedIn and Google I
noted:
 
  Yongqiang He is at Dropbox
  Raghotham Murthy is a Stanford Grad student?
  Namit Jain is at Nutanix
  Carl Steinbach is at LinkedIn
 
  Additionally I noted that HW as two different capitalizations. I
  am a
 touch
  OCD. :)
 
  Should we go ahead an update the page?
 
  Cheers,
  Brock
 


   
  
 
 
 
  --
  Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
 






[jira] [Updated] (HIVE-6419) Update people page as discussed on dev@

2014-02-12 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-6419:
---

Attachment: HIVE-6419.patch

 Update people page as discussed on dev@
 ---

 Key: HIVE-6419
 URL: https://issues.apache.org/jira/browse/HIVE-6419
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-6419.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6419) Update people page as discussed on dev@

2014-02-12 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899693#comment-13899693
 ] 

Gunther Hagleitner commented on HIVE-6419:
--

+1

 Update people page as discussed on dev@
 ---

 Key: HIVE-6419
 URL: https://issues.apache.org/jira/browse/HIVE-6419
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-6419.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HIVE-6419) Update people page as discussed on dev@

2014-02-12 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland resolved HIVE-6419.


Resolution: Fixed

Thank you!

 Update people page as discussed on dev@
 ---

 Key: HIVE-6419
 URL: https://issues.apache.org/jira/browse/HIVE-6419
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-6419.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HIVE-6233) JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode

2014-02-12 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-6233:
-

   Resolution: Fixed
Fix Version/s: 0.13.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Patch committed to trunk.

 JOBS testsuite in WebHCat E2E tests does not work correctly in secure mode
 --

 Key: HIVE-6233
 URL: https://issues.apache.org/jira/browse/HIVE-6233
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
 Fix For: 0.13.0

 Attachments: HIVE-6233.patch


 JOBS testsuite performs operations with two users test.user.name and 
 test.other.user.name. In Kerberos secure mode it should kinit as the 
 respective user.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6401) root_dir_external_table.q fails with -Phadoop-2

2014-02-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899710#comment-13899710
 ] 

Jason Dere commented on HIVE-6401:
--

Looks like this is due to MAPREDUCE-5756.  In the test case, when getSplits() 
is called on /, it returns InputSplits for the following paths:

/00_0
/Users/
/build/
/tmp/
/user/

Whereas in hadoop-1 it only used to return:

/00_0

The query execution seems to expect the InputSplits to be files, and not 
directories.  
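The guard the query path appears to need can be illustrated outside Hive with plain java.nio (the class and method names here are hypothetical; Hive's actual fix would belong in the getSplits()/listStatus code path, not in standalone code like this):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SplitFilterDemo {
    // Keep only regular files when enumerating a directory, mirroring the
    // filtering getSplits() would need once listStatus() starts returning
    // directory entries (as it does for "/" on hadoop-2).
    static List<Path> fileOnlyPaths(Path dir) throws IOException {
        try (Stream<Path> s = Files.list(dir)) {
            return s.filter(Files::isRegularFile)
                    .sorted()
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("splits");
        Files.createFile(tmp.resolve("000000_0"));   // a data file entry
        Files.createDirectory(tmp.resolve("user"));  // a directory entry
        System.out.println(fileOnlyPaths(tmp).size()); // prints 1: the directory is dropped
    }
}
```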

Stack trace for listStatus() was:
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:272)
org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:343)
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:309)
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:427)
org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:396)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
java.security.AccessController.doPrivileged(Native Method)
javax.security.auth.Subject.doAs(Subject.java:396)
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:419)
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1548)
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1152)
org.apache.hadoop.hive.ql.Driver.run(Driver.java:992)
org.apache.hadoop.hive.ql.Driver.run(Driver.java:982)
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:359)
org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:907)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.runTest(TestMinimrCliDriver.java:133)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table(TestMinimrCliDriver.java:117)

 root_dir_external_table.q fails with -Phadoop-2
 ---

 Key: HIVE-6401
 URL: https://issues.apache.org/jira/browse/HIVE-6401
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere

 The test passes with -Phadoop-1, but when testing against Hadoop2 it reports 
 the following error:
 {quote}
 Error during job, obtaining debugging information...
 Job Tracking URL: http://dev01:40951/proxy/application_1392069796008_0002/
 Examining task ID: task_1392069796008_0002_m_01 (and more) from job 
 job_1392069796008_0002
 Task with the most failures(4):
 -
 Task ID:
   task_1392069796008_0002_m_01
 URL:
   
 http://dev01.hortonworks.com:40951/taskdetails.jsp?jobid=job_1392069796008_0002tipid=task_1392069796008_0002_m_01
 -
 Diagnostic Messages for this Task:
 Error: java.io.IOException: java.lang.reflect.InvocationTargetException
 at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
 at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:301)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.&lt;init&gt;(HadoopShimsSecure.java:248)
 at 
 

[jira] [Updated] (HIVE-6362) Support union all on tez

2014-02-12 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-6362:
-

Attachment: HIVE-6362.3.patch

 Support union all on tez
 

 Key: HIVE-6362
 URL: https://issues.apache.org/jira/browse/HIVE-6362
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch

 Attachments: HIVE-6362.1.patch, HIVE-6362.2.patch, HIVE-6362.3.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6420) upgrade script for Hive 13 is missing for Derby

2014-02-12 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-6420:
--

 Summary: upgrade script for Hive 13 is missing for Derby
 Key: HIVE-6420
 URL: https://issues.apache.org/jira/browse/HIVE-6420
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor


There's an upgrade script for all DSes but not for Derby. Nothing needs to be 
done in that script but I'm being told that some tools might break if there's 
no matching file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6421) abs() should preserve precision/scale of decimal input

2014-02-12 Thread Jason Dere (JIRA)
Jason Dere created HIVE-6421:


 Summary: abs() should preserve precision/scale of decimal input
 Key: HIVE-6421
 URL: https://issues.apache.org/jira/browse/HIVE-6421
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Jason Dere
Assignee: Jason Dere


{noformat}
hive describe dec1;
OK
c1  decimal(10,2)   None 

hive explain select c1, abs(c1) from dec1;
 ...
Select Operator
  expressions: c1 (type: decimal(10,2)), abs(c1) (type: 
decimal(38,18))

{noformat}

Given that abs() is a GenericUDF it should be possible for the return type 
precision/scale to match the input precision/scale.
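As a point of comparison, java.math.BigDecimal's abs() already preserves the scale (and hence the precision) of its input; a standalone sketch of the desired behavior (not Hive code; the class name is illustrative):

```java
import java.math.BigDecimal;

public class AbsScaleDemo {
    // BigDecimal.abs() returns a value with the same scale and unscaled
    // digits as its input -- the behavior HIVE-6421 asks of Hive's
    // decimal abs() as well.
    static BigDecimal absPreserving(BigDecimal in) {
        return in.abs();
    }

    public static void main(String[] args) {
        BigDecimal c1 = new BigDecimal("-1234.56"); // models a decimal(10,2) column value
        BigDecimal r = absPreserving(c1);
        System.out.println(r + " scale=" + r.scale() + " precision=" + r.precision());
        // prints: 1234.56 scale=2 precision=6
    }
}
```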



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6407) Test authorization_revoke_table_priv.q is failing on trunk

2014-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13899750#comment-13899750
 ] 

Hive QA commented on HIVE-6407:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12628543/HIVE-6407.2.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5087 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hadoop.hive.jdbc.TestJdbcDriver.testShowRoleGrant
org.apache.hive.jdbc.TestJdbcDriver2.testShowRoleGrant
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1295/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1295/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12628543

 Test authorization_revoke_table_priv.q is failing on trunk
 --

 Key: HIVE-6407
 URL: https://issues.apache.org/jira/browse/HIVE-6407
 Project: Hive
  Issue Type: Sub-task
  Components: Tests
Reporter: Ashutosh Chauhan
Assignee: Thejas M Nair
 Attachments: HIVE-6407.1.patch, HIVE-6407.2.patch, HIVE-6407.patch


 Seems like the -- SORT_BEFORE_DIFF directive is required for the test.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HIVE-6422) SQL std auth - revert change for view keyword in grant statement

2014-02-12 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6422:
---

 Summary: SQL std auth - revert change for view keyword in grant 
statement
 Key: HIVE-6422
 URL: https://issues.apache.org/jira/browse/HIVE-6422
 Project: Hive
  Issue Type: Sub-task
Reporter: Thejas M Nair
Assignee: Thejas M Nair


SQL standard does not support view keyword in grant statement. HIVE-6181 which 
was added as part of sql standard changes, needs to be reverted.




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

