[jira] [Updated] (HIVE-6808) sql std auth - describe table, show partitions are not being authorized

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6808:


Attachment: HIVE-6808.1.patch

> sql std auth - describe table, show partitions are not being authorized
> ---
>
> Key: HIVE-6808
> URL: https://issues.apache.org/jira/browse/HIVE-6808
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.13.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-6808.1.patch
>
>
> Only users with SELECT privilege on the table should be able to do 'describe 
> table' and 'show partitions'.
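A hypothetical session illustrating the intended behavior (the table and user names are made up, and the GRANT statement is a sketch of the SQL standard authorization model, not taken from the patch):
{noformat}
-- as a user without SELECT privilege on the table:
DESCRIBE mock_table;         -- should be rejected by authorization
SHOW PARTITIONS mock_table;  -- should be rejected by authorization

-- after the table owner runs:
GRANT SELECT ON mock_table TO USER user1;
-- both commands should succeed for user1.
{noformat}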



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6808) sql std auth - describe table, show partitions are not being authorized

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957401#comment-13957401
 ] 

Thejas M Nair commented on HIVE-6808:
-

Several .q.out files need to be updated; I am regenerating those files. 
For ease of review, I am uploading the patch without them for now (HIVE-6808.1.patch).




[jira] [Updated] (HIVE-6811) LOAD command does not work with relative paths on Windows

2014-04-01 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-6811:
-

Attachment: HIVE-6811.1.patch

Patch to use toUri() again when generating the path string.  [~xuefuz], does 
this break any of the other changes in HIVE-6048, or will this work OK?

> LOAD command does not work with relative paths on Windows
> -
>
> Key: HIVE-6811
> URL: https://issues.apache.org/jira/browse/HIVE-6811
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-6811.1.patch
>
>
> qfile tests on Windows fail when trying to load data, with 
> URISyntaxException: Relative path in absolute URI





[jira] [Updated] (HIVE-6811) LOAD command does not work with relative paths on Windows

2014-04-01 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-6811:
-

Status: Patch Available  (was: Open)



[jira] [Commented] (HIVE-6811) LOAD command does not work with relative paths on Windows

2014-04-01 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957394#comment-13957394
 ] 

Jason Dere commented on HIVE-6811:
--

It looks like the LoadSemanticAnalyzer changes in HIVE-6048 may have caused this. 
When the path string on Windows is generated with Path.toUri(), "C:/path" becomes 
"/C:/path", which seems to be what allowed things to work before. Without the 
toUri() call, the path remains "C:/path" and we see the error. 
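The effect of that leading slash can be reproduced with plain java.net.URI, independent of Hadoop (a standalone sketch, not Hive code; Hadoop's Path builds its URI from scheme/authority/path components in a similar multi-argument way):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsPathDemo {
    // When a scheme is present, java.net.URI rejects a path component that
    // does not start with '/', throwing "Relative path in absolute URI".
    static boolean acceptedAsUriPath(String path) {
        try {
            new URI("file", null, path, null, null);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(acceptedAsUriPath("C:/path"));  // false: rejected as relative
        System.out.println(acceptedAsUriPath("/C:/path")); // true: leading slash makes it absolute
    }
}
```

Prepending "/" (as the toUri() call effectively did) is what keeps a drive-letter path acceptable as the path component of an absolute URI.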



[jira] [Created] (HIVE-6811) LOAD command does not work with relative paths on Windows

2014-04-01 Thread Jason Dere (JIRA)
Jason Dere created HIVE-6811:


 Summary: LOAD command does not work with relative paths on Windows
 Key: HIVE-6811
 URL: https://issues.apache.org/jira/browse/HIVE-6811
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere


qfile tests on Windows fail when trying to load data, with URISyntaxException: 
Relative path in absolute URI





Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/
---

(Updated April 2, 2014, 6:28 a.m.)


Review request for hive.


Changes
---

Addressed comments


Bugs: HIVE-6411
https://issues.apache.org/jira/browse/HIVE-6411


Repository: hive-git


Description
---

HIVE-2599 introduced using a custom object for the row key, but it forces key 
objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
If the user provides a proper Object and ObjectInspector (OI), we can replace the 
internal key and keyOI with those. 

The initial implementation is based on a factory interface.
{code}
public interface HBaseKeyFactory {
  void init(SerDeParameters parameters, Properties properties) throws SerDeException;
  ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
  LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException;
}
{code}


Diffs (updated)
-

  hbase-handler/pom.xml 132af43 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/AbstractHBaseKeyFactory.java
 PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/CompositeHBaseKeyFactory.java
 PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/DefaultHBaseKeyFactory.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
5008f15 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
PRE-CREATION 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
PRE-CREATION 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
b64590d 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
4fe1b1b 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 142bfd8 
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java fc40195 
  
hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java 
13c344b 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
PRE-CREATION 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
PRE-CREATION 
  hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
7c4fc9f 
  hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
  hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
  hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
  hbase-handler/src/test/results/positive/hbase_custom_key2.q.out PRE-CREATION 
  itests/util/pom.xml e9720df 
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
  ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
d39ee2e 
  ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 5f1329c 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java 293b74e 
  
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
 bb02bab 
  
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java 
9f35575 
  ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
  ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
  ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
  serde/src/java/org/apache/hadoop/hive/serde2/BaseStructObjectInspector.java 
PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/NullStructSerDe.java dba5e33 
  serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
  serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
1fd6853 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 3334dff 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java 
82c1263 
  serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
  
serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
 8a5386a 
  serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java 
598683f 
  serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java 
caf3517 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ColumnarStructObjectInspector.java
 7d0d91c 
  
serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/

[jira] [Updated] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6411:


Attachment: HIVE-6411.9.patch.txt

> Support more generic way of using composite key for HBaseHandler
> 
>
> Key: HIVE-6411
> URL: https://issues.apache.org/jira/browse/HIVE-6411
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-6411.1.patch.txt, HIVE-6411.2.patch.txt, 
> HIVE-6411.3.patch.txt, HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, 
> HIVE-6411.6.patch.txt, HIVE-6411.7.patch.txt, HIVE-6411.8.patch.txt, 
> HIVE-6411.9.patch.txt
>
>
> HIVE-2599 introduced using custom object for the row key. But it forces key 
> objects to extend HBaseCompositeKey, which is again extension of LazyStruct. 
> If user provides proper Object and OI, we can replace internal key and keyOI 
> with those. 
> Initial implementation is based on factory interface.
> {code}
> public interface HBaseKeyFactory {
>   void init(SerDeParameters parameters, Properties properties) throws 
> SerDeException;
>   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
>   LazyObjectBase createObject(ObjectInspector inspector) throws 
> SerDeException;
> }
> {code}





[jira] [Updated] (HIVE-6133) Support partial partition exchange

2014-04-01 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6133:


Description: 
Currently, ALTER TABLE ... EXCHANGE coerces the source and destination tables to 
have the same partition columns. But when the source table has only a subset of 
the partition columns and the provided partition spec supplies the rest to form a 
complete spec, that restriction is unnecessary.

For example, exchanging a table into a partition 
{noformat}
CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
CREATE TABLE exchange_part_test2 (f1 string);
ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH TABLE exchange_part_test2;
{noformat}

or 

exchanging partial partitions into a parent partition
{noformat}
CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr STRING);
CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING);
ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH TABLE exchange_part_test2;
{noformat}

should be possible.

  was:
Current alter exchange coerces source and destination table to have same 
partition columns. But source table has sub-set of partitions and provided 
partition spec supplements to be a complete partition spec, it need not to be 
that.

For example, 
{noformat}
CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
CREATE TABLE exchange_part_test2 (f1 string) 
ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH TABLE 
exchange_part_test2;
{noformat}

or 

{noformat}
CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
STRING);
CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING)
ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH TABLE 
exchange_part_test2;
{noformat}

can be possible.


> Support partial partition exchange
> --
>
> Key: HIVE-6133
> URL: https://issues.apache.org/jira/browse/HIVE-6133
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-6133.1.patch.txt
>
>
> Current alter exchange coerces source and destination table to have same 
> partition columns. But source table has sub-set of partitions and provided 
> partition spec supplements to be a complete partition spec, it need not to be 
> that.
> For example, table into partition 
> {noformat}
> CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
> CREATE TABLE exchange_part_test2 (f1 string) 
> ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
> TABLE exchange_part_test2;
> {noformat}
> or 
> partial partitions into parent partition
> {noformat}
> CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
> STRING);
> CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING)
> ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
> TABLE exchange_part_test2;
> {noformat}
> can be possible.





[jira] [Created] (HIVE-6810) Provide example and update docs to show use of back tick when doing SHOW GRANT

2014-04-01 Thread Udai Kiran Potluri (JIRA)
Udai Kiran Potluri created HIVE-6810:


 Summary: Provide example and update docs to show use of back tick 
when doing SHOW GRANT
 Key: HIVE-6810
 URL: https://issues.apache.org/jira/browse/HIVE-6810
 Project: Hive
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 0.12.0
Reporter: Udai Kiran Potluri


The docs at 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Authorization#LanguageManualAuthorization-ViewingGrantedPrivileges

do not show an example or mention the need to use the back tick (`) character, 
especially when identifiers contain special characters. Per HIVE-2074, all 
GRANT/REVOKE statements need back ticks when a name contains "-"; similarly for 
SHOW GRANT USER when the user id contains a ".".

For example: SHOW GRANT USER `abc.xyz` ON TABLE mock_opt; 
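Per HIVE-2074 as cited above, the back tick is needed on the GRANT side as well; a hypothetical pair of statements (user and table names are made up):
{noformat}
GRANT SELECT ON TABLE mock_opt TO USER `abc.xyz`;
SHOW GRANT USER `abc.xyz` ON TABLE mock_opt;
{noformat}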





Review Request 19903: Support bulk deleting directories for partition drop with partial spec

2014-04-01 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19903/
---

Review request for hive.


Bugs: HIVE-6809
https://issues.apache.org/jira/browse/HIVE-6809


Repository: hive-git


Description
---

On a busy Hadoop system, dropping many partitions takes much more time than 
expected. In hive-0.11.0, removing 1700 partitions via a single partial spec took 
90 minutes, which was reduced to 3 minutes when deleteData was set to false. I 
couldn't test this on a recent Hive, which has HIVE-6256, but if the 
time-consuming part is mostly removing directories, that change alone seems 
unlikely to reduce the overall processing time.
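For reference, the kind of partial-spec drop in question looks like the following (hypothetical table, partitioned by (ds, hr); the spec names only ds, so a single statement removes every matching partition and its directory):
{noformat}
ALTER TABLE page_views DROP PARTITION (ds='2013-04-05');
{noformat}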


Diffs
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
 47e94ea 
  metastore/if/hive_metastore.thrift eef1b80 
  metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h 2a1b4d7 
  metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp 9567874 
  metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp 
b18009c 
  
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 4f051af 
  metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php c79624f 
  metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote 
fdedb57 
  metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py 23679be 
  metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb 56c23e6 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
27077b4 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
664dccd 
  metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
0c2209b 
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 6a0eabe 
  metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java e0de0e0 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
 5c00aa1 
  
metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
 5025b83 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 5cb030c 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 5d5fa78 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java a73a5e0 
  ql/src/java/org/apache/hadoop/hive/ql/plan/DropTableDesc.java ba30e1f 
  ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionSpec.java PRE-CREATION 
  ql/src/test/queries/clientpositive/drop_partitions_partialspec.q PRE-CREATION 
  ql/src/test/results/clientpositive/drop_partitions_partialspec.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/19903/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Updated] (HIVE-6809) Support bulk deleting directories for partition drop with partial spec

2014-04-01 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6809:


Status: Patch Available  (was: Open)

> Support bulk deleting directories for partition drop with partial spec
> --
>
> Key: HIVE-6809
> URL: https://issues.apache.org/jira/browse/HIVE-6809
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-6809.1.patch.txt
>
>
> In busy hadoop system, dropping many of partitions takes much more time than 
> expected. In hive-0.11.0, removing 1700 partitions by single partial spec 
> took 90 minutes, which is reduced to 3 minutes when deleteData is set false. 
> I couldn't test this in recent hive, which has HIVE-6256 but if the 
> time-taking part is mostly from removing directories, it seemed not helpful 
> to reduce whole processing time.





[jira] [Updated] (HIVE-6809) Support bulk deleting directories for partition drop with partial spec

2014-04-01 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6809:


Attachment: HIVE-6809.1.patch.txt



[jira] [Created] (HIVE-6809) Support bulk deleting directories for partition drop with partial spec

2014-04-01 Thread Navis (JIRA)
Navis created HIVE-6809:
---

 Summary: Support bulk deleting directories for partition drop with 
partial spec
 Key: HIVE-6809
 URL: https://issues.apache.org/jira/browse/HIVE-6809
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis


On a busy Hadoop system, dropping many partitions takes much more time than 
expected. In hive-0.11.0, removing 1700 partitions via a single partial spec took 
90 minutes, which was reduced to 3 minutes when deleteData was set to false. I 
couldn't test this on a recent Hive, which has HIVE-6256, but if the 
time-consuming part is mostly removing directories, that change alone seems 
unlikely to reduce the overall processing time.





[jira] [Commented] (HIVE-6031) explain subquery rewrite for where clause predicates

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957324#comment-13957324
 ] 

Hive QA commented on HIVE-6031:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638134/HIVE-6031.2.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5541 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucketmapjoin6
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_parallel_orderby
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2074/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2074/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638134

> explain subquery rewrite for where clause predicates 
> -
>
> Key: HIVE-6031
> URL: https://issues.apache.org/jira/browse/HIVE-6031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-6031.1.patch, HIVE-6031.2.patch
>
>






[jira] [Updated] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6778:


Fix Version/s: 0.13.0

> ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 
> =1 predicate in partition pruner. 
> --
>
> Key: HIVE-6778
> URL: https://issues.apache.org/jira/browse/HIVE-6778
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-6778.1.patch
>
>
> select key, value, ds from pcr_foo where (ds % 2 == 1);
> ql/src/test/queries/clientpositive/pcr.q
> The test generates 1.0==1 predicate in the pruner which cannot be evaluated 
> since a double cannot be converted to int.





[jira] [Updated] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6778:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13.
Thanks Hari and Jitendra for reviewing.



[jira] [Commented] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957301#comment-13957301
 ] 

Harish Butani commented on HIVE-6778:
-

This failure is not related.
I ran the test locally and validated that it passes.



[jira] [Commented] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957263#comment-13957263
 ] 

Hive QA commented on HIVE-6778:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637977/HIVE-6778.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5539 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2073/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2073/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637977



Re: Review Request 16403: HIVE-5176: Wincompat : Changes for allowing various path compatibilities with Windows

2014-04-01 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16403/
---

(Updated April 2, 2014, 2:11 a.m.)


Review request for hive and Thejas Nair.


Changes
---

Moved some functionality out to a different patch.


Bugs: HIVE-5176
https://issues.apache.org/jira/browse/HIVE-5176


Repository: hive-git


Description
---

We need to make certain changes across the board to allow us to read/parse 
Windows paths. Some are escaping changes; some are about being strict in how we 
read paths (through URL.encode/decode, etc.).


Diffs (updated)
-

  common/src/test/org/apache/hadoop/hive/conf/TestHiveConf.java a31238b 
  ql/src/test/org/apache/hadoop/hive/ql/WindowsPathUtil.java PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java 5991aae 
  ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java 
e453f56 

Diff: https://reviews.apache.org/r/16403/diff/


Testing
---


Thanks,

Jason Dere



[jira] [Commented] (HIVE-6785) query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe

2014-04-01 Thread Tongjie Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957258#comment-13957258
 ] 

Tongjie Chen commented on HIVE-6785:


This patch involves deleting a file and adding new files (mv), and there are no 
instructions for deletes/adds with git in 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute; however, my 
patch was generated with git diff. If that does not work, I will resubmit a patch 
using svn.

https://reviews.apache.org/r/19896/

> query fails when partitioned table's table level serde is ParquetHiveSerDe 
> and partition level serde is of different SerDe
> --
>
> Key: HIVE-6785
> URL: https://issues.apache.org/jira/browse/HIVE-6785
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats, Serializers/Deserializers
>Affects Versions: 0.13.0
>Reporter: Tongjie Chen
> Attachments: HIVE-6785.1.patch.txt
>
>
> More specifically, if table contains string type columns. it will result in 
> the following exception ""Failed with exception 
> java.io.IOException:java.lang.ClassCastException: 
> parquet.hive.serde.primitive.ParquetStringInspector cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableTimestampObjectInspector"
> see also in the following parquet issue:
> https://github.com/Parquet/parquet-mr/issues/324





[jira] [Created] (HIVE-6808) sql std auth - describe table, show partitions are not being authorized

2014-04-01 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6808:
---

 Summary: sql std auth - describe table, show partitions are not 
being authorized
 Key: HIVE-6808
 URL: https://issues.apache.org/jira/browse/HIVE-6808
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair


Only users with SELECT privilege on the table should be able to do 'describe 
table' and 'show partitions'.






[jira] [Updated] (HIVE-6785) query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe

2014-04-01 Thread Tongjie Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tongjie Chen updated HIVE-6785:
---

Attachment: HIVE-6785.1.patch.txt

> query fails when partitioned table's table level serde is ParquetHiveSerDe 
> and partition level serde is of different SerDe
> --
>
> Key: HIVE-6785
> URL: https://issues.apache.org/jira/browse/HIVE-6785
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats, Serializers/Deserializers
>Affects Versions: 0.13.0
>Reporter: Tongjie Chen
> Attachments: HIVE-6785.1.patch.txt
>
>
> More specifically, if the table contains string type columns, it will result 
> in the following exception: "Failed with exception 
> java.io.IOException:java.lang.ClassCastException: 
> parquet.hive.serde.primitive.ParquetStringInspector cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableTimestampObjectInspector"
> See also the following parquet issue:
> https://github.com/Parquet/parquet-mr/issues/324





[jira] [Updated] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6807:
-

Status: Patch Available  (was: Open)

> add HCatStorer ORC test to test missing columns
> ---
>
> Key: HIVE-6807
> URL: https://issues.apache.org/jira/browse/HIVE-6807
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6807.patch
>
>






[jira] [Updated] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6807:
-

Attachment: HIVE-6807.patch

> add HCatStorer ORC test to test missing columns
> ---
>
> Key: HIVE-6807
> URL: https://issues.apache.org/jira/browse/HIVE-6807
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6807.patch
>
>






[jira] [Commented] (HIVE-4329) HCatalog should use getHiveRecordWriter rather than getRecordWriter

2014-04-01 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957201#comment-13957201
 ] 

David Chen commented on HIVE-4329:
--

I think the correct fix for this is that HCatalog should be calling the 
{{OutputFormat}}s' {{getHiveRecordWriter}} rather than {{getRecordWriter}}. 
Since the purpose of HCatalog is to provide read and write interfaces and the 
Hive Metastore's services to non-Hive clients, existing SerDes should work out 
of the box.

Fixing it this way will also allow other SerDes, such as Parquet, to work with 
HCatalog as well since the ParquetSerDe currently has the same problem.

> HCatalog should use getHiveRecordWriter rather than getRecordWriter
> ---
>
> Key: HIVE-4329
> URL: https://issues.apache.org/jira/browse/HIVE-4329
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Serializers/Deserializers
>Affects Versions: 0.10.0
> Environment: discovered in Pig, but it looks like the root cause 
> impacts all non-Hive users
>Reporter: Sean Busbey
>Assignee: David Chen
>
> Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
> with the following stacktrace:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
> cast to org.apache.hadoop.io.LongWritable
>   at 
> org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
>   at 
> org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
>   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
> {code}
> The proximal cause of this failure is that the AvroContainerOutputFormat's 
> signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
> forces a NullWritable. I'm not sure of a general fix, other than redefining 
> HiveOutputFormat to mandate a WritableComparable.
> It looks like accepting WritableComparable is what's done in the other Hive 
> OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
> be changed, since it's ignoring the key. That way fixing things so 
> FileRecordWriterContainer can always use NullWritable could get spun into a 
> different issue?
> The underlying cause for failure to write to AvroSerde tables is that 
> AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
> fixing the above will just push the failure into the placeholder RecordWriter.
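The key-type mismatch quoted above is easy to reproduce in miniature. The classes and method names below are toy stand-ins and illustrative assumptions, not the real Hadoop types; the point is only that a writer that casts its key to one type fails at runtime when the container always hands it another:

```java
// Toy stand-ins for Hadoop's Writable key types (assumptions, not the real classes).
class NullWritable {}
class LongWritable {}

public class KeyMismatchDemo {
    // Mimics AvroContainerOutputFormat's writer, whose signature effectively
    // mandates a LongWritable key.
    static String write(Object key) {
        LongWritable k = (LongWritable) key; // throws if the key is a NullWritable
        return "wrote with key " + k;
    }

    // Mimics FileRecordWriterContainer, which always passes a NullWritable key.
    public static boolean containerWriteFails() {
        try {
            write(new NullWritable());
            return false;
        } catch (ClassCastException e) {
            return true; // the same failure mode as the stack trace above
        }
    }

    public static void main(String[] args) {
        System.out.println("cast failed: " + containerWriteFails());
    }
}
```

Relaxing the writer to accept any key (or to ignore it entirely), as the description suggests for the other Hive OutputFormats, removes the cast and the failure with it.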





[jira] [Commented] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957200#comment-13957200
 ] 

Eugene Koifman commented on HIVE-6807:
--

Enable a test introduced in HIVE-6766, now that HIVE-4975 is fixed.

> add HCatStorer ORC test to test missing columns
> ---
>
> Key: HIVE-6807
> URL: https://issues.apache.org/jira/browse/HIVE-6807
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>






[jira] [Created] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-6807:


 Summary: add HCatStorer ORC test to test missing columns
 Key: HIVE-6807
 URL: https://issues.apache.org/jira/browse/HIVE-6807
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman








[jira] [Updated] (HIVE-4329) HCatalog should use getHiveRecordWriter rather than getRecordWriter

2014-04-01 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-4329:
-

Summary: HCatalog should use getHiveRecordWriter rather than 
getRecordWriter  (was: HCatalog clients can't write to AvroSerde backed tables)

> HCatalog should use getHiveRecordWriter rather than getRecordWriter
> ---
>
> Key: HIVE-4329
> URL: https://issues.apache.org/jira/browse/HIVE-4329
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Serializers/Deserializers
>Affects Versions: 0.10.0
> Environment: discovered in Pig, but it looks like the root cause 
> impacts all non-Hive users
>Reporter: Sean Busbey
>Assignee: David Chen
>
> Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
> with the following stacktrace:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
> cast to org.apache.hadoop.io.LongWritable
>   at 
> org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
>   at 
> org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
>   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
> {code}
> The proximal cause of this failure is that the AvroContainerOutputFormat's 
> signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
> forces a NullWritable. I'm not sure of a general fix, other than redefining 
> HiveOutputFormat to mandate a WritableComparable.
> It looks like accepting WritableComparable is what's done in the other Hive 
> OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
> be changed, since it's ignoring the key. That way fixing things so 
> FileRecordWriterContainer can always use NullWritable could get spun into a 
> different issue?
> The underlying cause for failure to write to AvroSerde tables is that 
> AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
> fixing the above will just push the failure into the placeholder RecordWriter.





[jira] [Work started] (HIVE-4329) HCatalog clients can't write to AvroSerde backed tables

2014-04-01 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-4329 started by David Chen.

> HCatalog clients can't write to AvroSerde backed tables
> ---
>
> Key: HIVE-4329
> URL: https://issues.apache.org/jira/browse/HIVE-4329
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Serializers/Deserializers
>Affects Versions: 0.10.0
> Environment: discovered in Pig, but it looks like the root cause 
> impacts all non-Hive users
>Reporter: Sean Busbey
>Assignee: David Chen
>
> Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
> with the following stacktrace:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
> cast to org.apache.hadoop.io.LongWritable
>   at 
> org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
>   at 
> org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
>   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
> {code}
> The proximal cause of this failure is that the AvroContainerOutputFormat's 
> signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
> forces a NullWritable. I'm not sure of a general fix, other than redefining 
> HiveOutputFormat to mandate a WritableComparable.
> It looks like accepting WritableComparable is what's done in the other Hive 
> OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
> be changed, since it's ignoring the key. That way fixing things so 
> FileRecordWriterContainer can always use NullWritable could get spun into a 
> different issue?
> The underlying cause for failure to write to AvroSerde tables is that 
> AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
> fixing the above will just push the failure into the placeholder RecordWriter.





[jira] [Updated] (HIVE-4329) HCatalog clients can't write to AvroSerde backed tables

2014-04-01 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-4329:
-

Assignee: David Chen

> HCatalog clients can't write to AvroSerde backed tables
> ---
>
> Key: HIVE-4329
> URL: https://issues.apache.org/jira/browse/HIVE-4329
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Serializers/Deserializers
>Affects Versions: 0.10.0
> Environment: discovered in Pig, but it looks like the root cause 
> impacts all non-Hive users
>Reporter: Sean Busbey
>Assignee: David Chen
>
> Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
> with the following stacktrace:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
> cast to org.apache.hadoop.io.LongWritable
>   at 
> org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
>   at 
> org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
>   at 
> org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
>   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
>   at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
> {code}
> The proximal cause of this failure is that the AvroContainerOutputFormat's 
> signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
> forces a NullWritable. I'm not sure of a general fix, other than redefining 
> HiveOutputFormat to mandate a WritableComparable.
> It looks like accepting WritableComparable is what's done in the other Hive 
> OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
> be changed, since it's ignoring the key. That way fixing things so 
> FileRecordWriterContainer can always use NullWritable could get spun into a 
> different issue?
> The underlying cause for failure to write to AvroSerde tables is that 
> AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
> fixing the above will just push the failure into the placeholder RecordWriter.





[jira] [Commented] (HIVE-6788) Abandoned opened transactions not being timed out

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957188#comment-13957188
 ] 

Hive QA commented on HIVE-6788:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638090/HIVE-6788.patch

{color:green}SUCCESS:{color} +1 5539 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2072/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2072/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638090

> Abandoned opened transactions not being timed out
> -
>
> Key: HIVE-6788
> URL: https://issues.apache.org/jira/browse/HIVE-6788
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Affects Versions: 0.13.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-6788.patch
>
>
> If a client abandons an open transaction, it is never closed.  This does not 
> cause any immediate problems (as locks are timed out), but it will eventually 
> lead to high levels of open transactions in the lists that readers need to be 
> aware of when reading tables or partitions.
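A minimal sketch of the kind of timeout reaper such a fix implies — the names, the in-memory map, and the threshold are illustrative assumptions, not Hive's actual metastore code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TxnReaperSketch {
    // Hypothetical bookkeeping: open transaction id -> last heartbeat time (ms).
    // Aborts (removes) every transaction whose heartbeat is older than timeoutMs
    // and returns the ids that were timed out.
    public static List<Long> timeOutTxns(Map<Long, Long> openTxns,
                                         long nowMs, long timeoutMs) {
        List<Long> aborted = new ArrayList<>();
        for (Map.Entry<Long, Long> e : openTxns.entrySet()) {
            if (nowMs - e.getValue() > timeoutMs) {
                aborted.add(e.getKey()); // a real reaper would abort in the metastore
            }
        }
        openTxns.keySet().removeAll(aborted);
        return aborted;
    }

    public static void main(String[] args) {
        Map<Long, Long> open = new HashMap<>();
        open.put(1L, 0L);       // abandoned: last heartbeat long ago
        open.put(2L, 90_000L);  // recently heartbeaten
        List<Long> aborted = timeOutTxns(open, 100_000L, 60_000L);
        System.out.println("aborted: " + aborted + ", still open: " + open.keySet());
    }
}
```

Run periodically, this keeps the open-transaction list from growing without bound when clients disappear.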





Re: Review Request 19754: Defines a api for streaming data into Hive using ACID support.

2014-04-01 Thread Roshan Naik

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19754/
---

(Updated April 1, 2014, 11:53 p.m.)


Review request for hive.


Changes
---

updating patch


Bugs: HIVE-5687
https://issues.apache.org/jira/browse/HIVE-5687


Repository: hive-git


Description
---

Defines an API for streaming data into Hive using ACID support.


Diffs (updated)
-

  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 
1bbe02e 
  packaging/pom.xml de9b002 
  packaging/src/main/assembly/src.xml bdaa47b 
  pom.xml 7343683 
  streaming/pom.xml PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/AbstractRecordWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/ConnectionError.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/DelimitedInputWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/HeartBeatFailure.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/HiveEndPoint.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/ImpersonationFailed.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidColumn.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidPartition.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidTable.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidTrasactionState.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/PartitionCreationFailed.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/QueryFailedException.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/RecordWriter.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/SerializationError.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StreamingConnection.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StreamingException.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StreamingIOFailure.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StrictJsonWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/TransactionBatch.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/TransactionBatchUnAvailable.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/TransactionError.java 
PRE-CREATION 
  streaming/src/test/org/apache/hive/streaming/StreamingIntegrationTester.java 
PRE-CREATION 
  streaming/src/test/org/apache/hive/streaming/TestDelimitedInputWriter.java 
PRE-CREATION 
  streaming/src/test/org/apache/hive/streaming/TestStreaming.java PRE-CREATION 
  streaming/src/test/sit PRE-CREATION 

Diff: https://reviews.apache.org/r/19754/diff/


Testing
---

Unit tests included. Also did manual testing by streaming data using Flume.


Thanks,

Roshan Naik



[jira] [Assigned] (HIVE-6626) HiveServer2 does not expand the DOWNLOADED_RESOURCES_DIR path

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-6626:
--

Assignee: Vaibhav Gumashta

> HiveServer2 does not expand the DOWNLOADED_RESOURCES_DIR path
> -
>
> Key: HIVE-6626
> URL: https://issues.apache.org/jira/browse/HIVE-6626
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.14.0
>
>
> The downloaded scratch dir is specified in HiveConf as:
> {code}
> DOWNLOADED_RESOURCES_DIR("hive.downloaded.resources.dir", 
> System.getProperty("java.io.tmpdir") + File.separator  + 
> "${hive.session.id}_resources"),
> {code}
> However, hive.session.id  does not get expanded.
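A minimal sketch of the placeholder substitution the fix implies — the helper below is illustrative, not Hive's actual configuration-expansion code:

```java
import java.util.Map;

public class VarExpansionSketch {
    // Replaces each ${key} occurrence with its value from vars;
    // unknown placeholders are left untouched.
    public static String expand(String raw, Map<String, String> vars) {
        String out = raw;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        String dir = expand("/tmp/${hive.session.id}_resources",
                            Map.of("hive.session.id", "abc-123"));
        System.out.println(dir); // /tmp/abc-123_resources
    }
}
```

Without a step like this, the literal string `${hive.session.id}_resources` ends up in the scratch-directory path, which is the symptom reported.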





[jira] [Updated] (HIVE-5687) Streaming support in Hive

2014-04-01 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-5687:
--

Attachment: Hive Streaming Ingest API for v4 patch.pdf
HIVE-5687.v4.patch

v4 patch: adds JSON writer support and tweaks to the JavaDocs.
Updated PDF document.

> Streaming support in Hive
> -
>
> Key: HIVE-5687
> URL: https://issues.apache.org/jira/browse/HIVE-5687
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: 5687-api-spec4.pdf, 5687-draft-api-spec.pdf, 
> 5687-draft-api-spec2.pdf, 5687-draft-api-spec3.pdf, HIVE-5687.patch, 
> HIVE-5687.v2.patch, HIVE-5687.v3.patch, HIVE-5687.v4.patch, Hive Streaming 
> Ingest API for v3 patch.pdf, Hive Streaming Ingest API for v4 patch.pdf
>
>
> Implement support for Streaming data into HIVE.
> - Provide a client streaming API 
> - Transaction support: Clients should be able to periodically commit a batch 
> of records atomically
> - Immediate visibility: Records should be immediately visible to queries on 
> commit
> - Should not overload HDFS with too many small files
> Use Cases:
>  - Streaming logs into HIVE via Flume
>  - Streaming results of computations from Storm





[jira] [Assigned] (HIVE-5376) Hive does not honor type for partition columns when altering column type

2014-04-01 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan reassigned HIVE-5376:
---

Assignee: Hari Sankar Sivarama Subramaniyan  (was: Vikram Dixit K)

> Hive does not honor type for partition columns when altering column type
> 
>
> Key: HIVE-5376
> URL: https://issues.apache.org/jira/browse/HIVE-5376
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Reporter: Sergey Shelukhin
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> Follow-up for HIVE-5297. If a partition column of type string is changed to 
> int, the data is not verified. The values for partition columns are all in 
> the metastore db, so it's easy to check and fail the type change.
> alter_partition_coltype.q (or some other test?) checks this behavior right 
> now.





[jira] [Updated] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6804:


Status: Patch Available  (was: Open)

> sql std auth - granting existing table privilege to owner should result in 
> error
> 
>
> Key: HIVE-6804
> URL: https://issues.apache.org/jira/browse/HIVE-6804
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Deepesh Khandelwal
>Assignee: Thejas M Nair
> Attachments: HIVE-6804.1.patch
>
>
> The table owner gets all privileges on the table at the time of table creation.
> But granting some or all of those privileges using a grant statement still 
> works, resulting in duplicate privileges.





[jira] [Commented] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957101#comment-13957101
 ] 

Thejas M Nair commented on HIVE-6068:
-

+1

> HiveServer2 client on windows does not handle the non-ascii characters 
> properly
> ---
>
> Key: HIVE-6068
> URL: https://issues.apache.org/jira/browse/HIVE-6068
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.13.0
> Environment: Windows 
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch
>
>
> When running a select query against a table which contains rows with 
> non-ascii characters, the HiveServer2 Beeline client returns them 
> incorrectly. Example:
> {noformat}
> 738;Garçu, Le (1995);Drama
> 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
> {noformat}
> come out from a HiveServer2 beeline client as:
> {noformat}
> '738' 'Gar?u, Le (1995)'  'Drama'
> '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
> {noformat}
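The '?' characters are the classic signature of encoding text through a charset that cannot represent the original characters: Java's encoder substitutes '?' for every unmappable character. A minimal reproduction of the corruption, independent of the actual HiveServer2 code path:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    // Round-trips a string through US-ASCII, which cannot represent ç or ô;
    // each unmappable character is replaced with '?'.
    public static String throughAscii(String s) {
        byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
        return new String(bytes, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        System.out.println(throughAscii("Garçu, Le (1995)")); // Gar?u, Le (1995)
    }
}
```

The fix is to carry an encoding that covers the data (e.g. UTF-8) end to end instead of falling back to a lossy platform default.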





Re: Review Request 19893: HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19893/#review39206
---

Ship it!


Ship It!

- Thejas Nair


On April 1, 2014, 10:07 p.m., Vaibhav Gumashta wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/19893/
> ---
> 
> (Updated April 1, 2014, 10:07 p.m.)
> 
> 
> Review request for hive and Thejas Nair.
> 
> 
> Bugs: HIVE-6068
> https://issues.apache.org/jira/browse/HIVE-6068
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> https://issues.apache.org/jira/browse/HIVE-6068
> 
> 
> Diffs
> -
> 
>   data/files/non_ascii_tbl.txt PRE-CREATION 
>   itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java 
> 0163788 
>   service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 
> 6c604ce 
> 
> Diff: https://reviews.apache.org/r/19893/diff/
> 
> 
> Testing
> ---
> 
> New test added to TestJdbcDriver2
> 
> 
> Thanks,
> 
> Vaibhav Gumashta
> 
>



[jira] [Commented] (HIVE-6133) Support partial partition exchange

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957095#comment-13957095
 ] 

Hive QA commented on HIVE-6133:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638030/HIVE-6133.1.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5520 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2070/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2070/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638030

> Support partial partition exchange
> --
>
> Key: HIVE-6133
> URL: https://issues.apache.org/jira/browse/HIVE-6133
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-6133.1.patch.txt
>
>
> Currently, ALTER TABLE ... EXCHANGE PARTITION requires the source and 
> destination tables to have the same partition columns. But when the source 
> table has only a subset of the partition columns and the provided partition 
> spec supplies the rest to form a complete spec, that requirement is 
> unnecessary.
> For example, 
> {noformat}
> CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
> CREATE TABLE exchange_part_test2 (f1 string);
> ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
> TABLE exchange_part_test2;
> {noformat}
> or 
> {noformat}
> CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
> STRING);
> CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING);
> ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
> TABLE exchange_part_test2;
> {noformat}
> can be possible.





[jira] [Commented] (HIVE-5814) Add DATE, TIMESTAMP, DECIMAL, CHAR, VARCHAR types support in HCat

2014-04-01 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957089#comment-13957089
 ] 

Eugene Koifman commented on HIVE-5814:
--

[~leftylev] The feature is complete but the doc changes are still needed.

> Add DATE, TIMESTAMP, DECIMAL, CHAR, VARCHAR types support in HCat
> -
>
> Key: HIVE-5814
> URL: https://issues.apache.org/jira/browse/HIVE-5814
> Project: Hive
>  Issue Type: New Feature
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.13.0
>
> Attachments: HCat-Pig Type Mapping Hive 0.13.pdf, HIVE-5814.2.patch, 
> HIVE-5814.3.patch, HIVE-5814.4.patch, HIVE-5814.5.patch
>
>
> Hive 0.12 added support for new data types.  Pig 0.12 added some as well.  
> HCat should handle these as well. Also note that CHAR was added recently.
> Also allow the user to specify a parameter in Pig, like so: HCatStorer('','', 
> '-onOutOfRangeValue Throw'), to control what happens when Pig's value is out 
> of range for the target Hive column.  Valid values for the option are Throw 
> and Null.  Throw makes the runtime raise an exception; Null, which is the 
> default, means NULL is written to the target column and a message to that 
> effect is emitted to the log.  Only one message per column/data type is sent 
> to the log.
> See attached HCat-Pig Type Mapping Hive 0.13.pdf for exact mappings.
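The Throw/Null semantics described above can be modeled with a small range check — a simplified illustration of the option, not HCatStorer's actual implementation:

```java
public class OutOfRangeSketch {
    public enum OnOutOfRange { THROW, NULL }

    // Models writing a Pig long into a Hive INT column: in-range values are
    // narrowed, out-of-range values either raise or become SQL NULL.
    public static Integer toHiveInt(long v, OnOutOfRange policy) {
        if (v < Integer.MIN_VALUE || v > Integer.MAX_VALUE) {
            if (policy == OnOutOfRange.THROW) {
                throw new IllegalArgumentException("out of range for INT: " + v);
            }
            return null; // NULL policy: value becomes NULL (HCat would also log once per column/type)
        }
        return (int) v;
    }

    public static void main(String[] args) {
        System.out.println(toHiveInt(42L, OnOutOfRange.NULL));      // 42
        System.out.println(toHiveInt(1L << 40, OnOutOfRange.NULL)); // null
    }
}
```

The same pattern generalizes to the other narrowing conversions in the attached mapping table.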





[jira] [Commented] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957086#comment-13957086
 ] 

Thejas M Nair commented on HIVE-6780:
-

+1

> Set tez credential file property along with MR conf property for Tez jobs
> -
>
> Key: HIVE-6780
> URL: https://issues.apache.org/jira/browse/HIVE-6780
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6780.2.patch, HIVE-6780.patch
>
>
> WebHCat should set the additional property "tez.credentials.path" to the 
> same value as the MapReduce property.
> WebHCat should always proactively set this tez.credentials.path property to 
> the same value, and in the same cases, as the MR equivalent property.
> NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957070#comment-13957070
 ] 

Vaibhav Gumashta commented on HIVE-6068:


[~thejas] Thanks for the review. New patch incorporates the feedback.

> HiveServer2 client on windows does not handle the non-ascii characters 
> properly
> ---
>
> Key: HIVE-6068
> URL: https://issues.apache.org/jira/browse/HIVE-6068
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.13.0
> Environment: Windows 
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch
>
>
> When running a select query against a table that contains rows with 
> non-ascii characters, the HiveServer2 Beeline client returns them incorrectly. Example:
> {noformat}
> 738;Garçu, Le (1995);Drama
> 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
> {noformat}
> come out from a HiveServer2 beeline client as:
> {noformat}
> '738' 'Gar?u, Le (1995)'  'Drama'
> '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
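
The class of bug described in HIVE-6068 above can be illustrated with a standalone sketch (a hypothetical simplification, not HiveServer2/Beeline code): UTF-8 bytes decoded with the wrong charset, such as a Windows console default, mangle multi-byte characters, while an explicit UTF-8 decode preserves them.

```java
// Sketch: the same UTF-8 bytes decoded with a wrong charset vs. UTF-8.
import java.nio.charset.StandardCharsets;

public class CharsetSketch {
    public static void main(String[] args) {
        byte[] utf8 = "Gar\u00e7u, Le (1995)".getBytes(StandardCharsets.UTF_8);

        // US-ASCII stands in for a wrong default charset; the two UTF-8
        // bytes of the cedilla character decode as replacement characters.
        String wrong = new String(utf8, StandardCharsets.US_ASCII);
        String right = new String(utf8, StandardCharsets.UTF_8);

        System.out.println(wrong);
        System.out.println(right); // Gar\u00e7u, Le (1995), intact
    }
}
```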


[jira] [Updated] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6068:
---

Status: Patch Available  (was: Open)

> HiveServer2 client on windows does not handle the non-ascii characters 
> properly
> ---
>
> Key: HIVE-6068
> URL: https://issues.apache.org/jira/browse/HIVE-6068
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.13.0
> Environment: Windows 
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch
>
>
> When running a select query against a table that contains rows with 
> non-ascii characters, the HiveServer2 Beeline client returns them incorrectly. Example:
> {noformat}
> 738;Garçu, Le (1995);Drama
> 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
> {noformat}
> come out from a HiveServer2 beeline client as:
> {noformat}
> '738' 'Gar?u, Le (1995)'  'Drama'
> '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6068:
---

Status: Open  (was: Patch Available)

> HiveServer2 client on windows does not handle the non-ascii characters 
> properly
> ---
>
> Key: HIVE-6068
> URL: https://issues.apache.org/jira/browse/HIVE-6068
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.13.0
> Environment: Windows 
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch
>
>
> When running a select query against a table that contains rows with 
> non-ascii characters, the HiveServer2 Beeline client returns them incorrectly. Example:
> {noformat}
> 738;Garçu, Le (1995);Drama
> 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
> {noformat}
> come out from a HiveServer2 beeline client as:
> {noformat}
> '738' 'Gar?u, Le (1995)'  'Drama'
> '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6068:
---

Attachment: HIVE-6068.2.patch

> HiveServer2 client on windows does not handle the non-ascii characters 
> properly
> ---
>
> Key: HIVE-6068
> URL: https://issues.apache.org/jira/browse/HIVE-6068
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.13.0
> Environment: Windows 
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch
>
>
> When running a select query against a table that contains rows with 
> non-ascii characters, the HiveServer2 Beeline client returns them incorrectly. Example:
> {noformat}
> 738;Garçu, Le (1995);Drama
> 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
> {noformat}
> come out from a HiveServer2 beeline client as:
> {noformat}
> '738' 'Gar?u, Le (1995)'  'Drama'
> '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19893: HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19893/
---

Review request for hive and Thejas Nair.


Bugs: HIVE-6068
https://issues.apache.org/jira/browse/HIVE-6068


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-6068


Diffs
-

  data/files/non_ascii_tbl.txt PRE-CREATION 
  itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java 
0163788 
  service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 
6c604ce 

Diff: https://reviews.apache.org/r/19893/diff/


Testing
---

New test added to TestJdbcDriver2


Thanks,

Vaibhav Gumashta



[jira] [Updated] (HIVE-6796) Create/drop roles is case-sensitive whereas 'set role' is case insensitive

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6796:
---

Status: Patch Available  (was: Open)

> Create/drop roles is case-sensitive whereas 'set role' is case insensitive
> --
>
> Key: HIVE-6796
> URL: https://issues.apache.org/jira/browse/HIVE-6796
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepesh Khandelwal
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-6796.patch
>
>
> Create/drop role operations should be case insensitive.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
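
The intent of HIVE-6796 above — making CREATE/DROP ROLE match SET ROLE's case handling — can be sketched by normalizing role names once at the boundary (a hypothetical simplification; the actual patch touches the metastore and authorization-plugin classes listed in the review request):

```java
// Sketch: canonicalize role names so create/drop/set all compare
// case-insensitively. Not Hive's actual role store.
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class RoleNameSketch {
    private final Set<String> roles = new HashSet<>();

    // One canonical form used by every role operation.
    static String canonical(String role) {
        return role.toLowerCase(Locale.ROOT);
    }

    void createRole(String name) { roles.add(canonical(name)); }
    boolean dropRole(String name) { return roles.remove(canonical(name)); }

    public static void main(String[] args) {
        RoleNameSketch r = new RoleNameSketch();
        r.createRole("Analysts");
        // Dropping with different case now finds the same role.
        System.out.println(r.dropRole("ANALYSTS")); // true
    }
}
```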


[jira] [Updated] (HIVE-6796) Create/drop roles is case-sensitive whereas 'set role' is case insensitive

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6796:
---

Attachment: HIVE-6796.patch

> Create/drop roles is case-sensitive whereas 'set role' is case insensitive
> --
>
> Key: HIVE-6796
> URL: https://issues.apache.org/jira/browse/HIVE-6796
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepesh Khandelwal
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-6796.patch
>
>
> Create/drop role operations should be case insensitive.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19889: Create/drop roles is case-sensitive whereas 'set role' is case insensitive

2014-04-01 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19889/
---

Review request for hive and Thejas Nair.


Bugs: HIVE-6796
https://issues.apache.org/jira/browse/HIVE-6796


Repository: hive-git


Description
---

Create/drop roles is case-sensitive whereas 'set role' is case insensitive


Diffs
-

  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
27077b4 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java
 f4dd97b 
  ql/src/java/org/apache/hadoop/hive/ql/plan/GrantRevokeRoleDDL.java d8488a7 
  ql/src/java/org/apache/hadoop/hive/ql/plan/PrincipalDesc.java 7dc0ded 
  ql/src/java/org/apache/hadoop/hive/ql/plan/RoleDDLDesc.java b4da3d1 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrincipal.java
 62b8994 
  ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 
eda2146 
  ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out 27a6822 
  ql/src/test/results/clientnegative/authorization_drop_role_no_admin.q.out 
c03876d 
  ql/src/test/results/clientnegative/authorization_fail_7.q.out 69d 
  ql/src/test/results/clientnegative/authorization_priv_current_role_neg.q.out 
7f983ba 
  ql/src/test/results/clientnegative/authorization_public_create.q.out bccdc53 
  ql/src/test/results/clientnegative/authorization_public_drop.q.out 14f6b3a 
  ql/src/test/results/clientnegative/authorization_role_grant.q.out 0f88444 
  ql/src/test/results/clientnegative/authorization_rolehierarchy_privs.q.out 
7268370 
  ql/src/test/results/clientnegative/authorize_grant_public.q.out dae4331 
  ql/src/test/results/clientnegative/authorize_revoke_public.q.out cff88ca 
  ql/src/test/results/clientpositive/authorization_1.q.out 1c52151 
  ql/src/test/results/clientpositive/authorization_1_sql_std.q.out 3e39801 
  ql/src/test/results/clientpositive/authorization_5.q.out 3353adf 
  ql/src/test/results/clientpositive/authorization_9.q.out 3ec988c 
  ql/src/test/results/clientpositive/authorization_admin_almighty1.q.out 
df0d5c4 
  ql/src/test/results/clientpositive/authorization_role_grant1.q.out 305dd9d 
  ql/src/test/results/clientpositive/authorization_role_grant2.q.out f294311 
  ql/src/test/results/clientpositive/authorization_set_show_current_role.q.out 
d5fbc48 
  ql/src/test/results/clientpositive/authorization_view_sqlstd.q.out b431c35 

Diff: https://reviews.apache.org/r/19889/diff/


Testing
---

Updated existing test cases.


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6780:
-

Attachment: HIVE-6780.2.patch

rebased and addressed Thejas' comments

> Set tez credential file property along with MR conf property for Tez jobs
> -
>
> Key: HIVE-6780
> URL: https://issues.apache.org/jira/browse/HIVE-6780
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6780.2.patch, HIVE-6780.patch
>
>
> WebHCat should set the additional property "tez.credentials.path" to the 
> same value as the MapReduce property.
> WebHCat should always proactively set this tez.credentials.path property to 
> the same value, and in the same cases, as the MR equivalent 
> property.
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6780:
-

Status: Open  (was: Patch Available)

> Set tez credential file property along with MR conf property for Tez jobs
> -
>
> Key: HIVE-6780
> URL: https://issues.apache.org/jira/browse/HIVE-6780
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6780.2.patch, HIVE-6780.patch
>
>
> WebHCat should set the additional property "tez.credentials.path" to the 
> same value as the MapReduce property.
> WebHCat should always proactively set this tez.credentials.path property to 
> the same value, and in the same cases, as the MR equivalent 
> property.
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6780:
-

Status: Patch Available  (was: Open)

> Set tez credential file property along with MR conf property for Tez jobs
> -
>
> Key: HIVE-6780
> URL: https://issues.apache.org/jira/browse/HIVE-6780
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-6780.2.patch, HIVE-6780.patch
>
>
> WebHCat should set the additional property "tez.credentials.path" to the 
> same value as the MapReduce property.
> WebHCat should always proactively set this tez.credentials.path property to 
> the same value, and in the same cases, as the MR equivalent 
> property.
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6031) explain subquery rewrite for where clause predicates

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6031:


Status: Patch Available  (was: Open)

> explain subquery rewrite for where clause predicates 
> -
>
> Key: HIVE-6031
> URL: https://issues.apache.org/jira/browse/HIVE-6031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-6031.1.patch, HIVE-6031.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6031) explain subquery rewrite for where clause predicates

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6031:


Attachment: HIVE-6031.2.patch

> explain subquery rewrite for where clause predicates 
> -
>
> Key: HIVE-6031
> URL: https://issues.apache.org/jira/browse/HIVE-6031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-6031.1.patch, HIVE-6031.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6797) Add protection against divide by zero in stats annotation

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6797:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13
thanks Prasanth.

> Add protection against divide by zero in stats annotation
> -
>
> Key: HIVE-6797
> URL: https://issues.apache.org/jira/browse/HIVE-6797
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor, Statistics
>Affects Versions: 0.13.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Fix For: 0.13.0
>
> Attachments: HIVE-6797.1.patch, HIVE-6797.2.patch
>
>
> In stats annotation, the denominator computation in join operator is not 
> protected for divide by zero exception. It will be an issue when NDV (count 
> distinct) updated by updateStats() becomes 0. This patch adds protection in 
> updateStats() method to prevent divide-by-zero in downstream operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
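
The guard described in HIVE-6797 above can be sketched as follows (hypothetical names: `safeNdv` and `joinDenominator` are illustrative stand-ins, and the real change lives in Hive's stats-annotation code):

```java
// Sketch: clamp a distinct-value count (NDV) to at least 1 so downstream
// denominators in join-cardinality estimates never divide by zero.
public class NdvGuardSketch {

    static long safeNdv(long ndv) {
        return Math.max(ndv, 1); // never let NDV drop to 0
    }

    static long joinDenominator(long ndvLeft, long ndvRight) {
        return Math.max(safeNdv(ndvLeft), safeNdv(ndvRight));
    }

    public static void main(String[] args) {
        long rows = 1000;
        // Without the guard this would be 1000 / 0 -> ArithmeticException.
        System.out.println(rows / joinDenominator(0, 0)); // 1000
    }
}
```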


Re: Review Request 19880: HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread dilli dorai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19880/
---

(Updated April 1, 2014, 9:29 p.m.)


Review request for hive, Thejas Nair and Vaibhav Gumashta.


Changes
---

change the log level for log message from info to debug


Bugs: HIVE-6799
https://issues.apache.org/jira/browse/HIVE-6799


Repository: hive-git


Description
---

see hive jira https://issues.apache.org/jira/browse/HIVE-6799


Diffs (updated)
-

  service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8f4822 

Diff: https://reviews.apache.org/r/19880/diff/


Testing
---

Before the patch

The HiveServer2 log file reported an exception with the message:
Failed to validate proxy privilage of knox/hdps.example.com for sam

The intermediary service with kerberos principal name knox/hdps.example.com was 
not able to proxy user sam.

After the patch:
The HiveServer2 log no longer reports the exception.

The intermediary service with kerberos principal name knox/hdps.example.com was 
able to proxy user sam and create table as sam.


Thanks,

dilli dorai



[jira] [Updated] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6799:
-

Attachment: HIVE-6799.2.patch

Cumulative patch; it also changes the log level of the message from info to debug.

> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
> Attachments: HIVE-6799.1.patch, HIVE-6799.2.patch, HIVE-6799.patch
>
>
> HiveServer2 does not map the kerberos name of the authenticated principal to 
> a local name.
> Due to this, I get an error like the following in the HiveServer log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml
>   
> hadoop.proxyuser.knox.groups
> users
>   
>   
> hadoop.proxyuser.knox.hosts
> *
>   



--
This message was sent by Atlassian JIRA
(v6.2#6252)
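
What "map kerberos name to local name" in HIVE-6799 means can be sketched with the default mapping rule, where primary/instance@REALM reduces to primary (a hypothetical simplification; the real patch would rely on Hadoop's auth_to_local handling rather than this string slicing):

```java
// Sketch: reduce a kerberos principal to its short/local name before the
// hadoop.proxyuser.* check, so knox/hdps.example.com@EXAMPLE.COM is
// checked as "knox". Not Hive's actual HiveAuthFactory code.
public class PrincipalSketch {

    // primary[/instance][@REALM] -> primary
    static String shortName(String principal) {
        int at = principal.indexOf('@');
        String withoutRealm = (at >= 0) ? principal.substring(0, at) : principal;
        int slash = withoutRealm.indexOf('/');
        return (slash >= 0) ? withoutRealm.substring(0, slash) : withoutRealm;
    }

    public static void main(String[] args) {
        System.out.println(shortName("knox/hdps.example.com@EXAMPLE.COM")); // knox
        System.out.println(shortName("sam@EXAMPLE.COM"));                   // sam
    }
}
```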


[jira] [Commented] (HIVE-6797) Add protection against divide by zero in stats annotation

2014-04-01 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957031#comment-13957031
 ] 

Prasanth J commented on HIVE-6797:
--

The failures are unrelated to this patch.

> Add protection against divide by zero in stats annotation
> -
>
> Key: HIVE-6797
> URL: https://issues.apache.org/jira/browse/HIVE-6797
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor, Statistics
>Affects Versions: 0.13.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Fix For: 0.13.0
>
> Attachments: HIVE-6797.1.patch, HIVE-6797.2.patch
>
>
> In stats annotation, the denominator computation in join operator is not 
> protected for divide by zero exception. It will be an issue when NDV (count 
> distinct) updated by updateStats() becomes 0. This patch adds protection in 
> updateStats() method to prevent divide-by-zero in downstream operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6786) Off by one error in ORC PPD

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6786:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13

> Off by one error in ORC PPD 
> 
>
> Key: HIVE-6786
> URL: https://issues.apache.org/jira/browse/HIVE-6786
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0
>Reporter: Gopal V
>Assignee: Prasanth J
>Priority: Critical
> Fix For: 0.13.0
>
> Attachments: HIVE-6786.1.patch
>
>
> Turning on ORC PPD makes split computation fail for a 10TB benchmark.
> Narrowed down to the following code fragment
> https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757
> {code}
> includeStripe[i] = (i > stripeStats.size()) ||
> isStripeSatisfyPredicate(stripeStats.get(i), sarg,
>  filterColumns);
> {code}
> I would guess that should be a >=, but [~prasanth_j], can you comment if that 
> is the right fix?



--
This message was sent by Atlassian JIRA
(v6.2#6252)
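
The off-by-one in HIVE-6786 above can be sketched outside of OrcInputFormat as follows (a hypothetical simplification: `satisfiesPredicate` stands in for isStripeSatisfyPredicate, and the point is that the guard must be `>=` so an index equal to stripeStats.size() never reaches stripeStats.get(i)):

```java
// Sketch: with '>' the last stripe (i == stripeStats.size()) would call
// stripeStats.get(i) and throw IndexOutOfBoundsException; '>=' includes
// such stripes defensively instead.
import java.util.Arrays;
import java.util.List;

public class StripeFilterSketch {
    // Stand-in predicate check; always prunes in this sketch.
    static boolean satisfiesPredicate(Object stats) { return false; }

    static boolean[] pickStripes(int stripeCount, List<Object> stripeStats) {
        boolean[] includeStripe = new boolean[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            includeStripe[i] = (i >= stripeStats.size())
                || satisfiesPredicate(stripeStats.get(i));
        }
        return includeStripe;
    }

    public static void main(String[] args) {
        // 3 stripes but stats for only 2: the last stripe is still included.
        boolean[] picked = pickStripes(3, Arrays.asList(new Object(), new Object()));
        System.out.println(Arrays.toString(picked)); // [false, false, true]
    }
}
```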


[jira] [Updated] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6789:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to 0.13 branch and trunk.
Thanks for the contribution Vaibhav!


> HiveStatement client transport lock should unlock in finally block.
> ---
>
> Key: HIVE-6789
> URL: https://issues.apache.org/jira/browse/HIVE-6789
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6789.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6766:


   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk and 0.13 branch.
Thanks for the contribution Eugene, and for the review Sushanth!


> HCatLoader always returns Char datatype with maxlength(255)  when table 
> format is ORC
> -
>
> Key: HIVE-6766
> URL: https://issues.apache.org/jira/browse/HIVE-6766
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Fix For: 0.13.0
>
> Attachments: HIVE-6766.1.patch, HIVE-6766.patch
>
>
> The attached patch contains
> org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar(),
> which shows that a char(5) value written to a Hive (ORC) table using 
> HCatStorer comes back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6799:
-

Attachment: HIVE-6799.1.patch

Patch to resolve the issue.
There is no difference between the previously attached HIVE-6799.patch and the 
current HIVE-6799.1.patch.
The current patch is attached just to keep the automated precommit process 
happy; I am not sure whether precommit would handle a patch name without the 
.1 suffix correctly.

> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
> Attachments: HIVE-6799.1.patch, HIVE-6799.patch
>
>
> HiveServer2 does not map the kerberos name of the authenticated principal to 
> a local name.
> Due to this, I get an error like the following in the HiveServer log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml
>   
> hadoop.proxyuser.knox.groups
> users
>   
>   
> hadoop.proxyuser.knox.hosts
> *
>   



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956999#comment-13956999
 ] 

Harish Butani commented on HIVE-6766:
-

+1 for 0.13

> HCatLoader always returns Char datatype with maxlength(255)  when table 
> format is ORC
> -
>
> Key: HIVE-6766
> URL: https://issues.apache.org/jira/browse/HIVE-6766
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-6766.1.patch, HIVE-6766.patch
>
>
> The attached patch contains
> org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar(),
> which shows that a char(5) value written to a Hive (ORC) table using 
> HCatStorer comes back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956996#comment-13956996
 ] 

Vaibhav Gumashta commented on HIVE-6799:


+1 (non-binding). Patch looks good.

> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
> Attachments: HIVE-6799.patch
>
>
> HiveServer2 does not map the kerberos name of the authenticated principal to 
> a local name.
> Due to this, I get an error like the following in the HiveServer log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml
>   
> hadoop.proxyuser.knox.groups
> users
>   
>   
> hadoop.proxyuser.knox.hosts
> *
>   



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19880: HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19880/#review39202
---

Ship it!


Ship It!

- Vaibhav Gumashta


On April 1, 2014, 8:47 p.m., dilli dorai wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/19880/
> ---
> 
> (Updated April 1, 2014, 8:47 p.m.)
> 
> 
> Review request for hive, Thejas Nair and Vaibhav Gumashta.
> 
> 
> Bugs: HIVE-6799
> https://issues.apache.org/jira/browse/HIVE-6799
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> see hive jira https://issues.apache.org/jira/browse/HIVE-6799
> 
> 
> Diffs
> -
> 
>   service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8f4822 
> 
> Diff: https://reviews.apache.org/r/19880/diff/
> 
> 
> Testing
> ---
> 
> Before the patch
> 
> The HiveServer2 log file reported an exception with the message:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> 
> The intermediary service with kerberos principal name knox/hdps.example.com 
> was not able to proxy user sam.
> 
> After the patch:
> The HiveServer2 log no longer reports the exception.
> 
> The intermediary service with kerberos principal name knox/hdps.example.com 
> was able to proxy user sam and create table as sam.
> 
> 
> Thanks,
> 
> dilli dorai
> 
>



[jira] [Commented] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956995#comment-13956995
 ] 

Dilli Arumugam commented on HIVE-6799:
--

Testing done for the patch

Before the patch

The HiveServer2 log file reported an exception with the message:
Failed to validate proxy privilage of knox/hdps.example.com for sam

The intermediary service with kerberos principal name knox/hdps.example.com was 
not able to proxy user sam.

After the patch:
The HiveServer2 log no longer reports the exception.

The intermediary service with kerberos principal name knox/hdps.example.com was 
able to proxy user sam and create table as sam.

> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
> Attachments: HIVE-6799.patch
>
>
> HiveServer2 does not map the kerberos name of the authenticated principal to 
> a local name.
> Due to this, I get an error like the following in the HiveServer log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml
>   
> hadoop.proxyuser.knox.groups
> users
>   
>   
> hadoop.proxyuser.knox.hosts
> *
>   



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19880: HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread dilli dorai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19880/
---

Review request for hive, Thejas Nair and Vaibhav Gumashta.


Bugs: HIVE-6799
https://issues.apache.org/jira/browse/HIVE-6799


Repository: hive-git


Description
---

see hive jira https://issues.apache.org/jira/browse/HIVE-6799


Diffs
-

  service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8f4822 

Diff: https://reviews.apache.org/r/19880/diff/


Testing
---

Before the patch

The HiveServer2 log file reported an exception with the message:
Failed to validate proxy privilage of knox/hdps.example.com for sam

The intermediary service with kerberos principal name knox/hdps.example.com was 
not able to proxy user sam.

After the patch:
The HiveServer2 log no longer reports the exception.

The intermediary service with kerberos principal name knox/hdps.example.com was 
able to proxy user sam and create table as sam.


Thanks,

dilli dorai



[jira] [Updated] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6799:
-

Attachment: HIVE-6799.patch

Patch to resolve the issue



> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
> Attachments: HIVE-6799.patch
>
>
> HiveServer2 does not map the kerberos name of the authenticated principal to 
> a local name.
> Due to this, I get an error like the following in the HiveServer log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml
>   
> hadoop.proxyuser.knox.groups
> users
>   
>   
> hadoop.proxyuser.knox.hosts
> *
>   



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-6799 started by Dilli Arumugam.

> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
>
> HiveServer2 does not map the kerberos name of the authenticated principal to 
> a local name.
> Due to this, I get an error like the following in the HiveServer log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml:
> <property>
>   <name>hadoop.proxyuser.knox.groups</name>
>   <value>users</value>
> </property>
> <property>
>   <name>hadoop.proxyuser.knox.hosts</name>
>   <value>*</value>
> </property>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956989#comment-13956989
 ] 

Hive QA commented on HIVE-5998:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638055/HIVE-5998.12.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5514 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_bucketed_table
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2069/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2069/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638055

> Add vectorized reader for Parquet files
> ---
>
> Key: HIVE-5998
> URL: https://issues.apache.org/jira/browse/HIVE-5998
> Project: Hive
>  Issue Type: Sub-task
>  Components: Serializers/Deserializers, Vectorization
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
>  Labels: Parquet, vectorization
> Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
> HIVE-5998.11.patch, HIVE-5998.12.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, 
> HIVE-5998.4.patch, HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, 
> HIVE-5998.8.patch, HIVE-5998.9.patch
>
>
> HIVE-5783 is adding native Parquet support in Hive. As Parquet is a columnar 
> format, it makes sense to provide a vectorized reader, similar to how RC and 
> ORC formats have, to benefit from vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
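For context on why a columnar format pairs well with the vectorized execution engine, here is a minimal self-contained sketch of batch-at-a-time processing. The batch size and names are hypothetical, not Hive's actual VectorizedRowBatch API.

```java
// Illustrative sketch: a vectorized reader fills a column-oriented batch
// (e.g. 1024 values per column) per call, and operators run tight primitive
// loops over it instead of materializing one row object at a time.
public class VectorizedSumSketch {
    static final int BATCH_SIZE = 1024;

    /** Sum one long column over the populated rows of a batch. */
    static long sumBatch(long[] column, int numRows) {
        long total = 0;
        for (int i = 0; i < numRows; i++) {
            total += column[i]; // JIT-friendly, cache-friendly inner loop
        }
        return total;
    }

    public static void main(String[] args) {
        long[] col = new long[BATCH_SIZE];
        for (int i = 0; i < 10; i++) {
            col[i] = i + 1; // populate 1..10; the rest of the batch is unused
        }
        System.out.println(sumBatch(col, 10)); // 55
    }
}
```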


[jira] [Updated] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6778:


Status: Patch Available  (was: Open)

> ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 
> =1 predicate in partition pruner. 
> --
>
> Key: HIVE-6778
> URL: https://issues.apache.org/jira/browse/HIVE-6778
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Harish Butani
> Attachments: HIVE-6778.1.patch
>
>
> select key, value, ds from pcr_foo where (ds % 2 == 1);
> ql/src/test/queries/clientpositive/pcr.q
> The test generates a 1.0==1 predicate in the pruner, which cannot be evaluated 
> since a double cannot be converted to an int.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5376) Hive does not honor type for partition columns when altering column type

2014-04-01 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956905#comment-13956905
 ] 

Vikram Dixit K commented on HIVE-5376:
--

[~hsubramaniyan] I am not currently working on it. Please go ahead and assign 
it to yourself if you are working on it.

> Hive does not honor type for partition columns when altering column type
> 
>
> Key: HIVE-5376
> URL: https://issues.apache.org/jira/browse/HIVE-5376
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Reporter: Sergey Shelukhin
>Assignee: Vikram Dixit K
>
> Followup for HIVE-5297. If partition column of type string is changed to int, 
> the data is not verified. The values for partition columns are all in 
> metastore db, so it's easy to check and fail the type change.
> alter_partition_coltype.q (or some other test?) checks this behavior right 
> now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956869#comment-13956869
 ] 

Jitendra Nath Pandey commented on HIVE-6778:


+1

> ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 
> =1 predicate in partition pruner. 
> --
>
> Key: HIVE-6778
> URL: https://issues.apache.org/jira/browse/HIVE-6778
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Harish Butani
> Attachments: HIVE-6778.1.patch
>
>
> select key, value, ds from pcr_foo where (ds % 2 == 1);
> ql/src/test/queries/clientpositive/pcr.q
> The test generates a 1.0==1 predicate in the pruner, which cannot be evaluated 
> since a double cannot be converted to an int.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6394) Implement Timestmap in ParquetSerde

2014-04-01 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6394:


Assignee: Szehon Ho

I'll take a look at this issue; the Parquet community has reached a decision on 
the data type to use.

[https://github.com/Parquet/parquet-mr/issues/218|https://github.com/Parquet/parquet-mr/issues/218]

> Implement Timestmap in ParquetSerde
> ---
>
> Key: HIVE-6394
> URL: https://issues.apache.org/jira/browse/HIVE-6394
> Project: Hive
>  Issue Type: Sub-task
>  Components: Serializers/Deserializers
>Reporter: Jarek Jarcec Cecho
>Assignee: Szehon Ho
>  Labels: Parquet
>
> This JIRA is to implement timestamp support in Parquet SerDe.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Swarnim Kulkarni

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/#review38496
---



hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java


+1. I agree. Just by looking at the name, HBaseAbstractKeyFactory sounds 
like it's some kind of HBase specific extension on an AbstractKeyFactory rather 
than an extension of HBaseKeyFactory.


- Swarnim Kulkarni


On April 1, 2014, 12:59 a.m., Navis Ryu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/18179/
> ---
> 
> (Updated April 1, 2014, 12:59 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-6411
> https://issues.apache.org/jira/browse/HIVE-6411
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-2599 introduced using custom object for the row key. But it forces key 
> objects to extend HBaseCompositeKey, which is again extension of LazyStruct. 
> If user provides proper Object and OI, we can replace internal key and keyOI 
> with those. 
> 
> Initial implementation is based on factory interface.
> {code}
> public interface HBaseKeyFactory {
>   void init(SerDeParameters parameters, Properties properties) throws 
> SerDeException;
>   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
>   LazyObjectBase createObject(ObjectInspector inspector) throws 
> SerDeException;
> }
> {code}
> 
> 
> Diffs
> -
> 
>   hbase-handler/pom.xml 132af43 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
> PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
> 5008f15 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java
>  PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseDefaultKeyFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
> PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
> PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
> PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
> b64590d 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
> 4fe1b1b 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
>  142bfd8 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
> fc40195 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java
>  13c344b 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
> PRE-CREATION 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
> PRE-CREATION 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
> 7c4fc9f 
>   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
>   hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
>   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
>   hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
> PRE-CREATION 
>   itests/util/pom.xml e9720df 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
>   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
> d39ee2e 
>   ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
> 5f1329c 
>   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
>  9f35575 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
>   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
>   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
>   serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
> PRE-CREATION 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
> 1fd6853 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 
> 3334dff 
>   serde/src/java/org/apache/hadoop/hive/serde2/la

[jira] [Commented] (HIVE-6797) Add protection against divide by zero in stats annotation

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956846#comment-13956846
 ] 

Hive QA commented on HIVE-6797:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638001/HIVE-6797.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5513 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
org.apache.hcatalog.pig.TestHCatStorerMulti.testStoreBasicTable
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2068/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2068/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638001

> Add protection against divide by zero in stats annotation
> -
>
> Key: HIVE-6797
> URL: https://issues.apache.org/jira/browse/HIVE-6797
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor, Statistics
>Affects Versions: 0.13.0
>Reporter: Prasanth J
>Assignee: Prasanth J
> Fix For: 0.13.0
>
> Attachments: HIVE-6797.1.patch, HIVE-6797.2.patch
>
>
> In stats annotation, the denominator computation in join operator is not 
> protected for divide by zero exception. It will be an issue when NDV (count 
> distinct) updated by updateStats() becomes 0. This patch adds protection in 
> updateStats() method to prevent divide-by-zero in downstream operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
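The kind of divide-by-zero guard HIVE-6797 describes can be illustrated with a minimal sketch. The method name and the estimation formula here are hypothetical simplifications, not the actual Hive patch.

```java
// Sketch of protecting a stats-annotation denominator: clamp the NDV
// (count distinct) to at least 1 so the join cardinality estimate never
// divides by zero when an updated NDV collapses to 0.
public class StatsGuardSketch {
    /** Estimate join output rows as (r1 * r2) / max(ndv1, ndv2, 1). */
    static long joinRowEstimate(long rows1, long rows2, long ndv1, long ndv2) {
        long denom = Math.max(Math.max(ndv1, ndv2), 1L); // guard: never 0
        return (rows1 * rows2) / denom;
    }

    public static void main(String[] args) {
        // With ndv = 0 on both sides the clamped denominator is 1, not 0.
        System.out.println(joinRowEstimate(100, 200, 0, 0)); // 20000
        System.out.println(joinRowEstimate(100, 200, 10, 4)); // 2000
    }
}
```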


Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Xuefu Zhang


> On March 25, 2014, 6:38 p.m., Xuefu Zhang wrote:
> > hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java,
> >  line 31
> > 
> >
> > Do you think AbstractHBaseKeyFactory is slightly better?
> 
> Navis Ryu wrote:
> Yes, it's conventionally better name. But I wanted related things 
> adjacent to each other. You don't like it?

It's not that I like it or not. AbstractHBaseKeyFactory sounds a little less 
confusing and is more in keeping with Java class naming conventions. For 
instance, there is a Java class called AbstractExecutorService rather than 
ExecutorAbstractService. This is just my personal view, of course.


- Xuefu


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/#review38465
---


On April 1, 2014, 12:59 a.m., Navis Ryu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/18179/
> ---
> 
> (Updated April 1, 2014, 12:59 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-6411
> https://issues.apache.org/jira/browse/HIVE-6411
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-2599 introduced using custom object for the row key. But it forces key 
> objects to extend HBaseCompositeKey, which is again extension of LazyStruct. 
> If user provides proper Object and OI, we can replace internal key and keyOI 
> with those. 
> 
> Initial implementation is based on factory interface.
> {code}
> public interface HBaseKeyFactory {
>   void init(SerDeParameters parameters, Properties properties) throws 
> SerDeException;
>   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
>   LazyObjectBase createObject(ObjectInspector inspector) throws 
> SerDeException;
> }
> {code}
> 
> 
> Diffs
> -
> 
>   hbase-handler/pom.xml 132af43 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
> PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
> 5008f15 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java
>  PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseDefaultKeyFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
> PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
> PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
> PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
> b64590d 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
> 4fe1b1b 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
>  142bfd8 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
> fc40195 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java
>  13c344b 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
> PRE-CREATION 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
> PRE-CREATION 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
> 7c4fc9f 
>   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
>   hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
>   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
>   hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
> PRE-CREATION 
>   itests/util/pom.xml e9720df 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
>   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
> d39ee2e 
>   ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
> 5f1329c 
>   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
>  9f35575 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
>   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
>   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
>   serde/src/java/org/

[jira] [Commented] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956840#comment-13956840
 ] 

Thejas M Nair commented on HIVE-6766:
-

[~rhbutani] This is a very useful bug fix to have in Hive 0.13.


> HCatLoader always returns Char datatype with maxlength(255)  when table 
> format is ORC
> -
>
> Key: HIVE-6766
> URL: https://issues.apache.org/jira/browse/HIVE-6766
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-6766.1.patch, HIVE-6766.patch
>
>
> attached patch contains
> org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar()
> which shows that char(5) value written to Hive (ORC) table using HCatStorer 
> will come back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/#review39179
---


- Xuefu Zhang


On April 1, 2014, 12:59 a.m., Navis Ryu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/18179/
> ---
> 
> (Updated April 1, 2014, 12:59 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-6411
> https://issues.apache.org/jira/browse/HIVE-6411
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-2599 introduced using custom object for the row key. But it forces key 
> objects to extend HBaseCompositeKey, which is again extension of LazyStruct. 
> If user provides proper Object and OI, we can replace internal key and keyOI 
> with those. 
> 
> Initial implementation is based on factory interface.
> {code}
> public interface HBaseKeyFactory {
>   void init(SerDeParameters parameters, Properties properties) throws 
> SerDeException;
>   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
>   LazyObjectBase createObject(ObjectInspector inspector) throws 
> SerDeException;
> }
> {code}
> 
> 
> Diffs
> -
> 
>   hbase-handler/pom.xml 132af43 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
> PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
> 5008f15 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java
>  PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseDefaultKeyFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
> PRE-CREATION 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
>  PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
> PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
> PRE-CREATION 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
> b64590d 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
> 4fe1b1b 
>   
> hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
>  142bfd8 
>   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
> fc40195 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java
>  13c344b 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
> PRE-CREATION 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
> PRE-CREATION 
>   
> hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
> 7c4fc9f 
>   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
>   hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
>   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
>   hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
> PRE-CREATION 
>   itests/util/pom.xml e9720df 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
>   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
> d39ee2e 
>   ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
> 5f1329c 
>   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
>   
> ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
>  9f35575 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
>   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
>   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
>   serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
> PRE-CREATION 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
> 1fd6853 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 
> 3334dff 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java 
> 82c1263 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
>  8a5386a 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java 
> 598683f 
>   

[jira] [Created] (HIVE-6806) Native Avro support in Hive

2014-04-01 Thread Jeremy Beard (JIRA)
Jeremy Beard created HIVE-6806:
--

 Summary: Native Avro support in Hive
 Key: HIVE-6806
 URL: https://issues.apache.org/jira/browse/HIVE-6806
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jeremy Beard
Priority: Minor


Avro is well established and widely used within Hive; however, creating 
Avro-backed tables requires the messy listing of the SerDe, InputFormat and 
OutputFormat classes.

Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had 
native Avro support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
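The "messy listing" referred to in HIVE-6806 looks roughly like this today. The table name and schema URL are hypothetical; the class names are the standard Hive Avro SerDe and container formats.

```sql
-- Creating an Avro-backed table without native support: the SerDe,
-- InputFormat and OutputFormat must all be spelled out explicitly.
CREATE TABLE episodes
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/episodes.avsc');
```

Native support along the lines of Parquet's `STORED AS PARQUET` would collapse this to a single clause.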


[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig

2014-04-01 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956820#comment-13956820
 ] 

Xuefu Zhang commented on HIVE-6783:
---

+1

> Incompatible schema for maps between parquet-hive and parquet-pig
> -
>
> Key: HIVE-6783
> URL: https://issues.apache.org/jira/browse/HIVE-6783
> Project: Hive
>  Issue Type: Bug
>  Components: File Formats
>Affects Versions: 0.13.0
>Reporter: Tongjie Chen
> Fix For: 0.13.0
>
> Attachments: HIVE-6783.1.patch.txt, HIVE-6783.2.patch.txt, 
> HIVE-6783.3.patch.txt, HIVE-6783.4.patch.txt
>
>
> see also in following parquet issue:
> https://github.com/Parquet/parquet-mr/issues/290
> The schema written for maps isn't compatible between hive and pig. This means 
> any files written in one cannot be properly read in the other.
> More specifically, for the same map column c1, parquet-pig generates schema:
> message pig_schema {
>   optional group c1 (MAP) {
>     repeated group map (MAP_KEY_VALUE) {
>       required binary key (UTF8);
>       optional binary value;
>     }
>   }
> }
> while parquet-hive generates schema:
> message hive_schema {
>   optional group c1 (MAP_KEY_VALUE) {
>     repeated group map {
>       required binary key;
>       optional binary value;
>     }
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6788) Abandoned opened transactions not being timed out

2014-04-01 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6788:
-

Status: Patch Available  (was: Open)

> Abandoned opened transactions not being timed out
> -
>
> Key: HIVE-6788
> URL: https://issues.apache.org/jira/browse/HIVE-6788
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Affects Versions: 0.13.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-6788.patch
>
>
> If a client abandons an open transaction it is never closed.  This does not 
> cause any immediate problems (as locks are timed out) but it will eventually 
> lead to high levels of open transactions in the lists that readers need to be 
> aware of when reading tables or partitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6788) Abandoned opened transactions not being timed out

2014-04-01 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6788:
-

Attachment: HIVE-6788.patch

This patch adds logic to getOpenTxns to check for any abandoned transactions 
and move them from open to aborted before returning the list of open 
transactions.

> Abandoned opened transactions not being timed out
> -
>
> Key: HIVE-6788
> URL: https://issues.apache.org/jira/browse/HIVE-6788
> Project: Hive
>  Issue Type: Bug
>  Components: Locking
>Affects Versions: 0.13.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-6788.patch
>
>
> If a client abandons an open transaction it is never closed.  This does not 
> cause any immediate problems (as locks are timed out) but it will eventually 
> lead to high levels of open transactions in the lists that readers need to be 
> aware of when reading tables or partitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
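The getOpenTxns change described above can be sketched as follows. This is a self-contained illustration with hypothetical names, not Hive's actual TxnHandler code.

```java
// Sketch: before returning the open-transaction list, move any transaction
// whose last heartbeat is older than the timeout from "open" to "aborted",
// so abandoned transactions do not accumulate in readers' open-txn lists.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TxnTimeoutSketch {
    static List<Long> getOpenTxns(Map<Long, Long> lastHeartbeat,
                                  List<Long> aborted,
                                  long nowMillis, long timeoutMillis) {
        List<Long> stillOpen = new ArrayList<>();
        for (Map.Entry<Long, Long> e : lastHeartbeat.entrySet()) {
            if (nowMillis - e.getValue() > timeoutMillis) {
                aborted.add(e.getKey());   // abandoned: time it out
            } else {
                stillOpen.add(e.getKey()); // recently heartbeated: keep open
            }
        }
        return stillOpen;
    }

    public static void main(String[] args) {
        Map<Long, Long> beats = new HashMap<>();
        beats.put(1L, 1_000L);  // heartbeat long ago -> abandoned
        beats.put(2L, 9_500L);  // recent heartbeat  -> still open
        List<Long> aborted = new ArrayList<>();
        System.out.println(getOpenTxns(beats, aborted, 10_000L, 5_000L)); // [2]
        System.out.println(aborted); // [1]
    }
}
```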


[jira] [Commented] (HIVE-6800) HiveServer2 is not passing proxy user setting through hive-site

2014-04-01 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956740#comment-13956740
 ] 

Vaibhav Gumashta commented on HIVE-6800:


[~prasadm] Thanks for taking a look. The failure looks unrelated.

> HiveServer2 is not passing proxy user setting through hive-site
> ---
>
> Key: HIVE-6800
> URL: https://issues.apache.org/jira/browse/HIVE-6800
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6800.1.patch
>
>
> Setting the following in core-site.xml works fine in a secure cluster with 
> hive.server2.allow.user.substitution set to true:
> {code}
> <property>
>   <name>hadoop.proxyuser.user1.groups</name>
>   <value>users</value>
> </property>
> 
> <property>
>   <name>hadoop.proxyuser.user1.hosts</name>
>   <value>*</value>
> </property>
> {code}
> where user1 will be proxying for user2:
> {code}
> !connect 
> jdbc:hive2:/myhostname:1/;principal=hive/_h...@example.com;hive.server2.proxy.user=user2
>  user1 fakepwd org.apache.hive.jdbc.HiveDriver
> {code}
> However, setting this in hive-site.xml throws "Failed to validate proxy 
> privilage" exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956715#comment-13956715
 ] 

Hive QA commented on HIVE-6766:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637984/HIVE-6766.1.patch

{color:green}SUCCESS:{color} +1 5539 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2066/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2066/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637984

> HCatLoader always returns Char datatype with maxlength(255)  when table 
> format is ORC
> -
>
> Key: HIVE-6766
> URL: https://issues.apache.org/jira/browse/HIVE-6766
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.13.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-6766.1.patch, HIVE-6766.patch
>
>
> attached patch contains
> org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar()
> which shows that char(5) value written to Hive (ORC) table using HCatStorer 
> will come back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6793) DDLSemanticAnalyzer.analyzeShowRoles() should use HiveAuthorizationTaskFactory

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6793:
---

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Xuefu!

> DDLSemanticAnalyzer.analyzeShowRoles() should use HiveAuthorizationTaskFactory
> --
>
> Key: HIVE-6793
> URL: https://issues.apache.org/jira/browse/HIVE-6793
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, Query Processor
>Affects Versions: 0.13.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.14.0
>
> Attachments: HIVE-6793.patch
>
>
> Currently DDLSemanticAnalyzer.analyzeShowRoles() isn't using 
> HiveAuthorizationTaskFactory to create task, at odds with other Authorization 
> related task creations such as for analyzeShowRolePrincipals(). This JIRA is 
> to make it consistent.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6795) metastore initialization should add default roles with default, SBA

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6795:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk & 0.13

> metastore initialization should add default roles with default, SBA
> ---
>
> Key: HIVE-6795
> URL: https://issues.apache.org/jira/browse/HIVE-6795
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.13.0
>Reporter: Deepesh Khandelwal
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-6795.1.patch
>
>
> A HiveServer2 running SQL standard authorization can connect to a metastore 
> running storage-based authorization. Currently the metastore does not add the 
> standard roles to the db in such cases.
> It would be better to add them in these cases as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6802) Fix metastore.thrift: add partition_columns.types constant

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6802:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13
thanks Jason, Sergey, Hari for reviewing

> Fix metastore.thrift: add partition_columns.types constant
> --
>
> Key: HIVE-6802
> URL: https://issues.apache.org/jira/browse/HIVE-6802
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-6802.1.patch
>
>
> HIVE-6642 edited the hive_metastoreConstants.java genned file. 
> Need to add constant to thrift file and regen thrift classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6802) Fix metastore.thrift: add partition_columns.types constant

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6802:


Fix Version/s: 0.13.0

> Fix metastore.thrift: add partition_columns.types constant
> --
>
> Key: HIVE-6802
> URL: https://issues.apache.org/jira/browse/HIVE-6802
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-6802.1.patch
>
>
> HIVE-6642 edited the hive_metastoreConstants.java genned file. 
> Need to add constant to thrift file and regen thrift classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6795) metastore initialization should add default roles with default, SBA

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956564#comment-13956564
 ] 

Harish Butani commented on HIVE-6795:
-

+1 for 0.13

> metastore initialization should add default roles with default, SBA
> ---
>
> Key: HIVE-6795
> URL: https://issues.apache.org/jira/browse/HIVE-6795
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 0.13.0
>Reporter: Deepesh Khandelwal
>Assignee: Thejas M Nair
> Attachments: HIVE-6795.1.patch
>
>
> A HiveServer2 running SQL standard authorization can connect to a metastore 
> running storage-based authorization. Currently the metastore does not add the 
> standard roles to the db in such cases.
> It would be better to add them in these cases as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956565#comment-13956565
 ] 

Ashutosh Chauhan commented on HIVE-6804:


+1

> sql std auth - granting existing table privilege to owner should result in 
> error
> 
>
> Key: HIVE-6804
> URL: https://issues.apache.org/jira/browse/HIVE-6804
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Deepesh Khandelwal
>Assignee: Thejas M Nair
> Attachments: HIVE-6804.1.patch
>
>
> The table owner gets all privileges on the table at the time of table creation,
> but granting some or all of those privileges via a grant statement still works, 
> resulting in duplicate privileges.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956566#comment-13956566
 ] 

Harish Butani commented on HIVE-6804:
-

+1 for 0.13

> sql std auth - granting existing table privilege to owner should result in 
> error
> 
>
> Key: HIVE-6804
> URL: https://issues.apache.org/jira/browse/HIVE-6804
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Deepesh Khandelwal
>Assignee: Thejas M Nair
> Attachments: HIVE-6804.1.patch
>
>
> The table owner gets all privileges on the table at the time of table creation,
> but granting some or all of those privileges via a grant statement still works, 
> resulting in duplicate privileges.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956561#comment-13956561
 ] 

Harish Butani commented on HIVE-6789:
-

+1 for .13

> HiveStatement client transport lock should unlock in finally block.
> ---
>
> Key: HIVE-6789
> URL: https://issues.apache.org/jira/browse/HIVE-6789
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6789.1.patch
>
>
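The fix being approved here is the classic lock-in-try, unlock-in-finally pattern. A minimal sketch (in Python rather than the JDBC driver's Java, with a hypothetical `execute` standing in for HiveStatement's locked client-transport call):

```python
import threading

transport_lock = threading.Lock()

def execute(fail: bool) -> int:
    """Hypothetical stand-in for HiveStatement's locked transport call."""
    transport_lock.acquire()
    try:
        if fail:
            raise RuntimeError("transport error")
        return 42
    finally:
        # Release in finally so an exception cannot leave the lock held.
        transport_lock.release()

try:
    execute(True)
except RuntimeError:
    pass

# The lock was released despite the exception, so the next call proceeds
# instead of deadlocking.
print(execute(False))
```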




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6329) Support column level encryption/decryption

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956549#comment-13956549
 ] 

Hive QA commented on HIVE-6329:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637978/HIVE-6329.8.patch.txt

{color:green}SUCCESS:{color} +1 5515 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2064/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2064/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637978

> Support column level encryption/decryption
> --
>
> Key: HIVE-6329
> URL: https://issues.apache.org/jira/browse/HIVE-6329
> Project: Hive
>  Issue Type: New Feature
>  Components: Security, Serializers/Deserializers
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-6329.1.patch.txt, HIVE-6329.2.patch.txt, 
> HIVE-6329.3.patch.txt, HIVE-6329.4.patch.txt, HIVE-6329.5.patch.txt, 
> HIVE-6329.6.patch.txt, HIVE-6329.7.patch.txt, HIVE-6329.8.patch.txt
>
>
> We have been receiving some requirements for encryption recently, but Hive does 
> not support it. Before the full implementation via HIVE-5207, this might be 
> useful for some cases.
> {noformat}
> hive> create table encode_test(id int, name STRING, phone STRING, address 
> STRING) 
> > ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
> > WITH SERDEPROPERTIES ('column.encode.indices'='2,3', 
> 'column.encode.classname'='org.apache.hadoop.hive.serde2.Base64WriteOnly') 
> STORED AS TEXTFILE;
> OK
> Time taken: 0.584 seconds
> hive> insert into table encode_test select 
> 100,'navis','010-0000-0000','Seoul, Seocho' from src tablesample (1 rows);
> ..
> OK
> Time taken: 5.121 seconds
> hive> select * from encode_test;
> OK
> 100   navis MDEwLTAwMDAtMDAwMA==  U2VvdWwsIFNlb2Nobw==
> Time taken: 0.078 seconds, Fetched: 1 row(s)
> hive> 
> {noformat}
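The Base64WriteOnly behaviour in the transcript above can be sanity-checked outside Hive: conceptually, the serde base64-encodes the values at the configured column indices (2 and 3 here) on write. A minimal Python sketch of that idea, not the actual serde code:

```python
import base64

def encode_columns(row, indices):
    """Base64-encode the values at the given column indices (conceptual sketch)."""
    return [
        base64.b64encode(v.encode()).decode() if i in indices else v
        for i, v in enumerate(row)
    ]

# The same row as in the transcript; columns 2 and 3 come back encoded,
# matching the SELECT output shown above.
row = ["100", "navis", "010-0000-0000", "Seoul, Seocho"]
print(encode_columns(row, {2, 3}))
```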



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6800) HiveServer2 is not passing proxy user setting through hive-site

2014-04-01 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956519#comment-13956519
 ] 

Prasad Mujumdar commented on HIVE-6800:
---

[~vaibhavgumashta] Thanks for fixing the issue. Looks fine to me.
+1


> HiveServer2 is not passing proxy user setting through hive-site
> ---
>
> Key: HIVE-6800
> URL: https://issues.apache.org/jira/browse/HIVE-6800
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.13.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: HIVE-6800.1.patch
>
>
> Setting the following in core-site.xml works fine in a secure cluster with 
> hive.server2.allow.user.substitution set to true:
> {code}
> <property>
>   <name>hadoop.proxyuser.user1.groups</name>
>   <value>users</value>
> </property>
> 
> <property>
>   <name>hadoop.proxyuser.user1.hosts</name>
>   <value>*</value>
> </property>
> {code}
> where user1 will be proxying for user2:
> {code}
> !connect 
> jdbc:hive2://myhostname:1/;principal=hive/_h...@example.com;hive.server2.proxy.user=user2
>  user1 fakepwd org.apache.hive.jdbc.HiveDriver
> {code}
> However, setting this in hive-site.xml throws "Failed to validate proxy 
> privilage" exception.
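Wherever the properties are loaded from, Hadoop's proxy-user check boils down to a group test and a host test against the `hadoop.proxyuser.<user>.*` settings. A rough conceptual sketch (`may_proxy` is hypothetical, not the actual ProxyUsers API):

```python
PROXY_CONF = {
    "hadoop.proxyuser.user1.groups": "users",
    "hadoop.proxyuser.user1.hosts": "*",
}

def may_proxy(real_user, proxied_user_groups, client_host):
    """Conceptual sketch of the proxy-privilege check."""
    groups = PROXY_CONF.get(f"hadoop.proxyuser.{real_user}.groups", "").split(",")
    hosts = PROXY_CONF.get(f"hadoop.proxyuser.{real_user}.hosts", "")
    group_ok = any(g in groups for g in proxied_user_groups)
    host_ok = hosts == "*" or client_host in hosts.split(",")
    return group_ok and host_ok

print(may_proxy("user1", ["users"], "myhostname"))  # user1 may proxy for a member of "users"
print(may_proxy("user3", ["users"], "myhostname"))  # no proxyuser entries for user3
```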



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956483#comment-13956483
 ] 

Dilli Arumugam commented on HIVE-6799:
--

[~vaibhavgumashta]
Your observation is right - the problem is with a principal name of the form 
serviceName/h...@realm.com, which would typically be another service.

> HiveServer2 needs to map kerberos name to local name before proxy check
> ---
>
> Key: HIVE-6799
> URL: https://issues.apache.org/jira/browse/HIVE-6799
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
>
> HiveServer2 does not map the Kerberos name of the authenticated principal to a 
> local name.
> Due to this, I get an error like the following in the HiveServer2 log:
> Failed to validate proxy privilage of knox/hdps.example.com for sam
> I have KINITED as knox/hdps.example@example.com
> I do have the following in core-site.xml
>   <property>
>     <name>hadoop.proxyuser.knox.groups</name>
>     <value>users</value>
>   </property>
>   <property>
>     <name>hadoop.proxyuser.knox.hosts</name>
>     <value>*</value>
>   </property>
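The missing step the issue describes — mapping the authenticated Kerberos principal to its local short name before the proxy check — amounts, under Hadoop's default auth_to_local rule, to stripping the instance and realm. A hedged sketch:

```python
def short_name(principal: str) -> str:
    """Default-rule sketch: 'service/host@REALM' -> 'service'."""
    return principal.split("@", 1)[0].split("/", 1)[0]

# With the mapping applied, the proxy check sees "knox", which does have
# hadoop.proxyuser.knox.* entries, instead of the full principal name.
print(short_name("knox/hdps.example.com@EXAMPLE.COM"))
```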



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956456#comment-13956456
 ] 

Hive QA commented on HIVE-6411:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637969/HIVE-6411.8.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5515 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2063/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2063/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637969

> Support more generic way of using composite key for HBaseHandler
> 
>
> Key: HIVE-6411
> URL: https://issues.apache.org/jira/browse/HIVE-6411
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-6411.1.patch.txt, HIVE-6411.2.patch.txt, 
> HIVE-6411.3.patch.txt, HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, 
> HIVE-6411.6.patch.txt, HIVE-6411.7.patch.txt, HIVE-6411.8.patch.txt
>
>
> HIVE-2599 introduced using a custom object for the row key, but it forces key 
> objects to extend HBaseCompositeKey, which is itself an extension of LazyStruct. 
> If the user provides a proper object and ObjectInspector (OI), we can replace the 
> internal key and keyOI with those.
> The initial implementation is based on a factory interface:
> {code}
> public interface HBaseKeyFactory {
>   void init(SerDeParameters parameters, Properties properties) throws 
> SerDeException;
>   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
>   LazyObjectBase createObject(ObjectInspector inspector) throws 
> SerDeException;
> }
> {code}
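For illustration only (the names and format here are made up, not Hive's API): the point of a pluggable key factory is to let the user own the mapping between a structured composite key and the raw HBase row-key bytes, for example:

```python
import struct

KEY_FORMAT = ">i10s"  # 4-byte big-endian int + fixed-width 10-byte string

def serialize_key(region_id: int, name: str) -> bytes:
    """Pack a (region_id, name) composite key into row-key bytes."""
    return struct.pack(KEY_FORMAT, region_id, name.encode())

def deserialize_key(row_key: bytes):
    """Unpack row-key bytes back into the (region_id, name) fields."""
    region_id, raw = struct.unpack(KEY_FORMAT, row_key)
    return region_id, raw.rstrip(b"\x00").decode()

row_key = serialize_key(7, "navis")
print(len(row_key), deserialize_key(row_key))
```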



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Attachment: HIVE-5998.12.patch

Not my best day... forgot to say --no-prefix on .11.patch

> Add vectorized reader for Parquet files
> ---
>
> Key: HIVE-5998
> URL: https://issues.apache.org/jira/browse/HIVE-5998
> Project: Hive
>  Issue Type: Sub-task
>  Components: Serializers/Deserializers, Vectorization
>Reporter: Remus Rusanu
>Assignee: Remus Rusanu
>Priority: Minor
>  Labels: Parquet, vectorization
> Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
> HIVE-5998.11.patch, HIVE-5998.12.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, 
> HIVE-5998.4.patch, HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, 
> HIVE-5998.8.patch, HIVE-5998.9.patch
>
>
> HIVE-5783 is adding native Parquet support in Hive. As Parquet is a columnar 
> format, it makes sense to provide a vectorized reader, similar to what the RC 
> and ORC formats have, to benefit from the vectorized execution engine.
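The benefit referred to — why a columnar format pairs well with a vectorized (batch-at-a-time) reader — can be shown schematically (pure illustration, not Hive code): per-row processing pays a dispatch cost on every row, while a vectorized reader amortizes it over a whole batch of contiguous column values:

```python
def sum_rowwise(rows, col):
    """Row-at-a-time: one dispatch per row in a real engine."""
    total = 0
    for row in rows:
        total += row[col]
    return total

def sum_vectorized(batches, col):
    """Batch-at-a-time: one dispatch per batch of ~1024 values."""
    total = 0
    for batch in batches:
        total += sum(batch[col])
    return total

rows = [(i, i * 2) for i in range(10)]
batches = [{1: [r[1] for r in rows]}]  # column 1 laid out contiguously
print(sum_rowwise(rows, 1), sum_vectorized(batches, 1))
```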



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6802) Fix metastore.thrift: add partition_columns.types constant

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956354#comment-13956354
 ] 

Hive QA commented on HIVE-6802:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637968/HIVE-6802.1.patch

{color:green}SUCCESS:{color} +1 5513 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2062/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2062/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637968

> Fix metastore.thrift: add partition_columns.types constant
> --
>
> Key: HIVE-6802
> URL: https://issues.apache.org/jira/browse/HIVE-6802
> Project: Hive
>  Issue Type: Bug
>Reporter: Harish Butani
>Assignee: Harish Butani
> Attachments: HIVE-6802.1.patch
>
>
> HIVE-6642 edited the hive_metastoreConstants.java genned file. 
> Need to add constant to thrift file and regen thrift classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

