[jira] [Created] (HIVE-18376) Update committer-list

2018-01-04 Thread Chris Drome (JIRA)
Chris Drome created HIVE-18376:
--

 Summary: Update committer-list
 Key: HIVE-18376
 URL: https://issues.apache.org/jira/browse/HIVE-18376
 Project: Hive
  Issue Type: Bug
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Trivial


Adding a new entry to committer-list:

{noformat}
+
+cdrome 
+Chris Drome 
+<a href="https://www.oath.com/">Oath</a>
+
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: ptest not picking patches

2017-11-01 Thread Chris Drome
Me as well for HIVE-17853.

Attempting to cancel/resubmit patches.

On Wed, Nov 1, 2017 at 10:50 AM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

> Me too. Seen this yesterday with HIVE-17834.
>
> Thanks
> Prasanth
>
>
>
> On Wed, Nov 1, 2017 at 10:26 AM -0700, "Deepak Jaiswal" <
> djais...@hortonworks.com> wrote:
>
>
> Hi,
>
> I uploaded a couple of patches, but they don't appear in the Pre-commit tests
> queue.
> Is anyone else facing this?
>
> Regards,
> Deepak
>
>


[jira] [Created] (HIVE-17275) Auto-merge fails on writes of UNION ALL output to ORC file with dynamic partitioning

2017-08-08 Thread Chris Drome (JIRA)
Chris Drome created HIVE-17275:
--

 Summary: Auto-merge fails on writes of UNION ALL output to ORC 
file with dynamic partitioning
 Key: HIVE-17275
 URL: https://issues.apache.org/jira/browse/HIVE-17275
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 2.2.0
Reporter: Chris Drome
Assignee: Chris Drome


If dynamic partitioning is used to write the output of UNION or UNION ALL 
queries into ORC files with hive.merge.tezfiles=true, the merge step fails as 
follows:

{noformat}
2017-08-08T11:27:19,958 ERROR [e7b1f06d-d632-408a-9dff-f7ae042cd25a main] SessionState: Vertex failed, vertexName=File Merge, vertexId=vertex_1502216690354_0001_33_00, diagnostics=[Task failed, taskId=task_1502216690354_0001_33_00_00, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1502216690354_0001_33_00_00_0:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Multiple partitions for one merge mapper: hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/1 NOT EQUAL TO hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/2
  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
  at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
  at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
  at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
  at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
  at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
  at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
  at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Multiple partitions for one merge mapper: hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/1 NOT EQUAL TO hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/2
  at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:225)
  at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.run(MergeFileRecordProcessor.java:154)
  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:185)
  ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Multiple partitions for one merge mapper: hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/1 NOT EQUAL TO hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/2
  at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.processKeyValuePairs(OrcFileMergeOperator.java:169)
  at org.apache.hadoop.hive.ql.exec.OrcFileMergeOperator.process(OrcFileMergeOperator.java:72)
  at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:216)
  ... 16 more
Caused by: java.io.IOException: Multiple partitions for one merge mapper: hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/1 NOT EQUAL TO hdfs://localhost:39943/build/ql/test/data/warehouse/partunion1/.hive-staging_hive_2017-08-08_11-27-09_105_286405133968521828-1/-ext-10002/part1=2014/2
  at org.apache.hadoop.hive.ql.exec.AbstractFileMergeOperator.checkPartitionsMatch(AbstractFileMergeOperator.java:180)
  at org.apache.hadoop.hive.ql.exec.AbstractFileMergeOperator.fixTmpPath(AbstractFileMergeOperator.java:197)
  at
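The root cause reported above is that the ORC merge task receives input files under two different leaf directories (`part1=2014/1` and `part1=2014/2` — one numbered subdirectory per UNION ALL branch), while the merge operator requires all inputs for one mapper to share a single parent directory. A minimal Python sketch of that parent-equality check (illustrative paths and helper name; not Hive's actual code) shows why per-branch subdirectories trip it:

```python
from pathlib import PurePosixPath

def check_partitions_match(paths):
    """Mimics the spirit of AbstractFileMergeOperator.checkPartitionsMatch:
    every input file handled by one merge mapper must share one parent dir."""
    parents = {str(PurePosixPath(p).parent) for p in paths}
    if len(parents) > 1:
        a, b = sorted(parents)
        raise IOError(f"Multiple partitions for one merge mapper: {a} NOT EQUAL TO {b}")
    return parents.pop()

# Without UNION ALL, all files sit directly under the partition directory.
ok = check_partitions_match([
    "/warehouse/partunion1/-ext-10002/part1=2014/000000_0",
    "/warehouse/partunion1/-ext-10002/part1=2014/000001_0",
])
assert ok.endswith("part1=2014")

# UNION ALL writes each branch into its own numbered subdirectory
# (part1=2014/1, part1=2014/2), so the parents differ and the check fails.
try:
    check_partitions_match([
        "/warehouse/partunion1/-ext-10002/part1=2014/1/000000_0",
        "/warehouse/partunion1/-ext-10002/part1=2014/2/000000_0",
    ])
except IOError as e:
    print("merge failed:", e)
```

The same query without `hive.merge.tezfiles=true` would leave the per-branch subdirectories in place and never run this check.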

Re: [DISCUSS] Pre-commit tests before commits

2016-10-14 Thread Chris Drome
+1.

We have a build/test environment for branch-1.2 and found the following two 
reasons for some of the flakiness in tests:
1. Derby leaves its database files hanging around the target directory, so the 
next test framework (e.g. TestMinimrCliDriver, TestMiniTezCliDriver) that runs 
picks up the metastore state from the previous set of tests, which sometimes 
results in random failures depending on what qfiles run and in which order.
2. Certain qfile tests create new tables and don't clean them up after 
completing. This alone can cause random failures depending on the order of 
tests, and combined with 1. above causes other random failures.
Upon completion, a qfile test should return the metastore to the same state it 
was in before the qfile test was run.
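The Derby-state leakage in point 1 can be guarded against by wiping the embedded metastore files between test drivers. A minimal sketch of such a cleanup helper (hypothetical function and paths — `metastore_db` and `derby.log` are Derby's defaults; Hive's actual test harness differs):

```python
import os
import shutil
import tempfile

def reset_derby_state(target_dir):
    """Remove leftover Derby database files so the next test driver starts
    with a fresh metastore instead of inheriting tables created (and not
    cleaned up) by earlier qfile tests."""
    for name in ("metastore_db", "derby.log"):
        path = os.path.join(target_dir, name)
        if os.path.isdir(path):
            shutil.rmtree(path)
        elif os.path.isfile(path):
            os.remove(path)

# Example: simulate stale state left behind by a previous driver run.
target = tempfile.mkdtemp()
os.makedirs(os.path.join(target, "metastore_db"))
open(os.path.join(target, "derby.log"), "w").close()

reset_derby_state(target)
assert not os.path.exists(os.path.join(target, "metastore_db"))
assert not os.path.exists(os.path.join(target, "derby.log"))
```

Running this between TestMinimrCliDriver and TestMiniTezCliDriver style drivers would prevent one suite's metastore state from bleeding into the next.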
chris
 

On Friday, October 14, 2016 12:46 PM, Eugene Koifman 
 wrote:
 

Making global formatting changes will create a patch that changes almost
every line and makes git annotate useless, which makes figuring out the
history of changes difficult.

Perhaps disabling tests should require a separate +1 (ideally from initial
author).


On 10/14/16, 12:27 PM, "Siddharth Seth"  wrote:

>Once we agree upon what the flow should be, I'm also in favor of reverting
>patches which break things. Such commits cause problems for all subsequent
>patches - where multiple people end up debugging the test.
>In terms of disabling tests - we may want to consider how often the test
>fails, before disabling them. There are flaky tests - and I suspect we'll
>end up with a lot of disabled tests if a test is disabled for a single
>failure.
>There have been offline suggestions about adding findbugs analysis, rat
>checks etc. The indentation checks, I think, falls into the same category.
>Would be really nice to hear from others as well. These changes will be
>painful to start with - but should help once the tests stabilize further.
>
>On Fri, Oct 14, 2016 at 2:01 AM, Peter Vary  wrote:
>
>> +1 from me too.
>>
>> I think it lowers the barrier for the contributors, if there are clean
>>and
>> easy to follow rules for adding a patch. I had very good experience with
>> the Hadoop commit flow where only "all green" results are accepted.
>>
>> If we are thinking about changes in the commit flow, it would be great
>>to
>> run code format checks before and after the patch, and call out any
>> increase of formatting errors. I see too many nits in the reviews where
>>the
>> reviewer points out formatting problems. This could be avoided and the
>> reviewer can concentrate on the real code instead of formatting.
>>
>> After we decide the new commit flow we should update the documentation
>>on
>> the wiki as well.
>>
>> Of course I am happy to help out with any of the related tasks too, so
>>if
>> you need my help just say so :)
>>
>> Thanks,
>> Peter
>>
>> > On Oct 14, 2016, at 10:29 AM, Zsombor Klara
>>
>> wrote:
>> >
>> > +1 from my side as well. I also like the suggestion to revert a patch
>> > causing new test failures instead of expecting the owner to fix the
>> issues
>> > in follow up jiras.
>> >
>> > And I would also have a borderline "radical" suggestion.. if a flaky
>>test
>> > is failing and the committer isn't sure at a glance that the failure
>>is
>> > benign (stat diff failure for example I would consider low risk, race
>> > conditions on the other hand high risk), put an ignore annotation on
>>it.
>> In
>> > my view a flaky test is pretty much useless. If it breaks people will
>> just
>> > ignore it (because it fails so often...) and anyway how can you be
>>sure
>> > that the green run is the standard and the failure is the exception
>>and
>> not
>> > the other way around. We have thousands of tests, ignoring a dozen
>>won't
>> > decrease coverage significantly and will take us that much closer to a
>> > point where we can demand a green pre-commit run.
>> >
>> > Thanks
>> >
>> > On Fri, Oct 14, 2016 at 9:45 AM, Prasanth Jayachandran <
>> > pjayachand...@hortonworks.com> wrote:
>> >
>> >> +1 on the proposal. Adding more to it.
>> >>
>> >> A lot of time has been spent on improving the test runtime and
>>bringing
>> >> down the flaky tests.
>> >> Following jiras should give an overview of the effort involved
>> >> https://issues.apache.org/jira/browse/HIVE-14547
>> >> https://issues.apache.org/jira/browse/HIVE-13503
>> >>
>> >> Committers please ensure that the reported failures are absolutely
>>not
>> >> related
>> >> to the patch before committing it.
>> >>
>> >> I would also propose the following to keep the build clean, along with
>> >> some tips to maintain fast test runs:
>> >>
>> >> 1) Revert any patch that is causing a failure. It should be the
>> >> responsibility of the contributor to make sure the patch is not
>> >> causing any failures. I am against creating follow-ups for fixing
>> >> test failures, usually because they get ignored or get lower
>> >> priority, causing wasted effort and time in failure analysis for
>> >> every other developer waiting to commi

[jira] [Created] (HIVE-14870) OracleStore: RawStore implementation optimized for Oracle

2016-09-30 Thread Chris Drome (JIRA)
Chris Drome created HIVE-14870:
--

 Summary: OracleStore: RawStore implementation optimized for Oracle
 Key: HIVE-14870
 URL: https://issues.apache.org/jira/browse/HIVE-14870
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Chris Drome
Assignee: Chris Drome
 Attachments: OracleStoreDesignProposal.pdf







[jira] [Created] (HIVE-14344) Intermittent failures caused by leaking delegation tokens

2016-07-26 Thread Chris Drome (JIRA)
Chris Drome created HIVE-14344:
--

 Summary: Intermittent failures caused by leaking delegation tokens
 Key: HIVE-14344
 URL: https://issues.apache.org/jira/browse/HIVE-14344
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 2.1.0, 1.2.1
Reporter: Chris Drome
Assignee: Chris Drome


We have experienced random job failures caused by leaking delegation tokens. 
The Tez child task will fail because it is attempting to read from the 
delegation tokens directory of a different (related) task.

Failure results in the following type of stack trace:

{noformat}
2016-07-21 16:57:18,061 [FATAL] [TezChild] |tez.ReduceRecordSource|: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:370)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:292)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:249)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:148)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1738)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.io.IOException: Exception reading file:/grid/4/tmp/yarn-local/usercache/.../appcache/application_1468602386465_489814/container_e02_1468602386465_489814_01_01/container_tokens
at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:237)
at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:74)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:650)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:756)
at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:316)
at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:279)
at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:272)
at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.process(CommonMergeJoinOperator.java:258)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:361)
... 17 more
Caused by: java.lang.RuntimeException: java.io.IOException: Exception reading file:/grid/4/tmp/yarn-local/usercache/.../appcache/application_1468602386465_489814/container_e02_1468602386465_489814_01_01/container_tokens
at org.apache.hadoop.mapreduce.security.TokenCache.mergeBinaryTokens(TokenCache.java:141)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:45)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.hadoop.hive.ql.exec.persistence.RowContainer.first(RowContainer.java:222)
... 25 more
Caused by: java.io.IOException: Exception reading file:/grid/4/tmp/yarn-local/usercache/.../appcache

Review Request 50018: HIVE-13989: Extended ACLs are not handled according to specification

2016-07-13 Thread Chris Drome

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/50018/
---

Review request for hive.


Repository: hive-git


Description
---

HIVE-13989: Extended ACLs are not handled according to specification


Diffs
-

  hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/FileOutputCommitterContainer.java 9db3dc1b5d1d1354aaeb4850c6e7ec0b61682fe1
  itests/hive-unit-hadoop2/src/test/java/org/apache/hadoop/hive/ql/security/TestExtendedAcls.java b7983797fed107aeb5e0bc53bc452cfaed95fdf9
  itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/FolderPermissionBase.java 2ae9cc0cecf15ab03e3bad9ff298bad74ee6bbc0
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 611266f88ac62ef3cbe8810c7dda2b70d582b94d
  shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java 70a6857464a38d9a425511b78b54d4231f131f1f

Diff: https://reviews.apache.org/r/50018/diff/


Testing
---


Thanks,

Chris Drome



[jira] [Created] (HIVE-13990) Client should not check dfs.namenode.acls.enabled to determine if extended ACLs are supported

2016-06-09 Thread Chris Drome (JIRA)
Chris Drome created HIVE-13990:
--

 Summary: Client should not check dfs.namenode.acls.enabled to 
determine if extended ACLs are supported
 Key: HIVE-13990
 URL: https://issues.apache.org/jira/browse/HIVE-13990
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 1.2.1
Reporter: Chris Drome


dfs.namenode.acls.enabled is a server-side configuration, and the client should 
not presume to know how the server is configured. Barring a method for querying 
the NN whether ACLs are supported, the client should attempt the operation and 
catch the appropriate exception.
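The proposed try-and-catch approach can be sketched as follows (hypothetical names throughout — `AclsDisabledError` stands in for the RemoteException HDFS raises when ACLs are disabled, and `FakeFs` is a stand-in for the filesystem client):

```python
class AclsDisabledError(Exception):
    """Stand-in for the server error raised when the NameNode is
    configured with dfs.namenode.acls.enabled=false."""

def set_acl_eafp(fs, path, acl):
    """Try the ACL operation and fall back on rejection, instead of
    consulting a server-side config the client cannot read reliably."""
    try:
        fs.set_acl(path, acl)
        return True
    except AclsDisabledError:
        return False  # extended ACLs unsupported; caller falls back to chmod

class FakeFs:
    def __init__(self, acls_enabled):
        self.acls_enabled = acls_enabled
        self.acls = {}
    def set_acl(self, path, acl):
        if not self.acls_enabled:
            raise AclsDisabledError(path)
        self.acls[path] = acl

assert set_acl_eafp(FakeFs(True), "/warehouse/t", "user:cdrome:rwx") is True
assert set_acl_eafp(FakeFs(False), "/warehouse/t", "user:cdrome:rwx") is False
```

The point of the pattern is that the server remains the single source of truth about its own configuration; the client only observes the outcome.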





[jira] [Created] (HIVE-13989) Extended ACLs are not handled according to specification

2016-06-09 Thread Chris Drome (JIRA)
Chris Drome created HIVE-13989:
--

 Summary: Extended ACLs are not handled according to specification
 Key: HIVE-13989
 URL: https://issues.apache.org/jira/browse/HIVE-13989
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 1.2.1
Reporter: Chris Drome
Assignee: Chris Drome








[jira] [Created] (HIVE-13756) Map failure attempts to delete reducer _temporary directory on multi-query pig query

2016-05-13 Thread Chris Drome (JIRA)
Chris Drome created HIVE-13756:
--

 Summary: Map failure attempts to delete reducer _temporary 
directory on multi-query pig query
 Key: HIVE-13756
 URL: https://issues.apache.org/jira/browse/HIVE-13756
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 2.0.0, 1.2.1
Reporter: Chris Drome
Assignee: Chris Drome


A Pig script executed with multi-query enabled, which reads the source data and 
writes it as-is into TABLE_A while also performing a group-by operation whose 
output is written into TABLE_B, can produce erroneous results if any map fails. 
This results in a single MR job that writes the map output to a scratch 
directory relative to TABLE_A and the reducer output to a scratch directory 
relative to TABLE_B.

If one or more maps fail, the cleanup will delete the attempt data relative to 
TABLE_A, but it also deletes the _temporary directory relative to TABLE_B. This 
has the unintended side-effect of preventing subsequent maps from committing 
their data. This means that any map which successfully completed before the 
first map failure will have its data committed as expected, while later maps 
will not, resulting in an incomplete result set.
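The overly broad cleanup can be modeled with a small simulation (directory names and the cleanup function are illustrative, not Hive/HCatalog code): a failed map attempt should remove only its own attempt data under TABLE_A's scratch path, but the buggy behavior also removes TABLE_B's shared _temporary directory.

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
table_a_tmp = os.path.join(root, "table_a", "_temporary")
table_b_tmp = os.path.join(root, "table_b", "_temporary")
attempt_a = os.path.join(table_a_tmp, "attempt_0001_m_000003_0")
pending_b = os.path.join(table_b_tmp, "attempt_0001_r_000000_0")
os.makedirs(attempt_a)
os.makedirs(pending_b)

def buggy_map_failure_cleanup():
    # Intended: drop only the failed attempt's data under TABLE_A.
    shutil.rmtree(attempt_a)
    # Bug analogue: also deletes TABLE_B's shared _temporary directory,
    # so maps that finish later have nowhere to commit their output.
    shutil.rmtree(table_b_tmp)

buggy_map_failure_cleanup()
assert not os.path.exists(attempt_a)
assert not os.path.exists(table_b_tmp)  # collateral damage for TABLE_B
```

Once `_temporary` is gone, any task attempt that had not yet committed loses its staging area, which matches the partial-result symptom described above.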





[jira] [Created] (HIVE-13754) Fix resource leak in HiveClientCache

2016-05-12 Thread Chris Drome (JIRA)
Chris Drome created HIVE-13754:
--

 Summary: Fix resource leak in HiveClientCache
 Key: HIVE-13754
 URL: https://issues.apache.org/jira/browse/HIVE-13754
 Project: Hive
  Issue Type: Bug
  Components: Clients
Affects Versions: 2.0.0, 1.2.1
Reporter: Chris Drome
Assignee: Chris Drome


Found that the {{users}} reference count can go into negative values, which 
prevents {{tearDownIfUnused}} from closing the client connection when called.
This leads to a build up of clients which have been evicted from the cache, are 
no longer in use, but have not been shutdown.
GC will eventually call {{finalize}}, which forcibly closes the connection and 
cleans up the client, but I have seen as many as several hundred open client 
connections as a result.

The main source of this is RetryingMetaStoreClient, which will call 
{{reconnect}} on acquire, which in turn calls {{close}}. This decrements 
{{users}} to -1 on the reconnect; acquire then increases it to 0 while in use, 
and it drops back to -1 on release.
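The reference-count underflow can be reproduced with a toy model (illustrative class, not the real HiveClientCache): once `users` goes negative, the `== 0` check in the teardown path can never fire, so the client is never closed.

```python
class CachedClient:
    """Toy model of HiveClientCache's reference counting."""
    def __init__(self):
        self.users = 0
        self.closed = False

    def acquire(self):
        self.users += 1

    def release(self):
        self.users -= 1
        self.tear_down_if_unused()

    def reconnect(self):
        # Bug analogue: reconnect calls close(), decrementing users even
        # though no matching acquire() ever happened.
        self.close()

    def close(self):
        self.users -= 1
        self.tear_down_if_unused()

    def tear_down_if_unused(self):
        if self.users == 0:  # never true once users has gone negative
            self.closed = True

client = CachedClient()
client.reconnect()  # users: -1
client.acquire()    # users: 0 while the client is actually in use
client.release()    # users: -1 -> tearDownIfUnused never closes it
assert client.users == -1 and not client.closed
```

Leaked instances like this linger until GC calls the finalizer, matching the hundreds of open connections observed.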





Re: [ANNOUNCE] New Hive PMC Member - Sushanth Sowmyan

2015-07-22 Thread Chris Drome
Congratulations Sushanth!
 


 On Wednesday, July 22, 2015 9:46 AM, Carl Steinbach  
wrote:
   

 I am pleased to announce that Sushanth Sowmyan has been elected to the Hive
Project Management Committee. Please join me in congratulating Sushanth!

Thanks.

- Carl


   

Re: [VOTE] Stable releases from branch-1 and experimental releases from master

2015-05-26 Thread Chris Drome
+1
I hope that important bugfixes (and functionality) will be backported from 
trunk to branch-1 for a reasonable amount of time given that many people will 
continue to rely on branch-1.
Thanks,
chris
 


 On Tuesday, May 26, 2015 1:45 PM, Gopal Vijayaraghavan  
wrote:
   

 +1

This would protect those who might not want to lose workflow features from
the stable
releases for the coming year.

I'm not looking forward to a branch merge tearing the Hive CLI client away in a
single chop.

Having a master branch where we can plan for the future is essential and
this is an 
established way to do that without inconveniencing the existing user base.

Also, git - I couldn't have handled this before we moved over onto it.


Cheers,
Gopal

On 5/26/15, 11:41 AM, "Alan Gates"  wrote:

>We have discussed this for several weeks now.  Some concerns have been
>raised which I have tried to address.  I think it is time to vote on it
>as our release plan.  To be specific, I propose:
>
>Hive makes a branch-1 from the current master.  This would be used for
>1.3 and future 1.x releases.  This branch would not deprecate existing
>functionality.  Any new features in this branch would also need to be
>put on master.  An upgrade path for users will be maintained from one
>1.x release to the next, as well as from the latest 1.x release to the
>latest 2.x release.
>
>Going forward releases numbered 2.x will be made from master.  The
>purpose of these releases will be to enable users to get access to new
>features being developed in Hive and allow developers to get feedback.
>It is expected that for a while these releases will not be production
>ready and will be clearly so labeled.  Some legacy features, such as
>Hadoop 1 and MapReduce, will no longer be supported in the master.  Any
>critical bug fixes (security, incorrect results, crashes) fixed in
>master will also be ported to branch-1 for at least a year.  This time
>period may be extended in the future based on the stability and adoption
>of 2.x releases.
>
>Based on Hive's bylaws this release plan vote will be open for 3 days
>and all active committers have binding votes.
>
>Here's my +1.
>
>Alan.



  

Re: [DISCUSS] Supporting Hadoop-1 and experimental features

2015-05-22 Thread Chris Drome
I understand the motivation and benefits of creating a branch-2 where more 
disruptive work can go on without affecting branch-1. While not necessarily 
against this approach, from Yahoo's standpoint, I do have some questions 
(concerns).

Upgrading to a new version of Hive requires a significant commitment of time 
and resources to stabilize and certify a build for deployment to our clusters. 
Given the size of our clusters and scale of datasets, we have to be 
particularly careful about adopting new functionality. At the same time, we are 
interested in testing and making new features and functionality available. That 
said, we would have to rely on branch-1 for the immediate future.

One concern is that branch-1 would be left to stagnate, at which point there 
would be no option but for users to move to branch-2, as branch-1 would be 
effectively end-of-lifed. I'm not sure how long this would take, but it would 
eventually happen as a direct result of the very reason for creating branch-2.

A related concern is how disruptive the code changes will be in branch-2. I 
imagine that changes early in branch-2 will be easy to backport to branch-1, 
while this effort will become more difficult, if not impractical, as time goes 
on. If the code bases diverge too much, this could lead to more pressure for 
users of branch-1 to add features just to branch-1, which has been mentioned as 
undesirable. By the same token, backporting any code from branch-2 will require 
an increasing amount of effort, which contributors to branch-2 may not be 
interested in committing to.

These questions affect us directly because, while we require a certain amount 
of stability, we also like to pull in new functionality that will be of value 
to our users. For example, our current 0.13 release is probably closer to 0.14 
at this point. Given the lifespan of a release, it is often more palatable to 
backport features and bugfixes than to jump to a new version.

The good thing about this proposal is the opportunity to evaluate and clean up 
a lot of the old code.
Thanks,
chris
 


 On Monday, May 18, 2015 11:48 AM, Sergey Shelukhin 
 wrote:
   

 Note: by “cannot” I mean “are unwilling to”; upgrade paths exist, but some
people are set in their ways or have practical considerations and don’t
care for new shiny stuff.

On 15/5/18, 11:46, "Sergey Shelukhin"  wrote:

>I think we need some path for deprecating old Hadoop versions, the same
>way we deprecate old Java version support or old RDBMS version support.
>At some point the cost of supporting Hadoop 1 exceeds the benefit. Same
>goes for stuff like MR; supporting it, esp. for perf work, becomes a
>burden, and it’s outdated with 2 alternatives, one of which has been
>around for 2 releases.
>The branches are a graceful way to get rid of the legacy burden.
>
>Alternatively, when sweeping changes are made, we can do what Hbase did
>(which is not pretty imho), where 0.94 version had ~30 dot releases
>because people cannot upgrade to 0.96 “singularity” release.
>
>
>I posit that people who run Hadoop 1 and MR at this day and age (and more
>so as time passes) are people who either don’t care about perf and new
>features, only stability; so, stability-focused branch would be perfect to
>support them.
>
>
>On 15/5/18, 10:04, "Edward Capriolo"  wrote:
>
>>Up until recently Hive supported numerous versions of Hadoop code base
>>with
>>a simple shim layer. I would rather we stick to the shim layer. I think
>>this was easily the best part about hive was that a single release worked
>>well regardless of your hadoop version. It was also a key element to
>>hive's
>>success. I do not want to see us have multiple branches.
>>
>>On Sat, May 16, 2015 at 1:29 AM, Xuefu Zhang  wrote:
>>
>>> Thanks for the explanation, Alan!
>>>
>>> While I have understood more on the proposal, I actually see more
>>>problems
>>> than the confusion of two lines of releases. Essentially, this proposal
>>> forces a user to make a hard choice between a stabler, legacy-aware
>>>release
>>> line and an adventurous, pioneering release line. And once the choice
>>>is
>>> made, there is no easy way back or forward.
>>>
>>> Here is my interpretation. Let's say we have two main branches as
>>> proposed. I develop a new feature which I think useful for both
>>>branches.
>>> So, I commit it to both branches. My feature requires additional schema
>>> support, so I provide upgrade scripts for both branches. The scripts
>>>are
>>> different because the two branches have already diverged in schema.
>>>
>>> Now the two branches evolve in a diverging fashion like this. This is
>>>all
>>> good as long as a user stays in his line. The moment the user considers
>>>a
>>> switch, most likely from branch-1 to branch-2, he is stuck. Why?
>>>Because
>>> there is no upgrade path from a release in branch-1 to a release in
>>> branch-2!
>>>
>>> If we want to provide an upgrade path, then there will be MxN paths,
>>>where
>>> M and N ar

Re: [ANNOUNCE] New Hive Committer - Mithun Radhakrishnan

2015-04-14 Thread Chris Drome
Congratulations Mithun!
 


 On Tuesday, April 14, 2015 2:57 PM, Carl Steinbach  wrote:
   

 The Apache Hive PMC has voted to make Mithun Radhakrishnan a committer on the 
Apache Hive Project. 
Please join me in congratulating Mithun.
Thanks.
- Carl


  

Re: [ANNOUNCE] New Hive PMC Member - Alan Gates

2014-10-27 Thread Chris Drome
Congratulations Alan!

From: Carl Steinbach <c...@apache.org>
Reply-To: "u...@hive.apache.org" <u...@hive.apache.org>
Date: Monday, October 27, 2014 at 3:38 PM
To: "dev@hive.apache.org" <dev@hive.apache.org>, "u...@hive.apache.org" 
<u...@hive.apache.org>, "ga...@apache.org" <ga...@apache.org>
Subject: [ANNOUNCE] New Hive PMC Member - Alan Gates

I am pleased to announce that Alan Gates has been elected to the Hive Project 
Management Committee. Please join me in congratulating Alan!

Thanks.

- Carl


[jira] [Updated] (HIVE-8414) HiveConnection.openSession should throw exception if session handle is null

2014-10-09 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-8414:
--
Attachment: HIVE-8414-1.patch

Attached a preliminary fix. Will add a unit test to exercise it.

> HiveConnection.openSession should throw exception if session handle is null
> ---
>
> Key: HIVE-8414
> URL: https://issues.apache.org/jira/browse/HIVE-8414
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.14.0, 0.15.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Attachments: HIVE-8414-1.patch
>
>
> HiveConnection should verify the session handle in openSession and throw a 
> SQLException if null.
> Ran into this problem by trying to send configs that change restricted 
> params. Without a valid session handle, all subsequent operations with the 
> connection will fail.





[jira] [Created] (HIVE-8414) HiveConnection.openSession should throw exception if session handle is null

2014-10-09 Thread Chris Drome (JIRA)
Chris Drome created HIVE-8414:
-

 Summary: HiveConnection.openSession should throw exception if 
session handle is null
 Key: HIVE-8414
 URL: https://issues.apache.org/jira/browse/HIVE-8414
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.14.0, 0.15.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor


HiveConnection should verify the session handle in openSession and throw a 
SQLException if null.

Ran into this problem by trying to send configs that change restricted params. 
Without a valid session handle, all subsequent operations with the connection 
will fail.
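The proposed guard clause amounts to failing fast at session open. A hedged Python sketch of the idea (names and the response shape are hypothetical — the real fix lives in Hive's JDBC `HiveConnection` in Java):

```python
class SQLException(Exception):
    pass

def open_session(open_session_resp):
    """Sketch of the proposed guard: raise SQLException when the server
    returns no session handle (e.g. after rejecting restricted config
    overrides), rather than handing back a broken connection."""
    handle = open_session_resp.get("sessionHandle")
    if handle is None:
        raise SQLException("Could not establish session: session handle is null")
    return handle

assert open_session({"sessionHandle": "sess-1"}) == "sess-1"
try:
    open_session({"sessionHandle": None})
except SQLException as e:
    print(e)
```

Surfacing the error at connect time is strictly better than letting every later operation fail with a confusing message.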





[jira] [Commented] (HIVE-7195) Improve Metastore performance

2014-06-12 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14030047#comment-14030047
 ] 

Chris Drome commented on HIVE-7195:
---

We ([~mithun], [~thiruvel], [~selinazh]) have done some work in this area for 
hive-0.12.

Some of the improvements include:

1) Disabling the datanucleus cache to reduce the memory usage in the metastore.
2) Actively close datanucleus query-related resources to allow the memory to 
be reclaimed.
3) Optimizations to answer metadata-only queries directly from the metastore 
without launching MR jobs.
4) Optimizations to direct SQL statements.
5) Schema changes to speed up DROP TABLE statements.
6) Added client and server side parameters to restrict the maximum number of 
partitions that can be retrieved.

We are currently looking into:

1) Reducing the client time required to retrieve HDFS file information.
2) Using light-weight partition objects where possible to reduce the time and 
memory on client/server.

If I've forgotten anything, Mithun, Thiruvel, or Selina can add more information.
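The cache-related items above map to metastore configuration roughly as follows (a sketch only: datanucleus.cache.level2.type is a real DataNucleus property, but the partition-limit property name shown is illustrative, since those limits were internal patches at the time):

```xml
<!-- hive-site.xml fragment; the partition-limit property name is hypothetical -->
<property>
  <name>datanucleus.cache.level2.type</name>
  <value>none</value> <!-- item 1: disable the DataNucleus L2 cache -->
</property>
<property>
  <name>hive.metastore.limit.partition.request</name> <!-- illustrative name -->
  <value>10000</value> <!-- item 6: cap partitions returned per request -->
</property>
```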

> Improve Metastore performance
> -
>
> Key: HIVE-7195
> URL: https://issues.apache.org/jira/browse/HIVE-7195
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Priority: Critical
>
> Even with direct SQL, which significantly improves MS performance, some 
> operations take a considerable amount of time, when there are many partitions 
> on table. Specifically I believe the issue:
> * When a client gets all partitions we do not send them an iterator, we 
> create a collection of all data and then pass the object over the network in 
> total
> * Operations which require looking up data on the NN can still be slow since 
> there is no cache of information and it's done in a serial fashion
> * Perhaps a tangent, but our client timeout is quite dumb. The client will 
> timeout and the server has no idea the client is gone. We should use 
> deadlines, i.e. pass the timeout to the server so it can calculate that the 
> client has expired.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-7189) Hive does not store column names in ORC

2014-06-06 Thread Chris Drome (JIRA)
Chris Drome created HIVE-7189:
-

 Summary: Hive does not store column names in ORC
 Key: HIVE-7189
 URL: https://issues.apache.org/jira/browse/HIVE-7189
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.13.0, 0.12.0
Reporter: Chris Drome


We uncovered the following discrepancy between writing ORC files through Pig 
and Hive:

The ORC file header contains the names of the columns. When storing through Pig 
(ORCStorage or HCatStorer), the column names are stored fine. But when stored 
through Hive they are stored as _col0, _col1, ..., _col99, and Hive uses the 
partition schema to map the column names. Reading the same file through Pig 
then has problems, as the user will have to manually map the columns.
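The mismatch amounts to a positional remapping: Hive writes synthetic names and relies on the table schema for the real ones. A self-contained illustration of that remapping (plain Java, not ORC API code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Main {
    // Rename the synthetic _colN names carried by Hive-written ORC files
    // back to the real names from the table schema, matched by position.
    static List<String> remap(List<String> fileNames, List<String> schemaNames) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < fileNames.size(); i++) {
            out.add(i < schemaNames.size() ? schemaNames.get(i) : fileNames.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> fromFile = Arrays.asList("_col0", "_col1");
        List<String> fromSchema = Arrays.asList("id", "name");
        assert remap(fromFile, fromSchema).equals(Arrays.asList("id", "name"));
        System.out.println("ok");
    }
}
```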



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-05-04 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989148#comment-13989148
 ] 

Chris Drome commented on HIVE-6765:
---

[~adrian-wang] this does not appear to be an issue with hive-0.13 as they are 
using Kryo for XML serialization.

For clarification, I found that several settings can impact whether this problem 
arises or not. hive.auto.convert.join.noconditionaltask.size is used to 
determine the big table candidate set. Then hive.mapjoin.smalltable.filesize is 
used as a cutoff to determine whether a map-side join should be performed.

I found that when hive.auto.convert.join is true, it will try to perform a 
map-side join first. Based on hive.auto.convert.join.noconditionaltask.size it 
will return a set of big table candidates to the physical optimizer. The 
physical optimizer will use hive.mapjoin.smalltable.filesize to determine 
whether the map-side join should proceed. If not, the clonePlan method is 
called, which manifests the problem. I don't think it is solely influenced by 
the size of the tables involved in the join. In my tests, shrinking the size of 
the table allows the map-side join to proceed, while increasing the size of the 
table causes this failure.
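The two-threshold flow described above can be sketched as follows (a simplified model, not Hive's actual optimizer code; the real decision lives in the physical optimizer and uses richer size estimates):

```java
public class Main {
    // Default-like values for the two settings discussed above.
    static final long NOCONDITIONALTASK_SIZE = 10_000_000L; // hive.auto.convert.join.noconditionaltask.size
    static final long SMALLTABLE_FILESIZE = 25_000_000L;    // hive.mapjoin.smalltable.filesize

    enum Plan { COMMON_JOIN, MAP_JOIN, CLONE_PLAN }

    static Plan choose(long candidateSetBytes, long smallTableFileBytes) {
        // Gate 1: small-side estimate must fit under noconditionaltask.size
        // for the other side to become a big-table candidate at all.
        if (candidateSetBytes > NOCONDITIONALTASK_SIZE) {
            return Plan.COMMON_JOIN;
        }
        // Gate 2: the physical optimizer checks smalltable.filesize; failing
        // it triggers clonePlan, which is where the failure surfaced.
        return smallTableFileBytes <= SMALLTABLE_FILESIZE ? Plan.MAP_JOIN : Plan.CLONE_PLAN;
    }

    public static void main(String[] args) {
        assert choose(5_000_000L, 1_000_000L) == Plan.MAP_JOIN;
        assert choose(5_000_000L, 30_000_000L) == Plan.CLONE_PLAN;
        assert choose(50_000_000L, 1_000_000L) == Plan.COMMON_JOIN;
        System.out.println("ok");
    }
}
```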

> ASTNodeOrigin unserializable leads to fail when join with view
> --
>
> Key: HIVE-6765
> URL: https://issues.apache.org/jira/browse/HIVE-6765
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Adrian Wang
> Fix For: 0.13.0
>
> Attachments: HIVE-6765.patch.1
>
>
> when a view contains a UDF, and the view comes into a JOIN operation, Hive 
> will encounter a bug with stack trace like
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
>   at java.lang.Class.newInstance0(Class.java:359)
>   at java.lang.Class.newInstance(Class.java:327)
>   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:616)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4

2014-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-6987:
--

Assignee: (was: Chris Drome)

> Metastore qop settings won't work with Hadoop-2.4
> -
>
> Key: HIVE-6987
> URL: https://issues.apache.org/jira/browse/HIVE-6987
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
> Fix For: 0.14.0
>
>
>  [HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a 
> backward incompatible change due to which the following hive call returns a 
> null map:
> {code}
> Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
> getHadoopSaslProperties(conf); 
> {code}
> Metastore uses the underlying hadoop.rpc.protection values to set the qop 
> between metastore client/server. 
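A defensive pattern for the null map described above (a sketch under assumptions: the method here is a hypothetical stand-in for the real call, which goes through ShimLoader):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Hypothetical stand-in for the shim call that started returning null
    // after HADOOP-10211 changed the underlying Hadoop API.
    static Map<String, String> getHadoopSaslProperties(boolean newHadoop) {
        return newHadoop ? null : new HashMap<>();
    }

    // Defensive wrapper: never hand a null map to the Thrift transport setup.
    static Map<String, String> saslPropsOrEmpty(boolean newHadoop) {
        Map<String, String> props = getHadoopSaslProperties(newHadoop);
        return props == null ? Collections.<String, String>emptyMap() : props;
    }

    public static void main(String[] args) {
        assert saslPropsOrEmpty(true).isEmpty();  // null becomes an empty map
        assert saslPropsOrEmpty(false).isEmpty(); // real empty map passes through
        System.out.println("ok");
    }
}
```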



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4

2014-04-29 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984902#comment-13984902
 ] 

Chris Drome commented on HIVE-6987:
---

You are correct. Sorry for the confusion.

> Metastore qop settings won't work with Hadoop-2.4
> -
>
> Key: HIVE-6987
> URL: https://issues.apache.org/jira/browse/HIVE-6987
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
>Assignee: Chris Drome
> Fix For: 0.14.0
>
>
>  [HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a 
> backward incompatible change due to which the following hive call returns a 
> null map:
> {code}
> Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
> getHadoopSaslProperties(conf); 
> {code}
> Metastore uses the underlying hadoop.rpc.protection values to set the qop 
> between metastore client/server. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4

2014-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-6987:
--

Attachment: (was: HIVE-6987-1.patch.txt)

> Metastore qop settings won't work with Hadoop-2.4
> -
>
> Key: HIVE-6987
> URL: https://issues.apache.org/jira/browse/HIVE-6987
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
>Assignee: Chris Drome
> Fix For: 0.14.0
>
>
>  [HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a 
> backward incompatible change due to which the following hive call returns a 
> null map:
> {code}
> Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
> getHadoopSaslProperties(conf); 
> {code}
> Metastore uses the underlying hadoop.rpc.protection values to set the qop 
> between metastore client/server. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4

2014-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-6987:
--

Attachment: HIVE-6987-1.patch.txt

> Metastore qop settings won't work with Hadoop-2.4
> -
>
> Key: HIVE-6987
> URL: https://issues.apache.org/jira/browse/HIVE-6987
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
>Assignee: Chris Drome
> Fix For: 0.14.0
>
> Attachments: HIVE-6987-1.patch.txt
>
>
>  [HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a 
> backward incompatible change due to which the following hive call returns a 
> null map:
> {code}
> Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
> getHadoopSaslProperties(conf); 
> {code}
> Metastore uses the underlying hadoop.rpc.protection values to set the qop 
> between metastore client/server. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4

2014-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome reassigned HIVE-6987:
-

Assignee: Chris Drome

> Metastore qop settings won't work with Hadoop-2.4
> -
>
> Key: HIVE-6987
> URL: https://issues.apache.org/jira/browse/HIVE-6987
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
>Assignee: Chris Drome
> Fix For: 0.14.0
>
>
>  [HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a 
> backward incompatible change due to which the following hive call returns a 
> null map:
> {code}
> Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
> getHadoopSaslProperties(conf); 
> {code}
> Metastore uses the underlying hadoop.rpc.protection values to set the qop 
> between metastore client/server. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4

2014-04-29 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13984728#comment-13984728
 ] 

Chris Drome commented on HIVE-6987:
---

I have a preliminary patch for this issue. Will upload shortly.

> Metastore qop settings won't work with Hadoop-2.4
> -
>
> Key: HIVE-6987
> URL: https://issues.apache.org/jira/browse/HIVE-6987
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.14.0
>Reporter: Vaibhav Gumashta
> Fix For: 0.14.0
>
>
>  [HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a 
> backward incompatible change due to which the following hive call returns a 
> null map:
> {code}
> Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
> getHadoopSaslProperties(conf); 
> {code}
> Metastore uses the underlying hadoop.rpc.protection values to set the qop 
> between metastore client/server. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6900) HostUtil.getTaskLogUrl signature change causes compilation to fail

2014-04-14 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13969265#comment-13969265
 ] 

Chris Drome commented on HIVE-6900:
---

[~navis] I spoke with some Hadoop core engineers and they commented that Hive 
should not use HostUtil as it is marked as private (and unstable).

Also, TaskLogServlet is not supported in MR2, so we just need to handle the 
case where mapreduce.framework.name != yarn (and yarn-tez I assume). We might 
want to add yarn-tez to the condition.
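The framework check suggested above amounts to something like the following (a sketch; a plain Map stands in for the Hadoop Configuration object):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // Decide whether log-URL handling should follow the YARN path; treats
    // yarn-tez the same as yarn, as proposed in the comment above.
    static boolean isYarnFramework(Map<String, String> conf) {
        String fw = conf.getOrDefault("mapreduce.framework.name", "");
        return fw.equals("yarn") || fw.equals("yarn-tez");
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("mapreduce.framework.name", "yarn-tez");
        assert isYarnFramework(conf);
        conf.put("mapreduce.framework.name", "local");
        assert !isYarnFramework(conf);
        System.out.println("ok");
    }
}
```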

> HostUtil.getTaskLogUrl signature change causes compilation to fail
> --
>
> Key: HIVE-6900
> URL: https://issues.apache.org/jira/browse/HIVE-6900
> Project: Hive
>  Issue Type: Bug
>  Components: Shims
>Affects Versions: 0.13.0, 0.14.0
>Reporter: Chris Drome
> Attachments: HIVE-6900.1.patch.txt
>
>
> The signature for HostUtil.getTaskLogUrl has changed between Hadoop-2.3 and 
> Hadoop-2.4.
> Code in 
> shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java 
> works with Hadoop-2.3 method and causes compilation failure with Hadoop-2.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6447) Bucket map joins in hive-tez

2014-04-11 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967287#comment-13967287
 ] 

Chris Drome commented on HIVE-6447:
---

Thanks for the clarification. We are using a newer version of Tez, which caused 
the problem.

> Bucket map joins in hive-tez
> 
>
> Key: HIVE-6447
> URL: https://issues.apache.org/jira/browse/HIVE-6447
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0, 0.14.0
>
> Attachments: HIVE-6447.1.patch, HIVE-6447.10.patch, 
> HIVE-6447.11.patch, HIVE-6447.12.patch, HIVE-6447.13.patch, 
> HIVE-6447.2.patch, HIVE-6447.3.patch, HIVE-6447.4.patch, HIVE-6447.5.patch, 
> HIVE-6447.6.patch, HIVE-6447.7.patch, HIVE-6447.8.patch, HIVE-6447.9.patch, 
> HIVE-6447.WIP.patch
>
>
> Support bucket map joins in tez.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6900) HostUtil.getTaskLogUrl signature change causes compilation to fail

2014-04-11 Thread Chris Drome (JIRA)
Chris Drome created HIVE-6900:
-

 Summary: HostUtil.getTaskLogUrl signature change causes 
compilation to fail
 Key: HIVE-6900
 URL: https://issues.apache.org/jira/browse/HIVE-6900
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.13.0, 0.14.0
Reporter: Chris Drome


The signature for HostUtil.getTaskLogUrl has changed between Hadoop-2.3 and 
Hadoop-2.4.

Code in 
shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java works 
with Hadoop-2.3 method and causes compilation failure with Hadoop-2.4.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6447) Bucket map joins in hive-tez

2014-04-11 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13967256#comment-13967256
 ] 

Chris Drome commented on HIVE-6447:
---

A method call in 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/CustomPartitionVertex.java is 
misspelled, causing a compilation failure when building against Hadoop-2.x.

Line 681 of the patch is:

+int totalResource = context.getTotalAVailableResource().getMemory();

but should be:

+int totalResource = context.getTotalAvailableResource().getMemory();

> Bucket map joins in hive-tez
> 
>
> Key: HIVE-6447
> URL: https://issues.apache.org/jira/browse/HIVE-6447
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: tez-branch
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Fix For: 0.13.0, 0.14.0
>
> Attachments: HIVE-6447.1.patch, HIVE-6447.10.patch, 
> HIVE-6447.11.patch, HIVE-6447.12.patch, HIVE-6447.13.patch, 
> HIVE-6447.2.patch, HIVE-6447.3.patch, HIVE-6447.4.patch, HIVE-6447.5.patch, 
> HIVE-6447.6.patch, HIVE-6447.7.patch, HIVE-6447.8.patch, HIVE-6447.9.patch, 
> HIVE-6447.WIP.patch
>
>
> Support bucket map joins in tez.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6029) Add default authorization on database/table creation

2013-12-17 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13850977#comment-13850977
 ] 

Chris Drome commented on HIVE-6029:
---

[~brocknoland] the initial patch was only intended for informational purposes 
as requested by [~thejas]. There is much more clean-up to be done, so please do 
not consider this yet. I will try to look at your rebased patch in the next 
couple of days. Thanks for reviewing.

> Add default authorization on database/table creation
> 
>
> Key: HIVE-6029
> URL: https://issues.apache.org/jira/browse/HIVE-6029
> Project: Hive
>  Issue Type: Improvement
>  Components: Authorization, Metastore
>Affects Versions: 0.10.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Attachments: HIVE-6029-1.patch.txt, HIVE-6029.2.patch
>
>
> Default authorization privileges are not set when a database/table is 
> created. This allows a user to create a database/table and not be able to 
> access it through Sentry.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HIVE-6029) Add default authorization on database/table creation

2013-12-13 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-6029:
--

Attachment: HIVE-6029-1.patch.txt

Initial patch, which presents the idea. Needs more work and clean-up. Generated 
against Hive-0.10.

> Add default authorization on database/table creation
> 
>
> Key: HIVE-6029
> URL: https://issues.apache.org/jira/browse/HIVE-6029
> Project: Hive
>  Issue Type: Improvement
>  Components: Authorization, Metastore
>Affects Versions: 0.10.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Attachments: HIVE-6029-1.patch.txt
>
>
> Default authorization privileges are not set when a database/table is 
> created. This allows a user to create a database/table and not be able to 
> access it through Sentry.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (HIVE-6029) Add default authorization on database/table creation

2013-12-13 Thread Chris Drome (JIRA)
Chris Drome created HIVE-6029:
-

 Summary: Add default authorization on database/table creation
 Key: HIVE-6029
 URL: https://issues.apache.org/jira/browse/HIVE-6029
 Project: Hive
  Issue Type: Improvement
  Components: Authorization, Metastore
Affects Versions: 0.10.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor


Default authorization privileges are not set when a database/table is created. 
This allows a user to create a database/table and not be able to access it 
through Sentry.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


Re: [ANNOUNCE] New Hive PMC Members - Thejas Nair and Brock Noland

2013-10-28 Thread Chris Drome
Congratulations Brock and Thejas!

On 10/24/13 3:10 PM, "Carl Steinbach"  wrote:

>I am pleased to announce that Thejas Nair and Brock Noland have been
>elected to the Hive Project Management Committee. Please join me in
>congratulating Thejas and Brock!
>
>Thanks.
>
>Carl



[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-24 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13804738#comment-13804738
 ] 

Chris Drome commented on HIVE-4974:
---

Thanks [~brocknoland] and [~vaibhavgumashta].

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974.2-trunk.patch.txt, 
> HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk-2.patch.txt, 
> HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-22 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Attachment: HIVE-4974-trunk-2.patch.txt

Changes to address phabricator comments.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk-1.patch.txt, 
> HIVE-4974-trunk-2.patch.txt, HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-22 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Attachment: HIVE-4974-trunk-1.patch.txt

Fixed a minor issue and added a unit test.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk-1.patch.txt, HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-22 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802099#comment-13802099
 ] 

Chris Drome commented on HIVE-4974:
---

Not a problem. I will add some tests and should have an updated patch later 
today or this evening.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-22 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13802070#comment-13802070
 ] 

Chris Drome commented on HIVE-4974:
---

[~brocknoland], I didn't add explicit tests to TestJdbcDriver2 as there is no 
new functionality per se. I did test against TestJdbcDriver2 to ensure that 
nothing broke though.

Please advise as to whether specific tests should be added.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Affects Version/s: (was: 0.10.0)

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Fix Version/s: 0.13.0
   Status: Patch Available  (was: Open)

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0, 0.11.0, 0.10.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801272#comment-13801272
 ] 

Chris Drome commented on HIVE-4974:
---

Created phabricator ticket: https://reviews.facebook.net/D13611

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.13.0
>
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-10-21 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Attachment: HIVE-4974-trunk.patch.txt

Uploaded trunk patch.

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
> Attachments: HIVE-4974-trunk.patch.txt
>
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: [ANNOUNCE] New Hive Committer - Thejas Nair

2013-08-20 Thread Chris Drome
Congratulations Thejas!

chris

On 8/20/13 3:31 AM, "Carl Steinbach"  wrote:

>The Apache Hive PMC has voted to make Thejas Nair a committer on the
>Apache
>Hive project.
>
>Please join me in congratulating Thejas!



[jira] [Commented] (HIVE-3688) Various tests failing in TestNegativeCliDriver, TestParseNegative, TestParse when using JDK7

2013-08-12 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737469#comment-13737469
 ] 

Chris Drome commented on HIVE-3688:
---

[~brocknoland] that would be great. I'll remove the TestParse parts of this 
patch and resubmit for the TestNegativeCliDriver cases only. Thanks.

> Various tests failing in TestNegativeCliDriver, TestParseNegative, TestParse 
> when using JDK7
> 
>
> Key: HIVE-3688
> URL: https://issues.apache.org/jira/browse/HIVE-3688
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor, Tests
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-3688-0.9.patch, HIVE-3688-trunk.patch
>
>
> The following tests are failing when using JDK7.
> TestNegativeCliDriver:
> case_sensitivity.q
> cast1.q
> groupby1.q
> groupby2.q
> groupby3.q
> groupby4.q
> groupby5.q
> groupby6.q
> input1.q
> input2.q
> input20.q
> input3.q
> input4.q
> input5.q
> input6.q
> input7.q
> input8.q
> input9.q
> input_part1.q
> input_testsequencefile.q
> input_testxpath.q
> input_testxpath2.q
> join1.q
> join2.q
> join3.q
> join4.q
> join5.q
> join6.q
> join7.q
> join8.q
> sample1.q
> sample2.q
> sample3.q
> sample4.q
> sample5.q
> sample6.q
> sample7.q
> subq.q
> udf1.q
> udf4.q
> udf6.q
> udf_case.q
> udf_when.q
> union.q
> TestParseNegative:
> invalid_function_param2.q
> TestNegativeCliDriver:
> fs_default_name1.q.out_0.23_1.7
> fs_default_name2.q.out_0.23_1.7
> invalid_cast_from_binary_1.q.out_0.23_1.7
> invalid_cast_from_binary_2.q.out_0.23_1.7
> invalid_cast_from_binary_3.q.out_0.23_1.7
> invalid_cast_from_binary_4.q.out_0.23_1.7
> invalid_cast_from_binary_5.q.out_0.23_1.7
> invalid_cast_from_binary_6.q.out_0.23_1.7
> wrong_column_type.q.out_0.23_1.7

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3630) udf_substr.q fails when using JDK7

2013-08-12 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737467#comment-13737467
 ] 

Chris Drome commented on HIVE-3630:
---

Sorry for jumping into the discussion late. Feel free to close this if it is no 
longer reproducible ([~ashutoshc] thought that would be the case after 
HIVE-3840).

> udf_substr.q fails when using JDK7
> --
>
> Key: HIVE-3630
> URL: https://issues.apache.org/jira/browse/HIVE-3630
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.9.1, 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-3630-0.10.patch, HIVE-3630-0.9.patch, 
> HIVE-3630-trunk.patch
>
>
> Internal error: Cannot find ConstantObjectInspector for BINARY
> This exception has two causes.
> JDK7 iterators do not return values in the same order as JDK6, which causes 
> a different implementation of this UDF to be selected when the first 
> argument is null. With JDK7 this happens to be the binary version.
> The binary version is not implemented properly, which ultimately causes the 
> exception when the method is called.
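
The selection issue described above can be illustrated with a standalone sketch (not Hive code; the map contents are hypothetical stand-ins for UDF variants). It shows why taking "the first" entry of a HashMap is JDK-dependent, while an insertion-ordered map is stable:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrder {
    // Returns the first key seen when iterating the given map. With a plain
    // HashMap this depends on the JDK's hashing strategy (which changed
    // between JDK6 and JDK7); with a LinkedHashMap it is the insertion order.
    static String firstKey(Map<String, String> map) {
        return map.keySet().iterator().next();
    }

    public static void main(String[] args) {
        Map<String, String> variants = new LinkedHashMap<String, String>();
        for (String v : new String[] {"string", "binary", "int"}) {
            variants.put(v, v);  // register hypothetical UDF variants in order
        }
        // Insertion order makes variant selection deterministic across JDKs.
        System.out.println(firstKey(variants)); // prints "string"
    }
}
```

Code that registers method variants in a plain HashMap and then picks the first match can silently resolve to a different variant on a different JDK, which matches the behavior reported in this issue.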



[jira] [Created] (HIVE-4977) HS2: support an alternate serialization protocol between client and server

2013-08-01 Thread Chris Drome (JIRA)
Chris Drome created HIVE-4977:
-

 Summary: HS2: support an alternate serialization protocol between 
client and server
 Key: HIVE-4977
 URL: https://issues.apache.org/jira/browse/HIVE-4977
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0, 0.10.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome


The current serialization protocol between client and server, as defined in 
cli_service.thrift, results in 2x (or more) throughput degradation compared to 
HS1.

Initial proposal is to introduce HS1 serialization protocol as a negotiable 
alternative.



[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-07-31 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Description: 
The getConnection method of HiveStatement and HivePreparedStatement throw a not 
supported SQLException. The constructors should take the HiveConnection that 
creates them as an argument.

Similarly, HiveBaseResultSet is not capable of returning the Statement that 
created it.

  was:The getConnection method of HiveStatement and HivePreparedStatement throw 
a not supported SQLException. The constructors should take the HiveConnection 
that creates them as an argument.


> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.
> Similarly, HiveBaseResultSet is not capable of returning the Statement that 
> created it.



[jira] [Updated] (HIVE-4974) JDBC2 statements and result sets are not able to return their parents

2013-07-31 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4974:
--

Summary: JDBC2 statements and result sets are not able to return their 
parents  (was: JDBC2 statements are not able to return the connection that 
created them)

> JDBC2 statements and result sets are not able to return their parents
> -
>
> Key: HIVE-4974
> URL: https://issues.apache.org/jira/browse/HIVE-4974
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0, 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>Priority: Minor
>
> The getConnection method of HiveStatement and HivePreparedStatement throw a 
> not supported SQLException. The constructors should take the HiveConnection 
> that creates them as an argument.



[jira] [Created] (HIVE-4974) JDBC2 statements are not able to return the connection that created them

2013-07-31 Thread Chris Drome (JIRA)
Chris Drome created HIVE-4974:
-

 Summary: JDBC2 statements are not able to return the connection 
that created them
 Key: HIVE-4974
 URL: https://issues.apache.org/jira/browse/HIVE-4974
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0, 0.10.0, 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome
Priority: Minor


The getConnection method of HiveStatement and HivePreparedStatement throw a not 
supported SQLException. The constructors should take the HiveConnection that 
creates them as an argument.



[jira] [Commented] (HIVE-4574) XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck

2013-07-30 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724939#comment-13724939
 ] 

Chris Drome commented on HIVE-4574:
---

Thanks for your thoughts. We are concerned with 0.10 at this point, but your 
point is taken. Looking forward to HIVE-1511!

> XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck
> --
>
> Key: HIVE-4574
> URL: https://issues.apache.org/jira/browse/HIVE-4574
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-4574.1.patch
>
>
> In OpenJDK 7, an XMLEncoder.writeObject call leads to calls to 
> java.beans.MethodFinder.findMethod(). The MethodFinder class is not thread 
> safe because it uses a static WeakHashMap that would get used from multiple 
> threads. See -
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/com/sun/beans/finder/MethodFinder.java#46
> Concurrent access to HashMap implementations that are not thread safe can 
> sometimes result in infinite loops and other problems. If JDK 7 is in use, it 
> makes sense to synchronize calls to XMLEncoder.writeObject.
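
A minimal sketch of the synchronization workaround described in the issue; the helper class, method name, and lock object are illustrative, not Hive's actual code:

```java
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class SafeXmlSerializer {
    // Single shared lock: on JDK7, XMLEncoder.writeObject reaches
    // MethodFinder.findMethod(), which uses a static WeakHashMap that is not
    // thread safe, so concurrent writers must be serialized.
    private static final Object ENCODER_LOCK = new Object();

    static byte[] serialize(Object obj) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        synchronized (ENCODER_LOCK) {  // one XMLEncoder writer at a time
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Serializing a simple object is now safe from any thread.
        System.out.println(serialize("plan").length > 0); // prints "true"
    }
}
```

A process-wide lock trades some throughput for safety; that is acceptable here because the unsynchronized alternative can hang the server in an infinite loop inside the shared WeakHashMap.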



[jira] [Commented] (HIVE-4574) XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck

2013-07-30 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724840#comment-13724840
 ] 

Chris Drome commented on HIVE-4574:
---

[~thejas], sorry I was unclear in my previous comment. I was concerned about 
other methods in the Utilities.java class that similarly call writeObject 
on XMLEncoder. It doesn't look like we need to worry about 
Utilities.serializeTasks or Utilities.serializeQueryPlan. However, it looks 
like Utilities.serializeMapRedWork and Utilities.serializeMapRedLocalWork may 
be affected.

Additionally, I'm not sure if we have to be concerned about deserialization, 
which calls readObject.

> XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck
> --
>
> Key: HIVE-4574
> URL: https://issues.apache.org/jira/browse/HIVE-4574
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-4574.1.patch
>
>
> In OpenJDK 7, an XMLEncoder.writeObject call leads to calls to 
> java.beans.MethodFinder.findMethod(). The MethodFinder class is not thread 
> safe because it uses a static WeakHashMap that would get used from multiple 
> threads. See -
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/com/sun/beans/finder/MethodFinder.java#46
> Concurrent access to HashMap implementations that are not thread safe can 
> sometimes result in infinite loops and other problems. If JDK 7 is in use, it 
> makes sense to synchronize calls to XMLEncoder.writeObject.



[jira] [Commented] (HIVE-4574) XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck

2013-07-30 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724679#comment-13724679
 ] 

Chris Drome commented on HIVE-4574:
---

Specifically, it appears that serializeMapRedWork and serializeMapRedLocalWork 
may also be called by Hive.

> XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck
> --
>
> Key: HIVE-4574
> URL: https://issues.apache.org/jira/browse/HIVE-4574
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-4574.1.patch
>
>
> In OpenJDK 7, an XMLEncoder.writeObject call leads to calls to 
> java.beans.MethodFinder.findMethod(). The MethodFinder class is not thread 
> safe because it uses a static WeakHashMap that would get used from multiple 
> threads. See -
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/com/sun/beans/finder/MethodFinder.java#46
> Concurrent access to HashMap implementations that are not thread safe can 
> sometimes result in infinite loops and other problems. If JDK 7 is in use, it 
> makes sense to synchronize calls to XMLEncoder.writeObject.



[jira] [Commented] (HIVE-4574) XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck

2013-07-30 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724653#comment-13724653
 ] 

Chris Drome commented on HIVE-4574:
---

[~thejas], I was wondering why the other methods that use XMLEncoder are not 
synchronized as well. Is there something specific about serializeExpression 
that makes it different?

> XMLEncoder thread safety issues in openjdk7 causes HiveServer2 to be stuck
> --
>
> Key: HIVE-4574
> URL: https://issues.apache.org/jira/browse/HIVE-4574
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-4574.1.patch
>
>
> In OpenJDK 7, an XMLEncoder.writeObject call leads to calls to 
> java.beans.MethodFinder.findMethod(). The MethodFinder class is not thread 
> safe because it uses a static WeakHashMap that would get used from multiple 
> threads. See -
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/com/sun/beans/finder/MethodFinder.java#46
> Concurrent access to HashMap implementations that are not thread safe can 
> sometimes result in infinite loops and other problems. If JDK 7 is in use, it 
> makes sense to synchronize calls to XMLEncoder.writeObject.



[jira] [Commented] (HIVE-4911) Enable QOP configuration for Hive Server 2 thrift transport

2013-07-22 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13716145#comment-13716145
 ] 

Chris Drome commented on HIVE-4911:
---

[~brocknoland], I marked this patch as superseding HIVE-4225. HIVE-4225 only 
addresses the fact that HS2 was ignoring the hadoop.rpc.protection setting. The 
major limitation of HIVE-4225 is that it applies the QOP setting to both 
external and internal connections.

HIVE-4911 improves upon this by allowing separate configuration of external and 
internal connections. An example of where this is important is when the HS2 
client connection must be encrypted, but the connection between HS2 and JT/NN 
does not require encryption.

> Enable QOP configuration for Hive Server 2 thrift transport
> ---
>
> Key: HIVE-4911
> URL: https://issues.apache.org/jira/browse/HIVE-4911
> Project: Hive
>  Issue Type: New Feature
>Reporter: Arup Malakar
>Assignee: Arup Malakar
> Attachments: HIVE-4911-trunk-0.patch
>
>
> The QoP for hive server 2 should be configurable to enable encryption. A new 
> configuration should be exposed "hive.server2.thrift.rpc.protection". This 
> would give greater control configuring hive server 2 service.



[jira] [Commented] (HIVE-4496) JDBC2 won't compile with JDK7

2013-06-24 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13692364#comment-13692364
 ] 

Chris Drome commented on HIVE-4496:
---

[~navis]: Please try the latest patch. Thanks for your help on this patch.

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.12.0
>
> Attachments: HIVE-4496-1.patch, HIVE-4496-2.patch, HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Updated] (HIVE-4496) JDBC2 won't compile with JDK7

2013-06-24 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4496:
--

Attachment: HIVE-4496-2.patch

Restored the SQLException signature to include the original Exception in 
openSession.

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.12.0
>
> Attachments: HIVE-4496-1.patch, HIVE-4496-2.patch, HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Updated] (HIVE-4496) JDBC2 won't compile with JDK7

2013-06-24 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4496:
--

Attachment: HIVE-4496-1.patch

Rebased trunk patch.

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.12.0
>
> Attachments: HIVE-4496-1.patch, HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Commented] (HIVE-4496) JDBC2 won't compile with JDK7

2013-05-23 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13665824#comment-13665824
 ] 

Chris Drome commented on HIVE-4496:
---

[~xuefuz] I don't think the problem is with the patch.

RevisionManagerFactory.java:81 generates the javadoc warning.
This file is not part of this patch; it was imported as part of HIVE-4264.

BTW, I just finished patching trunk and running ant tar, which completed 
successfully.

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.12.0
>
> Attachments: HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Commented] (HIVE-4508) Fix various release issues in 0.11.0rc1

2013-05-06 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649934#comment-13649934
 ] 

Chris Drome commented on HIVE-4508:
---

HIVE-4496 fixes the build issue with JDBC2 and JDK1.7.

> Fix various release issues in 0.11.0rc1
> ---
>
> Key: HIVE-4508
> URL: https://issues.apache.org/jira/browse/HIVE-4508
> Project: Hive
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.11.0
>
>
> Carl described some non-code issues in the 0.11.0rc1 and I want to fix them.



[jira] [Updated] (HIVE-4496) JDBC2 won't compile with JDK7

2013-05-03 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4496:
--

Fix Version/s: 0.12.0
   Status: Patch Available  (was: Open)

Ported the HIVE-3384 patch to the HS2 JDBC code.

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.12.0
>
> Attachments: HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Commented] (HIVE-4496) JDBC2 won't compile with JDK7

2013-05-03 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648953#comment-13648953
 ] 

Chris Drome commented on HIVE-4496:
---

Phabricator ticket: https://reviews.facebook.net/D10647

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Updated] (HIVE-4496) JDBC2 won't compile with JDK7

2013-05-03 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4496:
--

Attachment: HIVE-4496.patch

Attached trunk patch.

> JDBC2 won't compile with JDK7
> -
>
> Key: HIVE-4496
> URL: https://issues.apache.org/jira/browse/HIVE-4496
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.12.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4496.patch
>
>
> HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Commented] (HIVE-3384) HIVE JDBC module won't compile under JDK1.7 as new methods added in JDBC specification

2013-05-03 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648913#comment-13648913
 ] 

Chris Drome commented on HIVE-3384:
---

The error is not related to this patch. Rather, it is associated with new code 
added in 0.11.

Please refer to HIVE-4496.

> HIVE JDBC module won't compile under JDK1.7 as new methods added in JDBC 
> specification
> --
>
> Key: HIVE-3384
> URL: https://issues.apache.org/jira/browse/HIVE-3384
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.10.0
>Reporter: Weidong Bian
>Assignee: Chris Drome
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: D6873-0.9.1.patch, D6873.1.patch, D6873.2.patch, 
> D6873.3.patch, D6873.4.patch, D6873.5.patch, D6873.6.patch, D6873.7.patch, 
> HIVE-3384-0.10.patch, HIVE-3384-2012-12-02.patch, HIVE-3384-2012-12-04.patch, 
> HIVE-3384.2.patch, HIVE-3384-branch-0.9.patch, HIVE-3384.patch, 
> HIVE-JDK7-JDBC.patch
>
>
> The JDBC module couldn't be compiled with JDK 7, as JDK 7 adds some abstract 
> methods to the JDBC specification. 
> some error info:
>  error: HiveCallableStatement is not abstract and does not override abstract
> method getObject(String,Class) in CallableStatement
> .
> .
> .



[jira] [Created] (HIVE-4496) JDBC2 won't compile with JDK7

2013-05-03 Thread Chris Drome (JIRA)
Chris Drome created HIVE-4496:
-

 Summary: JDBC2 won't compile with JDK7
 Key: HIVE-4496
 URL: https://issues.apache.org/jira/browse/HIVE-4496
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.12.0
Reporter: Chris Drome
Assignee: Chris Drome


HiveServer2 related JDBC does not compile with JDK7. Related to HIVE-3384.



[jira] [Commented] (HIVE-4467) HiveConnection does not handle failures correctly

2013-05-01 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13647046#comment-13647046
 ] 

Chris Drome commented on HIVE-4467:
---

My bad. I misunderstood the context of your comment.

> HiveConnection does not handle failures correctly
> -
>
> Key: HIVE-4467
> URL: https://issues.apache.org/jira/browse/HIVE-4467
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4467.patch
>
>
> HiveConnection uses Utils.verifySuccess* routines to check if there is any 
> error from the server side. This is not handled well. In 
> Utils.verifySuccess() when withInfo is 'false', the condition evaluates to 
> 'false' and no SQLException is thrown even though there could be a problem on 
> the server.



[jira] [Commented] (HIVE-4467) HiveConnection does not handle failures correctly

2013-05-01 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13647025#comment-13647025
 ] 

Chris Drome commented on HIVE-4467:
---

[~cwsteinbach], the issue is with the condition in the verifySuccess method:

if ((status.getStatusCode() != TStatusCode.SUCCESS_STATUS) &&
    (withInfo && (status.getStatusCode() != TStatusCode.SUCCESS_WITH_INFO_STATUS)))

When withInfo is false, this condition always evaluates to false. We have seen 
cases where the status code is an error, but because withInfo is false no 
exception is thrown.

Thiruvel's patch addresses this case.
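
The boolean logic can be checked in isolation. The sketch below uses a stand-in enum rather than Hive's actual TStatusCode, and the "fixed" variant is one plausible correction, not necessarily the exact patch:

```java
public class VerifySuccessSketch {
    // Stand-in for the Thrift status codes referenced in the discussion.
    enum StatusCode { SUCCESS_STATUS, SUCCESS_WITH_INFO_STATUS, ERROR_STATUS }

    // Condition as quoted above: when withInfo is false, the right-hand
    // conjunct is always false, so the whole expression is false and server
    // errors are silently swallowed.
    static boolean buggyIsError(StatusCode code, boolean withInfo) {
        return (code != StatusCode.SUCCESS_STATUS)
                && (withInfo && (code != StatusCode.SUCCESS_WITH_INFO_STATUS));
    }

    // A plausible fix: any non-success code is an error, except
    // SUCCESS_WITH_INFO when the caller tolerates info statuses.
    static boolean fixedIsError(StatusCode code, boolean withInfo) {
        if (code == StatusCode.SUCCESS_STATUS) {
            return false;
        }
        return !(withInfo && code == StatusCode.SUCCESS_WITH_INFO_STATUS);
    }

    public static void main(String[] args) {
        // The failure case from the comment: an error status with withInfo=false.
        System.out.println(buggyIsError(StatusCode.ERROR_STATUS, false)); // false - error swallowed
        System.out.println(fixedIsError(StatusCode.ERROR_STATUS, false)); // true - error surfaced
    }
}
```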

> HiveConnection does not handle failures correctly
> -
>
> Key: HIVE-4467
> URL: https://issues.apache.org/jira/browse/HIVE-4467
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4467.patch
>
>
> HiveConnection uses Utils.verifySuccess* routines to check if there is any 
> error from the server side. This is not handled well. In 
> Utils.verifySuccess() when withInfo is 'false', the condition evaluates to 
> 'false' and no SQLException is thrown even though there could be a problem on 
> the server.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-30 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-4-trunk.patch
HIVE-4232-4-0.11.patch

Fixed classpath problems with 20S build.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, 
> HIVE-4232-3-0.11.patch, HIVE-4232-3-trunk.patch, HIVE-4232-4-0.11.patch, 
> HIVE-4232-4-trunk.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.
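
The case-sensitivity pitfall can be shown with a small comparison sketch; the method names are illustrative and this is not the HiveConnection source:

```java
public class AuthParamSketch {
    // Case-sensitive match, as described in the issue: only the exact string
    // "noSasl" selects the raw transport; "nosasl" or "NOSASL" silently falls
    // back to the SASL default.
    static boolean rawTransportStrict(String auth) {
        return "noSasl".equals(auth);
    }

    // A lenient alternative that accepts any casing of the parameter.
    static boolean rawTransportLenient(String auth) {
        return "noSasl".equalsIgnoreCase(auth);
    }

    public static void main(String[] args) {
        System.out.println(rawTransportStrict("nosasl"));  // false - falls back to SASL
        System.out.println(rawTransportLenient("nosasl")); // true
    }
}
```

The silent fallback is what makes the default odd: a mistyped auth value produces a working-looking connection over a different transport rather than an error.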



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-3-trunk.patch
HIVE-4232-3-0.11.patch

Uploaded renamed files.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, 
> HIVE-4232-3-0.11.patch, HIVE-4232-3-trunk.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: (was: HIVE-4232-trunk-3.patch)

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: (was: HIVE-4232-0.11-3.patch)

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-0.11-3.patch

Missed a file.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: (was: HIVE-4232-0.11-3.patch)

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-trunk-3.patch
HIVE-4232-0.11-3.patch

New patch fixes test failure.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-0.11-3.patch, HIVE-4232-1.patch, 
> HIVE-4232-2.patch, HIVE-4232.patch, HIVE-4232-trunk-3.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-29 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Affects Version/s: 0.12.0
Fix Version/s: 0.12.0

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0, 0.12.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0, 0.12.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-26 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-2.patch

WIP patch incorporating the comments. NOSASL/NONE transport layer test failing 
with latest trunk code.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Commented] (HIVE-4236) JDBC2 HivePreparedStatement does not release resources

2013-04-17 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13634059#comment-13634059
 ] 

Chris Drome commented on HIVE-4236:
---

[~navis]: I will start addressing your comments, etc when I'm back in the 
office next week.

> JDBC2 HivePreparedStatement does not release resources
> --
>
> Key: HIVE-4236
> URL: https://issues.apache.org/jira/browse/HIVE-4236
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4236.patch
>
>
> HivePreparedStatement does not close the associated server-side operation 
> when close() is called. Nor does it call close() on the ResultSet. When 
> execute() is called the current ResultSet is not closed first it is just set 
> to null.
> Similarly, HiveStatement's close() does not call close() on the ResultSet, it 
> just sets it to null.



[jira] [Commented] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-04 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13622931#comment-13622931
 ] 

Chris Drome commented on HIVE-4232:
---

I was thinking about this a little more and would like to summarize the current 
state:

hive-site.xml -> transport -> JDBC connection string

1. hive.server2.authentication=NOSASL -> raw transport -> 
jdbc:hive2://host:port/dbname;auth=noSasl
2. hive.server2.authentication=NONE -> plain SASL transport -> 
jdbc:hive2://host:port/dbname   (*DEFAULT*)
3. hive.server2.authentication=KERBEROS -> Kerberos SASL transport -> 
jdbc:hive2://host:port/dbname;principal=

So we need to set auth=noSasl to disable SASL instead of 
auth=plainsasl|kerberos to enable SASL.

Now if the server is set to Kerberos, the client still has to know this: if the 
principal parameter is excluded, the driver will happily initialize a plain SASL 
transport, because the code infers SASL/Kerberos from the presence of that 
value. Granted, in this case the connection throws an exception.

If it were a requirement to include, say, auth=plainsasl|kerberos in the 
connection string, the code wouldn't have to guess the intent and could report 
an error if auth=kerberos and principal is not present.

Furthermore, from what I can see, the plain SASL transport is really just a 
no-op, as it does nothing when authentication=NONE.

In the end, it comes down to the fact that the client still needs to know the 
authentication method of the server.
That is why I asked whether we had data about the average use case. Do we know 
how often people are using raw vs plain vs kerberos?

It seems that the real issue is that a SASL transport doesn't play nicely with 
a raw transport. If it were possible to remove the need to support the raw 
transport, that would resolve this discussion. Even the raw transport must 
speak Thrift, so that isn't an issue; rather, it is a question of whether 
clients should be forced to implement a SASL transport.
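To make the mapping above concrete, the following is a small illustrative helper (not part of the Hive JDBC driver) that assembles each of the three connection-string shapes; the host:port, database, and principal values are placeholders.

```java
// Illustrative helper only, NOT part of the Hive JDBC driver. It builds
// the three connection-string shapes listed above so the mapping from
// server authentication mode to client URL is concrete.
public class HiveJdbcUrls {
    public static String buildUrl(String hostPort, String db,
                                  String auth, String principal) {
        StringBuilder url = new StringBuilder("jdbc:hive2://")
                .append(hostPort).append("/").append(db);
        if ("noSasl".equals(auth)) {
            // Case 1: NOSASL server -> raw transport (note: case sensitive)
            url.append(";auth=noSasl");
        } else if (principal != null) {
            // Case 3: KERBEROS server -> Kerberos SASL transport
            url.append(";principal=").append(principal);
        }
        // Case 2 (the default, NONE): plain SASL transport, no extra params
        return url.toString();
    }

    public static void main(String[] args) {
        // prints jdbc:hive2://host:10000/default;auth=noSasl
        System.out.println(buildUrl("host:10000", "default", "noSasl", null));
    }
}
```

Note how the client must already know the server's mode to pick the right URL, which is the point being argued above.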

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Commented] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-04 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13621923#comment-13621923
 ] 

Chris Drome commented on HIVE-4232:
---

[~prasadm][~cwsteinbach]: I think Prasad's point is fair. It explains why the 
defaults were set the way they were and should probably be documented somewhere 
in the code.

We have a follow-up patch which adds functionality to set the QOP of the 
transport, so that data encryption can be enabled on the client-server 
transport. This will require some changes to the JDBC connection string and I 
wanted to get buy-in to modify the existing format of the connection string 
before posting that patch.

I still feel the naming conventions are a little misleading and the JDBC 
connection string tries to infer the state. However, if everyone agrees that 
NOSASL should generally not be used, then I won't push that issue, although it 
doesn't help in the case where a client library adds auth=nosasl, resulting in 
the connection hanging.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-04 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: (was: HIVE-4232-2.patch)

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-04 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-2.patch

Updated patch with your suggestions. Added some test cases.

The test cases passed on Hadoop-0.23 and Hadoop-2.0.
They failed on Hadoop-1.0 because of classpath issues that pull in conflicting 
versions of slf4j at runtime. I didn't have time to fix this.

I think Thiruvel will be able to handle any blockers while I'm away.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232-2.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Commented] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-03 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620756#comment-13620756
 ] 

Chris Drome commented on HIVE-4232:
---

Working on some test cases, but need to resolve a classpath issue before I can 
include them.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-03 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Attachment: HIVE-4232-1.patch

Changes associated with previous comment.

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232-1.patch, HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Commented] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-03 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620702#comment-13620702
 ] 

Chris Drome commented on HIVE-4232:
---

In the process of writing some unit tests for the connection string changes I 
noticed that the connection type for HiveServer2 defaults to NONE, which uses 
the Plain SASL Transport Factory. So in this sense the original implementation 
of HiveConnection matches for the default case.

hive.server2.authentication can take one of the following values:
NOSASL
NONE (default)
LDAP
KERBEROS
CUSTOM

I think NONE is misleading and should be renamed PLAIN (or something similar), 
and NOSASL should be labelled NONE. Also, the default usage of the Plain SASL 
Transport Factory seems odd. The default behavior of HiveConnection should 
match that of HiveServer2.

I'd like to ask your opinion of what the default behavior should be. Should it 
default to the Plain SASL transport or no authentication at all?
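For reference, the server-side mode discussed above is selected in hive-site.xml. A minimal sketch (the property name is real; the value is whichever of the five listed modes is wanted):

```xml
<!-- hive-site.xml: chooses the HiveServer2 authentication mode discussed
     above. NONE (the default) still wraps the socket in a plain SASL
     transport; only NOSASL yields a raw transport. -->
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value> <!-- one of: NOSASL | NONE | LDAP | KERBEROS | CUSTOM -->
</property>
```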



> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Commented] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-02 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620118#comment-13620118
 ] 

Chris Drome commented on HIVE-4232:
---

Hi Ashutosh,

I will be out of the office from Thursday, so I was wondering if I could
get your feedback on the HIVE-4232 patch as soon as possible. That way I
would be able to make changes or address concerns before I leave.

I've created the Phabricator ticket.

https://reviews.facebook.net/D9879

Thanks,

chris





> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Commented] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-04-02 Thread Chris Drome (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619624#comment-13619624
 ] 

Chris Drome commented on HIVE-4232:
---

Phabricator link: https://reviews.facebook.net/D9879

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport auth must be set to noSasl; furthermore noSasl is case 
> sensitive. Code tries to infer Kerberos or plain authentication based on the 
> presence of principal. There is no provision for specifying QOP level.



[jira] [Created] (HIVE-4256) JDBC2 HiveConnection does not use the specified database

2013-03-29 Thread Chris Drome (JIRA)
Chris Drome created HIVE-4256:
-

 Summary: JDBC2 HiveConnection does not use the specified database
 Key: HIVE-4256
 URL: https://issues.apache.org/jira/browse/HIVE-4256
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.11.0
Reporter: Chris Drome
Assignee: Chris Drome


HiveConnection ignores the database specified in the connection string when 
configuring the connection.



[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Fix Version/s: 0.10.0
   0.11.0

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.10.0, 0.11.0
>
> Attachments: HIVE-4166-0.10.patch, HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.

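The failure mode in HIVE-4166 can be sketched as follows; this is pseudocode condensed from the description above, not the actual Hive shim source, and the method names are approximations:

```
boolean process(inProt, outProt) {
    clientUgi = getAuthenticatedClientUgi();   // Kerberos-authenticated user
    try {
        // run the Thrift call as the client user (doAs enabled)
        return clientUgi.doAs(() -> wrapped.process(inProt, outProt));
    } finally {
        // unconditionally closes every cached FileSystem for this user,
        // including the DFSClient a still-open query fetch depends on
        FileSystem.closeAllForUGI(clientUgi);
    }
}
```

A subsequent fetch request for the same ResultSet then finds its FileSystem already closed and fails, which is why the problem only appears once the result size forces more than one fetch call.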


[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Status: Patch Available  (was: Open)

Fixes the problem in HiveServer2 that causes certain operations to fail, by 
closing the FileSystem when the connection is closed.

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4166-0.10.patch, HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.



[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Component/s: HiveServer2

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4166-0.10.patch, HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.



[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Attachment: HIVE-4166-0.10.patch

Updated patch.

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4166-0.10.patch, HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.



[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Attachment: HIVE-4166-trunk.patch

Updated trunk patch.

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.



[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Attachment: (was: HIVE-4166-0.10.patch)

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.



[jira] [Updated] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-03-28 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4166:
--

Attachment: (was: HIVE-4166-trunk.patch)

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.

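The failure mode described in HIVE-4166 above can be sketched as a small, self-contained model. This is illustrative only, not Hive's actual code: the cache and client classes here stand in for Hadoop's FileSystem cache and DFSClient, and `closeAllForUser` plays the role of `FileSystem.closeAllForUGI` being called unconditionally at the end of each RPC.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the HIVE-4166 failure mode; names are
// illustrative stand-ins, not Hive's actual implementation.
class FileSystemCache {
    private final Map<String, List<FakeClient>> byUser = new HashMap<>();

    FakeClient get(String user) {
        FakeClient c = new FakeClient();
        byUser.computeIfAbsent(user, u -> new ArrayList<>()).add(c);
        return c;
    }

    // Analogous to FileSystem.closeAllForUGI: closes every client
    // cached for the given user, even ones still in use.
    void closeAllForUser(String user) {
        for (FakeClient c : byUser.getOrDefault(user, List.of())) c.close();
        byUser.remove(user);
    }
}

class FakeClient {
    private boolean closed;
    void close() { closed = true; }
    String read() {
        if (closed) throw new IllegalStateException("client closed");
        return "row";
    }
}

public class CloseAllForUgiDemo {
    public static void main(String[] args) {
        FileSystemCache cache = new FileSystemCache();

        // RPC 1: the query starts and a long-lived reader is handed out.
        FakeClient reader = cache.get("alice");
        System.out.println(reader.read());   // first fetch succeeds

        // End of RPC 1: the processor unconditionally tears down the
        // user's cached clients (the bug described above).
        cache.closeAllForUser("alice");

        // RPC 2: the next fetch on the same result set now fails,
        // matching the behavior seen with large SELECT results.
        try {
            reader.read();
        } catch (IllegalStateException e) {
            System.out.println("fetch failed: " + e.getMessage());
        }
    }
}
```

The point of the sketch is that the teardown is keyed on the user, not on the request, so any handle that must outlive a single RPC (such as an open result set spanning multiple fetch calls) is invalidated.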


[jira] [Updated] (HIVE-4236) JDBC2 HivePreparedStatement does not release resources

2013-03-27 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4236:
--

Fix Version/s: 0.11.0
   Status: Patch Available  (was: Open)

> JDBC2 HivePreparedStatement does not release resources
> --
>
> Key: HIVE-4236
> URL: https://issues.apache.org/jira/browse/HIVE-4236
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4236.patch
>
>
> HivePreparedStatement does not close the associated server-side operation 
> when close() is called. Nor does it call close() on the ResultSet. When 
> execute() is called, the current ResultSet is not closed first; it is just
> set to null.
> Similarly, HiveStatement's close() does not call close() on the ResultSet, it 
> just sets it to null.



[jira] [Updated] (HIVE-4236) JDBC2 HivePreparedStatement does not release resources

2013-03-27 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4236:
--

Attachment: HIVE-4236.patch

Uploading a patch that closes the ResultSet and frees server-side resources.

> JDBC2 HivePreparedStatement does not release resources
> --
>
> Key: HIVE-4236
> URL: https://issues.apache.org/jira/browse/HIVE-4236
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-4236.patch
>
>
> HivePreparedStatement does not close the associated server-side operation 
> when close() is called. Nor does it call close() on the ResultSet. When 
> execute() is called, the current ResultSet is not closed first; it is just
> set to null.
> Similarly, HiveStatement's close() does not call close() on the ResultSet, it 
> just sets it to null.



[jira] [Created] (HIVE-4236) JDBC2 HivePreparedStatement does not release resources

2013-03-27 Thread Chris Drome (JIRA)
Chris Drome created HIVE-4236:
-

 Summary: JDBC2 HivePreparedStatement does not release resources
 Key: HIVE-4236
 URL: https://issues.apache.org/jira/browse/HIVE-4236
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.11.0
Reporter: Chris Drome


HivePreparedStatement does not close the associated server-side operation when 
close() is called. Nor does it call close() on the ResultSet. When execute() is 
called, the current ResultSet is not closed first; it is just set to null.

Similarly, HiveStatement's close() does not call close() on the ResultSet, it 
just sets it to null.

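The release pattern HIVE-4236 asks for can be sketched in a few lines. The classes below are stand-ins, not Hive's actual JDBC implementation: the idea is simply that close() must close the ResultSet and the server-side operation, and execute() must close the previous pair before creating a new one, instead of just nulling the references.

```java
// Illustrative sketch of the resource-release pattern described above;
// ServerOperation/SketchStatement are hypothetical stand-in classes.
class ServerOperation {
    boolean open = true;
    void close() { open = false; }      // frees the server-side handle
}

class SketchResultSet {
    boolean open = true;
    final ServerOperation op;
    SketchResultSet(ServerOperation op) { this.op = op; }
    void close() { open = false; }
}

class SketchStatement {
    ServerOperation op;
    SketchResultSet resultSet;

    SketchResultSet execute() {
        // Close the previous result set and operation instead of just
        // dropping the references (the bug: they were merely nulled out).
        if (resultSet != null) resultSet.close();
        if (op != null) op.close();
        op = new ServerOperation();
        resultSet = new SketchResultSet(op);
        return resultSet;
    }

    void close() {
        if (resultSet != null) { resultSet.close(); resultSet = null; }
        if (op != null) { op.close(); op = null; }  // release server side too
    }
}

public class StatementCloseDemo {
    public static void main(String[] args) {
        SketchStatement stmt = new SketchStatement();
        SketchResultSet first = stmt.execute();
        stmt.execute();                  // closes the first pair, no leak
        System.out.println("first result set closed: " + !first.open);
        System.out.println("first operation closed: " + !first.op.open);
        stmt.close();                    // releases the current pair as well
        System.out.println("statement fully released: " + (stmt.op == null));
    }
}
```

Without the explicit close() calls, each execute() would strand a server-side operation until the session ends, which is exactly the leak the report describes.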


[jira] [Updated] (HIVE-4236) JDBC2 HivePreparedStatement does not release resources

2013-03-27 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4236:
--

Assignee: Chris Drome

> JDBC2 HivePreparedStatement does not release resources
> --
>
> Key: HIVE-4236
> URL: https://issues.apache.org/jira/browse/HIVE-4236
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
>
> HivePreparedStatement does not close the associated server-side operation 
> when close() is called. Nor does it call close() on the ResultSet. When 
> execute() is called, the current ResultSet is not closed first; it is just
> set to null.
> Similarly, HiveStatement's close() does not call close() on the ResultSet, it 
> just sets it to null.



[jira] [Updated] (HIVE-4232) JDBC2 HiveConnection has odd defaults

2013-03-26 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-4232:
--

Fix Version/s: 0.11.0
   Status: Patch Available  (was: Open)

> JDBC2 HiveConnection has odd defaults
> -
>
> Key: HIVE-4232
> URL: https://issues.apache.org/jira/browse/HIVE-4232
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 0.11.0
>    Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.11.0
>
> Attachments: HIVE-4232.patch
>
>
> HiveConnection defaults to using a plain SASL transport if auth is not set. 
> To get a raw transport, auth must be set to noSasl; furthermore, noSasl is
> case-sensitive. The code tries to infer Kerberos or plain authentication
> based on the presence of a principal. There is no provision for specifying a
> QOP level.

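The case-sensitivity pitfall in HIVE-4232 can be shown with a minimal selection routine. This is a sketch under stated assumptions, not Hive's actual connection code: the parameter names (auth, noSasl, principal) come from the report above, but the pick() method and Transport enum are hypothetical.

```java
import java.util.Map;

public class AuthModeDemo {
    enum Transport { RAW, PLAIN_SASL, KERBEROS_SASL }

    // Illustrative selection logic: match "noSasl" case-insensitively,
    // fall back to Kerberos when a principal is present, otherwise
    // default to a plain SASL transport.
    static Transport pick(Map<String, String> params) {
        String auth = params.get("auth");
        if (auth != null && auth.equalsIgnoreCase("noSasl")) {
            return Transport.RAW;
        }
        if (params.containsKey("principal")) {
            return Transport.KERBEROS_SASL;
        }
        return Transport.PLAIN_SASL;
    }

    public static void main(String[] args) {
        // "nosasl" in any casing still selects the raw transport here,
        // unlike the case-sensitive comparison the report complains about.
        System.out.println(pick(Map.of("auth", "nosasl")));
        System.out.println(pick(Map.of("principal", "hive/_HOST@EXAMPLE.COM")));
        System.out.println(pick(Map.of()));
    }
}
```

A case-insensitive comparison plus an explicit default makes the inference rules visible in one place, rather than scattering them across transport-construction code.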

