[jira] [Created] (HIVE-21490) Remove unused duplicate code added in HIVE-20506

2019-03-22 Thread Brock Noland (JIRA)
Brock Noland created HIVE-21490:
---

 Summary: Remove unused duplicate code added in HIVE-20506
 Key: HIVE-21490
 URL: https://issues.apache.org/jira/browse/HIVE-21490
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland


HIVE-20506 added a small amount of unused duplicate code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-20506) HOS times out when cluster is full while Hive-on-MR waits

2018-09-05 Thread Brock Noland (JIRA)
Brock Noland created HIVE-20506:
---

 Summary: HOS times out when cluster is full while Hive-on-MR waits
 Key: HIVE-20506
 URL: https://issues.apache.org/jira/browse/HIVE-20506
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland


My understanding is as follows:

When the cluster is full, Hive-on-MR will wait for resources to become available 
before submitting a job. This is because the hadoop jar command is the primary 
mechanism Hive uses to know whether a job is complete.


Hive-on-Spark will time out after {{SPARK_RPC_CLIENT_CONNECT_TIMEOUT}} because 
the RPC client in the AppMaster doesn't connect back to the RPC Server in HS2. 

This is a behavior difference that it would be great to close.
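For reference, the timeout in question is configurable in HiveConf. A hedged sketch (the property key shown here is my reading of the key behind {{SPARK_RPC_CLIENT_CONNECT_TIMEOUT}}; verify the exact name and default against your Hive version):

```properties
# Raising the Hive-on-Spark RPC connect timeout so a queued AppMaster has more
# time to call back to HS2 on a busy cluster. Key and value are assumptions.
hive.spark.client.connect.timeout=30000ms
```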



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-14679) csv2/tsv2 output format disables quoting by default and it's extremely difficult to enable

2016-08-31 Thread Brock Noland (JIRA)
Brock Noland created HIVE-14679:
---

 Summary: csv2/tsv2 output format disables quoting by default and 
it's extremely difficult to enable
 Key: HIVE-14679
 URL: https://issues.apache.org/jira/browse/HIVE-14679
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland


Over in HIVE-9788 we made quoting optional for csv2/tsv2.

However I see the following issues:

* The JIRA doc doesn't mention that quoting is disabled by default; this should 
be noted there and in the output of beeline help.
* The JIRA says the property is {{--disableQuotingForSV}}, but it's actually a 
system property. We should not use a system property: it's non-standard and 
therefore extremely hard for users to set. For example, I must do: {{env 
HADOOP_CLIENT_OPTS="-Ddisable.quoting.for.sv=false" beeline ...}}
* The arg {{--disableQuotingForSV}} should be documented in beeline help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Bylaws change to allow some commits without review

2016-05-13 Thread Brock Noland
+1

On Mon, Apr 18, 2016 at 1:02 PM, Thejas Nair  wrote:

> ?+1
>
> 
> From: Wei Zheng 
> Sent: Monday, April 18, 2016 10:51 AM
> To: u...@hive.apache.org
> Subject: Re: [VOTE] Bylaws change to allow some commits without review
>
> +1
>
> Thanks,
> Wei
>
> From: Siddharth Seth <ss...@apache.org>
> Reply-To: "u...@hive.apache.org" <
> u...@hive.apache.org>
> Date: Monday, April 18, 2016 at 10:29
> To: "u...@hive.apache.org" <
> u...@hive.apache.org>
> Subject: Re: [VOTE] Bylaws change to allow some commits without review
>
> +1
>
> On Wed, Apr 13, 2016 at 3:58 PM, Lars Francke  > wrote:
> Hi everyone,
>
> we had a discussion on the dev@ list about allowing some forms of
> contributions to be committed without a review.
>
> The exact sentence I propose to add is: "Minor issues (e.g. typos, code
> style issues, JavaDoc changes. At committer's discretion) can be committed
> after soliciting feedback/review on the mailing list and not receiving
> feedback within 2 days."
>
> The proposed bylaws can also be seen here <
> https://cwiki.apache.org/confluence/display/Hive/Proposed+Changes+to+Hive+Project+Bylaws+-+April+2016
> >
>
> This vote requires a 2/3 majority of all Active PMC members so I'd love to
> get as many votes as possible. The vote will run for at least six days.
>
> Thanks,
> Lars
>
>


[jira] [Created] (HIVE-11891) Add basic performance logging at trace level to metastore calls

2015-09-18 Thread Brock Noland (JIRA)
Brock Noland created HIVE-11891:
---

 Summary: Add basic performance logging at trace level to metastore 
calls
 Key: HIVE-11891
 URL: https://issues.apache.org/jira/browse/HIVE-11891
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 1.1.0, 1.2.0, 1.0.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
 Fix For: 2.0.0


At present it's extremely difficult to debug slow calls to the metastore. 
Ideally there would be some basic means of doing so, disabled by default.
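As a minimal sketch of the idea (not the actual Hive implementation; the class and method names here are made up), the wrapper would check whether trace logging is enabled, so the feature costs nothing when disabled by default, and otherwise record elapsed time around each metastore call:

```java
import java.util.function.Supplier;

public class MetastorePerfSketch {
    // Stand-ins for a real SLF4J logger; in Hive this would be
    // log.isTraceEnabled() and log.trace(...).
    static boolean traceEnabled = true;
    static String lastTraceMessage;

    // Wrap a metastore call, recording its elapsed time at trace level.
    static <T> T timed(String method, Supplier<T> call) {
        if (!traceEnabled) {
            return call.get();          // zero overhead when tracing is off
        }
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
            lastTraceMessage = method + " took " + elapsedMs + " ms";
        }
    }

    public static void main(String[] args) {
        String db = timed("get_database", () -> "default");
        System.out.println(db + " | " + lastTraceMessage);
    }
}
```

The guard clause is the important part: when trace logging is off, the call path is identical to the unwrapped call.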



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: ORC separate project

2015-04-06 Thread Brock Noland
Hey guys,

Good discussion here. One point of order, I feel like this should be a
[DISCUSS] thread. Some folks filter on that specific text as it's
quite standard in Apache to use that subject prefix for big issues
like this one.

Brock

On Fri, Apr 3, 2015 at 3:59 PM, Thejas Nair  wrote:
> On Fri, Apr 3, 2015 at 1:25 PM, Lefty Leverenz 
> wrote:
>
>> Hive users who wished to use ORC would obviously need to pull in ORC
>>> artifacts in addition to Hive.
>>>
>>
>> What would happen with Hive features that (currently) only work with ORC?
>> Would they be extended to work with other file formats and stay in Hive?
>> What about future features -- would they have to work with multiple file
>> formats from the get-go?
>>
>
>
> The storage-api module proposed above would lead to clearer storage
> interfaces in Hive. That will in turn help to implement such features using
> other storage formats, including Parquet, HBase, etc.
> This work will not automatically make those features work with other
> formats; somebody would need to do that.
>
> Whether future features would work for all formats would depend on whether
> the new feature needs new functionality to be supported by the storage
> layer. If the feature needs new storage functionality, I would expect new
> interfaces to be defined in hive, and then implemented by the storage
> engines that want to support that feature.
>
> This will not negatively impact the experience of users with respect to ORC
> or other storage formats. The way we package Parquet in Hive, we can package
> ORC as well. In fact, users would more easily be able to upgrade their
> version of ORC, as releases could happen independently of each other.


Re: Request for feedback on work intent for non-equijoin support

2015-04-01 Thread Brock Noland
Nice, it'd be great if someone finally implemented this :)

On Wed, Apr 1, 2015 at 10:10 PM, Szehon Ho  wrote:
> From Hive side, there has been some thought on the subject here:
> https://cwiki.apache.org/confluence/display/Hive/Theta+Join, it has some
> ideas but nobody has gotten around to giving it a try.  It might be of
> interest.
>
> Thanks
> Szehon
>
>
> On Wed, Apr 1, 2015 at 10:05 PM, Lefty Leverenz 
> wrote:
>
>> D'oh!  Thanks Chao.
>>
>> -- Lefty
>>
>> On Thu, Apr 2, 2015 at 12:59 AM, Chao Sun  wrote:
>>
>> > Hey Lefty,
>> >
>> > You need to use the ftp protocol, not http.
>> > After clicking the link, you'll need to remove "http://" from the
>> > address bar.
>> >
>> > Best,
>> > Chao
>> >
>> > On Wed, Apr 1, 2015 at 9:41 PM, Lefty Leverenz 
>> > wrote:
>> >
>> > > Andrés, I followed that link and got the dread 404 Not Found:
>> > >
>> > > "The requested URI /pub/torres/Hiperfuse/extended_hiperfuse.pdf was not
>> > > found on this server."
>> > >
>> > > -- Lefty
>> > >
>> > > On Wed, Apr 1, 2015 at 7:23 PM,  wrote:
>> > >
>> > > > Dear Lefty,
>> > > >
>> > > > Thank you very much for pointing that out and for your initial
>> > pointers.
>> > > > Here is the missing link:
>> > > >
>> > > > ftp.parc.com/pub/torres/Hiperfuse/extended_hiperfuse.pdf
>> > > >
>> > > > Regards,
>> > > >
>> > > > Andrés
>> > > >
>> > > > -Original Message-
>> > > > From: Lefty Leverenz [mailto:leftylever...@gmail.com]
>> > > > Sent: Wednesday, April 01, 2015 12:48 AM
>> > > > To: dev@hive.apache.org
>> > > > Subject: Re: Request for feedback on work intent for non-equijoin
>> > support
>> > > >
>> > > > Hello Andres, the link to your paper is missing:
>> > > >
>> > > > In our preliminary work, which you can find here (pointer to the
>> paper)
>> > > ...
>> > > >
>> > > >
>> > > > You can find general information about contributing to Hive in the
>> > > > wiki:  Resources
>> > > > for Contributors
>> > > > <
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/Hive/Home#Home-ResourcesforContributors
>> > > > >
>> > > > , How to Contribute
>> > > > .
>> > > >
>> > > > -- Lefty
>> > > >
>> > > > On Tue, Mar 31, 2015 at 10:42 PM,  wrote:
>> > > >
>> > > > >  Dear Hive development community members,
>> > > > >
>> > > > >
>> > > > >
>> > > > > I am interested in learning more about the current support for
>> > > > > non-equijoins in Hive and/or other Hadoop SQL engines, and in
>> getting
>> > > > > feedback about community interest in more extensive support for
>> such
>> > a
>> > > > > feature. I intend to work on this challenge, assuming people find
>> it
>> > > > > compelling, and I intend to contribute results to the community.
>> > Where
>> > > > > possible, it would be great to receive feedback and engage in
>> > > > > collaborations along the way (for a bit more context, see the
>> > > > > postscript of this message).
>> > > > >
>> > > > >
>> > > > >
>> > > > > My initial goal is to support query conditions such as the
>> following:
>> > > > >
>> > > > >
>> > > > >
>> > > > > A.x < B.y
>> > > > >
>> > > > > A.x in_range [B.y, B.z]
>> > > > >
>> > > > > distance(A.x, B.y) < D
>> > > > >
>> > > > >
>> > > > >
>> > > > > where A and B are distinct tables/files. It is my understanding
>> that
>> > > > > current support for performing non-equijoins like those above is
>> > quite
>> > > > > limited, and where some forms are supported (like in Cloudera's
>> > > > > Impala), this support is based on doing a potentially expensive
>> cross
>> > > > product join.
>> > > > > Depending on the data types involved, I believe that joins with
>> these
>> > > > > conditions can be made to be tractable (at least on the average)
>> with
>> > > > > join algorithms that exploit properties of the data types, possibly
>> > > > > with some pre-scanning of the data.
>> > > > >
>> > > > >
>> > > > >
>> > > > > I am asking for feedback on the interest & need in the community
>> for
>> > > > > this work, as well as any pointers to similar work. In particular,
>> I
>> > > > > would appreciate any answers people could give on the following
>> > > > questions:
>> > > > >
>> > > > >
>> > > > >
>> > > > > - Is my understanding of the state of the art in Hive and similar
>> > > > > tools accurate? Are there groups currently working on similar or
>> > > > > related issues, or tools that already accomplish some or all of
>> what
>> > I
>> > > > have proposed?
>> > > > >
>> > > > > - Is there significant value to the community in the support of
>> such
>> > a
>> > > > > feature? In other words, are the manual workarounds necessary
>> because
>> > > > > of the absence of non-equijoins such as these enough of a pain to
>> > > > > justify the work I propose?
>> > > > >
>> > > > > - Being aware that the potential pre-scanning adds to the cost of
>> the
>> > > > > join, and that data could still blow-up in the worst case, am I
>> > > > > missing any other important con

Re: [ANNOUNCE] Apache Hive 1.1.0 Released

2015-03-09 Thread Brock Noland
Agreed! This is an exciting release! :) nice work team!

On Monday, March 9, 2015, Xuefu Zhang  wrote:

> Great job, guys! This is a major release with significant new features
> and improvements. Thanks to everyone who contributed to make this happen.
>
> Thanks,
> Xuefu
>
>
> On Sun, Mar 8, 2015 at 10:40 PM, Brock Noland  > wrote:
>
> > The Apache Hive team is proud to announce the release of Apache
> > Hive version 1.1.0.
> >
> > The Apache Hive (TM) data warehouse software facilitates querying and
> > managing large datasets residing in distributed storage. Built on top
> > of Apache Hadoop (TM), it provides:
> >
> > * Tools to enable easy data extract/transform/load (ETL)
> >
> > * A mechanism to impose structure on a variety of data formats
> >
> > * Access to files stored either directly in Apache HDFS (TM) or in other
> >   data storage systems such as Apache HBase (TM)
> >
> > * Query execution via Apache Hadoop MapReduce and Apache Tez frameworks.
> >
> > For Hive release details and downloads, please visit:
> > https://hive.apache.org/downloads.html
> >
> > Hive 1.1.0 Release Notes are available here:
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310843&styleName=Text&version=12329363
> >
> > We would like to thank the many contributors who made this release
> > possible.
> >
> > Regards,
> >
> > The Apache Hive Team
> >
>


[jira] [Created] (HIVE-9895) Update hive people page with recent changes

2015-03-08 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9895:
--

 Summary: Update hive people page with recent changes
 Key: HIVE-9895
 URL: https://issues.apache.org/jira/browse/HIVE-9895
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[ANNOUNCE] Apache Hive 1.1.0 Released

2015-03-08 Thread Brock Noland
The Apache Hive team is proud to announce the release of Apache
Hive version 1.1.0.

The Apache Hive (TM) data warehouse software facilitates querying and
managing large datasets residing in distributed storage. Built on top
of Apache Hadoop (TM), it provides:

* Tools to enable easy data extract/transform/load (ETL)

* A mechanism to impose structure on a variety of data formats

* Access to files stored either directly in Apache HDFS (TM) or in other
  data storage systems such as Apache HBase (TM)

* Query execution via Apache Hadoop MapReduce and Apache Tez frameworks.

For Hive release details and downloads, please visit:
https://hive.apache.org/downloads.html

Hive 1.1.0 Release Notes are available here:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310843&styleName=Text&version=12329363

We would like to thank the many contributors who made this release
possible.

Regards,

The Apache Hive Team


[jira] [Created] (HIVE-9860) MapredLocalTask/SecureCmdDoAs leaks local files

2015-03-04 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9860:
--

 Summary: MapredLocalTask/SecureCmdDoAs leaks local files
 Key: HIVE-9860
 URL: https://issues.apache.org/jira/browse/HIVE-9860
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland


The class {{SecureCmdDoAs}} creates a temp file but does not clean it up.
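The usual fix pattern is to delete the file in a finally block, with deleteOnExit as a backstop. A self-contained sketch; the method name and file prefix are hypothetical, not taken from the actual SecureCmdDoAs code:

```java
import java.io.File;
import java.io.IOException;

public class TempFileCleanupSketch {
    // Create a temp file, use it, and guarantee it is removed afterwards.
    static String runWithTempFile() throws IOException {
        File tokenFile = File.createTempFile("hive-token-", ".bin");
        try {
            // ... hand tokenFile.getAbsolutePath() to the child process here ...
            return tokenFile.getAbsolutePath();
        } finally {
            if (!tokenFile.delete()) {
                tokenFile.deleteOnExit();   // fall back to JVM-exit cleanup
            }
        }
    }

    public static void main(String[] args) throws IOException {
        String path = runWithTempFile();
        System.out.println("cleaned up: " + !new File(path).exists());
    }
}
```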



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9823) Load spark-defaults.conf from classpath [Spark Branch]

2015-02-28 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9823:
--

 Summary: Load spark-defaults.conf from classpath [Spark Branch]
 Key: HIVE-9823
 URL: https://issues.apache.org/jira/browse/HIVE-9823
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: spark-branch
Reporter: Brock Noland
Assignee: Brock Noland






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9803) SparkClientImpl should not attempt impersonation in CLI mode [Spark Branch]

2015-02-26 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9803:
--

 Summary: SparkClientImpl should not attempt impersonation in CLI 
mode [Spark Branch]
 Key: HIVE-9803
 URL: https://issues.apache.org/jira/browse/HIVE-9803
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: spark-branch
Reporter: Brock Noland
Assignee: Brock Noland


My bad. In CLI mode we attempt to impersonate ourselves.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 29807: HIVE-9253: MetaStore server should support timeout for long running requests

2015-02-26 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29807/#review74365
---


This looks great to me! We can do the check in more places in a follow-on jira. 
I just had one thought about the property name and then I think let's commit!

Nice work Dong!!


common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
<https://reviews.apache.org/r/29807/#comment120931>

    I don't think it makes sense for the "long running timeout" to be either 
smaller or larger than hive.metastore.client.socket.timeout. Thus, instead of 
creating a new config, I feel we should re-use 
hive.metastore.client.socket.timeout.


- Brock Noland


On Feb. 26, 2015, 8:51 a.m., Dong Chen wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29807/
> ---
> 
> (Updated Feb. 26, 2015, 8:51 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-9253: MetaStore server should support timeout for long running requests
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 8e072f7 
>   metastore/src/java/org/apache/hadoop/hive/metastore/Deadline.java 
> PRE-CREATION 
>   metastore/src/java/org/apache/hadoop/hive/metastore/DeadlineException.java 
> PRE-CREATION 
>   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
> ab011fc 
>   metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java 
> 574141c 
>   metastore/src/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java 
> 01ad36a 
>   
> metastore/src/java/org/apache/hadoop/hive/metastore/SessionPropertiesListener.java
>  PRE-CREATION 
>   metastore/src/test/org/apache/hadoop/hive/metastore/TestDeadline.java 
> PRE-CREATION 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStoreTimeout.java
>  PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/29807/diff/
> 
> 
> Testing
> ---
> 
> UT passed
> 
> 
> Thanks,
> 
> Dong Chen
> 
>



[jira] [Created] (HIVE-9793) Remove hard coded paths from cli driver tests

2015-02-25 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9793:
--

 Summary: Remove hard coded paths from cli driver tests
 Key: HIVE-9793
 URL: https://issues.apache.org/jira/browse/HIVE-9793
 Project: Hive
  Issue Type: Improvement
  Components: Tests
Reporter: Brock Noland


At some point a change which generates a hard coded path into the test files 
snuck in. Instead, we should use the {{HIVE_ROOT}} directory, as this is better 
for ptest environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31379: HIVE-9772 Hive parquet timestamp conversion doesn't work with new Parquet

2015-02-25 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31379/#review74080
---

Ship it!


Ship It!

- Brock Noland


On Feb. 24, 2015, 8:55 p.m., Jimmy Xiang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31379/
> ---
> 
> (Updated Feb. 24, 2015, 8:55 p.m.)
> 
> 
> Review request for hive and Brock Noland.
> 
> 
> Bugs: HIVE-9772
> https://issues.apache.org/jira/browse/HIVE-9772
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Using Conf to pass timestamp conversion info around, instead of 
> readSupportMetadata, which is not supported by latest Parquet any more
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
> 377e362 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
>  47cd682 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
>  6dc85fa 
> 
> Diff: https://reviews.apache.org/r/31379/diff/
> 
> 
> Testing
> ---
> 
> Unit test, qtest
> 
> 
> Thanks,
> 
> Jimmy Xiang
> 
>



[jira] [Created] (HIVE-9788) Make double quote optional in tsv/csv/dsv output

2015-02-25 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9788:
--

 Summary: Make double quote optional in tsv/csv/dsv output
 Key: HIVE-9788
 URL: https://issues.apache.org/jira/browse/HIVE-9788
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland


Similar to HIVE-7390 some customers would like the double quotes to be 
optional. So if the data is {{"A"}} then the output from beeline should be 
{{"A"}} which is the same as the Hive CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31379: HIVE-9772 Hive parquet timestamp conversion doesn't work with new Parquet

2015-02-25 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31379/#review73956
---


Hey, can you generate a parquet file with this on and off and ensure that the 
flag is stored correctly? Other engines like Impala use this flag...


ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
<https://reviews.apache.org/r/31379/#comment120367>

this populates the metadata written by parquet, correct?


- Brock Noland


On Feb. 24, 2015, 8:55 p.m., Jimmy Xiang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31379/
> ---
> 
> (Updated Feb. 24, 2015, 8:55 p.m.)
> 
> 
> Review request for hive and Brock Noland.
> 
> 
> Bugs: HIVE-9772
> https://issues.apache.org/jira/browse/HIVE-9772
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Using Conf to pass timestamp conversion info around, instead of 
> readSupportMetadata, which is not supported by latest Parquet any more
> 
> 
> Diffs
> -
> 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
> 377e362 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java
>  47cd682 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/ParquetRecordReaderWrapper.java
>  6dc85fa 
> 
> Diff: https://reviews.apache.org/r/31379/diff/
> 
> 
> Testing
> ---
> 
> Unit test, qtest
> 
> 
> Thanks,
> 
> Jimmy Xiang
> 
>



[jira] [Created] (HIVE-9781) Utilize spark.kryo.registrator [Spark Branch]

2015-02-24 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9781:
--

 Summary: Utilize spark.kryo.registrator [Spark Branch]
 Key: HIVE-9781
 URL: https://issues.apache.org/jira/browse/HIVE-9781
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland


I noticed in several thread dumps that it appears Kryo is serializing the class 
names associated with our keys and values.

Kryo supports pre-registering classes so that you don't have to serialize the 
class name, and Spark supports this via the {{spark.kryo.registrator}} property. 
We should do this so we don't have to serialize class names.
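The wiring can be sketched as a pair of Spark properties; the registrator class name below is hypothetical (no such class exists at this point), shown only to illustrate how {{spark.kryo.registrator}} would be hooked up:

```properties
# Use Kryo and point it at a custom registrator that pre-registers the
# key/value classes, so class names are not written into the stream.
# HiveKryoRegistrator is a hypothetical class name for illustration.
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.kryo.registrator=org.apache.hive.spark.HiveKryoRegistrator
```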

{noformat}
Thread 12154: (state = BLOCKED)
 - java.lang.Object.hashCode() @bci=0 (Compiled frame; information may be 
imprecise)
 - com.esotericsoftware.kryo.util.ObjectMap.get(java.lang.Object) @bci=1, 
line=265 (Compiled frame)
 - 
com.esotericsoftware.kryo.util.DefaultClassResolver.getRegistration(java.lang.Class)
 @bci=18, line=61 (Compiled frame)
 - com.esotericsoftware.kryo.Kryo.getRegistration(java.lang.Class) @bci=20, 
line=429 (Compiled frame)
 - 
com.esotericsoftware.kryo.util.DefaultClassResolver.readName(com.esotericsoftware.kryo.io.Input)
 @bci=242, line=148 (Compiled frame)
 - 
com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(com.esotericsoftware.kryo.io.Input)
 @bci=65, line=115 (Compiled frame)
 - com.esotericsoftware.kryo.Kryo.readClass(com.esotericsoftware.kryo.io.Input) 
@bci=20, line=610 (Compiled frame)
 - 
com.esotericsoftware.kryo.Kryo.readClassAndObject(com.esotericsoftware.kryo.io.Input)
 @bci=21, line=721 (Compiled frame)
 - com.twitter.chill.Tuple2Serializer.read(com.esotericsoftware.kryo.Kryo, 
com.esotericsoftware.kryo.io.Input, java.lang.Class) @bci=6, line=41 (Compiled 
frame)
 - com.twitter.chill.Tuple2Serializer.read(com.esotericsoftware.kryo.Kryo, 
com.esotericsoftware.kryo.io.Input, java.lang.Class) @bci=4, line=33 (Compiled 
frame)
 - 
com.esotericsoftware.kryo.Kryo.readClassAndObject(com.esotericsoftware.kryo.io.Input)
 @bci=126, line=729 (Compiled frame)
 - 
org.apache.spark.serializer.KryoDeserializationStream.readObject(scala.reflect.ClassTag)
 @bci=8, line=142 (Compiled frame)
 - org.apache.spark.serializer.DeserializationStream$$anon$1.getNext() @bci=10, 
line=133 (Compiled frame)
 - org.apache.spark.util.NextIterator.hasNext() @bci=16, line=71 (Compiled 
frame)
 - org.apache.spark.util.CompletionIterator.hasNext() @bci=4, line=32 (Compiled 
frame)
 - scala.collection.Iterator$$anon$13.hasNext() @bci=4, line=371 (Compiled 
frame)
 - org.apache.spark.util.CompletionIterator.hasNext() @bci=4, line=32 (Compiled 
frame)
 - org.apache.spark.InterruptibleIterator.hasNext() @bci=22, line=39 (Compiled 
frame)
 - scala.collection.Iterator$$anon$11.hasNext() @bci=4, line=327 (Compiled 
frame)
 - 
org.apache.spark.util.collection.ExternalSorter.insertAll(scala.collection.Iterator)
 @bci=191, line=217 (Compiled frame)
 - org.apache.spark.shuffle.hash.HashShuffleReader.read() @bci=278, line=61 
(Interpreted frame)
 - org.apache.spark.rdd.ShuffledRDD.compute(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=46, line=92 (Interpreted frame)
 - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame)
 - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame)
 - org.apache.spark.rdd.MapPartitionsRDD.compute(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=24, line=35 (Interpreted frame)
 - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame)
 - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame)
 - org.apache.spark.rdd.MapPartitionsRDD.compute(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=24, line=35 (Interpreted frame)
 - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame)
 - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame)
 - org.apache.spark.rdd.UnionRDD.compute(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=22, line=87 (Interpreted frame)
 - org.apache.spark.rdd.RDD.computeOrReadCheckpoint(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=26, line=263 (Interpreted frame)
 - org.apache.spark.rdd.RDD.iterator(org.apache.spark.Partition, 
org.apache.spark.TaskContext) @bci=33, line=230 (Interpreted frame)
 - 
org.apache.spark.scheduler.ShuffleMapTask.runTask(org.apache.spark.TaskContext) 
@bci=166, line=68 (Interpreted frame)
 - 
org.apache.spark.scheduler.ShuffleMapTask.runTask(org.apache.spark.TaskContext) 
@b

Re: Review Request 30717: HIVE-8119: Implement Date in ParquetSerde

2015-02-24 Thread Brock Noland


> On Feb. 10, 2015, 7:50 p.m., Ryan Blue wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java,
> >  line 99
> > 
> >
> > For primitive types, this should be using the Types API (like the line 
> > above) because we're going to remove the constructors from the public API 
> > in favor of the bulider. This is to avoid invalid types, like an INT64 with 
> > a DATE annotation.
> > 
> > This should be:
> > ```java
> > Types.primitive(repetition, INT32).as(DATE).named(name);
> > ```
> 
> Sergio Pena wrote:
> Agree. Should we follow this new API in another JIRA so that we cover all 
> primitive types?

Yes let's do this in a follow on.


- Brock


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30717/#review71844
---


On Feb. 6, 2015, 7:51 a.m., Dong Chen wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30717/
> ---
> 
> (Updated Feb. 6, 2015, 7:51 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-8119: Implement Date in ParquetSerde
> 
> This patch maps the Date in Hive to INT32 in Parquet, based on the Parquet 
> Logical Type Definitions in 
> https://github.com/apache/incubator-parquet-format/blob/master/LogicalTypes.md
> 
> 
> Diffs
> -
> 
>   data/files/parquet_types.txt 31a10c9 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java 
> 377e362 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/HiveSchemaConverter.java
>  e5bd70c 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ArrayWritableObjectInspector.java
>  bb066af 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.java 
> 9199127 
>   
> ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java
>  1d83bf3 
>   
> ql/src/test/org/apache/hadoop/hive/ql/io/parquet/TestHiveSchemaConverter.java 
> f232c57 
>   ql/src/test/queries/clientnegative/parquet_date.q 89d3602 
>   ql/src/test/queries/clientpositive/parquet_types.q 806db24 
>   ql/src/test/results/clientnegative/parquet_date.q.out d1c38d6 
>   ql/src/test/results/clientpositive/parquet_types.q.out dc5ceb0 
> 
> Diff: https://reviews.apache.org/r/30717/diff/
> 
> 
> Testing
> ---
> 
> UT passed. 2 tests are added
> 
> 
> Thanks,
> 
> Dong Chen
> 
>



[jira] [Created] (HIVE-9774) Print yarn application id to console

2015-02-24 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9774:
--

 Summary: Print yarn application id to console
 Key: HIVE-9774
 URL: https://issues.apache.org/jira/browse/HIVE-9774
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Reporter: Brock Noland


Oozie would like to use beeline to capture the yarn application id of apps so 
that if a workflow is cancelled, the job can be cancelled too. When running 
under MR we print the job id, but under Spark we do not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-22 Thread Brock Noland
With 3 binding +1s, 1 non-binding +1, and no -1s, this vote passes.

On Fri, Feb 20, 2015 at 5:20 PM, Brock Noland  wrote:

> Both solutions are reasonable from my perspective...
>
> Brock
>
> On Fri, Feb 20, 2015 at 2:01 PM, Thejas Nair 
> wrote:
>
>> Thanks for finding the reason for this Brock!
>> I think the ATSHook contents should be shimmed (in trunk) so that it
>> is not excluded from a hadoop-1 build. (Or maybe, we should start
>> surveying if people are still using newer versions of hive on Hadoop
>> 1.x).
>>
>> I also ran a few simple queries using the RC in a single node cluster
>> and everything looked good.
>>
>>
>> On Fri, Feb 20, 2015 at 1:00 PM, Brock Noland  wrote:
>> > Hi,
>> >
>> > That is true and by design when built with the hadoop-1 profile:
>> >
>> >
>> https://github.com/apache/hive/commit/820690b9bb908f48f8403ca87d14b26c18f00c38
>> >
>> > Brock
>> >
>> > On Fri, Feb 20, 2015 at 11:08 AM, Thejas Nair 
>> wrote:
>> >> A few classes seem to be missing from the hive-exec*jar in binary
>> >> tar.gz. When I build from the source tar.gz , the hive-exec*jar has
>> >> those. ie, the source tar.gz looks fine.
>> >>
>> >> It is the ATSHook classes that are missing. Those are needed to be
>> >> able to register job progress information with Yarn timeline server.
>> >>
>> >>  diff /tmp/src.txt /tmp/bin.txt
>> >> 4768,4775d4767
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
>> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class
>> >>
>> >>
>> >> On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
>> >>> +1
>> >>>
>> >>> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin
>> with some
>> >>> DDL/DML queries.
>> >>> 2. Tested the bin with some DDL/DML queries.
>> >>> 3. Verified signature for bin and src, both asc and md5.
>> >>>
>> >>> Chao
>> >>>
>> >>> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho 
>> wrote:
>> >>>
>> >>>> +1
>> >>>>
>> >>>> 1.  Verified signature for bin and src
>> >>>> 2.  Built src with hadoop2
>> >>>> 3.  Ran few queries from beeline with src
>> >>>> 4.  Ran few queries from beeline with bin
>> >>>> 5.  Verified no SNAPSHOT deps
>> >>>>
>> >>>> Thanks
>> >>>> Szehon
>> >>>>
>> >>>> On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang 
>> wrote:
>> >>>>
>> >>>> > +1
>> >>>> >
>> >>>> > 1. downloaded the src tarball and built w/ -Phadoop-1/2
>> >>>> > 2. verified no binary (jars) in the src tarball
>> >>>> >
>> >>>> > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
>> >>>> wrote:
>> >>>> >
>> >>>> > > +1
>> >>>> > >
>> >>>> > > verified sigs, hashes, created tables, ran MR on YARN jobs
>> >>>> > >
>> >>>> > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland <
>> br...@cloudera.com>
>> >>>> > wrote:
>> >>>> > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
>> >>>> > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
>> >>>> > > >
>> >>>> > > > Maven artifacts are available here:
>> >>>> > > >
>> >>>>
>> https://repository.apache.org/content/repositories/orgapachehive-1026/
>> >>>> > > >
>> >>>> > > > Source tag for RC3 is at:
>> >>>> > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
>> >>>> > > >
>> >>>> > > > My key is located here:
>> >>>> https://people.apache.org/keys/group/hive.asc
>> >>>> > > >
>> >>>> > > > Voting will conclude in 72 hours
>> >>>> > >
>> >>>> >
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Best,
>> >>> Chao
>>
>
>


[jira] [Created] (HIVE-9749) ObjectStore schema verification logic is incorrect

2015-02-22 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9749:
--

 Summary: ObjectStore schema verification logic is incorrect
 Key: HIVE-9749
 URL: https://issues.apache.org/jira/browse/HIVE-9749
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.13.1, 0.14.0, 1.0.0, 1.1.0
Reporter: Brock Noland
Assignee: Brock Noland








[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-22 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Attachment: HIVE-9671.3-spark.patch

Makes sense

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9671.1-spark.patch, 
> HIVE-9671.2-spark.patch, HIVE-9671.3-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Commented] (HIVE-9543) MetaException(message:Metastore contains multiple versions)

2015-02-22 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332299#comment-14332299
 ] 

Brock Noland commented on HIVE-9543:


Hmm looking at this code:
https://github.com/apache/hive/blob/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L6630

The only way I could see multiple versions being inserted is by multiple 
clients executing against an HMS which did not have a version recorded.
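The failure mode described above is a classic check-then-act race: two clients both read an empty VERSION table before either writes, so both record a version. A minimal self-contained Java sketch of that interleaving (the `VersionTable` class and version string are hypothetical stand-ins, not Hive code):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Stand-in for the metastore VERSION table, which carries no uniqueness
    // constraint on the version row (hypothetical simplification).
    static class VersionTable {
        final List<String> rows = new ArrayList<>();
        String getVersion() { return rows.isEmpty() ? null : rows.get(0); }
        void insert(String v) { rows.add(v); }
    }

    // Mirrors the check-then-insert shape of the schema check: if no version
    // was read, record one. Nothing re-checks the table before the insert.
    static void checkSchema(VersionTable table, String versionSeen) {
        if (versionSeen == null) {
            table.insert("0.13.0");
        }
    }

    public static void main(String[] args) {
        VersionTable table = new VersionTable();
        // Both clients read before either writes -- both see "no version".
        String seenByClientA = table.getVersion();
        String seenByClientB = table.getVersion();
        checkSchema(table, seenByClientA);
        checkSchema(table, seenByClientB);
        // The table now holds two version rows -- the state that later
        // surfaces as "Metastore contains multiple versions".
        System.out.println(table.rows.size()); // prints 2
    }
}
```

The interleaving is shown sequentially here for determinism; with real concurrent clients the same outcome depends on timing.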

> MetaException(message:Metastore contains multiple versions)
> ---
>
> Key: HIVE-9543
> URL: https://issues.apache.org/jira/browse/HIVE-9543
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.13.1
>Reporter: Junyong Li
>
> When I ran the bin/hive command, I got the following exception:
> {noformat}
> Logging initialized using configuration in 
> jar:file:/home/hadoop/apache-hive-0.13.1-bin/lib/hive-common-0.13.1.jar!/hive-log4j.properties
> Exception in thread "main" java.lang.RuntimeException: 
> java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.RuntimeException: Unable to instantiate 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
> ... 7 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
> ... 12 more
> Caused by: MetaException(message:Metastore contains multiple versions)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6368)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6330)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6289)
> at 
> org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6277)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
> at com.sun.proxy.$Proxy9.verifySchema(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:476)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
> at 
> org.apache.hadoop.hive.metastore.HiveMe

[jira] [Updated] (HIVE-9543) MetaException(message:Metastore contains multiple versions)

2015-02-22 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9543:
---
Description: 
When I ran the bin/hive command, I got the following exception:
{noformat}
Logging initialized using configuration in 
jar:file:/home/hadoop/apache-hive-0.13.1-bin/lib/hive-common-0.13.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: 
java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.RuntimeException: Unable to instantiate 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at 
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 7 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 12 more
Caused by: MetaException(message:Metastore contains multiple versions)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:6368)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:6330)
at 
org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:6289)
at 
org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:6277)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
at com.sun.proxy.$Proxy9.verifySchema(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:476)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
... 17 more
{noformat}



And I have found two records in the metastore table VERSION. After reading the 
source code, I found the following code may be causing the problem, 
in org.apache.hadoop.hive.metastore.ObjectStore.java at line 6289:
{noformat}
String schemaVer = getMetaStoreSchemaVersion();
if (schemaVer == null) {
  // metastore has no schema version information
  if (strictValidation) {
    throw new MetaException("Version information not found in metastore. ");
  } else {
    LOG.warn("Version information not found in metastore. "
        + HiveConf.ConfVars.METASTORE_SCHEMA_VERIFICATION.toString() +
        " is not enabled so recording the schema version " +
        MetaStoreSchemaInfo.getHiveSchemaVersion());
    setMetaStoreSchemaVersion(MetaStoreSchemaInfo.getHiveSchemaVersion(),
        "Set by MetaStore");
  }
}
{noformat}



If the

[jira] [Commented] (HIVE-9620) Cannot retrieve column statistics using HMS API if column name contains uppercase characters

2015-02-22 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14332297#comment-14332297
 ] 

Brock Noland commented on HIVE-9620:


[~j...@cloudera.com] - what is the error you see with this? Is there any impala 
jira for it?

> Cannot retrieve column statistics using HMS API if column name contains 
> uppercase characters 
> -
>
> Key: HIVE-9620
> URL: https://issues.apache.org/jira/browse/HIVE-9620
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Statistics
>Affects Versions: 0.13.1
>Reporter: Juan Yu
>Assignee: Chaoyu Tang
>
> The issue only happens on avro table,
> {code}
> CREATE TABLE t2_avro (
> columnNumber1 int,
> columnNumber2 string )
> PARTITIONED BY (p1 string)
> ROW FORMAT SERDE 
>   'org.apache.hadoop.hive.serde2.avro.AvroSerDe' 
> STORED AS INPUTFORMAT 
>   'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' 
> OUTPUTFORMAT 
>   'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
> TBLPROPERTIES(
> 'avro.schema.literal'='{
> "namespace": "testing.hive.avro.serde",
> "name": "test",
> "type": "record",
> "fields": [
> { "name":"columnNumber1", "type":"int" },
> { "name":"columnNumber2", "type":"string" }
> ]}');
> {code}
> I don't have latest hive so I am not sure if this is already fixed in trunk.





[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Attachment: HIVE-9671.2-spark.patch

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9671.1-spark.patch, 
> HIVE-9671.2-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Attachment: HIVE-9671.1-spark.patch

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Commented] (HIVE-9625) Delegation tokens for HMS are not renewed

2015-02-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330449#comment-14330449
 ] 

Brock Noland commented on HIVE-9625:


I think that getting a new token on failure is going to be pretty difficult. 
The only places I can see retrying are in the metastore package in {{HMSC}} or 
{{RetryingMetastore}} but there is no way to get a new token there. 
Additionally I believe the token needs to be acquired outside of a 
{{doas(user)}} call.

Looks like a non-trivial change.

> Delegation tokens for HMS are not renewed
> -
>
> Key: HIVE-9625
> URL: https://issues.apache.org/jira/browse/HIVE-9625
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9625.1.patch
>
>
> AFAICT the delegation tokens stored in [HiveSessionImplwithUGI 
> |https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java#L45]
>  for HMS + Impersonation are never renewed.





[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Affects Version/s: spark-branch
   Status: Patch Available  (was: Open)

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Commented] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14330406#comment-14330406
 ] 

Brock Noland commented on HIVE-9671:


It doesn't appear we can test this automatically since minimr doesn't support 
kerberos.

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Issue Comment Deleted] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Comment: was deleted

(was: 

{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12699751/HIVE-9671.1-spark.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7510 tests executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/738/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/738/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12699751 - PreCommit-HIVE-SPARK-Build)

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-21 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Attachment: HIVE-9671.1-spark.patch

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329897#comment-14329897
 ] 

Brock Noland commented on HIVE-9726:


Thanks everyone for your help!

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-20 Thread Brock Noland
Both solutions are reasonable from my perspective...

Brock

On Fri, Feb 20, 2015 at 2:01 PM, Thejas Nair  wrote:

> Thanks for finding the reason for this Brock!
> I think the ATSHook contents should be shimmed (in trunk) so that it
> is not excluded from a hadoop-1 build. (Or maybe, we should start
> surveying if people are still using newer versions of hive on Hadoop
> 1.x).
>
> I also ran a few simple queries using the RC in a single node cluster
> and everything looked good.
>
>
> On Fri, Feb 20, 2015 at 1:00 PM, Brock Noland  wrote:
> > Hi,
> >
> > That is true and by design when built with the hadoop-1 profile:
> >
> >
> https://github.com/apache/hive/commit/820690b9bb908f48f8403ca87d14b26c18f00c38
> >
> > Brock
> >
> > On Fri, Feb 20, 2015 at 11:08 AM, Thejas Nair 
> wrote:
> >> A few classes seem to be missing from the hive-exec*jar in binary
> >> tar.gz. When I build from the source tar.gz , the hive-exec*jar has
> >> those. ie, the source tar.gz looks fine.
> >>
> >> It is the ATSHook classes that are missing. Those are needed to be
> >> able to register job progress information with Yarn timeline server.
> >>
> >>  diff /tmp/src.txt /tmp/bin.txt
> >> 4768,4775d4767
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
> >> < org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class
> >>
> >>
> >> On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
> >>> +1
> >>>
> >>> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin with
> some
> >>> DDL/DML queries.
> >>> 2. Tested the bin with some DDL/DML queries.
> >>> 3. Verified signature for bin and src, both asc and md5.
> >>>
> >>> Chao
> >>>
> >>> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho 
> wrote:
> >>>
> >>>> +1
> >>>>
> >>>> 1.  Verified signature for bin and src
> >>>> 2.  Built src with hadoop2
> >>>> 3.  Ran few queries from beeline with src
> >>>> 4.  Ran few queries from beeline with bin
> >>>> 5.  Verified no SNAPSHOT deps
> >>>>
> >>>> Thanks
> >>>> Szehon
> >>>>
> >>>> On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang 
> wrote:
> >>>>
> >>>> > +1
> >>>> >
> >>>> > 1. downloaded the src tarball and built w/ -Phadoop-1/2
> >>>> > 2. verified no binary (jars) in the src tarball
> >>>> >
> >>>> > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
> >>>> wrote:
> >>>> >
> >>>> > > +1
> >>>> > >
> >>>> > > verified sigs, hashes, created tables, ran MR on YARN jobs
> >>>> > >
> >>>> > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland  >
> >>>> > wrote:
> >>>> > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
> >>>> > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
> >>>> > > >
> >>>> > > > Maven artifacts are available here:
> >>>> > > >
> >>>>
> https://repository.apache.org/content/repositories/orgapachehive-1026/
> >>>> > > >
> >>>> > > > Source tag for RC3 is at:
> >>>> > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
> >>>> > > >
> >>>> > > > My key is located here:
> >>>> https://people.apache.org/keys/group/hive.asc
> >>>> > > >
> >>>> > > > Voting will conclude in 72 hours
> >>>> > >
> >>>> >
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Best,
> >>> Chao
>


[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
   Resolution: Fixed
Fix Version/s: spark-branch
   Status: Resolved  (was: Patch Available)

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Fix For: spark-branch
>
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329857#comment-14329857
 ] 

Brock Noland commented on HIVE-3454:


+1 LGTM

Best we can do in this situation.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.4.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.
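The unit mismatch behind this report is easy to reproduce outside Hive: `unix_timestamp()` returns seconds since the epoch, while the integral-to-timestamp cast historically read the value as milliseconds, so a seconds value collapses to mid-January 1970. A short `java.time` illustration (the constant is an arbitrary example value, not taken from the report):

```java
import java.time.Instant;

public class Main {
    public static void main(String[] args) {
        long fromUnixTimestamp = 1350000000L; // seconds, as unix_timestamp() returns

        // Intended reading: the value is seconds since the epoch.
        Instant asSeconds = Instant.ofEpochSecond(fromUnixTimestamp);

        // Reading the same value as milliseconds divides it by 1000,
        // landing about 15.6 days after the epoch.
        Instant asMillis = Instant.ofEpochMilli(fromUnixTimestamp);

        System.out.println(asSeconds); // 2012-10-12T00:00:00Z
        System.out.println(asMillis);  // 1970-01-16T15:00:00Z
    }
}
```

The same arithmetic explains the reported 1970-01-16 result for any epoch-seconds value in the 1.3-1.4 billion range.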





[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
Attachment: HIVE-9726.1-spark.patch

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329750#comment-14329750
 ] 

Brock Noland commented on HIVE-9726:


Sandy helped me debug this. Basically we need to set: 

{{yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler}}
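For reference, that property goes in yarn-site.xml; a minimal fragment (surrounding configuration and queue setup are deployment-specific, and the comment is my gloss on why it helped here, not something stated in the thread):

```xml
<property>
  <!-- Use the FairScheduler so the Spark AM's container requests are
       granted incrementally rather than waiting on full capacity. -->
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```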

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, HIVE-9726.1-spark.patch, 
> hive.log.txt.gz, yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-20 Thread Brock Noland
Hi,

That is true and by design when built with the hadoop-1 profile:

https://github.com/apache/hive/commit/820690b9bb908f48f8403ca87d14b26c18f00c38

Brock

On Fri, Feb 20, 2015 at 11:08 AM, Thejas Nair  wrote:
> A few classes seem to be missing from the hive-exec*jar in binary
> tar.gz. When I build from the source tar.gz , the hive-exec*jar has
> those. ie, the source tar.gz looks fine.
>
> It is the ATSHook classes that are missing. Those are needed to be
> able to register job progress information with Yarn timeline server.
>
>  diff /tmp/src.txt /tmp/bin.txt
> 4768,4775d4767
> < org/apache/hadoop/hive/ql/hooks/ATSHook$1.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$2.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$3.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$EntityTypes.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$EventTypes.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$OtherInfoTypes.class
> < org/apache/hadoop/hive/ql/hooks/ATSHook$PrimaryFilterTypes.class
>
>
> On Thu, Feb 19, 2015 at 8:54 AM, Chao Sun  wrote:
>> +1
>>
>> 1. Build src with hadoop-1 and hadoop-2, tested the generated bin with some
>> DDL/DML queries.
>> 2. Tested the bin with some DDL/DML queries.
>> 3. Verified signature for bin and src, both asc and md5.
>>
>> Chao
>>
>> On Thu, Feb 19, 2015 at 1:55 AM, Szehon Ho  wrote:
>>
>>> +1
>>>
>>> 1.  Verified signature for bin and src
>>> 2.  Built src with hadoop2
>>> 3.  Ran few queries from beeline with src
>>> 4.  Ran few queries from beeline with bin
>>> 5.  Verified no SNAPSHOT deps
>>>
>>> Thanks
>>> Szehon
>>>
>>> On Wed, Feb 18, 2015 at 10:03 PM, Xuefu Zhang  wrote:
>>>
>>> > +1
>>> >
>>> > 1. downloaded the src tarball and built w/ -Phadoop-1/2
>>> > 2. verified no binary (jars) in the src tarball
>>> >
>>> > On Wed, Feb 18, 2015 at 8:56 PM, Brock Noland 
>>> wrote:
>>> >
>>> > > +1
>>> > >
>>> > > verified sigs, hashes, created tables, ran MR on YARN jobs
>>> > >
>>> > > On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland 
>>> > wrote:
>>> > > > Apache Hive 1.1.0 Release Candidate 3 is available here:
>>> > > > http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
>>> > > >
>>> > > > Maven artifacts are available here:
>>> > > >
>>> https://repository.apache.org/content/repositories/orgapachehive-1026/
>>> > > >
>>> > > > Source tag for RC3 is at:
>>> > > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
>>> > > >
>>> > > > My key is located here:
>>> https://people.apache.org/keys/group/hive.asc
>>> > > >
>>> > > > Voting will conclude in 72 hours
>>> > >
>>> >
>>>
>>
>>
>>
>> --
>> Best,
>> Chao


[jira] [Commented] (HIVE-9625) Delegation tokens for HMS are not renewed

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329278#comment-14329278
 ] 

Brock Noland commented on HIVE-9625:


Before calling {{getDelegationToken}} we call {{Hive.closeCurrent}} for this 
reason. I'll test it and see what happens.

> Delegation tokens for HMS are not renewed
> -
>
> Key: HIVE-9625
> URL: https://issues.apache.org/jira/browse/HIVE-9625
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9625.1.patch
>
>
> AFAICT the delegation tokens stored in [HiveSessionImplwithUGI 
> |https://github.com/apache/hive/blob/trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java#L45]
>  for HMS + Impersonation are never renewed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329235#comment-14329235
 ] 

Brock Noland commented on HIVE-9726:


None; we are specifying two executors via the old mechanism.

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, hive.log.txt.gz, 
> yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
Attachment: hive.log.txt.gz
yarn-am-stdout.txt
yarn-am-stderr.txt

logs attached.

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch, hive.log.txt.gz, 
> yarn-am-stderr.txt, yarn-am-stdout.txt
>
>






[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329177#comment-14329177
 ] 

Brock Noland commented on HIVE-9726:


[~sandyr],

We are trying to upgrade to {{1.3}} and are seeing some strangeness with YARN 
mode. Basically we wait until the containers we requested at startup actually start:

https://github.com/apache/hive/blob/trunk/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java#L914

in the attached log we see:

{noformat}
2015-02-20 08:52:08,597 INFO  [Reporter] yarn.YarnAllocator 
(Logging.scala:logInfo(59)) - Received 2 containers from YARN, launching 
executors on 0 of them.
{noformat}

then a few minutes later we error out:

{noformat}
2015-02-20 08:55:54,235 ERROR [main]: QTestUtil 
(QTestUtil.java:setSparkSession(916)) - Timed out waiting for Spark cluster to 
init
{noformat}
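The behavior described above is a plain poll-with-deadline loop: request executors, poll until they are running, and fail once the timeout elapses (the "Timed out waiting for Spark cluster to init" path). A minimal sketch of that shape, with hypothetical names rather than the actual QTestUtil code:

```java
import java.util.function.BooleanSupplier;

public class DeadlinePoll {
    // Poll `ready` until it returns true or the deadline passes.
    // Returns true if the condition became true in time, false on timeout.
    public static boolean awaitReady(BooleanSupplier ready, long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (true) {
            if (ready.getAsBoolean()) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false; // caller logs the "Timed out waiting ..." error here
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }
}
```

In the log above, YARN handed back 2 containers but executors launched on 0 of them, so the readiness check never turned true and the deadline branch fired.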

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>






[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329164#comment-14329164
 ] 

Brock Noland commented on HIVE-9726:


I think there is some real issue with TestMiniSparkOnYarnCliDriver.

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>






[jira] [Commented] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14328425#comment-14328425
 ] 

Brock Noland commented on HIVE-9726:


Whoops, I see I named this patch wrong and had attached it here: 
https://issues.apache.org/jira/browse/HIVE-9671?focusedCommentId=14328418&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14328418

earlier. To be clear, the current patch is for upgrading Spark.

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>






[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Attachment: (was: HIVE-9671.1-spark.patch)

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>    Reporter: Brock Noland
>Assignee: Brock Noland
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





Re: setting up a branch for testing

2015-02-19 Thread Brock Noland
I will actually be talking about this tonight at the
Hive meetup. I will post slides here afterwards.

On Thu, Feb 19, 2015 at 2:28 PM, Sergey Shelukhin
 wrote:
> Can you elaborate on how many machines are needed, minimum (is it 1+, just 
> determined by the throughput of QA runs that we want; or is there some fixed 
> requirement too), and what is the setup/process to make them work with HiveQA 
> (in general so we'd know how we can provide machines)?
>
> Thanks!
> 
> From: Szehon 
> Sent: Friday, January 23, 2015 7:38 PM
> To: dev@hive.apache.org
> Subject: Re: setting up a branch for testing
>
> Yea, but for precommit testing it would need a cluster setup that runs an 
> instance of the Ptest server.  We only have the spark branch set up for that 
> other than trunk: one cluster runs spark and another runs trunk.
>
> Setup is doable (actually you just need to set up the master), but it takes 
> some steps, and physical machines.
>
> Thanks
> Szehon
>
>> On Jan 23, 2015, at 6:04 PM, Sergey Shelukhin  wrote:
>>
>> Hi.
>> Hive dev doc mentions that patches can be tested by HiveQA against the
>> branch by supplying the branch name in the patch name.
>> However, as far as I understand this requires some setup for each specific
>> branch.
>>
>> Is it possible to set up "llap" branch for HiveQA testing?
>>


[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
Assignee: Brock Noland
  Status: Patch Available  (was: Open)

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>






[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
Attachment: HIVE-9671.1-spark.patch

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>






[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Assignee: Brock Noland
  Status: Patch Available  (was: Open)

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Updated] (HIVE-9671) Support Impersonation [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9671:
---
Attachment: HIVE-9671.1-spark.patch

> Support Impersonation [Spark Branch]
> 
>
> Key: HIVE-9671
> URL: https://issues.apache.org/jira/browse/HIVE-9671
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>    Reporter: Brock Noland
> Attachments: HIVE-9671.1-spark.patch
>
>
> SPARK-5493 in 1.3 implemented proxy user authentication. We need to implement 
> using this option in spark client.





[jira] [Created] (HIVE-9726) Upgrade to spark 1.3

2015-02-19 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9726:
--

 Summary: Upgrade to spark 1.3
 Key: HIVE-9726
 URL: https://issues.apache.org/jira/browse/HIVE-9726
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland








[jira] [Updated] (HIVE-9726) Upgrade to spark 1.3 [Spark Branch]

2015-02-19 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9726:
---
Affects Version/s: spark-branch
  Summary: Upgrade to spark 1.3 [Spark Branch]  (was: Upgrade to 
spark 1.3)

> Upgrade to spark 1.3 [Spark Branch]
> ---
>
> Key: HIVE-9726
> URL: https://issues.apache.org/jira/browse/HIVE-9726
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: spark-branch
>Reporter: Brock Noland
>






[jira] [Created] (HIVE-9721) Hadoop23Shims.setFullFileStatus should check for null

2015-02-18 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9721:
--

 Summary: Hadoop23Shims.setFullFileStatus should check for null
 Key: HIVE-9721
 URL: https://issues.apache.org/jira/browse/HIVE-9721
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland


{noformat}
2015-02-18 22:46:10,209 INFO org.apache.hadoop.hive.shims.HadoopShimsSecure: 
Skipping ACL inheritance: File system for path 
file:/tmp/hive/f1a28dee-70e8-4bc3-bd35-9be13834d1fc/hive_2015-02-18_22-46-10_065_3348083202601156561-1
 does not support ACLs but dfs.namenode.acls.enabled is set to true: 
java.lang.UnsupportedOperationException: RawLocalFileSystem doesn't support 
getAclStatus
java.lang.UnsupportedOperationException: RawLocalFileSystem doesn't support 
getAclStatus
at org.apache.hadoop.fs.FileSystem.getAclStatus(FileSystem.java:2429)
at 
org.apache.hadoop.fs.FilterFileSystem.getAclStatus(FilterFileSystem.java:562)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims.getFullFileStatus(Hadoop23Shims.java:645)
at org.apache.hadoop.hive.common.FileUtils.mkdir(FileUtils.java:524)
at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:234)
at 
org.apache.hadoop.hive.ql.Context.getExtTmpPathRelTo(Context.java:424)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:6290)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:9069)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8961)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9807)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9700)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10136)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:284)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10147)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:190)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:421)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1106)
at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:101)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:172)
at 
org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:379)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:366)
at 
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:271)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:415)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:692)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-02-18 17:30:58,753 INFO org.apache.hadoop.hive.shims.HadoopShimsSecure: 
Skipping ACL inheritance: File system for path 
file:/tmp/hive/e3eb01f0-bb58-45a8-b773-8f4f3420457c/hive_2015-02-18_17-30-58_346_5020255420422913166-1/-mr-1
 does not support ACLs but dfs.namenode.acls.enabled is set to true: 
java.lang.NullPointerException
java.lang.NullPointerException
at 
org.apache.hadoop.hive.shims.Hadoop23Shims.setFullFileStatus(Hadoop23Shims.java:668)
at org.apache.hadoop.hive.common.FileUtils.mkdir(FileUtils.java:527)
at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:234)
at 
org.apache.hadoop.hive.ql.Context.getExtTmpPathRelTo(Context.java:424)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFi
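The second trace above is the NPE the summary asks to guard against: the ACL status the shim works with can be null when the underlying file system does not support ACLs. A minimal sketch of the null-check, using a stub type standing in for Hadoop's AclStatus rather than the actual shim code:

```java
public class AclGuard {

    // Stub standing in for org.apache.hadoop.fs.permission.AclStatus.
    public static class AclStatus {
        private final String entries;
        public AclStatus(String entries) { this.entries = entries; }
        public String getEntries() { return entries; }
    }

    // Inherit ACL entries only when the source status is actually present;
    // a file system without ACL support may yield null, and we skip rather
    // than dereference it (avoiding the NPE shown in the trace).
    public static String inheritAcls(AclStatus source) {
        if (source == null || source.getEntries() == null) {
            return "skipped";
        }
        return "applied:" + source.getEntries();
    }
}
```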

Re: [VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-18 Thread Brock Noland
+1

verified sigs, hashes, created tables, ran MR on YARN jobs

On Wed, Feb 18, 2015 at 8:54 PM, Brock Noland  wrote:
> Apache Hive 1.1.0 Release Candidate 3 is available here:
> http://people.apache.org/~brock/apache-hive-1.1.0-rc3/
>
> Maven artifacts are available here:
> https://repository.apache.org/content/repositories/orgapachehive-1026/
>
> Source tag for RC3 is at:
> http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/
>
> My key is located here: https://people.apache.org/keys/group/hive.asc
>
> Voting will conclude in 72 hours


[VOTE] Apache Hive 1.1.0 Release Candidate 3

2015-02-18 Thread Brock Noland
Apache Hive 1.1.0 Release Candidate 3 is available here:
http://people.apache.org/~brock/apache-hive-1.1.0-rc3/

Maven artifacts are available here:
https://repository.apache.org/content/repositories/orgapachehive-1026/

Source tag for RC3 is at:
http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc3/

My key is located here: https://people.apache.org/keys/group/hive.asc

Voting will conclude in 72 hours


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 2

2015-02-18 Thread Brock Noland
We should be able to generate those values in webhcat-default.xml. Eugene?

On Wed, Feb 18, 2015 at 4:43 PM, Lefty Leverenz  wrote:
> Four configuration values in webhcat-default.xml need to be updated (same
> as HIVE-8807 <https://issues.apache.org/jira/browse/HIVE-8807> updated in
> the patch for release 1.0.0
> <https://issues.apache.org/jira/secure/attachment/12695112/HIVE8807.patch>):
>
>- templeton.pig.path
>- templeton.hive.path
>- templeton.hive.home
>- templeton.hcat.home
>
> How can we make this happen in every release, without reminders?
>
>
> -- Lefty
>
> On Wed, Feb 18, 2015 at 4:04 PM, Brock Noland  wrote:
>
>> Yeah that is really strange. I have seen that before, a long time
>> back, but never found the root cause. I think it's a bug in either
>> antlr or how we use antlr.
>>
>> I will re-generate the binaries and start another vote. Note the
>> source tag will be the same which is technically what we vote on..
>>
>> On Wed, Feb 18, 2015 at 3:59 PM, Chao Sun  wrote:
>> > I tested apache-hive.1.1.0-bin and I also got the same error as Szehon
>> > reported.
>> >
>> > On Wed, Feb 18, 2015 at 3:48 PM, Brock Noland 
>> wrote:
>> >
>> >> Hi,
>> >>
>> >>
>> >>
>> >> On Wed, Feb 18, 2015 at 2:21 PM, Gopal Vijayaraghavan <
>> gop...@apache.org>
>> >> wrote:
>> >> > Hi,
>> >> >
>> >> > From the release branch, I noticed that the hive-exec.jar now
>> contains a
>> >> > copy of guava-14 without any relocations.
>> >> >
>> >> > The hive spark-client pom.xml adds guava as a lib jar instead of
>> shading
>> >> > it in.
>> >> >
>> >> >
>> https://github.com/apache/hive/blob/branch-1.1/spark-client/pom.xml#L111
>> >> >
>> >> >
>> >> > That seems to be a great approach for guava compat issues across
>> >> execution
>> >> > engines.
>> >> >
>> >> >
>> >> > Spark itself relocates guava-14 for compatibility with
>> Hive-on-Spark(??).
>> >> >
>> >> > https://issues.apache.org/jira/browse/SPARK-2848
>> >> >
>> >> >
>> >> > Does any of the same compatibility issues occur when using a
>> >> hive-exec.jar
>> >> > containing guava-14 on MRv2 (which has guava-11 in the classpath)?
>> >>
>> >> Not that I am aware of. I've tested it on top of MRv2 a number of
>> >> times and I think the unit tests also exercise these code paths.
>> >>
>> >> >
>> >> > Cheers,
>> >> > Gopal
>> >> >
>> >> > On 2/17/15, 3:14 PM, "Brock Noland"  wrote:
>> >> >
>> >> >>Apache Hive 1.1.0 Release Candidate 2 is available here:
>> >> >>http://people.apache.org/~brock/apache-hive-1.1.0-rc2/
>> >> >>
>> >> >>Maven artifacts are available here:
>> >> >>
>> https://repository.apache.org/content/repositories/orgapachehive-1025/
>> >> >>
>> >> >>Source tag for RC1 is at:
>> >> >>http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc2/
>> >> >>
>> >> >>My key is located here: https://people.apache.org/keys/group/hive.asc
>> >> >>
>> >> >>Voting will conclude in 72 hours
>> >> >
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > Best,
>> > Chao
>>


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 2

2015-02-18 Thread Brock Noland
Yeah that is really strange. I have seen that before, a long time
back, but never found the root cause. I think it's a bug in either
antlr or how we use antlr.

I will re-generate the binaries and start another vote. Note the
source tag will be the same which is technically what we vote on..

On Wed, Feb 18, 2015 at 3:59 PM, Chao Sun  wrote:
> I tested apache-hive.1.1.0-bin and I also got the same error as Szehon
> reported.
>
> On Wed, Feb 18, 2015 at 3:48 PM, Brock Noland  wrote:
>
>> Hi,
>>
>>
>>
>> On Wed, Feb 18, 2015 at 2:21 PM, Gopal Vijayaraghavan 
>> wrote:
>> > Hi,
>> >
>> > From the release branch, I noticed that the hive-exec.jar now contains a
>> > copy of guava-14 without any relocations.
>> >
>> > The hive spark-client pom.xml adds guava as a lib jar instead of shading
>> > it in.
>> >
>> > https://github.com/apache/hive/blob/branch-1.1/spark-client/pom.xml#L111
>> >
>> >
>> > That seems to be a great approach for guava compat issues across
>> execution
>> > engines.
>> >
>> >
>> > Spark itself relocates guava-14 for compatibility with Hive-on-Spark(??).
>> >
>> > https://issues.apache.org/jira/browse/SPARK-2848
>> >
>> >
>> > Does any of the same compatibility issues occur when using a
>> hive-exec.jar
>> > containing guava-14 on MRv2 (which has guava-11 in the classpath)?
>>
>> Not that I am aware of. I've tested it on top of MRv2 a number of
>> times and I think the unit tests also exercise these code paths.
>>
>> >
>> > Cheers,
>> > Gopal
>> >
>> > On 2/17/15, 3:14 PM, "Brock Noland"  wrote:
>> >
>> >>Apache Hive 1.1.0 Release Candidate 2 is available here:
>> >>http://people.apache.org/~brock/apache-hive-1.1.0-rc2/
>> >>
>> >>Maven artifacts are available here:
>> >>https://repository.apache.org/content/repositories/orgapachehive-1025/
>> >>
>> >>Source tag for RC1 is at:
>> >>http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc2/
>> >>
>> >>My key is located here: https://people.apache.org/keys/group/hive.asc
>> >>
>> >>Voting will conclude in 72 hours
>> >
>> >
>>
>
>
>
> --
> Best,
> Chao


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 2

2015-02-18 Thread Brock Noland
Hi,



On Wed, Feb 18, 2015 at 2:21 PM, Gopal Vijayaraghavan  wrote:
> Hi,
>
> From the release branch, I noticed that the hive-exec.jar now contains a
> copy of guava-14 without any relocations.
>
> The hive spark-client pom.xml adds guava as a lib jar instead of shading
> it in.
>
> https://github.com/apache/hive/blob/branch-1.1/spark-client/pom.xml#L111
>
>
> That seems to be a great approach for guava compat issues across execution
> engines.
>
>
> Spark itself relocates guava-14 for compatibility with Hive-on-Spark(??).
>
> https://issues.apache.org/jira/browse/SPARK-2848
>
>
> Does any of the same compatibility issues occur when using a hive-exec.jar
> containing guava-14 on MRv2 (which has guava-11 in the classpath)?

Not that I am aware of. I've tested it on top of MRv2 a number of
times and I think the unit tests also exercise these code paths.

>
> Cheers,
> Gopal
>
> On 2/17/15, 3:14 PM, "Brock Noland"  wrote:
>
>>Apache Hive 1.1.0 Release Candidate 2 is available here:
>>http://people.apache.org/~brock/apache-hive-1.1.0-rc2/
>>
>>Maven artifacts are available here:
>>https://repository.apache.org/content/repositories/orgapachehive-1025/
>>
>>Source tag for RC1 is at:
>>http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc2/
>>
>>My key is located here: https://people.apache.org/keys/group/hive.asc
>>
>>Voting will conclude in 72 hours
>
>
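For context on the relocation discussed above: "relocating" (shading) guava means rewriting its package names inside the fat jar so classes cannot clash with the guava already on the cluster classpath. With the maven-shade-plugin that is expressed roughly as follows; the shadedPattern shown here is illustrative, not Hive's actual configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- rewrite com.google.common.* into a private namespace -->
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hive.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

Adding guava as a plain lib jar, as the spark-client pom does, sidesteps relocation entirely but leaves the version-conflict question open, which is what the thread is probing.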


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 2

2015-02-18 Thread Brock Noland
Good idea... since it's not a blocker I will add that for 1.1.1 and 1.2.0.

On Wed, Feb 18, 2015 at 10:37 AM, Prasad Mujumdar  wrote:
> I guess the README.txt can list Apache Spark as a query execution
> framework along with MapReduce and Tez.
>
> thanks
> Prasad
>
>
> On Wed, Feb 18, 2015 at 8:26 AM, Xuefu Zhang  wrote:
>
>> +1
>>
>> 1. downloaded the src and bin, and verified md5.
>> 2. built the src with -Phadoop-1 and -Phadoop-2.
>> 3. ran a few unit tests
>>
>> Thanks,
>> Xuefu
>>
>> On Tue, Feb 17, 2015 at 3:14 PM, Brock Noland  wrote:
>>
>> > Apache Hive 1.1.0 Release Candidate 2 is available here:
>> > http://people.apache.org/~brock/apache-hive-1.1.0-rc2/
>> >
>> > Maven artifacts are available here:
>> > https://repository.apache.org/content/repositories/orgapachehive-1025/
>> >
>> > Source tag for RC1 is at:
>> > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc2/
>> >
>> > My key is located here: https://people.apache.org/keys/group/hive.asc
>> >
>> > Voting will conclude in 72 hours
>> >
>>


[jira] [Updated] (HIVE-9716) Map job fails when table's LOCATION does not have scheme

2015-02-18 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9716:
---
Description: 
When a table's location (the value of column 'LOCATION' in the SDS table in 
the metastore) does not have a scheme, the map job returns an error. For 
example, when running select count(*) from t1, we get the following exception:

{noformat}
15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: 
job_local2120192529_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.IllegalStateException: Invalid input path 
file:/user/hive/warehouse/t1/data
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: Invalid 
input path file:/user/hive/warehouse/t1/data
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Invalid input path 
file:/user/hive/warehouse/t1/data
at 
org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442)
at 
org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
... 9 more
{noformat}
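The root cause is that an unqualified location (no file:/hdfs: scheme) no longer matches the fully qualified input paths computed at runtime. A sketch of the kind of scheme qualification that avoids the mismatch, using plain java.net.URI rather than the actual Hive fix:

```java
import java.net.URI;

public class PathScheme {
    // Qualify a raw location with a default scheme and authority when it has
    // none, so later path comparisons see consistent, fully qualified URIs.
    public static String qualify(String location, String defaultScheme, String defaultAuthority) {
        URI uri = URI.create(location);
        if (uri.getScheme() != null) {
            return location; // already qualified, e.g. hdfs://nn:8020/... or file:/...
        }
        return defaultScheme + "://" + defaultAuthority + uri.getPath();
    }
}
```

With such a helper, "/user/hive/warehouse/t1/data" would be compared as "hdfs://nn:8020/user/hive/warehouse/t1/data" instead of failing the nominal-path lookup in MapOperator.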

  was:
When a table's location (the value of column 'LOCATION' in SDS table in 
metastore) does not have a scheme, map job returns error. For example, 
when do select count ( * ) from t1, get following exception:

15/02/18 12:29:43 [Thread-22]: WARN mapred.LocalJobRunner: 
job_local2120192529_0001
java.lang.Exception: java.lang.RuntimeException: 
java.lang.IllegalStateException: Invalid input path 
file:/user/hive/warehouse/t1/data
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.RuntimeException: java.lang.IllegalStateException: Invalid 
input path file:/user/hive/warehouse/t1/data
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Invalid input path 
file:/user/hive/warehouse/t1/data
at 
org.apache.hadoop.hive.ql.exec.MapOperator.getNominalPath(MapOperator.java:406)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:442)
at 
org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
... 9 more


> Map job fails when table's LOCATION does not have scheme
> 
>
> Key: HIVE-9716
> URL: https://issues.apache.org/jira/browse/HIVE-9716
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0, 0.13.0, 0.14.0
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>Priority: Minor
>
> When a table's location (the value of column 'LOCATION' in the SDS table in 
> the metastore) does not have a scheme, the map job returns an error. For 
> example, when running select count(*) from t1, we get the following exception:
> {noformat}
> 15/02/1

[jira] [Comment Edited] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-18 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14326158#comment-14326158
 ] 

Brock Noland edited comment on HIVE-3454 at 2/18/15 4:40 PM:
-

Have we tested this as part of an MR job? I don't think that the hive-site.xml 
is shipped as part of MR jobs. If that is true, how about we do as follows:

1) Add method {{public static void initialize(Configuration)}} to 
{{TimestampWritable}}
2) Call this method from {{AbstractSerDe.initialize}} which should be called, 
with configuration, in all the right places.
3) In {{TimestampWritable.initialize}} you can use the static 
{{HiveConf.getBoolVar}}

It's a bit kludgy, but it should work. This is all assuming the current impl 
doesn't work.

bq. "timestamp conversion."

I think we need a space after this.
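The three steps above can be sketched as follows. To stay self-contained, Hadoop's Configuration is replaced by a plain Map, and while hive.int.timestamp.conversion.in.seconds is the property this issue revolves around, treat the lookup details here as illustrative:

```java
import java.util.Map;

public class TimestampConversionConfig {
    // The flag the writable would consult at conversion time.
    private static volatile boolean intToTimestampInSeconds = false;

    // Step 1: a static initialize(conf), to be called (step 2) from
    // AbstractSerDe.initialize, which receives the job configuration.
    public static void initialize(Map<String, String> conf) {
        // Step 3: read the boolean the way HiveConf.getBoolVar would.
        intToTimestampInSeconds = Boolean.parseBoolean(
                conf.getOrDefault("hive.int.timestamp.conversion.in.seconds", "false"));
    }

    public static boolean isIntToTimestampInSeconds() {
        return intToTimestampInSeconds;
    }
}
```

The kludge Brock mentions is that the flag lives in static state, initialized as a side effect of SerDe setup, rather than being plumbed through explicitly.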


was (Author: brocknoland):
Have we tested this as part of an MR job? I don't think that the hive-site.xml 
is shipped as part of MR jobs. If that is true, how about we do as follows:

1) Add method {{public static void initialize(Configuration)}} to 
{{TimestampWritable}}
2) Call this method from {{AbstractSerDe.initialize}} which should be called, 
with configuration, in all the right places.
3) In {{TimestampWritable.Configuration}} you can use the static 
{{HiveCon.getBoolVar}}

a bit kludgy but it should work. This all assuming the current impl doesn't 
work.

bq. "timestamp conversion."

I think we need a space after this.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-18 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14326158#comment-14326158
 ] 

Brock Noland commented on HIVE-3454:


Have we tested this as part of an MR job? I don't think that the hive-site.xml 
is shipped as part of MR jobs. If that is true, how about we do as follows:

1) Add method {{public static void initialize(Configuration)}} to 
{{TimestampWritable}}
2) Call this method from {{AbstractSerDe.initialize}} which should be called, 
with configuration, in all the right places.
3) In {{TimestampWritable.Configuration}} you can use the static 
{{HiveCon.getBoolVar}}

a bit kludgy but it should work. This all assuming the current impl doesn't 
work.

bq. "timestamp conversion."

I think we need a space after this.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2015-02-18 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14326158#comment-14326158
 ] 

Brock Noland edited comment on HIVE-3454 at 2/18/15 4:40 PM:
-

Have we tested this as part of an MR job? I don't think that the hive-site.xml 
is shipped as part of MR jobs. If that is true, how about we do as follows:

1) Add method {{public static void initialize(Configuration)}} to 
{{TimestampWritable}}
2) Call this method from {{AbstractSerDe.initialize}} which should be called, 
with configuration, in all the right places.
3) In {{TimestampWritable.initialize}} you can use the static 
{{HiveConf.getBoolVar}}

a bit kludgy but it should work. This all assuming the current impl doesn't 
work.

bq. "timestamp conversion."

I think we need a space after this.


was (Author: brocknoland):
Have we tested this as part of an MR job? I don't think that the hive-site.xml 
is shipped as part of MR jobs. If that is true, how about we do as follows:

1) Add method {{public static void initialize(Configuration)}} to 
{{TimestampWritable}}
2) Call this method from {{AbstractSerDe.initialize}} which should be called, 
with configuration, in all the right places.
3) In {{TimestampWritable.initialize}} you can use the static 
{{HiveCon.getBoolVar}}

a bit kludgy but it should work. This all assuming the current impl doesn't 
work.

bq. "timestamp conversion."

I think we need a space after this.

> Problem with CAST(BIGINT as TIMESTAMP)
> --
>
> Key: HIVE-3454
> URL: https://issues.apache.org/jira/browse/HIVE-3454
> Project: Hive
>  Issue Type: Bug
>  Components: Types, UDF
>Affects Versions: 0.8.0, 0.8.1, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 
> 0.13.1
>Reporter: Ryan Harris
>Assignee: Aihua Xu
>  Labels: newbie, newdev, patch
> Attachments: HIVE-3454.1.patch.txt, HIVE-3454.2.patch, 
> HIVE-3454.3.patch, HIVE-3454.3.patch, HIVE-3454.patch
>
>
> Ran into an issue while working with timestamp conversion.
> CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
> time from the BIGINT returned by unix_timestamp()
> Instead, however, a 1970-01-16 timestamp is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
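The initialize(Configuration) approach described in the comment above can be sketched as follows. This is a minimal illustration, not the actual Hive patch: {{Configuration}} here is a tiny stand-in for Hadoop's class, the real code would go through {{HiveConf.getBoolVar}}, and the config key name is taken from the HIVE-3454 discussion and should be treated as illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Hadoop's Configuration so the sketch is self-contained.
class Configuration {
    private final Map<String, String> props = new HashMap<>();
    void setBoolean(String key, boolean value) { props.put(key, Boolean.toString(value)); }
    boolean getBoolean(String key, boolean defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Boolean.parseBoolean(v);
    }
}

// Step 1: a static initialize(Configuration) on TimestampWritable caches the flag.
class TimestampWritable {
    private static volatile boolean intConversionInSeconds = false;

    static void initialize(Configuration conf) {
        // Step 3: the real patch would read this via HiveConf.getBoolVar.
        intConversionInSeconds =
            conf.getBoolean("hive.int.timestamp.conversion.in.seconds", false);
    }

    static boolean isIntConversionInSeconds() { return intConversionInSeconds; }
}

// Step 2: the SerDe base class forwards its Configuration on initialize,
// so the flag is set wherever a SerDe is initialized (including inside MR tasks).
abstract class AbstractSerDe {
    void initialize(Configuration conf) {
        TimestampWritable.initialize(conf);
    }
}

public class InitPatternDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setBoolean("hive.int.timestamp.conversion.in.seconds", true);
        new AbstractSerDe() {}.initialize(conf);
        System.out.println(TimestampWritable.isIntConversionInSeconds()); // prints true
    }
}
```

The point of routing through the SerDe is that a plain hive-site.xml lookup is not reliable inside MR tasks, whereas SerDe initialization always receives a Configuration.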


[jira] [Updated] (HIVE-9706) HBase handler support for snapshots should confirm properties before use

2015-02-18 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9706:
---
   Resolution: Fixed
Fix Version/s: (was: 1.1.0)
   Status: Resolved  (was: Patch Available)

Thank you Sean! I have committed this to trunk!

> HBase handler support for snapshots should confirm properties before use
> 
>
> Key: HIVE-9706
> URL: https://issues.apache.org/jira/browse/HIVE-9706
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.0
>
> Attachments: HIVE-9707.1.patch
>
>
> The HBase Handler's support for running over snapshots attempts to copy a 
> number of hbase internal configurations into a job configuration.
> Some of these configuration keys are removed in HBase 1.0.0+ and the current 
> implementation will fail when copying the resultant null value into a new 
> configuration. Additionally, some internal configs added in later HBase 0.98 
> versions are not respected.
> Instead, setup should check for the presence of the keys it expects and then 
> make the new configuration consistent with them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
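The fix direction in the description — copy only the keys that are actually present, rather than blindly copying values that may be null — can be sketched generically. Plain maps stand in for Hadoop's Configuration here; the key names are just examples, and the real handler works against Configuration objects, where {{set(key, null)}} throws the IllegalArgumentException seen against HBase 1.0.0+.

```java
import java.util.HashMap;
import java.util.Map;

public class SafeConfigCopy {
    // Copy each key into the target only if the source actually defines it.
    // Copying a missing (null) value is what blows up when HBase removes a
    // config key between versions.
    static void copyIfSet(Map<String, String> source, Map<String, String> target,
                          String... keys) {
        for (String key : keys) {
            String value = source.get(key);
            if (value != null) {
                target.put(key, value);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> hbaseConf = new HashMap<>();
        hbaseConf.put("hbase.rootdir", "/hbase");
        // "hbase.offheapcache.percentage" is absent, as on HBase 1.0.0+.
        Map<String, String> jobConf = new HashMap<>();
        copyIfSet(hbaseConf, jobConf, "hbase.rootdir", "hbase.offheapcache.percentage");
        System.out.println(jobConf); // prints {hbase.rootdir=/hbase}
    }
}
```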


[VOTE] Apache Hive 1.1.0 Release Candidate 2

2015-02-17 Thread Brock Noland
Apache Hive 1.1.0 Release Candidate 2 is available here:
http://people.apache.org/~brock/apache-hive-1.1.0-rc2/

Maven artifacts are available here:
https://repository.apache.org/content/repositories/orgapachehive-1025/

Source tag for RC2 is at:
http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc2/

My key is located here: https://people.apache.org/keys/group/hive.asc

Voting will conclude in 72 hours


[jira] [Updated] (HIVE-9708) Remove testlibs directory

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9708:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove testlibs directory
> -
>
> Key: HIVE-9708
> URL: https://issues.apache.org/jira/browse/HIVE-9708
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9708.patch
>
>
> The {{testlibs}} directory is left over from the old ant build. We can delete 
> it as it's downloaded by maven now:
> https://github.com/apache/hive/blob/trunk/pom.xml#L610



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9650) Fix HBase tests post 1.x API changes

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9650:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Fix HBase tests post 1.x API changes
> 
>
> Key: HIVE-9650
> URL: https://issues.apache.org/jira/browse/HIVE-9650
> Project: Hive
>  Issue Type: Bug
>        Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9650.patch
>
>
> The API {{TableInputFormatBase.setTable}} has been deprecated and the 
> connection management API has changed.
> {noformat}
> java.io.IOException: The connection has to be unmanaged.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getAdmin(ConnectionManager.java:720)
>   at 
> org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.setHTable(TableInputFormatBase.java:359)
>   at 
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplitsInternal(HiveHBaseTableInputFormat.java:444)
>   at 
> org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:432)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:306)
>   at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:408)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:361)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9706) HBase handler support for snapshots should confirm properties before use

2015-02-17 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324902#comment-14324902
 ] 

Brock Noland commented on HIVE-9706:


Full stack, FWIW:

{noformat}
2015-02-17 13:11:56,000 ERROR [main]: optimizer.SimpleFetchOptimizer 
(SimpleFetchOptimizer.java:transform(113)) - 
java.lang.IllegalArgumentException: The value of property 
hbase.offheapcache.percentage must not be null
  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
  at org.apache.hadoop.conf.Configuration.set(Configuration.java:1048)
  at org.apache.hadoop.conf.Configuration.set(Configuration.java:1029)
  at 
org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureTableJobProperties(HBaseStorageHandler.java:406)
  at 
org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureInputJobProperties(HBaseStorageHandler.java:317)
  at 
org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:809)
  at 
org.apache.hadoop.hive.ql.plan.PlanUtils.configureInputJobPropertiesForStorageHandler(PlanUtils.java:779)
  at 
org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.convertToWork(SimpleFetchOptimizer.java:379)
  at 
org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer$FetchData.access$000(SimpleFetchOptimizer.java:319)
  at 
org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.optimize(SimpleFetchOptimizer.java:135)
  at 
org.apache.hadoop.hive.ql.optimizer.SimpleFetchOptimizer.transform(SimpleFetchOptimizer.java:106)
  at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:182)
  at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10202)
  at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:190)
  at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
  at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:421)
  at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
  at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112)
  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1160)
  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039)
  at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305)
  at 
org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1012)
  at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:986)
  at 
org.apache.hadoop.hive.cli.TestHBaseCliDriver.runTest(TestHBaseCliDriver.java:112)
  at 
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_handler_snapshot(TestHBaseCliDriver.java:94)
{noformat}

> HBase handler support for snapshots should confirm properties before use
> 
>
> Key: HIVE-9706
> URL: https://issues.apache.org/jira/browse/HIVE-9706
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.0, 1.1.0
>
> Attachments: HIVE-9707.1.patch
>
>
> The HBase Handler's support for running over snapshots attempts to copy a 
> number of hbase internal configurations into a job configuration.
> Some of these configuration keys are removed in HBase 1.0.0+ and the current 
> implementation will fail when copying the resultant null value into a new 
> configuration. Additionally, some internal configs added in later HBase 0.98 
> versions are not respected.
> Instead, setup should check for the presence of the keys it expects and then 
> make the new configuration consistent with them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9706) HBase handler support for snapshots should confirm properties before use

2015-02-17 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324903#comment-14324903
 ] 

Brock Noland commented on HIVE-9706:


+1 pending tests

> HBase handler support for snapshots should confirm properties before use
> 
>
> Key: HIVE-9706
> URL: https://issues.apache.org/jira/browse/HIVE-9706
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.14.0, 1.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.0, 1.1.0
>
> Attachments: HIVE-9707.1.patch
>
>
> The HBase Handler's support for running over snapshots attempts to copy a 
> number of hbase internal configurations into a job configuration.
> Some of these configuration keys are removed in HBase 1.0.0+ and the current 
> implementation will fail when copying the resultant null value into a new 
> configuration. Additionally, some internal configs added in later HBase 0.98 
> versions are not respected.
> Instead, setup should check for the presence of the keys it expects and then 
> make the new configuration consistent with them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 1

2015-02-17 Thread Brock Noland
Thank you Alan. That is my mistake actually. We can delete this now and
will do so here: https://issues.apache.org/jira/browse/HIVE-9708

On Tue, Feb 17, 2015 at 10:37 AM, Alan Gates  wrote:

> It looks like a jar file snuck into the source release:
> gates> find . -name \*.jar
> ./testlibs/ant-contrib-1.0b3.jar
>
> Apache policy is that binary files cannot be in releases.
>
> Alan.
>
>   Brock Noland 
>  February 16, 2015 at 21:08
> Apache Hive 1.1.0 Release Candidate 1 is available here:
> http://people.apache.org/~brock/apache-hive-1.1.0-rc1/
>
> Maven artifacts are available here:
> https://repository.apache.org/content/repositories/orgapachehive-1024/
>
> Source tag for RC1 is at:
> http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc1/
>
> My key is located here: https://people.apache.org/keys/group/hive.asc
>
> Voting will conclude in 72 hours
>
>


[jira] [Updated] (HIVE-9707) ExecDriver does not get token from environment

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9707:
---
   Resolution: Fixed
Fix Version/s: 1.1.0
   Status: Resolved  (was: Patch Available)

> ExecDriver does not get token from environment
> --
>
> Key: HIVE-9707
> URL: https://issues.apache.org/jira/browse/HIVE-9707
> Project: Hive
>  Issue Type: Improvement
>        Reporter: Brock Noland
>    Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9707.patch
>
>
> Broken in HIVE-8828



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9705) All curator deps should be listed in dependency management section

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9705:
---
   Resolution: Fixed
Fix Version/s: 1.1.0
   Status: Resolved  (was: Patch Available)

> All curator deps should be listed in dependency management section
> --
>
> Key: HIVE-9705
> URL: https://issues.apache.org/jira/browse/HIVE-9705
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9705.patch
>
>
> HADOOP-11492 brings in a new version of curator which doesn't work for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30909: HIVE-9252 Linking custom SerDe jar to table definition.

2015-02-17 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30909/#review72782
---


Hey Ferdinand,

Generally I think the approach is sound. That is, adding permanent storage 
handlers/SerDes will make this more useful to users. It's unclear whether the 
current code handles only storage handlers, or SerDes as well.

One item we should change is the API signature of the new thrift HMS calls. All 
new HMS API's should be in the request/response format. For example if we have 
an API called "Some" the method should be:

SomeResponse Some(SomeRequest)

We should also stay away from enums as they are not compatible across releases.
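A minimal sketch of that request/response shape, using Brock's placeholder name "Some" (none of these types exist in the metastore; the fields are invented for illustration):

```java
// Request wrapper: new optional fields can be added later without ever
// changing the thrift method signature, which is what keeps the API
// compatible across releases.
class SomeRequest {
    private final String tableName;
    SomeRequest(String tableName) { this.tableName = tableName; }
    String getTableName() { return tableName; }
}

// Response wrapper: likewise extensible without breaking callers.
class SomeResponse {
    private final boolean done;
    SomeResponse(boolean done) { this.done = done; }
    boolean isDone() { return done; }
}

public class RequestResponseDemo {
    // SomeResponse Some(SomeRequest) — the shape every new HMS call should take.
    static SomeResponse some(SomeRequest request) {
        return new SomeResponse(request.getTableName() != null);
    }

    public static void main(String[] args) {
        SomeResponse resp = some(new SomeRequest("t1"));
        System.out.println(resp.isDone()); // prints true
    }
}
```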

Thank you!!

Brock

- Brock Noland


On Feb. 17, 2015, 3:22 a.m., cheng xu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30909/
> ---
> 
> (Updated Feb. 17, 2015, 3:22 a.m.)
> 
> 
> Review request for hive, Brock Noland, Dong Chen, Mohit Sabharwal, and Sergio 
> Pena.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Changes include:
> 1.Update the DDLTask to support using statement
> 2.Serialize the resource uri into table properties
> 3.Deserialize the resource uri and add them to the session classloader
> 4.Some query test and unit tests added
> 
> 
> Diffs
> -
> 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
>  130fd67 
>   itests/test-serde/pom.xml cb79072 
>   
> itests/test-serde/src/main/java/org/apache/hadoop/hive/storagehandler/TestBase64TextOutputFormat.java
>  PRE-CREATION 
>   
> itests/test-serde/src/main/java/org/apache/hadoop/hive/storagehandler/TestStorageHandler.java
>  PRE-CREATION 
>   metastore/if/hive_metastore.thrift c2a2419 
>   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
> 26ca208 
>   
> metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
> 0ff2863 
>   metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
> 0aa0f51 
>   metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
> fcaffc7 
>   metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 2b49eab 
>   
> metastore/src/model/org/apache/hadoop/hive/metastore/model/MStorageDescriptor.java
>  9da3071 
>   
> metastore/src/model/org/apache/hadoop/hive/metastore/model/MStorageHandler.java
>  PRE-CREATION 
>   metastore/src/model/package.jdo b41b3d8 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
>  cf068e4 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
>  5f28d73 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 089bd94 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionTask.java 569c125 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 7d72783 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 3a2a6ee 
>   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 9d5730d 
>   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 4aac39a 
>   ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java c4633f6 
>   ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 69a4545 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java 
> 1ef6d1b 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 149b788 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java bdb9204 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/StorageFormat.java 7723430 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java 8cadb96 
>   ql/src/java/org/apache/hadoop/hive/ql/util/SemanticAnalyzerHelper.java 
> PRE-CREATION 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java 69f8889 
>   ql/src/test/queries/clientpositive/storage_handler_link_external_jar.q 
> PRE-CREATION 
>   ql/src/test/results/clientpositive/storage_handler_link_external_jar.q.out 
> PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30909/diff/
> 
> 
> Testing
> ---
> 
> newly added UT passed locally
> 
> 
> Thanks,
> 
> cheng xu
> 
>



[jira] [Updated] (HIVE-9708) Remove testlibs directory

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9708:
---
Description: 
The {{testlibs}} directory is left over from the old ant build. We can delete 
it as it's downloaded by maven now:

https://github.com/apache/hive/blob/trunk/pom.xml#L610

  was:The {{testlibs}} directory is left over from the old ant build. We can 
delete it.


> Remove testlibs directory
> -
>
> Key: HIVE-9708
> URL: https://issues.apache.org/jira/browse/HIVE-9708
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9708.patch
>
>
> The {{testlibs}} directory is left over from the old ant build. We can delete 
> it as it's downloaded by maven now:
> https://github.com/apache/hive/blob/trunk/pom.xml#L610



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9708) Remove testlibs directory

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9708:
---
Fix Version/s: 1.1.0
Affects Version/s: 1.1.0
   Status: Patch Available  (was: Open)

> Remove testlibs directory
> -
>
> Key: HIVE-9708
> URL: https://issues.apache.org/jira/browse/HIVE-9708
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9708.patch
>
>
> The {{testlibs}} directory is left over from the old ant build. We can delete 
> it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9708) Remove testlibs directory

2015-02-17 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9708:
--

 Summary: Remove testlibs directory
 Key: HIVE-9708
 URL: https://issues.apache.org/jira/browse/HIVE-9708
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-9708.patch

The {{testlibs}} directory is left over from the old ant build. We can delete 
it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9705) All curator deps should be listed in dependency management section

2015-02-17 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324695#comment-14324695
 ] 

Brock Noland commented on HIVE-9705:


The UDAF test is flaky and {{TestCustomAuthentication}} passes locally.

> All curator deps should be listed in dependency management section
> --
>
> Key: HIVE-9705
> URL: https://issues.apache.org/jira/browse/HIVE-9705
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9705.patch
>
>
> HADOOP-11492 brings in a new version of curator which doesn't work for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9708) Remove testlibs directory

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9708:
---
Attachment: HIVE-9708.patch

> Remove testlibs directory
> -
>
> Key: HIVE-9708
> URL: https://issues.apache.org/jira/browse/HIVE-9708
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9708.patch
>
>
> The {{testlibs}} directory is left over from the old ant build. We can delete 
> it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9707) ExecDriver does not get token from environment

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9707:
---
Attachment: HIVE-9707.patch

> ExecDriver does not get token from environment
> --
>
> Key: HIVE-9707
> URL: https://issues.apache.org/jira/browse/HIVE-9707
> Project: Hive
>  Issue Type: Improvement
>        Reporter: Brock Noland
> Attachments: HIVE-9707.patch
>
>
> Broken in HIVE-8828



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9707) ExecDriver does not get token from environment

2015-02-17 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9707:
--

 Summary: ExecDriver does not get token from environment
 Key: HIVE-9707
 URL: https://issues.apache.org/jira/browse/HIVE-9707
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland


Broken in HIVE-8828



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9707) ExecDriver does not get token from environment

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9707:
---
Assignee: Brock Noland
  Status: Patch Available  (was: Open)

> ExecDriver does not get token from environment
> --
>
> Key: HIVE-9707
> URL: https://issues.apache.org/jira/browse/HIVE-9707
> Project: Hive
>  Issue Type: Improvement
>        Reporter: Brock Noland
>    Assignee: Brock Noland
> Attachments: HIVE-9707.patch
>
>
> Broken in HIVE-8828



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9705) All curator deps should be listed in dependency management section

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9705:
---
Affects Version/s: 1.2.0
   Status: Patch Available  (was: Open)

> All curator deps should be listed in dependency management section
> --
>
> Key: HIVE-9705
> URL: https://issues.apache.org/jira/browse/HIVE-9705
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9705.patch
>
>
> HADOOP-11492 brings in a new version of curator which doesn't work for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9705) All curator deps should be listed in dependency management section

2015-02-17 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9705:
---
Attachment: HIVE-9705.patch

> All curator deps should be listed in dependency management section
> --
>
> Key: HIVE-9705
> URL: https://issues.apache.org/jira/browse/HIVE-9705
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-9705.patch
>
>
> HADOOP-11492 brings in a new version of curator which doesn't work for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9705) All curator deps should be listed in dependency management section

2015-02-17 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9705:
--

 Summary: All curator deps should be listed in dependency 
management section
 Key: HIVE-9705
 URL: https://issues.apache.org/jira/browse/HIVE-9705
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland


HADOOP-11492 brings in a new version of curator which doesn't work for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9703) Merge from Spark branch to trunk 02/16/2015

2015-02-17 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324322#comment-14324322
 ] 

Brock Noland commented on HIVE-9703:


+1

> Merge from Spark branch to trunk 02/16/2015
> ---
>
> Key: HIVE-9703
> URL: https://issues.apache.org/jira/browse/HIVE-9703
> Project: Hive
>  Issue Type: Task
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Attachments: HIVE-9703.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] Apache Hive 1.1.0 Release Candidate 1

2015-02-16 Thread Brock Noland
Apache Hive 1.1.0 Release Candidate 1 is available here:
http://people.apache.org/~brock/apache-hive-1.1.0-rc1/

Maven artifacts are available here:
https://repository.apache.org/content/repositories/orgapachehive-1024/

Source tag for RC1 is at:
http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc1/

My key is located here: https://people.apache.org/keys/group/hive.asc

Voting will conclude in 72 hours


[jira] [Updated] (HIVE-9701) JMH module does not compile under hadoop-1 profile

2015-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9701:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> JMH module does not compile under hadoop-1 profile
> --
>
> Key: HIVE-9701
> URL: https://issues.apache.org/jira/browse/HIVE-9701
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Blocker
> Fix For: 1.1.0
>
> Attachments: HIVE-9701.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9702) Fix HOS ptest environment

2015-02-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323406#comment-14323406
 ] 

Brock Noland commented on HIVE-9702:


From 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-730/failed/TestCliDriver-alter_char1.q-serde_reported_schema.q-bucketmapjoin1.q-and-12-more/TEST-TestCliDriver-alter_char1.q-serde_reported_schema.q-bucketmapjoin1.q-and-12-more-TEST-org.apache.hadoop.hive.cli.TestCliDriver.xml

{{The jurisdiction policy files are not signed by a trusted signer!}}

First google hit for that message:
http://stackoverflow.com/questions/9745193/java-lang-securityexception-the-jurisdiction-policy-files-are-not-signed-by-a-t
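A quick way to check which JCE policy files a JDK has installed (a generic diagnostic, not something from this thread) is to ask for the maximum allowed AES key length:

```java
import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // With only the default (limited) policy files this reports 128;
        // with the unlimited-strength policy files installed it reports
        // Integer.MAX_VALUE. Modern JDKs ship unlimited by default.
        int maxAes = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max AES key length: " + maxAes);
    }
}
```

If this throws or reports 128 on the ptest slaves, the JRE's policy files were likely replaced with unsigned copies, matching the "not signed by a trusted signer" failure above.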

> Fix HOS ptest environment
> -
>
> Key: HIVE-9702
> URL: https://issues.apache.org/jira/browse/HIVE-9702
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Sergio Peña
>
> Precommits for HOS are failing.
> e.g. 
> http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/730/testReport/junit/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_authorization_4/
> {noformat}
> Begin query: authorization_4.q
> java.lang.NoClassDefFoundError: Could not initialize class 
> javax.crypto.JceSecurity
>   at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324)
>   at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:157)
>   at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
>   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
>   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>   at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
>   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1185)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1041)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305)
>   at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1019)
>   at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:993)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:234)
>   at 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4(TestCliDriver.java:166)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9702) Fix HOS ptest environment

2015-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9702:
---
Description: 
Precommits for HOS are failing.

e.g. 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/730/testReport/junit/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_authorization_4/
{noformat}
Begin query: authorization_4.q
java.lang.NoClassDefFoundError: Could not initialize class 
javax.crypto.JceSecurity
at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324)
at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:157)
at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1185)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1041)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305)
at 
org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1019)
at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:993)
at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:234)
at 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4(TestCliDriver.java:166)
{noformat}

  was:
Precommits for HOS are failing with:
{noformat}
Begin query: authorization_4.q
java.lang.NoClassDefFoundError: Could not initialize class 
javax.crypto.JceSecurity
at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324)
at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:157)
at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at

[jira] [Created] (HIVE-9702) Fix HOS ptest environment

2015-02-16 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9702:
--

 Summary: Fix HOS ptest environment
 Key: HIVE-9702
 URL: https://issues.apache.org/jira/browse/HIVE-9702
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Sergio Peña


Precommits for HOS are failing with:
{noformat}
Begin query: authorization_4.q
java.lang.NoClassDefFoundError: Could not initialize class 
javax.crypto.JceSecurity
at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:324)
at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:157)
at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:207)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:473)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:428)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1185)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1051)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1041)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:305)
at 
org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1019)
at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:993)
at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:234)
at 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_4(TestCliDriver.java:166)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Apache Hive 1.1.0 Release Candidate 0

2015-02-16 Thread Brock Noland
ive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[244,32]
> cannot find symbol
> [ERROR] symbol:   class Configuration
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench.StorageFormatTest
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[278,26]
> cannot access org.apache.hadoop.mapred.OutputFormat
> [ERROR] class file for org.apache.hadoop.mapred.OutputFormat not found
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[283,15]
> cannot find symbol
> [ERROR] symbol:   class FileSplit
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench.StorageFormatTest
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[293,12]
> cannot access org.apache.hadoop.mapred.FileOutputFormat
> [ERROR] class file for org.apache.hadoop.mapred.FileOutputFormat not found
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[342,34]
> cannot find symbol
> [ERROR] symbol:   class Configuration
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[342,10]
> cannot find symbol
> [ERROR] symbol:   variable FileSystem
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[345,21]
> cannot find symbol
> [ERROR] symbol:   class Path
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[348,20]
> cannot find symbol
> [ERROR] symbol:   class Path
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench
> [ERROR]
>
> /home/xzhang/tmp/hive1.1.0/apache-hive-1.1.0-src/itests/hive-jmh/src/main/java/org/apache/hive/benchmark/storage/ColumnarStorageBench.java:[375,28]
> cannot find symbol
> [ERROR] symbol:   class Path
> [ERROR] location: class
> org.apache.hive.benchmark.storage.ColumnarStorageBench
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn  -rf :hive-jmh
>
>
> On Sat, Feb 14, 2015 at 9:14 PM, Szehon Ho  wrote:
>
> > Hi Brock,
> >
> > Where is your key used to sign the release?  I checked in usual places (
> > https://people.apache.org/keys/group/hive.asc,
> > https://people.apache.org/keys/group/hive.asc) and couldn't find your
> key
> > in there?  I might be missing the location.
> >
> > Other that than, everything checks out:
> > 1. Verified md5 on src and bin
> > 2. Built src, ran queries from both HiveCLI and Beeline
> > 3. On bin, ran queries from both HiveCLI and Beeline
> > 4. Verified no SNAPSHOT dependencies listed in poms
> >
> > Thanks
> > Szehon
> >
> > On Sat, Feb 14, 2015 at 2:48 PM, Brock Noland 
> wrote:
> >
> > > Apache Hive 1.1.0 Release Candidate 0 is available here:
> > > http://people.apache.org/~brock/apache-hive-1.1.0-rc0/
> > >
> > > Maven artifacts are available here:
> > > *
> https://repository.apache.org/content/repositories/orgapachehive-1023/
> > > <
> https://repository.apache.org/content/repositories/orgapachehive-1023/
> > >*
> > >
> > > Source tag for RC0 is at:
> > > http://svn.apache.org/repos/asf/hive/tags/release-1.1.0-rc0/
> > >
> > > Voting will conclude in 96 hours (due to the holiday)
> > >
> >
>


[jira] [Updated] (HIVE-9686) HiveMetastore.logAuditEvent can be used before sasl server is started

2015-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9686:
---
Fix Version/s: (was: 1.2.0)
   1.1.0

> HiveMetastore.logAuditEvent can be used before sasl server is started
> -
>
> Key: HIVE-9686
> URL: https://issues.apache.org/jira/browse/HIVE-9686
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.0.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9686.patch
>
>
> Metastore listeners can use logAudit before the sasl server is started 
> resulting in an NPE.
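A minimal sketch of the init-ordering hazard the description names — an audit helper that reads server details which are only set once the SASL server has started. All names here are illustrative, not Hive's actual code; the guard shown is one generic way to tolerate early listener calls:

```java
public class AuditSketch {
    // Set only after the (simulated) SASL server has started.
    static volatile String serverAddress;

    static void startSaslServer() {
        serverAddress = "thrift://metastore-host:9083";
    }

    static void logAuditEvent(String cmd) {
        String addr = serverAddress;
        // Guard: metastore listeners may fire before startSaslServer();
        // dereferencing the unset field directly is the reported NPE.
        System.out.println("audit [" + (addr == null ? "unknown" : addr) + "] " + cmd);
    }

    public static void main(String[] args) {
        logAuditEvent("create_table"); // safe even before SASL startup
        startSaslServer();
        logAuditEvent("drop_table");
    }
}
```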



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9685) CLIService should create SessionState after logging into kerberos

2015-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9685:
---
Fix Version/s: (was: 1.2.0)
   1.1.0

> CLIService should create SessionState after logging into kerberos
> -
>
> Key: HIVE-9685
> URL: https://issues.apache.org/jira/browse/HIVE-9685
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 1.1.0
>
> Attachments: HIVE-9685.patch
>
>
> {noformat}
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
> at 
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:409)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:230)
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1483)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:64)
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2841)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2860)
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:453)
> at 
> org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:123)
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:81)
> at 
> org.apache.hive.service.CompositeService.init(CompositeService.java:59)
> at 
> org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92)
> at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309)
> at 
> org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68)
> at 
> org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523)
> at 
> org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {noformat}
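A toy model of the ordering bug the issue title describes (illustrative names, not the actual patch): creating session state opens a metastore connection as a side effect, and that connection only has Kerberos credentials if the login happened first. Doing the login afterwards produces the "no valid credentials" failure above.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InitOrder {
    static final AtomicBoolean loggedIn = new AtomicBoolean(false);

    // Stand-in for the kerberos keytab login performed at HS2 startup.
    static void kerberosLogin() {
        loggedIn.set(true);
    }

    // Stand-in for SessionState creation, which opens a metastore
    // connection as a side effect and therefore needs credentials.
    static void createSessionState() {
        if (!loggedIn.get()) {
            throw new IllegalStateException(
                "GSS initiate failed: no valid credentials");
        }
    }

    public static void main(String[] args) {
        kerberosLogin();      // fix: log in first...
        createSessionState(); // ...then create SessionState
        System.out.println("session started with credentials");
    }
}
```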



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9701) JMH module does not compile under hadoop-1 profile

2015-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9701:
---
Attachment: HIVE-9701.patch

> JMH module does not compile under hadoop-1 profile
> --
>
> Key: HIVE-9701
> URL: https://issues.apache.org/jira/browse/HIVE-9701
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Blocker
> Fix For: 1.1.0
>
> Attachments: HIVE-9701.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9701) JMH module does not compile under hadoop-1 profile

2015-02-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9701:
---
Fix Version/s: 1.1.0
Affects Version/s: 1.1.0
   Status: Patch Available  (was: Open)

> JMH module does not compile under hadoop-1 profile
> --
>
> Key: HIVE-9701
> URL: https://issues.apache.org/jira/browse/HIVE-9701
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>    Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Blocker
> Fix For: 1.1.0
>
> Attachments: HIVE-9701.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9701) JMH module does not compile under hadoop-1 profile

2015-02-16 Thread Brock Noland (JIRA)
Brock Noland created HIVE-9701:
--

 Summary: JMH module does not compile under hadoop-1 profile
 Key: HIVE-9701
 URL: https://issues.apache.org/jira/browse/HIVE-9701
 Project: Hive
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9696) Address RB comments for HIVE-9425 [Spark Branch]

2015-02-15 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-9696:
---
Attachment: HIVE-9696.1-spark.patch

> Address RB comments for HIVE-9425 [Spark Branch]
> 
>
> Key: HIVE-9696
> URL: https://issues.apache.org/jira/browse/HIVE-9696
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Rui Li
>Priority: Trivial
> Attachments: HIVE-9696.1-spark.patch, HIVE-9696.1-spark.patch
>
>
> A followup task of HIVE-9425.
> The pending RB comment can be found 
> [here|https://reviews.apache.org/r/30984/#comment118482].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

