Re: Tez query failed with OutOfMemoryError: Java heap space

2017-07-11 Thread Vaibhav Gumashta
Hi Xin,

Can you provide these:

  1.  Output of explain plan
  2.  Output of set -v (this will list the configs, so you might want to
anonymize these)

In addition to that, it looks like vertex vertex_1495595408051_21107_2_03
failed with an OOM. Using the Tez counters, you can find the amount of data input
to this vertex, which can further help you narrow down the root cause.
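If the counters confirm that Map 3 is pulling more shuffle data than fits in the
container heap, one thing you could experiment with is lowering the fraction of
heap the unordered shuffle is allowed to buffer in memory, so fetched inputs
spill to disk instead of failing inside MemoryFetchedInput allocation. The sketch
below is only illustrative: the property names are Tez runtime settings, the
values are untuned assumptions, and the host/port are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ShuffleMemoryTuning {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // Placeholder host/port; point this at your HiveServer2 instance.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://hs2host:10000/default", "user", "");
         Statement stmt = conn.createStatement()) {
      // Keep less of the fetched shuffle input in memory (illustrative values).
      stmt.execute("set tez.runtime.shuffle.fetch.buffer.percent=0.3");
      stmt.execute("set tez.runtime.shuffle.memory.limit.percent=0.15");
      // Re-run the failing query in this same session so the settings apply.
    }
  }
}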

Hope this helps,
—Vaibhav

From: "Yang, Xin" <xiy...@visa.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Thursday, July 6, 2017 at 10:37 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Re: Tez query failed with OutOfMemoryError: Java heap space

Here is the version information:

Hive: 1.2.1
Tez: 0.8.5
Hadoop: 2.6.0-cdh5.8.3

Please let me know if you need more information.

Regards,
Xin

From: "Yang, Xin" mailto:xiy...@visa.com>>
Date: Thursday, June 29, 2017 at 11:48 AM
To: "user@hive.apache.org" 
mailto:user@hive.apache.org>>
Subject: Tez query failed with OutOfMemoryError: Java heap space

Hi,

We ran a Tez query and it failed with an OOM. We then computed stats, but it still
failed with the same OOM.

Settings:

set hive.tez.container.size=4096;
set tez.am.resource.memory.mb=1024;
set hive.tez.java.opts=-Xmx3276m;

set hive.tez.dynamic.partition.pruning=false;
set hive.tez.dynamic.partition.pruning.max.event.size=1048576;
set hive.tez.dynamic.partition.pruning.max.data.size=104857600;

set hive.prewarm.enabled=true;
set hive.prewarm.numcontainers=10;

set tez.am.container.reuse.enabled=true;

set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;

set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=20971520;
set hive.mapjoin.hybridgrace.hashtable=false;
set hive.optimize.bucketmapjoin.sortedmerge=false;
set hive.map.aggr.hash.percentmemory=0.5;
set hive.map.aggr=true;

set hive.vectorized.execution.enabled=false;
set hive.vectorized.execution.reduce.enabled=false;
set hive.vectorized.execution.reduce.groupby.enabled=false;

set hive.exec.parallel=true;
set hive.exec.parallel.thread.number=16;

set hive.exec.reducers.max=800;
set hive.optimize.reducededuplication=true;
set hive.optimize.reducededuplication.min.reducer=4;

set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=false;
set hive.merge.smallfiles.avgsize=1600;
set hive.merge.size.per.task=25600;
set hive.smbjoin.cache.rows=1;
set hive.fetch.task.conversion=more;
set hive.optimize.sort.dynamic.partition=true;

set hive.tez.auto.reducer.parallelism=true;

Stacktrace:

Status: Failed
Vertex failed, vertexName=Map 3, vertexId=vertex_1495595408051_21107_2_03, 
diagnostics=[Task failed, taskId=task_1495595408051_21107_2_03_00, 
diagnostics=[TaskAttempt 0 failed, info=[Error: exceptionThrown=java.lang.OutOfMemoryError: Java heap space
at 
org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at 
org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at 
org.apache.tez.runtime.library.common.shuffle.MemoryFetchedInput.<init>(MemoryFetchedInput.java:38)
at 
org.apache.tez.runtime.library.common.shuffle.impl.SimpleFetchedInputAllocator.allocate(SimpleFetchedInputAllocator.java:141)
at 
org.apache.tez.runtime.library.common.shuffle.Fetcher.fetchInputs(Fetcher.java:717)
at 
org.apache.tez.runtime.library.common.shuffle.Fetcher.doHttpFetch(Fetcher.java:489)
at 
org.apache.tez.runtime.library.common.shuffle.Fetcher.doHttpFetch(Fetcher.java:398)
at 
org.apache.tez.runtime.library.common.shuffle.Fetcher.callInternal(Fetcher.java:195)
at 
org.apache.tez.runtime.library.common.shuffle.Fetcher.callInternal(Fetcher.java:70)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
, errorMessage=Fetch failed:java.lang.OutOfMemoryError: Java heap space
at 
org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at 
org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at 
org.apache.tez.runtime.library.common.shuffle.MemoryFetchedInput.<init>(MemoryFetchedInput.java:38)
at 
org.apache.tez.runtime.library.common.shuffle.impl.SimpleFetchedInputAllocator.allocate(SimpleFetchedInputAllocator.java:141)
at 
org.apache.tez.runtime.library.common.shuffle.Fetcher.fetchInputs(Fetcher.java:717)
at 
org.apache.tez.runtime.l

Re: Jimmy Xiang now a Hive PMC member

2017-05-25 Thread Vaibhav Gumashta
Congrats Jimmy!

-Vaibhav

On 5/25/17, 10:48 AM, "Vineet Garg"  wrote:

>Congrats Jimmy!
>
>> On May 24, 2017, at 9:16 PM, Xuefu Zhang  wrote:
>> 
>> Hi all,
>> 
>> It's an honor to announce that the Apache Hive PMC has recently voted to
>>invite Jimmy Xiang as a new Hive PMC member. Please join me in
>>congratulating him and looking forward to a bigger role that he will
>>play in Apache Hive project.
>> 
>> Thanks,
>> Xuefu
>
>



CVE-2016-3083: Apache Hive SSL vulnerability bug disclosure

2017-05-24 Thread Vaibhav Gumashta
Severity: Important

Vendor: The Apache Software Foundation

Versions Affected:
Apache Hive 0.13.x
Apache Hive 0.14.x
Apache Hive 1.0.0 - 1.0.1
Apache Hive 1.1.0 - 1.1.1
Apache Hive 1.2.0 - 1.2.1
Apache Hive 2.0.0

Description:

Apache Hive (JDBC + HiveServer2) implements SSL for plain TCP and HTTP 
connections (it supports both transport modes). While validating the server's 
certificate during the connection setup, the client doesn't seem to be 
verifying the common name attribute of the certificate. In this way, if a JDBC 
client sends an SSL request to server abc.com, and the server responds with a 
valid certificate (certified by CA) but issued to xyz.com, the client will 
accept that as a valid certificate and the SSL handshake will go through.
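For context, the affected connection is the standard Hive JDBC SSL transport; a
minimal sketch is below (host, truststore path and password are placeholders,
and the URL options are the usual ssl/sslTrustStore/trustStorePassword
parameters rather than anything specific to this advisory):

import java.sql.Connection;
import java.sql.DriverManager;

public class SslJdbcClient {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // On affected versions the driver checked the CA signature of the server
    // certificate but not that its common name matched "abc.com".
    String url = "jdbc:hive2://abc.com:10000/default;ssl=true;"
        + "sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit";
    try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
      System.out.println("connected: " + !conn.isClosed());
    }
  }
}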

Mitigation:

Upgrade to Apache Hive 1.2.2 for the 1.x release line, to Apache Hive 2.0.1 or
later for the 2.0.x release line, or to Apache Hive 2.1.0 or later for the 2.1.x
release line.

Credit: This issue was discovered by Branden Crawford from Inetco Systems
Limited (inetco.com).


[ANNOUNCE] Apache Hive 1.2.2 Released

2017-04-07 Thread Vaibhav Gumashta
The Apache Hive team is proud to announce the release of Apache Hive version 
1.2.2.

The Apache Hive (TM) data warehouse software facilitates querying and
managing large datasets residing in distributed storage. Built on top
of Apache Hadoop (TM), it provides, among others:

* Tools to enable easy data extract/transform/load (ETL)

* A mechanism to impose structure on a variety of data formats

* Access to files stored either directly in Apache HDFS (TM) or in other
  data storage systems such as Apache HBase (TM)

* Query execution via Apache Hadoop MapReduce, Apache Tez and Apache Spark 
frameworks.

For Hive release details and downloads, please visit:
https://hive.apache.org/downloads.html

Hive 1.2.2 Release Notes are available here:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12332952&styleName=Text&projectId=12310843

We would like to thank the many contributors who made this release
possible.

Regards,

The Apache Hive Team


Re: [ANNOUNCE] New Hive Committer - Rajesh Balamohan

2016-12-15 Thread Vaibhav Gumashta
Congrats Rajesh!

-Vaibhav

On 12/15/16, 3:55 AM, "Peter Vary"  wrote:

>Congratulations Rajesh!
>
>> On Dec 15, 2016, at 6:40 AM, Rui Li  wrote:
>> 
>> Congratulations :)
>> 
>> On Thu, Dec 15, 2016 at 6:50 AM, Gunther Hagleitner <
>> ghagleit...@hortonworks.com> wrote:
>> 
>>> Congrats Rajesh!
>>> 
>>> From: Jimmy Xiang 
>>> Sent: Wednesday, December 14, 2016 11:38 AM
>>> To: user@hive.apache.org
>>> Cc: d...@hive.apache.org; rbalamo...@apache.org
>>> Subject: Re: [ANNOUNCE] New Hive Committer - Rajesh Balamohan
>>> 
>>> Congrats, Rajesh!!
>>> 
>>> On Wed, Dec 14, 2016 at 11:32 AM, Sergey Shelukhin
>>>  wrote:
 Congratulations!
 
 From: Chao Sun 
 Reply-To: "user@hive.apache.org" 
 Date: Wednesday, December 14, 2016 at 10:52
 To: "d...@hive.apache.org" 
 Cc: "user@hive.apache.org" , "
>>> rbalamo...@apache.org"
 
 Subject: Re: [ANNOUNCE] New Hive Committer - Rajesh Balamohan
 
 Congrats Rajesh!
 
 On Wed, Dec 14, 2016 at 9:26 AM, Vihang Karajgaonkar <
>>> vih...@cloudera.com>
 wrote:
> 
> Congrats Rajesh!
> 
> On Wed, Dec 14, 2016 at 1:54 AM, Jesus Camacho Rodriguez <
> jcamachorodrig...@hortonworks.com> wrote:
> 
>> Congrats Rajesh, well deserved! :)
>> 
>> --
>> Jesús
>> 
>> 
>> 
>> 
>> On 12/14/16, 8:41 AM, "Lefty Leverenz" 
>>> wrote:
>> 
>>> Congratulations Rajesh!
>>> 
>>> -- Lefty
>>> 
>>> 
>>> On Tue, Dec 13, 2016 at 11:58 PM, Rajesh Balamohan
>>> >> 
>>> wrote:
>>> 
 Thanks a lot for providing this opportunity and to all for their
>> messages.
 :)
 
 ~Rajesh.B
 
 On Wed, Dec 14, 2016 at 11:33 AM, Dharmesh Kakadia
 >> 
 wrote:
 
> Congrats Rajesh !
> 
> Thanks,
> Dharmesh
> 
> On Tue, Dec 13, 2016 at 7:37 PM, Vikram Dixit K <
>> vikram.di...@gmail.com>
> wrote:
> 
>> Congrats Rajesh! :)
>> 
>> On Tue, Dec 13, 2016 at 9:36 PM, Pengcheng Xiong
>> 
>> wrote:
>> 
>>> Congrats Rajesh! :)
>>> 
>>> On Tue, Dec 13, 2016 at 6:51 PM, Prasanth Jayachandran <
>>> prasan...@apache.org
 wrote:
>>> 
 The Apache Hive PMC has voted to make Rajesh Balamohan a
>> committer on
>>> the
 Apache Hive Project. Please join me in congratulating Rajesh.
 
 Congratulations Rajesh!
 
 Thanks
 Prasanth
>>> 
>> 
>> 
>> 
>> --
>> Nothing better than when appreciated for hard work.
>> -Mark
>> 
> 
> 
 
>> 
 
 
>>> 
>>> 
>> 
>> 
>> -- 
>> Best regards!
>> Rui Li
>> Cell: (+86) 13564950210
>
>



Re: [ANNOUNCE] New PMC Member : Jesus

2016-07-18 Thread Vaibhav Gumashta
Congrats Jesús!

--Vaibhav

From: Vineet Garg 
Sent: Monday, July 18, 2016 6:51 PM
To: d...@hive.apache.org; user@hive.apache.org
Subject: Re: [ANNOUNCE] New PMC Member : Jesus

Congrats Jesus !




On 7/18/16, 10:27 AM, "Jesus Camacho Rodriguez" 
 wrote:

>Thanks everybody! Looking forward to continue contributing to the project!
>
>--
>Jesús
>
>
>
>
>On 7/18/16, 6:21 PM, "Prasanth Jayachandran"  
>wrote:
>
>>Congratulations Jesus!
>>
>>> On Jul 18, 2016, at 10:10 AM, Jimmy Xiang  wrote:
>>>
>>> Congrats!!
>>>
>>> On Mon, Jul 18, 2016 at 9:54 AM, Vihang Karajgaonkar
>>>  wrote:
 Congratulations Jesus!

> On Jul 18, 2016, at 8:30 AM, Sergio Pena  wrote:
>
> Congrats Jesus !!!
>
> On Mon, Jul 18, 2016 at 7:28 AM, Peter Vary  wrote:
>
>> Congratulations Jesus!
>>
>>> On Jul 18, 2016, at 6:55 AM, Wei Zheng  wrote:
>>>
>>> Congrats Jesus!
>>>
>>> Thanks,
>>>
>>> Wei
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 7/17/16, 14:29, "Sushanth Sowmyan"  wrote:
>>>
 Good to have you onboard, Jesus! :)

 On Jul 17, 2016 12:00, "Lefty Leverenz" 
>> wrote:

> Congratulations Jesus!
>
> -- Lefty
>
> On Sun, Jul 17, 2016 at 1:01 PM, Ashutosh Chauhan <
>> hashut...@apache.org>
> wrote:
>
>> Hello Hive community,
>>
>> I'm pleased to announce that Jesus Camacho Rodriguez has accepted the
>> Apache Hive PMC's
>> invitation, and is now our newest PMC member. Many thanks to Jesus 
>> for
>> all of
>> his hard work.
>>
>> Please join me congratulating Jesus!
>>
>> Best,
>> Ashutosh
>> (On behalf of the Apache Hive PMC)
>>
>
>
>>
>>

>>>
>>
>>


Re: [ANNOUNCE] New PMC Member : Pengcheng

2016-07-18 Thread Vaibhav Gumashta
Congrats Pengcheng!

From: Prasanth Jayachandran 
Sent: Monday, July 18, 2016 10:21 AM
To: user@hive.apache.org
Cc: d...@hive.apache.org
Subject: Re: [ANNOUNCE] New PMC Member : Pengcheng

Congratulations Pengcheng!

> On Jul 18, 2016, at 10:10 AM, Jimmy Xiang  wrote:
>
> Congrats!!
>
> On Mon, Jul 18, 2016 at 9:55 AM, Vihang Karajgaonkar
>  wrote:
>> Congratulations!
>>
>>> On Jul 18, 2016, at 5:28 AM, Peter Vary  wrote:
>>>
>>> Congratulations Pengcheng!
>>>
>>>
 On Jul 18, 2016, at 6:55 AM, Wei Zheng  wrote:

 Congrats Pengcheng!

 Thanks,

 Wei






 On 7/17/16, 16:01, "Xuefu Zhang"  wrote:

> Congrats, PengCheng!
>
> On Sun, Jul 17, 2016 at 2:28 PM, Sushanth Sowmyan 
> wrote:
>
>> Welcome aboard Pengcheng! :)
>>
>> On Jul 17, 2016 12:01, "Lefty Leverenz"  wrote:
>>
>>> Congratulations Pengcheng!
>>>
>>> -- Lefty
>>>
>>> On Sun, Jul 17, 2016 at 1:03 PM, Ashutosh Chauhan 
>>> wrote:
>>>
>
> Hello Hive community,
>
> I'm pleased to announce that Pengcheng Xiong has accepted the Apache
 Hive
> PMC's
> invitation, and is now our newest PMC member. Many thanks to Pengcheng
 for
> all of his hard work.
>
> Please join me congratulating Pengcheng!
>
> Best,
> Ashutosh
> (On behalf of the Apache Hive PMC)
>

>>>
>>>
>>
>>>
>>
>




Re: [ANNOUNCE] New Hive PMC Chair - Ashutosh Chauhan

2015-09-16 Thread Vaibhav Gumashta
Congrats Ashutosh!

-Vaibhav

From: Prasanth Jayachandran <pjayachand...@hortonworks.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Wednesday, September 16, 2015 at 12:50 PM
To: "d...@hive.apache.org" <d...@hive.apache.org>, "user@hive.apache.org" <user@hive.apache.org>
Cc: "d...@hive.apache.org" <d...@hive.apache.org>, Ashutosh Chauhan <hashut...@apache.org>
Subject: Re: [ANNOUNCE] New Hive PMC Chair - Ashutosh Chauhan

Congratulations Ashutosh!





On Wed, Sep 16, 2015 at 12:48 PM -0700, "Xuefu Zhang" <xzh...@cloudera.com> wrote:

Congratulations, Ashutosh! Well-deserved.

Thanks to Carl also for the hard work in the past few years!

--Xuefu

On Wed, Sep 16, 2015 at 12:39 PM, Carl Steinbach <c...@apache.org> wrote:

> I am very happy to announce that Ashutosh Chauhan is taking over as the
> new VP of the Apache Hive project. Ashutosh has been a longtime contributor
> to Hive and has played a pivotal role in many of the major advances that
> have been made over the past couple of years. Please join me in
> congratulating Ashutosh on his new role!
>


Re: Empty Table in MR with "union all" (created in Tez)

2015-06-10 Thread Vaibhav Gumashta
Might be related to: https://issues.apache.org/jira/browse/HIVE-10929

Thanks,
-Vaibhav

From: Gufran Mohammed Pathan <gufran.path...@mu-sigma.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Wednesday, June 10, 2015 at 10:18 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Empty Table in MR with "union all" (created in Tez)

Hi,

I'm unable to retrieve rows from a table created in Tez when using MR as 
execution engine. This happens only when the create table statement has "union 
all".

Here's a way to reproduce the issue:

set hive.execution.engine=tez;

create table test_table as
select "1" from table1
union all
select "1" from table2 ;

select * from test_table;  --gives correct rows

set hive.execution.engine=mr;

select * from test_table;  --gives empty table

Any ideas on why this must be happening? This issue does not occur when I don't 
use a "union all" query.

I'm using HDP 2.2 with Hive 0.14 and Tez 0.4.0.2.

Thanks,
Gufran Pathan| +91 7760913355|www.mu-sigma.com|

Correlation does not imply causation, but it does waggle its eyebrows 
suggestively and gesture furtively while mouthing "look over there." -Randall 
Munroe

Disclaimer: http://www.mu-sigma.com/disclaimer.html


Re: [ANNOUNCE] New Hive Committer - Mithun Radhakrishnan

2015-04-15 Thread Vaibhav Gumashta
Congrats Mithun.

-Vaibhav

From: venkatanathen kannan <venkatanat...@yahoo.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>, venkatanathen kannan <venkatanat...@yahoo.com>
Date: Wednesday, April 15, 2015 at 11:40 AM
To: "user@hive.apache.org" <user@hive.apache.org>, "d...@hive.apache.org" <d...@hive.apache.org>, Chris Drome <cdr...@yahoo-inc.com>
Cc: "mit...@apache.org" <mit...@apache.org>
Subject: Re: [ANNOUNCE] New Hive Committer - Mithun Radhakrishnan

Congrats Mithun !



On Wednesday, April 15, 2015 2:18 PM, Sergey Shelukhin <ser...@hortonworks.com> wrote:


Congrats!

From: Cheng A <cheng.a...@intel.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, April 14, 2015 at 18:03
To: "user@hive.apache.org" <user@hive.apache.org>, "d...@hive.apache.org" <d...@hive.apache.org>, Chris Drome <cdr...@yahoo-inc.com>
Cc: "mit...@apache.org" <mit...@apache.org>
Subject: RE: [ANNOUNCE] New Hive Committer - Mithun Radhakrishnan

Congrats Mithun!

From: Gunther Hagleitner [mailto:ghagleit...@hortonworks.com]
Sent: Wednesday, April 15, 2015 8:10 AM
To: d...@hive.apache.org; Chris Drome; 
user@hive.apache.org
Cc: mit...@apache.org
Subject: Re: [ANNOUNCE] New Hive Committer - Mithun Radhakrishnan

Congrats Mithun!

Thanks,
Gunther.

From: Chao Sun <c...@cloudera.com>
Sent: Tuesday, April 14, 2015 3:48 PM
To: d...@hive.apache.org; Chris Drome
Cc: user@hive.apache.org; 
mit...@apache.org
Subject: Re: [ANNOUNCE] New Hive Committer - Mithun Radhakrishnan

Congrats Mithun!

On Tue, Apr 14, 2015 at 3:29 PM, Chris Drome <cdr...@yahoo-inc.com.invalid> wrote:
Congratulations Mithun!



 On Tuesday, April 14, 2015 2:57 PM, Carl Steinbach <c...@apache.org> wrote:


 The Apache Hive PMC has voted to make Mithun Radhakrishnan a committer on the 
Apache Hive Project.
Please join me in congratulating Mithun.
Thanks.
- Carl






--
Best,
Chao




Re: [ANNOUNCE] New Hive Committers - Jimmy Xiang, Matt McCline, and Sergio Pena

2015-03-23 Thread Vaibhav Gumashta
Congrats to all.

From: Sergey Shelukhin <ser...@hortonworks.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Monday, March 23, 2015 at 12:52 PM
To: "user@hive.apache.org" <user@hive.apache.org>, "d...@hive.apache.org" <d...@hive.apache.org>, Matthew McCline <mmccl...@hortonworks.com>, "jxi...@apache.org" <jxi...@apache.org>, Sergio Pena <sergio.p...@cloudera.com>
Subject: Re: [ANNOUNCE] New Hive Committers - Jimmy Xiang, Matt McCline, and 
Sergio Pena

Congrats!

From: Carl Steinbach <c...@apache.org>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Monday, March 23, 2015 at 10:08
To: "user@hive.apache.org" <user@hive.apache.org>, "d...@hive.apache.org" <d...@hive.apache.org>, Matthew McCline <mmccl...@hortonworks.com>, "jxi...@apache.org" <jxi...@apache.org>, Sergio Pena <sergio.p...@cloudera.com>
Subject: [ANNOUNCE] New Hive Committers - Jimmy Xiang, Matt McCline, and Sergio 
Pena

The Apache Hive PMC has voted to make Jimmy Xiang, Matt McCline, and Sergio 
Pena committers on the Apache Hive Project.

Please join me in congratulating Jimmy, Matt, and Sergio.

Thanks.

- Carl



Re: Connecting to hiveserver via beeline is established but I never get beeline prompt

2015-03-06 Thread Vaibhav Gumashta
Looks like you are trying to connect to HiveServer v1. Beeline is the CLI 
client that is supported to work with HiveServer2. You can start HiveServer2 
like this: hive --service hiveserver2 and then connect to it using beeline.

Thanks,
—Vaibhav

From: Mich Talebzadeh <m...@peridale.co.uk>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Friday, March 6, 2015 at 1:02 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Connecting to hiveserver via beeline is established but I never get 
beeline prompt

Hi,

I am trying to use beeline to access Hive/Hadoop.

Hive server is running on the Linux node that Hadoop is installed as follows:

hduser@rhes564::/home/hduser/jobs> hive --service hiveserver 10001 -v &
[1] 30025
hduser@rhes564::/home/hduser/jobs> Starting Hive Thrift Server
This usage has been deprecated, consider using the new command line syntax (run 
with -h to see usage information)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/hduser/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/hduser/apache-hive-0.14.0-bin/lib/hive-jdbc-0.14.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/hduser/apache-hive-0.14.0-bin/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Starting hive server on port 10001 with 100 min worker threads and 2147483647 
max worker threads

Now locally I use beeline to access it but it just does not go further

Beeline version 0.14.0 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10001/default 
org.apache.hive.jdbc.HiveDriver
scan complete in 13ms
Connecting to jdbc:hive2://localhost:10001/default
Enter password for jdbc:hive2://localhost:10001/default:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/hduser/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/hduser/apache-hive-0.14.0-bin/lib/hive-jdbc-0.14.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Now if I go to another Linux host that has Hive and Hadoop installed (single
node on another Linux host, IP address 50.140.197.216) I get the same issue

beeline
Beeline version 0.14.0 by Apache Hive
beeline> !connect jdbc:hive2://rhes564:10001 "" "" 
org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://rhes564:10001
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/hduser/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/hduser/apache-hive-0.14.0-bin/lib/hive-jdbc-0.14.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Error: Could not open client transport with JDBC Uri: 
jdbc:hive2://rhes564:10001: java.net.SocketException: Connection reset 
(state=08S01,code=0)
0: jdbc:hive2://rhes564:10001 (closed)> !connect jdbc:hive2://rhes564:10001 "" 
"" org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://rhes564:10001

Now I can see that the connections are established when I do

hduser@rhes564::/home/hduser/jobs> netstat -anlp | grep 10001
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp0  0 0.0.0.0:10001   0.0.0.0:*   
LISTEN  30025/java
tcp0  0 127.0.0.1:43507 127.0.0.1:10001 
ESTABLISHED 32621/java
tcp0  0 127.0.0.1:10001 127.0.0.1:43507 
ESTABLISHED 30025/java
tcp0  0 50.140.197.217:1000150.140.197.216:50921
ESTABLISHED 30025/java

rhes564 has IP address 50.140.197.217 and is the host where hiveserver is
running. There is one connection from the local beeline (see above, IP address
50.140.197.216)

So the connections are established. Just it never gets to the prompt!

Any ideas appreciated.

Thanks,

Mich

NOTE: The information in this email is proprietary and confidential. This 
message is for the designated recipient only, if you are not the intended 
recipient, you should destroy it immediately. Any information in this message 
shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries 
or their employees, un

Re: ERROR OutOfMemoryError: Java heap space

2015-02-25 Thread Vaibhav Gumashta
Hi,

auth=noSasl should only be provided in the JDBC URL if
"hive.server2.authentication" is explicitly set to NOSASL in your hive-site.xml.
Otherwise, the JDBC client tries to open a plain socket while the server is
expecting a SASL negotiation to happen. Due to an issue with Thrift, this
results in an OOM on the client side.
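As a small sketch of the two cases (host and port are placeholders; the point is
that the auth=noSasl suffix must match the server's hive.server2.authentication
setting):

import java.sql.Connection;
import java.sql.DriverManager;

public class AuthModeExample {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // Default server configuration (SASL/PLAIN transport): no auth=noSasl in the URL.
    String saslUrl = "jdbc:hive2://hs2host:10000/default";

    // Only when hive-site.xml sets hive.server2.authentication=NOSASL:
    String noSaslUrl = "jdbc:hive2://hs2host:10000/default;auth=noSasl";

    // Using the URL that matches the server avoids the client-side OOM described above.
    try (Connection conn = DriverManager.getConnection(saslUrl, "user", "")) {
      System.out.println("connected with the transport the server expects");
    }
  }
}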

—Vaibhav

From: Srinivas Thunga <srinivas.thu...@gmail.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Wednesday, February 25, 2015 at 3:40 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Re: ERROR OutOfMemoryError: Java heap space

Is your problem solved?

Thanks & Regards,

Srinivas T

On Wed, Feb 25, 2015 at 4:17 PM, Srinivas Thunga <srinivas.thu...@gmail.com> wrote:
Hi,

You can set the fetch size based on certain data types like blob, image, etc.

statement.setFetchSize(1000);

Try this.


Thanks & Regards,

Srinivas T

On Wed, Feb 25, 2015 at 4:10 PM, Jadhav Shweta <jadhav.shw...@tcs.com> wrote:
The query works fine if I have a small data set in the same table,
but it throws the error for a large data set.

Thanks
Shweta Jadhav



----- Srinivas Thunga <srinivas.thu...@gmail.com> wrote: -----
To: "user@hive.apache.org" <user@hive.apache.org>
From: Srinivas Thunga <srinivas.thu...@gmail.com>
Date: 02/25/2015 03:45PM

Subject: Re: ERROR OutOfMemoryError: Java heap space

Not required. Make sure there is no unnecessary looping in your code.

Thanks & Regards,

Srinivas T

On Wed, Feb 25, 2015 at 2:58 PM, Jadhav Shweta <jadhav.shw...@tcs.com> wrote:
Do I need to increase HADOOP_HEAPSIZE? I haven't set it.
I have a cluster with 4 machines, each having 8GB RAM.
I read somewhere that we should mention auth=noSasl in the Hive JDBC URL. Is it
necessary?

Thanks
Shweta Jadhav



----- Jadhav Shweta <jadhav.shw...@tcs.com> wrote: -----
To: user@hive.apache.org
From: Jadhav Shweta <jadhav.shw...@tcs.com>
Date: 02/25/2015 02:54PM

Subject: Re: ERROR OutOfMemoryError: Java heap space

I am running the query using spring batch framework.

Thanks
Shweta Jadhav

----- Srinivas Thunga <srinivas.thu...@gmail.com> wrote: -----
To: "user@hive.apache.org" <user@hive.apache.org>
From: Srinivas Thunga <srinivas.thu...@gmail.com>
Date: 02/25/2015 02:43PM
Subject: Re: ERROR OutOfMemoryError: Java heap space

Hi,

There might be a problem in your code.

Generally it happens when there is unnecessary looping.

Please check your Java code.

Thanks & Regards,

Srinivas T

On Wed, Feb 25, 2015 at 2:38 PM, Jadhav Shweta <jadhav.shw...@tcs.com> wrote:
Hi

I have installed Apache Hive 0.13.0.
I have configured the metastore with a Postgres DB.
The query works fine in the Beeline command line interface but gives a heap space
error when executed using the JDBC Java client.

Thanks
Shweta Jadhav

----- Srinivas Thunga <srinivas.thu...@gmail.com> wrote: -----
To: "user@hive.apache.org" <user@hive.apache.org>
From: Srinivas Thunga <srinivas.thu...@gmail.com>
Date: 02/25/2015 02:32PM
Subject: Re: ERROR OutOfMemoryError: Java heap space


Hi,

Let me know how you configured Hive.

Are you using Cloudera?

If you installed Hive separately, have you configured the metastore?



Thanks & Regards,

Srinivas T

On Wed, Feb 25, 2015 at 2:27 PM, Jadhav Shweta <jadhav.shw...@tcs.com> wrote:
Hi

I am running simple select query

select * from table;

thanks
Shweta Jadhav



----- Srinivas Thunga <srinivas.thu...@gmail.com> wrote: -----
To: "user@hive.apache.org" <user@hive.apache.org>
From: Srinivas Thunga <srinivas.thu...@gmail.com>
Date: 02/25/2015 01:09PM
Subject: Re:

Hi,

Can you share the query you are trying as well?



Thanks & Regards,

Srinivas T

On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta <jadhav.shw...@tcs.com> wrote:
Hi

I am trying to run a Hive query.
It executes fine from the Beeline interface,
but it throws a

java.lang.OutOfMemoryError: Java heap space

error when connecting using JDBC.

I am using Hive version 0.13.0 and HiveServer2.

Which parameters do I need to configure for this?

thanks

Shweta Jadhav

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you








Re: GSS initiate failed exception

2015-02-25 Thread Vaibhav Gumashta
Looks like your Kerberos authentication failed. Did you renew the ticket? 
Typical expiry is 24 hours.
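For a long-running client like this, one hedged option (assuming the application
can be given its own keytab; the principal and path below are placeholders) is to
log in from the keytab and refresh before each query so the TGT never lapses:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabRelogin {
  /** Call once at application startup. */
  public static void loginOnce() throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(
        "app-user@EXAMPLE.COM", "/etc/security/keytabs/app-user.keytab");
  }

  /** Call before opening a JDBC connection; re-logs in only if the TGT is near expiry. */
  public static void refresh() throws Exception {
    UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
  }
}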

From: Chandra Reddy <chandra.bog...@gs.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Wednesday, February 25, 2015 at 3:42 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: GSS initiate failed exception

Hi,
  My Hive JDBC client queries (HiveServer2) to a secured cluster fail with the
below exception after one or two days of running fine from Tomcat. Any idea what
might be the issue? Is it a known issue?

2015-02-25 04:49:43,174 ERROR [org.apache.thrift.transport.TSaslTransport] SASL 
negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
 ~[na:1.7.0_06]
at 
org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
 [hive-exec-0.13.0.jar:0.13.0]

--
-
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
at 
org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:221)
at 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:297)
at 
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at 
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at 
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at 
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at 
org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:203)

Thanks,
Chandra



Re: [ANNOUNCE] New Hive PMC Member - Sergey Shelukhin

2015-02-25 Thread Vaibhav Gumashta
Congrats Sergey!

On 2/25/15, 9:06 AM, "Vikram Dixit"  wrote:

>Congrats Sergey!
>
>On 2/25/15, 8:43 AM, "Carl Steinbach"  wrote:
>
>>I am pleased to announce that Sergey Shelukhin has been elected to the
>>Hive
>>Project Management Committee. Please join me in congratulating Sergey!
>>
>>Thanks.
>>
>>- Carl
>



Re: [ANNOUNCE] New Hive Committers -- Chao Sun, Chengxiang Li, and Rui Li

2015-02-09 Thread Vaibhav Gumashta
Congratulations to all.

-Vaibhav

On 2/9/15, 2:06 PM, "Prasanth Jayachandran"
 wrote:

>Congratulations!
>
>> On Feb 9, 2015, at 1:57 PM, Na Yang  wrote:
>> 
>> Congratulations!
>> 
>> On Mon, Feb 9, 2015 at 1:06 PM, Vikram Dixit K 
>> wrote:
>> 
>>> Congrats guys!
>>> 
>>> On Mon, Feb 9, 2015 at 12:42 PM, Szehon Ho  wrote:
>>> 
 Congratulations guys !
 
 On Mon, Feb 9, 2015 at 3:38 PM, Jimmy Xiang 
wrote:
 
> Congrats!!
> 
> On Mon, Feb 9, 2015 at 12:36 PM, Alexander Pivovarov <
 apivova...@gmail.com
>> 
> wrote:
> 
>> Congrats!
>> 
>> On Mon, Feb 9, 2015 at 12:31 PM, Carl Steinbach 
 wrote:
>> 
>>> The Apache Hive PMC has voted to make Chao Sun, Chengxiang Li, and
>>> Rui
> Li
>>> committers on the Apache Hive Project.
>>> 
>>> Please join me in congratulating Chao, Chengxiang, and Rui!
>>> 
>>> Thanks.
>>> 
>>> - Carl
>>> 
>>> 
>> 
> 
 
>>> 
>>> 
>>> 
>>> --
>>> Nothing better than when appreciated for hard work.
>>> -Mark
>>> 
>



Re: Hiveserver2 memory / thread leak v 0.13.1 (hdp-2.1.5)

2015-02-06 Thread Vaibhav Gumashta
Alexander,

Can you share the jmap histo (or even better, a heapdump)? What are the top 
consumers? What are the  heap+permgen sizes that the JVM is configured to use?

FYI, we fixed this memory leak in Hive 0.14:
https://issues.apache.org/jira/browse/HIVE-7353.

Thanks,
-Vaibhav

From: Alexander Pivovarov <apivova...@gmail.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Wednesday, February 4, 2015 at 6:03 PM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Hiveserver2 memory / thread leak v 0.13.1 (hdp-2.1.5)

I'm using hive-0.13.1 (hdp-2.1.5)

Hiveserver2 is running for 6 days

nobody is connected to the server now.
last connection was 30 min ago.

Used memory: 688MB   (after GC)
Live threads: 297
perm: 99.6% used

Two days ago numbers were
Used memory: 259MB   (after GC)
Live threads: 108
perm: 98% used

Looks like a memory/thread leak.

Today and 2 days ago VisualVM screenshots are attached



Re: HiveServer2 multiple instances

2015-01-29 Thread Vaibhav Gumashta
Arali,

You can use this feature from Hive 0.14 onwards for your use case:
https://issues.apache.org/jira/browse/HIVE-8376

Thanks,
-Vaibhav

From: arali tech <aralitec...@gmail.com>
Reply-To: user <user@hive.apache.org>
Date: Monday, December 15, 2014 at 1:35 AM
To: user <user@hive.apache.org>
Subject: HiveServer2 multiple instances

Hi,

Has anybody tried to run multiple instances of HiveServer2 on the same node with
different port numbers,

OR

multiple instances of HiveServer2 on different nodes with load balancing
between the different instances?

Will HiveServer2 scale with a single instance no matter what the size of the load
or the number of requests is?

Regards,
Arali


Re: tfetchresultsresp row returns zero

2015-01-28 Thread Vaibhav Gumashta
Hi George,

This was done as part of https://issues.apache.org/jira/browse/HIVE-3746. The 
reason was that the previous serialization design (row major) was very 
inefficient and resulted in a lot of unnecessary network traffic. The current 
design (column major) addresses some of those issues.
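As a rough sketch of what reading the column-major layout looks like (class and
package names here are the Java bindings generated for Hive 0.13's TCLIService;
adjust to your own generated C# code, and the union accessors are assumptions
based on standard Thrift-generated classes):

import java.util.ArrayList;
import java.util.List;

import org.apache.hive.service.cli.thrift.TColumn;
import org.apache.hive.service.cli.thrift.TRowSet;

public class ColumnarResults {
  /** Collects the values of each string-typed column from a column-major TRowSet. */
  public static List<List<String>> stringColumns(TRowSet rowSet) {
    List<List<String>> columns = new ArrayList<List<String>>();
    for (TColumn col : rowSet.getColumns()) {
      // TColumn is a Thrift union; exactly one of its typed fields is set.
      if (col.isSetStringVal()) {
        columns.add(col.getStringVal().getValues());
      }
    }
    return columns;
  }
}

Row i can then be reassembled by taking element i of each column list.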

Thanks,
-Vaibhav


From: George Livingston <georgeli2...@gmail.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Wednesday, January 28, 2015 at 2:35 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: tfetchresultsresp row returns zero

Hi Team,

I have generated C# source for TCLIService using Thrift to connect to
HiveServer2. When I connect to HiveServer2 with Hive version 0.13, the
TFetchResultsResp result is always returned in the values of columns and not in
rows, i.e. the row count is always zero.


When I tried with Hive version 0.12, the TFetchResultsResp result is always
returned in rows and not in columns, i.e. the column count is always zero.

Please advise whether I need to set any property to fetch both columns and rows
in the results in all Hive versions.

TSocket transport = new TSocket("localhost", 1);
TBinaryProtocol protocol = new TBinaryProtocol(transport);
TCLIService.Client client = new TCLIService.Client(protocol);

transport.Open();
TOpenSessionReq openReq = new TOpenSessionReq();
TOpenSessionResp openResp = client.OpenSession(openReq);
TSessionHandle sessHandle = openResp.SessionHandle;

TExecuteStatementReq execReq = new TExecuteStatementReq();
execReq.SessionHandle = sessHandle;
execReq.Statement = "show tables";
TExecuteStatementResp execResp = client.ExecuteStatement(execReq);
TOperationHandle stmtHandle = execResp.OperationHandle;

TFetchResultsReq fetchReq = new TFetchResultsReq();
fetchReq.OperationHandle = stmtHandle;
fetchReq.Orientation = TFetchOrientation.FETCH_FIRST;
fetchReq.MaxRows = ;
TFetchResultsResp resultsResp = client.FetchResults(fetchReq);

TRowSet resultsSet = resultsResp.Results;
//In Hive version 0.13, rows count is zero
List<TRow> resultRows = resultsSet.Rows;
//In Hive version 0.12, columns count is zero
List<TColumn> resultColumn = resultsSet.Columns;


TCloseOperationReq closeReq = new TCloseOperationReq();
closeReq.OperationHandle = stmtHandle;
client.CloseOperation(closeReq);
TCloseSessionReq closeConnectionReq = new TCloseSessionReq();
closeConnectionReq.SessionHandle = sessHandle;
client.CloseSession(closeConnectionReq);

transport.Close();

Thanks in advance.

Regards,

George


Re: how to set dynamic service discovery for HiveServer2

2015-01-28 Thread Vaibhav Gumashta
Hi,

You can follow the document posted on this jira: 
https://issues.apache.org/jira/browse/HIVE-8376

Basically, in your JDBC url, you will have to refer to the ZooKeeper ensemble 
instead of a particular HiveServer2 instance. The JDBC driver will select one 
HiveServer2 instance from ZooKeeper and use that for the rest of the client 
session.
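A minimal sketch of what that looks like from JDBC is below (the ZooKeeper hosts
and namespace are placeholders; Beeline's !connect accepts the same URL):

import java.sql.Connection;
import java.sql.DriverManager;

public class ZkDiscoveryClient {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // The URL names the ZooKeeper ensemble, not a particular HiveServer2 instance;
    // the driver picks one of the registered servers for the whole session.
    String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/default;"
        + "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";
    try (Connection conn = DriverManager.getConnection(url, "user", "")) {
      System.out.println("connected to a HiveServer2 instance chosen from ZooKeeper");
    }
  }
}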

Thanks,
-Vaibhav

From: Allen <529958...@qq.com>
Reply-To: "user@hive.apache.org" 
mailto:user@hive.apache.org>>
Date: Tuesday, January 27, 2015 at 9:50 PM
To: "user@hive.apache.org" 
mailto:user@hive.apache.org>>
Subject: how to set dynamic service discovery for HiveServer2

hi,all

I want to use dynamic service discovery for HiveServer2.

I have already set some configuration parameters for dynamic service
discovery in hive-site.xml. HiveServer2 has been started on two
machines, and these nodes have registered themselves in ZooKeeper.

   But I don't know how to use it in Beeline.

Can anybody tell me how to write the connection URL?

Thanks.


Re: [ANNOUNCE] New Hive PMC Members - Szehon Ho, Vikram Dixit, Jason Dere, Owen O'Malley and Prasanth Jayachandran

2015-01-28 Thread Vaibhav Gumashta
Congratulations e’one!

—Vaibhav
On Jan 28, 2015, at 1:20 PM, Xuefu Zhang <xzh...@cloudera.com> wrote:

Congratulations to all!

--Xuefu

On Wed, Jan 28, 2015 at 1:15 PM, Carl Steinbach <c...@apache.org> wrote:
I am pleased to announce that Szehon Ho, Vikram Dixit, Jason Dere, Owen 
O'Malley and Prasanth Jayachandran have been elected to the Hive Project 
Management Committee. Please join me in congratulating the these new PMC 
members!

Thanks.

- Carl




Re: Remote Beeline with kerberos issue.

2014-11-03 Thread Vaibhav Gumashta
Are you doing a kinit before launching beeline shell? Also what Hive
version are you on?

You need to do a kinit before you start using the beeline shell in a
kerberized cluster setup. Kinit will essentially log the end user to
Kerberos KDC, and set up the Kerberos TGT in the local system cache.
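Once the kinit has been done, the JDBC URL (Beeline's !connect takes the same
form) also needs HiveServer2's Kerberos principal; a minimal sketch with
placeholder host and realm:

import java.sql.Connection;
import java.sql.DriverManager;

public class KerberizedClient {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // principal= names HiveServer2's service principal; the client's own
    // credentials come from the TGT that kinit placed in the ticket cache.
    String url = "jdbc:hive2://hs2host:10000/default;"
        + "principal=hive/hs2host@EXAMPLE.COM";
    try (Connection conn = DriverManager.getConnection(url)) {
      System.out.println("authenticated via the Kerberos ticket cache");
    }
  }
}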

Thanks,
--Vaibhav

On Mon, Nov 3, 2014 at 12:32 AM, konark modi  wrote:

> Hi All,
>
> I am running into a strange issue since last week. Would request your help.
>
> I have a remote client from which I want to connect to Hive via Beeline.
>
> I have two hadoop clusters one with kerberos authentication and other
> without authentication. I am able to connect to HIVE which does not have
> the authentication but running into problems when trying to connect to
> cluster with kerberos authentication.
>
> On the contrary I am able to connect between the two clusters but not from
> remote client. Please suggest what could be missing.
>
>
> Installations & Logfile :
>
> 1. Remote client : yum install hadoop-client
> 2. Log file : Remote client :http://pastebin.com/pUrY6i31
>   HiveServer2 : http://pastebin.com/RNnbLv9f
>
> Regards
> Konark Modi
>
>



Re: [ANNOUNCE] New Hive PMC Member - Alan Gates

2014-10-27 Thread Vaibhav Gumashta
Congratulations Alan!

On Mon, Oct 27, 2014 at 3:54 PM, Chris Drome  wrote:

>  Congratulations Alan!
>
>   From: Carl Steinbach 
> Reply-To: "user@hive.apache.org" 
> Date: Monday, October 27, 2014 at 3:38 PM
> To: "d...@hive.apache.org" , "user@hive.apache.org" <
> user@hive.apache.org>, "ga...@apache.org" 
> Subject: [ANNOUNCE] New Hive PMC Member - Alan Gates
>
>   I am pleased to announce that Alan Gates has been elected to the Hive
> Project Management Committee. Please join me in congratulating Alan!
>
>  Thanks.
>
>  - Carl
>



Re: Java api for connecting to hiveserver2

2014-10-11 Thread Vaibhav Gumashta
Hanish,

I agree with Suhas and would strongly encourage you to use the JDBC API for
HiveServer2. HiveServer2 has a thrift api for the client-server RPC, but
that is *not* intended for end user consumption and could end up breaking
your current code in future.

Is there any specific feature you are looking at which Hive's JDBC driver
doesn't implement?
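For illustration, a minimal sketch (assuming SQL standard based authorization is
enabled on the server; the table and role names are placeholders) of driving the
same grant/revoke operations as SQL text over JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class GrantViaJdbc {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://hs2host:10000/default", "admin", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE ROLE analysts");
      stmt.execute("GRANT SELECT ON TABLE sales TO ROLE analysts");
      stmt.execute("REVOKE SELECT ON TABLE sales FROM ROLE analysts");
    }
  }
}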

Thanks,
--Vaibhav

On Sat, Oct 11, 2014 at 12:43 PM, Suhas Gogate  wrote:

> Sorry Hanish, but being a database programmer in the past, I always used the
> embedded SQL interface... I was wondering, do we really need a direct Java
> interface w/ HiveServer2? Maybe I am wrong, but I would like to know your
> view on what the limitations of using embedded SQL vs a direct Java API are.
>
> --Suhas
>
> On Sat, Oct 11, 2014 at 12:38 PM, Suhas Gogate  wrote:
>
>> Hanish, this is an interesting question and I also faced a similar limitation
>> lately. As Hive gets closer to a relational model with a
>> richer SQL interface (DDL/authorization, DML) and HiveServer2 as a way to
>> invoke embedded SQL from Java, the real question is whether the Hive Metastore
>> Client (Java) API should be used by users at all, or whether all the existing
>> Hive client interfaces should talk to the Hive Metastore internally.
>>
>> --Suhas
>>
>>
>>
>> On Fri, Oct 10, 2014 at 9:21 PM, Hanish Bansal <
>> hanish.bansal.agar...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am planning to use the SQL-based authorization that was recently introduced
>>> in Hive 0.13.0.
>>>
>>> I was using the Hive metastore client Java API earlier, which has direct APIs
>>> for all operations like grant, revoke, etc.
>>> But to use the new authorization I will have to use HiveServer2 and pass
>>> all requests through HiveServer2. I came up with the JDBC client which can be
>>> used to connect to HiveServer2. The restriction there is that we have to write
>>> SQL statements.
>>>
>>> I want to know: is there any Java API to connect to HiveServer2 that has
>>> direct Java methods to perform operations, so we don't need to write SQL
>>> statements?
>>>
>>> Thanks,
>>> Hanish
>>>
>>
>>
>



Re: Getting access to hadoop output from Hive JDBC session

2014-09-25 Thread Vaibhav Gumashta
Alex,

This is probably what you are looking for: Beeline should have an option
for user to see the query progress (
https://issues.apache.org/jira/browse/HIVE-7615).

Thanks,
--Vaibhav

On Mon, Aug 11, 2014 at 11:34 PM, Alexander Kolbasov 
wrote:

>  Hello,
>
>  I am switching from Hive 0.9 to Hive 0.12 and decided to start using
> Hive metadata server mode. As it turns out, Hive1 JDBC driver connected as
> "jdbc:hive://" only works via direct access to the metastore database. The
> Hive2 driver connected as "jdbc:hive2://" does work with the remote Hive
> metastore server, but there is another serious difference in behavior. When
> I was using Hive1 driver I saw Hadoop output - the information about Hive
> job ID and the usual Hadoop output showing percentages of map and reduce
> done. The Hive2 driver silently waited for map/reduce to complete and just
> produced the result.
>
>  As I can see, both Hive itself and beeline are able to get the same
> Hadoop output as I was getting with Hive1 driver, so it should be somehow
> possible but it isn't clear how they do this. Can someone suggest the way
> to get Hadoop output with Hive2 JDBC driver?
>
>  Thanks for any help!
>
>  - Alex
>
>
>
>



Re: trouble starting hiveserver2

2014-09-25 Thread Vaibhav Gumashta
Hi Praveen,

hive.server2.servermode is not a valid hive config. Please
use hive.server2.transport.mode to specify which transport mode you'd like
to use for sending thrift messages between the client and the server. Can
you turn the debug logging on to see if you get any more info?

Thanks,
--Vaibhav

On Thu, Sep 18, 2014 at 1:22 AM,  wrote:

>   Hi,
>
>  I am unable to start service hiveserver2 on windows. I am starting this
> service to configure ODBC connection to Excel.
>
>  I am issuing the following command to start the service:
> > *hive.cmd --service hiveserver2*
>
>  It is not throwing any errors. When I run with the "echo on" option,
> I can see it's trying to run the following command:
>
>  c:\hive\bin> call C:\hadoop\bin\hadoop.cmd jar
> C:\hive\lib\hive-service-0.13.1.jar
> org.apache.hive.service.server.HiveServer2
>
>  The script neither exits nor gives any error messages after this.
>
>  Following is my *hive-site.xml* settings:
>
>  <property>
>    <name>hive.server2.servermode</name>
>    <value>thrift</value>
>    <description>Server mode. "thrift" or "http".</description>
>  </property>
>
>  <property>
>    <name>hive.server2.transport.mode</name>
>    <value>binary</value>
>    <description>Server transport mode. "binary" or "http".</description>
>  </property>
>
>  <property>
>    <name>hive.server2.thrift.port</name>
>    <value>1</value>
>  </property>
>
>  <property>
>    <name>hive.server2.thrift.bind.host</name>
>    <value>localhost</value>
>  </property>
>
>  <property>
>    <name>hive.server2.authentication</name>
>    <value>NONE</value>
>  </property>
>
>  Regards,
> Praveen.
>
> The information contained in this electronic message and any attachments
> to this message are intended for the exclusive use of the addressee(s) and
> may contain proprietary, confidential or privileged information. If you are
> not the intended recipient, you should not disseminate, distribute or copy
> this e-mail. Please notify the sender immediately and destroy all copies of
> this message and any attachments.
>
> WARNING: Computer viruses can be transmitted via email. The recipient
> should check this email and any attachments for the presence of viruses.
> The company accepts no liability for any damage caused by any virus
> transmitted by this email.
>
> www.wipro.com
>



Re: Hiveserver2 crash with RStudio (using RJDBC)

2014-09-25 Thread Vaibhav Gumashta
Nathalie,

Can you grab a heapdump at the time the server crashes (export this to your
environment: HADOOP_CLIENT_OPTS="-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath= $HADOOP_CLIENT_OPTS")? What type of
metastore are you using with HiveServer2 - embedded (if you specify
-hiveconf hive.metastore.uris=" " in the HiveServer2 startup command, it
uses the embedded metastore) or remote?

Thanks,
--Vaibhav

On Mon, Sep 22, 2014 at 10:55 AM, Nathalie Blais  wrote:

>  Hello,
>
>
>
> We are currently experiencing a severe reproducible hiveserver2 crash when
> using the RJDBC connector in RStudio (please refer to the description below
> for the detailed test case).  We have a hard time pinpointing the source of
> the problem and we are wondering whether this is a known issue or we have a
> glitch in our configuration; we would sincerely appreciate your input on
> this case.
>
>
>
> Case
>
> Severe Hiveserver2 crash when returning “a certain” volume of data (really
> not that big) to RStudio through RJDBC
>
>
>
> Config Versions
>
> Hadoop Distribution: Cloudera – *cdh5.0.1p0.47*
>
> Hiverserver2: *0.12*
>
> RStudio: *0.98.1056*
>
> RJDBC: *0.2-4*
>
>
>
> How to Reproduce
>
> 1.   In a SQL client application (Aqua Data Studio was used for the
> purpose of this example), create Hive test table
>
> a.   create table test_table_connection_crash(col1 string);
>
> 2.   Load data into table (data file attached)
>
> a.   LOAD DATA INPATH '/user/test/testFile.txt' INTO TABLE
> test_table_connection_crash;
>
> 3.   Verify row count
>
> a.   select count(*) nbRows from test_table_connection_crash;
>
> b.  720 000 rows
>
> 4.   Display all rows
>
> a.   select * from test_table_connection_crash order by col1 desc
>
> b.  All the rows are returned by the Map/Reduce to the client and
> displayed properly in the interface
>
> 5.   Open RStudio
>
> 6.   Create connection to Hive
>
> a.   library(RJDBC)
>
> b.  drv <- JDBC(driverClass="org.apache.hive.jdbc.HiveDriver",
> classPath=list.files("D:/myJavaDriversFolderFromClusterInstall/",
> pattern="jar$", full.names=T), identifier.quote="`")
>
> c.   conn <- dbConnect(drv,
> "jdbc:hive2://server_name:1/default;ssl=true;sslTrustStore=C:/Progra~1/Java/jdk1.7.0_60/jre/lib/security/cacerts;trustStorePassword=pswd",
> "user", "password")
>
> 7.   Verify connection with a small query
>
> a.   r <- dbGetQuery(conn, "select * from test_table_connection_crash
> order by col1 desc limit 100")
>
> b.  print(r)
>
> c.   100 rows are returned to RStudio and properly displayed in the
> console interface
>
> 8.   Remove the limit and try the original query (as performed in the
> SQL client application)
>
> a.   r <- dbGetQuery(conn, "select * from test_table_connection_crash
> order by col1 desc")
>
> b.  Query starts running
>
> c.   *** Cluster crash ***
>
>
>
> Worst comes to worst, in the eventuality that RStudio desktop client
> cannot handle such an amount of data, we might expect the desktop
> application to crash; not the whole hiveserver2.
>
>
>
> Please let us know whether or not you are aware of any issues of the
> kind.  Also, please do not hesitate to request any configuration file you
> might need to examine.
>
>
>
> Thank you very much!
>
>
>
> Best regards,
>
>
>
> Nathalie
>
>
>
>
>
>
> *Nathalie Blais*
>
> B.I. Developer | Technology Group
>
> Ubisoft Montreal
>
>
>
>
>



Re: HiveServer2 Availability

2014-09-12 Thread Vaibhav Gumashta
Check this: https://issues.apache.org/jira/browse/HIVE-7935

This enables dynamic service discovery, but in future the plan is to use
this feature to achieve HA.

Thanks,
--Vaibhav

On Fri, Jul 25, 2014 at 7:32 PM, Edward Capriolo 
wrote:

> I have put standard load balancers in front before and wrote a Thrift "show
> tables" as a test.
>
>
> On Friday, July 25, 2014, Raymond Lau  wrote:
> > Has anyone had any experience with a multiple-machine HiveServer2 setup?
>  Hive needs to be available at all times for our use-case, so if for some
> reason, one of our HiveServer2 machines goes down or the connection drops,
> the user should be able to just re-connect to another machine.
> > In the end, we'd like a system where someone just connects to, for
> example, hiveserver2.company.com and the user is automatically routed to
> a server round-robin style.
> > Anyone have any thoughts on this subject?
> > Thanks in advance.
> > --
> > Raymond Lau
> > Software Engineer - Intern |
> > r...@ooyala.com | (925) 395-3806
> >
>
> --
> Sorry this was sent from mobile. Will do less grammar and spell check than
> usual.
>



Re: [ANNOUNCE] New Hive Committer - Eugene Koifman

2014-09-12 Thread Vaibhav Gumashta
Congrats Eugene!

On Fri, Sep 12, 2014 at 3:35 PM, Carl Steinbach  wrote:

> The Apache Hive PMC has voted to make Eugene Koifman a committer on the
> Apache Hive Project.
>
> Please join me in congratulating Eugene!
>
> Thanks.
>
> - Carl
>
>



Re: [ANNOUNCE] New Hive Committers - Gopal Vijayaraghavan and Szehon Ho

2014-06-23 Thread Vaibhav Gumashta
Congrats Gopal and Szehon!

--Vaibhav


On Mon, Jun 23, 2014 at 8:48 AM, Szehon Ho  wrote:

> Thank you all very much, and congrats Gopal!
> Szehon
>
>
> On Sun, Jun 22, 2014 at 8:42 PM, Carl Steinbach  wrote:
>
> > The Apache Hive PMC has voted to make Gopal Vijayaraghavan and Szehon Ho
> > committers on the Apache Hive Project.
> >
> > Please join me in congratulating Gopal and Szehon!
> >
> > Thanks.
> >
> > - Carl
> >
>



Re: Hive 0.13 Metastore => MySQL BoneCP issue

2014-06-10 Thread Vaibhav Gumashta
Soam,

What version of BoneCP are you using?

Thanks,
--Vaibhav


On Tue, Jun 10, 2014 at 12:41 PM, Soam Acharya  wrote:

> Hi folks,
>
> We're seeing an intermittent issue between our Hive 0.13 metastore and
> MySQL instance. After being idle for about 5 minutes or so, any
> transaction involving the metastore and MySQL causes the following error
> to appear in the metastore log:
>
> 2014-06-09 05:13:52,066 ERROR bonecp.ConnectionHandle
> (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem.
> Killing off this connection and all remaining connectio
> ns in the connection pool. SQL State = 08S01
> 2014-06-09 05:13:52,068 ERROR metastore.RetryingHMSHandler
> (RetryingHMSHandler.java:invoke(157)) - Retrying HMSHandler after 1000 ms
> (attempt 1 of 1) with error: javax.jdo.JDODataStore
> Exception: Communications link failure
>
> The last packet successfully received from the server was 502,751
> milliseconds ago.  The last packet sent successfully to the server was 1
> milliseconds ago.
> at
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
> at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:900)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:832)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>
> [full stack trace is below]
>
> Subsequent transactions go through fine, however. This only occurs if the
> metastore has been idle for a while.
>
> We've been looking at the code and the problem seems to lie in the channel
> between the driver and the metastore connection pool. We looked at the wait
> time configuration settings in both MySQL and BoneCP. Both of them are set to
> their maximum values.
>
> Our Hive 0.12 install uses DBCP for datanucleus.connectionPoolingType, not
> BoneCP, and we don't see these issues there. We are not running Tez.
>
> This seems like such a basic issue that we thought we'd check and see if
> anyone else has encountered it. Any insights would be greatly appreciated.
>
> Thanks!
>
> Soam
>
> =
>
> Full stack trace:
>
> 2014-06-09 05:13:52,066 ERROR bonecp.ConnectionHandle
> (ConnectionHandle.java:markPossiblyBroken(388)) - Database access problem.
> Killing off this connection and all remaining connectio
> ns in the connection pool. SQL State = 08S01
> 2014-06-09 05:13:52,068 ERROR metastore.RetryingHMSHandler
> (RetryingHMSHandler.java:invoke(157)) - Retrying HMSHandler after 1000 ms
> (attempt 1 of 1) with error: javax.jdo.JDODataStore
> Exception: Communications link failure
>
> The last packet successfully received from the server was 502,751
> milliseconds ago.  The last packet sent successfully to the server was 1
> milliseconds ago.
> at
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
> at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:900)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:832)
> at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
> at com.sun.proxy.$Proxy0.getTable(Unknown Source)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1559)
> at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
> at com.sun.proxy.$Proxy11.get_table(Unknown Source)
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:8146)
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:8130)
> at
> org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> at
> org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(

Re: [ANNOUNCE] New Hive Committers - Prasanth J and Vaibhav Gumashta

2014-04-25 Thread Vaibhav Gumashta
This is cool! Thanks a lot guys!

--Vaibhav


On Fri, Apr 25, 2014 at 10:19 AM, Thejas Nair wrote:

> Congrats Prasanth and Vaibhav!
>
>
>
> On Fri, Apr 25, 2014 at 10:16 AM, Prasanth J wrote:
>
>> Its a wonderful news. :) Thanks for your wishes guys!!
>>
>> Thanks
>> Prasanth
>>
>>
>> > On Apr 25, 2014, at 8:52 AM, Sushanth Sowmyan 
>> wrote:
>> >
>> > Congrats, guys! :)
>> >
>> > On Fri, Apr 25, 2014 at 12:33 AM, Lefty Leverenz
>> >  wrote:
>> >> Congratulations!
>> >>
>> >> -- Lefty
>> >>
>> >>
>> >> On Fri, Apr 25, 2014 at 12:10 AM, Hari Subramaniyan <
>> >> hsubramani...@hortonworks.com> wrote:
>> >>
>> >>> Congrats Prasanth and Vaibhav!
>> >>>
>> >>> Thanks
>> >>> Hari
>> >>>
>> >>>
>> >>> On Thu, Apr 24, 2014 at 8:45 PM, Chinna Rao Lalam <
>> >>> lalamchinnara...@gmail.com> wrote:
>> >>>
>> >>>> Congratulations to Prasanth and Vaibhav!
>> >>>>
>> >>>>
>> >>>>> On Fri, Apr 25, 2014 at 8:23 AM, Shengjun Xin 
>> >>>> wrote:
>> >>>>
>> >>>>> Congratulations ~~
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Apr 25, 2014 at 10:33 AM, Carl Steinbach <
>> >>> cwsteinb...@gmail.com
>> >>>>> wrote:
>> >>>>>
>> >>>>>> + Prasanth's correct email address
>> >>>>>>
>> >>>>>>
>> >>>>>> On Thu, Apr 24, 2014 at 7:31 PM, Xuefu Zhang 
>> >>>> wrote:
>> >>>>>>
>> >>>>>>> Congratulations to Prasanth and Vaibhav!
>> >>>>>>>
>> >>>>>>> --Xuefu
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Thu, Apr 24, 2014 at 7:26 PM, Carl Steinbach 
>> >>>> wrote:
>> >>>>>>>
>> >>>>>>>> The Apache Hive PMC has voted to make Prasanth J and Vaibhav
>> >>>>>>>> Gumashta committers on the Apache Hive Project.
>> >>>>>>>>
>> >>>>>>>> Please join me in congratulating Prasanth and Vaibhav!
>> >>>>>>>>
>> >>>>>>>> Thanks.
>> >>>>>>>>
>> >>>>>>>> - Carl
>> >>>>>
>> >>>>>
>> >>>>> --
>> >>>>> Regards
>> >>>>> Shengjun
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Hope It Helps,
>> >>>> Chinna
>> >>>
>> >>>
>>
>
>



Re: [ANNOUNCE] New Hive Committers - Alan Gates, Daniel Dai, and Sushanth Sowmyan

2014-04-14 Thread Vaibhav Gumashta
Congratulations to all!

Thanks,
--Vaibhav


On Mon, Apr 14, 2014 at 10:55 AM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

> Congratulations everyone!!
>
> Thanks
> Prasanth Jayachandran
>
> On Apr 14, 2014, at 10:51 AM, Carl Steinbach  wrote:
>
> > The Apache Hive PMC has voted to make Alan Gates, Daniel Dai, and
> Sushanth
> > Sowmyan committers on the Apache Hive Project.
> >
> > Please join me in congratulating Alan, Daniel, and Sushanth!
> >
> > - Carl
>
>
>



Re: HiveServer2 http mode?

2014-04-10 Thread Vaibhav Gumashta
Adam,

It is doing the latter for now - encapsulating thrift payloads inside http
calls.
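
For anyone who wants to try it, a minimal JDBC sketch against the http
transport might look like the following. The host, the port 10001 and the
"cliservice" path are placeholders, and the URL parameter names have varied
across releases (older builds used
?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
instead of the transportMode/httpPath form shown here):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Hs2HttpJdbcSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical HiveServer2 host; 10001 is only a common choice for the http port.
    String url = "jdbc:hive2://hs2.example.com:10001/default"
        + ";transportMode=http;httpPath=cliservice";
    try (Connection conn = DriverManager.getConnection(url, "hive", "");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT 1")) {
      while (rs.next()) {
        System.out.println(rs.getInt(1)); // prints 1 if the round trip works
      }
    }
  }
}

The driver class is the same org.apache.hive.jdbc.HiveDriver used in binary
mode; only the transport underneath the thrift calls changes.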

Thanks,
--Vaibhav


On Thu, Apr 10, 2014 at 1:33 PM, Adam Faris  wrote:

> The "Setting Up HiveServer2" wiki page mentions that HiveServer2 is
> providing an "http mode" in 0.13. Is "http mode" going to be a REST API or
> is it encapsulating thrift/jdbc connections inside http traffic?
>
> - Thanks, Adam



Re: [ANNOUNCE] New Hive PMC Member - Xuefu Zhang

2014-02-28 Thread Vaibhav Gumashta
Congrats Xuefu!


On Fri, Feb 28, 2014 at 9:20 AM, Prasad Mujumdar wrote:

>Congratulations Xuefu !!
>
> thanks
> Prasad
>
>
>
> On Fri, Feb 28, 2014 at 1:20 AM, Carl Steinbach  wrote:
>
> > I am pleased to announce that Xuefu Zhang has been elected to the Hive
> > Project Management Committee. Please join me in congratulating Xuefu!
> >
> > Thanks.
> >
> > Carl
> >
> >
>



Re: Hive JDBC + Jetty

2014-02-26 Thread Vaibhav Gumashta
Hi Jone,

What version of jetty are you using? It is possible that you have multiple
versions in your classpath.
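
One more thing worth checking: the stack trace shows the driver constructing
EmbeddedThriftCLIService, which is typically what happens when the JDBC URL
has no host part ("jdbc:hive2://"). Embedded mode starts Hive inside your JVM
and needs the full Hive/Hadoop classpath (hence the missing
org.apache.hadoop.conf.Configuration), whereas a remote URL only needs the
client jars you listed. A hedged sketch of the two forms, with a placeholder
host and port:

import java.sql.Connection;
import java.sql.DriverManager;

public class RemoteVsEmbeddedHive {
  public static void main(String[] args) throws Exception {
    // Explicit registration; harmless with drivers that auto-register via JDBC 4.
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // Remote HiveServer2: only the jdbc/service/thrift client jars are needed.
    Connection remote = DriverManager.getConnection(
        "jdbc:hive2://hiveserver.example.com:10000/default", "hive", "");
    remote.close();

    // Embedded mode (no host): runs Hive in-process and needs hadoop-common and
    // friends on the classpath. Uncomment only if that is really what you want.
    // Connection embedded = DriverManager.getConnection("jdbc:hive2://", "", "");
  }
}

If embedded mode is really what the Jetty deployment needs, adding
hadoop-common and its transitive dependencies to the webapp classpath is the
likely fix.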

Thanks,
--Vaibhav


On Fri, Feb 21, 2014 at 5:31 AM, Jone Lura  wrote:

> Hi,
>
> I have encountered a problem when I try to deploy my application on Jetty.
>
> In my JUnit test environment everything works fine, but when deploying the
> same application in Jetty I receive the following message:
>
> java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
> at
> org.apache.hive.service.cli.thrift.EmbeddedThriftCLIService.<init>(EmbeddedThriftCLIService.java:32)
>
> In my classpath I have included the hive-jdbc-0.11.0.jar,
> hive-service-0.11.0.jar, libfb303-0.9.1.jar, libthrift-0.9.1.jar in
> addition to the log jars, which according to the documentation should be
> sufficient when connecting to a remote hive server.
>
> Am I missing something? To me it seems like it's trying to connect to an
> embedded environment, although I am not quite sure.
>
> Any ideas?
>
>
> Best regards,
>
> Jone
>



Re: [ANNOUNCE] New Hive Committers - Sergey Shelukhin and Jason Dere

2014-01-27 Thread Vaibhav Gumashta
Congrats Sergey and Jason!

--Vaibhav


On Mon, Jan 27, 2014 at 10:47 AM, Vikram Dixit wrote:

> Congrats Sergey and Jason!
>
> Thanks
> Vikram.
>
> On Jan 27, 2014, at 8:36 AM, Carl Steinbach wrote:
>
> > The Apache Hive PMC has voted to make Sergey Shelukhin and Jason Dere
> > committers on the Apache Hive Project.
> >
> > Please join me in congratulating Sergey and Jason!
> >
> > Thanks.
> >
> > Carl
>
>
>



Re: [ANNOUNCE] New Hive Committer - Vikram Dixit

2014-01-06 Thread Vaibhav Gumashta
Congrats Vikram!!!


On Mon, Jan 6, 2014 at 11:24 AM, Jason Dere  wrote:

> Congrats Vikram!
>
> On Jan 6, 2014, at 11:10 AM, Prasanth Jayachandran <
> pjayachand...@hortonworks.com> wrote:
>
> > Congratulations Vikram!!
> >
> > Thanks
> > Prasanth Jayachandran
> >
> > On Jan 6, 2014, at 11:50 PM, Eugene Koifman 
> wrote:
> >
> >> Congratulations!
> >>
> >>
> >> On Mon, Jan 6, 2014 at 9:44 AM, Gunther Hagleitner <
> >> ghagleit...@hortonworks.com> wrote:
> >>
> >>> Congratulations Vikram!
> >>>
> >>> Thanks,
> >>> Gunther.
> >>>
> >>>
> >>> On Mon, Jan 6, 2014 at 9:33 AM, Hari Subramaniyan <
> >>> hsubramani...@hortonworks.com> wrote:
> >>>
>  congrats Vikram!!
> 
> 
> 
> 
>  On Mon, Jan 6, 2014 at 9:22 AM, Thejas Nair 
>  wrote:
> 
> > Congrats Vikram!
> >
> >
> > On Mon, Jan 6, 2014 at 9:01 AM, Jarek Jarcec Cecho <
> jar...@apache.org>
> > wrote:
> >> Congratulations Vikram!
> >>
> >> Jarcec
> >>
> >> On Mon, Jan 06, 2014 at 08:58:06AM -0800, Carl Steinbach wrote:
> >>> The Apache Hive PMC has voted to make Vikram Dixit a committer on
> the
> >>> Apache Hive Project.
> >>>
> >>> Please join me in congratulating Vikram!
> >>>
> >>> Thanks.
> >>>
> >>> Carl
> >

Re: [ANNOUNCE] New Hive Committers - Jitendra Nath Pandey and Eric Hanson

2013-11-22 Thread Vaibhav Gumashta
Congrats guys!


On Fri, Nov 22, 2013 at 11:54 PM, Vikram Dixit wrote:

> Congrats to both of you!
>
>
> On Fri, Nov 22, 2013 at 9:34 AM, Jason Dere  wrote:
>
> > Congrats!
> >
> > On Nov 22, 2013, at 2:25 AM, Biswajit Nayak 
> > wrote:
> >
> > > Congrats to both of you..
> > >
> > >
> > > On Fri, Nov 22, 2013 at 1:26 PM, Lefty Leverenz <
> leftylever...@gmail.com>
> > wrote:
> > > Congratulations, Jitendra and Eric!  The more the merrier.
> > >
> > > -- Lefty
> > >
> > >
> > > On Thu, Nov 21, 2013 at 6:31 PM, Jarek Jarcec Cecho  >
> > wrote:
> > > Congratulations, good job!
> > >
> > > Jarcec
> > >
> > > On Thu, Nov 21, 2013 at 03:29:07PM -0800, Carl Steinbach wrote:
> > > > The Apache Hive PMC has voted to make Jitendra Nath Pandey and Eric
> > Hanson
> > > > committers on the Apache Hive project.
> > > >
> > > > Please join me in congratulating Jitendra and Eric!
> > > >
> > > > Thanks.
> > > >
> > > > Carl
> > >
> > >
> > >
>



Re: [ANNOUNCE] New Hive Committer and PMC Member - Lefty Leverenz

2013-11-17 Thread Vaibhav Gumashta
Awesome! Congrats Lefty!


On Sun, Nov 17, 2013 at 12:06 AM, Thejas Nair wrote:

> Congrats Lefty!
>
>
> On Sat, Nov 16, 2013 at 9:20 PM, Carl Steinbach  wrote:
> > The Apache Hive PMC has voted to make Lefty Leverenz a committer and PMC
> > member on the Apache Hive Project.
> >
> > Please join me in congratulating Lefty!
> >
> > Thanks.
> >
> > Carl
>
>



Re: [ANNOUNCE] New Hive PMC Member - Harish Butani

2013-11-14 Thread Vaibhav Gumashta
Congrats Harish!

--Vaibhav


On Thu, Nov 14, 2013 at 5:18 PM, Clark Yang (杨卓荦) wrote:

> Congrats, Harish!
>
> Cheers,
> Zhuoluo (Clark) Yang
>
>
> 2013/11/15 Carl Steinbach 
>
> > I am pleased to announce that Harish Butani has been elected to the Hive
> > Project Management Committee. Please join me in congratulating Harish!
> >
> > Thanks.
> >
> > Carl
> >
>



Re: [ANNOUNCE] New Hive Committer - Prasad Mujumdar

2013-11-10 Thread Vaibhav Gumashta
Congrats Prasad!


On Sun, Nov 10, 2013 at 8:17 PM, Lefty Leverenz wrote:

> Congratulations Prasad!
>
> -- Lefty
>
>
> On Sun, Nov 10, 2013 at 11:04 PM, Brock Noland  wrote:
>
> > Congratulations!!
> >
> > On Sunday, November 10, 2013, Thejas Nair wrote:
> >
> > > Congrats Prasad!
> > >
> > > On Sun, Nov 10, 2013 at 6:46 PM, Jarek Jarcec Cecho  > >
> > > wrote:
> > > > Congratulations Prasad, good job!
> > > >
> > > > Jarcec
> > > >
> > > > On Sun, Nov 10, 2013 at 06:42:45PM -0800, Carl Steinbach wrote:
> > > >> The Apache Hive PMC has voted to make Prasad Mujumdar a committer on
> > the
> > > >> Apache Hive Project.
> > > >>
> > > >> Please join me in congratulating Prasad!
> > > >>
> > > >> Thanks.
> > > >>
> > > >> Carl
> > >
> > >
> >
> >
> > --
> > Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
> >
>



Re: exception when build Hive from source then start Hive CLI

2013-11-06 Thread Vaibhav Gumashta
Zhang,

You can find instructions to build the tarball here:
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ#HiveDeveloperFAQ-Howtogeneratetarball%3F

--Vaibhav


On Wed, Nov 6, 2013 at 5:45 PM, 金杰  wrote:

> You can go to this page
> https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ for
> reference.
>
>
> Best Regards
> 金杰 (Jay Jin)
>
>
> On Thu, Nov 7, 2013 at 1:29 AM, Zhang Xiaoyu wrote:
>
>> Hi, Jay,
>> Thanks for your reply. Do you know the way to build a Hive tarball from
>> source? Hive recently moved from ant to maven, but the wiki still shows the
>> ant-related commands.
>>
>> Johnny
>>
>>
>> On Wed, Nov 6, 2013 at 6:04 AM, 金杰  wrote:
>>
>>> Hi, Xiaoyu
>>>
>>> You may run the hive cli using the maven exec plugin
>>>
>>> For example:
>>>
>>> jj@hellojinjie hive :) $ cd cli/
>>> jj@hellojinjie cli :) $ mvn exec:java
>>> -Dexec.mainClass=org.apache.hadoop.hive.cli.CliDriver
>>>
>>>
>>>
>>>
>>> Best Regards
>>> 金杰 (Jay Jin)
>>>
>>>
>>> On Wed, Nov 6, 2013 at 11:11 AM, Zhang Xiaoyu 
>>> wrote:
>>>
 so it looks like the jline jar is a maven dependency which is pulled into the
 ~/.m2 folder. The question here is: what is the right way to build a Hive
 tarball with a maven command? It looks like mvn clean install -DskipTests is not..

 Thanks,
 Johnny


 On Tue, Nov 5, 2013 at 6:14 PM, Zhang Xiaoyu 
 wrote:

> Hi, all,
> I am trying to build hive from source and start CLI. What I did is
> (1) git clone the source
>
> (2) mvn clean install -DskipTests
>
> (3) cp */target/*.jar lib/
>  this step basically copy all jar files to lib
>
> (4) start cli by ./bin/hive
>
> I got exception
> ./bin/hive: line 80: [:
> /Users/admin/Documents/hive/lib/hive-exec-0.13.0-SNAPSHOT-tests.jar: 
> binary
> operator expected
> ./bin/hive: line 85: [:
> /Users/admin/Documents/hive/lib/hive-metastore-0.13.0-SNAPSHOT-tests.jar:
> binary operator expected
> Exception in thread "main" java.lang.NoClassDefFoundError:
> jline/ArgumentCompletor$ArgumentDelimiter
> at java.lang.Class.forName0(Native Method)
>  at java.lang.Class.forName(Class.java:270)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:205)
> Caused by: java.lang.ClassNotFoundException:
> jline.ArgumentCompletor$ArgumentDelimiter
>  at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>  at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 3 more
>
>
> Does anyone have an idea what I missed? BTW, I am using JDK7, but it doesn't
> look like the root cause.
>
> Thanks,
> Johnny
>
>

>>>
>>
>



Re: Stopping hive metastore

2013-11-06 Thread Vaibhav Gumashta
Thanks for pointing it out Nitin. I've created a JIRA here:
https://issues.apache.org/jira/browse/HIVE-5764

--Vaibhav


On Wed, Nov 6, 2013 at 5:24 AM, Nitin Pawar  wrote:

> there is no other way, as far as I know, if you are using standalone hive
>
>
> On Wed, Nov 6, 2013 at 5:11 PM, Garg, Rinku wrote:
>
>>  Is there any way to stop hive metastore without killing PID.
>>
>>
>>
>> Thanks & Regards,
>>
>> *Rinku Garg*
>>
>>
>>
>
>
>
> --
> Nitin Pawar
>



Re: [ANNOUNCE] New Hive Committer - Xuefu Zhang

2013-11-04 Thread Vaibhav Gumashta
Congrats Xuefu!


On Mon, Nov 4, 2013 at 9:17 AM, Prasad Mujumdar wrote:

>Congratulations Xuefu!
>
> thanks
> Prasad
>
>
>
> On Sun, Nov 3, 2013 at 8:06 PM, Carl Steinbach  >wrote:
>
> > The Apache Hive PMC has voted to make Xuefu Zhang a committer on the
> Apache
> > Hive project.
> >
> > Please join me in congratulating Xuefu!
> >
> > Thanks.
> >
> > Carl
> >
>



Re: [ANNOUNCE] New Hive PMC Members - Thejas Nair and Brock Noland

2013-10-24 Thread Vaibhav Gumashta
Congrats Brock and Thejas!


On Thu, Oct 24, 2013 at 3:25 PM, Prasad Mujumdar wrote:

>
>Congratulations Thejas and Brock !
>
> thanks
> Prasad
>
>
>
> On Thu, Oct 24, 2013 at 3:10 PM, Carl Steinbach  wrote:
>
>> I am pleased to announce that Thejas Nair and Brock Noland have been
>> elected to the Hive Project Management Committee. Please join me in
>> congratulating Thejas and Brock!
>>
>> Thanks.
>>
>> Carl
>>
>
>



Re: ClassCastExceptions using Hiveserver2 Thrift interface, Hive 0.11

2013-10-23 Thread Vaibhav Gumashta
Hi Andy,

Using Thrift over HTTP is currently a feature in development and has
limited functionality. As of now, it does not support doAs and works with
only NOSASL authentication mode set on the server side. There is a
follow-up JIRA (https://issues.apache.org/jira/browse/HIVE-4764) which will
add support for authentication modes.

Thanks,
--Vaibhav


On Wed, Oct 23, 2013 at 8:31 AM, Andy Sykes wrote:

> I found a solution.
>
> In testing, I tried out the HTTPClientTransport that's now supported for
> Thrift in Hive 0.12. This required me to set the Hive conf var
> hive.server2.enable.doAs to false in order for it to work. When I reverted
> back to using the BufferedTransport, with this property in my
> hive-site.xml, I found the Thrift interface no longer threw
> ClassCastExceptions.
>
> I have no idea if this is a bug, or by design.
>
>
> On 21 October 2013 11:54, David Morel  wrote:
>
>> We have it working fine with hive 0.11, not tested with 0.12. I'll have a
>> look this week.
>>
>> David Morel
>>
>>
>> On 21 octobre 2013 at 11:57:03, Andy Sykes 
>> (andy.sy...@forward3d.com)
>> wrote:
>>
>>  I can report I see this same error when working with the Perl thrift
>> Hive client (
>> http://search.cpan.org/~dmor/Thrift-API-HiveClient2-0.011/lib/Thrift/API/HiveClient2.pm),
>> and with the newly released Hive 0.12.0.
>>
>> Is anyone successfully using Hiveserver2, or do I have something odd in
>> my environment?
>>
>>
>> On 14 October 2013 17:44, Andy Sykes  wrote:
>>
>>> Hi there,
>>>
>>> We're using the rbhive gem (https://github.com/forward3d/rbhive) to
>>> connect to the Hiveserver2 Thrift service running on Hive 0.11.
>>>
>>> I've turned SASL auth off with the following in hive-site.xml:
>>>
>>>  
>>>   <property>
>>>     <name>hive.server2.authentication</name>
>>>     <value>NOSASL</value>
>>>   </property>
>>> 
>>>
>>> When I connect, I get the following stack trace in the Hiveserver2 logs:
>>>
>>>  13/10/14 17:42:17 ERROR server.TThreadPoolServer: Error occurred
>>> during processing of message.
>>> java.lang.ClassCastException: org.apache.thrift.transport.TSocket cannot
>>> be cast to org.apache.thrift.transport.TSaslServerTransport
>>>   at
>>> org.apache.hive.service.auth.TUGIContainingProcessor.process(TUGIContainingProcessor.java:35)
>>>   at
>>> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
>>>   at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>>>   at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>>>   at java.lang.Thread.run(Thread.java:662)
>>>
>>> This didn't happen with Hive 0.10 - we could connect with SASL turned
>>> off without a problem.
>>>
>>> Can anyone shed any light on this?
>>>
>>> --
>>> *Andy Sykes*
>>> DevOps Engineer
>>>
>>> +4420 7806 5904
>>> +4475 9005 2109
>>>
>>> 19 Mandela Street
>>> Floor 2, Centro 3
>>> London
>>> NW1 0DU
>>>
>>> Privacy & Email Policy 
>>>
>>
>>
>>
>> --
>> *Andy Sykes*
>> DevOps Engineer
>>
>> +4420 7806 5904
>> +4475 9005 2109
>>
>> 19 Mandela Street
>> Floor 2, Centro 3
>> London
>> NW1 0DU
>>
>> Privacy & Email Policy 
>>
>>
>
>
> --
> *Andy Sykes*
> DevOps Engineer
>
> +4420 7806 5904
> +4475 9005 2109
>
> 19 Mandela Street
> Floor 2, Centro 3
> London
> NW1 0DU
>
> Privacy & Email Policy 
>



Re: hive_service.thrift vs TCLIService.thrift

2013-10-21 Thread Vaibhav Gumashta
Hi Haroon,

hive_service.thrift is the interface definition language (IDL) file for
HiveServer (older one), whereas TCLIService.thrift is the IDL for
HiveServer2. HiveServer2 was introduced in Hive-0.11. Check the docs here:

https://cwiki.apache.org/confluence/display/Hive/Setting+up+HiveServer2
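
In code, the difference shows up in which generated client you instantiate; a
rough sketch, with package names taken from the 0.11-era generated sources and
placeholder host/port (the transports still need to be opened before any call
is made):

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;

public class WhichThriftClient {
  public static void main(String[] args) {
    // hive_service.thrift -> the original HiveServer:
    org.apache.hadoop.hive.service.ThriftHive.Client hiveServer1Client =
        new org.apache.hadoop.hive.service.ThriftHive.Client(
            new TBinaryProtocol(new TSocket("localhost", 10000)));

    // TCLIService.thrift -> HiveServer2 (sessions, operation handles, async, etc.):
    org.apache.hive.service.cli.thrift.TCLIService.Client hiveServer2Client =
        new org.apache.hive.service.cli.thrift.TCLIService.Client(
            new TBinaryProtocol(new TSocket("localhost", 10000)));
  }
}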

Thanks,
--Vaibhav


On Mon, Oct 21, 2013 at 11:03 AM, Haroon Muhammad
wrote:

> Hi,
>
> Can anyone share what's the difference between hive_service.thrift and
> TCLIService.thrift?
>
> Thanks,
>



Re: how to make async call to hive

2013-09-29 Thread Vaibhav Gumashta
Hi Gary,

HiveServer2 has recently added an API to support asynchronous execution:
https://github.com/apache/hive/blob/trunk/service/if/TCLIService.thrift#L604

You will have to create an instance of the Thrift HiveServer2 client and, while
creating the request object for ExecuteStatement, set runAsync to true.
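
A minimal sketch of that flow is below. The package names are from the
0.12-era Thrift bindings, the server is assumed to accept a plain socket
(e.g. NOSASL authentication), and the host and port are placeholders:

import org.apache.hive.service.cli.thrift.*;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class AsyncHs2Sketch {
  public static void main(String[] args) throws Exception {
    TTransport transport = new TSocket("localhost", 10000);
    transport.open();
    TCLIService.Client client =
        new TCLIService.Client(new TBinaryProtocol(transport));

    TOpenSessionResp session = client.OpenSession(new TOpenSessionReq());

    // The important part: mark the execute request as asynchronous.
    TExecuteStatementReq execReq =
        new TExecuteStatementReq(session.getSessionHandle(), "SELECT 1");
    execReq.setRunAsync(true);
    TExecuteStatementResp execResp = client.ExecuteStatement(execReq);

    // ExecuteStatement now returns immediately; poll the operation until it
    // reaches a terminal state instead of blocking on the call itself.
    TGetOperationStatusReq statusReq =
        new TGetOperationStatusReq(execResp.getOperationHandle());
    TOperationState state;
    do {
      Thread.sleep(500);
      state = client.GetOperationStatus(statusReq).getOperationState();
    } while (state != TOperationState.FINISHED_STATE
        && state != TOperationState.ERROR_STATE
        && state != TOperationState.CANCELED_STATE
        && state != TOperationState.CLOSED_STATE);

    transport.close();
  }
}

The same request/response shapes apply to other languages' bindings (for
example node.js): the key is setting runAsync on the ExecuteStatement request
and then polling GetOperationStatus.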

Thanks,
--Vaibhav


On Sun, Sep 29, 2013 at 9:23 PM, Gary Zhao  wrote:

> I'm using node.js which is async.
>
>
> On Sun, Sep 29, 2013 at 5:32 PM, Brad Ruderman wrote:
>
>> Typically it would be your application that opens the process off the main
>> thread. Hue (Beeswax specifically) does this and you can see the code here:
>> https://github.com/cloudera/hue/tree/master/apps/beeswax
>>
>> Thx
>>
>>
>>
>> On Sun, Sep 29, 2013 at 5:15 PM, kentkong_work wrote:
>>
>>> **
>>> hi all,
>>> just wondering if there is an official solution for async calls to hive?
>>> hive queries run for a long time, and my application can't block until they
>>> return.
>>>
>>>
>>> Kent
>>>
>>>
>>
>>
>



Re: [ANNOUNCE] New Hive Committer - Yin Huai

2013-09-03 Thread Vaibhav Gumashta
Congrats Yin!


On Tue, Sep 3, 2013 at 11:37 PM, Jarek Jarcec Cecho wrote:

> Congratulations Yin!
>
> Jarcec
>
> On Tue, Sep 03, 2013 at 09:49:55PM -0700, Carl Steinbach wrote:
> > The Apache Hive PMC has voted to make Yin Huai a committer on the Apache
> > Hive project.
> >
> > Please join me in congratulating Yin!
> >
> > Thanks.
> >
> > Carl
>
