Re: hive concurrency not working

2016-08-03 Thread Divakar Reddy
Reg " hive concurrency not working" in HDP.

yes, it's known issue in HDP and with Hue 2.6

I have below information on this issue and hope it will help you.

When you run Hive queries through Hue (Beeswax), users are unable to run
multiple queries concurrently. In practice it does not matter whether the
queries come from separate browser sessions, separate clients, etc.; the
limitation appears to be tied to the user.
Looking at how Tez works, and at the code for the Hive 0.14 patch that added
general support for concurrent queries with Tez, a given TezSession does not
support parallel queries, only serial ones. This is also documented in the Tez
documentation. Hive appears to create the session based on the user. Upon
further digging, we found HIVE-9223, still open, which describes this issue.
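
For reference, on an HDP-style HiveServer2 + Tez setup the degree of query
concurrency is largely governed by the Tez session pool settings (the values
below are illustrative, not a recommendation):

hive.server2.tez.initialize.default.sessions=true
hive.server2.tez.default.queues=default
hive.server2.tez.sessions.per.default.queue=2

Each pooled Tez session still runs its queries serially, so the pool size
(number of queues times sessions per queue) is effectively an upper bound on
how many queries can execute at the same time.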
---

Regards,
Divakar


On Wed, Aug 3, 2016 at 11:50 AM, Sergey Shelukhin wrote:

> Can you elaborate on not working? Is it giving an error, or hanging (and
> if so, does it queue and eventually execute); are you using HS2; what
> commands/actions do the users perform?
> Also, what version of Hive is this?
>
> From: Raj hadoop 
> Reply-To: "user@hive.apache.org" 
> Date: Wednesday, August 3, 2016 at 06:14
> To: "user@hive.apache.org" 
> Subject: hive concurrency not working
>
> Dear All,
>
> In need of your help.
>
> We have a Hortonworks 4-node cluster, and the problem is that Hive is
> allowing only one user at a time.
>
> If a second user needs to log in, Hive does not work.
>
> Could someone please help me with this?
>
> Thanks,
> Rajesh
>


Re: hive concurrency not working

2016-08-03 Thread Jörn Franke
You need to configure the YARN scheduler (fair or capacity, depending on your
needs).
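
As a rough sketch (the property names come from capacity-scheduler.xml; the
queue names and percentages are made up for illustration), a capacity
scheduler layout that lets several users run Hive queries side by side could
look like:

yarn.scheduler.capacity.root.queues=default,hive
yarn.scheduler.capacity.root.default.capacity=50
yarn.scheduler.capacity.root.hive.capacity=50
yarn.scheduler.capacity.root.hive.user-limit-factor=2
yarn.scheduler.capacity.maximum-am-resource-percent=0.4

On a small 4-node cluster, maximum-am-resource-percent matters in particular:
if it is too low, only one Tez application master fits into the cluster, and
every other query appears to hang until the first one finishes.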

> On 03 Aug 2016, at 15:14, Raj hadoop  wrote:
> 
> Dear All,
> 
> In need of your help.
> 
> We have a Hortonworks 4-node cluster, and the problem is that Hive is
> allowing only one user at a time.
> 
> If a second user needs to log in, Hive does not work.
> 
> Could someone please help me with this?
> 
> Thanks,
> Rajesh


RE: hive concurrency not working

2016-08-03 Thread Amit Bajpai
You need to increase the value of the following Hive property in Ambari:

hive.server2.tez.sessions.per.default.queue

If this does not fix the issue, you need to update the capacity scheduler
property values.
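
For example (values purely illustrative), with

hive.server2.tez.default.queues=default
hive.server2.tez.sessions.per.default.queue=4

HiveServer2 keeps a pool of four Tez sessions on the default queue, so up to
four queries can execute at the same time; with two queues configured, the
pool doubles. Note that HiveServer2 has to be restarted for these settings to
take effect.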

From: Raj hadoop [mailto:raj.had...@gmail.com]
Sent: Wednesday, August 03, 2016 8:15 AM
To: user@hive.apache.org
Subject: hive concurrency not working

Dear All,

In need of your help.

We have a Hortonworks 4-node cluster, and the problem is that Hive is allowing
only one user at a time.

If a second user needs to log in, Hive does not work.

Could someone please help me with this?

Thanks,
Rajesh



Re: hive concurrency not working

2016-08-03 Thread Sergey Shelukhin
Can you elaborate on not working? Is it giving an error, or hanging (and if so, 
does it queue and eventually execute); are you using HS2; what commands/actions 
do the users perform?
Also, what version of Hive is this?

From: Raj hadoop
Reply-To: "user@hive.apache.org"
Date: Wednesday, August 3, 2016 at 06:14
To: "user@hive.apache.org"
Subject: hive concurrency not working

Dear All,

In need of your help.

We have a Hortonworks 4-node cluster, and the problem is that Hive is allowing
only one user at a time.

If a second user needs to log in, Hive does not work.

Could someone please help me with this?

Thanks,
Rajesh


Re: Create table from orc file

2016-08-03 Thread Marcin Tustin
Correct, you need to specify the columns. If you created the file, I assume
you have a record of them.

Someone more familiar with the hive code will have to comment on the
exceptions.

On Wednesday, August 3, 2016, Johannes Stamminger <
johannes.stammin...@airbus.com> wrote:

> But doing so I assume it does not detect the columns on its own; I have to
> specify them manually - or am I wrong? The orc file I finally want to work
> with contains ~28000 columns (513MB size, ~50 rows, 3 structs with 2 of
> them containing ~14000 fields each) ...
>
> The hive documentation for the create table statement shows the columns
> part
> being optional. In fact it seems required, at least I found no way to avoid
> it.
>
>
> For testing purposes I started with a smaller one and found two ways of
> bringing the data to hive. Unfortunately I actually fail on accessing it:
>
>
> a) create external table:
>
> Succeeding statement:
>
> create external table if not exists CFA1_Fan_Speed_DMC(record
> struct) stored as ORC location
> '...';
>
> with the specified location containing my existing orc file, named
> exactly like the table, CFA1_Fan_Speed_DMC.
>
> But every selection for data results in:
>
> Error: java.io.IOException: java.lang.RuntimeException: Char length 256
> out of
> allowed range [1, 255] (state=,code=0)
>
> Tried with:
>  - select * from CFA1_Fan_Speed_DMC;
>  - select record from CFA1_Fan_Speed_DMC;
>  - select record.normalizedTime from CFA1_Fan_Speed_DMC;
>
>
> b) create table and load from file
>
> Succeeding statements:
>
> create table cfa1(record
> struct)
> stored as orc;
>
> load data inpath '.../CFA1_Fan_Speed_DMC' into table cfa1;
>
> Same statements for querying as above (of course using the different table
> name) still fail, but now with:
>
> Error: java.io.IOException: java.io.IOException: ORC does not support type
> conversion from file type bigint (1) to reader type
> struct (1) (state=,code=0)
>
>
>
> So what is wrong with the above?
>
>
> I should mention that I created the orc files using the latest orc-core lib
> (1.1.2). That does not seem to be the same implementation for ORC file access
> as the one used in Hive.
>
>
> Thanks for all hints!
>
>
>
> On Wednesday, 3 August 2016 at 08:45:45 CEST, Marcin Tustin wrote:
> > Yes. Create an external table whose location contains only the orc
> file(s)
> > you want to include in the table.
> >
> > On Wed, Aug 3, 2016 at 7:53 AM, Johannes Stamminger <
> >
> > johannes.stammin...@airbus.com > wrote:
> > > Hi,
> > >
> > >
> > > is it possible to write data to an orc file(s) using the hive-orc api
> and
> > > to
> > > use such by hive (create a table from it)?
> > >
> > >
> > > Regards
>
>

Re: Create table from orc file

2016-08-03 Thread Johannes Stamminger
But doing so I assume it does not detect the columns on its own; I have to
specify them manually - or am I wrong? The orc file I finally want to work
with contains ~28000 columns (513MB size, ~50 rows, 3 structs with 2 of
them containing ~14000 fields each) ...

The Hive documentation for the CREATE TABLE statement shows the columns part
as optional. In fact it seems to be required; at least I have found no way to
avoid it.


For testing purposes I started with a smaller file and found two ways of
bringing the data into Hive. Unfortunately, I fail when trying to access it:


a) create external table:

Succeeding statement:

create external table if not exists CFA1_Fan_Speed_DMC(record
struct) stored as ORC location
'...';

with the specified location containing my existing orc file, named
exactly like the table, CFA1_Fan_Speed_DMC.

But every attempt to select data results in:

Error: java.io.IOException: java.lang.RuntimeException: Char length 256 out of
allowed range [1, 255] (state=,code=0)

Tried with:
 - select * from CFA1_Fan_Speed_DMC;
 - select record from CFA1_Fan_Speed_DMC;
 - select record.normalizedTime from CFA1_Fan_Speed_DMC;


b) create table and load from file

Succeeding statements:

create table cfa1(record struct)
stored as orc;

load data inpath '.../CFA1_Fan_Speed_DMC' into table cfa1;

Same statements for querying as above (of course using the different table
name) still fail, but now with:

Error: java.io.IOException: java.io.IOException: ORC does not support type
conversion from file type bigint (1) to reader type
struct (1) (state=,code=0)



So what is wrong with the above?


I should mention that I created the orc files using the latest orc-core lib
(1.1.2). That does not seem to be the same implementation for ORC file access
as the one used in Hive.
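
For reference, both errors above are consistent with the table schema not
matching the schema stored in the ORC file: the reader takes the types from
the file footer and maps them onto the table definition, so a top-level bigint
in the file cannot be read as a struct, and a char(256) (wherever it comes
from) trips over Hive's CHAR limit of 255. One common cause of the struct
mismatch is wrapping the columns in an extra struct in the DDL: the root of an
ORC file is already a struct of the table columns, so if the file was written
with a schema like struct<normalizedTime:bigint,...>, the table should declare
normalizedTime etc. as columns directly, rather than a single record
struct<...> column.

Hive's ORC dump utility shows what the file actually contains, e.g. (path
hypothetical):

hive --orcfiledump /path/to/CFA1_Fan_Speed_DMC

A hedged sketch of a matching table - assuming, purely for illustration, that
the dump showed struct<normalizedTime:bigint,value:double> - would be:

create external table if not exists CFA1_Fan_Speed_DMC(
  normalizedTime bigint,
  value double)
stored as orc
location '/path/to/dir_containing_only_that_file';

The column names and types above are made up and have to be taken from the
dump output.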


Thanks for all hints!



On Wednesday, 3 August 2016 at 08:45:45 CEST, Marcin Tustin wrote:
> Yes. Create an external table whose location contains only the orc file(s)
> you want to include in the table.
>
> On Wed, Aug 3, 2016 at 7:53 AM, Johannes Stamminger <
>
> johannes.stammin...@airbus.com> wrote:
> > Hi,
> >
> >
> > is it possible to write data to an orc file(s) using the hive-orc api and
> > to
> > use such by hive (create a table from it)?
> >
> >
> > Regards





hive concurrency not working

2016-08-03 Thread Raj hadoop
Dear All,

In need of your help.

We have a Hortonworks 4-node cluster, and the problem is that Hive is allowing
only one user at a time.

If a second user needs to log in, Hive does not work.

Could someone please help me with this?

Thanks,
Rajesh


Malformed orc file

2016-08-03 Thread Igor Kuzmenko
Hello, I've got a malformed ORC file in my Hive table. The file was created by
the Hive Streaming API, and I have no idea under what circumstances it became
corrupted.

File on google drive: link
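
For what it's worth, Hive's ORC dump utility can be pointed at the suspect
bucket file to check whether the postscript and footer are readable at all,
e.g. (path shortened here):

hive --orcfiledump /apps/hive/warehouse/pstn_connections/.../bucket_0

An "Invalid postscript length 0" usually indicates the file was truncated or
never completely written (for example, a streaming client dying before the
file was closed), rather than corruption in the middle of the data.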


Exception message when trying to perform select from table:

ERROR : Vertex failed, vertexName=Map 1,
vertexId=vertex_1468498236400_1106_6_00, diagnostics=[Task failed,
taskId=task_1468498236400_1106_6_00_00, diagnostics=[TaskAttempt 0
failed, info=[Error: Failure while running task:java.lang.RuntimeException:
java.lang.RuntimeException: java.io.IOException:
org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://
sorm-master01.msk.mts.ru:8020/apps/hive/warehouse/pstn_connections/dt=20160711/directory_number_last_digit=5/delta_71700156_71700255/bucket_0.
Invalid postscript length 0
at
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
at
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
at
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
at
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
at
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
at
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException:
org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://
sorm-master01.msk.mts.ru:8020/apps/hive/warehouse/pstn_connections/dt=20160711/directory_number_last_digit=5/delta_71700156_71700255/bucket_0.
Invalid postscript length 0
at
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:196)
at
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:142)
at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:113)
at
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:61)
at
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:326)
at
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
... 14 more
Caused by: java.io.IOException:
org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC file hdfs://
sorm-master01.msk.mts.ru:8020/apps/hive/warehouse/pstn_connections/dt=20160711/directory_number_last_digit=5/delta_71700156_71700255/bucket_0.
Invalid postscript length 0
at
org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at
org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251)
at
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193)
... 19 more
Caused by: org.apache.hadoop.hive.ql.io.FileFormatException: Malformed ORC
file hdfs://
sorm-master01.msk.mts.ru:8020/apps/hive/warehouse/pstn_connections/dt=20160711/directory_number_last_digit=5/delta_71700156_71700255/bucket_0.
Invalid postscript length 0
at
org.apache.hadoop.hive.ql.io.orc.ReaderImpl.ensureOrcFooter(ReaderImpl.java:236)
at
org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:376)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:317)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:238)
at
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1259)
at
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1151)
at
org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:249)
... 20 more

Has anyone encountered such a situation?


Re: Create table from orc file

2016-08-03 Thread Marcin Tustin
Yes. Create an external table whose location contains only the orc file(s)
you want to include in the table.
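
A minimal sketch (the table and column names are made up; the column list has
to match the schema the file was written with):

create external table my_orc_table (
  id bigint,
  payload string)
stored as orc
location '/data/orc/my_orc_table';

The location should be a directory containing only the ORC file(s) you want
the table to expose.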

On Wed, Aug 3, 2016 at 7:53 AM, Johannes Stamminger <
johannes.stammin...@airbus.com> wrote:

> Hi,
>
>
> is it possible to write data to an orc file(s) using the hive-orc api and
> to
> use such by hive (create a table from it)?
>
>
> Regards
> This email (including any attachments) may contain confidential and/or
> privileged information or information otherwise protected from disclosure.
> If you are not the intended recipient, please notify the sender
> immediately, do not copy this message or any attachments and do not use it
> for any purpose or disclose its content to any person, but delete this
> message and any attachments from your system. Astrium and Airbus Group
> companies disclaim any and all liability if this email transmission was
> virus corrupted, altered or falsified.
> -
> Airbus DS GmbH
> Vorsitzender des Aufsichtsrates: Bernhard Gerwert
> Geschäftsführung: Evert Dudok (Vorsitzender), Dr. Lars Immisch, Dr.
> Michael Menking, Dr. Johannes von Thadden
> Sitz der Gesellschaft: München - Registergericht: Amtsgericht München, HRB
> Nr. 107 647
> Ust. Ident. Nr. /VAT reg. no. DE167015356



Create table from orc file

2016-08-03 Thread Johannes Stamminger
Hi,


is it possible to write data to an ORC file (or files) using the hive-orc API,
and then to use it from Hive (create a table from it)?


Regards

Re: unsubscribe

2016-08-03 Thread Lefty Leverenz
To unsubscribe please send a message to user-unsubscr...@hive.apache.org as
described on the Mailing Lists page.
Thanks.

-- Lefty


On Mon, Aug 1, 2016 at 11:13 PM, zhang jp  wrote:

> unsubscribe
>