I lowered 1073741824 to half of it but am still getting the same issue.
On Wed, Sep 21, 2016 at 6:44 PM, Sanjeev Verma
wrote:
> it's 1073741824 now but I can't see anything running on the client side; the job
> kicked off by the query completed, but HS2 is crashing
>
> On Wed, Sep 21, 2016, Prasanth wrote:
> What is the value for
> hive.fetch.task.conversion.threshold?
>
> Thanks
> Prasanth
> > On Sep 21, 2016, at 6:37 PM, Sanjeev Verma
> wrote:
> >
> > I am getting HiveServer2 OOM even after increasing the heap size from
> 8G to 24G; no clue why it is still going OOM with enough heap size
I am getting HiveServer2 OOM even after increasing the heap size from 8G
to 24G; no clue why it is still going OOM with enough heap size.
"HiveServer2-HttpHandler-Pool: Thread-58026" prio=5 tid=58026 RUNNABLE
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
at org.apache.hadoop.
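For anyone following this thread, a minimal sketch of the two settings under discussion; both are real Hive properties, but the lowered value is illustrative, not a recommendation:

  -- run large SELECTs as a job instead of fetching results inside the HS2 process
  SET hive.fetch.task.conversion=none;
  -- or keep fetch conversion but lower the byte cap (the default is 1073741824 = 1 GiB)
  SET hive.fetch.task.conversion.threshold=268435456;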
Hi
On a hive-1.2.1 ORC-backed table I am running "select * from table where
id=some"; it returns some 40 rows, but when I run "select count(*) from
table where id=" it
returns 14. I tried disabling hive.compute.query.using.stats but no
luck.
Could you please help me find out the issue?
Any help will be much appreciated. Thanks
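A minimal sketch of the usual stats-related checks for this kind of count mismatch; the table name and predicate are placeholders:

  -- answer count(*) by scanning data instead of reading metastore stats
  SET hive.compute.query.using.stats=false;
  SELECT COUNT(*) FROM orc_table WHERE id = 123;
  -- refresh the stats in case a stale rowcount is what the fast path returns
  ANALYZE TABLE orc_table COMPUTE STATISTICS;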
On Tue, Nov 17, 2015 at 2:39 PM, Sanjeev Verma
wrote:
> Thanks Elliot, Eugene
> I am able to see the base file created in one of the partitions; it seems the
> compactor kicked in and created it, but it has not created base files in the
> rest of the partitions.
> Make sure hive.compactor.worker.threads
> <https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.compactor.worker.threads>
> is >0.
>
> Also, see
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/PartitionCompact
> on
> how to trigger compaction manually.
>
> *Eugene*
>
I have enabled Hive transactions and am able to see the delta files created
for some of the partitions, but I do not see any base file created yet. It
seems strange to me to see so many delta files without any base file.
Could somebody let me know when the base file gets created?
Thanks
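A minimal sketch of triggering compaction by hand, per the wiki page linked above; the table name and partition spec are placeholders:

  -- queue a major compaction; a base file appears once the worker finishes it
  ALTER TABLE acid_table PARTITION (ds='2015-11-17') COMPACT 'major';
  -- watch it move through initiated / working / ready for cleaning
  SHOW COMPACTIONS;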
> use 0.12.
>
> From: Sanjeev Verma
> Reply-To: "user@hive.apache.org"
> Date: Tuesday, November 3, 2015 at 02:56
> To: "user@hive.apache.org"
> Subject: hive metastore update from 0.12 to 1.0
>
> Hi
>
> I am trying to update the metastore using schematool but am getting an error.
Any help will be appreciated...
On Tue, Nov 3, 2015 at 4:26 PM, Sanjeev Verma
wrote:
> Hi
>
> I am trying to update the metastore using schematool but am getting an error
>
> schematool -dbType derby -upgradeSchemaFrom 0.12
>
> Upgrade script upgrade-0.12.0-to-0.13.0.derby.sql
Hi
I am trying to update the metastore using schematool but am getting an error:
schematool -dbType derby -upgradeSchemaFrom 0.12
Upgrade script upgrade-0.12.0-to-0.13.0.derby.sql
Error: Table/View 'TXNS' already exists in Schema 'APP'.
(state=X0Y32,code=3)
org.apache.hadoop.hive.metastore.HiveMeta
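Since 'TXNS' already existing suggests part of a newer schema is already in place, a minimal sketch of checking before re-running the upgrade; -info and -dryRun are standard schematool flags:

  # compare the version recorded in the metastore with what the Hive jars expect
  schematool -dbType derby -info
  # rehearse the upgrade scripts without executing them
  schematool -dbType derby -upgradeSchemaFrom 0.12 -dryRun -verbose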
, 2015 at 12:42 PM, Syed Abulullah wrote:
> Hi Sanjeev -
>
> Did you try changing your query to explicitly specify par.*?
>
> create table sample_table AS select *par.** from parquet_table par inner
> join parquet_table_counter ptc ON ptc.user_id=par.user_id;
>
> Thanks
I am creating a table from two partitioned Parquet tables and getting a
"duplicate column" error. Any idea what is going wrong here?
create table sample_table AS select * from parquet_table par inner
join parquet_table_counter ptc ON ptc.user_id=par.user_id;
FAILED: SemanticException [Error 100
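Restating Syed's suggestion above as a runnable sketch: project columns from one side of the join only, so user_id appears just once in the CTAS result:

  CREATE TABLE sample_table AS
  SELECT par.*   -- only one copy of user_id survives into the new table
  FROM parquet_table par
  INNER JOIN parquet_table_counter ptc ON ptc.user_id = par.user_id;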
Even with enough heap, my HiveServer2 is going out of memory. I enabled
heap dump on error, which produced a 650MB dump even though I have
HiveServer2 configured with an 8GB heap.
Here is the stacktrace of the thread that went into OOM; could anybody let
me know why it is throwing OOM?
"pool-2-thread-4"
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)"
>
> which def. indicates that it's not very happy memory-wise. I would def.
> recommend bumping up the memory to see if it helps. If not, we can debug
> further from there.
>
> On Tue, Sep 8, 2015 at 12:17 PM, Sanjeev Verma
> wrote:
What does this exception imply here? How do I identify the problem?
Thanks
On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma
wrote:
> We have an 8GB HS2 Java heap; we have not tried bumping it up.
>
> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@g
We have an 8GB HS2 Java heap; we have not tried bumping it up.
On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> How much memory have you currently provided to HS2? Have you tried bumping
> that up?
>
> On Mon, Sep 7, 2015 at 1:09
could this be our
> problem?
>
> [1] https://issues.apache.org/jira/browse/HIVE-10410
>
> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma
> wrote:
>
>> We are using hive-0.13 with hadoop1.
>>
>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
>> kulkarni.swar...@
We are using hive-0.13 with hadoop1.
On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Sanjeev,
>
> Can you tell me more details about your Hive version, Hadoop version, etc.?
>
> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Ver
Can somebody give me some pointers on where to look?
On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma
wrote:
> Hi
> We are experiencing a strange problem with HiveServer2: in one of the
> jobs it hits "GC overhead limit exceeded" in a mapred task and hangs even
> with enough heap available
Hi
We are experiencing a strange problem with HiveServer2: in one of the
jobs it hits "GC overhead limit exceeded" in a mapred task and hangs even
with enough heap available. We are not able to identify what is causing this
issue. Could anybody help me identify it and let me know what pointers I
need to look into?
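Since the error comes from the mapred task rather than HS2 itself, a minimal sketch of giving the task JVMs more headroom per session; mapred.child.java.opts is the Hadoop 1 property name and the value is only a placeholder:

  -- larger heap for the MR child JVMs launched by this query
  SET mapred.child.java.opts=-Xmx2048m;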