"user@hive.apache.org"
Date: Saturday, November 30, 2019 at 1:40 AM
To: "user@hive.apache.org"
Subject: hive error: "Too many bytes before delimiter: 2147483648"
Hello all,
I ran into a problem while using Hive and would like to ask for your help. The
following is the specific situation.
Platform: hive on spark
Error: java.io.IOException: Too many bytes before delimiter: 2147483648
Description: When using small files of the same format for testing, there is no
problem; but on the full-size input the job fails with the error above.
(2147483648 bytes is 2 GiB: the reader buffered more than Integer.MAX_VALUE
bytes without ever seeing a record delimiter.)
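A hedged first step, not a root-cause fix, assuming the table is a plain
newline-delimited text table (the property is the standard Hadoop one; the
1 MiB value below is only illustrative): cap the record length so a runaway
"line" is dropped instead of being buffered until the 2 GiB limit.
-- skip any record longer than ~1 MiB instead of buffering it indefinitely
set mapreduce.input.linerecordreader.line.maxlength=1048576;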
yes, 2 nodes is very few
On Fri, Nov 15, 2019, 16:37 Sai Teja Desu
wrote:
Thanks for your detailed explanation Pau. The query actually never
returned even after 4 hours; I had to cancel the query. The reason might
be that I have too many small ORC files as input to the Hive table.
Also, you are right that my cluster capacity is very low. But do you suggest
we should keep on increasing the memory settings?
Hi Sai,
Let me summarize some of your data:
You have a 9 billion record table with 4 columns, which should account for
a minimum raw size of about 200 GiB (not including the string column).
You want to select ALL columns from rows with a specific value in a column
which is not partitioned, so Hive has to scan the entire table.
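A hedged sketch of the usual remedy (the table and column names below are
made up, not from your schema): partition the table on the column you filter
by, so the predicate prunes whole directories instead of scanning all the rows.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
CREATE TABLE events_part (id BIGINT, payload STRING)
PARTITIONED BY (event_type STRING)
STORED AS ORC;
INSERT OVERWRITE TABLE events_part PARTITION (event_type)
SELECT id, payload, event_type FROM events;
-- a predicate on event_type now reads a single partition:
-- SELECT * FROM events_part WHERE event_type = 'click';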
Hey Pau,
Thanks for the clarification. Yes, that helped to start the query; however,
the query was taking a huge amount of time to retrieve a few records.
May I know what steps I can take to make this kind of query perform
better? I mean for predicates which do not have partitioning.
Thanks,
Sai.
Hi,
The error is from the AM (Application Master), because it has so
many partitions to orchestrate that it needs lots of RAM.
As Venkat said, try increasing tez.am.resource.memory.mb to 2G, even 4 or 8
might be needed.
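For example (a 4 GiB AM with a matching heap; both values are illustrative,
and the -Xmx must stay below the container size):
set tez.am.resource.memory.mb=4096;
set tez.am.launch.cmd-opts=-Xmx3276m;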
Cheers,
Pau.
Message from Sai Teja Desu on Thu.,
Nov 14, 2019 at
Thanks for the reply Venkatesh. I did try to increase the Tez container
size to 4 GB but it still gives me the same error. In addition, below are the
settings I have tried:
set mapreduce.map.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx3686m;
set mapreduce.reduce.memory.mb=8192;
set mapreduce.
Try increasing the AM container memory. Set it to 2 GB, maybe.
Regards,
Venkat
On Thu, Nov 14, 2019, 6:46 AM Sai Teja Desu <
saiteja.d...@globalfoundries.com> wrote:
Hello All,
I'm new to Hive development and I'm hitting a memory limitation error when
running a simple query with a predicate which should return only a few
records. Below are the details of the Hive table, query and error. Please
advise me on how to efficiently query on predicates which do not have
partitioning.
Hi,
I'm new to Hive and am trying to use a Python script as a UDF.
I wrote a simple projection function in add.py:
#!/usr/local/python/bin/python
import sys

try:
    line = sys.stdin.readline()
    # split the tab-delimited input row into its two columns
    a, b = line.rstrip("\n").split("\t")
    print a
except:
    # write the error to stderr so it lands in the task logs
    # instead of corrupting the query output
    print >> sys.stderr, sys.exc_info()
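For reference, a minimal sketch of how such a script is usually wired into a
query (the table and column names are made up):
ADD FILE add.py;
SELECT TRANSFORM (col1, col2)
       USING 'python add.py'
       AS (a STRING)
FROM some_table;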
Cannot convert column 4 from struct<> to struct<>.
-Original Message-
From: Gopal Vijayaraghavan [mailto:go...@hortonworks.com] On Behalf Of Gopal
Vijayaraghavan
Sent: Tuesday, June 28, 2016 6:17 PM
To: user@hive.apache.org
Subject: Re: Hive error : Can not convert struct<> to
> PARTITION(state='CA')
> SELECT * WHERE se.adr.st='CA'
> FAILED: SemanticException [Error 10044]: Line 2:23 Cannot insert into
>target table because column number/types are different ''CA'':
The error is bogus, but the issue has to do with the "SELECT *".
Inserts where a partition is specified need the SELECT list to line up,
column for column, with the target table's non-partition columns, so spell
the columns out instead of using "SELECT *".
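A hedged rewrite using the staging schema quoted further down in this thread
(the struct fields are not visible in the archive, so the adr column may
still need converting):
INSERT INTO TABLE employees PARTITION (state = 'CA')
SELECT name, salary, subordinates, deductions, adr
FROM employees_se se
WHERE se.adr.st = 'CA';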
From: kuldeep.chitra...@synechron.com
Sent: Tuesday, June 28, 2016 4:03 PM
To: user@hive.apache.org
Subject: Hive error : Can not convert struct<> to
Hi
I have a staged table as:
hive (revise)> desc employees_se;
OK
name          string
salary        float
subordinates  array
deductions    map
adr           struct
I am trying to insert the data into the partitioned table employees as:
hive (revise)> desc e
Yes, the explain plan definitely only has Move Operators (no Copy
Operators). With that though, this definitely looks like a Hive bug. Does
anyone know if there is a corresponding HIVE ticket or a workaround for the
issue? Thanks!
Stage: Stage-3
  Move Operator
    files:
      hdfs direct
> Moving data to:
>s3n://:@my_bucket/a/b/2015-07-30/.hive-staging_hiv
>e_2015-08-04_18-38-47_649_1476668515119011800-1/-ext-1
> Failed with exception Wrong FS:
>s3n://:@my_bucket/a/b/2015-07-30/.hive-staging_hiv
>e_2015-08-04_18-38-47_649_1476668515119011800-1/-ext-10002, expected:
>hdfs://s
Hello,
I have a query that used to work fine previously. I am testing on hive 1.1
now and it is failing. The AWS access and secret key have permissions to
read and write data to this directory. The directory exists.
hive -e " insert overwrite directory
's3n://:@my_bucket/a/b/2015-07-30' SELECT *
l of nodes. This is WEIRD :( ]
Thanks,
Pratik
-Original Message-
From: Amith sha [mailto:amithsh...@gmail.com]
Sent: Wednesday, March 11, 2015 2:31 PM
To: user@hive.apache.org
Subject: Re: FW: Hive error while starting up services using Ambari
Check your MySQL database to see whether you have created the Hive
metastore database and user or not.
From: Srinivas Thunga [mailto:srinivas.thu...@gmail.com]
Sent: Thursday, March 05, 2015 3:27 PM
To: user@hive.apache.org
Subject: Re: FW: Hive error while starting up services using Ambari
Hi,
Have you created the Hive Metastore?
http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-1-x/CDH5-Installation-Guide/cdh5ig_hive_metastore_configure.html
and then try to start the server
Thanks & Regards,
Srinivas T
On Thu, Mar 5, 2015 at 2:57 PM, Pratik Gadiya
wrote:
Hi,
I am trying to deploy a hadoop cluster using Ambari Blueprint.
All the services are up and running except one, i.e. Hive Server 2.
I tried to look into the logs (/var/log/hive/hiveserver2.log) and it looks
like Hive is trying to access the MySQL service using the username "hive".
However, I think it does not have the right credentials or privileges.
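If the metastore database or grants are missing, a hedged sketch of the
usual MySQL-side setup looks like this (the database name and password are
placeholders; run it in the MySQL shell):
CREATE DATABASE metastore;
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'hivepassword';
FLUSH PRIVILEGES;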
Hi,
I got the following error when I was trying to connect MongoDB with Hive
using the mongo-hadoop connector.
2014-09-16 17:32:24,279 INFO [main]
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
application appattempt_1410858694842_0013_01
2014-09-16 17:32:24,742 FATAL [main]
Could you please show us your query?
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Sep 12, 2013 at 1:49 AM, Siddharth Tiwari wrote:
Hi Team,
I am getting the following error when I am trying to load a csv file into my
hive table:
FAILED: Parse Error: line 1:71 character '' not supported here
Can you please explain what this error is and how to resolve it?
Cheers !!!
Siddharth Tiwari
Have a refreshing day !!!
1:16 PM
To: "user@hive.apache.org"
Subject: FW: Hive Error
Hi Guys,
We've been facing problems using Hive on the cluster. Please find attached the
snapshot of the error that we encountered.
* The Hive error we are facing: Error in metadata: MetaException [PFA:
the snapshot of the terminal]
* We tried a number of things to fix it
I've tried to deserialize your data.
0 = bigint = -6341068275337623706
1 = string = TTFVUFHFH
2 = int = -1037822201
3 = int = -1467607277
4 = int = -1473682089
5 = int = -1337884091
6 = string = I
7 = string = IVH ISH
8 = int = -1321908327
9 = int = -1475321453
10 = int = -1476394752
11 = string =
I am not sure what the issue can be... I had it long back and got no
response. I tried these things:
1. Increased the child JVM heap size.
2. Reduced the number of reducers for the job.
3. Checked whether the disks were getting full while running the query.
4. Checked my data again. I think many
Hi praveenesh kumar:
I am getting the same error today.
Do you have any solution ?
2012/3/23 praveenesh kumar
> Hi all,
>
> I am getting the following error when I am trying to do a select ... with
> group by operation. I am grouping on around 25 columns.
>
> java.lang.RuntimeException:
> org.ap
12 9:48 PM
Subject: HIVE ERROR
hi all,
I have overridden some properties in Hive. I am getting the following error
when executing a query. Is this error due to the overridden properties, or
is the LOCAL FILE SYSTEM out of space?
Overridden properties:
set io.sort.mb=512;
set io.sort.facto
Whenever you execute any query except "select * from tablename", Hive runs a
MapReduce job in the background, for which it needs Hadoop to be properly
configured and proper communication between Hadoop and Hive. The error you
specified happens when Hive is not able to connect to Hadoop properly.
here is t
The query I'm trying to run is: select count(*) from customers;
The table exists and there is data in it. However when I run this command I
get the following: http://imgur.com/sOXXB and the error log shows:
http://imgur.com/5EWrS
Any idea what I'm doing wrong?
Sorry about the logs being pictures,
I get the following ClosedByInterruptException often - but not always - when
running a query with hive.exec.parallel=true. It seems to happen only when 2 MR
jobs are being launched in parallel. I doubt I'm the first person to have seen
this error in this scenario, but googling didn't help me.
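If you just need to unblock the query, a hedged workaround (it sidesteps the
race rather than fixing it) is to turn off parallel stage execution for the
session:
set hive.exec.parallel=false;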
Subject: RE: Hive Error on medium sized dataset
I removed the part of the SerDe that handled the arbitrary key/value pairs and
I was able to process my entire data set. Sadly the part I removed has all the
interesting data.
I'll play more with the heap settings and see if that lets me process the
full data set.
Is that the correct way to set the child heap value?
Thanks,
Pat
From: Christopher, Pat
Sent: Thursday, January 27, 2011 10:27 AM
To: user@hive.apache.org
Subject: RE: Hive Error on medium sized dataset
It will be tricky to clean up the data format as I'm operating on somewhat
arbitrary key-value pairs
my mapred-site.xml:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xm512M</value>
</property>
Is that how I'm supposed to do that?
Thanks,
Pat
From: hadoop n00b [mailto:new2h...@gmail.com]
Sent: Wednesday, January 26, 2011 9:09 PM
To: user@hive.apache.org
Subject: Re: Hive Error on medium sized dataset
We typically get this error while running complex queries on our 4-node
setup when the child JVM runs out of heap size. Would be interested in what
the experts have to say about this error.
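For what it's worth, in that era of Hadoop the child JVM heap is controlled
by a single property; a hedged example (the 1 GiB value is only
illustrative):
set mapred.child.java.opts=-Xmx1024m;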
On Thu, Jan 27, 2011 at 7:27 AM, Ajo Fod wrote:
Any chance you can convert the data to a tab separated text file and try the
same query?
It may not be the SerDe, but it may be good to isolate that away as a
potential source of the problem.
-Ajo.
On Wed, Jan 26, 2011 at 5:47 PM, Christopher, Pat <
patrick.christop...@hp.com> wrote:
Hi,
I'm attempting to load a small to medium sized log file, ~250MB, and produce
some basic reports from it, counts etc. Nothing fancy. However, whenever I
try and read the entire dataset, ~330k rows, I get the following error:
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask