I don't think that is the main issue, because I found that the versions are
the same.
On my machine, which runs Hadoop 1.2.0:
[mahmood@tiger Index]$ java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel-2.5.3.1.el6-x86_64 u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
Can you explain more? To be honest, I am running a third-party script (not mine),
and the developers have no idea about the error.
Do you mean that running hadoop jar indexdata.jar `pwd`/result
hdfs://127.0.0.1:9000/data-Index is better? For that, I get this error:
[mahmood@tiger Index]$
Looks like your Java version is lower than the one used to create the jar
file. Can you recompile and create the jar in your environment, or upgrade
your Java version?
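For example, a hedged way to check the mismatch (class name taken from the
error below; the javac flags are standard):

# which class-file format was the jar built for? (51 = Java 7, 52 = Java 8)
javap -verbose -classpath indexdata.jar IndexHDFS | grep "major version"
# rebuild targeting the runtime's version (Java 7 here):
javac -source 1.7 -target 1.7 IndexHDFS.java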
On Thu, Apr 30, 2015 at 1:20 PM, Mahmood Naderan nt_mahm...@yahoo.com
wrote:
There was a syntax error in the previous post. The correct command is:
You are running it with the java command rather than hadoop jar ... Do you
have a mechanism inside your Java code to find Hadoop, like creating your
own Configuration?
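To illustrate the difference (a minimal sketch, not your actual code; the
class name is hypothetical): launched via hadoop jar, a plain new
Configuration() picks up the cluster settings from the classpath; launched
via plain java, it falls back to local defaults.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCheck {
  public static void main(String[] args) throws Exception {
    // under 'hadoop jar', core-site.xml is on the classpath,
    // so this resolves fs.defaultFS (e.g. hdfs://127.0.0.1:9000)
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Default FS: " + fs.getUri());
  }
}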
On Apr 30, 2015 1:54 AM, Mahmood Naderan nt_mahm...@yahoo.com wrote:
Hi,
When I run the following command, I get an ipc.client error.
There was a syntax error in the previous post. The correct command is:
[mahmood@tiger Index]$ hadoop jar indexdata.jar `pwd`/result
hdfs://127.0.0.1:9000/data-Index
Warning: $HADOOP_HOME is deprecated.
Exception in thread "main" java.lang.UnsupportedClassVersionError: IndexHDFS :
Unsupported major.minor
I found out that the $JAVA_HOME specified in hadoop-env.sh was different from
what java -version reported on the command line, so I fixed the variable to
point to Java 1.7 (the jar file is also built with 1.7).
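For reference, the fix was a line like the following in conf/hadoop-env.sh
(the exact path is system-specific; this one is just an example):

export JAVA_HOME=/usr/lib/jvm/java-1.7.0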
Still, I get the ipc.client error, but this time it sounds different. The whole
output (in verbose mode):
Is there any configuration in MR2 and YARN to limit concurrent max
applications by setting max limit on ApplicationMasters in the cluster?
The reason is that the JSON parsing code is in a third-party library which is
not included in the default MapReduce/Hadoop distribution. You have to
add it to your classpath at *runtime*. There are multiple ways to do it
(the choice also depends on how you plan to run and package/deploy your
code); a sketch follows.
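For example (hedged sketch; the jar names are hypothetical, and -libjars
only works when the driver goes through ToolRunner/GenericOptionsParser):

# option 1: ship the jar with the job
hadoop jar myjob.jar MyDriver -libjars /path/to/json-simple-1.1.1.jar in out
# option 2: also put it on the client-side classpath
export HADOOP_CLASSPATH=/path/to/json-simple-1.1.1.jar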
Hi,
I am working on an assignment on Hadoop MapReduce. I am very new to MapReduce.
The assignment has many sections, but for now I am trying to parse JSON data.
The input (i.e., the value) to the map function is a single record of the form
xyz, {'abc':'pqr1','abc2':'pq1, pq2'}, {'key':'value1'}
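A minimal sketch of a mapper for such records, assuming Jackson as the
third-party parser (class and output names hypothetical; single-quoted JSON
has to be enabled explicitly, and the naive brace split would need a real
tokenizer if values contain nested braces):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonRecordMapper extends Mapper<LongWritable, Text, Text, Text> {
  // tolerate the single quotes in the sample records
  private final ObjectMapper om =
      new ObjectMapper().configure(JsonParser.Feature.ALLOW_SINGLE_QUOTES, true);

  @Override
  protected void map(LongWritable key, Text value, Context ctx)
      throws IOException, InterruptedException {
    // record shape assumed: id, {json object}, {json object}
    String line = value.toString();
    int open = line.indexOf('{');
    int close = line.indexOf('}', open);
    if (open < 0 || close < 0) return;   // skip malformed records
    String id = line.substring(0, open).trim();
    if (id.endsWith(",")) id = id.substring(0, id.length() - 1).trim();
    // parse the first JSON object; emit (id, parsed object)
    JsonNode first = om.readTree(line.substring(open, close + 1));
    ctx.write(new Text(id), new Text(first.toString()));
  }
}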
Take a look at
yarn.scheduler.capacity.maximum-am-resource-percent
On Thu, Apr 30, 2015 at 11:38 AM, Shushant Arora shushantaror...@gmail.com
wrote:
Is there any configuration in MR2 and YARN to limit concurrent max
applications by setting max limit on ApplicationMasters in the cluster?
Hi Alex,
How do I create an external textfile Hive table pointing to /extract/DBCLOC and
specify the CSVSerde?
Thanks
Jay
On Wed, Apr 29, 2015 at 3:43 PM, Alexander Pivovarov apivova...@gmail.com
wrote:
1. Create external textfile hive table pointing to /extract/DBCLOC and
specify CSVSerde
if
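A hedged sketch of step 1 (column names hypothetical; OpenCSVSerde ships
with Hive 0.14+):

CREATE EXTERNAL TABLE dbcloc_ext (col1 STRING, col2 STRING, col3 STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
STORED AS TEXTFILE
LOCATION '/extract/DBCLOC';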
Follow the links I sent you already.
On Apr 30, 2015 11:52 AM, Kumar Jayapal kjayapa...@gmail.com wrote:
Hi Alex,
How do I create an external textfile Hive table pointing to /extract/DBCLOC and
specify the CSVSerde?
Thanks
Jay
On Wed, Apr 29, 2015 at 3:43 PM, Alexander Pivovarov
With Capacity Scheduler, the other useful param would be:
yarn.scheduler.capacity.maximum-applications
http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
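For illustration, both knobs live in capacity-scheduler.xml (the values here
are hypothetical, not the defaults):

<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>1000</value> <!-- max active + pending applications -->
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.2</value>  <!-- at most 20% of cluster resources for AMs -->
</property>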
On Thu, Apr 30, 2015 at 11:52 AM, Prashant Kommireddi prash1...@gmail.com
wrote:
Take a look at
Try to find the file in the HDFS trash.
On Apr 30, 2015 2:14 PM, Kumar Jayapal kjayapa...@gmail.com wrote:
Hi,
I loaded one file into a Hive table; it has a .gz extension. The file was
moved/deleted from HDFS.
When I execute a select command I get an error.
Error: Error while processing statement: FAILED:
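Hedged sketch of the trash check (the trash root is per-user and the exact
layout varies by version/config):

hdfs dfs -ls -R /user/$USER/.Trash
# if found, move it back:
hdfs dfs -mv /user/$USER/.Trash/Current/path/to/file.gz /original/path/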
Curious, did you check fs.defaultFS in the core-site.xml ? Just to make
sure the HDFS port is 9000 and not 8020
-Rajesh
On Thu, Apr 30, 2015 at 4:42 AM, Mahmood Naderan nt_mahm...@yahoo.com
wrote:
I found out that the $JAVA_HOME specified in hadoop-env.sh was different
from java -version in
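For reference, the relevant core-site.xml entry looks like this (the
host/port must match your NameNode; on Hadoop 1.x the key is the older
fs.default.name):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://127.0.0.1:9000</value>
</property>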
Hi,
I loaded one file into a Hive table; it has a .gz extension. The file was
moved/deleted from HDFS.
When I execute a select command I get an error:
Error: Error while processing statement: FAILED: Execution Error, return
code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
(state=08S01,code=2)
how
Thanks!!
Is it for the capacity scheduler only, or is it applicable to the fair scheduler also?
On Fri, May 1, 2015 at 12:22 AM, Prashant Kommireddi prash1...@gmail.com
wrote:
Take a look at
yarn.scheduler.capacity.maximum-am-resource-percent
On Thu, Apr 30, 2015 at 11:38 AM, Shushant Arora
Hi Shushant,
If you use the fair scheduler, you can restrict the number of AMs by configuring the queue:
* maxRunningApps: limits the number of apps from the queue that run at once
* maxAMShare: limits the fraction of the queue's fair share that can be
used to run ApplicationMasters. This property can only be
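A minimal allocation-file sketch (fair-scheduler.xml; the queue name and
values are hypothetical):

<allocations>
  <queue name="etl">
    <maxRunningApps>10</maxRunningApps>
    <maxAMShare>0.3</maxAMShare> <!-- at most 30% of the queue's fair share for AMs -->
  </queue>
</allocations>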
Hello Nitin,
I didn't understand what you mean. Are you telling me to set
COMPRESSION_CODEC=gzip?
thanks
Jay
On Thu, Apr 30, 2015 at 10:02 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
You loaded a .gz file into a table stored as a text file.
Either define the compression format or uncompress the file
You loaded a .gz file into a table stored as a text file.
Either define the compression format or uncompress the file and load it.
On Fri, May 1, 2015 at 9:17 AM, Kumar Jayapal kjayapa...@gmail.com wrote:
Created table CREATE TABLE raw (line STRING) PARTITIONED BY (FISCAL_YEAR
smallint, FISCAL_PERIOD
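Hedged sketch of the uncompress-and-reload route (partition values are
hypothetical; a partitioned table needs a PARTITION clause on LOAD):

gunzip /tmp/weblogs/20090603-access.log.gz
hive -e "LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log' INTO TABLE raw PARTITION (FISCAL_YEAR=2009, FISCAL_PERIOD=6);"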
Alex,
I followed the same steps as mentioned on the site. I load data into the
table, which is created below.
CREATE TABLE raw (line STRING) PARTITIONED BY (FISCAL_YEAR smallint, FISCAL_PERIOD smallint)
STORED AS TEXTFILE;
and loaded it with data:
LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz' INTO TABLE raw;
I have to load it into a Parquet table.
When I say select * from raw
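For the Parquet step, a hedged HiveQL sketch (the target table name is
hypothetical; STORED AS PARQUET needs Hive 0.13+):

CREATE TABLE raw_parquet (line STRING)
PARTITIONED BY (FISCAL_YEAR smallint, FISCAL_PERIOD smallint)
STORED AS PARQUET;

SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE raw_parquet PARTITION (FISCAL_YEAR, FISCAL_PERIOD)
SELECT line, FISCAL_YEAR, FISCAL_PERIOD FROM raw;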
I did not find it in .Trash. The file was moved to the Hive table; I want to
move it back to HDFS.
On Thu, Apr 30, 2015 at 2:20 PM, Alexander Pivovarov apivova...@gmail.com
wrote:
Try to find the file in the HDFS trash.
On Apr 30, 2015 2:14 PM, Kumar Jayapal kjayapa...@gmail.com wrote:
Hi,
I loaded one
Try:
desc formatted table_name;
It shows you the table location on HDFS.
On Thu, Apr 30, 2015 at 2:43 PM, Kumar Jayapal kjayapa...@gmail.com wrote:
I did not find it in .Trash. The file was moved to the Hive table; I want to
move it back to HDFS.
On Thu, Apr 30, 2015 at 2:20 PM, Alexander Pivovarov