Hi Manoj
You can get the file in a readable format using
hadoop fs -text
Provided you have the lzo codec listed in the property 'io.compression.codecs' in
core-site.xml.
A 'hadoop fs -ls' command would itself display the file size.
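For example (a sketch; the input/part-8.lzo path is the one from your earlier mail):

    # Print the decompressed contents via the configured codecs.
    hadoop fs -text input/part-8.lzo | head

    # List the file; the size shown is the size as stored in HDFS.
    hadoop fs -ls input/part-8.lzo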
Regards
Bejoy KS
Sent from handheld, please excuse typos.
----- Original Message -----
Hi Bejoy,
'hadoop fs -ls' is not displaying the file size. Is there any other way to
find the original file size?
Thanks in advance.
Cheers!
Manoj.
On Sun, Oct 21, 2012 at 1:47 PM, Bejoy KS wrote:
> Hi Manoj
>
> You can get the file in a readable format using
> hadoop fs -text
>
> Provided you have the lzo codec listed in the property 'io.compression.codecs' in
> core-site.xml.
On 17.10.2012 at 20:28, Vinod Kumar Vavilapalli wrote:
Speculative execution is a per-job concept, so in the 2.* release line, it
is the MR AM's responsibility.
Because there is two-level scheduling - one at the RM and one at the AM - AMs
have no way of figuring out whether there are other jobs or not.
AM can
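Since speculation is a per-job decision, it can also be toggled per job. A minimal sketch, assuming a ToolRunner-based job packaged as job.jar with main class com.example.MyJob (both hypothetical); the mapreduce.*.speculative names are the MRv2 property names:

    # Disable speculative execution for the map and reduce tasks of one job;
    # the -D options are picked up via GenericOptionsParser/ToolRunner.
    hadoop jar job.jar com.example.MyJob \
      -Dmapreduce.map.speculative=false \
      -Dmapreduce.reduce.speculative=false \
      /input /output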
On Oct 21, 2012, at 1:45 AM, Lin Ma wrote:
> Thanks for the detailed reply, Mike. Yes, most of my confusion has been
> resolved by you. The last two questions (or comments) are to confirm that my
> understanding is correct:
>
> - is it a normal use case or best practice for a job to consume/read the
>
I'm running CDH 4 on a 4-node cluster, each node with 96 GB of RAM. Up until last
week the cluster was running fine, until there was an error in the name node log file
and I had to reformat it and put the data back.
Now when I run Hive on YARN, I keep getting a Java heap space error. Based on
the research I did
Hi Bejoy,
I am sorry. I am able to see the file size of the compressed one, but I am
trying to find what the size of the file would be if it were not compressed,
without extracting the whole set of files.
Cheers!
Manoj.
On Sun, Oct 21, 2012 at 3:28 PM, Manoj Babu wrote:
> Hi Bejoy,
>
> 'hadoop fs -ls' is not displaying the file size. Is there any other way to
> find the original file size?
From a Java heap perspective, if you don't want huge full GC pauses,
avoid going over 2GB.
There are no simple answers on how many facts can be loaded into a rule
engine. If you want to learn more, email me directly. The Hadoop mailing list
isn't an appropriate place to get into the weeds of how to build
ef
Regards, Subash.
Can you share more information about your YARN cluster?
----- Original Message -----
From: Subash D'Souza
To: user@hadoop.apache.org
Sent: Sun, 21 Oct 2012 09:18:43 -0400 (CDT)
Subject: Java heap space error
I'm running CDH 4 on a 4-node cluster, each node with 96 GB of RAM. Up until last
week the cluster was running fine, until there was an error in the name node log file
and I had to reformat it and put the data back.
Hi,
By using lzop -l, I am able to get the compressed and uncompressed sizes in the
local FS. Is there any way to do the same for files in HDFS?
$:~> lzop -l input/part-8.lzo
method compressed uncompr. ratio uncompressed_name
LZO1X-1 40955533 497723469 8.2% input/part-8
Cheers!
Manoj.
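If you only need the uncompressed size, one workaround (a sketch, not an lzop-native listing) is to stream the file out of HDFS and count the decompressed bytes; this avoids storing an extracted copy, though it still reads and decompresses the whole file:

    # Decompress the HDFS file as a stream and count the bytes;
    # input/part-8.lzo is the path from the listing above.
    hadoop fs -cat input/part-8.lzo | lzop -dc | wc -c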
Try upping the child heap to 1.5GB or more.
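A minimal sketch of one way to do that for a single Hive session (mapred.child.java.opts is the classic MR knob; on MRv2 the finer-grained mapreduce.map.java.opts and mapreduce.reduce.java.opts take precedence if set):

    # Raise the per-task child JVM heap to 1.5GB for this session only.
    hive --hiveconf mapred.child.java.opts=-Xmx1536m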
On Oct 21, 2012, at 8:18 AM, Subash D'Souza wrote:
> I'm running CDH 4 on a 4-node cluster, each node with 96 GB of RAM. Up until last
> week the cluster was running fine, until there was an error in the name node log
> file and I had to reformat it and put the data back
Here's the mapred-site and yarn-site configs

Mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hadoop1.rad.wc.truecarcorp.com:8021</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hadoop1.rad.wc.truecarcorp.com:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hadoop1.rad.wc.truecarcorp.com:19888</value>
</property>
map
Thank you Nitin and Balaji.
I was able to resolve the issue by using the IP address instead.
From: Nitin Pawar
To: user@hadoop.apache.org
Cc: Sundeep Kambhmapati ; "li...@balajin.net"
Sent: Saturday, 20 October 2012 11:39 AM
Subject: R
The HDFS User Guide link returns a 404. See this:
http://hadoop.apache.org/docs/r2.0.2-alpha/
The 404 link is
http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-project-dist/hadoop-hdfs/hdfs_user_guide.html
Can someone fix it?
Can somebody point me to how to set up HDFS 2.0.2? I wanted to set up a
single-node
Hi Adrian,
Please use user@hadoop.apache.org for user-related questions.
Which version of Hadoop are you using? Where do you want the object? In a
map/reduce task? For the currently executing job or for a different job?
In 0.23, you can use the RM webservices.
https://issues.apache.org/jira/bro
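For example, to list the applications known to the ResourceManager (a sketch; 8088 is the default RM web port, and rm-host is a placeholder for your ResourceManager host):

    # Query the RM REST API for the cluster's applications.
    curl http://rm-host:8088/ws/v1/cluster/apps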