Hey!
I have a question.
If I copy a file onto the HDFS file system, it will get split into blocks and
the NameNode will keep all that metadata.
How can I see that info?
I copied a 5 GB file onto HDFS, but I only see the whole file on the NameNode;
it does not seem to get split into blocks.
How can I see the blocks?
One way: opening the file in the NameNode web UI and looking at the bottom of
the page will show you all block locations, split by split.
On Sat, May 21, 2011 at 11:46 AM, praveenesh kumar praveen...@gmail.com wrote:
> Hey!
> I have a question.
> If I copy a file onto the HDFS file system, it will get split
Good job. I brought this up in another thread, but was told it was not a
problem. Good thing I'm not crazy.
On Sat, May 21, 2011 at 12:42 AM, Joe Stein
charmal...@allthingshadoop.com wrote:
I came up with a nice little hack to trick Hadoop into calculating disk
usage with df instead of du.
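The thread doesn't include the hack itself, but the idea reduces to answering Hadoop's `du -sk <dir>` call with numbers taken from `df`. A minimal sketch, where the function name `fast_du` and the exact wiring into Hadoop's PATH are illustrative assumptions, not Joe's actual script:

```shell
# Hedged sketch of the df-for-du trick. Hadoop shells out to
# `du -sk <dir>` to size each data directory, which walks the whole
# tree; df reads the filesystem's usage counters in O(1) instead.
# The trade-off: you get the partition's used space, not the
# directory's, which is acceptable when the partition is dedicated
# to HDFS data.
fast_du() {
  dir="$1"
  # second line of POSIX `df -kP` output, third column: used KB on the partition
  used=$(df -kP "$dir" | awk 'NR==2 {print $3}')
  # mimic `du -sk` output: "<kilobytes>\t<path>"
  printf '%s\t%s\n' "$used" "$dir"
}
```

In practice this would be saved as a script named `du` placed ahead of `/usr/bin/du` on the DataNode process's PATH, so Hadoop picks it up transparently.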
Does this copy text bother anyone else? Sure, winning any award is great, but
does Hadoop want to be associated with innovation like WikiLeaks?
"[Only] through the free distribution of information, the guaranteed integrity
of said information and an aggressive system of checks and balances"
Hi,
Although I like the thought of doing things smarter, I'm never, ever
going to change core Unix/Linux applications for the sake of a
specific application. Linux handles scripts and binaries completely
differently with regard to security. So how do you know for sure (I
mean 100% sure, not just
Another way is to execute this command:
hadoop fsck <file path> -files -blocks -locations
-Bharath
From: Harsh J ha...@cloudera.com
To: common-user@hadoop.apache.org
Sent: Saturday, May 21, 2011 6:45 AM
Subject: Re: How to see block information on NameNode ?
One way:
Hi,
I'm running a job with maps only, and at the end of each map
(i.e. in the close() function) I want to open the file that the current map
has written through its OutputCollector.
I know job.getWorkingDirectory() would give me the parent path of the
file written, but how do I get the full path or the name of the file?
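Not a full answer, but one piece of it: with the default FileOutputFormat naming, a map task's output file is named part-NNNNN from the task's partition number (available inside the task via the mapred.task.partition property). A hedged sketch of just the name construction — the class and helper below are illustrative, not a Hadoop API; this mirrors the zero-padded formatting Hadoop's default output format uses:

```java
import java.text.NumberFormat;

public class PartName {
    // Build the default "part-NNNNN" output file name from a task's
    // partition number, zero-padded to five digits.
    static String partFileName(int partition) {
        NumberFormat nf = NumberFormat.getInstance();
        nf.setMinimumIntegerDigits(5); // pad to 5 digits, e.g. 00000
        nf.setGroupingUsed(false);     // no thousands separators
        return "part-" + nf.format(partition);
    }

    public static void main(String[] args) {
        System.out.println(partFileName(0)); // prints part-00000
    }
}
```

The full path would then be the job's output directory joined with this name; note that reading your own output from close() also assumes the writer has been flushed and closed first.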
I'm trying to sort Sequence files using the Hadoop example TeraSort, but
after a couple of minutes the output is empty.
HDFS has the following Sequence files:
-rw-r--r-- 1 Hadoop supergroup 196113760 2011-05-21 12:16
/user/Hadoop/out/part-0
-rw-r--r-- 1 Hadoop supergroup 250935096
What if you run a MapReduce program to generate a SequenceFile from your
text file, where the key is the line number and the value is the whole line?
Then, for the second job, the splits are done record-wise, so each mapper
will get a split/block of records [lineNumber, line].
~Cheers,
Mark
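The first job Mark describes can be sketched in plain Java, with the Hadoop plumbing (Mapper, SequenceFile.Writer) omitted; the class and method names below are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class LineNumbering {
    // Pair each line with its 1-based line number, mimicking the
    // (key = line number, value = whole line) records the first job
    // would emit into a SequenceFile.
    static List<String[]> numberLines(List<String> lines) {
        List<String[]> records = new ArrayList<>();
        long lineNo = 1;
        for (String line : lines) {
            records.add(new String[] { Long.toString(lineNo++), line });
        }
        return records;
    }

    public static void main(String[] args) {
        for (String[] rec : numberLines(List.of("first line", "second line"))) {
            System.out.println(rec[0] + "\t" + rec[1]);
        }
    }
}
```

In the real job each record would be written as (LongWritable, Text); because SequenceFile records are self-delimiting, the second job's input splits then fall on record boundaries as described.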
On Fri, May 20, 2011 at 9:02 AM, Matyas Markovics
markovics.mat...@gmail.com wrote:
Hi,
I am trying to get JVM metrics from the new version of Hadoop.
I have read the migration instructions and come up with the following
content for
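The message is cut off before the configuration itself, but for context, a minimal hadoop-metrics2.properties fragment that writes JVM metrics to a file might look like this — the sink name "file" and the filename are illustrative; FileSink is part of the metrics2 framework the migration instructions cover:

```properties
# register a file sink for all metric prefixes
*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
# write the NameNode's metrics (including jvm.*) to this file
namenode.sink.file.filename=namenode-metrics.out
# polling period in seconds
*.period=10
```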