Hey Thanh,
Following the recent project split, the bin folder is still in the
common project. While it's possible to start the namenode and
datanode(s) directly from Eclipse, the general workaround is to just
copy the bin folder from a checkout of the common/core project into the
hdfs folder.
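Something along these lines should work (the checkout paths here are just examples; substitute your own):

```shell
# Hypothetical checkout locations -- adjust to wherever your trees live.
COMMON_SRC=~/src/hadoop-common
HDFS_SRC=~/src/hadoop-hdfs

# Copy the scripts from the common checkout into the hdfs tree...
cp -r "$COMMON_SRC/bin" "$HDFS_SRC/"

# ...then start the daemons from there as usual.
"$HDFS_SRC/bin/hadoop-daemon.sh" start namenode
"$HDFS_SRC/bin/hadoop-daemon.sh" start datanode
```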
Thanh-
If you would like to run the tests that have been instrumented
to use the fault injection framework, the ant target is
run-test-hdfs-fault-inject. These were used extensively in the recent
append work and there are quite a few append-related tests. Was there
something more specific?
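From the top of the hdfs tree it's just the usual ant invocation; narrowing to one test class via -Dtestcase should work the same way it does for the regular test targets (the class name below is only an example):

```shell
# Run the whole instrumented fault-injection suite (can take a while):
ant run-test-hdfs-fault-inject

# Or, to run a single instrumented test class:
ant run-test-hdfs-fault-inject -Dtestcase=TestFiHFlush
```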
>Would be great if someone wrote some tools that, given a block ID, tracked
>the life of the file that contained it (including renames of containing
> dirs, etc). Shouldn't be too difficult.
There's a tool for this in MapRed's contrib section under
block_forensics. It was released in 21, I believe.
Is the 2NN reachable at http://10.1.1.5:50090? This is the addr the NN
is being told to grab the merged image from. There can be problems
with VIPs, etc. if this address is not reachable.
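A quick way to check: the NN fetches the merged image over plain HTTP, so hitting the 2NN's HTTP port from the NN host is a reasonable proxy for reachability (address taken from the question above):

```shell
# Run this from the NN host; if the headers come back, the NN can
# reach the 2NN's HTTP server.
curl -sI http://10.1.1.5:50090/ | head -1
```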
On Thu, Jan 6, 2011 at 12:57 PM, Tyler Coffin wrote:
>
>
> 2011-01-06 15:52:00,814 INFO
> org.apache.hadoop.
Files that have been rm'ed but not yet expunged are stored in each
user's .Trash folder within their home directory. This is the
safeguard against accidentally deleting files; adding a prompt is a
non-starter.
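To see the behavior in action (the /user/alice paths are made up; note trash is only active when fs.trash.interval is set above 0):

```shell
# With trash enabled, -rm moves the file into the user's trash
# rather than deleting it outright:
hadoop fs -rm /user/alice/report.txt

# The file now lives under .Trash in alice's home directory:
hadoop fs -ls /user/alice/.Trash/Current/user/alice/

# Empty the trash immediately instead of waiting for expiry:
hadoop fs -expunge
```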
On Thu, Jun 9, 2011 at 2:17 AM, Florin P wrote:
> Ok..Thank you...But where the delet
> Posted URL
> master:50070putimage=1&port=50090&machine=0.0.0.0&token=-31:1318804155:0:1328129935000:1328129628242
Have you defined your secondary namenode address? The 2NN is telling
the NN to pull the merged image from http://0.0.0.0:50090.
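Setting it to a concrete host rather than the 0.0.0.0 default should fix what gets advertised back to the NN. A sketch for hdfs-site.xml on the 2NN host (the hostname below is illustrative):

```xml
<!-- hdfs-site.xml on the secondary namenode host. Binding to a real
     address instead of the 0.0.0.0 default changes what the 2NN
     reports in the putimage request. -->
<property>
  <name>dfs.secondary.http.address</name>
  <value>snn.example.com:50090</value>
</property>
```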
On Wed, Feb 1, 2012 at 1:23 PM, Gabriel Rosendorf