Suresh-
Thanks for the tips, I'll check those functions out, and examine plugging in
a different NetworkTopology.
So to clarify, under the current scheme, if we have one block on two local-rack
nodes A and B, it randomly chooses between those? I.e., if DataNode A is
serving 20 clients and DataNode B
Hi Aaron,
Presently I am on version 0.20.2.
I debugged the problem for some time but could not find any clue. I wanted to
know whether any of the devs/users have faced this situation in their clusters.
Regards,
Uma
From: Aaron T. Myers [a...@cloudera.com]
Sent: Thursday, J
Currently it sorts the block locations as:
# local node
# local rack node
# random order of remote nodes
See DatanodeManager#sortLocatedBlock(...) and
NetworkTopology#pseudoSortByDistance(...).
You can play around with other policies by plugging in different
NetworkTopology.
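The ordering described above can be sketched as a stable sort on a distance weight, with ties shuffled first so equally distant (remote) nodes come back in random order. This is a minimal illustration under my own assumptions, not the actual DatanodeManager/NetworkTopology code, and the node and rack names are made up:

```java
import java.util.*;

// Hedged sketch of the NameNode's block-location ordering:
// local node first, then local-rack nodes, then remote nodes in random order.
public class BlockLocationSort {

    // Distance weight: 0 = same node, 1 = same rack, 2 = remote rack.
    static int weight(String client, String clientRack,
                      String node, String nodeRack) {
        if (node.equals(client)) return 0;
        if (nodeRack.equals(clientRack)) return 1;
        return 2;
    }

    static List<String> sortLocations(String client, String clientRack,
                                      Map<String, String> nodeToRack,
                                      long seed) {
        List<String> nodes = new ArrayList<>(nodeToRack.keySet());
        // Shuffle first; the subsequent stable sort preserves this random
        // order among nodes with equal weight (i.e., the remote nodes).
        Collections.shuffle(nodes, new Random(seed));
        nodes.sort(Comparator.comparingInt(
            n -> weight(client, clientRack, n, nodeToRack.get(n))));
        return nodes;
    }

    public static void main(String[] args) {
        Map<String, String> replicas = new LinkedHashMap<>();
        replicas.put("client", "rack1"); // the client itself holds a replica
        replicas.put("nodeA",  "rack1"); // same rack as the client
        replicas.put("nodeB",  "rack2"); // remote rack
        System.out.println(sortLocations("client", "rack1", replicas, 42L));
    }
}
```

Note that, as discussed in this thread, nothing in this weighting looks at per-node load; nodes at equal distance are only distinguished by the shuffle.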
On Thu, Jan 5, 2012
Hi-
How does the NameNode handle load balancing of non-local reads with multiple
block locations when locality is equal?
I.e., if the client is equidistant (same rack) from two DataNodes hosting the
same block, does the NameNode consider current client count or any other
load indicators when de
Alternatively, it could depend on the replication factor of the file you're
attempting to download. If you're not using replication (which is a
distinct possibility for a small cluster) and the file has a block on the
datanode you shut down... well, I'd expect exceptions such as those you're
encou
Hi
After you stopped one of your data nodes, did you check whether it was
shown as a dead node in the HDFS report? You can view and confirm the same from
http://namenodeHost:50070/dfshealth.jsp in the dead nodes list. A possible
reason for the error is that the datanode is not yet marked as dead.
Reg
Hi Sheesha
For benchmarking purposes there are multiple options available. We mostly
use the JobTracker metrics, readily available from the JobTracker web UI, to
capture MapReduce statistics like
- Timings for atomic phases like map, sort and shuffle, and reduce, as well as
e
Hi All,
I am new to Hadoop. I was able to get 3 datanodes running and working.
I purposefully shutdown one datanode and execute
"bin/hadoop fs -copyFromLocal ../hadoop.sh
/user/coka/somedir/slave02-datanodeDown" to see what happen.
The execution fails with the exception below.
Why is it so?
Thanks
Hi guys,
I am trying to implement some solutions for the small-file problem in HDFS as
part of my project work.
I have my own set of files stored in my Hadoop cluster.
I need a tool or method to test and establish benchmarks for
1. memory, and the performance of read and write operations, etc.
2. performance of map
I'm not sure if Java is using the system's libc resolver, but assuming it is,
you cannot use utilities like nslookup or dig because they use their own
resolvers. Ping usually uses the libc resolver. If you are on Linux, you can
use "getent hosts $hostname" to definitively test the libc resolve
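As a quick way to see what the JVM's own resolver returns (as opposed to what nslookup or dig report), here is a small sketch using the standard java.net.InetAddress API; the hostname is just an example argument:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Prints the address the JVM's resolver returns for a hostname.
// "localhost" is only a placeholder; pass the datanode hostname instead.
public class ResolverCheck {
    public static void main(String[] args) throws UnknownHostException {
        String host = args.length > 0 ? args[0] : "localhost";
        InetAddress addr = InetAddress.getByName(host);
        System.out.println(host + " -> " + addr.getHostAddress());
    }
}
```

If this prints a different address than getent does, the JVM is not going through the same resolution path as libc.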
hi,
I was just going to ask this on hadoop list, but luckily I checked this one
first.
I've also been trying to search the net about backup solutions for HDFS, but
there isn't much information available.
So, I'd dare to say that it hasn't been asked a myriad of times. ;)
I found this question (which i
Hi Bharath / Harsh,
How about this facebook-hadoop :
https://github.com/facebook/hadoop-20
or
https://github.com/gnawux/hadoop-cmri/tree/master/bin
or
http://de-de.facebook.com/note.php?note_id=106157472002
Have you tried any of these? I don't really understand Hadoop too deeply, so
I'm thinki