Hi Prashant,
Thanks also for your advice. It works now: I deleted the data
folder inside hadoop.tmp.dir, ran it again, and the report now shows a total
of 4 nodes.
Thanks and Happy New Year 2012.
Hi Harsh J,
Thanks for your advice. It works now: I deleted the data
folder inside hadoop.tmp.dir, ran it again, and the report now shows a total
of 4 nodes.
Thanks and Happy New Year 2012.
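A minimal sketch of that cleanup, assuming hadoop.tmp.dir is /app/hadoop/tmp (a made-up path, substitute your own) and that the blocks on the affected nodes can be thrown away, since deleting the data folder erases them:

  bin/stop-all.sh                      # on the master
  rm -rf /app/hadoop/tmp/dfs/data      # on each datanode that failed to register;
                                       # dfs/data is the default location under hadoop.tmp.dir
  bin/start-all.sh                     # on the master
  bin/hadoop dfsadmin -report          # should now list all 4 datanodes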
Check your other 3 DNs' logs. It could be that you have not propagated the
configurations properly, or that you have a firewall you need to turn
off/configure, to let the DataNodes communicate with the NameNode.
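A few quick checks along those lines; the host "master", port 9000 and the relative paths are assumptions, so take the real host and port from fs.default.name in your own core-site.xml:

  # run these on a datanode that is missing from the report
  grep -A1 fs.default.name conf/core-site.xml     # is the value identical on every node?
  telnet master 9000                              # does the NameNode RPC port answer, or is a firewall blocking it?
  tail -n 50 logs/hadoop-*-datanode-*.log         # look for connection retries or exceptions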
You can check the (datanode) logs on every system. Most probably the datanodes
are not able to join the namenode.
-P
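One rough way to do that from the master; the node names, log path and error strings below are only examples:

  for h in node1 node2 node3 node4; do
    echo "== $h =="
    ssh $h 'grep -iE "retrying connect|incompatible|exception" /usr/local/hadoop/logs/*-datanode-*.log | tail -n 5'
  done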
Hi,
I have set up a hadoop cluster with 4 nodes. I ran start-all.sh and
checked every node; the tasktracker and datanode are running on each, but when I
run hadoop dfsadmin -report it says this:
Configured Capacity: 30352158720 (28.27 GB)
Present Capacity: 3756392448 (3.5 GB)
DFS Remaining:
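The same report also lists every datanode the namenode can see, which is a quicker way to check whether all 4 registered; a hedged one-liner, assuming the report format of this era, where each datanode section starts with a Name: line:

  bin/hadoop dfsadmin -report | grep -c "^Name:"   # number of datanodes currently reporting in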
Hamed Ghavamnia wrote on 01/01/12 at 05:36:53 -0800:
I found out what was wrong. I had made a really stupid mistake in the
directory name, and it wasn't pointing to the mount point of the new
volume, so the capacity wouldn't change.
But it's still using too much for non-DFS usage; I've set
dfs.datanode.du.reserved to 0, 1000, and 100 bytes, but in
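Non-DFS used is roughly configured capacity minus DFS used minus remaining, i.e. whatever else lives on the same partition, so dfs.datanode.du.reserved will not shrink it. A quick way to see what is actually occupying the volume (the path is the one mentioned later in the thread):

  df -h /media/newhard      # used vs. available on the whole partition
  du -sh /media/newhard     # how much of that is the HDFS data directory itself
  # anything on the partition outside that directory shows up as non-DFS used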
What does the mbean Hadoop:service=DataNode,name=DataNodeInfo on the
datanode show? You should see something like this:
http://dn1.hadoop.apache.org:1006/jmx?qry=Hadoop:service=DataNode,name=DataNodeInfo
{
  "beans": [
    {
      "name": "Hadoop:service=DataNode,name=DataNodeInfo",
I've already added the new volume with dfs.data.dir, and it is added without
any problem. My problem is that the volume I'm adding has 150 GB of free
space, but when I check namenode:50070 it only adds 5 GB to the total
capacity, of which 50% is reserved for non-DFS usage. I've set the
dfs.datanod
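Given what turned out to be the problem later in the thread, it is worth confirming that the directory named in dfs.data.dir really resolves to the new 150 GB filesystem and not to the root disk; /media/newhard is the path mentioned below:

  mount | grep newhard      # is the new disk actually mounted at /media/newhard?
  df -h /media/newhard      # which filesystem backs this path, and how big is it?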
dfsadmin -setSpaceQuota applies to the HDFS filesystem. It doesn't apply to
datanode volumes.
To add a volume, update dfs.data.dir (hdfs-site.xml on the datanode) and restart
the datanode.
Check the datanode log to see if the new volume was activated. You should see
the additional space in the namenode.
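A sketch of that procedure on the datanode; the paths are assumptions, and dfs.data.dir takes a comma-separated list of directories:

  # add the new directory to dfs.data.dir in conf/hdfs-site.xml, e.g.
  #   <value>/app/hadoop/tmp/dfs/data,/media/newhard</value>
  bin/hadoop-daemon.sh stop datanode
  bin/hadoop-daemon.sh start datanode
  grep -i newhard logs/hadoop-*-datanode-*.log     # did the datanode activate the new volume?
  bin/hadoop dfsadmin -report                      # total capacity should grow once it re-registers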
Thanks for the help.
I checked the quotas; it seems they're used for setting the maximum size of
the files inside HDFS, not of the datanode itself. For example, if I
set my dfs.data.dir to /media/newhard (which I've mounted my new hard disk
to), I can't use dfsadmin -setSpaceQuota n /media/new
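For completeness, space quotas are set on directories inside HDFS and count raw (replicated) bytes, so they cannot cap a local datanode path; a small example against a made-up HDFS directory:

  bin/hadoop dfsadmin -setSpaceQuota 10737418240 /user/hamed   # 10 GB of raw space for this HDFS directory
  bin/hadoop fs -count -q /user/hamed                          # show the quota and how much of it remains
  bin/hadoop dfsadmin -clrSpaceQuota /user/hamed               # remove the quota again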