You only specify the script on the namenode.
So, you could do something like:
#!/bin/bash
# rack_decider.sh -- maps a datanode's hostname or IP to its rack name
if [ "$1" = "server1.mydomain" -o "$1" = "192.168.0.1" ] ; then
  echo rack1
elif [ "$1" = "server2.mydomain" -o "$1" = "192.168.0.2" ] ; then
  echo rack1
elif [ "$1" = "server3.mydomain" -o "$1" = "192.168.0.3" ] ; then
  echo rack2
else
  echo default-rack
fi
On 03/18/2010 06:21 PM, Michael Thomas wrote:
On 03/17/2010 08:34 PM, Mag Gam wrote:
Well, I didn't really solve the problem. Now I have even more questions.
I came across this script,
http://wiki.apache.org/hadoop/topology_rack_awareness_scripts
but it makes no sense to me! Can someone please try to explain what
it's trying to do?
MikeT
On 03/18/2010 05:41 PM, Mag Gam wrote:
Chris:
This clears up my questions a lot! Thank you.
So, if I have 4 data servers and I want 2 racks, I can do this:
#!/bin/bash
#rack1.sh
echo rack1
#!/bin/bash
#rack2.sh
echo rack2
So, I can do this for 2 of the servers:
topology.script.file.name = rack1.sh
And for the other 2 servers, I can do the same with rack2.sh.
Thanks, Tom!
We just need one more vote from a Hadoop PMC member to release this.
Doug
Tom White wrote:
+1
Based on checking checksums and signatures, and running tests.
Tom
On Fri, Mar 12, 2010 at 2:43 PM, Doug Cutting wrote:
I have created a candidate build for Avro release 1.3.1.
Chan
As you may have seen on the various dev lists, some of the Hadoop
sub-projects such as HBase and Avro have started discussions on their dev
lists about becoming top level Apache projects. This is largely
motivated by the Apache board's continued warnings to Hadoop and
Lucene against becomin
Hadoop identifies the data nodes in your cluster by name and executes your
script with the data node's name as an argument. The expected output of
your script is the name of the rack that node is located on.
The script you referenced takes the node name as an argument ($1) and
crawls through a separate data file that maps each host name or IP address
to a rack, echoing the rack it finds.
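For what it's worth, a lookup-style script along those lines might look
something like this (a sketch only, not the wiki script verbatim; the
topology.data path, its format, and the /default-rack fallback are all
assumptions):

#!/bin/bash
# Sketch of a lookup-based topology script. Assumes a mapping file with
# one "host rack" pair per line, e.g.:
#   192.168.0.1        /rack1
#   server3.mydomain   /rack2
# (rack names are usually written as paths, e.g. /rack1)
MAP=/etc/hadoop/conf/topology.data
DEFAULT=/default-rack

result=""
while [ $# -gt 0 ] ; do
  # The namenode may pass several node names/IPs in one invocation,
  # and expects one rack back for each of them.
  rack=$(awk -v host="$1" '$1 == host { print $2 }' "$MAP")
  shift
  if [ -z "$rack" ] ; then
    result="$result $DEFAULT"
  else
    result="$result $rack"
  fi
done
echo $result

Running it by hand with one or more hostnames or IPs is an easy way to
check the mapping before pointing topology.script.file.name at it.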