Run the script I sent out earlier to fix those errors and bring
everything into compliance with the new rack awareness setup.

On Thu, Mar 22, 2012 at 13:36, Patai Sangbutsarakum wrote:
I restarted the cluster yesterday with rack awareness enabled.
Things went well. I can confirm that there were no issues at all.

Thank you all again.
On Tue, Mar 20, 2012 at 4:19 PM, Patai Sangbutsarakum wrote:
> Thank you all.

On Tue, Mar 20, 2012 at 2:44 PM, Harsh J wrote:
John has already addressed your concern. I'd only like to add that
fixing replication violations does not require your NN to be in safe
mode, and it won't be. Your worry can hence be put to rest. :)

On Wed, Mar 21, 2012 at 2:08 AM, Patai Sangbutsarakum wrote:
Thanks for your reply and the script. Hopefully it still applies to
0.20.203. As far as I can tell from playing with a test cluster, the
balancer takes care of replica placement. I just don't want to fall
into a situation where HDFS sits in safe mode for hours and users
can't use Hadoop and start yelping.
Let's
for f in `hadoop fsck / | grep "Replica placement policy is violated" \
    | head -n8 | awk -F: '{print $1}'`; do
  hadoop fs -setrep -w 4 $f
  hadoop fs -setrep 3 $f
done
On Tue, Mar 20, 2012 at 16:20, Patai Sangbutsarakum wrote:
> Hadoopers!!
>
> I am going to restart hadoop cluster in
Hi Hadoopers,

Currently I am running Hadoop version 0.20.203 in production with 600 TB
in it. I am planning to enable rack awareness in production, but I
haven't seen it through yet.

Plan/questions:
1. I have a script that can resolve each datanode/tasktracker IP to a
rack name.
2
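A minimal script along the lines of step 1 might look like the following sketch; the subnet-to-rack mapping, the function name, and the addresses are purely hypothetical:

```shell
#!/bin/sh
# Hypothetical topology script. Hadoop invokes it with one or more
# datanode/tasktracker addresses as arguments and expects one rack
# path per argument on stdout. The mapping below is made up.
resolve_rack() {
  case "$1" in
    10.20.220.3?) echo "/rack1" ;;  # e.g. 10.20.220.30-39
    10.20.220.7)  echo "/rack2" ;;
    *)            echo "/default-rack" ;;
  esac
}

for node in "$@"; do
  resolve_rack "$node"
done
```

Pointing topology.script.file.name at an executable script of this shape is the usual wiring; unknown hosts fall back to /default-rack.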
On Fri, Jul 2, 2010 at 2:27 PM, Allen Wittenauer wrote:
On Jul 1, 2010, at 7:50 PM, elton sky wrote:
> hello,
>
> I am trying to separate my 6 nodes onto 2 different racks.
> For test purposes, I wrote a bash file which simply returns "rack0"
> all the time. And I added the property "topology.script.file.name" in
> core-site.xml.
rack0 or /rack0?

I think the other issue is that you may have to put in your machine
name, the fully qualified name, and the IP address. I'm not sure which
is getting passed in, so I have three lists that I maintain in the
script.

HTH,
-Mike
> Date: Fri, 2 Jul 2010 12:50:26 +1000
> Subject: problem with rack-a
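Mike's three-list idea above could be sketched roughly like this; the hostnames and IPs below are invented for illustration:

```shell
#!/bin/sh
# Sketch: accept a short hostname, a fully qualified name, or an IP
# for the same machine, since it is unclear which form Hadoop passes
# in. All names and addresses below are made up.
lookup_rack() {
  case "$1" in
    node01|node01.example.com|10.0.1.11) echo "/rack0" ;;
    node02|node02.example.com|10.0.1.12) echo "/rack1" ;;
    *) echo "/default-rack" ;;
  esac
}

for arg in "$@"; do
  lookup_rack "$arg"
done
```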
hello,

I am trying to separate my 6 nodes onto 2 different racks.
For test purposes, I wrote a bash file which simply returns "rack0" all
the time. And I added the property "topology.script.file.name" in
core-site.xml.
When I restart via start-dfs.sh, the namenode could not find any
datanodes at all. All d
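For a single-rack test like the one described, a script that always prints a leading-slash rack name (Mike's "rack0 or /rack0?" point) might look like this sketch:

```shell
#!/bin/sh
# Test topology script that puts every node in a single rack. Rack
# names are path-like, so "/rack0" with the leading slash is the
# safer form compared to a bare "rack0".
rack_for() {
  echo "/rack0"
}

for node in "$@"; do
  rack_for "$node"
done
```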
Feel free to add this here:
http://wiki.apache.org/hadoop/topology_rack_awareness_scripts
On Thu, Nov 19, 2009 at 11:18 AM, Michael Thomas wrote:
Michael Thomas wrote:
> IPs are passed to the rack awareness script. We use 'dig' to do the
> reverse lookup to find the hostname, as we also embed the rack id in
> the worker node hostnames.

It might be nice to have some example scripts up on the wiki, to give
people a good starting place.
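The dig-based approach described above might be sketched as follows; the hostname scheme (rack id embedded as "-rN-") is an assumption, not the actual scheme used:

```shell
#!/bin/sh
# Sketch: reverse-resolve each IP with dig, then pull the rack id out
# of the hostname. Assumes invented names like worker-r3-n12.
rack_from_hostname() {
  rack=$(printf '%s\n' "$1" | sed -n 's/.*-r\([0-9][0-9]*\)-n.*/\1/p')
  if [ -n "$rack" ]; then
    echo "/rack$rack"
  else
    echo "/default-rack"
  fi
}

for ip in "$@"; do
  # dig +short -x returns the PTR name with a trailing dot; strip it.
  host=$(dig +short -x "$ip" | sed 's/\.$//')
  rack_from_hostname "$host"
done
```

Falling back to /default-rack keeps unresolvable or oddly named hosts from breaking the namenode's topology resolution.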
On 11/18/09 10:02 AM, "Edward Capriolo" wrote:
> It was never clear to me what would be needed, IP vs hostname. I
> specified IPs, short hostnames, and long hostnames just to be safe. And
> you know things sometimes change with hadoop ::wink-wink::

IIRC, everything is pretty much passed around as I
On Wed, Nov 18, 2009 at 11:28 AM, Michael Thomas wrote:
IPs are passed to the rack awareness script. We use 'dig' to do the
reverse lookup to find the hostname, as we also embed the rack id in the
worker node hostnames.
--Mike
On 11/18/2009 08:20 AM, David J. O'Dell wrote:
I'm trying to figure out if I should use ip addresse
I'm trying to figure out if I should use IP addresses or DNS names in my
rack awareness script.
It's easier for me to use DNS names, because we have the row and rack
number in the name, which means I can dynamically determine the rack
without having to manually update the list when adding
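Deriving the rack from a name that embeds row and rack, as described above, might look like this sketch; the naming pattern (e.g. dn-row2-rack7.example.com) is an assumption:

```shell
#!/bin/sh
# Sketch: parse row and rack numbers out of a DNS name such as
# dn-row2-rack7.example.com (the naming pattern is invented here).
topology_for() {
  row=$(printf '%s\n' "$1" | sed -n 's/.*row\([0-9][0-9]*\).*/\1/p')
  rack=$(printf '%s\n' "$1" | sed -n 's/.*rack\([0-9][0-9]*\).*/\1/p')
  if [ -n "$row" ] && [ -n "$rack" ]; then
    echo "/row$row/rack$rack"
  else
    echo "/default-rack"
  fi
}

for name in "$@"; do
  topology_for "$name"
done
```

A two-level /rowN/rackM result gives the namenode a deeper topology than a flat /rackM, which is the payoff of encoding both in the hostname.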
On Mon, Aug 24, 2009 at 3:40 AM, Sugandha Naolekar wrote:
Hello!

Below is the Python script I have written:

#!/usr/bin/env python

'''
This script is used by Hadoop to determine network/rack topology. It
should be specified in hadoop-site.xml via topology.script.file.name
Hello!

I have 6 nodes and I want to configure them in racks. Below are the
details of the machines:

Name of the machine   IP             Roles played
namenode              10.20.220.30   namenode
jobsec                10.20.220.31   jobtracker and secondary NN
repository1           10.20.220.35   DN and TT - 1
repository2           10.20.220.7