I think it wants you to type a capital Y, as silly as that may sound...
On Feb 4, 2011, at 7:38 AM, ahmednagy wrote:
>
> I have a cluster with a master and 7 nodes. When I try to start Hadoop, it
> starts the MapReduce processes and the HDFS processes on all the nodes. I
> formatted the HDFS but
ed in multiple availability zones in the us-west and us-east
regions and the experience has been the same. For cc1.4xlarge instances I've
only tested in us-east.
On Tue, Feb 1, 2011 at 7:48 AM, Steve Loughran wrote:
> On 31/01/11 23:22, Aaron Eng wrote:
>
>> Hi all,
>>
>
Hi all,
I was wondering if any of you have had a similar experience working with
Hadoop in Amazon's environment. I've been running a few jobs over the last
few months and have noticed them taking more and more time. For instance, I
was running teragen/terasort/teravalidate as a benchmark and I'v
Pros:
- Easier to build out and tear down clusters vs. using physical machines in
a lab
- Easier to scale up and scale down a cluster as needed
Cons:
- Reliability. In my experience I've had machines die, had machines fail to
start up, had network outages between Amazon instances, etc. These pro
Can you send the mapred-site.xml config for reference? It could be a
formatting issue. I've seen that problem when there was a typo in the XML
after hand-editing.
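For comparison, a minimal well-formed mapred-site.xml looks like the sketch below. The host name and port are placeholders, not values from your cluster; a stray character or a missing close tag anywhere in this file is enough to break it.

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- placeholder host:port; every <property> needs matching close tags -->
    <value>myserver:9001</value>
  </property>
</configuration>
```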
On Tue, Nov 23, 2010 at 10:35 AM, Skye Berghel wrote:
> On 11/19/2010 10:07 PM, Harsh J wrote:
>
>> How are you starting your JobTr
Maybe try doing a "grep -R local " to see if it's picking it up
from somewhere in there. Also, maybe try specifying an actual IP instead of
myserver as a test to see if name resolution is an issue.
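To make the grep suggestion concrete, here is a self-contained sketch; the conf directory and the "local" value in it are hypothetical stand-ins for your own Hadoop conf directory.

```shell
# Build a throwaway conf dir containing a leftover "local" value.
conf_dir=$(mktemp -d)
cat > "$conf_dir/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>local</value>
  </property>
</configuration>
EOF
# -R searches every file under the directory; each hit names the file
# that is still carrying the "local" value.
matches=$(grep -R "local" "$conf_dir")
echo "$matches"
rm -rf "$conf_dir"
```

Run the same grep against your real conf directory; whichever file shows up is the one overriding your setting.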
On Fri, Nov 19, 2010 at 5:56 PM, Skye Berghel wrote:
> I'm trying to set up a Hadoop cluster. Howe
>bin/hadoop jar hadoop-*-examples.jar grep input
output 'dfs[a-z]+'
Have you tried specifying the actual file name instead of using the '*'
wildcard?
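A quick sketch of why the wildcard can misbehave: the shell only expands hadoop-*-examples.jar if a matching file exists in the current directory; otherwise the literal pattern is handed to the hadoop command unchanged. The jar version below (0.20.2) is a hypothetical example, not taken from the thread.

```shell
# Demonstrate glob expansion in an empty throwaway directory.
workdir=$(mktemp -d)
cd "$workdir"
# No matching file yet: the unexpanded pattern is passed through as-is.
before=$(echo hadoop-*-examples.jar)
touch hadoop-0.20.2-examples.jar
# With a matching file present, the glob expands to the real file name.
after=$(echo hadoop-*-examples.jar)
echo "$before vs $after"
cd - >/dev/null
rm -rf "$workdir"
```

So either run the command from the directory that actually contains the examples jar, or spell out its full name.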
On Tue, Nov 9, 2010 at 2:10 PM, Fabio A. Miranda wrote:
> Given a fresh installation, I followed the Single Node Setup doc from
> hadoop websit
Did you set the namenode URI?
2010-11-09 15:38:38,255 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
java.lang.IllegalArgumentException: Invalid URI for NameNode address
(check fs.defaultFS): file:/// has no authority.
You should have some config defined in the core-site.xml file similar t
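The log above points at fs.defaultFS, so the entry it's looking for is an hdfs:// URI with a host (the "authority" the error mentions). A sketch of a minimal core-site.xml, with a placeholder NameNode host and port:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- placeholder host:port; file:/// has no host, hence the error -->
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```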
Hi Fabio,
I found this site extremely helpful in explaining how to do a one node setup
for a first time user:
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29
On Tue, Nov 9, 2010 at 10:54 AM, Fabio A. Miranda wrote:
> Hello,
>
>
> > You don't need 4 mach