Hi guys,
I'm having a problem running MapReduce jobs in
Hadoop. Whenever I try to run a MapReduce job, I get the following
exception:
Caused by: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=slave, access=EXECUTE
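For context, this is the standard HDFS permission model at work: the job is running as user "slave" without execute access on some HDFS directory. Two common fixes (the path below is only an example): grant access with something like bin/hadoop fs -chmod -R 755 /user/slave, or, on a test cluster only, disable permission checking in hdfs-site.xml:

<!-- hdfs-site.xml (Hadoop 1.x). A blunt workaround for test clusters;
     fixing directory ownership/permissions is the better option. -->
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>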
Hi Harsh,
When I typed in bin/hadoop jar hadoop-examples-1.0.0.jar wordcount input
output, it just kept hanging and didn't stop, and the log is from the DN
while I was running the job.
Thanks.
On Tue, Jan 31, 2012 at 1:56 PM, Harsh J wrote:
> Did this also lead to a task/job failure, or did you just notice the
> log on the DN after running a job?
Did this also lead to a task/job failure, or did you just notice the
log on the DN after running a job?
On Tue, Jan 31, 2012 at 11:24 AM, Martinus Martinus
wrote:
> Hi,
>
> I have my hadoop clusters running with 1 master and 6 slaves, and when I run
> bin/hadoop jar hadoop-examples-1.0.0.jar wordcount input output, I got
> the following error message from my slave datanode logs :
Hi,
I have my Hadoop cluster running with 1 master and 6 slaves, and when I
run bin/hadoop jar hadoop-examples-1.0.0.jar wordcount input output, I got
the following error message in my slave datanode logs:
java.net.SocketTimeoutException: 63000 millis timeout while waiting for
channel to be r
Ina,
Also, is "namenode" a valid alias for your namenode machine?
Kindest regards.
Ron
On Mon, Jan 30, 2012 at 2:40 PM, Alex Kozlov wrote:
> You need mapred.job.tracker property (in your mapred-site.xml commonly)
>
>
> On Mon, Jan 30, 2012 at 2:37 PM, Ina M wrote:
>
>> Hi all,
>>
>> I am using hadoop 1.0.0 running in Cygwin. I'm getting the following
>> error message when I try to run jobtracker.
Greetings to all-
How do the W3C protocols play a role in the distributed data model through
Hadoop?
Web services:
SOAP: Simple Object Access Protocol
XML: Extensible Markup Language
UDDI: Universal Description, Discovery and Integration
WSDL: Web Services Description Language
You need mapred.job.tracker property (in your mapred-site.xml commonly)
On Mon, Jan 30, 2012 at 2:37 PM, Ina M wrote:
> Hi all,
>
> I am using hadoop 1.0.0 running in Cygwin. I'm getting the following
> error message when I try to run jobtracker.
>
> 12/01/30 17:18:21 FATAL mapred.JobTracker:
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: local
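For reference, Alex's suggestion corresponds to an entry like the one below in mapred-site.xml; the host and port are placeholders for your JobTracker's address. The "Does not contain a valid host:port authority: local" error typically means the property is still at its default value of "local" (the local job runner), which is not a valid host:port for a real jobtracker:

<!-- mapred-site.xml; "master:9001" is a placeholder host:port,
     substitute the machine the JobTracker runs on -->
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>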
Hi all,
I am using Hadoop 1.0.0 running in Cygwin. I'm getting the following
error message when I try to run the jobtracker.
12/01/30 17:18:21 FATAL mapred.JobTracker:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: local
at org.apache.hadoop.net.NetUtils.cre
Three disks, each mounted separately. What you say is true: it will
handle failures better and generally perform better. You'll need to
configure the dfs.datanode.failed.volumes.tolerated parameter in
hdfs-site.xml to make sure the datanode handles a single failed volume
gracefully.
-Joey
On Mon, Jan 3
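Concretely, Joey's per-disk layout would look something like the fragment below in hdfs-site.xml (the mount points are hypothetical; use one directory per physical disk):

<!-- hdfs-site.xml; /data/1 .. /data/3 are hypothetical mount points,
     one per physical disk, comma-separated with no spaces -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
</property>
<property>
  <!-- keep the datanode running if one of the three volumes fails -->
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>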
Given an HDFS slave node setup of 3 disks per node, should I have 3
filesystems (one file system per disk) in my dfs.data.dir listing, or
should I have a single filesystem on a JBOD setup of 3 disks? Googling
this problem suggests using "JBOD" instead of RAID 0, but I'm talking
about two differ
Hi Mohamed,
I got this when I ran bin/hadoop dfs -put single.txt input/ :
12/01/30 17:52:40 INFO hdfs.DFSClient: Exception in createBlockOutputStream
172.16.4.85:50010 java.io.IOException: Bad connect ack with firstBadLink as
172.16.2.130:50010
12/01/30 17:52:40 INFO hdfs.DFSClient: Abandoni