Well, this means you didn't start a compute cluster, most likely because the
wrong value of mapreduce.jobtracker.address prevented the slave node from
starting its node manager. (I am not familiar with the ec2 script, so I don't
know whether the slave node has a node manager installed or not.)
Can you check the hadoop daemon log on the slave node to see whether the
nodemanager was started but failed, or whether there was no nodemanager to
start? The log file location defaults to
/var/log/hadoop-xxx if my memory is correct.
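If it helps, here is a minimal sketch of that check to run on the slave node. The log directory and file naming are assumptions; spark-ec2 clusters may keep logs under ~/ephemeral-hdfs/logs instead, so adjust HADOOP_LOG_DIR to your install.

```shell
# Sketch (assumption): look for a NodeManager log on the slave and count
# ERROR/FATAL lines in it. If no log exists at all, the daemon was likely
# never started rather than started-and-crashed.
HADOOP_LOG_DIR="${HADOOP_LOG_DIR:-/var/log/hadoop-yarn}"
found=no
for f in "$HADOOP_LOG_DIR"/*nodemanager*.log; do
  [ -e "$f" ] || continue        # glob did not match: no such log file
  found=yes
  echo "== $f =="
  grep -icE 'ERROR|FATAL' "$f"   # count of error/fatal lines in the log
done
[ "$found" = yes ] || echo "no nodemanager log in $HADOOP_LOG_DIR -- it was likely never started"
```

Running jps on the same node first will also tell you directly whether a NodeManager process exists alongside the DataNode.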

Sent from my iPhone

> On Sep 9, 2014, at 0:08, Tomer Benyamini <tomer....@gmail.com> wrote:
> 
> No tasktracker or nodemanager. This is what I see:
> 
> On the master:
> 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
> org.apache.hadoop.hdfs.server.namenode.NameNode
> 
> On the data node (slave):
> 
> org.apache.hadoop.hdfs.server.datanode.DataNode
> 
> 
> 
>> On Mon, Sep 8, 2014 at 6:39 PM, Ye Xianjin <advance...@gmail.com> wrote:
>> What did you see in the log? Was there anything related to mapreduce?
>> Can you log into your hdfs (data) node, use jps to list all Java processes, and
>> confirm whether a tasktracker process (or nodemanager) is running alongside the
>> datanode process?
>> 
>> --
>> Ye Xianjin
>> Sent with Sparrow
>> 
>> On Monday, September 8, 2014 at 11:13 PM, Tomer Benyamini wrote:
>> 
>> Still no luck, even when running stop-all.sh followed by start-all.sh.
>> 
>> On Mon, Sep 8, 2014 at 5:57 PM, Nicholas Chammas
>> <nicholas.cham...@gmail.com> wrote:
>> 
>> Tomer,
>> 
>> Did you try start-all.sh? It worked for me the last time I tried using
>> distcp, and it worked for this guy too.
>> 
>> Nick
>> 
>> 
>> On Mon, Sep 8, 2014 at 3:28 AM, Tomer Benyamini <tomer....@gmail.com> wrote:
>> 
>> 
>> ~/ephemeral-hdfs/sbin/start-mapred.sh does not exist on spark-1.0.2;
>> 
>> I restarted hdfs using ~/ephemeral-hdfs/sbin/stop-dfs.sh and
>> ~/ephemeral-hdfs/sbin/start-dfs.sh, but still getting the same error
>> when trying to run distcp:
>> 
>> ERROR tools.DistCp (DistCp.java:run(126)) - Exception encountered
>> 
>> java.io.IOException: Cannot initialize Cluster. Please check your
>> configuration for mapreduce.framework.name and the correspond server
>> addresses.
>> 
>> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
>> 
>> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
>> 
>> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
>> 
>> at org.apache.hadoop.tools.DistCp.createMetaFolderPath(DistCp.java:352)
>> 
>> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:146)
>> 
>> at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
>> 
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>> 
>> at org.apache.hadoop.tools.DistCp.main(DistCp.java:374)
>> 
>> Any idea?
>> 
>> Thanks!
>> Tomer
>> 
>> On Sun, Sep 7, 2014 at 9:27 PM, Josh Rosen <rosenvi...@gmail.com> wrote:
>> 
>> If I recall, you should be able to start Hadoop MapReduce using
>> ~/ephemeral-hdfs/sbin/start-mapred.sh.
>> 
>> On Sun, Sep 7, 2014 at 6:42 AM, Tomer Benyamini <tomer....@gmail.com>
>> wrote:
>> 
>> 
>> Hi,
>> 
>> I would like to copy log files from s3 to the cluster's
>> ephemeral-hdfs. I tried to use distcp, but I guess mapred is not
>> running on the cluster - I'm getting the exception below.
>> 
>> Is there a way to activate it, or is there a spark alternative to
>> distcp?
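On the Spark-alternative question: since these are plain text files, an ordinary Spark job can do the copy without any MapReduce daemons at all. A sketch, where the bucket, paths, and ~/spark/bin location are placeholders/assumptions based on spark-ec2 defaults:

```shell
# Sketch (assumption): build the one-liner to paste into spark-shell;
# SRC and DST below are placeholder paths, not real buckets.
SRC="s3n://my-bucket/logs/*"
DST="hdfs:///copied-logs"
SNIPPET="sc.textFile(\"$SRC\").saveAsTextFile(\"$DST\")"
echo "$SNIPPET"   # paste this line into ~/spark/bin/spark-shell
# echo "$SNIPPET" | ~/spark/bin/spark-shell   # or run it non-interactively
```

Note this rewrites the files line-by-line rather than doing a byte-for-byte copy the way distcp does, which is usually fine for log files.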
>> 
>> Thanks,
>> Tomer
>> 
>> mapreduce.Cluster (Cluster.java:initialize(114)) - Failed to use
>> org.apache.hadoop.mapred.LocalClientProtocolProvider due to error:
>> Invalid "mapreduce.jobtracker.address" configuration value for
>> LocalJobRunner : "XXX:9001"
>> 
>> ERROR tools.DistCp (DistCp.java:run(126)) - Exception encountered
>> 
>> java.io.IOException: Cannot initialize Cluster. Please check your
>> configuration for mapreduce.framework.name and the correspond server
>> addresses.
>> 
>> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
>> 
>> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
>> 
>> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
>> 
>> at org.apache.hadoop.tools.DistCp.createMetaFolderPath(DistCp.java:352)
>> 
>> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:146)
>> 
>> at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
>> 
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>> 
>> at org.apache.hadoop.tools.DistCp.main(DistCp.java:374)
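The exception points at mapreduce.framework.name. A minimal mapred-site.xml sketch for reference; the conf path and values are assumptions, not taken from the spark-ec2 image:

```xml
<!-- Sketch (assumption): ~/ephemeral-hdfs/conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <!-- "local" runs the job in-process with no JobTracker;
         "yarn" requires a running ResourceManager and NodeManagers -->
    <value>local</value>
  </property>
</configuration>
```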
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>> 
>> 
>> 
