What did you see in the log? Was there anything related to MapReduce?
Can you log into your HDFS (data) node, use jps to list all Java processes, and 
confirm whether a TaskTracker (or NodeManager) process is running alongside the 
DataNode process?
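
For example, something like this on the data node (host is just a placeholder) 
should list both daemons if MapReduce is actually up:

    $ ssh root@<datanode-host>   # placeholder host; spark-ec2 clusters normally log in as root
    $ jps
    2354 DataNode
    2501 TaskTracker             # would be NodeManager on a YARN cluster
    2770 Jps

(The pids above are only illustrative.) If only DataNode shows up, the MapReduce 
daemons were never started.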
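
The "Cannot initialize Cluster" error quoted below usually means the MapReduce 
client could not find a usable framework, so it may also be worth dumping the 
relevant keys from the ephemeral-hdfs config. The path here is only my guess at 
the spark-ec2 layout:

    # show how the framework / jobtracker address are configured (config path assumed)
    $ grep -A 2 -E 'mapreduce.framework.name|mapreduce.jobtracker.address' \
        ~/ephemeral-hdfs/conf/mapred-site.xml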
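
If you just need the files copied while MapReduce is down, a plain client-side 
copy might be enough as a stopgap; it runs no MapReduce job, but it is 
single-threaded, so only suitable for small amounts of data (bucket and paths 
are placeholders, and it assumes your S3 credentials are already configured):

    $ ~/ephemeral-hdfs/bin/hadoop fs -cp s3n://<bucket>/<path> hdfs:///<target-path>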


-- 
Ye Xianjin
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Monday, September 8, 2014 at 11:13 PM, Tomer Benyamini wrote:

> Still no luck, even when running stop-all.sh followed by start-all.sh.
> 
> On Mon, Sep 8, 2014 at 5:57 PM, Nicholas Chammas
> <nicholas.cham...@gmail.com> wrote:
> > Tomer,
> > 
> > Did you try start-all.sh? It worked for me the last time I tried using
> > distcp, and it worked for this guy too.
> > 
> > Nick
> > 
> > 
> > On Mon, Sep 8, 2014 at 3:28 AM, Tomer Benyamini <tomer....@gmail.com> wrote:
> > > 
> > > ~/ephemeral-hdfs/sbin/start-mapred.sh does not exist on spark-1.0.2;
> > > 
> > > I restarted hdfs using ~/ephemeral-hdfs/sbin/stop-dfs.sh and
> > > ~/ephemeral-hdfs/sbin/start-dfs.sh, but still getting the same error
> > > when trying to run distcp:
> > > 
> > > ERROR tools.DistCp (DistCp.java:run(126)) - Exception encountered
> > > 
> > > java.io.IOException: Cannot initialize Cluster. Please check your
> > > configuration for mapreduce.framework.name and the correspond server
> > > addresses.
> > > 
> > > at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
> > > 
> > > at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
> > > 
> > > at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
> > > 
> > > at org.apache.hadoop.tools.DistCp.createMetaFolderPath(DistCp.java:352)
> > > 
> > > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:146)
> > > 
> > > at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
> > > 
> > > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > > 
> > > at org.apache.hadoop.tools.DistCp.main(DistCp.java:374)
> > > 
> > > Any idea?
> > > 
> > > Thanks!
> > > Tomer
> > > 
> > > On Sun, Sep 7, 2014 at 9:27 PM, Josh Rosen <rosenvi...@gmail.com> wrote:
> > > > If I recall, you should be able to start Hadoop MapReduce using
> > > > ~/ephemeral-hdfs/sbin/start-mapred.sh.
> > > > 
> > > > On Sun, Sep 7, 2014 at 6:42 AM, Tomer Benyamini <tomer....@gmail.com>
> > > > wrote:
> > > > > 
> > > > > Hi,
> > > > > 
> > > > > I would like to copy log files from s3 to the cluster's
> > > > > ephemeral-hdfs. I tried to use distcp, but I guess mapred is not
> > > > > running on the cluster - I'm getting the exception below.
> > > > > 
> > > > > Is there a way to activate it, or is there a spark alternative to
> > > > > distcp?
> > > > > 
> > > > > Thanks,
> > > > > Tomer
> > > > > 
> > > > > mapreduce.Cluster (Cluster.java:initialize(114)) - Failed to use
> > > > > org.apache.hadoop.mapred.LocalClientProtocolProvider due to error:
> > > > > Invalid "mapreduce.jobtracker.address" configuration value for
> > > > > LocalJobRunner : "XXX:9001"
> > > > > 
> > > > > ERROR tools.DistCp (DistCp.java:run(126)) - Exception encountered
> > > > > 
> > > > > java.io.IOException: Cannot initialize Cluster. Please check your
> > > > > configuration for mapreduce.framework.name and the correspond server
> > > > > addresses.
> > > > > 
> > > > > at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
> > > > > 
> > > > > at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
> > > > > 
> > > > > at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
> > > > > 
> > > > > at 
> > > > > org.apache.hadoop.tools.DistCp.createMetaFolderPath(DistCp.java:352)
> > > > > 
> > > > > at org.apache.hadoop.tools.DistCp.execute(DistCp.java:146)
> > > > > 
> > > > > at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
> > > > > 
> > > > > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > > > > 
> > > > > at org.apache.hadoop.tools.DistCp.main(DistCp.java:374)
> > > > > 
> > > > 
> > > > 
> > > 
> > > 
> > 
> > 
> 
> 

