Hi Elkhan,

I ran Spark with --verbose, but the output looked the same to me. What should I be looking for? At the beginning, the system properties that are set are:

System properties:
SPARK_SUBMIT -> true
spark.app.name -> tests.testFileReader
spark.jars -> file:/C:/Users/jbeaudan/Spark/spark-1.3.1-bin-hadoop2.4/sparkTest1.jar
spark.master -> spark://192.168.194.128:7077
Classpath elements:
file:/C:/Users/jbeaudan/Spark/spark-1.3.1-bin-hadoop2.4/sparkTest1.jar

I'm not sure why, but the file paths here seem to be formatted correctly (they are the same from the command terminal and from Cygwin), so does the path get edited somewhere afterwards?

Julien

On 07/17/2015 03:00 PM, Elkhan Dadashov wrote:
Run Spark with --verbose flag, to see what it read for that path.

I guess on Windows, if you are using backslashes, you need two of them (\\), or just use forward slashes everywhere.
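
For example (a rough Scala sketch just to illustrate the escaping, not something from this thread; the same idea applies to paths passed on the command line or in config files):

    // Sketch only: in JVM string literals a single backslash is an escape
    // character, so Windows paths need doubled backslashes; forward slashes
    // are also accepted and normalized by java.io.File on Windows.
    val escaped = "C:\\Users\\jbeaudan\\Spark\\spark-1.3.1-bin-hadoop2.4"
    val forward = "C:/Users/jbeaudan/Spark/spark-1.3.1-bin-hadoop2.4"
    println(new java.io.File(forward).getPath)  // printed with backslashes on Windows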

On Fri, Jul 17, 2015 at 2:40 PM, Julien Beaudan <jbeau...@stottlerhenke.com> wrote:

    Hi,

    I am running a stand-alone cluster on Windows 7, and when I try to
    run any worker on the machine, I get the following error:

    15/07/17 14:14:43 ERROR ExecutorRunner: Error running executor
    java.io.IOException: Cannot run program "C:\cygdrive\c\Users\jbeaudan\Spark\spark-1.3.1-bin-hadoop2.4/bin/compute-classpath.cmd" (in directory "."): CreateProcess error=2, The system cannot find the file specified
            at java.lang.ProcessBuilder.start(Unknown Source)
            at org.apache.spark.util.Utils$.executeCommand(Utils.scala:1067)
            at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:1084)
            at org.apache.spark.deploy.worker.CommandUtils$.buildJavaOpts(CommandUtils.scala:112)
            at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:61)
            at org.apache.spark.deploy.worker.CommandUtils$.buildProcessBuilder(CommandUtils.scala:47)
            at org.apache.spark.deploy.worker.ExecutorRunner.fetchAndRunExecutor(ExecutorRunner.scala:132)
            at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:68)
    Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
            at java.lang.ProcessImpl.create(Native Method)
            at java.lang.ProcessImpl.<init>(Unknown Source)
            at java.lang.ProcessImpl.start(Unknown Source)
            ... 8 more


    I'm pretty sure the problem is that Spark is looking for the
    following path, which mixes forward and back slashes:

    C:\cygdrive\c\Users\jbeaudan\Spark\spark-1.3.1-bin-hadoop2.4/bin/compute-classpath.cmd
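
    I assume the path is being built by simple string concatenation,
    e.g. something roughly like this Scala sketch (my guess; the actual
    CommandUtils code may differ):

        // Sketch only: joining a Windows-style SPARK_HOME with a
        // forward-slash relative path reproduces the string in the error above.
        val sparkHome = """C:\cygdrive\c\Users\jbeaudan\Spark\spark-1.3.1-bin-hadoop2.4"""
        val script = sparkHome + "/bin/compute-classpath.cmd"
        println(script)
        // C:\cygdrive\c\Users\jbeaudan\Spark\spark-1.3.1-bin-hadoop2.4/bin/compute-classpath.cmd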

    Is there any way to fix this?

    (I have also tried running this from a normal terminal instead of
    from Cygwin, and I get the same issue, except that this time the
    path is:

    C:\Users\jbeaudan\Spark\spark-1.3.1-bin-hadoop2.4\bin../bin/compute-classpath.cmd )

    Thank you!

    Julien

--

Best regards,
Elkhan Dadashov
