I have Spark 1.5.1 running directly on Windows 7, but I would like to run it
under Console2.
I have JAVA_HOME, SCALA_HOME, and SPARK_HOME set up, and I have verified that
Java and Scala are working properly (checked -version and was able to run
programs). However, when I try to start Spark with "spark-shell", it returns
the following error. I'd appreciate any help.

Windows 7
Console2 2.0.148
Scala 2.10.6 / 2.11.7 (tried switching between them, but it made no difference)
Oracle JDK 1.8.0_20-b26 64-bit

$ spark-shell
/c/ApacheSpark/spark-1.5.1-bin-hadoop2.6/bin/spark-class: cannot make pipe
for process substitution: Function not implemented
/c/ApacheSpark/spark-1.5.1-bin-hadoop2.6/bin/spark-class: cannot make pipe
for process substitution: Function not implemented
/c/ApacheSpark/spark-1.5.1-bin-hadoop2.6/bin/spark-class: line 77:
<("$RUNNER" -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"):
ambiguous redirect
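
From the error, spark-class (around line 77) feeds the output of
org.apache.spark.launcher.Main back into the shell via process substitution,
i.e. `< <("$RUNNER" ... Main "$@")`, and the MSYS-style bash running under
Console2 fails to create the pipe that `<(...)` requires. As a point of
reference (not a patch to spark-class itself), the same NUL-delimited read
loop can be written with a temporary file instead of process substitution;
here `printf` stands in for the real launcher command:

```shell
#!/usr/bin/env bash
# Sketch: read a command's NUL-delimited output without process
# substitution, for shells where <(...) fails with "cannot make pipe".
tmpfile=$(mktemp)
# Stand-in for: "$RUNNER" -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
printf 'one\0two\0three\0' > "$tmpfile"

ARGS=()
while IFS= read -d '' -r ARG; do
  ARGS+=("$ARG")          # collect each NUL-terminated token
done < "$tmpfile"         # plain redirection; no pipe needed
rm -f "$tmpfile"

echo "${#ARGS[@]}"        # → 3
```

Whether editing spark-class this way is advisable is another question; it only
illustrates that the failing construct has a pipe-free equivalent.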




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Running-Apache-Spark-1-5-1-on-console2-tp25271.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
