Are you using PySpark?
If so, you can try setting the PYSPARK_PYTHON and SPARK_HOME environment variables.
Example:

import os
os.environ['PYSPARK_PYTHON'] = 'python path'  # path to the Python interpreter
os.environ['SPARK_HOME'] = 'SPARK path'       # path to the Spark installation

You can try this code; it may resolve the issue.
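To make the suggestion above concrete, here is a minimal sketch. The paths are placeholders (assumptions, not values from this thread); the key point is that the variables must be set before pyspark is imported, because the JVM gateway reads them at startup.

```python
import os

# Hypothetical paths -- substitute the interpreter and Spark
# install locations on your own machine.
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"
os.environ["SPARK_HOME"] = "/opt/spark"

# Set these *before* importing pyspark / creating a SparkSession,
# e.g.:
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.master("local[*]").getOrCreate()
```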
On 2022-09-20 17:34, Bjørn Jørgensen wrote:
Hi, we have a user group at user@spark.apache.org
You must install a Java JRE.
If you are on Ubuntu, you can type:
apt-get install openjdk-17-jre-headless
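After installing, it can help to confirm that the JRE is actually visible to Spark, which resolves Java via JAVA_HOME or PATH. A small verification sketch (assuming a typical Ubuntu setup):

```shell
# Check that a JRE is on PATH and see which one Spark will pick up.
java -version

# Locate the real install path, useful for setting JAVA_HOME if needed.
readlink -f "$(command -v java)"
```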
On Tue, 20 Sep 2022 at 06:15, yogita bhardwaj <
yogita.bhard...@iktara.ai>:
> I am getting the py4j.protocol.Py4JJavaError while
I've created a spark app, which runs fine if I copy the corresponding
jar to the hadoop-server (where yarn is running) and submit it there.
If I try to submit it from my local machine, I get the error which
I've attached below.
Submit cmd: spark-submit.cmd --class
It's because you submitted the job from Windows to a Hadoop cluster running
on Linux. Spark does not support this yet. See
https://issues.apache.org/jira/browse/SPARK-1825
Best Regards,
Shixiong Zhu
2015-01-28 17:35 GMT+08:00 Marco marco@gmail.com:
I've created a spark app, which runs fine if