bogao007 commented on PR #47133:
URL: https://github.com/apache/spark/pull/47133#issuecomment-2253557239

   @HyukjinKwon I got some other dependency errors for the tests running on YARN and K8s:
   ```
   [info] - run Python application in yarn-client mode *** FAILED *** (4 seconds, 30 milliseconds)
   [info]   FAILED did not equal FINISHED WARNING: Using incubator modules: jdk.incubator.vector
   [info]   23:36:43.052 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   [info]   
   [info]   Traceback (most recent call last):
   [info]     File "/home/runner/work/spark/spark/target/tmp/spark-f738b923-9285-4c53-8678-bae00510c9ff/test.py", line 6, in <module>
   [info]       from pyspark import SparkConf , SparkContext
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/__init__.py", line 129, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/__init__.py", line 43, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 37, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 44, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 42, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/streaming/__init__.py", line 21, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/streaming/stateful_processor.py", line 21, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/streaming/stateful_processor_api_client.py", line 23, in <module>
   [info]     File "<frozen zipimport>", line 259, in load_module
   [info]     File "/home/runner/work/spark/spark/python/lib/pyspark.zip/pyspark/sql/streaming/StateMessage_pb2.py", line 23, in <module>
   [info]   ModuleNotFoundError: No module named 'google'
   ```
   Do you know where I should specify the new dependency I added? I found this [blog](https://www.databricks.com/blog/2020/12/22/how-to-manage-python-dependencies-in-pyspark.html), but it might not be related to this unit test failure?
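   For context: the generated `StateMessage_pb2.py` imports from `google.protobuf`, so `ModuleNotFoundError: No module named 'google'` suggests `protobuf` is missing from the Python environment these YARN/K8s tests run with. For reference, here is a minimal sketch of what that blog describes (packing a venv that includes `protobuf` and shipping it via `spark.archives`); the venv and archive names are placeholders, and I'm not sure this applies to the SBT-driven integration tests:
   ```
   # Sketch of the venv-pack approach from the linked blog, not a verified
   # fix for this CI failure. Build and pack the environment first (shell):
   #   python -m venv pyspark_venv
   #   source pyspark_venv/bin/activate
   #   pip install protobuf venv-pack
   #   venv-pack -o pyspark_venv.tar.gz
   import os
   from pyspark.sql import SparkSession

   # Use the interpreter from the unpacked archive on the cluster.
   os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"

   spark = (
       SparkSession.builder
       .master("yarn")
       # '#environment' names the directory the archive is unpacked into.
       .config("spark.archives", "pyspark_venv.tar.gz#environment")
       .getOrCreate()
   )
   ```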

