Hey all,

I just did a clean checkout of github.com/apache/spark but failed to start
PySpark. This is what I did:

git clone git@github.com:apache/spark.git; cd spark; build/sbt package; bin/pyspark
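
(Side note on the build: I used the plain package target above. If the Hive
profile is now required for the shell, I'd guess the equivalent would be
something like

build/sbt -Phive package

but that's an assumption on my part; this workflow used to work without it.)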

And got this exception:

(spark-dev) Lis-MacBook-Pro:spark icexelloss$ bin/pyspark
Python 3.6.3 |Anaconda, Inc.| (default, Nov  8 2017, 18:10:31)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
18/06/14 11:34:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
/Users/icexelloss/workspace/upstream2/spark/python/pyspark/shell.py:45: UserWarning: Failed to initialize Spark session.
  warnings.warn("Failed to initialize Spark session.")
Traceback (most recent call last):
  File "/Users/icexelloss/workspace/upstream2/spark/python/pyspark/shell.py", line 41, in <module>
    spark = SparkSession._create_shell_session()
  File "/Users/icexelloss/workspace/upstream2/spark/python/pyspark/sql/session.py", line 564, in _create_shell_session
    SparkContext._jvm.org.apache.hadoop.hive.conf.HiveConf()
TypeError: 'JavaPackage' object is not callable
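
For what it's worth, my reading of this TypeError (just my understanding of
Py4J, so treat it as a guess): Py4J resolves dotted JVM names lazily, and a
class it can't find on the classpath comes back as a JavaPackage instead of
a JavaClass, so calling the package object gives exactly this error. A quick
probe from a plain Python shell with pyspark importable (the app name is
just for illustration):

from py4j.java_gateway import JavaClass
from pyspark import SparkContext

sc = SparkContext("local[1]", "hive-classpath-probe")
# Py4J resolves names lazily: anything it can't find on the JVM classpath
# comes back as a JavaPackage rather than a JavaClass.
hive_conf = sc._jvm.org.apache.hadoop.hive.conf.HiveConf
print(isinstance(hive_conf, JavaClass))
# Prints False when the Hive classes are missing -- and then hive_conf()
# raises "TypeError: 'JavaPackage' object is not callable".
sc.stop()

So it looks like HiveConf simply isn't on the JVM classpath of my build.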

I also tried deleting the Hadoop deps from my ivy2 cache and reinstalling
them, but no luck. I wonder:

   1. I have not seen this before; could this be caused by a recent change
      to head?
   2. Am I doing something wrong in the build process? (A possible
      workaround sketch is below.)
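
In the meantime, a workaround I plan to try (untested on my end, and
assuming the in-memory catalog really does sidestep the HiveConf probe) is
building the session by hand instead of relying on the shell:

from pyspark.sql import SparkSession

# With spark.sql.catalogImplementation=in-memory, Hive support stays off,
# so the session should not need the Hive classes on the classpath.
spark = (SparkSession.builder
         .master("local[1]")
         .config("spark.sql.catalogImplementation", "in-memory")
         .getOrCreate())
print(spark.range(3).count())  # quick smoke test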


Thanks much!
Li
