[ https://issues.apache.org/jira/browse/SPARK-24475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504163#comment-16504163 ]
Joseph Toth commented on SPARK-24475:
-------------------------------------

It looks like it was the version of Java on this machine: it was a Java 9 JRE. I upgraded to the Java 10 JDK but got the error at the bottom, then downgraded to the Java 8 JDK and it worked. Thanks a bunch for getting back to me!

Working (Java 8):

openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-2-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)

Failing (Java 10):

❯ java --version
openjdk 10.0.1 2018-04-17
OpenJDK Runtime Environment (build 10.0.1+10-Debian-4)
OpenJDK 64-Bit Server VM (build 10.0.1+10-Debian-4, mixed mode)

❯ spark-shell --master 'local[2]'
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2018-06-06 22:08:16 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
Exception in thread "main" java.lang.NullPointerException
  at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256)
  at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896)
  at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895)
  at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895)
  at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895)
  at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918)
  at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337)
  at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336)
  at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64)
  at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336)
  at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908)
  at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002)
  at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997)
  at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579)
  at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
  at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
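In case it helps anyone else landing here: Spark 2.3.x supports only Java 8, so the practical fix is to make sure the JVM Spark launches is a Java 8 one. Below is a minimal sketch of doing that from PySpark itself; the JDK path /usr/lib/jvm/java-8-openjdk-amd64 is an illustrative Debian-style location, not something from this ticket, so substitute your own. PySpark's launcher scripts honor JAVA_HOME when starting the JVM, so setting it before the session is created should be enough:

import os

# Illustrative path (assumption): point JAVA_HOME at a Java 8 JDK before
# PySpark spawns its JVM. Find the real path on your system with e.g.
# `update-alternatives --list java`.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"

from pyspark.sql import SparkSession

# The launched JVM inherits the environment, including JAVA_HOME.
spark = SparkSession.builder.appName("JSONRead").getOrCreate()

Setting JAVA_HOME in the shell before running spark-shell or pyspark accomplishes the same thing; the in-script variant is just convenient when you cannot control the launching environment (e.g. a notebook server).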
> Nested JSON count() Exception
> -----------------------------
>
>                 Key: SPARK-24475
>                 URL: https://issues.apache.org/jira/browse/SPARK-24475
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.3.0
>            Reporter: Joseph Toth
>            Priority: Major
>
> I have a nested-structure JSON file with only 2 rows.
>
> {{spark = SparkSession.builder.appName("JSONRead").getOrCreate()}}
> {{jsonData = spark.read.json(file)}}
>
> {{jsonData.count()}} will crash with the following exception; {{jsonData.head(10)}} works.
>
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
>     exec(code_obj, self.user_global_ns, self.user_ns)
>   File "<ipython-input-46-ef7220990d92>", line 1, in <module>
>     jsonData.count()
>   File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/dataframe.py", line 455, in count
>     return int(self._jdf.count())
>   File "/usr/local/lib/python3.6/dist-packages/py4j/java_gateway.py", line 1160, in __call__
>     answer, self.gateway_client, self.target_id, self.name)
>   File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/utils.py", line 63, in deco
>     return f(*a, **kw)
>   File "/usr/local/lib/python3.6/dist-packages/py4j/protocol.py", line 320, in get_return_value
>     format(target_id, ".", name), value)
> py4j.protocol.Py4JJavaError: An error occurred while calling o411.count.
> : java.lang.IllegalArgumentException
>   at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>   at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>   at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
>   at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
>   at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>   at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>   at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>   at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>   at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>   at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>   at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>   at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
>   at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>   at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>   at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
>   at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
>   at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
>   at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
>   at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>   at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
>   at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
>   at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2770)
>   at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2769)
>   at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>   at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
>   at org.apache.spark.sql.Dataset.count(Dataset.scala:2769)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>   at py4j.Gateway.invoke(Gateway.java:282)
>   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>   at py4j.commands.CallCommand.execute(CallCommand.java:79)
>   at py4j.GatewayConnection.run(GatewayConnection.java:214)
>   at java.base/java.lang.Thread.run(Thread.java:844)
>
> The two rows:
>
> {"insertId":"1x3kn8pg3lweeql","jsonPayload":{"channel":"ORDER-BITF--NEO--USD","ordertype":"Sell","price":30.007999999999999,"quantity":5.2375409399999997,"timestamp":"2017-10-18 03:59:59","total":"157.16812853"},"logName":"projects/m/logs/coinigy-dev","receiveTimestamp":"2017-10-18T03:59:59.911829261Z","resource":{"labels":{"project_id":"m"},"type":"global"},"timestamp":"2017-10-18T03:59:59.911829261Z"}
> {"insertId":"2shvsbg3lt5jpc","jsonPayload":{"channel":"ORDER-BITF--NEO--USD","ordertype":"Sell","price":30,"quantity":353.83487022999998,"timestamp":"2017-10-18 03:59:59","total":"10615.04610690"},"logName":"projects/m/logs/coinigy-dev","receiveTimestamp":"2017-10-18T03:59:59.994692698Z","resource":{"labels":{"project_id":"m"},"type":"global"},"timestamp":"2017-10-18T03:59:59.994692698Z"}
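For reference, a minimal end-to-end sketch of the reported behavior. The path /tmp/sample.json is a placeholder (not from the ticket) for a file containing the two newline-delimited records above, one JSON object per line; on an unsupported JVM, head() is reported to return rows while count() fails in the ClosureCleaner as shown in the traceback:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("JSONRead").getOrCreate()

# Placeholder path: the two newline-delimited JSON objects from the
# issue description, saved one per line.
jsonData = spark.read.json("/tmp/sample.json")

print(jsonData.head(2))   # reported to work even on Java 9/10
print(jsonData.count())   # reported to raise Py4JJavaError on Java 9/10

The asymmetry is consistent with the stack trace: count() triggers closure cleaning, whose ASM 5 bytecode reader cannot parse Java 9+ class files and throws java.lang.IllegalArgumentException, which is why downgrading to Java 8 makes it work.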