Re: Reading JSON in Pyspark throws scala.MatchError

2015-10-20 Thread Balaji Vijayan
You are correct, that was the issue.
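
For anyone else who hits this: the JSON reader in Spark 1.4.x expects one
complete, self-contained JSON record per line, so the multi-line record
quoted below has to be collapsed onto a single line before
sqlContext.read.json can infer its schema, e.g.:

{"dataunit": {"page_view": {"nonce": 438058072, "person": {"user_id": 5846}, "page": {"url": "http://mysite.com/blog"}}}, "pedigree": {"true_as_of_secs": 1438627992}}

If reflowing the data isn't practical, reading the files whole (e.g. with
sc.wholeTextFiles) and parsing each document yourself is a common workaround.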

On Tue, Oct 20, 2015 at 10:18 PM, Jeff Zhang  wrote:

> BTW, I think the JSON parser should validate the JSON format, at least
> when inferring the schema.
>
> On Wed, Oct 21, 2015 at 12:59 PM, Jeff Zhang  wrote:
>
>> I think this is due to the JSON file format. DataFrame can only accept
>> JSON files with one valid record per line; a record spread over multiple
>> lines is invalid input for DataFrame.
>>
>>
>> On Tue, Oct 6, 2015 at 2:48 AM, Davies Liu  wrote:
>>
>>> Could you create a JIRA to track this bug?
>>>
>>> On Fri, Oct 2, 2015 at 1:42 PM, balajikvijayan
>>>  wrote:
>>> > Running Windows 8.1, Python 2.7.x, Scala 2.10.5, Spark 1.4.1.
>>> >
>>> > I'm trying to read in a large quantity of json data in a couple of
>>> > files and I receive a scala.MatchError when I do so. Json, Python and
>>> > stack trace all shown below.
>>> >
>>> > Json:
>>> >
>>> > {
>>> >   "dataunit": {
>>> >     "page_view": {
>>> >       "nonce": 438058072,
>>> >       "person": {
>>> >         "user_id": 5846
>>> >       },
>>> >       "page": {
>>> >         "url": "http://mysite.com/blog"
>>> >       }
>>> >     }
>>> >   },
>>> >   "pedigree": {
>>> >     "true_as_of_secs": 1438627992
>>> >   }
>>> > }
>>> >
>>> > Python:
>>> >
>>> > import pyspark
>>> > sc = pyspark.SparkContext()
>>> > sqlContext = pyspark.SQLContext(sc)
>>> > pageviews = sqlContext.read.json("[Path to folder containing file with above json]")
>>> > pageviews.collect()
>>> >
>>> > Stack Trace:
>>> > Py4JJavaError: An error occurred while calling
>>> > z:org.apache.spark.api.python.PythonRDD.collectAndServe.
>>> > : org.apache.spark.SparkException: Job aborted due to stage failure:
>>> > Task 1 in stage 32.0 failed 1 times, most recent failure: Lost task 1.0
>>> > in stage 32.0 (TID 133, localhost): scala.MatchError:
>>> > (VALUE_STRING,ArrayType(StructType(),true)) (of class scala.Tuple2)
>>> > at org.apache.spark.sql.json.JacksonParser$.convertField(JacksonParser.scala:49)
>>> > at org.apache.spark.sql.json.JacksonParser$$anonfun$parseJson$1$$anonfun$apply$1.apply(JacksonParser.scala:201)
>>> > at org.apache.spark.sql.json.JacksonParser$$anonfun$parseJson$1$$anonfun$apply$1.apply(JacksonParser.scala:193)
>>> > at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>>> > at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>> > at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>> > at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:116)
>>> > at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>> > at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:111)
>>> > at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>> > at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>> > at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>> > at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>>> > at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.to(SerDeUtil.scala:111)
>>> > at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>> > at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.toBuffer(SerDeUtil.scala:111)
>>> > at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>> > at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.toArray(SerDeUtil.scala:111)
>>> > at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
>>> > at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
>>> > at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
>>> > at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
>>> > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
>>> > at org.apache.spark.scheduler.Task.run(Task.scala:70)
>>> > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>>> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> > at java.lang.Thread.run(Thread.java:745)
>>> >
>>> > Driver stacktrace:
>>> > at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
>>> > at org

Troubleshooting "Task not serializable" in Spark/Scala environments

2015-09-21 Thread Balaji Vijayan
Howdy,

I'm a relative novice at Spark/Scala, and I'm puzzled by behavior that I'm
seeing in two of my local Spark/Scala environments (Scala for Jupyter and
Scala IDE) but not in the third (Spark Shell). The code below throws the
stack trace below in the first two environments but executes successfully in
the third. I'm not sure how to go about troubleshooting the first two
environments, so any assistance is greatly appreciated.

Code:

// get file
val logFile = "s3n://file"
val logData = sc.textFile(logFile)
// header
val header = logData.first
// filter out header
val sample = logData.filter(!_.contains(header)).map {
  line => line.replaceAll("['\"]", "").substring(0, line.length() - 1)
}.takeSample(false, 100, 12L)

Stack Trace:

org.apache.spark.SparkException: Task not serializable
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:315)
org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:311)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:310)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
org.apache.spark.rdd.RDD.filter(RDD.scala:310)
cmd6$$user$$anonfun$3.apply(Main.scala:134)
cmd6$$user$$anonfun$3.apply(Main.scala:133)
java.io.NotSerializableException: org.apache.spark.SparkConf
Serialization stack:
- object not serializable (class: org.apache.spark.SparkConf, value: org.apache.spark.SparkConf@309ed441)
- field (class: cmd2$$user, name: conf, type: class org.apache.spark.SparkConf)
- object (class cmd2$$user, cmd2$$user@75a88665)
- field (class: cmd6, name: $ref$cmd2, type: class cmd2$$user)
- object (class cmd6, cmd6@5e9e8f0b)
- field (class: cmd6$$user, name: $outer, type: class cmd6)
- object (class cmd6$$user, cmd6$$user@692f81c)
- field (class: cmd6$$user$$anonfun$3, name: $outer, type: class cmd6$$user)
- object (class cmd6$$user$$anonfun$3, )
- field (class: cmd6$$user$$anonfun$3$$anonfun$apply$1, name: $outer, type: class cmd6$$user$$anonfun$3)
- object (class cmd6$$user$$anonfun$3$$anonfun$apply$1, )
org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:81)
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:312)
org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:311)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:310)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
org.apache.spark.rdd.RDD.filter(RDD.scala:310)
cmd6$$user$$anonfun$3.apply(Main.scala:134)
cmd6$$user$$anonfun$3.apply(Main.scala:133)
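
From the serialization stack it looks like the filter closure is capturing
the REPL's wrapper object (cmd6$$user), which in turn pulls in the cmd2$$user
instance holding a non-serializable SparkConf, because header is a field on
that wrapper rather than a local value. A rough, untested sketch of the
workaround I have in mind (the headerLocal name is just illustrative) is to
copy the value into a local val so the closure only captures a String:

val sample = {
  // copy the header into a local val so the closure captures only a String,
  // not the REPL wrapper object (and its non-serializable SparkConf field)
  val headerLocal = header
  logData.filter(!_.contains(headerLocal)).map {
    line => line.replaceAll("['\"]", "").substring(0, line.length() - 1)
  }.takeSample(false, 100, 12L)
}

Does that sound like the right approach, or is there a better way to
configure these two environments?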

Thanks,
Balaji