The JSON support in Spark SQL handles a file with one JSON object per line
or one JSON array of objects per line. What is the format of your file? Does
it contain only a single line? The "Too many bytes before newline: 2147483648"
error in your trace means Hadoop's LineReader consumed more than
Integer.MAX_VALUE bytes without finding a newline, which is exactly what you
would see if the whole 2.5G file were a single JSON array on one line.
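
For reference, the reader expects line-delimited input, one complete JSON
value per line. A toy example (the field names here are illustrative only,
loosely modeled on a TPC-H customer row):

    {"c_custkey": 1, "c_name": "Customer#000000001", "c_acctbal": 711.56}
    {"c_custkey": 2, "c_name": "Customer#000000002", "c_acctbal": 121.65}

A file in that shape loads fine with the call from your snippet:

    val df = sqlContext.read.json("hdfs://master:9000/path/file.json")

If your file turns out to be one big top-level JSON array instead, you can
rewrite it as one object per line before loading it. A rough sketch using
Jackson's streaming API (Jackson is already on Spark's classpath), run
against a local copy of the file; the paths here are placeholders:

    import java.io.{FileInputStream, PrintWriter}
    import com.fasterxml.jackson.core.{JsonFactory, JsonToken}
    import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}

    val mapper = new ObjectMapper()
    val parser = new JsonFactory().createParser(new FileInputStream("/tmp/file.json"))
    val out = new PrintWriter("/tmp/file-lines.json")
    // Expect a single top-level array: [ {...}, {...}, ... ]
    assert(parser.nextToken() == JsonToken.START_ARRAY)
    while (parser.nextToken() == JsonToken.START_OBJECT) {
      val node: JsonNode = mapper.readTree(parser) // read one object from the stream
      out.println(node.toString)                   // write it compactly on its own line
    }
    out.close()
    parser.close()

This streams the array element by element, so it never holds the whole 2.5G
file in memory. After copying file-lines.json back to HDFS, the
sqlContext.read.json call above should work unchanged.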

On Wed, Aug 26, 2015 at 6:47 AM, gsvic <victora...@gmail.com> wrote:

> Hi,
>
> I have the following issue. I am trying to load a 2.5G JSON file on a
> 10-node Hadoop cluster. Specifically, I am trying to create a DataFrame
> using sqlContext.read.json("hdfs://master:9000/path/file.json").
>
> The JSON file contains a parsed table (relation) from the TPC-H benchmark.
>
> After finishing some tasks, the job fails by throwing several
> java.io.IOExceptions. For smaller files (e.g. 700M) it works fine. I am
> posting part of the log and the full stack trace below:
>
> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 10.1 in stage 1.0 (TID 47, 192.168.5.146, ANY, 1416 bytes)
> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 11.1 in stage 1.0 (TID 48, 192.168.5.150, ANY, 1416 bytes)
> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 4.1 in stage 1.0 (TID 49, 192.168.5.149, ANY, 1416 bytes)
> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 8.1 in stage 1.0 (TID 50, 192.168.5.246, ANY, 1416 bytes)
> 15/08/26 16:31:53 INFO TaskSetManager: Finished task 10.0 in stage 1.0 (TID 17) in 104681 ms on 192.168.5.243 (27/35)
> 15/08/26 16:31:53 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 15) in 105541 ms on 192.168.5.193 (28/35)
> 15/08/26 16:31:55 INFO TaskSetManager: Finished task 11.0 in stage 1.0 (TID 18) in 107122 ms on 192.168.5.167 (29/35)
> 15/08/26 16:31:57 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 12) in 109583 ms on 192.168.5.245 (30/35)
> 15/08/26 16:32:08 INFO TaskSetManager: Finished task 4.1 in stage 1.0 (TID 49) in 24135 ms on 192.168.5.149 (31/35)
> 15/08/26 16:32:13 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 9, 192.168.5.246): java.io.IOException: Too many bytes before newline: 2147483648
>         at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:249)
>         at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>         at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:134)
>         at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>         at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:70)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
