Re: OutOfMemoryError on parquet SnappyDecompressor

2016-11-21 Thread Ryan Blue
(archive preview shows only the quoted stack trace:)
  …InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:172)
  parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:130) …

Re: OutOfMemoryError on parquet SnappyDecompressor

2016-11-21 Thread Aniket
(archive preview shows only the quoted stack trace:)
  scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
  scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) …

Re: OutOfMemoryError on parquet SnappyDecompressor

2016-11-21 Thread Ryan Blue
(archive preview shows only the quoted stack trace:)
  …ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:130)
  org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:139) …

Re: OutOfMemoryError on parquet SnappyDecompressor

2016-11-20 Thread Aniket
(archive preview shows only the quoted stack trace:)
  scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  scala.collection.Iterator$class.isEmpty(Iterator.scala:256)
  scala.collection.AbstractIterator…

Re: OutOfMemoryError on parquet SnappyDecompressor

2014-09-23 Thread Aaron Davidson
This may be related: https://github.com/Parquet/parquet-mr/issues/211. Perhaps if we change our Parquet configuration settings it would get better, but the performance characteristics of Snappy are pretty bad here under some circumstances.

On Tue, Sep 23, 2014 at 10:13 AM, Cody Koeninger wrote: …
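
A rough sketch of the kind of Parquet configuration change being floated here, assuming Spark 1.1-era APIs and parquet-mr settings; the exact keys and values are assumptions to verify against the versions actually in use:

    // Sketch only: write with gzip instead of the snappy default, and shrink the
    // Parquet row-group size so the decompressor's direct buffers stay smaller.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("parquet-codec-workaround"))
    val sqlContext = new SQLContext(sc)

    // Spark SQL codec used for newly written Parquet files.
    sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")

    // parquet-mr row-group ("block") size, read from the Hadoop configuration.
    sc.hadoopConfiguration.setInt("parquet.block.size", 64 * 1024 * 1024)

Gzip trades CPU for much smaller decompression buffers, which is the trade-off this thread is weighing.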

Re: OutOfMemoryError on parquet SnappyDecompressor

2014-09-23 Thread Cody Koeninger
Cool, that's pretty much what I was thinking as far as configuration goes. We're running on Mesos. Worker nodes are Amazon xlarge, so 4 cores / 15 GB. I've tried executor memory sizes as high as 6 GB. The default HDFS block size is 64 MB, with about 25 GB of total data written by a job with 128 partitions. The exception …
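
For reference, the resource sizing described above could be expressed roughly like this; the Mesos master URL is hypothetical and the numbers simply mirror the message:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("parquet-read-job")
      .setMaster("mesos://zk://zk-host:2181/mesos") // hypothetical Mesos master URL
      .set("spark.executor.memory", "6g")           // largest executor size tried
      .set("spark.cores.max", "4")                  // xlarge workers: 4 cores / 15 GB

    val sc = new SparkContext(conf)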

Re: OutOfMemoryError on parquet SnappyDecompressor

2014-09-23 Thread Michael Armbrust
I actually submitted a patch to do this yesterday: https://github.com/apache/spark/pull/2493. Can you tell us more about your configuration? In particular, how much memory and how many cores do the executors have, and what does the schema of your data look like?

On Tue, Sep 23, 2014 at 7:39 AM, Cody Koeninger wrote: …
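
To gather the details being asked for (executor sizing and the schema of the data), something along these lines would do it, assuming a Spark 1.1-era SQLContext; parquetFile and printSchema are standard calls, the rest is illustrative:

    import org.apache.spark.sql.SQLContext

    def describeJob(sqlContext: SQLContext, parquetPath: String): Unit = {
      val sc = sqlContext.sparkContext
      // Executor sizing as seen by the running application.
      println("spark.executor.memory = " + sc.getConf.get("spark.executor.memory", "default"))
      println("spark.cores.max       = " + sc.getConf.get("spark.cores.max", "unset"))
      // Schema of the Parquet data being read.
      sqlContext.parquetFile(parquetPath).printSchema()
    }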

Re: OutOfMemoryError on parquet SnappyDecompressor

2014-09-23 Thread Cody Koeninger
So, as a related question, is there any reason the settings in SQLConf aren't read from the Spark context's conf? I understand why the SQL conf is mutable, but it's not particularly user-friendly to have most Spark configuration set via e.g. spark-defaults.conf or --properties-file, but for Spark SQL to …
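
A small sketch of the two configuration paths being contrasted, assuming the Spark 1.1-era behavior described in the question:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // Path 1: core settings arrive via spark-defaults.conf / --properties-file / --conf
    // and are visible on the SparkConf.
    val sc = new SparkContext(new SparkConf().setAppName("sqlconf-paths").setMaster("local[2]"))
    println(sc.getConf.get("spark.executor.memory", "not set"))

    // Path 2: Spark SQL settings live in the SQLContext's own mutable SQLConf and,
    // per the question above, have to be set on the SQLContext itself.
    val sqlContext = new SQLContext(sc)
    sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")
    println(sqlContext.getConf("spark.sql.parquet.compression.codec"))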

OutOfMemoryError on parquet SnappyDecompressor

2014-09-22 Thread Cody Koeninger
After commit 8856c3d8 switched the default Parquet compression codec from gzip to snappy, I'm seeing the following when trying to read Parquet files saved using the new default (same schema and roughly the same size as files that were previously working): java.lang.OutOfMemoryError: Direct buffer memory …
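
Since the error is in direct (off-heap) buffer allocation rather than the Java heap, one commonly tried mitigation, offered here only as a sketch and not as a confirmed fix for this report, is to raise the executors' direct-memory cap:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("parquet-snappy-read")
      // -XX:MaxDirectMemorySize bounds the off-heap buffers used during Snappy
      // decompression; 2g is an illustrative value, not a recommendation.
      .set("spark.executor.extraJavaOptions", "-XX:MaxDirectMemorySize=2g")

    val sc = new SparkContext(conf)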