[ https://issues.apache.org/jira/browse/PARQUET-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146834#comment-14146834 ]
Tongjie Chen commented on PARQUET-99:
-------------------------------------
I disabled the dictionary and it still runs into the problem: it keeps creating new slabs (after 10 slabs, the slab size grows exponentially) until it exhausts memory.
Sep 24, 2014 7:10:28 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:28 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:29 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:29 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:29 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:30 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:30 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:30 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 5381786
Sep 24, 2014 7:10:31 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: used 11 slabs, new slab size 55827257
Sep 24, 2014 7:10:31 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 55827257
Sep 24, 2014 7:10:34 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: used 12 slabs, new slab size 102292952
Sep 24, 2014 7:10:34 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 102292952
Sep 24, 2014 7:10:42 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: used 13 slabs, new slab size 212521210
Sep 24, 2014 7:10:42 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 212521210
Sep 24, 2014 7:10:55 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: used 14 slabs, new slab size 426081554
Sep 24, 2014 7:10:55 PM INFO: parquet.bytes.CapacityByteArrayOutputStream: new slab of size 426081554
Sep 24, 2014 7:10:56 PM INFO: parquet.hadoop.InternalParquetRecordWriter: mem size 463,559,051 > 134,217,728: flushing 100 records to disk.
Sep 24, 2014 7:10:56 PM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 907,972,142
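The log suggests the growth policy: the first 10 slabs stay at the initial size, and after that each new slab is roughly as large as everything already buffered, so total capacity doubles on every allocation. Below is a minimal sketch of that policy as I read it from the log; it is an inference, not the actual CapacityByteArrayOutputStream source, and the threshold constant and sizing rule are assumptions.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch of the slab-growth policy inferred from the log above;
// NOT the actual parquet-mr implementation.
public class SlabGrowthSketch {
    private static final int SLABS_BEFORE_DOUBLING = 10; // inferred from "used 11 slabs"
    private final int initialSlabSize;                   // 5381786 in the log
    private final List<byte[]> slabs = new ArrayList<byte[]>();
    private long used = 0;                               // bytes buffered so far

    SlabGrowthSketch(int initialSlabSize) {
        this.initialSlabSize = initialSlabSize;
    }

    void addSlab() {
        // First N slabs: fixed size. Afterwards: new slab ~= bytes already
        // buffered, so total capacity roughly doubles per allocation.
        int size = slabs.size() < SLABS_BEFORE_DOUBLING
                ? initialSlabSize
                : (int) used;
        System.out.println("new slab of size " + size);
        slabs.add(new byte[size]);
        used += size; // assume each slab fills before the next is requested
    }

    public static void main(String[] args) {
        SlabGrowthSketch s = new SlabGrowthSketch(5381786);
        for (int i = 0; i < 14; i++) {
            s.addSlab();
        }
    }
}
{code}

Run out to 14 slabs with the 5381786-byte initial size from the log, this sketch allocates about 861 MB, the same ballpark as the 907,972,142 bytes of allocated memory reported above.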
> parquet writer runs into OOM during writing
> -------------------------------------------
>
> Key: PARQUET-99
> URL: https://issues.apache.org/jira/browse/PARQUET-99
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-mr
> Affects Versions: parquet-mr_1.6.0
> Reporter: Tongjie Chen
>
> If columns contain lots of lengthy string values, the writer will run into an OOM error during writing.
> 2014-09-22 19:16:11,626 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2271)
> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:83)
> at org.apache.hadoop.io.compress.CompressorStream.write(CompressorStream.java:76)
> at parquet.bytes.CapacityByteArrayOutputStream.writeTo(CapacityByteArrayOutputStream.java:144)
> at parquet.bytes.BytesInput$CapacityBAOSBytesInput.writeAllTo(BytesInput.java:308)
> at parquet.bytes.BytesInput$SequenceBytesIn.writeAllTo(BytesInput.java:233)
> at parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:108)
> at parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:110)
> at parquet.column.impl.ColumnWriterImpl.writePage(ColumnWriterImpl.java:147)
> at parquet.column.impl.ColumnWriterImpl.flush(ColumnWriterImpl.java:236)
> at parquet.column.impl.ColumnWriteStoreImpl.flush(ColumnWriteStoreImpl.java:113)
> at parquet.hadoop.InternalParquetRecordWriter.flushStore(InternalParquetRecordWriter.java:151)
> at parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:130)
> at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:122)
> at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:81)
> at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
> at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:77)
> at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:90)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:688)
> at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
> at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
> at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
> at org.apache.hadoop.hive.ql.exec.FilterOperator.processOp(FilterOperator.java:132)
> at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
> at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:90)
> at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
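Nothing in this thread confirms a fix, but since the writer only flushes once its buffered size crosses the 134,217,728-byte block target seen in the log, one hedged workaround sketch is to lower the row-group and page targets so column buffers flush before the slabs enter the doubling phase. The property names below are standard parquet-mr settings; the chosen values, and whether they actually avoid this OOM, are assumptions.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ParquetWriterTuning {
    // Hedged sketch: the property names are standard parquet-mr settings,
    // but the values and their effectiveness against this OOM are assumptions.
    public static Configuration tunedConf() {
        Configuration conf = new Configuration();
        conf.setInt("parquet.block.size", 64 * 1024 * 1024); // row-group target, below the 134,217,728 in the log
        conf.setInt("parquet.page.size", 512 * 1024);        // smaller pages flush column buffers sooner
        conf.setBoolean("parquet.enable.dictionary", false); // the reporter already tried disabling dictionary
        return conf;
    }
}
{code}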