[ https://issues.apache.org/jira/browse/DRILL-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014680#comment-16014680 ]
ASF GitHub Bot commented on DRILL-5379:
---------------------------------------

Github user parthchandra commented on a diff in the pull request:

    https://github.com/apache/drill/pull/826#discussion_r117094096

    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java ---
    @@ -380,14 +384,21 @@ public void endRecord() throws IOException {
           // since ParquetFileWriter will overwrite empty output file (append is not supported)
           // we need to re-apply file permission
    -      parquetFileWriter = new ParquetFileWriter(conf, schema, path, ParquetFileWriter.Mode.OVERWRITE);
    +      if (useConfiguredBlockSize) {
    +        // Round up blockSize to multiple of 64K.
    +        long writeBlockSize = ((long) ceil((double)blockSize/BLOCKSIZE_MULTIPLE)) * BLOCKSIZE_MULTIPLE;
    --- End diff ---

    This is not quite consistent with the use of the block size in the `checkBlockSizeReached` function. You want to use the same size in both places.

> Set Hdfs Block Size based on Parquet Block Size
> -----------------------------------------------
>
>                 Key: DRILL-5379
>                 URL: https://issues.apache.org/jira/browse/DRILL-5379
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Storage - Parquet
>    Affects Versions: 1.9.0
>            Reporter: F Méthot
>             Fix For: Future
>
> It seems there is a way to force Drill to store a CTAS-generated Parquet file as a single block when using HDFS: the Java HDFS API allows files to be created with the Parquet block size taken from a session or system config, and it is ideal to have a single Parquet file per HDFS block.
> Here is the HDFS API that allows this:
> http://archive.cloudera.com/cdh4/cdh/4/hadoop/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path,%20boolean,%20int,%20short,%20long)
> Drill uses the Hadoop ParquetFileWriter
> (https://github.com/Parquet/parquet-mr/blob/master/parquet-hadoop/src/main/java/parquet/hadoop/ParquetFileWriter.java).
> This is where the file creation occurs, so it might be tricky.
> However, ParquetRecordWriter.java
> (https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java)
> in Drill creates the ParquetFileWriter with a Hadoop Configuration object.
> Something to explore: could the block size be set as a property on the
> Configuration object before passing it to the ParquetFileWriter constructor?
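A minimal sketch of the idea raised in the last question above. The helper name createWriter is hypothetical, and whether the file actually ends up as a single HDFS block depends on the downstream fs.create() call honoring dfs.blocksize from the Configuration it receives (dfs.block.size on older Hadoop releases); package names assume Parquet 1.8+, where older parquet-mr used the parquet.hadoop package instead.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.hadoop.ParquetFileWriter;
    import org.apache.parquet.schema.MessageType;

    public class BlockSizeConfExample {
      // Hypothetical helper: copy the Configuration so the change stays local
      // to this writer, set the HDFS block size property to the Parquet block
      // size, and hand the copy to ParquetFileWriter, as the description
      // suggests exploring.
      static ParquetFileWriter createWriter(Configuration conf, MessageType schema,
                                            Path path, long parquetBlockSize) throws IOException {
        Configuration writerConf = new Configuration(conf); // do not mutate the shared conf
        writerConf.setLong("dfs.blocksize", parquetBlockSize);
        return new ParquetFileWriter(writerConf, schema, path, ParquetFileWriter.Mode.OVERWRITE);
      }
    }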
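And on the review comment above about checkBlockSizeReached: one hedged way to keep the two paths consistent is to round once, store the result, and have both the file-creation call and the size check read the same value. The class and member names below (BlockSizeTracker, effectiveBlockSize, bufferedSize) are hypothetical; only BLOCKSIZE_MULTIPLE and the rounding expression come from the quoted diff.

    import static java.lang.Math.ceil;

    // Sketch: compute the rounded block size once, then consult that single
    // value everywhere, so the writer-creation path and the
    // checkBlockSizeReached path cannot drift apart.
    class BlockSizeTracker {
      private static final long BLOCKSIZE_MULTIPLE = 64 * 1024; // 64K, as in the diff
      private final long effectiveBlockSize;

      BlockSizeTracker(long configuredBlockSize) {
        // Round up to a multiple of 64K, exactly as the diff does.
        this.effectiveBlockSize =
            ((long) ceil((double) configuredBlockSize / BLOCKSIZE_MULTIPLE)) * BLOCKSIZE_MULTIPLE;
      }

      // Value for the file-creation path (e.g. fs.create / ParquetFileWriter).
      long fileBlockSize() {
        return effectiveBlockSize;
      }

      // The same value drives the flush decision in checkBlockSizeReached.
      boolean blockSizeReached(long bufferedSize) {
        return bufferedSize >= effectiveBlockSize;
      }
    }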