[ https://issues.apache.org/jira/browse/PARQUET-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733144#comment-17733144 ]

ASF GitHub Bot commented on PARQUET-1822:
-----------------------------------------

amousavigourabi commented on code in PR #1111:
URL: https://github.com/apache/parquet-mr/pull/1111#discussion_r1231219827


##########
parquet-common/src/main/java/org/apache/parquet/io/LocalOutputFile.java:
##########
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.io;
+
+import java.io.BufferedOutputStream;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardOpenOption;
+
+/**
+ * {@code LocalOutputFile} is an {@link OutputFile} implementation that allows
+ * Parquet to write to local data files using {@link PositionOutputStream} instances.
+ */
+public class LocalOutputFile implements OutputFile {
+
+  private class LocalPositionOutputStream extends PositionOutputStream {
+    
+    private final BufferedOutputStream stream;
+    private long pos = 0;
+
+    public LocalPositionOutputStream(int buffer, StandardOpenOption... openOption) throws IOException {
+      stream = new BufferedOutputStream(Files.newOutputStream(path, openOption), buffer);
+    }
+
+    @Override
+    public long getPos() {
+      return pos;
+    }
+
+    @Override
+    public void write(int data) throws IOException {
+      pos++;
+      stream.write(data);
+    }
+
+    @Override
+    public void write(byte[] data) throws IOException {
+      pos += data.length;
+      stream.write(data);
+    }
+
+    @Override
+    public void write(byte[] data, int off, int len) throws IOException {
+      pos += len;
+      stream.write(data, off, len);
+    }
+
+    @Override
+    public void flush() throws IOException {
+      stream.flush();
+    }
+
+    @Override
+    public void close() throws IOException {
+      stream.close();
+    }
+  }
+
+  private final Path path;
+
+  public LocalOutputFile(Path file) {
+    path = file;
+  }
+
+  @Override
+  public PositionOutputStream create(long buffer) throws IOException {
+    return new LocalPositionOutputStream((int) buffer, StandardOpenOption.CREATE_NEW);
+  }
+
+  @Override
+  public PositionOutputStream createOrOverwrite(long buffer) throws IOException {
+    return new LocalPositionOutputStream((int) buffer, StandardOpenOption.CREATE,
+      StandardOpenOption.TRUNCATE_EXISTING);
+  }
+
+  @Override
+  public boolean supportsBlockSize() {
+    return true;
+  }
+
+  @Override
+  public long defaultBlockSize() {
+    return 512;

Review Comment:
   512 is just the default buffer size used for the BufferedOutputStream. It is configurable via `create` and `createOrOverwrite` (as signalled by the `true` return in `supportsBlockSize`).
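
   For context, a minimal caller-side sketch of how that hint flows through
   (the path and the 64 KiB value are illustrative, not taken from the PR):

   import java.nio.file.Paths;
   import org.apache.parquet.io.LocalOutputFile;
   import org.apache.parquet.io.OutputFile;
   import org.apache.parquet.io.PositionOutputStream;

   OutputFile file = new LocalOutputFile(Paths.get("/tmp/example.parquet"));
   // the block size hint is used directly as the BufferedOutputStream buffer
   // size; a caller relying on defaultBlockSize() would pass 512 here
   try (PositionOutputStream out = file.create(64 * 1024)) {
     out.write(new byte[] {'P', 'A', 'R', '1'}); // e.g. the Parquet magic bytes
   }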





> Parquet without Hadoop dependencies
> -----------------------------------
>
>                 Key: PARQUET-1822
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1822
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-avro
>    Affects Versions: 1.11.0
>         Environment: Amazon Fargate (linux), Windows development box.
> We are writing Parquet to be read by the Snowflake and Athena databases.
>            Reporter: mark juchems
>            Priority: Minor
>              Labels: documentation, newbie
>
> I have been trying for weeks to create a Parquet file from Avro and write it 
> to S3 in Java.  This has been incredibly frustrating and odd, as Spark can do 
> it easily (I'm told).
> I have assembled the correct jars through luck and diligence, but now I find 
> out that I have to have Hadoop installed on my machine. I am currently 
> developing on Windows, and it seems a DLL and an EXE can fix that up, but I am 
> wondering about Linux, as the code will eventually run in Fargate on AWS.
> *Why do I need external dependencies and not pure java?*
> The thing really is how utterly complex all this is.  I would like to create 
> an Avro file, convert it to Parquet, and write it to S3, but I am trapped 
> in "ParquetWriter" hell! 
> *Why can't I get a normal OutputStream and write it wherever I want?*
> I have scoured the web for examples and there are a few, but we really need 
> some documentation on this stuff.  I understand that there may be reasons for 
> all this, but I can't find them on the web anywhere.  Any help?  Can't we get 
> a "SimpleParquet" jar that does this:
>  
> ParquetWriter writer = AvroParquetWriter.<GenericData.Record>builder(outputStream)
>  .withSchema(avroSchema)
>  .withConf(conf)
>  .withCompressionCodec(CompressionCodecName.SNAPPY)
>  .withWriteMode(Mode.OVERWRITE) //probably not good for prod. (overwrites files).
>  .build();
>  
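
With the LocalOutputFile added in this PR, a hedged sketch of what that could
look like for a local file, with no Hadoop Path or Configuration required. This
assumes the existing AvroParquetWriter.builder(OutputFile) overload; avroSchema
and the output path are placeholders, and writing to S3 would still need a
separate OutputFile implementation:

import java.nio.file.Paths;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetFileWriter.Mode;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.io.LocalOutputFile;

// avroSchema is a placeholder for the caller's Avro Schema instance
ParquetWriter<GenericData.Record> writer =
  AvroParquetWriter.<GenericData.Record>builder(new LocalOutputFile(Paths.get("out.parquet")))
    .withSchema(avroSchema)
    .withCompressionCodec(CompressionCodecName.SNAPPY)
    .withWriteMode(Mode.OVERWRITE) // probably not good for prod (overwrites files)
    .build();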



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
