[ https://issues.apache.org/jira/browse/PARQUET-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191022#comment-17191022 ]

Ben Watson commented on PARQUET-1822:
-------------------------------------

I also had this problem and I hope I can help. I maintain an [IntelliJ 
plugin|https://github.com/benwatson528/intellij-avro-parquet-plugin] that lets 
people view Avro and Parquet files. I assumed creating this plugin would be 
trivial, but I've been surprised at just how much effort it takes and how many 
bugs people keep finding.

[I asked about this on Stack Overflow a while 
ago|https://stackoverflow.com/questions/59939309/read-local-parquet-file-without-hadoop-path-api]
 and got an answer that works. The solution I implemented still has some Hadoop 
dependencies, but the critical difference is that it uses 
[{{org.apache.parquet.io.InputFile}}|https://www.javadoc.io/doc/org.apache.parquet/parquet-common/latest/org/apache/parquet/io/InputFile.html]
 and does not require {{org.apache.hadoop.fs.Path}}. This skips a lot of Hadoop 
libraries and helped me avoid a lot of JAR-hell issues. It also works on 
Windows without any additional setup or PATH changes.
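
In practice, reading then looks something like this. This is a rough sketch 
rather than a verbatim copy of the plugin code: {{example.parquet}} is a 
placeholder path, and {{LocalInputFile}} is an {{InputFile}} implementation 
like the one linked below:

{code:java}
import org.apache.avro.generic.GenericRecord;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;

import java.nio.file.Paths;

public class ParquetReadExample {
  public static void main(String[] args) throws Exception {
    // The builder takes an org.apache.parquet.io.InputFile directly,
    // so org.apache.hadoop.fs.Path never appears.
    try (ParquetReader<GenericRecord> reader = AvroParquetReader
        .<GenericRecord>builder(new LocalInputFile(Paths.get("example.parquet")))
        .build()) {
      GenericRecord record;
      while ((record = reader.read()) != null) {
        System.out.println(record);
      }
    }
  }
}
{code}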

Feel free to copy the relevant code from my repo:
 * 
[https://github.com/benwatson528/intellij-avro-parquet-plugin/blob/master/src/main/java/uk/co/hadoopathome/intellij/viewer/fileformat/ParquetFileReader.java]
 * 
[https://github.com/benwatson528/intellij-avro-parquet-plugin/blob/master/src/main/java/uk/co/hadoopathome/intellij/viewer/fileformat/LocalInputFile.java]
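
The core of {{LocalInputFile}} is small. Here is a simplified sketch along the 
same lines as the linked class (not a verbatim copy), built on {{java.nio}} 
and parquet-common's {{DelegatingSeekableInputStream}}:

{code:java}
import org.apache.parquet.io.DelegatingSeekableInputStream;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.io.SeekableInputStream;

import java.io.IOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** An InputFile backed by a local java.nio.file.Path; no Hadoop classes involved. */
public class LocalInputFile implements InputFile {

  private final Path path;

  public LocalInputFile(Path path) {
    this.path = path;
  }

  @Override
  public long getLength() throws IOException {
    return Files.size(path);
  }

  @Override
  public SeekableInputStream newStream() throws IOException {
    FileChannel channel = FileChannel.open(path, StandardOpenOption.READ);
    // DelegatingSeekableInputStream implements all the read methods;
    // we only have to supply position tracking and seeking.
    return new DelegatingSeekableInputStream(Channels.newInputStream(channel)) {
      @Override
      public long getPos() throws IOException {
        return channel.position();
      }

      @Override
      public void seek(long newPos) throws IOException {
        channel.position(newPos);
      }
    };
  }
}
{code}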

Disclaimer: this does not produce valid JSON when Avro logical types are used, 
because the date strings are not surrounded by quotes (see [this open SO 
question|https://stackoverflow.com/questions/63655421/writing-parquet-avro-genericrecord-to-json-while-maintaining-logicaltypes]).
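
For the writing side that this ticket is actually about, the same approach 
works in reverse: {{AvroParquetWriter.builder}} also accepts an 
{{org.apache.parquet.io.OutputFile}}. A minimal local implementation could 
look like the sketch below; the class name {{LocalOutputFile}} and the 
byte-counting stream are my own illustration, not code from the plugin:

{code:java}
import org.apache.parquet.io.OutputFile;
import org.apache.parquet.io.PositionOutputStream;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** An OutputFile backed by a local java.nio.file.Path; the writing counterpart. */
public class LocalOutputFile implements OutputFile {

  private final Path path;

  public LocalOutputFile(Path path) {
    this.path = path;
  }

  @Override
  public PositionOutputStream create(long blockSizeHint) throws IOException {
    // Fail if the file already exists, matching the contract of create().
    return wrap(Files.newOutputStream(path,
        StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE));
  }

  @Override
  public PositionOutputStream createOrOverwrite(long blockSizeHint) throws IOException {
    return wrap(Files.newOutputStream(path, StandardOpenOption.CREATE,
        StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.WRITE));
  }

  @Override
  public boolean supportsBlockSize() {
    return false; // local files have no HDFS-style block size
  }

  @Override
  public long defaultBlockSize() {
    return 0;
  }

  private static PositionOutputStream wrap(OutputStream out) {
    return new PositionOutputStream() {
      private long pos = 0;

      @Override
      public long getPos() {
        return pos;
      }

      @Override
      public void write(int b) throws IOException {
        out.write(b);
        pos++;
      }

      @Override
      public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        pos += len;
      }

      @Override
      public void flush() throws IOException {
        out.flush();
      }

      @Override
      public void close() throws IOException {
        out.close();
      }
    };
  }
}
{code}

With that in place, {{AvroParquetWriter.<GenericData.Record>builder(new 
LocalOutputFile(path))}} gives you the writer this ticket asks for: the Hadoop 
jars are still on the classpath, but no {{winutils.exe}}, {{hadoop.dll}} or 
PATH changes are needed.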

> Parquet without Hadoop dependencies
> -----------------------------------
>
>                 Key: PARQUET-1822
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1822
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-avro
>    Affects Versions: 1.11.0
>         Environment: Amazon Fargate (Linux), Windows development box.
> We are writing Parquet to be read by the Snowflake and Athena databases.
>            Reporter: mark juchems
>            Priority: Minor
>              Labels: documentation, newbie
>
> I have been trying for weeks to create a Parquet file from Avro and write it 
> to S3 in Java.  This has been incredibly frustrating and odd, as Spark can do 
> it easily (I'm told).
> I have assembled the correct jars through luck and diligence, but now I find 
> out that I have to have Hadoop installed on my machine. I am currently 
> developing on Windows, and it seems a DLL and an EXE can fix that up, but I am 
> wondering about Linux, as the code will eventually run in Fargate on AWS.
> *Why do I need external dependencies and not pure Java?*
> The thing really is how utterly complex all this is.  I would like to create 
> an Avro file, convert it to Parquet and write it to S3, but I am trapped 
> in "ParquetWriter" hell!
> *Why can't I get a normal OutputStream and write it wherever I want?*
> I have scoured the web for examples, and there are a few, but we really need 
> some documentation on this stuff.  I understand that there may be reasons for 
> all this, but I can't find them on the web anywhere.  Any help?  Can't we get 
> a "SimpleParquet" jar that does this:
>  
> ParquetWriter<GenericData.Record> writer =
>     AvroParquetWriter.<GenericData.Record>builder(outputStream)
>         .withSchema(avroSchema)
>         .withConf(conf)
>         .withCompressionCodec(CompressionCodecName.SNAPPY)
>         .withWriteMode(Mode.OVERWRITE) // probably not good for prod (overwrites files)
>         .build();
>  


