[ https://issues.apache.org/jira/browse/PARQUET-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17731774#comment-17731774 ]
ASF GitHub Bot commented on PARQUET-1822:
-----------------------------------------
amousavigourabi opened a new pull request, #1111:
URL: https://github.com/apache/parquet-mr/pull/1111
Make sure you have checked _all_ steps below.
### Jira
- [x] My PR addresses the following [Parquet
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
- https://issues.apache.org/jira/browse/PARQUET-1822
- In case you are adding a dependency, check if the license complies with
the [ASF 3rd Party License
Policy](https://www.apache.org/legal/resolved.html#category-x).
### Tests
- [x] My PR adds the following unit tests __OR__ does not need testing for
this extremely good reason:
### Commits
- [x] My commits all reference Jira issues in their subject lines. In
addition, my commits follow the guidelines from "[How to write a good git
commit message](http://chris.beams.io/posts/git-commit/)" (see the
illustrative example after this list):
1. Subject is separated from body by a blank line
1. Subject is limited to 50 characters (not including Jira issue reference)
1. Subject does not end with a period
1. Subject uses the imperative mood ("add", not "adding")
1. Body wraps at 72 characters
1. Body explains "what" and "why", not "how"
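For illustration, a commit message following all of these guidelines might
look like this (hypothetical subject and body, not taken from this PR):

    PARQUET-1822: Remove Hadoop dependency from the write path

    Writing a Parquet file currently requires a Hadoop Path and
    Configuration even when the output is a plain stream. Accepting a
    plain java.io.OutputStream lets applications write Parquet without
    a local Hadoop installation.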
### Documentation
- [x] In case of new functionality, my PR adds documentation that describes
how to use it.
- All the public functions and the classes in the PR contain Javadoc that
explains what they do
> Parquet without Hadoop dependencies
> -----------------------------------
>
> Key: PARQUET-1822
> URL: https://issues.apache.org/jira/browse/PARQUET-1822
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-avro
> Affects Versions: 1.11.0
> Environment: Amazon Fargate (Linux), Windows development box.
> We are writing Parquet to be read by the Snowflake and Athena databases.
> Reporter: mark juchems
> Priority: Minor
> Labels: documentation, newbie
>
> I have been trying for weeks to create a Parquet file from Avro and write it
> to S3 in Java. This has been incredibly frustrating and odd, as Spark can do
> it easily (I'm told).
> I have assembled the correct jars through luck and diligence, but now I find
> out that I have to have Hadoop installed on my machine. I am currently
> developing on Windows, and it seems a DLL and an EXE can fix that up, but I
> am wondering about Linux, as the code will eventually run in Fargate on AWS.
> *Why do I need external dependencies and not pure Java?*
> The thing really is how utterly complex all this is. I would like to create
> an Avro file and convert it to Parquet and write it to S3, but I am trapped
> in "ParquetWriter" hell!
> *Why can't I get a normal OutputStream and write it wherever I want?*
> I have scoured the web for examples, and there are a few, but we really need
> some documentation on this stuff. I understand that there may be reasons for
> all this, but I can't find them anywhere on the web. Any help? Can't we get
> a "SimpleParquet" jar that does this:
>
> ParquetWriter<GenericData.Record> writer =
>     AvroParquetWriter.<GenericData.Record>builder(outputStream)
>         .withSchema(avroSchema)
>         .withConf(conf)
>         .withCompressionCodec(CompressionCodecName.SNAPPY)
>         .withWriteMode(Mode.OVERWRITE) // probably not good for prod (overwrites files)
>         .build();
>
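Regarding the "normal OutputStream" question above: since parquet-mr 1.10,
AvroParquetWriter.builder() also accepts an org.apache.parquet.io.OutputFile,
so a plain java.io.OutputStream can be wrapped and written to without going
through a Hadoop Path. Below is a minimal sketch of such a wrapper;
StreamOutputFile is a made-up name for illustration. Note that hadoop-common
is still needed on the classpath (for Configuration), but no Hadoop
installation or winutils.exe is required.

    import java.io.IOException;
    import java.io.OutputStream;

    import org.apache.parquet.io.OutputFile;
    import org.apache.parquet.io.PositionOutputStream;

    // Hypothetical adapter exposing a plain OutputStream as a Parquet OutputFile.
    public class StreamOutputFile implements OutputFile {
      private final OutputStream out;

      public StreamOutputFile(OutputStream out) {
        this.out = out;
      }

      @Override
      public PositionOutputStream create(long blockSizeHint) {
        // Parquet calls getPos() to record column chunk offsets in the footer,
        // so the wrapper has to count every byte it writes.
        return new PositionOutputStream() {
          private long position = 0;

          @Override
          public long getPos() {
            return position;
          }

          @Override
          public void write(int b) throws IOException {
            out.write(b);
            position += 1;
          }

          @Override
          public void write(byte[] b, int off, int len) throws IOException {
            out.write(b, off, len);
            position += len;
          }

          @Override
          public void close() throws IOException {
            out.close();
          }
        };
      }

      @Override
      public PositionOutputStream createOrOverwrite(long blockSizeHint) {
        return create(blockSizeHint);
      }

      @Override
      public boolean supportsBlockSize() {
        return false; // a raw stream has no natural block size hint
      }

      @Override
      public long defaultBlockSize() {
        return 0;
      }
    }

The builder call from the snippet above would then become:

    ParquetWriter<GenericData.Record> writer =
        AvroParquetWriter.<GenericData.Record>builder(new StreamOutputFile(outputStream))
            .withSchema(avroSchema)
            .withCompressionCodec(CompressionCodecName.SNAPPY)
            .build();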
--
This message was sent by Atlassian Jira
(v8.20.10#820010)