In my use case, I am compressing data and then storing it in S3.
Unfortunately, hadoop-snappy is not able to decompress snappy-java
output, so using snappy-java-compressed files in Hive is not possible.

It would be nice to have the option to select hadoop-snappy in
CompressContent and just add the native libs to the JVM, similar to PutHDFS.
I will also look into SnappyHadoopCompatibleOutputStream.

I will make the effort to contribute back if I go this route.
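
For what it's worth, a minimal sketch of what that could look like with
snappy-java alone (no Hadoop libs), using the
SnappyHadoopCompatibleOutputStream that Bryan mentioned below. The file
name is just an example; this assumes the org.xerial:snappy-java
dependency is on the classpath:

```java
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// third-party dependency: org.xerial:snappy-java
import org.xerial.snappy.SnappyHadoopCompatibleOutputStream;

public class SnappyHadoopDemo {
    public static void main(String[] args) throws Exception {
        // Writes snappy data in the Hadoop block framing, so Hive/Hadoop's
        // native SnappyCodec can read it back, without any native libs here.
        try (OutputStream out = new SnappyHadoopCompatibleOutputStream(
                new FileOutputStream("data.snappy"))) {
            out.write("hello hadoop".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

If CompressContent offered a "snappy-hadoop" option, wrapping the flow
file's output stream this way instead of the plain SnappyOutputStream
seems like it would be most of the change.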

Thank you
Noe

On Tue, Nov 26, 2019 at 12:54 PM Bryan Bende <bbe...@gmail.com> wrote:

> Not sure if this is relevant, but snappy-java has a specific
> SnappyHadoopCompatibleOutputStream so CompressContent could offer a
> third snappy option like "snappy-hadoop" which used that.
>
> Shawn is correct though that we wouldn't want to introduce Hadoop libs
> into CompressContent.
>
> [1]
> https://github.com/xerial/snappy-java/blob/73c67c70303e509be1642af5e302411d39434249/src/main/java/org/xerial/snappy/SnappyHadoopCompatibleOutputStream.java
>
> On Tue, Nov 26, 2019 at 11:51 AM Shawn Weeks <swe...@weeksconsulting.us>
> wrote:
> >
> > It uses snappy-java to get around the native class path issues that
> would exist otherwise. What’s wrong with snappy-java?
> >
> >
> >
> > Thanks
> >
> > Shawn
> >
> >
> >
> > From: Noe Detore <ndet...@minerkasch.com>
> > Reply-To: "users@nifi.apache.org" <users@nifi.apache.org>
> > Date: Monday, November 25, 2019 at 2:16 PM
> > To: "users@nifi.apache.org" <users@nifi.apache.org>
> > Subject: CompressContent hadoop-snappy
> >
> >
> >
> > Hello
> >
> >
> >
> > CompressContent ver 1.9 uses snappy-java. Is there an easy way to change
> it to hadoop-snappy, or does a custom processor need to be created?
> >
> >
> >
> > thank you
> >
> > Noe
>
