There is another alternative you could try, like this:

val stream: DataStream[GenericRecord] = _   // your existing stream of GenericRecord

val aof: AvroOutputFormat[GenericRecord] =
  new AvroOutputFormat(new org.apache.flink.core.fs.Path(""), classOf[GenericRecord])

aof.setSchema(schema)                        // required when writing GenericRecord
aof.setCodec(AvroOutputFormat.Codec.SNAPPY)  // Snappy block compression

stream.writeUsingOutputFormat(aof)
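
For completeness, here is one way this alternative might be wired into a small
end-to-end job. This is only a sketch: the schema JSON, the fromElements source,
the output path and the job name are illustrative placeholders, and it assumes
flink-avro (plus snappy-java for the Snappy codec) is on the classpath. Since
GenericRecord falls back to Flink's generic Kryo serialization, the sketch keeps
everything at parallelism 1 and enables object reuse so records are not copied
between the chained operators.

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericRecord}
import org.apache.flink.core.fs.Path
import org.apache.flink.formats.avro.AvroOutputFormat
import org.apache.flink.streaming.api.scala._

object AvroSnappyOutputFormatJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)                  // single output file for this sketch
    env.getConfig.enableObjectReuse()      // avoid per-record copies of GenericRecord in the chain

    // Placeholder schema: a single-field record, just to keep the sketch self-contained.
    val schemaJson =
      """{"type":"record","name":"Event","fields":[{"name":"id","type":"string"}]}"""
    val schema = new Schema.Parser().parse(schemaJson)

    // Placeholder source; any DataStream[GenericRecord] would do here.
    val stream: DataStream[GenericRecord] = env
      .fromElements("a", "b", "c")
      .map { id =>
        val record = new GenericData.Record(new Schema.Parser().parse(schemaJson))
        record.put("id", id)
        record: GenericRecord
      }

    val aof = new AvroOutputFormat(new Path("/tmp/avro-snappy-out"), classOf[GenericRecord])
    aof.setSchema(schema)
    aof.setCodec(AvroOutputFormat.Codec.SNAPPY)

    stream.writeUsingOutputFormat(aof)
    env.execute("avro-snappy-outputformat")
  }
}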

Regards,

Ravi



On Wed, Jul 29, 2020 at 9:12 PM Ravi Bhushan Ratnakar <
ravibhushanratna...@gmail.com> wrote:

> Hi Vijayendra,
>
> Currently AvroWriters doesn't support compression. If you want to use
> compression, you need a custom implementation of AvroWriters where you can
> add compression support. Please find a sample customization of AvroWriters
> that supports compression (a sketch of one possible implementation is
> included after the quoted thread below); you can use it like the example
> below.
>
> codeName = org.apache.hadoop.io.compress.SnappyCodec
>
> CustomAvroWriters.forGenericRecord(schema, codeName)
>
> Regards,
> Ravi
>
> On Wed, Jul 29, 2020 at 7:36 PM Vijayendra Yadav <contact....@gmail.com>
> wrote:
>
>> Hi Team,
>>
>> Could you please provide a sample for enabling compression (Snappy) when
>> writing Avro from a DataStream[GenericRecord] with
>>
>> AvroWriters.forGenericRecord(schema)?
>>
>> Regards,
>> Vijay
>>
>
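
Since the sample "CustomAvroWriters" referenced above is not included in the
thread, here is a rough sketch of what such a customization could look like,
written against Flink's public BulkWriter interfaces rather than the internal
AvroWriters helpers. The class name CompressedGenericAvroWriterFactory is made
up for illustration, and this version selects the codec through Avro's own
CodecFactory (e.g. "snappy") instead of the Hadoop SnappyCodec class name
mentioned above; Snappy compression also needs snappy-java on the classpath.

import java.io.IOException

import org.apache.avro.Schema
import org.apache.avro.file.{CodecFactory, DataFileWriter}
import org.apache.avro.generic.{GenericDatumWriter, GenericRecord}
import org.apache.flink.api.common.serialization.BulkWriter
import org.apache.flink.core.fs.FSDataOutputStream

// Illustrative stand-in for the "CustomAvroWriters" mentioned above: a
// BulkWriter.Factory that writes Avro container files with a compression codec.
// The schema is passed as a JSON string so the factory stays serializable.
class CompressedGenericAvroWriterFactory(schemaJson: String, codecName: String)
    extends BulkWriter.Factory[GenericRecord] {

  @throws[IOException]
  override def create(out: FSDataOutputStream): BulkWriter[GenericRecord] = {
    val schema = new Schema.Parser().parse(schemaJson)
    val dataFileWriter =
      new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](schema))
    // The codec must be set before create(); e.g. "snappy", "deflate" or "null".
    dataFileWriter.setCodec(CodecFactory.fromString(codecName))
    dataFileWriter.create(schema, out)

    new BulkWriter[GenericRecord] {
      override def addElement(element: GenericRecord): Unit = dataFileWriter.append(element)
      override def flush(): Unit = dataFileWriter.flush()
      // Flush the final Avro block; the sink itself owns and closes the stream.
      override def finish(): Unit = dataFileWriter.flush()
    }
  }
}

It could then be plugged into a StreamingFileSink
(org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink) in
place of AvroWriters.forGenericRecord(schema), roughly like this; the output
path is a placeholder, and checkpointing must be enabled because bulk formats
roll part files on checkpoint:

val sink = StreamingFileSink
  .forBulkFormat(new org.apache.flink.core.fs.Path("/tmp/avro-out"),
    new CompressedGenericAvroWriterFactory(schema.toString, "snappy"))
  .build()

stream.addSink(sink)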
