Thanks, Matthias, for pointing us to the links; we will definitely follow
them.

*Regards*
*Akshay Agarwal*


On Thu, Aug 26, 2021 at 6:43 PM Matthias Pohl <matth...@ververica.com>
wrote:

> Hi Akshay,
> thanks for reaching out to the community. There was a similar question on
> the mailing list earlier this month [1]. Unfortunately, it just doesn't
> seem to be supported yet. The feature request has already been filed as
> FLINK-23589 [2].
>
> Best,
> Matthias
>
> [1]
> https://lists.apache.org/thread.html/r463f748358202d207e4bf9c7fdcb77e609f35bbd670dbc5278dd7615%40%3Cuser.flink.apache.org%3E
> [2] https://issues.apache.org/jira/browse/FLINK-23589
>
> On Thu, Aug 26, 2021 at 11:07 AM Akshay Agarwal <
> akshay.agar...@grofers.com> wrote:
>
>> Hi everyone,
>>
>> We are trying out Flink 1.13.1 with Kafka topics in Avro format, but we
>> are hitting an issue when creating a table via Table SQL: the Avro format
>> does not support timestamp precision greater than 3. I don't understand
>> why Flink rejects timestamps with precision greater than 3 (exception
>> <https://github.com/apache/flink/blob/3555741a12ba9fb65e8db9f731a131ab39d1cfe8/flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSchemaConverter.java#L359>),
>> since Avro 1.10.0 does support microsecond precision. A minimal DDL that
>> reproduces the exception is sketched below.
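>>
>> Here is a minimal sketch of the kind of DDL that fails; the table, topic,
>> and field names are made up for illustration:
>>
>>   CREATE TABLE events (
>>     id STRING,
>>     event_time TIMESTAMP(6)  -- precision > 3 is rejected by the Avro format
>>   ) WITH (
>>     'connector' = 'kafka',
>>     'topic' = 'events',
>>     'properties.bootstrap.servers' = 'localhost:9092',
>>     'format' = 'avro'
>>   );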
>> Our Kafka records contain a timestamp of the form
>> yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'. For now I read the field through a
>> custom UDF that truncates the fractional seconds to millisecond precision
>> (a sketch follows below), but I wanted to know whether there is a better
>> way to handle this, and also why it isn't supported. It would be a great
>> help for us to know, so that we can build on it.
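>>
>> For reference, here is a minimal sketch of such a UDF, assuming the
>> timestamp arrives as a STRING column; the class name and pattern are
>> illustrative, not our exact code:
>>
>>   import java.time.LocalDateTime;
>>   import java.time.format.DateTimeFormatter;
>>   import java.time.temporal.ChronoUnit;
>>   import org.apache.flink.table.annotation.DataTypeHint;
>>   import org.apache.flink.table.functions.ScalarFunction;
>>
>>   public class TruncateToMillis extends ScalarFunction {
>>       private static final DateTimeFormatter FMT =
>>           DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'");
>>
>>       // Parse the microsecond timestamp and truncate it to
>>       // TIMESTAMP(3), which the Avro format accepts.
>>       public @DataTypeHint("TIMESTAMP(3)") LocalDateTime eval(String ts) {
>>           return LocalDateTime.parse(ts, FMT).truncatedTo(ChronoUnit.MILLIS);
>>       }
>>   }
>>
>> We register it with
>> tableEnv.createTemporarySystemFunction("TRUNC_TO_MILLIS", TruncateToMillis.class)
>> and call it in the SELECT.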
>>
>> *Regards*
>> *Akshay Agarwal*
>>
>
>
