I think you can also do CAST((e / 1000) AS TIMESTAMP)
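
If your Flink version still accepts that cast (newer releases may reject CAST
from BIGINT to TIMESTAMP, in which case the TO_TIMESTAMP(FROM_UNIXTIME(...))
form Jark suggests below is the safer route), it would slot into the same
computed-column spot sketched further down, roughly:

    -- untested; `e_ts` is just a placeholder name for the derived column
    `e_ts` AS CAST((`e` / 1000) AS TIMESTAMP(3)),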

On Tue, Mar 2, 2021 at 7:27 PM Sebastián Magrí <sebasma...@gmail.com> wrote:

> Thanks a lot Jark,
>
> On Mon, 1 Mar 2021 at 02:38, Jark Wu <imj...@gmail.com> wrote:
>
>> Hi Sebastián,
>>
>> You can use `TO_TIMESTAMP(FROM_UNIXTIME(e))` to get a timestamp value.
>> The BIGINT should be in seconds. Please note that you need to declare the
>> computed column in the DDL schema and declare a watermark strategy on that
>> computed column to make it a rowtime attribute, because a streaming OVER
>> window requires ordering by a time attribute.
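>>
>> As a rough, untested sketch of that (assuming `e` holds epoch milliseconds,
>> hence the division by 1000; `e_ts` is just a placeholder name and the
>> 5-second watermark delay is an arbitrary example):
>>
>>   CREATE TABLE sessions (
>>     `ats`   STRING,
>>     `e`     BIGINT,
>>     `s`     BIGINT,
>>     `proc_time` AS PROCTIME(),
>>     -- computed column: epoch millis -> seconds -> TIMESTAMP(3)
>>     `e_ts`  AS TO_TIMESTAMP(FROM_UNIXTIME(`e` / 1000)),
>>     -- declares e_ts as the rowtime attribute
>>     WATERMARK FOR `e_ts` AS `e_ts` - INTERVAL '5' SECOND,
>>     PRIMARY KEY (`ats`, `s`, `e`) NOT ENFORCED
>>   )
>>
>> The OVER window can then use ORDER BY `e_ts` instead of `proc_time`.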
>>
>> Best,
>> Jark
>>
>> On Sun, 21 Feb 2021 at 07:32, Sebastián Magrí <sebasma...@gmail.com>
>> wrote:
>>
>>> I have a table with two BIGINT fields holding the start and end of an
>>> event as UNIX time in milliseconds. I want a resulting column with the
>>> delta in milliseconds so I can group by that difference, and I also want
>>> aggregations with window functions based on the `end` field.
>>>
>>> The table definition looks like this:
>>>     CREATE TABLE sessions (
>>>       `ats`   STRING,
>>>       `e`     BIGINT,
>>>       `s`     BIGINT,
>>>       `proc_time` AS PROCTIME(),
>>>       PRIMARY KEY (`ats`, `s`, `e`) NOT ENFORCED
>>>     )
>>>
>>> Then I have a few views like this:
>>>
>>> CREATE VIEW second_sessions AS
>>>   SELECT * FROM sessions
>>>   WHERE `e` - `s` = 1000
>>>
>>> And some windows using these views like this:
>>>
>>>   WINDOW w3m AS (
>>>     PARTITION BY `t`
>>>     ORDER BY `proc_time`
>>>     RANGE BETWEEN INTERVAL '3' MINUTE PRECEDING AND CURRENT ROW
>>>   )
>>>
>>> I'd like to use the `e` field for windowing instead of `proc_time`, but
>>> I keep running into errors about the `TO_TIMESTAMP(BIGINT)` overload no
>>> longer existing or about unsupported timestamp arithmetic.
>>>
>>> What is the best practice for a case such as this?
>>>
>>> Best Regards,
>>> --
>>> Sebastián Ramírez Magrí
>>>
>>
>
> --
> Sebastián Ramírez Magrí
>
