On 12.05.23 at 13:04, Durumdara wrote:
[...]
The LAG function seems to be OK, but how do I handle it when more than two
consecutive periods are missing?
03:00 10
03:01 NULL
03:02 NULL
03:03 NULL
03:04 11
03:05 13
[...]
And what do you think about a NULL in the first (and second/third) row?
On Sat, May 13, 2023 at 2:18 AM Andrew Gierth wrote:
> > "Durumdara" == Durumdara writes:
>
> create table tmp_test_table(mmin,val)
>   as select o, v
>   from unnest(array[1,5,NULL,3,NULL,NULL,10,7,NULL,NULL,NULL,4])
>        with ordinality as u(v,o);
> select * from [...]
[...] NULLS option for
window functions, but it's somewhat awkward.
Someone else already mentioned using generate_series in lateral position
to fill in values; in most cases this is probably the best approach,
though it requires a little care.
There is also a window function approach based on using [...]
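The "generate a complete minute series and outer-join the measurements" idea mentioned above can be sketched as follows. This is a hedged illustration, not code from the thread: it uses SQLite through Python's sqlite3 module so it runs without a server, emulates PostgreSQL's generate_series() with a recursive CTE, and the table name `measures` and its columns are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table measures(mmin integer, val integer);
    insert into measures values (0, 10), (2, 12), (3, 11), (5, 13);
""")

# Bounds of the series; in PostgreSQL this role is played by
# generate_series() in a lateral join.
lo, hi = con.execute("select min(mmin), max(mmin) from measures").fetchone()

rows = con.execute("""
    with recursive minutes(mmin) as (
        select ?                                   -- first minute
        union all
        select mmin + 1 from minutes where mmin < ?
    )
    select minutes.mmin, measures.val
    from minutes
    left join measures using (mmin)
    order by minutes.mmin
""", (lo, hi)).fetchall()

print(rows)
```

The left join leaves NULL (None) in `val` for the minutes that have no measurement, which is exactly the shape the LAG/IGNORE NULLS discussion starts from.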
On Fri, 12 May 2023, GF wrote:
>"The SQL standard defines a RESPECT NULLS or IGNORE NULLS option for lead,
>lag, first_value, last_value, and nth_value. This is not implemented in
>PostgreSQL: the behavior is always the same as the standard's default,
>namely RESPECT NULLS".
Yeah, THAT caused no [...]
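Since PostgreSQL only provides the standard's default RESPECT NULLS behavior, one well-known emulation of IGNORE NULLS (a common trick, not taken from the quoted mail) is to count non-NULL values in a running window, which assigns every NULL row to the group of the last non-NULL value; a per-group max() then fills the gaps. A runnable sketch using SQLite's window functions via sqlite3, with illustrative table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table t(mmin integer, val integer);
    insert into t values (0, 10), (1, NULL), (2, NULL), (3, NULL), (4, 11), (5, 13);
""")

rows = con.execute("""
    with grp as (
        select mmin, val,
               count(val) over (order by mmin) as g   -- count() skips NULLs,
        from t                                        -- so g only advances
    )                                                 -- on non-NULL rows
    select mmin, max(val) over (partition by g) as filled
    from grp
    order by mmin
""").fetchall()

print(rows)
```

Each group holds exactly one non-NULL value, so max() simply propagates it across the following NULL rows, i.e. the run of NULLs at 03:01–03:03 inherits the 03:00 reading.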
On Fri, 12 May 2023 at 13:04, Durumdara wrote:
> Dear Members!
>
> I have a table with temperature measures.
> The data is coming from the PLC, but sometimes the period is "slipping",
> so the values are not correctly minute-based.
>
> 03:00 10
> 03:02 12
> 03:03 11
> 03:05 13
>
Dear Members!
I have a table with temperature measures.
The data is coming from the PLC, but sometimes the period is "slipping",
so the values are not correctly minute-based.
03:00 10
03:02 12
03:03 11
03:05 13
I have to make a virtual table which is minute based.
I thought I [...]
On Sat, Jun 4, 2022 at 1:18 AM Robert Stanford wrote:
> Hi,
>
> From this:
> Input Start End
> 5 2022-06-04 09:09:00 2022-06-04 09:09:29
> 4 2022-06-04 09:08:50 2022-06-04 09:09:00
> 4 2022-06-04 09:08:10 2022-06-04 09:08:50
> 4 2022-06-04 [...]
On Sat, Jun 4, 2022 at 7:50 AM Tom Lane wrote:
> Robert Stanford writes:
> > Can anyone suggest an elegant way to coalesce consecutive rows so only
> the
> > first start time and last end time for each group of events (by input) is
> > returned.
>
> Window functions don't change the number of [...]
Robert Stanford writes:
> Can anyone suggest an elegant way to coalesce consecutive rows so only the
> first start time and last end time for each group of events (by input) is
> returned.
Window functions don't change the number of query result rows, so that
idea is a dead end. The standard [...]
On Sat, 4 Jun 2022 at 22:56, Thiemo Kellner
wrote:
> Hi Robert
>
> Interesting problem. I need to think about it.
>
> You need to figure out when Input changes. You can achieve this by using
> lead or lag (depending on the sort direction over start)
>
Hi Robert
Interesting problem. I need to think about it.
You need to figure out when Input changes. You can achieve this by using
lead or lag (depending on the sort direction over start)
https://www.postgresql.org/docs/current/functions-window.html .
Hope this nudges you to a solution.
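Thiemo's lag/lead hint is the first half of the usual "gaps and islands" recipe: flag each row where Input differs from the previous row, turn a running sum of those flags into a group number, then aggregate min(start)/max(end) per group. A hedged sketch using SQLite via sqlite3 (sample rows adapted from the thread; SQLite's null-safe IS operator stands in for PostgreSQL's IS NOT DISTINCT FROM):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table events(input integer, start text, "end" text);
    insert into events values
      (4, '2022-06-04 09:08:10', '2022-06-04 09:08:50'),
      (4, '2022-06-04 09:08:50', '2022-06-04 09:09:00'),
      (5, '2022-06-04 09:09:00', '2022-06-04 09:09:29');
""")

rows = con.execute("""
    with flagged as (
        select input, start, "end",
               case when lag(input) over (order by start) is input
                    then 0 else 1 end as chg      -- 1 where Input changes
        from events
    ),
    grouped as (
        select *, sum(chg) over (order by start) as grp
        from flagged
    )
    select input, min(start) as first_start, max("end") as last_end
    from grouped
    group by grp, input
    order by first_start
""").fetchall()

print(rows)
```

The three sample rows collapse into two: one row per run of consecutive events with the same Input, carrying that run's first start and last end.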
Hi,
I have time series data from multiple inputs with start and
end timestamps.
Can anyone suggest an elegant way to coalesce consecutive rows so only the
first start time and last end time for each group of events (by input) is
returned.
Sample from and to below where the rows for Input number [...]
On Thu, 3 Sep 2020 at 20:59, Michael Lewis wrote:
> It seems like you are maybe wanting this- If the previous row is the same,
> then get the previous row's run_nr. If it is different, then increment.
>
> case when lag( property_A ) over() = property_A and lag( property_B )
> over() = property_B
It seems like you are maybe wanting this- If the previous row is the same,
then get the previous row's run_nr. If it is different, then increment.
case when lag( property_A ) over() = property_A
     and lag( property_B ) over() = property_B
     then coalesce( lag( run_nr ) over(), 1 )
     else lag( run_nr ) [...]
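The case expression above is cut off by the archive, and note that if run_nr is the value being derived, lag(run_nr) cannot be read in the same pass that computes it. A common alternative (my completion, not necessarily Michael's) derives run_nr as a running sum of change flags over both properties. Sketch in SQLite via sqlite3; the process_data table and sample values are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table process_data(datetime text, property_A text,
                              property_B text, value real);
    insert into process_data values
      ('2020-09-03 10:00', 'a', 'x', 1.0),
      ('2020-09-03 10:01', 'a', 'x', 1.1),
      ('2020-09-03 10:02', 'b', 'x', 2.0),
      ('2020-09-03 10:03', 'a', 'x', 1.2);
""")

rows = con.execute("""
    with flagged as (
        select *,
               case when lag(property_A) over (order by datetime) is property_A
                     and lag(property_B) over (order by datetime) is property_B
                    then 0 else 1 end as chg   -- 1 where either property changes
        from process_data
    )
    select datetime, property_A, property_B, value,
           sum(chg) over (order by datetime) as run_nr
    from flagged
    order by datetime
""").fetchall()

for r in rows:
    print(r)
```

The run that returns to ('a', 'x') at 10:03 gets a new run_nr (3), which is the behavior a plain partition by property values cannot produce.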
[...] process_data
> window run as (partition by property_A, property_B order by datetime)
> ;
>
> You can't do it with a window function over the "run" window because
> no window function ever looks outside the current partition. But
> that's easy to fix by using a different [...]
[...] is?
I think this'll work:
select datetime, property_A, property_B
     , (first_value(datetime) over run)::time as swap_time
     , dense_rank() over (order by property_A, property_B)
     , value
from process_data
window run as (partition by property_A, property_B order by datetime)
;
You can't do it with a window [...]
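The query above can be tried directly on sample data. The sketch below adapts it to SQLite through sqlite3 (the ::time cast is PostgreSQL-only and is dropped here; the sample rows are illustrative, not from the thread). It also makes the caveat visible: dense_rank() numbers distinct (property_A, property_B) values, so a combination that reappears later falls back into its earlier partition instead of starting a new run.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table process_data(datetime text, property_A text,
                              property_B text, value real);
    insert into process_data values
      ('2020-09-03 10:00', 'a', 'x', 1.0),
      ('2020-09-03 10:01', 'a', 'x', 1.1),
      ('2020-09-03 10:02', 'b', 'x', 2.0),
      ('2020-09-03 10:03', 'a', 'x', 1.2);
""")

rows = con.execute("""
    select datetime, property_A, property_B,
           first_value(datetime) over run as swap_time,
           dense_rank() over (order by property_A, property_B) as rnk,
           value
    from process_data
    window run as (partition by property_A, property_B order by datetime)
    order by datetime
""").fetchall()

for r in rows:
    print(r)
```

Note that the 10:03 row, although it starts a new run of ('a', 'x'), reports the same swap_time (10:00) and rank (1) as the first run, because the partition is defined by value, not by run.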
We are trying to add some information to a query over data from a
continuous process. Most of what we want can be done quite nicely using
window functions, but I got stuck on this particular problem:
The data has records with a timestamp and a few properties that make
subsequent rows belong to