[
https://issues.apache.org/jira/browse/SPARK-54630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Vinod KC updated SPARK-54630:
-----------------------------
Description:
Add a new timestamp_bucket function to group temporal data into fixed-width
intervals with configurable bucket alignment. This function addresses the need
for arbitrary interval bucketing beyond calendar-based units.
Currently, Spark SQL lacks a flexible way to bucket timestamps into arbitrary
intervals with custom alignment.
Limitations of existing functions:
* {{date_trunc}}: only truncates to predefined calendar units (year, quarter,
month, week, day, hour, minute, second, etc.); the bucket width cannot be an
arbitrary interval such as 15 minutes
* No support for custom bucket alignment (e.g., Monday-aligned weeks, fiscal
periods, shift schedules)
Common use cases that are difficult today (see the workaround sketch after this list):
# Bucketing by arbitrary intervals (e.g., 15 minutes, 6 hours, 90 days)
# Aligning buckets to business schedules (e.g., weekly reports starting Monday)
# Grouping sensor data or logs into custom time windows
# Time-series analysis with non-standard intervals
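Today these require manual epoch arithmetic or the {{window()}} function. A
minimal sketch of the current workaround, assuming a hypothetical table
{{events}} with a timestamp column {{event_ts}}:
{code:java}
-- 15-minute buckets via unix-time arithmetic (900 seconds = 15 minutes);
-- the bucket start is the timestamp floored to a multiple of 900 seconds
SELECT timestamp_seconds(floor(unix_timestamp(event_ts) / 900) * 900) AS bucket_start,
       count(*) AS event_count
FROM events
GROUP BY 1
ORDER BY 1;
{code}
This works but is easy to get wrong, and custom alignment (e.g., a Monday
origin) needs additional offset arithmetic.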
Add a `timestamp_bucket(bucket_width, timestamp[, origin])` function with the
following characteristics:
Function Signature:
{code:java}
timestamp_bucket(bucket_width INTERVAL DAY TO SECOND,
                 timestamp DATE|TIMESTAMP|TIMESTAMP_NTZ
                 [, origin TIMESTAMP]) -> TIMESTAMP{code}
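A sketch of the intended behavior (the function does not exist yet, so the
results below are what the proposal implies rather than actual output):
{code:java}
-- 15-minute buckets with the default (epoch-aligned) origin:
SELECT timestamp_bucket(INTERVAL '15' MINUTE, TIMESTAMP'2025-01-15 10:37:22');
-- Expected: 2025-01-15 10:30:00

-- Monday-aligned weekly buckets via an explicit origin (2025-01-06 is a Monday):
SELECT timestamp_bucket(INTERVAL '7' DAY, TIMESTAMP'2025-01-15 08:00:00',
                        TIMESTAMP'2025-01-06 00:00:00');
-- Expected: 2025-01-13 00:00:00
{code}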
*Comparison with Other Databases*
{_}PostgreSQL{_}: {{date_bin(interval, timestamp, origin)}}
{_}TimescaleDB{_}: {{time_bucket(interval, timestamp, origin)}}
The proposed function provides functionality similar to PostgreSQL's
{{date_bin}} and TimescaleDB's {{time_bucket}}.
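For comparison, the PostgreSQL documentation gives this {{date_bin}} example
(PostgreSQL 14+):
{code:java}
SELECT date_bin('15 minutes', TIMESTAMP '2020-02-11 15:44:17',
                TIMESTAMP '2001-01-01');
-- Result: 2020-02-11 15:30:00
{code}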
was:
Add a {{date_bucket}} function to bucket DATE values into arbitrary
fixed-duration intervals aligned to the Unix epoch (1970-01-01).
Currently, Spark provides:
* {*}{{date_trunc()}}{*}: Truncates to calendar-based units (year, quarter,
month, week, day) but only supports predefined units
* {*}{{window()}}{*}: For TIMESTAMP bucketing with tumbling/sliding windows,
returns a struct with start/end times
However, there is no function to bucket dates by arbitrary fixed durations like:
* 7-day buckets (weekly reporting aligned to epoch, not calendar weeks)
* 14-day buckets (bi-weekly cycles)
* Custom N-day intervals for business cycles
This is a common requirement for:
* {*}Time-series analytics{*}: Grouping events by custom time periods
* {*}Reporting{*}: Creating consistent bucketing across different date ranges
* {*}Data partitioning{*}: Organizing data by fixed-duration intervals
{code:java}
-- 7-day buckets
SELECT date_bucket(INTERVAL '7' DAY, DATE'2025-01-15');
-- Result: 2025-01-09
-- 2 WEEKS = 14 days
SELECT date_bucket(INTERVAL '2' WEEK, DATE'2025-01-15');
-- Result: 2025-01-02
-- Group orders by 7-day buckets
SELECT date_bucket(INTERVAL '7' DAY, order_date) AS week_bucket,
COUNT(*) AS order_count,
SUM(amount) AS total_sales
FROM orders
WHERE order_date >= DATE'2025-01-01'
GROUP BY week_bucket
ORDER BY week_bucket;
-- Demonstrate date_bucket with multiple input dates
SELECT stack(8,
  DATE'2025-01-01', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-01'),
  DATE'2025-01-05', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-05'),
  DATE'2025-01-08', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-08'),
  DATE'2025-01-15', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-15'),
  DATE'2025-01-14', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-14'),
  DATE'2025-01-13', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-13'),
  DATE'2025-01-22', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-22'),
  DATE'2025-01-29', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-29')
) AS (input_date, bucket_start);
-- input_date   bucket_start
-- 2025-01-01   2024-12-26
-- 2025-01-05   2025-01-02
-- 2025-01-08   2025-01-02
-- 2025-01-15   2025-01-09
-- 2025-01-14   2025-01-09
-- 2025-01-13   2025-01-09
-- 2025-01-22   2025-01-16
-- 2025-01-29   2025-01-23{code}
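The epoch alignment shown above can be reproduced today with existing date
arithmetic; a minimal sketch of the equivalent computation:
{code:java}
-- Equivalent of date_bucket(INTERVAL '7' DAY, DATE'2025-01-15'):
-- floor the day count since the Unix epoch to a multiple of 7,
-- then add it back to the epoch
SELECT date_add(DATE'1970-01-01',
                CAST(floor(datediff(DATE'2025-01-15', DATE'1970-01-01') / 7) * 7 AS INT));
-- Result: 2025-01-09
{code}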
> Add timestamp_bucket function for temporal bucketing with configurable
> alignment
> --------------------------------------------------------------------------------
>
> Key: SPARK-54630
> URL: https://issues.apache.org/jira/browse/SPARK-54630
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 3.5.0, 4.0.0
> Reporter: Vinod KC
> Priority: Major
> Labels: pull-request-available