[
https://issues.apache.org/jira/browse/IGNITE-27272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ivan Zlenko resolved IGNITE-27272.
----------------------------------
Resolution: Won't Fix
We no longer need this limitation; the underlying issue appears to have been
fixed.
> Block too large batches from being inserted using data streamer
> ---------------------------------------------------------------
>
> Key: IGNITE-27272
> URL: https://issues.apache.org/jira/browse/IGNITE-27272
> Project: Ignite
> Issue Type: Improvement
> Reporter: Ivan Zlenko
> Priority: Major
> Labels: ignite-3
>
> If the batch size for the data streamer is too large and we risk running out
> of memory, it is a good idea to prevent this batch from being inserted until
> we have a mechanism in place that will automatically split such batches into
> smaller ones.
> The maximum allowed size of one batch could be derived from the available
> memory and from the schema of the table into which the batch is inserted, as
> sketched below.
> Otherwise, the cluster could become unresponsive after such a batch is sent.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)