Thanks Weston for the information.
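
To make sure I understand the end-to-end case, below is a rough sketch of the
repartitioning workflow I believe `test_write_dataset_with_backpressure`
exercises (paths and parameter values are just placeholders, not taken from
the test): scan a larger-than-memory dataset and stream it into
`write_dataset`, so the writer can throttle the scan when it falls behind.

    import pyarrow.dataset as ds

    # Hypothetical input: a dataset too large to fit in memory.
    src = ds.dataset("/data/big_input", format="parquet")

    # write_dataset consumes the scan as a stream of record batches; if the
    # writer cannot keep up, backpressure should pause the producing scan
    # rather than letting batches accumulate in memory.
    ds.write_dataset(
        src,
        "/data/repartitioned",
        format="parquet",
        max_rows_per_file=1_000_000,  # arbitrary value, just for illustration
    )

If I do end up adding more generic ExecNode-level backpressure tests, the
SinkNodeBackpressure unit test in plan_test.cc sounds like the right model to
extend.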

On Thu, Feb 16, 2023 at 1:32 PM Weston Pace <weston.p...@gmail.com> wrote:

> There is a little bit at the end-to-end level.  One goal is to be able to
> repartition a very large dataset.  This means we read from something bigger
> than memory and then write it back out.  This workflow is tested in
> `test_write_dataset_with_backpressure` in test_dataset.py in pyarrow.
>
> Then there is one unit test in plan_test.cc (ExecPlanExecution,
> SinkNodeBackpressure).  And of course, there is some testing in the asof
> join test.
>
> The dataset writer and scanner have their own concepts of backpressure and
> these are independently unit tested.  However, this is more or less
> external to Acero.
>
> So I think there is certainly room for improvement here.
>
> On Thu, Feb 16, 2023 at 5:34 AM Yaron Gvili <rt...@hotmail.com> wrote:
>
> > Hi,
> >
> > What testing of back-pressure exists in Acero? I'm mostly interested in
> > testing of back-pressure that applies to any ExecNode, but could also
> > learn from more specific testing. If this is not well covered, I'd look
> > into implementing such testing.
> >
> >
> > Cheers,
> > Yaron.
> >
>
