I filed an issue for adding writes to the logical plan in DataFusion a
while back. It would be a good addition. Spark does something similar.
https://github.com/apache/arrow-datafusion/issues/5076
Thanks,
Andy.
On Sun, Apr 2, 2023 at 5:57 AM Metehan Yıldırım
wrote:
Hi,
What are the differences in requirements? I am not completely familiar with
the Ballista requirements; for example, I am not sure whether the final data
may end up on different hosts or not.
About LogicalPlan, is there an example of its usage in other OLAPs? Maybe you
can find a better-fitting solution w
Hi,
Thanks for your response.
Sorry, my bad - I didn't specify it clearly. However, I will check your solution.
What I'm looking for is Ballista: I need a distributed version of
export/save. Currently, Ballista can only read from sources like
S3 (MinIO) or HDFS, but after processing I need to save the output
Hi,
As far as I know, exporting data from a SQL database to a CSV file or another
external file format is typically not considered part of the logical plan
for executing a SQL query.
At present, I am developing a table sink feature in DataFusion, where I
have successfully added new APIs (insert_int
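To make the table-sink idea above concrete, here is a minimal sketch of what such an API could look like. All names here (TableSink, CsvSink, write_batch) are invented for illustration and are not DataFusion's actual API; a real sink would consume Arrow RecordBatches and stream to an object store rather than a String.

```rust
use std::fmt::Write as _;

/// Stand-in for an Arrow RecordBatch: rows of string columns.
type Batch = Vec<Vec<String>>;

/// Hypothetical sink interface the query engine pushes output batches into.
trait TableSink {
    fn write_batch(&mut self, batch: &Batch);
    /// Finalize the sink and report how many rows were written.
    fn finish(&mut self) -> usize;
}

/// Toy in-memory CSV sink. In a real engine this would stage data to an
/// object store (e.g. via put/put_multipart) instead of a String buffer.
struct CsvSink {
    buf: String,
    rows: usize,
}

impl CsvSink {
    fn new() -> Self {
        Self { buf: String::new(), rows: 0 }
    }
}

impl TableSink for CsvSink {
    fn write_batch(&mut self, batch: &Batch) {
        for row in batch {
            let _ = writeln!(self.buf, "{}", row.join(","));
            self.rows += 1;
        }
    }
    fn finish(&mut self) -> usize {
        self.rows
    }
}

fn demo() -> (String, usize) {
    let mut sink = CsvSink::new();
    sink.write_batch(&vec![
        vec!["1".into(), "a".into()],
        vec!["2".into(), "b".into()],
    ]);
    let rows = sink.finish();
    (sink.buf.clone(), rows)
}

fn main() {
    let (csv, rows) = demo();
    assert_eq!(rows, 2);
    assert_eq!(csv, "1,a\n2,b\n");
    println!("wrote {rows} rows");
}
```

The point of the trait boundary is that the planner only needs to know "something accepts batches"; whether the destination is local CSV, S3, or HDFS stays behind the sink implementation.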
Hi,
Looking for advice:
I'm looking into creating a writer component for Ballista.
There is a data source but not a sink.
I started looking into the object store's put/put_multipart.
But it looks like a simple context extension is not enough - do I need to
extend the logical/physical plan?
If you have any pointers
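One way to think about the "extend the logical/physical plan?" question is to model the write as just another plan node wrapping the query, which is roughly what the issue referenced at the top of the thread proposes. The sketch below is a toy plan, not DataFusion's or Ballista's actual types; the enum variants and the in-memory "catalog" are invented for illustration.

```rust
#[derive(Debug)]
enum LogicalPlan {
    /// Read rows from a named table.
    Scan { table: String },
    /// Write the child plan's output to a destination
    /// (e.g. an object-store path), then pass the rows through.
    Write { dest: String, input: Box<LogicalPlan> },
}

/// "Execute" the plan against a trivial in-memory catalog, collecting
/// (destination, rows-written) pairs as a stand-in for real sinks.
fn execute(plan: &LogicalPlan, writes: &mut Vec<(String, usize)>) -> Vec<i64> {
    match plan {
        LogicalPlan::Scan { table } => match table.as_str() {
            "t" => vec![1, 2, 3],
            _ => vec![],
        },
        LogicalPlan::Write { dest, input } => {
            let rows = execute(input, writes);
            writes.push((dest.clone(), rows.len()));
            rows
        }
    }
}

fn main() {
    // Conceptually: SELECT * FROM t, saved to a (pretend) object-store path.
    let plan = LogicalPlan::Write {
        dest: "s3://bucket/out.csv".to_string(),
        input: Box::new(LogicalPlan::Scan { table: "t".to_string() }),
    };
    let mut writes = Vec::new();
    let rows = execute(&plan, &mut writes);
    assert_eq!(rows, vec![1, 2, 3]);
    assert_eq!(writes, vec![("s3://bucket/out.csv".to_string(), 3)]);
    println!("wrote {} rows to {}", writes[0].1, writes[0].0);
}
```

Because the write is a plan node rather than an afterthought in the session context, a distributed scheduler like Ballista's could in principle place the sink stage on executors the same way it places scans and shuffles, which is why a context extension alone tends not to be enough.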