I think we already have this table capability: ACCEPT_ANY_SCHEMA. Can you try
that?
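For illustration, here is a minimal self-contained model of what that capability does. These are stand-in Python types, not Spark's actual classes: in real Spark 3.x a connector's `Table.capabilities()` returns `TableCapability.ACCEPT_ANY_SCHEMA`, and the analyzer skips the strict output-schema check for such tables.

```python
from dataclasses import dataclass

# Stand-in for Spark's TableCapability.ACCEPT_ANY_SCHEMA enum value.
ACCEPT_ANY_SCHEMA = "ACCEPT_ANY_SCHEMA"

@dataclass
class Table:
    # Illustrative stand-in for a DSv2 Table; real tables expose
    # capabilities() returning a set of TableCapability values.
    name: str
    capabilities: frozenset = frozenset()

def needs_schema_validation(table: Table) -> bool:
    """Analysis would skip strict output-column resolution
    for tables that declare ACCEPT_ANY_SCHEMA."""
    return ACCEPT_ANY_SCHEMA not in table.capabilities
```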
On Thu, May 14, 2020 at 6:17 AM Russell Spitzer wrote:
I would really appreciate that, I'm probably going to just write a planner
rule for now which matches up my table schema with the query output if they
are valid, and fails analysis otherwise. This approach is how I got
metadata columns in so I believe it would work for writing as well.
On Wed, May
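A sketch of the matching logic such a planner rule might apply: reorder the query's output columns to the table schema by name, and fail analysis when a table column has no compatible match. These are plain-Python stand-ins, not Spark's Attribute/LogicalPlan types, and the exact compatibility rules here are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Column:
    # Stand-in for a resolved output attribute: name plus type.
    name: str
    data_type: str

def resolve_by_name(table_schema, query_output):
    """Return the query columns reordered to match the table schema,
    or raise ValueError (i.e. fail analysis) on a mismatch."""
    by_name = {c.name: c for c in query_output}
    resolved = []
    for tc in table_schema:
        qc = by_name.get(tc.name)
        if qc is None:
            raise ValueError(f"missing column '{tc.name}' in query output")
        if qc.data_type != tc.data_type:
            raise ValueError(f"type mismatch for '{tc.name}': "
                             f"{qc.data_type} vs {tc.data_type}")
        resolved.append(qc)
    return resolved
```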
I agree with adding a table capability for this. This is something that we
support in our Spark branch so that users can evolve tables without
breaking existing ETL jobs -- when you add an optional column, it shouldn't
fail the existing pipeline writing data to a table. I can contribute the
changes.
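The relaxed rule described above could be sketched roughly like this: a write may omit a table column only if that column is nullable (optional), so adding an optional column does not break existing writers. Stand-in types again, not Spark's, and only the missing-column case is modeled.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Field:
    # Stand-in for a StructField: name plus nullability.
    name: str
    nullable: bool

def write_allowed(table_schema, query_output_names):
    """Allow the write if every table column is either present
    in the query output or nullable (optional)."""
    return all(f.name in query_output_names or f.nullable
               for f in table_schema)
```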
In DSv1 this was pretty easy to do because the burden of verification for
writes was on the datasource; the new setup makes partial writes
difficult.
resolveOutputColumns checks the table schema against the write plan's output
and will fail any requests which don't contain every column as
This looks a bit specific and maybe it's better to allow catalogs to
customize the error message, which is more general.
On Wed, May 13, 2020 at 12:16 AM Russell Spitzer
wrote:
> Currently the way some actions work, we receive an error during analysis
> phase. For example, doing a "SELECT * FROM