Hi Wing Yew,
Thanks for the pointer to the PR. That's what I was looking for.
I will watch #1508 and #1029; let's continue the discussion on GitHub.
Best,
Tianyi
On Tue, Dec 15, 2020 at 3:44 AM Wing Yew Poon wrote:
Hi Tianyi,
The behavior you found is indeed the current behavior in Iceberg. I too
found it unexpected. I have a PR to address this:
https://github.com/apache/iceberg/pull/1508. Due to other work, I had not
followed up on this for a while, but I am returning to it now.
- Wing Yew
On Mon, Dec 14,
I had a call with some developers from the S3 team, and they said this
change should resolve the "negative caching" issue.
Atomic renames are on their radar, but they said those will take a lot
of work on their part.
On Fri, 4 Dec 2020 at 21:57, Ryan Blue wrote:
> It isn't clear whether this S3 c
Hi all,
We have a proposed PR here[1] which allows custom Catalogs to be used
in the Spark 3 DataFrame API. As discussed in the PR[2], this change
breaks support for specifying a schema in the DataFrame reader/writer,
e.g.:
spark.read().schema(schema).format("iceberg").load(table)
The schema argum
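As a plain-Python sketch of the conflict described above (illustrative
names only, not the actual Iceberg or Spark API): when a catalog
resolves a table, the table's schema comes from the catalog, so a
user-supplied schema must either match it or be rejected.

```python
# Illustrative sketch: a catalog is the source of truth for a table's
# schema, so a reader-supplied schema can only confirm or conflict.
class Catalog:
    def __init__(self, tables):
        self.tables = tables  # table name -> schema (list of column names)

    def load(self, name, user_schema=None):
        table_schema = self.tables[name]
        if user_schema is not None and user_schema != table_schema:
            raise ValueError(
                "user-specified schema conflicts with catalog schema")
        return table_schema

catalog = Catalog({"db.t": ["id", "data"]})
print(catalog.load("db.t"))  # ['id', 'data']
# catalog.load("db.t", ["id"])  # would raise: conflicting schema
```

This is one way to frame the design question in the PR: whether to
ignore, validate, or reject a user-specified schema once the catalog
owns schema resolution.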
Hi,
I have a question regarding the behavior of schema evolution with
time-travel in Iceberg.
When I run a time-travel query against a table whose schema has
changed, I expect the result to be structured using the schema as of
that snapshot, but it turns out to be structured using the current
schema.
Is this an expecte
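The behavior being asked about can be illustrated with a plain-Python
sketch (no Spark or Iceberg required; the snapshots, schemas, and data
here are made up): a time-travel read that honors the snapshot's schema
shapes rows differently from one that applies the current schema.

```python
# Each snapshot records the schema that was current when it was written.
snapshots = {
    1: {"schema": ["id", "name"], "rows": [(1, "a")]},
    2: {"schema": ["id", "name", "email"],
        "rows": [(1, "a", None), (2, "b", "b@x.com")]},
}

def read_with_snapshot_schema(snapshot_id):
    # Time-travel read that projects rows using that snapshot's schema.
    snap = snapshots[snapshot_id]
    return [dict(zip(snap["schema"], row)) for row in snap["rows"]]

def read_with_current_schema(snapshot_id):
    # What the question describes: rows from an old snapshot, but shaped
    # by the table's current (latest) schema, missing columns as None.
    current = snapshots[max(snapshots)]["schema"]
    snap = snapshots[snapshot_id]
    return [dict(zip(current, row + (None,) * (len(current) - len(row))))
            for row in snap["rows"]]

print(read_with_snapshot_schema(1))  # [{'id': 1, 'name': 'a'}]
print(read_with_current_schema(1))   # [{'id': 1, 'name': 'a', 'email': None}]
```

The second function mirrors the surprising behavior: old data is read
back under the current schema rather than the schema in effect at the
queried snapshot.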