I had a call with some developers from the S3 team and asked about this; they
said this change should resolve the "negative caching" issue.
Atomic renames are on their radar, but they said that will take a lot of
work on their part.
On Fri, 4 Dec 2020 at 21:57, Ryan Blue wrote:
It isn't clear whether this S3 consistency change also fixes the negative
caching (a HEAD when the file doesn't exist causes a later HEAD to not see the
file), but I think it does not, because a PR was opened to add consistency by
using LIST before a HEAD operation.
I think it is still a goo
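To make the negative-caching sequence concrete, here is a minimal sketch using
the Hadoop FileSystem API against an s3a path (the bucket and key below are
made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NegativeCachingSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("s3a://example-bucket/table/metadata/v2.metadata.json");
        FileSystem fs = path.getFileSystem(conf);

        // 1. HEAD a key that does not exist yet, e.g. an existence check
        //    before a commit. S3 could cache the 404 for that key.
        boolean existsBefore = fs.exists(path);   // expected: false

        // 2. The object is then created (here by the same process, but it
        //    could just as well be another writer).
        try (FSDataOutputStream out = fs.create(path)) {
          out.writeUTF("metadata");
        }

        // 3. Whether a later HEAD now sees the object is exactly the open
        //    question: with the old "negative caching" behaviour the cached
        //    404 could make this return false even though the file exists.
        boolean existsAfter = fs.exists(path);
        System.out.println(existsBefore + " -> " + existsAfter);
      }
    }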
This feature will definitely help in cases where we saw a file-not-found
exception after creating a new file using s3a (Spark used to retry the task in
that case).
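Roughly the pattern that used to trigger those retries, as a sketch (the path
is hypothetical):

    import java.io.FileNotFoundException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadAfterWriteSketch {
      public static void main(String[] args) throws Exception {
        Path newFile = new Path("s3a://example-bucket/table/data/part-00000.parquet");
        FileSystem fs = newFile.getFileSystem(new Configuration());

        // A task writes a new data file through s3a.
        try (FSDataOutputStream out = fs.create(newFile)) {
          out.write(new byte[] {1, 2, 3});
        }

        // Under the old eventual-consistency model, looking the file up right
        // after the write could throw FileNotFoundException, and Spark would
        // retry the task. With strong read-after-write consistency this
        // lookup should now succeed.
        try {
          System.out.println("visible, length = " + fs.getFileStatus(newFile).getLen());
        } catch (FileNotFoundException e) {
          System.out.println("not visible yet: " + e.getMessage());
        }
      }
    }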
On Wed, Dec 2, 2020 at 2:11 AM Jungtaek Lim wrote:
What about the S3FileIO implementation? I've seen issues filed where even
working with S3 through the Hive catalog brings unexpected problems, and
S3FileIO is supposed to fix them (according to Ryan). Is it safe to use the
Hive catalog + the Hadoop API for S3 now, without S3FileIO?
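Not an authoritative answer, but for reference this is roughly how S3FileIO is
wired into a Hive catalog via the io-impl catalog property. The catalog name,
metastore URI, and warehouse location below are made up, and it assumes a
reasonably recent Iceberg release with the iceberg-aws module and the AWS SDK
on the classpath:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.iceberg.CatalogProperties;
    import org.apache.iceberg.hive.HiveCatalog;

    public class S3FileIOCatalogSketch {
      public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put(CatalogProperties.URI, "thrift://metastore-host:9083");
        props.put(CatalogProperties.WAREHOUSE_LOCATION, "s3://example-bucket/warehouse");
        // Route table data/metadata IO through S3FileIO instead of the s3a FileSystem.
        props.put(CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.aws.s3.S3FileIO");

        HiveCatalog catalog = new HiveCatalog();
        catalog.initialize("hive_prod", props);
        // Tables loaded from this catalog read and write files with S3FileIO,
        // while commits still go through the Hive Metastore.
      }
    }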
On Wed, Dec 2, 2020 at 6:54 PM, Vivekanand wrote:
Iceberg tables backed by HadoopTables and HadoopCatalog require an atomic
rename. This is not yet supported with S3.
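For context, the Hadoop-based commit path boils down to a filesystem rename,
roughly like this simplified sketch (paths made up). On S3 a rename is a copy
plus a delete rather than a single atomic operation, which is why strong
consistency alone doesn't make it safe:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CommitByRenameSketch {
      public static void main(String[] args) throws Exception {
        Path tmp = new Path("s3a://example-bucket/table/metadata/.tmp-v3.metadata.json");
        Path dst = new Path("s3a://example-bucket/table/metadata/v3.metadata.json");
        FileSystem fs = tmp.getFileSystem(new Configuration());

        // Hadoop-catalog-style commit: write the new table metadata to a temp
        // file, then rename it to the next versioned name. The rename is the
        // commit point and must fail if the destination already exists.
        boolean committed = fs.rename(tmp, dst);

        // On HDFS this rename is atomic. On s3a it is a server-side copy
        // followed by a delete, so two concurrent committers can both appear
        // to succeed and one commit can be silently lost.
        System.out.println("committed = " + committed);
      }
    }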
On Wed, Dec 2, 2020 at 3:20 PM Mass Dosage wrote:
Hello all,
Yesterday AWS announced that S3 now has strong read-after-write consistency:
https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency
https://aws.amazon.com/s3/consistency/
Does this mean that Iceberg tables backed by HadoopTables and HadoopCatalog
can now be used safely with S3?