With 12 +1 votes, 1 -0 vote, and no -1 votes, this candidate passes.
I'll open a PR to document and explain the behavior that Dongjoon noted.
Thank you for voting, everyone!
On Thu, Jul 9, 2020 at 6:45 PM Ryan Blue wrote:
> Hi everyone,
>
> I propose the following RC to be released as the official Apache Iceberg
> 0.9.0 release.
I agree that we should follow up and address this. I'll open a PR with an
update for docs and we can explore other options as well.
I think that the main issue here is how tables are loaded when not using a
catalog. When using a catalog, tables are cached so that writes update the
in-memory reference.
On Mon, Jul 13, 2020 at 4:28 PM Anton Okolnychyi wrote:
> I think the issue that was brought up by Dongjoon is valid and we should
> document the current caching behavior.
> The problem is also more generic and does not apply only to views, as
> operations that happen through the source directly may not be propagated
> to the catalog.
+1 (binding)
I think the issue that was brought up by Dongjoon is valid and we should
document the current caching behavior.
The problem is also more generic and does not apply only to views, as
operations that happen through the source directly may not be propagated to
the catalog.
I thin
+1 (non-binding)
- Verified signatures
- Verified checksum
- Built from src tarball and ran tests
- Ran internal test suite; it passes
On Mon, Jul 13, 2020 at 11:46 AM Pavan Lanka wrote:
> +1 (non-binding)
>
> - Environment
>   - OSX
>   - openjdk 1.8.0_252
> - Build from source
One more thing: a work-around is to redefine the view. That discards the
original logical plan and table and returns the expected result in Spark 3.
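For anyone who wants to try it, a minimal sketch of that work-around in a
spark-sql session (the table and view names are made up for illustration):

    spark-sql> CREATE OR REPLACE TEMPORARY VIEW v AS SELECT * FROM db.tbl;
    -- (writes to db.tbl from another session are not visible through v,
    --  because the view pinned the Table state captured at definition time)
    spark-sql> CREATE OR REPLACE TEMPORARY VIEW v AS SELECT * FROM db.tbl;
    spark-sql> SELECT count(*) FROM v;  -- now reflects the current table state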
On Mon, Jul 13, 2020 at 11:53 AM Ryan Blue wrote:
> Dongjoon,
>
> Thanks for raising this issue. I did some digging and the problem is that
> in Spark 3.0, the logical plan saves a Table instance with the current
> state when it was loaded.
Dongjoon,
Thanks for raising this issue. I did some digging and the problem is that
in Spark 3.0, the logical plan saves a Table instance with the current
state when it was loaded -- when the `createOrReplaceTempView` call
happened. That never gets refreshed, which is why you get stale data. In
2
+1 (non-binding)
- Environment
  - OSX
  - openjdk 1.8.0_252
- Build from source with tests
  - Build time ~7mins
  - Except for some warnings, looks good
> On Jul 10, 2020, at 9:20 AM, Ryan Murray wrote:
>
> 1. Verify the signature: OK
> 2. Verify the checksum: OK
> 3. Untar the archive tarball: OK
> 4. Run RAT checks to validate license headers: RAT checks passed
+1 (binding)
- Verified signatures
- Verified checksum
- Built src from tarball and ran tests.
- Looked at JMH dependency to make sure it wasn't leaking into the
published artifacts.
.. Owen
On Mon, Jul 13, 2020 at 11:00 AM RD wrote:
> +1
> - verified signatures and checksum
> - Ran RAT checks
+1
- verified signatures and checksum
- Ran RAT checks
- Built src and ran all tests
- Ran a simple spark job.
-Best,
R.
On Mon, Jul 13, 2020 at 8:36 AM Junjie Chen wrote:
> I ran the following steps:
> - downloaded and verified signature and checksum.
> - ran ./gradlew build; it took 8m23s on an 8-core, 16 GB cloud virtual
> machine.
I ran the following steps:
- downloaded and verified signature and checksum.
- ran ./gradlew build; it took 8m23s on an 8-core, 16 GB cloud virtual
machine.
- rebuilt our app with iceberg-spark-runtime-0.9.0.jar and verified on a
spark cluster. It works well.
+1 (non-binding)
+1 (non-binding)
- verified signature and checksum
- built from source and ran tests
- Validated Spark 3: used Ryan's example command, played with Spark 3; looks
very good.
- Validated vectorized reads: with vectorization enabled, works well.
Best,
Jingsong
*Followed the steps:*
1. Downloaded the source tarball, signature (.asc), and checksum (.sha512)
from
https://dist.apache.org/repos/dist/dev/iceberg/apache-iceberg-0.9.0-rc5/
2. Downloaded https://dist.apache.org/repos/dist/dev/incubator/iceberg/KEYS
Import gpg keys: download KEYS and run gpg --import
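For anyone repeating these steps, the commands look roughly like this (the
tarball file name is an assumption based on the RC5 directory above):

    # import the release signing keys
    curl -LO https://dist.apache.org/repos/dist/dev/incubator/iceberg/KEYS
    gpg --import KEYS
    # verify the signature and the checksum of the source tarball
    gpg --verify apache-iceberg-0.9.0.tar.gz.asc apache-iceberg-0.9.0.tar.gz
    sha512sum -c apache-iceberg-0.9.0.tar.gz.sha512  # or: shasum -a 512 -c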
I verified the hash/signature/build/unit tests and did manual testing with
Apache Spark 2.4.6 (hadoop-2.7) and 3.0.0 (hadoop-3.2) on an Apache Hive
2.3.7 metastore.
(BTW, for spark-3.0.0-bin-hadoop3.2, I used Ryan's example command with
`spark.sql.warehouse.dir` instead of `spark.warehouse.path`)
1. Iceberg 0.9 + Spa
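For reference, a sketch of the warehouse setting mentioned in the
parenthetical above (spark.sql.warehouse.dir is the standard Spark SQL conf;
the path is illustrative):

    # launch the Spark 3 session with the standard warehouse location conf
    ./bin/spark-sql --conf spark.sql.warehouse.dir=/tmp/iceberg-warehouse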
+1 (binding)
Verified sigs/sums/license/build/test
I did have an issue with the test metastore for the spark3 tests on the
first run, but couldn't replicate it in subsequent tests.
-Dan
On Fri, Jul 10, 2020 at 10:42 AM Ryan Blue wrote:
> +1 (binding)
>
> Verified checksums, ran tests, staged convenience binaries.
+1 (binding)
Verified checksums, ran tests, staged convenience binaries.
I also ran a few tests using Spark 3.0.0 and Spark 2.4.5 with the runtime
jars. For anyone who would like to use spark-sql or spark-shell, here are
the commands that I used:
~/Apps/spark-3.0.0-bin-hadoop2.7/bin/spark-sql \
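The tail of the command is cut off in the archive. A sketch of a full
invocation against the staged artifacts might look like the following (the
staging repository URL placeholder and the hadoop_prod catalog name are
assumptions, not the exact flags used):

    ~/Apps/spark-3.0.0-bin-hadoop2.7/bin/spark-sql \
      --repositories <staging-repo-url> \
      --packages org.apache.iceberg:iceberg-spark3-runtime:0.9.0 \
      --conf spark.sql.catalog.hadoop_prod=org.apache.iceberg.spark.SparkCatalog \
      --conf spark.sql.catalog.hadoop_prod.type=hadoop \
      --conf spark.sql.catalog.hadoop_prod.warehouse=/tmp/iceberg-warehouse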
1. Verify the signature: OK
2. Verify the checksum: OK
3. Untar the archive tarball: OK
4. Run RAT checks to validate license headers: RAT checks passed
5. Build and test the project: all unit tests passed.
+1 (non-binding)
I did see that my build took >12 minutes and used 100% of all 8 cores.
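Concretely, the untar-and-build portion of those steps is just (tarball name
assumed from the RC directory):

    tar xzf apache-iceberg-0.9.0.tar.gz
    cd apache-iceberg-0.9.0
    ./gradlew build   # compiles and runs the unit tests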
I followed the verify guide here (
https://lists.apache.org/thread.html/rd5e6b1656ac80252a9a7d473b36b6227da91d07d86d4ba4bee10df66%40%3Cdev.iceberg.apache.org%3E)
:
1. Verify the signature: OK
2. Verify the checksum: OK
3. Untar the archive tarball: OK
4. Run RAT checks to validate license headers:
Hi everyone,
I propose the following RC to be released as the official Apache Iceberg
0.9.0 release.
The commit id is 4e66b4c10603e762129bc398146e02d21689e6dd
* This corresponds to the tag: apache-iceberg-0.9.0-rc5
* https://github.com/apache/iceberg/commits/apache-iceberg-0.9.0-rc5
* https://git