BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1037241618
Hi team, wondering if anyone was able to corroborate (or refute) my above
statement.
Also wondering if this issue could be re-opened?
Thanks!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1022077497
Hi team, I only tested with Hive yesterday and it worked. However, I am
testing with Presto `0.247` now and I still cannot read the table correctly.
I did some research and found out
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1021327116
I found the problem. The JAR in my Hive aux path was outdated. I replaced it
with the latest version and all was good. Sorry for the inconvenience, and
thank you so much for supporting
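For reference, the "Hive aux path" mentioned above is usually configured via the `hive.aux.jars.path` property. A minimal sketch of pointing it at an updated Hudi bundle follows; the file path and bundle version are illustrative, not taken from this thread:

```
<!-- hive-site.xml: make the Hudi input formats visible to Hive.
     Path and version are illustrative; match the bundle to your Hudi release. -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/lib/hive/aux/hudi-hadoop-mr-bundle-0.10.0.jar</value>
</property>
```

A stale bundle here is a common cause of Hive failing to read newer Hudi timeline actions, which matches the symptom described in this comment.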
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1021303523
Hi again,
Still trying to understand the issue :)
Here's the `CREATE TABLE` statement I run using `spark-sql`:
```
CREATE TABLE IF NOT EXISTS
```
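The statement above is cut off in the archive. A minimal Hudi table definition in `spark-sql` might look like the following sketch; the table name, columns, and property values are illustrative, not the original ones:

```
-- Illustrative only: a copy-on-write Hudi table defined via spark-sql.
CREATE TABLE IF NOT EXISTS hudi_table (
  id   BIGINT,
  name STRING,
  ts   BIGINT
) USING hudi
TBLPROPERTIES (
  type = 'cow',
  primaryKey = 'id',
  preCombineField = 'ts'
);
```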
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1018636053
@nsivabalan I'm not specifying any input format when I query using Presto or
Hive. I followed [these
steps](https://hudi.apache.org/docs/query_engine_setup#hive) initially for
Presto
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1014468802
Hi @codope , sorry for the late reply...
I have tested with `master` and I'm still facing the same issue as above.
I believe Presto/Hive do not recognize `*.replacecommit` files and I a
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1004806239
I'm facing this issue using Hudi `0.9.0` and `0.10.0`. I'll try with the
latest master.
Thank you for following up on this!
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1004675484
@codope Hive is automatically synced. It works correctly for inserts,
updates, deletes. I followed [these
steps](https://hudi.apache.org/docs/query_engine_setup#hive) initially for
Pr
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-1004148341
Hi everyone, sorry for the late reply as I was on holidays.
I tried again after @YannByron 's comment but this time I tried reading the
files using Scala (`spark.read.format("hud
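The Scala snippet is truncated in the archive; the direct read being described presumably resembles the following sketch (the base path is illustrative, and a SparkSession with the Hudi bundle on the classpath is assumed):

```
// Illustrative sketch: snapshot-query a Hudi table from its base path.
val df = spark.read
  .format("hudi")
  .load("/path/to/hudi_table")  // table base path (illustrative)

df.show()
```

Reading through `spark.read.format("hudi")` goes directly against the table's timeline, which is a useful way to check whether the data itself is correct when Hive/Presto return stale results.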
BenjMaq commented on issue #4154:
URL: https://github.com/apache/hudi/issues/4154#issuecomment-994623507
@YannByron I use the same Spark and Hudi as you and I still have this issue
in `0.10.0`.
I see the parquet files being correctly created but the table still points
to the old files