Unsubscribe
On 11/15/2023 13:15, Ajantha Bhat wrote:
+1
Thanks,
Ajantha
On Wed, Nov 15, 2023 at 10:42 AM Jean-Baptiste Onofré wrote:
Hi guys,
Avro 1.11.3 has been released, fixing CVE-2023-39410.
We already updated to Avro 1.11.3 on main.
Regarding CVEs, we also already use Guava 32.1.3, which fixes another CVE.
A further note:
In Hive, I can mv the data (pt=today-60) from HDFS to OSS and then point the
partition at the OSS directory in the HMS.
But in Iceberg, what should I do?
liubo07199
liubo07...@hellobike.com
On 08/23/2021 19:27, 1 wrote:
Hi:
I have a table partitioned by day, and the historical data of this table needs
to be stored on OSS. That is to say:
Table A is partitioned by day and stored on HDFS. Every day, the data for
partition = today-60 is transferred to OSS.
Data with partition > today-60 stays on HDFS, and data with partition <= today-60
is stored on OSS.
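There is no direct equivalent of Hive's per-partition SET LOCATION in Iceberg,
since data file paths are recorded absolutely in table metadata. Below is a rough
sketch of one conceivable approach, under the assumption that the partition's
files were already copied to OSS out of band and only the metadata entries need
swapping; the table location, URIs, and partition value are all placeholders:

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.DataFiles;
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.Table;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;

public class MovePartitionToOss {
  // Assumes the files of pt=2021-06-01 were already copied to OSS at a
  // mirrored path; only the metadata entries are swapped here.
  static void repointPartition(Table table) throws IOException {
    Set<DataFile> toDelete = new HashSet<>();
    Set<DataFile> toAdd = new HashSet<>();
    try (CloseableIterable<FileScanTask> tasks =
        table.newScan().filter(Expressions.equal("pt", "2021-06-01")).planFiles()) {
      for (FileScanTask task : tasks) {
        DataFile hdfsFile = task.file();
        String ossPath = hdfsFile.path().toString()
            .replace("hdfs://nn:8020/warehouse", "oss://bucket/warehouse");
        toDelete.add(hdfsFile);
        toAdd.add(DataFiles.builder(table.spec())
            .copy(hdfsFile)      // keep partition tuple, stats, and sizes
            .withPath(ossPath)   // point the entry at the copied OSS object
            .build());
      }
    }
    // Replace the HDFS entries with the OSS ones in a single commit.
    table.newRewrite().rewriteFiles(toDelete, toAdd).commit();
  }
}

The RewriteFiles commit is atomic, so readers see either the HDFS entries or the
OSS entries, never a mix.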
Hi, all:
I need to embed the Iceberg table, which is treated as a real-time table, into
our workflow. That is to say, Flink writes data into the Iceberg table in
real time, and I need something to indicate data completeness on the ingestion
path so that downstream batch consumer jobs can be triggered.
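There is no built-in completeness signal; one minimal pattern, sketched here
under the assumption that commit time tracks event time closely enough, is for
the downstream scheduler to poll the table until a snapshot committed after the
day boundary appears:

import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

public class CompletenessCheck {
  // A polling sketch, not an Iceberg feature: treat the day as complete once
  // a commit newer than the day boundary is visible.
  static boolean dayIsComplete(Table table, long dayEndMillis) {
    table.refresh();                       // pick up the latest Flink commits
    Snapshot current = table.currentSnapshot();
    return current != null && current.timestampMillis() >= dayEndMillis;
  }
}

Pipelines that need event-time guarantees usually carry an explicit watermark,
for example in the snapshot summary, instead of relying on commit timestamps.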
Hi, all:
I want to work on the Iceberg code in IDEA, so is there a code style?
Thx
liubo07199
The error screenshot cannot be uploaded; the Hive error is:
FAILED: ValidationFailureSemanticException table is not partitioned but partition spec exists: {pt=xxx}
liubo07199
liubo07...@hellobike.com
On 05/10/2021 16:07, 1 wrote:
Hi, team:
When I migrate tables from Hive to Iceberg using Spark 3, the partition info in
the DDL is hidden.
When I run insert overwrite table xxx partition (pt='xxx') on Flink or the
Spark SQL shell, it's OK, but when I run it on the Hive SQL shell, I get an
error like the one below:
So what can I do?
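For context on the error above: Iceberg partitioning is hidden, so engines
should not be given a static PARTITION spec; the partition value travels in the
rows themselves. A hedged sketch of the same overwrite with the clause dropped,
driven from Spark where it already works (table and column names are made up):

import org.apache.spark.sql.SparkSession;

public class OverwriteWithoutPartitionClause {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("iceberg-overwrite-sketch")
        .getOrCreate();
    // No PARTITION (pt='...') clause: Iceberg derives the partition from each
    // row's pt column, so the static spec that Hive rejects is not needed.
    spark.sql("INSERT OVERWRITE TABLE db.events "
        + "SELECT * FROM db.staging_events WHERE pt = '2021-05-10'");
  }
}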
Hi:
Issue: https://github.com/apache/iceberg/issues/2567
hive : 2.3.8
iceberg : 0.11.0
hadoop : 2.3.7
I have two tables: table A is in Iceberg format and table B is in plain
textfile format.
Table A is like:
ROW FORMAT SERDE
'org.apache.iceberg.mr.hive.HiveIcebergSerDe'
STORED BY
'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
Hi, all:
We have historical data in Hive, so how do we import this historical data into
Iceberg?
The amount of data is very large, so is there any way to build only the
metadata files?
Thx
liubo07199
liubo07...@hellobike.com
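One hedged sketch for the metadata-only question above, assuming Spark 3 with
the Iceberg runtime on the classpath and a Hive-backed catalog named
spark_catalog: the snapshot and migrate procedures write only Iceberg metadata
over the existing Hive files, without copying data.

import org.apache.spark.sql.SparkSession;

public class MigrateHiveTable {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("hive-to-iceberg-sketch")
        .enableHiveSupport()
        .getOrCreate();
    // Non-destructive trial into a separate table; only metadata is written.
    spark.sql("CALL spark_catalog.system.snapshot('db.history', 'db.history_iceberg')");
    // In-place conversion once the snapshot looks right.
    spark.sql("CALL spark_catalog.system.migrate('db.history')");
  }
}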
On Sun, Feb 7, 2021 at 12:38 AM 1 wrote:
Hi, All:
I'm very happy about the 0.11 release, that's great!
I tested the new features immediately, like Flink CDC, Flink rewriteDataFiles,
and Flink streaming read.
But when I wrote a row-level delete, something went badly.
I tested the Flink sink
(https://github.com/apache/iceberg/blob/master/flink/src/test/java/org/apache/iceberg/flink/sink/TestFlinkIcebergSinkV2.java)
and Flink rewriteDataFiles on Iceberg 0.11. Everything is OK when appending
records (+I,1,aaa,20210128), but when I write a row-level delete file by
id (-D,1,20210128), rewriteDataFiles throws an exception.
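For reference, a minimal sketch of the 0.11-era Flink rewrite action mentioned
above; the Hadoop table location is a placeholder:

import org.apache.iceberg.Table;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.actions.Actions;

public class RewriteSmallFiles {
  public static void main(String[] args) {
    // Load the table the same way the Flink sink does; the path is made up.
    TableLoader loader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/db/t");
    loader.open();
    Table table = loader.loadTable();
    // Compact small data files; on a v2 table the rewrite must also account
    // for the equality-delete files produced by the -D rows above.
    Actions.forTable(table)
        .rewriteDataFiles()
        .execute();
  }
}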
On 12/28/2020 10:35, OpenInx wrote:
You edited the v1.metadata.json to get Iceberg format v2? That's not the
correct way to use Iceberg format v2. Let's discuss this issue in the latest
email.
On Sat, Dec 26, 2020 at 7:01 PM 1 wrote:
Hi, all:
I edited the v1.metadata.json with vim, but the spec says the format version is
currently always 1, and implementations must throw an exception if a table's
version is higher than the supported version. So what can I do to test
row-level deletion?
So what can I do to try row-level deletes? How do I create a v2 table?
thx
The code is:
private static void deleteRead() throws
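Rather than hand-editing metadata JSON, a table can be created as v2 directly.
A minimal sketch with a HadoopCatalog, assuming a release where the
"format-version" table property is honored at create time; the warehouse path
and schema are placeholders:

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.types.Types;

public class CreateV2Table {
  public static void main(String[] args) {
    HadoopCatalog catalog =
        new HadoopCatalog(new Configuration(), "hdfs://nn:8020/warehouse");
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.IntegerType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));
    // "format-version" = "2" opts into the v2 spec needed for row-level deletes.
    Table table = catalog.createTable(
        TableIdentifier.of("db", "t_v2"),
        schema,
        PartitionSpec.unpartitioned(),
        Map.of("format-version", "2"));
  }
}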
Hi, all:
I edited the v1.metadata.json with vim, so the old .v1.metadata.json.crc no
longer matches. How can I generate a new .v1.metadata.json.crc for the new
v1.metadata.json?
Thx
liubo07199
liubo07...@hellobike.com
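As OpenInx says above, hand-editing metadata is not the supported path. For the
narrow CRC question, though: the .crc file comes from Hadoop's
ChecksumFileSystem, so rewriting the edited file through the checksummed
filesystem regenerates a matching checksum. A sketch under that assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class RegenerateCrc {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    LocalFileSystem fs = FileSystem.getLocal(conf);
    Path meta = new Path(args[0]);         // e.g. .../metadata/v1.metadata.json
    Path tmp = meta.suffix(".tmp");

    fs.setVerifyChecksum(false);           // ignore the stale .crc while reading
    try (FSDataInputStream in = fs.open(meta);
         FSDataOutputStream out = fs.create(tmp, true)) {
      IOUtils.copyBytes(in, out, 4096, false); // create() emits a fresh .crc
    }
    fs.setVerifyChecksum(true);
    fs.delete(meta, false);                // also removes the stale .crc
    fs.rename(tmp, meta);                  // the new .crc is renamed alongside
  }
}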