[jira] [Updated] (FLINK-27652) CompactManager.Rewriter cannot handle different partition keys invoked compaction

2022-05-16 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27652:
--
Description: 
h3. Issue Description
When {{commit.force-compact}} is enabled for a partitioned managed table, there is a chance that successive synchronized writes fail. The current implementation of {{CompactManager.Rewriter}} is an anonymous class in {{FileStoreWriteImpl}}, but the {{partition}} and {{bucket}} it uses are referenced as local variables; as a result, the {{rewrite}} method may pair the wrong data file with the wrong {{partition}} and {{bucket}}.
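
For illustration only, here is a minimal sketch (hypothetical names, not the actual {{FileStoreWriteImpl}} code) of how a rewriter that lazily reads shared mutable state can end up compacting files under the wrong partition:

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of the stale-capture bug; all names are hypothetical. */
public class RewriterCaptureSketch {

    interface Rewriter {
        void rewrite();
    }

    public static void main(String[] args) {
        Map<String, Rewriter> rewriters = new HashMap<>();
        // A mutable holder shared across iterations, mimicking state that the
        // anonymous Rewriter reads only when rewrite() finally runs.
        String[] current = new String[1];
        for (String partition : new String[] {"Winter", "Autumn"}) {
            current[0] = partition;
            rewriters.put(
                    partition,
                    () -> System.out.println("compacting files under partition " + current[0]));
        }
        // Both rewriters now observe the last value, 'Autumn': the 'Winter'
        // compaction looks for its data files in the wrong directory, which
        // surfaces as the FileNotFoundException below.
        rewriters.get("Winter").rewrite(); // prints 'Autumn', not 'Winter'
    }
}
{code}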

h3. Root Cause
{code:java}
Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.FileNotFoundException: File file:/var/folders/xd/9dp1y4vd3h56kjkvdk426l50gn/T/junit5920507275110651781/junit4163667468681653619/default_catalog.catalog/default_database.db/T1/f1=Autumn/bucket-0/data-59826283-c5d1-4344-96ae-2203d4e60a57-0 does not exist or the user running Flink ('jane.cjm') has insufficient permissions to access it.
at org.apache.flink.table.store.connector.sink.StoreSinkWriter.prepareCommit(StoreSinkWriter.java:172)
{code}
However, data-59826283-c5d1-4344-96ae-2203d4e60a57-0 does not belong to partition Autumn. It seems the rewriter matched the wrong data file to the wrong partition/bucket.

h3. How to Reproduce
{code:java}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.flink.table.store.connector;

import org.junit.Test;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutionException;

/** A reproducible case. */
public class ForceCompactionITCase extends FileStoreTableITCase {

    @Override
    protected List<String> ddl() {
        return Collections.singletonList(
                "CREATE TABLE IF NOT EXISTS T1 ("
                        + "f0 INT, f1 STRING, f2 STRING) PARTITIONED BY (f1)");
    }

    @Test
    public void test() throws ExecutionException, InterruptedException {
        bEnv.executeSql("ALTER TABLE T1 SET ('num-levels' = '3')");
        bEnv.executeSql("ALTER TABLE T1 SET ('commit.force-compact' = 'true')");
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(1, 'Winter', 'Winter is Coming')"
                                + ",(2, 'Winter', 'The First Snowflake'), "
                                + "(2, 'Spring', 'The First Rose in Spring'), "
                                + "(7, 'Summer', 'Summertime Sadness')")
                .await();
        bEnv.executeSql("INSERT INTO T1 VALUES(12, 'Winter', 'Last Christmas')").await();
        bEnv.executeSql("INSERT INTO T1 VALUES(11, 'Winter', 'Winter is Coming')").await();
        bEnv.executeSql("INSERT INTO T1 VALUES(10, 'Autumn', 'Refrain')").await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(6, 'Summer', 'Watermelon Sugar'), "
                                + "(4, 'Spring', 'Spring Water')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(66, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(666, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        // the f0 literal below is illegible in the archive; 6666 is assumed
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(6666, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(6, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(66, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                "INSERT INTO T1

[jira] [Commented] (FLINK-27652) CompactManager.Rewriter cannot handle different partition keys invoked compaction

2022-05-16 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17537593#comment-17537593
 ] 

Jane Chan commented on FLINK-27652:
---

h3. Full Stacktrace
{code:java}
org.apache.flink.table.store.connector.ForceCompactionITCase.test(ForceCompactionITCase.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
at org.apache.flink.table.api.internal.InsertResultProvider.hasNext(InsertResultProvider.java:85)
at org.apache.flink.table.api.internal.InsertResultProvider.isFirstRowReady(InsertResultProvider.java:71)
at org.apache.flink.table.api.internal.TableResultImpl.lambda$awaitInternal$1(TableResultImpl.java:105)
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.flink.table.api.internal.InsertResultProvider.hasNext(InsertResultProvider.java:83)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:259)
at

[jira] [Created] (FLINK-27652) CompactManager.Rewriter cannot handle different partition keys invoked compaction

2022-05-16 Thread Jane Chan (Jira)
Jane Chan created FLINK-27652:
-

 Summary: CompactManager.Rewriter cannot handle different partition 
keys invoked compaction
 Key: FLINK-27652
 URL: https://issues.apache.org/jira/browse/FLINK-27652
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0


h3. Issue Description
When {{commit.force-compact}} is enabled for a partitioned managed table, there is a chance that successive synchronized writes fail. The root cause is as follows.

h3. Root Cause
{code:java}
Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.FileNotFoundException: File file:/var/folders/xd/9dp1y4vd3h56kjkvdk426l50gn/T/junit5920507275110651781/junit4163667468681653619/default_catalog.catalog/default_database.db/T1/f1=Autumn/bucket-0/data-59826283-c5d1-4344-96ae-2203d4e60a57-0 does not exist or the user running Flink ('jane.cjm') has insufficient permissions to access it.
at org.apache.flink.table.store.connector.sink.StoreSinkWriter.prepareCommit(StoreSinkWriter.java:172)
{code}
However, data-59826283-c5d1-4344-96ae-2203d4e60a57-0 does not belong to partition Autumn. It seems the rewriter matched the wrong data file to the wrong partition/bucket.

h3. How to Reproduce
{code:java}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.flink.table.store.connector;

import org.junit.Test;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutionException;

/** A reproducible case. */
public class ForceCompactionITCase extends FileStoreTableITCase {

    @Override
    protected List<String> ddl() {
        return Collections.singletonList(
                "CREATE TABLE IF NOT EXISTS T1 ("
                        + "f0 INT, f1 STRING, f2 STRING) PARTITIONED BY (f1)");
    }

    @Test
    public void test() throws ExecutionException, InterruptedException {
        bEnv.executeSql("ALTER TABLE T1 SET ('num-levels' = '3')");
        bEnv.executeSql("ALTER TABLE T1 SET ('commit.force-compact' = 'true')");
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(1, 'Winter', 'Winter is Coming')"
                                + ",(2, 'Winter', 'The First Snowflake'), "
                                + "(2, 'Spring', 'The First Rose in Spring'), "
                                + "(7, 'Summer', 'Summertime Sadness')")
                .await();
        bEnv.executeSql("INSERT INTO T1 VALUES(12, 'Winter', 'Last Christmas')").await();
        bEnv.executeSql("INSERT INTO T1 VALUES(11, 'Winter', 'Winter is Coming')").await();
        bEnv.executeSql("INSERT INTO T1 VALUES(10, 'Autumn', 'Refrain')").await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(6, 'Summer', 'Watermelon Sugar'), "
                                + "(4, 'Spring', 'Spring Water')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(66, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(666, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        // the f0 literal below is illegible in the archive; 6666 is assumed
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(6666, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(6, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                        "INSERT INTO T1 VALUES(66, 'Summer', 'Summer Vibe'),"
                                + " (9, 'Autumn', 'Wake Me Up When September Ends')")
                .await();
        bEnv.executeSql(
                "INSERT INTO T1 VALUES(666,

[jira] [Updated] (FLINK-27558) Introduce a new optional option for TableStoreFactory to represent planned manifest entries

2022-05-12 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27558:
--
Description: 
When 
{code:java}
TableStoreFactory.onCompactTable
{code}
gets called, the planned manifest entries need to be injected back into the enriched options, and a new option key is needed to represent them.

The explained plan should look like:

{code}
== Abstract Syntax Tree ==
LogicalSink(table=[default_catalog.default_database.T0], fields=[f0, f1, f2], hints=[[[OPTIONS options:{num-sorted-run.compaction-trigger=2, path=file:/var/folders/xd/9dp1y4vd3h56kjkvdk426l50gn/T/junit6150615541165154705/junit667583448612916332/, compaction.scanned-manifest={"snapshotId":4,"manifestEntries":{"__DEFAULT__":{"bucket-0":[{"fileName":"data-05f48089-3b58-4a29-8274-9105bfdce224-0","minKey":["f0=6","f1=Jane Eyre","f2=9.9"],"maxKey":["f0=6","f1=Jane Eyre","f2=9.9"]},{"fileName":"data-980d06b9-50c6-4540-ae01-d5c1ff688c4d-0","minKey":["f0=5","f1=Northanger Abby","f2=8.6"],"maxKey":["f0=5","f1=Northanger Abby","f2=8.6"]},{"fileName":"data-beadf34e-9918-4abd-a271-1dcd9733dfb7-0","minKey":["f0=3","f1=The Mansfield Park","f2=7.0"],"maxKey":["f0=4","f1=Sense and Sensibility","f2=9.0"]},{"fileName":"data-5e772406-6b80-4b2f-b6cc-20bf79a11369-0","minKey":["f0=1","f1=Pride and Prejudice","f2=9.0"],"maxKey":["f0=2","f1=Emma","f2=8.5"]}]]]])
+- LogicalTableScan(table=[[default_catalog, default_database, T0]], hints=[[[OPTIONS inheritPath:[] options:{num-sorted-run.compaction-trigger=2, path=file:/var/folders/xd/9dp1y4vd3h56kjkvdk426l50gn/T/junit6150615541165154705/junit667583448612916332/, compaction.scanned-manifest={"snapshotId":4,"manifestEntries":{"__DEFAULT__":{"bucket-0":[{"fileName":"data-05f48089-3b58-4a29-8274-9105bfdce224-0","minKey":["f0=6","f1=Jane Eyre","f2=9.9"],"maxKey":["f0=6","f1=Jane Eyre","f2=9.9"]},{"fileName":"data-980d06b9-50c6-4540-ae01-d5c1ff688c4d-0","minKey":["f0=5","f1=Northanger Abby","f2=8.6"],"maxKey":["f0=5","f1=Northanger Abby","f2=8.6"]},{"fileName":"data-beadf34e-9918-4abd-a271-1dcd9733dfb7-0","minKey":["f0=3","f1=The Mansfield Park","f2=7.0"],"maxKey":["f0=4","f1=Sense and Sensibility","f2=9.0"]},{"fileName":"data-5e772406-6b80-4b2f-b6cc-20bf79a11369-0","minKey":["f0=1","f1=Pride and Prejudice","f2=9.0"],"maxKey":["f0=2","f1=Emma","f2=8.5"]}]]]])
{code}
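
As a sketch of what this could look like (assuming Flink's {{ConfigOptions}} builder API; the key name is taken from the plan above, while the injection helper and its signature are hypothetical):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

/** Sketch: declare the new option and inject the serialized plan into the enriched options. */
public class ScannedManifestOptionSketch {

    // Key name as it appears in the explained plan above; no default value,
    // since the option is only present for manually triggered compaction.
    public static final ConfigOption<String> COMPACTION_SCANNED_MANIFEST =
            ConfigOptions.key("compaction.scanned-manifest")
                    .stringType()
                    .noDefaultValue()
                    .withDescription("JSON-serialized manifest entries planned at compile time.");

    // Hypothetical injection point: onCompactTable would return the enriched
    // options with the planned entries serialized under the new key.
    static Map<String, String> enrich(Map<String, String> options, String planJson) {
        Map<String, String> enriched = new HashMap<>(options);
        enriched.put(COMPACTION_SCANNED_MANIFEST.key(), planJson);
        return enriched;
    }
}
{code}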


  was:
When 
{code:java}
TableStoreFactory.onCompactTable
{code}
gets called, the planned manifest entries need to be injected back into the 
enriched options, and we need a new key to represent it.


> Introduce a new optional option for TableStoreFactory to represent planned 
> manifest entries
> ---
>
> Key: FLINK-27558
> URL: https://issues.apache.org/jira/browse/FLINK-27558
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> When 
> {code:java}
> TableStoreFactory.onCompactTable
> {code}
> gets called, the planned manifest entries need to be injected back into the 
> enriched options, and we need a new key to represent it.
> The explained plan should be like 
> {code}
> == Abstract Syntax Tree ==
> LogicalSink(table=[default_catalog.default_database.T0], fields=[f0, f1, f2], 
> hints=[[[OPTIONS options:{num-sorted-run.compaction-trigger=2, 
> path=file:/var/folders/xd/9dp1y4vd3h56kjkvdk426l50gn/T/junit6150615541165154705/junit667583448612916332/,
>  
> compaction.scanned-manifest={"snapshotId":4,"manifestEntries":{"__DEFAULT__":{"bucket-0":[{"fileName":"data-05f48089-3b58-4a29-8274-9105bfdce224-0","minKey":["f0=6","f1=Jane
>  Eyre","f2=9.9"],"maxKey":["f0=6","f1=Jane 
> Eyre","f2=9.9"]},{"fileName":"data-980d06b9-50c6-4540-ae01-d5c1ff688c4d-0","minKey":["f0=5","f1=Northanger
>  Abby","f2=8.6"],"maxKey":["f0=5","f1=Northanger 
> Abby","f2=8.6"]},{"fileName":"data-beadf34e-9918-4abd-a271-1dcd9733dfb7-0","minKey":["f0=3","f1=The
>  Mansfield Park","f2=7.0"],"maxKey":["f0=4","f1=Sense and 
> Sensibility","f2=9.0"]},{"fileName":"data-5e772406-6b80-4b2f-b6cc-20bf79a11369-0","minKey":["f0=1","f1=Pride
>  and Prejudice","f2=9.0"],"maxKey":["f0=2","f1=Emma","f2=8.5"]}]]]])
> +- LogicalTableScan(table=[[default_catalog, default_database, T0]], 
> hints=[[[OPTIONS inheritPath:[] options:{num-sorted-run.compaction-trigger=2, 
> path=file:/var/folders/xd/9dp1y4vd3h56kjkvdk426l50gn/T/junit6150615541165154705/junit667583448612916332/,
>  
> compaction.scanned-manifest={"snapshotId":4,"manifestEntries":{"__DEFAULT__":{"bucket-0":[{"fileName":"data-05f48089-3b58-4a29-8274-9105bfdce224-0","minKey":["f0=6","f1=Jane
>  

[jira] [Created] (FLINK-27558) Introduce a new optional option for TableStoreFactory to represent planned manifest entries

2022-05-09 Thread Jane Chan (Jira)
Jane Chan created FLINK-27558:
-

 Summary: Introduce a new optional option for TableStoreFactory to 
represent planned manifest entries
 Key: FLINK-27558
 URL: https://issues.apache.org/jira/browse/FLINK-27558
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0


When 
{code:java}
TableStoreFactory.onCompactTable
{code}
gets called, the planned manifest entries need to be injected back into the enriched options, and a new option key is needed to represent them.





[jira] [Created] (FLINK-27557) Create the empty writer for 'ALTER TABLE ... COMPACT'

2022-05-09 Thread Jane Chan (Jira)
Jane Chan created FLINK-27557:
-

 Summary: Create the empty writer for 'ALTER TABLE ... COMPACT'
 Key: FLINK-27557
 URL: https://issues.apache.org/jira/browse/FLINK-27557
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0


Currently, {{FileStoreWrite}} only creates an empty writer for the {{INSERT OVERWRITE}} clause. We should also create an empty writer for manual compaction.
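
A minimal sketch of the intended branching (all names hypothetical; the real factory method lives elsewhere):

{code:java}
/** Sketch (hypothetical names): manual compaction starts from an empty writer too. */
public class WriterFactorySketch {

    enum WriteMode { NORMAL, OVERWRITE, MANUAL_COMPACTION }

    interface RecordWriter {}

    static class EmptyWriter implements RecordWriter {}

    static class MergeTreeWriter implements RecordWriter {}

    static RecordWriter createWriter(WriteMode mode) {
        switch (mode) {
            case OVERWRITE: // existing behavior for INSERT OVERWRITE
            case MANUAL_COMPACTION: // proposed: ALTER TABLE ... COMPACT starts empty as well
                return new EmptyWriter();
            default:
                // normal writes restore the LSM tree from the existing data files
                return new MergeTreeWriter();
        }
    }
}
{code}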





[jira] [Updated] (FLINK-27526) Support scaling bucket number for FileStore

2022-05-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27526:
--
Description: 
Currently, TableStore does not support changing the bucket number (denoted by the config option {{table-storage.bucket}}) once the managed table is created. The reason is that the LSM tree is built at the bucket level, so writing with a different bucket number causes the same record to be hashed to another bucket and leads to data corruption. In release 0.1, TableStore detects this change and throws an exception when scanning. See FLINK-27316.
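
To see why a changed bucket number corrupts existing data, consider how a record is assigned to a bucket (illustrative only; the real assignment hashes the bucket key, but the modulo effect is the same):

{code:java}
/** Illustration: the same key maps to different buckets once the bucket number changes. */
public class BucketAssignmentSketch {

    static int bucket(Object key, int numBuckets) {
        // typical assignment: non-negative hash modulo the bucket count
        return Math.floorMod(key.hashCode(), numBuckets);
    }

    public static void main(String[] args) {
        String key = "user-42";
        System.out.println(bucket(key, 3)); // bucket under 'bucket' = '3'
        System.out.println(bucket(key, 5)); // very likely a different bucket under '5'
    }
}
{code}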

However, this is neither flexible nor user-friendly. To be more specific, if the bucket number remains unchanged, the number of files under each bucket grows fast as time passes, slowing down the scan that restores the LSM tree and ultimately hurting read and write latency.

In this ticket, we aim to support changing the bucket number via {{ALTER TABLE SET (...)}} and to utilize the compaction mechanism to provide a way for users to reorganize the existing data layout.

 
{code:sql}
-- alter catalog metadata
ALTER TABLE managed_table SET ('bucket' = 'bucket-num');

-- alter configuration at the session level; successive compact operations
-- will compact data with the new bucket number
SET 'table-store.compaction.rescale-bucket' = 'true';
ALTER TABLE managed_table PARTITION partition_spec[, PARTITION partition_spec, ...] COMPACT;

-- per-command setting
ALTER TABLE managed_table /*+ OPTIONS('compaction.rescale-bucket' = 'true') */
PARTITION partition_spec[, PARTITION partition_spec, ...] COMPACT;
{code}

  was:
Currently, TableStore does not support changing the number of the bucket 
(denoted by config option {{{}table-storage.bucket{}}}) once the managed table 
is created. The reason is that the LSM tree is built under bucket level, and 
thus writing with different bucket numbers will cause the same record to be 
hashed to another bucket and leads to data corruption. In the release-0.1, 
TableStore will detect this change and throw an exception when scanning. See 
FLINK-27316.

However, this is not flexible and user-friendly. To be more specific, if the 
bucket number remains unchanged, the number of files under each bucket will 
grow fast as time passes, slowing down the scan speed to restore the LSM tree 
and finally influencing read and write latency.

In this ticket, we aim to support changing the bucket number via {{ALTER TABLE 
SET (...)}} and utilize the compaction mechanism to provide a way for users to 
reorganize the existing data layout.

 
{code:sql}
-- alter catalog metadata
ALTER TABLE managed_table SET('bucket' = 'bucket-num');

-- alter configuration under session level, the successive compact operations 
will compact data by the new bucket number
SET 'table-store.compact-rescale' = 'true';
ALTER TABLE managed_table PARTITION partition_spec[, PARTITION partition_spec, 
...] COMPACT;

-- per command setting
ALTER TABLE managed_table /*+OPTIONS('compact-rescale' = 'true')*/ PARTITION 
partition_spec[, PARTITION partition_spec, ...] COMPACT; 
{code}


> Support scaling bucket number for FileStore
> ---
>
> Key: FLINK-27526
> URL: https://issues.apache.org/jira/browse/FLINK-27526
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Currently, TableStore does not support changing the number of the bucket 
> (denoted by config option {{{}table-storage.bucket{}}}) once the managed 
> table is created. The reason is that the LSM tree is built under bucket 
> level, and thus writing with different bucket numbers will cause the same 
> record to be hashed to another bucket and leads to data corruption. In the 
> release-0.1, TableStore will detect this change and throw an exception when 
> scanning. See FLINK-27316.
> However, this is not flexible and user-friendly. To be more specific, if the 
> bucket number remains unchanged, the number of files under each bucket will 
> grow fast as time passes, slowing down the scan speed to restore the LSM tree 
> and finally influencing read and write latency.
> In this ticket, we aim to support changing the bucket number via {{ALTER 
> TABLE SET (...)}} and utilize the compaction mechanism to provide a way for 
> users to reorganize the existing data layout.
>  
> {code:sql}
> -- alter catalog metadata
> ALTER TABLE managed_table SET('bucket' = 'bucket-num');
> -- alter configuration under session level, the successive compact operations 
> will compact data by the new bucket number
> SET 'table-store.compaction.rescale-bucket' = 'true';
> ALTER TABLE managed_table PARTITION 

[jira] [Updated] (FLINK-27528) Introduce a new configuration option 'compaction.rescale-bucket' for FileStore

2022-05-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27528:
--
Summary: Introduce a new configuration option 'compaction.rescale-bucket' 
for FileStore  (was: Introduce a new configuration option 
'compact.rescale-bucket' for FileStore)

> Introduce a new configuration option 'compaction.rescale-bucket' for FileStore
> --
>
> Key: FLINK-27528
> URL: https://issues.apache.org/jira/browse/FLINK-27528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> This config key controls the behavior for {{{}ALTER TABLE ... COMPACT{}}}.
> When {{compact.rescale-bucket}} is false, it indicates the compaction will 
> rewrite data according to the bucket number, which is read from manifest 
> meta. The commit will only add/delete files; o.w. it suggests the compaction 
> will read bucket number from catalog meta, and the commit will overwrite the 
> whole partition directory.





[jira] [Created] (FLINK-27540) Let FileStoreSource accept pre-planned manifest entries

2022-05-07 Thread Jane Chan (Jira)
Jane Chan created FLINK-27540:
-

 Summary: Let FileStoreSource accept pre-planned manifest entries
 Key: FLINK-27540
 URL: https://issues.apache.org/jira/browse/FLINK-27540
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0


When manual compaction is triggered, the manifest entries are already collected during the planning phase (to accelerate the compaction), so the source does not need to scan and plan again at runtime.
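
A sketch of the idea (types and signatures are hypothetical, not the actual FileStoreSource API):

{code:java}
import java.util.List;

/** Sketch (hypothetical names): reuse a compile-time plan instead of re-scanning at runtime. */
public class PrePlannedSourceSketch {

    interface ManifestEntry {}

    interface SnapshotScanner {
        List<ManifestEntry> plan();
    }

    static List<ManifestEntry> entriesFor(
            SnapshotScanner scanner, List<ManifestEntry> prePlanned) {
        // manual compaction already planned its entries at compile time; reuse them
        return prePlanned != null ? prePlanned : scanner.plan();
    }
}
{code}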





[jira] [Updated] (FLINK-27528) Introduce a new configuration option 'compact.rescale-bucket' for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27528:
--
Summary: Introduce a new configuration option 'compact.rescale-bucket' for 
FileStore  (was: Introduce a new configuration option key 
'compact.rescale-bucket' for FileStore)

> Introduce a new configuration option 'compact.rescale-bucket' for FileStore
> ---
>
> Key: FLINK-27528
> URL: https://issues.apache.org/jira/browse/FLINK-27528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> This config key controls the behavior for {{{}ALTER TABLE ... COMPACT{}}}.
> When {{compact.rescale-bucket}} is false, it indicates the compaction will 
> rewrite data according to the bucket number, which is read from manifest 
> meta. The commit will only add/delete files; o.w. it suggests the compaction 
> will read bucket number from catalog meta, and the commit will overwrite the 
> whole partition directory.





[jira] [Updated] (FLINK-27528) Introduce a new configuration option key 'compact.rescale-bucket' for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27528:
--
Description: 
This config key controls the behavior of {{ALTER TABLE ... COMPACT}}.

When {{compact.rescale-bucket}} is false, the compaction rewrites data according to the bucket number read from the manifest meta, and the commit only adds/deletes files; otherwise, the compaction reads the bucket number from the catalog meta, and the commit overwrites the whole partition directory.
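
The two code paths, as a sketch (hypothetical names; only meant to restate the semantics above):

{code:java}
/** Sketch (hypothetical names) of the two compaction paths behind the flag. */
public class RescaleBucketSketch {

    enum CommitKind { ADD_DELETE_FILES, OVERWRITE_PARTITION }

    static int resolveBucketNumber(
            boolean rescaleBucket, int manifestBucketNum, int catalogBucketNum) {
        // false: rewrite within the existing layout (bucket number from manifest meta);
        // true: rescale to the bucket number currently recorded in the catalog meta.
        return rescaleBucket ? catalogBucketNum : manifestBucketNum;
    }

    static CommitKind commitKind(boolean rescaleBucket) {
        // rescaling rewrites every bucket, so the whole partition directory is
        // overwritten; otherwise the commit only adds/deletes files.
        return rescaleBucket ? CommitKind.OVERWRITE_PARTITION : CommitKind.ADD_DELETE_FILES;
    }
}
{code}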

  was:This option will be added to FileStoreOptions to control compaction 
behavior


> Introduce a new configuration option key 'compact.rescale-bucket' for 
> FileStore
> ---
>
> Key: FLINK-27528
> URL: https://issues.apache.org/jira/browse/FLINK-27528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> This config key controls the behavior for {{{}ALTER TABLE ... COMPACT{}}}.
> When {{compact.rescale-bucket}} is false, it indicates the compaction will 
> rewrite data according to the bucket number, which is read from manifest 
> meta. The commit will only add/delete files; o.w. it suggests the compaction 
> will read bucket number from catalog meta, and the commit will overwrite the 
> whole partition directory.





[jira] [Updated] (FLINK-27528) Introduce a new configuration option key 'compact.rescale-bucket' for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27528:
--
Summary: Introduce a new configuration option key 'compact.rescale-bucket' 
for FileStore  (was: Introduce a new configuration option key 'compact-rescale')

> Introduce a new configuration option key 'compact.rescale-bucket' for 
> FileStore
> ---
>
> Key: FLINK-27528
> URL: https://issues.apache.org/jira/browse/FLINK-27528
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> This option will be added to FileStoreOptions to control compaction behavior





[jira] [Updated] (FLINK-27526) Support scaling bucket number for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27526:
--
Description: 
Currently, TableStore does not support changing the bucket number (denoted by the config option {{table-storage.bucket}}) once the managed table is created. The reason is that the LSM tree is built at the bucket level, so writing with a different bucket number causes the same record to be hashed to another bucket and leads to data corruption. In release 0.1, TableStore detects this change and throws an exception when scanning. See FLINK-27316.

However, this is neither flexible nor user-friendly. To be more specific, if the bucket number remains unchanged, the number of files under each bucket grows fast as time passes, slowing down the scan that restores the LSM tree and ultimately hurting read and write latency.

In this ticket, we aim to support changing the bucket number via {{ALTER TABLE SET (...)}} and to utilize the compaction mechanism to provide a way for users to reorganize the existing data layout.

{code:sql}
-- alter catalog metadata
ALTER TABLE managed_table SET ('bucket' = 'bucket-num');

-- alter configuration at the session level; successive compact operations
-- will compact data with the new bucket number
SET 'table-store.compact-rescale' = 'true';
ALTER TABLE managed_table PARTITION partition_spec[, PARTITION partition_spec, ...] COMPACT;

-- per-command setting
ALTER TABLE managed_table /*+ OPTIONS('compact-rescale' = 'true') */
PARTITION partition_spec[, PARTITION partition_spec, ...] COMPACT;
{code}

  was:
Currently, TableStore does not support changing the number of the bucket 
(denoted by config option {{{}table-storage.bucket{}}}) once the managed table 
is created. The reason is that the LSM tree is built under bucket level, and 
thus writing with different bucket numbers will cause the same record to be 
hashed to another bucket and leads to data corruption. In the release-0.1, 
TableStore will detect this change and throw an exception when scanning. See 
FLINK-27316.

However, this is not flexible and user-friendly. To be more specific, if the 
bucket number remains unchanged, the number of files under each bucket will 
grow fast as time passes, slowing down the scan speed to restore the LSM tree 
and finally influencing read and write latency.

In this ticket, we aim to support changing the bucket number via {{ALTER TABLE 
SET (...)}} and provide a way for users to reorganize the existing data layout.

 
{code:sql}
-- alter catalog metadata
ALTER TABLE managed_table SET('bucket' = 'bucket-num');

-- alter configuration under session level, the successive compact operations 
will compact data by the new bucket number
SET 'table-store.compact-rescale' = 'true';
ALTER TABLE managed_table PARTITION partition_spec[, PARTITION partition_spec, 
...] COMPACT;

-- per command setting
ALTER TABLE managed_table /*+OPTIONS('compact-rescale' = 'true')*/ PARTITION 
partition_spec[, PARTITION partition_spec, ...] COMPACT; 
{code}


> Support scaling bucket number for FileStore
> ---
>
> Key: FLINK-27526
> URL: https://issues.apache.org/jira/browse/FLINK-27526
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Currently, TableStore does not support changing the number of the bucket 
> (denoted by config option {{{}table-storage.bucket{}}}) once the managed 
> table is created. The reason is that the LSM tree is built under bucket 
> level, and thus writing with different bucket numbers will cause the same 
> record to be hashed to another bucket and leads to data corruption. In the 
> release-0.1, TableStore will detect this change and throw an exception when 
> scanning. See FLINK-27316.
> However, this is not flexible and user-friendly. To be more specific, if the 
> bucket number remains unchanged, the number of files under each bucket will 
> grow fast as time passes, slowing down the scan speed to restore the LSM tree 
> and finally influencing read and write latency.
> In this ticket, we aim to support changing the bucket number via {{ALTER 
> TABLE SET (...)}} and utilize the compaction mechanism to provide a way for 
> users to reorganize the existing data layout.
>  
> {code:sql}
> -- alter catalog metadata
> ALTER TABLE managed_table SET('bucket' = 'bucket-num');
> -- alter configuration under session level, the successive compact operations 
> will compact data by the new bucket number
> SET 'table-store.compact-rescale' = 'true';
> ALTER TABLE managed_table PARTITION partition_spec[, PARTITION 
> partition_spec, ...] COMPACT;
> -- per 

[jira] [Created] (FLINK-27528) Introduce a new configuration option key 'compact-rescale'

2022-05-06 Thread Jane Chan (Jira)
Jane Chan created FLINK-27528:
-

 Summary: Introduce a new configuration option key 'compact-rescale'
 Key: FLINK-27528
 URL: https://issues.apache.org/jira/browse/FLINK-27528
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0


This option will be added to FileStoreOptions to control the compaction behavior.





[jira] [Updated] (FLINK-27526) Support scaling bucket number for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27526:
--
Description: 
Currently, TableStore does not support changing the bucket number (denoted by the config option {{table-storage.bucket}}) once the managed table is created. The reason is that the LSM tree is built at the bucket level, so writing with a different bucket number causes the same record to be hashed to another bucket and leads to data corruption. In release 0.1, TableStore detects this change and throws an exception when scanning. See FLINK-27316.

However, this is neither flexible nor user-friendly. To be more specific, if the bucket number remains unchanged, the number of files under each bucket grows fast as time passes, slowing down the scan that restores the LSM tree and ultimately hurting read and write latency.

In this ticket, we aim to support changing the bucket number via {{ALTER TABLE SET (...)}} and provide a way for users to reorganize the existing data layout.

{code:sql}
-- alter catalog metadata
ALTER TABLE managed_table SET ('bucket' = 'bucket-num');

-- alter configuration at the session level; successive compact operations
-- will compact data with the new bucket number
SET 'table-store.compact-rescale' = 'true';
ALTER TABLE managed_table PARTITION partition_spec[, PARTITION partition_spec, ...] COMPACT;

-- per-command setting
ALTER TABLE managed_table /*+ OPTIONS('compact-rescale' = 'true') */
PARTITION partition_spec[, PARTITION partition_spec, ...] COMPACT;
{code}

  was:
Currently, TableStore does not support changing the number of the bucket 
(denoted by config option {{table-storage.bucket}}) once the managed table is 
created. The reason is that the LSM tree is built under bucket level, and thus 
writing with different bucket numbers will cause the same record to be hashed 
to another bucket and leads to data corruption. In the release-0.1, TableStore 
will detect this change and throw an exception when scanning. See FLINK-27316.

However, this is not flexible and user-friendly. To be more specific, if the 
bucket number remains unchanged, the number of files under each bucket will 
grow fast as time passes, slowing down the scan speed to restore the LSM tree 
and finally influencing read and write latency.

In this ticket, we aim to support changing the bucket number and provide a way 
for users to reorganize the existing data layout.


> Support scaling bucket number for FileStore
> ---
>
> Key: FLINK-27526
> URL: https://issues.apache.org/jira/browse/FLINK-27526
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Currently, TableStore does not support changing the number of the bucket 
> (denoted by config option {{{}table-storage.bucket{}}}) once the managed 
> table is created. The reason is that the LSM tree is built under bucket 
> level, and thus writing with different bucket numbers will cause the same 
> record to be hashed to another bucket and leads to data corruption. In the 
> release-0.1, TableStore will detect this change and throw an exception when 
> scanning. See FLINK-27316.
> However, this is not flexible and user-friendly. To be more specific, if the 
> bucket number remains unchanged, the number of files under each bucket will 
> grow fast as time passes, slowing down the scan speed to restore the LSM tree 
> and finally influencing read and write latency.
> In this ticket, we aim to support changing the bucket number via {{ALTER 
> TABLE SET (...)}} and provide a way for users to reorganize the existing data 
> layout.
>  
> {code:sql}
> -- alter catalog metadata
> ALTER TABLE managed_table SET('bucket' = 'bucket-num');
> -- alter configuration under session level, the successive compact operations 
> will compact data by the new bucket number
> SET 'table-store.compact-rescale' = 'true';
> ALTER TABLE managed_table PARTITION partition_spec[, PARTITION 
> partition_spec, ...] COMPACT;
> -- per command setting
> ALTER TABLE managed_table /*+OPTIONS('compact-rescale' = 'true')*/ PARTITION 
> partition_spec[, PARTITION partition_spec, ...] COMPACT; 
> {code}





[jira] [Updated] (FLINK-27526) Support scaling bucket number for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27526:
--
Description: 
Currently, TableStore does not support changing the bucket number (denoted by the config option {{table-storage.bucket}}) once the managed table is created. The reason is that the LSM tree is built at the bucket level, so writing with a different bucket number causes the same record to be hashed to another bucket and leads to data corruption. In release 0.1, TableStore detects this change and throws an exception when scanning. See FLINK-27316.

However, this is neither flexible nor user-friendly. To be more specific, if the bucket number remains unchanged, the number of files under each bucket grows fast as time passes, slowing down the scan that restores the LSM tree and ultimately hurting read and write latency.

In this ticket, we aim to support changing the bucket number and provide a way for users to reorganize the existing data layout.

  was:
Currently, TableStore does not support changing the number of the bucket 
(denoted by config option 'table-storage.bucket') once the managed table is 
created. The reason is that the LSM tree is built under bucket level, and thus 
writing with different bucket numbers will cause the same record to be hashed 
to another bucket and leads to data corruption. In the release-0.1, TableStore 
will detect this change and throw an exception when scanning. See FLINK-27316.

However, this is not flexible and user-friendly. To be more specific, if the 
bucket number remains unchanged, the number of files under each bucket will 
grow fast as time passes, slowing down the scan speed to restore the LSM tree 
and finally influencing read and write latency.


> Support scaling bucket number for FileStore
> ---
>
> Key: FLINK-27526
> URL: https://issues.apache.org/jira/browse/FLINK-27526
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Currently, TableStore does not support changing the number of the bucket 
> (denoted by config option {{table-storage.bucket}}) once the managed table is 
> created. The reason is that the LSM tree is built under bucket level, and 
> thus writing with different bucket numbers will cause the same record to be 
> hashed to another bucket and leads to data corruption. In the release-0.1, 
> TableStore will detect this change and throw an exception when scanning. See 
> FLINK-27316.
> However, this is not flexible and user-friendly. To be more specific, if the 
> bucket number remains unchanged, the number of files under each bucket will 
> grow fast as time passes, slowing down the scan speed to restore the LSM tree 
> and finally influencing read and write latency.
> In this ticket, we aim to support changing the bucket number and provide a 
> way for users to reorganize the existing data layout.





[jira] [Updated] (FLINK-27526) Support scaling bucket number for FileStore

2022-05-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27526:
--
Description: 
Currently, TableStore does not support changing the bucket number (denoted by the config option 'table-storage.bucket') once the managed table is created. The reason is that the LSM tree is built at the bucket level, so writing with a different bucket number causes the same record to be hashed to another bucket and leads to data corruption. In release 0.1, TableStore detects this change and throws an exception when scanning. See FLINK-27316.

However, this is neither flexible nor user-friendly. To be more specific, if the bucket number remains unchanged, the number of files under each bucket grows fast as time passes, slowing down the scan that restores the LSM tree and ultimately hurting read and write latency.

> Support scaling bucket number for FileStore
> ---
>
> Key: FLINK-27526
> URL: https://issues.apache.org/jira/browse/FLINK-27526
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Currently, TableStore does not support changing the number of the bucket 
> (denoted by config option 'table-storage.bucket') once the managed table is 
> created. The reason is that the LSM tree is built under bucket level, and 
> thus writing with different bucket numbers will cause the same record to be 
> hashed to another bucket and leads to data corruption. In the release-0.1, 
> TableStore will detect this change and throw an exception when scanning. See 
> FLINK-27316.
> However, this is not flexible and user-friendly. To be more specific, if the 
> bucket number remains unchanged, the number of files under each bucket will 
> grow fast as time passes, slowing down the scan speed to restore the LSM tree 
> and finally influencing read and write latency.





[jira] [Created] (FLINK-27526) Support scaling bucket number for FileStore

2022-05-06 Thread Jane Chan (Jira)
Jane Chan created FLINK-27526:
-

 Summary: Support scaling bucket number for FileStore
 Key: FLINK-27526
 URL: https://issues.apache.org/jira/browse/FLINK-27526
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0








[jira] [Updated] (FLINK-27515) Support 'ALTER TABLE ... COMPACT' in TableStore

2022-05-05 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27515:
--
Fix Version/s: table-store-0.2.0
Affects Version/s: table-store-0.2.0

> Support 'ALTER TABLE ... COMPACT' in TableStore
> ---
>
> Key: FLINK-27515
> URL: https://issues.apache.org/jira/browse/FLINK-27515
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.2.0
>
>
> Implement 'ALTER TABLE ... COMPACT'[1] in TableStore. This feature lays a 
> foundation for dynamic bucket scaling.
> [1] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage





[jira] [Updated] (FLINK-27515) Support 'ALTER TABLE ... COMPACT' in TableStore

2022-05-05 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27515:
--
Description: 
Implement 'ALTER TABLE ... COMPACT'[1] in TableStore. This feature lays a 
foundation for dynamic bucket scaling.

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage

  was:
Implement 'ALTER TABLE ... COMPACT'[1] in TableStore. 

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage


> Support 'ALTER TABLE ... COMPACT' in TableStore
> ---
>
> Key: FLINK-27515
> URL: https://issues.apache.org/jira/browse/FLINK-27515
> Project: Flink
>  Issue Type: New Feature
>  Components: Table Store
>Reporter: Jane Chan
>Priority: Major
>
> Implement 'ALTER TABLE ... COMPACT'[1] in TableStore. This feature lays a 
> foundation for dynamic bucket scaling.
> [1] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage





[jira] [Created] (FLINK-27515) Support 'ALTER TABLE ... COMPACT' in TableStore

2022-05-05 Thread Jane Chan (Jira)
Jane Chan created FLINK-27515:
-

 Summary: Support 'ALTER TABLE ... COMPACT' in TableStore
 Key: FLINK-27515
 URL: https://issues.apache.org/jira/browse/FLINK-27515
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Reporter: Jane Chan


Implement 'ALTER TABLE ... COMPACT'[1] in TableStore. 

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage





[jira] [Created] (FLINK-27361) Introduce retry mechanism for SnapshotFinder

2022-04-23 Thread Jane Chan (Jira)
Jane Chan created FLINK-27361:
-

 Summary: Introduce retry mechanism for SnapshotFinder
 Key: FLINK-27361
 URL: https://issues.apache.org/jira/browse/FLINK-27361
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan
 Fix For: table-store-0.1.0


When concurrent read/write tasks look up the earliest/latest snapshot, there is a chance of a FileNotFoundException, caused by FLINK-25453.
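
A minimal sketch of such a retry loop (hypothetical helper; the real SnapshotFinder API may differ):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.concurrent.Callable;

/** Sketch: tolerate a transient FileNotFoundException while a concurrent commit runs. */
public class RetryingSnapshotFinderSketch {

    static <T> T withRetry(Callable<T> lookup, int maxAttempts, long backoffMs) throws Exception {
        FileNotFoundException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return lookup.call();
            } catch (FileNotFoundException e) {
                last = e; // transient: a concurrent writer is mid-commit
                Thread.sleep(backoffMs);
            }
        }
        throw new IOException("Snapshot not found after " + maxAttempts + " attempts", last);
    }
}
{code}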





[jira] [Updated] (FLINK-27316) Prevent users from changing bucket number

2022-04-20 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27316:
--
Description: 
Before supporting this feature, we should throw a meaningful exception to prevent the data corruption caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
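
A sketch of the guard (hypothetical names; the real check would live in the write/scan path):

{code:java}
/** Sketch (hypothetical names): fail fast instead of silently corrupting data. */
public class BucketChangeGuardSketch {

    static void validateBucketUnchanged(int bucketInSnapshot, int bucketInDdl) {
        if (bucketInSnapshot != bucketInDdl) {
            throw new IllegalStateException(
                    String.format(
                            "Trying to write/read with bucket number %d, but existing data "
                                    + "was written with bucket number %d. Changing 'bucket' "
                                    + "is not supported yet.",
                            bucketInDdl, bucketInSnapshot));
        }
    }
}
{code}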
 
h3. How to reproduce
{code:sql}
-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
  'path' = '...',
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility'),
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1');

-- then write some record with '-D' as rowkind
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}

which will get wrong results

  was:
Before we support this feature, we should throw a meaningful exception to 
prevent data corruption which is caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
 
h3. How to reproduce
{code:sql}
-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
'path' = '...'
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility),
(2, 'Pride and Prejudice), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1')

-- then write some record with '-D' as rowkind
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}

which will get wrong results


> Prevent users from changing bucket number
> -
>
> Key: FLINK-27316
> URL: https://issues.apache.org/jira/browse/FLINK-27316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Before supporting this feature, we should throw a meaningful exception to 
> prevent data corruption which is caused by
> {code:sql}
>  ALTER TABLE ... SET ('bucket' = '...');
>  ALTER TABLE ... RESET ('bucket');{code}
>  
> h3. How to reproduce
> {code:sql}
> -- Suppose we defined a managed table like
> CREATE TABLE IF NOT EXISTS managed_table (
>   f0 INT,
>   f1 STRING) WITH (
> 'path' = '...'
> 'bucket' = '3');
> -- then write some data
> INSERT INTO managed_table
> VALUES (1, 'Sense and Sensibility),
> (2, 'Pride and Prejudice), 
> (3, 'Emma'), 
> (4, 'Mansfield Park'), 
> (5, 'Northanger Abbey'),
> (6, 'The Mad 

[jira] [Updated] (FLINK-27316) Prevent users from changing bucket number

2022-04-20 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27316:
--
Description: 
Before we support this feature, we should throw a meaningful exception to prevent the data corruption caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
 
h3. How to reproduce
{code:sql}
-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
  'path' = '...',
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility'),
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1');

-- then write some record with '-D' as rowkind
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}

which will return wrong results

  was:
Before we support this feature, we should throw a meaningful exception to 
prevent the data corruption caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
 
h3. How to reproduce
{code:sql}
-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
'path' = '...',
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility'),
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1');

-- then write some records with '-D' as rowkind
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}
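Until rescaling is actually supported, the guard can be a single comparison at the point where table options are updated. A hypothetical sketch of the meaningful exception this ticket asks for (the hook name, the default bucket value, and the message are illustrative, not the actual codebase):

{code:java}
import java.util.Map;

// Hypothetical validation hook; names and message are illustrative only.
final class BucketChangeGuard {

    static void validateBucketUnchanged(Map<String, String> oldOptions,
                                        Map<String, String> newOptions) {
        // assumes 'bucket' defaults to "1" when unset
        String oldBucket = oldOptions.getOrDefault("bucket", "1");
        String newBucket = newOptions.getOrDefault("bucket", "1");
        if (!oldBucket.equals(newBucket)) {
            throw new UnsupportedOperationException(
                    "Changing the bucket number (from " + oldBucket + " to "
                            + newBucket + ") is not supported yet: existing data "
                            + "files are distributed by the old bucket number "
                            + "and would be resolved against the wrong buckets.");
        }
    }
}
{code}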


> Prevent users from changing bucket number
> -
>
> Key: FLINK-27316
> URL: https://issues.apache.org/jira/browse/FLINK-27316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Before we support this feature, we should throw a meaningful exception to 
> prevent the data corruption caused by
> {code:sql}
>  ALTER TABLE ... SET ('bucket' = '...');
>  ALTER TABLE ... RESET ('bucket');{code}
>  
> h3. How to reproduce
> {code:sql}
> -- Suppose we defined a managed table like
> CREATE TABLE IF NOT EXISTS managed_table (
>   f0 INT,
>   f1 STRING) WITH (
> 'path' = '...',
> 'bucket' = '3');
> -- then write some data
> INSERT INTO managed_table
> VALUES (1, 'Sense and Sensibility'),
> (2, 'Pride and Prejudice'), 
> (3, 'Emma'), 
> (4, 'Mansfield Park'), 
> (5, 'Northanger Abbey'),
> (6, 'The Mad Woman in the Attic'),
> (7, 

[jira] [Updated] (FLINK-27316) Prevent users from changing bucket number

2022-04-20 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27316:
--
Description: 
Before we support this feature, we should throw a meaningful exception to 
prevent the data corruption caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
 
h3. How to reproduce
{code:sql}
-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
'path' = '...',
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility'),
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1');

-- then write some records with '-D' as rowkind
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}

  was:
Before we support this feature, we should throw a meaningful exception to 
prevent the data corruption caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
 
h3. How to reproduce
{code:sql}

-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
'path' = '...',
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility'),
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1');

-- then write some records with '-D' as changelog mode
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}


> Prevent users from changing bucket number
> -
>
> Key: FLINK-27316
> URL: https://issues.apache.org/jira/browse/FLINK-27316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Before we support this feature, we should throw a meaningful exception to 
> prevent the data corruption caused by
> {code:sql}
>  ALTER TABLE ... SET ('bucket' = '...');
>  ALTER TABLE ... RESET ('bucket');{code}
>  
> h3. How to reproduce
> {code:sql}
> -- Suppose we defined a managed table like
> CREATE TABLE IF NOT EXISTS managed_table (
>   f0 INT,
>   f1 STRING) WITH (
> 'path' = '...',
> 'bucket' = '3');
> -- then write some data
> INSERT INTO managed_table
> VALUES (1, 'Sense and Sensibility'),
> (2, 'Pride and Prejudice'), 
> (3, 'Emma'), 
> (4, 'Mansfield Park'), 
> (5, 'Northanger Abbey'),
> (6, 'The Mad Woman in the Attic'),
> (7, 'Little Woman');
> -- change bucket number
> ALTER 

[jira] [Created] (FLINK-27316) Prevent users from changing bucket number

2022-04-20 Thread Jane Chan (Jira)
Jane Chan created FLINK-27316:
-

 Summary: Prevent users from changing bucket number
 Key: FLINK-27316
 URL: https://issues.apache.org/jira/browse/FLINK-27316
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan
 Fix For: table-store-0.1.0


Before we support this feature, we should throw a meaningful exception to 
prevent the data corruption caused by
{code:sql}
 ALTER TABLE ... SET ('bucket' = '...');
 ALTER TABLE ... RESET ('bucket');{code}
 
h3. How to reproduce
{code:sql}

-- Suppose we defined a managed table like
CREATE TABLE IF NOT EXISTS managed_table (
  f0 INT,
  f1 STRING) WITH (
'path' = '...',
'bucket' = '3');

-- then write some data
INSERT INTO managed_table
VALUES (1, 'Sense and Sensibility'),
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'),
(6, 'The Mad Woman in the Attic'),
(7, 'Little Woman');

-- change bucket number
ALTER TABLE managed_table SET ('bucket' = '5');

-- write some data again
INSERT INTO managed_table 
VALUES (1, 'Sense and Sensibility'), 
(2, 'Pride and Prejudice'), 
(3, 'Emma'), 
(4, 'Mansfield Park'), 
(5, 'Northanger Abbey'), 
(6, 'The Mad Woman in the Attic'), 
(7, 'Little Woman'), 
(8, 'Jane Eyre');

-- change bucket number again
ALTER TABLE managed_table SET ('bucket' = '1');

-- then write some records with '-D' as changelog mode
-- E.g. changelogRow("-D", 7, "Little Woman"),
-- changelogRow("-D", 2, "Pride and Prejudice"), 
-- changelogRow("-D", 3, "Emma"), 
-- changelogRow("-D", 4, "Mansfield Park"), 
-- changelogRow("-D", 5, "Northanger Abbey"), 
-- changelogRow("-D", 6, "The Mad Woman in the Attic"), 
-- changelogRow("-D", 8, "Jane Eyre"), 
-- changelogRow("-D", 1, "Sense and Sensibility"), 
-- changelogRow("-D", 1, "Sense and Sensibility") 

CREATE TABLE helper_source (
  f0 INT, 
  f1 STRING) WITH (
  'connector' = 'values', 
  'data-id' = '${register-id}', 
  'bounded' = 'false', 
  'changelog-mode' = 'I,UA,UB,D'
); 

INSERT INTO managed_table SELECT * FROM helper_source;

-- then read the snapshot

SELECT * FROM managed_table
{code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27264) Add ITCase for concurrent batch overwrite and streaming insert

2022-04-15 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27264:
--
Description: 
!image-2022-04-15-19-26-09-649.png|width=609,height=241!

Add an IT case for the user story.
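A sketch of what the IT case could cover (the `bEnv`/`sEnv` environments, the DDL, and the unbounded source are assumptions for illustration, not the final test):

{code:java}
import org.apache.flink.table.api.TableResult;
import org.junit.Test;

// Hypothetical test body; bEnv (batch) and sEnv (streaming) are assumed
// TableEnvironments provided by the IT case base class.
@Test
public void testConcurrentOverwriteAndStreamingInsert() throws Exception {
    bEnv.executeSql(
            "CREATE TABLE IF NOT EXISTS T (f0 INT, f1 STRING) PARTITIONED BY (f1)");

    // Start an unbounded streaming insert into the table...
    TableResult streamingInsert =
            sEnv.executeSql("INSERT INTO T SELECT * FROM unbounded_source");

    // ...and concurrently overwrite a single partition from a batch job.
    bEnv.executeSql("INSERT OVERWRITE T PARTITION (f1 = 'Winter') VALUES (1)")
            .await();

    // The case should assert both commits succeed and neither clobbers the
    // other's snapshot, then cancel the streaming job.
    streamingInsert.getJobClient().ifPresent(client -> client.cancel());
}
{code}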

> Add ITCase for concurrent batch overwrite and streaming insert
> --
>
> Key: FLINK-27264
> URL: https://issues.apache.org/jira/browse/FLINK-27264
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
> Attachments: image-2022-04-15-19-26-09-649.png
>
>
> !image-2022-04-15-19-26-09-649.png|width=609,height=241!
> Add an IT case for the user story.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-27264) Add ITCase for concurrent batch overwrite and streaming insert

2022-04-15 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27264:
--
Attachment: image-2022-04-15-19-26-09-649.png

> Add ITCase for concurrent batch overwrite and streaming insert
> --
>
> Key: FLINK-27264
> URL: https://issues.apache.org/jira/browse/FLINK-27264
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
> Attachments: image-2022-04-15-19-26-09-649.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27264) Add ITCase for concurrent batch overwrite and streaming insert

2022-04-15 Thread Jane Chan (Jira)
Jane Chan created FLINK-27264:
-

 Summary: Add ITCase for concurrent batch overwrite and streaming 
insert
 Key: FLINK-27264
 URL: https://issues.apache.org/jira/browse/FLINK-27264
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan
 Fix For: table-store-0.1.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-27123) Introduce filter predicate for 'LIKE'

2022-04-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-27123:
--
Summary: Introduce filter predicate for 'LIKE'  (was: Introduce filter 
predicate for 'IN' and 'LIKE')

> Introduce filter predicate for 'LIKE'
> -
>
> Key: FLINK-27123
> URL: https://issues.apache.org/jira/browse/FLINK-27123
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27123) Introduce filter predicate for 'IN' and 'LIKE'

2022-04-07 Thread Jane Chan (Jira)
Jane Chan created FLINK-27123:
-

 Summary: Introduce filter predicate for 'IN' and 'LIKE'
 Key: FLINK-27123
 URL: https://issues.apache.org/jira/browse/FLINK-27123
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan
 Fix For: table-store-0.1.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26913) Add ITCase for computed column and watermark

2022-03-29 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17513969#comment-17513969
 ] 

Jane Chan commented on FLINK-26913:
---

Hi [~lzljs3620320], would you mind assigning this ticket to me:)

> Add ITCase for computed column and watermark
> 
>
> Key: FLINK-26913
> URL: https://issues.apache.org/jira/browse/FLINK-26913
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26879) LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work

2022-03-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26879:
--
Description: We should avoid creating a hybrid source when the 
latest/from_timestamp mode is enabled; otherwise we cannot consume the 
changelog from the log store only.  (was: We should avoid creating a hybrid 
source when the latest/from_timestamp mode is enabled; otherwise we cannot 
consume the changelog from the log store.)
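A sketch of the intended branching (the selector method is hypothetical, and the FULL default is an assumption; only the two problematic modes come from the ticket):

{code:java}
// Illustrative branching only; not the actual source factory.
enum LogStartupMode { FULL, LATEST, FROM_TIMESTAMP }

final class SourceSelector {

    static String chooseSource(LogStartupMode mode) {
        switch (mode) {
            case LATEST:
            case FROM_TIMESTAMP:
                // A hybrid (file + log) source would replay the file snapshot
                // first and ignore the requested starting position, so these
                // modes must read from the log store only.
                return "log-only source";
            default:
                return "hybrid (file + log) source";
        }
    }
}
{code}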

> LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work
> --
>
> Key: FLINK-26879
> URL: https://issues.apache.org/jira/browse/FLINK-26879
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> We should avoid creating a hybrid source when the latest/from_timestamp mode 
> is enabled; otherwise we cannot consume the changelog from the log store only.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26879) LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work

2022-03-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26879:
--
Description: We should avoid creating a hybrid source when the 
latest/from_timestamp mode is enabled; otherwise we cannot consume the 
changelog from the log store.  (was: We should avoid creating a hybrid source 
when the latest mode is enabled; otherwise we cannot consume the changelog 
from the log store.)

> LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work
> --
>
> Key: FLINK-26879
> URL: https://issues.apache.org/jira/browse/FLINK-26879
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> We should avoid creating a hybrid source when the latest/from_timestamp mode 
> is enabled; otherwise we cannot consume the changelog from the log store.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26879) LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work

2022-03-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26879:
--
Summary: LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work 
 (was: LogStartupMode.LATEST does not work)

> LogStartupMode.LATEST and LogStartupMode.FROM_TIMESTAMP don't work
> --
>
> Key: FLINK-26879
> URL: https://issues.apache.org/jira/browse/FLINK-26879
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> We should avoid creating a hybrid source when the latest mode is enabled; 
> otherwise we cannot consume the changelog from the log store.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26879) LogStartupMode.LATEST does not work

2022-03-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26879:
--
Description: We should avoid creating a hybrid source when the latest mode 
is enabled; otherwise we cannot consume the changelog from the log store.  
(was: When creating a hybrid source, we should check whether the latest mode 
is enabled; otherwise we cannot consume the changelog)

> LogStartupMode.LATEST does not work
> ---
>
> Key: FLINK-26879
> URL: https://issues.apache.org/jira/browse/FLINK-26879
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Blocker
> Fix For: table-store-0.1.0
>
>
> We should avoid creating a hybrid source when the latest mode is enabled; 
> otherwise we cannot consume the changelog from the log store.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26879) LogStartupMode.LATEST does not work

2022-03-28 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26879:
--
Description: When creating a hybrid source, we should check whether the latest 
mode is enabled; otherwise we cannot consume the changelog  (was: When creating 
a hybrid source, we should check whether the latest mode is enabled)

> LogStartupMode.LATEST does not work
> ---
>
> Key: FLINK-26879
> URL: https://issues.apache.org/jira/browse/FLINK-26879
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Blocker
> Fix For: table-store-0.1.0
>
>
> When creating a hybrid source, we should check whether the latest mode is 
> enabled; otherwise we cannot consume the changelog



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26879) LogStartupMode.LATEST does not work

2022-03-28 Thread Jane Chan (Jira)
Jane Chan created FLINK-26879:
-

 Summary: LogStartupMode.LATEST does not work
 Key: FLINK-26879
 URL: https://issues.apache.org/jira/browse/FLINK-26879
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan
 Fix For: table-store-0.1.0


When creating a hybrid source, we should check whether the latest mode is enabled



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26877) Introduce auto type inference for filter push down

2022-03-27 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26877:
--
Fix Version/s: table-store-0.1.0
   (was: 0.1.0)

> Introduce auto type inference for filter push down
> --
>
> Key: FLINK-26877
> URL: https://issues.apache.org/jira/browse/FLINK-26877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> Currently, the following query will fail when PredicateConverter generates 
> Literal because auto type inference is not supported
> {code:sql}
> CREATE TABLE managed_table (
>   f0 BIGINT,
>   f1 STRING
> );
> SELECT * FROM managed_table WHERE f0 > 100 {code}
> Stacktrace
> {code:java}
> Caused by: java.lang.RuntimeException: Failed to read ManifestEntry list 
> concurrently
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:212)
>     at 
> org.apache.flink.table.store.connector.source.FileStoreSource.restoreEnumerator(FileStoreSource.java:147)
>     at 
> org.apache.flink.table.store.connector.source.FileStoreSource.createEnumerator(FileStoreSource.java:117)
>     at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:197)
>     ... 33 more
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.ClassCastException
>     at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:210)
>     ... 36 more
> Caused by: java.lang.ClassCastException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
>     at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1005)
>     ... 37 more
> Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>     at java.lang.Integer.compareTo(Integer.java:52)
>     at 
> org.apache.flink.table.store.file.predicate.Literal.compareValueTo(Literal.java:59)
>     at 
> org.apache.flink.table.store.file.predicate.GreaterThan.test(GreaterThan.java:51)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.filterManifestEntry(FileStoreScanImpl.java:269)
>     at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>     at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>     at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
>     at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270)
>     at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>     at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>     at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>     at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>     at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
>     at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
>     at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
>     at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>     at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>     at 
> java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
>     at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>     at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.lambda$plan$3(FileStoreScanImpl.java:209)
>     at 
> java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424)
>     ... 4 more {code}
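The root cause in the stacktrace is that the literal 100 stays an Integer while the BIGINT column yields Longs, so Integer.compareTo receives a Long. The fix amounts to widening the literal to the column's conversion class before the comparison runs; a minimal sketch with a hypothetical helper:

{code:java}
import java.math.BigDecimal;

// Minimal sketch of the missing inference step: widen the numeric literal
// to the column's conversion class before the predicate compares values.
// The helper name is hypothetical, not the actual Literal implementation.
final class LiteralCaster {

    static Comparable<?> castLiteral(Object value, Class<?> columnType) {
        if (value instanceof Number) {
            Number n = (Number) value;
            if (columnType == Long.class) {
                return n.longValue();
            } else if (columnType == Integer.class) {
                return n.intValue();
            } else if (columnType == Double.class) {
                return n.doubleValue();
            } else if (columnType == BigDecimal.class) {
                return new BigDecimal(n.toString());
            }
        }
        return (Comparable<?>) value;
    }
}
// With this in place, `f0 > 100` on a BIGINT column compares Long to Long.
{code}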



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26877) Introduce auto type inference for filter push down

2022-03-27 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26877:
--
Affects Version/s: table-store-0.1.0
   (was: 0.1.0)

> Introduce auto type inference for filter push down
> --
>
> Key: FLINK-26877
> URL: https://issues.apache.org/jira/browse/FLINK-26877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: 0.1.0
>
>
> Currently, the following query will fail when PredicateConverter generates 
> Literal because auto type inference is not supported
> {code:sql}
> CREATE TABLE managed_table (
>   f0 BIGINT,
>   f1 STRING
> );
> SELECT * FROM managed_table WHERE f0 > 100 {code}
> Stacktrace
> {code:java}
> Caused by: java.lang.RuntimeException: Failed to read ManifestEntry list 
> concurrently
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:212)
>     at 
> org.apache.flink.table.store.connector.source.FileStoreSource.restoreEnumerator(FileStoreSource.java:147)
>     at 
> org.apache.flink.table.store.connector.source.FileStoreSource.createEnumerator(FileStoreSource.java:117)
>     at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:197)
>     ... 33 more
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.ClassCastException
>     at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:210)
>     ... 36 more
> Caused by: java.lang.ClassCastException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
>     at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1005)
>     ... 37 more
> Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>     at java.lang.Integer.compareTo(Integer.java:52)
>     at 
> org.apache.flink.table.store.file.predicate.Literal.compareValueTo(Literal.java:59)
>     at 
> org.apache.flink.table.store.file.predicate.GreaterThan.test(GreaterThan.java:51)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.filterManifestEntry(FileStoreScanImpl.java:269)
>     at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
>     at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>     at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
>     at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270)
>     at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>     at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
>     at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>     at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
>     at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
>     at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
>     at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
>     at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>     at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>     at 
> java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
>     at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>     at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.lambda$plan$3(FileStoreScanImpl.java:209)
>     at 
> java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424)
>     ... 4 more {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26877) Introduce auto type inference for filter push down

2022-03-27 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26877:
--
Description: 
Currently, the following query will fail when PredicateConverter generates 
Literal because auto type inference is not supported
{code:sql}
CREATE TABLE managed_table (
  f0 BIGINT,
  f1 STRING
);

SELECT * FROM managed_table WHERE f0 > 100 {code}
Stacktrace
{code:java}
Caused by: java.lang.RuntimeException: Failed to read ManifestEntry list 
concurrently
    at 
org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:212)
    at 
org.apache.flink.table.store.connector.source.FileStoreSource.restoreEnumerator(FileStoreSource.java:147)
    at 
org.apache.flink.table.store.connector.source.FileStoreSource.createEnumerator(FileStoreSource.java:117)
    at 
org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:197)
    ... 33 more
Caused by: java.util.concurrent.ExecutionException: java.lang.ClassCastException
    at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1006)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:210)
    ... 36 more
Caused by: java.lang.ClassCastException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at 
java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:598)
    at java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:1005)
    ... 37 more
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to 
java.lang.Integer
    at java.lang.Integer.compareTo(Integer.java:52)
    at 
org.apache.flink.table.store.file.predicate.Literal.compareValueTo(Literal.java:59)
    at 
org.apache.flink.table.store.file.predicate.GreaterThan.test(GreaterThan.java:51)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanImpl.filterManifestEntry(FileStoreScanImpl.java:269)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174)
    at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
    at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:270)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
    at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
    at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
    at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
    at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
    at java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanImpl.lambda$plan$3(FileStoreScanImpl.java:209)
    at 
java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1424)
    ... 4 more {code}

  was:
Currently, the following query will fail when PredicateConverter generates 
Literal because auto type inference is not supported
{code:sql}
CREATE TABLE managed_table (
  f0 BIGINT,
  f1 STRING
);

SELECT * FROM managed_table WHERE f0 > 100 {code}


> Introduce auto type inference for filter push down
> --
>
> Key: FLINK-26877
> URL: https://issues.apache.org/jira/browse/FLINK-26877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: 0.1.0
>
>
> Currently, the following query will fail when PredicateConverter generates 
> Literal because auto type inference is not supported
> {code:sql}
> CREATE TABLE managed_table (
>   f0 BIGINT,
>   f1 STRING
> );
> SELECT * FROM managed_table WHERE f0 > 100 {code}
> Stacktrace
> {code:java}
> Caused by: java.lang.RuntimeException: Failed to read ManifestEntry list 
> concurrently
> 

[jira] [Updated] (FLINK-26877) Introduce auto type inference for filter push down

2022-03-27 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26877:
--
Description: 
Currently, the following query will fail when PredicateConverter generates 
Literal because auto type inference is not supported
{code:sql}
CREATE TABLE managed_table (
  f0 BIGINT,
  f1 STRING
);

SELECT * FROM managed_table WHERE f0 > 100 {code}

> Introduce auto type inference for filter push down
> --
>
> Key: FLINK-26877
> URL: https://issues.apache.org/jira/browse/FLINK-26877
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
> Fix For: 0.1.0
>
>
> Currently, the following query will fail when PredicateConverter generates 
> Literal because auto type inference is not supported
> {code:sql}
> CREATE TABLE managed_table (
>   f0 BIGINT,
>   f1 STRING
> );
> SELECT * FROM managed_table WHERE f0 > 100 {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26877) Introduce auto type inference for filter push down

2022-03-27 Thread Jane Chan (Jira)
Jane Chan created FLINK-26877:
-

 Summary: Introduce auto type inference for filter push down
 Key: FLINK-26877
 URL: https://issues.apache.org/jira/browse/FLINK-26877
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan
 Fix For: 0.1.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26857) Add ITCase for projection and filter predicate

2022-03-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26857:
--
Description: 
To cover the conditions like 
 * filter predicate on partition fields
 * filter predicate on regular fields
 * projection under different changelog mode

 

  was:
To cover the conditions like 
 * filter predicate on partition fields
 * filter predicate on normal fields
 * projection under different changelog mode

 


> Add ITCase for projection and filter predicate
> --
>
> Key: FLINK-26857
> URL: https://issues.apache.org/jira/browse/FLINK-26857
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: 0.1.0
>
>
> To cover the conditions like 
>  * filter predicate on partition fields
>  * filter predicate on regular fields
>  * projection under different changelog mode
>  
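A hypothetical sketch of the coverage matrix above (`bEnv` and `managed_table` are assumptions for illustration):

{code:java}
import org.junit.Test;

// Fragment of a hypothetical IT case; bEnv is an assumed TableEnvironment.
@Test
public void testFilterAndProjection() throws Exception {
    // filter predicate on a partition field: should prune whole partitions
    bEnv.executeSql("SELECT * FROM managed_table WHERE dt = '2022-01-01'");
    // filter predicate on a regular field: should be pushed down to the scan
    bEnv.executeSql("SELECT * FROM managed_table WHERE f0 > 100");
    // projection: only the projected columns should be read from files
    bEnv.executeSql("SELECT f1 FROM managed_table");
    // each query would then be repeated under the different changelog modes
}
{code}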



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26857) Add ITCase for projection and filter predicate

2022-03-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26857:
--
Fix Version/s: 0.1.0

> Add ITCase for projection and filter predicate
> --
>
> Key: FLINK-26857
> URL: https://issues.apache.org/jira/browse/FLINK-26857
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: 0.1.0
>
>
> To cover the conditions like 
>  * filter predicate on partition fields
>  * filter predicate on normal fields
>  * projection under different changelog mode
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26856) Add ITCase for compound primary key and multi-partition

2022-03-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26856:
--
Fix Version/s: 0.1.0

> Add ITCase for compound primary key and multi-partition
> ---
>
> Key: FLINK-26856
> URL: https://issues.apache.org/jira/browse/FLINK-26856
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: 0.1.0
>
>
> Cover the conditions with 
>  * multi-partitions like dt='2022-01-01'/region='Euro'
>  * compound primary keys like PRIMARY KEY (f0, f1, ...)
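For illustration, a hypothetical DDL that exercises both conditions above in one table (the schema and `bEnv` are assumptions):

{code:java}
// Hypothetical DDL; note the primary key covers the partition keys,
// matching the constraint tracked in FLINK-26753.
bEnv.executeSql(
        "CREATE TABLE IF NOT EXISTS T ("
                + "  f0 INT, f1 STRING, dt STRING, region STRING,"
                + "  PRIMARY KEY (f0, f1, dt, region) NOT ENFORCED"
                + ") PARTITIONED BY (dt, region)");
// data files would then land under paths like .../dt=2022-01-01/region=Euro/
{code}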



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26863) Filter predicate does not work

2022-03-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26863:
--
Fix Version/s: 0.1.0

> Filter predicate does not work
> --
>
> Key: FLINK-26863
> URL: https://issues.apache.org/jira/browse/FLINK-26863
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Blocker
> Fix For: 0.1.0
>
>
> {code:java}
> Caused by: java.lang.RuntimeException: Failed to fetch next result
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>     at 
> org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:219)
>     at 
> org.apache.flink.table.store.file.utils.BlockingIterator.doCollect(BlockingIterator.java:94)
>     at 
> org.apache.flink.table.store.file.utils.BlockingIterator.lambda$collect$1(BlockingIterator.java:76)
>     at 
> java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.io.IOException: Failed to fetch job execution result
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:184)
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121)
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
>     ... 9 more
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>     at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:182)
>     ... 11 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>     at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>     at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
>     at 
> java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:680)
>     at 
> java.base/java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658)
>     at 
> java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094)
>     at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobExecutionResult(MiniClusterJobClient.java:138)
>     at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:181)
>     ... 11 more
> Caused by: org.apache.flink.runtime.client.JobInitializationException: Could 
> not start the JobMaster.
>     at 
> org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
>     at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>     at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>     at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>     at 
> java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1705)
>     at 
> java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
>     ... 3 more
> Caused by: java.util.concurrent.CompletionException: 
> java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot 
> instantiate the coordinator for operator Source: 
> managed_table_0fbb07bd-5474-4dbb-b2bf-382e075d2a23[3] -> Calc[4] -> 
> ConstraintEnforcer[5]
>     at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
>     at 
> java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1702)
>     

[jira] [Updated] (FLINK-26863) Filter predicate does not work

2022-03-25 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26863:
--
Description: 
{code:java}
Caused by: java.lang.RuntimeException: Failed to fetch next result
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
    at 
org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:219)
    at 
org.apache.flink.table.store.file.utils.BlockingIterator.doCollect(BlockingIterator.java:94)
    at 
org.apache.flink.table.store.file.utils.BlockingIterator.lambda$collect$1(BlockingIterator.java:76)
    at 
java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.io.IOException: Failed to fetch job execution result
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:184)
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121)
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
    ... 9 more
Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
    at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022)
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:182)
    ... 11 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution 
failed.
    at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
    at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
    at 
java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:680)
    at 
java.base/java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658)
    at 
java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094)
    at 
org.apache.flink.runtime.minicluster.MiniClusterJobClient.getJobExecutionResult(MiniClusterJobClient.java:138)
    at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:181)
    ... 11 more
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could 
not start the JobMaster.
    at 
org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1705)
    at 
java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
    ... 3 more
Caused by: java.util.concurrent.CompletionException: 
java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot 
instantiate the coordinator for operator Source: 
managed_table_0fbb07bd-5474-4dbb-b2bf-382e075d2a23[3] -> Calc[4] -> 
ConstraintEnforcer[5]
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
    at 
java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1702)
    ... 4 more
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.JobException: 
Cannot instantiate the coordinator for operator Source: 
managed_table_0fbb07bd-5474-4dbb-b2bf-382e075d2a23[3] -> Calc[4] -> 
ConstraintEnforcer[5]
    at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:319)
    at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114)
    at 
java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1700)
    ... 4 more
Caused by: 

[jira] [Created] (FLINK-26863) Filter predicate does not work

2022-03-25 Thread Jane Chan (Jira)
Jane Chan created FLINK-26863:
-

 Summary: Filter predicate does not work
 Key: FLINK-26863
 URL: https://issues.apache.org/jira/browse/FLINK-26863
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26857) Add ITCase for projection and filter predicate

2022-03-24 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26857:
--
Description: 
To cover the conditions like 
 * filter predicate on partition fields
 * filter predicate on normal fields
 * projection under different changelog mode

 

> Add ITCase for projection and filter predicate
> --
>
> Key: FLINK-26857
> URL: https://issues.apache.org/jira/browse/FLINK-26857
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> To cover the conditions like 
>  * filter predicate on partition fields
>  * filter predicate on normal fields
>  * projection under different changelog mode
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26857) Add ITCase for projection and filter predicate

2022-03-24 Thread Jane Chan (Jira)
Jane Chan created FLINK-26857:
-

 Summary: Add ITCase for projection and filter predicate
 Key: FLINK-26857
 URL: https://issues.apache.org/jira/browse/FLINK-26857
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26856) Add ITCase for compound primary key and multi-partition

2022-03-24 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26856:
--
Description: 
Cover the conditions with 
 * multi-partitions like dt='2022-01-01'/region='Euro'
 * compound primary keys like PRIMARY KEY (f0, f1, ...)

> Add ITCase for compound primary key and multi-partition
> ---
>
> Key: FLINK-26856
> URL: https://issues.apache.org/jira/browse/FLINK-26856
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> Cover the conditions with 
>  * multi-partitions like dt='2022-01-01'/region='Euro'
>  * compound primary keys like PRIMARY KEY (f0, f1, ...)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26856) Add ITCase for compound primary key and multi-partition

2022-03-24 Thread Jane Chan (Jira)
Jane Chan created FLINK-26856:
-

 Summary: Add ITCase for compound primary key and multi-partition
 Key: FLINK-26856
 URL: https://issues.apache.org/jira/browse/FLINK-26856
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26753) PK constraint should include partition key

2022-03-20 Thread Jane Chan (Jira)
Jane Chan created FLINK-26753:
-

 Summary: PK constraint should include partition key
 Key: FLINK-26753
 URL: https://issues.apache.org/jira/browse/FLINK-26753
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan


We should check that the primary key includes the partition key if the table 
is partitioned.
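The check itself is small; a hypothetical sketch (the class name, method name, and message are illustrative):

{code:java}
import java.util.List;

// Hypothetical schema validation; names and message are illustrative only.
final class SchemaValidator {

    static void checkPrimaryKeyCoversPartitionKey(
            List<String> primaryKeys, List<String> partitionKeys) {
        if (!partitionKeys.isEmpty() && !primaryKeys.containsAll(partitionKeys)) {
            throw new IllegalArgumentException(
                    "Primary key " + primaryKeys + " must include all partition "
                            + "keys " + partitionKeys + " for a partitioned table.");
        }
    }
}
{code}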



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26669) Refactor ReadWriteITCase and add more test cases

2022-03-15 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26669:
--
Parent: FLINK-26653
Issue Type: Sub-task  (was: Improvement)

> Refactor ReadWriteITCase and add more test cases
> 
>
> Key: FLINK-26669
> URL: https://issues.apache.org/jira/browse/FLINK-26669
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> ReadWriteITCase is a parameterized test using junit4. However, since junit4 
> doesn't support supplying different parameters to different test methods, it's 
> hard to extend the test to cover more conditions (such as 
> overwrite/change-tracking/projection/filter/batch write + streaming read, or 
> vice versa), and readability suffers.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26669) Refactor ReadWriteITCase and add more test cases

2022-03-15 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26669:
--
Parent: (was: FLINK-26116)
Issue Type: Improvement  (was: Sub-task)

> Refactor ReadWriteITCase and add more test cases
> 
>
> Key: FLINK-26669
> URL: https://issues.apache.org/jira/browse/FLINK-26669
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> ReadWriteITCase is a parameterized test using junit4. However, since junit4 
> doesn't support supplying different parameters to different test methods, it's 
> hard to extend the test to cover more conditions (such as 
> overwrite/change-tracking/projection/filter/batch write + streaming read, or 
> vice versa), and readability suffers.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26669) Refactor ReadWriteITCase and add more test cases

2022-03-15 Thread Jane Chan (Jira)
Jane Chan created FLINK-26669:
-

 Summary: Refactor ReadWriteITCase and add more test cases
 Key: FLINK-26669
 URL: https://issues.apache.org/jira/browse/FLINK-26669
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan


ReadWriteITCase is a parameterized test using junit4. However, since junit4 
doesn't support supplying different parameters to different test methods, it's 
hard to extend the test to cover more conditions (such as 
overwrite/change-tracking/projection/filter/batch write + streaming read, or 
vice versa), and readability suffers.
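For comparison, junit5 lets each test method declare its own parameter source, which is exactly the capability the refactor needs; a minimal sketch:

{code:java}
import java.util.stream.Stream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

// Sketch only: per-method parameter sources, which junit4's class-level
// parameterized runner cannot express.
class ReadWriteITCaseSketch {

    static Stream<String> writeModes() {
        return Stream.of("batch", "streaming");
    }

    static Stream<Boolean> projectionCases() {
        return Stream.of(true, false);
    }

    @ParameterizedTest
    @MethodSource("writeModes")
    void testOverwrite(String mode) { /* ... */ }

    @ParameterizedTest
    @MethodSource("projectionCases")
    void testProjection(boolean withProjection) { /* ... */ }
}
{code}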



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26562) Introduce table.path option for FileStoreOptions

2022-03-09 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26562:
--
Summary: Introduce table.path option for FileStoreOptions  (was: Introduce 
table.path option in FileStoreOptions)

> Introduce table.path option for FileStoreOptions
> 
>
> Key: FLINK-26562
> URL: https://issues.apache.org/jira/browse/FLINK-26562
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> Currently, the {{FileStoreOptions}} only has the {{FILE_PATH}} option as the 
> table store root dir; we should have another {{TABLE_PATH}} for the generated 
> {{sst/manifest/snapshot}} per table.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26562) Introduce table.path option in FileStoreOptions

2022-03-09 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26562:
--
Summary: Introduce table.path option in FileStoreOptions  (was: Introduce 
table path option)

> Introduce table.path option in FileStoreOptions
> ---
>
> Key: FLINK-26562
> URL: https://issues.apache.org/jira/browse/FLINK-26562
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>
> Currently, the {{FileStoreOptions}} only has the {{FILE_PATH}} option as the 
> table store root dir; we should have another {{TABLE_PATH}} for the generated 
> {{sst/manifest/snapshot}} per table.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26562) Introduce table path option

2022-03-09 Thread Jane Chan (Jira)
Jane Chan created FLINK-26562:
-

 Summary: Introduce table path option
 Key: FLINK-26562
 URL: https://issues.apache.org/jira/browse/FLINK-26562
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan


Currently, {{FileStoreOptions}} only has the {{FILE_PATH}} option, which 
serves as the table store root dir; we should have another {{TABLE_PATH}} 
option for the {{sst/manifest/snapshot}} files generated per table.
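To make the proposal concrete, a minimal sketch using Flink's {{ConfigOptions}} builder follows. The key, type, default, and description are assumptions for illustration, not the final definition.
{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class FileStoreOptionsSketch {

    // Hypothetical TABLE_PATH option; the actual key and description may differ.
    public static final ConfigOption<String> TABLE_PATH =
            ConfigOptions.key("table.path")
                    .stringType()
                    .noDefaultValue()
                    .withDescription(
                            "Path under the table store root dir where the "
                                    + "sst/manifest/snapshot files of a table are generated.");
}
{code}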



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26458) Rename Accumulator to MergeFunction

2022-03-08 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17502822#comment-17502822
 ] 

Jane Chan commented on FLINK-26458:
---

I can take this ticket.

> Rename Accumulator to MergeFunction
> ---
>
> Key: FLINK-26458
> URL: https://issues.apache.org/jira/browse/FLINK-26458
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: table-store-0.1.0
>
>
> See org.apache.flink.table.store.file.mergetree.compact.Accumulator.
> Actually, it is not an accumulator but a merger, so the name Accumulator is 
> misleading.
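To give the proposed rename a concrete shape, a hypothetical sketch of the merger interface follows; the method names and signatures are illustrative assumptions, not the actual API.
{code:java}
import org.apache.flink.table.data.RowData;

/** Hypothetical sketch: merges all values that share the same key into one result. */
public interface MergeFunction {

    /** Resets the state before merging the values of the next key. */
    void reset();

    /** Folds one more value of the current key into the merged result. */
    void add(RowData value);

    /** Returns the merged result for the current key. */
    RowData getValue();
}
{code}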



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26535) Introduce StoreTableSource and StoreTableSink

2022-03-08 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26535:
--
Description: Introduce StoreTableSource and StoreTableSink, which create 
StoreSource and StoreSink respectively, and interact with TableStoreFactory, to 
enable stream/batch jobs via SQL.  (was: Introduce StoreTableSource and 
StoreTableSink, which create StoreSource and StoreSink respectively, and 
interact with TableStoreFactory, in order to enable stream/batch jobs via SQL.)

> Introduce StoreTableSource and StoreTableSink
> -
>
> Key: FLINK-26535
> URL: https://issues.apache.org/jira/browse/FLINK-26535
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> Introduce StoreTableSource and StoreTableSink, which create StoreSource and 
> StoreSink respectively, and interact with TableStoreFactory, to enable 
> stream/batch jobs via SQL.
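As a structural illustration, here is a minimal sketch of what the source side could look like, assuming it implements Flink's {{ScanTableSource}}. The changelog mode, class name, and provider wiring are assumptions; the real class would plug a StoreSource into the runtime provider.
{code:java}
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;

/** Structural sketch only; not the actual implementation. */
public class StoreTableSourceSketch implements ScanTableSource {

    @Override
    public ChangelogMode getChangelogMode() {
        // Illustrative; the actual mode would depend on change-tracking settings.
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
        // The real implementation would return a provider wrapping a StoreSource,
        // e.g. SourceProvider.of(storeSource).
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public DynamicTableSource copy() {
        return new StoreTableSourceSketch();
    }

    @Override
    public String asSummaryString() {
        return "StoreTableSource (sketch)";
    }
}
{code}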



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26535) Introduce StoreTableSource and StoreTableSink

2022-03-08 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26535:
--
Description: Introduce StoreTableSource and StoreTableSink, which create 
StoreSource and StoreSink respectively, and interact with TableStoreFactory, 
in order to enable stream/batch jobs via SQL.  (was: Introduce StoreTableSource 
and StoreTableSink, which create StoreSource and StoreSink respectively, and 
interact with TableStoreFactory, in order to enable running stream/batch jobs 
via SQL.)

> Introduce StoreTableSource and StoreTableSink
> -
>
> Key: FLINK-26535
> URL: https://issues.apache.org/jira/browse/FLINK-26535
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> Introduce StoreTableSource and StoreTableSink, which create StoreSource and 
> StoreSink respectively, and interact with TableStoreFactory, in order to 
> enable stream/batch jobs via SQL.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26535) Introduce StoreTableSource and StoreTableSink

2022-03-08 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26535:
--
Description: Introduce StoreTableSource and StoreTableSink, which create 
StoreSource and StoreSink respectively, and interact with TableStoreFactory, 
in order to enable running stream/batch jobs via SQL.

> Introduce StoreTableSource and StoreTableSink
> -
>
> Key: FLINK-26535
> URL: https://issues.apache.org/jira/browse/FLINK-26535
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: 0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> Introduce StoreTableSource and StoreTableSink, which create StoreSource and 
> StoreSink respectively, and interact with TableStoreFactory, in order to 
> enable running stream/batch jobs via SQL.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26535) Introduce StoreTableSource and StoreTableSink

2022-03-08 Thread Jane Chan (Jira)
Jane Chan created FLINK-26535:
-

 Summary: Introduce StoreTableSource and StoreTableSink
 Key: FLINK-26535
 URL: https://issues.apache.org/jira/browse/FLINK-26535
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: 0.1.0
Reporter: Jane Chan






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26208) Introduce implementation of ManagedTableFactory

2022-03-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26208:
--
Description: Introduce an impl for 
`org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
#onCreateTable, #onDropTable) to support interaction with Flink's TableEnv via 
SQL.  (was: Introduce an impl for 
`org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
#onCreateTable, #onDropTable and #onCompactTable) to support interaction with 
Flink's TableEnv.)

> Introduce implementation of ManagedTableFactory
> ---
>
> Key: FLINK-26208
> URL: https://issues.apache.org/jira/browse/FLINK-26208
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.1.0
>
>
> Introduce an impl for 
> `org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
> #onCreateTable, #onDropTable) to support interaction with Flink's TableEnv 
> via SQL.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26495) Dynamic table options does not work for view

2022-03-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26495:
--
Component/s: Table SQL / Planner
 (was: Table SQL / API)

> Dynamic table options does not work for view
> 
>
> Key: FLINK-26495
> URL: https://issues.apache.org/jira/browse/FLINK-26495
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
>
> The dynamic table options (aka. table hints) syntax
> {code:java}
> table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
> does not work for views, without any exception thrown or suggestion given to 
> users. This is not user-friendly and is misleading. We should either throw a 
> meaningful exception or support this feature for views.
>  
> h4. How to reproduce
> Run the following statements in SQL CLI
> {code:java}
> Flink SQL> create table datagen (f0 int, f1 double) with ('connector' = 
> 'datagen', 'number-of-rows' = '5');
> [INFO] Execute statement succeed.
> Flink SQL> create view my_view as select * from datagen;
> [INFO] Execute statement succeed.
> Flink SQL> explain plan for select * from my_view /*+ 
> OPTIONS('number-of-rows' = '1') */;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, datagen]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1]) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26495) Dynamic table options does not work for view

2022-03-06 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17502057#comment-17502057
 ] 

Jane Chan edited comment on FLINK-26495 at 3/7/22, 2:19 AM:


FLINK-25609 made {{CatalogSourceTable#toRel}} throw a {{ValidationException}} 
when the hints are not empty and the {{TableKind}} is VIEW. But it seems that 
the sub-query expansion logic goes through {{ExpandingPreparingTable}}, so no 
exception is thrown there. If we prohibit hints on views, we should also add 
the check to {{ExpandingPreparingTable#toRel}}. cc [~jark] and [~lzljs3620320] 


was (Author: qingyue):
FLINK-25609 made {{CatalogSourceTable#toRel}} throw a {{ValidationException}} 
when hints are not empty and {{TableKind}} is the view. But it seems that the 
expanding sub-query logic goes to {{ExpandingPreparingTable}}, so no exception 
will be thrown. If we prohibit the view with hints, we should also check 
{{ExpandingPreparingTable#toRel}}. cc [~jark] and [~lzljs3620320] 

> Dynamic table options does not work for view
> 
>
> Key: FLINK-26495
> URL: https://issues.apache.org/jira/browse/FLINK-26495
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
>
> The dynamic table options (aka. table hints) syntax
> {code:java}
> table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
> does not work for views, without any exception thrown or suggestion given to 
> users. This is not user-friendly and is misleading. We should either throw a 
> meaningful exception or support this feature for views.
>  
> h4. How to reproduce
> Run the following statements in SQL CLI
> {code:java}
> Flink SQL> create table datagen (f0 int, f1 double) with ('connector' = 
> 'datagen', 'number-of-rows' = '5');
> [INFO] Execute statement succeed.
> Flink SQL> create view my_view as select * from datagen;
> [INFO] Execute statement succeed.
> Flink SQL> explain plan for select * from my_view /*+ 
> OPTIONS('number-of-rows' = '1') */;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, datagen]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1]) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26495) Dynamic table options does not work for view

2022-03-06 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17502057#comment-17502057
 ] 

Jane Chan commented on FLINK-26495:
---

FLINK-25609 made {{CatalogSourceTable#toRel}} throw a {{ValidationException}} 
when the hints are not empty and the {{TableKind}} is VIEW. But it seems that 
the sub-query expansion logic goes through {{ExpandingPreparingTable}}, so no 
exception is thrown there. If we prohibit hints on views, we should also add 
the check to {{ExpandingPreparingTable#toRel}}. cc [~jark] and [~lzljs3620320] 
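For illustration, a hypothetical helper that {{ExpandingPreparingTable#toRel}} could invoke is sketched below, mirroring the hint check described for {{CatalogSourceTable#toRel}}. The class, method, and error message are assumptions, not existing code.
{code:java}
import org.apache.calcite.rel.hint.RelHint;
import org.apache.flink.table.api.ValidationException;

import java.util.List;

/** Hypothetical guard mirroring the FLINK-25609 check in CatalogSourceTable#toRel. */
final class ViewHintValidation {

    private ViewHintValidation() {}

    /** Throws if dynamic table options (hints) are attached to a view scan. */
    static void validateNoHintsOnView(List<RelHint> tableHints, String viewName) {
        if (!tableHints.isEmpty()) {
            throw new ValidationException(
                    "View '" + viewName + "' cannot be enriched with new options. "
                            + "Hints can only be applied to tables.");
        }
    }
}
{code}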

> Dynamic table options does not work for view
> 
>
> Key: FLINK-26495
> URL: https://issues.apache.org/jira/browse/FLINK-26495
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
>
> The dynamic table options (aka. table hints) syntax
> {code:java}
> table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
> does not work for views, without any exception thrown or suggestion given to 
> users. This is not user-friendly and is misleading. We should either throw a 
> meaningful exception or support this feature for views.
>  
> h4. How to reproduce
> Run the following statements in SQL CLI
> {code:java}
> Flink SQL> create table datagen (f0 int, f1 double) with ('connector' = 
> 'datagen', 'number-of-rows' = '5');
> [INFO] Execute statement succeed.
> Flink SQL> create view my_view as select * from datagen;
> [INFO] Execute statement succeed.
> Flink SQL> explain plan for select * from my_view /*+ 
> OPTIONS('number-of-rows' = '1') */;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, datagen]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1]) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26495) Dynamic table options does not work for view

2022-03-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-26495:
--
Description: 
The dynamic table options (aka. table hints) syntax
{code:java}
table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
does not work for views, without any exception thrown or suggestion given to 
users. This is not user-friendly and is misleading. We should either throw a 
meaningful exception or support this feature for views.

 
h4. How to reproduce

Run the following statements in SQL CLI
{code:java}
Flink SQL> create table datagen (f0 int, f1 double) with ('connector' = 
'datagen', 'number-of-rows' = '5');
[INFO] Execute statement succeed.

Flink SQL> create view my_view as select * from datagen;
[INFO] Execute statement succeed.

Flink SQL> explain plan for select * from my_view /*+ OPTIONS('number-of-rows' 
= '1') */;
== Abstract Syntax Tree ==
LogicalProject(f0=[$0], f1=[$1])
+- LogicalTableScan(table=[[default_catalog, default_database, datagen]])


== Optimized Physical Plan ==
TableSourceScan(table=[[default_catalog, default_database, datagen]], 
fields=[f0, f1])


== Optimized Execution Plan ==
TableSourceScan(table=[[default_catalog, default_database, datagen]], 
fields=[f0, f1]) {code}

  was:
The dynamic table options (aka. table hints) syntax
{code:java}
table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
does not work for views, without any exception thrown or suggestion given to 
users. This is not user-friendly and is misleading. We should either throw a 
meaningful exception or support this feature for views.


> Dynamic table options does not work for view
> 
>
> Key: FLINK-26495
> URL: https://issues.apache.org/jira/browse/FLINK-26495
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
>
> The dynamic table options (aka. table hints) syntax
> {code:java}
> table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
> does not work for views, without any exception thrown or suggestion given to 
> users. This is not user-friendly and is misleading. We should either throw a 
> meaningful exception or support this feature for views.
>  
> h4. How to reproduce
> Run the following statements in SQL CLI
> {code:java}
> Flink SQL> create table datagen (f0 int, f1 double) with ('connector' = 
> 'datagen', 'number-of-rows' = '5');
> [INFO] Execute statement succeed.
> Flink SQL> create view my_view as select * from datagen;
> [INFO] Execute statement succeed.
> Flink SQL> explain plan for select * from my_view /*+ 
> OPTIONS('number-of-rows' = '1') */;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, datagen]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, datagen]], 
> fields=[f0, f1]) {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26495) Dynamic table options does not work for view

2022-03-05 Thread Jane Chan (Jira)
Jane Chan created FLINK-26495:
-

 Summary: Dynamic table options does not work for view
 Key: FLINK-26495
 URL: https://issues.apache.org/jira/browse/FLINK-26495
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.15.0
Reporter: Jane Chan


The dynamic table options (aka. table hints) syntax
{code:java}
table_identifier /*+ OPTIONS(key=val [, key=val]*) */ {code}
does not work for views, without any exception thrown or suggestion given to 
users. This is not user-friendly and is misleading. We should either throw a 
meaningful exception or support this feature for views.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26208) Introduce implementation of ManagedTableFactory

2022-02-16 Thread Jane Chan (Jira)
Jane Chan created FLINK-26208:
-

 Summary: Introduce implementation of ManagedTableFactory
 Key: FLINK-26208
 URL: https://issues.apache.org/jira/browse/FLINK-26208
 Project: Flink
  Issue Type: Sub-task
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan


Introduce an impl for 
`org.apache.flink.table.factories.ManagedTableFactory` (#enrichOptions, 
#onCreateTable, #onDropTable and #onCompactTable) to support interaction with 
Flink's TableEnv.
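A bare-bones skeleton of such an impl is sketched below, under the assumption that {{ManagedTableFactory}} exposes the callbacks named above; the class name, identifier, and all method bodies are placeholders, not the actual TableStoreFactory logic.
{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.table.catalog.CatalogPartitionSpec;
import org.apache.flink.table.factories.ManagedTableFactory;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Skeleton sketch only; bodies are placeholders. */
public class TableStoreFactorySketch implements ManagedTableFactory {

    @Override
    public String factoryIdentifier() {
        return "table-store-sketch"; // illustrative identifier
    }

    @Override
    public Map<String, String> enrichOptions(Context context) {
        // e.g. fill in defaults and derived paths before the table is persisted
        return new HashMap<>(context.getCatalogTable().getOptions());
    }

    @Override
    public void onCreateTable(Context context, boolean ignoreIfExists) {
        // e.g. create the file store directory for the managed table
    }

    @Override
    public void onDropTable(Context context, boolean ignoreIfNotExists) {
        // e.g. delete the file store directory for the managed table
    }

    @Override
    public Map<String, String> onCompactTable(Context context, CatalogPartitionSpec partitionSpec) {
        // placeholder; the real impl would prepare the compaction options
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        return Collections.emptySet();
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }
}
{code}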



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)


[ https://issues.apache.org/jira/browse/FLINK-26201 ]


Jane Chan deleted comment on FLINK-26201:
---

was (Author: qingyue):
The reason is that the flink-avro dependency was not updated.

> FileStoreScanTest is not stable
> ---
>
> Key: FLINK-26201
> URL: https://issues.apache.org/jira/browse/FLINK-26201
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
> fail randomly with the following stacktrace.
>  
> h3. How to reproduce
> You can reproduce this issue either by running `mvn clean package/install` or 
> by running the individual test in the IDE.
> h3. Details
> h6. FileStoreScanTest#testWithSnapshot
>  
> {code:java}
> java.lang.IllegalStateException: Trying to add file 
> {org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
> sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
> might be corrupted.    at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
>     at java.base/java.util.HashMap.compute(HashMap.java:1228)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> 

[jira] [Closed] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-26201.
-
Resolution: Not A Bug

The reason is that the flink-avro dependency was not updated.

> FileStoreScanTest is not stable
> ---
>
> Key: FLINK-26201
> URL: https://issues.apache.org/jira/browse/FLINK-26201
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
> fail randomly with the following stacktrace.
>  
> h3. How to reproduce
> You can reproduce this issue either by running `mvn clean package/install` or 
> by running the individual test in the IDE.
> h3. Details
> h6. FileStoreScanTest#testWithSnapshot
>  
> {code:java}
> java.lang.IllegalStateException: Trying to add file 
> {org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
> sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
> might be corrupted.    at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
>     at java.base/java.util.HashMap.compute(HashMap.java:1228)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>     at 
> 

[jira] [Commented] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493678#comment-17493678
 ] 

Jane Chan commented on FLINK-26201:
---

The reason is that the flink-avro dependency was not updated.

> FileStoreScanTest is not stable
> ---
>
> Key: FLINK-26201
> URL: https://issues.apache.org/jira/browse/FLINK-26201
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.1.0
>Reporter: Jane Chan
>Priority: Major
>
> FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
> fail randomly with the following stacktrace.
>  
> h3. How to reproduce
> You can reproduce this issue either by running `mvn clean package/install` or 
> by running the individual test in the IDE.
> h3. Details
> h6. FileStoreScanTest#testWithSnapshot
>  
> {code:java}
> java.lang.IllegalStateException: Trying to add file 
> {org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
> sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
> might be corrupted.    at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
>     at java.base/java.util.HashMap.compute(HashMap.java:1228)
>     at 
> org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
>     at 
> org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
>     at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
>     at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>     at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
>     at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
>     at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>     at 
> 

[jira] [Created] (FLINK-26201) FileStoreScanTest is not stable

2022-02-16 Thread Jane Chan (Jira)
Jane Chan created FLINK-26201:
-

 Summary: FileStoreScanTest is not stable
 Key: FLINK-26201
 URL: https://issues.apache.org/jira/browse/FLINK-26201
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.1.0
Reporter: Jane Chan


FileStoreScanTest#testWithSnapshot and FileStoreScanTest#testWithManifestList 
fail randomly with the following stacktrace.

 
h3. How to reproduce

You can reproduce this issue either by running `mvn clean package/install` or 
by running the individual test in the IDE.
h3. Details
h6. FileStoreScanTest#testWithSnapshot

 
{code:java}
java.lang.IllegalStateException: Trying to add file 
{org.apache.flink.table.data.binary.BinaryRowData@ddd21057, 5, 0, 
sst-b60934a5-eb23-4e2d-9b07-155d2ef29e15-0} which is already added. Manifest 
might be corrupted.    at 
org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanImpl.scan(FileStoreScanImpl.java:184)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanImpl.plan(FileStoreScanImpl.java:145)
    at 
org.apache.flink.table.store.file.operation.FileStoreWriteImpl.createWriter(FileStoreWriteImpl.java:59)
    at 
org.apache.flink.table.store.file.operation.OperationTestUtils.lambda$writeAndCommitData$1(OperationTestUtils.java:174)
    at java.base/java.util.HashMap.compute(HashMap.java:1228)
    at 
org.apache.flink.table.store.file.operation.OperationTestUtils.writeAndCommitData(OperationTestUtils.java:169)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanTest.writeData(FileStoreScanTest.java:202)
    at 
org.apache.flink.table.store.file.operation.FileStoreScanTest.testWithSnapshot(FileStoreScanTest.java:135)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
    at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
    at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
    at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
    at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
    at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
    at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
    at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
    at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
    at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
    at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
    at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
    at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
    at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
    at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
    at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
    at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
    at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
    at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
    at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
    at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
    at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
    at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
    at 

[jira] [Comment Edited] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-23 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479931#comment-17479931
 ] 

Jane Chan edited comment on FLINK-25746 at 1/23/22, 3:34 PM:
-

Simply adding guava as a test dependency to flink-orc and flink-parquet fixes 
it. But I haven't spent much time exploring the reason, so I'm not sure what 
the root cause is.


was (Author: qingyue):
Simply adding guava as a test dependency to flink-orc and flink-parquet fixes 
it. But I'm not sure what the root cause is.

My gut feeling is that it is related to FLINK-25128.

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>
> Recently, it has been observed that several integration test cases fail when 
> run from IDEA locally, while running them from the Maven command line is OK.
> h4. How to reproduce
> {code:java}
> // switch to master branch
> git fetch origin
> git rebase origin/master
> mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
> {code}
> Then run the following tests from IntelliJ IDEA
> h4. The affected tests
> {code:java}
> org.apache.flink.orc.OrcFileSystemITCase
> org.apache.flink.orc.OrcFsStreamingSinkITCase
> org.apache.flink.formats.parquet.ParquetFileCompactionITCase
> org.apache.flink.formats.parquet.ParquetFileSystemITCase
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
> h4. The stack trace
> !image-2022-01-21-16-54-12-354.png!
> {code:java}
> java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
> org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
>     at 
> org.apache.calcite.config.CalciteSystemProperty.(CalciteSystemProperty.java:47)
>     at org.apache.calcite.util.Util.(Util.java:152)
>     at org.apache.calcite.sql.type.SqlTypeName.(SqlTypeName.java:142)
>     at 
> org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
>     at org.apache.calcite.sql.type.ReturnTypes.(ReturnTypes.java:127)
>     at org.apache.calcite.sql.SqlSetOperator.(SqlSetOperator.java:45)
>     at 
> org.apache.calcite.sql.fun.SqlStdOperatorTable.(SqlStdOperatorTable.java:97)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:101)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:91)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:215)
>     at 
> org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:129)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:118)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:55)
>     at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
>     at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
>     at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
>     at 
> org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
>     at 
> org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> 

[jira] [Commented] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479931#comment-17479931
 ] 

Jane Chan commented on FLINK-25746:
---

Simply adding guava as a test dependency to flink-orc and flink-parquet fixes 
it. But I'm not sure what the root cause is.

My gut feeling is that it is related to FLINK-25128.

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>
> Recently, it has been observed that several integration test cases fail when 
> run from IDEA locally, while running them from the Maven command line is OK.
> h4. How to reproduce
> {code:java}
> // switch to master branch
> git fetch origin
> git rebase origin/master
> mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
> {code}
> Then run the following tests from IntelliJ IDEA
> h4. The affected tests
> {code:java}
> org.apache.flink.orc.OrcFileSystemITCase
> org.apache.flink.orc.OrcFsStreamingSinkITCase
> org.apache.flink.formats.parquet.ParquetFileCompactionITCase
> org.apache.flink.formats.parquet.ParquetFileSystemITCase
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
> h4. The stack trace
> !image-2022-01-21-16-54-12-354.png!
> {code:java}
> java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects    at 
> org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
>     at 
> org.apache.calcite.config.CalciteSystemProperty.(CalciteSystemProperty.java:47)
>     at org.apache.calcite.util.Util.(Util.java:152)
>     at org.apache.calcite.sql.type.SqlTypeName.(SqlTypeName.java:142)
>     at 
> org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
>     at org.apache.calcite.sql.type.ReturnTypes.(ReturnTypes.java:127)
>     at org.apache.calcite.sql.SqlSetOperator.(SqlSetOperator.java:45)
>     at 
> org.apache.calcite.sql.fun.SqlStdOperatorTable.(SqlStdOperatorTable.java:97)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:101)
>     at 
> org.apache.calcite.sql2rel.StandardConvertletTable.(StandardConvertletTable.java:91)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:234)
>     at 
> org.apache.calcite.tools.Frameworks$ConfigBuilder.(Frameworks.java:215)
>     at 
> org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.(PlannerContext.java:129)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.(PlannerBase.scala:118)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.(StreamPlanner.scala:55)
>     at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
>     at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
>     at 
> org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
>     at 
> org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
>     at 
> org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>     at 

[jira] [Commented] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479923#comment-17479923
 ] 

Jane Chan commented on FLINK-25746:
---

h4. Some observations

The dependency tree is listed below.

Note that although guava is on the classpath, it is either version 11.0.2, 
which does not contain the class MoreObjects, or it is shaded.

 
{code:java}
[INFO] --- maven-dependency-plugin:3.2.0:tree (default-cli) @ flink-parquet ---
[INFO] org.apache.flink:flink-parquet:jar:1.15-SNAPSHOT
[INFO] +- org.apache.flink:flink-core:jar:1.15-SNAPSHOT:provided
[INFO] |  +- org.apache.flink:flink-annotations:jar:1.15-SNAPSHOT:provided
[INFO] |  +- org.apache.flink:flink-metrics-core:jar:1.15-SNAPSHOT:provided
[INFO] |  +- org.apache.flink:flink-shaded-asm-7:jar:7.1-14.0:provided
[INFO] |  +- org.apache.commons:commons-lang3:jar:3.3.2:provided
[INFO] |  +- com.esotericsoftware.kryo:kryo:jar:2.24.0:provided
[INFO] |  |  \- com.esotericsoftware.minlog:minlog:jar:1.2:provided
[INFO] |  +- commons-collections:commons-collections:jar:3.2.2:provided
[INFO] |  +- org.apache.commons:commons-compress:jar:1.21:compile
[INFO] |  \- org.apache.flink:flink-shaded-guava:jar:30.1.1-jre-14.0:provided
[INFO] +- org.apache.flink:flink-table-common:jar:1.15-SNAPSHOT:provided 
(optional) 
[INFO] |  \- com.ibm.icu:icu4j:jar:67.1:provided (optional) 
[INFO] +- org.apache.flink:flink-avro:jar:1.15-SNAPSHOT:compile (optional) 
[INFO] |  \- org.apache.avro:avro:jar:1.10.0:compile
[INFO] |     +- com.fasterxml.jackson.core:jackson-core:jar:2.13.0:compile
[INFO] |     \- com.fasterxml.jackson.core:jackson-databind:jar:2.13.0:compile
[INFO] |        \- 
com.fasterxml.jackson.core:jackson-annotations:jar:2.13.0:compile
[INFO] +- org.apache.flink:flink-connector-files:jar:1.15-SNAPSHOT:provided 
(optional) 
[INFO] |  \- org.apache.flink:flink-file-sink-common:jar:1.15-SNAPSHOT:provided
[INFO] +- org.apache.parquet:parquet-hadoop:jar:1.12.2:compile
[INFO] |  +- org.apache.parquet:parquet-column:jar:1.12.2:compile
[INFO] |  |  \- org.apache.parquet:parquet-encoding:jar:1.12.2:compile
[INFO] |  +- org.apache.parquet:parquet-format-structures:jar:1.12.2:compile
[INFO] |  |  \- javax.annotation:javax.annotation-api:jar:1.3.2:compile
[INFO] |  +- org.apache.parquet:parquet-jackson:jar:1.12.2:compile
[INFO] |  +- commons-pool:commons-pool:jar:1.6:compile
[INFO] |  \- com.github.luben:zstd-jni:jar:1.4.9-1:compile
[INFO] +- org.apache.hadoop:hadoop-common:jar:2.8.5:provided
[INFO] |  +- org.apache.hadoop:hadoop-annotations:jar:2.8.5:provided
[INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
[INFO] |  +- commons-cli:commons-cli:jar:1.5.0:provided
[INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:provided
[INFO] |  +- xmlenc:xmlenc:jar:0.52:provided
[INFO] |  +- org.apache.httpcomponents:httpclient:jar:4.5.13:compile
[INFO] |  |  \- org.apache.httpcomponents:httpcore:jar:4.4.14:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.15:compile
[INFO] |  +- commons-io:commons-io:jar:2.11.0:provided
[INFO] |  +- commons-net:commons-net:jar:3.1:provided
[INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
[INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:provided
[INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:provided
[INFO] |  +- org.mortbay.jetty:jetty-sslengine:jar:6.1.26:provided
[INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:provided
[INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:provided
[INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:provided
[INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:provided
[INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:provided
[INFO] |  |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:provided
[INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:provided
[INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:provided
[INFO] |  |  \- asm:asm:jar:3.1:provided
[INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
[INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:provided
[INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:provided
[INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
[INFO] |  +- commons-configuration:commons-configuration:jar:1.7:provided
[INFO] |  |  +- commons-digester:commons-digester:jar:1.8.1:provided
[INFO] |  |  \- commons-beanutils:commons-beanutils:jar:1.8.3:provided
[INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:provided
[INFO] |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:provided
[INFO] |  +- com.google.code.gson:gson:jar:2.2.4:provided
[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:2.8.5:provided
[INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:provided
[INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:provided
[INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:provided (version selected 
from constraint [1.3.1,2.3])
[INFO] |  |  |     \- net.minidev:accessors-smart:jar:1.2:provided
[INFO] |  |  |        \- 

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Affects Version/s: 1.15.0

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>
> Recently, it has been observed that several integration test cases fail when 
> run from IDEA locally, while running them from the Maven command line is OK.

[jira] [Commented] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479909#comment-17479909
 ] 

Jane Chan commented on FLINK-25746:
---

Meanwhile, the tests pass when run from the command line:
{code:java}
mvn test -Dtest=ParquetFileCompactionITCase {code}
!image-2022-01-21-16-56-42-156.png!

 

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Attachment: image-2022-01-21-16-56-42-156.png

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png, 
> image-2022-01-21-16-56-42-156.png
>
>

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Attachment: image-2022-01-21-16-54-12-354.png

> Failed to run ITCase locally with IDEA under flink-orc and flink-parquet 
> module
> ---
>
> Key: FLINK-25746
> URL: https://issues.apache.org/jira/browse/FLINK-25746
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jane Chan
>Priority: Major
> Attachments: image-2022-01-21-16-54-12-354.png
>
>

[jira] [Updated] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-25746:
--
Description: 
Recently, several integration test cases have been observed to fail when run 
locally from IntelliJ IDEA, while running them from the Maven command line works 
fine.
h4. How to reproduce
{code:java}
// switch to master branch
git fetch origin
git rebase origin/master
mvn clean install -DskipTests -Dfast -Pskip-webui-build -Dscala-2.12 -T 1C  
{code}
Then run the following tests from IntelliJ IDEA:
h4. The affected tests
{code:java}
org.apache.flink.orc.OrcFileSystemITCase
org.apache.flink.orc.OrcFsStreamingSinkITCase
org.apache.flink.formats.parquet.ParquetFileCompactionITCase
org.apache.flink.formats.parquet.ParquetFileSystemITCase
org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase {code}
h4. The stack trace

!image-2022-01-21-16-54-12-354.png!
{code:java}
java.lang.NoClassDefFoundError: com/google/common/base/MoreObjects
    at org.apache.calcite.config.CalciteSystemProperty.loadProperties(CalciteSystemProperty.java:404)
    at org.apache.calcite.config.CalciteSystemProperty.<clinit>(CalciteSystemProperty.java:47)
    at org.apache.calcite.util.Util.<clinit>(Util.java:152)
    at org.apache.calcite.sql.type.SqlTypeName.<clinit>(SqlTypeName.java:142)
    at org.apache.calcite.sql.type.SqlTypeFamily.getTypeNames(SqlTypeFamily.java:163)
    at org.apache.calcite.sql.type.ReturnTypes.<clinit>(ReturnTypes.java:127)
    at org.apache.calcite.sql.SqlSetOperator.<init>(SqlSetOperator.java:45)
    at org.apache.calcite.sql.fun.SqlStdOperatorTable.<clinit>(SqlStdOperatorTable.java:97)
    at org.apache.calcite.sql2rel.StandardConvertletTable.<init>(StandardConvertletTable.java:101)
    at org.apache.calcite.sql2rel.StandardConvertletTable.<clinit>(StandardConvertletTable.java:91)
    at org.apache.calcite.tools.Frameworks$ConfigBuilder.<init>(Frameworks.java:234)
    at org.apache.calcite.tools.Frameworks$ConfigBuilder.<init>(Frameworks.java:215)
    at org.apache.calcite.tools.Frameworks.newConfigBuilder(Frameworks.java:199)
    at org.apache.flink.table.planner.delegation.PlannerContext.createFrameworkConfig(PlannerContext.java:145)
    at org.apache.flink.table.planner.delegation.PlannerContext.<init>(PlannerContext.java:129)
    at org.apache.flink.table.planner.delegation.PlannerBase.<init>(PlannerBase.scala:118)
    at org.apache.flink.table.planner.delegation.StreamPlanner.<init>(StreamPlanner.scala:55)
    at org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:62)
    at org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:53)
    at org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl$.create(StreamTableEnvironmentImpl.scala:323)
    at org.apache.flink.table.api.bridge.scala.StreamTableEnvironment$.create(StreamTableEnvironment.scala:925)
    at org.apache.flink.table.planner.runtime.utils.StreamingTestBase.before(StreamingTestBase.scala:54)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.MoreObjects
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 38 more
java.lang.NoClassDefFoundError: Could not initialize class org.apache.calcite.sql2rel.StandardConvertletTable
    at org.apache.calcite.tools.Frameworks$ConfigBuilder.<init>(Frameworks.java:234)
{code}

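The trace shows that a Guava class is missing from the classpath IDEA builds for 
these tests. Not part of the original report, but one way to inspect which Guava 
artifact (if any) a module resolves is Maven's dependency tree; the module path 
below is an illustrative guess:
{code:java}
// Hypothetical diagnostic, not from the report: list the Guava entries
// resolved for the flink-parquet module.
mvn dependency:tree -pl flink-formats/flink-parquet -Dincludes=com.google.guava:guava
{code}
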
[jira] [Created] (FLINK-25746) Failed to run ITCase locally with IDEA under flink-orc and flink-parquet module

2022-01-21 Thread Jane Chan (Jira)
Jane Chan created FLINK-25746:
-

 Summary: Failed to run ITCase locally with IDEA under flink-orc 
and flink-parquet module
 Key: FLINK-25746
 URL: https://issues.apache.org/jira/browse/FLINK-25746
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Jane Chan



[jira] [Commented] (FLINK-25312) Hive supports managed table

2022-01-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17475895#comment-17475895
 ] 

Jane Chan commented on FLINK-25312:
---

Hi [~lzljs3620320], I'm interested in this subtask; would you mind assigning it 
to me?

> Hive supports managed table
> ---
>
> Key: FLINK-25312
> URL: https://issues.apache.org/jira/browse/FLINK-25312
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25520) Implement "ALTER TABLE ... COMPACT" SQL

2022-01-05 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17469669#comment-17469669
 ] 

Jane Chan commented on FLINK-25520:
---

Hi [~lzljs3620320], I'm interested in this task; would you mind assigning it to 
me?

> Implement "ALTER TABLE ... COMPACT" SQL
> ---
>
> Key: FLINK-25520
> URL: https://issues.apache.org/jira/browse/FLINK-25520
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
> Environment: FLINK-25176 Introduce "ALTER TABLE ... COMPACT" syntax. 
> In this ticket, we should implement "ALTER TABLE ... COMPACT" SQL
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25176) Introduce "ALTER TABLE ... COMPACT" SQL

2021-12-19 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17462368#comment-17462368
 ] 

Jane Chan commented on FLINK-25176:
---

Hi [~lzljs3620320], I'm interested in this subtask; would you mind assigning it 
to me?

> Introduce "ALTER TABLE ... COMPACT" SQL
> ---
>
> Key: FLINK-25176
> URL: https://issues.apache.org/jira/browse/FLINK-25176
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.15.0
>
>
> * Introduce "ALTER TABLE ... COMPACT" SQL
>  * Work with managed table



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-22732) Restrict ALTER TABLE from setting empty table options

2021-05-20 Thread Jane Chan (Jira)
Jane Chan created FLINK-22732:
-

 Summary: Restrict ALTER TABLE from setting empty table options
 Key: FLINK-22732
 URL: https://issues.apache.org/jira/browse/FLINK-22732
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Jane Chan






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17845) Can't remove a table connector property with ALTER TABLE

2021-05-15 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17345019#comment-17345019
 ] 

Jane Chan commented on FLINK-17845:
---

Hi [~jark], sorry for joining this discussion late. I went through the 
conversation and have a question about this syntax's behavior. According to the 
[PostgreSQL doc|https://www.postgresql.org/docs/9.1/sql-altertable.html]
{quote}This form resets one or more storage parameters to their defaults
{quote}
I'd like to clarify which behavior the {{RESET('k1', 'k2')}} syntax is supposed 
to have: <1> remove the mentioned property keys {{'k1', 'k2'}}, or <2> remove 
all existing options and reset the mentioned keys {{'k1', 'k2'}} to their 
defaults, or <3> reset the mentioned keys {{'k1', 'k2'}} to their defaults and 
keep everything else unchanged.
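
To make the three candidate semantics concrete, here is a small sketch; the 
table and option keys are purely illustrative:
{code:sql}
-- Hypothetical table with three options ('k1', 'k2', 'k3' are placeholders).
CREATE TABLE t (id INT) WITH ('k1' = 'a', 'k2' = 'b', 'k3' = 'c');

ALTER TABLE t RESET ('k1', 'k2');
-- <1> keys removed:                 options become ('k3' = 'c')
-- <2> all removed, 'k1'/'k2' reset: options become ('k1' = default, 'k2' = default)
-- <3> 'k1'/'k2' reset, rest kept:   options become ('k1' = default, 'k2' = default, 'k3' = 'c')
{code}
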
> Can't remove a table connector property with ALTER TABLE
> 
>
> Key: FLINK-17845
> URL: https://issues.apache.org/jira/browse/FLINK-17845
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Fabian Hueske
>Priority: Major
>  Labels: stale-major
>
> It is not possible to remove an existing table property from a table.
> Looking at the [source 
> code|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sqlexec/SqlToOperationConverter.java#L295]
>  this seems to be the intended semantics, but it seems counter-intuitive to 
> me.
> If I create a table with the following statement:
> {code}
> CREATE TABLE `testTable` (
>   id INT
> )
> WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topicX' = 'test',  -- Woops, I made a typo here
> [...]
> )
> {code}
> The statement will be successfully executed. However, the table cannot be 
> used due to the typo.
> Fixing the typo with the following DDL is not possible:
> {code}
> ALTER TABLE `testTable` SET (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = 'test',  -- Fixing the typo
> )
> {code}
> because the key {{connector.topicX}} is not removed.
> Right now it seems that the only way to fix a table with an invalid key is to 
> DROP and CREATE it. I think that this use case should be supported by ALTER 
> TABLE.
> I would even argue that the expected behavior is that previous properties are 
> removed and replaced by the new properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-22337) Support 'ALTER TABLE ADD' statement

2021-04-18 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-22337.
-
Resolution: Duplicate

> Support 'ALTER TABLE ADD' statement
> ---
>
> Key: FLINK-22337
> URL: https://issues.apache.org/jira/browse/FLINK-22337
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API, Table SQL / Client
>Affects Versions: 1.14.0
>Reporter: Jane Chan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22337) Support 'ALTER TABLE ADD' statement

2021-04-18 Thread Jane Chan (Jira)
Jane Chan created FLINK-22337:
-

 Summary: Support 'ALTER TABLE ADD' statement
 Key: FLINK-22337
 URL: https://issues.apache.org/jira/browse/FLINK-22337
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.14.0
Reporter: Jane Chan






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21045) Improve Usability of Pluggable Modules

2021-03-23 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17307074#comment-17307074
 ] 

Jane Chan commented on FLINK-21045:
---

Thanks to all of you :D

> Improve Usability of Pluggable Modules
> --
>
> Key: FLINK-21045
> URL: https://issues.apache.org/jira/browse/FLINK-21045
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Nicholas Jiang
>Assignee: Jane Chan
>Priority: Major
> Fix For: 1.13.0
>
>
> This improvement aims to
>  # Simplify the module discovery mapping by module name. This will encourage 
> users to create singletons of module instances.
>  # Support changing the resolution order of modules in a flexible manner. 
> This will introduce two methods {{#useModules}} and {{#listFullModules}} in 
> both {{ModuleManager}} and {{TableEnvironment}}.
>  # Support SQL syntax upon {{LOAD/UNLOAD MODULE}}, {{USE MODULES}}, and 
> {{SHOW [FULL] MODULES}} in both {{FlinkSqlParserImpl}} and {{SqlClient}}.
>  # Update the documentation to keep users informed of this improvement.
> Please reach to the [discussion 
> thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLINK-21045-Support-load-module-and-unload-module-SQL-syntax-td48398.html]
>  for more details.
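
For reference, a quick sketch of the SQL forms described above as they were 
introduced by this improvement (the Hive version value is illustrative):
{code:sql}
LOAD MODULE hive WITH ('hive-version' = '2.3.6');
USE MODULES hive, core;  -- changes the resolution order
SHOW FULL MODULES;       -- lists modules together with their 'used' status
UNLOAD MODULE hive;
{code}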



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21849) 'SHOW MODULES' tests for CliClientITCase lack the default module test case

2021-03-18 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303879#comment-17303879
 ] 

Jane Chan commented on FLINK-21849:
---

Hi [~nicholasjiang] and [~jark], thanks for reporting this issue. I'll work on 
the improvement right now :)

> 'SHOW MODULES' tests for CliClientITCase lack the default module test case
> --
>
> Key: FLINK-21849
> URL: https://issues.apache.org/jira/browse/FLINK-21849
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Nicholas Jiang
>Assignee: Jane Chan
>Priority: Minor
> Fix For: 1.13.0
>
>
> Currently, the 'SHOW MODULES' command tests for CliClientITCase lack the 
> default module test case; the default module is the core module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21774) Do not display column names when return set is emtpy in SQL Client

2021-03-16 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17302313#comment-17302313
 ] 

Jane Chan commented on FLINK-21774:
---

Hi [~nicholasjiang], please update the docs as well; see 
[https://github.com/apache/flink/pull/15230]

> Do not display column names when return set is emtpy in SQL Client
> --
>
> Key: FLINK-21774
> URL: https://issues.apache.org/jira/browse/FLINK-21774
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
>
> Currently, SQL Client will display column names even if the return set is 
> empty:
> {code}
> SHOW MODULES;
> +-+
> | module name |
> +-+
> 0 row in set
> !ok
> {code}
> In mature databases, e.g. MySQL, they only show "Empty Set" instead of column 
> names:
> {code}
> mysql> show tables;
> Empty set (0.00 sec)
> {code}
> We can improve this by simply omitting the column names header. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21738) reduce unnecessary method calls in ModuleManager

2021-03-11 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17300097#comment-17300097
 ] 

Jane Chan commented on FLINK-21738:
---

Hi [~zoucao], I guess using the `json_tuple` lateral table function may be much 
more suitable for parsing multiple key-value pairs:

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-json_tuple
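
A minimal sketch of the Hive syntax from that page, assuming a table {{src}} 
with a STRING column {{json}} (table, column, and key names are illustrative):
{code:sql}
-- Extract two keys per row in a single pass instead of calling a
-- JSON accessor once per key.
SELECT t.id, b.v1, b.v2
FROM src t
LATERAL VIEW json_tuple(t.json, 'key1', 'key2') b AS v1, v2;
{code}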

> reduce unnecessary method calls  in ModuleManager
> -
>
> Key: FLINK-21738
> URL: https://issues.apache.org/jira/browse/FLINK-21738
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: zoucao
>Priority: Major
>
> In Flink SQL, if we use many functions (Hive functions or Flink built-in 
> functions), Flink will call the method `getFunctionDefinition` in 
> [ModuleManager|https://github.com/apache/flink/blob/97bfd049951f8d52a2e0aed14265074c4255ead0/flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/module/ModuleManager.java#L44]
>  many times to load them, and each module's `listFunctions` method will be 
> called at the same time. I think the same result will be returned for one 
> module, so maybe a cache should be used here to reduce the wasted time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21738) reduce unnecessary method calls in ModuleManager

2021-03-11 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299988#comment-17299988
 ] 

Jane Chan commented on FLINK-21738:
---

Hi [~zoucao], thanks for bringing up this issue. 
{quote}the same result will be returned for one module, so maybe a cache should 
be used here to reduce time waste.
{quote}
IMO, a map (function definition -> the module it belongs to) in the module 
manager would indeed speed up the lookup. However, the resolution order makes a 
difference: if there are two functions with the same name, Flink always 
resolves the object reference to the one in the first loaded module. At the 
same time, FLINK-21045 supports loading/unloading modules and changing their 
resolution order during a running session. So if we decide to support the 
cache, we also need to track the resolution order and invalidate the cache 
whenever it changes, which makes cache maintenance a bit more complicated.

What do you think? [~jark]
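
Not Flink code, just a rough sketch of that maintenance concern (all names are 
made up for illustration): the cached entries must be dropped whenever the 
resolution order changes, because a cached name may then resolve to a different 
module.
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

/** Illustrative sketch: caches function name -> owning module. */
class CachingFunctionResolver {
    private List<String> resolutionOrder;                      // e.g. ["core", "hive"]
    private final Map<String, Set<String>> moduleFunctions;    // module -> function names
    private final Map<String, String> cache = new HashMap<>(); // function -> module

    CachingFunctionResolver(List<String> order, Map<String, Set<String>> functions) {
        this.resolutionOrder = new ArrayList<>(order);
        this.moduleFunctions = functions;
    }

    /** Resolves a name to the first module, in resolution order, that provides it. */
    Optional<String> resolve(String name) {
        String cached = cache.get(name);
        if (cached != null) {
            return Optional.of(cached);
        }
        for (String module : resolutionOrder) {
            if (moduleFunctions.getOrDefault(module, Collections.emptySet()).contains(name)) {
                cache.put(name, module);
                return Optional.of(module);
            }
        }
        return Optional.empty();
    }

    /** Changing the order may change which module wins, so the cache is cleared. */
    void useModules(List<String> newOrder) {
        this.resolutionOrder = new ArrayList<>(newOrder);
        cache.clear();
    }
}
{code}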

> reduce unnecessary method calls  in ModuleManager
> -
>
> Key: FLINK-21738
> URL: https://issues.apache.org/jira/browse/FLINK-21738
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: zoucao
>Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21717) Links to Java/Python docs are Broken

2021-03-10 Thread Jane Chan (Jira)
Jane Chan created FLINK-21717:
-

 Summary: Links to Java/Python docs are Broken
 Key: FLINK-21717
 URL: https://issues.apache.org/jira/browse/FLINK-21717
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.13.0
Reporter: Jane Chan
 Attachments: image-2021-03-10-20-22-09-425.png, 
image-2021-03-10-20-31-52-502.png

After migrating to Hugo, all the hyperlinks that refer to the Python docs became 
unreachable.

E.g. {{docs/dev/python/table_environment.md}} 
{code:html}
link
{code}
is generating 
[http://localhost:1313/docs/dev/python/table/table_environment/%7B%7B%20site.pythondocs_baseurl%20%7D%7D/api/python/pyflink.table.html#pyflink.table.TableEnvironment.from_elements]

 Another problem is that the param defined in config.toml is *JavaDocs*, while 
the shortcode javadoc.html has a typo (*JavaDoc*), which renders broken links 
as well.

E.g. {{docs/connectors/datastream/jdbc.md}}

!image-2021-03-10-20-31-52-502.png!

is generating [http://api/java/org/apache/flink/connector/jdbc/JdbcSink.html]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21300) Update module documentation

2021-03-10 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298646#comment-17298646
 ] 

Jane Chan commented on FLINK-21300:
---

Will open a PR once [https://github.com/apache/flink/pull/15135] is merged.

> Update module documentation
> ---
>
> Key: FLINK-21300
> URL: https://issues.apache.org/jira/browse/FLINK-21300
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API, Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21651) Migrate module-related tests in LocalExecutorITCase to new integration test framework

2021-03-07 Thread Jane Chan (Jira)
Jane Chan created FLINK-21651:
-

 Summary: Migrate module-related tests in LocalExecutorITCase to 
new integration test framework
 Key: FLINK-21651
 URL: https://issues.apache.org/jira/browse/FLINK-21651
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Client
Affects Versions: 1.13.0
Reporter: Jane Chan


Migrate module-related tests in `LocalExecutorITCase` after FLINK-21614 is merged.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

