[jira] [Created] (FLINK-35519) Flink Job fails with SingleValueAggFunction received more than one element

2024-06-04 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35519:


 Summary: Flink Job fails with SingleValueAggFunction received more 
than one element
 Key: FLINK-35519
 URL: https://issues.apache.org/jira/browse/FLINK-35519
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz


When running a query:
{code}
select
   (SELECT t.id
    FROM raw_pagerduty_users, UNNEST(teams) AS t(id, type, summary, self, html_url))
from raw_pagerduty_users;
{code}
it is translated to:

{code}
Sink(table=[default_catalog.default_database.sink], fields=[EXPR$0])
+- Calc(select=[$f0 AS EXPR$0])
   +- Join(joinType=[LeftOuterJoin], where=[true], select=[c, $f0], 
leftInputSpec=[NoUniqueKey], rightInputSpec=[JoinKeyContainsUniqueKey])
  :- Exchange(distribution=[single])
  :  +- Calc(select=[c])
  : +- TableSourceScan(table=[[default_catalog, default_database, 
raw_pagerduty_users, project=[c, teams], metadata=[]]], fields=[c, 
teams])(reuse_id=[1])
  +- Exchange(distribution=[single])
 +- GroupAggregate(select=[SINGLE_VALUE(id) AS $f0])
+- Exchange(distribution=[single])
   +- Calc(select=[id])
  +- Correlate(invocation=[$UNNEST_ROWS$1($cor0.teams)], 
correlate=[table($UNNEST_ROWS$1($cor0.teams))], 
select=[c,teams,id,type,summary,self,html_url], rowType=[RecordType(BIGINT c, 
RecordType:peek_no_expand(VARCHAR(2147483647) id, VARCHAR(2147483647) type, 
VARCHAR(2147483647) summary, VARCHAR(2147483647) self, VARCHAR(2147483647) 
html_url) ARRAY teams, VARCHAR(2147483647) id, VARCHAR(2147483647) type, 
VARCHAR(2147483647) summary, VARCHAR(2147483647) self, VARCHAR(2147483647) 
html_url)], joinType=[INNER])
 +- Reused(reference_id=[1])
{code}

and it fails with:

{code}
java.lang.RuntimeException: SingleValueAggFunction received more than one 
element.
at GroupAggsHandler$150.accumulate(Unknown Source)
at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
at 
org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:83)
at 
org.apache.flink.streaming.runtime.io.RecordProcessorUtils.lambda$getRecordProcessor$0(RecordProcessorUtils.java:60)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:237)
at 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:146)
at 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:110)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:571)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:900)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:849)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.base/java.lang.Thread.run(Thread.java:829)
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35437) BlockStatementGrouper uses lots of memory

2024-05-23 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35437:


 Summary: BlockStatementGrouper uses lots of memory
 Key: FLINK-35437
 URL: https://issues.apache.org/jira/browse/FLINK-35437
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


For deeply nested {{if else}} statements {{BlockStatementGrouper}} uses a lot
of memory and quickly fails with an OOM.

When running JMs with around 400 MB of memory, a query like:
{code}
select case when orderid = 0 then 1 when orderid = 1 then 2 when orderid
= 2 then 3 when orderid = 3 then 4 when orderid = 4 then 5 when orderid = 5 
then
6 when orderid = 6 then 7 when orderid = 7 then 8 when orderid = 8 then 9 
when
orderid = 9 then 10 when orderid = 10 then 11 when orderid = 11 then 12 
when orderid
= 12 then 13 when orderid = 13 then 14 when orderid = 14 then 15 when 
orderid
= 15 then 16 when orderid = 16 then 17 when orderid = 17 then 18 when 
orderid
= 18 then 19 when orderid = 19 then 20 when orderid = 20 then 21 when 
orderid
= 21 then 22 when orderid = 22 then 23 when orderid = 23 then 24 when 
orderid
= 24 then 25 when orderid = 25 then 26 when orderid = 26 then 27 when 
orderid
= 27 then 28 when orderid = 28 then 29 when orderid = 29 then 30 when 
orderid
= 30 then 31 when orderid = 31 then 32 when orderid = 32 then 33 when 
orderid
= 33 then 34 when orderid = 34 then 35 when orderid = 35 then 36 when 
orderid
= 36 then 37 when orderid = 37 then 38 when orderid = 38 then 39 when 
orderid
= 39 then 40 when orderid = 40 then 41 when orderid = 41 then 42 when 
orderid
= 42 then 43 when orderid = 43 then 44 when orderid = 44 then 45 when 
orderid
= 45 then 46 when orderid = 46 then 47 when orderid = 47 then 48 when 
orderid
= 48 then 49 when orderid = 49 then 50 when orderid = 50 then 51 when 
orderid
= 51 then 52 when orderid = 52 then 53 when orderid = 53 then 54 when 
orderid
= 54 then 55 when orderid = 55 then 56 when orderid = 56 then 57 when 
orderid
= 57 then 58 when orderid = 58 then 59 when orderid = 59 then 60 when 
orderid
= 60 then 61 when orderid = 61 then 62 when orderid = 62 then 63 when 
orderid
= 63 then 64 when orderid = 64 then 65 when orderid = 65 then 66 when 
orderid
= 66 then 67 when orderid = 67 then 68 when orderid = 68 then 69 when 
orderid
= 69 then 70 when orderid = 70 then 71 when orderid = 71 then 72 when 
orderid
= 72 then 73 when orderid = 73 then 74 when orderid = 74 then 75 when 
orderid
= 75 then 76 when orderid = 76 then 77 when orderid = 77 then 78 when 
orderid
= 78 then 79 when orderid = 79 then 80 when orderid = 80 then 81 when 
orderid
= 81 then 82 when orderid = 82 then 83 when orderid = 83 then 84 when 
orderid
= 84 then 85 when orderid = 85 then 86 when orderid = 86 then 87 when 
orderid
= 87 then 88 when orderid = 88 then 89 when orderid = 89 then 90 when 
orderid
= 90 then 91 when orderid = 91 then 92 when orderid = 92 then 93 when 
orderid
= 93 then 94 when orderid = 94 then 95 when orderid = 95 then 96 when 
orderid
= 96 then 97 when orderid = 97 then 98 when orderid = 98 then 99 when 
orderid
= 99 then 100 when orderid = 100 then 101 when orderid = 101 then 102 when 
orderid
= 102 then 103 when orderid = 103 then 104 when orderid = 104 then 105 when 
orderid
= 105 then 106 when orderid = 106 then 107 when orderid = 107 then 108 when 
orderid
= 108 then 109 when orderid = 109 then 110 when orderid = 110 then 111 when 
orderid
= 111 then 112 when orderid = 112 then 113 when orderid = 113 then 114 when 
orderid
= 114 then 115 when orderid = 115 then 116 when orderid = 116 then 117 when 
orderid
= 117 then 118 when orderid = 118 then 119 when orderid = 119 then 120 when 
orderid
= 120 then 121 when orderid = 121 then 122 when orderid = 122 then 123 when 
orderid
= 123 then 124 when orderid = 124 then 125 when orderid = 125 then 126 when 
orderid
= 126 then 127 when orderid = 127 then 128 when orderid = 128 then 129 when 
orderid
= 129 then 130 when orderid = 130 then 131 when orderid = 131 then 132 when 
orderid
= 132 then 133 when orderid = 133 then 134 when orderid = 134 then 135 when 
orderid
= 135 then 136 when orderid = 136 then 137 when orderid = 137 then 138 when 
orderid
= 138 then 139 when orderid = 139 then 140 when orderid = 140 then 141 when 
orderid
= 141 then 142 when orderid = 142 then 143 when orderid = 143 then 144 when 
orderid
= 144 then 145 when orderid = 145 then 146 when orderid = 146 then 147 when 
orderid
= 147 then 148 when orderid = 148 then 149 when orderid = 149 then 150 when 
orderid
= 150 then 151 when orderid = 151 then 152 when orderid = 152 then 153 when 

[jira] [Created] (FLINK-35216) Support for RETURNING clause of JSON_QUERY

2024-04-23 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35216:


 Summary: Support for RETURNING clause of JSON_QUERY
 Key: FLINK-35216
 URL: https://issues.apache.org/jira/browse/FLINK-35216
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


The SQL standard says {{JSON_QUERY}} should support a RETURNING clause similar to
{{JSON_VALUE}}. Calcite already supports the clause for JSON_VALUE, but not for
JSON_QUERY.

{code}
<JSON query> ::=
  JSON_QUERY <left paren>
  <JSON API common syntax>
  [ <JSON output clause> ]
  [ <JSON query wrapper behavior> WRAPPER ]
  [ <JSON query quotes behavior> QUOTES [ ON SCALAR STRING ] ]
  [ <JSON query empty behavior> ON EMPTY ]
  [ <JSON query error behavior> ON ERROR ]
  <right paren>

<JSON output clause> ::=
  RETURNING <data type>
  [ FORMAT <JSON representation> ]
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35155) Introduce TableRuntimeException

2024-04-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35155:


 Summary: Introduce TableRuntimeException
 Key: FLINK-35155
 URL: https://issues.apache.org/jira/browse/FLINK-35155
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


The {{throwException}} internal function throws a {{RuntimeException}}. It would
be nice to throw a more specific kind of exception from there, so that such
failures are easier to classify.
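
A minimal sketch of what such a dedicated exception could look like (the class
shape below is an assumption for illustration, not the final API):

{code}
// Hypothetical sketch only - the name and constructors are assumptions.
public class TableRuntimeException extends RuntimeException {

    public TableRuntimeException(String message) {
        super(message);
    }

    public TableRuntimeException(String message, Throwable cause) {
        super(message, cause);
    }
}
{code}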



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35021) AggregateQueryOperations produces wrong asSerializableString representation

2024-04-05 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-35021:


 Summary: AggregateQueryOperations produces wrong 
asSerializableString representation
 Key: FLINK-35021
 URL: https://issues.apache.org/jira/browse/FLINK-35021
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


A Table API query:
{code}
env.fromValues(1, 2, 3)
.as("number")
.select(col("number").count())
.insertInto(TEST_TABLE_API)
{code}

produces

{code}
INSERT INTO `default`.`timo_eu_west_1`.`table_api_basic_api` SELECT `EXPR$0` 
FROM (
SELECT (COUNT(`number`)) AS `EXPR$0` FROM (
SELECT `f0` AS `number` FROM (
SELECT `f0` FROM (VALUES 
(1),
(2),
(3)
) VAL$0(`f0`)
)
)
GROUP BY 
)
{code}

which is missing a grouping expression after {{GROUP BY}} and is therefore not valid SQL.
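
For comparison, a sketch of a valid serializable form (assuming a global
aggregation should simply omit the {{GROUP BY}} clause):

{code}
INSERT INTO `default`.`timo_eu_west_1`.`table_api_basic_api` SELECT `EXPR$0`
FROM (
SELECT (COUNT(`number`)) AS `EXPR$0` FROM (
SELECT `f0` AS `number` FROM (
SELECT `f0` FROM (VALUES
(1),
(2),
(3)
) VAL$0(`f0`)
)
)
)
{code}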



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34938) Incorrect behaviour for comparison functions

2024-03-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34938:


 Summary: Incorrect behaviour for comparison functions
 Key: FLINK-34938
 URL: https://issues.apache.org/jira/browse/FLINK-34938
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


There are a few issues with comparison functions.

Some versions throw:
{code}
Incomparable types: TIMESTAMP_LTZ(3) NOT NULL and TIMESTAMP(3)
{code}

Results of other comparisons depend on the order of the operands, because the
least restrictive precision is not calculated correctly.

E.g.

{code}
final Instant ltz3 = Instant.ofEpochMilli(1_123);
final Instant ltz0 = Instant.ofEpochMilli(1_000);

TestSetSpec.forFunction(BuiltInFunctionDefinitions.EQUALS)
.onFieldsWithData(ltz3, ltz0)
.andDataTypes(TIMESTAMP_LTZ(3), TIMESTAMP_LTZ(0))
// compare same type, but different precision, should 
always adjust to the higher precision
.testResult($("f0").isEqual($("f1")), "f0 = f1", false, 
DataTypes.BOOLEAN())
.testResult($("f1").isEqual($("f0")), "f1 = f0", true 
/* but should be false */, DataTypes.BOOLEAN())
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34910) Can not plan window join without projections

2024-03-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34910:


 Summary: Can not plan window join without projections
 Key: FLINK-34910
 URL: https://issues.apache.org/jira/browse/FLINK-34910
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


When running:
{code}
  @Test
  def testWindowJoinWithoutProjections(): Unit = {
val sql =
  """
|SELECT *
|FROM
|  TABLE(TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE)) AS L
|JOIN
|  TABLE(TUMBLE(TABLE MyTable2, DESCRIPTOR(rowtime), INTERVAL '15' 
MINUTE)) AS R
|ON L.window_start = R.window_start AND L.window_end = R.window_end AND 
L.a = R.a
  """.stripMargin
util.verifyRelPlan(sql)
  }
{code}

It fails with:
{code}
FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME_MATERIALIZE(proctime) AS 
proctime, window_start, window_end, window_time, a0, b0, c0, rowtime0, 
PROCTIME_MATERIALIZE(proctime0) AS proctime0, window_start0, window_end0, 
window_time0])
+- FlinkLogicalCorrelate(correlation=[$cor0], joinType=[inner], 
requiredColumns=[{}])
   :- FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR($3), 
90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) b, 
BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* proctime, 
TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) *ROWTIME* 
window_time)])
   :  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
1000:INTERVAL SECOND)])
   : +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS proctime])
   :+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, MyTable]], fields=[a, b, c, rowtime])
   +- 
FlinkLogicalTableFunctionScan(invocation=[TUMBLE(DESCRIPTOR(CAST($3):TIMESTAMP(3)),
 90:INTERVAL MINUTE)], rowType=[RecordType(INTEGER a, VARCHAR(2147483647) 
b, BIGINT c, TIMESTAMP(3) *ROWTIME* rowtime, TIMESTAMP_LTZ(3) *PROCTIME* 
proctime, TIMESTAMP(3) window_start, TIMESTAMP(3) window_end, TIMESTAMP(3) 
*ROWTIME* window_time)])
  +- FlinkLogicalWatermarkAssigner(rowtime=[rowtime], watermark=[-($3, 
1000:INTERVAL SECOND)])
 +- FlinkLogicalCalc(select=[a, b, c, rowtime, PROCTIME() AS proctime])
+- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, MyTable2]], fields=[a, b, c, rowtime])

Failed to get time attribute index from DESCRIPTOR(CAST($3):TIMESTAMP(3)). This 
is a bug, please file a JIRA issue.
Please check the documentation for the set of currently supported SQL features.
{code}

In prior versions this had another problem of an ambiguous {{rowtime}} column, but
that has been fixed by [FLINK-32648]. In versions < 1.19 WindowTableFunctions
were incorrectly scoped, because they did not extend Calcite's
SqlWindowTableFunction and the scoping implemented in
SqlValidatorImpl#convertFrom was incorrect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34745) Parsing temporal table join throws cryptic exceptions

2024-03-19 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34745:


 Summary: Parsing temporal table join throws cryptic exceptions
 Key: FLINK-34745
 URL: https://issues.apache.org/jira/browse/FLINK-34745
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.19.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.20.0


1. Wrong expression type in `AS OF`:
{code}
SELECT * " +
  "FROM Orders AS o JOIN " +
  "RatesHistoryWithPK FOR SYSTEM_TIME AS OF 'o.rowtime' AS r " +
  "ON o.currency = r.currency
{code}

throws: 

{code}
java.lang.AssertionError: cannot convert CHAR literal to class 
org.apache.calcite.util.TimestampString
{code}

2. Not a simple table reference:
{code}
SELECT * " +
  "FROM Orders AS o JOIN " +
  "RatesHistoryWithPK FOR SYSTEM_TIME AS OF o.rowtime + INTERVAL '1' SECOND 
AS r " +
  "ON o.currency = r.currency
{code}

throws:
{code}
java.lang.AssertionError: no unique expression found for {id: o.rowtime, 
prefix: 1}; count is 0
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34507) JSON functions have wrong operand checker

2024-02-23 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34507:


 Summary: JSON functions have wrong operand checker
 Key: FLINK-34507
 URL: https://issues.apache.org/jira/browse/FLINK-34507
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.1
Reporter: Dawid Wysakowicz


I believe that all JSON functions (`JSON_VALUE`, `JSON_QUERY`, ...) have a wrong
operand checker.

As far as I can tell, the first argument (the JSON) should be a `STRING`
argument. That's what all other systems do (some, e.g. Oracle, additionally
accept CLOB/BLOB).

Via Calcite we accept the `ANY` type there, which I believe is wrong:
https://github.com/apache/calcite/blob/c49792f9c72159571f898c5fca1e26cba9870b07/core/src/main/java/org/apache/calcite/sql/fun/SqlJsonValueFunction.java#L61
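
For illustration, with a STRING-only operand checker a call like the following
(hypothetical table and column names) should be rejected at validation time,
instead of slipping through via the `ANY` operand type:

{code}
-- f_int is an INT column; today the validator accepts this call
SELECT JSON_VALUE(f_int, '$.a') FROM t;
{code}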



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34399:


 Summary: Release Testing: Verify FLINK-33644 Make QueryOperations 
SQL serializable
 Key: FLINK-34399
 URL: https://issues.apache.org/jira/browse/FLINK-34399
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz


Test suggestions:
1. Write a few Table API programs.
2. Call {{Table.getQueryOperation().asSerializableString()}} and manually verify
the produced SQL query.
3. Check that the produced SQL query is runnable and produces the same results
as the Table API program:

{code}
Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

// verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect();
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34153) Set ALWAYS ChainingStrategy in TemporalSort

2024-01-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34153:


 Summary: Set ALWAYS ChainingStrategy in TemporalSort
 Key: FLINK-34153
 URL: https://issues.apache.org/jira/browse/FLINK-34153
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Similarly to FLINK-27992, we should set the ALWAYS chaining strategy in the
TemporalSort operator.
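
A sketch of the change (where exactly the strategy is set for the temporal sort
operator is an assumption; the general mechanism is the chaining strategy on the
stream operator):

{code}
// Hypothetical sketch: force the temporal sort operator to always chain,
// mirroring what FLINK-27992 did for other operators.
temporalSortOperator.setChainingStrategy(ChainingStrategy.ALWAYS);
{code}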



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33823) Serialize PlannerQueryOperation into SQL

2023-12-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33823:


 Summary: Serialize PlannerQueryOperation into SQL
 Key: FLINK-33823
 URL: https://issues.apache.org/jira/browse/FLINK-33823
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33754) Serialize QueryOperations into SQL

2023-12-05 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33754:


 Summary: Serialize QueryOperations into SQL
 Key: FLINK-33754
 URL: https://issues.apache.org/jira/browse/FLINK-33754
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33675) Serialize ValueLiteralExpressions into SQL

2023-11-28 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33675:


 Summary: Serialize ValueLiteralExpressions into SQL
 Key: FLINK-33675
 URL: https://issues.apache.org/jira/browse/FLINK-33675
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33663) Serialize CallExpressions into SQL

2023-11-27 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33663:


 Summary: Serialize CallExpressions into SQL
 Key: FLINK-33663
 URL: https://issues.apache.org/jira/browse/FLINK-33663
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


The task is about introducing {{CallSyntax}} and implementing versions of it for
non-standard SQL functions.
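
A rough sketch of the shape such an abstraction could take (hypothetical method
signature, not the actual Flink API):

{code}
import java.util.List;
import org.apache.flink.table.expressions.ResolvedExpression;

// Hypothetical shape: a per-function strategy that knows how to unparse a
// resolved call into a SQL string, overridable for non-standard functions.
interface CallSyntax {
    String unparse(String sqlName, List<ResolvedExpression> operands);
}
{code}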



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33644) FLIP-393: Make QueryOperations SQL serializable

2023-11-24 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33644:


 Summary: FLIP-393: Make QueryOperations SQL serializable
 Key: FLINK-33644
 URL: https://issues.apache.org/jira/browse/FLINK-33644
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33599) Run restore tests with RocksDB state backend

2023-11-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33599:


 Summary: Run restore tests with RocksDB state backend
 Key: FLINK-33599
 URL: https://issues.apache.org/jira/browse/FLINK-33599
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33597) Can not use a nested column for a join condition

2023-11-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33597:


 Summary: Can not use a nested column for a join condition
 Key: FLINK-33597
 URL: https://issues.apache.org/jira/browse/FLINK-33597
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Query:
{code}
SELECT A.after.CUSTOMER_ID FROM `CUSTOMERS` A INNER JOIN `PRODUCTS` B ON 
A.after.CUSTOMER_ID = B.after.PURCHASER;
{code}

fails with:

{code}
java.lang.RuntimeException: Error while applying rule 
FlinkProjectWatermarkAssignerTransposeRule, args 
[rel#411017:LogicalProject.NONE.any.None: 
0.[NONE].[NONE](input=RelSubset#411016,exprs=[$2, $2.CUSTOMER_ID]), 
rel#411015:LogicalWatermarkAssigner.NONE.any.None: 
0.[NONE].[NONE](input=RelSubset#411014,rowtime=$rowtime,watermark=SOURCE_WATERMARK())]
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:250)
...
Caused by: java.lang.IllegalArgumentException: Type mismatch:
rel rowtype: RecordType(RecordType:peek_no_expand(INTEGER NOT NULL CUSTOMER_ID, 
VARCHAR(2147483647) CHARACTER SET "UTF-16LE" CUSTOMER_NAME, VARCHAR(2147483647) 
CHARACTER SET "UTF-16LE" TITLE, INTEGER DOB) after, INTEGER $f8) NOT NULL
equiv rowtype: RecordType(RecordType:peek_no_expand(INTEGER NOT NULL 
CUSTOMER_ID, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" CUSTOMER_NAME, 
VARCHAR(2147483647) CHARACTER SET "UTF-16LE" TITLE, INTEGER DOB) after, INTEGER 
NOT NULL $f8) NOT NULL
Difference:
$f8: INTEGER -> INTEGER NOT NULL

at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:592)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:613)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:144)
... 50 more
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33563) Implement type inference for Agg functions

2023-11-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33563:


 Summary: Implement type inference for Agg functions
 Key: FLINK-33563
 URL: https://issues.apache.org/jira/browse/FLINK-33563
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


* COLLECT
* DISTINCT



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33439) Implement type inference for IN function

2023-11-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33439:


 Summary: Implement type inference for IN function
 Key: FLINK-33439
 URL: https://issues.apache.org/jira/browse/FLINK-33439
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33431) Create restore tests for ExecNodes

2023-11-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33431:


 Summary: Create restore tests for ExecNodes
 Key: FLINK-33431
 URL: https://issues.apache.org/jira/browse/FLINK-33431
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Bonnie Varghese
 Fix For: 1.19.0


As a follow-up to FLINK-25217, we should create tests for restoring all
{{ExecNodes}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33419) Port PROCTIME/ROWTIME functions to the new inference stack

2023-10-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33419:


 Summary: Port PROCTIME/ROWTIME functions to the new inference stack
 Key: FLINK-33419
 URL: https://issues.apache.org/jira/browse/FLINK-33419
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33412) Implement type inference for reinterpret_cast function

2023-10-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33412:


 Summary: Implement type inference for reinterpret_cast function
 Key: FLINK-33412
 URL: https://issues.apache.org/jira/browse/FLINK-33412
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/Reinterpret.scala



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33411) Implement type inference for window properties functions

2023-10-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33411:


 Summary: Implement type inference for window properties functions
 Key: FLINK-33411
 URL: https://issues.apache.org/jira/browse/FLINK-33411
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/windowProperties.scala

Functions:
* WINDOW_START
* WINDOW_END



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33410) Implement type inference for Over offsets functions

2023-10-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33410:


 Summary: Implement type inference for Over offsets functions
 Key: FLINK-33410
 URL: https://issues.apache.org/jira/browse/FLINK-33410
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/overOffsets.scala

Functions:
* CURRENT_RANGE
* CURRENT_ROW
* UNBOUNDED_ROW
* UNBOUNDED_RANGE



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33407) Port time functions to the new type inference stack

2023-10-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33407:


 Summary: Port time functions to the new type inference stack
 Key: FLINK-33407
 URL: https://issues.apache.org/jira/browse/FLINK-33407
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


The end goal for this task is to remove 
https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/time.scala

For that to happen we need to port the following functions to the new type
inference:
* EXTRACT
* CURRENT_DATE
* CURRENT_TIME
* CURRENT_TIMESTAMP
* LOCAL_TIME
* LOCAL_TIMESTAMP
* TEMPORAL_OVERLAPS
* DATE_FORMAT
* TIMESTAMP_DIFF
* TO_TIMESTAMP_LTZ



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33375) Add a RestoreTestBase

2023-10-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33375:


 Summary: Add a RestoreTestBase
 Key: FLINK-33375
 URL: https://issues.apache.org/jira/browse/FLINK-33375
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Add a test base class for writing restore tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33372) Cryptic exception for a sub query in a CompiledPlan

2023-10-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33372:


 Summary: Cryptic exception for a sub query in a CompiledPlan
 Key: FLINK-33372
 URL: https://issues.apache.org/jira/browse/FLINK-33372
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: Dawid Wysakowicz


SQL statements with a subquery can be compiled to a plan, but such plans cannot
be executed; they fail with a cryptic exception.

Example:

{code}
final CompiledPlan compiledPlan = tEnv.compilePlanSql("insert into MySink 
SELECT * FROM LATERAL TABLE(func1(select c from MyTable))");

tEnv.loadPlan(PlanReference.fromJsonString(compiledPlan.asJsonString())).execute();
{code}

fails with:
{code}
org.apache.flink.table.planner.codegen.CodeGenException: Unsupported call: 
$SCALAR_QUERY() 
If you think this function should be supported, you can create an issue and 
start a discussion for it.
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33371) Make TestValues sinks return results as Rows

2023-10-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33371:


 Summary: Make TestValues sinks return results as Rows
 Key: FLINK-33371
 URL: https://issues.apache.org/jira/browse/FLINK-33371
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner, Tests
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


If we want to use the predicates from
https://github.com/apache/flink/pull/23584 in restore tests, we need to make
the testing sinks return Rows instead of Strings.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33327) Window TVF column expansion does not work with an INSERT INTO

2023-10-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33327:


 Summary: Window TVF column expansion does not work with an INSERT 
INTO
 Key: FLINK-33327
 URL: https://issues.apache.org/jira/browse/FLINK-33327
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


If we have an {{INSERT INTO}} with an explicit column list and a {{TUMBLE}}
function, the explicit column expansion fails with a {{NullPointerException}}.
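
A hypothetical statement of that shape (table and column names made up for
illustration):

{code}
INSERT INTO sink (window_start, window_end, cnt)
SELECT window_start, window_end, COUNT(*)
FROM TABLE(TUMBLE(TABLE src, DESCRIPTOR(ts), INTERVAL '5' MINUTE))
GROUP BY window_start, window_end;
{code}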



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33255) Validate argument count during type inference

2023-10-12 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33255:


 Summary: Validate argument count during type inference
 Key: FLINK-33255
 URL: https://issues.apache.org/jira/browse/FLINK-33255
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Currently we do not validate the argument count in
{{TypeInferenceOperandInference}}, which results in bugs such as
[FLINK-33248]. We already run the check in {{TypeInferenceUtil}} when running
inference for the Table API, so we should do the same in the
{{TypeInferenceOperandInference}} case.

We could expose {{TypeInferenceUtil#validateArgumentCount}} and call it. If the
check fails, we should not adapt the {{operandTypes}}.
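
A rough sketch of the proposed guard (the exact signature of
{{validateArgumentCount}} and the surrounding variable names are assumptions):

{code}
// Hypothetical sketch inside TypeInferenceOperandInference#inferOperandTypes:
// skip operand type adaption when the argument count is already invalid.
if (!TypeInferenceUtil.validateArgumentCount(
        typeInference.getInputTypeStrategy().getArgumentCount(),
        callContext.getArgumentDataTypes().size(),
        false)) {
    return; // leave operandTypes untouched
}
{code}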




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33223) MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled plan

2023-10-09 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33223:


 Summary: MATCH_RECOGNIZE AFTER MATCH clause can not be 
deserialised from a compiled plan
 Key: FLINK-33223
 URL: https://issues.apache.org/jira/browse/FLINK-33223
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


{code}
String sql =
"insert into MySink"
+ " SELECT * FROM\n"
+ " MyTable\n"
+ "   MATCH_RECOGNIZE(\n"
+ "   PARTITION BY vehicle_id\n"
+ "   ORDER BY `rowtime`\n"
+ "   MEASURES \n"
+ "   FIRST(A.`rowtime`) as startTime,\n"
+ "   LAST(A.`rowtime`) as endTime,\n"
+ "   FIRST(A.engine_temperature) as 
Initial_Temp,\n"
+ "   LAST(A.engine_temperature) as Final_Temp\n"
+ "   ONE ROW PER MATCH\n"
+ "   AFTER MATCH SKIP TO FIRST B\n"
+ "   PATTERN (A+ B)\n"
+ "   DEFINE\n"
+ "   A as LAST(A.engine_temperature,1) is NULL OR 
A.engine_temperature > LAST(A.engine_temperature,1),\n"
+ "   B as B.engine_temperature < 
LAST(A.engine_temperature)\n"
+ "   )MR;";
util.verifyJsonPlan(String.format(sql, afterClause));
{code}

fails with:

{code}
Could not resolve internal system function '$SKIP TO LAST$1'. This is a bug, 
please file an issue. (through reference chain: 
org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[3]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMatch["matchSpec"]->org.apache.flink.table.planner.plan.nodes.exec.spec.MatchSpec["after"])
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33179) Improve reporting serialisation issues

2023-10-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33179:


 Summary: Improve reporting serialisation issues
 Key: FLINK-33179
 URL: https://issues.apache.org/jira/browse/FLINK-33179
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


FLINK-33158 shows that serialisation exceptions are not reported in a helpful
manner. We should improve error reporting so that it gives more context about
what went wrong.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33158) Cryptic exception when there is a StreamExecSort in JsonPlan

2023-09-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33158:


 Summary: Cryptic exception when there is a StreamExecSort in 
JsonPlan
 Key: FLINK-33158
 URL: https://issues.apache.org/jira/browse/FLINK-33158
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.1, 1.16.2
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.18.0


{code}
CREATE TABLE MyTable (
   a bigint,
   b int not null,
   c varchar,
   d timestamp(3)
) with (
   'connector' = 'values',
   'bounded' = 'false'
)

insert into MySink SELECT a, a from MyTable order by b
{code}

fails with:

{code}
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 For input string: "null" (through reference chain: 
org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[2])
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33093) SHOW FUNCTIONS throws an exception with an unset catalog

2023-09-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33093:


 Summary: SHOW FUNCTIONS throws an exception with an unset catalog
 Key: FLINK-33093
 URL: https://issues.apache.org/jira/browse/FLINK-33093
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.18.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.18.0


A test like this throws an exception. It should instead return only the
built-in functions:
{code}
@Test
public void testUnsetCatalogWithShowFunctions() throws Exception {
TableEnvironment tEnv = TableEnvironment.create(ENVIRONMENT_SETTINGS);

tEnv.useCatalog(null);

TableResult table = tEnv.executeSql("SHOW FUNCTIONS");
final List<Row> functions =
CollectionUtil.iteratorToList(table.collect());

// check it has some built-in functions
assertThat(functions).hasSizeGreaterThan(0);
}
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33083) SupportsReadingMetadata is not applied when loading a CompiledPlan

2023-09-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33083:


 Summary: SupportsReadingMetadata is not applied when loading a 
CompiledPlan
 Key: FLINK-33083
 URL: https://issues.apache.org/jira/browse/FLINK-33083
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.1, 1.16.2
Reporter: Dawid Wysakowicz


If a few conditions are met, we cannot apply the {{SupportsReadingMetadata}} interface:
# the source overrides:
 {code}
@Override
public boolean supportsMetadataProjection() {
return false;
}
 {code}
# source does not implement {{SupportsProjectionPushDown}}
# table has metadata columns e.g.
{code}
CREATE TABLE src (
  physical_name STRING,
  physical_sum INT,
  timestamp TIMESTAMP_LTZ(3) NOT NULL METADATA VIRTUAL
)
{code}
# we query the table {{SELECT * FROM src}}

It fails with:
{code}
Caused by: java.lang.IllegalArgumentException: Row arity: 1, but serializer 
arity: 2
at 
org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:124)
{code}

The reason is that {{SupportsReadingMetadataSpec}} is created only in
{{PushProjectIntoTableSourceScanRule}}, but the rule is not applied when
conditions 1 & 2 hold.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32682) Introduce option for choosing time function evaluation methods

2023-07-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-32682:


 Summary: Introduce option for choosing time function evaluation 
methods
 Key: FLINK-32682
 URL: https://issues.apache.org/jira/browse/FLINK-32682
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner, Table SQL / Runtime
Reporter: Dawid Wysakowicz
 Fix For: 1.18.0


FLIP-162 discussed, as a future plan, introducing an option
{{table.exec.time-function-evaluation}} to control the evaluation method of
time functions.

We should add this option to make it possible to evaluate time functions with
the {{query-time}} method in streaming mode.
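
A sketch of the intended usage once the option exists (the option key comes
from the FLIP; the value name is an assumption):

{code}
-- sketch: evaluate time functions per record at query time instead of once at query start
SET 'table.exec.time-function-evaluation' = 'query-time';
{code}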



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32584) Make it possible to unset default catalog and/or database

2023-07-12 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-32584:


 Summary: Make it possible to unset default catalog and/or database
 Key: FLINK-32584
 URL: https://issues.apache.org/jira/browse/FLINK-32584
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.18.0


In certain scenarios it might make sense to unset the default catalog and/or
database, for example when there is no sane default and we want the user to
make that decision consciously.

This change has a narrow scope and changes only some checks in the API surface.
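
A sketch of the intended usage, in line with the test shown for FLINK-33093
earlier in this digest ({{useDatabase(null)}} is an assumption):

{code}
TableEnvironment tEnv = TableEnvironment.create(settings);

// unset the default catalog (and with it the default database)
tEnv.useCatalog(null);

// or keep the catalog but unset only the default database (assumed counterpart)
tEnv.useDatabase(null);
{code}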



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30427) Pulsar SQL connector lists not bundled dependencies

2022-12-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-30427:


 Summary: Pulsar SQL connector lists not bundled dependencies
 Key: FLINK-30427
 URL: https://issues.apache.org/jira/browse/FLINK-30427
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: pulsar-3.0.0
Reporter: Dawid Wysakowicz


flink-connector-pulsar lists:
{code}
- org.bouncycastle:bcpkix-jdk15on:1.69
- org.bouncycastle:bcprov-ext-jdk15on:1.69
{code}

but does not bundle them (it only uses them in test scope).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29500) InitializeOnMaster uses wrong parallelism with AdaptiveScheduler

2022-10-04 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-29500:


 Summary: InitializeOnMaster uses wrong parallelism with 
AdaptiveScheduler
 Key: FLINK-29500
 URL: https://issues.apache.org/jira/browse/FLINK-29500
 Project: Flink
  Issue Type: Bug
  Components: API / Core, Runtime / Coordination
Affects Versions: 1.14.6, 1.15.2, 1.16.0
Reporter: Dawid Wysakowicz


{{InputOutputFormatVertex}} uses {{JobVertex#getParallelism}} to invoke
{{InitializeOnMaster#initializeGlobal}}. However, this parallelism might not be
the actual one used to execute the vertex when combined with the Adaptive
Scheduler. With the Adaptive Scheduler the execution parallelism is provided
via {{VertexParallelismStore}}.
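
For reference, a simplified sketch of the master-side hook involved (consult
the actual interface in flink-core for the authoritative signature):

{code}
import java.io.IOException;

// Simplified sketch: the parallelism passed here comes from
// JobVertex#getParallelism, which the Adaptive Scheduler may not actually use.
public interface InitializeOnMaster {
    void initializeGlobal(int parallelism) throws IOException;
}
{code}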



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-27489) Allow users to run dedicated tests in the CI

2022-05-04 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-27489:


 Summary: Allow users to run dedicated tests in the CI
 Key: FLINK-27489
 URL: https://issues.apache.org/jira/browse/FLINK-27489
 Project: Flink
  Issue Type: New Feature
  Components: Build System / CI
Reporter: Dawid Wysakowicz


Users can specify a dedicated test that is run on the CI in a PR.

Users are able to run any test at any given point in time.

It should use the existing user interface (e.g. FlinkBot).
{code}
@flinkbot run 
org.apache.flink.test.checkpointing.TimestampedFileInputSplitTest#testSplitComparison
{code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27319) Duplicated "-t" option for savepoint format and deployment target

2022-04-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-27319:


 Summary: Duplicated "-t" option for savepoint format and 
deployment target
 Key: FLINK-27319
 URL: https://issues.apache.org/jira/browse/FLINK-27319
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.15.0


The two options, savepoint format and deployment target, have the same short
option, which causes a clash and makes the CLI fail.

I suggest dropping the short "-t" for the savepoint format.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27233) Unnecessary entries in connector-elasticsearch7 in NOTICE file

2022-04-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-27233:


 Summary: Unnecessary entries in connector-elasticsearch7 in NOTICE 
file
 Key: FLINK-27233
 URL: https://issues.apache.org/jira/browse/FLINK-27233
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


{{flink-sql-connector-elasticsearch7}} lists the following dependencies in the
NOTICE file, which are not bundled in the jar:

{code}
- com.fasterxml.jackson.core:jackson-databind:2.13.2.2
- com.fasterxml.jackson.core:jackson-annotations:2.13.2
- org.apache.lucene:lucene-spatial:8.7.0
- org.elasticsearch:elasticsearch-plugin-classloader:7.10.2
- org.lz4:lz4-java:1.8.0
{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27231) SQL pulsar connector lists dependencies under wrong license

2022-04-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-27231:


 Summary: SQL pulsar connector lists dependencies under wrong 
license
 Key: FLINK-27231
 URL: https://issues.apache.org/jira/browse/FLINK-27231
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


The Pulsar SQL connector lists the following dependencies under the ASL2
license, while they are licensed under the Bouncy Castle license (a variant of
MIT?).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27230) Unnecessary entries in connector-kinesis NOTICE file

2022-04-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-27230:


 Summary: Unnecessary entries in connector-kinesis NOTICE file
 Key: FLINK-27230
 URL: https://issues.apache.org/jira/browse/FLINK-27230
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


flink-connector-kinesis lists but does not bundle:

{code}
- commons-logging:commons-logging:1.1.3
- com.fasterxml.jackson.core:jackson-core:2.13.2
{code}

{code}
[INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded jar.
[INFO] Excluding com.fasterxml.jackson.core:jackson-core:jar:2.13.2 from the 
shaded jar.
{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27229) Cassandra overrides netty version in tests

2022-04-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-27229:


 Summary: Cassandra overrides netty version in tests
 Key: FLINK-27229
 URL: https://issues.apache.org/jira/browse/FLINK-27229
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Cassandra
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


{{flink-connector-cassandra}} declares:
{code}
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.46.Final</version>
    <scope>test</scope>
</dependency>
{code}
which overrides the project-wide version of netty just for tests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26960) Make it possible to drop an old unused registered Kryo serializer

2022-03-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26960:


 Summary: Make it possible to drop an old unused registered Kryo 
serializer
 Key: FLINK-26960
 URL: https://issues.apache.org/jira/browse/FLINK-26960
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System
Affects Versions: 1.14.4, 1.13.6, 1.12.7, 1.15.0
Reporter: Dawid Wysakowicz


If users register a Kryo serializer e.g. via:
{code}
env.registerTypeWithKryoSerializer(ClassA.class, ClassASerializer.class);
{code}

and then use Kryo for serializing state objects, the registered serializer is
written into the {{KryoSerializer}} snapshot, even if Kryo is used for
serializing classes other than {{ClassA}}. This makes it impossible to remove
{{ClassASerializer}} from the classpath, because it is required for reading the
savepoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26783) Restore from a stop-with-savepoint if failed during committing

2022-03-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26783:


 Summary: Restore from a stop-with-savepoint if failed during 
committing
 Key: FLINK-26783
 URL: https://issues.apache.org/jira/browse/FLINK-26783
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


We decided that stop-with-savepoint should commit side effects, and thus we
should fail over to those savepoints if a failure happens while committing the
side effects.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26708) TimestampsAndWatermarksOperator should not propagate WatermarkStatus

2022-03-17 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26708:


 Summary: TimestampsAndWatermarksOperator should not propagate 
WatermarkStatus
 Key: FLINK-26708
 URL: https://issues.apache.org/jira/browse/FLINK-26708
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.14.4, 1.15.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


The lifecycle/scope of WatermarkStatus is tightly coupled with watermarks. 
Upstream watermarks are cut off in the TimestampsAndWatermarksOperator and 
therefore watermark statuses should be cut off as well.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26700) Update Chinese documentation regarding restore modes

2022-03-17 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26700:


 Summary: Update Chinese documentation regarding restore modes
 Key: FLINK-26700
 URL: https://issues.apache.org/jira/browse/FLINK-26700
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Dawid Wysakowicz


Translate FLINK-25193



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26392) Support externally induced sources in MultipleInputStreamTask

2022-02-28 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26392:


 Summary: Support externally induced sources in 
MultipleInputStreamTask
 Key: FLINK-26392
 URL: https://issues.apache.org/jira/browse/FLINK-26392
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing, Runtime / Task
Reporter: Dawid Wysakowicz


As of now the {{ExternallyInducedSourceReader}} is not supported in 
{{MultipleInputStreamTask}}, which means it does not work if sources are 
chained.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26349) AvroParquetReaders does not work with ReflectData

2022-02-24 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26349:


 Summary: AvroParquetReaders does not work with ReflectData
 Key: FLINK-26349
 URL: https://issues.apache.org/jira/browse/FLINK-26349
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


I tried to change the {{AvroParquetFileReadITCase}} to read the data as
{{ReflectData}} and I stumbled upon a problem. The scenario is that I use the
exact same code for writing the Parquet files, but changed the reading part to:

{code}
public static final class User {
private final String name;
private final Integer favoriteNumber;
private final String favoriteColor;

public User(String name, Integer favoriteNumber, String favoriteColor) {
this.name = name;
this.favoriteNumber = favoriteNumber;
this.favoriteColor = favoriteColor;
}
}

final FileSource<User> source =
FileSource.forRecordStreamFormat(
AvroParquetReaders.forReflectRecord(User.class),
Path.fromLocalFile(TEMPORARY_FOLDER.getRoot()))
.monitorContinuously(Duration.ofMillis(5))
.build();
{code}

I get an error:
{code}
819020 [flink-akka.actor.default-dispatcher-9] DEBUG 
org.apache.flink.runtime.jobmaster.JobMaster [] - Archive local failure causing 
attempt cc9f5e814ea9a3a5b397018dbffcb6a9 to fail: 
com.esotericsoftware.kryo.KryoException: java.lang.UnsupportedOperationException
Serialization trace:
reserved (org.apache.avro.Schema$Field)
fieldMap (org.apache.avro.Schema$RecordSchema)
schema (org.apache.avro.generic.GenericData$Record)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:143)
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:21)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:402)
at 
org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:191)
at 
org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
at 
org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:53)
at 
org.apache.flink.runtime.io.network.api.serialization.NonSpanningWrapper.readInto(NonSpanningWrapper.java:337)
at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNonSpanningRecord(SpillingAdaptiveSpanningRecordDeserializer.java:128)
at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:103)
at 
org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:93)
at 
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:95)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at 

[jira] [Created] (FLINK-26299) Reintroduce old JobClient#triggerSavepoint,stopWithSavepoint methods

2022-02-22 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26299:


 Summary: Reintroduce old 
JobClient#triggerSavepoint,stopWithSavepoint methods
 Key: FLINK-26299
 URL: https://issues.apache.org/jira/browse/FLINK-26299
 Project: Flink
  Issue Type: Sub-task
  Components: API / DataStream, Runtime / Checkpointing
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


{{JobClient}} is a {{PublicEvolving}} API. We should keep the methods with the
old signatures for at least one release.
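
A sketch of what keeping the old methods could look like (the signatures below
are recalled from memory and should be double-checked against 1.14; the
interface name is only a placeholder):

{code}
import java.util.concurrent.CompletableFuture;
import javax.annotation.Nullable;
import org.apache.flink.core.execution.SavepointFormatType;

// Placeholder interface illustrating old overloads delegating to the new
// format-aware method; not the actual JobClient definition.
public interface LegacyJobClientMethods {

    CompletableFuture<String> triggerSavepoint(
            @Nullable String savepointDirectory, SavepointFormatType formatType);

    @Deprecated
    default CompletableFuture<String> triggerSavepoint(@Nullable String savepointDirectory) {
        return triggerSavepoint(savepointDirectory, SavepointFormatType.DEFAULT);
    }
}
{code}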



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26273) Test checkpoints restore modes & formats

2022-02-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26273:


 Summary: Test checkpoints restore modes & formats
 Key: FLINK-26273
 URL: https://issues.apache.org/jira/browse/FLINK-26273
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We should manually test the changes introduced in [FLINK-25276] & [FLINK-25154].

Proposal: 
Take a canonical savepoint / native savepoint / externalised checkpoint (with
RocksDB), perform claim (1) / no-claim (2) recoveries, and verify that:
1. after a couple of checkpoints the claimed files have been cleaned up,
2. after a single successful checkpoint, you can remove the start-up files and
fail over the job without any errors,
3. you can take a native, incremental RocksDB savepoint, move it to a different
directory, and restore from it.

documentation:
1. 
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/savepoints/#restore-mode
2. 
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/savepoints/#savepoint-format



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26192) PulsarOrderedSourceReaderTest fails with exit code 255

2022-02-16 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26192:


 Summary: PulsarOrderedSourceReaderTest fails with exit code 255
 Key: FLINK-26192
 URL: https://issues.apache.org/jira/browse/FLINK-26192
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/wysakowiczdawid/Flink/_build/results?buildId=1367=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=26787

{code}
Feb 16 13:49:46 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) on 
project flink-connector-pulsar: There are test failures.
Feb 16 13:49:46 [ERROR] 
Feb 16 13:49:46 [ERROR] Please refer to 
/__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire-reports for 
the individual test results.
Feb 16 13:49:46 [ERROR] Please refer to dump files (if any exist) [date].dump, 
[date]-jvmRun[N].dump and [date].dumpstream.
Feb 16 13:49:46 [ERROR] The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
Feb 16 13:49:46 [ERROR] Command was /bin/sh -c cd 
/__w/1/s/flink-connectors/flink-connector-pulsar && 
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
-Dmvn.forkNumber=1  
-XX:-UseGCOverheadLimit -Duser.country=US -Duser.language=en -jar 
/__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire/surefirebooter3139517882560779643.jar
 /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire 
2022-02-16T13-48-34_435-jvmRun1 surefire3358354372075396323tmp 
surefire_08509996975514960300tmp
Feb 16 13:49:46 [ERROR] Error occurred in starting fork, check output in log
Feb 16 13:49:46 [ERROR] Process Exit Code: 255
Feb 16 13:49:46 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
Feb 16 13:49:46 [ERROR] Command was /bin/sh -c cd 
/__w/1/s/flink-connectors/flink-connector-pulsar && 
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
-Dmvn.forkNumber=1  
-XX:-UseGCOverheadLimit -Duser.country=US -Duser.language=en -jar 
/__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire/surefirebooter3139517882560779643.jar
 /__w/1/s/flink-connectors/flink-connector-pulsar/target/surefire 
2022-02-16T13-48-34_435-jvmRun1 surefire3358354372075396323tmp 
surefire_08509996975514960300tmp
Feb 16 13:49:46 [ERROR] Error occurred in starting fork, check output in log
Feb 16 13:49:46 [ERROR] Process Exit Code: 255
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:748)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:305)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:265)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1314)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1159)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:932)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
Feb 16 13:49:46 [ERROR] at 
org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216)
Feb 16 13:49:46 [ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:160)
Feb 16 13:49:46 [ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Feb 16 

[jira] [Created] (FLINK-26164) Document watermark alignment

2022-02-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26164:


 Summary: Document watermark alignment
 Key: FLINK-26164
 URL: https://issues.apache.org/jira/browse/FLINK-26164
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.15.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26076) Fix ArchUnit violations in SourceMetricsITCase

2022-02-10 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26076:


 Summary: Fix ArchUnit violations in SourceMetricsITCase
 Key: FLINK-26076
 URL: https://issues.apache.org/jira/browse/FLINK-26076
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.15.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26052) Update Chinese documentation regarding FLIP-203

2022-02-09 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-26052:


 Summary: Update Chinese documentation regarding FLIP-203
 Key: FLINK-26052
 URL: https://issues.apache.org/jira/browse/FLINK-26052
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Runtime / Checkpointing
Reporter: Dawid Wysakowicz


Relevant English commits: 
* c1f5c5320150402fc0cb4fbf3a31f9a27b1e4d9a
* cd8ea8d5b207569f68acc5a3c8db95cd2ca47ba6



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25983) Add WatermarkStrategy#withWatermarkAlignment

2022-02-07 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25983:


 Summary: Add WatermarkStrategy#withWatermarkAlignment
 Key: FLINK-25983
 URL: https://issues.apache.org/jira/browse/FLINK-25983
 Project: Flink
  Issue Type: Sub-task
  Components: API / DataStream
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.15.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25952) Savepoints on S3 are not relocatable even if entropy injection is not enabled

2022-02-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25952:


 Summary: Savepoints on S3 are not relocatable even if entropy 
injection is not enabled
 Key: FLINK-25952
 URL: https://issues.apache.org/jira/browse/FLINK-25952
 Project: Flink
  Issue Type: Bug
  Components: FileSystems, Runtime / Checkpointing
Affects Versions: 1.14.3, 1.13.5, 1.15.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.15.0, 1.14.4, 1.13.7


We have a limitation that savepoints created with injected entropy are not 
relocatable 
(https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/savepoints/#triggering-savepoints).

However, the check for whether entropy is actually used is flawed. In 
{{FsCheckpointStreamFactory}} we only check whether the used filesystem extends 
{{EntropyInjectingFileSystem}}. {{FlinkS3FileSystem}} does, but it may still 
have entropy disabled: {{FlinkS3FileSystem#getEntropyInjectionKey}} may still 
return {{null}}. We should check for that in 
{{org.apache.flink.core.fs.EntropyInjector#isEntropyInjecting}}.
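A rough sketch of the intended check (shown only to illustrate the idea; the 
exact signature of {{EntropyInjector#isEntropyInjecting}} may differ):
{code}
// Sketch: besides checking the type, also verify that entropy injection is actually
// configured before treating the snapshot location as non-relocatable.
public static boolean isEntropyInjecting(FileSystem fs) {
    if (fs instanceof EntropyInjectingFileSystem) {
        EntropyInjectingFileSystem efs = (EntropyInjectingFileSystem) fs;
        // e.g. FlinkS3FileSystem returns null here when entropy injection is disabled
        return efs.getEntropyInjectionKey() != null;
    }
    return false;
}
{code}
With such a check, savepoints written by an S3 filesystem without an entropy key 
would be treated as relocatable again.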



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25923) Add tests for native savepoint format schema evolution

2022-02-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25923:


 Summary: Add tests for native savepoint format schema evolution
 Key: FLINK-25923
 URL: https://issues.apache.org/jira/browse/FLINK-25923
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz


Check test coverage for:

Schema evolution

https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25887) FLIP-193: Snapshots ownership follow ups

2022-01-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25887:


 Summary: FLIP-193: Snapshots ownership follow ups
 Key: FLINK-25887
 URL: https://issues.apache.org/jira/browse/FLINK-25887
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25322) Support no-claim mode in changelog state backend

2021-12-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25322:


 Summary: Support no-claim mode in changelog state backend
 Key: FLINK-25322
 URL: https://issues.apache.org/jira/browse/FLINK-25322
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing, Runtime / State Backends
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25256) Savepoints do not work with ExternallyInducedSources

2021-12-10 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25256:


 Summary: Savepoints do not work with ExternallyInducedSources
 Key: FLINK-25256
 URL: https://issues.apache.org/jira/browse/FLINK-25256
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.13.3, 1.14.0
Reporter: Dawid Wysakowicz


It is not possible to take a proper savepoint with {{ExternallyInducedSource}} 
or {{ExternallyInducedSourceReader}}. The problem is that we hardcode the 
{{CheckpointOptions}} in the {{triggerHook}}.

The outcome of the current state is that operators try to take checkpoints in 
the checkpoint location, whereas the {{CheckpointCoordinator}} writes the 
metadata for those states in the savepoint location.

Moreover, the situation gets even weirder (I have not checked it entirely) if 
we have a mixture of {{ExternallyInducedSource(s)}} and regular sources. In 
such a case the location and format in which the state of a particular task is 
persisted depend on the order in which the barriers arrive. If a barrier from a 
regular source arrives last, the task takes a savepoint; if, on the other hand, 
the last barrier comes from an externally induced source, the task takes a 
checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25203) Implement duplicating for aliyun

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25203:


 Summary: Implement duplicating for aliyun
 Key: FLINK-25203
 URL: https://issues.apache.org/jira/browse/FLINK-25203
 Project: Flink
  Issue Type: Sub-task
  Components: FileSystems
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We can use: https://www.alibabacloud.com/help/doc-detail/31979.htm



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25202) Implement duplicating for azure

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25202:


 Summary: Implement duplicating for azure
 Key: FLINK-25202
 URL: https://issues.apache.org/jira/browse/FLINK-25202
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We can use: https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25201) Implement duplicating for gcs

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25201:


 Summary: Implement duplicating for gcs
 Key: FLINK-25201
 URL: https://issues.apache.org/jira/browse/FLINK-25201
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We can use https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25200) Implement duplicating for s3 filesystem

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25200:


 Summary: Implement duplicating for s3 filesystem
 Key: FLINK-25200
 URL: https://issues.apache.org/jira/browse/FLINK-25200
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We can use https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25195) Use duplicating API for shared artefacts in RocksDB snapshots

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25195:


 Summary: Use duplicating API for shared artefacts in RocksDB 
snapshots
 Key: FLINK-25195
 URL: https://issues.apache.org/jira/browse/FLINK-25195
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing, Runtime / State Backends
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


Instead of uploading all artefacts again, we could use the duplicating API to 
cheaply create an independent copy of the shared artefacts (as described in 
FLINK-25192).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25194) Implement an API for duplicating artefacts

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25194:


 Summary: Implement an API for duplicating artefacts
 Key: FLINK-25194
 URL: https://issues.apache.org/jira/browse/FLINK-25194
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / FileSystem, Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We should implement methods that let us duplicate artefacts in a DFS. We can 
later use them for cheaply duplicating shared snapshot artefacts instead of 
re-uploading them.
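A minimal sketch of what such an API could look like (all names below are 
assumptions for illustration, not an actual interface in the codebase):
{code}
import org.apache.flink.core.fs.Path;

import java.io.IOException;
import java.util.List;

/** Hypothetical capability interface for filesystems that can duplicate files cheaply. */
public interface DuplicatingFileSystem {

    /** A single source -> destination duplication request. */
    final class CopyRequest {
        public final Path source;
        public final Path destination;

        public CopyRequest(Path source, Path destination) {
            this.source = source;
            this.destination = destination;
        }
    }

    /** Returns whether the given path can be duplicated without re-uploading the bytes. */
    boolean canFastDuplicate(Path source) throws IOException;

    /**
     * Duplicates the given files on the DFS side (e.g. via S3 CopyObject or the GCS
     * rewrite API), without downloading and re-uploading their contents.
     */
    void duplicate(List<CopyRequest> requests) throws IOException;
}
{code}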



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25193) Document claim & no-claim mode

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25193:


 Summary: Document claim & no-claim mode
 Key: FLINK-25193
 URL: https://issues.apache.org/jira/browse/FLINK-25193
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We should describe how the different restore modes work. It is important to go 
through the FLIP and include all {{NOTES}} in the written documentation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25192) Implement proper no-claim mode support

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25192:


 Summary: Implement proper no-claim mode support
 Key: FLINK-25192
 URL: https://issues.apache.org/jira/browse/FLINK-25192
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


In the no-claim mode the job should not depend on any artefacts of the initial 
snapshot after the restore. In order to do that we should pass a flag along with 
the RPC and later on with a CheckpointBarrier to notify TaskManagers about that 
intention. Moreover, state backends should take the flag into consideration and 
take "full snapshots":
* the RocksDB state backend should upload all files instead of reusing artefacts 
from the initial snapshot
* the Changelog state backend should materialize the changelog when the flag is 
set

https://cwiki.apache.org/confluence/x/bIyqCw#FLIP193:Snapshotsownership-No-claimmode(defaultmode)
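A purely hypothetical sketch of the flag (none of the names below exist; they 
only illustrate how the intention could be propagated to the state backends):
{code}
// Hypothetical: a property attached to the snapshot request / checkpoint barrier that
// tells backends not to reuse any artefacts from the restored (no-claim) snapshot.
public final class SnapshotBehaviour {
    /** Set for the first checkpoint after a no-claim restore. */
    public final boolean forceFullSnapshot;

    public SnapshotBehaviour(boolean forceFullSnapshot) {
        this.forceFullSnapshot = forceFullSnapshot;
    }
}

// A state backend would then consult the flag, e.g.:
// if (behaviour.forceFullSnapshot) { uploadAllFiles(); } else { reuseSharedArtefacts(); }
{code}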



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25191) Skip savepoints for recovery

2021-12-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25191:


 Summary: Skip savepoints for recovery
 Key: FLINK-25191
 URL: https://issues.apache.org/jira/browse/FLINK-25191
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


Intermediate savepoints should not be used for recovery. In order to achieve 
that we should:
* not send {{notifyCheckpointComplete}} for intermediate savepoints
* not add them to the {{CompletedCheckpointStore}}

Important! Synchronous savepoints (stop-with-savepoint) should still commit 
side-effects. We need to distinguish them from intermediate savepoints.

https://cwiki.apache.org/confluence/x/bIyqCw#FLIP193:Snapshotsownership-SkippingSavepointsforRecovery
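A hypothetical sketch of that distinction (all names invented for illustration 
only):
{code}
// Only regular checkpoints and synchronous (stop-with-savepoint) savepoints should be
// registered in the CompletedCheckpointStore and acknowledged via notifyCheckpointComplete;
// intermediate savepoints are skipped for recovery purposes.
enum SnapshotKind { CHECKPOINT, SYNCHRONOUS_SAVEPOINT, INTERMEDIATE_SAVEPOINT }

static boolean shouldRegisterForRecovery(SnapshotKind kind) {
    switch (kind) {
        case CHECKPOINT:
        case SYNCHRONOUS_SAVEPOINT: // stop-with-savepoint must still commit side-effects
            return true;
        case INTERMEDIATE_SAVEPOINT:
            return false;
        default:
            throw new IllegalArgumentException("Unknown snapshot kind: " + kind);
    }
}
{code}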



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25155) Implement claim snapshot mode

2021-12-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25155:


 Summary: Implement claim snapshot mode
 Key: FLINK-25155
 URL: https://issues.apache.org/jira/browse/FLINK-25155
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.15.0






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25154) FLIP-193: Snapshots ownership

2021-12-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-25154:


 Summary: FLIP-193: Snapshots ownership
 Key: FLINK-25154
 URL: https://issues.apache.org/jira/browse/FLINK-25154
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


Task for implementing FLIP-193: https://cwiki.apache.org/confluence/x/bIyqCw



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24868) Use custom serialization for storing checkpoint metadata in CompletedCheckpointStore

2021-11-10 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24868:


 Summary: Use custom serialization for storing checkpoint metadata 
in CompletedCheckpointStore
 Key: FLINK-24868
 URL: https://issues.apache.org/jira/browse/FLINK-24868
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Dawid Wysakowicz


We are using Java serialization for storing {{CompletedCheckpoint}} entries in 
the {{CompletedCheckpointStore}}. This makes it hard to maintain backwards 
compatibility of the stored entries, even between minor versions. Maintaining 
this kind of backwards compatibility is a prerequisite for ever considering 
rolling upgrades.

In particular, we already have a {{MetadataSerializer}} for storing checkpoint 
metadata in a backwards-compatible way.
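For illustration, a hedged sketch of a dedicated, versioned serializer following 
the versioned-serializer pattern already used elsewhere (the entry type below is 
simplified and purely hypothetical, to keep the sketch self-contained):
{code}
import org.apache.flink.core.io.SimpleVersionedSerializer;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/** Hypothetical, simplified store entry: checkpoint id plus external pointer. */
final class StoredCheckpointEntry {
    final long checkpointId;
    final String externalPointer;

    StoredCheckpointEntry(long checkpointId, String externalPointer) {
        this.checkpointId = checkpointId;
        this.externalPointer = externalPointer;
    }
}

/** Explicit binary layout with a version, instead of Java serialization. */
final class StoredCheckpointEntrySerializer
        implements SimpleVersionedSerializer<StoredCheckpointEntry> {

    private static final int VERSION = 1;

    @Override
    public int getVersion() {
        return VERSION;
    }

    @Override
    public byte[] serialize(StoredCheckpointEntry entry) throws IOException {
        byte[] pointer = entry.externalPointer.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(Long.BYTES + Integer.BYTES + pointer.length)
                .putLong(entry.checkpointId)
                .putInt(pointer.length)
                .put(pointer)
                .array();
    }

    @Override
    public StoredCheckpointEntry deserialize(int version, byte[] serialized) throws IOException {
        if (version != VERSION) {
            // dispatch on the version here to keep older layouts readable after upgrades
            throw new IOException("Unknown version " + version);
        }
        ByteBuffer buf = ByteBuffer.wrap(serialized);
        long checkpointId = buf.getLong();
        byte[] pointer = new byte[buf.getInt()];
        buf.get(pointer);
        return new StoredCheckpointEntry(checkpointId, new String(pointer, StandardCharsets.UTF_8));
    }
}
{code}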



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24732) Remove scala suffix from respective benchmarks dependencies

2021-11-02 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24732:


 Summary: Remove scala suffix from respective benchmarks 
dependencies
 Key: FLINK-24732
 URL: https://issues.apache.org/jira/browse/FLINK-24732
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


With FLINK-24018 a few dependencies lost their Scala suffix. We should remove 
the suffix from the benchmark dependencies to test against the newest artifacts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24573) ZooKeeperJobGraphsStoreITCase crashes JVM

2021-10-18 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24573:


 Summary: ZooKeeperJobGraphsStoreITCase crashes JVM
 Key: FLINK-24573
 URL: https://issues.apache.org/jira/browse/FLINK-24573
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25123=logs=a549b384-c55a-52c0-c451-00e0477ab6db=eef5922c-08d9-5ba3-7299-8393476594e7=8375

{code}
Oct 17 00:15:16 [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.22.2:test (integration-tests) 
on project flink-runtime: There are test failures.
Oct 17 00:15:16 [ERROR] 
Oct 17 00:15:16 [ERROR] Please refer to 
/__w/1/s/flink-runtime/target/surefire-reports for the individual test results.
Oct 17 00:15:16 [ERROR] Please refer to dump files (if any exist) [date].dump, 
[date]-jvmRun[N].dump and [date].dumpstream.
Oct 17 00:15:16 [ERROR] ExecutionException The forked VM terminated without 
properly saying goodbye. VM crash or System.exit called?
Oct 17 00:15:16 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-runtime/target 
&& /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
-Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
/__w/1/s/flink-runtime/target/surefire/surefirebooter6284072213813812385.jar 
/__w/1/s/flink-runtime/target/surefire 2021-10-16T23-44-38_893-jvmRun2 
surefire134157100872108937tmp surefire_819867287453033687541tmp
Oct 17 00:15:16 [ERROR] Error occurred in starting fork, check output in log
Oct 17 00:15:16 [ERROR] Process Exit Code: 239
Oct 17 00:15:16 [ERROR] Crashed tests:
Oct 17 00:15:16 [ERROR] 
org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphsStoreITCase
Oct 17 00:15:16 [ERROR] 
org.apache.maven.surefire.booter.SurefireBooterForkException: 
ExecutionException The forked VM terminated without properly saying goodbye. VM 
crash or System.exit called?
Oct 17 00:15:16 [ERROR] Command was /bin/sh -c cd /__w/1/s/flink-runtime/target 
&& /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m 
-Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
/__w/1/s/flink-runtime/target/surefire/surefirebooter6284072213813812385.jar 
/__w/1/s/flink-runtime/target/surefire 2021-10-16T23-44-38_893-jvmRun2 
surefire134157100872108937tmp surefire_819867287453033687541tmp
Oct 17 00:15:16 [ERROR] Error occurred in starting fork, check output in log
Oct 17 00:15:16 [ERROR] Process Exit Code: 239
Oct 17 00:15:16 [ERROR] Crashed tests:
Oct 17 00:15:16 [ERROR] 
org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphsStoreITCase
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
Oct 17 00:15:16 [ERROR] at 
org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216)
Oct 17 00:15:16 [ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:160)
Oct 17 00:15:16 [ERROR] at 

[jira] [Created] (FLINK-24552) Ineffective buffer debloat configuration in randomized tests

2021-10-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24552:


 Summary: Ineffective buffer debloat configuration in randomized 
tests
 Key: FLINK-24552
 URL: https://issues.apache.org/jira/browse/FLINK-24552
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Configuration, Runtime / Task
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0, 1.14.1


The randomization in {{TestStreamEnvironment#setAsContext}} is ineffective: it 
is simply not used.

The problem is that buffer debloating can be configured only through the task 
manager configuration. Configuring it through the {{StreamExecutionEnvironment}} 
is not possible.

We should either:
1. Fix the randomization
2. Implement configuring buffer debloating through 
{{StreamExecutionEnvironment#configure}}
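For reference, a minimal sketch of how the setting currently has to be supplied, 
i.e. through the configuration the task managers are started with (the option 
key is quoted from memory and may differ):
{code}
import org.apache.flink.configuration.Configuration;

// Sketch: buffer debloating only takes effect when enabled in the TaskManager
// configuration, e.g. the Configuration used to bring up a (mini) cluster.
// Setting it via the StreamExecutionEnvironment currently has no effect.
Configuration clusterConfig = new Configuration();
clusterConfig.setString("taskmanager.network.memory.buffer-debloat.enabled", "true");
{code}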



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24550) Can not access job information from a standby jobmanager UI

2021-10-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24550:


 Summary: Can not access job information from a standby jobmanager 
UI
 Key: FLINK-24550
 URL: https://issues.apache.org/jira/browse/FLINK-24550
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination, Runtime / Web Frontend
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0, 1.14.1


One cannot access the "running jobs" section (if a job is running), nor the job 
page once the job has completed. Moreover, the overview section does not work 
on the standby jobmanager if a job is running. The active jobmanager UI works 
just fine.

Reported in the ML:  
https://lists.apache.org/thread.html/r69646f1c943846ed07f9ff80232c8d0cea31222191354871f914484c%40%3Cuser.flink.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24549) FlinkKinesisConsumer does not work with generic types disabled

2021-10-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24549:


 Summary: FlinkKinesisConsumer does not work with generic types 
disabled
 Key: FLINK-24549
 URL: https://issues.apache.org/jira/browse/FLINK-24549
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.13.2, 1.12.5, 1.14.0
Reporter: Dawid Wysakowicz


FlinkKinesisConsumer uses {{GenericTypeInfo}} internally, which makes it 
impossible to disable generic types in the entire job.
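A minimal way to hit this (the Kinesis setup details are omitted; the only 
relevant part is disabling generic types on the execution config):
{code}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Disallow falling back to Kryo/generic serialization anywhere in the job.
env.getConfig().disableGenericTypes();

// Any job containing a FlinkKinesisConsumer then fails during state initialization,
// because the consumer internally builds a GenericTypeInfo for SequenceNumber, e.g.:
// env.addSource(new FlinkKinesisConsumer<>(streamName, schema, consumerConfig));
{code}
Such a job fails with: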

{code}
java.lang.UnsupportedOperationException: Generic types have been disabled in 
the ExecutionConfig and type 
org.apache.flink.streaming.connectors.kinesis.model.SequenceNumber is treated 
as a generic type.
at 
org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:87)
at 
org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:104)
at 
org.apache.flink.api.java.typeutils.TupleTypeInfo.createSerializer(TupleTypeInfo.java:49)
at 
org.apache.flink.api.java.typeutils.ListTypeInfo.createSerializer(ListTypeInfo.java:99)
at 
org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:302)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getListState(DefaultOperatorStateBackend.java:264)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.getUnionListState(DefaultOperatorStateBackend.java:216)
at 
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.initializeState(FlinkKinesisConsumer.java:443)
at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:189)
at 
org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:171)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at 
org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:111)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:425)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:535)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:525)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:565)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570)
at java.lang.Thread.run(Thread.java:748)
{code}

Reported in the ML: 
https://lists.apache.org/thread.html/r6e7723a9d1d77e223fbab481c9a53cbd4a2189ee7442302ee3c33b95%40%3Cuser.flink.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24515) MailboxExecutor#submit swallows exceptions

2021-10-12 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24515:


 Summary: MailboxExecutor#submit swallows exceptions
 Key: FLINK-24515
 URL: https://issues.apache.org/jira/browse/FLINK-24515
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Task
Affects Versions: 1.13.2, 1.12.5, 1.14.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.12.6, 1.13.3, 1.15.0, 1.14.1


If a {{RunnableWithException}}/{{Callable}} is submitted via 
{{MailboxExecutor#submit}}, any exception thrown from it will be swallowed.

It is caused by the {{FutureTaskWithException}} implementation. 
{{FutureTask#run}} does not throw an exception, but sets it as the task's 
internal state. The exception is thrown only when {{FutureTask#get()}} is 
called.
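For illustration, a minimal, self-contained example of the underlying plain-JDK 
{{FutureTask}} behaviour (independent of Flink):
{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class FutureTaskSwallowsExceptions {
    public static void main(String[] args) throws Exception {
        FutureTask<Void> task = new FutureTask<>(() -> {
            throw new IllegalStateException("boom");
        });

        // run() completes "normally": the exception is captured as the task's outcome
        // and does not propagate to the caller.
        task.run();

        try {
            // Only get() rethrows it, wrapped in an ExecutionException.
            task.get();
        } catch (ExecutionException e) {
            System.out.println("Surfaced only on get(): " + e.getCause());
        }
    }
}
{code}
So unless something later calls {{get()}} on the returned future, the exception 
is effectively lost.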



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24498) KafkaShuffleITCase fails with "The topic metadata failed to propagate to Kafka broker"

2021-10-11 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24498:


 Summary: KafkaShuffleITCase fails with "The topic metadata failed 
to propagate to Kafka broker"
 Key: FLINK-24498
 URL: https://issues.apache.org/jira/browse/FLINK-24498
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.2
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24932=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6655

{code}
Oct 10 22:44:25 [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 87.421 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase
Oct 10 22:44:25 [ERROR] 
testAssignedToPartitionIngestionTime(org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase)
  Time elapsed: 13.048 s  <<< FAILURE!
Oct 10 22:44:25 java.lang.AssertionError: Create test topic : 
test_assigned_to_partition_IngestionTime failed, The topic metadata failed to 
propagate to Kafka broker.
Oct 10 22:44:25 at org.junit.Assert.fail(Assert.java:88)
Oct 10 22:44:25 at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:226)
Oct 10 22:44:25 at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
Oct 10 22:44:25 at 
org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
Oct 10 22:44:25 at 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartition(KafkaShuffleITCase.java:295)
Oct 10 22:44:25 at 
org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartitionIngestionTime(KafkaShuffleITCase.java:114)
Oct 10 22:44:25 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Oct 10 22:44:25 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Oct 10 22:44:25 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Oct 10 22:44:25 at java.lang.reflect.Method.invoke(Method.java:498)
Oct 10 22:44:25 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Oct 10 22:44:25 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Oct 10 22:44:25 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Oct 10 22:44:25 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Oct 10 22:44:25 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Oct 10 22:44:25 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
Oct 10 22:44:25 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
Oct 10 22:44:25 at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
Oct 10 22:44:25 at java.lang.Thread.run(Thread.java:748)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24465) Wrong javadoc and documentation for buffer timeout

2021-10-07 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24465:


 Summary: Wrong javadoc and documentation for buffer timeout
 Key: FLINK-24465
 URL: https://issues.apache.org/jira/browse/FLINK-24465
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Runtime / Configuration, Runtime / Network
Affects Versions: 1.13.2, 1.14.0
Reporter: Dawid Wysakowicz
 Fix For: 1.13.3, 1.15.0, 1.14.1


The javadoc for {{setBufferTimeout}} and similarly the documentation for 
{{execution.buffer-timeout}} claims:

{code}
/**
 * Sets the maximum time frequency (milliseconds) for the flushing of the 
output buffers. By
 * default the output buffers flush frequently to provide low latency and 
to aid smooth
 * developer experience. Setting the parameter can result in three logical 
modes:
 *
 * 
 *   A positive integer triggers flushing periodically by that integer
 *   0 triggers flushing after every record thus minimizing latency
 *   -1 triggers flushing only when the output buffer is full thus 
maximizing throughput
 * 
 *
 * @param timeoutMillis The maximum time between two output flushes.
 */
{code}

which is not true.

The {{-1}} value is illegal (it throws an exception). {{0}} behaves like {{-1}} 
in the above description, at least from what I gathered. There is no way to 
configure the behaviour described above for {{0}}.
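In other words, the actual behaviour (as observed for this ticket, not as 
documented) looks like this:
{code}
// Assuming a StreamExecutionEnvironment `env`:
env.setBufferTimeout(100);  // flush the output buffers at least every 100 ms
env.setBufferTimeout(0);    // contrary to the javadoc: flushes only when buffers are full,
                            // i.e. it behaves like the documented -1
env.setBufferTimeout(-1);   // rejected with an exception instead of "flush only when full"
{code}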



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24458) ParquetFsStreamingSinkITCase fails with time out

2021-10-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24458:


 Summary: ParquetFsStreamingSinkITCase fails with time out
 Key: FLINK-24458
 URL: https://issues.apache.org/jira/browse/FLINK-24458
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24784=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=4cf71635-d33f-53ff-7185-c5abb11ae3a0=14970

{code}
Oct 05 23:16:10 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 32.276 s <<< FAILURE! - in 
org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase
Oct 05 23:16:10 [ERROR] testPart  Time elapsed: 20.787 s  <<< ERROR!
Oct 05 23:16:10 org.junit.runners.model.TestTimedOutException: test timed out 
after 20 seconds
Oct 05 23:16:10 at java.lang.Thread.sleep(Native Method)
Oct 05 23:16:10 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
Oct 05 23:16:10 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
Oct 05 23:16:10 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
Oct 05 23:16:10 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
Oct 05 23:16:10 at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
Oct 05 23:16:10 at 
java.util.Iterator.forEachRemaining(Iterator.java:115)
Oct 05 23:16:10 at 
org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109)
Oct 05 23:16:10 at 
org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.check(FsStreamingSinkITCaseBase.scala:133)
Oct 05 23:16:10 at 
org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:120)
Oct 05 23:16:10 at 
org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPart(FsStreamingSinkITCaseBase.scala:84)
Oct 05 23:16:10 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Oct 05 23:16:10 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Oct 05 23:16:10 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Oct 05 23:16:10 at java.lang.reflect.Method.invoke(Method.java:498)
Oct 05 23:16:10 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Oct 05 23:16:10 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Oct 05 23:16:10 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Oct 05 23:16:10 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Oct 05 23:16:10 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Oct 05 23:16:10 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Oct 05 23:16:10 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
Oct 05 23:16:10 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
Oct 05 23:16:10 at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
Oct 05 23:16:10 at java.lang.Thread.run(Thread.java:748)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24457) FileSourceTextLinesITCase.testContinuousTextFileSourceWithJobManagerFailover fails with NoSuchElement

2021-10-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24457:


 Summary: 
FileSourceTextLinesITCase.testContinuousTextFileSourceWithJobManagerFailover 
fails with NoSuchElement
 Key: FLINK-24457
 URL: https://issues.apache.org/jira/browse/FLINK-24457
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24781=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=2c68b137-b01d-55c9-e603-3ff3f320364b=23849
{code}
Oct 06 00:07:54 [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 41.682 s <<< FAILURE! - in 
org.apache.flink.connector.file.src.FileSourceTextLinesITCase
Oct 06 00:07:54 [ERROR] testContinuousTextFileSourceWithJobManagerFailover  
Time elapsed: 10.826 s  <<< ERROR!
Oct 06 00:07:54 java.util.NoSuchElementException
Oct 06 00:07:54 at java.util.LinkedList.removeLast(LinkedList.java:283)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.operators.collect.AbstractCollectResultBuffer.revert(AbstractCollectResultBuffer.java:112)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.operators.collect.CheckpointedCollectResultBuffer.sinkRestarted(CheckpointedCollectResultBuffer.java:37)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.operators.collect.AbstractCollectResultBuffer.dealWithResponse(AbstractCollectResultBuffer.java:87)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:147)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
Oct 06 00:07:54 at 
org.apache.flink.streaming.api.datastream.DataStreamUtils.collectRecordsFromUnboundedStream(DataStreamUtils.java:142)
Oct 06 00:07:54 at 
org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSource(FileSourceTextLinesITCase.java:224)
Oct 06 00:07:54 at 
org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:180)
Oct 06 00:07:54 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Oct 06 00:07:54 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Oct 06 00:07:54 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Oct 06 00:07:54 at java.lang.reflect.Method.invoke(Method.java:498)
Oct 06 00:07:54 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Oct 06 00:07:54 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Oct 06 00:07:54 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Oct 06 00:07:54 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Oct 06 00:07:54 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Oct 06 00:07:54 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Oct 06 00:07:54 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Oct 06 00:07:54 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Oct 06 00:07:54 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Oct 06 00:07:54 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Oct 06 00:07:54 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Oct 06 00:07:54 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Oct 06 00:07:54 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24450) OuterJoinITCase fails on azure

2021-10-05 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24450:


 Summary: OuterJoinITCase fails on azure
 Key: FLINK-24450
 URL: https://issues.apache.org/jira/browse/FLINK-24450
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.2
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24762=logs=955770d3-1fed-5a0a-3db6-0c7554c910cb=14447d61-56b4-5000-80c1-daa459247f6a=6879

{code}
Oct 05 01:26:04 [ERROR] Tests run: 48, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 22.485 s <<< FAILURE! - in 
org.apache.flink.table.planner.runtime.batch.sql.join.OuterJoinITCase
Oct 05 01:26:04 [ERROR] 
testFullEmptyOuter[SortMergeJoin](org.apache.flink.table.planner.runtime.batch.sql.join.OuterJoinITCase)
  Time elapsed: 0.396 s  <<< ERROR!
Oct 05 01:26:04 java.lang.RuntimeException: Job restarted
Oct 05 01:26:04 at 
org.apache.flink.streaming.api.operators.collect.UncheckpointedCollectResultBuffer.sinkRestarted(UncheckpointedCollectResultBuffer.java:42)
Oct 05 01:26:04 at 
org.apache.flink.streaming.api.operators.collect.AbstractCollectResultBuffer.dealWithResponse(AbstractCollectResultBuffer.java:87)
Oct 05 01:26:04 at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121)
Oct 05 01:26:04 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
Oct 05 01:26:04 at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
Oct 05 01:26:04 at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
Oct 05 01:26:04 at 
java.util.Iterator.forEachRemaining(Iterator.java:115)
Oct 05 01:26:04 at 
org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109)
Oct 05 01:26:04 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
Oct 05 01:26:04 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
Oct 05 01:26:04 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
Oct 05 01:26:04 at 
org.apache.flink.table.planner.runtime.batch.sql.join.OuterJoinITCase.testFullEmptyOuter(OuterJoinITCase.scala:156)
Oct 05 01:26:04 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Oct 05 01:26:04 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Oct 05 01:26:04 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Oct 05 01:26:04 at java.lang.reflect.Method.invoke(Method.java:498)
Oct 05 01:26:04 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
Oct 05 01:26:04 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Oct 05 01:26:04 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
Oct 05 01:26:04 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Oct 05 01:26:04 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Oct 05 01:26:04 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Oct 05 01:26:04 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Oct 05 01:26:04 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
Oct 05 01:26:04 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Oct 05 01:26:04 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
Oct 05 01:26:04 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Oct 05 01:26:04 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Oct 05 01:26:04 at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Oct 05 01:26:04 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24449) PulsarSourceITCase>SourceTestSuiteBase.testTaskManagerFailure fails with record mismatch

2021-10-05 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24449:


 Summary: 
PulsarSourceITCase>SourceTestSuiteBase.testTaskManagerFailure fails with record 
mismatch
 Key: FLINK-24449
 URL: https://issues.apache.org/jira/browse/FLINK-24449
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24750=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=25095

{code}
Oct 04 15:58:40 [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 111.559 s <<< FAILURE! - in 
org.apache.flink.connector.pulsar.source.PulsarSourceITCase
Oct 04 15:58:40 [ERROR] testTaskManagerFailure{TestEnvironment, 
ExternalContext, ClusterControllable}[1]  Time elapsed: 24.22 s  <<< FAILURE!
Oct 04 15:58:40 java.lang.AssertionError: 
Oct 04 15:58:40 
Oct 04 15:58:40 Expected: Records consumed by Flink should be identical to test 
data and preserve the order in split
Oct 04 15:58:40  but: Mismatched record at position 38: Expected '0-WU6W5B' 
but was '0-fiuOx4ttSEqVI0aaTMoF2'
Oct 04 15:58:40 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Oct 04 15:58:40 at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
Oct 04 15:58:40 at 
org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testTaskManagerFailure(SourceTestSuiteBase.java:274)
Oct 04 15:58:40 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Oct 04 15:58:40 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Oct 04 15:58:40 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Oct 04 15:58:40 at java.lang.reflect.Method.invoke(Method.java:498)
Oct 04 15:58:40 at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
Oct 04 15:58:40 at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
Oct 04 15:58:40 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24444) OperatorCoordinatorSchedulerTest#shutdownScheduler fails with IllegalState

2021-10-04 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24444:


 Summary: OperatorCoordinatorSchedulerTest#shutdownScheduler fails 
with IllegalState
 Key: FLINK-24444
 URL: https://issues.apache.org/jira/browse/FLINK-24444
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24727=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=7b25afdf-cc6c-566f-5459-359dc2585798=8053

{code}
java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find 
Flink job (c803a5d701b4e6830a9d7c538fec843e)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.runtime.rest.handler.legacy.DefaultExecutionGraphCacheTest.testImmediateCacheInvalidationAfterFailure(DefaultExecutionGraphCacheTest.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24443) IntervalJoinITCase.testRowTimeInnerJoinWithEquiTimeAttrs fail with output mismatch

2021-10-04 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24443:


 Summary: IntervalJoinITCase.testRowTimeInnerJoinWithEquiTimeAttrs 
fail with output mismatch
 Key: FLINK-24443
 URL: https://issues.apache.org/jira/browse/FLINK-24443
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24716=logs=a9db68b9-a7e0-54b6-0f98-010e0aff39e2=cdd32e0b-6047-565b-c58f-14054472f1be=9811

{code}
Oct 02 01:08:36 [ERROR] Tests run: 42, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 23.361 s <<< FAILURE! - in 
org.apache.flink.table.planner.runtime.stream.sql.IntervalJoinITCase
Oct 02 01:08:36 [ERROR] 
testRowTimeInnerJoinWithEquiTimeAttrs[StateBackend=ROCKSDB]  Time elapsed: 
0.408 s  <<< FAILURE!
Oct 02 01:08:36 java.lang.AssertionError: expected: but was:
Oct 02 01:08:36 at org.junit.Assert.fail(Assert.java:89)
Oct 02 01:08:36 at org.junit.Assert.failNotEquals(Assert.java:835)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24434) PyFlink YARN per-job on Docker test fails on Azure

2021-10-01 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24434:


 Summary: PyFlink YARN per-job on Docker test fails on Azure
 Key: FLINK-24434
 URL: https://issues.apache.org/jira/browse/FLINK-24434
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Deployment / YARN
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24669=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=23186
{code}
Sep 30 18:20:22 Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
Sep 30 18:20:22 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
Sep 30 18:20:22 at java.security.AccessController.doPrivileged(Native 
Method)
Sep 30 18:20:22 at javax.security.auth.Subject.doAs(Subject.java:422)
Sep 30 18:20:22 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
Sep 30 18:20:22 
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
Sep 30 18:20:22 at org.apache.hadoop.ipc.Client.call(Client.java:1435)
Sep 30 18:20:22 at org.apache.hadoop.ipc.Client.call(Client.java:1345)
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
Sep 30 18:20:22 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
Sep 30 18:20:22 at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:583)
Sep 30 18:20:22 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Sep 30 18:20:22 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Sep 30 18:20:22 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Sep 30 18:20:22 at java.lang.reflect.Method.invoke(Method.java:498)
Sep 30 18:20:22 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
Sep 30 18:20:22 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
Sep 30 18:20:22 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
Sep 30 18:20:22 at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
Sep 30 18:20:22 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
Sep 30 18:20:22 at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
Sep 30 18:20:22 at 
org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2472)
Sep 30 18:20:22 ... 17 more

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24433) "No space left on device" in Azure e2e tests

2021-10-01 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24433:


 Summary: "No space left on device" in Azure e2e tests
 Key: FLINK-24433
 URL: https://issues.apache.org/jira/browse/FLINK-24433
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24668=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=19772

{code}
Sep 30 17:08:42 Job has been submitted with JobID 
5594c18e128a328ede39cfa59cb3cb07
Sep 30 17:08:56 2021-09-30 17:08:56,809 main ERROR Recovering from 
StringBuilderEncoder.encode('2021-09-30 17:08:56,807 WARN  
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher [] - An 
exception occurred when fetching query results
Sep 30 17:08:56 java.util.concurrent.ExecutionException: 
org.apache.flink.runtime.rest.util.RestClientException: [Internal server 
error., 

[jira] [Created] (FLINK-24417) Add Flink 1.14 MigrationVersion

2021-09-30 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24417:


 Summary: Add Flink 1.14 MigrationVersion
 Key: FLINK-24417
 URL: https://issues.apache.org/jira/browse/FLINK-24417
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: Caizhi Weng
Assignee: Caizhi Weng
 Fix For: 1.14.0, 1.13.3


Currently the largest MigrationVersion is 1.12. We need newer versions to add 
more serializer compatibility tests.

As stated in 
[https://cwiki.apache.org/confluence/display/FLINK/Creating+a+Flink+Release#CreatingaFlinkRelease-Checklisttoproceedtothenextstep.1]
 this should be the work of the release manager.
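
For illustration, the change essentially amounts to adding a 1.14 constant to the enum. A minimal, self-contained sketch (the real MigrationVersion class in flink-test-utils may differ in shape; this only shows the kind of entry the ticket asks for):

{code}
// Self-contained sketch; the actual MigrationVersion enum may carry more logic.
public enum MigrationVersion {
    // ... earlier versions elided for brevity ...
    v1_12("1.12"),
    v1_14("1.14"); // new constant so serializer compatibility tests can target 1.14

    private final String versionStr;

    MigrationVersion(String versionStr) {
        this.versionStr = versionStr;
    }

    @Override
    public String toString() {
        return versionStr;
    }
}
{code}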



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24355) Expose the flag for enabling checkpoints after tasks finish in the Web UI

2021-09-22 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24355:


 Summary: Expose the flag for enabling checkpoints after tasks 
finish in the Web UI
 Key: FLINK-24355
 URL: https://issues.apache.org/jira/browse/FLINK-24355
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Configuration, Runtime / Web Frontend
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0, 1.14.1


We should present the value of 
{{execution.checkpointing.checkpoints-after-tasks-finish.enabled}} in the Web 
UI.
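
As a rough usage sketch (not part of the ticket itself), the flag can already be set programmatically; the Web UI change would surface this configured value. The option key is taken from this ticket, and the exact setter API may differ between versions:

{code}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointsAfterTasksFinishExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Option key as referenced in this ticket.
        conf.setString(
                "execution.checkpointing.checkpoints-after-tasks-finish.enabled", "true");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.enableCheckpointing(10_000L);

        env.fromElements(1, 2, 3).print();
        env.execute("checkpoints-after-tasks-finish demo");
    }
}
{code}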



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24348) KafkaTableITCase fail with "ContainerLaunch Container startup failed"

2021-09-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24348:


 Summary: KafkaTableITCase fail with "ContainerLaunch Container 
startup failed"
 Key: FLINK-24348
 URL: https://issues.apache.org/jira/browse/FLINK-24348
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Kafka
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24338=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=7140

{code}
Sep 21 02:44:33 org.testcontainers.containers.ContainerLaunchException: 
Container startup failed
Sep 21 02:44:33 at 
org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
Sep 21 02:44:33 at 
org.testcontainers.containers.KafkaContainer.doStart(KafkaContainer.java:97)
Sep 21 02:44:33 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase$1.doStart(KafkaTableTestBase.java:71)
Sep 21 02:44:33 at 
org.testcontainers.containers.GenericContainer.start(GenericContainer.java:315)
Sep 21 02:44:33 at 
org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1060)
Sep 21 02:44:33 at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:29)
Sep 21 02:44:33 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Sep 21 02:44:33 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Sep 21 02:44:33 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Sep 21 02:44:33 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Sep 21 02:44:33 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Sep 21 02:44:33 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
Sep 21 02:44:33 at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
Sep 21 02:44:33 at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
Sep 21 02:44:33 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
Sep 21 02:44:33 at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
Sep 21 02:44:33 at 
java.util.Iterator.forEachRemaining(Iterator.java:116)
Sep 21 02:44:33 at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
Sep 21 02:44:33 at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
Sep 21 02:44:33 at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
Sep 21 02:44:33 at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
Sep 21 02:44:33 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
Sep 21 02:44:33 at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
Sep 21 02:44:33 at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
Sep 21 02:44:33 at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:82)
Sep 21 02:44:33 at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:73)
Sep 21 02:44:33 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
Sep 21 02:44:33 at 
org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
Sep 21 02:44:33 at 
org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
Sep 21 02:44:33 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
Sep 21 02:44:33 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
Sep 21 02:44:33 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
Sep 21 02:44:33 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
Sep 21 02:44:33 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Sep 21 02:44:33 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Sep 21 02:44:33 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Sep 21 02:44:33 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Sep 21 02:44:33 Caused by: org.rnorth.ducttape.RetryCountExceededException: 
Retry limit hit with exception
Sep 21 02:44:33 at 
org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
Sep 21 02:44:33 at 

[jira] [Created] (FLINK-24280) Support manual checkpoints triggering from a MiniCluster

2021-09-14 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24280:


 Summary: Support manual checkpoints triggering from a MiniCluster
 Key: FLINK-24280
 URL: https://issues.apache.org/jira/browse/FLINK-24280
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Checkpointing, Runtime / Coordination
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


The goal is to be able to trigger checkpoints manually at a desired time. The 
intention is to use it in tests. We do not want to make this a user-facing 
feature.
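
As a sketch of the intended test-side usage: the trigger call below is hypothetical (it is exactly what this ticket proposes to add), while the MiniCluster setup follows the existing API:

{code}
import org.apache.flink.api.common.JobID;
import org.apache.flink.runtime.minicluster.MiniCluster;
import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;

public class ManualCheckpointTriggerSketch {
    public static void main(String[] args) throws Exception {
        MiniClusterConfiguration config =
                new MiniClusterConfiguration.Builder()
                        .setNumTaskManagers(1)
                        .setNumSlotsPerTaskManager(1)
                        .build();

        try (MiniCluster miniCluster = new MiniCluster(config)) {
            miniCluster.start();

            // A test would submit its JobGraph here and keep the resulting JobID.
            JobID jobId = new JobID();

            // Hypothetical API proposed by this ticket: trigger a checkpoint on demand
            // at a well-defined point in the test instead of waiting for the
            // periodic checkpoint interval.
            // miniCluster.triggerCheckpoint(jobId);
        }
    }
}
{code}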



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24270) Rewrite tests for illegal job modification against VertexFinishedStateChecker

2021-09-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24270:


 Summary: Rewrite tests for illegal job modification against 
VertexFinishedStateChecker
 Key: FLINK-24270
 URL: https://issues.apache.org/jira/browse/FLINK-24270
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


From https://github.com/apache/flink/pull/16655#issuecomment-899603149:

All the tests that check for illegal JobGraph modifications are written against 
the CheckpointCoordinator, when they could be written directly against the 
VertexFinishedStateChecker. That would make the tests more targeted, aimed only 
at the component that actually contains the logic, and it would reduce test 
maintenance when the CheckpointCoordinator changes later.

It is probably a good idea to have two tests against the Scheduler that validate 
that the modification checks happen at the right points, something like 
testJobGraphModificationsAreCheckedForInitialSavepoint() and 
testJobGraphModificationsAreCheckedForInitialCheckpoint().

Then we need no dedicated tests against the CheckpointCoordinator regarding 
illegal job upgrades. That makes sense, because handling this is the 
responsibility of the Scheduler and the VertexFinishedStateChecker; the 
CheckpointCoordinator only connects the two and forwards the calls.
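
A bare skeleton of the two scheduler-level tests suggested above could look like the following (bodies are placeholders; the concrete scheduler test harness is outside the scope of this description):

{code}
import org.junit.Test;

public class SchedulerFinishedStateCheckTest {

    @Test
    public void testJobGraphModificationsAreCheckedForInitialSavepoint() throws Exception {
        // Restore a job from a savepoint with an illegally modified JobGraph and
        // assert that the VertexFinishedStateChecker rejects the modification.
    }

    @Test
    public void testJobGraphModificationsAreCheckedForInitialCheckpoint() throws Exception {
        // Same as above, but restoring from a retained (initial) checkpoint.
    }
}
{code}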



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24269) Rename methods around final checkpoints

2021-09-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24269:


 Summary: Rename methods around final checkpoints
 Key: FLINK-24269
 URL: https://issues.apache.org/jira/browse/FLINK-24269
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Dawid Wysakowicz
 Fix For: 1.15.0


We should rename:
* {{TaskStateSnapshot.isFinishedOnRestore()}} to {{isTaskDeployedAsFinished}}
* {{TaskStateSnapshot.isOperatorsFinished()}} to {{isTaskFinished}}
* {{PendingCheckpoint#updateNonFinishedOnRestoreOperatorState}} to 
{{updateOperatorState}}

For context see: 
https://github.com/apache/flink/pull/16655#issuecomment-899603149



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24264) Streaming File Sink s3 end-to-end test fail with output mismatch

2021-09-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24264:


 Summary: Streaming File Sink s3 end-to-end test fail with output 
mismatch
 Key: FLINK-24264
 URL: https://issues.apache.org/jira/browse/FLINK-24264
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem, Tests
Affects Versions: 1.14.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23962=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=11428

{code}
Sep 13 04:56:55 Job (3a7555e3ea003b5f2be486786d66197d) reached terminal state 
CANCELED
Sep 13 04:56:56 FAIL File Streaming Sink: Output hash mismatch.  Got 
e700d1164e7deb36fbbd95c38ff62897, expected 6727342fdd3aae2129e61fc8f433fb6f.
Sep 13 04:56:56 head hexdump of actual:
Sep 13 04:56:56 000   E   r   r   o   r   e   x   e   c   u   t   i   n 
  g
Sep 13 04:56:56 010   a   w   s   c   o   m   m   a   n   d   :   s 
  3
Sep 13 04:56:56 020   c   p   -   -   q   u   i   e   t   s   3   : 
  /   /
Sep 13 04:56:56 030   f   l   i   n   k   -   i   n   t   e   g   r   a   t 
  i   o
Sep 13 04:56:56 040   n   -   t   e   s   t   s   /   t   e   m   p   /   t 
  e   s
Sep 13 04:56:56 050   t   _   f   i   l   e   _   s   i   n   k   -   6   8 
  4   6
Sep 13 04:56:56 060   c   8   2   1   -   3   b   5   8   -   4   c   6   6 
  -   b
Sep 13 04:56:56 070   0   e   f   -   3   2   0   3   2   a   7   3   4   4 
  1   3
Sep 13 04:56:56 080   /   h   o   s   t   d   i   r   /   /   t   e   m 
  p   -
Sep 13 04:56:56 090   t   e   s   t   -   d   i   r   e   c   t   o   r   y 
  -   4
Sep 13 04:56:56 0a0   1   9   6   9   4   8   8   4   2   0   /   t   e   m 
  p   /
Sep 13 04:56:56 0b0   t   e   s   t   _   f   i   l   e   _   s   i   n   k 
  -   6
Sep 13 04:56:56 0c0   8   4   6   c   8   2   1   -   3   b   5   8   -   4 
  c   6
Sep 13 04:56:56 0d0   6   -   b   0   e   f   -   3   2   0   3   2   a   7 
  3   4
Sep 13 04:56:56 0e0   4   1   3   -   -   e   x   c   l   u   d   e 
  '   *
Sep 13 04:56:56 0f0   '   -   -   i   n   c   l   u   d   e   '   * 
  /   p
Sep 13 04:56:56 100   a   r   t   -   [   !   /   ]   *   '   -   -   r 
  e   c
Sep 13 04:56:56 110   u   r   s   i   v   e  \n 
   
Sep 13 04:56:56 117
Sep 13 04:56:56 Stopping job timeout watchdog (with pid=408779)
rm: cannot remove 
'/home/vsts/work/1/s/flink-dist/target/flink-1.14-SNAPSHOT-bin/flink-1.14-SNAPSHOT/lib/flink-shaded-netty-tcnative-static-*.jar':
 No such file or directory

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24213) Java deadlock in QueryableState ClientTest

2021-09-08 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24213:


 Summary: Java deadlock in QueryableState ClientTest
 Key: FLINK-24213
 URL: https://issues.apache.org/jira/browse/FLINK-24213
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Queryable State
Affects Versions: 1.15.0
Reporter: Dawid Wysakowicz


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23750=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=15476

{code}
 Found one Java-level deadlock:
Sep 08 11:12:50 =
Sep 08 11:12:50 "Flink Test Client Event Loop Thread 0":
Sep 08 11:12:50   waiting to lock monitor 0x7f4e380309c8 (object 
0x86b2cd50, a java.lang.Object),
Sep 08 11:12:50   which is held by "main"
Sep 08 11:12:50 "main":
Sep 08 11:12:50   waiting to lock monitor 0x7f4ea4004068 (object 
0x86b2cf50, a java.lang.Object),
Sep 08 11:12:50   which is held by "Flink Test Client Event Loop Thread 0"

{code}
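
The dump shows a classic lock-ordering deadlock: each thread holds one monitor and waits for the one held by the other. As a generic, self-contained illustration of that pattern (not Flink code):

{code}
public class LockOrderDeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // Thread 1 takes A then B; thread 2 takes B then A. If both acquire their
        // first lock before either gets the second, each waits forever for the
        // monitor held by the other thread.
        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {
                sleep(100);
                synchronized (LOCK_B) {
                    System.out.println("t1 got both locks");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {
                sleep(100);
                synchronized (LOCK_A) {
                    System.out.println("t2 got both locks");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{code}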



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

