[GitHub] [flink] flinkbot edited a comment on pull request #13498: [FLINK-19430][docs-zh] Translate page datastream_tutorial into Chinese

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13498:
URL: https://github.com/apache/flink/pull/13498#issuecomment-699842299


   
   ## CI report:
   
   * 84117cb7218b909348d276f2803bdb11c8a81ba9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7019)
 
   * 36e941c325c99378eefbc97d31138fc418ea72fb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19471) CVE-2020-7712 is reported for flink-streaming-java_2.11:jar:1.11.1

2020-09-29 Thread Jeff Hu (Jira)
Jeff Hu created FLINK-19471:
---

 Summary: CVE-2020-7712 is reported for 
flink-streaming-java_2.11:jar:1.11.1
 Key: FLINK-19471
 URL: https://issues.apache.org/jira/browse/FLINK-19471
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.11.1
Reporter: Jeff Hu


flink-shaded-zookeeper-3-3.4.14-11.0.jar 
(pkg:maven/org.apache.flink/flink-shaded-zookeeper-3@3.4.14-11.0, 
cpe:2.3:a:apache:flink:3.4.14.11.0:*:*:*:*:*:*:*, 
cpe:2.3:a:apache:zookeeper:3.4.14.11.0:*:*:*:*:*:*:*) : CVE-2020-7712
zookeeper-3.4.14.jar (pkg:maven/org.apache.zookeeper/zookeeper@3.4.14, 
cpe:2.3:a:apache:zookeeper:3.4.14:*:*:*:*:*:*:*) : CVE-2020-7712

 

 

 

[INFO] +- org.apache.flink:flink-streaming-java_2.11:jar:1.11.1:provided
[INFO] | +- org.apache.flink:flink-runtime_2.11:jar:1.11.1:compile
[INFO] | | +- org.apache.flink:flink-hadoop-fs:jar:1.11.1:compile
[INFO] | | +- org.apache.flink:flink-shaded-jackson:jar:2.10.1-11.0:compile
[INFO] | | +- org.apache.flink:flink-shaded-zookeeper-3:jar:3.4.14-11.0:compile

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangxlong commented on pull request #13498: [FLINK-19430][docs-zh] Translate page datastream_tutorial into Chinese

2020-09-29 Thread GitBox


wangxlong commented on pull request #13498:
URL: https://github.com/apache/flink/pull/13498#issuecomment-701166463


   @dianfu Thank you for the reminder and the patient review. Updated~







[GitHub] [flink] hehuiyuan commented on pull request #13494: [kafka connector] Fix cast issue for the properties() method in kafka ConnectorDescriptor

2020-09-29 Thread GitBox


hehuiyuan commented on pull request #13494:
URL: https://github.com/apache/flink/pull/13494#issuecomment-701166109


   @flinkbot 







[GitHub] [flink] hehuiyuan commented on pull request #13495: [Test] Make the demo case easier to understand for ReinterpretAsKeyedStream

2020-09-29 Thread GitBox


hehuiyuan commented on pull request #13495:
URL: https://github.com/apache/flink/pull/13495#issuecomment-701164789


   @flinkbot test







[jira] [Updated] (FLINK-19386) Remove legacy table planner dependency from connectors and formats

2020-09-29 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang updated FLINK-19386:
-
Description: 
Replacing
{code:java}
org.apache.flink.table.utils.JavaScalaConversionUtil 
{code}
with
{code:java}
org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
 {code}
in `HiveCatalogUseBlinkITCase`.

The first is in the legacy planner, the second in the blink planner. We'd
better use the class from the blink planner.

  was:
Replacing

 
{code:java}
org.apache.flink.table.utils.JavaScalaConversionUtil 
{code}
with

 

 
{code:java}
org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
{code}
 

in `HiveCatalogUseBlinkITCase`.

The first is in the legacy planner, the second in the blink planner. We'd
better use the class from the blink planner.


> Remove legacy table planner dependency from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> Replacing
> {code:java}
> org.apache.flink.table.utils.JavaScalaConversionUtil 
> {code}
> with
> {code:java}
> org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
>  {code}
> in `HiveCatalogUseBlinkITCase`.
> The first is in the legacy planner, the second in the blink planner. We'd
> better use the class from the blink planner.





[jira] [Updated] (FLINK-19386) Remove legacy table planner dependency from connectors and formats

2020-09-29 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang updated FLINK-19386:
-
Description: 
Replacing

 
{code:java}
org.apache.flink.table.utils.JavaScalaConversionUtil 
{code}
with

 

 
{code:java}
org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
{code}
 

in `HiveCatalogUseBlinkITCase`.

The first is in the legacy planner, the second in the blink planner. We'd
better use the class from the blink planner.

  was:
I found that some connectors depend on the legacy table planner and use it in
test classes.

For example, `HiveCatalogUseBlinkITCase` uses `JavaScalaConversionUtil` from
the legacy table planner, not the blink planner. Since FLIP-32 has been
accepted, I think we can remove the dependency on the legacy table planner
from connectors and formats.


> Remove legacy table planner dependency from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> Replacing
>  
> {code:java}
> org.apache.flink.table.utils.JavaScalaConversionUtil 
> {code}
> with
>  
>  
> {code:java}
> org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
> {code}
>  
> in `HiveCatalogUseBlinkITCase`.
> The first is in the legacy planner, the second in the blink planner. We'd
> better use the class from the blink planner.





[jira] [Commented] (FLINK-19386) Remove legacy table planner dependency from connectors and formats

2020-09-29 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204455#comment-17204455
 ] 

hailong wang commented on FLINK-19386:
--

Thank you [~jark]. I understand now~

Yes, my first thought was just to replace {{JavaScalaConversionUtil}}. Then I
thought this might be a common problem.

I will open a PR to replace {{JavaScalaConversionUtil}}.

> Remove legacy table planner dependency from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> I found that some connectors depend on the legacy table planner and use it
> in test classes.
> For example, `HiveCatalogUseBlinkITCase` uses `JavaScalaConversionUtil` from
> the legacy table planner, not the blink planner. Since FLIP-32 has been
> accepted, I think we can remove the dependency on the legacy table planner
> from connectors and formats.





[jira] [Updated] (FLINK-19386) Using blink planner util class in

2020-09-29 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang updated FLINK-19386:
-
Summary: Using blink planner util class in   (was: Remove legacy table 
planner dependency from connectors and formats)

> Using blink planner util class in 
> --
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> Replacing
> {code:java}
> org.apache.flink.table.utils.JavaScalaConversionUtil 
> {code}
> with
> {code:java}
> org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
>  {code}
> in `HiveCatalogUseBlinkITCase`.
> The first is in the legacy planner, the second in the blink planner. We'd
> better use the class from the blink planner.





[jira] [Updated] (FLINK-19386) Using blink planner util class in HiveCatalogUseBlinkITCase

2020-09-29 Thread hailong wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hailong wang updated FLINK-19386:
-
Summary: Using blink planner util class in  HiveCatalogUseBlinkITCase  
(was: Using blink planner util class in )

> Using blink planner util class in  HiveCatalogUseBlinkITCase
> 
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> Replacing
> {code:java}
> org.apache.flink.table.utils.JavaScalaConversionUtil 
> {code}
> with
> {code:java}
> org.apache.flink.table.planner.utils.JavaScalaConversionUtil 
>  {code}
> in `HiveCatalogUseBlinkITCase`.
> The first is in the legacy planner, the second in the blink planner. We'd
> better use the class from the blink planner.





[GitHub] [flink] flinkbot edited a comment on pull request #13516: [FLINK-19453][table-common] Deprecate old source and sink interfaces

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13516:
URL: https://github.com/apache/flink/pull/13516#issuecomment-701152610


   
   ## CI report:
   
   * 71ea6811ae9194c35241a225d105c636e515d653 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7106)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-19249) Job would wait sometime(~10 min) before failover if some connection broken

2020-09-29 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204453#comment-17204453
 ] 

Zhijiang commented on FLINK-19249:
--

Thanks for reporting this [~klion26], and thanks for all the above discussions.

I am curious why the downstream side only becomes aware of this problem after
ten minutes in the network stack.
As we know, when the netty server (upstream) detects a physical network
problem, it does the following two things:

* Sends the ErrorResponse message to the netty client (downstream);
* Closes the channel explicitly on its side after sending the above message.

So the downstream side actually relies on two mechanisms for failure detection
and handling:
* The logical ErrorResponse message from the upstream side: if the downstream
can receive it from the network, it fails itself.
* The physical kernel mechanism: when the upstream closes its local channel,
the downstream side also detects the inactive channel after some time (the TCP
mechanism) and then fails itself, e.g. via `#handler#channelInactive`.

If these two mechanisms are not always reliable in a bad network environment,
or are delayed by the kernel's default settings, then we might provide another
application-level mechanism to resolve this safely. One previously discussed
option is to let the upstream report the network exception to the JobManager
via RPC, so that the manager can decide to cancel/fail the related tasks.

Regarding the other options such as `ReadTimeoutHandler`/`IdleStateHandler`, I
wonder whether they might bring other side effects; they are also not always
reliable and are limited by the network stack.
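The application-level detection idea discussed above can be sketched with a
minimal, self-contained watchdog. This is an illustrative sketch only, using
plain JDK types; `IdleWatchdog` and its methods are hypothetical names, not
classes from the Flink or Netty codebases (in a real Netty pipeline,
`IdleStateHandler` plays this role):

```java
import java.util.concurrent.TimeUnit;

// Hypothetical idle watchdog: records the last time any data arrived on a
// channel and reports the channel as suspect once a configured deadline
// passes, instead of waiting for the kernel's TCP timeout (which can take
// many minutes on a silently broken connection).
class IdleWatchdog {
    private final long timeoutNanos;
    private volatile long lastActivityNanos;

    IdleWatchdog(long timeout, TimeUnit unit) {
        this.timeoutNanos = unit.toNanos(timeout);
        this.lastActivityNanos = System.nanoTime();
    }

    // Called whenever a message or heartbeat arrives from the peer.
    void onDataReceived() {
        lastActivityNanos = System.nanoTime();
    }

    // True once no data has been seen for longer than the timeout; the
    // caller could then fail the channel proactively or report to the
    // JobManager, as discussed above.
    boolean isIdleExpired(long nowNanos) {
        return nowNanos - lastActivityNanos > timeoutNanos;
    }
}
```

A periodic task (e.g. on a `ScheduledExecutorService`) would call
`isIdleExpired(System.nanoTime())` and trigger failover when it returns true,
independent of TCP-level detection.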

> Job would wait sometime(~10 min) before failover if some connection broken
> --
>
> Key: FLINK-19249
> URL: https://issues.apache.org/jira/browse/FLINK-19249
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Congxian Qiu(klion26)
>Priority: Blocker
> Fix For: 1.12.0, 1.11.3
>
>
> {quote}encountered this error on 1.7, after going through the master code, I 
> think the problem is still there
> {quote}
> When the network environment is not so good, the connection between the
> server and the client may be broken unexpectedly. After the
> disconnection, the server will receive an IOException such as the one below
> {code:java}
> java.io.IOException: Connection timed out
>  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>  at sun.nio.ch.IOUtil.write(IOUtil.java:51)
>  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:468)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:403)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:367)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:639)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
> and then releases the view reader.
> But the job does not fail until the downstream detects the disconnection
> via {{channelInactive}} later (~10 min). During that time, the job can
> still process data, but the broken channel cannot transfer any data or
> events, so snapshots fail during this period. This causes the job to
> replay a lot of data after failover.





[GitHub] [flink] flinkbot commented on pull request #13516: [FLINK-19453][table-common] Deprecate old source and sink interfaces

2020-09-29 Thread GitBox


flinkbot commented on pull request #13516:
URL: https://github.com/apache/flink/pull/13516#issuecomment-701152610


   
   ## CI report:
   
   * 71ea6811ae9194c35241a225d105c636e515d653 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] twalthr commented on a change in pull request #13507: [FLINK-19231][python] Support ListState and ListView for Python UDAF.

2020-09-29 Thread GitBox


twalthr commented on a change in pull request #13507:
URL: https://github.com/apache/flink/pull/13507#discussion_r497238937



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/common/CommonPythonAggregate.scala
##
@@ -65,18 +72,69 @@ trait CommonPythonAggregate extends CommonPythonBase {
 * For streaming execution we extract the PythonFunctionInfo from 
AggregateInfo.
 */
   protected def extractPythonAggregateFunctionInfosFromAggregateInfo(
-  pythonAggregateInfo: AggregateInfo): PythonFunctionInfo = {
+  aggIndex: Int,
+  pythonAggregateInfo: AggregateInfo): (PythonFunctionInfo, 
Array[DataViewSpec]) = {
 pythonAggregateInfo.function match {
   case function: PythonFunction =>
-new PythonFunctionInfo(
-  function,
-  pythonAggregateInfo.argIndexes.map(_.asInstanceOf[AnyRef]))
+(
+  new PythonFunctionInfo(
+function,
+pythonAggregateInfo.argIndexes.map(_.asInstanceOf[AnyRef])),
+  extractDataViewSpecs(
+aggIndex,
+function.asInstanceOf[PythonAggregateFunction].getAccumulatorType)
+)
   case _: Count1AggFunction =>
 // The count star will be treated specially in Python UDF worker
-PythonFunctionInfo.DUMMY_PLACEHOLDER
+(PythonFunctionInfo.DUMMY_PLACEHOLDER, Array())
   case _ =>
 throw new TableException(
   "Unsupported python aggregate function: " + 
pythonAggregateInfo.function)
 }
   }
+
+  protected def extractDataViewSpecs(
+  index: Int,
+  accType: TypeInformation[_]): Array[DataViewSpec] = {
+if (!accType.isInstanceOf[CompositeType[_]]) {
+  return Array()
+}
+
+def includesDataView(ct: CompositeType[_]): Boolean = {
+  (0 until ct.getArity).exists(i =>
+ct.getTypeAt(i) match {
+  case nestedCT: CompositeType[_] => includesDataView(nestedCT)
+  case t: TypeInformation[_] if t.getTypeClass == classOf[ListView[_]] 
=> true
+  case _ => false
+}
+  )
+}
+
+if (includesDataView(accType.asInstanceOf[CompositeType[_]])) {
+  accType match {
+case rowType: RowTypeInfo =>
+(0 until rowType.getArity).flatMap(i => {
+  rowType.getFieldTypes()(i) match {
+case ct: CompositeType[_] if includesDataView(ct) =>
+  throw new TableException(
+"For Python AggregateFunction DataView only supported at 
first " +
+  "level of accumulators of Row type.")
+case listView: ListViewTypeInfo[_] =>

Review comment:
   Side comment: This class is deprecated. Please don't use it in new code 
anymore. Otherwise we will never be able to drop it any time soon.
   
   Same for `TypeConversions.fromLegacyInfoToDataType`: ideally we should get 
rid of all these calls as quickly as possible. I'm trying to remove as many of 
these calls as possible, but if new calls are added in PRs this becomes an 
endless effort.









[GitHub] [flink] flinkbot commented on pull request #13516: [FLINK-19453][table-common] Deprecate old source and sink interfaces

2020-09-29 Thread GitBox


flinkbot commented on pull request #13516:
URL: https://github.com/apache/flink/pull/13516#issuecomment-701149527


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 71ea6811ae9194c35241a225d105c636e515d653 (Wed Sep 30 
04:28:31 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] twalthr opened a new pull request #13516: [FLINK-19453][table-common] Deprecate old source and sink interfaces

2020-09-29 Thread GitBox


twalthr opened a new pull request #13516:
URL: https://github.com/apache/flink/pull/13516


   ## What is the purpose of the change
   
   Deprecates all old source and sink interfaces and their helper interfaces 
that are not required in FLIP-95.
   
   ## Brief change log
   
   Deprecation annotation and comment added.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: yes
 - The serializers: no
 - The runtime per-record code paths (performance sensitive):no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper:  no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? JavaDocs
   







[jira] [Updated] (FLINK-19453) Deprecate old source and sink interfaces

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19453:
---
Labels: pull-request-available  (was: )

> Deprecate old source and sink interfaces
> 
>
> Key: FLINK-19453
> URL: https://issues.apache.org/jira/browse/FLINK-19453
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
>
> Deprecate all interfaces and classes that are not necessary anymore with 
> FLIP-95.





[GitHub] [flink] twalthr commented on pull request #13516: [FLINK-19453][table-common] Deprecate old source and sink interfaces

2020-09-29 Thread GitBox


twalthr commented on pull request #13516:
URL: https://github.com/apache/flink/pull/13516#issuecomment-701149021


   CC @wuchong 







[GitHub] [flink] flinkbot edited a comment on pull request #13515: [FLINK-19470][orc][parquet] Parquet and ORC reader reachEnd returns f…

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13515:
URL: https://github.com/apache/flink/pull/13515#issuecomment-701143749


   
   ## CI report:
   
   * 5f8c701af43074f17ce503aa2497d2f91ed56c50 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7105)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17331) Add NettyMessageContent interface for all the class which could be write to NettyMessage

2020-09-29 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204445#comment-17204445
 ] 

Zhijiang commented on FLINK-17331:
--

After discussing offline with [~karmagyz], I am aware of the motivation now.

When writing the header of some NettyMessage instances into the network stack,
e.g. BufferResponse, the lengths of the involved fields are hard-coded inside
NettyMessage. So if we change such a field, e.g. InputChannelID, we are not
made aware that the respective lengths in the related netty message components
also need to change. It would be better if we could introduce a dependency
between them, or detect the inconsistency early after such changes.

This improvement is not critical or urgent, so feel free to improve it a bit
whenever you want. [~karmagyz]

> Add NettyMessageContent interface for all the class which could be write to 
> NettyMessage
> 
>
> Key: FLINK-17331
> URL: https://issues.apache.org/jira/browse/FLINK-17331
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Minor
>
> Currently, there are some classes, e.g. {{JobVertexID}} and
> {{ExecutionAttemptID}}, that need to be written to {{NettyMessage}}. However,
> the sizes of these classes in {{ByteBuf}} are written directly in the
> {{NettyMessage}} class, which is error-prone. If someone edits those classes,
> there is no warning or error during the compile phase. I think it would be
> better to add a {{NettyMessageContent}} interface (the name could be
> discussed):
> {code:java}
> public interface NettyMessageContent {
> void writeTo(ByteBuf buf);
> int getContentLen();
> }
> {code}
> Regarding {{fromByteBuf}}, since it is a static method, we could not add it
> to the interface. We might explain it in the javadoc of
> {{NettyMessageContent}}.
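The intent of the proposed interface can be sketched with plain JDK types.
This is an illustrative sketch only: `java.nio.ByteBuffer` stands in for
Netty's `ByteBuf`, and `SampleId` is a hypothetical class, not one from the
Flink codebase:

```java
import java.nio.ByteBuffer;

// Sketch of the proposed contract: each message part declares its own
// serialized length, so the framing code no longer hard-codes field sizes.
interface NettyMessageContent {
    void writeTo(ByteBuffer buf);

    int getContentLen();
}

// Hypothetical 16-byte ID, similar in spirit to JobVertexID. If a field is
// added here, getContentLen() must be updated right next to writeTo(),
// keeping the declared length and the serialization logic consistent.
class SampleId implements NettyMessageContent {
    private final long upperPart;
    private final long lowerPart;

    SampleId(long upperPart, long lowerPart) {
        this.upperPart = upperPart;
        this.lowerPart = lowerPart;
    }

    @Override
    public void writeTo(ByteBuffer buf) {
        buf.putLong(upperPart);
        buf.putLong(lowerPart);
    }

    @Override
    public int getContentLen() {
        return 2 * Long.BYTES; // 16 bytes, matching writeTo() above
    }
}
```

A frame writer can then allocate `headerLen + content.getContentLen()` bytes
instead of a hard-coded constant, so editing an ID class cannot silently
desynchronize the message length.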





[jira] [Commented] (FLINK-16408) Bind user code class loader to lifetime of a slot

2020-09-29 Thread chouxu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204441#comment-17204441
 ] 

chouxu commented on FLINK-16408:


[~Echo Lee] Are you using the Flink REST API? I ran into the same phenomenon
when I used the Flink REST API to run a job by submitting it repeatedly
several times. My Flink cluster is 1.10, running on OpenJDK 8 with the
Parallel Garbage Collector.

> Bind user code class loader to lifetime of a slot
> -
>
> Key: FLINK-16408
> URL: https://issues.apache.org/jira/browse/FLINK-16408
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: Metaspace-OOM.png
>
>
> In order to avoid class leaks due to creating multiple user code class 
> loaders and loading class multiple times in a recovery case, I would suggest 
> to bind the lifetime of a user code class loader to the lifetime of a slot. 
> More precisely, the user code class loader should live at most as long as the 
> slot which is using it.





[jira] [Assigned] (FLINK-17331) Add NettyMessageContent interface for all the class which could be write to NettyMessage

2020-09-29 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-17331:


Assignee: Yangze Guo

> Add NettyMessageContent interface for all the class which could be write to 
> NettyMessage
> 
>
> Key: FLINK-17331
> URL: https://issues.apache.org/jira/browse/FLINK-17331
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Minor
>
> Currently, there are some classes, e.g. {{JobVertexID}} and
> {{ExecutionAttemptID}}, that need to be written to {{NettyMessage}}. However,
> the sizes of these classes in {{ByteBuf}} are written directly in the
> {{NettyMessage}} class, which is error-prone. If someone edits those classes,
> there is no warning or error during the compile phase. I think it would be
> better to add a {{NettyMessageContent}} interface (the name could be
> discussed):
> {code:java}
> public interface NettyMessageContent {
> void writeTo(ByteBuf buf);
> int getContentLen();
> }
> {code}
> Regarding {{fromByteBuf}}, since it is a static method, we could not add it
> to the interface. We might explain it in the javadoc of
> {{NettyMessageContent}}.





[jira] [Updated] (FLINK-17331) Add NettyMessageContent interface for all the class which could be write to NettyMessage

2020-09-29 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-17331:
-
Fix Version/s: (was: 1.12.0)

> Add NettyMessageContent interface for all the class which could be write to 
> NettyMessage
> 
>
> Key: FLINK-17331
> URL: https://issues.apache.org/jira/browse/FLINK-17331
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yangze Guo
>Priority: Minor
>
> Currently, there are some classes, e.g. {{JobVertexID}} and
> {{ExecutionAttemptID}}, that need to be written to {{NettyMessage}}. However,
> the sizes of these classes in {{ByteBuf}} are written directly in the
> {{NettyMessage}} class, which is error-prone. If someone edits those classes,
> there is no warning or error during the compile phase. I think it would be
> better to add a {{NettyMessageContent}} interface (the name could be
> discussed):
> {code:java}
> public interface NettyMessageContent {
> void writeTo(ByteBuf buf);
> int getContentLen();
> }
> {code}
> Regarding {{fromByteBuf}}, since it is a static method, we could not add it
> to the interface. We might explain it in the javadoc of
> {{NettyMessageContent}}.





[jira] [Assigned] (FLINK-19454) Translate page 'Importing Flink into an IDE' into Chinese

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-19454:
---

Assignee: wulei0302  (was: wulei)

> Translate page 'Importing Flink into an IDE' into Chinese
> -
>
> Key: FLINK-19454
> URL: https://issues.apache.org/jira/browse/FLINK-19454
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: wulei0302
>Assignee: wulei0302
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/flinkDev/ide_setup.html]
> The markdown file is located in {{flink/docs/flinkDev/ide_setup.zh.md}}





[jira] [Assigned] (FLINK-19454) Translate page 'Importing Flink into an IDE' into Chinese

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-19454:
---

Assignee: wulei

> Translate page 'Importing Flink into an IDE' into Chinese
> -
>
> Key: FLINK-19454
> URL: https://issues.apache.org/jira/browse/FLINK-19454
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: wulei0302
>Assignee: wulei
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/flinkDev/ide_setup.html]
> The markdown file is located in {{flink/docs/flinkDev/ide_setup.zh.md}}





[jira] [Commented] (FLINK-19456) sql client execute insert sql with comment ahead

2020-09-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204440#comment-17204440
 ] 

Jark Wu commented on FLINK-19456:
-

+1 for this.

> sql client execute insert sql with comment ahead
> 
>
> Key: FLINK-19456
> URL: https://issues.apache.org/jira/browse/FLINK-19456
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
> Environment: flink-1.11.2-bin-scala_2.11
> standalone cluster
> bin/sql-client.sh embedded
>Reporter: ledong Lin
>Priority: Major
>
> *Environment*: standalone cluster
>  *Step in sql client*:
> {code:java}
> Flink SQL> create table a( a string) with ( 'connector'='print');
>  [INFO] Table has been created.
> Flink SQL> -- test
>  > insert into a
>  > select date_format(current_timestamp, 'MMdd');
>  [INFO] Submitting SQL update statement to the cluster...
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an 
> issue.Exception in thread "main" 
> org.apache.flink.table.client.SqlClientException: Unexpected exception. This 
> is a bug. Please consider filing an issue. at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)Caused by: 
> java.lang.NullPointerException at 
> org.apache.flink.table.client.cli.CliClient.callInsert(CliClient.java:598) at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:315) 
> at java.util.Optional.ifPresent(Optional.java:159) at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:212) at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:142) at 
> org.apache.flink.table.client.SqlClient.start(SqlClient.java:114) at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
> Shutting down the session...done.
> {code}
> And the `_-- test_` is the cause of the problem. 
> *error info in _flink-llin-sql-client-bigdata00.log_*
> {code:java}
> 2020-09-29 22:10:54,325 ERROR org.apache.flink.table.client.SqlClient 
>  [] - SQL Client must stop. Unexpected exception. This is a bug. 
> Please consider filing an issue.java.lang.NullPointerException: null
>     at 
> org.apache.flink.table.client.cli.CliClient.callInsert(CliClient.java:598) 
> ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
>     at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:315) 
> ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
>     at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_202]    at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:212) 
> ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:142) 
> ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:114) 
> ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201) 
> [flink-sql-client_2.11-1.11.2.jar:1.11.2]  
> {code}
>  





[jira] [Updated] (FLINK-19456) sql client execute insert sql with comment ahead

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19456:

Priority: Major  (was: Minor)

> sql client execute insert sql with comment ahead
> 
>
> Key: FLINK-19456
> URL: https://issues.apache.org/jira/browse/FLINK-19456
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
> Environment: flink-1.11.2-bin-scala_2.11
> standalone cluster
> bin/sql-client.sh embedded
>Reporter: ledong Lin
>Priority: Major
>





[jira] [Updated] (FLINK-19456) sql client execute insert sql with comment ahead

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19456:

Issue Type: Improvement  (was: Bug)

> sql client execute insert sql with comment ahead
> 
>
> Key: FLINK-19456
> URL: https://issues.apache.org/jira/browse/FLINK-19456
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
> Environment: flink-1.11.2-bin-scala_2.11
> standalone cluster
> bin/sql-client.sh embedded
>Reporter: ledong Lin
>Priority: Major
>





[jira] [Updated] (FLINK-19456) sql client execute insert sql with comment ahead

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19456:

Labels:   (was: Bug)

> sql client execute insert sql with comment ahead
> 
>
> Key: FLINK-19456
> URL: https://issues.apache.org/jira/browse/FLINK-19456
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
> Environment: flink-1.11.2-bin-scala_2.11
> standalone cluster
> bin/sql-client.sh embedded
>Reporter: ledong Lin
>Priority: Minor
>





[jira] [Commented] (FLINK-19452) statistics of group by CDC data is always 1

2020-09-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204437#comment-17204437
 ] 

Jark Wu commented on FLINK-19452:
-

[~tinny], could you share the reason here? I think it would be helpful for users who 
run into similar problems. 
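For readers hitting the same symptom, the expected arithmetic can be simulated with a toy retract-stream counter (plain Java, unrelated to Flink internals): each source UPDATE becomes a retraction followed by a re-addition per affected row, so the grouped count should end at 3, not reset to 1.

```java
import java.util.HashMap;
import java.util.Map;

public class ChangelogCountSketch {
    // Applies a changelog of +1/-1 deltas for one group key and returns the
    // final count. An UPDATE on the source emits a (-U, +U) pair per affected
    // row, i.e. a -1 followed by a +1.
    static long countAfterUpdate(int initialInserts, int updatedRows) {
        Map<Integer, Long> counts = new HashMap<>();
        int key = 10001;
        for (int i = 0; i < initialInserts; i++) {
            counts.merge(key, 1L, Long::sum);   // +I
        }
        for (int i = 0; i < updatedRows; i++) {
            counts.merge(key, -1L, Long::sum);  // -U retracts one row
            counts.merge(key, 1L, Long::sum);   // +U re-adds it
        }
        return counts.get(key);
    }

    public static void main(String[] args) {
        // Three inserts for order_number = 10001, then an UPDATE touching all
        // three rows: the count stays 3 instead of collapsing to 1.
        System.out.println(countAfterUpdate(3, 3)); // 3
    }
}
```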

> statistics of group by CDC data is always 1
> ---
>
> Key: FLINK-19452
> URL: https://issues.apache.org/jira/browse/FLINK-19452
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Zhengchao Shi
>Priority: Major
> Fix For: 1.12.0
>
>
> When using CDC to do count statistics, if only updates are made to the source 
> table (MySQL table), then the value of the count is always 1.
> {code:sql}
> CREATE TABLE orders (
>   order_number int,
>   product_id   int
> ) with (
>   'connector' = 'kafka-0.11',
>   'topic' = 'Topic',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'properties.group.id' = 'GroupId',
>   'scan.startup.mode' = 'latest-offset',
>   'format' = 'canal-json'
> );
> CREATE TABLE order_test (
>   order_number int,
>   order_cnt bigint
> ) WITH (
>   'connector' = 'print'
> );
> INSERT INTO order_test
> SELECT order_number, count(1) FROM orders GROUP BY order_number;
> {code}
> 3 records in  “orders” :
> ||order_number||product_id||
> |10001|1|
> |10001|2|
> |10001|3|
>  now update orders table:
> {code:sql}
> update orders set product_id = 5 where order_number = 10001;
> {code}
> the output is:
> -D(10001,1)
>  +I(10001,1)
>  -D(10001,1)
>  +I(10001,1)
>  -D(10001,1)
>  +I(10001,1)
> I think the final result should be +I(10001, 3)





[jira] [Commented] (FLINK-19386) Remove legacy table planner dependecy from connectors and formats

2020-09-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204438#comment-17204438
 ] 

Jark Wu commented on FLINK-19386:
-

Hi [~hailong wang], thank you for sharing your thoughts. However, removing the 
legacy planner dependency is not possible until we have removed the legacy 
planner itself, so I would suggest keeping these dependencies for now. Removing 
the legacy planner is on the roadmap, but we need to solve some blocker 
problems first, e.g. FLIP-129, FLIP-134, FLIP-140, FLIP-136, etc.

From the issue description, I thought you just wanted to replace 
{{JavaScalaConversionUtil}}. I would +1 for that. But I don't think it's time 
to remove the dependencies from connectors and formats. 

> Remove legacy table planner dependecy from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> I found that some connectors depend on the legacy table planner and use it in 
> test classes.
> For example, `HiveCatalogUseBlinkITCase` uses `JavaScalaConversionUtil` from the 
> legacy table planner, not the Blink planner. Since FLIP-32 has been accepted, I 
> think we can remove the legacy table planner dependency from connectors and formats.





[GitHub] [flink] dianfu commented on a change in pull request #13498: [FLINK-19430][docs-zh] Translate page datastream_tutorial into Chinese

2020-09-29 Thread GitBox


dianfu commented on a change in pull request #13498:
URL: https://github.com/apache/flink/pull/13498#discussion_r497227841



##
File path: docs/dev/python/datastream_tutorial.zh.md
##
@@ -22,76 +22,77 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink offers a DataStream API for building robust, stateful streaming 
applications. It provides fine-grained control over state and time, which 
allows for the implementation of advanced event-driven systems. In this 
step-by-step guide, you’ll learn how to build a simple streaming application 
with PyFlink and the DataStream API.
+Apache Flink 提供了 DataStream 
API,用于构建健壮的、有状态的流式应用程序。它提供了对状态和时间细粒度控制,从而允许实现高级事件驱动系统。
+在这篇教程中,你将学习如何使用 PyFlink 和 DataStream API 构建一个简单的流式应用程序。
 
 * This will be replaced by the TOC
 {:toc}
 
-## What Will You Be Building? 
+## 你将构建什么?
 
-In this tutorial, you will learn how to write a simple Python DataStream job.
-The pipeline will read data from a non-empty collection and write the results 
to the local file system.
+在本教程中,你将学习如何编写一个简单的 Python DataStream 作业。
+例子是从非空集合中读取数据,并将结果写入本地文件系统。
 
-## Prerequisites
+## 先决条件
 
-This walkthrough assumes that you have some familiarity with Python, but you 
should be able to follow along even if you come from a different programming 
language.
+本教程假设你对 Python 有一定的熟悉,但是即使你使用的是不同编程语言,你也应该能够学会。
 
-## Help, I’m Stuck! 
+## 帮助,我很困惑!
 
-If you get stuck, check out the [community support 
resources](https://flink.apache.org/zh/community.html).
-In particular, Apache Flink's [user mailing 
list](https://flink.apache.org/zh/community.html#mailing-lists) consistently 
ranks as one of the most active of any Apache project and a great way to get 
help quickly. 
+如果你有疑惑,可以查阅 [community support 
resources](https://flink.apache.org/zh/community.html)。

Review comment:
   ```suggestion
   如果你有疑惑,可以查阅 [社区支持资源](https://flink.apache.org/zh/community.html)。
   ```

##
File path: docs/dev/python/datastream_tutorial.zh.md
##
@@ -22,76 +22,77 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink offers a DataStream API for building robust, stateful streaming 
applications. It provides fine-grained control over state and time, which 
allows for the implementation of advanced event-driven systems. In this 
step-by-step guide, you’ll learn how to build a simple streaming application 
with PyFlink and the DataStream API.
+Apache Flink 提供了 DataStream 
API,用于构建健壮的、有状态的流式应用程序。它提供了对状态和时间细粒度控制,从而允许实现高级事件驱动系统。
+在这篇教程中,你将学习如何使用 PyFlink 和 DataStream API 构建一个简单的流式应用程序。
 
 * This will be replaced by the TOC
 {:toc}
 
-## What Will You Be Building? 
+## 你将构建什么?
 
-In this tutorial, you will learn how to write a simple Python DataStream job.
-The pipeline will read data from a non-empty collection and write the results 
to the local file system.
+在本教程中,你将学习如何编写一个简单的 Python DataStream 作业。
+例子是从非空集合中读取数据,并将结果写入本地文件系统。
 
-## Prerequisites
+## 先决条件
 
-This walkthrough assumes that you have some familiarity with Python, but you 
should be able to follow along even if you come from a different programming 
language.
+本教程假设你对 Python 有一定的熟悉,但是即使你使用的是不同编程语言,你也应该能够学会。
 
-## Help, I’m Stuck! 
+## 帮助,我很困惑!
 
-If you get stuck, check out the [community support 
resources](https://flink.apache.org/zh/community.html).
-In particular, Apache Flink's [user mailing 
list](https://flink.apache.org/zh/community.html#mailing-lists) consistently 
ranks as one of the most active of any Apache project and a great way to get 
help quickly. 
+如果你有疑惑,可以查阅 [community support 
resources](https://flink.apache.org/zh/community.html)。
+特别是,Apache Flink [user mailing 
list](https://flink.apache.org/zh/community.html#mailing-lists) 
一直是最活跃的Apache项目之一,也是快速获得帮助的好方法。
 
-## How To Follow Along
+## 如何跟进
 
-If you want to follow along, you will require a computer with: 
+如果你想学习,你需要一台装有以下环境的电脑:
 
 * Java 8 or 11
 * Python 3.5, 3.6 or 3.7
 
-Using Python DataStream API requires installing PyFlink, which is available on 
[PyPI](https://pypi.org/project/apache-flink/) and can be easily installed 
using `pip`. 
+使用 Python DataStream API 需要安装 PyFlink,安装地址 
[PyPI](https://pypi.org/project/apache-flink/) ,同时也可以使用 `pip` 快速安装。 
 
 {% highlight bash %}
 $ python -m pip install apache-flink
 {% endhighlight %}
 
-Once PyFlink is installed, you can move on to write a Python DataStream job.
+一旦 PyFlink 安装完成之后,你可以开始编写 Python DataStream 作业。

Review comment:
   ```suggestion
   一旦 PyFlink 安装完成之后,你就可以开始编写 Python DataStream 作业了。
   ```

##
File path: docs/dev/python/datastream_tutorial.zh.md
##
@@ -22,76 +22,77 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink offers a DataStream API for building robust, stateful streaming 
applications. It provides fine-grained control over state and time, which 
allows for the implementation of advanced event-driven systems. In this 
step-by-step guide, 

[jira] [Closed] (FLINK-19444) flink 1.11 sql group by tumble Window aggregate can only be defined over a time attribute column, but TIMESTAMP(3) encountered

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19444.
---
Resolution: Not A Problem

The exception is as expected. See 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/time_attributes.html

> flink 1.11 sql group by tumble Window aggregate can only be defined over a 
> time attribute column, but TIMESTAMP(3) encountered
> --
>
> Key: FLINK-19444
> URL: https://issues.apache.org/jira/browse/FLINK-19444
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: panxiaohu
>Priority: Major
>
> Here's the code:
> String createSql = "CREATE TABLE clicks (\n" +
>  " `user` STRING,\n" +
>  " create_time TIMESTAMP(3),\n" +
>  " PRIMARY KEY (`user`) NOT ENFORCED\n" +
>  ") WITH (\n" +
>  " 'connector' = 'jdbc',\n" +
>  " 'url' = 'jdbc:mysql://localhost:3306/learning',\n" +
>  " 'username' = 'root',\n" +
>  " 'password' = 'john123',\n" +
>  " 'table-name' = 'clicks'\n" +
>  ")";
> Table table = tableEnv.sqlQuery("select user,TUMBLE_START(create_time, 
> INTERVAL '1' DAY),count(user) from clicks group by TUMBLE(create_time, 
> INTERVAL '1' DAY),user" );
>  
> then exception occurs as follows:
> org.apache.flink.table.api.TableException: Window aggregate can only be 
> defined over a time attribute column, but TIMESTAMP(3) encountered.
>  at 
> org.apache.flink.table.planner.plan.rules.logical.StreamLogicalWindowAggregateRule.getInAggregateGroupExpression(StreamLogicalWindowAggregateRule.scala:50)
>  at 
> org.apache.flink.table.planner.plan.rules.logical.LogicalWindowAggregateRuleBase.onMatch(LogicalWindowAggregateRuleBase.scala:79)
>  at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:328)
>  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:562) at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:427) at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:264)
>  at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)





[jira] [Reopened] (FLINK-19386) Remove legacy table planner dependecy from connectors and formats

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reopened FLINK-19386:
-

Sorry. Close by mistake. 

> Remove legacy table planner dependecy from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> I found that some connectors depend on the legacy table planner and use it in 
> test classes.
> For example, `HiveCatalogUseBlinkITCase` uses `JavaScalaConversionUtil` from the 
> legacy table planner, not the Blink planner. Since FLIP-32 has been accepted, I 
> think we can remove the legacy table planner dependency from connectors and formats.





[jira] [Issue Comment Deleted] (FLINK-19386) Remove legacy table planner dependecy from connectors and formats

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19386:

Comment: was deleted

(was: See 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/time_attributes.html)

> Remove legacy table planner dependecy from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> I found that some connectors depend on the legacy table planner and use it in 
> test classes.
> For example, `HiveCatalogUseBlinkITCase` uses `JavaScalaConversionUtil` from the 
> legacy table planner, not the Blink planner. Since FLIP-32 has been accepted, I 
> think we can remove the legacy table planner dependency from connectors and formats.





[jira] [Commented] (FLINK-19446) canal-json has a situation that -U and +U are equal, when updating the null field to be non-null

2020-09-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204433#comment-17204433
 ] 

Jark Wu commented on FLINK-19446:
-

That's true. I think we need to know whether a field is actually a "changed" 
field; relying on whether it is null is not enough.
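To make the ambiguity concrete, here is a toy, self-contained sketch (hypothetical names, not Flink's actual classes): merging by null-check loses an update whose pre-image was NULL, while explicitly tracking which keys appeared in the "old" JSON object preserves it.

```java
import java.util.Arrays;

// Illustrates the problem: canal-json's "old" entry only carries the fields
// whose pre-update value changed. Merging by "copy from 'after' when 'before'
// is null" cannot distinguish "field unchanged" from "field changed from
// NULL", so -U and +U come out identical.
public class CanalOldFieldMerge {
    static Object[] mergeByNullCheck(Object[] after, Object[] old) {
        Object[] before = Arrays.copyOf(old, old.length);
        for (int f = 0; f < before.length; f++) {
            if (before[f] == null) {      // ambiguous: unchanged OR was NULL
                before[f] = after[f];
            }
        }
        return before;
    }

    // One possible fix: let the deserializer record which keys actually
    // appeared in the "old" JSON object, instead of testing for null.
    static Object[] mergeByChangedFlags(Object[] after, Object[] old, boolean[] changed) {
        Object[] before = Arrays.copyOf(old, old.length);
        for (int f = 0; f < before.length; f++) {
            if (!changed[f]) {
                before[f] = after[f];     // unchanged field: copy from after
            }
        }
        return before;
    }

    public static void main(String[] args) {
        // Row (order_number, product_id): product_id updated from NULL to 5.
        Object[] after = {10001, 5};
        Object[] old = {null, null};        // pre-image of product_id is NULL
        boolean[] changed = {false, true};  // only product_id changed

        // Null-based merge loses the change: before equals after.
        System.out.println(Arrays.equals(mergeByNullCheck(after, old), after)); // true
        // Flag-based merge preserves the NULL pre-image.
        System.out.println(mergeByChangedFlags(after, old, changed)[1]);        // null
    }
}
```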

> canal-json has a situation that -U and +U are equal, when updating the null 
> field to be non-null
> 
>
> Key: FLINK-19446
> URL: https://issues.apache.org/jira/browse/FLINK-19446
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Zhengchao Shi
>Priority: Major
> Fix For: 1.12.0
>
>
> line 118 in CanalJsonDeserializationSchema#deserialize method:
> {code:java}
> GenericRowData after = (GenericRowData) data.getRow(i, fieldCount);
> GenericRowData before = (GenericRowData) old.getRow(i, fieldCount);
> for (int f = 0; f < fieldCount; f++) {
>   if (before.isNullAt(f)) {
>   // not null fields in "old" (before) means the fields are 
> changed
>   // null/empty fields in "old" (before) means the fields are not 
> changed
>   // so we just copy the not changed fields into before
>   before.setField(f, after.getField(f));
>   }
> }
> before.setRowKind(RowKind.UPDATE_BEFORE);
> after.setRowKind(RowKind.UPDATE_AFTER);
> {code}
> if a field is null before update, it will cause -U and +U to be equal





[jira] [Comment Edited] (FLINK-19446) canal-json has a situation that -U and +U are equal, when updating the null field to be non-null

2020-09-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204433#comment-17204433
 ] 

Jark Wu edited comment on FLINK-19446 at 9/30/20, 4:03 AM:
---

That's true. I think we need to know whether a field is actually a "changed" 
field; relying on whether it is null is not enough. Do you have any idea [~tinny]?


was (Author: jark):
That's true. I think we need to know whether this is a "changed" field. 
Depending on whether field is null is not enough.






[GitHub] [flink] flinkbot commented on pull request #13515: [FLINK-19470][orc][parquet] Parquet and ORC reader reachEnd returns f…

2020-09-29 Thread GitBox


flinkbot commented on pull request #13515:
URL: https://github.com/apache/flink/pull/13515#issuecomment-701143749


   
   ## CI report:
   
   * 5f8c701af43074f17ce503aa2497d2f91ed56c50 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19386) Remove legacy table planner dependency from connectors and formats

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19386.
---
Fix Version/s: (was: 1.12.0)
   Resolution: Not A Problem

See 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/time_attributes.html

> Remove legacy table planner dependency from connectors and formats
> -
>
> Key: FLINK-19386
> URL: https://issues.apache.org/jira/browse/FLINK-19386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>
> I found some connectors depend on the legacy table planner and use it in test 
> classes.
> For example, `HiveCatalogUseBlinkITCase` uses `JavaScalaConversionUtil` from 
> the legacy table planner rather than the blink planner. Since FLIP-32 has been 
> accepted, I think we can remove the dependency on the legacy table planner 
> from connectors and formats.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19394) Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' into Chinese

2020-09-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-19394:
---

Assignee: Roc Marshal

> Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' 
> into Chinese
> --
>
> Key: FLINK-19394
> URL: https://issues.apache.org/jira/browse/FLINK-19394
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: Translation, documentation, translation, translation-zh
>
> The file location: flink/docs/monitoring/checkpoint_monitoring.md
> The link of the page: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/monitoring/checkpoint_monitoring.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] leonardBang commented on a change in pull request #13299: [FLINK-19072][table-planner] Import Temporal Table join rule for Stream

2020-09-29 Thread GitBox


leonardBang commented on a change in pull request #13299:
URL: https://github.com/apache/flink/pull/13299#discussion_r497228266



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalCorrelateToJoinFromTemporalTableRule.scala
##
@@ -40,12 +51,105 @@ abstract class LogicalCorrelateToJoinFromTemporalTableRule(
 
   def getLogicalSnapshot(call: RelOptRuleCall): LogicalSnapshot
 
+  /** Trim out the HepRelVertex wrapper and get current relational expression. */
+  protected def trimHep(node: RelNode): RelNode = {
+node match {
+  case hepRelVertex: HepRelVertex =>
+hepRelVertex.getCurrentRel
+  case _ => node
+}
+  }
+
+  protected def validateSnapshotInCorrelate(
+  snapshot: LogicalSnapshot,
+  correlate: LogicalCorrelate): Unit = {
+// period specification check
+snapshot.getPeriod.getType match {
+  // validate type is  event-time or processing time
+  case t: TimeIndicatorRelDataType =>
+  case _ =>
+throw new ValidationException("Temporal table join currently only supports " +
+  "'FOR SYSTEM_TIME AS OF' left table's time attribute field")

Review comment:
   `A JOIN B FOR SYSTEM_TIME AS OF A.proctime AS B_temporal`: here we check 
whether the user used the left table's time attribute column after `AS OF` or not.
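
The period check in `validateSnapshotInCorrelate` can be paraphrased as a small Python sketch, under the assumption that the type either is or is not a time indicator (`validate_period` and `TIME_INDICATORS` are invented names; the real check matches on `TimeIndicatorRelDataType` in Scala):

```python
# Only event-time (rowtime) or processing-time attributes of the left table
# may follow FOR SYSTEM_TIME AS OF; plain timestamp columns are rejected.
TIME_INDICATORS = {"ROWTIME", "PROCTIME"}

def validate_period(period_type):
    if period_type not in TIME_INDICATORS:
        raise ValueError(
            "Temporal table join currently only supports "
            "'FOR SYSTEM_TIME AS OF' left table's time attribute field")

validate_period("PROCTIME")          # accepted, as in A.proctime above
try:
    validate_period("TIMESTAMP")     # ordinary timestamp column -> rejected
    raised = False
except ValueError:
    raised = True
assert raised
```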





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19365) Migrate Hive source to FLIP-27 source interface for batch

2020-09-29 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-19365:
---
Description: I'll limit the scope here to batch reading, to make the PR 
easier to review.

> Migrate Hive source to FLIP-27 source interface for batch
> -
>
> Key: FLINK-19365
> URL: https://issues.apache.org/jira/browse/FLINK-19365
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> I'll limit the scope here to batch reading, to make the PR easier to review.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] leonardBang commented on a change in pull request #13299: [FLINK-19072][table-planner] Import Temporal Table join rule for Stream

2020-09-29 Thread GitBox


leonardBang commented on a change in pull request #13299:
URL: https://github.com/apache/flink/pull/13299#discussion_r497227009



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalCorrelateToJoinFromTemporalTableRule.scala
##
@@ -40,12 +51,105 @@ abstract class LogicalCorrelateToJoinFromTemporalTableRule(
 
   def getLogicalSnapshot(call: RelOptRuleCall): LogicalSnapshot
 
+  /** Trim out the HepRelVertex wrapper and get current relational expression. */
+  protected def trimHep(node: RelNode): RelNode = {
+node match {
+  case hepRelVertex: HepRelVertex =>
+hepRelVertex.getCurrentRel
+  case _ => node
+}
+  }
+
+  protected def validateSnapshotInCorrelate(
+  snapshot: LogicalSnapshot,
+  correlate: LogicalCorrelate): Unit = {
+// period specification check
+snapshot.getPeriod.getType match {
+  // validate type is  event-time or processing time

Review comment:
   this should be OK as it keeps in line with the following `case`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19365) Migrate Hive source to FLIP-27 source interface for batch

2020-09-29 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-19365:
---
Summary: Migrate Hive source to FLIP-27 source interface for batch  (was: 
Migrate Hive source to FLIP-27 source interface)

> Migrate Hive source to FLIP-27 source interface for batch
> -
>
> Key: FLINK-19365
> URL: https://issues.apache.org/jira/browse/FLINK-19365
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13515: [FLINK-19470][orc][parquet] Parquet and ORC reader reachEnd returns f…

2020-09-29 Thread GitBox


flinkbot commented on pull request #13515:
URL: https://github.com/apache/flink/pull/13515#issuecomment-701140042


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5f8c701af43074f17ce503aa2497d2f91ed56c50 (Wed Sep 30 
03:48:00 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-19470).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19470) Parquet and ORC reader reachEnd returns false after it has reached end

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19470:
---
Labels: pull-request-available  (was: )

> Parquet and ORC reader reachEnd returns false after it has reached end
> --
>
> Key: FLINK-19470
> URL: https://issues.apache.org/jira/browse/FLINK-19470
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> After a {{ParquetColumnarRowSplitReader}} or {{OrcColumnarRowSplitReader}} 
> has reached its end, calling {{reachEnd}} again gets false.
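
The bug described above can be illustrated with a toy Python reader (all names here are invented for the sketch, not Flink's actual reader API): the fix is to remember end-of-input instead of re-probing for the next batch, so the check stays true once it has returned true.

```python
class SplitReader:
    """Toy reader: caches EOF so reached_end() is idempotent."""

    def __init__(self, batches):
        self._batches = iter(batches)
        self._current = None
        self._eof = False

    def reached_end(self):
        if self._eof:
            return True          # once at the end, stay at the end
        if self._current is None:
            # fetch the next batch lazily
            self._current = next(self._batches, None)
            if self._current is None:
                self._eof = True
                return True
        return False

    def next_record(self):
        rec, self._current = self._current, None
        return rec

r = SplitReader([1, 2])
out = []
while not r.reached_end():
    out.append(r.next_record())
assert out == [1, 2]
assert r.reached_end() and r.reached_end()  # still True on repeated calls
```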



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache opened a new pull request #13515: [FLINK-19470][orc][parquet] Parquet and ORC reader reachEnd returns f…

2020-09-29 Thread GitBox


lirui-apache opened a new pull request #13515:
URL: https://github.com/apache/flink/pull/13515


   …alse after it has reached end
   
   
   
   ## What is the purpose of the change
   
   Fix the issue that the ORC and Parquet reader `reachEnd` methods return false 
after the reader has reached its end.
   
   
   ## Brief change log
   
 - Fix the readers
 - Add test cases
   
   
   ## Verifying this change
   
   Added test cases
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on a change in pull request #13507: [FLINK-19231][python] Support ListState and ListView for Python UDAF.

2020-09-29 Thread GitBox


dianfu commented on a change in pull request #13507:
URL: https://github.com/apache/flink/pull/13507#discussion_r497198985



##
File path: flink-python/pyflink/common/types.py
##
@@ -37,7 +37,7 @@ def _create_row(fields, values, row_kind: RowKind = None):
 return row
 
 
-class Row(tuple):
+class Row(object):

Review comment:
   Why changed this?

##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/python/PythonFunctionInfo.java
##
@@ -28,7 +28,7 @@
  * the actual Python function, the input arguments, etc.
  */
 @Internal
-public final class PythonFunctionInfo implements Serializable {

Review comment:
   Why changed this?

##
File path: flink-python/pyflink/fn_execution/beam/beam_operations_slow.py
##
@@ -304,9 +306,6 @@ def finish(self):
 
 def reset(self):
 super().reset()
-if self.keyed_state_backend:

Review comment:
   why removing this?

##
File path: flink-python/pyflink/proto/flink-fn-execution.proto
##
@@ -94,6 +94,21 @@ message UserDefinedDataStreamFunctions {
 
 // A list of the user-defined aggregate functions to be executed in a group 
aggregate operation.
 message UserDefinedAggregateFunctions {
+  message DataViewSpec {
+enum DataViewType {
+  LIST = 0;
+  MAP = 1;
+}
+string name = 1;
+int32 field_index = 2;
+DataViewType type = 3;
+Schema.FieldType element_type = 4;
+Schema.FieldType key_type = 5;
+  }
+  message DataViewSpecs {

Review comment:
   Each DataViewSpec belongs to a specific UserDefinedAggregateFunction.
   What about adding UserDefinedAggregateFunction, which extends 
UserDefinedFunction, and moving DataViewSpec inside UserDefinedAggregateFunction?

##
File path: flink-python/pyflink/fn_execution/aggregate.py
##
@@ -167,18 +247,31 @@ class SimpleAggsHandleFunction(AggsHandleFunction):
 def __init__(self,
  udfs: List[AggregateFunction],
  args_offsets_list: List[List[int]],
- index_of_count_star: int):
+ index_of_count_star: int,
+ udf_data_view_specs: List[List[DataViewSpec]]):
 self._udfs = udfs
 self._args_offsets_list = args_offsets_list
 self._accumulators = None  # type: Row
 self._get_value_indexes = [i for i in range(len(udfs))]
 if index_of_count_star >= 0:
 # The record count is used internally, should be ignored by the 
get_value method.
 self._get_value_indexes.remove(index_of_count_star)
+self._udf_data_view_specs = udf_data_view_specs
+self._udf_data_views = []  # type: List[Dict[DataView]]

Review comment:
   ```suggestion
   self._udf_data_views = []
   ```

##
File path: flink-python/pyflink/proto/flink-fn-execution.proto
##
@@ -108,6 +123,8 @@ message UserDefinedAggregateFunctions {
   int32 state_cache_size = 7;
   // Cleanup the expired state if true.
   bool state_cleaning_enabled = 8;
+  // The data view specifications
+  repeated DataViewSpecs udf_data_view_specs = 9;

Review comment:
   ```suggestion
 repeated DataViewSpecs data_view_specs = 9;
   ```
   

##
File path: flink-python/pyflink/common/types.py
##
@@ -82,21 +82,18 @@ class Row(tuple):
 Row(name='Alice', age=11)
 """
 
-def __new__(cls, *args, **kwargs):

Review comment:
   I guess the example in the doc of the class doesn't work any more with 
this change. Could you explain why you changed this and investigate whether it's 
possible to avoid this change?
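
The trade-off behind the `Row(tuple)` vs `Row(object)` question can be sketched as follows. These `RowTuple`/`RowObject` classes are invented for illustration and are not PyFlink's actual `Row`; the point is that a tuple subclass must receive its values in `__new__` (tuples are immutable), while a plain object can be mutated after construction:

```python
class RowTuple(tuple):
    def __new__(cls, *values):
        # tuple is immutable, so values must be supplied at creation time
        return super().__new__(cls, values)

class RowObject:
    def __init__(self, *values):
        # a plain object stores values mutably after construction
        self._values = list(values)

    def __getitem__(self, i):
        return self._values[i]

    def __setitem__(self, i, v):
        self._values[i] = v

r1 = RowTuple('Alice', 11)
assert r1[0] == 'Alice' and isinstance(r1, tuple)

r2 = RowObject('Alice', 11)
r2[1] = 12                      # in-place update only works on the object-based Row
assert r2[1] == 12
```

If mutation of row fields is needed (e.g. for accumulators), that would motivate dropping the tuple base class, at the cost of the tuple-like behavior the class doc examples rely on.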

##
File path: flink-python/pyflink/fn_execution/beam/beam_operations_slow.py
##
@@ -323,6 +322,7 @@ def __init__(self, name, spec, counter_factory, sampler, 
consumers, keyed_state_
 self.index_of_count_star = spec.serialized_fn.index_of_count_star
 self.state_cache_size = spec.serialized_fn.state_cache_size
 self.state_cleaning_enabled = spec.serialized_fn.state_cleaning_enabled
+self.udf_data_view_specs = 
extract_data_view_specs(spec.serialized_fn.udf_data_view_specs)

Review comment:
   ```suggestion
   self.data_view_specs = 
extract_data_view_specs(spec.serialized_fn.udf_data_view_specs)
   ```

##
File path: flink-python/pyflink/table/data_view.py
##
@@ -0,0 +1,100 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless 

[GitHub] [flink] curcur commented on a change in pull request #13501: Single task result partition type

2020-09-29 Thread GitBox


curcur commented on a change in pull request #13501:
URL: https://github.com/apache/flink/pull/13501#discussion_r497225677



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedApproximateSubpartition.java
##
@@ -0,0 +1,138 @@
+package org.apache.flink.runtime.io.network.partition;
+
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.io.network.buffer.BufferConsumer;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+public class PipelinedApproximateSubpartition extends PipelinedSubpartition {
+
+   private static final Logger LOG = LoggerFactory.getLogger(PipelinedApproximateSubpartition.class);
+
+   private boolean isPartialBuffer = false;
+
+   PipelinedApproximateSubpartition(int index, ResultPartition parent) {
+      super(index, parent);
+   }
+
+   public void releaseView() {
+      readView = null;
+      isPartialBuffer = true;
+      isBlockedByCheckpoint = false;
+      sequenceNumber = 0;
+   }
+
+   @Override
+   public PipelinedSubpartitionView createReadView(BufferAvailabilityListener availabilityListener) {
+      synchronized (buffers) {
+         checkState(!isReleased);
+
+         // if the view is not released yet
+         if (readView != null) {
+            releaseView();
+         }
+
+         LOG.debug("{}: Creating read view for subpartition {} of partition {}.",
+            parent.getOwningTaskName(), getSubPartitionIndex(), parent.getPartitionId());
+
+         readView = new PipelinedApproximateSubpartitionView(this, availabilityListener);
+      }
+
+      return readView;
+   }
+
+   @Nullable
+   @Override
+   BufferAndBacklog pollBuffer() {

Review comment:
   I totally agree. If we reach an agreement, I will definitely do that.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #13501: Single task result partition type

2020-09-29 Thread GitBox


curcur commented on a change in pull request #13501:
URL: https://github.com/apache/flink/pull/13501#discussion_r497225512



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedApproximateSubpartitionView.java
##
@@ -0,0 +1,27 @@
+package org.apache.flink.runtime.io.network.partition;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+public class PipelinedApproximateSubpartitionView extends PipelinedSubpartitionView {
+
+   PipelinedApproximateSubpartitionView(PipelinedApproximateSubpartition parent, BufferAvailabilityListener listener) {
+      super(parent, listener);
+   }
+
+   @Override
+   public void releaseAllResources() {
+      if (isReleased.compareAndSet(false, true)) {
+         // The view doesn't hold any resources and the parent cannot be restarted. Therefore,
+         // it's OK to notify about consumption as well.
+         checkState(parent instanceof PipelinedApproximateSubpartition);
+         ((PipelinedApproximateSubpartition) parent).releaseView();

Review comment:
   Yeah, I was also thinking of that. The problem is that I have to change 
all the common methods in the superclass that use `parent` to 
`approximateParent` => a lot of redundant code.
   
   `releaseView()` is one of the few methods that differ from the 
superclass.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on pull request #13501: Single task result partition type

2020-09-29 Thread GitBox


curcur commented on pull request #13501:
URL: https://github.com/apache/flink/pull/13501#issuecomment-701136832


   Thank you very much @AHeise for reviewing!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11509: [FLINK-16753] Use CheckpointException to wrap exceptions thrown from AsyncCheckpointRunnable

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #11509:
URL: https://github.com/apache/flink/pull/11509#issuecomment-603811265


   
   ## CI report:
   
   * 0a96025249d753072b8eaab23b404ded9b6ddf47 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7081)
 
   * 086b22995f36319eff14d180e91a5aae4d1fbcb5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7104)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19470) Parquet and ORC reader reachEnd returns false after it has reached end

2020-09-29 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-19470:
---
Description: After a {{ParquetColumnarRowSplitReader}} or 
{{OrcColumnarRowSplitReader}} has reached its end, calling {{reachEnd}} again 
gets false.  (was: After a {{ParquetColumnarRowSplitReader}} has reached its 
end, calling {{reachEnd}} again gets false.)

> Parquet and ORC reader reachEnd returns false after it has reached end
> --
>
> Key: FLINK-19470
> URL: https://issues.apache.org/jira/browse/FLINK-19470
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> After a {{ParquetColumnarRowSplitReader}} or {{OrcColumnarRowSplitReader}} 
> has reached its end, calling {{reachEnd}} again gets false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19470) Parquet and ORC reader reachEnd returns false after it has reached end

2020-09-29 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-19470:
---
Summary: Parquet and ORC reader reachEnd returns false after it has reached 
end  (was: ParquetColumnarRowSplitReader::reachEnd returns false after it has 
reached end)

> Parquet and ORC reader reachEnd returns false after it has reached end
> --
>
> Key: FLINK-19470
> URL: https://issues.apache.org/jira/browse/FLINK-19470
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Rui Li
>Priority: Major
> Fix For: 1.12.0
>
>
> After a {{ParquetColumnarRowSplitReader}} has reached its end, calling 
> {{reachEnd}} again gets false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13512: [FLINK-19457][core] Add a number sequence generating source for the New Source API.

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13512:
URL: https://github.com/apache/flink/pull/13512#issuecomment-700995407


   
   ## CI report:
   
   * 45a5a5dd40652d6d9212a0cdba6cd47a293ac819 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7099)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] pyscala commented on pull request #13214: [FLINK-18938][tableSQL/API] Throw better exception message for quering sink-only connector

2020-09-29 Thread GitBox


pyscala commented on pull request #13214:
URL: https://github.com/apache/flink/pull/13214#issuecomment-701128918


   @wuchong  The following changes have been made.
   1. Keep `FactoryUtil#discoverFactory` as a generic method for all kinds 
of factories.
   2. Add a new method `OptionalFailure discoverOptionalFactory` 
and use it to check whether the connector/format is available and 
throw a better exception in `getDynamicTableFactory` and 
`discoverOptionalFormatFactory`.
   
   Looking forward to your reply.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-19470) ParquetColumnarRowSplitReader::reachEnd returns false after it has reached end

2020-09-29 Thread Rui Li (Jira)
Rui Li created FLINK-19470:
--

 Summary: ParquetColumnarRowSplitReader::reachEnd returns false 
after it has reached end
 Key: FLINK-19470
 URL: https://issues.apache.org/jira/browse/FLINK-19470
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Rui Li
 Fix For: 1.12.0


After a {{ParquetColumnarRowSplitReader}} has reached its end, calling 
{{reachEnd}} again gets false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #11509: [FLINK-16753] Use CheckpointException to wrap exceptions thrown from AsyncCheckpointRunnable

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #11509:
URL: https://github.com/apache/flink/pull/11509#issuecomment-603811265


   
   ## CI report:
   
   * 0a96025249d753072b8eaab23b404ded9b6ddf47 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7081)
 
   * 086b22995f36319eff14d180e91a5aae4d1fbcb5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Jiayi-Liao commented on pull request #11509: [FLINK-16753] Use CheckpointException to wrap exceptions thrown from AsyncCheckpointRunnable

2020-09-29 Thread GitBox


Jiayi-Liao commented on pull request #11509:
URL: https://github.com/apache/flink/pull/11509#issuecomment-701125599


   @zhijiangW  The PR should be good now. Could you take another look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Issue Comment Deleted] (FLINK-19394) Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' into Chinese

2020-09-29 Thread Roc Marshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roc Marshal updated FLINK-19394:

Comment: was deleted

(was: [~lzljs3620320] Could you assign this ticket to me ?
Thank you~)

> Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' 
> into Chinese
> --
>
> Key: FLINK-19394
> URL: https://issues.apache.org/jira/browse/FLINK-19394
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: Roc Marshal
>Priority: Major
>  Labels: Translation, documentation, translation, translation-zh
>
> The file location: flink/docs/monitoring/checkpoint_monitoring.md
> The link of the page: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/monitoring/checkpoint_monitoring.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19394) Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' into Chinese

2020-09-29 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204412#comment-17204412
 ] 

Roc Marshal commented on FLINK-19394:
-

[~jark] Could you assign this ticket to me?
Thank you~

> Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' 
> into Chinese
> --
>
> Key: FLINK-19394
> URL: https://issues.apache.org/jira/browse/FLINK-19394
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: Roc Marshal
>Priority: Major
>  Labels: Translation, documentation, translation, translation-zh
>
> The file location: flink/docs/monitoring/checkpoint_monitoring.md
> The link of the page: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/monitoring/checkpoint_monitoring.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe merged pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-29 Thread GitBox


godfreyhe merged pull request #13434:
URL: https://github.com/apache/flink/pull/13434


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on pull request #13434: [FLINK-19292][hive] HiveCatalog should support specifying Hadoop conf dir with configuration

2020-09-29 Thread GitBox


godfreyhe commented on pull request #13434:
URL: https://github.com/apache/flink/pull/13434#issuecomment-701120479


   LGTM, +1 to merge



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-19429.
---
Resolution: Fixed

master: 3d6d6570177ff57945ae719acb8ad17d92603c9c

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, chinese-translation, Documentation
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu merged pull request #13497: [FLINK-19429][docs-zh] Translate page `Data Types` into Chinese

2020-09-29 Thread GitBox


dianfu merged pull request #13497:
URL: https://github.com/apache/flink/pull/13497


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19429) Translate page 'Data Types' into Chinese

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19429:

Component/s: API / Python

> Translate page 'Data Types' into Chinese
> 
>
> Key: FLINK-19429
> URL: https://issues.apache.org/jira/browse/FLINK-19429
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, chinese-translation, Documentation
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Translate the page 
> [data_types|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/python/datastream-api-users-guide/data_types.html].
> The doc located in 
> "flink/docs/dev/python/datastream-api-users-guide/data_types.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu commented on pull request #13497: [FLINK-19429][docs-zh] Translate page `Data Types` into Chinese

2020-09-29 Thread GitBox


dianfu commented on pull request #13497:
URL: https://github.com/apache/flink/pull/13497#issuecomment-701118190


   @wangxlong Thanks for the PR. LGTM. Merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangxlong commented on pull request #13498: [FLINK-19430][docs-zh] Translate page datastream_tutorial into Chinese

2020-09-29 Thread GitBox


wangxlong commented on pull request #13498:
URL: https://github.com/apache/flink/pull/13498#issuecomment-701117242


   Hi @dianfu, could you review this in your free time? Thank you~



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangxlong commented on pull request #13497: [FLINK-19429][docs-zh] Translate page `Data Types` into Chinese

2020-09-29 Thread GitBox


wangxlong commented on pull request #13497:
URL: https://github.com/apache/flink/pull/13497#issuecomment-701117042


   Hi @dianfu, could you review this in your free time? Thank you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13510: [FLINK-13095][state-processor-api] Introduce window bootstrap writer for writing window operator state

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13510:
URL: https://github.com/apache/flink/pull/13510#issuecomment-700881897


   
   ## CI report:
   
   * e6ccb2f3e1b474cee6b31d0f0d894d1fe94ce22d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7096)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-19443) runtime function 'splitIndex'

2020-09-29 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204401#comment-17204401
 ] 

Nicholas Jiang edited comment on FLINK-19443 at 9/30/20, 1:55 AM:
--

[~libenchao], if [~mytang0] wouldn't be available to fix this issue, could you 
please assign it to me so I can fix it and add a test case for the split_index function?


was (Author: nicholasjiang):
[~libenchao], if [~mytang0] wouldn't be available to fix this issue, could you 
please assign to me for fix?

> runtime function 'splitIndex'
> -
>
> Key: FLINK-19443
> URL: https://issues.apache.org/jira/browse/FLINK-19443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: mytang0
>Priority: Minor
>
> The runtime function 'splitIndex' has an NPE problem (it is located in the 
> SqlFunctionUtils class).
>  
> *NPE version:*
> public static String splitIndex(String str, String separator, int index) {
>     if (index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitByWholeSeparatorPreserveAllTokens(str, 
> separator);
>     if (index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }
> public static String splitIndex(String str, int character, int index) {
>     if (character > 255 || character < 1 || index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitPreserveAllTokens(str, (char) 
> character);
>     if (index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }
>  
> *Fix version:*
> public static String splitIndex(String str, String separator, int index) {
>     if (index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitByWholeSeparatorPreserveAllTokens(str, 
> separator);
>     if (values == null || index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }
> public static String splitIndex(String str, int character, int index) {
>     if (character > 255 || character < 1 || index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitPreserveAllTokens(str, (char) 
> character);
>     if (values == null || index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
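For readers skimming the archive, the proposed guard can be sketched in plain Java. The `splitPreserveAllTokens` method below is an illustrative stand-in for the commons-lang `StringUtils` helper (which returns `null` for `null` input), not Flink's actual code:

```java
public class SplitIndexDemo {

    // Illustrative stand-in for StringUtils.splitByWholeSeparatorPreserveAllTokens:
    // like the commons-lang helper, it returns null when the input string is null.
    static String[] splitPreserveAllTokens(String str, String separator) {
        if (str == null) {
            return null;
        }
        // limit -1 keeps trailing empty tokens, mimicking "preserve all tokens"
        return str.split(java.util.regex.Pattern.quote(separator), -1);
    }

    // Patched splitIndex: the added "values == null" check is the proposed fix,
    // guarding against the null array that previously caused the NPE.
    static String splitIndex(String str, String separator, int index) {
        if (index < 0) {
            return null;
        }
        String[] values = splitPreserveAllTokens(str, separator);
        if (values == null || index >= values.length) {
            return null;
        }
        return values[index];
    }

    public static void main(String[] args) {
        System.out.println(splitIndex("a,b,c", ",", 1)); // prints "b"
        System.out.println(splitIndex(null, ",", 0));    // prints "null" instead of throwing NPE
    }
}
```

With the guard in place, a `null` input string simply falls through to the `null` return, matching how the other out-of-range cases are handled.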


[jira] [Commented] (FLINK-19443) runtime function 'splitIndex'

2020-09-29 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204401#comment-17204401
 ] 

Nicholas Jiang commented on FLINK-19443:


[~libenchao], if [~mytang0] wouldn't be available to fix this issue, could you 
please assign it to me so I can fix it?

> runtime function 'splitIndex'
> -
>
> Key: FLINK-19443
> URL: https://issues.apache.org/jira/browse/FLINK-19443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: mytang0
>Priority: Minor
>
> The runtime function 'splitIndex' has an NPE problem (it is located in the 
> SqlFunctionUtils class).
>  
> *NPE version:*
> public static String splitIndex(String str, String separator, int index) {
>     if (index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitByWholeSeparatorPreserveAllTokens(str, 
> separator);
>     if (index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }
> public static String splitIndex(String str, int character, int index) {
>     if (character > 255 || character < 1 || index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitPreserveAllTokens(str, (char) 
> character);
>     if (index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }
>  
> *Fix version:*
> public static String splitIndex(String str, String separator, int index) {
>     if (index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitByWholeSeparatorPreserveAllTokens(str, 
> separator);
>     if (values == null || index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }
> public static String splitIndex(String str, int character, int index) {
>     if (character > 255 || character < 1 || index < 0) {
>         return null;
>     }
>     String[] values = StringUtils.splitPreserveAllTokens(str, (char) 
> character);
>     if (values == null || index >= values.length) {
>         return null;
>     } else {
>         return values[index];
>     }
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13512: [FLINK-19457][core] Add a number sequence generating source for the New Source API.

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13512:
URL: https://github.com/apache/flink/pull/13512#issuecomment-700995407


   
   ## CI report:
   
   * 7ed9da178f78a54bbca8de66984a802c1875e3cf Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7097)
 
   * 45a5a5dd40652d6d9212a0cdba6cd47a293ac819 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7099)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13513: [FLINK-19434][DataStream API] Add source input chaining to StreamingJobGraphGenerator

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13513:
URL: https://github.com/apache/flink/pull/13513#issuecomment-701011500


   
   ## CI report:
   
   * 73ff86c5d941fdf7ed5fd22c8c6d5b982a0089d5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7098)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17480) Support running PyFlink on Kubernetes

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17480:

Component/s: API / Python

> Support running PyFlink on Kubernetes
> -
>
> Key: FLINK-17480
> URL: https://issues.apache.org/jira/browse/FLINK-17480
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Shuiqiang Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> This is the umbrella issue for running PyFlink on Kubernetes in native mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17480) Support running PyFlink on Kubernetes

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-17480.
---
Fix Version/s: 1.12.0
   Resolution: Fixed

Merged to master(1.12) via 7050c04235d86c6cf9b34eb89c1f7909004c8b82

> Support running PyFlink on Kubernetes
> -
>
> Key: FLINK-17480
> URL: https://issues.apache.org/jira/browse/FLINK-17480
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Shuiqiang Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> This is the umbrella issue for running PyFlink on Kubernetes in native mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-29 Thread GitBox


dianfu closed pull request #13322:
URL: https://github.com/apache/flink/pull/13322


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18044) Add the subtask index information to the SourceReaderContext.

2020-09-29 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204396#comment-17204396
 ] 

Jiangjie Qin commented on FLINK-18044:
--

[~sewen] Thanks for the comment. I agree that we should expose less information 
if possible. But for this specific case, the subtaskId is already part of the 
runtime context available to all the {{RichFunctions}}, and it is usually quite 
useful in many cases, such as debugging when there are multiple subtasks. As 
far as I understand, the subtask index is immutable for a given execution 
attempt; it can only change when the task restarts. So it seems safe and useful 
to expose it. Can you elaborate a little more on the concern?

> Add the subtask index information to the SourceReaderContext.
> -
>
> Key: FLINK-18044
> URL: https://issues.apache.org/jira/browse/FLINK-18044
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>
> It is useful for the `SourceReader` to retrieve its subtask id. For example, 
> Kafka readers can create a consumer with proper client id.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
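As background for the discussion above: a `RichFunction` already reaches the subtask index through `RuntimeContext#getIndexOfThisSubtask`. A minimal, self-contained sketch of the Kafka client-id use case the issue mentions follows; the `ReaderContext` interface here is a hypothetical stand-in, not Flink's actual `SourceReaderContext`:

```java
// Hypothetical stand-in for the reader context discussed in the issue;
// Flink's real SourceReaderContext may expose the index differently.
interface ReaderContext {
    int getIndexOfSubtask();
}

public class SubtaskClientIdDemo {

    // Derives a per-subtask client id, e.g. for a Kafka consumer, so that
    // logs and broker-side metrics can be attributed to a single subtask.
    static String clientId(String prefix, ReaderContext context) {
        return prefix + "-" + context.getIndexOfSubtask();
    }

    public static void main(String[] args) {
        ReaderContext context = () -> 3; // pretend we are subtask 3
        System.out.println(clientId("kafka-reader", context)); // prints "kafka-reader-3"
    }
}
```

Because the index is fixed for a given execution attempt, such an id stays stable until the task restarts, which is exactly the property the comment argues makes it safe to expose.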


[GitHub] [flink] dianfu commented on pull request #13322: [FLINK-17480][kubernetes] Support running PyFlink on Kubernetes.

2020-09-29 Thread GitBox


dianfu commented on pull request #13322:
URL: https://github.com/apache/flink/pull/13322#issuecomment-701108813


   Merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-19447) HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not initialized after 200000ms"

2020-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204394#comment-17204394
 ] 

Dian Fu edited comment on FLINK-19447 at 9/30/20, 1:37 AM:
---

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=60581941-0138-53c0-39fe-86d62be5f407

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=bfbc6239-57a0-5db0-63f3-41551b4f7d51


was (Author: dian.fu):
Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=60581941-0138-53c0-39fe-86d62be5f407

> HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not 
> initialized after 200000ms"
> ---
>
> Key: FLINK-19447
> URL: https://issues.apache.org/jira/browse/FLINK-19447
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=60581941-0138-53c0-39fe-86d62be5f407
> {code}
> 2020-09-28T21:52:21.2146147Z 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase  Time elapsed: 208.382 
> sec  <<< ERROR!
> 2020-09-28T21:52:21.2146638Z java.io.IOException: Shutting down
> 2020-09-28T21:52:21.2147004Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:266)
> 2020-09-28T21:52:21.2147637Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:116)
> 2020-09-28T21:52:21.2148120Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1142)
> 2020-09-28T21:52:21.2148831Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1107)
> 2020-09-28T21:52:21.2149347Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1061)
> 2020-09-28T21:52:21.2149896Z  at 
> org.apache.flink.connector.hbase2.util.HBaseTestingClusterAutoStarter.setUp(HBaseTestingClusterAutoStarter.java:122)
> 2020-09-28T21:52:21.2150721Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-28T21:52:21.2151136Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-28T21:52:21.2151609Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-28T21:52:21.2152039Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-28T21:52:21.2152462Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-28T21:52:21.2152941Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-28T21:52:21.2153489Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-28T21:52:21.2153962Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2020-09-28T21:52:21.2154406Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-28T21:52:21.2154828Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-28T21:52:21.2155381Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
> 2020-09-28T21:52:21.2155864Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
> 2020-09-28T21:52:21.2156378Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-28T21:52:21.2156865Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
> 2020-09-28T21:52:21.2157458Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> 2020-09-28T21:52:21.2157993Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> 2020-09-28T21:52:21.2158470Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> 2020-09-28T21:52:21.2158890Z Caused by: java.lang.RuntimeException: Master 
> not initialized after 200000ms
> 2020-09-28T21:52:21.2159350Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:229)
> 2020-09-28T21:52:21.2159823Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:197)
> 2020-09-28T21:52:21.2160270Z  at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:413)
> 

[jira] [Commented] (FLINK-19447) HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not initialized after 200000ms"

2020-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204395#comment-17204395
 ] 

Dian Fu commented on FLINK-19447:
-

Upgrading it to "Blocker" as this test seems to be failing continuously.

> HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not 
> initialized after 200000ms"
> ---
>
> Key: FLINK-19447
> URL: https://issues.apache.org/jira/browse/FLINK-19447
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=60581941-0138-53c0-39fe-86d62be5f407
> {code}
> 2020-09-28T21:52:21.2146147Z 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase  Time elapsed: 208.382 
> sec  <<< ERROR!
> 2020-09-28T21:52:21.2146638Z java.io.IOException: Shutting down
> 2020-09-28T21:52:21.2147004Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:266)
> 2020-09-28T21:52:21.2147637Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:116)
> 2020-09-28T21:52:21.2148120Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1142)
> 2020-09-28T21:52:21.2148831Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1107)
> 2020-09-28T21:52:21.2149347Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1061)
> 2020-09-28T21:52:21.2149896Z  at 
> org.apache.flink.connector.hbase2.util.HBaseTestingClusterAutoStarter.setUp(HBaseTestingClusterAutoStarter.java:122)
> 2020-09-28T21:52:21.2150721Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-28T21:52:21.2151136Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-28T21:52:21.2151609Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-28T21:52:21.2152039Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-28T21:52:21.2152462Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-28T21:52:21.2152941Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-28T21:52:21.2153489Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-28T21:52:21.2153962Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2020-09-28T21:52:21.2154406Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-28T21:52:21.2154828Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-28T21:52:21.2155381Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
> 2020-09-28T21:52:21.2155864Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
> 2020-09-28T21:52:21.2156378Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-28T21:52:21.2156865Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
> 2020-09-28T21:52:21.2157458Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> 2020-09-28T21:52:21.2157993Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> 2020-09-28T21:52:21.2158470Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> 2020-09-28T21:52:21.2158890Z Caused by: java.lang.RuntimeException: Master 
> not initialized after 200000ms
> 2020-09-28T21:52:21.2159350Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:229)
> 2020-09-28T21:52:21.2159823Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:197)
> 2020-09-28T21:52:21.2160270Z  at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:413)
> 2020-09-28T21:52:21.2160800Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:259)
> 2020-09-28T21:52:21.2161096Z  ... 22 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19447) HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not initialized after 200000ms"

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19447:

Priority: Blocker  (was: Major)

> HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not 
> initialized after 200000ms"
> ---
>
> Key: FLINK-19447
> URL: https://issues.apache.org/jira/browse/FLINK-19447
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042&view=logs&j=961f8f81-6b52-53df-09f6-7291a2e4af6a&t=60581941-0138-53c0-39fe-86d62be5f407
> {code}
> 2020-09-28T21:52:21.2146147Z 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase  Time elapsed: 208.382 
> sec  <<< ERROR!
> 2020-09-28T21:52:21.2146638Z java.io.IOException: Shutting down
> 2020-09-28T21:52:21.2147004Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:266)
> 2020-09-28T21:52:21.2147637Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:116)
> 2020-09-28T21:52:21.2148120Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1142)
> 2020-09-28T21:52:21.2148831Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1107)
> 2020-09-28T21:52:21.2149347Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1061)
> 2020-09-28T21:52:21.2149896Z  at 
> org.apache.flink.connector.hbase2.util.HBaseTestingClusterAutoStarter.setUp(HBaseTestingClusterAutoStarter.java:122)
> 2020-09-28T21:52:21.2150721Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-28T21:52:21.2151136Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-28T21:52:21.2151609Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-28T21:52:21.2152039Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-28T21:52:21.2152462Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-28T21:52:21.2152941Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-28T21:52:21.2153489Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-28T21:52:21.2153962Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2020-09-28T21:52:21.2154406Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-28T21:52:21.2154828Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-28T21:52:21.2155381Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
> 2020-09-28T21:52:21.2155864Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
> 2020-09-28T21:52:21.2156378Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-28T21:52:21.2156865Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
> 2020-09-28T21:52:21.2157458Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> 2020-09-28T21:52:21.2157993Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> 2020-09-28T21:52:21.2158470Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> 2020-09-28T21:52:21.2158890Z Caused by: java.lang.RuntimeException: Master 
> not initialized after 200000ms
> 2020-09-28T21:52:21.2159350Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:229)
> 2020-09-28T21:52:21.2159823Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:197)
> 2020-09-28T21:52:21.2160270Z  at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:413)
> 2020-09-28T21:52:21.2160800Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:259)
> 2020-09-28T21:52:21.2161096Z  ... 22 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19447) HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not initialized after 200000ms"

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19447:

Fix Version/s: 1.12.0

> HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not 
> initialized after 200000ms"
> ---
>
> Key: FLINK-19447
> URL: https://issues.apache.org/jira/browse/FLINK-19447
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407
> {code}
> 2020-09-28T21:52:21.2146147Z 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase  Time elapsed: 208.382 
> sec  <<< ERROR!
> 2020-09-28T21:52:21.2146638Z java.io.IOException: Shutting down
> 2020-09-28T21:52:21.2147004Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:266)
> 2020-09-28T21:52:21.2147637Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.&lt;init&gt;(MiniHBaseCluster.java:116)
> 2020-09-28T21:52:21.2148120Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1142)
> 2020-09-28T21:52:21.2148831Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1107)
> 2020-09-28T21:52:21.2149347Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1061)
> 2020-09-28T21:52:21.2149896Z  at 
> org.apache.flink.connector.hbase2.util.HBaseTestingClusterAutoStarter.setUp(HBaseTestingClusterAutoStarter.java:122)
> 2020-09-28T21:52:21.2150721Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-28T21:52:21.2151136Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-28T21:52:21.2151609Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-28T21:52:21.2152039Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-28T21:52:21.2152462Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-28T21:52:21.2152941Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-28T21:52:21.2153489Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-28T21:52:21.2153962Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2020-09-28T21:52:21.2154406Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-28T21:52:21.2154828Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-28T21:52:21.2155381Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
> 2020-09-28T21:52:21.2155864Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
> 2020-09-28T21:52:21.2156378Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-28T21:52:21.2156865Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
> 2020-09-28T21:52:21.2157458Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> 2020-09-28T21:52:21.2157993Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> 2020-09-28T21:52:21.2158470Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> 2020-09-28T21:52:21.2158890Z Caused by: java.lang.RuntimeException: Master 
> not initialized after 200000ms
> 2020-09-28T21:52:21.2159350Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:229)
> 2020-09-28T21:52:21.2159823Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:197)
> 2020-09-28T21:52:21.2160270Z  at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:413)
> 2020-09-28T21:52:21.2160800Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:259)
> 2020-09-28T21:52:21.2161096Z  ... 22 more
> {code}
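Intermittent failures like the one above, where a test mini cluster never reaches a ready state within its deadline, are commonly mitigated in test harnesses by retrying the startup a bounded number of times. A minimal, illustrative sketch in Java (this is not Flink's or HBase's actual retry logic, just the general shape of such a helper):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the action up to maxAttempts times, returning the first successful
    // result and rethrowing the last failure if every attempt fails.
    // Assumes maxAttempts >= 1.
    public static <T> T withRetries(Callable<T> action, int maxAttempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a cluster that only comes up on the third attempt.
        int[] calls = {0};
        String result = withRetries(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException("Master not initialized");
            }
            return "started";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The trade-off is that retries can mask a genuine regression in startup time, so a fix that addresses the root cause (or raises the timeout) is usually preferable.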



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19447) HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not initialized after 200000ms"

2020-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204394#comment-17204394
 ] 

Dian Fu commented on FLINK-19447:
-

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407

> HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not 
> initialized after 200000ms"
> ---
>
> Key: FLINK-19447
> URL: https://issues.apache.org/jira/browse/FLINK-19447
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407
> {code}
> 2020-09-28T21:52:21.2146147Z 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase  Time elapsed: 208.382 
> sec  <<< ERROR!
> 2020-09-28T21:52:21.2146638Z java.io.IOException: Shutting down
> 2020-09-28T21:52:21.2147004Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:266)
> 2020-09-28T21:52:21.2147637Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.&lt;init&gt;(MiniHBaseCluster.java:116)
> 2020-09-28T21:52:21.2148120Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1142)
> 2020-09-28T21:52:21.2148831Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1107)
> 2020-09-28T21:52:21.2149347Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1061)
> 2020-09-28T21:52:21.2149896Z  at 
> org.apache.flink.connector.hbase2.util.HBaseTestingClusterAutoStarter.setUp(HBaseTestingClusterAutoStarter.java:122)
> 2020-09-28T21:52:21.2150721Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-28T21:52:21.2151136Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-28T21:52:21.2151609Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-28T21:52:21.2152039Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-28T21:52:21.2152462Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-28T21:52:21.2152941Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-28T21:52:21.2153489Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-28T21:52:21.2153962Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2020-09-28T21:52:21.2154406Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-28T21:52:21.2154828Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-28T21:52:21.2155381Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
> 2020-09-28T21:52:21.2155864Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
> 2020-09-28T21:52:21.2156378Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-28T21:52:21.2156865Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
> 2020-09-28T21:52:21.2157458Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> 2020-09-28T21:52:21.2157993Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> 2020-09-28T21:52:21.2158470Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> 2020-09-28T21:52:21.2158890Z Caused by: java.lang.RuntimeException: Master 
> not initialized after 200000ms
> 2020-09-28T21:52:21.2159350Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:229)
> 2020-09-28T21:52:21.2159823Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:197)
> 2020-09-28T21:52:21.2160270Z  at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:413)
> 2020-09-28T21:52:21.2160800Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:259)
> 2020-09-28T21:52:21.2161096Z  ... 22 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19469) HBase connector 2.2 failed to download dependencies "org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT"

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19469:

Labels: test-stability  (was: )

> HBase connector 2.2 failed to download dependencies 
> "org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT" 
> 
>
> Key: FLINK-19469
> URL: https://issues.apache.org/jira/browse/FLINK-19469
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-09-29T20:59:24.8085970Z [ERROR] Failed to execute goal on project 
> flink-connector-hbase-2.2_2.11: Could not resolve dependencies for project 
> org.apache.flink:flink-connector-hbase-2.2_2.11:jar:1.12-SNAPSHOT: Failed to 
> collect dependencies at org.apache.hbase:hbase-server:jar:tests:2.2.3 -> 
> org.glassfish.web:javax.servlet.jsp:jar:2.3.2 -> 
> org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT: Failed to read artifact 
> descriptor for org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT: Could not 
> transfer artifact org.glassfish:javax.el:pom:3.0.1-b06-SNAPSHOT from/to 
> jvnet-nexus-snapshots 
> (https://maven.java.net/content/repositories/snapshots): Failed to transfer 
> file: 
> https://maven.java.net/content/repositories/snapshots/org/glassfish/javax.el/3.0.1-b06-SNAPSHOT/javax.el-3.0.1-b06-SNAPSHOT.pom.
>  Return code is: 503 , ReasonPhrase:Service Unavailable: Back-end server is 
> at capacity. -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19469) HBase connector 2.2 failed to download dependencies "org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT"

2020-09-29 Thread Dian Fu (Jira)
Dian Fu created FLINK-19469:
---

 Summary: HBase connector 2.2 failed to download dependencies 
"org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT" 
 Key: FLINK-19469
 URL: https://issues.apache.org/jira/browse/FLINK-19469
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7093=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

{code}
2020-09-29T20:59:24.8085970Z [ERROR] Failed to execute goal on project 
flink-connector-hbase-2.2_2.11: Could not resolve dependencies for project 
org.apache.flink:flink-connector-hbase-2.2_2.11:jar:1.12-SNAPSHOT: Failed to 
collect dependencies at org.apache.hbase:hbase-server:jar:tests:2.2.3 -> 
org.glassfish.web:javax.servlet.jsp:jar:2.3.2 -> 
org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT: Failed to read artifact 
descriptor for org.glassfish:javax.el:jar:3.0.1-b06-SNAPSHOT: Could not 
transfer artifact org.glassfish:javax.el:pom:3.0.1-b06-SNAPSHOT from/to 
jvnet-nexus-snapshots (https://maven.java.net/content/repositories/snapshots): 
Failed to transfer file: 
https://maven.java.net/content/repositories/snapshots/org/glassfish/javax.el/3.0.1-b06-SNAPSHOT/javax.el-3.0.1-b06-SNAPSHOT.pom.
 Return code is: 503 , ReasonPhrase:Service Unavailable: Back-end server is at 
capacity. -> [Help 1]
{code}
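The error above comes from Maven trying to resolve a stale SNAPSHOT pom (`org.glassfish:javax.el:3.0.1-b06-SNAPSHOT`) of a transitive dependency from a repository that is no longer serving it. A common workaround is to exclude the offending transitive path and pin a released `javax.el` artifact instead. The sketch below is a hedged POM fragment, not a tested Flink change: the coordinates are taken from the error message, but whether the exclusion is safe depends on what the `hbase-server` test jar actually needs at runtime.

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <version>2.2.3</version>
  <type>test-jar</type>
  <scope>test</scope>
  <exclusions>
    <!-- This artifact's pom references the unresolvable -b06-SNAPSHOT parent. -->
    <exclusion>
      <groupId>org.glassfish.web</groupId>
      <artifactId>javax.servlet.jsp</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Pin a released version so nothing tries to resolve the SNAPSHOT pom. -->
<dependency>
  <groupId>org.glassfish</groupId>
  <artifactId>javax.el</artifactId>
  <version>3.0.0</version>
  <scope>test</scope>
</dependency>
```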



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19454) Translate page 'Importing Flink into an IDE' into Chinese

2020-09-29 Thread wulei0302 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204391#comment-17204391
 ] 

wulei0302 commented on FLINK-19454:
---

[~jark] Hello Jark, could you assign this issue to me?

> Translate page 'Importing Flink into an IDE' into Chinese
> -
>
> Key: FLINK-19454
> URL: https://issues.apache.org/jira/browse/FLINK-19454
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: wulei0302
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/flinkDev/ide_setup.html]
> The markdown file is located in {{flink/docs/flinkDev/ide_setup.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19454) Translate page 'Importing Flink into an IDE' into Chinese

2020-09-29 Thread wulei0302 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wulei0302 updated FLINK-19454:
--
Comment: was deleted

(was: [~libenchao] Hello Benchao, could you assign this issue to me?)

> Translate page 'Importing Flink into an IDE' into Chinese
> -
>
> Key: FLINK-19454
> URL: https://issues.apache.org/jira/browse/FLINK-19454
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.2
>Reporter: wulei0302
>Priority: Minor
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/flinkDev/ide_setup.html]
> The markdown file is located in {{flink/docs/flinkDev/ide_setup.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18648) KafkaITCase.testStartFromGroupOffsets times out on azure

2020-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204387#comment-17204387
 ] 

Dian Fu edited comment on FLINK-18648 at 9/30/20, 1:13 AM:
---

KafkaITCase.testAutoOffsetRetrievalAndCommitToKafka on the 1.11 branch failed 
with the same exception:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7094=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=19faa67c-fc43-55af-ba4e-7519fca274b5


was (Author: dian.fu):
KafkaITCase.testAutoOffsetRetrievalAndCommitToKafka failed with the same 
exception:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7094=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=19faa67c-fc43-55af-ba4e-7519fca274b5

> KafkaITCase.testStartFromGroupOffsets times out on azure
> 
>
> Key: FLINK-18648
> URL: https://issues.apache.org/jira/browse/FLINK-18648
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4639=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
> {code}
> 2020-07-20T12:16:53.4337483Z [INFO] Tests run: 12, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 203.504 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-20T12:16:53.9546587Z [INFO] 
> 2020-07-20T12:16:53.9546890Z [INFO] Results:
> 2020-07-20T12:16:53.9547158Z [INFO] 
> 2020-07-20T12:16:53.9547380Z [ERROR] Errors: 
> 2020-07-20T12:16:53.9548822Z [ERROR]   
> KafkaITCase.testStartFromGroupOffsets:158->KafkaConsumerTestBase.runStartFromGroupOffsets:540->KafkaConsumerTestBase.writeSequence:1992
>  » TestTimedOut
> 2020-07-20T12:16:53.9551978Z [INFO] 
> 2020-07-20T12:16:53.9552734Z [ERROR] Tests run: 80, Failures: 0, Errors: 1, 
> Skipped: 0
> {code}
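The `TestTimedOut` error above is what surfaces when an integration test polls for a condition (here, a written sequence becoming visible to consumers) and the deadline expires first. A self-contained Java sketch of that bounded-wait pattern, illustrative only and not the actual `KafkaConsumerTestBase` code:

```java
import java.util.function.BooleanSupplier;

public class BoundedWait {
    // Polls the condition every pollMs until it holds or timeoutMs elapses.
    // Returns false on timeout, which is what a TestTimedOut failure reflects.
    public static boolean waitUntil(BooleanSupplier cond, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated condition that becomes true after ~50 ms.
        boolean ok = waitUntil(() -> System.currentTimeMillis() - start > 50, 500, 10);
        System.out.println("condition met: " + ok);
    }
}
```

When such a test flakes, the usual suspects are an undersized timeout for a loaded CI machine or the condition genuinely never becoming true (e.g. offsets never committed).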



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18648) KafkaITCase.testStartFromGroupOffsets times out on azure

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18648:

Labels: test-stability  (was: )

> KafkaITCase.testStartFromGroupOffsets times out on azure
> 
>
> Key: FLINK-18648
> URL: https://issues.apache.org/jira/browse/FLINK-18648
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4639=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
> {code}
> 2020-07-20T12:16:53.4337483Z [INFO] Tests run: 12, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 203.504 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-20T12:16:53.9546587Z [INFO] 
> 2020-07-20T12:16:53.9546890Z [INFO] Results:
> 2020-07-20T12:16:53.9547158Z [INFO] 
> 2020-07-20T12:16:53.9547380Z [ERROR] Errors: 
> 2020-07-20T12:16:53.9548822Z [ERROR]   
> KafkaITCase.testStartFromGroupOffsets:158->KafkaConsumerTestBase.runStartFromGroupOffsets:540->KafkaConsumerTestBase.writeSequence:1992
>  » TestTimedOut
> 2020-07-20T12:16:53.9551978Z [INFO] 
> 2020-07-20T12:16:53.9552734Z [ERROR] Tests run: 80, Failures: 0, Errors: 1, 
> Skipped: 0
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18648) KafkaITCase.testStartFromGroupOffsets times out on azure

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18648:

Affects Version/s: 1.11.0

> KafkaITCase.testStartFromGroupOffsets times out on azure
> 
>
> Key: FLINK-18648
> URL: https://issues.apache.org/jira/browse/FLINK-18648
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4639=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
> {code}
> 2020-07-20T12:16:53.4337483Z [INFO] Tests run: 12, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 203.504 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-20T12:16:53.9546587Z [INFO] 
> 2020-07-20T12:16:53.9546890Z [INFO] Results:
> 2020-07-20T12:16:53.9547158Z [INFO] 
> 2020-07-20T12:16:53.9547380Z [ERROR] Errors: 
> 2020-07-20T12:16:53.9548822Z [ERROR]   
> KafkaITCase.testStartFromGroupOffsets:158->KafkaConsumerTestBase.runStartFromGroupOffsets:540->KafkaConsumerTestBase.writeSequence:1992
>  » TestTimedOut
> 2020-07-20T12:16:53.9551978Z [INFO] 
> 2020-07-20T12:16:53.9552734Z [ERROR] Tests run: 80, Failures: 0, Errors: 1, 
> Skipped: 0
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18648) KafkaITCase.testStartFromGroupOffsets times out on azure

2020-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204387#comment-17204387
 ] 

Dian Fu commented on FLINK-18648:
-

KafkaITCase.testAutoOffsetRetrievalAndCommitToKafka failed with the same 
exception:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7094=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=19faa67c-fc43-55af-ba4e-7519fca274b5

> KafkaITCase.testStartFromGroupOffsets times out on azure
> 
>
> Key: FLINK-18648
> URL: https://issues.apache.org/jira/browse/FLINK-18648
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4639=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
> {code}
> 2020-07-20T12:16:53.4337483Z [INFO] Tests run: 12, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 203.504 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 2020-07-20T12:16:53.9546587Z [INFO] 
> 2020-07-20T12:16:53.9546890Z [INFO] Results:
> 2020-07-20T12:16:53.9547158Z [INFO] 
> 2020-07-20T12:16:53.9547380Z [ERROR] Errors: 
> 2020-07-20T12:16:53.9548822Z [ERROR]   
> KafkaITCase.testStartFromGroupOffsets:158->KafkaConsumerTestBase.runStartFromGroupOffsets:540->KafkaConsumerTestBase.writeSequence:1992
>  » TestTimedOut
> 2020-07-20T12:16:53.9551978Z [INFO] 
> 2020-07-20T12:16:53.9552734Z [ERROR] Tests run: 80, Failures: 0, Errors: 1, 
> Skipped: 0
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19437) FileSourceTextLinesITCase.testContinuousTextFileSource failed with "SimpleStreamFormat is not splittable, but found split end (0) different from file length (198)"

2020-09-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-19437:

Fix Version/s: 1.12.0

> FileSourceTextLinesITCase.testContinuousTextFileSource failed with 
> "SimpleStreamFormat is not splittable, but found split end (0) different from 
> file length (198)"
> ---
>
> Key: FLINK-19437
> URL: https://issues.apache.org/jira/browse/FLINK-19437
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Stephan Ewen
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7008=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-27T21:58:38.9199090Z [ERROR] 
> testContinuousTextFileSource(org.apache.flink.connector.file.src.FileSourceTextLinesITCase)
>   Time elapsed: 0.517 s  <<< ERROR!
> 2020-09-27T21:58:38.9199619Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-09-27T21:58:38.9200118Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> 2020-09-27T21:58:38.9200722Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
> 2020-09-27T21:58:38.9201290Z  at 
> org.apache.flink.streaming.api.datastream.DataStreamUtils.collectRecordsFromUnboundedStream(DataStreamUtils.java:150)
> 2020-09-27T21:58:38.9201920Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testContinuousTextFileSource(FileSourceTextLinesITCase.java:136)
> 2020-09-27T21:58:38.9202570Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-27T21:58:38.9203054Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-27T21:58:38.9203539Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-27T21:58:38.9203968Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-27T21:58:38.9204369Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-27T21:58:38.9204844Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-27T21:58:38.9205359Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-27T21:58:38.9205814Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-09-27T21:58:38.9206240Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-09-27T21:58:38.9206611Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9206971Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-09-27T21:58:38.9207404Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-09-27T21:58:38.9207971Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-09-27T21:58:38.9208404Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-09-27T21:58:38.9208877Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-09-27T21:58:38.9209279Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-09-27T21:58:38.9209680Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-09-27T21:58:38.9210064Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-09-27T21:58:38.9210476Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9210881Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-09-27T21:58:38.9211272Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-09-27T21:58:38.9211638Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-27T21:58:38.9212305Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-09-27T21:58:38.9213157Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-09-27T21:58:38.9213663Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-27T21:58:38.9214123Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-09-27T21:58:38.9214620Z  at 
> 

[jira] [Commented] (FLINK-19447) HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not initialized after 200000ms"

2020-09-29 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204383#comment-17204383
 ] 

Dian Fu commented on FLINK-19447:
-

All the logs could be found here: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042=artifacts=publishedArtifacts

The log for this specific failed tests:
https://artprodsu6weu.artifacts.visualstudio.com/A2d3c0ac8-fecf-45be-8407-6d87302181a9/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/artifact/cGlwZWxpbmVhcnRpZmFjdDovL2FwYWNoZS1mbGluay9wcm9qZWN0SWQvOTg0NjM0OTYtMWFmMi00NjIwLThlYWItYTJlY2MxYTJlNmZlL2J1aWxkSWQvNzA0Mi9hcnRpZmFjdE5hbWUvbG9ncy1jcm9uX2hhZG9vcDI0MS1jb25uZWN0b3JzLTE2MDEzMjc5ODk1/content?format=zip

> HBaseConnectorITCase.HBaseTestingClusterAutoStarter failed with "Master not 
> initialized after 200000ms"
> ---
>
> Key: FLINK-19447
> URL: https://issues.apache.org/jira/browse/FLINK-19447
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407
> {code}
> 2020-09-28T21:52:21.2146147Z 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase  Time elapsed: 208.382 
> sec  <<< ERROR!
> 2020-09-28T21:52:21.2146638Z java.io.IOException: Shutting down
> 2020-09-28T21:52:21.2147004Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:266)
> 2020-09-28T21:52:21.2147637Z  at 
> org.apache.hadoop.hbase.MiniHBaseCluster.&lt;init&gt;(MiniHBaseCluster.java:116)
> 2020-09-28T21:52:21.2148120Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1142)
> 2020-09-28T21:52:21.2148831Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1107)
> 2020-09-28T21:52:21.2149347Z  at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1061)
> 2020-09-28T21:52:21.2149896Z  at 
> org.apache.flink.connector.hbase2.util.HBaseTestingClusterAutoStarter.setUp(HBaseTestingClusterAutoStarter.java:122)
> 2020-09-28T21:52:21.2150721Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-09-28T21:52:21.2151136Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-09-28T21:52:21.2151609Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-09-28T21:52:21.2152039Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-09-28T21:52:21.2152462Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-09-28T21:52:21.2152941Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-09-28T21:52:21.2153489Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-09-28T21:52:21.2153962Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2020-09-28T21:52:21.2154406Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-09-28T21:52:21.2154828Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-09-28T21:52:21.2155381Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
> 2020-09-28T21:52:21.2155864Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
> 2020-09-28T21:52:21.2156378Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-09-28T21:52:21.2156865Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
> 2020-09-28T21:52:21.2157458Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
> 2020-09-28T21:52:21.2157993Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
> 2020-09-28T21:52:21.2158470Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> 2020-09-28T21:52:21.2158890Z Caused by: java.lang.RuntimeException: Master 
> not initialized after 200000ms
> 2020-09-28T21:52:21.2159350Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:229)
> 2020-09-28T21:52:21.2159823Z  at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:197)
> 2020-09-28T21:52:21.2160270Z  at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:413)
> 2020-09-28T21:52:21.2160800Z  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #13499: [FLINK-16972][network] LocalBufferPool eagerly fetches global segments to ensure proper availability.

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13499:
URL: https://github.com/apache/flink/pull/13499#issuecomment-699852497


   
   ## CI report:
   
   * f298e8b85569242501d871e326cce21695582caa Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7095)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13514: [FLINK-19468][DataStream API] Metrics return empty when data stream / operator name contains "+"

2020-09-29 Thread GitBox


flinkbot edited a comment on pull request #13514:
URL: https://github.com/apache/flink/pull/13514#issuecomment-701038173


   
   ## CI report:
   
   * 8a72f73417ceb18536d282679e8ce5c60f634acb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7100)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13514: [FLINK-19468][DataStream API] Metrics return empty when data stream / operator name contains "+"

2020-09-29 Thread GitBox


flinkbot commented on pull request #13514:
URL: https://github.com/apache/flink/pull/13514#issuecomment-701038173


   
   ## CI report:
   
   * 8a72f73417ceb18536d282679e8ce5c60f634acb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13514: [FLINK-19468][DataStream API] Metrics return empty when data stream / operator name contains "+"

2020-09-29 Thread GitBox


flinkbot commented on pull request #13514:
URL: https://github.com/apache/flink/pull/13514#issuecomment-701034946


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8a72f73417ceb18536d282679e8ce5c60f634acb (Tue Sep 29 
22:57:32 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-19468).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] jerrypeng opened a new pull request #13514: [FLINK-19468][DataStream API] Metrics return empty when data stream / operator name contains "+"

2020-09-29 Thread GitBox


jerrypeng opened a new pull request #13514:
URL: https://github.com/apache/flink/pull/13514


   There is an issue in which the special character "+" is not removed from the 
data stream / operator name, which causes metrics for that operator not to be 
returned properly. Code reference:
   
   
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/dump/MetricQueryService.java#L208
   

   
   For example, if the operator name is:
   
   pulsar(url: pulsar+ssl://192.168.1.198:56014)
   
   Metrics for an operator with the above name will always return empty.
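
   A minimal sketch of the kind of name sanitization this PR is about. This is 
not Flink's actual `MetricQueryService` code; the class name, the regex, and 
the replacement character below are assumptions chosen for illustration. The 
idea is to replace every character outside a small safe set with an underscore, 
so that characters such as "+" in an operator name cannot leak into later 
metric lookups:

   ```java
   // Hypothetical sketch, not Flink's implementation: strip "unsafe"
   // characters from an operator name before it is used as a metric key.
   public class MetricNameSanitizer {
       // Replace every character that is not alphanumeric, dot, dash, or
       // underscore with an underscore. With this rule, "+" (the character
       // FLINK-19468 reports as unhandled) is mapped to "_" like any other
       // special character instead of surviving into the metric name.
       static String sanitize(String name) {
           return name.replaceAll("[^a-zA-Z0-9.\\-_]", "_");
       }

       public static void main(String[] args) {
           String raw = "pulsar(url: pulsar+ssl://192.168.1.198:56014)";
           // "(", ":", " ", "+", "/" and ")" are all replaced by "_".
           System.out.println(sanitize(raw));
       }
   }
   ```

   With such a rule applied consistently on both the registration and the 
query side, a name containing "+" resolves to the same sanitized key in both 
places, and the metrics for the operator can be found again.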






