[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * 0b35756a911b269e896321f64a6146e8ed2eebe9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29948)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * d5650e23a86bce6a51f24b3ab1216c8167a78cbb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29947)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * 44a88561780f8e995f69cad227fd53ac3340ff4f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29933)
 
   * 0b35756a911b269e896321f64a6146e8ed2eebe9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29948)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18437: [FLINK-25712][connector/tests] Merge flink-connector-testing into flink-connector-test-utils

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18437:
URL: https://github.com/apache/flink/pull/18437#issuecomment-1018174136


   
   ## CI report:
   
   * 44a88561780f8e995f69cad227fd53ac3340ff4f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29933)
 
   * 0b35756a911b269e896321f64a6146e8ed2eebe9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Comment Edited] (FLINK-25670) StateFun: Unable to handle oversize HTTP message if state size is large

2022-01-22 Thread Kyle (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480541#comment-17480541
 ] 

Kyle edited comment on FLINK-25670 at 1/23/22, 3:55 AM:


When I set payload_max_bytes to 1GB,
{code:java}
    kind: io.statefun.endpoints.v2/http
    spec:
      functions: example/*
      urlPathTemplate: http://functions.statefun.svc.cluster.local:8000/statefun
      transport:
        type: io.statefun.transports.v1/async
        payload_max_bytes: 1073741824
        timeouts:
          call: 2min {code}
Another error occurs:
{code:java}
2022-01-23 02:39:16,469 INFO  
org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Retry #3 
ToFunctionRequestSummary(address=Address(example, hello, ), batchSize=1, 
totalSizeInBytes=80, numberOfStates=2) ,About to sleep for 16
2022-01-23 02:39:24,687 WARN  
org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception 
caught while trying to deliver a message: (attempt 
#3)ToFunctionRequestSummary(address=Address(example, hello, ), batchSize=1, 
totalSizeInBytes=80, numberOfStates=2)
java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Unknown Source) ~[?:?]
        at java.nio.DirectByteBuffer.(Unknown Source) ~[?:?]
        at java.nio.ByteBuffer.allocateDirect(Unknown Source) ~[?:?]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:755)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:745)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:262)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena.allocate(PoolArena.java:232)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:356)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.allocBuffer(CompositeByteBuf.java:1853)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.consolidate0(CompositeByteBuf.java:1732)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.consolidateIfNeeded(CompositeByteBuf.java:559)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.addComponent(CompositeByteBuf.java:266)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.addComponent(CompositeByteBuf.java:222)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.appendPartialContent(MessageAggregator.java:333)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:298)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.i

[jira] (FLINK-25670) StateFun: Unable to handle oversize HTTP message if state size is large

2022-01-22 Thread Kyle (Jira)


[ https://issues.apache.org/jira/browse/FLINK-25670 ]


Kyle deleted comment on FLINK-25670:
--

was (Author: JIRAUSER283693):
Adding this configuration to flink-config.yaml does not work.
{code:java}
statefun.feedback.memory.size: 1GB {code}

> StateFun: Unable to handle oversize HTTP message if state size is large
> ---
>
> Key: FLINK-25670
> URL: https://issues.apache.org/jira/browse/FLINK-25670
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.1.1
>Reporter: Kyle
>Priority: Major
> Attachments: 00-module.yaml, functions.py
>
>
> Per requirement we need to handle state that is about 500 MB (72 MB of
> state is allocated in the commented code, as attached). However, the HTTP
> message size limit prevents us from sending large state back to the StateFun
> cluster after saving state in a Stateful Function.
> Another question is whether large data can be sent to a Stateful
> Function from an ingress.
>  
> 2022-01-17 07:57:18,416 WARN  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception 
> caught while trying to deliver a message: (attempt 
> #10)ToFunctionRequestSummary(address=Address(example, hello, ), 
> batchSize=1, totalSizeInBytes=80, numberOfStates=2)
> org.apache.flink.shaded.netty4.io.netty.handler.codec.TooLongFrameException: 
> Response entity too large: DefaultHttpResponse(decodeResult: success, 
> version: HTTP/1.1)
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> Content-Length: 40579630
> Date: Mon, 17 Jan 2022 07:57:18 GMT
> Server: Python/3.9 aiohttp/3.8.1
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:276)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:87)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.invokeHandleOversizedMessage(MessageAggregator.java:404)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:254)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:425)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
>  [statefun-flink-distribution.jar:3.1.1]
>

[jira] [Commented] (FLINK-25670) StateFun: Unable to handle oversize HTTP message if state size is large

2022-01-22 Thread Kyle (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480543#comment-17480543
 ] 

Kyle commented on FLINK-25670:
--

Adding this configuration to flink-config.yaml does not work.
{code:java}
statefun.feedback.memory.size: 1GB {code}

> StateFun: Unable to handle oversize HTTP message if state size is large
> ---
>
> Key: FLINK-25670
> URL: https://issues.apache.org/jira/browse/FLINK-25670
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-3.1.1
>Reporter: Kyle
>Priority: Major
> Attachments: 00-module.yaml, functions.py
>
>
> Per requirement we need to handle state that is about 500 MB (72 MB of
> state is allocated in the commented code, as attached). However, the HTTP
> message size limit prevents us from sending large state back to the StateFun
> cluster after saving state in a Stateful Function.
> Another question is whether large data can be sent to a Stateful
> Function from an ingress.
>  
> 2022-01-17 07:57:18,416 WARN  
> org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception 
> caught while trying to deliver a message: (attempt 
> #10)ToFunctionRequestSummary(address=Address(example, hello, ), 
> batchSize=1, totalSizeInBytes=80, numberOfStates=2)
> org.apache.flink.shaded.netty4.io.netty.handler.codec.TooLongFrameException: 
> Response entity too large: DefaultHttpResponse(decodeResult: success, 
> version: HTTP/1.1)
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> Content-Length: 40579630
> Date: Mon, 17 Jan 2022 07:57:18 GMT
> Server: Python/3.9 aiohttp/3.8.1
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:276)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpObjectAggregator.handleOversizedMessage(HttpObjectAggregator.java:87)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.invokeHandleOversizedMessage(MessageAggregator.java:404)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:254)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
>  ~[statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:425)
>  [statefun-flink-distribution.jar:3.1.1]
>         at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(Byte

[jira] [Commented] (FLINK-25670) StateFun: Unable to handle oversize HTTP message if state size is large

2022-01-22 Thread Kyle (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480541#comment-17480541
 ] 

Kyle commented on FLINK-25670:
--

When I set payload_max_bytes to 1GB,
{code:java}
    kind: io.statefun.endpoints.v2/http
    spec:
      functions: example/*
      urlPathTemplate: http://functions.statefun.svc.cluster.local:8000/statefun
      transport:
        type: io.statefun.transports.v1/async
        payload_max_bytes: 1073741824
        timeouts:
          call: 2min {code}
Another error occurs:
{code:java}
2022-01-23 02:39:16,469 INFO  
org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Retry #3 
ToFunctionRequestSummary(address=Address(example, hello, ), batchSize=1, 
totalSizeInBytes=80, numberOfStates=2) ,About to sleep for 16
2022-01-23 02:39:24,687 WARN  
org.apache.flink.statefun.flink.core.nettyclient.NettyRequest [] - Exception 
caught while trying to deliver a message: (attempt 
#3)ToFunctionRequestSummary(address=Address(example, hello, ), batchSize=1, 
totalSizeInBytes=80, numberOfStates=2)
java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Unknown Source) ~[?:?]
        at java.nio.DirectByteBuffer.(Unknown Source) ~[?:?]
        at java.nio.ByteBuffer.allocateDirect(Unknown Source) ~[?:?]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:755)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:745)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:262)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena.allocate(PoolArena.java:232)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PoolArena.allocate(PoolArena.java:147)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:356)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.allocBuffer(CompositeByteBuf.java:1853)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.consolidate0(CompositeByteBuf.java:1732)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.consolidateIfNeeded(CompositeByteBuf.java:559)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.addComponent(CompositeByteBuf.java:266)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.buffer.CompositeByteBuf.addComponent(CompositeByteBuf.java:222)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.appendPartialContent(MessageAggregator.java:333)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:298)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
 ~[statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
 [statefun-flink-distribution.jar:3.1.1]
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invok
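
For readers hitting the same `OutOfMemoryError: Direct buffer memory`: Netty aggregates the HTTP response in direct (off-heap) memory, so a 1 GB `payload_max_bytes` needs a correspondingly large direct-memory budget on the task managers. A minimal flink-conf.yaml sketch, using standard Flink memory options (the sizes are illustrative placeholders only, not a verified fix for this issue):

```yaml
# Illustrative only: enlarge the off-heap budget that backs Netty's
# direct buffers. Sizes are placeholders, not tuned recommendations.
taskmanager.memory.task.off-heap.size: 2gb        # user/Netty direct buffers
taskmanager.memory.framework.off-heap.size: 512mb # Flink framework off-heap
```

Whether this resolves the aggregation failure for multi-hundred-megabyte states is untested here; the underlying limitation of buffering the whole response in memory remains.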

[GitHub] [flink] flinkbot edited a comment on pull request #18448: [FLINK-25763][match-recognize][docs] Updated docs to use code tag consistent with other tables

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18448:
URL: https://github.com/apache/flink/pull/18448#issuecomment-1019378090


   
   ## CI report:
   
   * fb9ab3825be1ae799b1f125635c33b87114ba556 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29943)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 1f41c907b17fed4a57aef3ad9ddeb2760551fdcf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29942)
 
   * d5650e23a86bce6a51f24b3ab1216c8167a78cbb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29947)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 1f41c907b17fed4a57aef3ad9ddeb2760551fdcf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29942)
 
   * d5650e23a86bce6a51f24b3ab1216c8167a78cbb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18448: [FLINK-25763][match-recognize][docs] Updated docs to use code tag consistent with other tables

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18448:
URL: https://github.com/apache/flink/pull/18448#issuecomment-1019378090


   
   ## CI report:
   
   * fb9ab3825be1ae799b1f125635c33b87114ba556 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29943)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #18448: [FLINK-25763][match-recognize][docs] Updated docs to use code tag consistent with other tables

2022-01-22 Thread GitBox


flinkbot commented on pull request #18448:
URL: https://github.com/apache/flink/pull/18448#issuecomment-1019378090


   
   ## CI report:
   
   * fb9ab3825be1ae799b1f125635c33b87114ba556 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #18448: [FLINK-25763][match-recognize][docs] Updated docs to use code tag consistent with other tables

2022-01-22 Thread GitBox


flinkbot commented on pull request #18448:
URL: https://github.com/apache/flink/pull/18448#issuecomment-1019377978


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fb9ab3825be1ae799b1f125635c33b87114ba556 (Sat Jan 22 
23:42:48 UTC 2022)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25763).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Updated] (FLINK-25763) Match Recognize Logical Offsets function table shows backticks

2022-01-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25763:
---
Labels: doc pull-request-available table-api  (was: doc table-api)

> Match Recognize Logical Offsets function table shows backticks
> --
>
> Key: FLINK-25763
> URL: https://issues.apache.org/jira/browse/FLINK-25763
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.14.3
> Environment: All
>Reporter: Mans Singh
>Priority: Minor
>  Labels: doc, pull-request-available, table-api
> Fix For: 1.15.0
>
> Attachments: MatchRecognizeLogicalOffsets.png
>
>
> The match recognize logical offsets functions in the table are formatted with 
> back ticks as shown below:
>  
> !MatchRecognizeLogicalOffsets.png|width=736,height=230!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] mans2singh opened a new pull request #18448: [FLINK-25763][match-recognize][docs] Updated docs to use code tag consistent with other tables

2022-01-22 Thread GitBox


mans2singh opened a new pull request #18448:
URL: https://github.com/apache/flink/pull/18448


   ## What is the purpose of the change
   
   * The match recognize functions were displayed with triple backticks in the 
table.
   
   ## Brief change log
   
   * Updated docs to use `<code>...</code>` like in other tables in the docs.
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
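
   For illustration, a hedged sketch of the kind of markup change this PR 
describes (the exact table-cell contents are assumptions; `LAST` is one of the 
logical-offsets functions from the Match Recognize docs):

   ```html
   <!-- Sketch only; cell contents assumed, not copied from the patch. -->
   <!-- Before: triple backticks render literally inside the HTML table cell -->
   <td>```LAST(variable.field, n)```</td>
   <!-- After: a code tag, consistent with the other function tables -->
   <td><code>LAST(variable.field, n)</code></td>
   ```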
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25763) Match Recognize Logical Offsets function table shows backticks

2022-01-22 Thread Mans Singh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480524#comment-17480524
 ] 

Mans Singh commented on FLINK-25763:


Please assign this issue to me.  Thanks

> Match Recognize Logical Offsets function table shows backticks
> --
>
> Key: FLINK-25763
> URL: https://issues.apache.org/jira/browse/FLINK-25763
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.14.3
> Environment: All
>Reporter: Mans Singh
>Priority: Minor
>  Labels: doc, table-api
> Fix For: 1.15.0
>
> Attachments: MatchRecognizeLogicalOffsets.png
>
>
> The match recognize logical offsets functions in the table are formatted with 
> back ticks as shown below:
>  
> !MatchRecognizeLogicalOffsets.png|width=736,height=230!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25763) Match Recognize Logical Offsets function table shows backticks

2022-01-22 Thread Mans Singh (Jira)
Mans Singh created FLINK-25763:
--

 Summary: Match Recognize Logical Offsets function table shows 
backticks
 Key: FLINK-25763
 URL: https://issues.apache.org/jira/browse/FLINK-25763
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.14.3
 Environment: All
Reporter: Mans Singh
 Fix For: 1.15.0
 Attachments: MatchRecognizeLogicalOffsets.png

The match recognize logical offsets functions in the table are formatted with 
back ticks as shown below:

 

!MatchRecognizeLogicalOffsets.png|width=736,height=230!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 1f41c907b17fed4a57aef3ad9ddeb2760551fdcf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29942)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-25377) kubernetes's request/limit resource can been seperated?

2022-01-22 Thread Tamir Sagi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480506#comment-17480506
 ] 

Tamir Sagi edited comment on FLINK-25377 at 1/22/22, 6:31 PM:
--

In the meantime, what about using a Pod template for your Flink cluster? You can 
set request/limit there. You can also have a dedicated template for the JM and 
TM separately.


was (Author: JIRAUSER283777):
Until the API will allow you to define these value separately, What about using 
Pod template for your flink cluster?? you can set request/limit there. You can 
also have a dedicated template for JM/TM separately .

> kubernetes's request/limit resource can been seperated?
> ---
>
> Key: FLINK-25377
> URL: https://issues.apache.org/jira/browse/FLINK-25377
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.2
> Environment: flink 1.13.2
> kubernetes
>Reporter: jeff-zou
>Priority: Major
>
> My Kubernetes cluster currently has very low CPU utilization, but I cannot 
> publish more tasks because resources are already occupied by Pod requests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25377) kubernetes's request/limit resource can been seperated?

2022-01-22 Thread Tamir Sagi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480506#comment-17480506
 ] 

Tamir Sagi commented on FLINK-25377:


Until the API allows you to define these values separately, what about using a 
Pod template for your Flink cluster? You can set request/limit there. You can 
also have a dedicated template for the JM and TM separately.
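
A minimal sketch of that suggestion (assumes Flink >= 1.13 native Kubernetes; 
the file name and resource values are placeholders, and whether Flink's own 
configured resources override the template's main-container resources is 
version-dependent, which is what this ticket is about):

```yaml
# tm-pod-template.yaml -- hypothetical example, values are placeholders.
# Passed to the cluster via:
#   -Dkubernetes.pod-template-file.taskmanager=tm-pod-template.yaml
apiVersion: v1
kind: Pod
metadata:
  name: taskmanager-pod-template
spec:
  containers:
    # Flink merges its own settings into the container named flink-main-container
    - name: flink-main-container
      resources:
        requests:
          cpu: "1"        # low request leaves room to schedule more pods
          memory: 2Gi
        limits:
          cpu: "4"        # allow bursting beyond the request
          memory: 2Gi
```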

> kubernetes's request/limit resource can been seperated?
> ---
>
> Key: FLINK-25377
> URL: https://issues.apache.org/jira/browse/FLINK-25377
> Project: Flink
>  Issue Type: New Feature
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.2
> Environment: flink 1.13.2
> kubernetes
>Reporter: jeff-zou
>Priority: Major
>
> My Kubernetes cluster currently has very low CPU utilization, but I cannot 
> publish more tasks because resources are already occupied by Pod requests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overriden when cluster is deployed

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Labels: Flink flink-k8s native-k8s pull-request-available  (was: Flink 
flink-k8s native-kuber pull-request-available)

> Native k8s- User defined system properties get overriden when cluster is 
> deployed 
> --
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-k8s, pull-request-available
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "$\{log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "$\{ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b
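
The quoted exec line matters because, for duplicate -D flags, the JVM keeps the 
last occurrence on the command line. A small runnable sketch of that precedence 
(paths are illustrative; this is not the actual patch):

```shell
# A user-supplied log4j override and the distribution's default setting.
user_opts="-Dlog4j.configurationFile=file:/custom/log4j.properties"
default_opts="-Dlog4j.configurationFile=file:/opt/flink/conf/log4j-console.properties"

# Pick the last -Dlog4j.* flag from an argument string, as the JVM would.
last_log4j_flag() {
    winner=""
    for flag in $1; do
        case $flag in -Dlog4j*) winner=$flag ;; esac
    done
    printf '%s\n' "$winner"
}

# Original flink-console.sh order: user opts first, so the default wins.
before_fix=$(last_log4j_flag "$user_opts $default_opts")
# Reordered: user opts last, so the user-defined property takes effect.
after_fix=$(last_log4j_flag "$default_opts $user_opts")

echo "before: $before_fix"
echo "after:  $after_fix"
```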



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overriden when cluster is deployed

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Labels: Flink flink-k8s kubernetes native-k8s native-kubernetes 
pull-request-available  (was: Flink flink-k8s native-k8s pull-request-available)

> Native k8s- User defined system properties get overriden when cluster is 
> deployed 
> --
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, kubernetes, native-k8s, 
> native-kubernetes, pull-request-available
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "$\{log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "$\{ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overriden when cluster is deployed

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Fix Version/s: (was: 1.14.4)

> Native k8s- User defined system properties get overriden when cluster is 
> deployed 
> --
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "$\{log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "$\{ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18447: FLINK-25762 - [Deployment/kubernetes] Move JVM system properties args to the end of exec command

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18447:
URL: https://github.com/apache/flink/pull/18447#issuecomment-1019286035


   
   ## CI report:
   
   * 9dc6ca17c031ce1a0a83b5d569013f6644a6e3af Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29941)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 87737e9930a873bd0ca5613afc75e05506996541 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29937)
 
   * 1f41c907b17fed4a57aef3ad9ddeb2760551fdcf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29942)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 87737e9930a873bd0ca5613afc75e05506996541 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29937)
 
   * 1f41c907b17fed4a57aef3ad9ddeb2760551fdcf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25758) GCS Filesystem implementation fails on Java 11 tests due to licensing issues

2022-01-22 Thread Galen Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480471#comment-17480471
 ] 

Galen Warren commented on FLINK-25758:
--

Sure, I'll take a look. Is there a way to run the license check locally?

> GCS Filesystem implementation fails on Java 11 tests due to licensing issues
> 
>
> Key: FLINK-25758
> URL: https://issues.apache.org/jira/browse/FLINK-25758
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Galen Warren
>Priority: Blocker
>
> {code}
> 00:33:45,410 DEBUG org.apache.flink.tools.ci.licensecheck.NoticeFileChecker   
>   [] - Dependency io.netty:netty-common:4.1.51.Final is mentioned in NOTICE 
> file /__w/2/s/flink-python/src/main/resources/META-INF/NOTICE, but was not 
> mentioned by the build output as a bundled dependency
> 00:33:45,411 ERROR org.apache.flink.tools.ci.licensecheck.NoticeFileChecker   
>   [] - Could not find dependency javax.annotation:javax.annotation-api:1.3.2 
> in NOTICE file 
> /__w/2/s/flink-filesystems/flink-gs-fs-hadoop/src/main/resources/META-INF/NOTICE
> 00:33:45,536 INFO  org.apache.flink.tools.ci.licensecheck.JarFileChecker  
>   [] - Checking directory /tmp/flink-validation-deployment with a total of 
> 197 jar files.
> 00:34:18,554 ERROR org.apache.flink.tools.ci.licensecheck.JarFileChecker  
>   [] - File '/javax/annotation/security/package.html' in jar 
> '/tmp/flink-validation-deployment/org/apache/flink/flink-gs-fs-hadoop/1.15-SNAPSHOT/flink-gs-fs-hadoop-1.15-20220122.001944-1.jar'
>  contains match with forbidden regex 'gnu ?\R?[\s/#]*general 
> ?\R?[\s/#]*public ?\R?[\s/#]*license'.
> 00:34:18,555 ERROR org.apache.flink.tools.ci.licensecheck.JarFileChecker  
>   [] - File '/javax/annotation/package.html' in jar 
> '/tmp/flink-validation-deployment/org/apache/flink/flink-gs-fs-hadoop/1.15-SNAPSHOT/flink-gs-fs-hadoop-1.15-20220122.001944-1.jar'
>  contains match with forbidden regex 'gnu ?\R?[\s/#]*general 
> ?\R?[\s/#]*public ?\R?[\s/#]*license'.
> 00:35:46,612 WARN  org.apache.flink.tools.ci.licensecheck.LicenseChecker  
>   [] - Found a total of 3 severe license issues
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29932&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] gaoyunhaii commented on a change in pull request #18157: [FLINK-17808] Rename checkpoint meta file to "_metadata" until it has…

2022-01-22 Thread GitBox


gaoyunhaii commented on a change in pull request #18157:
URL: https://github.com/apache/flink/pull/18157#discussion_r790153063



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsCheckpointMetadataOutputStream.java
##
@@ -162,4 +161,18 @@ public FsCompletedCheckpointStorageLocation 
closeAndFinalizeCheckpoint() throws
 }
 }
 }
+
+static MetadataOutputStreamWrapper getOutputStreamWrapper(
+final FileSystem fileSystem, final Path metadataFilePath) throws 
IOException {
+try {
+RecoverableWriter recoverableWriter = 
fileSystem.createRecoverableWriter();
+if (fileSystem.exists(metadataFilePath)) {
+throw new IOException("Target file " + metadataFilePath + " already exists.");
+}
+return new 
RecoverableStreamWrapper(recoverableWriter.open(metadataFilePath));
+} catch (Throwable throwable) {
+LOG.warn("Errors on creating recoverable writer.", throwable);

Review comment:
   Perhaps don't output the stack trace here, to avoid causing confusion since 
the exception won't make the process fail.
   
   Might change to 
   
   `LOG.warn("Errors on creating recoverable writer due to {}, will use the 
ordinary writer.", throwable.getMessage());`




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on a change in pull request #18157: [FLINK-17808] Rename checkpoint meta file to "_metadata" until it has…

2022-01-22 Thread GitBox


gaoyunhaii commented on a change in pull request #18157:
URL: https://github.com/apache/flink/pull/18157#discussion_r790153063



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsCheckpointMetadataOutputStream.java
##
@@ -162,4 +161,18 @@ public FsCompletedCheckpointStorageLocation 
closeAndFinalizeCheckpoint() throws
 }
 }
 }
+
+static MetadataOutputStreamWrapper getOutputStreamWrapper(
+final FileSystem fileSystem, final Path metadataFilePath) throws 
IOException {
+try {
+RecoverableWriter recoverableWriter = 
fileSystem.createRecoverableWriter();
+if (fileSystem.exists(metadataFilePath)) {
+throw new IOException("Target file " + metadataFilePath + " already exists.");
+}
+return new 
RecoverableStreamWrapper(recoverableWriter.open(metadataFilePath));
+} catch (Throwable throwable) {
+LOG.warn("Errors on creating recoverable writer.", throwable);

Review comment:
   Perhaps don't output the stack trace here, to avoid causing confusion. 
   
   Might change to 
   
   `LOG.warn("Errors on creating recoverable writer due to {}, will use the 
ordinary writer.", throwable.getMessage());`

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/MetadataOutputStreamWrapper.java
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.state.filesystem;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.core.fs.FSDataOutputStream;
+
+import java.io.IOException;
+
+/** The wrapper manages metadata output stream close and commit. */
+@Internal
+public abstract class MetadataOutputStreamWrapper {
+private volatile boolean closed = false;
+
+/** Return {@link FSDataOutputStream} to write and other operations. */
+abstract FSDataOutputStream getOutput();
+
+/**
+ * Closes the output stream and commits the written data in a single step. Throws
+ * {@link IOException} on failure. Should be invoked indirectly via {@code closeForCommit()}
+ * rather than called directly.
+ */
+protected abstract void closeForCommitAction() throws IOException;
+
+/**
+ * Closes the output stream without committing. Throws {@link IOException} on failure.
+ * Should be invoked indirectly via {@code close()} rather than called directly.
+ */
+protected abstract void closeAction() throws IOException;
+
+/**
+ * Aborts temporary files, or does nothing, depending on the output stream
+ * implementation. Throws {@link IOException} on failure.
+ */
+abstract void abort() throws IOException;
+
+/**
+ * Checks that the output stream is still valid. If it has already been closed, throws
+ * {@link IOException}; otherwise invokes {@code closeForCommitAction()} and marks the
+ * stream as closed.
+ */
+void closeForCommit() throws IOException {

Review comment:
   Might mark `closeForCommit` and `close` final if they are not expected 
to be modified

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/MetadataOutputStreamWrapper.java
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the 

[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overriden when cluster is deployed

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Summary: Native k8s- User defined system properties get overriden when 
cluster is deployed   (was: Native k8s- User defined system properties get 
overriden when cluster is created )

> Native k8s- User defined system properties get overriden when cluster is 
> deployed 
> --
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "$\{log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "$\{ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25761) Translate Avro format page into Chinese.

2022-01-22 Thread Zhiwu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480460#comment-17480460
 ] 

Zhiwu Wang commented on FLINK-25761:


[~RocMarshal] I'd like to help with this task, and will complete it within maybe 
1-2 days.

> Translate Avro format page into Chinese.
> 
>
> Key: FLINK-25761
> URL: https://issues.apache.org/jira/browse/FLINK-25761
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: RocMarshal
>Priority: Minor
>  Labels: chinese-translation
>
> file location: 
> flink/docs/content.zh/docs/connectors/datastream/formats/avro.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25738) Translate/fix translations for "FileSystem" connector page of "Connectors > DataStream Connectors"

2022-01-22 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480459#comment-17480459
 ] 

RocMarshal commented on FLINK-25738:


Hi [~Zhiwu Wang], thank you so much. If you don't mind, you could start with 
translation sub-task 1.

I'm willing to review it for you. :)

> Translate/fix translations for "FileSystem" connector page of "Connectors > 
> DataStream Connectors" 
> ---
>
> Key: FLINK-25738
> URL: https://issues.apache.org/jira/browse/FLINK-25738
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Martijn Visser
>Assignee: RocMarshal
>Priority: Major
>  Labels: chinese-translation
>
> After the merge of https://github.com/apache/flink/pull/18288 to resolve 
> https://issues.apache.org/jira/browse/FLINK-20188 multiple pages needs to be 
> translated or changed documentation needs to be reviewed, translated and 
> corrected where possible.
> It involves the following pages from the documentation:
> * docs/content.zh/docs/connectors/datastream/filesystem.md (This has a 
> partial Chinese translation but since it was a complete overhaul, I've copied 
> the English text in)
> * docs/content.zh/docs/connectors/datastream/formats/avro.md
> * docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
> * docs/content.zh/docs/connectors/datastream/formats/hadoop.md
> * docs/content.zh/docs/connectors/datastream/formats/mongodb.md
> * docs/content.zh/docs/connectors/datastream/formats/overview.md
> * docs/content.zh/docs/connectors/datastream/formats/parquet.md
> * docs/content.zh/docs/connectors/datastream/formats/text_files.md
> * docs/content.zh/docs/connectors/table/filesystem.md (This has a partial 
> Chinese translation but since it was a complete overhaul, I've copied the 
> English text in)
> * docs/content.zh/docs/deployment/filesystems/s3.md (Just needs a check, it 
> should only be link updates)
> * docs/content.zh/docs/dev/datastream/execution_mode.md (Just needs a check, 
> it should only be link updates)
> * 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25761) Translate Avro format page into Chinese.

2022-01-22 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-25761:
---
Parent: FLINK-25738
Issue Type: Sub-task  (was: Improvement)

> Translate Avro format page into Chinese.
> 
>
> Key: FLINK-25761
> URL: https://issues.apache.org/jira/browse/FLINK-25761
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: RocMarshal
>Priority: Minor
>  Labels: chinese-translation
>
> file location: 
> flink/docs/content.zh/docs/connectors/datastream/formats/avro.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overriden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Attachment: stacktrace.png

> Native k8s- User defined system properties get overriden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "$\{log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "$\{ARGS[@]}"
> Apart from logging, All user defined properties(FLINK_ENV_JAVA_OPTS) might 
> get overriden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b





[GitHub] [flink] flinkbot edited a comment on pull request #18447: FLINK-25762 - [Deployment/kubernetes] Move JVM system properties args to the end of exec command

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18447:
URL: https://github.com/apache/flink/pull/18447#issuecomment-1019286035


   
   ## CI report:
   
   * 9dc6ca17c031ce1a0a83b5d569013f6644a6e3af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29941)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Attachment: (was: log.png)

> Native k8s- User defined system properties get overridden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of the exec command in the flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "${ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b





[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Attachment: log.png

> Native k8s- User defined system properties get overridden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of the exec command in the flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "${ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b





[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Attachment: (was: stacktrace.png)

> Native k8s- User defined system properties get overridden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of the exec command in the flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "${ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b





[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Description: 
Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
ignored, e.g. -Dlog4j.configurationFile; it falls back to 
/opt/flink/conf/log4j-console.properties.

That happens due to the order of the exec command in the flink-console.sh file.

[https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"

Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might get 
overridden by other args.


Discussion

https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b

  was:
Running Flink 1.14.2 in native k8s mode, it seems like it ignores system 
property -Dlog4j.configurationFile and falls back to 
/opt/flink/conf/log4j-console.properties.

That happens due to the order of exec command provided in flink-console.sh file.

https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"

Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might get 
overridden by other args.

 


> Native k8s- User defined system properties get overridden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, user-defined system properties get 
> ignored, e.g. -Dlog4j.configurationFile; it falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of the exec command in the flink-console.sh 
> file.
> [https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114]
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "${ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
> Discussion
> https://lists.apache.org/thread/b24g1nd00q5pln5h9w2mh1s3ocxwb61b





[GitHub] [flink] flinkbot commented on pull request #18447: FLINK-25762 - [Deployment/kubernetes] Move JVM system properties args to the end of exec command

2022-01-22 Thread GitBox


flinkbot commented on pull request #18447:
URL: https://github.com/apache/flink/pull/18447#issuecomment-1019286035


   
   ## CI report:
   
   * 9dc6ca17c031ce1a0a83b5d569013f6644a6e3af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #18447: FLINK-25762 - [Deployment/kubernetes] Move JVM system properties args to the end of exec command

2022-01-22 Thread GitBox


flinkbot commented on pull request #18447:
URL: https://github.com/apache/flink/pull/18447#issuecomment-1019285866


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9dc6ca17c031ce1a0a83b5d569013f6644a6e3af (Sat Jan 22 
14:55:13 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25762).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Commented] (FLINK-25738) Translate/fix translations for "FileSystem" connector page of "Connectors > DataStream Connectors"

2022-01-22 Thread Zhiwu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480453#comment-17480453
 ] 

Zhiwu Wang commented on FLINK-25738:


Hi guys, I may offer some help with this translation work. Please assign some 
to me.

> Translate/fix translations for "FileSystem" connector page of "Connectors > 
> DataStream Connectors" 
> ---
>
> Key: FLINK-25738
> URL: https://issues.apache.org/jira/browse/FLINK-25738
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Martijn Visser
>Assignee: RocMarshal
>Priority: Major
>  Labels: chinese-translation
>
> After the merge of https://github.com/apache/flink/pull/18288 to resolve 
> https://issues.apache.org/jira/browse/FLINK-20188, multiple pages need to be 
> translated, or the changed documentation needs to be reviewed, translated, 
> and corrected where possible.
> It involves the following pages from the documentation:
> * docs/content.zh/docs/connectors/datastream/filesystem.md (This has a 
> partial Chinese translation but since it was a complete overhaul, I've copied 
> the English text in)
> * docs/content.zh/docs/connectors/datastream/formats/avro.md
> * docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
> * docs/content.zh/docs/connectors/datastream/formats/hadoop.md
> * docs/content.zh/docs/connectors/datastream/formats/mongodb.md
> * docs/content.zh/docs/connectors/datastream/formats/overview.md
> * docs/content.zh/docs/connectors/datastream/formats/parquet.md
> * docs/content.zh/docs/connectors/datastream/formats/text_files.md
> * docs/content.zh/docs/connectors/table/filesystem.md (This has a partial 
> Chinese translation but since it was a complete overhaul, I've copied the 
> English text in)
> * docs/content.zh/docs/deployment/filesystems/s3.md (Just needs a check, it 
> should only be link updates)
> * docs/content.zh/docs/dev/datastream/execution_mode.md (Just needs a check, 
> it should only be link updates)
> * 





[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamir Sagi updated FLINK-25762:
---
Component/s: Deployment / Kubernetes

> Native k8s- User defined system properties get overridden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, it seems like it ignores system 
> property -Dlog4j.configurationFile and falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "${ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
>  





[jira] [Updated] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25762:
---
Labels: Flink flink-k8s native-kuber pull-request-available  (was: Flink 
flink-k8s native-kuber)

> Native k8s- User defined system properties get overridden when cluster is 
> created 
> -
>
> Key: FLINK-25762
> URL: https://issues.apache.org/jira/browse/FLINK-25762
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Tamir Sagi
>Priority: Major
>  Labels: Flink, flink-k8s, native-kuber, pull-request-available
> Fix For: 1.14.4
>
> Attachments: log.png, stacktrace.png
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Running Flink 1.14.2 in native k8s mode, it seems like it ignores system 
> property -Dlog4j.configurationFile and falls back to 
> /opt/flink/conf/log4j-console.properties.
> That happens due to the order of exec command provided in flink-console.sh 
> file.
> https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
> -classpath "`manglePathList 
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
> "${ARGS[@]}"
> Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might 
> get overridden by other args.
>  





[GitHub] [flink] tamirsagi opened a new pull request #18447: FLINK-25762 - [Kubernetes] Move JVM system properties args to the end of exec command

2022-01-22 Thread GitBox


tamirsagi opened a new pull request #18447:
URL: https://github.com/apache/flink/pull/18447


   The exec command in flink-console.sh has been changed from
   
   exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"
   
   To
   
   exec "$JAVA_RUN" $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList 
"$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
"${ARGS[@]}" "${FLINK_ENV_JAVA_OPTS}"
   
   In that way we prioritize the user's desired properties.
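
   To illustrate the precedence issue the reorder addresses, here is a minimal
sketch: for repeated JVM -D flags, the last occurrence wins, so whichever
variable is expanded later in the exec line takes effect. The variable names
mirror flink-console.sh, but the values below are illustrative assumptions, not
Flink defaults:

```shell
# Simulated pieces of the exec line in flink-console.sh (illustrative values).
FLINK_ENV_JAVA_OPTS="-Dlog4j.configurationFile=/custom/log4j.properties"
log_setting="-Dlog4j.configurationFile=/opt/flink/conf/log4j-console.properties"

# Emulate "last -D flag wins": scan the flags left to right and keep the
# final value seen for log4j.configurationFile.
effective_value() {
  local winner=""
  for arg in "$@"; do
    case "$arg" in
      -Dlog4j.configurationFile=*) winner="${arg#-Dlog4j.configurationFile=}" ;;
    esac
  done
  echo "$winner"
}

# Original order: user opts first, default log setting second -> default wins.
before=$(effective_value $FLINK_ENV_JAVA_OPTS $log_setting)
# Reordered: user opts last -> the user's value wins.
after=$(effective_value $log_setting $FLINK_ENV_JAVA_OPTS)

echo "before reorder: $before"
echo "after reorder:  $after"
```

   Note this only demonstrates flag precedence; whether the relocated
${FLINK_ENV_JAVA_OPTS} still lands before ${CLASS_TO_RUN} (and is therefore
seen by the JVM rather than passed to the application) is worth checking in
review.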






[jira] [Created] (FLINK-25762) Native k8s- User defined system properties get overridden when cluster is created

2022-01-22 Thread Tamir Sagi (Jira)
Tamir Sagi created FLINK-25762:
--

 Summary: Native k8s- User defined system properties get overridden 
when cluster is created 
 Key: FLINK-25762
 URL: https://issues.apache.org/jira/browse/FLINK-25762
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.14.0, 1.13.0
Reporter: Tamir Sagi
 Fix For: 1.14.4
 Attachments: log.png, stacktrace.png

Running Flink 1.14.2 in native k8s mode, it seems like it ignores system 
property -Dlog4j.configurationFile and falls back to 
/opt/flink/conf/log4j-console.properties.

That happens due to the order of exec command provided in flink-console.sh file.

https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"

Apart from logging, all user-defined properties (FLINK_ENV_JAVA_OPTS) might get 
overridden by other args.

 





[jira] [Created] (FLINK-25761) Translate Avro format page into Chinese.

2022-01-22 Thread RocMarshal (Jira)
RocMarshal created FLINK-25761:
--

 Summary: Translate Avro format page into Chinese.
 Key: FLINK-25761
 URL: https://issues.apache.org/jira/browse/FLINK-25761
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: RocMarshal


file location: flink/docs/content.zh/docs/connectors/datastream/formats/avro.md





[GitHub] [flink] ijuma commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2022-01-22 Thread GitBox


ijuma commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r790148031



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/FlinkKafkaInternalProducer.java
##
@@ -154,20 +161,6 @@ public void close() {
 "Close without timeout is now allowed because it can leave 
lingering Kafka threads.");
 }
 
-@Override
-public void close(long timeout, TimeUnit unit) {
-synchronized (producerClosingLock) {
-kafkaProducer.close(timeout, unit);
-if (LOG.isDebugEnabled()) {
-LOG.debug(
-"Closed internal KafkaProducer {}. Stacktrace: {}",
-System.identityHashCode(this),
-
Joiner.on("\n").join(Thread.currentThread().getStackTrace()));
-}
-closed = true;
-}
-}
-

Review comment:
   Kafka didn't remove this method in 2.8.1; it removed it in 3.0.0.
   
https://github.com/apache/kafka/blob/2.8/clients/src/main/java/org/apache/kafka/clients/producer/Producer.java#L104
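
   For callers migrating off the removed overload, the surviving API is
close(Duration), which takes a java.time.Duration. A minimal sketch of the
mechanical conversion follows; the toDuration helper is a made-up name for
illustration, not Kafka or Flink API:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class CloseTimeoutMigration {
    // Hypothetical helper: turn the old (timeout, unit) pair from the removed
    // Producer.close(long, TimeUnit) overload into the Duration expected by
    // the close(Duration) overload that remains in Kafka 3.0.
    static Duration toDuration(long timeout, TimeUnit unit) {
        return Duration.ofMillis(unit.toMillis(timeout));
    }

    public static void main(String[] args) {
        // e.g. close(5, TimeUnit.SECONDS) becomes close(Duration.ofSeconds(5))
        System.out.println(toDuration(5, TimeUnit.SECONDS)); // PT5S
    }
}
```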








[GitHub] [flink] ijuma commented on a change in pull request #17696: [FLINK-24765][kafka] Bump Kafka version to 2.8

2022-01-22 Thread GitBox


ijuma commented on a change in pull request #17696:
URL: https://github.com/apache/flink/pull/17696#discussion_r790148031



##
File path: 
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/FlinkKafkaInternalProducer.java
##
@@ -154,20 +161,6 @@ public void close() {
 "Close without timeout is now allowed because it can leave 
lingering Kafka threads.");
 }
 
-@Override
-public void close(long timeout, TimeUnit unit) {
-synchronized (producerClosingLock) {
-kafkaProducer.close(timeout, unit);
-if (LOG.isDebugEnabled()) {
-LOG.debug(
-"Closed internal KafkaProducer {}. Stacktrace: {}",
-System.identityHashCode(this),
-
Joiner.on("\n").join(Thread.currentThread().getStackTrace()));
-}
-closed = true;
-}
-}
-

Review comment:
   Kafka didn't remove this method in 2.8.1; it removed it in 3.0.0.
   
https://github.com/apache/kafka/blob/2.8/clients/src/main/java/org/apache/kafka/clients/producer/Producer.java#L104








[jira] [Created] (FLINK-25760) Support for extended SQL syntax

2022-01-22 Thread melin (Jira)
melin created FLINK-25760:
-

 Summary: Support for extended SQL syntax
 Key: FLINK-25760
 URL: https://issues.apache.org/jira/browse/FLINK-25760
 Project: Flink
  Issue Type: Improvement
Reporter: melin


Supports extended SQL syntax, similar to the Spark Extensions feature, so that 
custom SQL syntax can be implemented.





[GitHub] [flink] flinkbot edited a comment on pull request #17498: [FLINK-14954][rest] Add OpenAPI spec generator

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #17498:
URL: https://github.com/apache/flink/pull/17498#issuecomment-944209637


   
   ## CI report:
   
   * 66d958ddf60055b3e6c120279e55e57f23b5d2c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29938)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 87737e9930a873bd0ca5613afc75e05506996541 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29937)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-25738) Translate/fix translations for "FileSystem" connector page of "Connectors > DataStream Connectors"

2022-01-22 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480405#comment-17480405
 ] 

Martijn Visser commented on FLINK-25738:


[~RocMarshal] Of course, many thanks already!

> Translate/fix translations for "FileSystem" connector page of "Connectors > 
> DataStream Connectors" 
> ---
>
> Key: FLINK-25738
> URL: https://issues.apache.org/jira/browse/FLINK-25738
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Martijn Visser
>Assignee: RocMarshal
>Priority: Major
>  Labels: chinese-translation
>
> After the merge of https://github.com/apache/flink/pull/18288 to resolve 
> https://issues.apache.org/jira/browse/FLINK-20188, multiple pages need to be 
> translated, or the changed documentation needs to be reviewed, translated, 
> and corrected where possible.
> It involves the following pages from the documentation:
> * docs/content.zh/docs/connectors/datastream/filesystem.md (This has a 
> partial Chinese translation but since it was a complete overhaul, I've copied 
> the English text in)
> * docs/content.zh/docs/connectors/datastream/formats/avro.md
> * docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
> * docs/content.zh/docs/connectors/datastream/formats/hadoop.md
> * docs/content.zh/docs/connectors/datastream/formats/mongodb.md
> * docs/content.zh/docs/connectors/datastream/formats/overview.md
> * docs/content.zh/docs/connectors/datastream/formats/parquet.md
> * docs/content.zh/docs/connectors/datastream/formats/text_files.md
> * docs/content.zh/docs/connectors/table/filesystem.md (This has a partial 
> Chinese translation but since it was a complete overhaul, I've copied the 
> English text in)
> * docs/content.zh/docs/deployment/filesystems/s3.md (Just needs a check, it 
> should only be link updates)
> * docs/content.zh/docs/dev/datastream/execution_mode.md (Just needs a check, 
> it should only be link updates)
> * 





[jira] [Assigned] (FLINK-25738) Translate/fix translations for "FileSystem" connector page of "Connectors > DataStream Connectors"

2022-01-22 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-25738:
--

Assignee: RocMarshal

> Translate/fix translations for "FileSystem" connector page of "Connectors > 
> DataStream Connectors" 
> ---
>
> Key: FLINK-25738
> URL: https://issues.apache.org/jira/browse/FLINK-25738
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Martijn Visser
>Assignee: RocMarshal
>Priority: Major
>  Labels: chinese-translation
>
> After the merge of https://github.com/apache/flink/pull/18288 to resolve 
> https://issues.apache.org/jira/browse/FLINK-20188, multiple pages need to be 
> translated, or the changed documentation needs to be reviewed, translated, 
> and corrected where possible.
> It involves the following pages from the documentation:
> * docs/content.zh/docs/connectors/datastream/filesystem.md (This has a 
> partial Chinese translation but since it was a complete overhaul, I've copied 
> the English text in)
> * docs/content.zh/docs/connectors/datastream/formats/avro.md
> * docs/content.zh/docs/connectors/datastream/formats/azure_table_storage.md
> * docs/content.zh/docs/connectors/datastream/formats/hadoop.md
> * docs/content.zh/docs/connectors/datastream/formats/mongodb.md
> * docs/content.zh/docs/connectors/datastream/formats/overview.md
> * docs/content.zh/docs/connectors/datastream/formats/parquet.md
> * docs/content.zh/docs/connectors/datastream/formats/text_files.md
> * docs/content.zh/docs/connectors/table/filesystem.md (This has a partial 
> Chinese translation but since it was a complete overhaul, I've copied the 
> English text in)
> * docs/content.zh/docs/deployment/filesystems/s3.md (Just needs a check, it 
> should only be link updates)
> * docs/content.zh/docs/dev/datastream/execution_mode.md (Just needs a check, 
> it should only be link updates)
> * 





[jira] [Updated] (FLINK-25756) Dedicated Opensearch connectors

2022-01-22 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25756:
---
Component/s: Connectors / Common

> Dedicated Opensearch connectors
> ---
>
> Key: FLINK-25756
> URL: https://issues.apache.org/jira/browse/FLINK-25756
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Andriy Redko
>Priority: Major
>
> Since Opensearch was forked from Elasticsearch, a few things have changed. 
> The projects evolve in different directions: the Elasticsearch clients up to 
> 7.13.x were able to connect to Opensearch clusters, but since 7.14 they 
> cannot anymore [1] (Elastic continues to harden its clients to connect 
> to Elasticsearch clusters only).
> For example, running current Flink master against Opensearch clusters using 
> Elasticsearch 7 connectors would fail with:
>  
> {noformat}
>  Caused by: ElasticsearchException[Elasticsearch version 6 or more is 
> required]
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2043)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListenerDirectly(ListenableFuture.java:113)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.done(ListenableFuture.java:100)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.BaseFuture.set(BaseFuture.java:133)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.common.util.concurrent.ListenableFuture.onResponse(ListenableFuture.java:139)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$5.onSuccess(RestHighLevelClient.java:2129)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591){noformat}
>  
> With the compatibility mode [2] turned on, it still fails further down the 
> line:
> {noformat}
> Caused by: ElasticsearchException[Invalid or missing tagline [The OpenSearch 
> Project: https://opensearch.org/]]
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient.java:2056)
>  at 
> org.apache.flink.elasticsearch7.shaded.org.elasticsearch.client.RestHighLevelClient$4.onResponse(RestHighLevelClient

[jira] [Resolved] (FLINK-25678) TaskExecutorStateChangelogStoragesManager.shutdown is not thread-safe

2022-01-22 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-25678.
---
  Assignee: Roman Khachatryan
Resolution: Fixed

Merged as 11a406e67057ca9260c16c08054c209e3452a291 into 1.14,
as b72a5e0ef237ad02ed074bace1f7cb3aa09631e4 into master.

> TaskExecutorStateChangelogStoragesManager.shutdown is not thread-safe
> -
>
> Key: FLINK-25678
> URL: https://issues.apache.org/jira/browse/FLINK-25678
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.15.0, 1.14.2
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.4
>
>
> [https://github.com/apache/flink/pull/18169#discussion_r785741977]
> The method is called from the shutdown hook and therefore should be 
> thread-safe.
> cc: [~Zakelly] , [~dmvk] 
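Since the issue is that a shutdown method reachable from a JVM shutdown hook must tolerate concurrent callers, the fix can be sketched as a lock plus an idempotence guard. This is an illustrative sketch only, not Flink's actual code; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: making shutdown() safe to call concurrently from a
// JVM shutdown hook and from the regular close path, and idempotent so a
// second caller is a no-op.
class ChangelogStoragesManager {
    private final Object lock = new Object();
    private final Map<String, AutoCloseable> storagesByJob = new HashMap<>();
    private boolean closed = false;

    void register(String jobId, AutoCloseable storage) {
        synchronized (lock) {
            if (closed) {
                throw new IllegalStateException("manager already shut down");
            }
            storagesByJob.put(jobId, storage);
        }
    }

    void shutdown() {
        synchronized (lock) {
            if (closed) {
                return; // second caller (e.g. the shutdown hook) is a no-op
            }
            closed = true;
            for (AutoCloseable storage : storagesByJob.values()) {
                try {
                    storage.close();
                } catch (Exception e) {
                    // best effort during shutdown: swallow and continue
                }
            }
            storagesByJob.clear();
        }
    }

    boolean isClosed() {
        synchronized (lock) {
            return closed;
        }
    }
}
```

Holding one lock for both registration and shutdown is the simplest way to guarantee that no storage is registered after the manager has released its resources.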





[GitHub] [flink] rkhachatryan merged pull request #18442: [BP-1.14][FLINK-25678][runtime] Make TaskExecutorStateChangelogStoragesManager.shutdown thread-safe

2022-01-22 Thread GitBox


rkhachatryan merged pull request #18442:
URL: https://github.com/apache/flink/pull/18442


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25674) CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink fails on AZP

2022-01-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-25674:
-
Issue Type: Technical Debt  (was: Bug)

> CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink 
> fails on AZP
> -
>
> Key: FLINK-25674
> URL: https://issues.apache.org/jira/browse/FLINK-25674
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>
> The test 
> {{CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink}}
>  fails on AZP with
> {code}
> 2022-01-17T02:20:49.5493218Z Jan 17 02:20:49 [ERROR] 
> testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink  Time elapsed: 15.145 s  
> <<< ERROR!
> 2022-01-17T02:20:49.5494292Z Jan 17 02:20:49 
> com.datastax.driver.core.exceptions.AlreadyExistsException: Table 
> flink.testpojonoannotatedkeyspace already exists
> 2022-01-17T02:20:49.5495503Z Jan 17 02:20:49  at 
> com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
> 2022-01-17T02:20:49.5496540Z Jan 17 02:20:49  at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 2022-01-17T02:20:49.5497594Z Jan 17 02:20:49  at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> 2022-01-17T02:20:49.5498647Z Jan 17 02:20:49  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> 2022-01-17T02:20:49.5499594Z Jan 17 02:20:49  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> 2022-01-17T02:20:49.5501059Z Jan 17 02:20:49  at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink(CassandraConnectorITCase.java:449)
> 2022-01-17T02:20:49.5502208Z Jan 17 02:20:49  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-17T02:20:49.5503180Z Jan 17 02:20:49  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-17T02:20:49.5504178Z Jan 17 02:20:49  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-17T02:20:49.5604696Z Jan 17 02:20:49  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-17T02:20:49.5605959Z Jan 17 02:20:49  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-01-17T02:20:49.5606983Z Jan 17 02:20:49  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-17T02:20:49.5608008Z Jan 17 02:20:49  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-01-17T02:20:49.5608991Z Jan 17 02:20:49  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-17T02:20:49.5609957Z Jan 17 02:20:49  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-17T02:20:49.5610970Z Jan 17 02:20:49  at 
> org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
> 2022-01-17T02:20:49.5612021Z Jan 17 02:20:49  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-17T02:20:49.5613033Z Jan 17 02:20:49  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-01-17T02:20:49.5613888Z Jan 17 02:20:49  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-01-17T02:20:49.5614902Z Jan 17 02:20:49  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-01-17T02:20:49.5615847Z Jan 17 02:20:49  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-01-17T02:20:49.5616769Z Jan 17 02:20:49  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-01-17T02:20:49.5617759Z Jan 17 02:20:49  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-01-17T02:20:49.5618667Z Jan 17 02:20:49  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-01-17T02:20:49.5619532Z Jan 17 02:20:49  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-01-17T02:20:49.5620398Z Jan 17 02:20:49  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-01-17T02:20:49.5621274Z Jan 17 02:20:49  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-01-17T02:20:49.5622475Z Jan 17 02:20:49  at

[jira] [Comment Edited] (FLINK-25674) CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink fails on AZP

2022-01-22 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479964#comment-17479964
 ] 

Chesnay Schepler edited comment on FLINK-25674 at 1/22/22, 10:57 AM:
-

master: 153bb9b5fa04ae7de8aef22c346a1c342a376c59
1.14: a03343682f2ca99b3b97572246c9b4a634752fa4
1.13: 26fb7a269f5fe3fc5dd0d52d88afb2915e452d1b


was (Author: zentol):
master: 153bb9b5fa04ae7de8aef22c346a1c342a376c59
1.14: a03343682f2ca99b3b97572246c9b4a634752fa4

> CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink 
> fails on AZP
> -
>
> Key: FLINK-25674
> URL: https://issues.apache.org/jira/browse/FLINK-25674
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>

[jira] [Closed] (FLINK-25674) CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink fails on AZP

2022-01-22 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-25674.

Resolution: Fixed

> CassandraConnectorITCase.testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink 
> fails on AZP
> -
>
> Key: FLINK-25674
> URL: https://issues.apache.org/jira/browse/FLINK-25674
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.14.0
>Reporter: Till Rohrmann
>Assignee: Etienne Chauchot
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.13.6, 1.14.4
>
>

[GitHub] [flink] flinkbot edited a comment on pull request #17498: [FLINK-14954][rest] Add OpenAPI spec generator

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #17498:
URL: https://github.com/apache/flink/pull/17498#issuecomment-944209637


   
   ## CI report:
   
   * 93c9a1ca1d4a9f8299350fd01789297c4a3a3c37 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25169)
 
   * 66d958ddf60055b3e6c120279e55e57f23b5d2c2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29938)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17498: [FLINK-14954][rest] Add OpenAPI spec generator

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #17498:
URL: https://github.com/apache/flink/pull/17498#issuecomment-944209637


   
   ## CI report:
   
   * 93c9a1ca1d4a9f8299350fd01789297c4a3a3c37 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=25169)
 
   * 66d958ddf60055b3e6c120279e55e57f23b5d2c2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Created] (FLINK-25759) Upgrade to flink-shaded 15.0

2022-01-22 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-25759:


 Summary: Upgrade to flink-shaded 15.0
 Key: FLINK-25759
 URL: https://issues.apache.org/jira/browse/FLINK-25759
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.15.0








[GitHub] [flink] flinkbot edited a comment on pull request #18442: [BP-1.14][FLINK-25678][runtime] Make TaskExecutorStateChangelogStoragesManager.shutdown thread-safe

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18442:
URL: https://github.com/apache/flink/pull/18442#issuecomment-1018490440


   
   ## CI report:
   
   * 0020c0e92c9676b0c6cc5e0a124cd4ceedc4d329 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29934)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18189: [FLINK-25430] Replace RunningJobRegistry by JobResultStore

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18189:
URL: https://github.com/apache/flink/pull/18189#issuecomment-1000280224


   
   ## CI report:
   
   * 374d27e52572f64dd991c63b20090945270a6431 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29935)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 058cac39b343263eb7ee8c719946d518bd629bfe Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29904)
 
   * 87737e9930a873bd0ca5613afc75e05506996541 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29937)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #18137: [FLINK-25287][connector-testing-framework] Refactor interfaces of connector testing framework to support more scenarios

2022-01-22 Thread GitBox


flinkbot edited a comment on pull request #18137:
URL: https://github.com/apache/flink/pull/18137#issuecomment-996615034


   
   ## CI report:
   
   * 058cac39b343263eb7ee8c719946d518bd629bfe Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=29904)
 
   * 87737e9930a873bd0ca5613afc75e05506996541 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] tillrohrmann closed pull request #18169: [FLINK-25277] add shutdown hook to stop TaskExecutor on SIGTERM

2022-01-22 Thread GitBox


tillrohrmann closed pull request #18169:
URL: https://github.com/apache/flink/pull/18169


   






[jira] [Closed] (FLINK-25277) Introduce explicit shutdown signalling between TaskManager and JobManager

2022-01-22 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-25277.
-
Resolution: Fixed

Fixed via

b67865bd06237c86c1f2a5822770117c3df68db6
d09bf4538f6aa5575798f0d059b044ca0ad0df90

> Introduce explicit shutdown signalling between TaskManager and JobManager 
> --
>
> Key: FLINK-25277
> URL: https://issues.apache.org/jira/browse/FLINK-25277
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Niklas Semmler
>Assignee: Niklas Semmler
>Priority: Major
>  Labels: pull-request-available, reactive
> Fix For: 1.15.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> We need to introduce shutdown signalling between TaskManager and JobManager 
> for fast & graceful shutdown in reactive scheduler mode.
> In Flink 1.14 and earlier versions, the JobManager tracks the availability of 
> a TaskManager using a heartbeat. This heartbeat is by default configured with 
> an interval of 10 seconds and a timeout of 50 seconds [1]. Hence, the 
> shutdown of a TaskManager is recognized only after about 50-60 seconds. This 
> works fine for the static scheduling mode, where a TaskManager only 
> disappears as part of a cluster shutdown or a job failure. However, in the 
> reactive scheduler mode (FLINK-10407), TaskManagers are regularly added and 
> removed from a running job. Here, the heartbeat mechanism incurs additional 
> delays.
> To remove these delays, we add an explicit shutdown signal from the 
> TaskManager to the JobManager.
>  
> [1]https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#heartbeat-timeout
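The delay described above can be made concrete with a back-of-the-envelope calculation using the default values from the linked configuration page (heartbeat interval 10 s, heartbeat timeout 50 s). The class and method names below are illustrative, not Flink's code.

```java
// Sketch of the worst-case detection delay with heartbeat-based liveness:
// the timeout clock runs from the last received heartbeat, so if the
// TaskManager dies just before its next heartbeat was due, detection takes
// up to one full interval plus the full timeout.
public class HeartbeatDelay {
    static long worstCaseDetectionMillis(long intervalMillis, long timeoutMillis) {
        return intervalMillis + timeoutMillis;
    }
}
```

With the defaults this gives an upper bound of 60 seconds, consistent with the 50-60 seconds quoted in the issue, and explains why an explicit shutdown signal removes the delay entirely.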





[jira] [Commented] (FLINK-25758) GCS Filesystem implementation fails on Java 11 tests due to licensing issues

2022-01-22 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480374#comment-17480374
 ] 

Martijn Visser commented on FLINK-25758:


[~galenwarren] Can you have a look at this? The GCS filesystem implementation 
fails the Java 11 CI tests (these don't run on GitHub PRs, but in our daily 
cron jobs) due to licensing issues.

You can build it locally for Java 11 by passing {{-Djdk11 -Pjava11-target}} as 
Maven parameters (with a Java 11 JDK active, of course).

> GCS Filesystem implementation fails on Java 11 tests due to licensing issues
> 
>
> Key: FLINK-25758
> URL: https://issues.apache.org/jira/browse/FLINK-25758
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Galen Warren
>Priority: Blocker
>
> {code}
> 00:33:45,410 DEBUG org.apache.flink.tools.ci.licensecheck.NoticeFileChecker   
>   [] - Dependency io.netty:netty-common:4.1.51.Final is mentioned in NOTICE 
> file /__w/2/s/flink-python/src/main/resources/META-INF/NOTICE, but was not 
> mentioned by the build output as a bundled dependency
> 00:33:45,411 ERROR org.apache.flink.tools.ci.licensecheck.NoticeFileChecker   
>   [] - Could not find dependency javax.annotation:javax.annotation-api:1.3.2 
> in NOTICE file 
> /__w/2/s/flink-filesystems/flink-gs-fs-hadoop/src/main/resources/META-INF/NOTICE
> 00:33:45,536 INFO  org.apache.flink.tools.ci.licensecheck.JarFileChecker  
>   [] - Checking directory /tmp/flink-validation-deployment with a total of 
> 197 jar files.
> 00:34:18,554 ERROR org.apache.flink.tools.ci.licensecheck.JarFileChecker  
>   [] - File '/javax/annotation/security/package.html' in jar 
> '/tmp/flink-validation-deployment/org/apache/flink/flink-gs-fs-hadoop/1.15-SNAPSHOT/flink-gs-fs-hadoop-1.15-20220122.001944-1.jar'
>  contains match with forbidden regex 'gnu ?\R?[\s/#]*general 
> ?\R?[\s/#]*public ?\R?[\s/#]*license'.
> 00:34:18,555 ERROR org.apache.flink.tools.ci.licensecheck.JarFileChecker  
>   [] - File '/javax/annotation/package.html' in jar 
> '/tmp/flink-validation-deployment/org/apache/flink/flink-gs-fs-hadoop/1.15-SNAPSHOT/flink-gs-fs-hadoop-1.15-20220122.001944-1.jar'
>  contains match with forbidden regex 'gnu ?\R?[\s/#]*general 
> ?\R?[\s/#]*public ?\R?[\s/#]*license'.
> 00:35:46,612 WARN  org.apache.flink.tools.ci.licensecheck.LicenseChecker  
>   [] - Found a total of 3 severe license issues
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29932&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1
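
The "forbidden regex" reported in the log above is a case-insensitive match for
the phrase "GNU General Public License" that tolerates line breaks ({{\R}}),
whitespace, slashes, and comment markers between the words. A minimal standalone
sketch of that kind of check (hypothetical class and method names; not the
actual {{JarFileChecker}} implementation):

```java
import java.util.regex.Pattern;

public class GplTextScanner {
    // Same pattern as in the log: optional space, optional line break (\R),
    // and any run of whitespace / '/' / '#' between the words of the phrase,
    // so the match survives HTML or Javadoc comment formatting.
    static final Pattern FORBIDDEN = Pattern.compile(
            "gnu ?\\R?[\\s/#]*general ?\\R?[\\s/#]*public ?\\R?[\\s/#]*license",
            Pattern.CASE_INSENSITIVE);

    static boolean containsGplReference(String fileContent) {
        return FORBIDDEN.matcher(fileContent).find();
    }

    public static void main(String[] args) {
        // Matches even when the phrase is split across lines.
        System.out.println(containsGplReference("GNU\nGeneral Public License"));
        System.out.println(containsGplReference("Apache License 2.0"));
    }
}
```

This is why the {{javax.annotation}} {{package.html}} files trip the check:
they mention the GPL in their license headers, which the scanner flags even
though the classes themselves are under CDDL/GPL-with-classpath-exception.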



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25758) GCS Filesystem implementation fails on Java 11 tests due to licensing issues

2022-01-22 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-25758:
--

 Summary: GCS Filesystem implementation fails on Java 11 tests due 
to licensing issues
 Key: FLINK-25758
 URL: https://issues.apache.org/jira/browse/FLINK-25758
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Affects Versions: 1.15.0
Reporter: Martijn Visser


{code}
00:33:45,410 DEBUG org.apache.flink.tools.ci.licensecheck.NoticeFileChecker 
[] - Dependency io.netty:netty-common:4.1.51.Final is mentioned in NOTICE file 
/__w/2/s/flink-python/src/main/resources/META-INF/NOTICE, but was not mentioned 
by the build output as a bundled dependency
00:33:45,411 ERROR org.apache.flink.tools.ci.licensecheck.NoticeFileChecker 
[] - Could not find dependency javax.annotation:javax.annotation-api:1.3.2 in 
NOTICE file 
/__w/2/s/flink-filesystems/flink-gs-fs-hadoop/src/main/resources/META-INF/NOTICE
00:33:45,536 INFO  org.apache.flink.tools.ci.licensecheck.JarFileChecker
[] - Checking directory /tmp/flink-validation-deployment with a total of 197 
jar files.
00:34:18,554 ERROR org.apache.flink.tools.ci.licensecheck.JarFileChecker
[] - File '/javax/annotation/security/package.html' in jar 
'/tmp/flink-validation-deployment/org/apache/flink/flink-gs-fs-hadoop/1.15-SNAPSHOT/flink-gs-fs-hadoop-1.15-20220122.001944-1.jar'
 contains match with forbidden regex 'gnu ?\R?[\s/#]*general ?\R?[\s/#]*public 
?\R?[\s/#]*license'.
00:34:18,555 ERROR org.apache.flink.tools.ci.licensecheck.JarFileChecker
[] - File '/javax/annotation/package.html' in jar 
'/tmp/flink-validation-deployment/org/apache/flink/flink-gs-fs-hadoop/1.15-SNAPSHOT/flink-gs-fs-hadoop-1.15-20220122.001944-1.jar'
 contains match with forbidden regex 'gnu ?\R?[\s/#]*general ?\R?[\s/#]*public 
?\R?[\s/#]*license'.
00:35:46,612 WARN  org.apache.flink.tools.ci.licensecheck.LicenseChecker
[] - Found a total of 3 severe license issues
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29932&view=logs&j=946871de-358d-5815-3994-8175615bc253&t=e0240c62-4570-5d1c-51af-dd63d2093da1





[jira] [Assigned] (FLINK-25758) GCS Filesystem implementation fails on Java 11 tests due to licensing issues

2022-01-22 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-25758:
--

Assignee: Galen Warren

> GCS Filesystem implementation fails on Java 11 tests due to licensing issues
> 
>
> Key: FLINK-25758
> URL: https://issues.apache.org/jira/browse/FLINK-25758
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Galen Warren
>Priority: Blocker
>





[jira] [Commented] (FLINK-25757) Fix this security issue related to this exploit: https://www.exploit-db.com/exploits/48978

2022-01-22 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480372#comment-17480372
 ] 

Martijn Visser commented on FLINK-25757:


[~yogurtearl] There is no authentication in Flink, so I still don't see what 
the exploit is. Anyone who has access to a Flink cluster can submit any JAR 
file; there is an endpoint for it, as you can see at 
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jars-upload.
 This sounds like the result of a scanning tool that produces a lot of false 
positives.

> Fix this security issue related to this exploit: 
> https://www.exploit-db.com/exploits/48978
> --
>
> Key: FLINK-25757
> URL: https://issues.apache.org/jira/browse/FLINK-25757
> Project: Flink
>  Issue Type: Bug
>Reporter: Michael Bailey
>Priority: Critical
>  Labels: security
>
> Fix this security issue related to this exploit: 
> [https://www.exploit-db.com/exploits/48978]
>  





[GitHub] [flink] Aitozi edited a comment on pull request #12345: draft: [FLINK-17971] [Runtime/StateBackends] Add RocksDB SST ingestion for batch writes

2022-01-22 Thread GitBox


Aitozi edited a comment on pull request #12345:
URL: https://github.com/apache/flink/pull/12345#issuecomment-1019084179


   Do we have a plan to drive this feature forward? I have applied part of 
this patch to improve performance in our internal version; thanks for your 
effort @lgo. I think the code is already in good shape, except that it lacks 
some benchmark testing. I'd be glad to help complete this part. If you need 
any help, please ping me @lgo @Myasuka :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org