Re: [DISCUSS] Status of Statefun Project

2023-05-26 Thread Galen Warren
Ok, I get it. No interest.

If this project is being abandoned, I guess I'll work with my own fork. Is
there anything I should consider here? Can I share it with other people who
use this project?

On Tue, May 16, 2023 at 10:50 AM Galen Warren 
wrote:

> Hi Martijn, since you opened this discussion thread, I'm curious what your
> thoughts are in light of the responses? Thanks.
>
> On Wed, Apr 19, 2023 at 1:21 PM Galen Warren 
> wrote:
>
>> I use Apache Flink for stream processing, and StateFun as a hand-off
>>> point for the rest of the application.
>>> It serves well as a bridge between a Flink Streaming job and
>>> micro-services.
>>
>>
>> This is essentially how I use it as well, and I would also be sad to see
>> it sunsetted. It works well; I don't know that there is a lot of new
>> development required, but if there are no new Statefun releases, then
>> Statefun can only be used with older Flink versions.
>>
>> On Tue, Apr 18, 2023 at 10:04 PM Marco Villalobos <
>> mvillalo...@kineteque.com> wrote:
>>
>>> I am currently using Stateful Functions in my application.
>>>
>>> I use Apache Flink for stream processing, and StateFun as a hand-off
>>> point for the rest of the application.
>>> It serves well as a bridge between a Flink Streaming job and
>>> micro-services.
>>>
>>> I would be disappointed if StateFun were sunsetted. It's a good idea.
>>>
>>> If there is anything I can do to help, as a contributor perhaps, please
>>> let me know.
>>>
>>> > On Apr 3, 2023, at 2:02 AM, Martijn Visser 
>>> wrote:
>>> >
>>> > Hi everyone,
>>> >
>>> > I want to open a discussion on the status of the Statefun Project [1]
>>> in Apache Flink. As you might have noticed, there hasn't been much
>>> development over the past months in the Statefun repository [2]. There is
>>> currently a lack of active contributors and committers who are able to help
>>> with the maintenance of the project.
>>> >
>>> > In order to improve the situation, we need to solve the lack of
>>> committers and the lack of contributors.
>>> >
>>> > On the lack of committers:
>>> >
>>> > 1. Ideally, there are some of the current Flink committers who have
>>> the bandwidth and can help with reviewing PRs and merging them.
>>> > 2. If that's not an option, it could be a consideration that current
>>> committers only review and merge PRs that have been approved by those who
>>> are willing to contribute to Statefun, provided the CI passes
>>> >
>>> > On the lack of contributors:
>>> >
>>> > 3. Next to having this discussion on the Dev and User mailing lists, we
>>> can also create a blog post with a call for new contributors on the Flink
>>> project website, send out some tweets on the Flink / Statefun Twitter
>>> accounts, post messages on Slack, etc. In that message, we would explain
>>> how those who are interested in contributing can get started and where
>>> they could reach out for more information.
>>> >
>>> > There's also option 4, where a group of interested people would split
>>> Statefun from the Flink project and make it a separate top-level project
>>> under the Apache Flink umbrella (similar to what recently happened with
>>> Flink Table Store, which has become Apache Paimon).
>>> >
>>> > If we see no improvements in the coming period, we should consider
>>> sunsetting Statefun and communicate that clearly to the users.
>>> >
>>> > I'm looking forward to your thoughts.
>>> >
>>> > Best regards,
>>> >
>>> > Martijn
>>> >
>>> > [1] https://nightlies.apache.org/flink/flink-statefun-docs-master/
>>> > [2] https://github.com/apache/flink-statefun
>>>
>>


Re: [DISCUSS] FLIP-313 Add support of User Defined AsyncTableFunction

2023-05-26 Thread Jing Ge
Hi Aitozi,

Thanks for your proposal. I am not quite sure if I understood your thoughts
correctly. You described a special-case implementation of the
AsyncTableFunction with no public API changes. Would you please elaborate on
your purpose for writing a FLIP, according to the FLIP documentation [1]?
Thanks!

[1]
https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals

Best regards,
Jing

On Wed, May 24, 2023 at 1:07 PM Aitozi  wrote:

> May I ask for some feedback  :D
>
> Thanks,
> Aitozi
>
> Aitozi  于2023年5月23日周二 19:14写道:
> >
> > Just caught a user case report from Giannis Polyzos for this usage:
> >
> > https://lists.apache.org/thread/qljwd40v5ntz6733cwcdr8s4z97b343b
> >
> > Aitozi  于2023年5月23日周二 17:45写道:
> > >
> > > Hi guys,
> > > I want to bring up a discussion about adding support for user-defined
> > > AsyncTableFunction in Flink.
> > > Currently, async table functions are special functions used by table
> > > sources to perform async lookups. However, it is worth supporting
> > > user-defined async table functions as well, because end SQL users could
> > > then leverage them to perform async operations, which helps maximize
> > > system throughput, especially in IO-bound cases.
> > >
> > > You can find some more detail in [1].
> > >
> > > Looking forward to feedback
> > >
> > >
> > > [1]:
> https://cwiki.apache.org/confluence/display/FLINK/%5BFLIP-313%5D+Add+support+of+User+Defined+AsyncTableFunction
> > >
> > > Thanks,
> > > Aitozi.
>
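
For context, a minimal sketch of the kind of user-defined async function the
proposal targets, based on the existing AsyncTableFunction base class (the
enrichment logic and the simulated remote call are illustrative assumptions,
not part of the FLIP; how such a function would be registered and invoked in
SQL is exactly what the FLIP has to define):

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.table.functions.AsyncTableFunction;
import org.apache.flink.types.Row;

public class AsyncEnrichFunction extends AsyncTableFunction<Row> {

    // Flink resolves eval methods reflectively; the first parameter is the
    // future that must be completed with the rows to emit.
    public void eval(CompletableFuture<Collection<Row>> result, String key) {
        // Simulated non-blocking call; a real function would use an async client here.
        CompletableFuture
                .supplyAsync(() -> "value-for-" + key)
                .thenAccept(v -> result.complete(
                        Collections.singletonList(Row.of(key, v))));
    }
}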


Re: Questions on checkpointing mechanism for FLIP-27 Source API

2023-05-26 Thread Jing Ge
Hi Hong,

Great question! Afaik, it depends on the implementation. Speaking of the
"loopback" event sent to the SplitEnumerator, I guess you meant here [1]
(it might be good if you could point out the exact position in the source
code to help us understand the question better :)), which will end up
calling the SplitEnumerator [2]. There is only one implementation of the
method handleSourceEvent(int subtaskId, SourceEvent sourceEvent), in
HybridSourceSplitEnumerator [3].

The only call that sends an operator event to the SplitEnumerator that I
found in the current master branch is in the HybridSourceReader, when the
reader reaches the end of the input of the current source [4]. Since the
call is made in SourceReader#pollNext(ReaderOutput output), it should
follow the exactly-once semantics mechanism defined by [5]. My
understanding is that OperatorEvent 1 will belong to the epoch after the
checkpoint in this case.

Best regards,
Jing

[1]
https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/SourceOperator.java#L284
[2]
https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-core/src/main/java/org/apache/flink/api/connector/source/SplitEnumerator.java#L120
[3]
https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/hybrid/HybridSourceSplitEnumerator.java#L195
[4]
https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/hybrid/HybridSourceReader.java#LL95C13-L95C13
[5]
https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-runtime/src/main/java/org/apache/flink/runtime/operators/coordination/OperatorCoordinatorHolder.java#L66
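
For reference, the pattern in [4] boils down to the reader handing a
SourceEvent to its SourceReaderContext from within pollNext(); a simplified
sketch (the event class and helper here are illustrative, not the actual
hybrid-source classes):

import org.apache.flink.api.connector.source.SourceEvent;
import org.apache.flink.api.connector.source.SourceReaderContext;

/** Illustrative event type; real connectors define their own. */
class ReaderFinishedEvent implements SourceEvent {

    private static final long serialVersionUID = 1L;

    // A reader would call this from pollNext() once the current underlying
    // source is exhausted. Because pollNext() runs in the task's mailbox
    // thread, the event is ordered with respect to the checkpoint barrier
    // of that subtask.
    static void signalFinished(SourceReaderContext readerContext) {
        readerContext.sendSourceEventToCoordinator(new ReaderFinishedEvent());
    }
}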

On Thu, May 25, 2023 at 4:27 AM Hongshun Wang 
wrote:

> Hi Hong,
>
> The checkpoint is triggered by the timer executor of CheckpointCoordinator.
> It triggers the checkpoint in SourceCoordinator (which is passed to
> SplitEnumerator) and then in SourceOperator. The checkpoint event is put in
> SplitEnumerator's event loop to be executed. You can see the details here.
>
> Yours
> Hongshun
>
> On Wed, May 17, 2023 at 11:39 PM Teoh, Hong 
> wrote:
>
> > Hi all,
> >
> > I’m writing a new source based on the FLIP-27 Source API, and I had some
> > questions on the checkpointing mechanisms and associated guarantees.
> Would
> > appreciate if someone more familiar with the API would be able to provide
> > insights here!
> >
> > In FLIP-27 Source, we now have a SplitEnumerator (running on JM) and a
> > SourceReader (running on TM). However, the SourceReader can send events
> to
> > the SplitEnumerator. Given this, we have introduced a “loopback”
> > communication mechanism from TM to JM, and I wonder if/how we handle this
> > during checkpoints.
> >
> >
> > Example of how data might be lost:
> > 1. Checkpoint 123 triggered
> > 2. SplitEnumerator takes checkpoint of state for checkpoint 123
> > 3. SourceReader sends OperatorEvent 1 and mutates state to reflect this
> > 4. SourceReader takes checkpoint of state for checkpoint 123
> > …
> > 5. Checkpoint 123 completes
> >
> > Let’s assume OperatorEvent 1 would mutate SplitEnumerator state once
> > processed, There is now inconsistent state between SourceReader state and
> > SplitEnumerator state. (SourceReader assumes OperatorEvent 1 is
> processed,
> > whereas SplitEnumerator has not processed OperatorEvent 1)
> >
> > Do we have any mechanisms for mitigating this issue? For example, does
> the
> > SplitEnumerator re-take the snapshot of state for a checkpoint if an
> > OperatorEvent is sent before the checkpoint is complete?
> >
> > Regards,
> > Hong
>


Re: [DISCUSS] FLIP-312: Add Yarn ACLs to Flink Containers

2023-05-26 Thread Archit Goyal
Thanks Yang for review.


  1.  FLIP-312 relies on Hadoop version 2.6.0 or later.
  2.  I have updated the FLIP and made it more descriptive.
  3.  ACLs apply to logs as well as permissions to kill the application. Also, 
in the PR we are setting ACLs for Task Manager (createTaskExecutorContext) as 
well as Job Manager (startAppMaster).

Thanks,
Archit Goyal
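
For anyone less familiar with the YARN side of this, a rough sketch of how
application ACLs are attached to a container launch context via the Hadoop
API (the user lists and the helper class are placeholders; the actual option
names and wiring are defined in the FLIP and the PR):

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

final class YarnAclSketch {

    // Attaches view/modify ACLs to the launch context of the JobManager or
    // TaskManager container.
    static void applyAcls(ContainerLaunchContext ctx) {
        Map<ApplicationAccessType, String> acls = new HashMap<>();
        acls.put(ApplicationAccessType.VIEW_APP, "alice,bob"); // may view logs
        acls.put(ApplicationAccessType.MODIFY_APP, "alice");   // may kill the application
        ctx.setApplicationACLs(acls);
    }
}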

From: Yang Wang 
Date: Sunday, May 21, 2023 at 9:08 PM
To: dev@flink.apache.org 
Subject: Re: [DISCUSS] FLIP-312: Add Yarn ACLs to Flink Containers
Thanks for creating this FLIP.

This sounds like a useful feature to make Flink applications running on a
YARN cluster more secure.

However, I think we are still missing some important parts in the FLIP.
1. Which Hadoop versions does this FLIP rely on?
2. We need to describe a bit more about how the YARN ACLs work.
3. Do the ACLs only apply to the logs? What about the Flink JobManager UI?

Best,
Yang

Venkatakrishnan Sowrirajan  于2023年5月13日周六 08:12写道:

> Thanks for the FLIP, Archit.
>
> +1 from me as well. This would be very useful for us and others in the
> community given the same issue was raised earlier as well.
>
> Regards
> Venkata krishnan
>
>
> On Fri, May 12, 2023 at 4:03 PM Becket Qin  wrote:
>
> > Thanks for the FLIP, Archit.
> >
> > The motivation sounds reasonable and it looks like a straightforward
> > proposal. +1 from me.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Fri, May 12, 2023 at 1:30 AM Archit Goyal
>  > >
> > wrote:
> >
> > > Hi all,
> > >
> > > I am opening this thread to discuss the proposal to support Yarn ACLs to
> > > Flink containers which has been documented in FLIP-312 <
> > > https://cwiki.apache.org/confluence/display/FLINK/FLIP+312%3A+Add+Yarn+ACLs+to+Flink+Containers
> > > >.
> > >
> > > This FLIP proposes providing a YARN application ACL mechanism for Flink
> > > containers, to be able to grant specific rights to users other than the
> > > one running the Flink application job. This will restrict other users in
> > > two ways:
> > >
> > >   *   viewing logs through the Resource Manager job history
> > >   *   killing the application
> > >
> > > Please feel free to reply to this email thread and share your opinions.
> > >
> > > Thanks,
> > > Archit Goyal
> > >
> > >
> >
>


[jira] [Created] (FLINK-32209) Opensearch connector should remove the dependency on flink-shaded

2023-05-26 Thread Andriy Redko (Jira)
Andriy Redko created FLINK-32209:


 Summary: Opensearch connector should remove the dependency on 
flink-shaded
 Key: FLINK-32209
 URL: https://issues.apache.org/jira/browse/FLINK-32209
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.0.1
Reporter: Andriy Redko
 Fix For: opensearch-1.1.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Apache Flink 1.16.2 released

2023-05-26 Thread Jing Ge
Hi Weijie,

Thanks again for your effort. I was wondering if there were any obstacles
you had to overcome while releasing 1.16.2 and 1.17.1 that could lead us to
any improvement wrt the release process and management?

Best regards,
Jing

On Fri, May 26, 2023 at 4:41 PM Martijn Visser 
wrote:

> Thank you Weijie and those who helped with testing!
>
> On Fri, May 26, 2023 at 1:06 PM weijie guo 
> wrote:
>
> > The Apache Flink community is very happy to announce the release of
> > Apache Flink 1.16.2, which is the second bugfix release for the Apache
> > Flink 1.16 series.
> >
> >
> >
> > Apache Flink® is an open-source stream processing framework for
> > distributed, high-performing, always-available, and accurate data
> > streaming applications.
> >
> >
> >
> > The release is available for download at:
> >
> > https://flink.apache.org/downloads.html
> >
> >
> >
> > Please check out the release blog post for an overview of the
> > improvements for this bugfix release:
> >
> > https://flink.apache.org/news/2023/05/25/release-1.16.2.html
> >
> >
> >
> > The full release notes are available in Jira:
> >
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352765
> >
> >
> >
> > We would like to thank all contributors of the Apache Flink community
> > who made this release possible!
> >
> >
> >
> > Feel free to reach out to the release managers (or respond to this
> > thread) with feedback on the release process. Our goal is to
> > constantly improve the release process. Feedback on what could be
> > improved or things that didn't go so well are appreciated.
> >
> >
> >
> > Regards,
> >
> > Release Manager
> >
>


Re: [ANNOUNCE] Apache Flink 1.17.1 released

2023-05-26 Thread Jing Ge
Hi Weijie,

That is earlier than I expected! Thank you so much for your effort!

Best regards,
Jing

On Fri, May 26, 2023 at 4:44 PM Martijn Visser 
wrote:

> Same here as with Flink 1.16.2, thank you Weijie and those who helped with
> testing!
>
> On Fri, May 26, 2023 at 1:08 PM weijie guo 
> wrote:
>
>>
>> The Apache Flink community is very happy to announce the release of Apache 
>> Flink 1.17.1, which is the first bugfix release for the Apache Flink 1.17 
>> series.
>>
>>
>>
>>
>> Apache Flink® is an open-source stream processing framework for distributed, 
>> high-performing, always-available, and accurate data streaming applications.
>>
>>
>>
>> The release is available for download at:
>>
>> https://flink.apache.org/downloads.html
>>
>>
>>
>>
>> Please check out the release blog post for an overview of the improvements 
>> for this bugfix release:
>>
>> https://flink.apache.org/news/2023/05/25/release-1.17.1.html
>>
>>
>>
>> The full release notes are available in Jira:
>>
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352886
>>
>>
>>
>>
>> We would like to thank all contributors of the Apache Flink community who 
>> made this release possible!
>>
>>
>>
>>
>> Feel free to reach out to the release managers (or respond to this thread) 
>> with feedback on the release process. Our goal is to constantly improve the 
>> release process. Feedback on what could be improved or things that didn't go 
>> so well are appreciated.
>>
>>
>>
>> Regards,
>>
>> Release Manager
>>
>


[jira] [Created] (FLINK-32208) Remove dependency on flink-shaded from flink-connector-aws

2023-05-26 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-32208:
---

 Summary: Remove dependency on flink-shaded from flink-connector-aws
 Key: FLINK-32208
 URL: https://issues.apache.org/jira/browse/FLINK-32208
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / AWS, Connectors / DynamoDB
Affects Versions: aws-connector-4.2.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


The AWS connectors depend on flink-shaded. With the externalization of
connectors, they shouldn't rely on flink-shaded but should instead shade
dependencies such as this one themselves.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Apache Flink 1.17.1 released

2023-05-26 Thread Martijn Visser
Same here as with Flink 1.16.2, thank you Weijie and those who helped with
testing!

On Fri, May 26, 2023 at 1:08 PM weijie guo 
wrote:

>
> The Apache Flink community is very happy to announce the release of Apache 
> Flink 1.17.1, which is the first bugfix release for the Apache Flink 1.17 
> series.
>
>
>
>
> Apache Flink® is an open-source stream processing framework for distributed, 
> high-performing, always-available, and accurate data streaming applications.
>
>
>
> The release is available for download at:
>
> https://flink.apache.org/downloads.html
>
>
>
>
> Please check out the release blog post for an overview of the improvements 
> for this bugfix release:
>
> https://flink.apache.org/news/2023/05/25/release-1.17.1.html
>
>
>
> The full release notes are available in Jira:
>
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352886
>
>
>
>
> We would like to thank all contributors of the Apache Flink community who 
> made this release possible!
>
>
>
>
> Feel free to reach out to the release managers (or respond to this thread) 
> with feedback on the release process. Our goal is to constantly improve the 
> release process. Feedback on what could be improved or things that didn't go 
> so well are appreciated.
>
>
>
> Regards,
>
> Release Manager
>


Re: [ANNOUNCE] Apache Flink 1.16.2 released

2023-05-26 Thread Martijn Visser
Thank you Weijie and those who helped with testing!

On Fri, May 26, 2023 at 1:06 PM weijie guo 
wrote:

> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.16.2, which is the second bugfix release for the Apache
> Flink 1.16 series.
>
>
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data
> streaming applications.
>
>
>
> The release is available for download at:
>
> https://flink.apache.org/downloads.html
>
>
>
> Please check out the release blog post for an overview of the
> improvements for this bugfix release:
>
> https://flink.apache.org/news/2023/05/25/release-1.16.2.html
>
>
>
> The full release notes are available in Jira:
>
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352765
>
>
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
>
>
> Feel free to reach out to the release managers (or respond to this
> thread) with feedback on the release process. Our goal is to
> constantly improve the release process. Feedback on what could be
> improved or things that didn't go so well are appreciated.
>
>
>
> Regards,
>
> Release Manager
>


[jira] [Created] (FLINK-32207) Error ModuleNotFoundError for Pyflink.table.descriptors and wheel version mismatch

2023-05-26 Thread Alireza Omidvar (Jira)
Alireza Omidvar created FLINK-32207:
---

 Summary: Error ModuleNotFoundError for Pyflink.table.descriptors 
and wheel version mismatch
 Key: FLINK-32207
 URL: https://issues.apache.org/jira/browse/FLINK-32207
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.17.0
 Environment: Colab

Python3.10 
Reporter: Alireza Omidvar
 Attachments: image (1).png, image (2).png

Gentlemen,

I have a problem with some apache-flink modules. I am running apache-flink 1.17.0 
and writing test code in Colab, and I ran into a problem importing the following 
modules:

from pyflink.table import DataTypes
from pyflink.table.descriptors import Schema, Kafka, Json, Rowtime
from pyflink.table.catalog import FileSystem

These imports are not working for me (Python 3.10). Any help is highly 
appreciated. The strange thing is that other modules import fine. I checked your 
GitHub repository but didn't find these classes there either, which means the 
modules are not in your descriptors.py. I thought it needed an installation of 
connectors, but that failed too.

Please see the notebook below:

[https://github.com/aomidvar/scrapper-price-comparison/blob/d8a10f74101bf96974e769813c33b83d7a71f02b/kafkaconsumer1.ipynb]

I am running a test after producing the stream 
([https://github.com/aomidvar/scrapper-price-comparison/blob/main/kafkaproducer1.ipynb])
 to a Confluent server, and I would like to run a Flink job, but the 
above-mentioned modules are not found in the following Colab notebooks:

[https://colab.research.google.com/drive/1aHKv8WA6RA10zTdwdzUubB5K0anEmOws?usp=sharing]
[https://colab.research.google.com/drive/1eCHJlsb8AjdmJtPc95X3H4btmFVSoCL4?usp=sharing]

This is probably not a bug. The only version of apache-flink that currently works 
on Colab is 1.17.0. I prefer Python 3.10, but I also installed a virtual Python 
3.8 environment and, comparing different versions, found out that the Kafka and 
Json descriptors are not in descriptors.py of apache-flink 1.17 by default, 
although they do exist in apache-flink 1.13.

I've got this error for Json, Kafka, etc.:

ImportError Traceback (most recent call last)
      1 from pyflink.table import DataTypes
----> 2 from pyflink.table.descriptors import Schema, Kafka, Json, Rowtime
      3 from pyflink.table.catalog import FileSystem
ImportError: cannot import name 'Kafka' from 'pyflink.table.descriptors'
(/usr/local/lib/python3.10/dist-packages/pyflink/table/descriptors.py)

If the current error is related to the version and dependencies, I would like to 
ask the developers: if I switch to a Python 3.8 environment, can this be solved?

Thanks for your time,
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32206) Error ModuleNotFoundError for Pyflink.table.descriptors and wheel version mismatch

2023-05-26 Thread Alireza Omidvar (Jira)
Alireza Omidvar created FLINK-32206:
---

 Summary: Error ModuleNotFoundError for Pyflink.table.descriptors 
and wheel version mismatch
 Key: FLINK-32206
 URL: https://issues.apache.org/jira/browse/FLINK-32206
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: mongodb-1.0.1
Reporter: Alireza Omidvar






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[ANNOUNCE] Apache Flink 1.17.1 released

2023-05-26 Thread weijie guo
The Apache Flink community is very happy to announce the release of
Apache Flink 1.17.1, which is the first bugfix release for the Apache
Flink 1.17 series.



Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.



The release is available for download at:

https://flink.apache.org/downloads.html



Please check out the release blog post for an overview of the
improvements for this bugfix release:

https://flink.apache.org/news/2023/05/25/release-1.17.1.html



The full release notes are available in Jira:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352886



We would like to thank all contributors of the Apache Flink community
who made this release possible!



Feel free to reach out to the release managers (or respond to this
thread) with feedback on the release process. Our goal is to
constantly improve the release process. Feedback on what could be
improved or things that didn't go so well are appreciated.



Regards,

Release Manager


[ANNOUNCE] Apache Flink 1.16.2 released

2023-05-26 Thread weijie guo
The Apache Flink community is very happy to announce the release of
Apache Flink 1.16.2, which is the second bugfix release for the Apache
Flink 1.16 series.



Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.



The release is available for download at:

https://flink.apache.org/downloads.html



Please check out the release blog post for an overview of the
improvements for this bugfix release:

https://flink.apache.org/news/2023/05/25/release-1.16.2.html



The full release notes are available in Jira:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352765



We would like to thank all contributors of the Apache Flink community
who made this release possible!



Feel free to reach out to the release managers (or respond to this
thread) with feedback on the release process. Our goal is to
constantly improve the release process. Feedback on what could be
improved or things that didn't go so well are appreciated.



Regards,

Release Manager


[DISCUSS] FLIP-294: Support Customized Job Meta Data Listener

2023-05-26 Thread Shammon FY
Hi devs,

We would like to bring up a discussion about FLIP-294: Support Customized
Job Meta Data Listener[1]. We have had several discussions with Jark Wu,
Leonard Xu, Dong Lin, Qingsheng Ren and Poorvank about the functions and
interfaces, and thanks for their valuable advice.
The overall job and connector information is divided into metadata and
lineage; this FLIP focuses on metadata, and lineage will be discussed in
another FLIP in the future. In this FLIP we want to add a customized
listener in Flink to report catalog modifications to external metadata
systems such as DataHub [2] or Atlas [3]. Users can then view the specific
information of connectors such as sources and sinks of Flink jobs in these
systems, including fields, watermarks, partitions, etc.

Looking forward to hearing from you, thanks.


[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Job+Meta+Data+Listener
[2] https://datahub.io/
[3] https://atlas.apache.org/#/
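
As a rough illustration of the shape of such a listener (the exact interface
and event types are specified in the FLIP document [1], so treat this as a
conceptual sketch rather than the final API):

/** Conceptual sketch only; see FLIP-294 [1] for the actual API. */
public interface CatalogModificationListener {

    /**
     * Invoked when a catalog object (database, table, function, ...) is
     * created, altered, or dropped, so the change can be forwarded to an
     * external metadata system such as DataHub or Atlas.
     */
    void onEvent(CatalogModificationEvent event);

    /** Hypothetical event shape, for illustration. */
    interface CatalogModificationEvent {
        String catalogName();
        String objectIdentifier();
    }
}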


[jira] [Created] (FLINK-32205) Flink Rest Client should support connecting to the server using URLs.

2023-05-26 Thread Weihua Hu (Jira)
Weihua Hu created FLINK-32205:
-

 Summary: Flink Rest Client should support connecting to the server 
using URLs.
 Key: FLINK-32205
 URL: https://issues.apache.org/jira/browse/FLINK-32205
 Project: Flink
  Issue Type: Improvement
  Components: Command Line Client
Affects Versions: 1.17.0
Reporter: Weihua Hu


Currently, the Flink client can only connect to the server via address:port, 
which is configured by rest.address and rest.port.

But in some scenarios the Flink server runs behind a proxy, such as when running 
on Kubernetes and exposing services through an ingress. The URL to access the 
Flink server can then be: http://{proxy address}/{some prefix path to identify 
flink clusters}/{flink request path}

In [FLINK-32030|https://issues.apache.org/jira/browse/FLINK-32030], the SQL 
Client gateway accepts URLs via the '--endpoint' option.

IMO, we should introduce an option, such as "rest.endpoint", to make the Flink 
client work with URLs.
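
A rough sketch of how the proposed option might be used on the client side (the 
option is a proposal in this ticket, not an existing configuration key):

{code:java}
import org.apache.flink.configuration.Configuration;

final class RestEndpointSketch {

    static Configuration clientConfig() {
        Configuration conf = new Configuration();
        // Proposed option (not available yet): scheme, host, port and the
        // ingress path prefix would all come from this single URL instead of
        // rest.address / rest.port.
        conf.setString("rest.endpoint", "http://proxy.example.com/flink/cluster-a");
        return conf;
    }
}
{code}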



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32204) ZooKeeperLeaderElectionTest.testZooKeeperReelectionWithReplacement fails with The ExecutorService is shut down already. No Callables can be executed on AZP

2023-05-26 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-32204:
---

 Summary: 
ZooKeeperLeaderElectionTest.testZooKeeperReelectionWithReplacement fails with 
The ExecutorService is shut down already. No Callables can be executed on AZP
 Key: FLINK-32204
 URL: https://issues.apache.org/jira/browse/FLINK-32204
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: Sergey Nuyanzin


[This 
build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49386&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8&l=7095]
 fails as
{noformat}

May 25 18:45:50 Caused by: java.util.concurrent.RejectedExecutionException: The 
ExecutorService is shut down already. No Callables can be executed.
May 25 18:45:50 at 
org.apache.flink.util.concurrent.DirectExecutorService.throwRejectedExecutionExceptionIfShutdown(DirectExecutorService.java:237)
May 25 18:45:50 at 
org.apache.flink.util.concurrent.DirectExecutorService.submit(DirectExecutorService.java:100)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.cache.TreeCache.publishEvent(TreeCache.java:902)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.cache.TreeCache.publishEvent(TreeCache.java:894)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.cache.TreeCache.access$1200(TreeCache.java:79)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.recipes.cache.TreeCache$TreeNode.processResult(TreeCache.java:489)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:926)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:683)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
May 25 18:45:50 at 
org.apache.flink.shaded.curator5.org.apache.curator.framework.imps.GetDataBuilderImpl$3.processResult(GetDataBuilderImpl.java:272)
May 25 18:45:50 at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:634)
May 25 18:45:50 at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:553)
May 25 18:45:50
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32203) Potential ClassLoader memory leak due to log4j configuration

2023-05-26 Thread Oleksandr Nitavskyi (Jira)
Oleksandr Nitavskyi created FLINK-32203:
---

 Summary: Potential ClassLoader memory leak due to log4j 
configuration
 Key: FLINK-32203
 URL: https://issues.apache.org/jira/browse/FLINK-32203
 Project: Flink
  Issue Type: Bug
Reporter: Oleksandr Nitavskyi






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [RESULT] [VOTE] Release 1.17.1, release candidate #1

2023-05-26 Thread Jing Ge
Hi Weijie,

Thank you so much for sharing it!

Best regards,
Jing

On Fri, May 26, 2023 at 8:00 AM weijie guo 
wrote:

> Hi Jing,
>
> The release process for 1.16.2 and 1.17.1 is nearing its end, but I am
> waiting for Docker Hub to approve the update of the manifest and publish the
> new image. I estimate that this may take 1-2 days, and then I will
> officially announce the release of the new version immediately.
>
> Best regards,
>
> Weijie
>
>
>
> Jing Ge  于2023年5月25日周四 19:42写道:
>
> > Hi Weijie,
> >
> > Thanks again for driving it. I was wondering if you are able to share the
> > estimated date when the 1.16.2 and 1.17.1 releases will be officially
> > announced after the voting is closed? Thanks!
> >
> > Best regards,
> > Jing
> >
> > On Thu, May 25, 2023 at 9:46 AM weijie guo 
> > wrote:
> >
> > > I'm happy to announce that we have unanimously approved this release.
> > >
> > >
> > >
> > > There are 7 approving votes, 3 of which are binding:
> > >
> > >
> > > * Xintong Song(binding)
> > >
> > > * Yuxin Tan
> > >
> > > * Xingbo Huang(binding)
> > >
> > > * Yun Tang
> > >
> > > * Jing Ge
> > >
> > > * Qingsheng Ren(binding)
> > >
> > > * Benchao Li
> > >
> > >
> > > There are no disapproving votes.
> > >
> > >
> > > I'll work on the steps to finalize the release and will send out the
> > >
> > > announcement as soon as that has been completed.
> > >
> > >
> > > Thanks everyone!
> > >
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> >
>


Re: [DISCUSS] FLIP-308: Support Time Travel In Batch Mode

2023-05-26 Thread Shammon FY
Thanks Feng, the feature of time travel sounds great!

In addition to SYSTEM_TIME, lakehouses such as Paimon and Iceberg support
querying by snapshot or version. For example, users can query snapshot 1 of a
Paimon table with the following statement:
SELECT * FROM t VERSION AS OF 1

Could we support this in Flink too?

Best,
Shammon FY
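
(For illustration, a sketch of how both forms might look from the Table API
once FLIP-308 lands; the syntax is still under discussion in this thread, so
none of this runs on a released Flink version:)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TimeTravelSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        // Timestamp-based form proposed in the FLIP (syntax not final):
        tEnv.executeSql(
            "SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP '2023-05-01 00:00:00'");
        // Snapshot/version form asked about above, as in Paimon/Iceberg:
        tEnv.executeSql("SELECT * FROM t VERSION AS OF 1");
    }
}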

On Fri, May 26, 2023 at 1:20 PM Benchao Li  wrote:

> Regarding the implementation, did you consider compatibility with the
> pushdown abilities, e.g., projection pushdown, filter pushdown, partition
> pushdown? Since `Snapshot` is not handled much in existing rules, I have a
> concern about this. Of course, it depends on your implementation details;
> what is important is that we'd better add some cross tests for these.
>
> Regarding the interface exposed to Connector, I see there is a rejected
> design for adding SupportsTimeTravel, but I didn't see the alternative in
> the FLIP doc. IMO, this is an important thing we need to clarify because we
> need to know whether the Connector supports this, and what column/metadata
> corresponds to 'system_time'.
>
> Feng Jin  于2023年5月25日周四 22:50写道:
>
> > Thanks for your reply
> >
> > @Timo @BenChao @yuxia
> >
> > Sorry for the mistake. Currently, Calcite only supports the `FOR
> > SYSTEM_TIME AS OF` syntax, so we can only support `FOR SYSTEM_TIME AS OF`.
> > I've updated the syntax part of the FLIP.
> >
> >
> > @Timo
> >
> > > We will convert it to TIMESTAMP_LTZ?
> >
> > Yes, I think we need to convert TIMESTAMP to TIMESTAMP_LTZ and then
> convert
> > it into a long value.
> >
> > > How do we want to query the most recent version of a table
> >
> > I think we can use `AS OF CURRENT_TIMESTAMP`, but it does cause some
> > inconsistency with the real-time concept.
> > However, from my personal understanding, the scope of `AS OF
> > CURRENT_TIMESTAMP` is the table itself, not the table record. So, I think
> > using CURRENT_TIMESTAMP should also be reasonable.
> > Additionally, if no version is specified, the latest version should be
> > used by default.
> >
> >
> >
> > Best,
> > Feng
> >
> >
> > On Thu, May 25, 2023 at 7:47 PM yuxia 
> wrote:
> >
> > > Thanks Feng for bringing this up. It'll be great to introduce time travel
> > > to Flink to have better integration with external data sources.
> > >
> > > I also share the same concern about the syntax.
> > > I see in the `Whether to support other syntax implementations` part of
> > > this FLIP that the syntax in Calcite should be `FOR SYSTEM_TIME AS OF`,
> > > right?
> > > But in the syntax part of this FLIP, it seems to be `AS OF TIMESTAMP`
> > > instead of `FOR SYSTEM_TIME AS OF`. Is it just a mistake or by design?
> > >
> > >
> > > Best regards,
> > > Yuxia
> > >
> > > - Original Message -
> > > From: "Benchao Li" 
> > > To: "dev" 
> > > Sent: Thursday, May 25, 2023, 7:27:17 PM
> > > Subject: Re: [DISCUSS] FLIP-308: Support Time Travel In Batch Mode
> > >
> > > Thanks Feng, it's exciting to have this ability.
> > >
> > > Regarding the syntax section, are you proposing `AS OF` instead of `FOR
> > > SYSTEM_TIME AS OF` to do this? I know `FOR SYSTEM_TIME AS OF` is in the
> > > SQL standard and has been supported by some database vendors such as SQL
> > > Server. About `AS OF`, is it in the standard, or does any database vendor
> > > support it? If yes, I think it's worth adding this support to Calcite,
> > > and I would give a hand on the Calcite side. Otherwise, I think we'd
> > > better use `FOR SYSTEM_TIME AS OF`.
> > >
> > > Timo Walther  于2023年5月25日周四 19:02写道:
> > >
> > > > Also: How do we want to query the most recent version of a table?
> > > >
> > > > `AS OF CURRENT_TIMESTAMP` would be ideal, but according to the docs the
> > > > type is TIMESTAMP_LTZ and, what is even more concerning, it is actually
> > > > evaluated per row:
> > > >
> > > >  > Returns the current SQL timestamp in the local time zone, the
> return
> > > > type is TIMESTAMP_LTZ(3). It is evaluated for each record in
> streaming
> > > > mode. But in batch mode, it is evaluated once as the query starts and
> > > > uses the same result for every row.
> > > >
> > > > This could make it difficult to explain in a join scenario of
> multiple
> > > > snapshotted tables.
> > > >
> > > > Regards,
> > > > Timo
> > > >
> > > >
> > > > On 25.05.23 12:29, Timo Walther wrote:
> > > > > Hi Feng,
> > > > >
> > > > > thanks for proposing this FLIP. It makes a lot of sense to finally
> > > > > support querying tables at a specific point in time or hopefully
> also
> > > > > ranges soon. Following time-versioned tables.
> > > > >
> > > > > Here is some feedback from my side:
> > > > >
> > > > > 1. Syntax
> > > > >
> > > > > Can you elaborate a bit on the Calcite restrictions?
> > > > >
> > > > > Does Calcite currently support `AS OF` syntax for this but not `FOR
> > > > > SYSTEM_TIME AS OF`?
> > > > >
> > > > > It would be great to support `AS OF` also for time-versioned joins
> > and
> > > > > have a unified and short syntax.
> > > > >
> > > > > 

[jira] [Created] (FLINK-32202) useless configuration

2023-05-26 Thread zhangdong7 (Jira)
zhangdong7 created FLINK-32202:
--

 Summary: useless configuration
 Key: FLINK-32202
 URL: https://issues.apache.org/jira/browse/FLINK-32202
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.15.4
 Environment: 
public static final ConfigOption<String> SERVER_PORT_RANGE =
    ConfigOptions.key("queryable-state.server.ports")
        .stringType()
        .defaultValue("9067")
        .withDescription(
            "The port range of the queryable state server. The specified range can be a single "
                + "port: \"9123\", a range of ports: \"50100-50200\", or a list of ranges and "
                + "ports: \"50100-50200,50300-50400,51234\".")
        .withDeprecatedKeys(new String[]{"query.server.ports"});
Reporter: zhangdong7


According to the official Flink documentation, the parameter query.server.ports 
has been replaced by queryable-state.server.ports, but the entry 
query.server.ports: 6125 is still generated when Flink starts. Is this a 
historical leftover?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32201) Enable the distribution of shuffle descriptors via the blob server by connection number

2023-05-26 Thread Weihua Hu (Jira)
Weihua Hu created FLINK-32201:
-

 Summary: Enable the distribution of shuffle descriptors via the 
blob server by connection number
 Key: FLINK-32201
 URL: https://issues.apache.org/jira/browse/FLINK-32201
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Reporter: Weihua Hu



Flink supports distributing shuffle descriptors via the blob server to reduce 
JobManager overhead. But the default threshold to enable it is 1 MB, which is 
never reached. Users need to set a proper value for this, but that requires 
advanced knowledge before configuring it.

I would like to enable this feature based on the number of connections of a 
group of shuffle descriptors. For example, consider a simple streaming job with 
two operators, each with a parallelism of 10,000 and connected via an all-to-all 
distribution. In this job, we only get one set of shuffle descriptors, and this 
group has 10,000 * 10,000 connections. This means that the JobManager needs to 
send this set of shuffle descriptors to 10,000 tasks.

Since it is also difficult for users to configure, I would like to give it a 
default value. The serialized shuffle descriptor sizes for different 
parallelisms are shown below.


|| Producer parallelism || Serialized shuffle descriptor size || Consumer parallelism || Total data size that JM needs to send ||
| 5,000 | 100 KB | 5,000 | 500 MB |
| 10,000 | 200 KB | 10,000 | 2 GB |
| 20,000 | 400 KB | 20,000 | 8 GB |

So, I would like to set the default value to 10,000 * 10,000.

Any suggestions or concerns are appreciated.
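
A sketch of the proposed decision logic (names are illustrative, not actual 
Flink code):

{code:java}
final class ShuffleDescriptorDistributionSketch {

    // Proposed default: enable blob-server distribution once a descriptor
    // group has at least 10,000 * 10,000 producer-consumer connections.
    private static final long DEFAULT_CONNECTION_THRESHOLD = 10_000L * 10_000L;

    static boolean distributeViaBlobServer(int producerParallelism, int consumerParallelism) {
        long connections = (long) producerParallelism * consumerParallelism;
        return connections >= DEFAULT_CONNECTION_THRESHOLD;
    }
}
{code}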





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32200) OrcFileSystemITCase cashed with exit code 239 (NoClassDefFoundError: scala/concurrent/duration/Deadline)

2023-05-26 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-32200:
-

 Summary: OrcFileSystemITCase cashed with exit code 239 
(NoClassDefFoundError: scala/concurrent/duration/Deadline)
 Key: FLINK-32200
 URL: https://issues.apache.org/jira/browse/FLINK-32200
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.18.0
Reporter: Matthias Pohl


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=49325&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5&t=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9&l=12302

{code}
12:24:14,883 [flink-akka.actor.internal-dispatcher-2] ERROR 
org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: Thread 
'flink-akka.actor.internal-dispatcher-2' produced an uncaught exception. 
Stopping the process...
java.lang.NoClassDefFoundError: scala/concurrent/duration/Deadline
at scala.concurrent.duration.Deadline$.apply(Deadline.scala:30) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at scala.concurrent.duration.Deadline$.now(Deadline.scala:76) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at akka.actor.CoordinatedShutdown.loop$1(CoordinatedShutdown.scala:737) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
akka.actor.CoordinatedShutdown.$anonfun$run$7(CoordinatedShutdown.scala:762) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
 ~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
 ~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
 [flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
[?:1.8.0_292]
at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
[?:1.8.0_292]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
[?:1.8.0_292]
at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
[?:1.8.0_292]
12:24:14,882 [flink-metrics-akka.actor.internal-dispatcher-2] ERROR 
org.apache.flink.util.FatalExitExceptionHandler  [] - FATAL: Thread 
'flink-metrics-akka.actor.internal-dispatcher-2' produced an uncaught 
exception. Stopping the process...
java.lang.NoClassDefFoundError: scala/concurrent/duration/Deadline
at scala.concurrent.duration.Deadline$.apply(Deadline.scala:30) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at scala.concurrent.duration.Deadline$.now(Deadline.scala:76) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at akka.actor.CoordinatedShutdown.loop$1(CoordinatedShutdown.scala:737) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
akka.actor.CoordinatedShutdown.$anonfun$run$7(CoordinatedShutdown.scala:762) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 
scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64) 
~[flink-rpc-akka_318674dc-e98c-4e16-8705-faefda52bd1a.jar:1.18-SNAPSHOT]
at 

Re: [RESULT] [VOTE] Release 1.17.1, release candidate #1

2023-05-26 Thread weijie guo
Hi Jing,

The release process for 1.16.2 and 1.17.1 is nearing its end, but I am
waiting for Docker Hub to approve the update of the manifest and publish the
new image. I estimate that this may take 1-2 days, and then I will
officially announce the release of the new version immediately.

Best regards,

Weijie



Jing Ge  于2023年5月25日周四 19:42写道:

> Hi Weijie,
>
> Thanks again for driving it. I was wondering if you are able to share the
> estimated date when the 1.16.2 and 1.17.1 releases will be officially
> announced after the voting is closed? Thanks!
>
> Best regards,
> Jing
>
> On Thu, May 25, 2023 at 9:46 AM weijie guo 
> wrote:
>
> > I'm happy to announce that we have unanimously approved this release.
> >
> >
> >
> > There are 7 approving votes, 3 of which are binding:
> >
> >
> > * Xintong Song(binding)
> >
> > * Yuxin Tan
> >
> > * Xingbo Huang(binding)
> >
> > * Yun Tang
> >
> > * Jing Ge
> >
> > * Qingsheng Ren(binding)
> >
> > * Benchao Li
> >
> >
> > There are no disapproving votes.
> >
> >
> > I'll work on the steps to finalize the release and will send out the
> >
> > announcement as soon as that has been completed.
> >
> >
> > Thanks everyone!
> >
> >
> > Best regards,
> >
> > Weijie
> >
>