Hi Aljoscha,

I believe that support for Broadcast State should also be in 1.5.
There is an open PR for that (https://github.com/apache/flink/pull/5230),
and there are some pending issues related to the Scala API and documentation.
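
For anyone who has not followed the PR, here is a rough sketch of how the proposed
API is shaped. The class and method names are taken from the current state of the
PR and may still change before the release; the element types and the "rule" logic
are made up purely for illustration:

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class BroadcastStateSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> events = env.fromElements("event-a", "event-b");
        DataStream<String> rules = env.fromElements("rule-1", "rule-2");

        // Descriptor for the state that is broadcast to all parallel instances.
        final MapStateDescriptor<String, String> ruleStateDescriptor = new MapStateDescriptor<>(
                "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

        BroadcastStream<String> broadcastRules = rules.broadcast(ruleStateDescriptor);

        events
            .keyBy(new KeySelector<String, String>() {
                @Override
                public String getKey(String value) {
                    return value;
                }
            })
            .connect(broadcastRules)
            .process(new KeyedBroadcastProcessFunction<String, String, String, String>() {

                @Override
                public void processElement(String event, ReadOnlyContext ctx, Collector<String> out) throws Exception {
                    // Non-broadcast side: read-only access to the broadcast state.
                    String latestRule = ctx.getBroadcastState(ruleStateDescriptor).get("latest");
                    out.collect(event + " processed with " + latestRule);
                }

                @Override
                public void processBroadcastElement(String rule, Context ctx, Collector<String> out) throws Exception {
                    // Broadcast side: update the state that all parallel tasks see.
                    ctx.getBroadcastState(ruleStateDescriptor).put("latest", rule);
                }
            })
            .print();

        env.execute("broadcast state sketch");
    }
}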

Thanks,
Kostas

> On Feb 5, 2018, at 5:37 PM, Timo Walther <twal...@apache.org> wrote:
> 
> Hi Shuyi,
> 
> I will take a look at it again this week. I'm pretty sure it will be part of 
> 1.5.0.
> 
> Regards,
> Timo
> 
> 
> On 2/5/18 at 5:25 PM, Shuyi Chen wrote:
>> Hi Aljoscha, can we get this feature in for 1.5.0? We have a lot of
>> internal users waiting for it.
>> 
>> [FLINK-7923] Support accessing subfields of a Composite element in an
>> Object Array type column
>> (https://issues.apache.org/jira/browse/FLINK-7923)
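>> 
>> To make the feature concrete, this is the kind of access pattern the ticket is
>> about. The table and column names below are made up, and the snippet assumes a
>> table "Orders" has already been registered with a column "users" whose type is
>> an object array of a composite (row) type:
>> 
>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>> import org.apache.flink.table.api.Table;
>> import org.apache.flink.table.api.TableEnvironment;
>> import org.apache.flink.table.api.java.StreamTableEnvironment;
>> 
>> public class ArraySubfieldAccessSketch {
>>     public static void main(String[] args) {
>>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>>         StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
>> 
>>         // Assumes "Orders" was registered elsewhere with a column "users" of type
>>         // ARRAY of ROW(name, age). FLINK-7923 is about letting queries reach into
>>         // a subfield of one of the array's elements:
>>         Table result = tableEnv.sqlQuery("SELECT users[1].name FROM Orders");
>>     }
>> }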
>> 
>> Thanks a lot
>> Shuyi
>> 
>> 
>> On Mon, Feb 5, 2018 at 6:59 AM, Christophe Jolif <cjo...@gmail.com> wrote:
>> 
>>> Hi guys,
>>> 
>>> Sorry for jumping in, but I think
>>> 
>>> [FLINK-8101] Elasticsearch 6.X support
>>> [FLINK-7386] Flink Elasticsearch 5 connector is not compatible with
>>> Elasticsearch 5.2+ client
>>> 
>>> have long been awaited, and there is a PR from me and one from someone
>>> else showing the interest ;) So if you could consider them for 1.5 that
>>> would be great!
>>> 
>>> Thanks!
>>> --
>>> Christophe
>>> 
>>> On Mon, Feb 5, 2018 at 2:47 PM, Timo Walther <twal...@apache.org> wrote:
>>> 
>>>> Hi Aljoscha,
>>>> 
>>>> It would be great if we could include the first version of the SQL client
>>>> (see FLIP-24, Implementation Plan 1). I will open a PR this week. I think
>>>> we can merge this with an explicit "experimental/alpha" status. It is far
>>>> from feature-complete but will be a great tool for Flink beginners.
>>>> 
>>>> In order to use the SQL client we would also need to add some table
>>>> sources that use the new unified table factories (FLINK-8535), but this
>>>> is optional because users can implement their own table factories in the
>>>> beginning.
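>>>>
>>>> To give an idea of what "implement their own table factories" would look
>>>> like, here is a purely illustrative sketch. FLINK-8535 is not merged yet,
>>>> so the interface shape, method names, and property keys below are
>>>> assumptions about the proposal, not the final API:
>>>>
>>>> import java.util.Arrays;
>>>> import java.util.HashMap;
>>>> import java.util.List;
>>>> import java.util.Map;
>>>>
>>>> public class TableFactorySketch {
>>>>
>>>>     // Hypothetical stand-in for the unified factory interface proposed in FLINK-8535.
>>>>     interface TableSourceFactory<T> {
>>>>         Map<String, String> requiredContext();    // properties that select this factory
>>>>         List<String> supportedProperties();       // properties the factory understands
>>>>         T create(Map<String, String> properties); // build the source from the properties
>>>>     }
>>>>
>>>>     // Example of a user-defined factory that the SQL client could discover.
>>>>     static class MyCsvSourceFactory implements TableSourceFactory<String> {
>>>>
>>>>         @Override
>>>>         public Map<String, String> requiredContext() {
>>>>             Map<String, String> context = new HashMap<>();
>>>>             context.put("connector.type", "my-csv"); // hypothetical property key/value
>>>>             return context;
>>>>         }
>>>>
>>>>         @Override
>>>>         public List<String> supportedProperties() {
>>>>             return Arrays.asList("connector.path", "format.field-delimiter");
>>>>         }
>>>>
>>>>         @Override
>>>>         public String create(Map<String, String> properties) {
>>>>             // A real factory would return a TableSource built from the properties;
>>>>             // a String stands in here only to keep the sketch self-contained.
>>>>             return "CSV source reading " + properties.get("connector.path");
>>>>         }
>>>>     }
>>>> }
>>>>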
>>>> Regards,
>>>> Timo
>>>> 
>>>> 
>>>> On 2/5/18 at 2:36 PM, Tzu-Li (Gordon) Tai wrote:
>>>> 
>>>>> Hi Aljoscha,
>>>>> Thanks for starting the discussion.
>>>>> 
>>>>> I think there’s a few connector related must-have improvements that we
>>>>> should get in before the feature freeze, since quite a few users have
>>> been
>>>>> asking for them:
>>>>> 
>>>>> [FLINK-6352] FlinkKafkaConsumer should support to use timestamp to set
>>>>> up start offset (see the sketch below)
>>>>> [FLINK-5479] Per-partition watermarks in FlinkKafkaConsumer should
>>>>> consider idle partitions
>>>>> [FLINK-8516] Pluggable shard-to-subtask partitioning for
>>>>> FlinkKinesisConsumer
>>>>> [FLINK-6109] Add a “checkpointed offset” metric to FlinkKafkaConsumer
>>>>> 
>>>>> These are still missing in the master branch; only FLINK-5479 lacks a
>>>>> pull request.
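>>>>>
>>>>> For FLINK-6352, the sketch below shows roughly how the consumer API could
>>>>> look once it is merged. The method name setStartFromTimestamp is taken
>>>>> from the open pull request and may still change:
>>>>>
>>>>> import java.util.Properties;
>>>>>
>>>>> import org.apache.flink.api.common.serialization.SimpleStringSchema;
>>>>> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>>>>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
>>>>>
>>>>> public class KafkaStartFromTimestampSketch {
>>>>>
>>>>>     public static void main(String[] args) throws Exception {
>>>>>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>>>>>
>>>>>         Properties props = new Properties();
>>>>>         props.setProperty("bootstrap.servers", "localhost:9092");
>>>>>         props.setProperty("group.id", "test-group");
>>>>>
>>>>>         FlinkKafkaConsumer010<String> consumer =
>>>>>                 new FlinkKafkaConsumer010<>("my-topic", new SimpleStringSchema(), props);
>>>>>
>>>>>         // Start reading every partition from the first record whose timestamp
>>>>>         // is greater than or equal to the given epoch-millisecond timestamp.
>>>>>         consumer.setStartFromTimestamp(1517836800000L);
>>>>>
>>>>>         env.addSource(consumer).print();
>>>>>         env.execute("kafka start-from-timestamp sketch");
>>>>>     }
>>>>> }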
>>>>> 
>>>>> Cheers,
>>>>> Gordon
>>>>> 
>>>>> On 31 January 2018 at 10:38:43 AM, Aljoscha Krettek (aljos...@apache.org)
>>>>> wrote:
>>>>> Hi Everyone,
>>>>> 
>>>>> When we decided to do the 1.4.0 release a while back, we did that to get
>>>>> a stable release out before putting in a couple of new features. Back
>>>>> then, some of those new features (FLIP-6, network stack changes, local
>>>>> state recovery) were almost ready, and we wanted to do a shortened 1.5.0
>>>>> development cycle to allow those features to become ready and then do
>>>>> the next release.
>>>>> 
>>>>> We are now approaching the approximate time when we wanted to do the
>>>>> Flink 1.5.0 release, so I would like to gauge where we are and gather
>>>>> opinions on how we should proceed now.
>>>>> 
>>>>> With this, I'd also like to propose myself as the release manager for
>>>>> 1.5.0, but I'm very happy to yield if someone else is interested in
>>>>> doing that.
>>>>> 
>>>>> What do you think?
>>>>> 
>>>>> Best,
>>>>> Aljoscha
>>>>> 
>>>> 
>>>> 
>>> 
>>> --
>>> Christophe
>>> 
>> 
>> 
> 
