Hi,
You should be a contributor already.
Best, Fabian
On Thu, Apr 11, 2019 at 13:50, Run wrote:
> Hi Guys,
>
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA ID is fan_li_ya.
>
>
> Thanks a lot.
>
>
> Best,
> Liya Fan
Hi all,
Flink Forward Europe returns to Berlin on October 7-9, 2019.
We are happy to announce that the Call for Presentations is open!
Please submit a proposal if you'd like to present your Apache Flink
experience, best practices, new features, or use cases in front of an
international audience
t (2), this sounds to me more like a project for a
> > "translator",
> > >> not a "technical writer". As far as I understand it, the big benefit
> of
> > >> having a technical writer is to have someone who can describe
> > complicated
>
plan.
> My understanding is that we need to indicate priorities between proposals
> and might get only one, so it would be good to not subdivide.
>
> On Fri, Apr 12, 2019 at 9:58 AM Fabian Hueske wrote:
>
> > Hi everyone,
> >
> > I think we can split the first project
progress?
>
>
> -----Original Message-----
> From: Fabian Hueske
> Sent: 8 April 2019 11:18
> To: dev
> Subject: Re: SQL CLI and JDBC
>
> Hi Hanan,
>
> I'm not aware of any plans to add a JDBC Driver.
>
> One issue with the JDBC interface is that it o
Hi Francis,
Welcome to the Flink community!
I've given you contributor permissions for Jira.
Best, Fabian
On Mon, Apr 15, 2019 at 03:49, du francis <
francisdu...@gmail.com>:
> Hello Flink Community:
>
> My name is Francis Du, I want to contribute to Apache Flink.
> Would yo
Hi,
Welcome to the Flink community!
I gave you contributor permissions for Jira.
Best, Fabian
On Tue, Apr 16, 2019 at 04:20, Kevin Tang <
kevin.shang...@gmail.com>:
> Hi,
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is t
Hi Fabian,
>
> How can I also get contributor permissions for Jira?
>
> Thanks in advance,
> Konstantinos
>
> On Tue, Apr 16, 2019 at 1:38 PM Fabian Hueske wrote:
>
> > Hi,
> >
> > Welcome to the Flink community!
> > I gave you contributor permissions for
ou please add me to the Contributor group?
>
> Best,
> Konstantinos
>
> On Tue, Apr 16, 2019 at 3:27 PM Fabian Hueske wrote:
>
> > Hi Konstantinos,
> >
> > You need a Jira account and have to share your user id.
> > We will then add it to the Contributor group and
Hi everyone,
We recently added a roadmap to our project website [1] and decided to
update it after every release. Flink 1.8.0 was released a few days ago, so
I think we should check the roadmap, remove what has been achieved so
far, and add features / improvements that we plan for the future.
other project ideas?
> >>> >>
> >>> >> Cheers,
> >>> >>
> >>> >> Konstantin
> >>> >>
> >>> >>
> >>> >>
> >>> >>
> >>> >> On Fri, Apr 12, 2019
Hi,
Welcome to the Flink community.
I gave you contributor permissions for Jira.
Best, Fabian
On Mon, Apr 29, 2019 at 09:28, deng ziming <
dengziming1...@gmail.com>:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID i
Hi JFrame,
Welcome to the Flink community.
I gave you contributor permissions for Jira.
Best, Fabian
On Mon, Apr 29, 2019 at 09:27, 张俸铭 <15963339...@163.com> wrote:
> Hi,
>
> I want to contribute to Apache Flink.
>
> Would you please give me the contributor permission? My JIRA ID is JFrame
Hi Zhou Haiqing,
Welcome to the Flink community.
I gave you contributor permissions for Jira.
Best, Fabian
On Mon, Apr 29, 2019 at 09:48, Hunk wrote:
> Hi,
> I want to contribute to Apache Flink. Would you please give me the
> contributor permission?
> My JIRA ID is zhouhaiqing.
>
Jeff Zhang wrote:
> Hi Fabian,
>
> One thing missing is the Python API and Python UDFs; we already discussed it
> in the community, and it is very close to reaching consensus.
>
>
> Fabian Hueske wrote on Wed, Apr 17, 2019 at 7:51 PM:
>
> > Hi everyone,
> >
> > We recently adde
e python effort, I think he can help to prepare
> it.
>
>
>
> Fabian Hueske wrote on Mon, Apr 29, 2019 at 10:15 PM:
>
>> Hi everyone,
>>
>> Since we had no more comments on this thread, I think we can proceed to
>> update the roadmap.
>>
>> @Jeff Zhang I agree,
Hi Sergey,
You are right, keys are managed in key groups. Each key belongs to a key
group and one or more key groups are assigned to each parallel task of an
operator.
Key groups are not exposed to users and the assignments of keys ->
key-groups and key-groups -> tasks cannot be changed without ch
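The two-step mapping described above can be modeled in plain Java. This is only an illustrative sketch under stated assumptions: the class name is made up, and the plain hashCode/floorMod assignment is a simplification, since real Flink additionally applies a murmur hash to the key's hash and uses the configured maximum parallelism as the number of key groups.

```java
// Illustrative model of the key -> key-group -> task mapping described
// above. Simplification: real Flink applies a murmur hash on top of
// key.hashCode(); a plain floorMod is used here for brevity.
public class KeyGroupSketch {

    // Step 1: a key is deterministically mapped to one of
    // maxParallelism key groups.
    public static int assignToKeyGroup(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Step 2: contiguous ranges of key groups are assigned to the
    // parallel tasks of the operator.
    public static int keyGroupToTask(int keyGroup, int parallelism, int maxParallelism) {
        return keyGroup * parallelism / maxParallelism;
    }

    public static void main(String[] args) {
        int maxParallelism = 128;
        int parallelism = 4;
        for (String key : new String[] {"user-1", "user-2", "user-3"}) {
            int kg = assignToKeyGroup(key, maxParallelism);
            int task = keyGroupToTask(kg, parallelism, maxParallelism);
            System.out.println(key + " -> key group " + kg + " -> task " + task);
        }
    }
}
```

Rescaling only changes the second step: whole key groups move to other tasks, while the key -> key-group assignment stays fixed, which is why it is not exposed to users.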
Hi,
Welcome to the Flink community.
I gave you contributor permissions for Jira.
Best, Fabian
On Tue, Apr 30, 2019 at 04:02, aihua li wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is aiwa.
>
>
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Thanks, Fabian
On Thu, Apr 25, 2019 at 05:58, Shi Quan wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
>
> My JIRA ID is quan
>
> Best regard
ibutions, any
> >>>>> contributor
> >>>>>>>>> could
> >>>>>>>>>> just ask the committer (who merged those contributions) about
> >>>>>>>> contributor
>
Hi Vijay,
You are right, FLIP-34 is the right place to look at.
AFAIK, it is not possible to trigger a checkpoint after the sources have
finished.
FLIP-34 will provide a few options for how to handle such requirements.
Best, Fabian
On Thu, May 2, 2019 at 00:14, vijikarthi wrote:
> It looks like t
the PR about add Python Table API section to the
> roadmap. I
> > > > appreciate if you have time to look at it. :)
> > > >
> > > > https://github.com/apache/flink-web/pull/204
> > > >
> > > > Regards,
> > > &
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Sat, May 4, 2019 at 11:29, 华超 wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is wtcctw.
>
>
> --
> Hua Chao (华 超
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Mon, May 6, 2019 at 05:40, Xingbo Huang wrote:
> Hi guys,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA username is hxbks2ks.
>
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Tue, May 7, 2019 at 12:13, clay clay wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is clay4megtr
>
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Sun, May 5, 2019 at 11:24, 郑智国 wrote:
> Hi,
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA username is zhiguo.
>
Hi,
Welcome to the Flink community.
I gave you contributor permissions for Jira.
Best, Fabian
On Wed, May 8, 2019 at 10:03, Rui Li wrote:
> Hi team,
>
> Could someone add me as a contributor? My JIRA username is lirui.
> Thanks!
>
> --
> Best regards!
> Rui Li
>
e are able to
> predict future currency exchange rates. I’ll point out the general
> architecture and describe the most interesting findings.
>
> Best
> Tim
>
> -----Original Message-----
> From: Robert Metzger
> Sent: Monday, May 6, 2019 09:44
> To: Fabian
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Fri, May 10, 2019 at 06:45, Gen Luo wrote:
> Hi,
>
> Could someone add me as a contributor? My JIRA username is c4e.
> Thanks!
>
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Fri, May 10, 2019 at 09:48, lyy900...@163.com <
lyy900...@163.com>:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID
Hi,
Welcome to the Flink community,
I gave you contributor permissions for Jira.
Best, Fabian
On Fri, May 10, 2019 at 14:13, clay clay wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is zhangzqit
>
Hi Jiyang,
Welcome to the Flink community!
I gave you contributor permissions for Jira.
Best, Fabian
On Sat, May 11, 2019 at 05:15, bryan yan wrote:
> Hi Flink community,
>
> This is Jiyang from Amazon applying for contributor permission. My id is
> yjiyang.
> Best,
> Jiyang
>
Hi Zhinan,
Welcome to the Flink community!
I gave you contributor permissions for Jira.
Best, Fabian
On Sun, May 12, 2019 at 09:48, Zhinan Cheng <
chingchi...@gmail.com>:
> Hello,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
>
> My JIRA
eng from CUHK.
> I also want to contribute to Apache Flink and I have applied for the
> contributor permission.
> However, my JIRA ID is zncheng.
> Would you please make sure that you also give me the contributor
> permission?
>
> Regards,
> Zhinan
>
>
>
> On Mon
Hi,
Welcome to the Flink community.
I've given you contributor permissions for Jira.
Best, Fabian
On Mon, May 13, 2019 at 10:27, Tong Sun wrote:
> Hi,
>
> I want to contribute to Apache Flink.
> Would you please give me the contributor permission?
> My JIRA ID is stong
>
Hi Shakir,
This is a frequently reported issue in Flink's metrics collection / UI.
Sent and received records and bytes only include data that is exchanged
between Flink tasks, but not between a source system (Kafka) and Flink or
between Flink and a sink system (Kinesis).
IIRC, there is an effort to fix this p
Hi All,
I obviously support this proposal, but I'd like to emphasize two points.
* I think we can significantly improve the getting-started experience with
better (and up-to-date) tutorials.
* A better structure and separation of concepts and API will be very
helpful. I noticed this when I was re
Hi Ken,
You are right. Slot-sharing can only be configured for DataStream
applications. The DataSet API does not support this.
I think right now, there is no good solution for this problem.
There are several ongoing efforts to improve Flink's batch capabilities,
incl. better scheduling, failure re
Hi Eugene,
I agree with Jan. Using a ProcessFunction is the way to go.
ProcessFunction gives you all the tools you need:
* ListState which is very cheap to append to (and you only need to read the
ListState when you receive a watermark).
* Access to event timestamps, the current watermark and tim
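The buffering pattern above can be modeled without any Flink dependency. In the sketch below (all names are my own, not Flink API), a plain list stands in for ListState and onWatermark() stands in for the onTimer()/watermark callback: appends stay cheap, and the buffer is only read and drained when a watermark arrives.

```java
import java.util.ArrayList;
import java.util.List;

// Flink-free model of the ProcessFunction pattern described above.
// In a real ProcessFunction the buffer would be ListState and the
// drain would run in onTimer() when a watermark advances.
public class WatermarkBufferSketch {

    public record Event(long timestamp, String value) {}

    private final List<Event> buffer = new ArrayList<>();

    // Analogous to ListState.add(): cheap append, no read.
    public void process(Event e) {
        buffer.add(e);
    }

    // On a watermark, emit (in timestamp order) every buffered event
    // the watermark declares complete, and keep the rest buffered.
    public List<Event> onWatermark(long watermark) {
        List<Event> ready = new ArrayList<>();
        buffer.removeIf(e -> {
            if (e.timestamp() <= watermark) {
                ready.add(e);
                return true;
            }
            return false;
        });
        ready.sort((a, b) -> Long.compare(a.timestamp(), b.timestamp()));
        return ready;
    }

    public static void main(String[] args) {
        WatermarkBufferSketch s = new WatermarkBufferSketch();
        s.process(new Event(5, "a"));
        s.process(new Event(2, "b"));
        s.process(new Event(9, "c"));
        // Watermark 6 releases the events with timestamps 2 and 5.
        System.out.println(s.onWatermark(6));
    }
}
```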
Congrats Jincheng!
On Tue, Jun 25, 2019 at 11:48, Aljoscha Krettek <
aljos...@apache.org>:
> Congratulations! :-)
>
> > On 25. Jun 2019, at 11:34, Wei Zhong wrote:
> >
> > Congratulations Jincheng!
> >
> > Best,
> > Wei
> >
> >
> >> On Jun 25, 2019, at 15:18, JingsongLee wrote:
> >>
> >> Jinch
4 GMT+02:00 Till Rohrmann :
> There ain’t no such thing as a free lunch and code style.
>
> On Thu, Oct 22, 2015 at 3:13 PM, Maximilian Michels
> wrote:
>
> > I think we have to document all these classes. Code Style doesn't come
> > for free :)
> >
> &g
I'd like to bring up Vasia's question on the project structure.
Stephan started the discussion and proposed a new project structure about
three weeks ago [1].
The proposal was refined a bit and eventually backed by many +1s.
Do we want to make this happen in 0.10 or do we postpone it until after the
re
Chesnay
>> checked with Henry on this topic in JIRA.
>>
>> On Thu, Oct 22, 2015 at 3:38 PM, Fabian Hueske wrote:
>>
>> I'd like to bring up Vasia's question on the project structure.
>>>
>>> Stephan started the discussion and proposed a new
directly to coordinate. Thanks.
2015-10-22 16:45 GMT+02:00 Fabian Hueske :
> I just deleted the Record API to check what would break.
> Doesn't look too scary, just a few tests that need to be adapted. I'm
> right in the middle of that. Hope to open a PR soon.
>
> 2015-10-2
instead of error?
> >>
> >> This would allow us to cherry-pick certain rules we would like people
> >> to follow but not strictly enforced.
> >>
> >> - Henry
> >>
> >> On Thu, Oct 22, 2015 at 9:20 AM, Stephan Ewen wrote:
>
Yes, let's do it
+1
2015-10-23 12:00 GMT+02:00 Stephan Ewen :
> +1 for 1.0 :-)
>
> On Fri, Oct 23, 2015 at 11:59 AM, Maximilian Michels
> wrote:
>
> > Dear Flink community,
> >
> > We have forked the current release candidate from the master to the
> > release-0.10-rc0 branch. Changes for the ne
wouldn't voice a -1 for spaces,
> but
> >> it seems to me like an unnecessary change that would touch every single
> >> Java file without substantially improving anything.
> >>
> >> JavaDocs by-module with JIRAs to track progress seems like the best
&g
t when the JIRA is resolved. And we should discuss the
> progress about those tickets regularly (to hopefully find volunteers to
> resolve them) and use a special tag for them.
>
> On 10/23/2015 01:59 PM, Fabian Hueske wrote:
> > And who should be "forced" to write the d
-10-19 17:53 GMT+02:00 Fabian Hueske :
> Thanks for your feedback, Alex.
>
> Chesnay is right, we cannot modify the GH assignee field at the moment. If
> this changes at some point, I would support your proposal.
>
> Regarding the PR - JIRA rule, this has been discussed
The website consists of two parts which are maintained in two separate
repositories:
1) The project website about features, community, etc.
2) The documentation of the project
We have the separation because we want to be able to update source and
documentation in one repository to avoid that the
gt; - Removal of Record API. Good thing to have, but should not be a release
> > blocker. I would be fine with doing this for 1.0
> >
> > On Thu, Oct 22, 2015 at 5:24 PM, Fabian Hueske
> wrote:
> >
> > > Hmm, it took IntelliJ some time to figure ou
ut the
> > role of a shepherd is a "meta" role, if I understand the guideline
> > correctly, and not a technical one (-> everybody should discuss,
> > comment, accept, mark to get merged, and merge PRs). So why do we need a
> > different shepherd there? I think, c
agree with Fabian and Ufuk that it makes sense to separate the website
> > and the source repository. However, the distinction between the
> > documentation and the homepage should be more clear.
> >
> > On Mon, Oct 26, 2015 at 10:35 AM, Ufuk Celebi wrote:
> >
> >>
I'll try to have a look at the proposal from a performance point of view in
the next few days.
Please ping me, if I don't follow up this thread.
Cheers, Fabian
2015-10-27 18:28 GMT+01:00 Martin Junghanns :
> Hi,
>
> At our group, we also moved several algorithms from Giraph to Gelly and
> ran into s
"Java 8" ??
>
> => "Python", "Interactive Scale Shell", "Connectors", "Iterations",
> "Hadoop" could go to "DataSet API"
> => "Storm" could go to "DataStream API"
> => as an alternative, &qu
logo as a link -- to me, it is always confusing if two links next
> to each other do the same thing).
>
> -Matthias
>
>
>
> On 10/27/2015 11:56 PM, Fabian Hueske wrote:
> > I agree that it is confusing that the documentation is linked more than
> > once from the pr
, all other changes are not that important
> to me right now but I wouldn't deny the structure of the documentation
> can be improved.
>
> On Wed, Oct 28, 2015 at 10:05 AM, Fabian Hueske wrote:
> >
> > I agree, two Overview links pointing to different locations should be
I would go for the first solution with the join.
This gives the engine the highest degree of freedom:
- repartition vs. broadcast-forward
- sort-merge vs. hash-join
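As an illustration of one of those physical strategies, a toy in-memory hash join (build a hash table on one input, probe with the other) can be sketched as below; the names and data are made up for illustration, this is not Flink's implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy in-memory hash join, illustrating one physical join strategy
// an engine can pick (vs. a sort-merge join). Not Flink code.
public class HashJoinSketch {

    // Joins (key, value) pairs on equal keys, emitting "left|right".
    public static List<String> hashJoin(List<Map.Entry<Integer, String>> build,
                                        List<Map.Entry<Integer, String>> probe) {
        // Build phase: hash table over the (ideally smaller) input.
        Map<Integer, List<String>> table = new HashMap<>();
        for (Map.Entry<Integer, String> e : build) {
            table.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        // Probe phase: stream the other input against the table.
        List<String> result = new ArrayList<>();
        for (Map.Entry<Integer, String> e : probe) {
            for (String v : table.getOrDefault(e.getKey(), List.of())) {
                result.add(v + "|" + e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<Integer, String>> left =
            List.of(Map.entry(1, "a"), Map.entry(2, "b"));
        List<Map.Entry<Integer, String>> right =
            List.of(Map.entry(1, "x"), Map.entry(1, "y"), Map.entry(3, "z"));
        System.out.println(hashJoin(left, right)); // [a|x, a|y]
    }
}
```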
Best, Fabian
2015-10-28 18:45 GMT+01:00 Vasiliki Kalavri :
> Hi Martin,
>
> isn't finding the intersection of edges enough in this
+1 to have binaries for both versions in Maven and as builds to download.
2015-10-26 17:11 GMT+01:00 Theodore Vasiloudis <
theodoros.vasilou...@gmail.com>:
> +1 for having binaries, I'm working on a Spark application currently with
> Scala 2.11 and having to rebuild everything when deploying e.g.
I'm sorry, but I have to give a -1 for this RC.
Starting a Scala 2.11 build (hadoop2 and hadoop24) with
./bin/start-local.sh fails with a ClassNotFoundException:
java.lang.NoClassDefFoundError:
org/apache/flink/shaded/org/apache/curator/RetryPolicy
at
org.apache.flink.runtime.jobmanager.Jo
efficiently emit 2 different types of records from a coGroup?
> 3. does it make any difference if we group/combine the messages before
> updating the workset or after?
>
> Cheers,
> -Vasia.
>
>
> On 27 October 2015 at 18:39, Fabian Hueske wrote:
>
> > I'll try
thout a delta iteration terminates
> fine. I didn't have access to the cluster today, so I couldn't debug this
> further, but I will do as soon as I have access again.
>
> The rest of my comments are inline:
>
> On 30 October 2015 at 17:53, Fabian Hueske wrote:
>
&
he
> > >> > scala version in a library is akin to saying "We don't depend on
> scala
> > >> for
> > >> > this library, so feel free to use whatever you want." Sbt users will
> > >> > typically specify the version of scala they use and
Oct 23, 2015 at 12:05 AM, Henry Saputra <
> >> henry.sapu...@gmail.com>
> >> > wrote:
> >> > >> Could we make certain rules to give warning instead of error?
> >> > >>
> >> > >> This would allow us to cherry-pick certain rules
aximilian Michels :
> I'm for leaving it as-is and renaming all artifacts which depend on
> Scala for the release following 0.10.
>
> On Mon, Nov 2, 2015 at 11:32 AM, Fabian Hueske wrote:
> > OK, let me try to summarize the discussion (and please correct me if I
> got
&
Scala 2.11 artifacts
> currently. For Scala 2.10 artifacts, we aren’t adding the version suffix
> for Flink with Java users.
> >
> > I’m for adding the version suffix to Scala 2.10 artifacts also. But I’m
> not sure that removing the version suffix from Java-only artifacts wou
-1
A user reported a bug in the Scala DataSet API: FLINK-2953
Should be easy to solve. I will provide a fix soon.
2015-10-30 15:51 GMT+01:00 Maximilian Michels :
> We can continue testing now:
>
> https://docs.google.com/document/d/1keGYj2zj_AOOKH1bC43Xc4MDz0eLhTErIoxevuRtcus/edit
>
> On Fri, Oc
ause of cached older dependencies, ...), it happened more than
> once during version upgrades, maven project re-organizations, etc.
>
> Doing it after 0.10 and having a few week to let it sink in and errors
> surface would probably be much safer...
>
> On Mon, Nov 2, 2015 at 5:19
Hi Andre,
Thanks for reaching out to the Flink community!
I am not sure your analysis is based on correct assumptions about Flink's
delta iterations.
Flink's delta iterations do support
- working and solution sets of different types
- worksets that grow and shrink or are completely replaced in ea
Hi Johannes,
that's the way to do it, IMO.
Cheers, Fabian
2015-11-03 22:39 GMT+01:00 Kirschnick, Johannes <
johannes.kirschn...@tu-berlin.de>:
> Hi List,
>
> I am stuck on a simple problem and though somebody might point me into the
> right direction.
>
> Basically I’m trying to
>
>
> -
nobody said to have a personal preference for tabs over
> spaces."
> >
> > I think this is wrong. If I am correct, Till and myself would prefer
> > spaces (but non of us is super strict about this -- we could both live
> > with tabs too).
> >
> > -Matt
FNXzg/view?usp=sharing
>
> https://drive.google.com/file/d/0BzQJrI2eGlyYN014NXp6OEZUdGs/view?usp=sharing
>
> Thanks a lot for the help!
>
> -Vasia.
>
> On 30 October 2015 at 21:28, Fabian Hueske wrote:
>
> > We can of course inject an optional ReduceFunction (or GroupReduce,
I agree with Robert. Looks like a bug in Flink.
Maybe an off-by-one issue (the violating index is 32768 and the default
memory segment size is 32KB, i.e., 32768 bytes with valid offsets 0-32767).
Which Flink version are you using?
In case you are using a custom build, can you share the commit ID (is
reported in the first lines of the JobManager l
I am not sure if we always should declare complete classes as
@PublicInterface.
This does definitely make sense for interfaces and abstract classes such as
MapFunction or InputFormat but not necessarily for classes such as DataSet
that we might want to extend by methods which should not immediately
his redundancy, by either
> emitting null fields or by creating an operator that could emit 2 different
> types of tuples?
>
> Thanks!
> -Vasia.
>
> On 9 November 2015 at 15:20, Fabian Hueske wrote:
>
> > Hi Vasia,
> >
> > sorry for the late reply.
> &
gt; Ali
> >
> >
> >
> > On 2015-11-10, 3:01 PM, "Ufuk Celebi" wrote:
> >
> > >Thanks for reporting this. Are you using any custom data types?
> > >
> > >If you can share your code, it would be very helpful in order to debug
>
Kryo to serialize and
deserialize your Pojos as follows:
ExecutionEnvironment env = ...
env.getConfig().enableForceKryo();
Best,
Fabian
[1] https://gist.github.com/fhueske/6c5aa386fc79ab69712b
2015-11-11 10:38 GMT+01:00 Fabian Hueske :
> Hi Ali,
>
> one more thing. Did that error occu
va.lang.String
> class org.apache.flink.api.java.typeutils.GenericTypeInfo : class
> io.pivotal.rti.protocols.ProtocolAttributeMap
> class org.apache.flink.api.java.typeutils.GenericTypeInfo : class
> io.pivotal.rti.protocols.ProtocolDetailMap
> class org.apache.flink.api.common.typeinf
Hi Gyula,
I just checked with jconsole that the memory allocation is correct.
However, the log message is a bit misleading. In streaming mode, the
managed memory is lazily allocated and the logged amount is an upper
bound.
Cheers, Fabian
2015-11-12 13:37 GMT+01:00 Gyula Fóra :
> Hey
+1
I checked:
1) on Windows 10 with Cygwin
- building from source without tests (mvn -DskipTests clean install) works
- building from source with tests (mvn clean install) fails: FLINK-2757
- start/stop scripts (start-local.sh, start-local-streaming.sh,
stop-local.sh) work
- submitting example job
The failing tests on Windows should *NOT* block the release, IMO. ;-)
2015-11-12 14:48 GMT+01:00 Fabian Hueske :
> +1
>
> I checked:
> 1) on Windows 10 with Cygwin
> - building from source without tests (mvn -DskipTests clean install) works
> - building from source with tests
> this. I will stick with 0.9.1 for now but I’ll download and use 0.10 as
> soon as it’s released.
>
> Cheers,
> Ali
>
> On 2015-11-11, 6:00 PM, "Fabian Hueske" wrote:
>
> >Hi Ali,
> >
> >Flink uses different serializers for different data
Hi everybody,
with 0.10.0 almost being released I started writing release notes for the
Flink blog.
Please find the current draft here:
https://docs.google.com/document/d/1ULZAdxwneZAldhJ69tB3UEvjJQhS-ZASN5mdtumtJ48/edit?usp=sharing
Everybody has permissions to comment the draft. Please let me k
> > On 12 Nov 2015, at 21:25, Fabian Hueske wrote:
> >
> > Hi everybody,
> >
> > with 0.10.0 almost being released I started writing release notes for the
> > Flink blog.
> >
> > Please find the current draft here:
> >
> https://docs.googl
Hi everybody,
thanks for your input and feedback.
I updated the draft and published the release announcement a few minutes
ago:
--> http://flink.apache.org/news/2015/11/16/release-0.10.0.html
Thanks,
Fabian
2015-11-15 20:39 GMT+01:00 Alexander Alexandrov <
alexander.s.alexand...@gmail.com>:
>
Hi everybody,
The Flink community is excited to announce that Apache Flink 0.10.0 has
been released.
Please find the release announcement here:
--> http://flink.apache.org/news/2015/11/16/release-0.10.0.html
Best,
Fabian
Sounds like a good idea to me.
+1
Fabian
2015-12-10 15:31 GMT+01:00 Maximilian Michels :
> Hi squirrels,
>
> By this time, we have numerous connectors which let you insert data
> into Flink or output data from Flink.
>
> On the streaming side we have
>
> - RollingSink
> - Flume
> - Kafka
> - Ni
Hi Vasia,
I agree, Gremlin definitely looks like an interesting API for Flink.
I'm not sure how it relates to Gelly. I guess Gelly would (initially) be
more tightly integrated with the DataSet API whereas Gremlin would be a
connector for other languages. Any ideas on this?
Another question would
sing the graph.
> > >
> > > It looked like it would work well. We remember we could even model
> > > recursive conditions on traversals pretty well with delta iterations.
> > >
> > > If Gremlin's use cases are anything like Cypher, I could ping Marti
+1 for the second approach.
Regarding Stephan's comment, I think it would be better to have dedicated
WindowAssigner classes. Otherwise, this becomes inconsistent with the
dedicated Event/ProcessingTimeTriggers.
2015-12-18 12:03 GMT+01:00 Stephan Ewen :
> I am also in favor of option (2).
>
> We
14:52 GMT+01:00 Aljoscha Krettek :
> @Fabian So you would actually be for option 3) of my initial proposals?
>
> @Stephan What do you mean by that? Would users set a time characteristic
> per job or per window assigner in your suggestion?
>
> On Fri, 18 Dec 2015 at 12:10 Fabian
Stephan already opened a PR for restructuring the examples:
--> https://github.com/apache/flink/pull/1482
Otherwise +1!
2016-01-06 16:21 GMT+01:00 Robert Metzger :
> The renaming hasn't been done completely. I would like to fix this before
> the 1.0.0 release.
>
> I think the following issues a
Hi everybody,
over the last few days, Timo and I refined the design document for adding a SQL /
StreamSQL interface on top of Flink that was started by Stephan.
The document proposes an architecture that is centered around Apache
Calcite. Calcite is an Apache top-level project and includes a SQL parser
u created JIRA ticket to keep track of this new feature?
>
> - Henry
>
> On Thu, Jan 7, 2016 at 6:05 AM, Fabian Hueske wrote:
> > Hi everybody,
> >
> > in the last days, Timo and I refined the design document for adding a
> SQL /
> > StreamSQL interface on
+1
2016-01-08 17:35 GMT+01:00 Till Rohrmann :
> +1 since it increase maintainability of the code base if it is not really
> used and thus removed.
>
> On Fri, Jan 8, 2016 at 5:33 PM, Ufuk Celebi wrote:
>
> > +1
> >
> > I wanted to make a similar proposal.
> >
> > – Ufuk
> >
> > > On 08 Jan 2016,
gt;
> > On Sunday, January 10, 2016, Henry Saputra
> wrote:
> >
> >> Awesome! Thanks for the reply, Fabian.
> >>
> >> - Henry
> >>
> >> On Sunday, January 10, 2016, Fabian Hueske >> > wrote:
> >>
> >>> Hi Henry,
> >>
language (EPL) [2] could be a
> good starting point.
>
> [1] https://docs.oracle.com/database/121/DWHSG/pattern.htm#DWHSG8959
> [2] http://www.espertech.com/esper/release-5.2.0/esper-reference/html/
>
> Cheers,
> Till
>
> On Mon, Jan 11, 2016 at 10:12 AM, Fabian Hueske
Hi Arjun,
yes, it is possible to start the web dashboard when running jobs from the
IDE as well.
It is a bit hacky though...
You can do it by creating a LocalStreamExecutionEnvironment as follows:
val config = new Configuration()
config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true)
config
Hi everybody,
Lately, ASF Infra has changed the write permissions of all Git repositories
twice.
Originally, it was not possible to force-push into the master branch.
A few weeks ago, Infra also disabled force pushing into other branches.
Now, this has changed again after the issue was discussed with
@flink.apache.org
> > Subject: Re: [DISCUSS] Git force pushing and deletion of branches
> >
> > +1 for protecting the master branch.
> >
> > I also don't see any reason why anyone should force push there
> >
> > Gyula
> >
> > Fabian Hueske wrote: