Hi Dian & Jark,
I checked out your prototype code, but it didn't pass the CepITCase test in
the flink-table component.
It turns out that in the `MatchCodeGenerator.scala` file, line 74 should
use `${classOf[IterativeCondition.Context[_]].getCanonicalName}` instead of
`${classOf[IterativeCondition.Context[_]]}`.
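For anyone wondering why that matters: interpolating a `Class` object directly invokes its `toString`, which prefixes the name with `interface `/`class ` and uses `$` for nested types, so the generated source wouldn't compile; `getCanonicalName` gives the dotted type name. A quick sketch using a standard-library nested type (`java.util.Map.Entry` standing in for `IterativeCondition.Context`, so it runs without Flink on the classpath):

```scala
object ClassNameDemo {
  def main(args: Array[String]): Unit = {
    // Interpolating the Class object calls toString, which yields
    // "interface java.util.Map$Entry" -- not valid as a type name in generated code.
    val viaToString = s"${classOf[java.util.Map.Entry[_, _]]}"
    // getCanonicalName yields "java.util.Map.Entry", which compiles as a type name.
    val viaCanonicalName = s"${classOf[java.util.Map.Entry[_, _]].getCanonicalName}"
    println(viaToString)
    println(viaCanonicalName)
  }
}
```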
I've also read your design document and it looks fine to me. Actually, I
have been working on the same thing recently; maybe we can work together
to push this forward.
Thanks,
Yueting Chen
On Tue, Jun 13, 2017 at 10:44 AM, Dian Fu <dia...@apache.org> wrote:
> Hi Fabian,
>
> We have roughly evaluated the missing features of Flink CEP; it should not
> be too difficult to support them. Kostas, Dawid, what are your thoughts?
>
> For supporting MATCH_RECOGNIZE, do you think we could create the JIRAs and
> start working right now, or should we wait until the release of Calcite
> 1.13?
>
> Btw, could you help add me (dian.fu) to the contributor list, so that I
> can assign the JIRAs to myself? Thanks a lot.
>
> Best regards,
> Dian
>
> On Tue, Jun 13, 2017 at 3:59 AM, Fabian Hueske <fhue...@gmail.com> wrote:
>
> > Hi Jark,
> >
> > Thanks for updating the design doc and sharing your prototype!
> > I didn't look at the code in detail, but the fact that it is less than 1k
> > LOC is very promising. It seems that most of the complex CEP logic can be
> > reused :-)
> > Adding a dependency on flink-cep should be fine, IMO. It is a very slim
> > library with almost no external dependencies.
> >
> > Regarding the missing features of Flink CEP that you listed in the
> > design doc, it would be good to get some input from Kostas and Dawid,
> > who are the main contributors to CEP.
> > Do you already have plans for some of the missing features, or can you
> > assess how hard it would be to integrate them?
> >
> > Cheers, Fabian
> >
> > Btw. The Calcite community started a discussion about releasing Calcite
> > 1.13. So, the missing features might soon be available.
> >
> > 2017-06-12 14:25 GMT+02:00 Jark Wu <j...@apache.org>:
> >
> > > Hi guys,
> > >
> > > Good news! We have made a prototype for integrating CEP and SQL. See
> > > this link:
> > > https://github.com/apache/flink/compare/master...wuchong:cep-on-sql?expand=1
> > >
> > >
> > > You can check CepITCase to try the simple SQL example.
> > >
> > > Meanwhile, we updated our design doc with additional implementation
> > > details, including how to translate MATCH_RECOGNIZE into the CEP API,
> > > the features that need to be added to Flink CEP, and the
> > > implementation plan. See the document:
> > > https://docs.google.com/document/d/1HaaO5eYI1VZjyhtVPZOi3jVzikU7iK15H0YbniTnN30/edit#heading=h.4oas4koy8qu3
> > >
> > > In the prototype, we made flink-table depend on flink-cep. Do you
> > > think that is fine?
> > >
> > > What do you think about the prototype and the design doc?
> > >
> > > Any feedback is welcome!
> > >
> > > Cheers,
> > > Jark Wu
> > >
> > >
> > > 2017-06-08 17:54 GMT+08:00 Till Rohrmann <trohrm...@apache.org>:
> > >
> > > > Thanks for sharing your ideas with the community. I really like the
> > > > design document and think that it's a good approach to follow
> > > > Oracle's SQL extension for pattern matching. Looking forward to
> > > > having support for SQL with CEP capabilities :-)
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Thu, Jun 8, 2017 at 8:57 AM, Jark Wu <j...@apache.org> wrote:
> > > >
> > > > > Hi @Kostas, @Fabian, thank you for your support.
> > > > >
> > > > > @Fabian, I totally agree with you that we should focus on SQL
> > > > > first. Let's keep Table API in mind and discuss that later.
> > > > >
> > > > > Regarding the orderBy() clause, I'm not sure about that. I think
> > > > > it makes sense to make it required in streaming mode (either order
> > > > > by rowtime or order by proctime). But CEP also works in batch
> > > > > mode, where it is not necessary to order by some column.
> > > > > Nevertheless, we can support CEP on batch SQL later.
> > > > >
> > > > >