RE: [ANNOUNCE] Chengxiang Li added as committer

2016-01-19 Thread Li, Chengxiang
Thanks everyone, it's always great to collaborate with you, and I look forward to contributing more to Flink. Thanks, Chengxiang -Original Message- From: Paris Carbone [mailto:par...@kth.se] Sent: Tuesday, January 19, 2016 9:24 PM To: dev@flink.apache.org Subject: Re: [ANNOUNCE]

RE: [DISCUSS] Git force pushing and deletion of branches

2016-01-13 Thread Li, Chengxiang
+1 on the original style: the master branch disables force pushing to guard against misuse, and feature branches allow force pushing for flexible development. -Original Message- From: Gyula Fóra [mailto:gyf...@apache.org] Sent: Wednesday, January 13, 2016 6:36 PM To: dev@flink.apache.org Subject:

RE: Effort to add SQL / StreamSQL to Flink

2016-01-07 Thread Li, Chengxiang
Very cool work, I look forward to contributing. -Original Message- From: Chiwan Park [mailto:chiwanp...@apache.org] Sent: Friday, January 8, 2016 9:36 AM To: dev@flink.apache.org Subject: Re: Effort to add SQL / StreamSQL to Flink Really good! Many people want to use SQL. :) > On Jan 8,

RE: The null in Flink

2015-12-07 Thread Li, Chengxiang
Thanks, Chengxiang -Original Message- From: Li, Chengxiang [mailto:chengxiang...@intel.com] Sent: Thursday, December 3, 2015 4:43 PM To: dev@flink.apache.org Subject: RE: The null in Flink Hi Stephan, Treating UNKNOWN as FALSE may work if the Boolean expression is used in a filter operation

RE: The null in Flink

2015-12-03 Thread Li, Chengxiang
term becomes UNKNOWN and the row is filtered out (as if the predicate was false) - the result of the query contains no rows where predicate results are UNKNOWN. Stephan On Tue, Dec 1, 2015 at 4:09 AM, Li, Chengxiang <chengxiang...@intel.com> wrote: > Stephen, > For the 3rd topic, y
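
The filter semantics described above (a row whose predicate evaluates to UNKNOWN is dropped, exactly as if the predicate were FALSE) can be sketched with a nullable `Boolean` standing in for SQL's UNKNOWN. This is an illustrative standalone example, not Flink code; the class and method names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ThreeValuedFilter {

    // A predicate result of null models SQL's UNKNOWN:
    // comparing against a NULL value yields UNKNOWN, not false.
    static Boolean greaterThan(Integer value, int threshold) {
        return value == null ? null : value > threshold;
    }

    // SQL WHERE keeps a row only when the predicate is TRUE;
    // both FALSE and UNKNOWN cause the row to be filtered out.
    static List<Integer> where(List<Integer> rows, int threshold) {
        return rows.stream()
                .filter(r -> Boolean.TRUE.equals(greaterThan(r, threshold)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> rows = Arrays.asList(5, null, 10, 1);
        System.out.println(where(rows, 3)); // prints [5, 10] — the NULL row is dropped
    }
}
```

Note that the NULL row disappears from the result even though its predicate is not FALSE, which matches the behavior Stephan describes: queries never return rows whose predicate result is UNKNOWN.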

RE: The null in Flink

2015-11-30 Thread Li, Chengxiang
ns are monotonous (have no NOT), then the >> UNKNOWN value can be the same as FALSE. So the query planner had to >> rewrite all expression trees to have no NOT, which means pushing the >> NOT down into the leaf comparison operations (for example push NOT into == >> to become !=).
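
The quoted point — that UNKNOWN can only be equated with FALSE when the expression tree contains no NOT, so the planner pushes NOT down into the leaf comparisons (NOT of == becomes !=) — can be demonstrated with a small sketch. The class and helper names are illustrative, not a planner implementation.

```java
public class NotPushdown {

    // Kleene three-valued NOT: NOT UNKNOWN is UNKNOWN (null), not TRUE.
    static Boolean not(Boolean b) {
        return b == null ? null : !b;
    }

    static Boolean eq(Integer a, Integer b) {
        return (a == null || b == null) ? null : a.equals(b);
    }

    static Boolean neq(Integer a, Integer b) {
        return (a == null || b == null) ? null : !a.equals(b);
    }

    public static void main(String[] args) {
        Integer x = null; // a NULL column value

        // Naively collapsing the inner UNKNOWN to FALSE *before* applying NOT
        // is wrong: NOT(FALSE) = TRUE, so the row would wrongly pass the filter.
        boolean naive = !Boolean.TRUE.equals(eq(x, 1));

        // Pushing NOT into the leaf first (NOT (x == 1)  =>  x != 1) keeps
        // UNKNOWN at the top of the tree, where the filter can then safely
        // treat it as FALSE and drop the row.
        boolean rewritten = Boolean.TRUE.equals(neq(x, 1));

        System.out.println(naive + " " + rewritten); // prints: true false
    }
}
```

This is exactly why the rewrite must happen before the UNKNOWN-as-FALSE shortcut is applied: the shortcut is only sound for NOT-free (monotone) expression trees.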

RE: The null in Flink

2015-11-25 Thread Li, Chengxiang
Hi, On this mailing list there have been some discussions about null value handling in Flink, and I saw several related JIRAs as well (like FLINK-2203, FLINK-2210), but unfortunately they were reverted due to immature design, with no further action since then. I would like to pick this topic up here, as it's

RE: The null in Flink

2015-11-25 Thread Li, Chengxiang
on top of the Table API. Regards, Timo On 25.11.2015 11:31, Li, Chengxiang wrote: > Hi > In this mail list, there are some discussions about null value handling in > Flink, and I saw several related JIRAs as well(like FLINK-2203, FLINK-2210), > but unfortunately, got reverted due to imm

RE: A proposal about skew data handling in Flink

2015-10-19 Thread Li, Chengxiang
g a discussion about data skew! I agree, it's a > important issue that can cause a lot of problems. > I'll have a look at your proposal and add comments soon. > > Thanks, Fabian > > 2015-10-15 12:24 GMT+02:00 Li, Chengxiang <chengxiang...@intel.com>: > >> Dear all, >> I
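
The proposal itself is not quoted in this snippet. As background only — and not necessarily the technique the proposal suggests — one common mitigation for skewed grouping keys is "salting": a hot key is spread over N sub-keys so no single parallel task receives the entire hot group, and the partial results are merged in a second step. All names below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

public class KeySalting {
    static final int SALT_BUCKETS = 4;

    // Append a random salt so one hot key fans out over SALT_BUCKETS sub-keys.
    static String salt(String key) {
        return key + "#" + ThreadLocalRandom.current().nextInt(SALT_BUCKETS);
    }

    // Recover the original key when merging partial results.
    static String unsalt(String saltedKey) {
        return saltedKey.substring(0, saltedKey.lastIndexOf('#'));
    }

    // Step 1: count per salted key (this step distributes evenly in parallel);
    // Step 2: merge the partial counts back per original key.
    public static Map<String, Long> countWithSalting(Iterable<String> keys) {
        Map<String, Long> partial = new HashMap<>();
        for (String k : keys) {
            partial.merge(salt(k), 1L, Long::sum);
        }
        Map<String, Long> merged = new HashMap<>();
        partial.forEach((sk, c) -> merged.merge(unsalt(sk), c, Long::sum));
        return merged;
    }
}
```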

RE: [Proposal] Create a separate sub module for benchmark test

2015-09-22 Thread Li, Chengxiang
microbenchmarks with test execution. The code for these benchmarks resides in the test scope of the projects (so it is not packaged), but it is not executed as part of the UnitTests or IntegrationTests. Greetings, Stephan On Tue, Sep 22, 2015 at 12:22 PM, Li, Chengxiang <chengxiang...@intel.com>

[Proposal] Create a separate sub module for benchmark test

2015-09-22 Thread Li, Chengxiang
Hi folks, While working on Flink, I found several micro benchmarks spread across different modules. These benchmarks are meant to be run manually, but they are annotated with JUnit annotations, so they get executed during the unit tests as well. There are some shortcomings in the current implementation: 1. Benchmark test
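
The direction of the proposal — benchmarks launched through an explicit entry point (for example from a dedicated benchmark sub-module, or with a harness such as JMH) rather than annotated as JUnit tests, so `mvn test` never runs them — can be sketched as follows. The harness, warmup scheme, and class names are illustrative, not Flink code.

```java
public class SortBenchmark {

    // The operation under measurement: fill and sort an array.
    static long[] payload(int n) {
        long[] data = new long[n];
        for (int i = 0; i < n; i++) {
            data[i] = (i * 0x9E3779B97F4A7C15L) >>> 3; // pseudo-random fill
        }
        java.util.Arrays.sort(data);
        return data;
    }

    // Returns average nanoseconds per run after a warmup phase.
    // Warmup matters for JVM microbenchmarks: it lets the JIT compile
    // the hot path before measurement starts.
    static double measure(int n, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) {
            payload(n);
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            payload(n);
        }
        return (System.nanoTime() - start) / (double) iterations;
    }

    // Launched explicitly; never picked up by the JUnit test runner.
    public static void main(String[] args) {
        System.out.printf("sort(100_000): %.0f ns/op%n", measure(100_000, 5, 10));
    }
}
```

A hand-rolled harness like this illustrates the separation of concerns; a real benchmark module would likely adopt an established harness (such as JMH) to handle warmup, forking, and dead-code elimination correctly.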

Use bloom filter to improve hybrid hash join performance

2015-06-18 Thread Li, Chengxiang
Hi Flink developers, I read the Flink hybrid hash join documentation and implementation; very nice job. For the case where the small table does not fit entirely into memory, I think we may be able to improve performance further. Currently in the hybrid hash join, when the small table does not fit into memory, part
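
The idea behind the proposal can be sketched as follows: when a build-side (small table) partition is spilled to disk, record its join keys in a Bloom filter; probe-side records whose key is definitely absent from the filter can then be discarded immediately instead of being spilled and re-read later. The class, hash functions, and method names below are illustrative, not Flink's actual implementation.

```java
import java.util.BitSet;

public class SpilledPartitionFilter {
    private final BitSet bits;
    private final int size;

    public SpilledPartitionFilter(int sizeInBits) {
        this.size = sizeInBits;
        this.bits = new BitSet(sizeInBits);
    }

    // Two simple hash functions; a production filter would tune k
    // to the expected key count and bit-array size.
    private int h1(long key) {
        return Math.floorMod(Long.hashCode(key), size);
    }

    private int h2(long key) {
        return Math.floorMod(Long.hashCode(key * 0x9E3779B97F4A7C15L), size);
    }

    // Called for every build-side key written to the spilled partition.
    public void addBuildKey(long key) {
        bits.set(h1(key));
        bits.set(h2(key));
    }

    // Called for every probe-side key routed to that partition.
    // false => the key is definitely not on the build side: skip the record.
    // true  => the key may be present: the record must still be spilled/probed.
    public boolean mightMatch(long key) {
        return bits.get(h1(key)) && bits.get(h2(key));
    }
}
```

Because a Bloom filter has no false negatives, filtering with `mightMatch` never loses a join match; false positives only cost the same spill-and-probe work that would have happened anyway.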