[jira] [Created] (FLINK-24456) Support bounded offset in the Kafka table connector

2021-10-05 Thread Haohui Mai (Jira)
Haohui Mai created FLINK-24456:
--

 Summary: Support bounded offset in the Kafka table connector
 Key: FLINK-24456
 URL: https://issues.apache.org/jira/browse/FLINK-24456
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai


The {{setBounded}} API in the DataStream connector of Kafka is particularly 
useful when writing tests. Unfortunately, the Kafka table connector lacks the 
same API.

It would be good to have this API added.
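For context, the bounded mode in the DataStream connector looks like the 
following (a sketch only; the topic and bootstrap server names are made up):

{noformat}
KafkaSource<String> source = KafkaSource.<String>builder()
    .setBootstrapServers("kafka:9092")
    .setTopics("test-topic")
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .setBounded(OffsetsInitializer.latest())
    .build();
{noformat}

In a test, the job terminates once the source reaches the bounded offsets; the 
table connector currently offers no equivalent switch.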



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24485) COUNT(DISTINCT) should support binary field

2021-10-08 Thread Haohui Mai (Jira)
Haohui Mai created FLINK-24485:
--

 Summary: COUNT(DISTINCT) should support binary field
 Key: FLINK-24485
 URL: https://issues.apache.org/jira/browse/FLINK-24485
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.14.0
Reporter: Haohui Mai


Currently the SQL API fails when doing {{COUNT(DISTINCT)}} on a binary field. In 
our use case we store UUIDs as 16-byte binary strings.

While it is possible to work around the issue by base64-encoding the string, it 
should be relatively straightforward to implement a native solution for optimal 
speed.
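At its core, the native solution has to give binary values content-based 
equality. A minimal pure-Java sketch (not Flink's internal aggregate code) of 
what the distinct count over binary fields boils down to:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DistinctBinary {
    // Counts distinct byte[] values by content. Raw byte[] keys would be
    // compared by identity (Object.equals), so every row would look distinct;
    // wrapping in ByteBuffer gives content-based equals/hashCode.
    public static int countDistinct(Iterable<byte[]> values) {
        Set<ByteBuffer> seen = new HashSet<>();
        for (byte[] v : values) {
            seen.add(ByteBuffer.wrap(v));
        }
        return seen.size();
    }

    public static void main(String[] args) {
        byte[] a = {1, 2};
        byte[] b = {1, 2}; // same content, different object
        byte[] c = {3, 4};
        System.out.println(countDistinct(Arrays.asList(a, b, c))); // 2
    }
}
```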



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-7159) Semantics of OVERLAPS in Table API diverge from the SQL standard

2017-07-11 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7159:
-

 Summary: Semantics of OVERLAPS in Table API diverge from the SQL 
standard
 Key: FLINK-7159
 URL: https://issues.apache.org/jira/browse/FLINK-7159
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


According to http://web.cecs.pdx.edu/~len/sql1999.pdf

ISO/IEC 9075-2:1999 (E) ©ISO/IEC, 8.12 

{noformat}
The result of the <overlaps predicate> is the result of the following 
expression:
( S1 > S2 AND NOT ( S1 >= T2 AND T1 >= T2 ) )
OR
( S2 > S1 AND NOT ( S2 >= T1 AND T2 >= T1 ) )
OR
( S1 = S2 AND ( T1 <> T2 OR T1 = T2 ) )
{noformat}

The Table API diverges from this semantic.
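For reference, a direct transcription of the quoted expression into plain Java 
(periods as longs; the standard's NULL handling and degenerate periods are 
ignored in this sketch):

```java
public class OverlapsPredicate {
    // (S1, T1) and (S2, T2) are the two periods. The body mirrors the
    // SQL:1999 expression term by term; the last disjunct is kept verbatim
    // from the standard even though it reduces to S1 = S2.
    public static boolean overlaps(long s1, long t1, long s2, long t2) {
        return (s1 > s2 && !(s1 >= t2 && t1 >= t2))
            || (s2 > s1 && !(s2 >= t1 && t2 >= t1))
            || (s1 == s2 && (t1 != t2 || t1 == t2));
    }

    public static void main(String[] args) {
        System.out.println(overlaps(1, 5, 4, 8)); // true: the periods share [4, 5)
        System.out.println(overlaps(1, 2, 3, 4)); // false: disjoint
        System.out.println(overlaps(1, 2, 2, 3)); // false: touching endpoints only
    }
}
```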



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7235) Backport CALCITE-1884 to the Flink repository before Calcite 1.14

2017-07-19 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7235:
-

 Summary: Backport CALCITE-1884 to the Flink repository before 
Calcite 1.14
 Key: FLINK-7235
 URL: https://issues.apache.org/jira/browse/FLINK-7235
 Project: Flink
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


We need to backport CALCITE-1884 in order to unblock upgrading Calcite to 1.14.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7237) Remove DateTimeUtils from Flink once Calcite is upgraded to 1.14

2017-07-19 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7237:
-

 Summary: Remove DateTimeUtils from Flink once Calcite is upgraded 
to 1.14
 Key: FLINK-7237
 URL: https://issues.apache.org/jira/browse/FLINK-7237
 Project: Flink
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7236) Bump up the Calcite version to 1.14

2017-07-19 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7236:
-

 Summary: Bump up the Calcite version to 1.14
 Key: FLINK-7236
 URL: https://issues.apache.org/jira/browse/FLINK-7236
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


This is the umbrella task to coordinate tasks to upgrade Calcite to 1.14.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7344) Migrate usage of joda-time to the Java 8 DateTime API

2017-08-02 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7344:
-

 Summary: Migrate usage of joda-time to the Java 8 DateTime API
 Key: FLINK-7344
 URL: https://issues.apache.org/jira/browse/FLINK-7344
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


As the minimum Java version of Flink has been upgraded to 1.8, it is a good 
time to migrate all usage of the joda-time package to the native Java 8 
DateTime API.
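For illustration, a typical one-for-one replacement looks like the following 
(the method name and pattern are made up for the example; only the JDK is used):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class JodaToJavaTime {
    // joda-time: new DateTime(millis, DateTimeZone.UTC).toString("yyyy-MM-dd")
    // java.time equivalent, with no third-party dependency:
    public static String formatUtcDate(long epochMillis) {
        ZonedDateTime dt = Instant.ofEpochMilli(epochMillis).atZone(ZoneOffset.UTC);
        return dt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
    }

    public static void main(String[] args) {
        System.out.println(formatUtcDate(0L)); // 1970-01-01
    }
}
```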



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7392) Enable more predicate push-down in joins

2017-08-08 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7392:
-

 Summary: Enable more predicate push-down in joins
 Key: FLINK-7392
 URL: https://issues.apache.org/jira/browse/FLINK-7392
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


This is a follow-up of FLINK-6429.

As a quick workaround to prevent pushing down projections for time indicators, 
FLINK-6429 reverts the behavior of {{ProjectJoinTransposeRule}} back to the one 
in Calcite 1.12.

As [~jark] suggested in FLINK-6429, we can selectively disable the push-down 
for time indicators in {{ProjectJoinTransposeRule}}. This jira tracks the 
effort of implementing the suggestion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7743) Remove the restriction of minimum memory of JM

2017-09-30 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7743:
-

 Summary: Remove the restriction of minimum memory of JM
 Key: FLINK-7743
 URL: https://issues.apache.org/jira/browse/FLINK-7743
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


Per discussion on 
http://mail-archives.apache.org/mod_mbox/flink-user/201709.mbox/%3c4f77255e-1ddb-4e99-a667-73941b110...@apache.org%3E

It might be great to remove the restriction of the minimum heap size of the JM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7787) Remove guava dependency in the cassandra connector

2017-10-09 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-7787:
-

 Summary: Remove guava dependency in the cassandra connector
 Key: FLINK-7787
 URL: https://issues.apache.org/jira/browse/FLINK-7787
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


As discovered in FLINK-6225, the cassandra connector uses the future classes in 
the guava library. We can get rid of the dependency by using the equivalent 
classes provided by Java 8.
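As a sketch of the migration, Guava's {{Futures.transform(future, fn, 
executor)}} maps to {{CompletableFuture.thenApply}} in the JDK (the method 
below is illustrative, not actual connector code):

```java
import java.util.concurrent.CompletableFuture;

public class FutureMigration {
    // Guava: Futures.transform(listenableFuture, function, executor)
    // JDK 8 equivalent, with no extra dependency:
    public static CompletableFuture<Integer> lengthOf(CompletableFuture<String> f) {
        return f.thenApply(String::length);
    }

    public static void main(String[] args) {
        CompletableFuture<String> f = CompletableFuture.completedFuture("cassandra");
        System.out.println(lengthOf(f).join()); // 9
    }
}
```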



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-5489) maven release:prepare fails due to invalid JDOM comments in pom.xml

2017-01-13 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-5489:
-

 Summary: maven release:prepare fails due to invalid JDOM comments 
in pom.xml
 Key: FLINK-5489
 URL: https://issues.apache.org/jira/browse/FLINK-5489
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.2.0, 1.3.0
Reporter: Haohui Mai
Priority: Minor


When I was trying to publish Flink to our internal artifactory, I found that 
{{maven release:prepare}} failed because the plugin complains that some of the 
comments in pom.xml do not conform to the JDOM format:

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-release-plugin:2.4.2:prepare (default-cli) on 
project flink-parent: Execution default-cli of goal 
org.apache.maven.plugins:maven-release-plugin:2.4.2:prepare failed: The data "-
[ERROR] This module is used a dependency in the root pom. It activates shading 
for all sub modules
[ERROR] through an include rule in the shading configuration. This assures that 
Maven always generates
[ERROR] an effective pom for all modules, i.e. get rid of Maven properties. In 
particular, this is needed
[ERROR] to define the Scala version property in the root pom but not let the 
root pom depend on Scala
[ERROR] and thus be suffixed along with all other modules.
[ERROR] " is not legal for a JDOM comment: Comment data cannot start with a 
hyphen.
{noformat}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5583) Support flexible error handling in the Kafka consumer

2017-01-19 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-5583:
-

 Summary: Support flexible error handling in the Kafka consumer
 Key: FLINK-5583
 URL: https://issues.apache.org/jira/browse/FLINK-5583
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


We found that it is valuable to allow the applications to handle errors and 
exceptions in the Kafka consumer in order to build a robust application in 
production.

The context is the following:

(1) We have schematized Avro records flowing through Kafka.
(2) The decoder implements the DeserializationSchema to decode the records.
(3) Occasionally there are corrupted records (e.g., schema issues). The 
streaming pipeline might want to bail out (which is the current behavior) or to 
skip the corrupted records, depending on the application.

Two options are available:

(1) Have a variant of DeserializationSchema that returns a FlatMap-like 
structure, as suggested in FLINK-3679.
(2) Allow the applications to catch and handle the exception by exposing some 
APIs that are similar to the {{ExceptionProxy}}.
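Option (1) can be sketched in plain Java, independently of the actual 
{{DeserializationSchema}} interface (all names below are made up for 
illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class SkippingDecoder {
    // Instead of letting a single corrupt record fail the pipeline, the
    // decoder produces zero-or-more results per record (a flatMap-like shape),
    // so corrupt records can either abort the job or be dropped.
    public static <T> List<T> decodeAll(List<byte[]> records,
                                        Function<byte[], T> decoder,
                                        boolean skipCorrupt) {
        List<T> out = new ArrayList<>();
        for (byte[] record : records) {
            try {
                out.add(decoder.apply(record));
            } catch (RuntimeException e) {
                if (!skipCorrupt) {
                    throw e; // current behavior: bail out
                }
                // skipCorrupt: drop the record; a real implementation
                // would count or log it here
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<byte[]> records = List.of("ok".getBytes(), new byte[0], "fine".getBytes());
        Function<byte[], String> decoder = b -> {
            if (b.length == 0) throw new IllegalArgumentException("corrupt");
            return new String(b);
        };
        System.out.println(decodeAll(records, decoder, true)); // [ok, fine]
    }
}
```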

Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5631) [yarn] Support downloading additional jars from non-HDFS paths

2017-01-24 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-5631:
-

 Summary: [yarn] Support downloading additional jars from non-HDFS 
paths
 Key: FLINK-5631
 URL: https://issues.apache.org/jira/browse/FLINK-5631
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


Currently the {{YarnResourceManager}} and {{YarnApplicationMasterRunner}} 
always register the additional jars using the YARN filesystem object. This is 
problematic as the paths might require another filesystem.

To support localizing from non-HDFS paths (e.g., s3, http or viewfs), the 
cleaner approach is to get the filesystem object from the path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5725) Support JOIN between two streams in the SQL API

2017-02-06 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-5725:
-

 Summary: Support JOIN between two streams in the SQL API
 Key: FLINK-5725
 URL: https://issues.apache.org/jira/browse/FLINK-5725
 Project: Flink
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Haohui Mai


As described in the title.

This jira proposes to support joining two streaming tables in the SQL API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-5931) Make Flink highly available even if defaultFS is unavailable

2017-02-27 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-5931:
-

 Summary: Make Flink highly available even if defaultFS is 
unavailable
 Key: FLINK-5931
 URL: https://issues.apache.org/jira/browse/FLINK-5931
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


In order to use Flink in mission-critical environments, Flink must be available 
even if the {{defaultFS}} is unavailable.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-5954) Always assign names to the window in the Stream SQL API

2017-03-02 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-5954:
-

 Summary: Always assign names to the window in the Stream SQL API
 Key: FLINK-5954
 URL: https://issues.apache.org/jira/browse/FLINK-5954
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


CALCITE-1603 and CALCITE-1615 bring in support for the {{TUMBLE}}, {{HOP}}, 
and {{SESSION}} grouped windows, as well as the corresponding auxiliary 
functions that allow users to query the start and the end of the windows 
(e.g., {{TUMBLE_START()}} and {{TUMBLE_END()}}; see 
http://calcite.apache.org/docs/stream.html for more details).

The goal of this jira is to add support for these auxiliary functions in Flink. 
Flink already has runtime support for them, as these functions are essentially 
mapped to the {{WindowStart}} and {{WindowEnd}} classes.

To implement this feature, the transformation needs to recognize these 
functions and map them to the {{WindowStart}} and {{WindowEnd}} classes.

The problem is that both classes can only refer to windows using aliases. 
Therefore this jira proposes to assign a unique name to each window to enable 
the transformation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6011) Support TUMBLE, HOP, SESSION window in streaming SQL

2017-03-09 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6011:
-

 Summary: Support TUMBLE, HOP, SESSION window in streaming SQL
 Key: FLINK-6011
 URL: https://issues.apache.org/jira/browse/FLINK-6011
 Project: Flink
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


CALCITE-1603 and CALCITE-1615 introduce support for the {{TUMBLE}} / 
{{HOP}} / {{SESSION}} windows in the parser.

This jira tracks the effort of adding the corresponding support in the 
planners / optimizers in Flink.
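For reference, the grouped-window syntax that the parser accepts looks like 
the following (table and column names are made up):

{noformat}
SELECT userId, TUMBLE_END(rowtime, INTERVAL '1' HOUR) AS windowEnd, COUNT(*)
FROM Clicks
GROUP BY userId, TUMBLE(rowtime, INTERVAL '1' HOUR)
{noformat}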




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6012) Support WindowStart / WindowEnd functions in stream SQL

2017-03-09 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6012:
-

 Summary: Support WindowStart / WindowEnd functions in stream SQL
 Key: FLINK-6012
 URL: https://issues.apache.org/jira/browse/FLINK-6012
 Project: Flink
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


This jira proposes to add support for {{TUMBLE_START()}} / {{TUMBLE_END()}} / 
{{HOP_START()}} / {{HOP_END()}} / {{SESSION_START()}} / {{SESSION_END()}} in 
the planner in Flink.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6033) Support UNNEST query in the stream SQL API

2017-03-13 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6033:
-

 Summary: Support UNNEST query in the stream SQL API
 Key: FLINK-6033
 URL: https://issues.apache.org/jira/browse/FLINK-6033
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


It would be nice to support the {{UNNEST}} keyword in the stream SQL API. 
The keyword is widely used in queries that relate to nested fields.
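A typical use, assuming a table {{Orders}} with an array column {{tags}} 
(names are made up):

{noformat}
SELECT o.orderId, t.tag
FROM Orders AS o
CROSS JOIN UNNEST(o.tags) AS t (tag)
{noformat}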




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6209) StreamPlanEnvironment always has a parallelism of 1

2017-03-28 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6209:
-

 Summary: StreamPlanEnvironment always has a parallelism of 1
 Key: FLINK-6209
 URL: https://issues.apache.org/jira/browse/FLINK-6209
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


Thanks [~bill.liu8904] for triaging the issue.

After FLINK-5808 we saw that Flink jobs uploaded through the UI always have a 
parallelism of 1, even if the parallelism is explicitly set in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6217) ContaineredTaskManagerParameters sets off heap memory size incorrectly

2017-03-29 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6217:
-

 Summary: ContaineredTaskManagerParameters sets off heap memory 
size incorrectly
 Key: FLINK-6217
 URL: https://issues.apache.org/jira/browse/FLINK-6217
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


Thanks [~bill.liu8904] for triaging the issue.

When {{taskmanager.memory.off-heap}} is disabled, we observed that the total 
memory that Flink allocates exceeds the total memory of the container:

For an 8 GB container, the JobManager starts the container with the following 
parameters:

{noformat}
$JAVA_HOME/bin/java -Xms6072m -Xmx6072m -XX:MaxDirectMemorySize=6072m ...
{noformat}

The total amount of heap memory plus the off-heap memory exceeds the total 
amount of memory of the container. As a result YARN occasionally kills the 
container.
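The arithmetic can be made concrete with representative numbers (the cutoff 
value below is chosen only to reproduce the 6072m figure above; it is not the 
actual formula in {{ContaineredTaskManagerParameters}}):

```java
public class ContainerMemoryCheck {
    // If the heap is sized to (container - cutoff) and MaxDirectMemorySize is
    // then set to the same value, heap plus direct memory can add up to more
    // than the container, so YARN may kill it.
    public static long overcommitMb(long containerMb, long cutoffMb) {
        long heapMb = containerMb - cutoffMb;   // -Xms / -Xmx
        long directMb = heapMb;                 // -XX:MaxDirectMemorySize (same value)
        return heapMb + directMb - containerMb; // how far past the container limit
    }

    public static void main(String[] args) {
        // 8 GB container, ~2 GB cutoff -> 6072 MB heap, as in the report above
        System.out.println(overcommitMb(8192, 2120)); // 3952 MB over the limit
    }
}
```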



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6281) Create TableSink for JDBC

2017-04-07 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6281:
-

 Summary: Create TableSink for JDBC
 Key: FLINK-6281
 URL: https://issues.apache.org/jira/browse/FLINK-6281
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


It would be nice to integrate the table APIs with the JDBC connectors so that 
the rows in the tables can be directly pushed into JDBC.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6335) Support DISTINCT over grouped window in stream SQL

2017-04-20 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6335:
-

 Summary: Support DISTINCT over grouped window in stream SQL
 Key: FLINK-6335
 URL: https://issues.apache.org/jira/browse/FLINK-6335
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


The SQL on the batch side supports the {{DISTINCT}} keyword over aggregations. 
This jira proposes to support the {{DISTINCT}} keyword on streaming 
aggregations using the same technique as on the batch side.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6373) Add runtime support for distinct aggregation over grouped windows

2017-04-24 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6373:
-

 Summary: Add runtime support for distinct aggregation over grouped 
windows
 Key: FLINK-6373
 URL: https://issues.apache.org/jira/browse/FLINK-6373
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


This is a follow-up task for FLINK-6335. FLINK-6335 enables parsing distinct 
aggregations over grouped windows. This jira tracks the effort of adding 
runtime support for such queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6377) Support map types in the Table / SQL API

2017-04-24 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6377:
-

 Summary: Support map types in the Table / SQL API
 Key: FLINK-6377
 URL: https://issues.apache.org/jira/browse/FLINK-6377
 Project: Flink
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Haohui Mai


This jira tracks the effort of adding support for map types to the Table / SQL 
APIs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6562) Support implicit table references for nested fields in SQL

2017-05-11 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6562:
-

 Summary: Support implicit table references for nested fields in SQL
 Key: FLINK-6562
 URL: https://issues.apache.org/jira/browse/FLINK-6562
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


Currently nested fields can only be accessed through fully qualified 
identifiers. For example, users need to write the following query for a 
table {{f}} that has a nested field {{foo.bar}}:

{noformat}
SELECT f.foo.bar FROM f
{noformat}

Other query engines such as Hive / Presto support implicit table references. 
For example:

{noformat}
SELECT foo.bar FROM f
{noformat}

This jira proposes to support the latter syntax in the SQL API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6563) Expose time indicator attributes in the KafkaTableSource

2017-05-11 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6563:
-

 Summary: Expose time indicator attributes in the KafkaTableSource
 Key: FLINK-6563
 URL: https://issues.apache.org/jira/browse/FLINK-6563
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


This is a follow-up for FLINK-5884.

FLINK-5884 requires the {{TableSource}} interfaces to expose the processing 
time and the event time of the data stream. This jira proposes to expose these 
two attributes in the Kafka table source.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6574) External catalog should support a single level catalog

2017-05-12 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6574:
-

 Summary: External catalog should support a single level catalog
 Key: FLINK-6574
 URL: https://issues.apache.org/jira/browse/FLINK-6574
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 1.3.0


We found out that the current external catalog requires three layers of 
references for any table. For example, the SQL would look like the following 
when referencing an external table:

{noformat}
SELECT * FROM catalog.db.table
{noformat}

It would be great to support only two layers of indirection, which is closer 
to many Presto / Hive deployments today.

{noformat}
SELECT * FROM db.table
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6595) Nested SQL queries do not expose proctime / rowtime attributes

2017-05-15 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6595:
-

 Summary: Nested SQL queries do not expose proctime / rowtime 
attributes
 Key: FLINK-6595
 URL: https://issues.apache.org/jira/browse/FLINK-6595
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 1.3.0


We found out that group windows cannot be applied to nested queries 
out-of-the-box:

{noformat}
SELECT * FROM (
  (SELECT ...)
  UNION ALL
  (SELECT ...)
) GROUP BY foo, TUMBLE(proctime, ...)
{noformat}

Flink complains that {{proctime}} is undefined.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6605) Allow users to specify a default name for processing time

2017-05-16 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6605:
-

 Summary: Allow users to specify a default name for processing time
 Key: FLINK-6605
 URL: https://issues.apache.org/jira/browse/FLINK-6605
 Project: Flink
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


FLINK-5884 enables users to specify column names for both processing time and 
event time. FLINK-6595 and FLINK-6584 break because chained / nested queries 
will no longer have a processing time / event time attribute.

This jira proposes to add a default name for the processing time in order to 
unbreak FLINK-6595 and FLINK-6584.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6692) The flink-dist jar contains unshaded netty jar

2017-05-23 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6692:
-

 Summary: The flink-dist jar contains unshaded netty jar
 Key: FLINK-6692
 URL: https://issues.apache.org/jira/browse/FLINK-6692
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 1.3.0


The {{flink-dist}} jar contains unshaded netty 3 and netty 4 classes:

{noformat}
io/netty/handler/codec/http/router/
io/netty/handler/codec/http/router/BadClientSilencer.class
io/netty/handler/codec/http/router/MethodRouted.class
io/netty/handler/codec/http/router/Handler.class
io/netty/handler/codec/http/router/Router.class
io/netty/handler/codec/http/router/DualMethodRouter.class
io/netty/handler/codec/http/router/Routed.class
io/netty/handler/codec/http/router/AbstractHandler.class
io/netty/handler/codec/http/router/KeepAliveWrite.class
io/netty/handler/codec/http/router/DualAbstractHandler.class
io/netty/handler/codec/http/router/MethodRouter.class
{noformat}

{noformat}
org/jboss/netty/util/internal/jzlib/InfBlocks.class
org/jboss/netty/util/internal/jzlib/InfCodes.class
org/jboss/netty/util/internal/jzlib/InfTree.class
org/jboss/netty/util/internal/jzlib/Inflate$1.class
org/jboss/netty/util/internal/jzlib/Inflate.class
org/jboss/netty/util/internal/jzlib/JZlib$WrapperType.class
org/jboss/netty/util/internal/jzlib/JZlib.class
org/jboss/netty/util/internal/jzlib/StaticTree.class
org/jboss/netty/util/internal/jzlib/Tree.class
org/jboss/netty/util/internal/jzlib/ZStream$1.class
org/jboss/netty/util/internal/jzlib/ZStream.class
{noformat}

Is it an expected behavior?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6693) Support DATE_FORMAT function in the SQL API

2017-05-23 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6693:
-

 Summary: Support DATE_FORMAT function in the SQL API
 Key: FLINK-6693
 URL: https://issues.apache.org/jira/browse/FLINK-6693
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Haohui Mai
Assignee: Haohui Mai


It would be quite handy to support the {{DATE_FORMAT}} function in Flink to 
support various date / time related operations.

The specification of the {{DATE_FORMAT}} function can be found at 
https://prestodb.io/docs/current/functions/datetime.html.
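To illustrate the semantics: {{DATE_FORMAT}} in Presto takes a MySQL-style 
pattern, so a runtime implementation essentially translates that pattern into 
a JDK one and formats (the mapping below covers only the specifiers used in 
the example; it is a sketch, not a complete translation table):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class DateFormatSketch {
    // Sketch of DATE_FORMAT(ts, '%Y-%m-%d %H:%i'): translate the MySQL-style
    // specifiers into a JDK pattern, then format with java.time.
    public static String dateFormat(LocalDateTime ts, String mysqlPattern) {
        String jdkPattern = mysqlPattern
            .replace("%Y", "yyyy")
            .replace("%m", "MM")
            .replace("%d", "dd")
            .replace("%H", "HH")
            .replace("%i", "mm")
            .replace("%s", "ss");
        return ts.format(DateTimeFormatter.ofPattern(jdkPattern));
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2017, 5, 23, 9, 30, 0);
        System.out.println(dateFormat(ts, "%Y-%m-%d %H:%i")); // 2017-05-23 09:30
    }
}
```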



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6780) ExternalTableSource fails to add the processing time and the event time attribute in the row type

2017-05-30 Thread Haohui Mai (JIRA)
Haohui Mai created FLINK-6780:
-

 Summary: ExternalTableSource fails to add the processing time and 
the event time attribute in the row type
 Key: FLINK-6780
 URL: https://issues.apache.org/jira/browse/FLINK-6780
 Project: Flink
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Critical


We observed that all streaming queries that refer to external tables fail when 
the Volcano planner converts {{LogicalTableScan}} to 
{{FlinkLogicalTableSourceScan}}:

{noformat}
Type mismatch:
rowtype of new rel:
RecordType(, TIMESTAMP(3) NOT NULL proctime) NOT NULL
rowtype of set:
RecordType(, ...) NOT NULL
{noformat}

Tables that are registered through 
{{StreamTableEnvironment#registerTableSource()}} do not suffer from this 
problem as {{StreamTableSourceTable}} adds the processing time / event time 
attribute automatically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)