[jira] [Created] (FLINK-8519) FileAlreadyExistsException on Start Flink Session
Hai Zhou UTC+8 created FLINK-8519:
----------------------------------
Summary: FileAlreadyExistsException on Start Flink Session
Key: FLINK-8519
URL: https://issues.apache.org/jira/browse/FLINK-8519
Project: Flink
Issue Type: Bug
Components: YARN
Affects Versions: 1.5.0
Reporter: Hai Zhou UTC+8
Fix For: 1.5.0

*Steps to reproduce:*
1. Build Flink from source, git commit: c1734f4
2. Run:
{noformat}
source /path/hadoop/bin/hadoop_user_login.sh hadoop-launcher
export YARN_CONF_DIR=/path/hadoop/etc/hadoop
export HADOOP_CONF_DIR=/path/hadoop/etc/hadoop
export JVM_ARGS="-Djava.security.krb5.conf=${HADOOP_CONF_DIR}/krb5.conf"
/path/flink-1.5-SNAPSHOT/bin/yarn-session.sh \
  -D yarn.container-start-command-template="/usr/local/jdk1.8.0_112/bin/java %%jvmmem%% %%jvmopts%% %%logging%% %%class%% %%args%% %%redirects%%" \
  -n 4 -nm job_name -qu root.rt.flink -jm 1024 -tm 4096 -s 4 -d
{noformat}

*Error log:*
{noformat}
2018-01-27 00:51:12,841 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli - Error while running the Flink Yarn session.
java.lang.reflect.UndeclaredThrowableException
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1571)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:786)
Caused by: org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster
	at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:389)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:594)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$2(FlinkYarnSessionCli.java:786)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
	... 2 more
Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: Path /user already exists as dir; cannot create link here
	at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
	at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:161)
	at org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:161)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
	at org.apache.flink.yarn.AbstractYarnClusterDescriptor.startAppMaster(AbstractYarnClusterDescriptor.java:656)
	at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:485)
	at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:384)
	... 7 more
{noformat}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (FLINK-8359) Update copyright date in NOTICE
Hai Zhou UTC+8 created FLINK-8359:
----------------------------------
Summary: Update copyright date in NOTICE
Key: FLINK-8359
URL: https://issues.apache.org/jira/browse/FLINK-8359
Project: Flink
Issue Type: Task
Components: Build System
Affects Versions: 1.5.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

The NOTICE file has the copyright year as 2014-2017. This needs to be updated to 2014-2018.

-- 
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Created] (FLINK-8313) Please add README file for flink-web repository
Hai Zhou UTC+8 created FLINK-8313:
----------------------------------
Summary: Please add README file for flink-web repository
Key: FLINK-8313
URL: https://issues.apache.org/jira/browse/FLINK-8313
Project: Flink
Issue Type: Improvement
Components: Project Website
Reporter: Hai Zhou UTC+8

Add a README file to introduce:
1. What the {{flink-web}} repository is
2. How to contribute to it
[jira] [Created] (FLINK-8228) Code cleanup - pointless bitwise expressions
Hai Zhou UTC+8 created FLINK-8228:
----------------------------------
Summary: Code cleanup - pointless bitwise expressions
Key: FLINK-8228
URL: https://issues.apache.org/jira/browse/FLINK-8228
Project: Flink
Issue Type: Improvement
Components: Checkstyle
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Trivial
Fix For: 1.5.0

Such expressions include ANDing with zero, ORing with zero, and shifting by zero.
[jira] [Created] (FLINK-8156) Bump commons-beanutils version to 1.9.3
Hai Zhou UTC+8 created FLINK-8156:
----------------------------------
Summary: Bump commons-beanutils version to 1.9.3
Key: FLINK-8156
URL: https://issues.apache.org/jira/browse/FLINK-8156
Project: Flink
Issue Type: Bug
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

The commons-beanutils v1.8.0 dependency is not security compliant. See [CVE-2014-0114|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114]:
{noformat}
Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar in Apache Struts 1.x through 1.3.10 and in other products requiring commons-beanutils through 1.9.2, does not suppress the class property, which allows remote attackers to "manipulate" the ClassLoader and execute arbitrary code via the class parameter, as demonstrated by the passing of this parameter to the getClass method of the ActionForm object in Struts 1.
{noformat}
Note that commons-beanutils 1.9.2 in turn has a CVE in its dependency commons-collections (CVE-2015-6420, see BEANUTILS-488), which is fixed in 1.9.3.

We should upgrade {{commons-beanutils}} from 1.8.3 to 1.9.3.
[jira] [Created] (FLINK-8149) Replace usages of deprecated SerializationSchema
Hai Zhou UTC+8 created FLINK-8149:
----------------------------------
Summary: Replace usages of deprecated SerializationSchema
Key: FLINK-8149
URL: https://issues.apache.org/jira/browse/FLINK-8149
Project: Flink
Issue Type: Improvement
Components: Kinesis Connector
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

The deprecated {{SerializationSchema}} in {{flink-streaming-java}} has been moved to {{flink-core}}, but the deprecated class is still used in {{flink-connector-kinesis}}.
[jira] [Created] (FLINK-8142) Cleanup reference to deprecated constants in ConfigConstants
Hai Zhou UTC+8 created FLINK-8142:
----------------------------------
Summary: Cleanup reference to deprecated constants in ConfigConstants
Key: FLINK-8142
URL: https://issues.apache.org/jira/browse/FLINK-8142
Project: Flink
Issue Type: Improvement
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Minor

ConfigConstants contains several deprecated String constants that are used by other Flink modules. Those should be cleaned up.
[jira] [Created] (FLINK-8105) Removed unnecessary null check
Hai Zhou UTC+8 created FLINK-8105:
----------------------------------
Summary: Removed unnecessary null check
Key: FLINK-8105
URL: https://issues.apache.org/jira/browse/FLINK-8105
Project: Flink
Issue Type: Improvement
Components: Checkstyle
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Minor
Fix For: 1.5.0

E.g.:
{code:java}
if (value != null && value instanceof String)
{code}
{{null instanceof String}} returns false, hence the check can be replaced with:
{code:java}
if (value instanceof String)
{code}
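For illustration, a minimal, self-contained sketch (class and method names are hypothetical, not Flink code) confirming that the two checks behave identically, including for {{null}}:

```java
public class InstanceofNullCheck {

    // Verbose variant with the redundant null check.
    static boolean isStringVerbose(Object value) {
        return value != null && value instanceof String;
    }

    // Simplified variant: instanceof already evaluates to false for null.
    static boolean isString(Object value) {
        return value instanceof String;
    }

    public static void main(String[] args) {
        Object[] samples = {null, "flink", 42};
        for (Object sample : samples) {
            if (isStringVerbose(sample) != isString(sample)) {
                throw new AssertionError("variants disagree for: " + sample);
            }
        }
        System.out.println("both variants agree for all samples, including null");
    }
}
```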
[jira] [Created] (FLINK-8101) Elasticsearch 6.x support
Hai Zhou UTC+8 created FLINK-8101:
----------------------------------
Summary: Elasticsearch 6.x support
Key: FLINK-8101
URL: https://issues.apache.org/jira/browse/FLINK-8101
Project: Flink
Issue Type: New Feature
Components: ElasticSearch Connector
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Fix For: 1.5.0

Recently, Elasticsearch 6.0.0 was released: https://www.elastic.co/blog/elasticsearch-6-0-0-released
The minimum Elasticsearch Java client version compatible with ES 6 is 5.6.0.
[jira] [Created] (FLINK-8033) Build Flink with JDK 9
Hai Zhou UTC+8 created FLINK-8033:
----------------------------------
Summary: Build Flink with JDK 9
Key: FLINK-8033
URL: https://issues.apache.org/jira/browse/FLINK-8033
Project: Flink
Issue Type: Improvement
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Fix For: 1.5.0

This is an umbrella JIRA to track all issues found while working to support Flink on Java 9 in the future.
[jira] [Created] (FLINK-7985) Update findbugs-maven-plugin version to 3.0.2
Hai Zhou UTC+8 created FLINK-7985:
----------------------------------
Summary: Update findbugs-maven-plugin version to 3.0.2
Key: FLINK-7985
URL: https://issues.apache.org/jira/browse/FLINK-7985
Project: Flink
Issue Type: Improvement
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Major
Fix For: 1.5.0

The FindBugs version used by Flink is pretty old (1.3.9). The old version of FindBugs itself has some bugs (like http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474), and the latest version 3.0.2 fixes the "Missing test classes" issue (https://github.com/gleclaire/findbugs-maven-plugin/issues/15).
[jira] [Created] (FLINK-7984) Bump snappy-java to 1.1.4
Hai Zhou UTC+8 created FLINK-7984:
----------------------------------
Summary: Bump snappy-java to 1.1.4
Key: FLINK-7984
URL: https://issues.apache.org/jira/browse/FLINK-7984
Project: Flink
Issue Type: Improvement
Reporter: Hai Zhou UTC+8
Priority: Major

Upgrade the snappy-java version to 1.1.4 (the latest, May 2017). The older version has some issues, like a memory leak (https://github.com/xerial/snappy-java/issues/91).
[jira] [Created] (FLINK-7983) Bump prometheus java client to 0.1.0
Hai Zhou UTC+8 created FLINK-7983:
----------------------------------
Summary: Bump prometheus java client to 0.1.0
Key: FLINK-7983
URL: https://issues.apache.org/jira/browse/FLINK-7983
Project: Flink
Issue Type: Wish
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

Update the {{io.prometheus:simpleclient*}} dependencies from version 0.0.26 to 0.1.0. Version 0.1.0 has many improvements:
{noformat}
[FEATURE] Support gzip compression for HTTPServer
[FEATURE] Support running HTTPServer in daemon thread
[BUGFIX] Shutdown threadpool on stop() for HTTPServer
{noformat}
[jira] [Created] (FLINK-7982) Bump commons-configuration to 2.1.1
Hai Zhou UTC+8 created FLINK-7982:
----------------------------------
Summary: Bump commons-configuration to 2.1.1
Key: FLINK-7982
URL: https://issues.apache.org/jira/browse/FLINK-7982
Project: Flink
Issue Type: Improvement
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Major
Fix For: 1.5.0

Currently Flink depends on {{org.apache.commons:commons-configuration}} (version 1.7, Sep 2011); update to {{org.apache.commons:commons-configuration2}} 2.1.1.

Reference (Hadoop):
[Hadoop Common: HADOOP-14648 - Bump commons-configuration2 to 2.1.1|https://issues.apache.org/jira/browse/HADOOP-14648]
[Hadoop Common: HADOOP-13660 - Upgrade commons-configuration version to 2.1|https://issues.apache.org/jira/browse/HADOOP-13660]
[jira] [Created] (FLINK-7981) Bump commons-lang3 version to 3.6
Hai Zhou UTC+8 created FLINK-7981:
----------------------------------
Summary: Bump commons-lang3 version to 3.6
Key: FLINK-7981
URL: https://issues.apache.org/jira/browse/FLINK-7981
Project: Flink
Issue Type: Improvement
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

Update commons-lang3 from 3.3.2 to 3.6. {{SerializationUtils.clone()}} in commons-lang3 (< 3.5) has a bug that breaks thread safety: it sometimes gets stuck due to a race condition when initializing a hash map. See https://issues.apache.org/jira/browse/LANG-1251.

See also: [BEAM-2481: Update commons-lang3 dependency to version 3.6|https://issues.apache.org/jira/browse/BEAM-2481]
[jira] [Created] (FLINK-7980) Bump joda-time to 2.9.9
Hai Zhou UTC+8 created FLINK-7980:
----------------------------------
Summary: Bump joda-time to 2.9.9
Key: FLINK-7980
URL: https://issues.apache.org/jira/browse/FLINK-7980
Project: Flink
Issue Type: Improvement
Components: Build System
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Major
Fix For: 1.5.0

joda-time is at version 2.5 (Oct 2014); bump to 2.9.9 (the latest version).
[jira] [Created] (FLINK-7979) Use Log.*(Object, Throwable) overload to log exceptions
Hai Zhou UTC+8 created FLINK-7979:
----------------------------------
Summary: Use Log.*(Object, Throwable) overload to log exceptions
Key: FLINK-7979
URL: https://issues.apache.org/jira/browse/FLINK-7979
Project: Flink
Issue Type: Improvement
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Critical
Fix For: 1.5.0

I found some code that, when logging an exception, converts the exception to a string or calls {{getMessage()}}. The better way is to use the Logger method overloads that take a {{Throwable}} as a parameter, so that the full stack trace is preserved.
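To illustrate the difference, here is a minimal, self-contained sketch using {{java.util.logging}} from the JDK (Flink itself logs via SLF4J, whose {{error(String, Throwable)}} overload follows the same principle; all class names below are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class ThrowableLogging {

    /** Collects log records so we can inspect what the logger actually received. */
    static class CapturingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        @Override public void publish(LogRecord record) { records.add(record); }
        @Override public void flush() {}
        @Override public void close() {}
    }

    /** Returns, for each of the two log calls, whether the record carried the Throwable. */
    static boolean[] run() {
        Logger logger = Logger.getLogger("throwable-logging-demo");
        logger.setUseParentHandlers(false);
        CapturingHandler handler = new CapturingHandler();
        logger.addHandler(handler);

        Exception failure = new IllegalStateException("task failed");

        // Anti-pattern: only the message text survives; the stack trace is lost.
        logger.log(Level.SEVERE, "Job failed: " + failure.getMessage());

        // Preferred: pass the Throwable itself so handlers can render the full stack trace.
        logger.log(Level.SEVERE, "Job failed", failure);

        return new boolean[] {
            handler.records.get(0).getThrown() != null,
            handler.records.get(1).getThrown() != null,
        };
    }

    public static void main(String[] args) {
        boolean[] carried = run();
        System.out.println("message-only call carried a Throwable: " + carried[0]);
        System.out.println("overload call carried a Throwable: " + carried[1]);
    }
}
```

Only the second call gives downstream handlers access to the exception object, which is what makes stack traces show up in the logs.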
[jira] [Created] (FLINK-7964) Add Apache Kafka 1.0.0 connector
Hai Zhou UTC+8 created FLINK-7964:
----------------------------------
Summary: Add Apache Kafka 1.0.0 connector
Key: FLINK-7964
URL: https://issues.apache.org/jira/browse/FLINK-7964
Project: Flink
Issue Type: Improvement
Components: Kafka Connector
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Major
Fix For: 1.5.0

Kafka 1.0.0 is no mere bump of the version number. The Apache Kafka Project Management Committee has packed a number of valuable enhancements into the release. Here is a summary of a few of them:
* Since its introduction in version 0.10, the Streams API has become hugely popular among Kafka users, including the likes of Pinterest, Rabobank, Zalando, and The New York Times. In 1.0, the API continues to evolve at a healthy pace. To begin with, the builder API has been improved (KIP-120). A new API has been added to expose the state of active tasks at runtime (KIP-130). The new cogroup API makes it much easier to deal with partitioned aggregates with fewer StateStores and fewer moving parts in your code (KIP-150). Debuggability gets easier with enhancements to the print() and writeAsText() methods (KIP-160). And if that's not enough, check out KIP-138 and KIP-161 too. For more on streams, check out the Apache Kafka Streams documentation, including some helpful new tutorial videos.
* Operating Kafka at scale requires that the system remain observable, and to make that easier, we've made a number of improvements to metrics. These are too many to summarize without becoming tedious, but Connect metrics have been significantly improved (KIP-196), a litany of new health check metrics are now exposed (KIP-188), and we now have a global topic and partition count (KIP-168). Check out KIP-164 and KIP-187 for even more.
* We now support Java 9, leading, among other things, to significantly faster TLS and CRC32C implementations. Over-the-wire encryption will be faster now, which will keep Kafka fast and compute costs low when encryption is enabled.
* In keeping with the security theme, KIP-152 cleans up the error handling on Simple Authentication Security Layer (SASL) authentication attempts. Previously, some authentication error conditions were indistinguishable from broker failures and were not logged in a clear way. This is cleaner now.
* Kafka can now tolerate disk failures better. Historically, JBOD storage configurations have not been recommended, but the architecture has nevertheless been tempting: after all, why not rely on Kafka's own replication mechanism to protect against storage failure rather than using RAID? With KIP-112, Kafka now handles disk failure more gracefully. A single disk failure in a JBOD broker will not bring the entire broker down; rather, the broker will continue serving any log files that remain on functioning disks.
* Since release 0.11.0, the idempotent producer (which is the producer used in the presence of a transaction, which of course is the producer we use for exactly-once processing) required max.in.flight.requests.per.connection to be equal to one. As anyone who has written or tested a wire protocol can attest, this put an upper bound on throughput. Thanks to KAFKA-5949, this can now be as large as five, relaxing the throughput constraint quite a bit.
[jira] [Created] (FLINK-7952) Add metrics for counting logging events
Hai Zhou UTC+8 created FLINK-7952:
----------------------------------
Summary: Add metrics for counting logging events
Key: FLINK-7952
URL: https://issues.apache.org/jira/browse/FLINK-7952
Project: Flink
Issue Type: Wish
Components: Metrics
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Critical
Fix For: 1.5.0

It would be useful to track logging events.

*Implementation:* add event counting via a custom Log4j Appender that tracks the number of INFO, WARN, ERROR and FATAL logging events.
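As a minimal sketch of the counting idea, here is a self-contained stand-in built on {{java.util.logging}} from the JDK (the ticket proposes a Log4j Appender; the {{LogEventCounter}} name and the level names INFO/WARNING/SEVERE are specific to this illustration, not to Log4j):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

/** Counts published log records per level, analogous to a counting Log4j Appender. */
public class LogEventCounter extends Handler {

    // One counter per level name; LongAdder keeps increments cheap under contention.
    private final ConcurrentMap<String, LongAdder> counts = new ConcurrentHashMap<>();

    @Override
    public void publish(LogRecord record) {
        counts.computeIfAbsent(record.getLevel().getName(), k -> new LongAdder()).increment();
    }

    @Override public void flush() {}
    @Override public void close() {}

    public long count(Level level) {
        LongAdder adder = counts.get(level.getName());
        return adder == null ? 0 : adder.sum();
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("counter-demo");
        logger.setUseParentHandlers(false);
        logger.setLevel(Level.ALL);
        LogEventCounter counter = new LogEventCounter();
        logger.addHandler(counter);

        logger.info("checkpoint completed");
        logger.warning("backpressure detected");
        logger.severe("task failure");
        logger.severe("another task failure");

        System.out.println("INFO=" + counter.count(Level.INFO)
                + " WARNING=" + counter.count(Level.WARNING)
                + " SEVERE=" + counter.count(Level.SEVERE));
    }
}
```

A real implementation would register the appender via the logging configuration and expose the counters through Flink's metric reporters.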
[jira] [Created] (FLINK-7900) Add a Rich KeySelector
Hai Zhou UTC+8 created FLINK-7900:
----------------------------------
Summary: Add a Rich KeySelector
Key: FLINK-7900
URL: https://issues.apache.org/jira/browse/FLINK-7900
Project: Flink
Issue Type: Improvement
Components: DataStream API
Reporter: Hai Zhou UTC+8
Priority: Critical

Currently we only have a {{KeySelector}} Function; maybe we should add a {{RichKeySelector}} RichFunction, so that users can read configuration information to build the key selector they need.
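The ticket only proposes the feature; as a purely hypothetical, self-contained sketch of what it might look like (none of these names are Flink API, and a plain {{Map}} stands in for Flink's {{Configuration}} and for the stream records):

```java
import java.util.Map;

public class RichKeySelectorSketch {

    /** Stand-in for Flink's KeySelector: maps a record to its key. */
    interface KeySelector<IN, KEY> {
        KEY getKey(IN value);
    }

    /**
     * Hypothetical rich variant: an open() lifecycle hook lets the selector
     * read configuration before processing records, mirroring how
     * RichFunction.open(Configuration) works for other Flink functions.
     */
    abstract static class RichKeySelector<IN, KEY> implements KeySelector<IN, KEY> {
        public void open(Map<String, String> config) {}
    }

    /** Example: the key field is chosen at runtime from configuration. */
    static class ConfigurableFieldSelector extends RichKeySelector<Map<String, String>, String> {
        private String keyField = "id";

        @Override
        public void open(Map<String, String> config) {
            keyField = config.getOrDefault("key.field", "id");
        }

        @Override
        public String getKey(Map<String, String> record) {
            return record.get(keyField);
        }
    }

    public static void main(String[] args) {
        ConfigurableFieldSelector selector = new ConfigurableFieldSelector();
        selector.open(Map.of("key.field", "userId"));
        String key = selector.getKey(Map.of("userId", "u-42", "id", "ignored"));
        System.out.println("selected key: " + key);
    }
}
```

The point of the rich variant is that the runtime, not the user, would invoke {{open()}} with job configuration before the first {{getKey()}} call.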
[jira] [Created] (FLINK-7893) Port CheckpointStatsDetailsSubtasksHandler to new REST endpoint
Hai Zhou UTC+8 created FLINK-7893:
----------------------------------
Summary: Port CheckpointStatsDetailsSubtasksHandler to new REST endpoint
Key: FLINK-7893
URL: https://issues.apache.org/jira/browse/FLINK-7893
Project: Flink
Issue Type: Sub-task
Components: Distributed Coordination, REST
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

Port *CheckpointStatsDetailsSubtasksHandler* to new REST endpoint.
[jira] [Created] (FLINK-7892) Port CheckpointStatsHandler to new REST endpoint
Hai Zhou UTC+8 created FLINK-7892:
----------------------------------
Summary: Port CheckpointStatsHandler to new REST endpoint
Key: FLINK-7892
URL: https://issues.apache.org/jira/browse/FLINK-7892
Project: Flink
Issue Type: Sub-task
Components: Distributed Coordination, REST
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

Port *CheckpointStatsHandler* to new REST endpoint.
[jira] [Created] (FLINK-7885) Port ClusterConfigHandler to new REST endpoint
Hai Zhou UTC+8 created FLINK-7885:
----------------------------------
Summary: Port ClusterConfigHandler to new REST endpoint
Key: FLINK-7885
URL: https://issues.apache.org/jira/browse/FLINK-7885
Project: Flink
Issue Type: Sub-task
Components: Distributed Coordination, REST
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

Port *ClusterConfigHandler* to new REST endpoint.
[jira] [Created] (FLINK-7884) Port ClusterOverviewHandler to new REST endpoint
Hai Zhou UTC+8 created FLINK-7884:
----------------------------------
Summary: Port ClusterOverviewHandler to new REST endpoint
Key: FLINK-7884
URL: https://issues.apache.org/jira/browse/FLINK-7884
Project: Flink
Issue Type: Sub-task
Components: Distributed Coordination, REST
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.5.0

Port *ClusterOverviewHandler* to new REST endpoint.
[jira] [Created] (FLINK-7879) only execute apache-rat in one build profile
Hai Zhou UTC+8 created FLINK-7879:
----------------------------------
Summary: only execute apache-rat in one build profile
Key: FLINK-7879
URL: https://issues.apache.org/jira/browse/FLINK-7879
Project: Flink
Issue Type: Improvement
Components: Travis
Affects Versions: 1.4.0
Reporter: Hai Zhou UTC+8
Fix For: 1.4.0

Similarly to [FLINK-7350|https://issues.apache.org/jira/browse/FLINK-7350], we can improve build times (and stability!) by only executing the Apache Rat plugin in the build profile that builds all of Flink.

Also bump apache-rat-plugin to 0.12: [RAT-173 - Cannot skip plugin run completely, but check only|https://issues.apache.org/jira/browse/RAT-173]
[jira] [Created] (FLINK-7843) Improve and enhance documentation for system metrics
Hai Zhou UTC+8 created FLINK-7843:
----------------------------------
Summary: Improve and enhance documentation for system metrics
Key: FLINK-7843
URL: https://issues.apache.org/jira/browse/FLINK-7843
Project: Flink
Issue Type: Improvement
Components: Documentation
Affects Versions: 1.3.2
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Priority: Critical
Fix For: 1.4.0

I think we should make the following improvements to the system metrics section of the documentation:
# Add a column for the *Type* of each metric, e.g. Counter, Gauge, Histogram, Meter.
# Extend the *Description* of each metric with its unit, e.g. in bytes, in megabytes, in nanoseconds, in milliseconds.
[jira] [Created] (FLINK-7819) Check object to clean is closure
Hai Zhou UTC+8 created FLINK-7819:
----------------------------------
Summary: Check object to clean is closure
Key: FLINK-7819
URL: https://issues.apache.org/jira/browse/FLINK-7819
Project: Flink
Issue Type: Bug
Components: Build System
Affects Versions: 1.3.2
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.4.0

In the *ClosureCleaner.clean(func)* method, we should check that {{func}} is actually a closure.
[jira] [Created] (FLINK-7777) Upgrade maven plugin japicmp version to 0.10.0
Hai Zhou UTC+8 created FLINK-7777:
----------------------------------
Summary: Upgrade maven plugin japicmp version to 0.10.0
Key: FLINK-7777
URL: https://issues.apache.org/jira/browse/FLINK-7777
Project: Flink
Issue Type: Bug
Affects Versions: 1.3.2
Reporter: Hai Zhou UTC+8
Priority: Minor
Fix For: 1.4.0

Currently, the japicmp-maven-plugin version used by Flink is 0.7.0. I'm getting these warnings from the plugin during a *mvn clean verify*:
{noformat}
[INFO] Written file '.../target/japicmp/japicmp.diff'.
[INFO] Written file '.../target/japicmp/japicmp.xml'.
[INFO] Written file '.../target/japicmp/japicmp.html'.
Warning: org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser: Property 'http://www.oracle.com/xml/jaxp/properties/entityExpansionLimit' is not recognized.
Compiler warnings:
WARNING: 'org.apache.xerces.jaxp.SAXParserImpl: Property 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.'
Warning: org.apache.xerces.parsers.SAXParser: Feature 'http://javax.xml.XMLConstants/feature/secure-processing' is not recognized.
Warning: org.apache.xerces.parsers.SAXParser: Property 'http://javax.xml.XMLConstants/property/accessExternalDTD' is not recognized.
Warning: org.apache.xerces.parsers.SAXParser: Property 'http://www.oracle.com/xml/jaxp/properties/entityExpansionLimit' is not recognized.
{noformat}
japicmp fixed this in version 0.7.1: _Excluded xerces from maven-reporting dependency in order to prevent warnings from SAXParserImpl._

The current stable version is 0.10.0; we can consider upgrading to it.
[jira] [Created] (FLINK-7758) Fix bug Kafka09Fetcher add offset metrics
Hai Zhou UTC+8 created FLINK-7758:
----------------------------------
Summary: Fix bug Kafka09Fetcher add offset metrics
Key: FLINK-7758
URL: https://issues.apache.org/jira/browse/FLINK-7758
Project: Flink
Issue Type: Bug
Components: Kafka Connector
Affects Versions: 1.3.2
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.4.0

In {{Kafka09Fetcher}}, the _KafkaConsumer_ {{kafkaMetricGroup}} is added without checking that the {{useMetrics}} variable is true.
[jira] [Created] (FLINK-7742) Fix array access might be out of bounds
Hai Zhou UTC+8 created FLINK-7742:
----------------------------------
Summary: Fix array access might be out of bounds
Key: FLINK-7742
URL: https://issues.apache.org/jira/browse/FLINK-7742
Project: Flink
Issue Type: Bug
Components: Build System
Affects Versions: 1.3.2
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.4.0
[jira] [Created] (FLINK-7741) Fix NPE when throw new SlotNotFoundException
Hai Zhou UTC+8 created FLINK-7741:
----------------------------------
Summary: Fix NPE when throw new SlotNotFoundException
Key: FLINK-7741
URL: https://issues.apache.org/jira/browse/FLINK-7741
Project: Flink
Issue Type: Bug
Components: Build System
Affects Versions: 1.3.2
Reporter: Hai Zhou UTC+8
Assignee: Hai Zhou UTC+8
Fix For: 1.4.0
[jira] [Created] (FLINK-7697) Add metrics for Elasticsearch Sink
Hai Zhou UTC+8 created FLINK-7697:
----------------------------------
Summary: Add metrics for Elasticsearch Sink
Key: FLINK-7697
URL: https://issues.apache.org/jira/browse/FLINK-7697
Project: Flink
Issue Type: Wish
Components: ElasticSearch Connector
Reporter: Hai Zhou UTC+8
Priority: Critical

We should add metrics to track events written to the *ElasticsearchSink*, e.g.:
* number of successful bulk sends
* number of documents inserted
* number of documents updated
* number of document version conflicts