[jira] [Commented] (FLINK-17309) TPC-DS fail to run data generator
[ https://issues.apache.org/jira/browse/FLINK-17309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101395#comment-17101395 ]

Robert Metzger commented on FLINK-17309:
----------------------------------------

Thanks a lot for validating the change. In my opinion, we can polish & merge the PR.

> TPC-DS fail to run data generator
> ---------------------------------
>
>                 Key: FLINK-17309
>                 URL: https://issues.apache.org/jira/browse/FLINK-17309
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.11.0
>            Reporter: Dawid Wysakowicz
>            Priority: Critical
>              Labels: pull-request-available, test-stability
>
> {code}
> [INFO] Download data generator success.
> [INFO] 15:53:41 Generating TPC-DS qualification data, this need several minutes, please wait...
> ./dsdgen_linux: line 1: 500:: command not found
> [FAIL] Test script contains errors.
> {code}
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7849=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] wangyang0918 commented on pull request #11839: [FLINK-17166][dist] Modify the log4j-console.properties to also output logs into the files for WebUI
wangyang0918 commented on pull request #11839:
URL: https://github.com/apache/flink/pull/11839#issuecomment-625042466

I also prefer to use the `ConsoleAppender` in log4j. It is more straightforward and stable. From the discussion on the ML, it seems that the only remaining concern is disk consumption. So should we use the `RollingFileAppender` by default in `log4j-console.properties` and set `MaxFileSize` and `MaxBackupIndex` explicitly? @tillrohrmann @zentol WDYT?

----
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
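A size-bounded rotation of the kind proposed above could be sketched in log4j 1.x properties syntax roughly as follows. The appender names and the `256MB`/`10` limits are illustrative assumptions, not values from the PR:

```properties
# Hypothetical sketch: keep console output while also writing to a
# size-bounded rolling file that the WebUI can serve.
# `${log.file}` follows the Flink convention of passing -Dlog.file to the JVM.
log4j.rootLogger=INFO, console, rolling

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n

# RollingFileAppender with explicit MaxFileSize/MaxBackupIndex, bounding
# total disk usage to roughly MaxFileSize * (MaxBackupIndex + 1).
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.File=${log.file}
log4j.appender.rolling.MaxFileSize=256MB
log4j.appender.rolling.MaxBackupIndex=10
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

With such a config the disk-consumption concern is capped explicitly, while `docker logs` / console consumers keep seeing the full stream.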
[GitHub] [flink] flinkbot commented on pull request #12015: [FLINK-17416] Bump okhttp version to 3.12.11 and 3.14.8
flinkbot commented on pull request #12015:
URL: https://github.com/apache/flink/pull/12015#issuecomment-625041851

## CI report:

* 4706730b61045c11fadefc1029deb242318d7cf7 UNKNOWN

## Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework
flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491

## CI report:

* bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
* dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
* 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
* d56cc5bfae0943dd147e517a36cdecaba69e7ca5 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=716)
[jira] [Commented] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252
[ https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101393#comment-17101393 ]

Robert Metzger commented on FLINK-17416:
----------------------------------------

I also agree that we need to bump okhttp, otherwise our users will run into the issue. I disabled the failing test to make the build green again in https://github.com/apache/flink/commit/ad46ca3ea8f445dcddde631978ecc7935d5fa8ae

> Flink-kubernetes doesn't work on java 8 8u252
> ---------------------------------------------
>
>                 Key: FLINK-17416
>                 URL: https://issues.apache.org/jira/browse/FLINK-17416
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>    Affects Versions: 1.10.0, 1.11.0
>            Reporter: wangxiyuan
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.11.0
>
> When using the java-8-8u252 version, the flink container end-to-end test failed. The test `Running 'Run kubernetes session test'` fails with a `Broken pipe` error.
> See: https://logs.openlabtesting.org/logs/periodic-20-flink-mail/github.com/apache/flink/master/flink-end-to-end-test-arm64-container/fcfdd47/job-output.txt.gz
> Flink Azure CI doesn't hit this problem because it runs under jdk-8-8u242.
> The reason is that the okhttp library which Flink uses doesn't work on java-8-8u252: https://github.com/square/okhttp/issues/5970
> The problem has been fixed with the PR: https://github.com/square/okhttp/pull/5977
> Maybe we can wait for a new 3.12.x release and bump the okhttp version in Flink later.
[jira] [Commented] (FLINK-17166) Modify the log4j-console.properties to also output logs into the files for WebUI
[ https://issues.apache.org/jira/browse/FLINK-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101385#comment-17101385 ]

Yang Wang commented on FLINK-17166:
-----------------------------------

[~trohrmann] Aha, now I get your point. However, [~chesnay] suggests keeping the same behavior as before, which outputs stdout/stderr both to jobmanager/taskmanager.out. What do you think? In fact, I prefer to separate stdout/stderr into different files.

> Modify the log4j-console.properties to also output logs into the files for WebUI
> --------------------------------------------------------------------------------
>
>                 Key: FLINK-17166
>                 URL: https://issues.apache.org/jira/browse/FLINK-17166
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / Configuration
>            Reporter: Andrey Zagrebin
>            Assignee: Yang Wang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.11.0
[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink
[ https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101381#comment-17101381 ]

Yun Gao commented on FLINK-10672:
---------------------------------

Hi [~ibzib], sorry, the environment has been freed by the administrator because it met with a security problem. I will re-deploy the environment and send you the access method this week.

> Task stuck while writing output to flink
> ----------------------------------------
>
>                 Key: FLINK-10672
>                 URL: https://issues.apache.org/jira/browse/FLINK-10672
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.5.4
>         Environment: OS: Debian rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>            Reporter: Ankur Goenka
>            Assignee: Yun Gao
>            Priority: Major
>              Labels: beam
>         Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, Po89UGDn58V.png, WithBroadcastJob.png, jmx_dump.json, jmx_dump_detailed.json, jstack_129827.log, jstack_163822.log, jstack_66985.log
>
> I am running a fairly complex pipeline with 200+ tasks.
> The pipeline works fine with small data (order of 10kb input) but gets stuck with slightly larger data (300kb input).
>
> The task gets stuck while writing the output to Flink; more specifically, it gets stuck while requesting a memory segment in the local buffer pool. The Task Manager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is:
> {quote}
> "grpc-default-executor-0" #138 daemon prio=5 os_prio=0 tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
> java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at (C/C++) 0x7fef201c7dae (Unknown Source)
> at (C/C++) 0x7fef1f2aea07 (Unknown Source)
> at (C/C++) 0x7fef1f241cd3 (Unknown Source)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xf6d56450> (a java.util.ArrayDeque)
> at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
> - locked <0xf6d56450> (a java.util.ArrayDeque)
> at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
> at org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
> at org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
> at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
> at org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
> at org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
> at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
> at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26)
> at org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80)
> at org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
> at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction$MyDataReceiver.accept(FlinkExecutableStageFunction.java:230)
> - locked <0xf6a60bd0> (a java.lang.Object)
> at org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:81)
> at org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:32)
> at
[jira] [Commented] (FLINK-16099) Translate "HiveCatalog" page of "Hive Integration" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101379#comment-17101379 ]

Jark Wu commented on FLINK-16099:
---------------------------------

Hi [~andrew_lin], there is already a pending pull request under review.

> Translate "HiveCatalog" page of "Hive Integration" into Chinese
> ---------------------------------------------------------------
>
>                 Key: FLINK-16099
>                 URL: https://issues.apache.org/jira/browse/FLINK-16099
>             Project: Flink
>          Issue Type: Sub-task
>          Components: chinese-translation, Documentation
>            Reporter: Jark Wu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/hive_catalog.html
> The markdown file is located in {{flink/docs/dev/table/hive/hive_catalog.zh.md}}
[GitHub] [flink] wangyang0918 commented on pull request #12003: [FLINK-10934] Support application mode for kubernetes
wangyang0918 commented on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-625036828

@kl0u I have addressed the comments; please have another look.
[GitHub] [flink] flinkbot commented on pull request #12015: [FLINK-17416] Bump okhttp version to 3.12.11 and 3.14.8
flinkbot commented on pull request #12015:
URL: https://github.com/apache/flink/pull/12015#issuecomment-625036292

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 4706730b61045c11fadefc1029deb242318d7cf7 (Thu May 07 05:25:23 UTC 2020)

**Warnings:**
* **6 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-17416).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

## Bot commands

The @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework
flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491

## CI report:

* bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
* dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
* 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
* 78d0b42e81680b15698ee9d1382e95f07a9021df Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=679)
* d56cc5bfae0943dd147e517a36cdecaba69e7ca5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=716)
[GitHub] [flink] wangyang0918 opened a new pull request #12015: [FLINK-17416] Bump okhttp version to 3.12.11 and 3.14.8
wangyang0918 opened a new pull request #12015:
URL: https://github.com/apache/flink/pull/12015

## What is the purpose of the change

Bump the okhttp version to 3.12.11 and 3.14.8 because the current version does not work with Java 8u252. Refer to https://github.com/square/okhttp/issues/5970 for more information.

## Brief change log

* Bump okhttp version to 3.12.11 and 3.14.8 and update the `NOTICE` file

## Verifying this change

* All the existing tests should pass

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (**yes** / no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
- The S3 file system connector: (yes / **no** / don't know)

## Documentation

- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
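A dependency bump like the one described above would typically be expressed as a version pin in the relevant `pom.xml`. The snippet below is only an illustrative sketch of that idea, not the actual diff from this PR; the `dependencyManagement` placement is an assumption (the okhttp Maven coordinates themselves are the standard `com.squareup.okhttp3:okhttp` ones):

```xml
<!-- Hypothetical sketch of pinning okhttp to a fixed-for-8u252 release.
     Not the actual change from PR #12015. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.squareup.okhttp3</groupId>
      <artifactId>okhttp</artifactId>
      <version>3.12.11</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Modules that pull okhttp in transitively (e.g. through a Kubernetes client) would then resolve the patched version instead of the broken one.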
[jira] [Updated] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter
[ https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Metzger updated FLINK-17250:
-----------------------------------
    Priority: Minor  (was: Blocker)

> interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter
> ---------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-17250
>                 URL: https://issues.apache.org/jira/browse/FLINK-17250
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.10.0
>            Reporter: Tammy zhang
>            Priority: Minor
>
> When I run a job in a Flink cluster, an exception occurs: interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter. The job runs successfully in the IDE, but when I package it into a jar, the jar throws the exception. I do not know what happened; please fix it as quickly as possible. Thanks.
[jira] [Resolved] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter
[ https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Metzger resolved FLINK-17250.
------------------------------------
    Resolution: Not A Problem
[jira] [Commented] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter
[ https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101378#comment-17101378 ]

Robert Metzger commented on FLINK-17250:
----------------------------------------

Okay, great to hear that you were able to solve your problem.
[GitHub] [flink] rmetzger commented on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
rmetzger commented on pull request #12012:
URL: https://github.com/apache/flink/pull/12012#issuecomment-625034157

I manually restarted the failed tasks. It might have been a temporary issue with GitHub. Since this change is limited to the documentation, the CI build is not very important.
[GitHub] [flink] rmetzger commented on pull request #12007: [FLINK-17416][e2e][k8s][hotfix] Use Openjdk8 for e2e tests
rmetzger commented on pull request #12007:
URL: https://github.com/apache/flink/pull/12007#issuecomment-625032862

What this PR does is change the JDK from Zulu to OpenJDK. After this PR, on startup it logs:

```
Java and Maven version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~16.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
Invoking mvn with '/home/vsts/maven_cache/apache-maven-3.2.5/bin/mvn -Dmaven.wagon.http.pool=false --settings /home/vsts/work/1/s/tools/ci/google-mirror-settings.xml -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn --no-snapshot-updates -B -Dinclude-hadoop -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11 -Pe2e-hadoop -version'
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-14T17:29:23+00:00)
```

Before this PR, it logs:

```
Java and Maven version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (Zulu 8.46.0.19-linux64)-Microsoft-Azure-restricted (build 1.8.0_252-b14)
OpenJDK 64-Bit Server VM (Zulu 8.46.0.19-linux64)-Microsoft-Azure-restricted (build 25.252-b14, mixed mode)
Invoking mvn with '/home/vsts/maven_cache/apache-maven-3.2.5/bin/mvn -Dmaven.wagon.http.pool=false --settings /home/vsts/work/1/s/tools/ci/google-mirror-settings.xml -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn --no-snapshot-updates -B -Dinclude-hadoop -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11 -Pe2e-hadoop -version'
Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-14T17:29:23+00:00)
```

I thought the okhttp problem was caused by the Zulu VM, but it seems to apply to OpenJDK as well.
[GitHub] [flink] flinkbot edited a comment on pull request #11854: [FLINK-17407] Introduce external resource framework
flinkbot edited a comment on pull request #11854:
URL: https://github.com/apache/flink/pull/11854#issuecomment-617586491

## CI report:

* bddb0e274da11bbe99d15c6e0bb55e8d8c0e658a UNKNOWN
* dc7a9c5c7d1fac82518815b9277809dfb82ddaac UNKNOWN
* 2238559b0e2245e77204e7c7d0ef34c7a97e3766 UNKNOWN
* 78d0b42e81680b15698ee9d1382e95f07a9021df Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=679)
* d56cc5bfae0943dd147e517a36cdecaba69e7ca5 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12003: [FLINK-10934] Support application mode for kubernetes
flinkbot edited a comment on pull request #12003:
URL: https://github.com/apache/flink/pull/12003#issuecomment-624416714

## CI report:

* 57b669055ef7388e1a70ed54263e9fca1b2f9fc8 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=713)
[GitHub] [flink] flinkbot edited a comment on pull request #12013: [FLINK-17256] Suppport keyword arguments in the PyFlink Descriptor API.
flinkbot edited a comment on pull request #12013:
URL: https://github.com/apache/flink/pull/12013#issuecomment-625012004

## CI report:

* c947260be5694e47c2a47152a2a1495ef4ddd991 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=710)
[jira] [Commented] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252
[ https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101372#comment-17101372 ]

Yang Wang commented on FLINK-17416:
-----------------------------------

I agree with [~chesnay] that if we want to solve the problem thoroughly, we need to bump the okhttp version to 3.12.11 and 3.14.8.
[GitHub] [flink] flinkbot edited a comment on pull request #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner
flinkbot edited a comment on pull request #11276:
URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780

## CI report:

* 772b6e923e3398ddf43c1c300934fb8147d9acfb UNKNOWN
* 13daff653ab7e3f74cd923d50efb19315f0b75a6 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=712)
[jira] [Commented] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252
[ https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101370#comment-17101370 ]

Yang Wang commented on FLINK-17416:
-----------------------------------

I am not sure why the Java version in Azure was upgraded from "1.8.0_242" to "1.8.0_252" yesterday. It makes the K8s tests fail.
[GitHub] [flink] wangyang0918 commented on pull request #12007: [FLINK-17416][e2e][k8s][hotfix] Use Openjdk8 for e2e tests
wangyang0918 commented on pull request #12007:
URL: https://github.com/apache/flink/pull/12007#issuecomment-625027932

The test failed because we are still using openjdk version "1.8.0_252" in Azure, and it could not work with okhttp 3.x. BTW, the K8s test failed at the Flink client side, so it has nothing to do with the docker environment. Our docker image's Java version is "1.8.0_212".
[GitHub] [flink] flinkbot edited a comment on pull request #11629: [FLINK-14267][connectors/filesystem]Introduce BaseRow Encoder in csv for filesystem table sink.
flinkbot edited a comment on pull request #11629:
URL: https://github.com/apache/flink/pull/11629#issuecomment-608466138

## CI report:

* 7a5b8de11688078f0e6b5df1fa62741b2298cb8e Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/159864073) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7340)
* 4f8429e176e066e9601953edc62a94322534189a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=714)
[GitHub] [flink] flinkbot edited a comment on pull request #12014: [FLINK-16529][python]Add ignore_parse_errors() method to Json format …
flinkbot edited a comment on pull request #12014: URL: https://github.com/apache/flink/pull/12014#issuecomment-625012050 ## CI report: * a3e1317e6cd01b958a933a0220a70e8b652b3136 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=711) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11936: [FLINK-17386][Security][hotfix] fix LinkageError not captured
flinkbot edited a comment on pull request #11936: URL: https://github.com/apache/flink/pull/11936#issuecomment-620683562 ## CI report: * a02867a0bce8f1a0174a6a3133b50b76167dbcc9 Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163208458) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=520) * 7efc5efc190412045358d713b209a691fa2fdad7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=715) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252
[ https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17416: --- Affects Version/s: 1.11.0 > Flink-kubernetes doesn't work on java 8 8u252 > - > > Key: FLINK-17416 > URL: https://issues.apache.org/jira/browse/FLINK-17416 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.10.0, 1.11.0 >Reporter: wangxiyuan >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > > When using java-8-8u252 version, the flink container end-to-end failed. The > test `Running 'Run kubernetes session test'` fails with the `Broken pipe` > error. > See: > [https://logs.openlabtesting.org/logs/periodic-20-flink-mail/github.com/apache/flink/master/flink-end-to-end-test-arm64-container/fcfdd47/job-output.txt.gz] > > Flink Azure CI doesn't hit this problem because it runs under jdk-8-8u242 > > The reason is that the okhttp library which flink using doesn't work on > java-8-8u252: > [https://github.com/square/okhttp/issues/5970] > > The problem has been with the PR: > [https://github.com/square/okhttp/pull/5977] > > Maybe we can wait for a new 3.12.x release and bump the okhttp version in > Flink later. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252
[ https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17416: --- Priority: Blocker (was: Minor) > Flink-kubernetes doesn't work on java 8 8u252 > - > > Key: FLINK-17416 > URL: https://issues.apache.org/jira/browse/FLINK-17416 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.10.0 >Reporter: wangxiyuan >Priority: Blocker > Labels: pull-request-available > Fix For: 1.11.0 > > > When using java-8-8u252 version, the flink container end-to-end failed. The > test `Running 'Run kubernetes session test'` fails with the `Broken pipe` > error. > See: > [https://logs.openlabtesting.org/logs/periodic-20-flink-mail/github.com/apache/flink/master/flink-end-to-end-test-arm64-container/fcfdd47/job-output.txt.gz] > > Flink Azure CI doesn't hit this problem because it runs under jdk-8-8u242 > > The reason is that the okhttp library which flink using doesn't work on > java-8-8u252: > [https://github.com/square/okhttp/issues/5970] > > The problem has been with the PR: > [https://github.com/square/okhttp/pull/5977] > > Maybe we can wait for a new 3.12.x release and bump the okhttp version in > Flink later. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17416) Flink-kubernetes doesn't work on java 8 8u252
[ https://issues.apache.org/jira/browse/FLINK-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101360#comment-17101360 ] Robert Metzger commented on FLINK-17416: My proposed fix doesn't work. I will push a hotfix (once validated) that disables the e2e test until fixed. > Flink-kubernetes doesn't work on java 8 8u252 > - > > Key: FLINK-17416 > URL: https://issues.apache.org/jira/browse/FLINK-17416 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Affects Versions: 1.10.0 >Reporter: wangxiyuan >Priority: Minor > Labels: pull-request-available > Fix For: 1.11.0 > > > When using java-8-8u252 version, the flink container end-to-end failed. The > test `Running 'Run kubernetes session test'` fails with the `Broken pipe` > error. > See: > [https://logs.openlabtesting.org/logs/periodic-20-flink-mail/github.com/apache/flink/master/flink-end-to-end-test-arm64-container/fcfdd47/job-output.txt.gz] > > Flink Azure CI doesn't hit this problem because it runs under jdk-8-8u242 > > The reason is that the okhttp library which flink using doesn't work on > java-8-8u252: > [https://github.com/square/okhttp/issues/5970] > > The problem has been with the PR: > [https://github.com/square/okhttp/pull/5977] > > Maybe we can wait for a new 3.12.x release and bump the okhttp version in > Flink later. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #12003: [FLINK-10934] Support application mode for kubernetes
flinkbot edited a comment on pull request #12003: URL: https://github.com/apache/flink/pull/12003#issuecomment-624416714 ## CI report: * 2cc10767854d49f63c6677e7f9217f2c89b8b306 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=672) * 57b669055ef7388e1a70ed54263e9fca1b2f9fc8 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=713) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11936: [FLINK-17386][Security][hotfix] fix LinkageError not captured
flinkbot edited a comment on pull request #11936: URL: https://github.com/apache/flink/pull/11936#issuecomment-620683562 ## CI report: * a02867a0bce8f1a0174a6a3133b50b76167dbcc9 Travis: [SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/163208458) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=520) * 7efc5efc190412045358d713b209a691fa2fdad7 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner
flinkbot edited a comment on pull request #11276: URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780 ## CI report: * 772b6e923e3398ddf43c1c300934fb8147d9acfb UNKNOWN * 644ba6d77ee03a378eea07ed3aff611c86154caa Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/159244520) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7192) * 13daff653ab7e3f74cd923d50efb19315f0b75a6 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=712) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11629: [FLINK-14267][connectors/filesystem]Introduce BaseRow Encoder in csv for filesystem table sink.
flinkbot edited a comment on pull request #11629: URL: https://github.com/apache/flink/pull/11629#issuecomment-608466138 ## CI report: * 7a5b8de11688078f0e6b5df1fa62741b2298cb8e Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/159864073) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7340) * 4f8429e176e066e9601953edc62a94322534189a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] bowenli86 commented on a change in pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields
bowenli86 commented on a change in pull request #11900: URL: https://github.com/apache/flink/pull/11900#discussion_r421227400 ## File path: flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/catalog/PostgresCatalog.java ## @@ -216,6 +216,12 @@ public CatalogBaseTable getTable(ObjectPath tablePath) throws TableNotExistExcep } } + public static final String PG_SERIAL = "serial"; + //public static final String PG_SERIAL2 = "serial2"; //ResultSetMetaData.getColumnTypeName() returns int2 Review comment: just add comments, rather than commenting out string constants? ## File path: flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/catalog/PostgresCatalogTestBase.java ## @@ -58,6 +58,7 @@ protected static final String TABLE5 = "t5"; protected static final String TABLE_PRIMITIVE_TYPE = "dt"; protected static final String TABLE_ARRAY_TYPE = "dt2"; + protected static final String TABLE_SERIAL_TYPE = "dt3"; Review comment: why use a separate table rather than reusing existing one? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-13938) Use pre-uploaded libs to accelerate flink submission
[ https://issues.apache.org/jira/browse/FLINK-13938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-13938: -- Description: Currently, every time we start a flink cluster, the flink lib jars need to be uploaded to hdfs and then registered as Yarn local resources so that they can be downloaded to the jobmanager and all taskmanager containers. I think we could have two optimizations. # Use a pre-uploaded flink binary to avoid uploading the flink system jars # By default, the LocalResourceVisibility is APPLICATION, so the jars will be downloaded only once and shared by all taskmanager containers of the same application on the same node. However, different applications will have to download all jars every time, including flink-dist.jar. We could use the yarn public cache to eliminate the unnecessary jar downloads and make launching containers faster. Taking both FLINK-13938 and FLINK-14964 into account, this whole submission optimization feature will be done in the following steps. * Add {{yarn.provided.lib.dirs}} to configure pre-uploaded libs, which contain files that are useful for all users of the platform (i.e. different applications). So it needs to be publicly readable and will be set with {{PUBLIC}} visibility for local resources. For the first version, we can have only flink-dist, lib/, and plugins/ automatically excluded from uploading if the {{yarn.pre-uploaded.flink.path}} contains a file with the same name. This will be done in FLINK-13938. * Make all the options (including the user jar, flink-dist-*.jar, libs, etc.) support remote paths. This feature allows the Flink client to run without a local user jar and dependencies. Combined with application mode, the deployer (i.e. a Flink job management system) will have better performance. This will be done in FLINK-14964. How to use the pre-upload feature? 1. First, upload the Flink binary to the HDFS directories 2. 
Use {{yarn.provided.lib.dirs}} to specify the pre-uploaded libs A final submission command could be issued like following. {code:java} ./bin/flink run -m yarn-cluster -d \ -yD yarn.provided.lib.dirs=hdfs://myhdfs/flink/lib,hdfs://myhdfs/flink/plugins \ examples/streaming/WindowJoin.jar {code} How to use the remote path with application mode? {code:java} ./bin/flink run -m yarn-cluster -d \ -yD yarn.provided.lib.dirs=hdfs://myhdfs/flink/lib,hdfs://myhdfs/flink/plugins \ hdfs://myhdfs/jars/WindowJoin.jar {code} was: Currently, every time we start a flink cluster, flink lib jars need to be uploaded to hdfs and then register Yarn local resource so that it could be downloaded to jobmanager and all taskmanager container. I think we could have two optimizations. # Use pre-uploaded flink binary to avoid uploading of flink system jars # By default, the LocalResourceVisibility is APPLICATION, so they will be downloaded only once and shared for all taskmanager containers of a same application in the same node. However, different applications will have to download all jars every time, including the flink-dist.jar. We could use the yarn public cache to eliminate the unnecessary jars downloading and make launching container faster. Following the discussion in the user ML. [https://lists.apache.org/list.html?u...@flink.apache.org:lte=1M:Flink%20Conf%20%22yarn.flink-dist-jar%22%20Question] Take both FLINK-13938 and FLINK-14964 into account, this feature will be done in the following steps. * Enrich "\-yt/--yarnship" to support HDFS directory * Add a new config option to control whether to disable the flink-dist uploading(*Will be extended to support all files, including lib/plugin/user jars/dependencies/etc.*) * Enrich "\-yt/--yarnship" to specify local resource visibility. It is "APPLICATION" by default. It could be also configured to "PUBLIC", which means shared by all applications, or "PRIVATE" which means shared by a same user. 
(*Will be done later according to the feedback*) How to use this feature? 1. First, upload the Flink binary and user jars to the HDFS directories 2. Use "\-yt/–yarnship" to specify the pre-uploaded libs 3. Disable the automatic uploading of flink-dist via {{yarn.submission.automatic-flink-dist-upload}}: false A final submission command could be issued like following. {code:java} ./bin/flink run -m yarn-cluster -d \ -yt hdfs://myhdfs/flink/release/flink-1.11 \ -yD yarn.submission.automatic-flink-dist-upload=false \ examples/streaming/WindowJoin.jar {code} > Use pre-uploaded libs to accelerate flink submission > > > Key: FLINK-13938 > URL: https://issues.apache.org/jira/browse/FLINK-13938 > Project: Flink > Issue Type: New Feature > Components: Client / Job Submission, Deployment / YARN >Reporter: Yang Wang >Assignee: Yang Wang >
[GitHub] [flink] flinkbot edited a comment on pull request #12014: [FLINK-16529][python]Add ignore_parse_errors() method to Json format …
flinkbot edited a comment on pull request #12014: URL: https://github.com/apache/flink/pull/12014#issuecomment-625012050 ## CI report: * a3e1317e6cd01b958a933a0220a70e8b652b3136 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=711) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12013: [FLINK-17256] Suppport keyword arguments in the PyFlink Descriptor API.
flinkbot edited a comment on pull request #12013: URL: https://github.com/apache/flink/pull/12013#issuecomment-625012004 ## CI report: * c947260be5694e47c2a47152a2a1495ef4ddd991 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=710) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12003: [FLINK-10934] Support application mode for kubernetes
flinkbot edited a comment on pull request #12003: URL: https://github.com/apache/flink/pull/12003#issuecomment-624416714 ## CI report: * 2cc10767854d49f63c6677e7f9217f2c89b8b306 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=672) * 57b669055ef7388e1a70ed54263e9fca1b2f9fc8 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
flinkbot edited a comment on pull request #11960: URL: https://github.com/apache/flink/pull/11960#issuecomment-621791651 ## CI report: * 6c9c6d4f1ad70f5911f3e58461b8fb3be975df85 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=709) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner
flinkbot edited a comment on pull request #11276: URL: https://github.com/apache/flink/pull/11276#issuecomment-593234780 ## CI report: * 772b6e923e3398ddf43c1c300934fb8147d9acfb UNKNOWN * 644ba6d77ee03a378eea07ed3aff611c86154caa Travis: [FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/159244520) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=7192) * 13daff653ab7e3f74cd923d50efb19315f0b75a6 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17125) Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users
[ https://issues.apache.org/jira/browse/FLINK-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17125: Fix Version/s: 1.11.0 1.10.1 > Add a Usage Notes Page to Answer Common Questions Encountered by PyFlink Users > -- > > Key: FLINK-17125 > URL: https://issues.apache.org/jira/browse/FLINK-17125 > Project: Flink > Issue Type: Improvement > Components: API / Python, Documentation >Reporter: Huang Xingbo >Assignee: Huang Xingbo >Priority: Major > Labels: pull-request-available > Fix For: 1.10.1, 1.11.0 > > > There are several common problems that PyFlink new users often encounter. We > need to support usage notes to help them solve these problems quickly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (FLINK-17479) Occasional checkpoint failure due to null pointer exception in Flink version 1.10
[ https://issues.apache.org/jira/browse/FLINK-17479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101355#comment-17101355 ] Congxian Qiu(klion26) commented on FLINK-17479: --- [~nobleyd] thanks for reporting this problem. It seems strange from the picture you've given: if {{checkpointMetaData}} is null, how could the message {{"Could not perform checkpoint " + checkpointMetaData.getCheckpointId() + " for operator " + getName() + '.'}}[1] be printed? The error message itself tries to get the checkpointId from {{checkpointMetaData}}. Could you please share the whole jm log? A reproducible job is even better. Thanks. [1]https://github.com/apache/flink/blob/aa4eb8f0c9ce74e6b92c3d9be5dc8e8cb536239d/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L801 > Occasional checkpoint failure due to null pointer exception in Flink version > 1.10 > - > > Key: FLINK-17479 > URL: https://issues.apache.org/jira/browse/FLINK-17479 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.10.0 > Environment: Flink1.10.0 > jdk1.8.0_60 >Reporter: nobleyd >Priority: Major > Attachments: image-2020-04-30-18-44-21-630.png, > image-2020-04-30-18-55-53-779.png > > > I upgraded the standalone cluster (3 machines) from flink1.9 to the latest > flink1.10.0. My job ran normally on flink1.9 for about half a year, while some > jobs failed due to a null pointer exception during checkpointing in flink1.10.0. > Below is the exception log: > !image-2020-04-30-18-55-53-779.png! > I have checked StreamTask(882), shown below. I think the only case > is that checkpointMetaData is null, which can lead to a null pointer exception. > !image-2020-04-30-18-44-21-630.png! > I do not know why; can anyone help me? The problem only occurs in > Flink1.10.0 for now; it works well in flink1.9. 
I give some conf > info (some differing from the defaults) below, guessing that maybe it is > a configuration mistake. > some conf of my flink1.10.0: > > {code:java} > taskmanager.memory.flink.size: 71680m > taskmanager.memory.framework.heap.size: 512m > taskmanager.memory.framework.off-heap.size: 512m > taskmanager.memory.task.off-heap.size: 17920m > taskmanager.memory.managed.size: 512m > taskmanager.memory.jvm-metaspace.size: 512m > taskmanager.memory.network.fraction: 0.1 > taskmanager.memory.network.min: 1024mb > taskmanager.memory.network.max: 1536mb > taskmanager.memory.segment-size: 128kb > rest.port: 8682 > historyserver.web.port: 8782 > high-availability.jobmanager.port: 13141,13142,13143,13144 > blob.server.port: 13146,13147,13148,13149 > taskmanager.rpc.port: 13151,13152,13153,13154 > taskmanager.data.port: 13156 > metrics.internal.query-service.port: 13161,13162,13163,13164,13166,13167,13168,13169 > env.java.home: /usr/java/jdk1.8.0_60/bin/java > env.pid.dir: /home/work/flink-1.10.0{code} > > Hope someone can help me solve it. > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
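The point Congxian raises — that the error message itself dereferences the metadata it claims is null — can be illustrated with a minimal, self-contained sketch. This is not Flink's actual StreamTask code; the class and method names below are made up for illustration:

```java
// Minimal illustration of the defensive pattern under discussion: fail with a
// descriptive message instead of an NPE when checkpoint metadata is missing.
// NOT Flink's StreamTask code; CheckpointGuard and describeFailure are
// hypothetical names.
final class CheckpointMetaData {
    final long checkpointId;
    CheckpointMetaData(long checkpointId) { this.checkpointId = checkpointId; }
}

public class CheckpointGuard {
    static String describeFailure(CheckpointMetaData meta, String operatorName) {
        if (meta == null) {
            // Guard first: referencing meta.checkpointId here would itself
            // throw an NPE, masking the real problem (as in the report above).
            return "Could not perform checkpoint (metadata missing) for operator " + operatorName + ".";
        }
        return "Could not perform checkpoint " + meta.checkpointId + " for operator " + operatorName + ".";
    }

    public static void main(String[] args) {
        System.out.println(describeFailure(null, "map"));
        System.out.println(describeFailure(new CheckpointMetaData(42L), "map"));
    }
}
```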
[GitHub] [flink] docete commented on pull request #11276: [FLINK-16029][table-planner-blink] Remove register source and sink in test cases of blink planner
docete commented on pull request #11276: URL: https://github.com/apache/flink/pull/11276#issuecomment-625013123 rebased to resolve conflicts. @KurtYoung please have a look. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12014: [FLINK-16529][python]Add ignore_parse_errors() method to Json format …
flinkbot commented on pull request #12014: URL: https://github.com/apache/flink/pull/12014#issuecomment-625012050 ## CI report: * a3e1317e6cd01b958a933a0220a70e8b652b3136 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #12013: [FLINK-17256] Suppport keyword arguments in the PyFlink Descriptor API.
flinkbot commented on pull request #12013: URL: https://github.com/apache/flink/pull/12013#issuecomment-625012004 ## CI report: * c947260be5694e47c2a47152a2a1495ef4ddd991 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter
[ https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101350#comment-17101350 ] Tammy zhang edited comment on FLINK-17250 at 5/7/20, 3:39 AM: -- Thanks for paying attention to this question. I solved the problem with other methods; I guess the reason for this exception is that I mixed the StreamExecutionEnvironment and ExecutionEnvironment in a job. Now I use a single StreamExecutionEnvironment in the job, and the exception has disappeared. @[~rmetzger] was (Author: 1372114269): i solved the problem with other methods, i guess the reason for this exception is cause i mixed the StreamExecutionEnvironment and ExecutionEnvironment in a job, now i use the unitive StreamExecutionEnvironment in the job, and the exception is disappeared > interface akka.event.LoggingFilter is not assignable from class > akka.event.slf4j.Slf4jLoggingFilter > --- > > Key: FLINK-17250 > URL: https://issues.apache.org/jira/browse/FLINK-17250 > Project: Flink > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Tammy zhang >Priority: Blocker > > when I run a job in a Flink cluster, an exception occurs: > interface akka.event.LoggingFilter is not assignable from class > akka.event.slf4j.Slf4jLoggingFilter. The job runs successfully in the > IDE, but when I package it into a jar, the jar throws the exception. I do not know > what happened; please fix it as quickly as possible, thanks -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] liying919 removed a comment on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
liying919 removed a comment on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-624991186 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] liying919 commented on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
liying919 commented on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-625011211 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter
[ https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101350#comment-17101350 ] Tammy zhang commented on FLINK-17250: - I solved the problem with other methods; I guess the reason for this exception is that I mixed the StreamExecutionEnvironment and ExecutionEnvironment in a job. Now I use a single StreamExecutionEnvironment in the job, and the exception has disappeared. > interface akka.event.LoggingFilter is not assignable from class > akka.event.slf4j.Slf4jLoggingFilter > --- > > Key: FLINK-17250 > URL: https://issues.apache.org/jira/browse/FLINK-17250 > Project: Flink > Issue Type: Bug >Affects Versions: 1.10.0 >Reporter: Tammy zhang >Priority: Blocker > > when I run a job in a Flink cluster, an exception occurs: > interface akka.event.LoggingFilter is not assignable from class > akka.event.slf4j.Slf4jLoggingFilter. The job runs successfully in the > IDE, but when I package it into a jar, the jar throws the exception. I do not know > what happened; please fix it as quickly as possible, thanks -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] klion26 commented on pull request #11982: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
klion26 commented on pull request #11982: URL: https://github.com/apache/flink/pull/11982#issuecomment-625010492 @liying919 thanks for the work, I'll review the new PR later today. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-16478) add restApi to modify loglevel
[ https://issues.apache.org/jira/browse/FLINK-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17099746#comment-17099746 ] Xingxing Di edited comment on FLINK-16478 at 5/7/20, 3:33 AM: -- Hi [~trohrmann], thanks for the comments. 1. About the scope thing, I totally agree with you; we can start with the cluster-wide log level. I've edited the google doc as well. About the timer thing, I think a timer for resetting the log level would be very helpful: a user can simply configure it once for a short debug session, with no need to worry about forgetting to change it back. This references the design of Apache Storm: [https://github.com/apache/storm/blob/master/docs/dynamic-log-level-settings.md] 2. Since flink has already migrated to log4j2, I think we should at least support log4j2 and log4j. (According to the current design, we can easily support logback as well.) As you said before, log4j2 does have different means to configure the log level depending on the log4j2 version, but as far as I know, the way to configure the log level shown in the link you shared works for all log4j2 versions. Here is the [Log4j2ConfigWorker|https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY/edit#heading=h.fd0rccx9k6u7], which is similar to storm's [LogConfigManager|https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/daemon/worker/LogConfigManager.java]. I also added the [LogConfigWorkerFactory|https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY/edit#heading=h.jlgb5jpklf9k] to show how to detect the logging backend. Compatibility: * For an unsupported logging backend, dynamic log level setting will not work, but the cluster will work fine, since we do not depend on a specific implementation directly unless we detect a supported logging backend. 
* For an incompatible version ([Log4j2ConfigWorker|https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY/edit#heading=h.fd0rccx9k6u7] should work for all log4j2 versions; here we assume there is an unexpected case), dynamic log level setting may not work properly, and the flink logging system may not work properly either. 3. As above, I looked into the design of Apache Storm; it seems storm only supports log4j2. I will continue to do the research. was (Author: dixingx...@yeah.net): Hi [~trohrmann], thanks for the comments. 1. About the scope thing, i totaly agree with you, we can start with the cluster wide log level, I've edit the google doc as well. About the timer thing, i think a timer for reseting log level would be very helpful, user can simply config one time for a short time debug, no need to worry about forgetting to change it back. This referenced to the design of Apache Storm: [https://github.com/apache/storm/blob/master/docs/dynamic-log-level-settings.md] 2. Since flink already migrate to log4j2 , I think we should at least support log4j2 and log4j.(According to the current design, we can easily support logback as well.) As you said before, log4j2 do have different means to configure the log level depending on the log4j2 version, but I found the way to configure log level which in your shared link will support all the log4j2 version as i known. Here is the [Log4j2ConfigWorker|https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY/edit#heading=h.fd0rccx9k6u7] which similar to storm's [LogConfigManager . |https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/daemon/worker/LogConfigManager.java]I also added the [LogConfigWorkerFactory |https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY/edit#heading=h.jlgb5jpklf9k] to show how to detect the logging backend. 
Compatibility: * For a unsupported logging backend, Cluster will work as usual, since we do not depend on a specific implementation directly unless we detect an supported logging backend. * For an incompatible version([Log4j2ConfigWorker|https://docs.google.com/document/d/1Q02VSSBzlZaZzvxuChIo1uinw8KDQsyTZUut6_IDErY/edit#heading=h.fd0rccx9k6u7] should work for all log4j2 versions, here we assume there is an unexpected case.), dynamic log level setting may not work properly, also flink logging system may not work properly either. 3. As above, i looked into the design of apache storm, seems storm only support log4j2. I will continue to do the research. > add restApi to modify loglevel > --- > > Key: FLINK-16478 > URL: https://issues.apache.org/jira/browse/FLINK-16478 > Project: Flink > Issue Type: Improvement > Components: Runtime / REST >Reporter: xiaodao >Priority: Minor > > sometimes we may need to change loglevel to get more information to resolved > bug, now we need to stop it and modify
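The timer-based reset discussed in point 1 is easy to sketch. Below is an illustrative, language-agnostic sketch using Python's standard logging module; this is not Flink's actual (JVM-side) implementation, and the function name and signature are hypothetical:

```python
import logging
import threading

def set_level_with_timeout(level, timeout_s, logger=None):
    """Set a logger's level and schedule an automatic reset, mirroring the
    timer-based reset in Storm's dynamic-log-level design. Hypothetical
    sketch only: Flink would target log4j2/logback on the JVM instead."""
    logger = logger if logger is not None else logging.getLogger()
    previous = logger.level
    logger.setLevel(level)  # takes effect immediately
    # after timeout_s seconds, restore the previous level automatically
    timer = threading.Timer(timeout_s, logger.setLevel, args=[previous])
    timer.daemon = True  # never block process shutdown
    timer.start()
    return timer  # the caller may cancel() to keep the new level
```

A user would hit the REST endpoint once for a short debugging session; the timer guarantees the level is changed back even if they forget.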
[GitHub] [flink] yanghua commented on pull request #10191: [FLINK-14749] Migrate duration and memory size ConfigOptions in JobManagerOptions
yanghua commented on pull request #10191: URL: https://github.com/apache/flink/pull/10191#issuecomment-625009111 leaving...
[GitHub] [flink] yanghua commented on pull request #8823: [FLINK-12514] Refactor the failure checkpoint counting mechanism with ordered checkpoint id
yanghua commented on pull request #8823: URL: https://github.com/apache/flink/pull/8823#issuecomment-625009037 leaving...
[GitHub] [flink] yanghua commented on pull request #8679: [FLINK-9465] Specify a separate savepoint timeout option via CLI
yanghua commented on pull request #8679: URL: https://github.com/apache/flink/pull/8679#issuecomment-625008712 leaving...
[GitHub] [flink] yanghua commented on pull request #8461: [FLINK-12422] Remove IN_TESTS for make test code and production code consistent
yanghua commented on pull request #8461: URL: https://github.com/apache/flink/pull/8461#issuecomment-625008565 leaving...
[GitHub] [flink] yanghua commented on pull request #6807: [FLINK-10292] Generate JobGraph in StandaloneJobClusterEntrypoint only once
yanghua commented on pull request #6807: URL: https://github.com/apache/flink/pull/6807#issuecomment-625008399 leaving...
[GitHub] [flink] yanghua commented on pull request #6954: [FLINK-10701] Move modern kafka connector module into connector profile
yanghua commented on pull request #6954: URL: https://github.com/apache/flink/pull/6954#issuecomment-625008501 leaving...
[GitHub] [flink] yanghua commented on pull request #6637: [FLINK-10230] [sql-client] Support printing the query of a view
yanghua commented on pull request #6637: URL: https://github.com/apache/flink/pull/6637#issuecomment-625008276 leaving...
[GitHub] [flink] yanghua commented on pull request #6631: [FLINK-10229] [sql-client] Support listing of views
yanghua commented on pull request #6631: URL: https://github.com/apache/flink/pull/6631#issuecomment-625008165 leaving...
[GitHub] [flink] yanghua commented on pull request #6432: [FLINK-9970] [table] Add ASCII/CHR function for table/sql API
yanghua commented on pull request #6432: URL: https://github.com/apache/flink/pull/6432#issuecomment-625008061 leaving...
[GitHub] [flink] yanghua commented on pull request #6675: [FLINK-10258] [sql-client] Allow streaming sources to be present for batch executions
yanghua commented on pull request #6675: URL: https://github.com/apache/flink/pull/6675#issuecomment-625007829 leaving...
[GitHub] [flink] yanghua commented on pull request #7718: [FLINK-10718] Use IO executor in RpcService for message serialization
yanghua commented on pull request #7718: URL: https://github.com/apache/flink/pull/7718#issuecomment-625007470 leaving...
[GitHub] [flink] yanghua commented on pull request #7750: [FLINK-11639] Provide readSequenceFile for Hadoop new API
yanghua commented on pull request #7750: URL: https://github.com/apache/flink/pull/7750#issuecomment-625007557 leaving...
[GitHub] [flink] yanghua commented on pull request #7764: [FLINK-11546] Add option to manually set job ID in CLI
yanghua commented on pull request #7764: URL: https://github.com/apache/flink/pull/7764#issuecomment-625007660 leaving...
[GitHub] [flink] yanghua commented on pull request #7960: [FLINK-9854][sql-client] Allow passing multi-line input to SQL Client CLI
yanghua commented on pull request #7960: URL: https://github.com/apache/flink/pull/7960#issuecomment-625007207 leaving...
[GitHub] [flink] yanghua commented on pull request #8005: [FLINK-11818] Provide pipe transformation function for DataSet API
yanghua commented on pull request #8005: URL: https://github.com/apache/flink/pull/8005#issuecomment-625007331 leaving...
[GitHub] [flink] yanghua commented on pull request #8034: [FLINK-11733] Provide HadoopMapFunction for org.apache.hadoop.mapreduce.Mapper
yanghua commented on pull request #8034: URL: https://github.com/apache/flink/pull/8034#issuecomment-625007088 leaving...
[GitHub] [flink] liying919 commented on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
liying919 commented on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-625004154 @rmetzger It seems that an error occurred while fetching the code. Do you know what I should do?
[GitHub] [flink] flinkbot commented on pull request #12014: [FLINK-16529][python]Add ignore_parse_errors() method to Json format …
flinkbot commented on pull request #12014: URL: https://github.com/apache/flink/pull/12014#issuecomment-625004244 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 3f452049dd8d88c9cbc64910fd2b89c318f58bf9 (Thu May 07 03:13:12 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-16529).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items.
For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] liying919 removed a comment on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
liying919 removed a comment on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-624999378 @flinkbot run azure
[GitHub] [flink] flinkbot edited a comment on pull request #12002: [FLINK-16845][connector/common] Add SourceOperator which runs the Source
flinkbot edited a comment on pull request #12002: URL: https://github.com/apache/flink/pull/12002#issuecomment-624393131 ## CI report: * 6b28c708484cd7c27dd1b1c2e81a0cb8f04c4564 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=706) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
flinkbot edited a comment on pull request #11960: URL: https://github.com/apache/flink/pull/11960#issuecomment-621791651 ## CI report: * a283855e4c5042bec925a05e15727ab2db71bd1e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=656) * 6c9c6d4f1ad70f5911f3e58461b8fb3be975df85 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=709)
[jira] [Updated] (FLINK-16529) Add ignore_parse_errors() method to Json format in python API
[ https://issues.apache.org/jira/browse/FLINK-16529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-16529: --- Labels: pull-request-available (was: ) > Add ignore_parse_errors() method to Json format in python API > - > > Key: FLINK-16529 > URL: https://issues.apache.org/jira/browse/FLINK-16529 > Project: Flink > Issue Type: New Feature > Components: API / Python, Table SQL / Ecosystem >Reporter: Jark Wu >Priority: Major > Labels: pull-request-available > > We forgot to add corresponding {{ignore_parse_errors}} to {{Json}} class in > Python API in FLINK-15396. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] SteNicholas opened a new pull request #12014: [FLINK-16529][python]Add ignore_parse_errors() method to Json format …
SteNicholas opened a new pull request #12014: URL: https://github.com/apache/flink/pull/12014 ## What is the purpose of the change *The JSON format currently supports 'format.ignore-parse-errors' to skip dirty records in Java; the Python API should be consistent with Java.* ## Brief change log - *Add `ignore_parse_errors` method in `Json` class to support 'format.ignore-parse-errors'.* ## Verifying this change - *Add `test_ignore_parse_errors` method in `JsonDescriptorTests` to verify that 'format.ignore-parse-errors' is supported.* ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) - The serializers: (yes / no / don't know) - The runtime per-record code paths (performance sensitive): (yes / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know) - The S3 file system connector: (yes / no / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
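The change described above amounts to one extra builder method that sets one property. Here is a rough, self-contained sketch of the pattern; this is not the actual PyFlink class (the real `Json` descriptor lives in `pyflink.table.descriptors` and forwards its properties to the Java gateway):

```python
class Json:
    """Illustrative stand-in for the PyFlink Json format descriptor."""

    def __init__(self):
        self._properties = {'format.type': 'json'}

    def ignore_parse_errors(self, ignore_parse_errors=True):
        # Skip records that fail to parse instead of failing the job,
        # mirroring the Java-side 'format.ignore-parse-errors' option.
        self._properties['format.ignore-parse-errors'] = str(ignore_parse_errors).lower()
        return self  # builder style, so calls can be chained
```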
[GitHub] [flink] zhuzhurk commented on a change in pull request #11929: [FLINK-17369][runtime,tests] Migrate RestartPipelinedRegionFailoverStrategyBuildingTest to PipelinedRegionComputeUtilTest
zhuzhurk commented on a change in pull request #11929: URL: https://github.com/apache/flink/pull/11929#discussion_r421212075 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java ## @@ -196,6 +196,7 @@ public RestartPipelinedRegionFailoverStrategy( * @return the failover region that contains the given execution vertex */ @VisibleForTesting + @Deprecated Review comment: It seems this deprecation commit is not needed, since the method will just be removed in a following commit?
[GitHub] [flink] flinkbot commented on pull request #12013: [FLINK-17256] Suppport keyword arguments in the PyFlink Descriptor API.
flinkbot commented on pull request #12013: URL: https://github.com/apache/flink/pull/12013#issuecomment-625002978 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. ## Automated Checks Last check on commit c947260be5694e47c2a47152a2a1495ef4ddd991 (Thu May 07 03:07:44 UTC 2020) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks.
[jira] [Updated] (FLINK-17256) Suppport keyword arguments in the PyFlink Descriptor API
[ https://issues.apache.org/jira/browse/FLINK-17256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17256: --- Labels: beginners pull-request-available (was: beginners) > Suppport keyword arguments in the PyFlink Descriptor API > > > Key: FLINK-17256 > URL: https://issues.apache.org/jira/browse/FLINK-17256 > Project: Flink > Issue Type: Improvement > Components: API / Python >Reporter: sunjincheng >Assignee: shuiqiangchen >Priority: Major > Labels: beginners, pull-request-available > Fix For: 1.11.0 > > > Keyword arguments is a very commonly used feature in Python. We should > support it in the PyFlink Descriptor API to make the API more user friendly > for Python users.
[GitHub] [flink] shuiqiangchen opened a new pull request #12013: [FLINK-17256] Suppport keyword arguments in the PyFlink Descriptor API.
shuiqiangchen opened a new pull request #12013: URL: https://github.com/apache/flink/pull/12013 ## What is the purpose of the change Keyword arguments are a very commonly used feature in Python. We should support them in the PyFlink Descriptor API to make the API more user friendly for Python users. ## Brief change log - add constructor keyword arguments for descriptors in descriptors.py ## Verifying this change This change is already covered by existing tests in test_descriptors.py ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no)
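The idea behind the change can be sketched as follows: constructor keyword arguments delegate to the same-named builder methods, so keyword style and builder style produce identical properties. Class names and properties here are illustrative, not PyFlink's real descriptor internals:

```python
class Descriptor:
    """Base class: each constructor kwarg is routed to the builder method
    of the same name, e.g. fail_on_missing_field=True ends up calling
    self.fail_on_missing_field(True)."""

    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            getattr(self, name)(value)


class Json(Descriptor):
    def __init__(self, **kwargs):
        self._properties = {'format.type': 'json'}
        super().__init__(**kwargs)  # apply keyword arguments after setup

    def fail_on_missing_field(self, flag):
        self._properties['format.fail-on-missing-field'] = str(flag).lower()
        return self
```

With this, `Json(fail_on_missing_field=True)` is equivalent to `Json().fail_on_missing_field(True)`.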
[jira] [Commented] (FLINK-9900) Fix unstable test ZooKeeperHighAvailabilityITCase#testRestoreBehaviourWithFaultyStateHandles
[ https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101335#comment-17101335 ] Biao Liu commented on FLINK-9900: - Thanks [~rmetzger] for reporting. This time, the case failed to submit job to cluster. The cluster didn't start the job within 10 seconds, so timeout happened. It's hard to say which step it got stuck in. The last log of {{JobMaster}} is "Configuring application-defined state backend with job/cluster config". I have attached the relevant log (mvn-2.log). [~trohrmann] do you have any idea? > Fix unstable test > ZooKeeperHighAvailabilityITCase#testRestoreBehaviourWithFaultyStateHandles > > > Key: FLINK-9900 > URL: https://issues.apache.org/jira/browse/FLINK-9900 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination, Tests >Affects Versions: 1.5.1, 1.6.0, 1.9.0 >Reporter: zhangminglei >Assignee: Biao Liu >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.9.1, 1.10.0 > > Attachments: mvn-2.log > > Time Spent: 40m > Remaining Estimate: 0h > > https://api.travis-ci.org/v3/job/405843617/log.txt > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 124.598 sec > <<< FAILURE! - in > org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase > > testRestoreBehaviourWithFaultyStateHandles(org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) > Time elapsed: 120.036 sec <<< ERROR! 
> org.junit.runners.model.TestTimedOutException: test timed out after 12 > milliseconds > at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693) > at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) > at > java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729) > at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) > at > org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles(ZooKeeperHighAvailabilityITCase.java:244) > Results : > Tests in error: > > ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles:244 > » TestTimedOut > Tests run: 1453, Failures: 0, Errors: 1, Skipped: 29
[jira] [Updated] (FLINK-9900) Fix unstable test ZooKeeperHighAvailabilityITCase#testRestoreBehaviourWithFaultyStateHandles
[ https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Biao Liu updated FLINK-9900: Attachment: mvn-2.log
[jira] [Commented] (FLINK-16099) Translate "HiveCatalog" page of "Hive Integration" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101333#comment-17101333 ] Andrew.D.lin commented on FLINK-16099: -- Hi [~jark], I am willing to do it! > Translate "HiveCatalog" page of "Hive Integration" into Chinese > > > Key: FLINK-16099 > URL: https://issues.apache.org/jira/browse/FLINK-16099 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The page url is > https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/hive_catalog.html > The markdown file is located in > {{flink/docs/dev/table/hive/hive_catalog.zh.md}}
[GitHub] [flink] liying919 commented on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
liying919 commented on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-624999378 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
flinkbot edited a comment on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-624987148 ## CI report: * d3c197de000c8ffbb4cd20a8244e906bd90487a2 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=707) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
flinkbot edited a comment on pull request #11960: URL: https://github.com/apache/flink/pull/11960#issuecomment-621791651 ## CI report: * a283855e4c5042bec925a05e15727ab2db71bd1e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=656) * 6c9c6d4f1ad70f5911f3e58461b8fb3be975df85 UNKNOWN
[jira] [Closed] (FLINK-16102) Translate "Use Hive connector in scala shell" page of "Hive Integration" into Chinese
[ https://issues.apache.org/jira/browse/FLINK-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-16102. --- Fix Version/s: 1.11.0 Resolution: Fixed Resolved in master (1.11.0): 4d16d84b3ff4ddfdd25fe7f5766f5590a3df5fdf > Translate "Use Hive connector in scala shell" page of "Hive Integration" into > Chinese > -- > > Key: FLINK-16102 > URL: https://issues.apache.org/jira/browse/FLINK-16102 > Project: Flink > Issue Type: Sub-task > Components: chinese-translation, Documentation >Reporter: Jark Wu >Assignee: zhule >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > Time Spent: 10m > Remaining Estimate: 0h > > The page url is > https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/scala_shell_hive.html > The markdown file is located in > {{flink/docs/dev/table/hive/scala_shell_hive.zh.md}}
[GitHub] [flink] WeiZhong94 commented on a change in pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
WeiZhong94 commented on a change in pull request #11960: URL: https://github.com/apache/flink/pull/11960#discussion_r421204626

## File path: flink-python/pyflink/fn_execution/tests/test_process_mode_boot.py ##

@@ -114,7 +117,7 @@ def run_boot_py(self):
     "--control_endpoint", "localhost:",
     "--semi_persist_dir", self.tmp_dir]
-        return subprocess.call(args, stdout=sys.stdout, stderr=sys.stderr, env=self.env)

Review comment: On Windows, the PyCharm IDE replaces `sys.stdout` and `sys.stderr` with objects that do not have the attribute "fileno". If we specify stdout and stderr here, an exception will be thrown when running the tests in the PyCharm IDE.
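The `fileno` issue above can be guarded against explicitly. A minimal sketch (the helper name `usable_for_subprocess` is hypothetical, not part of the PR):

```python
import io


def usable_for_subprocess(stream):
    """Return `stream` if it is backed by a real file descriptor, else None.

    subprocess needs an OS-level descriptor for stdout/stderr; the console
    objects an IDE may install as sys.stdout/sys.stderr can lack fileno()
    entirely, or raise when it is called.
    """
    try:
        stream.fileno()
    except (AttributeError, io.UnsupportedOperation):
        return None
    return stream


# A plain StringIO has no OS-level descriptor, so the guard rejects it.
# Passing None to subprocess.call() makes the child inherit the parent's
# streams, which works in every environment.
assert usable_for_subprocess(io.StringIO()) is None
```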
[GitHub] [flink] WeiZhong94 commented on pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
WeiZhong94 commented on pull request #11960: URL: https://github.com/apache/flink/pull/11960#issuecomment-624994997 @dianfu Thanks for your review! I have addressed your comments.
[GitHub] [flink] WeiZhong94 commented on a change in pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
WeiZhong94 commented on a change in pull request #11960: URL: https://github.com/apache/flink/pull/11960#discussion_r421204158

## File path: flink-python/pyflink/table/tests/test_pandas_udf.py ##

@@ -142,7 +142,7 @@ def time_func(time_param):
     'time_param of wrong type %s !' % type(time_param[0])
     return time_param
-timestamp_value = datetime.datetime(1970, 1, 1, 0, 0, 0, 123000)
+timestamp_value = datetime.datetime(1970, 1, 2, 0, 0, 0, 123000)

Review comment: On Windows, `time.mktime()` does not support negative UTC timestamp values, so we need to ensure the datetime object won't produce one.
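A quick sanity check of why moving the test value to 1970-01-02 is safe in every timezone (a sketch, not test code from the PR):

```python
import datetime
import time

# The updated test value from the PR: one day after the epoch.
ts = datetime.datetime(1970, 1, 2, 0, 0, 0, 123000)

# mktime() interprets the struct_time in the *local* timezone. Even at
# UTC+14 (the largest real offset), 1970-01-02 00:00 local time is still
# 10 hours after the epoch, so the result is positive on any machine.
epoch_seconds = time.mktime(ts.timetuple())
assert epoch_seconds > 0

# By contrast, 1970-01-01 00:00 local time maps to a negative epoch value
# in any timezone east of UTC, which is what broke on Windows.
```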
[GitHub] [flink] WeiZhong94 commented on a change in pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
WeiZhong94 commented on a change in pull request #11960: URL: https://github.com/apache/flink/pull/11960#discussion_r421203791

## File path: flink-python/src/main/resources/pyflink-udf-runner.sh ##

@@ -40,4 +40,4 @@ if [[ "$_PYTHON_WORKING_DIR" != "" ]]; then fi
log="$BOOT_LOG_DIR/flink-python-udf-boot.log"
-${python} -m pyflink.fn_execution.boot $@ 2>&1 | tee -a ${log}

Review comment: The log file is no longer shared between different tasks, so the `-a` (append) option of `tee` is not necessary anymore.
[GitHub] [flink] WeiZhong94 commented on a change in pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
WeiZhong94 commented on a change in pull request #11960: URL: https://github.com/apache/flink/pull/11960#discussion_r421203462

## File path: flink-python/pyflink/pyflink_gateway_server.py ##

@@ -0,0 +1,209 @@

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import getpass
import glob
import os
import platform
import re
import signal
import socket
import sys
from collections import namedtuple
from string import Template
from subprocess import Popen, PIPE, check_output

from pyflink.find_flink_home import _find_flink_home, _find_flink_source_root


def on_windows():
    return platform.system() == "Windows"


def find_java_executable():
    java_executable = "java.exe" if on_windows() else "java"
    flink_home = _find_flink_home()
    flink_conf_path = os.path.join(flink_home, "conf", "flink-conf.yaml")
    java_home = None

    if os.path.isfile(flink_conf_path):
        with open(flink_conf_path, "r") as f:
            flink_conf_yaml = f.read()
        java_homes = re.findall(r'^[ ]*env\.java\.home[ ]*: ([^#]*).*$', flink_conf_yaml)
        if len(java_homes) > 1:
            java_home = java_homes[len(java_homes) - 1].strip()

    if java_home is None and "JAVA_HOME" in os.environ:
        java_home = os.environ["JAVA_HOME"]

    if java_home is not None:
        java_executable = os.path.join(java_home, "bin", java_executable)

    return java_executable


def construct_log_settings():
    templates = [
        "-Dlog.file=${flink_log_dir}/flink-${flink_ident_string}-python-${hostname}.log",
        "-Dlog4j.configuration=${flink_conf_dir}/log4j-cli.properties",
        "-Dlog4j.configurationFile=${flink_conf_dir}/log4j-cli.properties",
        "-Dlogback.configurationFile=${flink_conf_dir}/logback.xml"
    ]

    flink_home = _find_flink_home()
    flink_conf_dir = os.path.join(flink_home, "conf")
    flink_log_dir = os.path.join(flink_home, "log")
    if "FLINK_IDENT_STRING" in os.environ:
        flink_ident_string = os.environ["FLINK_IDENT_STRING"]
    else:
        flink_ident_string = getpass.getuser()
    hostname = socket.gethostname()
    log_settings = []
    for template in templates:
        log_settings.append(Template(template).substitute(
            flink_conf_dir=flink_conf_dir,
            flink_log_dir=flink_log_dir,
            flink_ident_string=flink_ident_string,
            hostname=hostname))
    return log_settings


def construct_classpath():
    flink_home = _find_flink_home()
    if on_windows():
        # The command length is limited on Windows. To avoid the problem we should shorten the
        # command length as much as possible.
        lib_jars = os.path.join(flink_home, "lib", "*")
    else:
        lib_jars = os.pathsep.join(glob.glob(os.path.join(flink_home, "lib", "*.jar")))

    flink_python_jars = glob.glob(os.path.join(flink_home, "opt", "flink-python*.jar"))
    if len(flink_python_jars) < 1:
        print("The flink-python jar is not found in the opt folder of the FLINK_HOME: %s"
              % flink_home)
        return lib_jars
    flink_python_jar = flink_python_jars[0]

    return os.pathsep.join([lib_jars, flink_python_jar])


def download_apache_avro():
    """
    Currently we need to download the Apache Avro manually to avoid test failure caused by the avro
    format sql jar. See https://issues.apache.org/jira/browse/FLINK-17417. If the issue is fixed,
    this method could be removed. Using maven command copy the jars in repository to avoid accessing
    external network.
    """
    flink_source_root = _find_flink_source_root()
    avro_jar_pattern = os.path.join(
        flink_source_root, "flink-formats", "flink-avro", "target", "avro*.jar")
    if len(glob.glob(avro_jar_pattern)) > 0:
        # the avro jar already existed, just return.
        return
    mvn = "mvn.cmd" if on_windows() else "mvn"
    avro_version_output = check_output(
        [mvn, "help:evaluate",
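The `env.java.home` lookup in `find_java_executable` above can be sketched as a simpler line-by-line scan (`java_home_from_conf` is a hypothetical helper for illustration, not code from the PR; the last uncommented entry wins):

```python
def java_home_from_conf(text):
    """Return the value of the last uncommented `env.java.home` entry in a
    flink-conf.yaml-style text, with any trailing inline comment stripped,
    or None if the key is absent."""
    value = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("env.java.home"):
            _, _, rest = line.partition(":")
            # Drop an inline comment such as "  # overrides earlier entries".
            value = rest.split("#", 1)[0].strip()
    return value


conf = """\
# env.java.home: /commented/out
env.java.home: /opt/jdk8
env.java.home: /opt/jdk11  # later entries override earlier ones
"""
assert java_home_from_conf(conf) == "/opt/jdk11"
```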
[jira] [Created] (FLINK-17551) Documentation of stable releases are actually built on top of snapshot code bases.
Xintong Song created FLINK-17551: Summary: Documentation of stable releases are actually built on top of snapshot code bases. Key: FLINK-17551 URL: https://issues.apache.org/jira/browse/FLINK-17551 Project: Flink Issue Type: Bug Components: Project Website Affects Versions: 1.10.0 Reporter: Xintong Song When browsing Flink's documentation on the project website, we can choose from both the latest snapshot version and the stable release versions. However, it seems the documentation of a stable release version is actually built on top of the snapshot version of the release branch. E.g., currently the latest stable release is 1.10.0, but the documentation described as "Flink 1.10 (Latest stable release)" is actually built with 1.10-SNAPSHOT. As a consequence, users on release 1.10.0 might be confused by documentation changes meant for 1.10.1. [This comment|https://github.com/apache/flink/pull/11300#issuecomment-624776199] shows one such confusion.
[GitHub] [flink] WeiZhong94 commented on a change in pull request #11960: [FLINK-12717][python] Support running PyFlink on Windows
WeiZhong94 commented on a change in pull request #11960: URL: https://github.com/apache/flink/pull/11960#discussion_r421203351

## File path: flink-python/src/main/java/org/apache/flink/client/python/PythonGatewayServer.java ##

@@ -71,15 +75,42 @@ public static void main(String[] args) throws IOException, ExecutionException, I } try {
-    // Exit on EOF or broken pipe. This ensures that the server dies
-    // if its parent program dies.
-    while (System.in.read() != -1) {

Review comment: On Windows, `System.in.read()` always returns -1, and it returns immediately.
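The mechanism the removed loop relied on can be illustrated with a plain OS pipe (a sketch of the idea, not Flink code): once the writing end disappears, a blocking read returns EOF at once, which is how the `System.in.read() != -1` loop detected parent death on Unix-like systems.

```python
import os

# Create a pipe and close the writing end, simulating the parent process
# going away while the child blocks reading its stdin.
read_fd, write_fd = os.pipe()
os.close(write_fd)

# With no writer left, read() does not block: it returns b"" (EOF) at once.
assert os.read(read_fd, 1) == b""
os.close(read_fd)
```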
[GitHub] [flink] leonardBang commented on a change in pull request #12010: [FLINK-17286][connectors / filesystem]Integrate json to file system connector
leonardBang commented on a change in pull request #12010: URL: https://github.com/apache/flink/pull/12010#discussion_r421202157 ## File path: flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemTableSource.java ## @@ -121,6 +127,14 @@ public TableSchema getSchema() { return schema; } + @Override + public TypeInformation createTypeInformation(DataType dataType) { Review comment: `JsonRowDataDeserializationSchema` needs this type info.
[GitHub] [flink] flinkbot edited a comment on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
flinkbot edited a comment on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-624987148 ## CI report: * d3c197de000c8ffbb4cd20a8244e906bd90487a2 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=707)
[GitHub] [flink] flinkbot edited a comment on pull request #12002: [FLINK-16845][connector/common] Add SourceOperator which runs the Source
flinkbot edited a comment on pull request #12002: URL: https://github.com/apache/flink/pull/12002#issuecomment-624393131 ## CI report: * 4a3d66188bfe8484de52157d59d5850d3559073c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=655) * 6b28c708484cd7c27dd1b1c2e81a0cb8f04c4564 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=706)
[GitHub] [flink] liying919 commented on pull request #12012: [FLINK-17289][docs]Translate tutorials/etl.md to Chinese
liying919 commented on pull request #12012: URL: https://github.com/apache/flink/pull/12012#issuecomment-624991186 @flinkbot run azure
[GitHub] [flink] JingsongLi commented on a change in pull request #12010: [FLINK-17286][connectors / filesystem]Integrate json to file system connector
JingsongLi commented on a change in pull request #12010: URL: https://github.com/apache/flink/pull/12010#discussion_r421201261

## File path: flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonRowDataFileInputFormat.java ##

@@ -0,0 +1,248 @@

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.flink.formats.json;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.core.fs.FileInputSplit;
import org.apache.flink.core.fs.Path;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.factories.FileSystemFormatFactory;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.utils.PartitionPathUtils;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.stream.Collectors;

import static org.apache.flink.util.Preconditions.checkNotNull;

/**
 * A {@link JsonRowDataFileInputFormat} is responsible for reading {@link RowData} records
 * from json format files.
 */
public class JsonRowDataFileInputFormat extends AbstractJsonFileInputFormat {

Review comment: This can be an inner class in the FormatFactory, just like parquet and orc.
[jira] [Updated] (FLINK-17544) NPE JDBCUpsertOutputFormat
[ https://issues.apache.org/jira/browse/FLINK-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17544: Fix Version/s: 1.10.2 1.11.0

> NPE JDBCUpsertOutputFormat
>
> Key: FLINK-17544
> URL: https://issues.apache.org/jira/browse/FLINK-17544
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / JDBC
> Affects Versions: 1.10.0
> Reporter: John Lonergan
> Priority: Major
> Fix For: 1.11.0, 1.10.2
>
> Encountered a situation where I get an NPE from JDBCUpsertOutputFormat. This occurs when close is called before open.
> This happened because I had a sink with a final field of type JDBCUpsertOutputFormat. The open operation of my sink was slow (blocked on something else) and open on the JDBCUpsertOutputFormat had not yet been called. In the meantime the job was cancelled, which caused close on my sink to be called, which then called close on the JDBCUpsertOutputFormat. This throws an NPE due to the lack of a guard on an internal field that is only initialised in the JDBCUpsertOutputFormat open operation.
> The close method already guards one potentially null value ..
> {code:java}
> if (this.scheduledFuture != null) {
> {code}
> But needs the additional guard below ...
> {code:java}
> if (jdbcWriter != null) // << THIS LINE NEEDED TO GUARD UNINITIALISED VAR
>     try {
>         jdbcWriter.close();
>     } catch (SQLException e) {
>         LOG.warn("Close JDBC writer failed.", e);
>     }
> {code}
> See also FLINK-17545
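The hazard described in this report generalizes beyond JDBC: any resource acquired in `open()` must be null-guarded in `close()`, because cancellation can invoke `close()` before `open()` ever ran. A minimal Python sketch of the pattern (illustrative only, not Flink code):

```python
class GuardedWriter:
    """Resources are acquired in open(); close() must tolerate the
    never-opened state instead of assuming initialisation happened."""

    def __init__(self):
        self._conn = None  # only initialised in open()

    def open(self):
        self._conn = object()  # stand-in for acquiring a real connection

    def close(self):
        # Guard: close() may run before open() if the job is cancelled early.
        if self._conn is not None:
            self._conn = None


w = GuardedWriter()
w.close()  # must not raise even though open() was never called
assert w._conn is None
```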