[GitHub] [flink] flinkbot edited a comment on issue #9203: [FLINK-13375][table-api] Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9203: [FLINK-13375][table-api] Improve 
config names in ExecutionConfigOptions and OptimizerConfigOptions
URL: https://github.com/apache/flink/pull/9203#issuecomment-514046368
 
 
   ## CI report:
   
   * f5e680b52e6a85e85642fc22a41724c5a452505c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120109078)
   * c1388ab2867ad134b2300ccad6ca519eff547ccb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120216941)
   * b70392e5d7337da2a416316068397f92925a0d7e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120455507)
   * 46e98c30bbe0709b2261ddd5ddb2963bf91ccfcc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120935409)
   * 26249aaea8e5459a158bd766b306a4744fee6f80 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121005721)
   




[GitHub] [flink] flinkbot commented on issue #9254: [FLINK-13427][hive] HiveCatalog's createFunction fails when function …

2019-07-28 Thread GitBox
flinkbot commented on issue #9254: [FLINK-13427][hive] HiveCatalog's 
createFunction fails when function …
URL: https://github.com/apache/flink/pull/9254#issuecomment-515852651
 
 
   ## CI report:
   
   * 9ab2841490d5dd93cfb4fec046240dcb43df19bc : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121010407)
   




[GitHub] [flink] flinkbot commented on issue #9254: [FLINK-13427][hive] HiveCatalog's createFunction fails when function …

2019-07-28 Thread GitBox
flinkbot commented on issue #9254: [FLINK-13427][hive] HiveCatalog's 
createFunction fails when function …
URL: https://github.com/apache/flink/pull/9254#issuecomment-515850940
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] lirui-apache commented on issue #9254: [FLINK-13427][hive] HiveCatalog's createFunction fails when function …

2019-07-28 Thread GitBox
lirui-apache commented on issue #9254: [FLINK-13427][hive] HiveCatalog's 
createFunction fails when function …
URL: https://github.com/apache/flink/pull/9254#issuecomment-515850791
 
 
   cc @JingsongLi @KurtYoung @xuefuz @zjuwangg 




[jira] [Updated] (FLINK-13427) HiveCatalog's createFunction fails when function name has upper-case characters

2019-07-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13427:
---
Labels: pull-request-available  (was: )

> HiveCatalog's createFunction fails when function name has upper-case 
> characters
> ---
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new 
> HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> Seems there are some bugs in HiveCatalog when function names use upper-case 
> characters.
> Maybe we should call normalizeName in createFunction...
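
For illustration only (the helper below is hypothetical, not necessarily the actual fix in the PR), normalizing could be as small as lower-casing the name before it is handed to the metastore:
{code:java}
import java.util.Locale;

final class FunctionNameNormalizer {
	// Hive's metastore stores function names lower-cased, so normalizing the
	// name before createFunction/getFunction makes lookups case-insensitive.
	static String normalize(String functionName) {
		return functionName.trim().toLowerCase(Locale.ROOT);
	}
}
{code}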





[GitHub] [flink] lirui-apache opened a new pull request #9254: [FLINK-13427][hive] HiveCatalog's createFunction fails when function …

2019-07-28 Thread GitBox
lirui-apache opened a new pull request #9254: [FLINK-13427][hive] HiveCatalog's 
createFunction fails when function …
URL: https://github.com/apache/flink/pull/9254
 
 
   …name has upper-case characters
   
   
   
   ## What is the purpose of the change
   
   To fix the issue that function names are not normalized in HiveCatalog.
   
   
   ## Brief change log
   
 - Normalize the function name when a Hive function is created.
 - Added a common base class for `HiveCatalogHiveMetadataTest` and 
`HiveCatalogGenericMetadataTest`, and a new test case to make sure HiveCatalog 
is case-insensitive when creating functions.
   
   
   ## Verifying this change
   
   New test case added.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   




[GitHub] [flink] flinkbot edited a comment on issue #9175: [FLINK-12038] [test] fix YARNITCase random fail

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9175: [FLINK-12038] [test] fix YARNITCase 
random fail
URL: https://github.com/apache/flink/pull/9175#issuecomment-513163926
 
 
   ## CI report:
   
   * f236f1934804c0b9b2d5df4564fad5f2c5ed9821 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119766106)
   * 0d91c6c63c1e07526a3f9441f0ec5c439fb48dec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120124248)
   * f6fa5e97c05927fbccf02f586f7c3b26303e6efd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120426579)
   * 9f3c1c71b93b1813c85bb90bf4e2ef4b69e7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120547384)
   * d7b3f660d6c7a49ac36d24af21060a0ea2d47466 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121005147)
   




[GitHub] [flink] ifndef-SleePy commented on issue #9147: [FLINK-11631][test] Harden TaskExecutorITCase

2019-07-28 Thread GitBox
ifndef-SleePy commented on issue #9147: [FLINK-11631][test] Harden 
TaskExecutorITCase
URL: https://github.com/apache/flink/pull/9147#issuecomment-515848362
 
 
   Will close and reopen to trigger the Travis build again




[jira] [Updated] (FLINK-11631) TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination unstable on Travis

2019-07-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11631:
---
Labels: pull-request-available test-stability  (was: test-stability)

> TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination unstable on 
> Travis
> 
>
> Key: FLINK-11631
> URL: https://issues.apache.org/jira/browse/FLINK-11631
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: Biao Liu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.9.0
>
>
> The {{TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination}} is 
> unstable on Travis. It fails with 
> {code}
> 16:12:04.644 [ERROR] 
> testJobReExecutionAfterTaskExecutorTermination(org.apache.flink.runtime.taskexecutor.TaskExecutorITCase)
>   Time elapsed: 1.257 s  <<< ERROR!
> org.apache.flink.util.FlinkException: Could not close resource.
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorITCase.teardown(TaskExecutorITCase.java:83)
> Caused by: org.apache.flink.util.FlinkException: Error while shutting the 
> TaskExecutor down.
> Caused by: org.apache.flink.util.FlinkException: Could not properly shut down 
> the TaskManager services.
> Caused by: java.lang.IllegalStateException: NetworkBufferPool is not empty 
> after destroying all LocalBufferPools
> {code} 
> https://api.travis-ci.org/v3/job/493221318/log.txt
> The problem seems to be caused by the {{TaskExecutor}} not properly waiting 
> for the termination of all running {{Tasks}}. Due to this, there is a race 
> condition which can leave some buffers unreturned to the {{BufferPool}}.





[GitHub] [flink] ifndef-SleePy opened a new pull request #9147: [FLINK-11631][test] Harden TaskExecutorITCase

2019-07-28 Thread GitBox
ifndef-SleePy opened a new pull request #9147: [FLINK-11631][test] Harden 
TaskExecutorITCase
URL: https://github.com/apache/flink/pull/9147
 
 
   ## What is the purpose of the change
   
   * Harden `TaskExecutorITCase#testJobReExecutionAfterTaskExecutorTermination`
   * Harden `TaskExecutorITCase#testJobRecoveryWithFailingTaskExecutor`
   
   ## Brief change log
   
   * Improve `MiniCluster` by clearing terminated components in case of reuse
   * Use a map instead of a list to keep task managers in `MiniCluster`, since 
the set of task managers might change in some test cases
   * Expose more `MiniCluster` components to `TestingMiniCluster` to enrich 
testing
   * Replace `terminateTaskExecutor(index)` with 
`terminateTaskExecutorRandomly()` in `TestingMiniCluster`
   * Add a try-catch in 
`TaskExecutorITCase#testJobRecoveryWithFailingTaskExecutor` to avoid unexpected 
exceptions caused by terminating a task executor
   
   ## Verifying this change
   
   * This change is already covered by existing tests
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




[GitHub] [flink] ifndef-SleePy closed pull request #9147: [FLINK-11631][test] Harden TaskExecutorITCase

2019-07-28 Thread GitBox
ifndef-SleePy closed pull request #9147: [FLINK-11631][test] Harden 
TaskExecutorITCase
URL: https://github.com/apache/flink/pull/9147
 
 
   




[GitHub] [flink] flinkbot commented on issue #9253: [Flink-12164][tests] Harden JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout

2019-07-28 Thread GitBox
flinkbot commented on issue #9253: [Flink-12164][tests] Harden 
JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout
URL: https://github.com/apache/flink/pull/9253#issuecomment-515843418
 
 
   ## CI report:
   
   * ee881273027480bd8901c2a06cc4f082f1c0604c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121007482)
   




[GitHub] [flink] flinkbot commented on issue #9253: [Flink-12164][tests] Harden JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout

2019-07-28 Thread GitBox
flinkbot commented on issue #9253: [Flink-12164][tests] Harden 
JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout
URL: https://github.com/apache/flink/pull/9253#issuecomment-515842135
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Commented] (FLINK-13372) Timestamp conversion bug in non-blink Table/SQL runtime

2019-07-28 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894932#comment-16894932
 ] 

Shuyi Chen commented on FLINK-13372:


I'll provide a fix for 1.9.0.

> Timestamp conversion bug in non-blink Table/SQL runtime
> ---
>
> Key: FLINK-13372
> URL: https://issues.apache.org/jira/browse/FLINK-13372
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1, 1.9.0
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Critical
>
> Currently, in the non-blink table/SQL runtime, Flink used 
> SqlFunctions.internalToTimestamp(long v) from Calcite to convert event time 
> (in long) to java.sql.Timestamp.
> {code:java}
> public static Timestamp internalToTimestamp(long v) {
>   return new Timestamp(v - (long) LOCAL_TZ.getOffset(v));
> }
> {code}
> However, as discussed on the recent Calcite mailing list, 
> SqlFunctions.internalToTimestamp() assumes the input timestamp value is in 
> the current JVM’s default timezone (which is unusual), NOT milliseconds since 
> epoch, and converts that local-timezone value to milliseconds since epoch, 
> which the java.sql.Timestamp constructor takes. Therefore, the results will 
> not only be wrong, but will also change when the job runs on machines in 
> different timezones. (The only exception is when all your production machines 
> use the UTC timezone.)
> Here is an example: if the user input value is 0 (00:00:00 UTC on 1 January 
> 1970) and the table/SQL runtime runs on a machine in PST (UTC-8), the output 
> sql.Timestamp after SqlFunctions.internalToTimestamp() becomes 28800000 
> milliseconds since epoch (08:00:00 UTC on 1 January 1970); with the same 
> input, if the table/SQL runtime runs again on a machine in EST (UTC-5), the 
> output sql.Timestamp becomes 18000000 milliseconds since epoch (05:00:00 UTC 
> on 1 January 1970).
> Currently, there are unit tests for the table/SQL API event time input/output 
> (e.g., GroupWindowITCase.testEventTimeTumblingWindow() and 
> SqlITCase.testDistinctAggWithMergeOnEventTimeSessionGroupWindow()). They all 
> pass today because we compare the string format of the time, which ignores 
> the timezone. If you step into the code, the actual java.sql.Timestamp value 
> is wrong and changes as the tests run in different timezones (e.g., one can 
> use -Duser.timezone=PST to change the current JVM’s default timezone)
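
For reference, a minimal standalone sketch of the shift described above (the class is made up purely for illustration; it only mirrors Calcite's conversion to show the arithmetic):
{code:java}
import java.sql.Timestamp;
import java.util.TimeZone;

public class InternalToTimestampDemo {
	private static final TimeZone LOCAL_TZ = TimeZone.getDefault();

	// Mirrors Calcite's SqlFunctions.internalToTimestamp(long v).
	static Timestamp internalToTimestamp(long v) {
		return new Timestamp(v - (long) LOCAL_TZ.getOffset(v));
	}

	public static void main(String[] args) {
		// Run with -Duser.timezone=America/Los_Angeles: getOffset(0) is -28800000,
		// so input 0 becomes 28800000 ms since epoch (08:00:00 UTC, 1 Jan 1970).
		System.out.println(internalToTimestamp(0L).getTime());
	}
}
{code}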





[GitHub] [flink] ifndef-SleePy opened a new pull request #9253: [Flink-12164][tests] Harden JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout

2019-07-28 Thread GitBox
ifndef-SleePy opened a new pull request #9253: [Flink-12164][tests] Harden 
JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout
URL: https://github.com/apache/flink/pull/9253
 
 
   ## What is the purpose of the change
   
   * Harden unstable case, 
`JobMasterTest`.`testJobFailureWhenTaskExecutorHeartbeatTimeout`
   
   ## Brief change log
   
   * Enrich `TestingHeartbeatServices` to support manually triggering heartbeat 
timeouts for testing
   * Make `JobMasterTest`.`testJobFailureWhenTaskExecutorHeartbeatTimeout` no 
longer rely on real timeouts
   * Correct invalid formatting in `JobMasterTest`
   
   ## Verifying this change
   
   * This change is already covered by existing tests, such as `JobMasterTest`
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




[GitHub] [flink] hongtao12310 opened a new pull request #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-28 Thread GitBox
hongtao12310 opened a new pull request #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237
 
 
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[GitHub] [flink] hongtao12310 commented on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-28 Thread GitBox
hongtao12310 commented on issue #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237#issuecomment-515841229
 
 
   Thanks @lirui-apache for your comment.
   
   I will reopen the PR and add a description.
   
   ==> Maybe we can just call new HiveConf(HadoopUtils.getHadoopConfiguration, 
HiveConf.class) to create the HiveConf? 
   Yes, I think that is reasonable.
   
   ===> I think we can add some test case similar to HadoopConfigLoadingTest
   Do you mean we should check that the HiveConf properties are loaded from the 
Hadoop configuration? And should these test cases go into 
HiveCatalogITCase.java?
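
   For reference, a hedged sketch of the suggested construction (assuming Flink's `HadoopUtils.getHadoopConfiguration(Configuration)` and Hive's `HiveConf(Configuration, Class)` constructor; the exact call sites in the PR may differ):
   
   ```java
   // Build the HiveConf on top of the cluster-side Hadoop configuration so that
   // NameNode HA settings (fs.defaultFS, dfs.nameservices, ...) are inherited.
   org.apache.hadoop.conf.Configuration hadoopConf =
       HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration());
   HiveConf hiveConf = new HiveConf(hadoopConf, HiveConf.class);
   ```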
   
   
   
   




[GitHub] [flink] flinkbot edited a comment on issue #9203: [FLINK-13375][table-api] Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9203: [FLINK-13375][table-api] Improve 
config names in ExecutionConfigOptions and OptimizerConfigOptions
URL: https://github.com/apache/flink/pull/9203#issuecomment-514046368
 
 
   ## CI report:
   
   * f5e680b52e6a85e85642fc22a41724c5a452505c : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120109078)
   * c1388ab2867ad134b2300ccad6ca519eff547ccb : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120216941)
   * b70392e5d7337da2a416316068397f92925a0d7e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120455507)
   * 46e98c30bbe0709b2261ddd5ddb2963bf91ccfcc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120935409)
   * 26249aaea8e5459a158bd766b306a4744fee6f80 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121005721)
   




[GitHub] [flink] flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog 
should add hive version in conf
URL: https://github.com/apache/flink/pull/9232#issuecomment-515295549
 
 
   ## CI report:
   
   * 199e67fb426db2cf4389112ba914089e5dee3f35 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120801913)
   * 4b4ef80ebf2dabdc924407eae6a7b5d079ef6ce6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121001697)
   




[GitHub] [flink] flinkbot edited a comment on issue #9252: [FLINK-13051][runtime] Replace the non-selectable stream task with the input-selectable one

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9252: [FLINK-13051][runtime] Replace the 
non-selectable stream task with the input-selectable one
URL: https://github.com/apache/flink/pull/9252#issuecomment-515825469
 
 
   ## CI report:
   
   * c884b9ebd4ac020c4ecf6bdff2e4c7c003471395 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121001403)
   




[GitHub] [flink] flinkbot edited a comment on issue #9175: [FLINK-12038] [test] fix YARNITCase random fail

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9175: [FLINK-12038] [test] fix YARNITCase 
random fail
URL: https://github.com/apache/flink/pull/9175#issuecomment-513163926
 
 
   ## CI report:
   
   * f236f1934804c0b9b2d5df4564fad5f2c5ed9821 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119766106)
   * 0d91c6c63c1e07526a3f9441f0ec5c439fb48dec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120124248)
   * f6fa5e97c05927fbccf02f586f7c3b26303e6efd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120426579)
   * 9f3c1c71b93b1813c85bb90bf4e2ef4b69e7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120547384)
   * d7b3f660d6c7a49ac36d24af21060a0ea2d47466 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121005147)
   




[GitHub] [flink] shuai-xu commented on a change in pull request #9175: [FLINK-12038] [test] fix YARNITCase random fail

2019-07-28 Thread GitBox
shuai-xu commented on a change in pull request #9175: [FLINK-12038] [test] fix 
YARNITCase random fail
URL: https://github.com/apache/flink/pull/9175#discussion_r308049252
 
 

 ##
 File path: 
flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java
 ##
 @@ -163,7 +160,7 @@
 
protected static File yarnSiteXML = null;
 
-   private YarnClient yarnClient = null;
+   protected YarnClient yarnClient = null;
 
 Review comment:
   Done, I didn't see it.




[GitHub] [flink] shuai-xu commented on a change in pull request #9175: [FLINK-12038] [test] fix YARNITCase random fail

2019-07-28 Thread GitBox
shuai-xu commented on a change in pull request #9175: [FLINK-12038] [test] fix 
YARNITCase random fail
URL: https://github.com/apache/flink/pull/9175#discussion_r308049271
 
 

 ##
 File path: flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNITCase.java
 ##
 @@ -117,17 +122,35 @@ public void testPerJobMode() throws Exception {
 			assertThat(jobResult, is(notNullValue()));
 			assertThat(jobResult.getSerializedThrowable().isPresent(), is(false));
 
-			waitUntilApplicationFinished(applicationId, yarnAppTerminateTimeout);
+			waitApplicationFinishedElseKillIt(applicationId, yarnAppTerminateTimeout, yarnClusterDescriptor);
 		} finally {
 			if (clusterClient != null) {
 				clusterClient.shutdown();
 			}
-
-			if (applicationId != null) {
-				yarnClusterDescriptor.killCluster(applicationId);
-			}
 		}
 	});
 }
+
+	private void waitApplicationFinishedElseKillIt(
+			ApplicationId applicationId,
+			Duration timeout,
+			YarnClusterDescriptor yarnClusterDescriptor) throws Exception {
+		Deadline deadline = Deadline.now().plus(timeout);
+		YarnApplicationState state = yarnClient.getApplicationReport(applicationId).getYarnApplicationState();
+
+		while (state != YarnApplicationState.FINISHED) {
+			if (state == YarnApplicationState.FAILED || state == YarnApplicationState.KILLED) {
+				Assert.fail("Application became FAILED or KILLED while expecting FINISHED");
+			} else {
+				sleep(sleepIntervalInMS);
+			}
+			if (deadline.isOverdue()) {
+				yarnClusterDescriptor.killCluster(applicationId);
+				Assert.fail("Application didn't finish before timeout");
+			}
+			state = yarnClient.getApplicationReport(applicationId).getYarnApplicationState();
 
 Review comment:
   OK




[jira] [Commented] (FLINK-13450) Adjust tests to tolerate arithmetic differences between x86 and ARM

2019-07-28 Thread wangxiyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894913#comment-16894913
 ] 

wangxiyuan commented on FLINK-13450:


This can be fixed by using `StrictMath` instead of `Math` to ensure the math 
results are the same on different platforms.
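
For example (the input values are illustrative only):
{code:java}
// Math.pow may use platform-specific intrinsics, so the last bits can differ
// between x86 and ARM; StrictMath.pow follows fdlibm and is reproducible.
double platformDependent = Math.pow(1.0000001d, 12345.678d);
double reproducible = StrictMath.pow(1.0000001d, 12345.678d);
{code}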

> Adjust tests to tolerate arithmetic differences between x86 and ARM
> ---
>
> Key: FLINK-13450
> URL: https://issues.apache.org/jira/browse/FLINK-13450
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Priority: Major
>
> Certain arithmetic operations have different precision/rounding on ARM versus 
> x86.
> Tests using floating point numbers should be changed to tolerate a certain 
> minimal deviation.
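
For instance, floating-point comparisons in such tests can use JUnit's delta-based assertion instead of exact equality (the tolerance below is only an example value):
{code:java}
import static org.junit.Assert.assertEquals;

// Tolerate a small platform-dependent deviation instead of exact equality.
assertEquals(expectedValue, actualValue, 1e-9);
{code}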





[GitHub] [flink] wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#discussion_r308046419
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormatTest.java
 ##
 @@ -314,10 +315,40 @@ public void testJDBCInputFormatWithParallelismAndGenericSplitting() throws IOExc
 		jdbcInputFormat.closeInputFormat();
 	}
 
+	@Test
+	public void testJDBCInputFormatWithParallelismAndTimestampSplitting() throws IOException {
+		Serializable[][] queryParameters = new LocalDateTime[2][2];
 
 Review comment:
   `Serializable` -> `LocalDateTime` ?




[GitHub] [flink] wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#discussion_r308039934
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
 ##
 @@ -292,8 +302,10 @@ public Row nextRecord(Row row) throws IOException {
 		if (!hasNext) {
 			return null;
 		}
+		TypeInformation[] fieldTypes = rowTypeInfo.getFieldTypes();
 		for (int pos = 0; pos < row.getArity(); pos++) {
-			row.setField(pos, resultSet.getObject(pos + 1));
+			row.setField(pos, JDBCUtils.timeObjectCast(
 
 Review comment:
   Can we combine `JDBCUtils.timeObjectCast` with 
`JDBCUtils.getFieldFromResultSet`, e.g. `getField(ResultSet set, int index, int 
logicalType, Class physicalType)`? We should have an inverse method of 
`JDBCUtils.setField`.
   
   It is strange to use a `timeObjectCast` here, because the object may not be 
a time object.
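
   A rough sketch of what such an inverse accessor could look like (the name, dispatch, and type coverage are assumptions for discussion, not the actual flink-jdbc API):
   
   ```java
   public static Object getField(java.sql.ResultSet set, int index, int logicalType, Class<?> physicalType)
           throws java.sql.SQLException {
       switch (logicalType) {
           case java.sql.Types.DATE:
               java.sql.Date date = set.getDate(index + 1);
               return physicalType == java.time.LocalDate.class ? date.toLocalDate() : date;
           case java.sql.Types.TIME:
               java.sql.Time time = set.getTime(index + 1);
               return physicalType == java.time.LocalTime.class ? time.toLocalTime() : time;
           case java.sql.Types.TIMESTAMP:
               java.sql.Timestamp ts = set.getTimestamp(index + 1);
               return physicalType == java.time.LocalDateTime.class ? ts.toLocalDateTime() : ts;
           default:
               return set.getObject(index + 1);
       }
   }
   ```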




[GitHub] [flink] wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#discussion_r308043685
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCUtils.java
 ##
 @@ -110,13 +117,35 @@ public static void setField(PreparedStatement upload, int type, Object field, in
 				upload.setBigDecimal(index + 1, (java.math.BigDecimal) field);
 				break;
 			case java.sql.Types.DATE:
-				upload.setDate(index + 1, (java.sql.Date) field);
+				if (field instanceof Date) {
+					upload.setDate(index + 1, (Date) field);
+				} else {
+					Preconditions.checkArgument(
+						field instanceof LocalDate,
+						"Field must be either java.sql.Date or java.time.LocalDate");
+					upload.setDate(index + 1, java.sql.Date.valueOf((LocalDate) field));
+				}
 				break;
 			case java.sql.Types.TIME:
-				upload.setTime(index + 1, (java.sql.Time) field);
+				if (field instanceof Time) {
+					upload.setTime(index + 1, (Time) field);
+				} else {
+					Preconditions.checkArgument(
+						field instanceof LocalTime,
+						"Field must be either java.sql.Time or java.time.LocalTime");
+					upload.setTime(index + 1, Time.valueOf((LocalTime) field));
+				}
 				break;
 			case java.sql.Types.TIMESTAMP:
-				upload.setTimestamp(index + 1, (java.sql.Timestamp) field);
+				if (field instanceof Timestamp) {
+					upload.setTimestamp(index + 1, (Timestamp) field);
+				} else {
+					Preconditions.checkArgument(
+						field instanceof LocalDateTime,
+						"Field must be either java.sql.Timestamp or java.time.LocalDateTime");
+					upload.setTimestamp(
+						index + 1, Timestamp.valueOf((LocalDateTime) field));
+				}
 
 Review comment:
   I would like to use `else if` instead of `checkArgument` here, because we 
may support more datetime types in the future (e.g. `Instant`, 
`OffsetDateTime`).
   
   ```java
   if (field instanceof Timestamp) {
       upload.setTimestamp(index + 1, (Timestamp) field);
   } else if (field instanceof LocalDateTime) {
       upload.setTimestamp(index + 1, Timestamp.valueOf((LocalDateTime) field));
   } else {
       throw new IllegalArgumentException(
           "Field must be either java.sql.Timestamp or java.time.LocalDateTime, but is " + field.getClass());
   }
   ```




[GitHub] [flink] wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#discussion_r308047537
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormatTest.java
 ##
 @@ -314,10 +315,40 @@ public void testJDBCInputFormatWithParallelismAndGenericSplitting() throws IOExc
 		jdbcInputFormat.closeInputFormat();
 	}
 
+	@Test
+	public void testJDBCInputFormatWithParallelismAndTimestampSplitting() throws IOException {
+		Serializable[][] queryParameters = new LocalDateTime[2][2];
+		queryParameters[0] = new LocalDateTime[]{TEST_DATA[0].printTimestamp, TEST_DATA[4].printTimestamp};
+		queryParameters[1] = new LocalDateTime[]{TEST_DATA[5].printTimestamp, TEST_DATA[9].printTimestamp};
+		ParameterValuesProvider paramProvider = new GenericParameterValuesProvider(queryParameters);
+		jdbcInputFormat = JDBCInputFormat.buildJDBCInputFormat()
+			.setDrivername(DRIVER_CLASS)
+			.setDBUrl(DB_URL)
+			.setQuery(JDBCTestBase.SELECT_ALL_BOOKS_SPLIT_BY_TIMESTAMP)
+			.setRowTypeInfo(ROW_TYPE_INFO)
 
 Review comment:
   Could you add a test to verify java.sql.Date/Timestamp still work?




[GitHub] [flink] wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#discussion_r308047259
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCOutputFormatTest.java
 ##
 @@ -246,13 +230,40 @@ public void clearOutputTable() throws Exception {
}
}
 
+	private void checkEquals(ResultSet resultSet, int cnt) throws SQLException {
 
 Review comment:
   cnt -> expectedCnt




[GitHub] [flink] wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
wuchong commented on a change in pull request #9236: [FLINK-13283][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#discussion_r308045473
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/test/java/org/apache/flink/api/java/io/jdbc/JDBCUpsertOutputFormatTest.java
 ##
 @@ -66,6 +67,7 @@ public void testJDBCOutputFormat() throws Exception {
.build())
.setFieldNames(fieldNames)
.setKeyFields(keyFields)
+   .setFieldTypes(SQL_TYPES)
 
 Review comment:
   It seems error-prone if we don't set the field types. If so, I would 
suggest requiring the field types in the JDBCUpsertOutputFormat.Builder. As it 
is a newly introduced class, it's fine to change this.




[jira] [Assigned] (FLINK-13439) Run existing SQL/Table API E2E tests with blink runner

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-13439:
--

Assignee: Zhenghua Gao

> Run existing SQL/Table API E2E tests with blink runner
> --
>
> Key: FLINK-13439
> URL: https://issues.apache.org/jira/browse/FLINK-13439
> Project: Flink
>  Issue Type: Test
>  Components: Table SQL / API, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.9.0
>
>
> We should run all existing SQL/Table API E2E tests with the blink runner. As 
> part of FLINK-13273 the {{test_sql_client.sh}} test will already be ported to 
> run with blink. Additionally we also need to enable the 
> {{test_streaming_sql.sh}} test to run with the blink runner.





[jira] [Commented] (FLINK-12818) Improve stability of twoInputMapSink benchmark

2019-07-28 Thread Haibo Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894906#comment-16894906
 ] 

Haibo Sun commented on FLINK-12818:
---

Hi [~pnowojski],

The benchmark on 
`TwoInputSelectableStreamTask`/`StreamTwoInputSelectableProcessor` was also 
unstable, so the original expectation that it would stabilize did not hold.


I made some other attempts, including upgrading JDK 1.8 to the latest version 
"1.8.0_212", disabling CPU hyper-threading, and disabling checkpointing, but 
the benchmark is still unstable. After analyzing with VTune, I found that the 
slow JVM fork spends more time than the fast one, mainly in the 
`RecordWriter#emit()` method (the stack is shown in the figure below). I 
suspect this is related to CPU cache misses. After disabling checkpointing and 
adjusting the settings with the following code, the benchmark becomes stable, 
but it becomes unstable again once checkpointing is enabled.

 

*Code of Class FlinkEnvironmentContext :*
{code:java}
public class FlinkEnvironmentContext {
    public StreamExecutionEnvironment env;

    private final int parallelism = 1;
    private final boolean objectReuse = true;

    @Setup
    public void setUp() throws IOException {
        Configuration configuration = new Configuration();
        
configuration.setInteger(NettyShuffleEnvironmentOptions.NETWORK_BUFFERS_PER_CHANNEL,
 2);
        
configuration.setInteger(NettyShuffleEnvironmentOptions.NETWORK_EXTRA_BUFFERS_PER_GATE,
 0);
        
configuration.setString(NettyShuffleEnvironmentOptions.NETWORK_BUFFERS_MEMORY_MIN,
 "1mb");

        env = StreamExecutionEnvironment.createLocalEnvironment(parallelism, 
configuration);

        // set up the execution environment
        env.setParallelism(parallelism);
        env.getConfig().disableSysoutLogging();
        if (objectReuse) {
            env.getConfig().enableObjectReuse();
        }

        env.setStateBackend(new MemoryStateBackend());
    }

    public void execute() throws Exception {
        env.execute();
    }

}{code}
 

*Call Stack of RecordWriter#emit() :*

!RecordWriter-emit.png!

 

> Improve stability of twoInputMapSink benchmark
> --
>
> Key: FLINK-12818
> URL: https://issues.apache.org/jira/browse/FLINK-12818
> Project: Flink
>  Issue Type: Sub-task
>  Components: Benchmarks
>Reporter: Piotr Nowojski
>Priority: Critical
> Attachments: RecordWriter-emit.png
>
>
> The {{twoInputMapSink}} benchmark is very unstable over time:
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=twoInputMapSink=2=200=off=on=on
> It should be fixed, otherwise the benchmark can not be used.





[jira] [Updated] (FLINK-12818) Improve stability of twoInputMapSink benchmark

2019-07-28 Thread Haibo Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Sun updated FLINK-12818:
--
Attachment: RecordWriter-emit.png

> Improve stability of twoInputMapSink benchmark
> --
>
> Key: FLINK-12818
> URL: https://issues.apache.org/jira/browse/FLINK-12818
> Project: Flink
>  Issue Type: Sub-task
>  Components: Benchmarks
>Reporter: Piotr Nowojski
>Priority: Critical
> Attachments: RecordWriter-emit.png
>
>
> The {{twoInputMapSink}} benchmark is very unstable over time:
> http://codespeed.dak8s.net:8000/timeline/#/?exe=1=twoInputMapSink=2=200=off=on=on
> It should be fixed, otherwise the benchmark can not be used.





[jira] [Commented] (FLINK-13427) HiveCatalog's createFunction fails when function name has upper-case characters

2019-07-28 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894901#comment-16894901
 ] 

Kurt Young commented on FLINK-13427:


Thanks [~lirui], I will assign this to you.

> HiveCatalog's createFunction fails when function name has upper-case 
> characters
> ---
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new 
> HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> Seems there are some bugs in HiveCatalog when function names use upper-case 
> characters.
> Maybe we should call normalizeName in createFunction...





[jira] [Assigned] (FLINK-13427) HiveCatalog's createFunction fails when function name has upper-case characters

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-13427:
--

Assignee: Rui Li

> HiveCatalog's createFunction fails when function name has upper-case 
> characters
> ---
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Assignee: Rui Li
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new 
> HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> It seems there are some bugs in HiveCatalog when upper-case names are used.
> Maybe we should call normalizeName in createFunction...



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13437) Add Hive SQL E2E test

2019-07-28 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894899#comment-16894899
 ] 

Kurt Young commented on FLINK-13437:


It seems [FLINK-13303|https://issues.apache.org/jira/browse/FLINK-13303] only 
covers the hive connector, but this issue also covers metadata, calling 
functions, and so on. 

> Add Hive SQL E2E test
> -
>
> Key: FLINK-13437
> URL: https://issues.apache.org/jira/browse/FLINK-13437
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.9.0
>
>
> We should add an E2E test for the Hive integration: List all tables and read 
> some metadata, read from an existing table, register a new table in Hive, use 
> a registered function, write to an existing table, write to a new table.
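
A rough outline of what such a test could drive through the 1.9 Table API (a sketch only; the catalog name, conf dir, Hive version, and table names are made up for illustration):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveE2ESketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());
        // Hypothetical catalog setup; conf dir and version are placeholders.
        tEnv.registerCatalog("myhive",
                new HiveCatalog("myhive", "default", "/opt/hive-conf", "2.3.4"));
        tEnv.useCatalog("myhive");

        // List all tables / read some metadata.
        for (String table : tEnv.listTables()) {
            System.out.println(table);
        }

        // Read from an existing table, write to an existing or new table.
        tEnv.sqlUpdate("INSERT INTO dest SELECT key, count(1) FROM src GROUP BY key");
        tEnv.execute("hive-e2e-sketch");
    }
}
{code}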



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (FLINK-13352) Using hive connector with hive-1.2.1 needs libfb303 jar

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-13352.
--
   Resolution: Fixed
Fix Version/s: 1.10.0

merged in 1.10.0: dbcbc09134e9baa1d8e7d3693afc1402e797dd1e

merged in 1.9.0: 4788e4807f725e8c8ffeae3ecf18ac1f5ab8b39c

> Using hive connector with hive-1.2.1 needs libfb303 jar
> ---
>
> Key: FLINK-13352
> URL: https://issues.apache.org/jira/browse/FLINK-13352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Should mention that libfb303 jar is needed in {{catalog.md}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] lirui-apache commented on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-28 Thread GitBox
lirui-apache commented on issue #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237#issuecomment-515830383
 
 
   Thanks @hongtao12310 for working on this. Maybe we can just call `new 
HiveConf(HadoopUtils.getHadoopConfiguration, HiveConf.class)` to create the 
HiveConf? For testing, I think we can add some test cases similar to 
`HadoopConfigLoadingTest`.
   And btw, you can update the PR to add a description, fix the Travis build, 
etc. No need to close and create a new one.
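
For reference, a minimal sketch of the suggested construction (assuming Flink's `HadoopUtils.getHadoopConfiguration(...)` helper and the standard `HiveConf(Configuration, Class)` constructor; exact class locations are from memory and may differ):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.util.HadoopUtils;
import org.apache.hadoop.hive.conf.HiveConf;

public class HiveConfFactory {
    public static HiveConf create(Configuration flinkConf) {
        // Resolve the Hadoop configuration the same way Flink does, so
        // NameNode HA settings (dfs.nameservices etc.) are picked up.
        org.apache.hadoop.conf.Configuration hadoopConf =
                HadoopUtils.getHadoopConfiguration(flinkConf);
        // HiveConf copies the given Hadoop configuration and then layers
        // hive-site.xml on top of it.
        return new HiveConf(hadoopConf, HiveConf.class);
    }
}
```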


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-07-28 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894894#comment-16894894
 ] 

Jark Wu commented on FLINK-13405:
-

Agree with [~xccui], "e.g." is better than "i.e." here.

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] KurtYoung merged pull request #9223: [FLINK-13352][hive] Using hive connector with hive-1.2.1 needs libfb3…

2019-07-28 Thread GitBox
KurtYoung merged pull request #9223: [FLINK-13352][hive] Using hive connector 
with hive-1.2.1 needs libfb3…
URL: https://github.com/apache/flink/pull/9223
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog should add hive version in conf

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9232: [FLINK-13424][hive] HiveCatalog 
should add hive version in conf
URL: https://github.com/apache/flink/pull/9232#issuecomment-515295549
 
 
   ## CI report:
   
   * 199e67fb426db2cf4389112ba914089e5dee3f35 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120801913)
   * 4b4ef80ebf2dabdc924407eae6a7b5d079ef6ce6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121001697)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9252: [FLINK-13051][runtime] Replace the non-selectable stream task with the input-selectable one

2019-07-28 Thread GitBox
flinkbot commented on issue #9252: [FLINK-13051][runtime] Replace the 
non-selectable stream task with the input-selectable one
URL: https://github.com/apache/flink/pull/9252#issuecomment-515825469
 
 
   ## CI report:
   
   * c884b9ebd4ac020c4ecf6bdff2e4c7c003471395 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121001403)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9252: [FLINK-13051][runtime] Replace the non-selectable stream task with the input-selectable one

2019-07-28 Thread GitBox
flinkbot commented on issue #9252: [FLINK-13051][runtime] Replace the 
non-selectable stream task with the input-selectable one
URL: https://github.com/apache/flink/pull/9252#issuecomment-515824110
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13427) HiveCatalog's createFunction fails when function name has upper-case characters

2019-07-28 Thread Rui Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894883#comment-16894883
 ] 

Rui Li commented on FLINK-13427:


I'll work on it if nobody's working on it yet.

> HiveCatalog's createFunction fails when function name has upper-case 
> characters
> ---
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new 
> HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> It seems there are some bugs in HiveCatalog when upper-case names are used.
> Maybe we should call normalizeName in createFunction...



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13051) Drop the non-selectable two-input StreamTask and Processor

2019-07-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13051:
---
Labels: pull-request-available  (was: )

> Drop the non-selectable two-input StreamTask and Processor
> --
>
> Key: FLINK-13051
> URL: https://issues.apache.org/jira/browse/FLINK-13051
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
>
> After `StreamTwoInputSelectableProcessor` supports 
> `CheckpointBarrierHandler`, we should drop the non-selectable two-input 
> StreamTask and Processor.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] sunhaibotb opened a new pull request #9252: [FLINK-13051][runtime] Replace the non-selectable stream task with the input-selectable one

2019-07-28 Thread GitBox
sunhaibotb opened a new pull request #9252: [FLINK-13051][runtime] Replace the 
non-selectable stream task with the input-selectable one
URL: https://github.com/apache/flink/pull/9252
 
 
   ## What is the purpose of the change
   
   This pull request replaces the non-selectable `StreamTask`/`InputProcessor` 
with the input-selectable ones. The aim is to simplify the Flink code and make 
it more efficient.
   
   
   ## Brief change log
   
 - Replaces the non-selectable stream task with the input-selectable one.
 - Drops the non-selectable StreamTask and InputProcessor.
 - Renames `TwoInputSelectableStreamTask` and 
`StreamTwoInputSelectableProcessor` to `TwoInputStreamTask` and 
`StreamTwoInputProcessor` respectively.
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
`TwoInputStreamTaskTest`, `StreamTaskCancellationBarrierTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (**yes** / no 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13372) Timestamp conversion bug in non-blink Table/SQL runtime

2019-07-28 Thread Qi (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894882#comment-16894882
 ] 

Qi commented on FLINK-13372:


This is a long-standing issue which affects all SQL timestamp and 
watermark-related calculations. In our Flink SQL jobs we have to work around 
this by adding offsets to the timestamp column, which is not elegant. Hope 
this can be fixed soon.

> Timestamp conversion bug in non-blink Table/SQL runtime
> ---
>
> Key: FLINK-13372
> URL: https://issues.apache.org/jira/browse/FLINK-13372
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1, 1.9.0
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Critical
>
> Currently, in the non-blink table/SQL runtime, Flink uses 
> SqlFunctions.internalToTimestamp(long v) from Calcite to convert event time 
> (a long) to java.sql.Timestamp.
> {code:java}
>  public static Timestamp internalToTimestamp(long v) { return new Timestamp(v 
> - (long)LOCAL_TZ.getOffset(v)); } {code}
> However, as discussed on the Calcite mailing list recently, 
> SqlFunctions.internalToTimestamp() assumes the input timestamp value is in 
> the current JVM’s default timezone (which is unusual), NOT milliseconds since 
> epoch; it converts a timestamp value in the current JVM’s default timezone to 
> milliseconds since epoch, which the java.sql.Timestamp constructor takes. 
> Therefore, the results will not only be wrong, but will also change if the 
> job runs on machines in different timezones. (The only exception is when all 
> your production machines use the UTC timezone.)
> Here is an example: if the user input value is 0 (00:00:00 UTC on 1 January 
> 1970) and the table/SQL runtime runs on a machine in PST (UTC-8), the output 
> sql.Timestamp after SqlFunctions.internalToTimestamp() will become 28800000 
> milliseconds since epoch (08:00:00 UTC on 1 January 1970); with the same 
> input, if the table/SQL runtime runs again on a different machine in EST 
> (UTC-5), the output sql.Timestamp after SqlFunctions.internalToTimestamp() 
> will become 18000000 milliseconds since epoch (05:00:00 UTC on 1 January 
> 1970).
> Currently, there are unit tests for the table/SQL API event time 
> input/output (e.g., GroupWindowITCase.testEventTimeTumblingWindow() and 
> SqlITCase.testDistinctAggWithMergeOnEventTimeSessionGroupWindow()). They all 
> pass now because we are comparing the string format of the time, which 
> ignores timezone. If you step into the code, the actual java.sql.Timestamp 
> value is wrong and changes as the tests run in different timezones (e.g., one 
> can use -Duser.timezone=PST to change the current JVM’s default timezone).
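
A self-contained illustration of the machine dependence described above (a sketch that reproduces Calcite's formula outside Flink; the timezone is passed explicitly so both cases can run side by side):

{code:java}
import java.sql.Timestamp;
import java.util.TimeZone;

public class InternalToTimestampDemo {
    // Same formula as Calcite's SqlFunctions.internalToTimestamp(long v),
    // with the implicit LOCAL_TZ made an explicit parameter.
    static Timestamp internalToTimestamp(long v, TimeZone tz) {
        return new Timestamp(v - tz.getOffset(v));
    }

    public static void main(String[] args) {
        long epochMillis = 0L; // 00:00:00 UTC on 1 January 1970
        // "PST machine": offset -8h, so the result shifts to 28800000.
        System.out.println(internalToTimestamp(epochMillis,
                TimeZone.getTimeZone("America/Los_Angeles")).getTime()); // 28800000
        // "EST machine": offset -5h, so the same input yields 18000000.
        System.out.println(internalToTimestamp(epochMillis,
                TimeZone.getTimeZone("America/New_York")).getTime());    // 18000000
    }
}
{code}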



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-07-28 Thread Xingcan Cui (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894879#comment-16894879
 ] 

Xingcan Cui commented on FLINK-13405:
-

First, "sorting" here definitely means 排序 (ordering), not 分类 (classification).

Second, I personally have some doubts about the use of "i.e." here. If accessing the content were equivalent to efficient sorting, then "i.e." would be fine. But as I understand it, efficient sorting may be only one of the purposes of accessing the content. If it is the latter, "i.e." should be replaced with "e.g.", giving roughly: ……无法访问它们的内容(例如为了高效排序)。

 

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13352) Using hive connector with hive-1.2.1 needs libfb303 jar

2019-07-28 Thread Rui Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894874#comment-16894874
 ] 

Rui Li commented on FLINK-13352:


Thanks [~ykt836]. Would you mind also reviewing the PR? This is a trivial doc 
change and [~xuefuz] has already approved it.

> Using hive connector with hive-1.2.1 needs libfb303 jar
> ---
>
> Key: FLINK-13352
> URL: https://issues.apache.org/jira/browse/FLINK-13352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Should mention that libfb303 jar is needed in {{catalog.md}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (FLINK-13324) flink-shaded-hadoop could exclude curator dependency

2019-07-28 Thread TisonKun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TisonKun closed FLINK-13324.

Resolution: Not A Bug

[~fly_in_gis] thanks for your explanation. It sounds reasonable to me.

> flink-shaded-hadoop could exclude curator dependency
> 
>
> Key: FLINK-13324
> URL: https://issues.apache.org/jira/browse/FLINK-13324
> Project: Flink
>  Issue Type: Improvement
>  Components: BuildSystem / Shaded
>Reporter: TisonKun
>Priority: Minor
> Attachments: patch.diff
>
>
> Flink's high-availability functionality depends on curator, which means Flink 
> has its own curator dependency version (currently "2.12.0"). However, there are 
> several transitive dependencies referring to 
> curator-(client|framework|recipe)-2.7.1 from flink-shaded-hadoop(2). We might 
> exclude these dependencies so that a single curator version is used in Flink.
> cc [~Zentol]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13427) HiveCatalog's createFunction fails when function name has upper-case characters

2019-07-28 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894858#comment-16894858
 ] 

Kurt Young commented on FLINK-13427:


[~lzljs3620320], [~lirui] are you guys working on this?

> HiveCatalog's createFunction fails when function name has upper-case 
> characters
> ---
>
> Key: FLINK-13427
> URL: https://issues.apache.org/jira/browse/FLINK-13427
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
>  
> {code:java}
> hiveCatalog.createFunction(
>   new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"),
>   new CatalogFunctionImpl(TestSimpleUDF.class.getCanonicalName(), new 
> HashMap<>()),
>   false);
> hiveCatalog.getFunction(new ObjectPath(HiveCatalog.DEFAULT_DB, "myUdf"));
> {code}
> There is an exception now:
> {code:java}
> org.apache.flink.table.catalog.exceptions.FunctionNotExistException: Function 
> default.myUdf does not exist in Catalog test-catalog.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1030)
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogITCase.testGenericTable(HiveCatalogITCase.java:146)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runTestMethod(FlinkStandaloneHiveRunner.java:170)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:155)
> at 
> org.apache.flink.batch.connectors.hive.FlinkStandaloneHiveRunner.runChild(FlinkStandaloneHiveRunner.java:93)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: NoSuchObjectException(message:Function default.myUdf does not 
> exist)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result$get_function_resultStandardScheme.read(ThriftHiveMetastore.java)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_function_result.read(ThriftHiveMetastore.java)
> {code}
> It seems there are some bugs in HiveCatalog when upper-case names are used.
> Maybe we should call normalizeName in createFunction...



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13192) Add tests for different Hive table formats

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13192:
---
Fix Version/s: 1.9.0

> Add tests for different Hive table formats
> --
>
> Key: FLINK-13192
> URL: https://issues.apache.org/jira/browse/FLINK-13192
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   * 5009b10d38eef92f25bfe4ff4608f2dd121ea9c6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120915709)
   * e3b272586d8f41d3800e86134730c4dc427952a6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120916220)
   * f1aee543a7aef88e3cf052f4d686ab0a8e5938e5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120996260)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13352) Using hive connector with hive-1.2.1 needs libfb303 jar

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-13352:
--

Assignee: Rui Li

> Using hive connector with hive-1.2.1 needs libfb303 jar
> ---
>
> Key: FLINK-13352
> URL: https://issues.apache.org/jira/browse/FLINK-13352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Should mention that libfb303 jar is needed in {{catalog.md}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13352) Using hive connector with hive-1.2.1 needs libfb303 jar

2019-07-28 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894857#comment-16894857
 ] 

Kurt Young commented on FLINK-13352:


[~lirui] I will assign this issue to you since you already opened a PR.

> Using hive connector with hive-1.2.1 needs libfb303 jar
> ---
>
> Key: FLINK-13352
> URL: https://issues.apache.org/jira/browse/FLINK-13352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Should mention that libfb303 jar is needed in {{catalog.md}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13352) Using hive connector with hive-1.2.1 needs libfb303 jar

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13352:
---
Fix Version/s: 1.9.0

> Using hive connector with hive-1.2.1 needs libfb303 jar
> ---
>
> Key: FLINK-13352
> URL: https://issues.apache.org/jira/browse/FLINK-13352
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Should mention that libfb303 jar is needed in {{catalog.md}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13423) Unable to find function in hive 1

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-13423:
--

Assignee: Rui Li  (was: Rui Li)

> Unable to find function in hive 1
> -
>
> Key: FLINK-13423
> URL: https://issues.apache.org/jira/browse/FLINK-13423
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Assignee: Rui Li
>Priority: Blocker
>
> I hit the following error when I try to use count in SQL on Hive 1
> {code:java}
> btenv.sqlQuery("select count(1) from date_dim").toDataSet[Row].print(){code}
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:127)
> at 
> org.apache.flink.table.api.internal.TableEnvImpl.sqlQuery(TableEnvImpl.scala:427)
> ... 30 elided
> Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1033)
> at 
> org.apache.flink.table.catalog.FunctionCatalog.lookupFunction(FunctionCatalog.java:167)
> at 
> org.apache.flink.table.catalog.FunctionCatalogOperatorTable.lookupOperatorOverloads(FunctionCatalogOperatorTable.java:74)
> at 
> org.apache.calcite.sql.util.ChainedSqlOperatorTable.lookupOperatorOverloads(ChainedSqlOperatorTable.java:73)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1183)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1198)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1168)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:925)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:639)
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:123)
> ... 31 more{code}
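
One plausible shape of the fix (purely an assumption for illustration, not the committed change): treat a catalog miss for built-in names like COUNT as non-fatal and fall back to Flink's built-in function table instead of surfacing a CatalogException:

{code:java}
import java.util.Map;
import java.util.Optional;

// Hypothetical resolver, not Flink's actual FunctionCatalog.
class FunctionResolverSketch {
    private final Map<String, Object> catalogFunctions;
    private final Map<String, Object> builtinFunctions;

    FunctionResolverSketch(Map<String, Object> catalogFunctions,
                           Map<String, Object> builtinFunctions) {
        this.catalogFunctions = catalogFunctions;
        this.builtinFunctions = builtinFunctions;
    }

    Optional<Object> lookup(String name) {
        String key = name.toLowerCase();
        Object fn = catalogFunctions.get(key); // absent for COUNT in Hive
        if (fn != null) {
            return Optional.of(fn);
        }
        return Optional.ofNullable(builtinFunctions.get(key)); // COUNT resolves here
    }
}
{code}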



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13423) Unable to find function in hive 1

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13423:
---
Fix Version/s: 1.10.0
   1.9.0

> Unable to find function in hive 1
> -
>
> Key: FLINK-13423
> URL: https://issues.apache.org/jira/browse/FLINK-13423
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Assignee: Rui Li
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> I hit the following error when I try to use count in SQL on Hive 1
> {code:java}
> btenv.sqlQuery("select count(1) from date_dim").toDataSet[Row].print(){code}
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:127)
> at 
> org.apache.flink.table.api.internal.TableEnvImpl.sqlQuery(TableEnvImpl.scala:427)
> ... 30 elided
> Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1033)
> at 
> org.apache.flink.table.catalog.FunctionCatalog.lookupFunction(FunctionCatalog.java:167)
> at 
> org.apache.flink.table.catalog.FunctionCatalogOperatorTable.lookupOperatorOverloads(FunctionCatalogOperatorTable.java:74)
> at 
> org.apache.calcite.sql.util.ChainedSqlOperatorTable.lookupOperatorOverloads(ChainedSqlOperatorTable.java:73)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1183)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1198)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1168)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:925)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:639)
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:123)
> ... 31 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13303) Add e2e test for hive connector

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13303:
---
Priority: Blocker  (was: Critical)

> Add e2e test for hive connector
> ---
>
> Key: FLINK-13303
> URL: https://issues.apache.org/jira/browse/FLINK-13303
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Hive, Tests
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13423) Unable to find function in hive 1

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-13423:
--

Assignee: Rui Li

> Unable to find function in hive 1
> -
>
> Key: FLINK-13423
> URL: https://issues.apache.org/jira/browse/FLINK-13423
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Assignee: Rui Li
>Priority: Blocker
>
> I hit the following error when I try to use count in SQL on Hive 1
> {code:java}
> btenv.sqlQuery("select count(1) from date_dim").toDataSet[Row].print(){code}
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:127)
> at 
> org.apache.flink.table.api.internal.TableEnvImpl.sqlQuery(TableEnvImpl.scala:427)
> ... 30 elided
> Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1033)
> at 
> org.apache.flink.table.catalog.FunctionCatalog.lookupFunction(FunctionCatalog.java:167)
> at 
> org.apache.flink.table.catalog.FunctionCatalogOperatorTable.lookupOperatorOverloads(FunctionCatalogOperatorTable.java:74)
> at 
> org.apache.calcite.sql.util.ChainedSqlOperatorTable.lookupOperatorOverloads(ChainedSqlOperatorTable.java:73)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1183)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1198)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1168)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:925)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:639)
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:123)
> ... 31 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13423) Unable to find function in hive 1

2019-07-28 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894856#comment-16894856
 ] 

Kurt Young commented on FLINK-13423:


I will assign this issue to you [~lirui]

> Unable to find function in hive 1
> -
>
> Key: FLINK-13423
> URL: https://issues.apache.org/jira/browse/FLINK-13423
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Blocker
>
> I hit the following error when I try to use count in SQL on Hive 1
> {code:java}
> btenv.sqlQuery("select count(1) from date_dim").toDataSet[Row].print(){code}
> {code:java}
> org.apache.flink.table.api.ValidationException: SQL validation failed. Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:127)
> at 
> org.apache.flink.table.api.internal.TableEnvImpl.sqlQuery(TableEnvImpl.scala:427)
> ... 30 elided
> Caused by: org.apache.flink.table.catalog.exceptions.CatalogException: Failed 
> to get function tpcds_text_2.COUNT
> at 
> org.apache.flink.table.catalog.hive.HiveCatalog.getFunction(HiveCatalog.java:1033)
> at 
> org.apache.flink.table.catalog.FunctionCatalog.lookupFunction(FunctionCatalog.java:167)
> at 
> org.apache.flink.table.catalog.FunctionCatalogOperatorTable.lookupOperatorOverloads(FunctionCatalogOperatorTable.java:74)
> at 
> org.apache.calcite.sql.util.ChainedSqlOperatorTable.lookupOperatorOverloads(ChainedSqlOperatorTable.java:73)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1183)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1198)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.performUnconditionalRewrites(SqlValidatorImpl.java:1168)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:925)
> at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:639)
> at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:123)
> ... 31 more{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-07-28 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894841#comment-16894841
 ] 

Jark Wu commented on FLINK-13405:
-

Taken literally, (即,为了高效排序) is fine.
My understanding of this sentence is: "... access the content of the data, for the purpose of efficient sorting". So could it be translated as (目的是为了高效排序)? 

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13372) Timestamp conversion bug in non-blink Table/SQL runtime

2019-07-28 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894840#comment-16894840
 ] 

Kurt Young commented on FLINK-13372:


Hi [~suez1224], thanks for reporting this. Are you planning to fix it in 
1.9.0 or the next version?

> Timestamp conversion bug in non-blink Table/SQL runtime
> ---
>
> Key: FLINK-13372
> URL: https://issues.apache.org/jira/browse/FLINK-13372
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1, 1.9.0
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Critical
>
> Currently, in the non-blink table/SQL runtime, Flink uses 
> SqlFunctions.internalToTimestamp(long v) from Calcite to convert event time 
> (a long) to java.sql.Timestamp.
> {code:java}
>  public static Timestamp internalToTimestamp(long v) { return new Timestamp(v 
> - (long)LOCAL_TZ.getOffset(v)); } {code}
> However, as discussed on the Calcite mailing list recently, 
> SqlFunctions.internalToTimestamp() assumes the input timestamp value is in 
> the current JVM’s default timezone (which is unusual), NOT milliseconds since 
> epoch; it converts a timestamp value in the current JVM’s default timezone to 
> milliseconds since epoch, which the java.sql.Timestamp constructor takes. 
> Therefore, the results will not only be wrong, but will also change if the 
> job runs on machines in different timezones. (The only exception is when all 
> your production machines use the UTC timezone.)
> Here is an example: if the user input value is 0 (00:00:00 UTC on 1 January 
> 1970) and the table/SQL runtime runs on a machine in PST (UTC-8), the output 
> sql.Timestamp after SqlFunctions.internalToTimestamp() will become 28800000 
> milliseconds since epoch (08:00:00 UTC on 1 January 1970); with the same 
> input, if the table/SQL runtime runs again on a different machine in EST 
> (UTC-5), the output sql.Timestamp after SqlFunctions.internalToTimestamp() 
> will become 18000000 milliseconds since epoch (05:00:00 UTC on 1 January 
> 1970).
> Currently, there are unit tests for the table/SQL API event time 
> input/output (e.g., GroupWindowITCase.testEventTimeTumblingWindow() and 
> SqlITCase.testDistinctAggWithMergeOnEventTimeSessionGroupWindow()). They all 
> pass now because we are comparing the string format of the time, which 
> ignores timezone. If you step into the code, the actual java.sql.Timestamp 
> value is wrong and changes as the tests run in different timezones (e.g., one 
> can use -Duser.timezone=PST to change the current JVM’s default timezone).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13446) Row count sliding window outputs incorrectly in blink planner

2019-07-28 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13446:
---
Priority: Blocker  (was: Major)

> Row count sliding window outputs incorrectly in blink planner
> -
>
> Key: FLINK-13446
> URL: https://issues.apache.org/jira/browse/FLINK-13446
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For the blink planner, the row count sliding window outputs incorrectly. The 
> window assigner assigns fewer windows than expected, which means the window 
> outputs less data. The bug can be reproduced by the following test:
> {code:java}
>   @Test
>   def testGroupWindowWithoutKeyInProjection(): Unit = {
> val data = List(
>   (1L, 1, "Hi", 1, 1),
>   (2L, 2, "Hello", 2, 2),
>   (4L, 2, "Hello", 2, 2),
>   (8L, 3, "Hello world", 3, 3),
>   (16L, 3, "Hello world", 3, 3))
> val stream = failingDataSource(data)
> val table = stream.toTable(tEnv, 'long, 'int, 'string, 'int2, 'int3, 
> 'proctime.proctime)
> val weightAvgFun = new WeightedAvg
> val countDistinct = new CountDistinct
> val windowedTable = table
>   .window(Slide over 2.rows every 1.rows on 'proctime as 'w)
>   .groupBy('w, 'int2, 'int3, 'string)
>   .select(weightAvgFun('long, 'int), countDistinct('long))
> val sink = new TestingAppendSink
> windowedTable.toAppendStream[Row].addSink(sink)
> env.execute()
> val expected = Seq("12,2", "8,1", "2,1", "3,2", "1,1")
> assertEquals(expected.sorted, sink.getAppendResults.sorted)
>   }
> {code}
> The expected output is Seq("12,2", "8,1", "2,1", "3,2", "1,1") while the 
> actual output is Seq("12,2", "3,2")
> To fix the problem, we can correct the assign logic in 
> CountSlidingWindowAssigner.assignWindows.
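
A generic sketch of the intended assignment logic (an assumption, not the committed fix): for windows of `size` rows sliding every `slide` rows, the element with 0-based count `c` should land in every window whose start lies in the interval (c - size, c]:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class CountSlideSketch {
    // Returns the [start, start + size) windows that contain element `count`.
    static List<long[]> assignWindows(long count, long size, long slide) {
        List<long[]> windows = new ArrayList<>();
        long lastStart = count - count % slide; // latest window start <= count
        for (long start = lastStart; start > count - size; start -= slide) {
            if (start >= 0) {
                windows.add(new long[] {start, start + size});
            }
        }
        return windows;
    }

    public static void main(String[] args) {
        // size = 2, slide = 1: element 1 belongs to [1, 3) and [0, 2),
        // i.e. every element after the first is assigned to two windows.
        for (long[] w : assignWindows(1, 2, 1)) {
            System.out.println("[" + w[0] + ", " + w[1] + ")");
        }
    }
}
{code}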



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13405) Translate "Basic API Concepts" page into Chinese

2019-07-28 Thread WangHengWei (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894838#comment-16894838
 ] 

WangHengWei commented on FLINK-13405:
-

[~jark], [~klion26], [~xccui],

Line 668: Flink treats these data types as black boxes and is not able to access 
their content (i.e., for efficient sorting). 

I don't understand the content in the parentheses, especially the "i.e."; neither 
translating it as (即,为了高效分类) nor as (即,为了高效排序) reads smoothly. 
Please help advise, thanks.

> Translate "Basic API Concepts" page into Chinese
> 
>
> Key: FLINK-13405
> URL: https://issues.apache.org/jira/browse/FLINK-13405
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.10.0
>Reporter: WangHengWei
>Assignee: WangHengWei
>Priority: Major
>  Labels: documentation, pull-request-available
> Fix For: 1.10.0
>
>
> The page url is 
> [https://github.com/apache/flink/blob/master/docs/dev/api_concepts.zh.md]
> The markdown file is located in flink/docs/dev/api_concepts.zh.md



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] TsReaper commented on issue #9236: [FLINK-13283][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread GitBox
TsReaper commented on issue #9236: [FLINK-13283][jdbc] Fix JDBC connectors with 
DataTypes.DATE/TIME/TIMESTAMP support
URL: https://github.com/apache/flink/pull/9236#issuecomment-515812004
 
 
   Travis: https://travis-ci.com/TsReaper/flink/builds/120833387


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13438) Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread Caizhi Weng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894837#comment-16894837
 ] 

Caizhi Weng commented on FLINK-13438:
-

Hi [~lirui] [~jark] and [~lzljs3620320], sorry for the late response. I 
actually tried to add support for DataTypes.DATE/TIME/TIMESTAMP before 
submitting this issue, and my patch is provided in the attachment. In this 
patch, when running `HiveCatalogDataTypeTest`, two tests will fail due to this 
issue. Please take a look.

> Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support
> -
>
> Key: FLINK-13438
> URL: https://issues.apache.org/jira/browse/FLINK-13438
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Caizhi Weng
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
> Attachments: 0001-hive.patch
>
>
> Similar to JDBC connectors, Hive connectors communicate with the Flink 
> framework using TableSchema, which contains DataType. As the time data read 
> from and written to Hive connectors must be java.sql.* types, while the default 
> conversion classes of our time data types are java.time.*, we have to fix the 
> Hive connector with DataTypes.DATE/TIME/TIMESTAMP support.
> But currently, when reading tables from Hive, the table schema is created 
> using Hive's schema, so the time types in the created schema will be sql time 
> types, not local time types. If a user specifies a local time type in the table 
> schema when creating a table in Hive, they will get a different schema when 
> reading it out. This is undesired.
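
One way to reconcile the two conversion classes (a sketch assuming the 1.9 `DataType.bridgedTo(...)` API; whether HiveCatalog should apply such bridging automatically is exactly what this issue is about):

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class HiveTimeTypesSketch {
    public static void main(String[] args) {
        // DATE/TIME/TIMESTAMP default to java.time.* conversion classes;
        // bridging switches them to the java.sql.* classes Hive hands out.
        DataType date = DataTypes.DATE().bridgedTo(java.sql.Date.class);
        DataType time = DataTypes.TIME().bridgedTo(java.sql.Time.class);
        DataType timestamp =
                DataTypes.TIMESTAMP(9).bridgedTo(java.sql.Timestamp.class);
        System.out.println(date + " / " + time + " / " + timestamp);
    }
}
{code}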



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13438) Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread Caizhi Weng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-13438:

Attachment: 0001-hive.patch

> Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support
> -
>
> Key: FLINK-13438
> URL: https://issues.apache.org/jira/browse/FLINK-13438
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Caizhi Weng
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
> Attachments: 0001-hive.patch
>
>
> Similar to JDBC connectors, Hive connectors communicate with the Flink 
> framework using TableSchema, which contains DataType. As the time data read 
> from and written to Hive connectors must be java.sql.* types, while the default 
> conversion classes of our time data types are java.time.*, we have to fix the 
> Hive connector with DataTypes.DATE/TIME/TIMESTAMP support.
> But currently, when reading tables from Hive, the table schema is created 
> using Hive's schema, so the time types in the created schema will be sql time 
> types, not local time types. If a user specifies a local time type in the table 
> schema when creating a table in Hive, they will get a different schema when 
> reading it out. This is undesired.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] TisonKun commented on issue #9158: [FLINK-10052][coordination] Tolerate temporarily suspended ZooKeeper connections

2019-07-28 Thread GitBox
TisonKun commented on issue #9158: [FLINK-10052][coordination] Tolerate 
temporarily suspended ZooKeeper connections
URL: https://github.com/apache/flink/pull/9158#issuecomment-515810540
 
 
   cc @tillrohrmann @StefanRRichter we need a committer to push this thread 
forward. It would be great to see this issue fixed in 1.9.0 as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   * 5009b10d38eef92f25bfe4ff4608f2dd121ea9c6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120915709)
   * e3b272586d8f41d3800e86134730c4dc427952a6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120916220)
   * f1aee543a7aef88e3cf052f4d686ab0a8e5938e5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120996260)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] StephanEwen edited a comment on issue #9235: [FLINK-13228][tests][filesystems] Harden HadoopRecoverableWriterTest

2019-07-28 Thread GitBox
StephanEwen edited a comment on issue #9235: [FLINK-13228][tests][filesystems] 
Harden HadoopRecoverableWriterTest
URL: https://github.com/apache/flink/pull/9235#issuecomment-515788137
 
 
   FYI: You can use `IOUtils.closeQuietly(...)` for this as well.
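
   For reference, a minimal sketch of that suggestion, assuming 
`org.apache.flink.util.IOUtils` from flink-core: `closeQuietly()` closes the 
resource and swallows any exception thrown by `close()`, so cleanup cannot 
mask the test's real failure.

```java
import org.apache.flink.util.IOUtils;

import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

public class CloseQuietlyExample {
    public static void main(String[] args) throws Exception {
        OutputStream out = new ByteArrayOutputStream();
        try {
            out.write(42);
        } finally {
            // Never throws, even if close() fails.
            IOUtils.closeQuietly(out);
        }
    }
}
```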


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] StephanEwen commented on issue #9235: [FLINK-13228][tests][filesystems] Harden HadoopRecoverableWriterTest

2019-07-28 Thread GitBox
StephanEwen commented on issue #9235: [FLINK-13228][tests][filesystems] Harden 
HadoopRecoverableWriterTest
URL: https://github.com/apache/flink/pull/9235#issuecomment-515788137
 
 
   FYI: You can use `IOUtils.closeQuietly(...)` for this as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9251: [FLINK-13451][tests] Remove use of Unsafe.defineClass() from CommonTestUtils

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9251: [FLINK-13451][tests] Remove use of 
Unsafe.defineClass() from CommonTestUtils
URL: https://github.com/apache/flink/pull/9251#issuecomment-515781912
 
 
   ## CI report:
   
   * fb1cc465fab9eb9655dc5ba3670c05423b74bda3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120983921)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13452) Pipelined region failover strategy does not recover Job if checkpoint cannot be read

2019-07-28 Thread Gary Yao (JIRA)
Gary Yao created FLINK-13452:


 Summary: Pipelined region failover strategy does not recover Job 
if checkpoint cannot be read
 Key: FLINK-13452
 URL: https://issues.apache.org/jira/browse/FLINK-13452
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.9.0, 1.10.0
Reporter: Gary Yao
Assignee: Gary Yao
 Fix For: 1.9.0
 Attachments: jobmanager.log

The job does not recover if a checkpoint cannot be read and 
{{jobmanager.execution.failover-strategy}} is set to _"region"_. 

*Analysis*

The {{RestartCallback}} created by {{AdaptedRestartPipelinedRegionStrategyNG}} 
throws a {{RuntimeException}} if no checkpoints could be read. When the 
restart is invoked in a separate thread pool, the exception is swallowed. See:

[https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/AdaptedRestartPipelinedRegionStrategyNG.java#L117-L119]

[https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FixedDelayRestartStrategy.java#L65]

*Expected behavior*
 * Job should restart
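
A standalone illustration (hypothetical demo, not Flink code) of the failure 
mode described above: an exception thrown from a task handed to a 
{{ScheduledExecutorService}} is captured by the returned future and never 
surfaces unless someone inspects it.

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SwallowedExceptionDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService executor =
                Executors.newSingleThreadScheduledExecutor();
        Runnable restart = () -> {
            throw new RuntimeException("could not read checkpoint");
        };
        // The exception is stored in the ScheduledFuture returned by
        // schedule(); since nobody calls get() on it, no stack trace is
        // printed and the "restart" silently never happens.
        executor.schedule(restart, 10, TimeUnit.MILLISECONDS);
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}
{code}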

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (FLINK-13452) Pipelined region failover strategy does not recover Job if checkpoint cannot be read

2019-07-28 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao reassigned FLINK-13452:


Assignee: (was: Gary Yao)

> Pipelined region failover strategy does not recover Job if checkpoint cannot 
> be read
> 
>
> Key: FLINK-13452
> URL: https://issues.apache.org/jira/browse/FLINK-13452
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Gary Yao
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: jobmanager.log
>
>
> The job does not recover if a checkpoint cannot be read and 
> {{jobmanager.execution.failover-strategy}} is set to _"region"_. 
> *Analysis*
> The {{RestartCallback}} created by 
> {{AdaptedRestartPipelinedRegionStrategyNG}} throws a {{RuntimeException}} if 
> no checkpoints could be read. When the restart is invoked in a separate 
> thread pool, the exception is swallowed. See:
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/AdaptedRestartPipelinedRegionStrategyNG.java#L117-L119]
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FixedDelayRestartStrategy.java#L65]
> *Expected behavior*
>  * Job should restart
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9251: [FLINK-13451][tests] Remove use of Unsafe.defineClass() from CommonTestUtils

2019-07-28 Thread GitBox
flinkbot commented on issue #9251: [FLINK-13451][tests] Remove use of 
Unsafe.defineClass() from CommonTestUtils
URL: https://github.com/apache/flink/pull/9251#issuecomment-515781912
 
 
   ## CI report:
   
   * fb1cc465fab9eb9655dc5ba3670c05423b74bda3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120983921)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13451) Rework CommonTestUtils.createClassNotInClassPath() to not use Unsafe.defineClass()

2019-07-28 Thread Stephan Ewen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-13451:
-
Component/s: (was: core)
 Tests

> Rework CommonTestUtils.createClassNotInClassPath() to not use 
> Unsafe.defineClass()
> --
>
> Key: FLINK-13451
> URL: https://issues.apache.org/jira/browse/FLINK-13451
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The method {{Unsafe.defineClass()}} is removed in Java 11.
> To support Java 11, we need to rework the method 
> {{CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.
> Java 11 introduces a new way to define a class from byte code via 
> {{MethodHandles}}. However, because these do not exist in Java 8, we cannot 
> use them if we want to keep supporting Java 8, which we most likely want to 
> do for quite a while.
> A method that works across both versions is to write the class byte code out 
> to a temporary file and create a new URLClassLoader that loads the class from 
> that file.
> That solution is not a complete drop-in replacement, because it cannot add 
> the class to an existing class loader, but can only create a new pair of 
> (classloader & new-class-in-that-classloader). But it seems 
> straightforward to adjust the existing tests to work with that.
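
A minimal sketch of that approach (an illustration, not the actual 
CommonTestUtils code): write the class bytes to a temporary directory and load 
them through a fresh {{URLClassLoader}}.

{code:java}
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public final class ClassFromTempFile {

    /** Defines the given class bytes in a brand-new class loader. */
    public static Class<?> defineInNewClassLoader(String className, byte[] byteCode)
            throws IOException, ClassNotFoundException {
        Path tempDir = Files.createTempDirectory("test-classes");
        // URLClassLoader expects <dir>/<package-path>/<SimpleName>.class
        Path classFile = tempDir.resolve(className.replace('.', '/') + ".class");
        Files.createDirectories(classFile.getParent());
        Files.write(classFile, byteCode);

        URLClassLoader classLoader = new URLClassLoader(
                new URL[] {tempDir.toUri().toURL()},
                ClassFromTempFile.class.getClassLoader());
        // The caller receives a new (classloader, class) pair; the class is
        // not visible to any pre-existing class loader.
        return Class.forName(className, true, classLoader);
    }
}
{code}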



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9251: [FLINK-13451][tests] Remove use of Unsafe.defineClass() from CommonTestUtils

2019-07-28 Thread GitBox
flinkbot commented on issue #9251: [FLINK-13451][tests] Remove use of 
Unsafe.defineClass() from CommonTestUtils
URL: https://github.com/apache/flink/pull/9251#issuecomment-515781641
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13451) Rework CommonTestUtils.createClassNotInClassPath() to not use Unsafe.defineClass()

2019-07-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13451:
---
Labels: pull-request-available  (was: )

> Rework CommonTestUtils.createClassNotInClassPath() to not use 
> Unsafe.defineClass()
> --
>
> Key: FLINK-13451
> URL: https://issues.apache.org/jira/browse/FLINK-13451
> Project: Flink
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> The method {{Unsafe.defineClass()}} is removed in Java 11.
> To support Java 11, we need to rework the method 
> {{CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.
> Java 11 introduces a new way to define a class from byte code via 
> {{MethodHandles}}. However, because these do not exist in Java 8, we cannot 
> use them if we want to keep supporting Java 8, which we most likely want to 
> do for quite a while.
> A method that works across both versions is to write the class byte code out 
> to a temporary file and create a new URLClassLoader that loads the class from 
> that file.
> That solution is not a complete drop-in replacement, because it cannot add 
> the class to an existing class loader, but can only create a new pair of 
> (classloader & new-class-in-that-classloader). But it seems 
> straightforward to adjust the existing tests to work with that.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] StephanEwen opened a new pull request #9251: [FLINK-13451][tests] Remove use of Unsafe.defineClass() from CommonTestUtils

2019-07-28 Thread GitBox
StephanEwen opened a new pull request #9251: [FLINK-13451][tests] Remove use of 
Unsafe.defineClass() from CommonTestUtils
URL: https://github.com/apache/flink/pull/9251
 
 
   ## What is the purpose of the change
   
   The method `Unsafe.defineClass()` is removed in Java 11. To support Java 11, 
we rework the method
   `CommonTestUtils.createClassNotInClassPath()` to use a different mechanism.
   
   This commit now writes the class byte code out to a temporary file and 
creates a new URLClassLoader that loads the class from that file. 
   
   The solution is not a complete drop-in replacement, because it cannot add 
the class to an existing class loader, but can only create a new pair of 
(classloader & new-class-in-that-classloader).
   Because of that, the commit also adjusts the existing tests to work with 
that new mechanism.
   
   ## Brief change log
   
 - Change `CommonTestUtils`:  `createObjectForClassNotInClassPath()` ==> 
`createObjectFromNewClassLoader()`
 - Adjust existing tests
   
   ## Verifying this change
   
   There are two unit tests that validate the new test utility.
   Aside from that, this change is a refactoring of existing tests and thus 
needs no additional tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9250: [FLINK-13371][coordination] Prevent 
leaks of blocking partitions 
URL: https://github.com/apache/flink/pull/9250#issuecomment-515772917
 
 
   ## CI report:
   
   * 4e15048a256b338df18ad8f9d89e0a576ae06a27 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120980283)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-07-28 Thread GitBox
flinkbot commented on issue #9250: [FLINK-13371][coordination] Prevent leaks of 
blocking partitions 
URL: https://github.com/apache/flink/pull/9250#issuecomment-515772917
 
 
   ## CI report:
   
   * 4e15048a256b338df18ad8f9d89e0a576ae06a27 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120980283)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13451) Rework CommonTestUtils.createClassNotInClassPath() to not use Unsafe.defineClass()

2019-07-28 Thread Stephan Ewen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-13451:


Assignee: Stephan Ewen

> Rework CommonTestUtils.createClassNotInClassPath() to not use 
> Unsafe.defineClass()
> --
>
> Key: FLINK-13451
> URL: https://issues.apache.org/jira/browse/FLINK-13451
> Project: Flink
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Critical
> Fix For: 1.10.0
>
>
> The method {{Unsafe.defineClass()}} is removed in Java 11.
> To support Java 11, we need to rework the method 
> {{CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.
> Java 11 introduces a new way to define a class from byte code via 
> {{MethodHandles}}. However, because these do not exist in Java 8, we cannot 
> use them if we want to keep supporting Java 8, which we most likely want to 
> do for quite a while.
> A method that works across both versions is to write the class byte code out 
> to a temporary file and create a new URLClassLoader that loads the class from 
> that file.
> That solution is not a complete drop-in replacement, because it cannot add 
> the class to an existing class loader, but can only create a new pair of 
> (classloader & new-class-in-that-classloader). But it is seems 
> straightforward to adjust the existing tests to work with that.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-07-28 Thread GitBox
flinkbot commented on issue #9250: [FLINK-13371][coordination] Prevent leaks of 
blocking partitions 
URL: https://github.com/apache/flink/pull/9250#issuecomment-515772318
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13371) Release partitions in JM if producer restarts

2019-07-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13371:
---
Labels: pull-request-available  (was: )

> Release partitions in JM if producer restarts
> -
>
> Key: FLINK-13371
> URL: https://issues.apache.org/jira/browse/FLINK-13371
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Network
>Affects Versions: 1.9.0
>Reporter: Andrey Zagrebin
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> As discussed in FLINK-13245, there can be a case where the producer does not 
> even detect any consumption attempt if the consumer fails before the 
> connection is established. This means we cannot fully rely on the shuffle 
> service for the release on consumption in case of consumer failure. When the 
> producer restarts, it will leak partitions from the previous attempt. 
> Previously we had an explicit release call for this case in 
> Execution.cancel/suspend. Basically, the JM has to explicitly release all 
> partitions produced by the previous task execution attempt in case of a 
> producer restart, including `released on consumption` partitions. For this 
> change, we might need to track all partitions in PartitionTrackerImpl.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zentol opened a new pull request #9250: [FLINK-13371][coordination] Prevent leaks of blocking partitions

2019-07-28 Thread GitBox
zentol opened a new pull request #9250: [FLINK-13371][coordination] Prevent 
leaks of blocking partitions 
URL: https://github.com/apache/flink/pull/9250
 
 
   If force-release-on-consumption is enabled, it is possible for blocking 
partitions to be leaked. For these partitions we rely on the consumer sending 
notifications for having consumed the partition; however, the consumer may 
never be deployed successfully. In this case the partition is neither released 
by the task, nor by any other cleanup procedure, since they all ignore 
partitions that are released on consumption.
   
   Note that a similar issue can occur for pipelined partitions that are 
buffered on the producer's side before a consumer was actually scheduled. This 
issue is not addressed by this commit.
   
   The TaskExecutor now tracks all blocking partitions, to ensure they are 
cleaned up when the JM connection terminates or the TE shuts down.
   
   The PartitionTracker now tracks all blocking partitions, to ensure they are 
cleaned up on job termination and vertex resets.
   
   The execution now separately issues release calls for all produced 
partitions in case of a state reconciliation, where an execution was CANCELING 
but receives the notification for being FINISHED. Since we arrive in a 
CANCELED state, we release all partitions. At this point the PartitionTracker 
is not yet tracking these partitions (since we never officially reached a 
FINISHED state in the EG), hence the execution sends these through separate 
RPC logic.
   Additionally, the execution no longer issues release calls through the 
PartitionTracker if it reached a terminal state, but just removes the 
partitions from the tracker.
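
   A hypothetical illustration of the tracking scheme described above (the 
names and types are placeholders, not Flink's actual API): every blocking 
partition is tracked until it is explicitly released or the tracking is 
stopped, so a lost consumer can no longer leak it.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

final class BlockingPartitionTracker {
    private final Map<String, Set<String>> partitionsPerJob = new ConcurrentHashMap<>();
    private final Consumer<String> releasePartition;

    BlockingPartitionTracker(Consumer<String> releasePartition) {
        this.releasePartition = releasePartition;
    }

    /** Called when a blocking partition is produced. */
    void startTracking(String jobId, String partitionId) {
        partitionsPerJob
                .computeIfAbsent(jobId, k -> ConcurrentHashMap.newKeySet())
                .add(partitionId);
    }

    /** Stop tracking without releasing, e.g. after a clean consumption. */
    void stopTracking(String jobId, String partitionId) {
        Set<String> partitions = partitionsPerJob.get(jobId);
        if (partitions != null) {
            partitions.remove(partitionId);
        }
    }

    /** Release everything for a job, e.g. on job termination or a lost JM connection. */
    void releaseAllFor(String jobId) {
        Set<String> partitions = partitionsPerJob.remove(jobId);
        if (partitions != null) {
            partitions.forEach(releasePartition);
        }
    }
}
```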


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13451) Rework CommonTestUtils.createClassNotInClassPath() to not use Unsafe.defineClass()

2019-07-28 Thread Stephan Ewen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-13451:
-
Description: 
The method {{Unsafe.defineClass()}} is removed in Java 11.
To support Java 11, we need to rework the method 
{{CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.

Java 11 introduces a new way to define a class from byte code via 
{{MethodHandles}}. However, because these do not exist in Java 8, we cannot use 
them if we want to keep supporting Java 8, which we most likely want to do for 
quite a while.

A method that works across both versions is to write the class byte code out to 
a temporary file and create a new URLClassLoader that loads the class from that 
file.

That solution is not a complete drop-in replacement, because it cannot add the 
class to an existing class loader, but can only create a new pair of 
(classloader & new-class-in-that-classloader). But it seems straightforward 
to adjust the existing tests to work with that.

  was:
The method {{Unsafe.defineClass()}} is removed in Java 11.
To support Java 11, we need to rework the method {{ 
CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.

Java 11 introduces a new way to define a class from byte code via 
{{MethodHandles}}. However, because these do not exist in Java 8, we cannot use 
them if we want to keep supporting Java 8, which we most likely want to do for 
quite a while.

A method that works across both versions is to write the class byte code out to 
a temporary file and create a new URLClassLoader that loads the class from that 
file.

That solution is not a complete drop-in replacement, because it cannot add the 
class to an existing class loader, but can only create a new pair of 
(classloader & new-class-in-that-classloader). But it is seems straightforward 
to adjust the existing tests to work with that.


> Rework CommonTestUtils.createClassNotInClassPath() to not use 
> Unsafe.defineClass()
> --
>
> Key: FLINK-13451
> URL: https://issues.apache.org/jira/browse/FLINK-13451
> Project: Flink
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Priority: Critical
> Fix For: 1.10.0
>
>
> The method {{Unsafe.defineClass()}} is removed in Java 11.
> To support Java 11, we need to rework the method 
> {{CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.
> Java 11 introduces a new way to define a class from byte code via 
> {{MethodHandles}}. However, because these do not exist in Java 8, we cannot 
> use them if we want to keep supporting Java 8, which we most likely want to 
> do for quite a while.
> A method that works across both versions is to write the class byte code out 
> to a temporary file and create a new URLClassLoader that loads the class from 
> that file.
> That solution is not a complete drop-in replacement, because it cannot add 
> the class to an existing class loader, but can only create a new pair of 
> (classloader & new-class-in-that-classloader). But it seems 
> straightforward to adjust the existing tests to work with that.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13451) Rework CommonTestUtils.createClassNotInClassPath() to not use Unsafe.defineClass()

2019-07-28 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-13451:


 Summary: Rework CommonTestUtils.createClassNotInClassPath() to not 
use Unsafe.defineClass()
 Key: FLINK-13451
 URL: https://issues.apache.org/jira/browse/FLINK-13451
 Project: Flink
  Issue Type: Improvement
  Components: core
Affects Versions: 1.9.0
Reporter: Stephan Ewen
 Fix For: 1.10.0


The method {{Unsafe.defineClass()}} is removed in Java 11.
To support Java 11, we need to rework the method 
{{CommonTestUtils.createClassNotInClassPath()}} to use a different mechanism.

Java 11 introduces a new way to define a class from byte code via 
{{MethodHandles}}. However, because these do not exist in Java 8, we cannot use 
them if we want to keep supporting Java 8, which we most likely want to do for 
quite a while.

A method that works across both versions is to write the class byte code out to 
a temporary file and create a new URLClassLoader that loads the class from that 
file.

That solution is not a complete drop-in replacement, because it cannot add the 
class to an existing class loader, but can only create a new pair of 
(classloader & new-class-in-that-classloader). But it seems straightforward 
to adjust the existing tests to work with that.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] klion26 commented on issue #8751: [FLINK-11937][StateBackend]Resolve small file problem in RocksDB incremental checkpoint

2019-07-28 Thread GitBox
klion26 commented on issue #8751: [FLINK-11937][StateBackend]Resolve small file 
problem in RocksDB incremental checkpoint
URL: https://github.com/apache/flink/pull/8751#issuecomment-515767642
 
 
   @StephanEwen  @tzulitai @aljoscha @kl0u Could you please take a look at 
this? Thanks.
   Besides the earlier comments, I also updated the description of the PR, 
mainly adding these two points about it (see the sketch below):
   - Reuse the same underlying file within one checkpoint of one operator; 
this means we generate just a single file per checkpoint.
   - The new feature (FSCSOS) does not support using ByteStreamStateHandle; 
state is always flushed to a file.
   
   The failed test is unrelated; 
[FLINK-9900](https://issues.apache.org/jira/browse/FLINK-9900) is tracking it.
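
   A hypothetical sketch of the "single file per checkpoint" idea above (not 
the PR's actual FsCheckpointStateOutputStream code): several state writes 
append to one shared file, and each write is identified by its 
(offset, length) range within that file.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

final class SharedCheckpointFile implements AutoCloseable {
    private final RandomAccessFile file;

    SharedCheckpointFile(String path) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
    }

    /** Appends one piece of state; returns its {offset, length} handle. */
    synchronized long[] write(byte[] state) throws IOException {
        long offset = file.length();
        file.seek(offset);
        file.write(state);
        return new long[] {offset, state.length};
    }

    @Override
    public void close() throws IOException {
        file.close();
    }
}
```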


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2019-07-28 Thread Congxian Qiu(klion26) (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894711#comment-16894711
 ] 

Congxian Qiu(klion26) commented on FLINK-9900:
--

Another instance: https://travis-ci.com/flink-ci/flink/jobs/220237256

> Failed to testRestoreBehaviourWithFaultyStateHandles 
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) 
> ---
>
> Key: FLINK-9900
> URL: https://issues.apache.org/jira/browse/FLINK-9900
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.5.1, 1.6.0, 1.9.0
>Reporter: zhangminglei
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/405843617/log.txt
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 124.598 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase
>  
> testRestoreBehaviourWithFaultyStateHandles(org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)
>  Time elapsed: 120.036 sec <<< ERROR!
>  org.junit.runners.model.TestTimedOutException: test timed out after 120000 
> milliseconds
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
>  at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at 
> org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles(ZooKeeperHighAvailabilityITCase.java:244)
> Results :
> Tests in error: 
>  
> ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles:244
>  » TestTimedOut
> Tests run: 1453, Failures: 0, Errors: 1, Skipped: 29



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8751: [FLINK-11937][StateBackend]Resolve small file problem in RocksDB incremental checkpoint

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #8751: [FLINK-11937][StateBackend]Resolve 
small file problem in RocksDB incremental checkpoint
URL: https://github.com/apache/flink/pull/8751#issuecomment-515749731
 
 
   ## CI report:
   
   * 82127e2fb45c51334d7a3bcc42ae940894c0db6c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120970456)
   * 2dda201498d913b3fbf2e7d078dd5440652e6a19 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974324)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] 
Introduce type inference for hive functions in blink
URL: https://github.com/apache/flink/pull/9089#issuecomment-510488226
 
 
   ## CI report:
   
   * fb34a0f4245ddac5872ea77aad07887a6ff12d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118890132)
   * ba44069acdbd82261839605b5d363548dae81522 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054606)
   * 349f15d9e799ac9d316a02392d60495058fda4aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974318)
   * 8876d89f32920192e1d3615b588b72021dbc379a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974650)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] 
Introduce type inference for hive functions in blink
URL: https://github.com/apache/flink/pull/9089#issuecomment-510488226
 
 
   ## CI report:
   
   * fb34a0f4245ddac5872ea77aad07887a6ff12d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118890132)
   * ba44069acdbd82261839605b5d363548dae81522 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054606)
   * 349f15d9e799ac9d316a02392d60495058fda4aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974318)
   * 8876d89f32920192e1d3615b588b72021dbc379a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974650)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] 
Introduce type inference for hive functions in blink
URL: https://github.com/apache/flink/pull/9089#issuecomment-510488226
 
 
   ## CI report:
   
   * fb34a0f4245ddac5872ea77aad07887a6ff12d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118890132)
   * ba44069acdbd82261839605b5d363548dae81522 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054606)
   * 349f15d9e799ac9d316a02392d60495058fda4aa : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974318)
   * 8876d89f32920192e1d3615b588b72021dbc379a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974650)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13438) Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support

2019-07-28 Thread Jingsong Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894690#comment-16894690
 ] 

Jingsong Lee commented on FLINK-13438:
--

Hi [~lirui], I think we should add tests for the DATE/TIME/TIMESTAMP types in 
Hive source, sink and UDX using the blink planner (and maybe the flink planner 
too).

Hi [~TsReaper], I think you should explain in the Jira which case and code 
lead to this bug, to make the problem more understandable.

> Fix Hive connector with DataTypes.DATE/TIME/TIMESTAMP support
> -
>
> Key: FLINK-13438
> URL: https://issues.apache.org/jira/browse/FLINK-13438
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Caizhi Weng
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> Similar to the JDBC connectors, Hive connectors communicate with the Flink 
> framework using TableSchema, which contains DataType. As the time data read 
> from and written to Hive connectors must be java.sql.* types, while the 
> default conversion classes of our time data types are java.time.*, we have 
> to fix the Hive connector with DataTypes.DATE/TIME/TIMESTAMP support.
> But currently, when reading tables from Hive, the table schema is created 
> using Hive's schema, so the time types in the created schema will be SQL time 
> types, not local time types. If a user specifies a local time type in the 
> table schema when creating a table in Hive, they will get a different schema 
> when reading it back. This is undesirable.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] Introduce type inference for hive functions in blink

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #9089: [FLINK-13225][table-planner-blink] 
Introduce type inference for hive functions in blink
URL: https://github.com/apache/flink/pull/9089#issuecomment-510488226
 
 
   ## CI report:
   
   * fb34a0f4245ddac5872ea77aad07887a6ff12d11 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118890132)
   * ba44069acdbd82261839605b5d363548dae81522 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054606)
   * 349f15d9e799ac9d316a02392d60495058fda4aa : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974318)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8751: [FLINK-11937][StateBackend]Resolve small file problem in RocksDB incremental checkpoint

2019-07-28 Thread GitBox
flinkbot edited a comment on issue #8751: [FLINK-11937][StateBackend]Resolve 
small file problem in RocksDB incremental checkpoint
URL: https://github.com/apache/flink/pull/8751#issuecomment-515749731
 
 
   ## CI report:
   
   * 82127e2fb45c51334d7a3bcc42ae940894c0db6c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120970456)
   * 2dda201498d913b3fbf2e7d078dd5440652e6a19 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/120974324)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-13412) Test fail on ARM

2019-07-28 Thread Stephan Ewen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-13412.

   Resolution: Duplicate
Fix Version/s: (was: 1.9.0)

Closing this in favor FLINK-13448

> Test fail on ARM
> 
>
> Key: FLINK-13412
> URL: https://issues.apache.org/jira/browse/FLINK-13412
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.0
>Reporter: wangxiyuan
>Priority: Trivial
>
> We hit some test failures when running `mvn verify` for Flink on ARM arch.
> --
> 1. org.apache.flink.util.MemoryArchitectureTest
> [ERROR] 
> testArchitectureNotUnknown(org.apache.flink.util.MemoryArchitectureTest) Time 
> elapsed: 0.024 s <<< FAILURE!
> java.lang.AssertionError: Values should be different. Actual: UNKNOWN
> at 
> org.apache.flink.util.MemoryArchitectureTest.testArchitectureNotUnknown(MemoryArchitectureTest.java:34)
>  
> 2. org.apache.flink.table.expressions.SqlExpressionTest
> testArithmeticFunctions org.junit.ComparisonFailure: Wrong result for: 
> [LOG(3,27)] optimized to: [3.0004440E0] expected:<3.0[]> but 
> was:<3.0[004]>
> --
> For the first one, I think we should add `aarch64` to the known arch list as 
> well.
>  
> For the second one, the logarithm behavior is a little different between ARM 
> and x86. For example, for log(3,27) or log(7,343), the result is 3.0 on x86 
> but 3.0004 on ARM. Can we remove the test log(3,27), or change it to 
> something like log(2,8)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13450) Adjust tests to tolerate arithmetic differences between x86 and ARM

2019-07-28 Thread Stephan Ewen (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-13450:
-
Description: 
Certain arithmetic operations have different precision/rounding on ARM versus 
x86.

Tests using floating point numbers should be changed to tolerate a certain 
minimal deviation.

  was:
Certain arithmetic operations have different precision/rounding on ARM versus 
x86.

Tests using floating point numbers should be changes to tolerate a certain 
minimal deviation.


> Adjust tests to tolerate arithmetic differences between x86 and ARM
> ---
>
> Key: FLINK-13450
> URL: https://issues.apache.org/jira/browse/FLINK-13450
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Priority: Major
>
> Certain arithmetic operations have different precision/rounding on ARM versus 
> x86.
> Tests using floating point numbers should be changed to tolerate a certain 
> minimal deviation.
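
A minimal sketch of the suggested adjustment: compare floating-point results 
with an explicit tolerance instead of exact equality, so that platform-specific 
rounding (x86 vs. ARM) does not fail the test. (An illustrative JUnit 4 test, 
not an existing Flink test.)

{code:java}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LogToleranceTest {

    @Test
    public void logarithmWithinTolerance() {
        double result = Math.log(27) / Math.log(3); // log base 3 of 27
        // The third argument is the allowed absolute deviation.
        assertEquals(3.0, result, 1e-9);
    }
}
{code}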



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13450) Adjust tests to tolerate arithmetic differences between x86 and ARM

2019-07-28 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-13450:


 Summary: Adjust tests to tolerate arithmetic differences between 
x86 and ARM
 Key: FLINK-13450
 URL: https://issues.apache.org/jira/browse/FLINK-13450
 Project: Flink
  Issue Type: Sub-task
Affects Versions: 1.9.0
Reporter: Stephan Ewen


Certain arithmetic operations have different precision/rounding on ARM versus 
x86.

Tests using floating point numbers should be changed to tolerate a certain 
minimal deviation.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

