[GitHub] [flink] flinkbot commented on issue #9480: [Flink-13752] Only references necessary variables when bookkeeping result partitions on TM

2019-08-18 Thread GitBox
flinkbot commented on issue #9480: [Flink-13752] Only references necessary 
variables when bookkeeping result partitions on TM
URL: https://github.com/apache/flink/pull/9480#issuecomment-522439114
 
 
   ## CI report:
   
   * a95596e548b62cfc71177f8c2da0be540566f976 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123672528)
   




[GitHub] [flink] flinkbot commented on issue #9480: [Flink-13752] Only references necessary variables when bookkeeping result partitions on TM

2019-08-18 Thread GitBox
flinkbot commented on issue #9480: [Flink-13752] Only references necessary 
variables when bookkeeping result partitions on TM
URL: https://github.com/apache/flink/pull/9480#issuecomment-522437510
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a95596e548b62cfc71177f8c2da0be540566f976 (Mon Aug 19 
06:52:58 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] gaoyunhaii opened a new pull request #9480: [Flink-13752] Only references necessary variables when bookkeeping result partitions on TM

2019-08-18 Thread GitBox
gaoyunhaii opened a new pull request #9480: [Flink-13752] Only references 
necessary variables when bookkeeping result partitions on TM
URL: https://github.com/apache/flink/pull/9480
 
 
   
   
   ## What is the purpose of the change
   
   This pull request fixes the problem that the TaskDeploymentDescriptor is 
referenced by an anonymous function that is executed when the task terminates, 
which prevents the TaskDeploymentDescriptor from being reclaimed by the GC. 
Since the TaskDeploymentDescriptor holds a reference to its serialized value, 
and for some tasks this array can contain dozens of megabytes, failing to 
reclaim it can have a large impact on GC and, in turn, on performance.
   
   Since the anonymous function only needs the JobId rather than the whole 
TaskDeploymentDescriptor, it can reference just the JobId.
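   
   For illustration only, here is a minimal, self-contained Java sketch of the capture problem; the class and field names are hypothetical and do not correspond to the actual Flink classes:
   
   ```java
   /** Hypothetical sketch of why capturing the whole descriptor keeps it alive. */
   public class CaptureSketch {
   
       static class Descriptor {
           final byte[] serializedData = new byte[64 * 1024 * 1024]; // large serialized payload
           final String jobId = "job-42";
       }
   
       public static void main(String[] args) {
           Descriptor descriptor = new Descriptor();
   
           // Problematic: the lambda captures the whole descriptor, so the large
           // byte array stays reachable for as long as the callback is referenced.
           Runnable leaky = () -> System.out.println("cleanup for " + descriptor.jobId);
   
           // Fixed: read the small field first and capture only that value, so the
           // descriptor itself becomes eligible for garbage collection.
           String jobId = descriptor.jobId;
           Runnable lean = () -> System.out.println("cleanup for " + jobId);
   
           leaky.run();
           lean.run();
       }
   }
   ```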
   
   ## Brief change log
   
   a95596e548b62cfc71177f8c2da0be540566f976 acquires the JobId outside the 
anonymous function and references only the JobId inside it.
   
   ## Verifying this change
   I think there is no need to add a unit test for this fix, since it does not 
change the interface of any component. A unit test could only obtain the 
anonymous function and check that it has no field referencing the 
TaskDeploymentDescriptor, which would mainly test implementation details.
   
   Currently the fix has been verified manually with a long-running job that 
triggers this problem.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager Checkpointing, 
Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   




[GitHub] [flink] flinkbot commented on issue #9479: [FLINK-13768] [Documentation] Add documentation regarding path style access for s3

2019-08-18 Thread GitBox
flinkbot commented on issue #9479: [FLINK-13768] [Documentation] Add 
documentation regarding path style access for s3
URL: https://github.com/apache/flink/pull/9479#issuecomment-522435214
 
 
   ## CI report:
   
   * f4b4a9e1343252b7783d8fd39d8ab25e39b400d7 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123671193)
   




[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441376)
   * 9c14836f8639e98d58cf7bb32e38b938b3843994 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577044)
   * 76186776c5620598a19234245bbd05dfdfb1c62c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120113740)
   * 628ca7b316ad3968c90192a47a84dd01f26e2578 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122381349)
   * d204a725ff3c8a046cbd1b84e34d9e3ae8aafeac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123620485)
   * 143efadbdb6c4681569d5b412a175edfb1633b85 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123637809)
   * b78b64a82ed2a9a92886095ec42f06d5082ad830 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123671219)
   




[GitHub] [flink] flinkbot commented on issue #9479: [FLINK-13768] [Documentation] Add documentation regarding path style access for s3

2019-08-18 Thread GitBox
flinkbot commented on issue #9479: [FLINK-13768] [Documentation] Add 
documentation regarding path style access for s3
URL: https://github.com/apache/flink/pull/9479#issuecomment-522434059
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f4b4a9e1343252b7783d8fd39d8ab25e39b400d7 (Mon Aug 19 
06:40:06 UTC 2019)
   
   ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] achyuthsamudrala opened a new pull request #9479: [FLINK-13768] [Documentation] Add documentation regarding path style access for s3

2019-08-18 Thread GitBox
achyuthsamudrala opened a new pull request #9479: [FLINK-13768] [Documentation] 
Add documentation regarding path style access for s3
URL: https://github.com/apache/flink/pull/9479
 
 
   
   
   ## What is the purpose of the change
   
   When interacting with S3-compatible file systems such as CEPH, virtual host 
style addressing is not always enabled. The path style access property controls 
which addressing mode is used; by default, if it is not set, virtual host style 
addressing is used. The documentation should mention how this property can be 
passed as a Flink configuration property.
   More about virtual hosting/path style: 
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
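   
   As a hedged illustration (assuming, per FLINK-13768, that properties prefixed with `s3.` are forwarded to the corresponding `fs.s3a.` key), the option would typically go into flink-conf.yaml as `s3.path.style.access: true`; programmatically it could be set like this:
   
   ```java
   import org.apache.flink.configuration.Configuration;
   import org.apache.flink.core.fs.FileSystem;
   
   public class S3PathStyleConfigSketch {
       public static void main(String[] args) throws Exception {
           Configuration conf = new Configuration();
           // Forwarded by the S3 filesystem factories to fs.s3a.path.style.access,
           // enabling path-style access for S3-compatible stores such as CEPH.
           conf.setString("s3.path.style.access", "true");
           FileSystem.initialize(conf);
       }
   }
   ```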
   
   ## Brief change log
   
   Add documentation regarding path style access for s3
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no 
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no 
 - The S3 file system connector: no 
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable 
   




[jira] [Commented] (FLINK-13500) RestClusterClient requires S3 access when HA is configured

2019-08-18 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910188#comment-16910188
 ] 

TisonKun commented on FLINK-13500:
--

As for this issue, it is more likely that we should not initialize the 
BlobStoreService in the HighAvailabilityServices constructor, but only when 
{{#createBlobStore}} is called. That would solve this issue in the right way, 
while we can still think about separating client-/server-side HA services.
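
A rough sketch of the suggested direction, with placeholder types (the real 
Flink classes and signatures may differ):
{code:java}
import java.util.function.Supplier;

/** Hypothetical sketch: create the blob store lazily instead of in the constructor. */
class LazyHaServicesSketch {
    private final Supplier<Object> blobStoreFactory; // e.g. wrapping BlobUtils.createBlobStoreFromConfig
    private volatile Object blobStore;

    LazyHaServicesSketch(Supplier<Object> blobStoreFactory) {
        this.blobStoreFactory = blobStoreFactory; // no S3 access happens here
    }

    Object createBlobStore() {
        if (blobStore == null) {
            synchronized (this) {
                if (blobStore == null) {
                    blobStore = blobStoreFactory.get(); // S3 is contacted only on first use
                }
            }
        }
        return blobStore;
    }
}
{code}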

> RestClusterClient requires S3 access when HA is configured
> --
>
> Key: FLINK-13500
> URL: https://issues.apache.org/jira/browse/FLINK-13500
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / REST
>Affects Versions: 1.8.1
>Reporter: David Judd
>Priority: Major
>
> RestClusterClient initialization calls ClusterClient initialization, which 
> calls
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices
> In turn, createHighAvailabilityServices calls 
> BlobUtils.createBlobStoreFromConfig, which in our case tries to talk to S3.
> It seems very surprising to me that (a) RestClusterClient needs any form of 
> access other than to the REST API, and (b) that client initialization would 
> attempt a write as a side effect. I do not see either of these surprising 
> facts described in the documentation–are they intentional?





[jira] [Commented] (FLINK-13758) failed to submit JobGraph when registered hdfs file in DistributedCache

2019-08-18 Thread luoguohao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910185#comment-16910185
 ] 

luoguohao commented on FLINK-13758:
---

[~fly_in_gis] Yes, this works in local mode because it takes a different code 
path, but if you deploy the application on a cluster it fails after a while (by 
default after about 100 minutes).

> failed to submit JobGraph when registered hdfs file in DistributedCache 
> 
>
> Key: FLINK-13758
> URL: https://issues.apache.org/jira/browse/FLINK-13758
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1
>Reporter: luoguohao
>Priority: Major
>
> When using HDFS files for the DistributedCache, submitting the JobGraph 
> fails; we can see exception stack traces in the log file after a while. But 
> if the DistributedCache file is a local file, everything goes fine.





[GitHub] [flink] flinkbot edited a comment on issue #8742: [FLINK-11879] Add validators for the uses of InputSelectable, BoundedOneInput and BoundedMultiInput

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8742: [FLINK-11879] Add validators for the 
uses of InputSelectable, BoundedOneInput and BoundedMultiInput
URL: https://github.com/apache/flink/pull/8742#issuecomment-510731561
 
 
   ## CI report:
   
   * 3f0c15862fc70f35cd58883ca9635bde1a5fb7ee : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118876288)
   * e9adf752da210ededdcebbd1ba3753c3b689cf3e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054586)
   * 9bdedfc1d79a87012205f4e1345bffcd5f7fc299 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121056283)
   * e78d543020c51ef86c7a597b04a8b552b43381f5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123663406)
   




[GitHub] [flink] flinkbot edited a comment on issue #9002: [FLINK-13105][table][doc] Add documentation for blink planner's built-in functions

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9002: [FLINK-13105][table][doc] Add 
documentation for blink planner's built-in functions
URL: https://github.com/apache/flink/pull/9002#issuecomment-513721231
 
 
   ## CI report:
   
   * 40477a632625f9cb7ebce8ed4488a99b7b4f5093 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119984603)
   * 64ad34dd196a8941ff9f92dd0b389a22796650f8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123663046)
   




[jira] [Updated] (FLINK-13768) Update documentation regarding `path style access` for S3 filesystem implementations

2019-08-18 Thread Achyuth Narayan Samudrala (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Achyuth Narayan Samudrala updated FLINK-13768:
--
Issue Type: Improvement  (was: New Feature)

> Update documentation regarding `path style access` for S3 filesystem 
> implementations
> 
>
> Key: FLINK-13768
> URL: https://issues.apache.org/jira/browse/FLINK-13768
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Achyuth Narayan Samudrala
>Priority: Minor
>  Labels: documentation-update
>
> The documentation related to various properties that can be provided for the 
> s3 sink is not very informative. According to the code in 
> flink-s3-fs-base/flink-s3-fs-hadoop, any property specified as 
> s3.<key> is transformed to fs.s3a.<key>.
>  
> When interacting with s3 compatible file systems such as CEPH/Minio, default 
> configuration properties might not be sufficient. One such property is the 
> fs.s3a.path.style.access. This property enables different modes of access to 
> the s3 buckets. By default if this property is not set, virtual host style 
> addressing is used. The documentation should mention how this property can be 
> passed on as a flink conf property.





[jira] [Created] (FLINK-13768) Update documentation regarding `path style access` for S3 filesystem implementations

2019-08-18 Thread Achyuth Narayan Samudrala (JIRA)
Achyuth Narayan Samudrala created FLINK-13768:
-

 Summary: Update documentation regarding `path style access` for S3 
filesystem implementations
 Key: FLINK-13768
 URL: https://issues.apache.org/jira/browse/FLINK-13768
 Project: Flink
  Issue Type: New Feature
  Components: Documentation
Reporter: Achyuth Narayan Samudrala


The documentation related to various properties that can be provided for the s3 
sink is not very informative. According to the code in 
flink-s3-fs-base/flink-s3-fs-hadoop, any property specified as 
s3.<key> is transformed to fs.s3a.<key>.

 

When interacting with s3 compatible file systems such as CEPH/Minio, default 
configuration properties might not be sufficient. One such property is the 
fs.s3a.path.style.access. This property enables different modes of access to 
the s3 buckets. By default if this property is not set, virtual host style 
addressing is used. The documentation should mention how this property can be 
passed on as a flink conf property.





[jira] [Commented] (FLINK-13731) flink sql support window with alignment

2019-08-18 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910177#comment-16910177
 ] 

Jark Wu commented on FLINK-13731:
-

Hi, I think the proctime attribute should return a timestamp with the local 
time zone, so that TUMBLE(pt, interval '1' DAY) can align with the session time 
zone. On the other hand, I'm also fine with supporting TUMBLE(dateTime, 
interval, time), as this is supported in Calcite and is useful when we want the 
window start time to be, for example, "00:20".

> flink sql support window with alignment
> ---
>
> Key: FLINK-13731
> URL: https://issues.apache.org/jira/browse/FLINK-13731
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: zzsmdfj
>Priority: Major
>
> for now, sql: 
> {code:java}
> // code placeholder
> SELECT  COUNT(*) GROUP BY TUMBLE(pt, interval '1' DAY, time '08:00:00')
> {code}
> is not supported in Flink SQL. When the rowtime is processing time, the 
> window is assigned by UTC time, which is not the correct day window when I am 
> in a specific time zone.
>  





[GitHub] [flink] soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add AvroSerializationSchema

2019-08-18 Thread GitBox
soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add 
AvroSerializationSchema
URL: https://github.com/apache/flink/pull/8371#issuecomment-522413266
 
 
   Using ConfluentRegistryAvroSerializationSchema auto-registers the Avro 
schema and saves the data as bytes. Also, when an Avro schema is registered in 
advance, I get the following error when sending a message:
   
   ```
   Caused by: org.apache.kafka.common.errors.SerializationException: Error 
registering Avro schema: "bytes"
   Caused by: 
io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
Schema being registered is incompatible with an earlier schema; error code: 
409; error code: 409
   ```




[GitHub] [flink] soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add AvroSerializationSchema

2019-08-18 Thread GitBox
soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add 
AvroSerializationSchema
URL: https://github.com/apache/flink/pull/8371#issuecomment-522413266
 
 
   Using ConfluentRegistryAvroSerializationSchema auto-registers the Avro 
schema as bytes. Also, when an Avro schema is registered in advance, I get the 
following error when sending a message:
   
   ```
   Caused by: org.apache.kafka.common.errors.SerializationException: Error 
registering Avro schema: "bytes"
   ```




[GitHub] [flink] soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add AvroSerializationSchema

2019-08-18 Thread GitBox
soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add 
AvroSerializationSchema
URL: https://github.com/apache/flink/pull/8371#issuecomment-522413266
 
 
   Using ConfluentRegistryAvroSerializationSchema auto-registers the Avro 
schema as bytes. Also, when an Avro schema is registered in advance, I get the 
following error when sending a message:
   
   ```
   Caused by: org.apache.kafka.common.errors.SerializationException: Error 
registering Avro schema: "bytes"
   ```




[GitHub] [flink] soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add AvroSerializationSchema

2019-08-18 Thread GitBox
soumyasmruti edited a comment on issue #8371: [FLINK-9679] - Add 
AvroSerializationSchema
URL: https://github.com/apache/flink/pull/8371#issuecomment-522413266
 
 
   Using ConfluentRegistryAvroSerializationSchema auto-registers the Avro 
schema as bytes. Also, when an Avro schema is registered in advance, I get the 
following error when sending a message:
   
   ```
   Caused by: org.apache.kafka.common.errors.SerializationException: Error 
registering Avro schema: "bytes"
   Caused by: 
io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
Schema being registered is incompatible with an earlier schema; error code: 
409; error code: 409
   ```




[GitHub] [flink] soumyasmruti commented on issue #8371: [FLINK-9679] - Add AvroSerializationSchema

2019-08-18 Thread GitBox
soumyasmruti commented on issue #8371: [FLINK-9679] - Add 
AvroSerializationSchema
URL: https://github.com/apache/flink/pull/8371#issuecomment-522413266
 
 
   Using ConfluentRegistryAvroSerializationSchema auto-registers the Avro 
schema as bytes. Also, when an Avro schema is registered in advance, I get the 
following error when sending a message:
   
   ```
   Caused by: org.apache.kafka.common.errors.SerializationException: Error 
registering Avro schema: "bytes"
   Caused by: 
io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
Schema being registered is incompatible with an earlier schema; error code: 
409; error code: 409
   ```




[jira] [Commented] (FLINK-4256) Fine-grained recovery

2019-08-18 Thread Thomas Weise (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910168#comment-16910168
 ] 

Thomas Weise commented on FLINK-4256:
-

Thanks for the clarification, this is excellent news. Perhaps clarify that on 
FLIP-1? Also, until the even finer grained recovery of streaming jobs becomes 
available, it may be possible for users to decompose a pipeline into smaller 
segments with intermediate pubsub topics if partial availability across a 
shuffle step is needed.

> Fine-grained recovery
> -
>
> Key: FLINK-4256
> URL: https://issues.apache.org/jira/browse/FLINK-4256
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.9.0
>
>
> When a task fails during execution, Flink currently resets the entire 
> execution graph and triggers complete re-execution from the last completed 
> checkpoint. This is more expensive than just re-executing the failed tasks.
> In many cases, more fine-grained recovery is possible.
> The full description and design is in the corresponding FLIP.
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures
> The detailed design for version 1 is 
> https://docs.google.com/document/d/1_PqPLA1TJgjlqz8fqnVE3YSisYBDdFsrRX_URgRSj74/edit#





[GitHub] [flink] flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521552936
 
 
   ## CI report:
   
   * c9d99f2866f281298f4217e9ce7543732bece2f8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123334919)
   * 671aa2687e3758d16646c6fbf58e4cc486328a38 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123456040)
   * 5c25642609614012a78142672e4e11f0b028e2a8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123488890)
   * 52289430cf4e1891b285a43d2625e908b3f2cfdf : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123659343)
   




[GitHub] [flink] flinkbot edited a comment on issue #8742: [FLINK-11879] Add validators for the uses of InputSelectable, BoundedOneInput and BoundedMultiInput

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8742: [FLINK-11879] Add validators for the 
uses of InputSelectable, BoundedOneInput and BoundedMultiInput
URL: https://github.com/apache/flink/pull/8742#issuecomment-510731561
 
 
   ## CI report:
   
   * 3f0c15862fc70f35cd58883ca9635bde1a5fb7ee : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/118876288)
   * e9adf752da210ededdcebbd1ba3753c3b689cf3e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054586)
   * 9bdedfc1d79a87012205f4e1345bffcd5f7fc299 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/121056283)
   * e78d543020c51ef86c7a597b04a8b552b43381f5 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123663406)
   




[jira] [Commented] (FLINK-13750) Separate HA services between client-/ and server-side

2019-08-18 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910162#comment-16910162
 ] 

TisonKun commented on FLINK-13750:
--

The main requirement the client side has on the HA services is to communicate 
with the Dispatcher/WebMonitor. LeaderElectionServices, BlobServices and other 
LeaderRetrievalServices are not needed on the client side.

I think it is reasonable to separate the HA services exposed to the client and 
the server side.

I'd like to take a closer look and provide a solution for this :-)
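
A very rough sketch of the kind of separation meant here, with placeholder 
return types (the actual Flink interfaces and method names may differ):
{code:java}
// Hypothetical: the client-facing HA services expose only leader retrieval.
interface ClientHighAvailabilityServices {
    Object getWebMonitorLeaderRetriever();
}

// The server-side services additionally expose leader election and blob storage.
interface ServerHighAvailabilityServices extends ClientHighAvailabilityServices {
    Object getJobManagerLeaderElectionService(String jobId);
    Object createBlobStore(); // only the server side needs blob storage access
}
{code}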

> Separate HA services between client-/ and server-side
> -
>
> Key: FLINK-13750
> URL: https://issues.apache.org/jira/browse/FLINK-13750
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client, Runtime / Coordination
>Reporter: Chesnay Schepler
>Priority: Major
>
> Currently, we use the same {{HighAvailabilityServices}} on the client and 
> server. However, the client does not need several of the features that the 
> services currently provide (access to the blobstore or checkpoint metadata).
> Additionally, due to how these services are setup they also require the 
> client to have access to the blob storage, despite it never actually being 
> used, which can cause issues, like FLINK-13500.
> [~Tison] Would you be interested in this issue?





[GitHub] [flink] flinkbot edited a comment on issue #9002: [FLINK-13105][table][doc] Add documentation for blink planner's built-in functions

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9002: [FLINK-13105][table][doc] Add 
documentation for blink planner's built-in functions
URL: https://github.com/apache/flink/pull/9002#issuecomment-513721231
 
 
   ## CI report:
   
   * 40477a632625f9cb7ebce8ed4488a99b7b4f5093 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119984603)
   * 64ad34dd196a8941ff9f92dd0b389a22796650f8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123663046)
   




[GitHub] [flink] flinkbot edited a comment on issue #8952: [FLINK-10868][flink-yarn] Add failure rater for resource manager

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8952: [FLINK-10868][flink-yarn] Add failure 
rater for resource manager
URL: https://github.com/apache/flink/pull/8952#issuecomment-513724324
 
 
   ## CI report:
   
   * d5fa0c8c2c46bafaf2e62a02743378a1e5399b35 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119985824)
   * cb899d83a30d8b34a4fb8ae9048bf34aeffa37f7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123621833)
   * f68194461a93195faf8b90ad87e7310ef61a6460 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123658965)
   




[jira] [Commented] (FLINK-13731) flink sql support window with alignment

2019-08-18 Thread zzsmdfj (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910158#comment-16910158
 ] 

zzsmdfj commented on FLINK-13731:
-

[~jark] At Alibaba (or in Blink), how do you handle this? I would appreciate it 
if you could give some advice.

> flink sql support window with alignment
> ---
>
> Key: FLINK-13731
> URL: https://issues.apache.org/jira/browse/FLINK-13731
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: zzsmdfj
>Priority: Major
>
> for now, sql: 
> {code:java}
> // code placeholder
> SELECT  COUNT(*) GROUP BY TUMBLE(pt, interval '1' DAY, time '08:00:00')
> {code}
> is not supported in Flink SQL. When the rowtime is processing time, the 
> window is assigned by UTC time, which is not the correct day window when I am 
> in a specific time zone.
>  





[jira] [Commented] (FLINK-13731) flink sql support window with alignment

2019-08-18 Thread zzsmdfj (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910154#comment-16910154
 ] 

zzsmdfj commented on FLINK-13731:
-

[~GatsbyNewton], when the rowtime is processing time, I am not sure. In fact, I 
am not clear why we do not support windows with alignment (Calcite supports 
this feature). Would getting the additional operands from the groupExpr in 
LogicalWindowAggregateRule and then passing them into the case class 
TumblingGroupWindow or SlidingGroupWindow be the right way to do this?

> flink sql support window with alignment
> ---
>
> Key: FLINK-13731
> URL: https://issues.apache.org/jira/browse/FLINK-13731
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: zzsmdfj
>Priority: Major
>
> for now, sql: 
> {code:java}
> // code placeholder
> SELECT  COUNT(*) GROUP BY TUMBLE(pt, interval '1' DAY, time '08:00:00')
> {code}
> is not supported in Flink SQL. When the rowtime is processing time, the 
> window is assigned by UTC time, which is not the correct day window when I am 
> in a specific time zone.
>  





[jira] [Commented] (FLINK-13598) frocksdb doesn't have arm release

2019-08-18 Thread wangxiyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910150#comment-16910150
 ] 

wangxiyuan commented on FLINK-13598:


I hit a test error when trying *make check_some*: 
[https://logs.openlabtesting.org/logs/1/1/c12abfa2a2835a5e58516dd9c2b2251bab5afc66/check/frocksdb-build-and-test-arm64/3e27afb/job-output.txt.gz]

 

[ RUN ] FormatLatest/DBBloomFilterTestWithParam.BloomFilter/0
1 present => 10070 reads
1 missing => 182 reads
1 present => 10070 reads
1 missing => 184 reads
1 present => 10070 reads
1 missing => 182 reads
1 present => 10070 reads
1 missing => 182 reads
1 present => 10070 reads
1 missing => 182 reads
[ OK ] FormatLatest/DBBloomFilterTestWithParam.BloomFilter/0 (7528 ms)
[ RUN ] FormatLatest/DBBloomFilterTestWithParam.BloomFilter/1
1 present => 39977 reads
1 missing => 20193 reads
1 present => 40076 reads
1 missing => 20306 reads
db/db_bloom_filter_test.cc:458: Failure
Expected: (reads) <= (2 * N + 3 * N / 100), actual: 20306 vs 20300
terminate called after throwing an instance of 
'testing::internal::GoogleTestFailureException'
 what(): db/db_bloom_filter_test.cc:458: Failure
Expected: (reads) <= (2 * N + 3 * N / 100), actual: 20306 vs 20300
Received signal 6 (Aborted)
#0 /lib/aarch64-linux-gnu/libc.so.6(gsignal+0x38) [0xb9917528] ?? ??:0

 

Can anybody give me an idea about this error?

 

Thanks

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0, 2.0.0
>Reporter: wangxiyuan
>Priority: Major
>
> Flink now uses frocksdb, which is forked from rocksdb, for the module 
> *flink-statebackend-rocksdb*. It doesn't have an ARM release.
> Now rocksdb supports ARM as of 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar]
> Can frocksdb release an ARM package as well?
> Or, as far as I know, Flink didn't use rocksdb directly because there were 
> some bugs in rocksdb in the past. Have those bugs been fixed in rocksdb 
> already? Can Flink use rocksdb directly again now?





[GitHub] [flink] flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521552936
 
 
   ## CI report:
   
   * c9d99f2866f281298f4217e9ce7543732bece2f8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123334919)
   * 671aa2687e3758d16646c6fbf58e4cc486328a38 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123456040)
   * 5c25642609614012a78142672e4e11f0b028e2a8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123488890)
   * 52289430cf4e1891b285a43d2625e908b3f2cfdf : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123659343)
   




[GitHub] [flink] flinkbot edited a comment on issue #8952: [FLINK-10868][flink-yarn] Add failure rater for resource manager

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8952: [FLINK-10868][flink-yarn] Add failure 
rater for resource manager
URL: https://github.com/apache/flink/pull/8952#issuecomment-513724324
 
 
   ## CI report:
   
   * d5fa0c8c2c46bafaf2e62a02743378a1e5399b35 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119985824)
   * cb899d83a30d8b34a4fb8ae9048bf34aeffa37f7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123621833)
   * f68194461a93195faf8b90ad87e7310ef61a6460 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123658965)
   




[jira] [Resolved] (FLINK-13742) Fix code generation when aggregation contains both distinct aggregate with and without filter

2019-08-18 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13742.
-
Resolution: Fixed

master: 94fa4ceade57172362e2d35e5aac8383f8f40a40
1.9: fc42fdd9e369ff375bf189238e0edeaf1c683901

> Fix code generation when aggregation contains both distinct aggregate with 
> and without filter
> -
>
> Key: FLINK-13742
> URL: https://issues.apache.org/jira/browse/FLINK-13742
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following test will fail when the aggregation contains {{COUNT(DISTINCT 
> c)}} and {{COUNT(DISTINCT c) filter ...}}.
> {code:java}
> @Test
>   def testDistinctWithMultiFilter(): Unit = {
> val sqlQuery =
>   "SELECT b, " +
> "  SUM(DISTINCT (a * 3)), " +
> "  COUNT(DISTINCT SUBSTRING(c FROM 1 FOR 2))," +
> "  COUNT(DISTINCT c)," +
> "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 0)," +
> "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 1) " +
> "FROM MyTable " +
> "GROUP BY b"
> val t = 
> failingDataSource(StreamTestData.get3TupleData).toTable(tEnv).as('a, 'b, 'c)
> tEnv.registerTable("MyTable", t)
> val result = tEnv.sqlQuery(sqlQuery).toRetractStream[Row]
> val sink = new TestingRetractSink
> result.addSink(sink)
> env.execute()
> val expected = List(
>   "1,3,1,1,0,1",
>   "2,15,1,2,1,0",
>   "3,45,3,3,1,1",
>   "4,102,1,4,1,2",
>   "5,195,1,5,2,1",
>   "6,333,1,6,2,2")
> assertEquals(expected.sorted, sink.getRetractResults.sorted)
>   }
> {code}





[GitHub] [flink] asfgit closed pull request #9459: [FLINK-13742][table-planner-blink] Fix code generation when aggregation contains both distinct aggregate with and without filter.

2019-08-18 Thread GitBox
asfgit closed pull request #9459: [FLINK-13742][table-planner-blink] Fix code 
generation when aggregation contains both distinct aggregate with and without 
filter.
URL: https://github.com/apache/flink/pull/9459
 
 
   




[jira] [Commented] (FLINK-13758) failed to submit JobGraph when registered hdfs file in DistributedCache

2019-08-18 Thread Yang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910139#comment-16910139
 ] 

Yang Wang commented on FLINK-13758:
---

Hi [~luoguohao]

Do you mean registering a cached file located on HDFS, like the code below?
{code:java}
env.registerCachedFile("hdfs://myhdfs/path/of/file", "test_data", false){code}
I think it should work.
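For reference, a hedged usage sketch of how the registered file could then be 
read inside a task, using the "test_data" name from the snippet above and the 
standard DistributedCache API:
{code:java}
import java.io.File;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class UsesCachedFile extends RichMapFunction<String, String> {
    private transient File cached;

    @Override
    public void open(Configuration parameters) {
        // "test_data" is the name passed to env.registerCachedFile(...) above.
        cached = getRuntimeContext().getDistributedCache().getFile("test_data");
    }

    @Override
    public String map(String value) {
        return value + " (cached file at " + cached.getAbsolutePath() + ")";
    }
}
{code}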

> failed to submit JobGraph when registered hdfs file in DistributedCache 
> 
>
> Key: FLINK-13758
> URL: https://issues.apache.org/jira/browse/FLINK-13758
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1
>Reporter: luoguohao
>Priority: Major
>
> When using HDFS files for the DistributedCache, submitting the JobGraph 
> fails; we can see exception stack traces in the log file after a while. But 
> if the DistributedCache file is a local file, everything goes fine.





[GitHub] [flink] wuchong commented on issue #9459: [FLINK-13742][table-planner-blink] Fix code generation when aggregation contains both distinct aggregate with and without filter.

2019-08-18 Thread GitBox
wuchong commented on issue #9459: [FLINK-13742][table-planner-blink] Fix code 
generation when aggregation contains both distinct aggregate with and without 
filter.
URL: https://github.com/apache/flink/pull/9459#issuecomment-522383435
 
 
   Travis passed in my own travis: 
https://travis-ci.org/wuchong/flink/builds/573365202




[GitHub] [flink] flinkbot edited a comment on issue #9478: [FLINK-13766][task] Refactor the implementation of StreamInputProcessor based on StreamTaskInput#emitNext

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9478: [FLINK-13766][task] Refactor the 
implementation of StreamInputProcessor based on StreamTaskInput#emitNext
URL: https://github.com/apache/flink/pull/9478#issuecomment-522374042
 
 
   ## CI report:
   
   * 1530600eaf36324966f343c277437e48c2416dc2 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123653758)
   




[GitHub] [flink] flinkbot commented on issue #9478: [FLINK-13766][task] Refactor the implementation of StreamInputProcessor based on StreamTaskInput#emitNext

2019-08-18 Thread GitBox
flinkbot commented on issue #9478: [FLINK-13766][task] Refactor the 
implementation of StreamInputProcessor based on StreamTaskInput#emitNext
URL: https://github.com/apache/flink/pull/9478#issuecomment-522374042
 
 
   ## CI report:
   
   * 1530600eaf36324966f343c277437e48c2416dc2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123653758)
   




[jira] [Created] (FLINK-13767) Migrate isFinished method from AvailabilityListener to AsyncDataInput

2019-08-18 Thread zhijiang (JIRA)
zhijiang created FLINK-13767:


 Summary: Migrate isFinished method from AvailabilityListener to 
AsyncDataInput
 Key: FLINK-13767
 URL: https://issues.apache.org/jira/browse/FLINK-13767
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Network, Runtime / Task
Reporter: zhijiang
Assignee: zhijiang


AvailabilityListener is used by both AsyncDataInput and StreamTaskInput. We 
already introduced InputStatus for StreamTaskInput#emitNext, and 
InputStatus#END_OF_INPUT has the same semantics as 
AvailabilityListener#isFinished.

But for AsyncDataInput, which is mainly used by the InputGate layer, the 
isFinished() method is still needed at the moment. So we migrate this method 
from AvailabilityListener to AsyncDataInput, and refactor the 
StreamInputProcessor implementations to use InputStatus to determine the 
finished state.
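
To make the intended end state concrete, a rough, hypothetical sketch of the 
interfaces after the migration (signatures simplified; the actual Flink types 
differ):
{code:java}
import java.util.concurrent.CompletableFuture;

// After the migration: availability only, no finished flag.
interface AvailabilityListener {
    CompletableFuture<?> isAvailable();
}

// isFinished() moves here, since the InputGate layer still needs it.
interface AsyncDataInput<T> extends AvailabilityListener {
    boolean isFinished();
}

// Task inputs signal the end of input through the returned status instead.
enum InputStatus { MORE_AVAILABLE, NOTHING_AVAILABLE, END_OF_INPUT }

interface StreamTaskInput extends AvailabilityListener {
    InputStatus emitNext(Object output) throws Exception;
}
{code}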





[GitHub] [flink] flinkbot commented on issue #9478: [FLINK-13766][task] Refactor the implementation of StreamInputProcessor based on StreamTaskInput#emitNext

2019-08-18 Thread GitBox
flinkbot commented on issue #9478: [FLINK-13766][task] Refactor the 
implementation of StreamInputProcessor based on StreamTaskInput#emitNext
URL: https://github.com/apache/flink/pull/9478#issuecomment-522373234
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1530600eaf36324966f343c277437e48c2416dc2 (Mon Aug 19 
00:31:49 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] zhijiangW opened a new pull request #9478: [FLINK-13766][task] Refactor the implementation of StreamInputProcessor based on StreamTaskInput#emitNext

2019-08-18 Thread GitBox
zhijiangW opened a new pull request #9478: [FLINK-13766][task] Refactor the 
implementation of StreamInputProcessor based on StreamTaskInput#emitNext
URL: https://github.com/apache/flink/pull/9478
 
 
   ## What is the purpose of the change
   
   The current processing in the task input processors is based on `pollNext`. 
In order to unify the processing with the new source operator, we introduce 
`StreamTaskInput#emitNext(Output)` to replace the current pollNext, and adjust 
the existing implementations of 
`StreamOneInputProcessor`/`StreamTwoInputSelectableProcessor` to this new 
emit-based approach.
   
   This allows all task inputs, whether from the network or from sources, to be 
handled in a unified way on the runtime side.
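   
   As a hedged illustration of the emit-based approach, here is a simplified sketch with placeholder types (not the actual Flink classes):
   
   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.function.Consumer;
   
   class EmitBasedProcessorSketch {
       enum InputStatus { MORE_AVAILABLE, NOTHING_AVAILABLE, END_OF_INPUT }
   
       interface TaskInput {
           InputStatus emitNext(Consumer<String> output) throws Exception;
           CompletableFuture<?> getAvailableFuture();
       }
   
       /** Returns false once the input is exhausted. */
       boolean processInput(TaskInput input, Consumer<String> output) throws Exception {
           InputStatus status = input.emitNext(output); // the input pushes one element itself
           if (status == InputStatus.END_OF_INPUT) {
               return false;                            // replaces a separate isFinished() check
           }
           if (status == InputStatus.NOTHING_AVAILABLE) {
               input.getAvailableFuture().join();       // wait until more data is available
           }
           return true;
       }
   }
   ```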
   
   ## Brief change log
   
 - *Refactor the constructor of `StreamOneInputProcessor`*
 - *Refactor the constructor of `StreamTwoInputSelectableProcessor`*
 - *Introduce `InputStatus` and `StreamTaskInput#emitNext(Output)`*
 - *Refactor the implementation of `StreamOneInputProcessor`*
 - *Refactor the implementation of `StreamTwoInputSelectableProcessor`*
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)




[jira] [Updated] (FLINK-13766) Refactor the implementation of StreamInputProcessor based on StreamTaskInput#emitNext

2019-08-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13766:
---
Labels: pull-request-available  (was: )

> Refactor the implementation of StreamInputProcessor based on 
> StreamTaskInput#emitNext
> -
>
> Key: FLINK-13766
> URL: https://issues.apache.org/jira/browse/FLINK-13766
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
>
> The current processing in the task input processors is based on pollNext. In 
> order to unify the processing with the new source operator, we introduce 
> StreamTaskInput#emitNext(Output) to replace the current pollNext, and adjust 
> the existing implementations of 
> StreamOneInputProcessor/StreamTwoInputSelectableProcessor to this new 
> emit-based approach.
> This allows all task inputs, whether from the network or from sources, to be 
> handled in a unified way on the runtime side.





[GitHub] [flink] flinkbot edited a comment on issue #9477: [FLINK-13765][task] Introduce the InputSelectionHandler for selecting next input in StreamTwoInputSelectableProcessor

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9477: [FLINK-13765][task] Introduce the 
InputSelectionHandler for selecting next input in 
StreamTwoInputSelectableProcessor
URL: https://github.com/apache/flink/pull/9477#issuecomment-522360024
 
 
   ## CI report:
   
   * 71e7fae9a212e19c83e1fe6d656de49a39334aa2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123648363)
   




[jira] [Created] (FLINK-13766) Refactor the implementation of StreamInputProcessor based on StreamTaskInput#emitNext

2019-08-18 Thread zhijiang (JIRA)
zhijiang created FLINK-13766:


 Summary: Refactor the implementation of StreamInputProcessor based 
on StreamTaskInput#emitNext
 Key: FLINK-13766
 URL: https://issues.apache.org/jira/browse/FLINK-13766
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: zhijiang
Assignee: zhijiang


The current processing in the task input processors is based on pollNext. In 
order to unify the processing with the new source operator, we introduce 
StreamTaskInput#emitNext(Output) to replace the current pollNext, and adjust 
the existing implementations of 
StreamOneInputProcessor/StreamTwoInputSelectableProcessor to this new 
emit-based approach.

This allows all task inputs, whether from the network or from sources, to be 
handled in a unified way on the runtime side.





[GitHub] [flink] flinkbot edited a comment on issue #9476: [FLINK-13764][task, metrics] Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9476: [FLINK-13764][task, metrics] Pass the 
counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9476#issuecomment-522355867
 
 
   ## CI report:
   
   * df58cd55f5ca284e412ebbfc35c336e4f012974c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123646651)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9477: [FLINK-13765][task] Introduce the InputSelectionHandler for selecting next input in StreamTwoInputSelectableProcessor

2019-08-18 Thread GitBox
flinkbot commented on issue #9477: [FLINK-13765][task] Introduce the 
InputSelectionHandler for selecting next input in 
StreamTwoInputSelectableProcessor
URL: https://github.com/apache/flink/pull/9477#issuecomment-522360024
 
 
   ## CI report:
   
   * 71e7fae9a212e19c83e1fe6d656de49a39334aa2 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123648363)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9477: [FLINK-13765][task] Introduce the InputSelectionHandler for selecting next input in StreamTwoInputSelectableProcessor

2019-08-18 Thread GitBox
flinkbot commented on issue #9477: [FLINK-13765][task] Introduce the 
InputSelectionHandler for selecting next input in 
StreamTwoInputSelectableProcessor
URL: https://github.com/apache/flink/pull/9477#issuecomment-522359714
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 71e7fae9a212e19c83e1fe6d656de49a39334aa2 (Sun Aug 18 
22:02:14 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13765) Introduce the InputSelectionHandler for selecting next input in StreamTwoInputSelectableProcessor

2019-08-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13765:
---
Labels: pull-request-available  (was: )

> Introduce the InputSelectionHandler for selecting next input in 
> StreamTwoInputSelectableProcessor
> -
>
> Key: FLINK-13765
> URL: https://issues.apache.org/jira/browse/FLINK-13765
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
>
> In StreamTwoInputSelectableProcessor there are three fields 
> {InputSelectable, InputSelection, availableInputsMask} that are used together 
> to select the next available input index. This brings two problems:
>  * From a design perspective, these fields should be abstracted into a separate 
> component and passed into StreamTwoInputSelectableProcessor.
>  * inputSelector.nextSelection() is called while processing elements in 
> StreamTwoInputSelectableProcessor, so it is a blocker for the later integration 
> of task input/output for both 
> StreamOneInputProcessor/StreamTwoInputSelectableProcessor.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zhijiangW opened a new pull request #9477: [FLINK-13765][task] Introduce the InputSelectionHandler for selecting next input in StreamTwoInputSelectableProcessor

2019-08-18 Thread GitBox
zhijiangW opened a new pull request #9477: [FLINK-13765][task] Introduce the 
InputSelectionHandler for selecting next input in 
StreamTwoInputSelectableProcessor
URL: https://github.com/apache/flink/pull/9477
 
 
   ## What is the purpose of the change
   
   In `StreamTwoInputSelectableProcessor` there are three fields 
({`InputSelectable`, `InputSelection`, `availableInputsMask`}) that are used 
together to select the next available input index. This brings two problems:
   
   From a design perspective, these fields should be abstracted into a separate 
component and passed into `StreamTwoInputSelectableProcessor`.
   
   `inputSelector.nextSelection()` is called while processing elements in 
`StreamTwoInputSelectableProcessor`, so it is a blocker for the later integration 
of task input/output for both `StreamOneInputProcessor` and 
`StreamTwoInputSelectableProcessor`.
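   
   To make the intent concrete, here is a rough, self-contained sketch of such a 
handler. The types and the bitmask-based selection logic are simplified stand-ins 
for illustration, not the code in this PR:
   
   ```java
// Illustrative sketch: one component owns the "which input to read next" state,
// so the two-input processor no longer juggles the three fields itself.
public class InputSelectionHandler {

    /** Simplified stand-in for the operator-facing selection strategy. */
    public interface InputSelectable {
        long nextSelection(); // bitmask of the inputs the operator wants to read
    }

    private final InputSelectable selector;
    private long selectedInputsMask;   // inputs currently selected by the operator
    private long availableInputsMask;  // inputs that currently have data

    public InputSelectionHandler(InputSelectable selector, int numInputs) {
        this.selector = selector;
        this.availableInputsMask = (1L << numInputs) - 1;
        this.selectedInputsMask = selector.nextSelection();
    }

    /** Re-queries the selection strategy, e.g. after an element was processed. */
    public void nextSelection() {
        this.selectedInputsMask = selector.nextSelection();
    }

    /** Updates the availability of a single input. */
    public void setAvailable(int inputIndex, boolean available) {
        if (available) {
            availableInputsMask |= (1L << inputIndex);
        } else {
            availableInputsMask &= ~(1L << inputIndex);
        }
    }

    /** Returns the lowest input index that is both selected and available, or -1. */
    public int selectNextInputIndex() {
        long candidates = selectedInputsMask & availableInputsMask;
        return candidates == 0 ? -1 : Long.numberOfTrailingZeros(candidates);
    }
}
   ```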
   
   ## Brief change log
   
 - *Introduce the `InputSelectionHandler` for handling the logic of next 
available input index*
 - *Refactor the related process in `StreamTwoInputSelectableProcessor` 
based on `InputSelectionHandler`*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13765) Introduce the InputSelectionHandler for selecting next input in StreamTwoInputSelectableProcessor

2019-08-18 Thread zhijiang (JIRA)
zhijiang created FLINK-13765:


 Summary: Introduce the InputSelectionHandler for selecting next 
input in StreamTwoInputSelectableProcessor
 Key: FLINK-13765
 URL: https://issues.apache.org/jira/browse/FLINK-13765
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: zhijiang
Assignee: zhijiang


In StreamTwoInputSelectableProcessor there are three fields {InputSelectable, 
InputSelection, availableInputsMask} that are used together to select the next 
available input index. This brings two problems:
 * From a design perspective, these fields should be abstracted into a separate 
component and passed into StreamTwoInputSelectableProcessor.
 * inputSelector.nextSelection() is called while processing elements in 
StreamTwoInputSelectableProcessor, so it is a blocker for the later integration of 
task input/output for both 
StreamOneInputProcessor/StreamTwoInputSelectableProcessor.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9471: [FLINK-13754][task] Decouple OperatorChain from StreamStatusMaintainer

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9471: [FLINK-13754][task] Decouple 
OperatorChain from StreamStatusMaintainer
URL: https://github.com/apache/flink/pull/9471#issuecomment-522269622
 
 
   ## CI report:
   
   * 46356e9f2ac97632021b3450f2585ea8b6120175 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123609454)
   * 330c8be5df79465a8804b7059c104984c6ac43ad : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123644155)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9476: [FLINK-13764][task, metrics] Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
flinkbot commented on issue #9476: [FLINK-13764][task, metrics] Pass the 
counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9476#issuecomment-522355867
 
 
   ## CI report:
   
   * df58cd55f5ca284e412ebbfc35c336e4f012974c : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123646651)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9476: [FLINK-13764][task, metrics] Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
flinkbot commented on issue #9476: [FLINK-13764][task, metrics] Pass the 
counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9476#issuecomment-522355405
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 225090c5e72df48c731123405609536c0d40c1b7 (Sun Aug 18 
21:06:38 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13764) Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13764:
---
Labels: pull-request-available  (was: )

> Pass the counter of numRecordsIn into the constructors of 
> StreamOne/TwoInputProcessor
> -
>
> Key: FLINK-13764
> URL: https://issues.apache.org/jira/browse/FLINK-13764
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics, Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
>
> Currently the numRecordsIn counter is set up while the processor is processing 
> input. In order to integrate the processing logic based on 
> StreamTaskInput#emitNext(Output) later, we need to pass the counter into the 
> output functions.
> This refactoring is therefore a precondition for the following work, and it 
> brings additional benefits. One is that we can make the counter a final field 
> in StreamInputProcessor. Another is that we can reuse the counter setup logic 
> for both StreamOne/TwoInputProcessors.
> There should be no side effects from setting up the counter a bit earlier than 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zhijiangW opened a new pull request #9476: [FLINK-13764][task, metrics] Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
zhijiangW opened a new pull request #9476: [FLINK-13764][task, metrics] Pass 
the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9476
 
 
   ## What is the purpose of the change
   
   Currently the `numRecordsIn` counter is set up while the processor is 
processing input. In order to integrate the processing logic based on 
`StreamTaskInput#emitNext(Output)` later, we need to pass the counter into the 
output functions. This refactoring is therefore a precondition for the following 
work, and it brings additional benefits. 
   
   One is that we can make the counter a final field in 
`StreamInputProcessor`.
   Another is that we can reuse the counter setup logic for both 
`StreamOne/TwoInputProcessors`. 
   
   There should be no side effects from setting up the counter a bit earlier 
than before.
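   
   A minimal sketch of the constructor-injection idea, using a simplified 
stand-in for the metrics Counter; it only shows the shape of the change, not the 
actual Flink classes:
   
   ```java
// Illustrative sketch: the task creates the counter once and hands it to the
// processor, so the processor can keep it as a final field.
public class OneInputProcessorSketch<IN> {

    /** Simplified stand-in for org.apache.flink.metrics.Counter. */
    public interface Counter {
        void inc();
    }

    private final Counter numRecordsIn; // final: injected instead of looked up lazily

    public OneInputProcessorSketch(Counter numRecordsIn) {
        this.numRecordsIn = numRecordsIn;
    }

    public void processRecord(IN record) {
        numRecordsIn.inc(); // counting happens on one shared code path
        // ... forward the record to the operator chain ...
    }
}
   ```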
   
   ## Brief change log
   
 - *Introduce a method in `StreamTask` that sets up the `numRecordsIn` counter*
 - *Pass the counter into the constructors of 
`StreamOne/TwoInputProcessors`*
 - *Remove the counter initialization logic from 
`StreamOne/TwoInputProcessors`*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13764) Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread zhijiang (JIRA)
zhijiang created FLINK-13764:


 Summary: Pass the counter of numRecordsIn into the constructors of 
StreamOne/TwoInputProcessor
 Key: FLINK-13764
 URL: https://issues.apache.org/jira/browse/FLINK-13764
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: zhijiang
Assignee: zhijiang


Currently the numRecordsIn counter is set up while the processor is processing 
input. In order to integrate the processing logic based on 
StreamTaskInput#emitNext(Output) later, we need to pass the counter into the 
output functions.

This refactoring is therefore a precondition for the following work, and it brings 
additional benefits. One is that we can make the counter a final field in 
StreamInputProcessor. Another is that we can reuse the counter setup logic 
for both StreamOne/TwoInputProcessors.

There should be no side effects from setting up the counter a bit earlier than 
before.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13764) Pass the counter of numRecordsIn into the constructors of StreamOne/TwoInputProcessor

2019-08-18 Thread zhijiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-13764:
-
Component/s: Runtime / Metrics

> Pass the counter of numRecordsIn into the constructors of 
> StreamOne/TwoInputProcessor
> -
>
> Key: FLINK-13764
> URL: https://issues.apache.org/jira/browse/FLINK-13764
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics, Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>
> Currently the numRecordsIn counter is set up while the processor is processing 
> input. In order to integrate the processing logic based on 
> StreamTaskInput#emitNext(Output) later, we need to pass the counter into the 
> output functions.
> This refactoring is therefore a precondition for the following work, and it 
> brings additional benefits. One is that we can make the counter a final field 
> in StreamInputProcessor. Another is that we can reuse the counter setup logic 
> for both StreamOne/TwoInputProcessors.
> There should be no side effects from setting up the counter a bit earlier than 
> before.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] walterddr edited a comment on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-18 Thread GitBox
walterddr edited a comment on issue #9336: 
[FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-522352406
 
 
   Thanks for the update @wzhero1 . I just ran [some E2E 
test](https://github.com/walterddr/flink/commit/70c48d09d2d3c446b7146eaf98622c4170d41f14)
 in `flink-yarn-test` with priority settings in the capacity scheduler 
[here](https://travis-ci.com/walterddr/flink/jobs/226132781). It seems to take 
effect correctly since Travis uses the default `hadoop-2.8.3`. 
   
   Obviously this code cannot be merged directly since it doesn't work with 
YARN 2.4.1, but it gives an idea of how I envision this PR being used. 
Please correct me if you have a different opinion in mind. 
   
   I will do another pass once the tests are updated. Thanks -Rong


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding processing of downstream messages in AsyncWaitOperator's wait loops

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding 
processing of downstream messages in AsyncWaitOperator's wait loops
URL: https://github.com/apache/flink/pull/9383#issuecomment-519130955
 
 
   ## CI report:
   
   * 5d8448c4813f5b362f98f898998f1278f062d807 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122292142)
   * 4d628935e8899d6019566bfc93b5c688bc1835ec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122941321)
   * d7c0bd5edc65110910d79ca7c7bf2139672f8c02 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123229382)
   * b7a19fe5d83ee271e7560f90fbf07a7703937273 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123235786)
   * 7650b3b19b05ed6a121566d7c19d5e7bc71489fa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123332630)
   * 2493723ebd2c307f47bbdfcf154a31ab97cda312 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123335647)
   * f3f0fe6d16ef3bba35d06a797196f94f372701ff : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123336279)
   * c6ee15104ee678c239367670773723920e34c26d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123371348)
   * d0e4fbf25a8ff9982171ed982868b51ad851aaf0 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123472764)
   * 741386a495a5657bb654dcd0168f2d42873445e7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123476701)
   * 05e27c097851c65bd9a405b4aae376e2ef6c2b50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123645059)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9475: [FLINK-13762][task] Implement a unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9475: [FLINK-13762][task] Implement a 
unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9475#issuecomment-522352954
 
 
   ## CI report:
   
   * 7048a7504f200a099b5e4e42844888f51ba604aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123645048)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9475: [FLINK-13762][task] Implement a unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
flinkbot commented on issue #9475: [FLINK-13762][task] Implement a unified 
StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9475#issuecomment-522352954
 
 
   ## CI report:
   
   * 7048a7504f200a099b5e4e42844888f51ba604aa : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123645048)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding processing of downstream messages in AsyncWaitOperator's wait loops

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9383: [FLINK-13248] [runtime] Adding 
processing of downstream messages in AsyncWaitOperator's wait loops
URL: https://github.com/apache/flink/pull/9383#issuecomment-519130955
 
 
   ## CI report:
   
   * 5d8448c4813f5b362f98f898998f1278f062d807 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122292142)
   * 4d628935e8899d6019566bfc93b5c688bc1835ec : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122941321)
   * d7c0bd5edc65110910d79ca7c7bf2139672f8c02 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123229382)
   * b7a19fe5d83ee271e7560f90fbf07a7703937273 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123235786)
   * 7650b3b19b05ed6a121566d7c19d5e7bc71489fa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123332630)
   * 2493723ebd2c307f47bbdfcf154a31ab97cda312 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123335647)
   * f3f0fe6d16ef3bba35d06a797196f94f372701ff : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123336279)
   * c6ee15104ee678c239367670773723920e34c26d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123371348)
   * d0e4fbf25a8ff9982171ed982868b51ad851aaf0 : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123472764)
   * 741386a495a5657bb654dcd0168f2d42873445e7 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123476701)
   * 05e27c097851c65bd9a405b4aae376e2ef6c2b50 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123645059)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9475: [FLINK-13762][task] Implement a unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
flinkbot commented on issue #9475: [FLINK-13762][task] Implement a unified 
StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9475#issuecomment-522352624
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7048a7504f200a099b5e4e42844888f51ba604aa (Sun Aug 18 
20:26:31 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13762) Implement a unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13762:
---
Labels: pull-request-available  (was: )

> Implement a unified StatusWatermarkOutputHandler for 
> StreamOne/TwoInputProcessor
> 
>
> Key: FLINK-13762
> URL: https://issues.apache.org/jira/browse/FLINK-13762
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>  Labels: pull-request-available
>
> Currently StreamOneInputProcessor and StreamTwoInputSelectableProcessor have 
> separate implementations of ForwardingValueOutputHandler. The implementation in 
> StreamTwoInputSelectableProcessor in particular is coupled with the internal 
> input index logic, which is a blocker for the following unification of 
> StreamTaskInput/Output.
> We could realize a unified ForwardingValueOutputHandler for both 
> StreamOneInput/TwoInputSelectableProcessor that always consumes the StreamStatus 
> without considering which input it came from. We would then refactor the 
> implementation of StreamStatusMaintainer to judge the status of the different 
> inputs internally before actually emitting the StreamStatus.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13762) Integrate the implementation of ForwardingValveOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread zhijiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-13762:
-
Description: 
Currently StreamOneInputProcessor and StreamTwoInputSelectableProcessor have 
separate implementations of ForwardingValueOutputHandler. Especially for the 
implementation in  StreamTwoInputSelectableProcessor, it couples the internal 
input index logic which would be a blocker for the following unification of 
StreamTaskInput/Output.

We could realize a unified ForwardingValueOutputHandler for both 
StreamOneInput/ TwoInputSelectableProcessor, and it does not consider different 
inputs to always consume StreamStatus. Then we refactor the implementation of 
StreamStatusMaintainer for judging the status of different inputs internally 
before really emitting the StreamStatus.

  was:
Currently StreamOneInputProcessor and StreamTwoInputSelectableProcessor have 
separate implementations of ForwardingValveOutputHandler. Especially for the 
implementation in  StreamTwoInputSelectableProcessor, it couples the internal 
input index logic which would be a blocker for the following unification of 
StreamTaskInput/Output.

We could realize a unified ForwardingValveOutputHandler for both 
StreamOneInput/ TwoInputSelectableProcessor, and it does not consider different 
inputs to always consume StreamStatus. Then we refactor the implementation of 
StreamStatusMaintainer for judging the status of different inputs internally 
before really emitting the StreamStatus.


> Integrate the implementation of ForwardingValveOutputHandler for 
> StreamOne/TwoInputProcessor
> 
>
> Key: FLINK-13762
> URL: https://issues.apache.org/jira/browse/FLINK-13762
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>
> Currently StreamOneInputProcessor and StreamTwoInputSelectableProcessor have 
> separate implementations of ForwardingValueOutputHandler. The implementation in 
> StreamTwoInputSelectableProcessor in particular is coupled with the internal 
> input index logic, which is a blocker for the following unification of 
> StreamTaskInput/Output.
> We could realize a unified ForwardingValueOutputHandler for both 
> StreamOneInput/TwoInputSelectableProcessor that always consumes the StreamStatus 
> without considering which input it came from. We would then refactor the 
> implementation of StreamStatusMaintainer to judge the status of the different 
> inputs internally before actually emitting the StreamStatus.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13762) Implement a unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread zhijiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-13762:
-
Summary: Implement a unified StatusWatermarkOutputHandler for 
StreamOne/TwoInputProcessor  (was: Integrate the implementation of 
ForwardingValveOutputHandler for StreamOne/TwoInputProcessor)

> Implement a unified StatusWatermarkOutputHandler for 
> StreamOne/TwoInputProcessor
> 
>
> Key: FLINK-13762
> URL: https://issues.apache.org/jira/browse/FLINK-13762
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
>
> Currently StreamOneInputProcessor and StreamTwoInputSelectableProcessor have 
> separate implementations of ForwardingValueOutputHandler. The implementation in 
> StreamTwoInputSelectableProcessor in particular is coupled with the internal 
> input index logic, which is a blocker for the following unification of 
> StreamTaskInput/Output.
> We could realize a unified ForwardingValueOutputHandler for both 
> StreamOneInput/TwoInputSelectableProcessor that always consumes the StreamStatus 
> without considering which input it came from. We would then refactor the 
> implementation of StreamStatusMaintainer to judge the status of the different 
> inputs internally before actually emitting the StreamStatus.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] zhijiangW opened a new pull request #9475: [FLINK-13762][task] Implement a unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread GitBox
zhijiangW opened a new pull request #9475: [FLINK-13762][task] Implement a 
unified StatusWatermarkOutputHandler for StreamOne/TwoInputProcessor
URL: https://github.com/apache/flink/pull/9475
 
 
   ## What is the purpose of the change
   
   Currently `StreamOneInputProcessor` and `StreamTwoInputSelectableProcessor` 
have separate implementations of `ForwardingValveOutputHandler`. The 
implementation in `StreamTwoInputSelectableProcessor` in particular is coupled 
with the internal input index logic, which is a blocker for the following 
unification of `StreamTaskInput/Output`.
   
   We could refactor the implementation of `StreamStatusMaintainer` to judge the 
status of the different inputs internally before actually emitting the 
`StreamStatus`. It is then reasonable to realize a unified 
`ForwardingValveOutputHandler` for both 
`StreamOneInput/TwoInputSelectableProcessor` that always propagates the 
`StreamStatus` without considering which input it came from. 
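   
   A small sketch of the "emit IDLE only when every input is idle" rule, with 
simplified types; it is meant to illustrate the idea, not the actual 
`StreamStatusMaintainer` code:
   
   ```java
// Illustrative sketch: track the status per input and only forward a combined
// status change downstream when it actually flips.
public class CombinedStreamStatusSketch {

    public enum StreamStatus { ACTIVE, IDLE }

    private final StreamStatus[] inputStatuses;
    private StreamStatus combinedStatus = StreamStatus.ACTIVE;

    public CombinedStreamStatusSketch(int numInputs) {
        inputStatuses = new StreamStatus[numInputs];
        java.util.Arrays.fill(inputStatuses, StreamStatus.ACTIVE);
    }

    /** Called by the unified output handler for whichever input changed its status. */
    public void toggleStreamStatus(int inputIndex, StreamStatus newStatus) {
        inputStatuses[inputIndex] = newStatus;

        StreamStatus newCombined = StreamStatus.IDLE;
        for (StreamStatus status : inputStatuses) {
            if (status == StreamStatus.ACTIVE) {
                newCombined = StreamStatus.ACTIVE; // one active input keeps the task active
                break;
            }
        }
        if (newCombined != combinedStatus) {
            combinedStatus = newCombined;
            emitDownstream(combinedStatus); // only real transitions are forwarded
        }
    }

    private void emitDownstream(StreamStatus status) {
        // ... forward the status to the downstream writers / operator chain ...
    }
}
   ```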
   
   ## Brief change log
   
 - *Adjust `StreamStatusMaintainerImpl` to emit the idle status only if all 
input statuses are idle.*
 - *Refactor the class name for `StatusWatermarkValue` and 
`ValueOutputHandler`*
 - *Implement a unified `ForwardingStatusWatermarkOutputHandler` for both 
one/two input processors*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-18 Thread GitBox
walterddr commented on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-522352406
 
 
   Thanks for the update @wzhero1 . I just ran [some E2E 
test](https://github.com/walterddr/flink/commit/70c48d09d2d3c446b7146eaf98622c4170d41f14)
 in `flink-yarn-test` with priority settings in the capacity scheduler 
[here](https://travis-ci.com/walterddr/flink/jobs/226132781). It seems to take 
effect correctly since Travis uses the default `hadoop-2.8.3`. I will do 
another pass once the tests are updated. Thanks -Rong


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9471: [FLINK-13754][task] Decouple OperatorChain from StreamStatusMaintainer

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9471: [FLINK-13754][task] Decouple 
OperatorChain from StreamStatusMaintainer
URL: https://github.com/apache/flink/pull/9471#issuecomment-522269622
 
 
   ## CI report:
   
   * 46356e9f2ac97632021b3450f2585ea8b6120175 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123609454)
   * 330c8be5df79465a8804b7059c104984c6ac43ad : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123644155)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tillrohrmann commented on issue #9472: [FLINK-13759][builds] Fix builds for master branch are failed during compile stage

2019-08-18 Thread GitBox
tillrohrmann commented on issue #9472: [FLINK-13759][builds] Fix builds for 
master branch are failed during compile stage
URL: https://github.com/apache/flink/pull/9472#issuecomment-522349471
 
 
   The reason why the build failed was indeed a cache inconsistency, not 
`check_shaded_artifacts_connector_elasticsearch` checking for a non-existing 
connector directory. @zentol could it be that 
`check_shaded_artifacts_connector_elasticsearch` does not properly fail the 
build if it cannot find a directory?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-13317) Merge NetUtils and ClientUtils

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-13317.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

Fixed via f118d7404b3e0a0904b4024abb578444fc98ef49

> Merge NetUtils and ClientUtils
> --
>
> Key: FLINK-13317
> URL: https://issues.apache.org/jira/browse/FLINK-13317
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Affects Versions: 1.8.0, 1.8.1
>Reporter: Charles Xu
>Assignee: Charles Xu
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Both NetUtils (flink-core) and ClientUtils (flink-clients) support validating 
> a "host:port" string. To reduce the duplicated code, it is better to move 
> ClientUtils.parseHostPortAddress() to the NetUtils class, update its 
> references, and then drop ClientUtils.
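
As a rough illustration of what such a shared utility does (not the actual Flink 
code; the URL-based trick below is just one common way to validate a "host:port" 
string):

{code:java}
import java.net.InetSocketAddress;
import java.net.MalformedURLException;
import java.net.URL;

public final class HostPortSketch {

    /** Parses "host:port" into an InetSocketAddress, rejecting malformed input. */
    public static InetSocketAddress parseHostPortAddress(String hostPort) {
        try {
            // Prepending a scheme lets java.net.URL do the host/port splitting and validation.
            URL url = new URL("http://" + hostPort);
            if (url.getHost() == null || url.getPort() == -1) {
                throw new IllegalArgumentException("Malformed address: " + hostPort);
            }
            return new InetSocketAddress(url.getHost(), url.getPort());
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException("Malformed address: " + hostPort, e);
        }
    }
}
{code}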



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] tillrohrmann closed pull request #9352: FLINK-13317 Merge NetUtils and ClientUtils

2019-08-18 Thread GitBox
tillrohrmann closed pull request #9352: FLINK-13317 Merge NetUtils and 
ClientUtils
URL: https://github.com/apache/flink/pull/9352
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-13763) Master build is broken because of wrong Maven version

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-13763.
-
   Resolution: Fixed
Fix Version/s: (was: 1.10.0)

Clearing the cache of the master branch seemed to have solved the problem. 
Consequently, it must have been a cache inconsistency.

> Master build is broken because of wrong Maven version
> -
>
> Key: FLINK-13763
> URL: https://issues.apache.org/jira/browse/FLINK-13763
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Blocker
>
> Currently, all master builds fail on Travis because Maven {{3.6.0}} is being 
> used instead of Maven {{3.2.5}} (FLINK-3158). Strangely, this only seems to 
> happen for the master branch.
> {code}
> /home/travis/maven_cache/apache-maven-3.2.5
> /home/travis/maven_cache/apache-maven-3.2.5/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/.rvm/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin
> -Dorg.slf4j.simpleLogger.showDateTime=true 
> -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS
> Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 
> 2018-10-24T18:41:47Z)
> Maven home: /usr/local/maven-3.6.0
> {code}
> https://api.travis-ci.org/v3/job/573427149/log.txt
> https://api.travis-ci.org/v3/job/573405515/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13763) Master build is broken because of wrong Maven version

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13763:
--
Description: 
Currently, all master builds fail on Travis because Maven {{3.6.0}} is being 
used instead of Maven {{3.2.5}} (FLINK-3158). Strangely, this only seems to 
happen for the master branch.

{code}
/home/travis/maven_cache/apache-maven-3.2.5
/home/travis/maven_cache/apache-maven-3.2.5/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/.rvm/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin
-Dorg.slf4j.simpleLogger.showDateTime=true 
-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS
Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 
2018-10-24T18:41:47Z)
Maven home: /usr/local/maven-3.6.0
{code}

https://api.travis-ci.org/v3/job/573427149/log.txt
https://api.travis-ci.org/v3/job/573405515/log.txt

  was:
Currently, all master builds fail on Travis because Maven {{3.6.0}} is being 
used instead of Maven {{3.2.5}} (FLINK-3158). Strangely, this only seems to 
happen for the master branch.

{code}
/home/travis/maven_cache/apache-maven-3.2.5
/home/travis/maven_cache/apache-maven-3.2.5/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/.rvm/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin
-Dorg.slf4j.simpleLogger.showDateTime=true 
-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS
Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 
2018-10-24T18:41:47Z)
Maven home: /usr/local/maven-3.6.0
{code}

https://api.travis-ci.org/v3/job/573429209/log.txt
https://api.travis-ci.org/v3/job/573427149/log.txt
https://api.travis-ci.org/v3/job/573405515/log.txt


> Master build is broken because of wrong Maven version
> -
>
> Key: FLINK-13763
> URL: https://issues.apache.org/jira/browse/FLINK-13763
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.10.0
>
>
> Currently, all master builds fail on Travis because Maven {{3.6.0}} is being 
> used instead of Maven {{3.2.5}} (FLINK-3158). Strangely, this only seems to 
> happen for the master branch.
> {code}
> /home/travis/maven_cache/apache-maven-3.2.5
> /home/travis/maven_cache/apache-maven-3.2.5/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/.rvm/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin
> -Dorg.slf4j.simpleLogger.showDateTime=true 
> -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS
> Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 
> 2018-10-24T18:41:47Z)
> Maven home: /usr/local/maven-3.6.0
> {code}
> https://api.travis-ci.org/v3/job/573427149/log.txt
> https://api.travis-ci.org/v3/job/573405515/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13763) Master build is broken because of wrong Maven version

2019-08-18 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910034#comment-16910034
 ] 

Till Rohrmann commented on FLINK-13763:
---

Maybe it is caused by a Travis cache inconsistency because it seems that we run 
{{setup_maven.sh}} to install Maven {{3.2.5}}.

> Master build is broken because of wrong Maven version
> -
>
> Key: FLINK-13763
> URL: https://issues.apache.org/jira/browse/FLINK-13763
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.10.0
>
>
> Currently, all master builds fail on Travis because Maven {{3.6.0}} is being 
> used instead of Maven {{3.2.5}} (FLINK-3158). Strangely, this only seems to 
> happen for the master branch.
> {code}
> /home/travis/maven_cache/apache-maven-3.2.5
> /home/travis/maven_cache/apache-maven-3.2.5/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/.rvm/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin
> -Dorg.slf4j.simpleLogger.showDateTime=true 
> -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS
> Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 
> 2018-10-24T18:41:47Z)
> Maven home: /usr/local/maven-3.6.0
> {code}
> https://api.travis-ci.org/v3/job/573429209/log.txt
> https://api.travis-ci.org/v3/job/573427149/log.txt
> https://api.travis-ci.org/v3/job/573405515/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13763) Master build is broken because of wrong Maven version

2019-08-18 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-13763:
-

 Summary: Master build is broken because of wrong Maven version
 Key: FLINK-13763
 URL: https://issues.apache.org/jira/browse/FLINK-13763
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.10.0
Reporter: Till Rohrmann
 Fix For: 1.10.0


Currently, all master builds fail on Travis because Maven {{3.6.0}} is being 
used instead of Maven {{3.2.5}} (FLINK-3158). Strangely, this only seems to 
happen for the master branch.

{code}
/home/travis/maven_cache/apache-maven-3.2.5
/home/travis/maven_cache/apache-maven-3.2.5/bin:/home/travis/.rvm/gems/ruby-2.5.3/bin:/home/travis/.rvm/gems/ruby-2.5.3@global/bin:/home/travis/.rvm/rubies/ruby-2.5.3/bin:/home/travis/.rvm/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/travis/bin:/home/travis/.local/bin:/usr/local/lib/jvm/openjdk11/bin:/opt/pyenv/shims:/home/travis/.phpenv/shims:/home/travis/perl5/perlbrew/bin:/home/travis/.nvm/versions/node/v8.12.0/bin:/home/travis/gopath/bin:/home/travis/.gimme/versions/go1.11.1.linux.amd64/bin:/usr/local/maven-3.6.0/bin:/usr/local/cmake-3.12.4/bin:/usr/local/clang-7.0.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/home/travis/.phpenv/bin:/opt/pyenv/bin:/home/travis/.yarn/bin
-Dorg.slf4j.simpleLogger.showDateTime=true 
-Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS
Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 
2018-10-24T18:41:47Z)
Maven home: /usr/local/maven-3.6.0
{code}

https://api.travis-ci.org/v3/job/573429209/log.txt
https://api.travis-ci.org/v3/job/573427149/log.txt
https://api.travis-ci.org/v3/job/573405515/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441376)
   * 9c14836f8639e98d58cf7bb32e38b938b3843994 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577044)
   * 76186776c5620598a19234245bbd05dfdfb1c62c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120113740)
   * 628ca7b316ad3968c90192a47a84dd01f26e2578 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122381349)
   * d204a725ff3c8a046cbd1b84e34d9e3ae8aafeac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123620485)
   * 143efadbdb6c4681569d5b412a175edfb1633b85 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123637809)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13731) flink sql support window with alignment

2019-08-18 Thread Jimmy Wong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910014#comment-16910014
 ] 

Jimmy Wong commented on FLINK-13731:


Hi [~zhaoshijie], I think you can convert the timestamp with a UDF, but I haven't 
tried it myself.
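
A minimal, untested sketch of the kind of UDF meant here: shift the 
processing-time timestamp by a fixed zone offset before windowing, so that a 
one-day TUMBLE aligns with the local day rather than the UTC day. The class and 
parameter names are made up for illustration, and whether the planner still 
treats the shifted column as a time attribute would need to be verified:

{code:java}
import java.sql.Timestamp;
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative only: shifts a timestamp by a fixed offset, e.g. +8h for UTC+8.
public class ShiftToLocalTime extends ScalarFunction {

    private final long offsetMillis;

    public ShiftToLocalTime(long offsetMillis) {
        this.offsetMillis = offsetMillis;
    }

    public Timestamp eval(Timestamp ts) {
        return ts == null ? null : new Timestamp(ts.getTime() + offsetMillis);
    }
}
{code}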

> flink sql support window with alignment
> ---
>
> Key: FLINK-13731
> URL: https://issues.apache.org/jira/browse/FLINK-13731
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: zzsmdfj
>Priority: Major
>
> For now, the SQL
> {code:java}
> SELECT  COUNT(*) GROUP BY TUMBLE(pt, interval '1' DAY, time '08:00:00')
> {code}
> is not supported in Flink SQL. When the rowtime is processing time, the window 
> is assigned by UTC time, so it is not the correct day window when running in a 
> specified time zone.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file 
replication config for yarn configuration
URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151
 
 
   ## CI report:
   
   * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119421914)
   * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441376)
   * 9c14836f8639e98d58cf7bb32e38b938b3843994 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119577044)
   * 76186776c5620598a19234245bbd05dfdfb1c62c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120113740)
   * 628ca7b316ad3968c90192a47a84dd01f26e2578 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122381349)
   * d204a725ff3c8a046cbd1b84e34d9e3ae8aafeac : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123620485)
   * 143efadbdb6c4681569d5b412a175edfb1633b85 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123637809)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java 
don't throw away exception info in logging 
URL: https://github.com/apache/flink/pull/9456#issuecomment-521825874
 
 
   ## CI report:
   
   * 1242679f7bd5ec3f7c1115006e978267abafc84b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123441772)
   * c2e57b175b07e9ee854598140676ab428c2b4b8f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123442281)
   * cd9568ae549b007727edaacb0607c7310b2fd520 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123541232)
   * d954b81d8404c629808eb0c03d8c55e2d849a4e4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123636613)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java 
don't throw away exception info in logging 
URL: https://github.com/apache/flink/pull/9456#issuecomment-521825874
 
 
   ## CI report:
   
   * 1242679f7bd5ec3f7c1115006e978267abafc84b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123441772)
   * c2e57b175b07e9ee854598140676ab428c2b4b8f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123442281)
   * cd9568ae549b007727edaacb0607c7310b2fd520 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123541232)
   * d954b81d8404c629808eb0c03d8c55e2d849a4e4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123636613)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wzhero1 commented on a change in pull request #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-18 Thread GitBox
wzhero1 commented on a change in pull request #9336: 
[FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#discussion_r314991911
 
 

 ##
 File path: docs/_includes/generated/yarn_config_configuration.html
 ##
 @@ -22,6 +22,11 @@
 "0"
 With this configuration option, users can specify a port, a 
range of ports or a list of ports for the Application Master (and JobManager) 
RPC port. By default we recommend using the default value (0) to let the 
operating system choose an appropriate port. In particular when multiple AMs 
are running on the same physical host, fixed port assignments prevent the AM 
from starting. For example when running Flink on YARN on an environment with a 
restrictive firewall, this option allows specifying a range of allowed 
ports.
 
+
 
 Review comment:
   @walterddr I also found that in 2.8.x the `ApplicationReport` generated by 
`yarnClient.getApplicationReport` has `set`/`getPriority` methods, but there 
are no such methods in 2.6.x.
   
   > Hi here,
   > 
   > did a bit of research: the API is already there in: 
[ApplicationSubmissionContext](https://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.html).
 However, the usage pattern is different: it is only introduced to Capacity 
scheduler in 
[2.8.x](https://hadoop.apache.org/docs/r2.8.5/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Setup_for_application_priority).
   > 
   > The priority setting was used in 
[ResourceRequest](http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html),
 which is a different usage pattern: after 
[2.6.x](https://hadoop.apache.org/docs/r2.6.5//hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html),
 YARN documentation indicates that `appContext.setPriority(...)` should be used 
instead of `rsrcRequest.setPriority(...)`
   > 
   > I am still not sure what the effect of setting `appContext.setPriority` 
is prior to 2.6.x.
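   A minimal sketch of the 2.6+ submission pattern discussed above (the surrounding setup is an assumption for illustration, not Flink's actual YARN descriptor code):
   
   ```java
   import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
   import org.apache.hadoop.yarn.api.records.Priority;
   import org.apache.hadoop.yarn.client.api.YarnClient;
   import org.apache.hadoop.yarn.client.api.YarnClientApplication;
   import org.apache.hadoop.yarn.conf.YarnConfiguration;
   
   public class PrioritySubmissionSketch {
       public static void main(String[] args) throws Exception {
           YarnClient yarnClient = YarnClient.createYarnClient();
           yarnClient.init(new YarnConfiguration());
           yarnClient.start();
   
           YarnClientApplication app = yarnClient.createApplication();
           ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
   
           // Set the priority on the submission context (the 2.6+ pattern),
           // not on the ResourceRequest.
           appContext.setPriority(Priority.newInstance(10));
   
           // ... set the AM container spec, queue, resources, etc., then:
           // yarnClient.submitApplication(appContext);
       }
   }
   ```
   
   Whether the priority actually takes effect still depends on the scheduler; per the links above, the Capacity Scheduler only honours application priority from 2.8.x on.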
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9352: FLINK-13317 Merge NetUtils and ClientUtils

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9352: FLINK-13317 Merge NetUtils and 
ClientUtils
URL: https://github.com/apache/flink/pull/9352#issuecomment-518010490
 
 
   ## CI report:
   
   * 2c2b8ec8ad260a021792fb7ac41e2c1c15da0722 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121876465)
   * f118d7404b3e0a0904b4024abb578444fc98ef49 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123633326)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13762) Integrate the implementation of ForwardingValveOutputHandler for StreamOne/TwoInputProcessor

2019-08-18 Thread zhijiang (JIRA)
zhijiang created FLINK-13762:


 Summary: Integrate the implementation of 
ForwardingValveOutputHandler for StreamOne/TwoInputProcessor
 Key: FLINK-13762
 URL: https://issues.apache.org/jira/browse/FLINK-13762
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: zhijiang
Assignee: zhijiang


Currently StreamOneInputProcessor and StreamTwoInputSelectableProcessor have 
separate implementations of ForwardingValveOutputHandler. In particular, the 
implementation in StreamTwoInputSelectableProcessor couples in the internal 
input-index logic, which would block the upcoming unification of 
StreamTaskInput/Output.

We could provide a single ForwardingValveOutputHandler for both 
StreamOneInputProcessor and StreamTwoInputSelectableProcessor that consumes 
StreamStatus without regard to which input it comes from. The implementation of 
StreamStatusMaintainer would then be refactored to judge the status of the 
different inputs internally before actually emitting the StreamStatus.
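A self-contained sketch of that idea (hypothetical class and method names, not Flink's internal API): the status is tracked per input and the combined status is only forwarded downstream when it actually changes, so the same handler can serve both one-input and two-input processors.

{code:java}
import java.util.Arrays;
import java.util.function.Consumer;

public class CombinedStreamStatusMaintainer {

    public enum StreamStatus { ACTIVE, IDLE }

    private final StreamStatus[] inputStatus;
    private final Consumer<StreamStatus> downstream;
    private StreamStatus lastEmitted = StreamStatus.ACTIVE;

    public CombinedStreamStatusMaintainer(int numInputs, Consumer<StreamStatus> downstream) {
        this.inputStatus = new StreamStatus[numInputs];
        Arrays.fill(this.inputStatus, StreamStatus.ACTIVE);
        this.downstream = downstream;
    }

    /** Called by the valve of the given input; emits only when the overall status changes. */
    public void onStatus(int inputIndex, StreamStatus status) {
        inputStatus[inputIndex] = status;
        StreamStatus combined = StreamStatus.IDLE;
        for (StreamStatus s : inputStatus) {
            if (s == StreamStatus.ACTIVE) {
                combined = StreamStatus.ACTIVE;
                break;
            }
        }
        if (combined != lastEmitted) {
            lastEmitted = combined;
            downstream.accept(combined);
        }
    }
}
{code}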



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wzhero1 commented on issue #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-18 Thread GitBox
wzhero1 commented on issue #9336: [FLINK-13548][Deployment/YARN]Support 
priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#issuecomment-522332605
 
 
   @walterddr Thanks for your detailed research. I will change the doc and add 
a test in `FlinkYarnSessionCliTest` to verify that the `setPriority` operation 
is valid. After that we can submit a version that provides this YARN config 
parameter first.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] izhangzhihao commented on issue #9474: [FLINK-13761][ScalaAPI]`SplitStream` should be deprecated because `SplitJavaStream` is deprecated

2019-08-18 Thread GitBox
izhangzhihao commented on issue #9474: [FLINK-13761][ScalaAPI]`SplitStream` 
should be deprecated because `SplitJavaStream` is deprecated
URL: https://github.com/apache/flink/pull/9474#issuecomment-522328934
 
 
   @flinkbot please recheck the pull request title.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13761) `SplitStream` should be deprecated because `SplitJavaStream` is deprecated

2019-08-18 Thread zhihao zhang (JIRA)
zhihao zhang created FLINK-13761:


 Summary: `SplitStream` should be deprecated because 
`SplitJavaStream` is deprecated
 Key: FLINK-13761
 URL: https://issues.apache.org/jira/browse/FLINK-13761
 Project: Flink
  Issue Type: Bug
  Components: API / Scala
Affects Versions: 1.8.1
Reporter: zhihao zhang


h1. `SplitStream` should be deprecated because `SplitJavaStream` is deprecated.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9474: `SplitStream` should be deprecated because `SplitJavaStream` is deprecated

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9474: `SplitStream` should be deprecated 
because `SplitJavaStream` is deprecated
URL: https://github.com/apache/flink/pull/9474#issuecomment-522322366
 
 
   ## CI report:
   
   * 15bc2a5626c87d571fdcf776315db1008617f5e2 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123631027)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13717) allow to set taskmanager.host and taskmanager.bind-host separately

2019-08-18 Thread Stephan Ewen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909983#comment-16909983
 ] 

Stephan Ewen commented on FLINK-13717:
--

I think having {{taskmanager.bind-host}} configurable makes sense. Was just 
asking if a default value of {{0.0.0.0}} would fit your use case.

> allow to set taskmanager.host and taskmanager.bind-host separately
> --
>
> Key: FLINK-13717
> URL: https://issues.apache.org/jira/browse/FLINK-13717
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Configuration, Runtime / Network
>Affects Versions: 1.8.1, 1.9.0
>Reporter: Robert Fiser
>Priority: Major
>
> We are trying to use Flink in a Docker container with a bridge network.
> Without specifying taskmanager.host, the taskmanager binds a host/address 
> that is not visible in the cluster. The behaviour is the same when 
> taskmanager.host is set to 0.0.0.0.
> When it is set to the external address or host name, the taskmanager cannot 
> bind the address because of the bridge network.
> So we need taskmanager.host, which is reported to the jobmanager, and 
> taskmanager.bind-host, which the taskmanager binds inside the container.
> It is similar to https://issues.apache.org/jira/browse/FLINK-2821, but the 
> problem is with taskmanagers.
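A hypothetical flink-conf.yaml for such a setup, assuming the proposed (not yet existing) key name taskmanager.bind-host and an illustrative external address:

{code}
# address reported to the JobManager, reachable from outside the container
taskmanager.host: 203.0.113.10

# address the TaskManager actually binds inside the container
taskmanager.bind-host: 0.0.0.0
{code}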



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9352: FLINK-13317 Merge NetUtils and ClientUtils

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9352: FLINK-13317 Merge NetUtils and 
ClientUtils
URL: https://github.com/apache/flink/pull/9352#issuecomment-518010490
 
 
   ## CI report:
   
   * 2c2b8ec8ad260a021792fb7ac41e2c1c15da0722 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121876465)
   * f118d7404b3e0a0904b4024abb578444fc98ef49 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123633326)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13752) TaskDeploymentDescriptor cannot be recycled by GC due to referenced by an anonymous function

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13752:
--
Priority: Critical  (was: Major)

> TaskDeploymentDescriptor cannot be recycled by GC due to referenced by an 
> anonymous function
> 
>
> Key: FLINK-13752
> URL: https://issues.apache.org/jira/browse/FLINK-13752
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Yun Gao
>Priority: Critical
>
> When comparing 1.8 and 1.9.0-rc2 on a test streaming job, we found that the 
> performance of 1.9.0-rc2 is much lower than that of 1.8. Comparing the two 
> versions showed that the TaskExecutor process on 1.9.0-rc2 performs far more 
> full GCs than on 1.8.
> A further analysis found that the difference comes from 
> _TaskExecutor#setupResultPartitionBookkeeping_: the anonymous function in 
> _taskTermimationWithResourceCleanFuture_ references the 
> _TaskDeploymentDescriptor_. Since this function is kept until the task 
> terminates, the _TaskDeploymentDescriptor_ is also kept referenced in the 
> closure and cannot be recycled by GC. In this job, the 
> _TaskDeploymentDescriptors_ of some tasks are as large as 10 MB while the 
> total heap is about 113 MB, so the retained _TaskDeploymentDescriptors_ have 
> a relatively large impact on GC and performance.
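A generic Java illustration of this retention pattern and of the fix direction (hypothetical names such as DeploymentDescriptor and cleanUpPartitionsFor; this is not the actual TaskExecutor code):

{code:java}
import java.util.concurrent.CompletableFuture;

public class ClosureCaptureSketch {

    /** Stand-in for a large descriptor such as TaskDeploymentDescriptor (~10 MB). */
    static class DeploymentDescriptor {
        final byte[] serializedPayload = new byte[10 * 1024 * 1024];
        final String taskName = "map -> sink";
    }

    public static void main(String[] args) {
        CompletableFuture<Void> taskTerminationFuture = new CompletableFuture<>();
        DeploymentDescriptor tdd = new DeploymentDescriptor();

        // Problematic: the lambda captures the whole descriptor, keeping all
        // ~10 MB reachable until the task terminates.
        taskTerminationFuture.thenRun(() -> cleanUpPartitionsFor(tdd.taskName));

        // Fix direction: extract only the fields that are actually needed, so
        // the descriptor itself can be garbage collected after deployment.
        String taskName = tdd.taskName;
        taskTerminationFuture.thenRun(() -> cleanUpPartitionsFor(taskName));
    }

    private static void cleanUpPartitionsFor(String taskName) {
        System.out.println("cleaning up result partitions of " + taskName);
    }
}
{code}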



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-4256) Fine-grained recovery

2019-08-18 Thread Stephan Ewen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909980#comment-16909980
 ] 

Stephan Ewen commented on FLINK-4256:
-

This is in fact working for streaming as well, not only for batch. It works for 
both on the granularity of "pipelined regions".

However, with blocking "batch" shuffles, a batch job decomposes into many small 
pipelined regions, which can be individually recovered. Streaming programs only 
decompose into multiple pipelined regions when they do not have an all-to-all 
shuffle ({{keyBy()}} or {{rebalance()}}).

Anything beyond that, like more fine grained recovery of streaming jobs is not 
in the scope here, because it would need a mechanism different from Flink's 
current checkpointing mechanism.

> Fine-grained recovery
> -
>
> Key: FLINK-4256
> URL: https://issues.apache.org/jira/browse/FLINK-4256
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.9.0
>
>
> When a task fails during execution, Flink currently resets the entire 
> execution graph and triggers complete re-execution from the last completed 
> checkpoint. This is more expensive than just re-executing the failed tasks.
> In many cases, more fine-grained recovery is possible.
> The full description and design is in the corresponding FLIP.
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures
> The detailed design for version 1 is 
> https://docs.google.com/document/d/1_PqPLA1TJgjlqz8fqnVE3YSisYBDdFsrRX_URgRSj74/edit#



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9473: [FLINK-13760][hive] Fix hardcode Scala version dependency in hive connector

2019-08-18 Thread GitBox
flinkbot edited a comment on issue #9473: [FLINK-13760][hive] Fix hardcode 
Scala version dependency in hive connector
URL: https://github.com/apache/flink/pull/9473#issuecomment-522320467
 
 
   ## CI report:
   
   * 712d9d5a83592bf5e86d4cbbc39f63763ef6eb7a : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123630147)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #9472: [FLINK-13759][builds] Fix builds for master branch are failed during compile stage

2019-08-18 Thread GitBox
wuchong commented on issue #9472: [FLINK-13759][builds] Fix builds for master 
branch are failed during compile stage
URL: https://github.com/apache/flink/pull/9472#issuecomment-522326554
 
 
   I have the same question. It should have been failing for a long time 
already. Could this be because of the Travis caching mechanism? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11630) TaskExecutor does not wait for Task termination when terminating itself

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-11630.
-
   Resolution: Fixed
Fix Version/s: 1.9.1
   1.10.0

Fixed via

1.10.0: cee8a38c7cb72a41c6d9ff5a128a279721225fe9
1.9.1: 65e6dbb5dc6d3fb021536363bc9da684cf1c306c

> TaskExecutor does not wait for Task termination when terminating itself
> ---
>
> Key: FLINK-11630
> URL: https://issues.apache.org/jira/browse/FLINK-11630
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: boshu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The {{TaskExecutor}} does not properly wait for the termination of {{Tasks}} 
> when terminating. In fact, it does not even trigger the cancellation of the 
> running {{Tasks}}. I think for better lifecycle management it is important 
> that the {{TaskExecutor}} triggers the termination of all running {{Tasks}} 
> and then waits until all {{Tasks}} have terminated before it terminates itself.
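A self-contained sketch of that lifecycle idea, using a hypothetical {{RunningTask}} interface rather than Flink's internal {{Task}} class: cancellation is triggered for every task, and shutdown completes only once all termination futures have completed.

{code:java}
import java.util.Collection;
import java.util.concurrent.CompletableFuture;

public class GracefulShutdownSketch {

    /** Hypothetical task handle; not Flink's actual Task API. */
    interface RunningTask {
        void cancel();
        CompletableFuture<Void> getTerminationFuture();
    }

    static CompletableFuture<Void> shutDownTasks(Collection<RunningTask> runningTasks) {
        CompletableFuture<?>[] terminations = runningTasks.stream()
            .map(task -> {
                task.cancel();                       // trigger cancellation first
                return task.getTerminationFuture();  // then track its termination
            })
            .toArray(CompletableFuture[]::new);

        // The executor should complete its own termination only after this future completes.
        return CompletableFuture.allOf(terminations);
    }
}
{code}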



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] tillrohrmann closed pull request #7757: [FLINK-11630] Triggers the termination of all running Tasks when shutting down TaskExecutor

2019-08-18 Thread GitBox
tillrohrmann closed pull request #7757: [FLINK-11630] Triggers the termination 
of all running Tasks when shutting down TaskExecutor
URL: https://github.com/apache/flink/pull/7757
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tillrohrmann closed pull request #9072: [FLINK-11630] Wait for the termination of all running Tasks when shutting down TaskExecutor

2019-08-18 Thread GitBox
tillrohrmann closed pull request #9072: [FLINK-11630] Wait for the termination 
of all running Tasks when shutting down TaskExecutor
URL: https://github.com/apache/flink/pull/9072
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13760) Fix hardcode Scala version dependency in hive connector

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13760:
--
Component/s: Build System

> Fix hardcode Scala version dependency in hive connector
> ---
>
> Key: FLINK-13760
> URL: https://issues.apache.org/jira/browse/FLINK-13760
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Connectors / Hive
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> FLINK-13688 introduced a {{flink-test-utils}} dependency in 
> {{flink-connector-hive}}. However, the Scala version in the artifactId is 
> hardcoded, which causes recent CRON jobs to fail.
> Here is an instance: https://api.travis-ci.org/v3/job/573092374/log.txt
> {code}
> 11:46:09.078 [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce 
> (enforce-versions) @ flink-connector-hive_2.12 ---
> 11:46:09.134 [WARNING] Rule 0: 
> org.apache.maven.plugins.enforcer.BannedDependencies failed with message:
> Found Banned Dependency: com.typesafe.akka:akka-slf4j_2.11:jar:2.5.21
> Found Banned Dependency: com.typesafe.akka:akka-actor_2.11:jar:2.5.21
> Found Banned Dependency: com.typesafe:ssl-config-core_2.11:jar:0.3.7
> Found Banned Dependency: 
> org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0
> Found Banned Dependency: com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21
> Found Banned Dependency: org.apache.flink:flink-clients_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: 
> org.apache.flink:flink-streaming-java_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: com.typesafe.akka:akka-stream_2.11:jar:2.5.21
> Found Banned Dependency: com.github.scopt:scopt_2.11:jar:3.5.0
> Found Banned Dependency: 
> org.apache.flink:flink-test-utils_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: org.apache.flink:flink-runtime_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: 
> org.apache.flink:flink-runtime_2.11:test-jar:tests:1.10-SNAPSHOT
> Found Banned Dependency: 
> org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.1.1
> Found Banned Dependency: com.twitter:chill_2.11:jar:0.7.6
> Found Banned Dependency: org.clapper:grizzled-slf4j_2.11:jar:1.3.2
> Found Banned Dependency: 
> org.apache.flink:flink-optimizer_2.11:jar:1.10-SNAPSHOT
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
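A hedged sketch of the usual fix for this kind of issue in the Flink POMs: reference the Scala suffix through the {{scala.binary.version}} property instead of hardcoding {{_2.11}} (the file path comment, version, and scope shown are illustrative).

{code:xml}
<!-- in flink-connectors/flink-connector-hive/pom.xml -->
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-test-utils_${scala.binary.version}</artifactId>
	<version>${project.version}</version>
	<scope>test</scope>
</dependency>
{code}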



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (FLINK-13760) Fix hardcode Scala version dependency in hive connector

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-13760.
---
Resolution: Fixed

Fixed via 

1.10.0: 64938e5317cca98054cfd944eb89f9e53f067ae8
1.9.1: c53fada078400ee812d6acd9d1e87ca7ee1c67a7

> Fix hardcode Scala version dependency in hive connector
> ---
>
> Key: FLINK-13760
> URL: https://issues.apache.org/jira/browse/FLINK-13760
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> FLINK-13688 introduced a {{flink-test-utils}} dependency in 
> {{flink-connector-hive}}. However, the Scala version in the artifactId is 
> hardcoded, which causes recent CRON jobs to fail.
> Here is an instance: https://api.travis-ci.org/v3/job/573092374/log.txt
> {code}
> 11:46:09.078 [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce 
> (enforce-versions) @ flink-connector-hive_2.12 ---
> 11:46:09.134 [WARNING] Rule 0: 
> org.apache.maven.plugins.enforcer.BannedDependencies failed with message:
> Found Banned Dependency: com.typesafe.akka:akka-slf4j_2.11:jar:2.5.21
> Found Banned Dependency: com.typesafe.akka:akka-actor_2.11:jar:2.5.21
> Found Banned Dependency: com.typesafe:ssl-config-core_2.11:jar:0.3.7
> Found Banned Dependency: 
> org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0
> Found Banned Dependency: com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21
> Found Banned Dependency: org.apache.flink:flink-clients_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: 
> org.apache.flink:flink-streaming-java_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: com.typesafe.akka:akka-stream_2.11:jar:2.5.21
> Found Banned Dependency: com.github.scopt:scopt_2.11:jar:3.5.0
> Found Banned Dependency: 
> org.apache.flink:flink-test-utils_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: org.apache.flink:flink-runtime_2.11:jar:1.10-SNAPSHOT
> Found Banned Dependency: 
> org.apache.flink:flink-runtime_2.11:test-jar:tests:1.10-SNAPSHOT
> Found Banned Dependency: 
> org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.1.1
> Found Banned Dependency: com.twitter:chill_2.11:jar:0.7.6
> Found Banned Dependency: org.clapper:grizzled-slf4j_2.11:jar:1.3.2
> Found Banned Dependency: 
> org.apache.flink:flink-optimizer_2.11:jar:1.10-SNAPSHOT
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13759) All builds for master branch are failed during compile stage

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-13759:
--
Component/s: Build System

> All builds for master branch are failed during compile stage
> 
>
> Key: FLINK-13759
> URL: https://issues.apache.org/jira/browse/FLINK-13759
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.10.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here is an instance: https://api.travis-ci.org/v3/job/572950228/log.txt
> There is an error in the log.
> {code}
> ==
> find: 
> ‘flink-connectors/flink-connector-elasticsearch/target/flink-connector-elasticsearch*.jar’:
>  No such file or directory
> ==
> Previous build failure detected, skipping cache setup.
> ==
> {code}
> The {{flink-connector-elasticsearch}} module does not exist, but recent 
> commits did not modify it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (FLINK-13759) All builds for master branch are failed during compile stage

2019-08-18 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-13759.
---
Resolution: Fixed

Fixed via 814190e9f2efc067a004bef6af86c2541e33aada

> All builds for master branch are failed during compile stage
> 
>
> Key: FLINK-13759
> URL: https://issues.apache.org/jira/browse/FLINK-13759
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here is an instance: https://api.travis-ci.org/v3/job/572950228/log.txt
> There is an error in the log.
> {code}
> ==
> find: 
> ‘flink-connectors/flink-connector-elasticsearch/target/flink-connector-elasticsearch*.jar’:
>  No such file or directory
> ==
> Previous build failure detected, skipping cache setup.
> ==
> {code}
> The {{flink-connector-elasticsearch}} module does not exist, but recent 
> commits did not modify it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] tillrohrmann closed pull request #9473: [FLINK-13760][hive] Fix hardcode Scala version dependency in hive connector

2019-08-18 Thread GitBox
tillrohrmann closed pull request #9473: [FLINK-13760][hive] Fix hardcode Scala 
version dependency in hive connector
URL: https://github.com/apache/flink/pull/9473
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

