[jira] [Resolved] (FLINK-8367) Port SubtaskCurrentAttemptDetailsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann resolved FLINK-8367. -- Resolution: Fixed Fixed via de30d16547d2a0392c2e311934830605de6ebd9e > Port SubtaskCurrentAttemptDetailsHandler to new REST endpoint > - > > Key: FLINK-8367 > URL: https://issues.apache.org/jira/browse/FLINK-8367 > Project: Flink > Issue Type: Sub-task > Components: REST > Reporter: Biao Liu > Assignee: Biao Liu > Priority: Major > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskCurrentAttemptDetailsHandler > to a new REST handler that is registered in the WebMonitorEndpoint. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-8432) Add openstack swift filesystem
[ https://issues.apache.org/jira/browse/FLINK-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jelmer Kuperus updated FLINK-8432: -- Description: At ebay classifieds we started running our infrastructure on top of OpenStack. The openstack project comes with its own amazon-s3-like filesystem, known as Swift. It's built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound. We would really like to be able to use it within flink without Hadoop dependencies, as a sink or for storing savepoints etc I've prepared a pull request that adds support for it. It wraps the hadoop support for swift in a way that is very similar to the way the s3 connector works. You can find out more about the underlying hadoop implementation at [https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html] Pull request : [https://github.com/apache/flink/pull/5296] was: At ebay classifieds we started running our infrastructure on top of OpenStack. The openstack project comes with its own amazon-s3-like filesystem, known as Swift. It's built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound. We would really like to be able to use it within flink, as a sink or for storing savepoints etc I've prepared a pull request that adds support for it. It wraps the hadoop support for swift in a way that is very similar to the way the s3 connector works. 
You can find out more about the underlying hadoop implementation at https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html Pull request : https://github.com/apache/flink/pull/5296 > Add openstack swift filesystem > -- > > Key: FLINK-8432 > URL: https://issues.apache.org/jira/browse/FLINK-8432 > Project: Flink > Issue Type: Improvement > Components: filesystem-connector > Affects Versions: 1.4.0 > Reporter: Jelmer Kuperus > Priority: Major > Labels: features > > At ebay classifieds we started running our infrastructure on top of OpenStack. > The openstack project comes with its own amazon-s3-like filesystem, known as > Swift. It's built for scale and optimized for durability, availability, and > concurrency across the entire data set. Swift is ideal for storing > unstructured data that can grow without bound. > We would really like to be able to use it within flink without Hadoop > dependencies, as a sink or for storing savepoints etc > I've prepared a pull request that adds support for it. It wraps the hadoop > support for swift in a way that is very similar to the way the s3 connector > works. > You can find out more about the underlying hadoop implementation at > [https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html] > Pull request : [https://github.com/apache/flink/pull/5296]
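Assuming the connector ends up exposing the usual Hadoop swift:// URI scheme (swift://container.provider/path, per the hadoop-openstack docs linked above), savepoints could then be pointed at Swift with a configuration entry like the following (the container and provider names are made up for illustration):

```yaml
# flink-conf.yaml -- illustrative values, assuming the swift:// scheme from hadoop-openstack
state.savepoints.dir: swift://savepoints.myprovider/flink/savepoints
```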
[jira] [Commented] (FLINK-8399) Use independent configurations for the different timeouts in slot manager
[ https://issues.apache.org/jira/browse/FLINK-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325919#comment-16325919 ] ASF GitHub Bot commented on FLINK-8399: --- Github user shuai-xu commented on the issue: https://github.com/apache/flink/pull/5271 @tillrohrmann thank you for reviewing, I have modified it. > Use independent configurations for the different timeouts in slot manager > - > > Key: FLINK-8399 > URL: https://issues.apache.org/jira/browse/FLINK-8399 > Project: Flink > Issue Type: Bug > Components: Cluster Management > Affects Versions: 1.5.0 > Reporter: shuai.xu > Assignee: shuai.xu > Priority: Major > Labels: flip-6 > > There are three parameters in the slot manager indicating the timeouts for a slot request to a task manager, for a slot request to be discarded, and for a task manager to be released. But currently they all come from the value of AkkaOptions.ASK_TIMEOUT; they should have independent configurations.
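The separation the issue asks for can be sketched in plain Java. Note that the class and field names below are illustrative, not Flink's actual configuration options:

```java
import java.time.Duration;

// Hypothetical sketch: each of the three slot-manager timeouts gets its own
// value instead of all three being derived from one shared ask-timeout.
public class SlotManagerTimeouts {
    final Duration taskManagerRequestTimeout; // waiting for a task manager to confirm a slot request
    final Duration slotRequestTimeout;        // waiting before a pending slot request is discarded
    final Duration taskManagerReleaseTimeout; // waiting before an idle task manager is released

    SlotManagerTimeouts(Duration taskManagerRequestTimeout,
                        Duration slotRequestTimeout,
                        Duration taskManagerReleaseTimeout) {
        this.taskManagerRequestTimeout = taskManagerRequestTimeout;
        this.slotRequestTimeout = slotRequestTimeout;
        this.taskManagerReleaseTimeout = taskManagerReleaseTimeout;
    }

    public static void main(String[] args) {
        // Each timeout can now be tuned independently of the others.
        SlotManagerTimeouts timeouts = new SlotManagerTimeouts(
                Duration.ofSeconds(10), Duration.ofMinutes(5), Duration.ofMinutes(30));
        System.out.println(timeouts.slotRequestTimeout.getSeconds()); // 300
    }
}
```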
[GitHub] flink issue #5271: [FLINK-8399] [runtime] use independent configurations for...
Github user shuai-xu commented on the issue: https://github.com/apache/flink/pull/5271 @tillrohrmann thank you for reviewing, I have modified it. ---
[jira] [Commented] (FLINK-8289) The RestServerEndpoint should return the address with real ip when getRestAddress
[ https://issues.apache.org/jira/browse/FLINK-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325918#comment-16325918 ] ASF GitHub Bot commented on FLINK-8289: --- Github user shuai-xu commented on the issue: https://github.com/apache/flink/pull/5190 @tillrohrmann @EronWright , this makes it clearer, but it does not seem to solve the problem completely. We need to set RestOptions#ADDRESS to the address of the rest server so that clients can access it, but getRestAddress returns 0.0.0.0 if we let the rest server bind to RestOptions#BIND_ADDRESS with 0.0.0.0, unless we add another method to the rest server that returns the advertised address. > The RestServerEndpoint should return the address with real ip when > getRestAddress > -- > > Key: FLINK-8289 > URL: https://issues.apache.org/jira/browse/FLINK-8289 > Project: Flink > Issue Type: Bug > Affects Versions: 1.5.0 > Reporter: shuai.xu > Assignee: shuai.xu > Priority: Major > Labels: flip-6 > > Currently RestServerEndpoint.getRestAddress returns an address equal to the value of > the config rest.address, by default 127.0.0.1:9067, but this address cannot be accessed > from another machine. The IPs of the Dispatcher and JobMaster are usually assigned > dynamically, so users configure it to 0.0.0.0; getRestAddress then returns 0.0.0.0:9067, > which is registered with YARN or Mesos but likewise cannot be accessed from another > machine. It needs to return the real ip:port so that users can access the web monitor > from anywhere.
[GitHub] flink issue #5190: [FLINK-8289] [runtime] set the rest.address to the host o...
Github user shuai-xu commented on the issue: https://github.com/apache/flink/pull/5190 @tillrohrmann @EronWright , this makes it clearer, but it does not seem to solve the problem completely. We need to set RestOptions#ADDRESS to the address of the rest server so that clients can access it, but getRestAddress returns 0.0.0.0 if we let the rest server bind to RestOptions#BIND_ADDRESS with 0.0.0.0, unless we add another method to the rest server that returns the advertised address. ---
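The distinction the comment draws, between the wildcard address a server binds to and the routable address it advertises, can be sketched as follows. This is an illustrative class, not Flink's actual RestServerEndpoint API:

```java
// Illustrative sketch: a server may bind to the wildcard 0.0.0.0 so it listens on
// all interfaces, but must advertise a concrete, routable host to clients.
public class RestAddressSketch {
    private final String bindAddress;    // e.g. "0.0.0.0" (listen on all interfaces)
    private final String advertisedHost; // the routable host registered with YARN/Mesos
    private final int port;

    public RestAddressSketch(String bindAddress, String advertisedHost, int port) {
        this.bindAddress = bindAddress;
        this.advertisedHost = advertisedHost;
        this.port = port;
    }

    /** Returns the address clients should use -- never the wildcard bind address. */
    public String getRestAddress() {
        return advertisedHost + ":" + port;
    }

    public static void main(String[] args) {
        RestAddressSketch server = new RestAddressSketch("0.0.0.0", "192.168.1.10", 9067);
        System.out.println(server.getRestAddress()); // prints 192.168.1.10:9067
    }
}
```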
[jira] [Comment Edited] (FLINK-8355) DataSet Should not union a NULL row for AGG without GROUP BY clause.
[ https://issues.apache.org/jira/browse/FLINK-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325903#comment-16325903 ] sunjincheng edited comment on FLINK-8355 at 1/15/18 5:41 AM: - Hi [~fhueske] Thanks for your suggestion. I opened the PR ([https://github.com/apache/flink/pull/5241]) with FLINK-8325. Best, Jincheng was (Author: sunjincheng121): Hi [~fhueske] Thanks for your suggestion. I open the PR FLINK-8325. Best, Jincheng > DataSet Should not union a NULL row for AGG without GROUP BY clause. > > > Key: FLINK-8355 > URL: https://issues.apache.org/jira/browse/FLINK-8355 > Project: Flink > Issue Type: Bug > Components: Table API SQL > Affects Versions: 1.5.0 > Reporter: sunjincheng > Priority: Major > > Currently {{DataSetAggregateWithNullValuesRule}} will UNION a NULL row for > non-grouped aggregate queries. Once {{CountAggFunction}} supports > {{COUNT(*)}} (FLINK-8325), the result will be incorrect. > For example, if table {{T1}} has 3 records and we run the following SQL on > DataSet: > {code} > SELECT COUNT(*) as cnt from T1 // cnt = 4 (incorrect). > {code}
[jira] [Commented] (FLINK-8355) DataSet Should not union a NULL row for AGG without GROUP BY clause.
[ https://issues.apache.org/jira/browse/FLINK-8355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325903#comment-16325903 ] sunjincheng commented on FLINK-8355: Hi [~fhueske] Thanks for your suggestion. I opened the PR FLINK-8325. Best, Jincheng > DataSet Should not union a NULL row for AGG without GROUP BY clause. > > > Key: FLINK-8355 > URL: https://issues.apache.org/jira/browse/FLINK-8355 > Project: Flink > Issue Type: Bug > Components: Table API SQL > Affects Versions: 1.5.0 > Reporter: sunjincheng > Priority: Major > > Currently {{DataSetAggregateWithNullValuesRule}} will UNION a NULL row for > non-grouped aggregate queries. Once {{CountAggFunction}} supports > {{COUNT(*)}} (FLINK-8325), the result will be incorrect. > For example, if table {{T1}} has 3 records and we run the following SQL on > DataSet: > {code} > SELECT COUNT(*) as cnt from T1 // cnt = 4 (incorrect). > {code}
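The miscount described in the issue can be illustrated with a self-contained sketch. The real rule operates on DataSet plans; the helpers below are purely illustrative stand-ins for the two counting semantics:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class NullRowUnionSketch {
    // COUNT(*) counts every row, including an all-NULL padding row.
    static long countStar(List<Integer> rows) {
        return rows.size();
    }

    // COUNT(col) skips NULL values, so the padding row is ignored.
    static long countColumn(List<Integer> rows) {
        return rows.stream().filter(Objects::nonNull).count();
    }

    public static void main(String[] args) {
        List<Integer> table = new ArrayList<>(Arrays.asList(1, 2, 3));
        table.add(null); // the NULL row unioned in so empty inputs still emit a result
        System.out.println(countStar(table));   // 4 -- wrong for a 3-row table
        System.out.println(countColumn(table)); // 3
    }
}
```

This is why the NULL-row union is harmless for COUNT(col)-style aggregates but breaks once COUNT(*) is supported.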
[GitHub] flink issue #5170: [FLINK-8266] [runtime] add network memory to ResourceProf...
Github user shuai-xu commented on the issue: https://github.com/apache/flink/pull/5170 @tillrohrmann sorry, the conflict is resolved now ---
[jira] [Commented] (FLINK-8266) Add network memory to ResourceProfile
[ https://issues.apache.org/jira/browse/FLINK-8266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325899#comment-16325899 ] ASF GitHub Bot commented on FLINK-8266: --- Github user shuai-xu commented on the issue: https://github.com/apache/flink/pull/5170 @tillrohrmann sorry, the conflict is resolved now > Add network memory to ResourceProfile > - > > Key: FLINK-8266 > URL: https://issues.apache.org/jira/browse/FLINK-8266 > Project: Flink > Issue Type: Improvement > Components: Cluster Management > Reporter: shuai.xu > Assignee: shuai.xu > Priority: Major > Labels: flip-6 > > On the task manager side, the network buffer pool should be allocated according > to the number of input channels and output sub-partitions, but when allocating a > worker, the resource profile doesn't contain information about this memory. > So I suggest adding a network memory field to ResourceProfile; the job master > should calculate it when scheduling a task, and the resource manager can then > allocate a container with the resource profile.
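The proposal amounts to carrying one more field in the profile, roughly as follows. The class shape, field names, and the sizing formula are all illustrative, not Flink's actual ResourceProfile:

```java
// Illustrative sketch of a resource profile extended with a network memory field
// (names and the estimate below are hypothetical, not Flink's actual ResourceProfile).
public class ResourceProfileSketch {
    final double cpuCores;
    final int heapMemoryMB;
    final int networkMemoryMB; // derived from input channels and output sub-partitions

    ResourceProfileSketch(double cpuCores, int heapMemoryMB, int networkMemoryMB) {
        this.cpuCores = cpuCores;
        this.heapMemoryMB = heapMemoryMB;
        this.networkMemoryMB = networkMemoryMB;
    }

    // Toy estimate: one buffer segment per channel/sub-partition, rounded up to whole MB.
    static int estimateNetworkMemoryMB(int inputChannels, int outputSubpartitions, int segmentKiB) {
        int totalKiB = (inputChannels + outputSubpartitions) * segmentKiB;
        return (totalKiB + 1023) / 1024; // round up
    }

    public static void main(String[] args) {
        int networkMB = estimateNetworkMemoryMB(100, 100, 32); // 6400 KiB
        ResourceProfileSketch profile = new ResourceProfileSketch(1.0, 1024, networkMB);
        System.out.println(profile.networkMemoryMB); // 7
    }
}
```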
[GitHub] flink pull request #5287: [FLINK-8367] [REST] Migrate SubtaskCurrentAttemptD...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/5287 ---
[GitHub] flink issue #5294: [FLINK-8427] [optimizer] Checkstyle for org.apache.flink....
Github user zentol commented on the issue: https://github.com/apache/flink/pull/5294 I would ignore the optimizer module TBH. There are no recent contributions to the module, so we don't benefit from cleaner PRs. I also don't know what will happen to the module when we start unifying the batch APIs, which will probably be either a full rewrite or removal. ---
[jira] [Commented] (FLINK-8427) Checkstyle for org.apache.flink.optimizer.costs
[ https://issues.apache.org/jira/browse/FLINK-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325708#comment-16325708 ] ASF GitHub Bot commented on FLINK-8427: --- Github user zentol commented on the issue: https://github.com/apache/flink/pull/5294 I would ignore the optimizer module TBH. There are no recent contributions to the module, so we don't benefit from cleaner PRs. I also don't know what will happen to the module when we start unifying the batch APIs, which will probably be either a full rewrite or removal. > Checkstyle for org.apache.flink.optimizer.costs > --- > > Key: FLINK-8427 > URL: https://issues.apache.org/jira/browse/FLINK-8427 > Project: Flink > Issue Type: Improvement > Components: Optimizer > Affects Versions: 1.5.0 > Reporter: Greg Hogan > Assignee: Greg Hogan > Priority: Trivial > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (FLINK-8431) Allow to specify # GPUs for TaskManager in Mesos
[ https://issues.apache.org/jira/browse/FLINK-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325704#comment-16325704 ] Eron Wright commented on FLINK-8431: - [~eastcirclek] thanks for the detailed explanation. > Allow to specify # GPUs for TaskManager in Mesos > > > Key: FLINK-8431 > URL: https://issues.apache.org/jira/browse/FLINK-8431 > Project: Flink > Issue Type: Improvement > Components: Cluster Management, Mesos > Reporter: Dongwon Kim > Assignee: Dongwon Kim > Priority: Minor > > Mesos provides first-class support for Nvidia GPUs [1], but Flink does not > exploit it when scheduling TaskManagers. If Mesos agents are configured to > isolate GPUs as shown in [2], TaskManagers that do not explicitly request GPUs > cannot see GPUs at all. > We therefore need to introduce a new configuration property named > "mesos.resourcemanager.tasks.gpus" to allow users to specify the number of GPUs > for each TaskManager process in Mesos. > [1] http://mesos.apache.org/documentation/latest/gpu-support/ > [2] http://mesos.apache.org/documentation/latest/gpu-support/#agent-flags
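With the property proposed in the issue, a deployment that wants GPUs for its TaskManagers might configure something like the following (the value 2 is illustrative; the key is the one named in the issue):

```yaml
# flink-conf.yaml -- request two GPUs per TaskManager process on Mesos (illustrative value)
mesos.resourcemanager.tasks.gpus: 2
```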
[jira] [Commented] (FLINK-8369) Port SubtaskExecutionAttemptAccumulatorsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325694#comment-16325694 ] ASF GitHub Bot commented on FLINK-8369: --- Github user tillrohrmann commented on the issue: https://github.com/apache/flink/pull/5285 Thanks for your contribution @ifndef-SleePy. Changes look good. Merging this PR. > Port SubtaskExecutionAttemptAccumulatorsHandler to new REST endpoint > > > Key: FLINK-8369 > URL: https://issues.apache.org/jira/browse/FLINK-8369 > Project: Flink > Issue Type: Sub-task > Components: REST > Reporter: Biao Liu > Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptAccumulatorsHandler > to a new REST handler that is registered in the WebMonitorEndpoint.
[GitHub] flink issue #5285: [FLINK-8369] [REST] Migrate SubtaskExecutionAttemptAccumu...
Github user tillrohrmann commented on the issue: https://github.com/apache/flink/pull/5285 Thanks for your contribution @ifndef-SleePy. Changes look good. Merging this PR. ---
[jira] [Created] (FLINK-8433) Update code example for "Managed Operator State" documentation
Fabian Hueske created FLINK-8433: Summary: Update code example for "Managed Operator State" documentation Key: FLINK-8433 URL: https://issues.apache.org/jira/browse/FLINK-8433 Project: Flink Issue Type: Bug Components: Documentation, State Backends, Checkpointing Affects Versions: 1.4.0, 1.5.0 Reporter: Fabian Hueske The code example for "Managed Operator State" (https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/state.html#using-managed-operator-state) refers to the {{CheckpointedRestoring}} interface, which was removed in Flink 1.4.0. The example must be updated and the reference to the interface removed.
[jira] [Resolved] (FLINK-8369) Port SubtaskExecutionAttemptAccumulatorsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann resolved FLINK-8369. -- Resolution: Fixed Fixed via dc9a4f2f63e977f42e2f19a347c7010057dc4c69 > Port SubtaskExecutionAttemptAccumulatorsHandler to new REST endpoint > > > Key: FLINK-8369 > URL: https://issues.apache.org/jira/browse/FLINK-8369 > Project: Flink > Issue Type: Sub-task > Components: REST > Reporter: Biao Liu > Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptAccumulatorsHandler > to a new REST handler that is registered in the WebMonitorEndpoint.
[GitHub] flink pull request #5285: [FLINK-8369] [REST] Migrate SubtaskExecutionAttemp...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/5285 ---
[jira] [Commented] (FLINK-8369) Port SubtaskExecutionAttemptAccumulatorsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325661#comment-16325661 ] ASF GitHub Bot commented on FLINK-8369: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/5285 > Port SubtaskExecutionAttemptAccumulatorsHandler to new REST endpoint > > > Key: FLINK-8369 > URL: https://issues.apache.org/jira/browse/FLINK-8369 > Project: Flink > Issue Type: Sub-task > Components: REST > Reporter: Biao Liu > Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptAccumulatorsHandler > to a new REST handler that is registered in the WebMonitorEndpoint.
[jira] [Updated] (FLINK-8432) Add openstack swift filesystem
[ https://issues.apache.org/jira/browse/FLINK-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jelmer Kuperus updated FLINK-8432: -- Description: At ebay classifieds we started running our infrastructure on top of OpenStack. The openstack project comes with its own amazon-s3-like filesystem, known as Swift. It's built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound. We would really like to be able to use it within flink, as a sink or for storing savepoints etc I've prepared a pull request that adds support for it. It wraps the hadoop support for swift in a way that is very similar to the way the s3 connector works. You can find out more about the underlying hadoop implementation at https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html Pull request : https://github.com/apache/flink/pull/5296 was: At ebay classifieds we started running our infrastructure on top of OpenStack. The openstack project comes with its own amazon-s3-like filesystem, known as Swift. It's built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound. We would really like to be able to use it within flink, as a sink or for storing savepoints etc I've prepared a pull request that adds support for it. It wraps the hadoop support for swift in a way that is very similar to the way the s3 connector works. 
You can find out more about the underlying hadoop implementation at https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html > Add openstack swift filesystem > -- > > Key: FLINK-8432 > URL: https://issues.apache.org/jira/browse/FLINK-8432 > Project: Flink > Issue Type: Improvement > Components: filesystem-connector > Affects Versions: 1.4.0 > Reporter: Jelmer Kuperus > Labels: features > > At ebay classifieds we started running our infrastructure on top of OpenStack. > The openstack project comes with its own amazon-s3-like filesystem, known as > Swift. It's built for scale and optimized for durability, availability, and > concurrency across the entire data set. Swift is ideal for storing > unstructured data that can grow without bound. > We would really like to be able to use it within flink, as a sink or for > storing savepoints etc > I've prepared a pull request that adds support for it. It wraps the hadoop > support for swift in a way that is very similar to the way the s3 connector > works. > You can find out more about the underlying hadoop implementation at > https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html > Pull request : https://github.com/apache/flink/pull/5296
[jira] [Commented] (FLINK-8432) Add openstack swift filesystem
[ https://issues.apache.org/jira/browse/FLINK-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325617#comment-16325617 ] ASF GitHub Bot commented on FLINK-8432: --- GitHub user jelmerk opened a pull request: https://github.com/apache/flink/pull/5296 [FLINK-8432] [filesystem-connector] Add support for openstack's swift filesystem ## What is the purpose of the change Add support for OpenStack's cloud storage solution ## Brief change log - Added new module below flink-filesystems ## Verifying this change This change added tests and can be verified as follows: - Added integration tests for simple reading and writing and listing directories ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: yes - The serializers: don't know - The runtime per-record code paths (performance sensitive): don't know - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: don't know - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? yes - If yes, how is the feature documented? 
not documented You can merge this pull request into a Git repository by running: $ git pull https://github.com/jelmerk/flink openstack_fs_support Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/5296.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #5296 commit d46696192cc57ba9df07fabf24c6d66538ec186d Author: Jelmer Kuperus Date: 2018-01-14T09:42:02Z [FLINK-8432] Add support for openstack's swift filesystem > Add openstack swift filesystem > -- > > Key: FLINK-8432 > URL: https://issues.apache.org/jira/browse/FLINK-8432 > Project: Flink > Issue Type: Improvement > Components: filesystem-connector > Affects Versions: 1.4.0 > Reporter: Jelmer Kuperus > Labels: features > > At ebay classifieds we started running our infrastructure on top of OpenStack. > The openstack project comes with its own amazon-s3-like filesystem, known as > Swift. It's built for scale and optimized for durability, availability, and > concurrency across the entire data set. Swift is ideal for storing > unstructured data that can grow without bound. > We would really like to be able to use it within flink, as a sink or for > storing savepoints etc > I've prepared a pull request that adds support for it. It wraps the hadoop > support for swift in a way that is very similar to the way the s3 connector > works. > You can find out more about the underlying hadoop implementation at > https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html
[GitHub] flink pull request #5296: [FLINK-8432] [filesystem-connector] Add support fo...
GitHub user jelmerk opened a pull request: https://github.com/apache/flink/pull/5296 [FLINK-8432] [filesystem-connector] Add support for openstack's swift filesystem ## What is the purpose of the change Add support for OpenStack's cloud storage solution ## Brief change log - Added new module below flink-filesystems ## Verifying this change This change added tests and can be verified as follows: - Added integration tests for simple reading and writing and listing directories ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: yes - The serializers: don't know - The runtime per-record code paths (performance sensitive): don't know - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: don't know - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? yes - If yes, how is the feature documented? not documented You can merge this pull request into a Git repository by running: $ git pull https://github.com/jelmerk/flink openstack_fs_support Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/5296.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #5296 commit d46696192cc57ba9df07fabf24c6d66538ec186d Author: Jelmer Kuperus Date: 2018-01-14T09:42:02Z [FLINK-8432] Add support for openstack's swift filesystem ---
[jira] [Created] (FLINK-8432) Add openstack swift filesystem
Jelmer Kuperus created FLINK-8432: - Summary: Add openstack swift filesystem Key: FLINK-8432 URL: https://issues.apache.org/jira/browse/FLINK-8432 Project: Flink Issue Type: Improvement Components: filesystem-connector Affects Versions: 1.4.0 Reporter: Jelmer Kuperus At ebay classifieds we started running our infrastructure on top of OpenStack. The openstack project comes with its own amazon-s3-like filesystem, known as Swift. It's built for scale and optimized for durability, availability, and concurrency across the entire data set. Swift is ideal for storing unstructured data that can grow without bound. We would really like to be able to use it within flink, as a sink or for storing savepoints etc I've prepared a pull request that adds support for it. It wraps the hadoop support for swift in a way that is very similar to the way the s3 connector works. You can find out more about the underlying hadoop implementation at https://hadoop.apache.org/docs/stable/hadoop-openstack/index.html
[jira] [Commented] (FLINK-8368) Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325593#comment-16325593 ] ASF GitHub Bot commented on FLINK-8368: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/5270 > Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint > --- > > Key: FLINK-8368 > URL: https://issues.apache.org/jira/browse/FLINK-8368 > Project: Flink > Issue Type: Sub-task > Components: REST > Reporter: Biao Liu > Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptDetailsHandler > to a new REST handler that is registered in the WebMonitorEndpoint.
[GitHub] flink pull request #5270: [FLINK-8368] [REST] Migrate SubtaskExecutionAttemp...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/5270 ---
[jira] [Resolved] (FLINK-8368) Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann resolved FLINK-8368. -- Resolution: Fixed Fixed via d8b2c0febfdb1c15e4251247a87c1c606ea2a284 3920e9a47f7eea09335d644d55053863136a9a9b > Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint > --- > > Key: FLINK-8368 > URL: https://issues.apache.org/jira/browse/FLINK-8368 > Project: Flink > Issue Type: Sub-task > Components: REST > Reporter: Biao Liu > Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptDetailsHandler > to a new REST handler that is registered in the WebMonitorEndpoint.
[jira] [Commented] (FLINK-8361) Remove create_release_files.sh
[ https://issues.apache.org/jira/browse/FLINK-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325569#comment-16325569 ] ASF GitHub Bot commented on FLINK-8361: --- Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/5291 +1, LGTM > Remove create_release_files.sh > -- > > Key: FLINK-8361 > URL: https://issues.apache.org/jira/browse/FLINK-8361 > Project: Flink > Issue Type: Improvement > Components: Build System > Affects Versions: 1.5.0 > Reporter: Greg Hogan > Priority: Trivial > > The monolithic {{create_release_files.sh}} does not support building Flink > without Hadoop and looks to have been superseded by the scripts in > {{tools/releasing}}.
[GitHub] flink issue #5291: [FLINK-8361] [build] Remove create_release_files.sh
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/5291 +1, LGTM ---
[jira] [Commented] (FLINK-8384) Session Window Assigner with Dynamic Gaps
[ https://issues.apache.org/jira/browse/FLINK-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325566#comment-16325566 ] ASF GitHub Bot commented on FLINK-8384: --- GitHub user dyanarose opened a pull request: https://github.com/apache/flink/pull/5295 [FLINK-8384] [streaming] Session Window Assigner with Dynamic Gaps ## What is the purpose of the change This PR adds the ability for the Session Window assigners to have dynamic inactivity gaps in addition to the existing static inactivity gaps. **Behaviour of dynamic gaps within existing sessions:** - scenario 1 - the new timeout is prior to the old timeout. The old timeout (the furthest in the future) is respected. - scenario 2 - the new timeout is after the old timeout. The new timeout is respected. - scenario 3 - a session is in flight, a new timeout is calculated, however no new events arrive for that session after the new timeout is calculated. This session will not have its timeout changed ## Brief change log **What's New** - SessionWindowTimeGapExtractor - generic interface with one extract method that returns the time gap - DynamicEventTimeSessionWindows - generic event time session window - DynamicProcessingTimeSessionWindows - generic processing time session window - TypedEventTimeTrigger - generic event time trigger - TypedProcessingTimeTrigger - generic processing time trigger - Tests for all the above ## Verifying this change This change added tests and can be verified as follows: - added tests for the typed triggers that duplicate the existing trigger tests to prove parity - added unit tests for the dynamic session window assigners that mimic the existing static session window assigner tests to prove parity in the static case - added tests to the WindowOperatorTest class to prove the behaviour of changing inactivity gaps ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, 
i.e., is any changed class annotated with `@Public(Evolving)`: (no, though the two typed trigger classes are marked `@Public(Evolving)`) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (yes) - If yes, how is the feature documented? (docs && JavaDocs) You can merge this pull request into a Git repository by running: $ git pull https://github.com/SaleCycle/flink dynamic-session-window-gaps Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/5295.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #5295 commit 399522a3e23a51ce1e860e5e09499ef98a7e340d Author: Dyana Rose Date: 2018-01-10T15:50:00Z [FLINK-8384] [streaming] Dynamic Gap Session Window Assigner > Session Window Assigner with Dynamic Gaps > - > > Key: FLINK-8384 > URL: https://issues.apache.org/jira/browse/FLINK-8384 > Project: Flink > Issue Type: Improvement > Components: Streaming >Reporter: Dyana Rose >Priority: Minor > > *Reason for Improvement* > Currently both Session Window assigners only allow a static inactivity gap. 
> Given the following scenario, this is too restrictive: > * Given a stream of IoT events from many device types > * Assume each device type could have a different inactivity gap > * Assume each device type gap could change while sessions are in flight > By allowing dynamic inactivity gaps, the correct gap can be determined in the > [assignWindows > function|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/windowing/assigners/EventTimeSessionWindows.java#L59] > by passing the element currently under consideration, the timestamp, and the > context to a user-defined function. This eliminates the need to create > unwieldy workarounds if you only have static gaps. > Dynamic Session Window gaps should be available for both Event Time and > Processing Time streams. > (short preliminary discussion: > https://lists.apache.org/thread.html/08a011c0039e826343e9be0b1bb4ecfc2e12976bc65f8a43ee88@%3Cdev.flink.apache.org%3E) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
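The three scenarios described in the pull request above reduce to a single merge rule: a session's end timestamp only ever moves forward, to the timeout furthest in the future. Below is a minimal, self-contained sketch of that rule. The names (`GapExtractor`, `Session`, `onElement`) are hypothetical stand-ins, not Flink's actual `SessionWindowTimeGapExtractor` or window-assigner classes, and the real implementation merges `TimeWindow` instances rather than mutating a session object.

```java
// Hypothetical model of the dynamic-gap merge semantics described above.
public class DynamicGapSketch {

    /** Stand-in for the PR's SessionWindowTimeGapExtractor interface. */
    interface GapExtractor<T> {
        long extract(T element);
    }

    /** An in-flight session whose end timestamp can only move forward. */
    static final class Session {
        long end = Long.MIN_VALUE;

        <T> void onElement(T element, long timestamp, GapExtractor<T> extractor) {
            long candidateEnd = timestamp + extractor.extract(element);
            // The timeout furthest in the future wins: a shorter gap computed
            // later cannot shrink the session (scenario 1), while a longer
            // gap extends it (scenario 2). If no further events arrive, the
            // last computed end simply stands (scenario 3).
            end = Math.max(end, candidateEnd);
        }
    }

    public static void main(String[] args) {
        // For brevity, the gap value is carried inside the element itself.
        GapExtractor<Long> gap = element -> element;
        Session s = new Session();

        s.onElement(10L, 100L, gap); // session ends at 110
        s.onElement(2L, 105L, gap);  // candidate end 107 < 110: old timeout kept
        if (s.end != 110L) throw new AssertionError("scenario 1: " + s.end);

        s.onElement(30L, 106L, gap); // candidate end 136 > 110: new timeout wins
        if (s.end != 136L) throw new AssertionError("scenario 2: " + s.end);

        System.out.println("ok");
    }
}
```

In the actual API sketched by the PR, the extractor is the user-supplied piece; everything else stays inside the assigner, which is why static gaps remain a special case (an extractor that returns a constant).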
[GitHub] flink pull request #5294: [FLINK-8427] [optimizer] Checkstyle for org.apache...
GitHub user greghogan opened a pull request: https://github.com/apache/flink/pull/5294 [FLINK-8427] [optimizer] Checkstyle for org.apache.flink.optimizer.costs ## What is the purpose of the change Enforce checkstyle for org.apache.flink.optimizer.costs You can merge this pull request into a Git repository by running: $ git pull https://github.com/greghogan/flink 8427_checkstyle_for_org_apache_flink_optimizer_costs Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/5294.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #5294 commit 54f0303541aa737723081d1d927b14c877527de3 Author: Greg Hogan Date: 2018-01-12T21:12:37Z Optimize imports commit 6b969f5048132bafe80330d28c156f6530a9a22b Author: Greg Hogan
Date: 2018-01-12T21:15:13Z Trailing whitespace commit a61f7b862a20142f29375237d5b428b40dee75c3 Author: Greg Hogan
Date: 2018-01-12T21:18:10Z [FLINK-8427] [optimizer] Checkstyle for org.apache.flink.optimizer.costs ---
[jira] [Commented] (FLINK-8427) Checkstyle for org.apache.flink.optimizer.costs
[ https://issues.apache.org/jira/browse/FLINK-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325544#comment-16325544 ] ASF GitHub Bot commented on FLINK-8427: --- GitHub user greghogan opened a pull request: https://github.com/apache/flink/pull/5294 [FLINK-8427] [optimizer] Checkstyle for org.apache.flink.optimizer.costs (pull request message quoted above) > Checkstyle for org.apache.flink.optimizer.costs > --- > > Key: FLINK-8427 > URL: https://issues.apache.org/jira/browse/FLINK-8427 > Project: Flink > Issue Type: Improvement > Components: Optimizer >Affects Versions: 1.5.0 >Reporter: Greg Hogan >Assignee: Greg Hogan >Priority: Trivial >
[GitHub] flink pull request #5270: [FLINK-8368] [REST] Migrate SubtaskExecutionAttemp...
Github user tillrohrmann commented on a diff in the pull request: https://github.com/apache/flink/pull/5270#discussion_r161374497 --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/job/SubtaskExecutionAttemptDetailsHandler.java --- @@ -0,0 +1,131 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.runtime.rest.handler.job; + +import org.apache.flink.api.common.JobID; +import org.apache.flink.api.common.time.Time; +import org.apache.flink.runtime.execution.ExecutionState; +import org.apache.flink.runtime.executiongraph.AccessExecution; +import org.apache.flink.runtime.jobgraph.JobVertexID; +import org.apache.flink.runtime.rest.handler.HandlerRequest; +import org.apache.flink.runtime.rest.handler.RestHandlerException; +import org.apache.flink.runtime.rest.handler.legacy.ExecutionGraphCache; +import org.apache.flink.runtime.rest.handler.legacy.metrics.MetricFetcher; +import org.apache.flink.runtime.rest.handler.util.MutableIOMetrics; +import org.apache.flink.runtime.rest.messages.EmptyRequestBody; +import org.apache.flink.runtime.rest.messages.JobIDPathParameter; +import org.apache.flink.runtime.rest.messages.JobVertexIdPathParameter; +import org.apache.flink.runtime.rest.messages.MessageHeaders; +import org.apache.flink.runtime.rest.messages.job.SubtaskAttemptMessageParameters; +import org.apache.flink.runtime.rest.messages.job.SubtaskExecutionAttemptDetailsInfo; +import org.apache.flink.runtime.rest.messages.job.metrics.IOMetricsInfo; +import org.apache.flink.runtime.taskmanager.TaskManagerLocation; +import org.apache.flink.runtime.webmonitor.RestfulGateway; +import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever; +import org.apache.flink.util.Preconditions; + +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Executor; + +/** + * Handler of specific sub task execution attempt. + */ +public class SubtaskExecutionAttemptDetailsHandler extends AbstractSubtaskAttemptHandler{ + + private final MetricFetcher metricFetcher; + + /** +* Instantiates a new Abstract job vertex handler. --- End diff -- wrong java docs: Should be `SubtaskExecutionAttemptDetailsHandler`. ---
[jira] [Commented] (FLINK-8368) Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325484#comment-16325484 ] ASF GitHub Bot commented on FLINK-8368: --- Github user tillrohrmann commented on a diff in the pull request: https://github.com/apache/flink/pull/5270#discussion_r161374497 (review comment and diff quoted above) > Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint > --- > > Key: FLINK-8368 > URL: https://issues.apache.org/jira/browse/FLINK-8368 > Project: Flink > Issue Type: Sub-task > Components: REST >Reporter: Biao Liu >Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptDetailsHandler > to a new REST handler that is registered in WebMonitorEndpoint.
[jira] [Commented] (FLINK-8368) Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint
[ https://issues.apache.org/jira/browse/FLINK-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325483#comment-16325483 ] ASF GitHub Bot commented on FLINK-8368: --- Github user tillrohrmann commented on a diff in the pull request: https://github.com/apache/flink/pull/5270#discussion_r161374482 --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/job/JobVertexAccumulatorsHandler.java --- @@ -65,25 +60,21 @@ public JobVertexAccumulatorsHandler( } @Override - protected JobVertexAccumulatorsInfo handleRequest(HandlerRequestrequest, AccessExecutionGraph executionGraph) throws RestHandlerException { - JobVertexID jobVertexID = request.getPathParameter(JobVertexIdPathParameter.class); - AccessExecutionJobVertex jobVertex = executionGraph.getJobVertex(jobVertexID); - - if (null != jobVertex) { - StringifiedAccumulatorResult[] accs = jobVertex.getAggregatedUserAccumulatorsStringified(); - ArrayList userAccumulatorList = new ArrayList<>(accs.length); + protected JobVertexAccumulatorsInfo handleRequest( + HandlerRequest request, + AccessExecutionJobVertex jobVertex) throws RestHandlerException { - for (StringifiedAccumulatorResult acc : accs) { - userAccumulatorList.add( - new JobVertexAccumulatorsInfo.UserAccumulator( - acc.getName(), - acc.getType(), - acc.getValue())); - } + StringifiedAccumulatorResult[] accs = jobVertex.getAggregatedUserAccumulatorsStringified(); + ArrayList userAccumulatorList = new ArrayList<>(accs.length); - return new JobVertexAccumulatorsInfo(jobVertex.getJobVertexId().toString(), userAccumulatorList); - } else { - throw new RestHandlerException("There is no accumulator for vertex " + jobVertexID + '.', HttpResponseStatus.NOT_FOUND); + for (StringifiedAccumulatorResult acc : accs) { + userAccumulatorList.add( + new JobVertexAccumulatorsInfo.UserAccumulator( + acc.getName(), + acc.getType(), + acc.getValue())); } + + return new JobVertexAccumulatorsInfo(jobVertex.getJobVertexId().toString(), 
userAccumulatorList); --- End diff -- Very nice refinement :-) > Port SubtaskExecutionAttemptDetailsHandler to new REST endpoint > --- > > Key: FLINK-8368 > URL: https://issues.apache.org/jira/browse/FLINK-8368 > Project: Flink > Issue Type: Sub-task > Components: REST >Reporter: Biao Liu >Assignee: Biao Liu > Labels: flip-6 > Fix For: 1.5.0 > > > Migrate > org.apache.flink.runtime.rest.handler.legacy.SubtaskExecutionAttemptDetailsHandler > to a new REST handler that is registered in WebMonitorEndpoint.
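The diff under review above follows a common refactoring: the null-check on the looked-up vertex moves out of each concrete handler and into an abstract base class, so `handleRequest` receives a vertex that is guaranteed to exist. A minimal sketch of that template-method shape is below; the names (`Graph`, `AbstractVertexHandler`, `AccumulatorsHandler`) are hypothetical simplifications, not Flink's actual classes, and a `NoSuchElementException` stands in for the real `RestHandlerException` with a 404 status.

```java
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical sketch of the refactor: the base class owns the lookup and
// the not-found case once, so subclasses only build the response.
public class HandlerRefactorSketch {

    /** Toy stand-in for the execution graph: maps vertex id to a vertex. */
    static final class Graph {
        private final Map<String, String> vertices;
        Graph(Map<String, String> vertices) { this.vertices = vertices; }
        String getVertex(String id) { return vertices.get(id); } // may be null
    }

    abstract static class AbstractVertexHandler {
        final String handle(Graph graph, String vertexId) {
            String vertex = graph.getVertex(vertexId);
            if (vertex == null) {
                // Centralised not-found handling, done once for all handlers.
                throw new NoSuchElementException("No vertex " + vertexId);
            }
            return handleRequest(vertex);
        }

        /** Called only with a non-null vertex. */
        abstract String handleRequest(String vertex);
    }

    static final class AccumulatorsHandler extends AbstractVertexHandler {
        @Override
        String handleRequest(String vertex) {
            return "accumulators of " + vertex;
        }
    }

    public static void main(String[] args) {
        AbstractVertexHandler handler = new AccumulatorsHandler();
        Graph graph = new Graph(Map.of("v1", "source"));
        System.out.println(handler.handle(graph, "v1")); // prints "accumulators of source"
    }
}
```

The payoff, visible in the diff above, is that `JobVertexAccumulatorsHandler.handleRequest` loses its `if (null != jobVertex)` branch entirely and shrinks to the response-building loop.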