[ https://issues.apache.org/jira/browse/BEAM-5110?focusedWorklogId=136339&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-136339 ]
ASF GitHub Bot logged work on BEAM-5110:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Aug/18 23:12
            Start Date: 20/Aug/18 23:12
    Worklog Time Spent: 10m
      Work Description: tweise commented on issue #6189: [BEAM-5110] Explicitly count the references for BatchFlinkExecutableStageContext …
URL: https://github.com/apache/beam/pull/6189#issuecomment-414493523

   Please resolve the merge conflict. Do you think we should incorporate the count configuration option, since the portable Flink runner is effectively broken right now when used with the Python SDK?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 136339)
            Time Spent: 2.5h  (was: 2h 20m)

> Reconcile Flink JVM singleton management with deployment
> ---------------------------------------------------------
>
>                 Key: BEAM-5110
>                 URL: https://issues.apache.org/jira/browse/BEAM-5110
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-flink
>            Reporter: Ben Sidhom
>            Assignee: Ben Sidhom
>            Priority: Major
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> [~angoenka] noticed through debugging that multiple instances of
> BatchFlinkExecutableStageContext.BatchFactory are loaded for a given job when
> executing in standalone cluster mode. This context factory is responsible for
> maintaining singleton state across a TaskManager (JVM) in order to share SDK
> Environments across workers in a given job. The multiple loading breaks
> singleton semantics and results in an indeterminate number of Environments
> being created.
> It turns out that the [Flink classloading
> mechanism|https://ci.apache.org/projects/flink/flink-docs-release-1.5/monitoring/debugging_classloading.html]
> is determined by deployment mode. Note that "user code" as referenced by
> this link is actually the Flink job server jar; actual end-user code lives
> inside the SDK Environment and uploaded artifacts.
> In order to maintain singletons without resorting to IPC (for example, using
> file locks and/or additional gRPC servers), we need to force non-dynamic
> classloading. This happens, for example, when jobs are submitted to YARN for
> one-off deployments via `flink run`. However, connecting to an existing
> (Flink standalone) deployment results in dynamic classloading.
> We should investigate this behavior and either document (and attempt to
> enforce) deployment modes that are consistent with our requirements, or (if
> possible) create a custom classloader that enforces singleton loading.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
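
For context on the reference counting discussed in the comment above: the sketch below shows, in broad strokes, how a per-job context can be shared within a TaskManager JVM and torn down once the last stage releases it. This is a minimal illustration, not the code in PR #6189; the class and method names (RefCountedContextFactory, acquire, release, createContext) are hypothetical.

{code:java}
import java.util.HashMap;
import java.util.Map;

final class RefCountedContextFactory {
  // One shared holder per job within this JVM (assuming this class is loaded once).
  private static final Map<String, RefCounted> CONTEXTS = new HashMap<>();

  private static final class RefCounted {
    final AutoCloseable context;
    int refCount;
    RefCounted(AutoCloseable context) { this.context = context; }
  }

  // Acquire the shared per-job context, creating it on first use.
  static synchronized AutoCloseable acquire(String jobId) {
    RefCounted holder = CONTEXTS.get(jobId);
    if (holder == null) {
      holder = new RefCounted(createContext(jobId));
      CONTEXTS.put(jobId, holder);
    }
    holder.refCount++;
    return holder.context;
  }

  // Release one reference; close the context when the last holder is done.
  static synchronized void release(String jobId) throws Exception {
    RefCounted holder = CONTEXTS.get(jobId);
    if (holder != null && --holder.refCount == 0) {
      CONTEXTS.remove(jobId);
      holder.context.close(); // e.g. tear down the shared SDK Environment
    }
  }

  // Stand-in for the expensive setup (SDK Environment, gRPC channels, ...).
  private static AutoCloseable createContext(String jobId) {
    return () -> { };
  }
}
{code}

Note that the static map is only a JVM-wide singleton if the enclosing class is loaded exactly once per JVM, which is exactly what dynamic (per-job) classloading breaks; hence the classloading discussion in the issue description.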
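
The classloading problem itself can be reproduced outside Flink. The sketch below, again purely illustrative (the jar path and class name are placeholders), shows that two isolated classloaders loading the same class produce two distinct Class objects and therefore two independent copies of any static "singleton" state. Flink's standalone mode behaves analogously with its per-job user-code classloaders; adjusting the classloader.resolve-order setting or pinning the job server jar on the cluster classpath are possible mitigations alongside the approaches listed in the description.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

// Two isolated classloaders -> two Class objects -> two copies of static state.
public final class ClassLoaderIsolationDemo {
  public static void main(String[] args) throws Exception {
    // Placeholder jar and class name; substitute any class holding static state.
    URL[] jars = { new URL("file:///path/to/user-code.jar") };
    ClassLoader first = new URLClassLoader(jars, null);  // null parent = isolated
    ClassLoader second = new URLClassLoader(jars, null);
    Class<?> a = Class.forName("org.example.SomeFactory", true, first);
    Class<?> b = Class.forName("org.example.SomeFactory", true, second);
    System.out.println(a == b); // false: each loader defines its own class
  }
}
{code}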