Hello.
I am trying to run FlinkRunner (2.15.0) on AWS EC2 instance and submit job
to AWS EMR (5.26.0).
However, when I run the pipeline, it fails with the error below.
========================================================-
Caused by: java.lang.Exception: The user defined 'open()' method caused an exception: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
    at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:498)
    at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:368)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
    ... 1 more
Caused by: org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4966)
    at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:211)
    at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:202)
    at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:185)
    at org.apache.beam.runners.flink.translation.functions.FlinkDefaultExecutableStageContext.getStageBundleFactory(FlinkDefaultExecutableStageContext.java:49)
    at org.apache.beam.runners.flink.translation.functions.ReferenceCountingFlinkExecutableStageContextFactory$WrappedContext.getStageBundleFactory(ReferenceCountingFlinkExecutableStageContextFactory.java:203)
    at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.open(FlinkExecutableStageFunction.java:129)
    at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
    at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:494)
    ... 3 more
Caused by: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.beam.runners.fnexecution.environment.DockerCommand.runShortCommand(DockerCommand.java:141)
    at org.apache.beam.runners.fnexecution.environment.DockerCommand.runImage(DockerCommand.java:92)
    at org.apache.beam.runners.fnexecution.environment.DockerEnvironmentFactory.createEnvironment(DockerEnvironmentFactory.java:152)
    at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:178)
    at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:162)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
    ... 11 more
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 24 more
========================================================-
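Judging from DockerEnvironmentFactory.createEnvironment in the trace, the SDK harness container is launched on the hosts where the Flink TaskManagers run, so the "docker" binary has to be on the PATH there, not only on the EC2 instance that submits the job. A minimal sketch of the check I would run on each EMR worker node (nothing EMR-specific in it):

```shell
# Check whether the docker binary is visible on this host's PATH.
# Run this on each node that executes Flink TaskManagers.
if command -v docker >/dev/null 2>&1; then
  echo "docker found at: $(command -v docker)"
else
  echo "docker not found on PATH"
fi
```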
The pipeline options are below.
========================================================-
options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_version=1.8",
    "--flink_master_url=ip-172-31-1-84.ap-northeast-1.compute.internal:43581",
    "--environment_config=asia.gcr.io/PROJECTNAME/beam/python3",
    "--experiments=beam_fn_api"
])
p = beam.Pipeline(options=options)
========================================================-
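Since --environment_config points at a container image, the TaskManager hosts also need to be able to pull that image once docker itself is available. A hedged sketch of that check (PROJECTNAME is the same placeholder as in the options above; the pull is only attempted when docker exists):

```shell
# Try to pull the SDK harness image referenced by --environment_config.
# PROJECTNAME is a placeholder, exactly as in the pipeline options.
IMAGE="asia.gcr.io/PROJECTNAME/beam/python3"
echo "checking image: $IMAGE"
if command -v docker >/dev/null 2>&1; then
  docker pull "$IMAGE" || echo "pull failed for $IMAGE"
else
  echo "docker is not installed on this host; cannot pull $IMAGE"
fi
```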
I am able to run "docker info" as ec2-user on the instance where the
script is running.
========================================================-
(python) [ec2-user@ip-172-31-2-121 ~]$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
...
========================================================-
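That said, "docker info" succeeding as ec2-user on the submit host says nothing about the user that owns the TaskManager processes on the EMR nodes. Assuming docker is installed there, that user also needs permission to talk to the docker daemon; membership in the "docker" group is the usual requirement:

```shell
# Check whether the current user can use docker without sudo.
# Run as the user that owns the Flink TaskManager process.
user="$(id -un)"
if id -nG "$user" | grep -qw docker; then
  echo "$user is in the docker group"
else
  echo "$user is NOT in the docker group"
fi
```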
I used the "debian-stretch" AMI below.
========================================================-
debian-stretch-hvm-x86_64-gp2-2019-09-08-17994-572488bb-fc09-4638-8628-e1e1d26436f4-ami-0ed2d2283aa1466df.4
(ami-06f16171199d98c63)
========================================================-
This does not seem to happen when Flink runs locally.
========================================================-
admin@ip-172-31-9-89:/opt/flink$ sudo ss -atunp | grep 8081
tcp LISTEN 0 128 :::8081 :::*
users:(("java",pid=18420,fd=82))
admin@ip-172-31-9-89:/opt/flink$ sudo ps -ef | grep java | head -1
admin 17698 1 0 08:59 ? 00:00:12 java -jar
/home/admin/.apache_beam/cache/beam-runners-flink-1.8-job-server-2.15.0.jar
--flink-master-url ip-172-31-1-84.ap-northeast-1.compute.internal:43581
--artifacts-dir /tmp/artifactskj47j8yn --job-port 48205 --artifact-port 0
--expansion-port 0
admin@ip-172-31-9-89:/opt/flink$
========================================================-
Is there any other setting I need to check when running on an EC2
instance?
Thanks,
Yu Watanabe
--
Yu Watanabe
Weekend Freelancer who loves to challenge building data platform
[email protected]
LinkedIn: <https://www.linkedin.com/in/yuwatanabe1>
Twitter: <https://twitter.com/yuwtennis>