schevalley2 commented on code in PR #24303:
URL: https://github.com/apache/flink/pull/24303#discussion_r1486546701


##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image.ref=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+#### Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upload can be utilized so Flink will upload the local artifact to DFS during deployment and fetch it on the deployed JobManager pod:

Review Comment:
   nit: utilized -> used



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image.ref=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+#### Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upload can be utilized so Flink will upload the local artifact to DFS during deployment and fetch it on the deployed JobManager pod:
+
+```bash
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    -Dkubernetes.artifacts.local-upload-enabled=true \
+    -Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
+    local:///tmp/my-flink-job.jar
+```
+
+The `kubernetes.artifacts.local-upload-enabled` enables this feature, and `kubernetes.artifacts.local-upload-target` has to point to a valid remote target that exists and the permissions configured properly.

Review Comment:
   nit: that exists and *has* the permissions configured properly.



##########
flink-clients/src/main/java/org/apache/flink/client/cli/ArtifactFetchOptionsInternal.java:
##########
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.client.cli;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+
+import java.util.List;
+
+/** Artifact Fetch options. */
+@Internal
+public class ArtifactFetchOptionsInternal {
+
+    public static final ConfigOption<List<String>> COMPLETE_LIST =
+            ConfigOptions.key("$internal.user.artifacts.complete-list")

Review Comment:
   I agree with you. I wonder if it would be worth either writing it somewhere else as you suggest, or simply logging something like:
   
   > DefaultKubernetesArtifactUploader completed uploading artifacts, replacing user.artifacts.artifact-list with "…"
   
   So it can be easily found in logs, like in `checkAndUpdatePortConfigOption`
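   
   For illustration, roughly something like the sketch below. The class wrapper, method name and variable names are assumptions made up for this comment, not the PR's actual code; the point is just the single INFO log carrying the replaced list.
   
```java
// Sketch only: names are assumptions to illustrate the suggested log line.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;

class ArtifactUploadLoggingSketch {

    private static final Logger LOG = LoggerFactory.getLogger(ArtifactUploadLoggingSketch.class);

    // Log the final artifact list at INFO so the replacement is easy to find in the JobManager logs.
    void logUpdatedArtifactList(List<String> updatedArtifactList) {
        LOG.info(
                "Completed uploading artifacts, replacing 'user.artifacts.artifact-list' with: {}",
                updatedArtifactList);
    }
}
```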



##########
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/KubernetesClusterDescriptorTest.java:
##########
@@ -86,7 +86,8 @@ public FlinkKubeClient fromConfiguration(
                                         server.createClient().inNamespace(NAMESPACE),
                                         Executors.newDirectExecutorService());
                             }
-                        });
+                        },
+                        config -> {});

Review Comment:
   nit: I think it's fine since it's a test. Seeing it in the diff, I thought maybe it's possible to declare some static `NO_OP = config -> {}` and use it here as `KubernetesArtifactUploader.NO_OP`, but I don't have a strong opinion on that.
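   
   Roughly what I had in mind, as a sketch. The single abstract method's name and signature here are assumptions (the reviewed interface may differ); only the shared `NO_OP` constant is the actual suggestion.
   
```java
// Sketch only: the method name uploadAll(...) is assumed, not copied from the PR.
import org.apache.flink.configuration.Configuration;

public interface KubernetesArtifactUploader {

    /** Shared no-op uploader for tests that do not exercise artifact upload. */
    KubernetesArtifactUploader NO_OP = config -> {};

    void uploadAll(Configuration config) throws Exception;
}
```
   
   The test could then pass `KubernetesArtifactUploader.NO_OP` instead of the inline `config -> {}`.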



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -85,6 +85,9 @@ For production use, we recommend deploying Flink Applications in the [Applicatio
 
 The [Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
 The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
+Bundling can be done by modifying the base Flink Docker image, or via the User Artifact Management, which makes it possible to uploading and download artifacts that are not available locally.

Review Comment:
   nit: should it be uploading -> upload?



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image.ref=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+#### Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upload can be utilized so Flink will upload the local artifact to DFS during deployment and fetch it on the deployed JobManager pod:
+
+```bash
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    -Dkubernetes.artifacts.local-upload-enabled=true \
+    -Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
+    local:///tmp/my-flink-job.jar
+```
+
+The `kubernetes.artifacts.local-upload-enabled` enables this feature, and `kubernetes.artifacts.local-upload-target` has to point to a valid remote target that exists and the permissions configured properly.
+You can add additional artifacts via the `user.artifacts.artifact-list` config option, which can contain a mix of local and remote artifacts, it will try to upload the local ones and leave the rest as is, so all of them can be fetched on the deployed JobManager pod:

Review Comment:
   nit: I wonder if the sentence should stop after "remote artifacts"



##########
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##########
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image.ref=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+#### Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upload can be utilized so Flink will upload the local artifact to DFS during deployment and fetch it on the deployed JobManager pod:
+
+```bash
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    -Dkubernetes.artifacts.local-upload-enabled=true \
+    -Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
+    local:///tmp/my-flink-job.jar
+```
+
+The `kubernetes.artifacts.local-upload-enabled` enables this feature, and `kubernetes.artifacts.local-upload-target` has to point to a valid remote target that exists and the permissions configured properly.
+You can add additional artifacts via the `user.artifacts.artifact-list` config option, which can contain a mix of local and remote artifacts, it will try to upload the local ones and leave the rest as is, so all of them can be fetched on the deployed JobManager pod:
+
+```bash
+$ ./bin/flink run-application \
+    --target kubernetes-application \
+    -Dkubernetes.cluster-id=my-first-application-cluster \
+    -Dkubernetes.container.image=custom-image-name \
+    -Dkubernetes.artifacts.local-upload-enabled=true \
+    -Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
+    -Duser.artifacts.artifact-list=local:///tmp/my-flink-udf1.jar\;s3://my-bucket/my-flink-udf2.jar \
+    local:///tmp/my-flink-job.jar
+```
+
+In case the job JAR or any additional artifact already available remotely via DFS or HTTP(S), Flink will simply fetch it on the deployed JobManager pod:

Review Comment:
   nit: *is* already available



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
