Actually, our documentation already provides a command[1] that grants the
default service account the required permissions.
It achieves roughly the same thing as your YAML file.

$ kubectl create clusterrolebinding flink-role-binding-default
--clusterrole=edit --serviceaccount=default:default


Unfortunately, mounting a PVC is not supported yet. We plan to support it
through the pod template[2], but there has not been much progress so far.
However, Flink can use NFS directly[3].
Could you try configuring the checkpoint path to an NFS path?
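
For example (just a rough sketch; the /mnt/flink-nfs mount point and the
paths are placeholders), assuming the NFS share is mounted at the same path
on every node, flink-conf.yaml could point the checkpoint storage at it via
the local filesystem scheme:

state.checkpoints.dir: file:///mnt/flink-nfs/checkpoints
state.savepoints.dir: file:///mnt/flink-nfs/savepoints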

Moreover, in our production environment we use S3/AliyunOSS for the
checkpoint storage. Flink provides the corresponding filesystem plugins in
the $FLINK_HOME/opt directory.
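
For example (a sketch only; the bucket name is a placeholder and the jar
version should match your distribution), the S3 plugin can be enabled by
copying it into the plugins directory:

$ mkdir -p $FLINK_HOME/plugins/s3-fs-presto
$ cp $FLINK_HOME/opt/flink-s3-fs-presto-1.11.2.jar $FLINK_HOME/plugins/s3-fs-presto/

and then pointing the checkpoint directory at the bucket in flink-conf.yaml:

state.checkpoints.dir: s3://<your-bucket>/flink-checkpoints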


[1]. https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#rbac
[2]. https://issues.apache.org/jira/browse/FLINK-15656
[3]. https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/#local-file-system

Best,
Yang


Boris Lublinsky <boris.lublin...@lightbend.com> wrote on Wed, Nov 4, 2020 at 2:42 AM:

> Thanks a lot,
> this helped a lot, and I did make it work. It probably would have helped
> if the documentation explicitly gave an example of a Role/RoleBinding,
> something like:
>
> kubectl apply -f - <<EOF
> apiVersion: rbac.authorization.k8s.io/v1
> kind: Role
> metadata:
>   name: flink-role
>   namespace: default
> rules:
> - apiGroups: ["", "apps"]
>   resources: ["deployments", "pods"]
>   verbs: ["get", "list", "watch", "create", "update", "delete"]
> ---
> apiVersion: rbac.authorization.k8s.io/v1
> kind: RoleBinding
> metadata:
>   name: flink-role-binding
>   namespace: default
> subjects:
> - kind: ServiceAccount
>   name: flink
>   namespace: default
> roleRef:
>   kind: Role
>   name: flink-role
>   apiGroup: rbac.authorization.k8s.io
> EOF
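>
> (The flink service account referenced above also needs to exist, e.g.
> created with "kubectl create serviceaccount flink -n default", and the
> cluster has to be told to use it; judging from the configuration page
> that would be something like
> -Dkubernetes.jobmanager.service-account=flink, but please double-check
> the option name.)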
>
>
> And now I see that I do not really need the operator; I can do it much
> more simply with this approach.
>
> The only remaining question is how I can mount an additional PVC for
> checkpointing. When running on K8s, we typically use NFS, mount it into
> the pods, and specify the location in flink-conf.yaml.
>
> Do you have an example somewhere of doing this?
>
>
> On Nov 3, 2020, at 7:02 AM, Yang Wang <danrtsey...@gmail.com> wrote:
>
> You could follow the guide[1] to output the logs to the console so that
> they can be accessed via "kubectl logs". And from 1.12 on, this will be
> the default.
>
> [1].
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#log-files
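>
> As a rough sketch (the exact layout pattern below is only an example;
> please check the linked guide for the recommended configuration), adding
> a console appender to conf/log4j.properties would look roughly like:
>
> appender.console.name = ConsoleAppender
> appender.console.type = CONSOLE
> appender.console.layout.type = PatternLayout
> appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
> rootLogger.appenderRef.console.ref = ConsoleAppender
>
> after which "kubectl logs <pod-name>" shows the JobManager log.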
>
>
> Best,
> Yang
>
> Chesnay Schepler <ches...@apache.org> wrote on Tue, Nov 3, 2020 at 5:32 PM:
>
>> 1) -Dkubernetes.namespace
>> 2) The -D syntax is actually just a way to specify configuration options
>> from the command line. As such, the configuration page
>> <https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/config.html#kubernetes>
>> lists all options.
>> 3) If the diff between the configurations isn't too big, you could maybe
>> have a shared base config and specify the special options on the
>> command line (see 2)). But if you truly need a separate file, then I don't
>> think there is another way than the one you described.
>> 4) Yes, the configuration is stored as a ConfigMap.
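>>
>> For illustration (the namespace, paths, and cluster-id below are
>> placeholders), 1)-3) combined could look like:
>>
>> FLINK_CONF_DIR=/path/to/custom-conf \
>>   ./bin/kubernetes-session.sh \
>>   -Dkubernetes.namespace=my-namespace \
>>   -Dkubernetes.cluster-id=my-session
>>
>> and the ConfigMap from 4) can be listed with
>> "kubectl get configmaps -n my-namespace".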
>>
>> On 11/3/2020 12:17 AM, Boris Lublinsky wrote:
>>
>> Hi,
>> I was trying to follow the instructions at
>> https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/native_kubernetes.html
>> but none of them really worked.
>>
>> For the application mode I tried:
>>
>> /Users/boris/Support/flink-1.11.2/bin/flink run-application -t kubernetes-application \
>>   -Dkubernetes.cluster-id=flink-native-k8s-application \
>>   -Dtaskmanager.memory.process.size=4096m \
>>   -Dkubernetes.taskmanager.cpu=2 \
>>   -Dtaskmanager.numberOfTaskSlots=4 \
>>   -Dkubernetes.container.image=flink:1.11.2-scala_2.12 \
>>   local:///opt/flink/examples/batch/WordCount.jar
>>
>> And for the session mode:
>>
>> /Users/boris/Support/flink-1.11.2/bin/kubernetes-session.sh \
>>   -Dkubernetes.cluster-id=flink-native-k8s-session \
>>   -Dtaskmanager.memory.process.size=4096m \
>>   -Dkubernetes.taskmanager.cpu=2 \
>>   -Dtaskmanager.numberOfTaskSlots=4 \
>>   -Dresourcemanager.taskmanager-timeout=3600000
>>
>>
>> Both tried to create the JM deployment, but in both cases the actual
>> container creation failed with no explanation.
>>
>>
>> That's the only log that I can see:
>>
>> kubectl logs flink-native-k8s-application-5d686d5457-lnttw
>> Start command : /bin/bash -c $JAVA_HOME/bin/java -classpath
>> $FLINK_CLASSPATH -Xmx1073741824 -Xms1073741824
>> -XX:MaxMetaspaceSize=268435456 -Dlog.file=/opt/flink/log/jobmanager.log
>> -Dlogback.configurationFile=file:/opt/flink/conf/logback.xml
>> -Dlog4j.configurationFile=file:/opt/flink/conf/log4j.properties
>> org.apache.flink.kubernetes.entrypoint.KubernetesApplicationClusterEntrypoint
>> 1> /opt/flink/log/jobmanager.out 2> /opt/flink/log/jobmanager.err
>>
>> A couple of additional questions:
>>
>> 1. Is there a way to specify the namespace where the deployment is created?
>> 2. Is there a list of -D parameters that can be specified?
>> 3. If I want a custom flink-conf.yaml for every invocation, do I have to
>> create it in a separate location and then use something like
>> FLINK_CONF_DIR=/Users/boris/Support/flink-1.11.2/conf for every run?
>> Or is there a simpler way?
>> 4. If I understand correctly, this creates a ConfigMap that is actually
>> used.
>>
>>
>>
>
