Github user suryag10 commented on the issue:

    https://github.com/apache/spark/pull/21669
  
    Hi Ilan,
    I was able to get Kerberos working with one workaround (for which I am still 
trying to produce a full fix) and one fix.
    The fix is the one I had commented on earlier, and is as follows:
    
    Code changed from:

        .withName(KRB_FILE_VOLUME)
        .withMountPath(KRB_FILE_DIR_PATH)

    to:

        .withName(KRB_FILE_VOLUME)
        .withMountPath(KRB_FILE_DIR_PATH + "/krb5.conf")
        .withSubPath("krb5.conf")
    
    The workaround is described as follows.
    What I observed was that when an executor pod was being created, the following 
function (in KerberosConfExecutorFeatureStep.scala) was being called twice:

    override def configurePod(pod: SparkPod): SparkPod = {

    and the following error was raised:
    
    2018-09-02 05:01:12 ERROR Utils:91 - Uncaught exception in thread 
kubernetes-executor-snapshots-subscribers-1
    io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
POST at: https://kubernetes.default.svc/api/v1/namespaces/default/pods. 
Message: Pod "kerberos-1535864447099-exec-1" is invalid: [spec.volumes[4].name: 
Duplicate value: "hadoop-secret", spec.volumes[5].name: Duplicate value: 
"krb5-file", spec.containers[0].volumeMounts[4].mountPath: Invalid value: 
"/mnt/secrets/hadoop-credentials": must be unique, 
spec.containers[0].volumeMounts[5].mountPath: Invalid value: "/etc/krb5.conf": 
must be unique]. Received status: Status(apiVersion=v1, code=422, 
details=StatusDetails(causes=[StatusCause(field=spec.volumes[4].name, 
message=Duplicate value: "hadoop-secret", reason=FieldValueDuplicate, 
additionalProperties={}), StatusCause(field=spec.volumes[5].name, 
message=Duplicate value: "krb5-file", reason=FieldValueDuplicate, 
additionalProperties={}), 
StatusCause(field=spec.containers[0].volumeMounts[4].mountPath, message=Invalid 
value: "/mnt/secrets/hadoop-credentials": must
  be unique, reason=FieldValueInvalid, additionalProperties={}), 
StatusCause(field=spec.containers[0].volumeMounts[5].mountPath, message=Invalid 
value: "/etc/krb5.conf": must be unique, reason=FieldValueInvalid, 
additionalProperties={})], group=null, kind=Pod, 
name=kerberos-1535864447099-exec-1, retryAfterSeconds=null, uid=null, 
additionalProperties={}), kind=Status, message=Pod 
"kerberos-1535864447099-exec-1" is invalid: [spec.volumes[4].name: Duplicate 
value: "hadoop-secret", spec.volumes[5].name: Duplicate value: "krb5-file", 
spec.containers[0].volumeMounts[4].mountPath: Invalid value: 
"/mnt/secrets/hadoop-credentials": must be unique, 
spec.containers[0].volumeMounts[5].mountPath: Invalid value: "/etc/krb5.conf": 
must be unique], metadata=ListMeta(resourceVersion=null, selfLink=null, 
additionalProperties={}), reason=Invalid, status=Failure, 
additionalProperties={}).
    
    So as a workaround I made sure the function above is not called a second time 
when an executor pod is being created.

    With the above I was able to run both the hdfs and hive sample test cases 
successfully.
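    The guard can be sketched roughly like this (a simplified model with stand-in 
types, not the actual SparkPod / fabric8 API): if the Kerberos volume is already 
present on the pod, the step becomes a no-op, so calling it twice cannot produce 
the duplicate-volume 422 error above.

    ```scala
    // Simplified sketch of an idempotent configurePod-style step.
    // Volume and Pod are stand-ins for the real Kubernetes model classes.
    case class Volume(name: String)
    case class Pod(volumes: List[Volume])

    def addKrbVolume(pod: Pod, volName: String): Pod =
      if (pod.volumes.exists(_.name == volName)) pod // already configured: skip
      else pod.copy(volumes = pod.volumes :+ Volume(volName))

    val once  = addKrbVolume(Pod(Nil), "krb5-file")
    val twice = addKrbVolume(once, "krb5-file") // second call leaves pod unchanged
    ```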
    
    Regards
    Surya


