Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19206
I also meant that I don't think this is a problem, and the "fix" is actually
incorrect. I disagree with "this is a feature", because if the path resolves to
an HA namespace and YARN doesn't know
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/19206
Your PR title says `Spark-19206`, but SPARK-19206 is actually about
`Update outdated parameter descriptions in external-kafka module`, so you
should probably reference a different JIRA. That's all.
Github user Chaos-Ju commented on the issue:
https://github.com/apache/spark/pull/19206
@vanzin @srowen Thanks for your notice and comments! I mean that to make the
YARN cluster know about the federated HDFS service, we would have to distribute
the mount table XML across the cluster and restart
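For context, the mount table being discussed is the ViewFS federation mapping that HDFS clients read from `core-site.xml`. A minimal sketch of what such entries look like, with a hypothetical cluster name and NameNode hostnames (only the `fs.viewfs.mounttable.<name>.link.<path>` property pattern is standard Hadoop configuration; the values here are illustrative):

```xml
<!-- core-site.xml: ViewFS mount table for a federated cluster.
     "clusterX" and the hostnames are placeholders, not from the PR. -->
<property>
  <name>fs.viewfs.mounttable.clusterX.link./data</name>
  <value>hdfs://nn1.example.com:8020/data</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./user</name>
  <value>hdfs://nn2.example.com:8020/user</value>
</property>
```

Unless every node (including the YARN services) carries this file, a `viewfs://clusterX/...` path cannot be resolved, which is the distribution-and-restart cost the comment refers to.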
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19206
Also, doesn't this mean that your YARN service does not know about the
federated HDFS service?
The same thing can happen if you instead have an HA setup that YARN doesn't
know about. It will
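The HA case is analogous: a logical nameservice only resolves on nodes whose `hdfs-site.xml` declares it. A minimal sketch of the standard HA client configuration, with a hypothetical nameservice name and hosts (the property names are standard HDFS HA settings; the values are illustrative):

```xml
<!-- hdfs-site.xml: logical nameservice "mycluster" (placeholder name).
     A client without these entries cannot resolve hdfs://mycluster/. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>host1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>host2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```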
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19206
@Chaos-Ju Sean means the bug you reference in the title has nothing to do
with the code.
---
-
To unsubscribe, e-mail:
Github user Chaos-Ju commented on the issue:
https://github.com/apache/spark/pull/19206
@srowen Do you think we should close the JIRA, and that this is pointless?
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19206
@Chaos-Ju this is connected to the wrong JIRA