Hi Rui,
I agree with you that we can implement pluggable DT providers first. I have
created a new ticket to track it:
https://issues.apache.org/jira/browse/FLINK-21232.
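As a rough sketch of the pluggable part (the DelegationTokenProvider
interface below is purely illustrative, not an agreed Flink API), providers
could be discovered through Java's standard ServiceLoader mechanism, which
is, as far as I can tell, also how Spark wires its providers in:

import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Illustrative sketch only. Implementations would be listed under
// META-INF/services/ and picked up from the classpath at runtime.
public final class DelegationTokenProviderLoader {

    /** Hypothetical provider contract, one implementation per component. */
    public interface DelegationTokenProvider {
        String serviceName();
    }

    public static List<DelegationTokenProvider> load() {
        List<DelegationTokenProvider> providers = new ArrayList<>();
        for (DelegationTokenProvider p :
                ServiceLoader.load(DelegationTokenProvider.class)) {
            providers.add(p);
        }
        return providers;
    }
}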
Spark’s HadoopDelegationTokenManager could run on both the client and the
driver (Application Master) sides. On the client side,
Hi Jie,
Thanks for the investigation. I think we can first implement pluggable DT
providers and add renewal abilities incrementally. I'm also curious where
Spark runs its HadoopDelegationTokenManager when renewal is enabled,
because it seems the HadoopDelegationTokenManager needs access to the keytab to
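To make the keytab concern concrete, here is a rough sketch of a renewal
loop; the class name and scheduling details are illustrative, only the
UserGroupInformation calls are actual Hadoop API. Delegation tokens have a
bounded lifetime, so a long-running process has to re-login from the keytab
before it can fetch fresh ones:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.security.UserGroupInformation;

// Sketch: tokens expire, so renewal means logging in from the keytab again
// and re-obtaining tokens, which is why the keytab must be reachable.
public final class TokenRenewalSketch {
    public static void scheduleRenewal(String principal, String keytabPath)
            throws Exception {
        UserGroupInformation ugi =
            UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytabPath);
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // Re-login if the TGT is close to expiring, then obtain
                // fresh delegation tokens (token fetching omitted here).
                ugi.checkTGTAndReloginFromKeytab();
            } catch (Exception e) {
                // A real implementation would log and retry on the next tick.
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}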
Hi Till,
Sorry for the late response; I just did some investigation of Spark. Spark
adopts the SPI approach to obtain delegation tokens for its different
components. It has a
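For reference, the provider contract in Spark looks roughly like the
following. The real thing is a Scala trait
(org.apache.spark.security.HadoopDelegationTokenProvider); this Java
rendering approximates its shape and leaves out the SparkConf parameters:

import java.util.Optional;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;

// Approximate Java rendering of Spark's Scala trait. Each component
// (HDFS, HBase, Hive, ...) ships one implementation, discovered via SPI.
public interface HadoopDelegationTokenProvider {

    // Unique name of the service this provider obtains tokens for.
    String serviceName();

    // Whether tokens are needed at all, e.g. security is enabled.
    boolean delegationTokensRequired(Configuration hadoopConf);

    // Obtain tokens, add them to 'creds', and optionally return the next
    // renewal time in milliseconds.
    Optional<Long> obtainDelegationTokens(Configuration hadoopConf, Credentials creds);
}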
Thanks Jie for driving this discussion. I also prefer a pluggable
delegation token provider. And I think users can use configuration options
to specify the Hive conf path, similar to how users specify a Hive catalog.
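For comparison, the Hive conf path is already passed explicitly when a
Hive catalog is created through the Table API; the paths and names below
are placeholders:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

// "/opt/hive-conf" stands in for the directory containing hive-site.xml.
public final class HiveCatalogExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().build());
        HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive-conf");
        tableEnv.registerCatalog("myhive", hive);
    }
}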
On Wed, Jan 13, 2021 at 4:51 PM Till Rohrmann wrote:
Hi Jie Wang,
thanks for starting this discussion. To me the SPI approach sounds better
because it is not as brittle as using reflection. Concerning the
configuration, we could think about introducing some Hive-specific
configuration options which allow us to specify these paths (a sketch
follows below). How are other
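Something along the following lines, using Flink's ConfigOptions builder;
the concrete key name is just a placeholder:

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

// Sketch of a Hive-specific option; the key name is hypothetical.
public final class HiveSecurityOptions {
    public static final ConfigOption<String> HIVE_CONF_DIR =
        ConfigOptions.key("security.kerberos.hive.conf-dir")
            .stringType()
            .noDefaultValue()
            .withDescription(
                "Directory containing hive-site.xml, used to obtain the "
                    + "Hive delegation token on submission.");
}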
Hi everyone,
Currently, the Hive delegation token is not obtained when Flink submits an
application in YARN mode using the kinit approach. The ticket is
https://issues.apache.org/jira/browse/FLINK-20714. I'd like to start a
discussion about how to support this feature.
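For context, obtaining the token essentially means asking the Hive
metastore for a delegation token on behalf of the kinit'ed user and adding
it to the credentials shipped with the YARN application. A rough sketch
(exact client signatures vary across Hive versions, and the token alias
below is only a placeholder):

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// Sketch: fetch a Hive delegation token from the metastore and stash it in
// the Credentials that travel with the YARN application.
public final class HiveTokenSketch {
    public static void addHiveToken(HiveConf hiveConf, Credentials creds)
            throws Exception {
        String user = UserGroupInformation.getCurrentUser().getUserName();
        HiveMetaStoreClient client = new HiveMetaStoreClient(hiveConf);
        try {
            // Owner and renewer are both the submitting (kinit'ed) user here.
            String tokenStr = client.getDelegationToken(user, user);
            Token<?> token = new Token<>();
            token.decodeFromUrlString(tokenStr);
            creds.addToken(new Text("hive.metastore.delegation.token"), token);
        } finally {
            client.close();
        }
    }
}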
Maybe we have two options:
1. Using