[ 
https://issues.apache.org/jira/browse/SPARK-27742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16851924#comment-16851924
 ] 

Stavros Kontopoulos edited comment on SPARK-27742 at 5/30/19 2:58 PM:
----------------------------------------------------------------------

{quote}From client side the max lifetime can be only decreased for security 
reasons + see my previous point.
{quote}
In general, since Kafka allows me to set this value, I should be able to do so.

"Max lifetime for the token in milliseconds. If the value is -1, then 
MaxLifeTime will default to a server side config value 
(delegation.token.max.lifetime.ms)."

For the user this means more flexibility: an upper limit can be set per job.

Again, I see the point of repeatedly getting new tokens with no max lifetime. 
From a security perspective, if I am not mistaken, this allows the user to hold 
a ticket forever, which is less restrictive than setting a hard limit. Imagine 
you have multiple Spark batch apps and you want to set limits for 
administration reasons, e.g. no user is allowed to have access for more than 5 
days (for streaming jobs you need no limits). Anyway, my 2 cents.
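The rule in the quote above (the client can only decrease the max lifetime, and -1 defers to the broker's {{delegation.token.max.lifetime.ms}}) can be sketched as follows. This is illustrative logic in Python, not Kafka's actual implementation, and the 7-day broker-side value is an assumed example:

```python
# Assumed example value for the broker config delegation.token.max.lifetime.ms
SERVER_MAX_LIFETIME_MS = 7 * 24 * 60 * 60 * 1000  # 7 days

def effective_max_lifetime_ms(requested_ms: int) -> int:
    """Resolve a client's requested MaxLifeTime against the broker-side cap.

    -1 defers to the server-side config; any other value can only tighten
    (never exceed) the server-side maximum.
    """
    if requested_ms == -1:
        return SERVER_MAX_LIFETIME_MS
    return min(requested_ms, SERVER_MAX_LIFETIME_MS)

# A 5-day per-job cap is honored; a 30-day request is clamped to the broker max.
print(effective_max_lifetime_ms(-1))
print(effective_max_lifetime_ms(5 * 24 * 60 * 60 * 1000))
print(effective_max_lifetime_ms(30 * 24 * 60 * 60 * 1000))
```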
{quote}There is no possibility to obtain token for anybody else (pls see the 
comment in the code).
{quote}
When proxy users are supported, I guess there will be.



> Security Support in Sources and Sinks for SS and Batch
> ------------------------------------------------------
>
>                 Key: SPARK-27742
>                 URL: https://issues.apache.org/jira/browse/SPARK-27742
>             Project: Spark
>          Issue Type: Brainstorming
>          Components: SQL, Structured Streaming
>    Affects Versions: 3.0.0
>            Reporter: Stavros Kontopoulos
>            Priority: Major
>
> As discussed with [~erikerlandson] on the [Big Data on K8s 
> UG|https://docs.google.com/document/d/1pnF38NF6N5eM8DlK088XUW85Vms4V2uTsGZvSp8MNIA]
>  it would be good to capture the current status and identify work that needs 
> to be done to secure Spark when accessing sources and sinks. For example, 
> what is the status of SSL and Kerberos support in different scenarios? The 
> big concern nowadays is how to secure data pipelines end-to-end. 
> Note: Not sure if this overlaps with some other ticket. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
