[GitHub] flink issue #3692: FLINK-5974 Added configurations to support mesos-dns host...

2017-05-04 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/3692
  
@tillrohrmann Please let me know if you are waiting for any clarifications 
before this PR can be merged.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3692: FLINK-5974 Added configurations to support mesos-d...

2017-04-06 Thread vijikarthi
GitHub user vijikarthi opened a pull request:

https://github.com/apache/flink/pull/3692

FLINK-5974 Added configurations to support mesos-dns hostname resolution

This PR addresses the FLINK-5974 requirements, which handle dynamic 
hostname resolution for the JM and TM components, especially in deployment 
environments like Mesos/DC/OS.

It addresses two main pieces of functionality.

a) Dynamic host name configuration

Support for specifying the hostname for the JM/TM is already available through 
the `jobmanager.rpc.address` and `taskmanager.hostname` configurations.

However, in Mesos/DC/OS types of environments, each task container can be 
looked up using a hostname alias which is derived using the format 
`..mesos`, where the service discovery is managed through 
`mesos-dns`. To support this dynamic hostname lookup, we have introduced a new 
configuration `mesos.resourcemanager.tasks.hostname` which takes the format 
`_TASK.`. 

When this property is supplied, the `_TASK` token is replaced with the 
`TASK_ID` of the TM container, and the final derived string is used to 
populate the `taskmanager.hostname` configuration.
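The token substitution described above can be sketched as follows. This is a minimal illustration, not Flink's actual implementation; the class and method names are hypothetical.

```java
// Hypothetical sketch of the `_TASK` token substitution described in the PR.
// The template comes from `mesos.resourcemanager.tasks.hostname`; the result
// would be used as the `taskmanager.hostname` value for the TM container.
public class MesosHostnameResolver {

    // Replace the `_TASK` token in the configured hostname template with the
    // Mesos task ID of the TM container.
    public static String resolveHostname(String template, String taskId) {
        return template.replace("_TASK", taskId);
    }

    public static void main(String[] args) {
        // e.g. with -Dmesos.resourcemanager.tasks.hostname=_TASK.flink.mesos
        String hostname = resolveHostname("_TASK.flink.mesos", "taskmanager-00001");
        System.out.println(hostname); // taskmanager-00001.flink.mesos
    }
}
```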

For example, in a DC/OS setup one could supply the configuration as 
`-Dmesos.resourcemanager.tasks.hostname=_TASK.{{FRAMEWORK_NAME}}.mesos`, where 
`FRAMEWORK_NAME` could be `flink`.

Please refer to 
https://docs.mesosphere.com/1.9/usage/service-discovery/mesos-dns/service-naming/#a-records
 for more details on how Mesos service discovery works.

b) Support to run *any* bootstrap script prior to executing the TM startup script

Currently, the TM boot script `mesos-taskmanager.sh` is the only script 
passed to the Mesos launcher for booting the TM container. 

In a DC/OS environment where service discovery is common, we need a mechanism 
to wait until the service discovery records are available and the hostname is 
indeed resolvable before launching the TM boot script. 

A DC/OS deployment offers a way to validate and wait for the service discovery 
records to be available before launching the tasks. Please see the links below 
for more details on how it works.

https://mesosphere.github.io/dcos-commons/developer-guide.html#task-bootstrap
https://github.com/mesosphere/dcos-commons/blob/master/sdk/bootstrap/main.go

To support this, we have introduced a new configuration 
`mesos.resourcemanager.tasks.cmd-prefix=$FLINK_HOME/bin/bootstrap` to provide 
an executable/script that can be configured to run prior to executing the TM 
bootstrap command. 
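For illustration, a cmd-prefix bootstrap executable is essentially a wait-until-resolvable loop before handing off to the TM script. This sketch is not the dcos-commons bootstrap itself; it uses only the JDK, and the hostname and retry parameters are hypothetical.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch of what a cmd-prefix bootstrap step does: block until
// the task's DNS record resolves, then allow the TM startup script to run.
public class WaitForDns {

    // Returns true once the hostname resolves, retrying up to maxAttempts
    // times with a fixed delay between attempts.
    public static boolean waitUntilResolvable(String hostname, int maxAttempts, long delayMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                InetAddress.getByName(hostname);
                return true; // record is available; safe to launch the TM
            } catch (UnknownHostException e) {
                Thread.sleep(delayMillis); // record not published yet; retry
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // "localhost" always resolves, so this succeeds on the first attempt.
        System.out.println(waitUntilResolvable("localhost", 3, 100));
    }
}
```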

This feature *currently* works *only for Docker-based images*, where the 
bootstrap script can be pre-baked into a specific location that can be used to 
configure `mesos.resourcemanager.tasks.cmd-prefix`.

While both implementations help address Mesos/DC/OS types of deployment, the 
implementation is agnostic of these environments and can be used for any 
generic deployment that may need such a facility.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vijikarthi/flink FLINK-5974-Master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3692.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3692


commit aeb432dc7fe8bcdd5faa49b8ad5dfb5630ea0747
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2017-04-06T16:48:39Z

FLINK-5974 Added configurations to support mesos-dns hostname resolution






[GitHub] flink issue #3600: [FLINK-6117]Make setting of 'zookeeper.sasl.disable' work...

2017-03-29 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/3600
  
The changes look good to me. 




[GitHub] flink issue #3600: [FLINK-6117]Make setting of 'zookeeper.sasl.disable' work...

2017-03-28 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/3600
  
The default ZK SASL client behavior is to enable the SASL client, and to be 
consistent it makes sense for us to leave the default option enabled.




[GitHub] flink pull request #3600: [FLINK-6117]Make setting of 'zookeeper.sasl.disabl...

2017-03-28 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/3600#discussion_r108284815
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/SecurityOptions.java ---
@@ -55,6 +55,10 @@
//  ZooKeeper Security Options
// 

 
+   public static final ConfigOption<Boolean> ZOOKEEPER_SASL_DISABLE =
+   key("zookeeper.sasl.disable")
+   .defaultValue(true);
--- End diff --

Can the default value be false (meaning the SASL client is always enabled) to 
be consistent with the ZK SASL client module?




[GitHub] flink pull request #3566: [FLINK-6117]makes setting of 'zookeeper.sasl.disab...

2017-03-19 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/3566#discussion_r106829135
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java 
---
@@ -89,6 +90,7 @@ public static CuratorFramework 
startCuratorFramework(Configuration configuration
 
boolean disableSaslClient = configuration.getBoolean(ConfigConstants.ZOOKEEPER_SASL_DISABLE,
    ConfigConstants.DEFAULT_ZOOKEEPER_SASL_DISABLE);
+   System.setProperty(ZooKeeperSaslClient.ENABLE_CLIENT_SASL_KEY, String.valueOf(!disableSaslClient));
--- End diff --

Please move this logic to `ZooKeeperModule` class 
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/ZooKeeperModule.java#L49
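The suggested move can be sketched as follows. The class here is a simplified stand-in for Flink's `ZooKeeperModule`, not its actual code; `zookeeper.sasl.client` is the real value of ZooKeeper's `ZooKeeperSaslClient.ENABLE_CLIENT_SASL_KEY`.

```java
// Sketch of deriving the ZooKeeper client SASL system property from Flink's
// `zookeeper.sasl.disable` flag inside a security module's install step,
// rather than in ZooKeeperUtils. Illustrative only.
public class ZooKeeperSaslSetup {

    // Real ZooKeeper system property name (ZooKeeperSaslClient.ENABLE_CLIENT_SASL_KEY).
    static final String ENABLE_CLIENT_SASL_KEY = "zookeeper.sasl.client";

    // Equivalent of the logic proposed for ZooKeeperModule#install():
    // disabling SASL in Flink config turns the ZK client SASL property off.
    public static void install(boolean disableSaslClient) {
        System.setProperty(ENABLE_CLIENT_SASL_KEY, String.valueOf(!disableSaslClient));
    }

    public static void main(String[] args) {
        install(true);
        System.out.println(System.getProperty(ENABLE_CLIENT_SASL_KEY)); // false
    }
}
```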




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2017-03-17 Thread vijikarthi
Github user vijikarthi closed the pull request at:

https://github.com/apache/flink/pull/2425




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2017-03-16 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@StephanEwen It's absolutely fine with me and I will cancel this PR.




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2017-03-14 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@StephanEwen The shared secret can be considered an additional security 
extension on top of the TLS integration; it designates only an authorized 
identity to execute actions on a running cluster.




[GitHub] flink issue #3486: [FLINK-5981][SECURITY]make ssl version and cipher suites ...

2017-03-14 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/3486
  
I think the patch looks good. Is there a specific protocol version + cipher 
suite combination that the user should be aware of and that needs to be 
documented?




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-11-17 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@StephanEwen, @mxm I have updated the documentation changes as suggested, 
moved common code from BlobUtils to SecurityContext, and added a new 
ConfigOptions class for security configuration lookup. 

>
The cookie is added to every single message/buffer that is transferred. 
That is too much - securing the integrity of the stream is the responsibility 
of the encryption layer. The cookie should be added only to request messages 
that establish connections.

I have added new handler code to front-load the secure cookie validation. 
These handlers are added to both the `NettyServer` and `NettyClient` pipelines 
right after the SSL handler is added. I have still kept the original code that 
passes the cookie for every message (I will remove that logic if you are okay 
with the handler implementation).

Please review and let me know your feedback.




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-11-09 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@StephanEwen @mxm - Could you please review the proposed change and let me 
know if you are okay with it.




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-11-06 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
>
The cookie is added to every single message/buffer that is transferred. 
That is too much - securing the integrity of the stream is the responsibility 
of the encryption layer. The cookie should be added only to request messages 
that establish connections.

I will change the code to handle the cookie right after the SSL handshake 
using a new handler and drop the cookie passing for every message. The handler 
will be added to the pipelines of both `NettyClient` and `NettyServer`. The 
client will send the cookie when the channel becomes active, and the server 
will validate it and keep track of the clients that are authorized. 

Here is the pseudo-code for the client and server handlers. Please take a look 
and let me know if you are okay with this approach, and I will modify the code.

---
public static class ClientCookieHandler extends ChannelInboundHandlerAdapter {

    private static final Charset DEFAULT_CHARSET = Charset.forName("utf-8");

    private final String secureCookie;

    public ClientCookieHandler(String secureCookie) {
        this.secureCookie = secureCookie;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        super.channelActive(ctx);

        if (this.secureCookie != null && this.secureCookie.length() != 0) {
            // send the length-prefixed cookie as soon as the channel is active
            final byte[] cookie = secureCookie.getBytes(DEFAULT_CHARSET);
            final ByteBuf buffer = Unpooled.buffer(4 + cookie.length);
            buffer.writeInt(cookie.length);
            buffer.writeBytes(cookie);
            ctx.writeAndFlush(buffer);
        }
    }
}

public static class ServerCookieDecoder extends MessageToMessageDecoder<ByteBuf> {

    private static final Charset DEFAULT_CHARSET = Charset.forName("utf-8");

    private final String secureCookie;

    // channels that have already presented a valid cookie
    private final List<Channel> channelList = new ArrayList<>();

    public ServerCookieDecoder(String secureCookie) {
        this.secureCookie = secureCookie;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf msg, List<Object> out) throws Exception {

        if (secureCookie == null || secureCookie.length() == 0) {
            return;
        }

        if (channelList.contains(ctx.channel())) {
            return;
        }

        // read the cookie based on the cookie length passed
        int cookieLength = msg.readInt();
        if (cookieLength != secureCookie.getBytes(DEFAULT_CHARSET).length) {
            String message = "Cookie length does not match with source cookie. Invalid secure cookie passed.";
            throw new IllegalStateException(message);
        }

        // read only if the cookie length is greater than zero
        if (cookieLength > 0) {

            final byte[] buffer = new byte[cookieLength];
            msg.readBytes(buffer, 0, cookieLength);

            if (!Arrays.equals(secureCookie.getBytes(DEFAULT_CHARSET), buffer)) {
                LOG.error("Secure cookie from the client is not matching with the server's identity");
                throw new IllegalStateException("Invalid secure cookie passed.");
            }

            LOG.info("Secure cookie validation passed");

            channelList.add(ctx.channel());
        }
    }
}
--- 




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-11-03 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@mxm - Sorry that I missed addressing some of your comments. I attached a 
patch that includes the Netty null-precondition validation and fixes the Blob 
service cookie length issue. Please take a look and see if they are okay. 
Thanks for your patience.




[GitHub] flink issue #2734: Keytab & TLS support for Flink on Mesos Setup

2016-11-01 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2734
  
@mxm - Could you please take a look?




[GitHub] flink pull request #2734: Keytab & TLS support for Flink on Mesos Setup

2016-10-31 Thread vijikarthi
GitHub user vijikarthi opened a pull request:

https://github.com/apache/flink/pull/2734

Keytab & TLS support for Flink on Mesos Setup

This PR addresses the issues below:
- FLINK-4826 (Keytab support on Mesos environment) 
- FLINK-4918 (TLS support for Mesos Artifact Server)

For keytab support, the Hadoop configuration files (core-site and hdfs-site 
XML) are automatically uploaded to the artifact server from the directory path 
provided through the HADOOP_CONF_DIR environment variable, if Hadoop security 
is enabled.

For Mesos Artifact Server TLS support, a new configuration 
`mesos.resourcemanager.artifactserver.ssl.enabled` is introduced, which can be 
used to selectively toggle transport layer security for the Mesos artifact 
server. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vijikarthi/flink FLINK-4826

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2734.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2734


commit a05d6ac4a18d2c17c6e43f7a04eb5756eb5b14dc
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2016-10-13T22:45:35Z

FLINK-4826 Added keytab support to mesos container

commit 87e8ad9863c275bb237ab213d4ecd52b5458d184
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2016-10-31T16:54:03Z

FLINK-4918 Added SSL handler to artifact server






[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-10-30 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
Addressed the multiple-application support and Yarn configuration file changes 
as part of the FLINK-4950 patch.




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-10-26 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r85167083
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java ---
@@ -108,6 +111,11 @@
 
private final Options ALL_OPTIONS;
 
+   private static final String fileName = "yarn-app.ini";
+   private static final String cookieKey = "secureCookie";
--- End diff --

Yes, I will make the change. 
- Do you object to retaining the INI file format and porting the current 
properties file implementation to the INI format (to persist multiple 
application states)?

- Per the current implementation (retrieveCluster), the CLI code fetches the 
application ID from the properties file if it is not supplied as a CLI 
argument. When we support multiple application states, we will expect the 
application ID to always be supplied, since there could be more than one 
application ID and the default functionality will go away. Do you concur? 

>
If we really need to provide backward compatibility support, then we could 
return the application ID from the INI file should there be just one instance 
persisted. If more than one application ID exists, then we throw an error 
indicating that the application ID needs to be supplied as a CLI argument.

Please let me know how you want me to approach and I will make the changes 
accordingly.




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-10-24 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@mxm - Could you please take a look when you get a chance?




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-10-19 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
Thanks @mxm for the review. I will incorporate your feedback and attach the 
patch.

>
When security is enabled, encryption should also be turned on by default. 
Otherwise we will transmit plain-text passwords over the wire.

Yes, it makes sense. I will add a conditional check and throw an error if 
encryption is not enabled when security is enabled.

>
Should there be two modes for network transmission, one with a cookie, one 
without? Do we need 32 bits for the cookie length? We should be precise about 
the maximum length. I saw it is set to 1024 in other places.

Yes, the max cookie length validation is 1024. I will change the code where 
the buffer length was allocated to a high value; instead it will use the byte 
array length read from the message. 

> 
Should we really always send the cookie for every message? The security 
document mentions in T2-3 that we only want to authorize upon the first 
connection.

Yes, we took the approach of passing the secure cookie for every message to 
keep the changes to the current design minimal.

>
Why do we transmit the cookie to the client? This seems like a major 
security concern. The client should always provide the cookie.
edit: I see this has been specified in the document in T2-4. Still, I 
wonder if it would make sense to simply add this now because the workaround to 
fetch the cookie from the JobManager doesn't look appealing.

Good catch. I forgot to revert the code after the merge and it is not 
required. Will fix it. 

>
You added a Yarn specific cookie option which should be part of the general 
options instead.

It was added because the secure cookie can be supplied when using both the 
Yarn session CLI and the Flink CLI. I have provided a detailed explanation in 
one of the comments.

>
You've introduced a new config file to persist the app state. We already 
have the Yarn properties file for that.

I have explained why we need the new config file in one of the comments. 
Please take a look.




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-10-19 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r84169656
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java
 ---
@@ -57,24 +61,37 @@
// constructor in order to work with the generic deserializer.
// 

 
-   static final int HEADER_LENGTH = 4 + 4 + 1; // frame length (4), magic number (4), msg ID (1)
+   static final int HEADER_LENGTH = 4 + 4 + 4 + 1; // frame length (4), magic number (4), Cookie (4), msg ID (1)

static final int MAGIC_NUMBER = 0xBADC0FFE;

+   static final int BUFFER_SIZE = 65536;
+
+   static final Charset DEFAULT_CHARSET = Charset.forName("utf-8");
+
abstract ByteBuf write(ByteBufAllocator allocator) throws Exception;
 
abstract void readFrom(ByteBuf buffer) throws Exception;
 
// 

 
-   private static ByteBuf allocateBuffer(ByteBufAllocator allocator, byte id) {
-   return allocateBuffer(allocator, id, 0);
+   private static ByteBuf allocateBuffer(ByteBufAllocator allocator, byte id, String secureCookie) {
+   return allocateBuffer(allocator, id, secureCookie, 0);
}

-   private static ByteBuf allocateBuffer(ByteBufAllocator allocator, byte id, int length) {
+   private static ByteBuf allocateBuffer(ByteBufAllocator allocator, byte id, String secureCookie, int length) {
+   secureCookie = (secureCookie == null || secureCookie.length() == 0) ? "" : secureCookie;
+   length += secureCookie.getBytes().length;
final ByteBuf buffer = length != 0 ? allocator.directBuffer(HEADER_LENGTH + length) : allocator.directBuffer();
buffer.writeInt(HEADER_LENGTH + length);
buffer.writeInt(MAGIC_NUMBER);
+
+   buffer.writeInt(secureCookie.length());
--- End diff --

Good catch. Will change it.




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-10-19 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r84175964
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java ---
@@ -442,8 +453,10 @@ public static void runInteractiveCli(YarnClusterClient yarnCluster, boolean read
case "quit":
case "stop":

yarnCluster.shutdownCluster();
+   if (yarnCluster.hasBeenShutdown()) {
+   removeAppState(applicationId);
--- End diff --

Will move the logic to the shutdown handler code.




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-10-19 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r84174244
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java ---
@@ -108,6 +111,11 @@
 
private final Options ALL_OPTIONS;
 
+   private static final String fileName = "yarn-app.ini";
+   private static final String cookieKey = "secureCookie";
--- End diff --

I agree it is not manageable to have multiple files, but there are two main 
reasons for introducing this new file.
- The Yarn properties file is stored in the /tmp location, which is accessible 
to all users. We want to store the secure cookie in the user home location to 
prevent cookie leaks.
- The current implementation of the Yarn properties file does not take 
multiple applications (Yarn application IDs) into account, which is resolved 
by the INI file implementation.
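A minimal sketch of per-application state keyed by Yarn application ID. This is illustrative only: the PR uses an INI file in the user's home directory, while this stand-in uses `java.util.Properties` with prefixed keys, and all names here are hypothetical.

```java
import java.util.Properties;

// Illustrative per-application secure-cookie store: one entry per Yarn
// application ID, unlike the single-application Yarn properties file.
public class AppStateStore {

    private final Properties store = new Properties();

    // Persist the secure cookie for a given Yarn application ID.
    public void putCookie(String applicationId, String cookie) {
        store.setProperty(applicationId + ".secureCookie", cookie);
    }

    // Fetch the secure cookie for a given Yarn application ID, or null.
    public String getCookie(String applicationId) {
        return store.getProperty(applicationId + ".secureCookie");
    }

    // Remove the persisted state once the application is shut down.
    public void removeAppState(String applicationId) {
        store.remove(applicationId + ".secureCookie");
    }
}
```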




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-10-19 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r84170399
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java
 ---
@@ -112,9 +129,9 @@ public void write(ChannelHandlerContext ctx, Object 
msg, ChannelPromise promise)
 
// Create the frame length decoder here as it depends on the encoder
//
-   // +------------------+------------------+--------++----------------+
-   // | FRAME LENGTH (4) | MAGIC NUMBER (4) | ID (1) || CUSTOM MESSAGE |
-   // +------------------+------------------+--------++----------------+
+   // +------------------+------------------+------------+--------++----------------+
+   // | FRAME LENGTH (4) | MAGIC NUMBER (4) | COOKIE (4) | ID (1) || CUSTOM MESSAGE |
+   // +------------------+------------------+------------+--------++----------------+
static LengthFieldBasedFrameDecoder createFrameLengthDecoder() {
return new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, -4, 4);
--- End diff --

We don't strip the cookie, and hence there is no need to adjust the decoder.
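To make the decoder arithmetic concrete, here is a small, self-contained sketch of the extended frame layout using plain `java.nio` rather than the actual Netty code; the exact placement of the cookie bytes after the header is an assumption for illustration. Because the frame-length field counts the entire frame including the cookie field, the existing `LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, -4, 4)` settings (length field at offset 0, 4 bytes wide, adjusted by -4, strip 4) still frame correctly.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative encoding of the modified frame layout from the diff:
// | FRAME LENGTH (4) | MAGIC NUMBER (4) | COOKIE LEN (4) | ID (1) | cookie bytes | payload |
public class FrameSketch {

    static final int MAGIC_NUMBER = 0xBADC0FFE;

    public static ByteBuffer encode(byte id, String cookie, byte[] payload) {
        byte[] cookieBytes = cookie.getBytes(StandardCharsets.UTF_8);
        int headerLength = 4 + 4 + 4 + 1; // frame length, magic, cookie length, msg ID
        ByteBuffer buf = ByteBuffer.allocate(headerLength + cookieBytes.length + payload.length);
        buf.putInt(buf.capacity());      // FRAME LENGTH covers the whole frame, cookie included
        buf.putInt(MAGIC_NUMBER);
        buf.putInt(cookieBytes.length);  // COOKIE length field
        buf.put(id);
        buf.put(cookieBytes);            // assumed placement of the cookie bytes
        buf.put(payload);
        buf.flip();
        return buf;
    }

    // The decoder reads this 4-byte field at offset 0 to delimit the frame.
    public static int frameLength(ByteBuffer frame) {
        return frame.getInt(0);
    }
}
```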




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-10-18 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
Resolved merge conflicts and squashed commits to rebase with master




[GitHub] flink issue #2589: FLINK-3932 State Backend Security

2016-10-13 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2589
  
Are we waiting for any additional review?




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-10-10 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@rmetzger can you please take a look at the updated patch




[GitHub] flink issue #2589: FLINK-3932 State Backend Security

2016-10-06 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2589
  
Thanks @mxm . I have just rebased it against the master. Could you please 
merge the code. 




[GitHub] flink issue #2589: FLINK-3932 State Backend Security

2016-10-05 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2589
  
@mxm Can you take a look at this PR?




[GitHub] flink pull request #2589: FLINK-3932 State Backend Security

2016-10-04 Thread vijikarthi
GitHub user vijikarthi opened a pull request:

https://github.com/apache/flink/pull/2589

FLINK-3932 State Backend Security

This PR addresses ZK authorization (ACLs) requirement of FLINK-3932 and its 
dependency FLINK-4667 (Yarn session CLI not using correct ZK namespace in 
secure environment).

No code change has been done for "checkpoint/savepoint data protection" since the default implementation limits access to the user/group. However, the root directory for both checkpoints and savepoints should be configured to a sub-directory under the user home directory with permissions 700 (mainly for the local file system, since the default umask grants both the user and the group RW access). For HDFS, since the user home directory is not accessible by any other user (except the superuser), we don't need to set any additional permissions for the state backend directories.
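As a hedged illustration of the recommendation above (the path layout and API choice here are mine, not part of the patch), restricting a local state-backend root to owner-only access can be sketched with NIO POSIX permissions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class StateDirSetup {
    // Create <userHome>/flink-state/checkpoints with mode 700 so that
    // neither group nor others can read checkpoint data on a local FS.
    static Path createPrivateStateDir(Path userHome) throws IOException {
        Set<PosixFilePermission> ownerOnly =
                PosixFilePermissions.fromString("rwx------");   // mode 700
        Path dir = userHome.resolve("flink-state").resolve("checkpoints");
        Files.createDirectories(dir);
        Files.setPosixFilePermissions(dir, ownerOnly);          // POSIX systems only
        return dir;
    }

    public static void main(String[] args) throws IOException {
        Path dir = createPrivateStateDir(Files.createTempDirectory("home"));
        // Owner-only: OWNER_READ, OWNER_WRITE, OWNER_EXECUTE
        System.out.println(Files.getPosixFilePermissions(dir));
    }
}
```

On HDFS the equivalent would be the user home directory's default access control rather than an explicit chmod, as the comment above notes.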


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vijikarthi/flink feature-FLINK-3932

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2589.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2589


commit da5285ac24e2e9fcb8ac493a028aaa3599e82ec3
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2016-09-22T17:10:01Z

FLINK-3932 Added ZK ACL configuration for secure cluster setup

commit 9b9a9304a6d7262c5a56b1871f21fb3fa32b7ce7
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2016-09-23T17:23:19Z

FLINK-4667 Fix for using correct ZK namespace in Yarn deployment






[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-10-03 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
Addressed [FLINK-4635] Netty data transfer authentication (missing piece of 
FLINK-3930)




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-23 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
Thanks @StephanEwen 




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-23 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@rmetzger @StephanEwen are you guys waiting for any inputs from my side?




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-20 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@rmetzger I have added an internals documentation section with details on how the secure cookie is implemented. I will address the missing Netty data transfer secure cookie part in FLINK-4635. Please take a look.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-16 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm I have addressed some of the review feedback and rebased to upstream 
master.




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-09-16 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r79221254
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityContext.java
 ---
@@ -155,6 +157,58 @@ public static void install(SecurityConfiguration config) throws Exception {
installedContext = new SecurityContext(loginUser);
}
 
+   /*
+    * This is a temporary fix to support both Kafka and ZK client libraries
+    * that are expecting the system variable to determine secure cluster
+    */
+   private static void populateJaasConfigSystemProperty(Configuration configuration) {
+
+   //hack since Kafka Login Handler explicitly looks for the property or else it throws an exception
+   //https://github.com/apache/kafka/blob/0.9.0/clients/src/main/java/org/apache/kafka/common/security/kerberos/Login.java#L289
+   if(null == configuration) {
+   System.setProperty("java.security.auth.login.config", "");
--- End diff --

Moved all the hard-coded configuration properties to static variables
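The workaround in this hunk can be sketched stand-alone (the property name is the real JAAS one; the fallback logic is simplified from the patch and the helper name is illustrative):

```java
public class JaasPropertySketch {
    static final String JAVA_SECURITY_AUTH_LOGIN_CONFIG =
            "java.security.auth.login.config";

    // Kafka's and ZooKeeper's login code reads this JVM-wide system
    // property; point it at an empty (or dummy) JAAS file when no explicit
    // configuration is present so their lookups do not throw.
    static void ensureJaasSystemProperty(String jaasFilePath) {
        if (System.getProperty(JAVA_SECURITY_AUTH_LOGIN_CONFIG) == null) {
            System.setProperty(JAVA_SECURITY_AUTH_LOGIN_CONFIG,
                    jaasFilePath == null ? "" : jaasFilePath);
        }
    }

    public static void main(String[] args) {
        // With no prior setting and no path supplied, the property ends up
        // as the empty string, which satisfies the client libraries' lookup.
        ensureJaasSystemProperty(null);
        System.out.println("'" + System.getProperty(JAVA_SECURITY_AUTH_LOGIN_CONFIG) + "'");
    }
}
```

The guard on an existing value matters: a user who already exported a real JAAS file must not have it silently overwritten.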




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-09-16 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r79221107
  
--- Diff: 
flink-clients/src/main/java/org/apache/flink/client/CliFrontend.java ---
@@ -161,6 +161,8 @@ public CliFrontend(String configDir) throws Exception {
"filesystem scheme from configuration.", e);
}
 
+   this.config.setString(ConfigConstants.FLINK_BASE_DIR_PATH_KEY, 
configDirectory.getAbsolutePath());
--- End diff --

Good catch; the implementation takes care of handling the base directory and the immediate "conf" directory, but I will change it to point to the parent directory.




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-09-16 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r79190433
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -1233,6 +1239,9 @@
/** ZooKeeper default leader port. */
public static final int DEFAULT_ZOOKEEPER_LEADER_PORT = 3888;
 
+   /** Defaults for ZK client security **/
+   public static final boolean DEFAULT_ZOOKEEPER_SASL_DISABLE = true;
--- End diff --

I agree, but it can be argued both ways. We could either keep the default as false (enabling SASL client auth unless it is explicitly disabled through the configuration file), or require an explicit opt-in to enable SASL through the configuration settings. I chose the latter, since secure ZK is not a common deployment (mostly), and we have also introduced new security configurations to enable security, so one can configure/adjust the ZK settings at that time. 




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-16 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
>
    @vijikarthi Thanks for the update. Great to see the tests are passing now. 
I'm curious, why did this issue only appear on Travis and not locally?

Kafka/ZK connection setup takes unusually long on Travis, and I don't know the reason. I have noticed a similar comment in the Kafka test code too, where the timeout was changed from 6 sec (default) to 30 sec. I have increased the timeout interval further for the secure run, since it occasionally fails with the lower timeout settings.

The combination of the longer ZK connection time and the client code waiting for the incorrect event (SyncConnected) was the reason it was failing on Travis.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-14 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm The issue is apparently in the ZK client API implementation, which looks up the system configuration property to determine the type of event (SaslAuthenticated/SyncConnected) it should wait for. I have fixed the issue by adding a dummy configuration file that is used only to set the system property when SASL authentication is enabled (apparently both Kafka and ZK rely on this configuration property; I believe they are moving away from using these variables, but for now we need the hack for secure deployment). Please take a look at the two new commits for the fix details; I will squash the commits after your review. The Travis build is not failing anymore.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-14 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@nielsbasjes Thanks for the link. The issue, however, is related to the ZK SASL client API implementation, and it took a while to figure out the actual cause.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-09 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
Thanks, will try that.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-09 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
Hmm, it's getting complicated; I will try to debug the issue. Do you know why it fails on Travis but not on Jenkins?

How do I simulate this on Travis for my own testing?




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-08 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
I have noticed the issue occasionally in the master branch too, but very inconsistently. I just rebased against the latest master and ran "mvn clean verify"; I don't see any errors. I have tried a couple of times to see if it reoccurs, but it looks good. Not sure whether the issue has been addressed by some other patches in the master branch. Could you please give it a try?




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-06 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
>
How is the secret transferred to the TaskManagers on YARN?

The cookie is transferred to the TM container through a container environment variable and further gets populated into the in-memory Flink configuration instance. The secure cookie is vulnerable (as is the keytab file) to users who have access to the container's local resource storage area, and that's the limitation we may have to deal with.

>Is using the JobManagerMessages.getRequestBlobManagerSecureCookie() 
message always secure?

I believe it is safe, since the Akka endpoints are secured using the shared token (cookie), and for someone to access the cookie using "JobManagerMessages.getRequestBlobManagerSecureCookie()", they would have to be authenticated first.

>
Maybe it also makes sense to start adding a page into the internals 
documentation section, explaining how the secure cookie is implemented.

I am planning to add a separate page to the internals documentation explaining how the shared secret is implemented for the various deployment options: Standalone, YARN, and Mesos (TBD).




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-06 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
rebased again with the latest master




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-06 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
>
T2-3 is not about the web interface netty, its about the data transfer netty
In Flink, we are using netty for (at least) three things:
- Akka is using Netty. This is addressed in the pull request
- The web interface is using Netty. Addressed as well
- The user data (datastreams, etc.) is transferred using Netty between 
the TaskManagers as well.

Thanks Robert for the clarification. It was a good catch; I had clearly missed the Netty part used between TMs. I will create another JIRA and address the issue as part of it.

I have incorporated your feedback comments and rebased the commits onto the latest master. Please retest and see if the patch is okay.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-05 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm I believe the ZK timeout issue occurs in the LocalFlinkMiniClusterITCase->testLocalFlinkMiniClusterWithMultipleTaskManagers test case, but it is not consistent. I ran the Kafka test case alone and it worked. I also ran "mvn clean verify" and don't see any errors (after a couple of retries; the same ZK timeout error from LocalFlinkMiniClusterITCase). It looks like there is some inconsistency in some of the integration test scenarios.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-04 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm, I was able to reproduce this issue on another machine. The issue is that the KRB5 config file, which is mounted as a local resource, is not visible even though we set the system property (java.security.krb5.conf) within the container. I have modified the implementation to pass the system property as a container JVM parameter. I have rebased the code against upstream master. Please give it a try.
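The change described above can be sketched as follows (class and method names are illustrative, not the patch's actual code): append the property to the container's JVM command line instead of relying on a System.setProperty call inside the container.

```java
import java.util.ArrayList;
import java.util.List;

public class ContainerCmdSketch {
    // Build the java command for a container; passing -Djava.security.krb5.conf
    // on the command line guarantees the JVM sees the property before any
    // security code runs, unlike a late System.setProperty call.
    static List<String> buildJvmCommand(String mainClass, String krb5ConfPath) {
        List<String> cmd = new ArrayList<>();
        cmd.add("java");
        if (krb5ConfPath != null) {
            cmd.add("-Djava.security.krb5.conf=" + krb5ConfPath);
        }
        cmd.add(mainClass);
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
                buildJvmCommand("org.apache.flink.yarn.YarnTaskManager", "krb5.conf")));
        // prints: java -Djava.security.krb5.conf=krb5.conf org.apache.flink.yarn.YarnTaskManager
    }
}
```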




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-09-02 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm, I am not seeing the error messages when I run the YARN test. Could you please run the "secure" test case alone and share the logs?

>
mvn test integration-test -Dtest="YARNSessionFIFOSecuredITCase" 
-Pinclude-yarn-tests -pl flink-yarn-tests -DfailIfNoTests=false




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-09-01 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
>
According to the design document, netty authentication is also part of this 
JIRA. Why was it not addressed?

The Netty layer is addressed as part of the web layer authentication (T2-3 & T2-5 combined). Do you see a need to address this in some other part of the code as well?




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-09-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r77230955
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java ---
@@ -682,6 +774,91 @@ public static File 
getYarnPropertiesLocation(Configuration conf) {
return new File(propertiesFileLocation, YARN_PROPERTIES_FILE + 
currentUser);
}
 
+   public static void persistAppState(String appId, String cookie) {
+   if(appId == null || cookie == null) { return; }
+   String path = System.getProperty("user.home") + File.separator 
+ fileName;
+   LOG.debug("Going to persist cookie for the appID: {} in {} ", 
appId, path);
+   try {
+   File f = new File(path);
+   if(!f.exists()) {
+   f.createNewFile();
+   }
+   HierarchicalINIConfiguration config = new 
HierarchicalINIConfiguration(path);
+   SubnodeConfiguration subNode = config.getSection(appId);
+   if (subNode.containsKey(cookieKey)) {
+   String errorMessage = "Secure Cookie is already 
found in "+ path + " for the appID: "+ appId;
+   LOG.error(errorMessage);
+   throw new RuntimeException(errorMessage);
+   }
+   subNode.addProperty(cookieKey, cookie);
+   config.save();
+   LOG.debug("Persisted cookie for the appID: {}", appId);
+   } catch(Exception e) {
+   LOG.error("Exception occurred while persisting app 
state for app id: {}. Exception: {}", appId, e);
+   throw new RuntimeException(e);
+   }
+   }
+
+   public static String getAppSecureCookie(String appId) {
+   if(appId == null) {
+   String errorMessage = "Application ID cannot be null";
+   LOG.error(errorMessage);
+   throw new RuntimeException(errorMessage);
+   }
+
+   String cookieFromFile;
+   String path = System.getProperty("user.home") + File.separator 
+ fileName;
+   LOG.debug("Going to fetch cookie for the appID: {} from {}", 
appId, path);
+
+   try {
+   File f = new File(path);
+   if (!f.exists()) {
+   String errorMessage = "Could not find the file: 
" + path + " in user home directory";
+   LOG.error(errorMessage);
+   throw new RuntimeException(errorMessage);
+   }
+   HierarchicalINIConfiguration config = new 
HierarchicalINIConfiguration(path);
+   SubnodeConfiguration subNode = config.getSection(appId);
+   if (!subNode.containsKey(cookieKey)) {
+   String errorMessage = "Could  not find the app 
ID section in "+ path + " for the appID: "+ appId;
+   LOG.error(errorMessage);
+   throw new RuntimeException(errorMessage);
+   }
+   cookieFromFile = subNode.getString(cookieKey, "");
+   if(cookieFromFile.length() == 0) {
+   String errorMessage = "Could  not find cookie 
in "+ path + " for the appID: "+ appId;
+   LOG.error(errorMessage);
+   throw new RuntimeException(errorMessage);
+   }
+   } catch(Exception e) {
+   LOG.error("Exception occurred while fetching cookie for 
app id: {} Exception: {}", appId, e);
+   throw new RuntimeException(e);
+   }
+
+   LOG.debug("Found cookie for the appID: {}", appId);
+   return cookieFromFile;
+   }
+
+   public static void removeAppState(String appId) {
+   if(appId == null) { return; }
+   String path = System.getProperty("user.home") + File.separator 
+ fileName;
+   LOG.debug("Going to remove the reference for the appId: {} from 
{}", appId, path);
+   try {
+   File f = new File(path);
+   if (!f.exists()) {
+   String errorMessage = "Could not find the file: 
" + path + " in user home directory";
+   LOG.warn(errorMessage);
+   r

[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-09-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r77227679
  
--- Diff: 
flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobClientSecureTest.java
 ---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.blob;
+
+import org.apache.flink.configuration.ConfigConstants;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+import static org.junit.Assert.fail;
+
+public class BlobClientSecureTest extends BlobClientTest {
+
+   /**
+* Starts the BLOB server with secure cookie enabled configuration
+*/
+   @BeforeClass
+   public static void startServer() {
+   try {
+   config.setBoolean(ConfigConstants.SECURITY_ENABLED, 
true);
+   config.setString(ConfigConstants.SECURITY_COOKIE, 
"foo");
+   BLOB_SERVER = new BlobServer(config);
+   }
+   catch (IOException e) {
+   e.printStackTrace();
+   fail(e.getMessage());
+   }
--- End diff --

Thanks. Will add the test case.




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-09-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r77226096
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServer.java ---
@@ -426,4 +440,11 @@ void unregisterConnection(BlobServerConnection conn) {
}
}
 
+   /* Secure cookie to authenticate */
+   @Override
+   public String getSecureCookie() { return secureCookie; }
+
+   /* Flag to indicate if service level authentication is enabled or not */
+   public boolean isSecurityEnabled() { return secureCookie != null; }
--- End diff --

We expect the secure cookie configuration to be available if security is 
enabled. A missing value will be reported as an error ahead of time. Are you 
expecting any other conditions to be met? Could you please elaborate?




[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-09-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2425#discussion_r77221958
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/HttpRequestHandler.java
 ---
@@ -99,7 +110,43 @@ public void channelRead0(ChannelHandlerContext ctx, 
HttpObject msg) {
currentDecoder.destroy();
currentDecoder = null;
}
-   
+
+   if(secureCookie != null) {
--- End diff --

The secure cookie value could be auto-populated (YARN) or user-provided, but it will be persisted in the in-memory Flink configuration instance that is passed to the web layer during bootstrap. Should the user decide to turn security off, we expect the services to be restarted to reflect the change.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-30 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm I have rebased the code to the latest master. Please take a look.




[GitHub] flink issue #2425: FLINK-3930 Added shared secret based authorization for Fl...

2016-08-28 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2425
  
@mxm - The patch is available for your review. Please take a look.


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-28 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
> 
YARNSessionFIFOSecuredITCase gives me the following:
17:49:58,097 INFO SecurityLogger.org.apache.hadoop.ipc.Server - Auth 
successful for appattempt_1471880990715_0001_01 (auth:SIMPLE)
It is not using Kerberos it seems. We should check that security is really 
enabled and fail the test if not.

@mxm I am not sure why the log statements from the IPC layer use 
auth:SIMPLE, but I have verified the same messages (NM/RM logs) on a running HDP 
(secure) cluster too. I would imagine this is the default implementation and we 
can ignore those messages. However, while investigating this issue, I 
found an interesting problem with YarnMiniCluster. The containers it creates do 
not have the Yarn configuration that we pass through the test code. The KRB5 
file is also not visible, and hence the UGI/security context that we create was 
missing proper Hadoop configurations. I have fixed the issue and patched it.

I have also disabled the RollingSinkSecure IT test case since secure MiniFS 
cluster requires privileged ports. We can enable the test case when the patch 
(HDFS-9213) is made in to main stream.

Please take a look and let me know if you can deploy and run the code.


---


[GitHub] flink pull request #2425: FLINK-3930 Added shared secret based authorization...

2016-08-26 Thread vijikarthi
GitHub user vijikarthi opened a pull request:

https://github.com/apache/flink/pull/2425

FLINK-3930 Added shared secret based authorization for Flink service …

This PR addresses FLINK-3930 requirements. It enables shared secret based 
secure cookie authorization for the following components

- Akka layer
- Blob Service
- Web UI

Secure cookie authentication can be enabled by providing below 
configurations to Flink configuration file.

- `security.enabled`: A boolean value (true|false) indicating security is 
enabled or not.
- `security.cookie` : Secure cookie value to be used for authentication. 
For standalone deployment mode, the secure cookie value is mandatory when 
security is enabled but for the Yarn mode it is optional (auto-generated if not 
provided).

Alternatively, secure cookie value can be provided through Flink/Yarn CLI 
using "-k" or "--cookie" parameter option.

The web runtime module prompts for secure cookie using standard basic HTTP 
authentication mechanism, where the user id field is a noop and the password 
field will be used to capture the secure cookie. 
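For illustration, the configuration entries described above would look roughly like this in flink-conf.yaml (the cookie value is a placeholder):

```yaml
# Enable shared-secret (secure cookie) based authorization
security.enabled: true

# Mandatory in standalone mode when security is enabled;
# optional (auto-generated) in Yarn mode
security.cookie: changeme-shared-secret
```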

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vijikarthi/flink FLINK-3930

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2425


commit 33d391cb17e68dd203328a91fa6b63218884b49d
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2016-08-26T19:02:20Z

FLINK-3930 Added shared secret based authorization for Flink service 
components




---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-24 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
>
It seems like the privileged port issues can be circumvented by setting 
conf.getBoolean("dfs.datanode.require.secure.ports", false)?

It is not supported yet, see 
https://github.com/apache/hadoop/blob/branch-2.3.0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java#L106. 
The trunk code 
(https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java#L115) 
also has similar logic.
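The check in question can be sketched as follows: Hadoop's secure DataNode starter refuses to come up unless its streaming port is privileged (requiring root to bind). This is a simplified, dependency-free illustration; the class and method names below are not Hadoop's actual API.

```java
// Hedged sketch of the privileged-port requirement discussed above.
public class PrivilegedPortCheck {

    // Ports 1..1023 are privileged on POSIX systems.
    static boolean isPrivileged(int port) {
        return port > 0 && port <= 1023;
    }

    // Mirrors the spirit of SecureDataNodeStarter's resource check.
    static void verifySecureResources(int streamingPort) {
        if (!isPrivileged(streamingPort)) {
            throw new RuntimeException(
                "Cannot start secure DataNode on non-privileged port " + streamingPort);
        }
    }

    public static void main(String[] args) {
        verifySecureResources(1004); // typical secure DataNode port: accepted
        try {
            verifySecureResources(50010); // default non-secure port: rejected
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

This is why the secure MiniDFS-based tests cannot run on an unprivileged CI user until a patch such as HDFS-9213 relaxes the requirement.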


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-23 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
> If we have to use privileged ports then we won't be able to use our CI 
system. Are you sure it can only run in privileged mode? Is it not possible to 
change the port binding to port >= 1024?

Unfortunately, this is how the underlying code is implemented, and the patch 
from HDFS-9213 will let us use higher (non-privileged) ports. I can mark the test 
class @Ignore for now if we have to wait until the patch is ported?


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-22 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
> YARNSessionFIFOSecuredITCase gives me the following:
17:49:58,097 INFO SecurityLogger.org.apache.hadoop.ipc.Server - Auth 
successful for appattempt_1471880990715_0001_01 (auth:SIMPLE)
 It is not using Kerberos it seems. We should check that security is really 
enabled and fail the test if not.

How did you run the Yarn IT test? I am seeing some compile issues when I 
try to run the command below:

```
> mvn verify -pl flink-yarn-tests -Pinclude-yarn-tests
[INFO] Scanning for projects...
[INFO]  
   
[INFO] 

[INFO] Building flink-yarn-tests 1.2-SNAPSHOT
[INFO] 

[INFO] 
[INFO] --- maven-checkstyle-plugin:2.16:check (validate) @ 
flink-yarn-tests_2.10 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-maven) @ 
flink-yarn-tests_2.10 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.7:add-source (add-source) @ 
flink-yarn-tests_2.10 ---
[INFO] Source directory: 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/main/scala added.
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
flink-yarn-tests_2.10 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
flink-yarn-tests_2.10 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- scala-maven-plugin:3.1.4:compile (scala-compile-first) @ 
flink-yarn-tests_2.10 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
flink-yarn-tests_2.10 ---
[INFO] No sources to compile
[INFO] 
[INFO] --- build-helper-maven-plugin:1.7:add-test-source (add-test-source) 
@ flink-yarn-tests_2.10 ---
[INFO] Test Source directory: 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/scala added.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) 
@ flink-yarn-tests_2.10 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- scala-maven-plugin:3.1.4:testCompile (scala-test-compile) @ 
flink-yarn-tests_2.10 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
flink-yarn-tests_2.10 ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 12 source files to 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/target/test-classes
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/java/org/apache/flink/yarn/UtilsTest.java:[81,47]
 YARN_HEAP_CUTOFF_RATIO in org.apache.flink.configuration.ConfigConstants has 
been deprecated
[WARNING] 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/java/org/apache/flink/yarn/UtilsTest.java:[82,48]
 YARN_HEAP_CUTOFF_MIN in org.apache.flink.configuration.ConfigConstants has 
been deprecated
[INFO] 2 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionFIFOSecuredITCase.java:[21,41]
 cannot find symbol
  symbol:   class SecurityContext
  location: package org.apache.flink.runtime.security
[ERROR] 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionFIFOSecuredITCase.java:[22,34]
 cannot find symbol
  symbol:   class SecureTestEnvironment
  location: package org.apache.flink.test.util
[ERROR] 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionFIFOSecuredITCase.java:[23,34]
 cannot find symbol
  symbol:   class TestingSecurityContext
  location: package org.apache.flink.test.util
[ERROR] 
/workspace/git-projects/junk/apache-flink/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java:[408,68]
 cannot find symbol
  symbol:   variable SECURITY_KEYTAB_KEY
  location:

[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-22 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
The `RollingFileSink` error might be due to 
https://issues.apache.org/jira/browse/HDFS-9213. The secure MiniDFS 
cluster requires privileged ports to be used, and we need to grant the Java 
process access if it is not running as root.


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-17 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm - Could you please take a look and let me know if you need any further 
changes.


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-12 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
Thanks Max. I have reorganized the secure test cases and removed the custom 
JRunner implementation for Kafka. I kept a single secure test case each for the 
HDFS, Kafka, and Yarn modules. Please take a look.


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-09 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm - Could you please let me know if you are okay with the modifications 
to the integration test case scenarios that I have mentioned. I am open to 
keeping just three classes, one for each scenario (HDFS, Yarn, and Kafka), as you 
have suggested, but in my opinion that will defeat the idea of reusing the 
existing test programs. Please let me know either way and I will fix the code 
accordingly.


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-02 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm - I have addressed most of the feedback and pushed the changes. Please 
take a look when you get a chance. 

Regarding the secure test cases, 
-  HDFS and Yarn are handled through the @BeforeClass and @AfterClass style 
and do not use a custom JRunner implementation. As you have suggested, I 
could keep just one or two tests for each of the modules to cut down the 
running time, if that's okay with you?
- Kafka tests are handled with a custom JRunner, and if we need to move them to 
@BeforeClass and @AfterClass, then we may have to duplicate the code used in 
the base classes, which would not look good. For example, have a look at 
Kafka09ProducerSecuredITCase, which extends Kafka09ProducerITCase, which in 
turn has two levels of parent classes (KafkaProducerTestBase and KafkaTestBase). 
If we write a separate test case for the secure cluster, I may have to duplicate 
some of this base class code, which can be avoided if we use a custom JRunner. We 
could still limit the number of test cases to just one or two to minimize the 
running time. Please let me know your thoughts?


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-02 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73245570
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -1016,6 +1016,23 @@
/** The environment variable name which contains the location of the 
lib folder */
public static final String ENV_FLINK_LIB_DIR = "FLINK_LIB_DIR";
 
+   //  Security 
---
+
+   /**
+* The config parameter defining security credentials required
+* for securing Flink cluster.
+*/
+
+   /** Keytab file key name to be used in flink configuration file */
+   public static final String SECURITY_KEYTAB_KEY = "security.keytab";
+
+   /** Kerberos security principal key name to be used in flink 
configuration file */
+   public static final String SECURITY_PRINCIPAL_KEY = 
"security.principal";
+
+   /** Keytab file name populated in YARN container */
+   public static final String KEYTAB_FILE_NAME = "krb5.keytab";
--- End diff --

Okay, will move it to flink-yarn module


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73037620
  
--- Diff: docs/internals/flink_security.md ---
@@ -0,0 +1,87 @@
+---
+title:  "Flink Security"
+# Top navigation
+top-nav-group: internals
+top-nav-pos: 10
+top-nav-title: Flink Security
+---
+
+
+This document briefly describes how Flink security works in the context of 
various deployment mechanisms (standalone/cluster vs. YARN) 
+and the connectors that participate in the Flink job execution stage. This 
documentation can be helpful for both administrators and developers 
+who plan to run Flink in a secure environment.
+
+## Objective
+
+The primary goal of the Flink security model is to enable secure data access 
for jobs within a cluster via connectors. In production deployment scenarios, 
+streaming jobs are understood to run for long periods of time 
(days/weeks/months), and the system must be able to authenticate against secure 
+data sources throughout the life of the job. The current implementation 
supports running the Flink cluster (Job Manager/Task Manager/Jobs) under the 
+context of a Kerberos identity based on keytab credentials supplied at 
deployment time. Any jobs submitted will continue to run under the identity of 
the cluster.
+
+## How Flink Security works
+A Flink deployment includes a running Job Manager/ZooKeeper, Task Manager(s), 
Web UI, and Job(s). Jobs (user code) can be submitted through the web UI and/or CLI. 
+A job program may use one or more connectors (Kafka, HDFS, Cassandra, 
Flume, Kinesis, etc.), and each connector may have specific security 
+requirements (Kerberos, database-based, SSL/TLS, custom, etc.). While 
satisfying the security requirements for all the connectors will evolve over a 
period 
+of time, at the time of writing the following connectors/services are 
tested for Kerberos/keytab-based security.
+
+- Kafka (0.9)
+- HDFS
+- ZooKeeper
+
+Hadoop uses the UserGroupInformation (UGI) class to manage security. UGI is a 
static implementation that takes care of handling Kerberos authentication. The 
Flink bootstrap implementation
+(JM/TM/CLI) takes care of instantiating UGI with the appropriate security 
credentials to establish the necessary security context.
+
+Services like Kafka and ZooKeeper use a SASL/JAAS-based authentication 
mechanism to authenticate against the Kerberos server. They expect a JAAS 
configuration with the platform-specific login 
+module *name* to be provided. Managing per-connector configuration files 
would be an overhead, and to overcome this, a process-wide JAAS configuration 
object is 
+instantiated which serves a standard AppConfigurationEntry for the 
connectors that authenticate using the SASL/JAAS mechanism.
+
+It is important to understand that the Flink processes (JM/TM/UI/Jobs) 
themselves use UGI's doAs() implementation to run under a specific user context, 
i.e., if Hadoop security is enabled 
+then the Flink processes will run under a secure user account, or else 
they will run as the OS login user account of whoever starts the Flink cluster.
+
+## Security Configurations
+
+Secure credentials can be supplied by adding below configuration elements 
to Flink configuration file:
+
+- `security.keytab`: Absolute path to Kerberos keytab file that contains 
the user credentials/secret.
+
+- `security.principal`: User principal name that the Flink cluster should 
run as.
--- End diff --

The keytab file contains both principal and encrypted keys (password)
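As a rough illustration of the process-wide JAAS configuration object described in the diff above, the following hedged sketch installs a single Kerberos login entry that is served to every SASL/JAAS consumer in the JVM. Option names and entry handling are simplified; this is not Flink's actual JaasConfiguration class.

```java
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: one process-wide JAAS Configuration backed by a keytab.
public class JaasSketch {

    static Configuration forKeytab(final String keytab, final String principal) {
        return new Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                Map<String, String> opts = new HashMap<>();
                opts.put("useKeyTab", "true");
                opts.put("storeKey", "true");
                opts.put("keyTab", keytab);
                opts.put("principal", principal);
                // The same entry is returned regardless of the requested name
                // ("KafkaClient", "Client", ...), so every SASL/JAAS consumer
                // in the process is covered without per-connector JAAS files.
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        "com.sun.security.auth.module.Krb5LoginModule",
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        opts)
                };
            }
        };
    }

    public static void main(String[] args) {
        // Install process-wide; Kafka/ZooKeeper clients pick it up via JAAS.
        Configuration.setConfiguration(
            forKeytab("/path/krb5.keytab", "flink@EXAMPLE.COM"));
        AppConfigurationEntry[] entries =
            Configuration.getConfiguration().getAppConfigurationEntry("KafkaClient");
        System.out.println(entries.length);
    }
}
```

The keytab path and principal here are placeholders; in the PR they come from the `security.keytab` and `security.principal` configuration keys.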


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@mxm - Thanks for your feedback and here is my response to some of your 
comments.

- Do we need to run all the Yarn tests normally and secured? We already 
have problems with our test execution time. Perhaps we could have one dedicated 
test for secure setups and disable the other ones by default to run them 
manually if needed.
[Vijay] - Yes, it is not essential to run the secure test cases all the time 
as they consume more cycles. Do you have any suggestion for controlling this 
through some mvn/surefire plugin configuration?

- The testing code seems overly complicated using the custom JUnit Runner. 
I think we could achieve the same with @BeforeClass and @AfterClass methods in 
the secure IT cases.
[Vijay] - It is a little overhead but works out well with minimal changes to 
the code. We could revisit and make changes if it creates any issues.

- There is no dedicated test for the SecurityContext and the 
JaasConfiguration classes
[Vijay] - Yes, will add UT for those classes.

- It would be nice to add some documentation to the configuration web page.
[Vijay] - I believe you are referring to 
https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html. If 
so, yes, it certainly helps and I will be happy to add the details, but I don't 
have access to edit the page.

- We should throw exceptions if the secure configuration is not complete 
instead of falling back to non-authenticated execution for either Hadoop or the 
Jaas configuration. Otherwise, users might end up with a partly secure 
environment.
[Vijay] - Yes, will add the validation logic


---


[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
@nielsbasjes - In most deployments, the KRB5 configuration file will be 
located in a well-known location (e.g., /etc/krb5.conf), but in scenarios where a 
custom location needs to be provided, we could pass the value through 
"-Djava.security.krb5.conf"
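A small hedged sketch of how the JVM-level override mentioned above behaves: the JDK Kerberos machinery consults the `java.security.krb5.conf` system property before falling back to OS defaults. The fallback path below is a simplification for illustration; the real JDK lookup also checks `java.home` and platform-specific locations.

```java
// Hedged sketch of krb5.conf resolution (simplified fallback).
public class Krb5Location {

    static String resolve() {
        // The JDK honors this system property first when locating the
        // Kerberos configuration; otherwise it falls back to OS defaults
        // such as /etc/krb5.conf on Linux.
        String custom = System.getProperty("java.security.krb5.conf");
        return (custom != null) ? custom : "/etc/krb5.conf";
    }

    public static void main(String[] args) {
        System.setProperty("java.security.krb5.conf", "/opt/conf/krb5.conf");
        System.out.println(resolve());
    }
}
```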


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73028461
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/YarnTaskManagerRunner.java ---
@@ -75,34 +84,47 @@ public static void runYarnTaskManager(String[] args, 
final Class toks : 
UserGroupInformation.getCurrentUser().getTokens()) {
-   ugi.addToken(toks);
+   String keytabPath = null;
+   if(remoteKeytabPath != null) {
+   File f = new File(currDir, 
ConfigConstants.KEYTAB_FILE_NAME);
--- End diff --

The name is not configurable (user-provided); we use a constant value. 
Is there any reason to keep the name unique?


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73027093
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/RunTypeSelectionRunner.java
 ---
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.util;
+
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.model.FrameworkMethod;
+import org.junit.runners.model.InitializationError;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/*
+ * Custom JRunner used to run integration tests for Kafka in secure mode. 
Since the test code implementation uses
+ * abstract base code to run the MiniFlinkCluster and also start/stop 
Kafka brokers depending on the version (0.8/0.9),
+ * we need to know the run mode (CLEAR/SECURE) ahead of time (before 
beforeClass). This Runner instance
+ * takes care of setting the SECURE flag on the holder class so that secure 
testing works seamlessly.
+ *
+ */
+public class RunTypeSelectionRunner extends BlockJUnit4ClassRunner {
--- End diff --

I agree with you that it would be nice to control this behavior in 
@BeforeClass, but the challenge is controlling the test run mode (secure vs. 
insecure). The base class should be aware of the run mode ahead 
of time, and I found that very challenging, especially when the lifecycle of the 
services (MiniFlink, Kafka/ZK, etc.) is handled at different levels. I ended 
up using a custom JRunner, which gives the flexibility of handling the run 
mode from the top-most layer (the @Test class) without having to readjust the 
codebase much.
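The pattern described above can be sketched without any JUnit dependency: a static holder records the run mode before the shared base-class setup executes, so deep base classes like KafkaTestBase can consult it. Names mirror RunTypeHolder from the PR but are simplified for illustration.

```java
// Hedged, dependency-free sketch of the run-mode holder pattern.
public class RunModeSketch {

    enum RunType { PLAIN, SECURE }

    // Set by the custom runner before any @BeforeClass in the hierarchy runs.
    private static RunType runType = RunType.PLAIN;

    static void set(RunType t) { runType = t; }
    static RunType get() { return runType; }

    // Stands in for KafkaTestBase.prepare(): its behavior depends on the mode
    // that was recorded before it executes.
    static String prepare() {
        return (get() == RunType.SECURE) ? "secure-cluster" : "plain-cluster";
    }

    public static void main(String[] args) {
        set(RunType.SECURE);           // what the custom runner does up-front
        System.out.println(prepare()); // base-class setup now sees SECURE
    }
}
```

In the PR, the custom runner's constructor plays the role of `set(...)` here, which is why the flag is visible to the base classes without restructuring their lifecycle code.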


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73023852
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/RunTypeHolder.java
 ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.util;
+
+public class RunTypeHolder {
--- End diff --

Will add the desc


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73023816
  
--- Diff: 
flink-streaming-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestBase.java
 ---
@@ -81,13 +90,21 @@ public static void prepare() throws IOException, 
ClassNotFoundException {

LOG.info("-");
LOG.info("Starting KafkaTestBase ");

LOG.info("-");
-   
 
+   Configuration flinkConfig = new Configuration();
 
// dynamically load the implementation for the test
Class clazz = 
Class.forName("org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl");
kafkaServer = (KafkaTestEnvironment) 
InstantiationUtil.instantiate(clazz);
 
+   LOG.info("Runtype: {}", RunTypeHolder.get());
+   if(RunTypeHolder.get().equals(RunTypeHolder.RunType.SECURE)
--- End diff --

Yes, `==` is more appropriate.


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73023705
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/RunTypeHolder.java
 ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.util;
+
+public class RunTypeHolder {
+
+   private static RunType runType = RunType.CLEAR;
--- End diff --

Will change it to PLAIN


---


[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73020806
  
--- Diff: 
flink-streaming-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaShortRetentionTestBase.java
 ---
@@ -60,18 +64,33 @@
private static Properties standardProps;
private static ForkableFlinkMiniCluster flink;
 
+   @ClassRule
+   public static TemporaryFolder tempFolder = new TemporaryFolder();
+
+   protected static Properties secureProps = new Properties();
+
@BeforeClass
public static void prepare() throws IOException, ClassNotFoundException 
{

LOG.info("-");
LOG.info("Starting KafkaShortRetentionTestBase ");

LOG.info("-");
 
+   Configuration flinkConfig = new Configuration();
+
// dynamically load the implementation for the test
Class<?> clazz = 
Class.forName("org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl");
kafkaServer = (KafkaTestEnvironment) 
InstantiationUtil.instantiate(clazz);
 
LOG.info("Starting KafkaTestBase.prepare() for Kafka " + 
kafkaServer.getVersion());
 
+   LOG.info("Runtype: {}", RunTypeHolder.get());
+   if(RunTypeHolder.get().equals(RunTypeHolder.RunType.SECURE)
+   && kafkaServer.isSecureRunSupported()) {
+   SecureTestEnvironment.prepare(tempFolder);
+   
SecureTestEnvironment.getSecurityEnabledFlinkConfiguration(flinkConfig);
--- End diff --

No. The passed config object will be populated with keytab and principal 
security configurations.




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73017860
  
--- Diff: 
flink-streaming-connectors/flink-connector-filesystem/src/test/java/org/apache/flink/streaming/connectors/fs/RollingSinkSecuredITCase.java
 ---
@@ -0,0 +1,195 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.fs;
+
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.runtime.security.SecurityContext;
+import org.apache.flink.streaming.util.TestStreamEnvironment;
+import org.apache.flink.test.util.SecureTestEnvironment;
+import org.apache.flink.test.util.TestingSecurityContext;
+import org.apache.flink.test.util.TestBaseUtils;
+import org.apache.flink.util.NetUtils;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.http.HttpConfig;
+import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTP_ADDRESS_KEY;
--- End diff --

Sorry, this might be from IntelliJ auto-import stuff. Will revert to use 
appropriate imports.




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73015565
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityContext.java
 ---
@@ -0,0 +1,218 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.Preconditions;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import java.io.File;
+import java.lang.reflect.Method;
+import java.security.PrivilegedExceptionAction;
+/*
+ * Process-wide security context object which initializes UGI with 
appropriate security credentials and also it
+ * creates in-memory JAAS configuration object which will serve 
appropriate ApplicationConfigurationEntry for the
+ * connector login module implementation that authenticates Kerberos 
identity using SASL/JAAS based mechanism.
+ */
+@Internal
+public class SecurityContext {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(SecurityContext.class);
+
+   private static SecurityContext installedContext;
+
+   public static SecurityContext getInstalled() { return installedContext; 
}
+
+   private UserGroupInformation ugi;
+
+   SecurityContext(UserGroupInformation ugi) {
+   this.ugi = ugi;
+   }
+
+   public <T> T runSecured(final FlinkSecuredRunner<T> runner) throws 
Exception {
+   return ugi.doAs(new PrivilegedExceptionAction<T>() {
+   @Override
+   public T run() throws Exception {
+   return runner.run();
+   }
+   });
+   }
+
+   public static void install(SecurityConfiguration config) throws 
Exception {
+
+   // perform static initialization of UGI, JAAS
+   if(installedContext != null) {
+   LOG.warn("overriding previous security context");
+   }
+
+   // establish the JAAS config
+   JaasConfiguration jaasConfig = new 
JaasConfiguration(config.keytab, config.principal);
+   
javax.security.auth.login.Configuration.setConfiguration(jaasConfig);
+
+   //hack since Kafka Login Handler explicitly look for the 
property or else it throws an exception
+   
//https://github.com/apache/kafka/blob/0.9.0/clients/src/main/java/org/apache/kafka/common/security/kerberos/Login.java#L289
+   System.setProperty("java.security.auth.login.config", "");
+
+   // establish the UGI login user
+   UserGroupInformation.setConfiguration(config.hadoopConf);
+   UserGroupInformation loginUser;
+   if(UserGroupInformation.isSecurityEnabled() && config.keytab != 
null && !Preconditions.isNullOrEmpty(config.principal)) {
--- End diff --

Sure. Will handle it in JAASConfiguration.java as that looks more 
appropriate place.




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73014755
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityContext.java
 ---
@@ -0,0 +1,218 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.util.Preconditions;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.Subject;
+import java.io.File;
+import java.lang.reflect.Method;
+import java.security.PrivilegedExceptionAction;
+/*
+ * Process-wide security context object which initializes UGI with 
appropriate security credentials and also it
+ * creates in-memory JAAS configuration object which will serve 
appropriate ApplicationConfigurationEntry for the
+ * connector login module implementation that authenticates Kerberos 
identity using SASL/JAAS based mechanism.
+ */
+@Internal
+public class SecurityContext {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(SecurityContext.class);
+
+   private static SecurityContext installedContext;
+
+   public static SecurityContext getInstalled() { return installedContext; 
}
+
+   private UserGroupInformation ugi;
+
+   SecurityContext(UserGroupInformation ugi) {
+   this.ugi = ugi;
+   }
+
+   public <T> T runSecured(final FlinkSecuredRunner<T> runner) throws 
Exception {
+   return ugi.doAs(new PrivilegedExceptionAction<T>() {
+   @Override
+   public T run() throws Exception {
+   return runner.run();
+   }
+   });
+   }
+
+   public static void install(SecurityConfiguration config) throws 
Exception {
+
+   // perform static initialization of UGI, JAAS
+   if(installedContext != null) {
+   LOG.warn("overriding previous security context");
+   }
+
+   // establish the JAAS config
+   JaasConfiguration jaasConfig = new 
JaasConfiguration(config.keytab, config.principal);
+   
javax.security.auth.login.Configuration.setConfiguration(jaasConfig);
+
+   //hack since Kafka Login Handler explicitly look for the 
property or else it throws an exception
+   
//https://github.com/apache/kafka/blob/0.9.0/clients/src/main/java/org/apache/kafka/common/security/kerberos/Login.java#L289
+   System.setProperty("java.security.auth.login.config", "");
--- End diff --

It was never set earlier and is not really essential. We are adding it only 
because the Kafka code (0.9 branch) explicitly looks for this.
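A runnable sketch of the workaround being discussed. The rationale (Kafka 0.9's Login class checking that the system property exists) is taken from the diff's own comments; the class name here is hypothetical:

```java
public class KafkaJaasWorkaroundDemo {
    public static void main(String[] args) {
        // Kafka 0.9's Login handler checks that java.security.auth.login.config
        // is set and fails otherwise -- even when the JAAS configuration is
        // supplied programmatically via
        // javax.security.auth.login.Configuration.setConfiguration(...).
        // Setting it to an empty string satisfies that existence check while
        // the in-memory Configuration remains the one actually consulted.
        System.setProperty("java.security.auth.login.config", "");
        System.out.println(
                System.getProperty("java.security.auth.login.config").isEmpty()); // prints true
    }
}
```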




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73013795
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/JaasConfiguration.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.util.Preconditions;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ *
+ * JAAS configuration provider object that provides default LoginModule 
for various connectors that supports
+ * JAAS/SASL based Kerberos authentication. The implementation is inspired 
from Hadoop UGI class.
+ *
+ * Different connectors uses different login module name to implement JAAS 
based authentication support.
+ * For example, Kafka expects the login module name to be "kafkaClient" 
whereas ZooKeeper expect the
+ * name to be "client". This sets onus on the Flink cluster administrator 
to configure/provide right
+ * JAAS config entries. To simplify this requirement, we have introduced 
this abstraction that provides
+ * a standard lookup to get the login module entry for the JAAS based 
authentication to work.
+ *
+ * HDFS connector will not be impacted with this configuration since it 
uses UGI based mechanism to authenticate.
+ *
+ * <a href="https://docs.oracle.com/javase/7/docs/api/javax/security/auth/login/Configuration.html">Configuration</a>
+ *
+ */
+
+@Internal
+public class JaasConfiguration extends Configuration {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(JaasConfiguration.class);
+
+   public static final String JAVA_VENDOR_NAME = 
System.getProperty("java.vendor");
+
+   public static final boolean IBM_JAVA;
+
+   static {
+   IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM");
+   }
+
+   public JaasConfiguration(String keytab, String principal) {
--- End diff --

I had this as package local earlier but the test framework code is making 
use of this class (extending) and hence I changed it to public constructor.




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73013657
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/JaasConfiguration.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.util.Preconditions;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ *
+ * JAAS configuration provider object that provides default LoginModule 
for various connectors that supports
+ * JAAS/SASL based Kerberos authentication. The implementation is inspired 
from Hadoop UGI class.
+ *
+ * Different connectors uses different login module name to implement JAAS 
based authentication support.
+ * For example, Kafka expects the login module name to be "kafkaClient" 
whereas ZooKeeper expect the
+ * name to be "client". This sets onus on the Flink cluster administrator 
to configure/provide right
+ * JAAS config entries. To simplify this requirement, we have introduced 
this abstraction that provides
+ * a standard lookup to get the login module entry for the JAAS based 
authentication to work.
+ *
+ * HDFS connector will not be impacted with this configuration since it 
uses UGI based mechanism to authenticate.
+ *
+ * <a href="https://docs.oracle.com/javase/7/docs/api/javax/security/auth/login/Configuration.html">Configuration</a>
+ *
+ */
+
+@Internal
+public class JaasConfiguration extends Configuration {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(JaasConfiguration.class);
+
+   public static final String JAVA_VENDOR_NAME = 
System.getProperty("java.vendor");
+
+   public static final boolean IBM_JAVA;
+
+   static {
+   IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM");
+   }
+
+   public JaasConfiguration(String keytab, String principal) {
+
+   LOG.info("Initializing JAAS configuration instance. Parameters: 
{}, {}", keytab, principal);
+
+   if(!Preconditions.isNullOrEmpty(keytab) && 
!Preconditions.isNullOrEmpty(principal)) {
+
+   if(IBM_JAVA) {
+   keytabKerberosOptions.put("useKeytab", 
prependFileUri(keytab));
+   keytabKerberosOptions.put("credsType", "both");
+   } else {
+   keytabKerberosOptions.put("keyTab", keytab);
+   keytabKerberosOptions.put("doNotPrompt", 
"true");
+   keytabKerberosOptions.put("useKeyTab", "true");
+   keytabKerberosOptions.put("storeKey", "true");
+   }
+
+   keytabKerberosOptions.put("principal", principal);
+   keytabKerberosOptions.put("refreshKrb5Config", "true");
+   keytabKerberosOptions.putAll(debugOptions);
+
+   keytabKerberosAce = new AppConfigurationEntry(
+   KerberosUtil.getKrb5LoginModuleName(),
+   
AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
+   keytabKerberosOptions);
+   }
+   }
+
+   private static final Map<String, String> debugOptions = new HashMap<>();
--- End diff --

Yes, it makes sense to use log4j configuration to toggle J

[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73012870
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/JaasConfiguration.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.util.Preconditions;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ *
+ * JAAS configuration provider object that provides default LoginModule 
for various connectors that supports
+ * JAAS/SASL based Kerberos authentication. The implementation is inspired 
from Hadoop UGI class.
+ *
+ * Different connectors uses different login module name to implement JAAS 
based authentication support.
+ * For example, Kafka expects the login module name to be "kafkaClient" 
whereas ZooKeeper expect the
+ * name to be "client". This sets onus on the Flink cluster administrator 
to configure/provide right
+ * JAAS config entries. To simplify this requirement, we have introduced 
this abstraction that provides
+ * a standard lookup to get the login module entry for the JAAS based 
authentication to work.
+ *
+ * HDFS connector will not be impacted with this configuration since it 
uses UGI based mechanism to authenticate.
+ *
+ * <a href="https://docs.oracle.com/javase/7/docs/api/javax/security/auth/login/Configuration.html">Configuration</a>
+ *
+ */
+
+@Internal
+public class JaasConfiguration extends Configuration {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(JaasConfiguration.class);
+
+   public static final String JAVA_VENDOR_NAME = 
System.getProperty("java.vendor");
+
+   public static final boolean IBM_JAVA;
+
+   static {
+   IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM");
+   }
+
+   public JaasConfiguration(String keytab, String principal) {
+
+   LOG.info("Initializing JAAS configuration instance. Parameters: 
{}, {}", keytab, principal);
+
+   if(!Preconditions.isNullOrEmpty(keytab) && 
!Preconditions.isNullOrEmpty(principal)) {
+
+   if(IBM_JAVA) {
+   keytabKerberosOptions.put("useKeytab", 
prependFileUri(keytab));
+   keytabKerberosOptions.put("credsType", "both");
+   } else {
+   keytabKerberosOptions.put("keyTab", keytab);
+   keytabKerberosOptions.put("doNotPrompt", 
"true");
+   keytabKerberosOptions.put("useKeyTab", "true");
+   keytabKerberosOptions.put("storeKey", "true");
+   }
+
+   keytabKerberosOptions.put("principal", principal);
+   keytabKerberosOptions.put("refreshKrb5Config", "true");
+   keytabKerberosOptions.putAll(debugOptions);
+
+   keytabKerberosAce = new AppConfigurationEntry(
+   KerberosUtil.getKrb5LoginModuleName(),
+   
AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
+   keytabKerberosOptions);
+   }
+   }
+
+   private static final Map<String, String> debugOptions = new HashMap<>();
+
+   private static final Map<String, String> kerberosCacheOptions = new

[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73012593
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/security/JaasConfiguration.java
 ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.util.Preconditions;
+import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ *
+ * JAAS configuration provider object that provides default LoginModule 
for various connectors that supports
+ * JAAS/SASL based Kerberos authentication. The implementation is inspired 
from Hadoop UGI class.
+ *
+ * Different connectors uses different login module name to implement JAAS 
based authentication support.
+ * For example, Kafka expects the login module name to be "kafkaClient" 
whereas ZooKeeper expect the
+ * name to be "client". This sets onus on the Flink cluster administrator 
to configure/provide right
+ * JAAS config entries. To simplify this requirement, we have introduced 
this abstraction that provides
+ * a standard lookup to get the login module entry for the JAAS based 
authentication to work.
+ *
+ * HDFS connector will not be impacted with this configuration since it 
uses UGI based mechanism to authenticate.
+ *
+ * <a href="https://docs.oracle.com/javase/7/docs/api/javax/security/auth/login/Configuration.html">Configuration</a>
+ *
+ */
+
+@Internal
+public class JaasConfiguration extends Configuration {
+
+   private static final Logger LOG = 
LoggerFactory.getLogger(JaasConfiguration.class);
+
+   public static final String JAVA_VENDOR_NAME = 
System.getProperty("java.vendor");
+
+   public static final boolean IBM_JAVA;
+
+   static {
+   IBM_JAVA = JAVA_VENDOR_NAME.contains("IBM");
+   }
+
+   public JaasConfiguration(String keytab, String principal) {
+
+   LOG.info("Initializing JAAS configuration instance. Parameters: 
{}, {}", keytab, principal);
+
+   if(!Preconditions.isNullOrEmpty(keytab) && 
!Preconditions.isNullOrEmpty(principal)) {
+
+   if(IBM_JAVA) {
--- End diff --

I have tested with OpenJDK (1.8.0_91) only. Since the code fragment is 
derived from the Hadoop UGI implementation, I assume it will work for 
IBM-based JVMs as well. :)
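The vendor detection quoted in the diff above can be exercised standalone (class name hypothetical; the printed result depends on the JVM running it, so no output is asserted here):

```java
public class VendorCheckDemo {
    public static void main(String[] args) {
        // Mirrors the IBM_JAVA flag above: IBM JDKs expect different
        // Krb5LoginModule option names (useKeytab/credsType) than
        // Oracle/OpenJDK (keyTab/useKeyTab/storeKey/doNotPrompt).
        String vendor = System.getProperty("java.vendor");
        boolean ibmJava = vendor != null && vendor.contains("IBM");
        System.out.println("vendor=" + vendor + ", IBM_JAVA=" + ibmJava);
    }
}
```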




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-08-01 Thread vijikarthi
Github user vijikarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/2275#discussion_r73012055
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -1016,6 +1016,23 @@
/** The environment variable name which contains the location of the 
lib folder */
public static final String ENV_FLINK_LIB_DIR = "FLINK_LIB_DIR";
 
+   //  Security 
---
+
+   /**
+* The config parameter defining security credentials required
+* for securing Flink cluster.
+*/
+
+   /** Keytab file key name to be used in flink configuration file */
+   public static final String SECURITY_KEYTAB_KEY = "security.keytab";
+
+   /** Kerberos security principal key name to be used in flink 
configuration file */
+   public static final String SECURITY_PRINCIPAL_KEY = 
"security.principal";
+
+   /** Keytab file name populated in YARN container */
+   public static final String KEYTAB_FILE_NAME = "krb5.keytab";
--- End diff --

It is just a constant literal and not exposed in the Flink configuration 
file.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-07-27 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
Thanks @mxm for the review and feedback. I will respond to the comments and 
incorporate any changes required.




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-07-26 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
Team, please let me know if any additional details are required to 
kick-start the review process?




[GitHub] flink issue #2275: FLINK-3929 Support for Kerberos Authentication with Keyta...

2016-07-21 Thread vijikarthi
Github user vijikarthi commented on the issue:

https://github.com/apache/flink/pull/2275
  
Adding some more context to the implementation details, which are based on 
the design proposal 
(https://docs.google.com/document/d/1-GQB6uVOyoaXGwtqwqLV8BHDxWiMO2WnVzBoJ8oPaAs/edit?usp=sharing)

The current security implementation works in a subtle way, utilizing the 
Kerberos cache of the user who starts the Flink process/jobs, and only in the 
context of supporting secure access to the Hadoop cluster. The underlying UGI 
implementation of the Hadoop infrastructure is used to harden security using 
the keytab cache. For the YARN mode of deployment, delegation tokens are 
created and populated into the container environment (App Master/JM and TM). 

There are two areas where the current implementation falls short:
1) Tokens expire in due course, which impacts long-running jobs
2) There is no support for secure connections to Kafka and ZK (Kafka 0.9 and 
recent ZK versions support Kerberos-based authentication using SASL/JAAS)

This PR addresses the above gaps by providing keytab support to securely 
communicate with Hadoop and Kafka/ZK services.

1) Additional Configurations: 

The following new security-specific configurations are added to the Flink 
configuration file:
a) security.principal - the user principal that Flink processes/connectors should 
authenticate as 
b) security.keytab - the keytab file location

In standalone mode, it is assumed that the configuration pre-exists (via a manual 
process) on all cluster nodes where the JM and TMs will be running. 

In Yarn mode, the configuration (and keytab file) is expected only on the 
node from which the YarnCLI or FlinkCLI is invoked. Application code takes 
care of copying the keytab file to the JM/TM Yarn containers as a local resource 
for lookup.

In the absence of security configurations, the delegation token 
mechanism still works, preserving backward compatibility (manual kinit before 
starting the JM/TMs).
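
Given the two configuration keys above, a minimal flink-conf.yaml fragment might 
look like the following (the principal and keytab path are placeholder values, not 
taken from this PR):

```yaml
# Hypothetical example values -- substitute your own principal and keytab path.
security.principal: flink-user@EXAMPLE.COM
security.keytab: /path/to/flink-user.keytab
```

Omitting both keys falls back to the delegation-token behavior described above.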

2) Process-wide in-memory JAAS configuration to enable Kafka/ZK secure 
authentication.
 
The JAAS configuration plays a critical role in authentication for 
Kerberized applications. The Kafka/ZK login module code is expected to construct a 
login context based on the supplied JAAS configuration file entries and 
authenticate to produce a subject. The context is constructed with an 
application name which acts as a lookup key into the configuration, yielding 
one or more login modules. Each login module implements a specific strategy, 
such as using a configured keytab or the user's ticket cache.
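
For context, the lookup described above is conventionally driven by a per-process 
JAAS file. Here is a sketch of such a file: KafkaClient and Client are the default 
application names the Kafka and ZooKeeper client libraries look up, and the keytab 
path and principal are placeholders:

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/flink-user.keytab"
  principal="flink-user@EXAMPLE.COM";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/flink-user.keytab"
  principal="flink-user@EXAMPLE.COM";
};
```

Maintaining one such file per connector and per deployment is the overhead that an 
in-memory configuration object avoids.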

Instead of managing per-connector JAAS configuration file, a process-wide 
JAAS configuration object is initialized during Flink bootstrap phase, thus 
providing a singular login module to all callers configured to login using the 
supplied keytab.

(https://docs.oracle.com/javase/7/docs/api/javax/security/auth/login/Configuration.html#setConfiguration(javax.security.auth.login.Configuration))
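
A JDK-only sketch of such a process-wide JAAS configuration object follows. The 
class name, option values, keytab path, and principal are illustrative, not Flink's 
actual implementation:

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Illustrative sketch only -- not Flink's actual class. It installs one
// process-wide JAAS Configuration that answers every application-name lookup
// (e.g. "KafkaClient", "Client") with the same keytab-backed Kerberos module.
public class ProcessWideJaasSketch {

    static Configuration keytabConfiguration(final String keytab, final String principal) {
        final Map<String, String> options = new HashMap<>();
        options.put("useKeyTab", "true");
        options.put("storeKey", "true");
        options.put("doNotPrompt", "true");
        options.put("keyTab", keytab);
        options.put("principal", principal);

        return new Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String applicationName) {
                // Same login module regardless of which name the caller looks up.
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        "com.sun.security.auth.module.Krb5LoginModule",
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        options)
                };
            }
        };
    }

    public static void main(String[] args) {
        // Install during bootstrap; all later JAAS lookups in this JVM see it.
        Configuration.setConfiguration(
            keytabConfiguration("/path/to/flink.keytab", "flink-user@EXAMPLE.COM"));

        AppConfigurationEntry[] entries =
            Configuration.getConfiguration().getAppConfigurationEntry("KafkaClient");
        System.out.println(entries[0].getLoginModuleName());
        // prints: com.sun.security.auth.module.Krb5LoginModule
    }
}
```

Because Configuration.setConfiguration is process-wide, any library that performs a 
JAAS login afterwards (Kafka client, ZooKeeper client) resolves to the same keytab 
entry without a jaas.conf file on disk.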

To summarize, the following sequence happens when the secure configuration is 
enabled. Flink bootstrap code (both Yarn and standalone) initializes the security 
context by:
a) Initializing UGI with the supplied keytab and principal, which takes care 
of handling Kerberos authentication and login renewal for Hadoop services. 
b) Creating a process-wide JAAS configuration object for the Kafka/ZK login 
modules to support Kerberos/SASL authentication. Login renewals are 
automatically taken care of by the ZK and Kafka login module implementations.

Some additional details are provided in the documentation page as well, which 
can be referenced here:

(https://github.com/vijikarthi/flink/blob/FLINK-3929/docs/internals/flink_security.md)




[GitHub] flink pull request #2275: FLINK-3929 Support for Kerberos Authentication wit...

2016-07-20 Thread vijikarthi
GitHub user vijikarthi opened a pull request:

https://github.com/apache/flink/pull/2275

FLINK-3929 Support for Kerberos Authentication with Keytab Credential

This PR addresses FLINK-3929 requirements:
1) Added Keytab support to Flink (Standalone and Yarn mode deployment)
2) Unified JAAS configuration support to services (Kafka/ZK)
3) Added MiniKDC support to run integration test cases for the FS/Kafka 
connectors and Yarn
4) Added documentation that explains how the security model works

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vijikarthi/flink FLINK-3929

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2275.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2275


commit 824d16696bd05d5b2aa2920813dacfda736c7372
Author: Vijay Srinivasaraghavan <vijayaraghavan.srinivasaragha...@emc.com>
Date:   2016-07-21T00:08:33Z

FLINK-3929 Support for Kerberos Authentication with Keytab Credential






[GitHub] flink pull request #:

2016-06-20 Thread vijikarthi
Github user vijikarthi commented on the pull request:


https://github.com/apache/flink/commit/c8fed99e3e85a4d27c6134cfa3e07fb3a8e1da2a#commitcomment-17935989
  
In pom.xml, on line 972:
This version (2.18.1) has some issues running scoped test cases. For example, 
it does not run "mvn verify -pl flink-yarn-tests -Pinclude-yarn-tests 
-Dtest=YARNSessionFIFOITCase#testJavaAPI". I reverted to version 2.19.1 
and it worked fine.

