Re: Java 8 lambdas for CEP patterns won't compile

2017-06-11 Thread Ted Yu
Looking at docs/dev/libs/cep.md, there are 3 examples using lambdas.
Here is one:

Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
    .where(evt -> evt.getId() == 42);

Your syntax should be supported.
I haven't found such an example in the test code, though.

FYI
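
If the lambda still refuses to compile, an explicit condition class is a
fallback that works regardless of the functional-interface issue -- a
sketch, assuming an Event POJO with a getId() accessor:

    Pattern<Event, ?> pattern = Pattern.<Event>begin("start")
            .where(new SimpleCondition<Event>() {
                // subclassing SimpleCondition sidesteps the lambda
                // target-type problem entirely
                @Override
                public boolean filter(Event evt) {
                    return evt.getId() == 42;
                }
            });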

On Sun, Jun 11, 2017 at 2:42 PM, David Koch  wrote:

> Hello,
>
> I cannot get patterns expressed as lambdas like:
>
> Pattern<Event, ?> pattern1 = Pattern.<Event>begin("start")
>     .where(evt -> evt.key.length() > 0)
>     .next("last").where(evt -> evt.key.length() > 0)
>     .within(Time.seconds(5));
>
> to compile with Flink 1.3.0. My IDE doesn't handle it and building on
> command line with maven does not work either. The exception given by a
> maven build in command line is:
>
> [ERROR] The method where(IterativeCondition<Event>) in the type
> Pattern<Event,Event> is not applicable for the arguments ((<no type> evt) -> {})
> [ERROR] /Users//xxx/cep-test/src/main/java/com///CEPTest.java:[83]
> [ERROR] .where(evt -> evt.key.length() > 0)
> [ERROR] ^^^
> [ERROR] The target type of this expression must be a functional interface
>
> I used the standard pom.xml generated by the Flink quick start archetype.
> If I recall correctly this is something that used to work with Flink
> 1.2.0-SNAPSHOT back when I tested CEP for the first time.
>
> Any idea why this could be the case, or is my syntax incorrect? I include
> my Maven information below.
>
> Thank you,
>
> David
>
>
> Davids-MacBook-Pro-2:cep-test dkoch$ mvn -v
> Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
> 2015-11-10T17:41:47+01:00)
> Maven home: /usr/local/Cellar/maven/3.3.9/libexec
> Java version: 1.8.0_102, vendor: Oracle Corporation
> Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "mac os x", version: "10.11.6", arch: "x86_64", family: "mac"
>


Re: Listening to timed-out patterns in Flink CEP

2017-06-11 Thread David Koch
Hello,

It's been a while and I never replied on the list. In fact, the fix
committed by Till does work. Thanks!
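
For anyone finding this thread later, the 1.3 API for reacting to timed-out
partial matches looks roughly like the sketch below (types and messages are
illustrative; Either comes from org.apache.flink.types):

    PatternStream<Event> patternStream = CEP.pattern(input, pattern);

    // The timeout function fires for partial matches whose window expired,
    // i.e. the expected event did NOT occur within the specified time.
    DataStream<Either<String, String>> result = patternStream.select(
            new PatternTimeoutFunction<Event, String>() {
                @Override
                public String timeout(Map<String, List<Event>> partialMatch,
                                      long timeoutTimestamp) {
                    return "pattern timed out at " + timeoutTimestamp;
                }
            },
            new PatternSelectFunction<Event, String>() {
                @Override
                public String select(Map<String, List<Event>> match) {
                    return "pattern matched";
                }
            });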

On Tue, Apr 25, 2017 at 9:37 AM, Moiz Jinia  wrote:

> Hey David,
> Did that work for you? If yes could you share an example. I have a similar
> use case - need to get notified of an event NOT occurring within a
> specified
> time window.
>
> Thanks much!
>
> Moiz
>
>
>
> --
> View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Listening-to-timed-out-patterns-in-Flink-CEP-tp9371p12800.html
> Sent from the Apache Flink User Mailing List archive at Nabble.com.
>


Re: Problem with WebUI

2017-06-11 Thread Chesnay Schepler
This looks like a dependency conflict to me. Try checking whether 
anything you use depends on netty.
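
For example, with a Maven build (an illustrative invocation):

    mvn dependency:tree -Dincludes=io.netty

This lists every dependency path that pulls in a netty artifact, so you can
spot conflicting versions.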


On 09.06.2017 17:42, Dawid Wysakowicz wrote:

I had a look into the YARN logs and found this exception:

2017-06-09 17:10:20,922 ERROR org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler - Caught exception
java.lang.AbstractMethodError
    at io.netty.util.ReferenceCountUtil.touch(ReferenceCountUtil.java:73)
    at io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
    at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
    at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
    at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
    at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:105)
    at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:435)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:745)


Any idea how to tackle it?

Z pozdrowieniami! / Cheers!


Dawid Wysakowicz

*Data/Software Engineer*

Skype: dawid_wys | Twitter: @OneMoreCoder




2017-06-09 16:17 GMT+02:00 Dawid Wysakowicz:


Hi,

I am trying to run a Flink job on YARN. When I submit the job with the
following command

Re: At what point do watermarks get injected into the stream?

2017-06-11 Thread Ray Ruvinskiy
Thanks for the explanation, Fabian.

Suppose I have a parallel source that does not inject watermarks, and the first 
operation on the DataStream is assignTimestampsAndWatermarks. Does each 
parallel task that makes up the source independently inject watermarks for the 
records that it has read? Suppose I then call keyBy and a shuffle ensues. Will 
the resulting partitions after the shuffle have interleaved watermarks from the 
various source tasks?

More concretely, suppose a source has a degree of parallelism of two. One of 
the source tasks injects the watermarks 2 and 5, while the other injects 3 and 
10. There is then a shuffle, creating two different partitions. Will all the 
watermarks be broadcast to all the partitions? Or is it possible for, say, one 
partition to end up with watermarks 2 and 10 and another with 3 and 5? And 
after the shuffle, how do we ensure that the watermarks are processed in order 
by the operators receiving them?

Thanks,

Ray

From: Fabian Hueske 
Date: Saturday, June 10, 2017 at 3:56 PM
To: Ray Ruvinskiy 
Cc: "user@flink.apache.org" 
Subject: Re: At what point do watermarks get injected into the stream?

Hi Ray,
in principle, watermarks can be injected anywhere in a stream by calling 
DataStream.assignTimestampsAndWatermarks().
However, timestamps are usually injected as soon as possible after a stream is 
ingested (before the first shuffle). The reason is that watermarks depend on 
the order of events (and their timestamps) in the stream. While Flink 
guarantees the order of events within a partition, a shuffle interleaves events 
of different partitions in an unpredictable way such that it is not possible to 
reason about the order of timestamps afterwards.
The most common way to inject watermarks is directly inside of a SourceFunction 
or with a TimestampAssigner before the first shuffle.
Best, Fabian
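
To make that concrete, a minimal sketch (assuming an Event type with a
timestamp accessor; names are illustrative):

    DataStream<Event> withWatermarks = input
            .assignTimestampsAndWatermarks(
                    // each parallel instance emits watermarks for the
                    // records it has read, before the first shuffle
                    new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(1)) {
                        @Override
                        public long extractTimestamp(Event evt) {
                            return evt.getTimestamp();
                        }
                    });

Watermarks are broadcast to all downstream partitions, and an operator with
multiple input channels tracks the minimum watermark across them, so
interleaved watermarks from different source tasks cannot move event time
backwards.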

2017-06-09 0:46 GMT+02:00 Ray Ruvinskiy:
I’m trying to build a mental model of how watermarks get injected into the 
stream. Suppose I have a stream with a parallel source, and I’m running a 
cluster with multiple task managers. Does each parallel source reader inject 
watermarks, which are then forwarded to downstream consumers and shuffled 
between task managers? Or are watermarks created after the shuffle, when the 
stream records reach their destined task manager and right before they’re 
processed by the operator?

Thanks,

Ray



Re: Fink: KafkaProducer Data Loss

2017-06-11 Thread ninad
Thanks Gordon.

On Jun 11, 2017 9:11 AM, "Tzu-Li (Gordon) Tai [via Apache Flink User
Mailing List archive.]"  wrote:

> Hi Ninad,
>
> Thanks for the logs!
> Just to let you know, I’ll continue to investigate this early next week.
>
> Cheers,
> Gordon
>
> On 8 June 2017 at 7:08:23 PM, ninad ([hidden email]) wrote:
>
> [snip -- ninad's report is quoted in full in the following message]

Re: Fink: KafkaProducer Data Loss

2017-06-11 Thread Tzu-Li (Gordon) Tai
Hi Ninad,

Thanks for the logs!
Just to let you know, I’ll continue to investigate this early next week.

Cheers,
Gordon
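
(For anyone hitting similar symptoms in the meantime, one producer setting
worth double-checking -- a sketch, assuming the 0.9 connector with
illustrative topic/broker names; not a confirmed diagnosis of this report:)

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "broker:9092");

    FlinkKafkaProducer09<String> producer = new FlinkKafkaProducer09<>(
            "topic", new SimpleStringSchema(), props);
    // make checkpoints wait until all in-flight records are acknowledged;
    // otherwise records still buffered in the producer can be lost on failure
    producer.setFlushOnCheckpoint(true);
    // fail the job on asynchronous send errors instead of only logging them
    producer.setLogFailuresOnly(false);
    stream.addSink(producer);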

On 8 June 2017 at 7:08:23 PM, ninad (nni...@gmail.com) wrote:

I built Flink v1.3.0 with cloudera hadoop and am able to see the data loss.  

Here are the details:  

*tmOneCloudera583.log*  

Received session window task:  
*2017-06-08 15:10:46,131 INFO org.apache.flink.runtime.taskmanager.Task
- TriggerWindow(ProcessingTimeSessionWindows(3),
ListStateDescriptor{serializer=org.apache.flink.api.common.typeutils.base.ListSerializer@f4361ec},
ProcessingTimeTrigger(), WindowedStream.apply(WindowedStream.java:1100)) ->
Sink: sink.http.sep (1/1) (3ef2f947096f3b0986b8ad334c7d5d3c) switched from
CREATED to DEPLOYING.

Finished checkpoint 2 (Synchronous part)
2017-06-08 15:15:51,982 DEBUG
org.apache.flink.streaming.runtime.tasks.StreamTask -
TriggerWindow(ProcessingTimeSessionWindows(3),
ListStateDescriptor{serializer=org.apache.flink.api.common.typeutils.base.ListSerializer@f4361ec},
ProcessingTimeTrigger(), WindowedStream.apply(WindowedStream.java:1100)) ->
Sink: sink.http.sep (1/1) - finished synchronous part of checkpoint 2.
Alignment duration: 0 ms, snapshot duration 215 ms*

The task failed before the completion of the checkpoint could be verified,
i.e. I don't see the log saying "Notification of complete checkpoint for
task TriggerWindow" for checkpoint 2.

*jmCloudera583.log*  

Job Manager received acks for checkpoint 2  

*2017-06-08 15:15:51,898 DEBUG  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Received  
acknowledge message for checkpoint 2 from task  
3e980e4d3278ef0d0f5e0fa9591cc53d of job 3f5aef5e15a23bce627c05c94760fb16.  
2017-06-08 15:15:51,982 DEBUG  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Received  
acknowledge message for checkpoint 2 from task  
3ef2f947096f3b0986b8ad334c7d5d3c of job 3f5aef5e15a23bce627c05c94760fb16*.  

Job Manager tried to restore from checkpoint 2.  

*2017-06-08 15:16:02,111 INFO  
org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore -  
Found 1 checkpoints in ZooKeeper.  
2017-06-08 15:16:02,111 INFO  
org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore -  
Trying to retrieve checkpoint 2.  
2017-06-08 15:16:02,122 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Restoring  
from latest valid checkpoint: Checkpoint 2 @ 1496934766105 for 3f5aef5e15a23bce627c05c94760fb16.*

*tmTwocloudera583.log*  

Task Manager tried to restore the data and was successful.  

*2017-06-08 15:16:02,258 DEBUG  
org.apache.flink.runtime.state.heap.HeapKeyedStateBackend - Restoring  
snapshot from state handles:  
[KeyGroupsStateHandle{groupRangeOffsets=KeyGroupRangeOffsets{keyGroupRange=KeyGroupRange{startKeyGroup=0,
endKeyGroup=127}, offsets=[13460, 13476, 13492, 13508, 13524, 13540, 13556,
13572, 13588, 13604, 13620, 13636, 13652, 13668, 13684, 14566, 14582, 14598,  
14614, 14630, 14646, 14662, 14678, 14694, 14710, 14726, 14742, 14758, 14774,  
14790, 14806, 14822, 14838, 14854, 14870, 14886, 14902, 14918, 14934, 14950,  
14966, 14982, 14998, 15014, 15030, 15046, 15062, 15078, 15094, 15110, 15126,  
15142, 15158, 15174, 15190, 15206, 16088, 16104, 16120, 16136, 28818, 28834,  
28850, 28866, 28882, 28898, 28914, 28930, 28946, 28962, 28978, 28994, 29010,  
29026, 29042, 29058, 29074, 29090, 29106, 29122, 29138, 29154, 29170, 40346,  
40362, 40378, 40394, 40410, 40426, 40442, 40458, 40474, 40490, 40506, 40522,  
40538, 41420, 41436, 41452, 41468, 41484, 41500, 41516, 41532, 41548, 41564,  
41580, 41596, 41612, 41628, 41644, 41660, 41676, 41692, 41708, 41724, 41740,  
41756, 41772, 41788, 41804, 41820, 41836, 41852, 41868, 41884, 41900,  
41916]}, data=File State:  
hdfs://dc1udtlhcld005.stack.qadev.corp:8022/user/harmony_user/flink-checkpoint/3f5aef5e15a23bce627c05c94760fb16/chk-2/aa076014-120b-4955-ab46-89094358ebeb
[41932 bytes]}].*

But apparently the restored state didn't have all the messages the window
had received: a few messages were not replayed, and the Kafka sink didn't
receive all the messages.

Attaching the files here.  

jmCloudera583.log
<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n13597/jmCloudera583.log>
tmOneCloudera583.log
<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n13597/tmOneCloudera583.log>
tmTwoCloudera583.log
<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n13597/tmTwoCloudera583.log>

BTW, I posted my test results and logs for regular Flink v1.3.0 on Jun 6,
but don't see that post here. I did receive an email though. Hope you guys
saw that.

Thanks for your patience guys.  



--
View this message in context: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Fink-KafkaProducer-Data-Loss-tp11413p13597.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.

Re: In-transit Data Encryption in EMR

2017-06-11 Thread Tzu-Li (Gordon) Tai
Hi Vinay,

Apologies for the inactivity on this thread, I was occupied with some critical 
fixes for 1.3.1.

1. Can anyone please explain me how do you test if SSL is working correctly ? 
Currently I am just relying on the logs.

AFAIK, if any of the SSL configuration settings are enabled (*.ssl.enabled) and 
your job is running fine, then everything should be functioning.
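
One way to probe the endpoint from outside (generic TLS tooling, not
Flink-specific; host and port are placeholders):

    openssl s_client -connect <jobmanager-host>:<web-ui-port>

A successful handshake prints the certificate chain the server presents, so
you can confirm the expected certificate is actually being served.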

2. Wild Card is not working with the keytool command, can you please let me 
know what is the issue with the following command:

The wildcard option only works for wildcarding subdomains.
For example, SAN=*.domain.com
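
So a working variant of your command would look along these lines (keystore
file name and domain are illustrative):

    keytool -genkeypair -alias ca -keystore ca.keystore -ext SAN=dns:*.domain.com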

On 9 June 2017 at 4:33:46 PM, vinay patil (vinay18.pa...@gmail.com) wrote:

Hi Guys,

Can anyone please provide me solution to my queries.

On Jun 8, 2017 11:30 PM, "Vinay Patil" <[hidden email]> wrote:
Hi Guys,

I am able to set up SSL correctly; however, the following command does not
work correctly and results in the error I mailed earlier:
flink run -m yarn-cluster -yt deploy-keys/ TestJob.jar

A few doubts:
1. Can anyone please explain me how do you test if SSL is working correctly ? 
Currently I am just relying on the logs.

2. Wild Card is not working with the keytool command, can you please let me 
know what is the issue with the following command:

keytool -genkeypair -alias ca -keystore: -ext SAN=dns:node1.* 


Regards,
Vinay Patil

On Mon, Jun 5, 2017 at 8:43 PM, vinay patil [via Apache Flink User Mailing List 
archive.] <[hidden email]> wrote:
Hi Gordon,

The yarn session gets created when I try to run the following command:
yarn-session.sh -n 4 -s 2 -jm 1024 -tm 3000 -d --ship deploy-keys/

However when I try to access the Job Manager UI, it gives me exception as :
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target

I am able to see the Job Manager UI when I import the CA certificate into
the Java truststore on the EMR master node:

keytool -keystore /etc/alternatives/jre/lib/security/cacerts -importcert -alias FLINKSSL -file ca.cer


Does this mean that SSL is configured correctly? I can see it in the Job
Manager configuration and also in the logs. Is there any other way to verify?

Also, the keystore and truststore passwords should be masked in the logs,
which is not the case.

2017-06-05 14:51:31,135 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.ssl.enabled, true
2017-06-05 14:51:31,136 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.ssl.keystore, deploy-keys/ca.keystore
2017-06-05 14:51:31,136 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.ssl.keystore-password, password
2017-06-05 14:51:31,136 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.ssl.key-password, password
2017-06-05 14:51:31,136 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.ssl.truststore, deploy-keys/ca.truststore
2017-06-05 14:51:31,136 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: security.ssl.truststore-password, password


Regards,
Vinay Patil




View this message in context: Re: In-transit Data Encryption in EMR
Sent from the Apache Flink User Mailing List archive at Nabble.com.