Re: Zeppelin

2019-04-25 Thread Dawid Wysakowicz
Hi Sergey,

I am not very familiar with Zeppelin. But I know Jeff (cc'ed) is working
on integrating Flink with some notebooks. He might be able to help you.

Best,

Dawid

On 25/04/2019 08:42, Smirnov Sergey Vladimirovich (39833) wrote:
>
> Hello,
>
>  
>
> Trying to link Zeppelin 0.9 with Flink 1.8. It's a small dev cluster
> deployed in standalone mode.
>
> Got the same error as described here
> https://stackoverflow.com/questions/54257671/runnning-a-job-in-apache-flink-standalone-mode-on-zeppelin-i-have-this-error-to
>
> Would appreciate any help resolving this problem.
>
>  
>
> Regards,
>
> Sergey
>
>  
>




Re: Zeppelin

2019-04-25 Thread Jeff Zhang
Thanks Dawid,

Hi Sergey,

I am working on updating the Flink interpreter of Zeppelin to support Flink
1.9 (supposed to be released this summer).
I haven't verified the current Flink interpreter of Zeppelin 0.9 against
Flink 1.8. Could you share the full interpreter log? And what is the size of
your input file?




-- 
Best Regards

Jeff Zhang


RE: Zeppelin

2019-04-26 Thread Smirnov Sergey Vladimirovich (39833)
Hi,

Dawid, great, thanks for answering.

Jeff,
flink 1.8 with default settings, standalone cluster, one job manager node and
three task manager nodes.
zeppelin 0.9 config:
checked "Connect to existing cluster"
host: 10.219.179.16
port: 6123
created a simple notebook:
%flink
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

zeppelin logs:
2019-04-23 10:09:17,241 WARN  akka.remote.transport.netty.NettyTransport - Remote connection to [/10.216.26.26:45588] failed with org.apache.flink.shaded.akka.org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 10485760: 2147549189 - discarded
2019-04-23 10:09:29,475 WARN  akka.remote.transport.netty.NettyTransport - Remote connection to [/10.216.26.26:45624] failed with org.apache.flink.shaded.akka.org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 10485760: 2147549189 - discarded
flink:
org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_createInterpreter(RemoteInterpreterService.java:189)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.createInterpreter(RemoteInterpreterService.java:172)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:169)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:118)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:290)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:443)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:75)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:181)
    at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:123)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:187)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
...
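
The TooLongFrameException above means Akka dropped a message larger than its default 10 MiB frame limit. If the message is legitimately that large, the limit can be raised on the Flink side; a sketch of the relevant flink-conf.yaml entry follows. Both the setting's applicability here and the chosen value are assumptions, not something confirmed in this thread, and the reported length (2147549189, just over 2^31) is so implausible that a client/cluster version or protocol mismatch is an equally likely cause:

```yaml
# flink-conf.yaml (sketch, not from the thread):
# raise Akka's maximum frame size from the 10485760b default.
# 104857600b (100 MiB) is an illustrative value only.
akka.framesize: 104857600b
```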

Regards,
Sergey



Re: Zeppelin Integration

2015-10-21 Thread Till Rohrmann
Hi Trevor,

in order to use Zeppelin with a different Flink version in local mode,
meaning that Zeppelin starts a LocalFlinkMiniCluster when executing your
jobs, you have to build Zeppelin and change the flink.version property in
the zeppelin/flink/pom.xml file to the version you want to use.

If you want to let Zeppelin submit jobs to a remote cluster, you should
build Zeppelin with the version of your cluster. That’s because internally
Zeppelin will use this version to construct a JobGraph which is then
submitted to the cluster. In order to configure the remote cluster, you
have to go to the *Interpreter* page and scroll down to the *flink* section.
There you have to specify the address of your cluster under *host* and the
port under *port*. This should then be used to submit jobs to the Flink
cluster.
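
Till's build advice can be sketched as a shell session. The repository URL, directory names, and version number below are illustrative assumptions rather than commands taken from the thread:

```shell
# Build Zeppelin against the Flink version of your cluster (sketch).
# The project was still in incubation at the time, hence the repo name.
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin

# Either edit the flink.version property in flink/pom.xml, or override it
# on the command line when building:
mvn clean package -DskipTests -Dflink.version=0.10-SNAPSHOT
```

Overriding with -Dflink.version only works because the pom declares the Flink version as a Maven property, which is what Till's pom.xml instruction indicates.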

I hope this answers your question.

Btw: If you want to use Zeppelin with the latest Flink 0.10-SNAPSHOT
version, you should checkout my branch
https://github.com/tillrohrmann/incubator-zeppelin/tree/flink-0.10-SNAPSHOT
where I’ve made the necessary changes.

Cheers,
Till
​

On Wed, Oct 21, 2015 at 5:00 PM, Trevor Grant 
wrote:

> I'm setting up some Flink/Spark/Zeppelin at work.  Spark+Zeppelin seems to
> be relatively well supported and configurable, but Flink is not so much.
>
> I want Zeppelin to run against my 0.10 build instead of the 0.6 build that
> ships with Zeppelin.  My best guess at the moment on how to accomplish this
> is to create a symbolic link from the /opt/zepplin/flink folder to
> /opt/flink-0.10, but this feels dirty and wrong.
>
> Does anyone out there have any experience connecting Zeppelin to a
> non-prepackaged Flink build?
>
> I feel like there is a great opportunity for a HOWTO write-up if none
> currently exists.
>
> I'm asking on the Zeppelin user mailing list too as soon as I am added.
>
> Thanks for any help
>
> tg
>
>
> Trevor Grant
> Data Scientist
> https://github.com/rawkintrevo
> http://stackexchange.com/users/3002022/rawkintrevo
>
> *"Fortunate is he, who is able to know the causes of things."  -Virgil*
>
>


Re: Zeppelin Integration

2015-10-21 Thread Trevor Grant
Hey Till,

I cloned your branch of Zeppelin and while it compiles, it fails tests on a
timeout, which incidentally is the same issue I was having when trying to
use Zeppelin.

Ideas?

---
Test set: org.apache.zeppelin.flink.FlinkInterpreterTest
---
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 100.347 sec <<< FAILURE! - in org.apache.zeppelin.flink.FlinkInterpreterTest
org.apache.zeppelin.flink.FlinkInterpreterTest  Time elapsed: 100.347 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Futures timed out after [10 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.flink.runtime.minicluster.FlinkMiniCluster.getLeaderIndex(FlinkMiniCluster.scala:171)
    at org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster.getLeaderRPCPort(LocalFlinkMiniCluster.scala:132)
    at org.apache.zeppelin.flink.FlinkInterpreter.getPort(FlinkInterpreter.java:136)
    at org.apache.zeppelin.flink.FlinkInterpreter.open(FlinkInterpreter.java:98)
    at org.apache.zeppelin.flink.FlinkInterpreterTest.setUp(FlinkInterpreterTest.java:42)

org.apache.zeppelin.flink.FlinkInterpreterTest  Time elapsed: 100.347 sec  <<< ERROR!
java.lang.NullPointerException: null
    at org.apache.zeppelin.flink.FlinkInterpreter.close(FlinkInterpreter.java:221)
    at org.apache.zeppelin.flink.FlinkInterpreterTest.tearDown(FlinkInterpreterTest.java:48)



Trevor Grant
Data Scientist
https://github.com/rawkintrevo
http://stackexchange.com/users/3002022/rawkintrevo

*"Fortunate is he, who is able to know the causes of things."  -Virgil*




Re: Zeppelin Integration

2015-10-22 Thread Till Rohrmann
Hi Trevor,

that’s actually my bad, since I had only tested my branch against a remote
cluster. I fixed the problem (the LocalFlinkMiniCluster was not being
started properly), so you can now use Zeppelin in local mode as well.
Just check out my branch again.

Cheers,
Till
​



Re: Zeppelin Integration

2015-11-04 Thread Robert Metzger
For those interested, Trevor wrote a blog post describing how to setup
Spark, Flink and Zeppelin, both locally and on clusters:
http://trevorgrant.org/2015/11/03/apache-casserole-a-delicious-big-data-recipe-for-the-whole-family/
Thanks Trevor for the great tutorial!


Re: Zeppelin Integration

2015-11-04 Thread Till Rohrmann
Really cool tutorial Trevor :-)


Re: Zeppelin Integration

2015-11-04 Thread Leonard Wolters
Indeed very nice! Thanks

Re: Zeppelin Integration

2015-11-04 Thread Vasiliki Kalavri
Great tutorial! Thanks a lot ^^

On 4 November 2015 at 17:12, Leonard Wolters  wrote:

> Indeed very nice! Thanks
> On Nov 4, 2015 5:04 PM, "Till Rohrmann"  wrote:
>
>> Really cool tutorial Trevor :-)
>>
>> On Wed, Nov 4, 2015 at 3:26 PM, Robert Metzger 
>> wrote:
>>
>>> For those interested, Trevor wrote a blog post describing how to setup
>>> Spark, Flink and Zeppelin, both locally and on clusters:
>>> http://trevorgrant.org/2015/11/03/apache-casserole-a-delicious-big-data-recipe-for-the-whole-family/
>>> Thanks Trevor for the great tutorial!
>>>
>>> On Thu, Oct 22, 2015 at 4:23 PM, Till Rohrmann 
>>> wrote:
>>>
 Hi Trevor,

 that’s actually my bad since I only tested my branch against a remote
 cluster. I fixed the problem (the LocalFlinkMiniCluster was not being
 started properly) so that you can now use Zeppelin also in local
 mode. Just check out my branch again.

 Cheers,
 Till
 ​

 On Wed, Oct 21, 2015 at 10:00 PM, Trevor Grant <
 trevor.d.gr...@gmail.com> wrote:

> Hey Till,
>
> I cloned your branch of Zeppelin and while it compiles, it fails
> tests with a timeout, which is the same issue I was having when
> trying to use Zeppelin.
>
> Ideas?
>
>
> ---
> Test set: org.apache.zeppelin.flink.FlinkInterpreterTest
>
> ---
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed:
> 100.347 sec <<< FAILURE! - in 
> org.apache.zeppelin.flink.FlinkInterpreterTest
> org.apache.zeppelin.flink.FlinkInterpreterTest  Time elapsed: 100.347
> sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Futures timed out after [10
> milliseconds]
> at
> scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
> at
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
> at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
> at
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> at scala.concurrent.Await$.result(package.scala:107)
> at
> org.apache.flink.runtime.minicluster.FlinkMiniCluster.getLeaderIndex(FlinkMiniCluster.scala:171)
> at
> org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster.getLeaderRPCPort(LocalFlinkMiniCluster.scala:132)
> at
> org.apache.zeppelin.flink.FlinkInterpreter.getPort(FlinkInterpreter.java:136)
> at
> org.apache.zeppelin.flink.FlinkInterpreter.open(FlinkInterpreter.java:98)
> at
> org.apache.zeppelin.flink.FlinkInterpreterTest.setUp(FlinkInterpreterTest.java:42)
>
> org.apache.zeppelin.flink.FlinkInterpreterTest  Time elapsed: 100.347
> sec  <<< ERROR!
> java.lang.NullPointerException: null
> at
> org.apache.zeppelin.flink.FlinkInterpreter.close(FlinkInterpreter.java:221)
> at
> org.apache.zeppelin.flink.FlinkInterpreterTest.tearDown(FlinkInterpreterTest.java:48)
>
>
>
> Trevor Grant
> Data Scientist
> https://github.com/rawkintrevo
> http://stackexchange.com/users/3002022/rawkintrevo
>
> *"Fortunate is he, who is able to know the causes of things."  -Virgil*
>
>
> On Wed, Oct 21, 2015 at 11:57 AM, Till Rohrmann 
> wrote:
>
>> Hi Trevor,
>>
>> in order to use Zeppelin with a different Flink version in local
>> mode, meaning that Zeppelin starts a LocalFlinkMiniCluster when
>> executing your jobs, you have to build Zeppelin and change the
>> flink.version property in the zeppelin/flink/pom.xml file to the
>> version you want to use.
>>
>> If you want to let Zeppelin submit jobs to a remote cluster, you
>> should build Zeppelin with the version of your cluster. That’s because
>> internally Zeppelin will use this version to construct a JobGraph
>> which is then submitted to the cluster. In order to configure the remote
>> cluster, you have to go to the *Interpreter* page and scroll down to
>> the *flink* section. There you have to specify the address of your
>> cluster under *host* and the port under *port*. This should then be
>> used to submit jobs to the Flink cluster.
>>
>> I hope this answers your question.
>>
>> Btw: If you want to use Zeppelin with the latest Flink 0.10-SNAPSHOT
>> version, you should check out my branch
>> https://github.com/tillrohrmann/incubator-zeppelin/tree/flink-0.10-SNAPSHOT
>> where I’ve made the necessary changes.
>>
>> Cheers,
>> Till
>> ​
>>
>> On Wed, Oct 21, 2015 at 5:00 PM, Trevor Grant <
>> trevor.d.gr...@gmail.com> wrote:
>>
>>> I'm setting up some Flink/Spark/Zeppelin at work.  Spark+Zeppelin
>>> seems to be relatively well supported and configurable, but Flink is
>>> not
>
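[Editor's note] Till's local-mode recipe above boils down to rewriting one property in zeppelin/flink/pom.xml and rebuilding. A minimal sketch of that edit follows; the pom fragment and version numbers are illustrative stand-ins, not copied from a real checkout:

```python
import re

# Stand-in for the relevant fragment of zeppelin/flink/pom.xml
# (illustrative content, not the real file).
pom = """<properties>
  <flink.version>0.6</flink.version>
</properties>"""

def set_flink_version(pom_xml: str, version: str) -> str:
    """Rewrite the <flink.version> property to the target Flink version."""
    return re.sub(r"<flink\.version>[^<]*</flink\.version>",
                  "<flink.version>%s</flink.version>" % version,
                  pom_xml)

# Point the build at the Flink version you actually run,
# then rebuild Zeppelin (e.g. `mvn clean package -DskipTests`).
updated = set_flink_version(pom, "0.10-SNAPSHOT")
print(updated)
```

The same idea applies to the remote-cluster case Till describes: build Zeppelin against the cluster's Flink version, then set *host* and *port* in the flink section of the Interpreter page to point at the JobManager.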

Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Timo Walther
You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010 
was not present at that time. You need to upgrade to Flink 1.2.


Timo


Am 17/01/17 um 15:58 schrieb Neil Derraugh:

This is really a Zeppelin question, and I’ve already posted to the user list 
there.  I’m just trying to draw in as many relevant eyeballs as possible.  If 
you can help please reply on the Zeppelin mailing list.

In my Zeppelin notebook I’m having a problem importing the Kafka streaming 
library for Flink.

I added org.apache.flink:flink-connector-kafka_2.11:0.10.2 to the Dependencies 
on the Flink interpreter.

The Flink interpreter runs code, just not if I have the following import.
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

I get this error:
:72: error: object FlinkKafkaConsumer010 is not a member of package 
org.apache.flink.streaming.connectors.kafka
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

Am I doing something wrong here?

Neil





Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Fabian Hueske
One thing to add: Flink 1.2.0 has not been released yet.
The FlinkKafkaConsumer010 is only available in a SNAPSHOT release or the
first release candidate (RC0).

Best, Fabian

2017-01-17 16:08 GMT+01:00 Timo Walther :

> You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010
> was not present at that time. You need to upgrade to Flink 1.2.
>
> Timo
>
>
> Am 17/01/17 um 15:58 schrieb Neil Derraugh:
>
> This is really a Zeppelin question, and I’ve already posted to the user
>> list there.  I’m just trying to draw in as many relevant eyeballs as
>> possible.  If you can help please reply on the Zeppelin mailing list.
>>
>> In my Zeppelin notebook I’m having a problem importing the Kafka
>> streaming library for Flink.
>>
>> I added org.apache.flink:flink-connector-kafka_2.11:0.10.2 to the
>> Dependencies on the Flink interpreter.
>>
>> The Flink interpreter runs code, just not if I have the following import.
>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
>>
>> I get this error:
>> :72: error: object FlinkKafkaConsumer010 is not a member of
>> package org.apache.flink.streaming.connectors.kafka
>> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
>>
>> Am I doing something wrong here?
>>
>> Neil
>>
>
>
>


Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Foster, Craig
Are connectors being included in the 1.2.0 release or do you mean Kafka 
specifically?

From: Fabian Hueske 
Reply-To: "user@flink.apache.org" 
Date: Tuesday, January 17, 2017 at 7:10 AM
To: "user@flink.apache.org" 
Subject: Re: Zeppelin: Flink Kafka Connector

One thing to add: Flink 1.2.0 has not been released yet.
The FlinkKafkaConsumer010 is only available in a SNAPSHOT release or the first 
release candidate (RC0).
Best, Fabian

2017-01-17 16:08 GMT+01:00 Timo Walther <twal...@apache.org>:
You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010 was not 
present at that time. You need to upgrade to Flink 1.2.

Timo


Am 17/01/17 um 15:58 schrieb Neil Derraugh:

This is really a Zeppelin question, and I’ve already posted to the user list 
there.  I’m just trying to draw in as many relevant eyeballs as possible.  If 
you can help please reply on the Zeppelin mailing list.

In my Zeppelin notebook I’m having a problem importing the Kafka streaming 
library for Flink.

I added org.apache.flink:flink-connector-kafka_2.11:0.10.2 to the Dependencies 
on the Flink interpreter.

The Flink interpreter runs code, just not if I have the following import.
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

I get this error:
:72: error: object FlinkKafkaConsumer010 is not a member of package 
org.apache.flink.streaming.connectors.kafka
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

Am I doing something wrong here?

Neil




Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Fabian Hueske
The connectors are included in the release and available as individual
Maven artifacts.
So Flink 1.2.0 will provide a flink-connector-kafka-0.10 artifact (with
version 1.2.0).

2017-01-17 16:22 GMT+01:00 Foster, Craig :

> Are connectors being included in the 1.2.0 release or do you mean Kafka
> specifically?
>
>
>
> *From: *Fabian Hueske 
> *Reply-To: *"user@flink.apache.org" 
> *Date: *Tuesday, January 17, 2017 at 7:10 AM
> *To: *"user@flink.apache.org" 
> *Subject: *Re: Zeppelin: Flink Kafka Connector
>
>
>
> One thing to add: Flink 1.2.0 has not been released yet.
> The FlinkKafkaConsumer010 is only available in a SNAPSHOT release or the
> first release candidate (RC0).
>
> Best, Fabian
>
>
>
> 2017-01-17 16:08 GMT+01:00 Timo Walther :
>
> You are using an old version of Flink (0.10.2). A FlinkKafkaConsumer010
> was not present at that time. You need to upgrade to Flink 1.2.
>
> Timo
>
>
> Am 17/01/17 um 15:58 schrieb Neil Derraugh:
>
>
>
> This is really a Zeppelin question, and I’ve already posted to the user
> list there.  I’m just trying to draw in as many relevant eyeballs as
> possible.  If you can help please reply on the Zeppelin mailing list.
>
> In my Zeppelin notebook I’m having a problem importing the Kafka streaming
> library for Flink.
>
> I added org.apache.flink:flink-connector-kafka_2.11:0.10.2 to the
> Dependencies on the Flink interpreter.
>
> The Flink interpreter runs code, just not if I have the following import.
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
>
> I get this error:
> :72: error: object FlinkKafkaConsumer010 is not a member of
> package org.apache.flink.streaming.connectors.kafka
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
>
> Am I doing something wrong here?
>
> Neil
>
>
>
>
>


Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
Hi Timo & Fabian,

Thanks for replying.  I'm using Zeppelin built off master.  And Flink 1.2
built off the release-1.2 branch.  Is that the right branch?

Neil



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Zeppelin-Flink-Kafka-Connector-tp3p9.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at 
Nabble.com.


Re: Zeppelin: Flink Kafka Connector

2017-01-17 Thread Neil Derraugh
I re-read that enough times and it finally made sense. I wasn’t paying
attention and thought 0.10.2 was the Kafka version (which hasn’t been
released yet either, ha).  I switched to a recent version and it’s all good. :)

Thanks !
Neil

> On Jan 17, 2017, at 11:14 AM, Neil Derraugh 
>  wrote:
> 
> Hi Timo & Fabian,
> 
> Thanks for replying.  I'm using Zeppelin built off master.  And Flink 1.2
> built off the release-1.2 branch.  Is that the right branch?
> 
> Neil
> 
> 
> 
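[Editor's note] The confusion resolved in this thread, in one place: in the coordinate Neil originally added, 0.10.2 is the *Flink* version (too old to contain FlinkKafkaConsumer010), not the Kafka version. From Flink 1.2.0 on, per Fabian's reply, the Kafka version moves into the artifact id and the Flink version becomes the artifact version. A small sketch of the distinction, using a helper written purely for illustration (not a Maven or Zeppelin API):

```python
def parse_coordinate(coord: str) -> dict:
    """Split a group:artifact:version Maven coordinate (hypothetical helper)."""
    group, artifact, version = coord.split(":")
    return {"group": group, "artifact": artifact, "version": version}

# What was added to the Zeppelin interpreter: here 0.10.2 is the Flink
# version, which predates FlinkKafkaConsumer010.
wrong = parse_coordinate("org.apache.flink:flink-connector-kafka_2.11:0.10.2")

# Pattern for Flink 1.2.0 per Fabian's note: Kafka version in the
# artifact id, Flink version as the artifact version.
right = parse_coordinate("org.apache.flink:flink-connector-kafka-0.10_2.11:1.2.0")

print(wrong["version"], "-> this is a Flink version, not a Kafka version")
print(right["artifact"], right["version"])
```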



Re: Zeppelin: Flink Kafka Connector

2017-01-18 Thread Fabian Hueske
Ah, OK :-)
Thanks for reporting back!

Cheers, Fabian

2017-01-17 17:50 GMT+01:00 Neil Derraugh <
neil.derra...@intellifylearning.com>:

> I re-read that enough times and it finally made sense. I wasn’t paying
> attention and thought 0.10.2 was the Kafka version (which hasn’t been
> released yet either, ha).  I switched to a recent version and it’s all
> good. :)
>
> Thanks !
> Neil
>
> > On Jan 17, 2017, at 11:14 AM, Neil Derraugh <neil.derra...@intellifylearning.com> wrote:
> >
> > Hi Timo & Fabian,
> >
> > Thanks for replying.  I'm using Zeppelin built off master.  And Flink 1.2
> > built off the release-1.2 branch.  Is that the right branch?
> >
> > Neil
> >
> >
> >
> > --
> > View this message in context: http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/Zeppelin-Flink-
> Kafka-Connector-tp3p9.html
> > Sent from the Apache Flink User Mailing List archive. mailing list
> archive at Nabble.com.
>
>