I had fought with Kafka 0.8.0.2 and Flink 0.10.2 (Scala 2.11) and was never
able to get it working; I was confounded by NoClassDefFoundError. After
moving to Flink 1.0.0 with Kafka 0.8.0.2 (Scala 2.11), things worked for me.
If moving to Flink 1.0.0 is an option for you, do so.
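For what it's worth, the dependency switch in sbt would look roughly like
this (a sketch; the coordinates below are what I'd expect for Flink 1.0.0
with the 0.8 connector, so double-check them against your setup):

libraryDependencies ++= Seq(
  "org.apache.flink" % "flink-streaming-scala_2.11" % "1.0.0",
  "org.apache.flink" % "flink-connector-kafka-0.8_2.11" % "1.0.0"
)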
balaji
Answered based on my understanding.
On Mon, Apr 18, 2016 at 8:12 AM, Prez Cannady
wrote:
> Some background.
>
> I’m running a Flink application on a single machine, instrumented by Spring
> Boot and launched via the Maven Spring Boot plugin. Basically, I’m trying
> to figure out how much I can squeeze out of a single node processing my
> task before committing to a cluster solution.
Some background.
I’m running a Flink application on a single machine, instrumented by Spring Boot
and launched via the Maven Spring Boot plugin. Basically, I’m trying to figure
out how much I can squeeze out of a single node processing my task before
committing to a cluster solution.
Couple of questions.
Hi Mich,
You can add external dependencies to the Scala shell using the `--addclasspath` option.
There is a more detailed description in the documentation [1].
[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/scala_shell.html#adding-external-dependencies
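For example, when starting a local shell (the jar path below is just a
placeholder):

bin/start-scala-shell.sh local --addclasspath /path/to/flink-connector-kafka-0.8_2.11-1.0.0.jar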
Regards,
Chiwan Park
Hi everyone,
I have a Kafka cluster running on version 0.8.1, hence I'm using the
FlinkKafkaConsumer081. When running my program, I saw a
NoClassDefFoundError for org.apache.kafka.common.Node. So I packaged my
binaries according to
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/
I'm trying what you suggested. Is this what you are suggesting (this is just
a skeleton of the logic, not the actual implementation)?
val dataStream = ...  // window-based stream
val modelStream = ...
val connected = dataStream.connect(modelStream)
val output = connected.map(
  (x: String) => ...,  // applied to dataStream elements
  (m: Model) => ...    // applied to modelStream elements
)
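Something like this is the full version I have in mind (a minimal sketch;
all names and types are made up for illustration): the model stream updates
a field, and each data element is scored against the latest model.

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction
import org.apache.flink.util.Collector

object ConnectSketch {
  case class Event(key: String, value: Double)  // illustrative types
  case class Model(threshold: Double)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val dataStream: DataStream[Event] =
      env.fromElements(Event("a", 1.0), Event("b", 5.0))
    val modelStream: DataStream[Model] =
      env.fromElements(Model(2.0))

    // flatMap1 handles dataStream elements, flatMap2 handles modelStream
    // elements; a plain field holds the latest model (kept simple here,
    // no checkpointed state).
    val output = dataStream.connect(modelStream).flatMap(
      new CoFlatMapFunction[Event, Model, String] {
        private var current = Model(Double.MaxValue) // emit nothing until a model arrives
        override def flatMap1(e: Event, out: Collector[String]): Unit =
          if (e.value > current.threshold) out.collect(s"${e.key} exceeds threshold")
        override def flatMap2(m: Model, out: Collector[String]): Unit =
          current = m
      })

    output.print()
    env.execute("connect-sketch")
  }
}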
Hi there,
I'm trying to run a large number of subtasks within a task slot using a
slot sharing group. The usage scenario is an operator that makes a network
call with high latency yet has a small memory and CPU footprint (sample
code below).
From the docs provided, slotSharingGroup seems to be the place to look.
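A minimal sketch of what I am attempting (the names, sleep and parallelism
below are illustrative placeholders, not the real job):

import org.apache.flink.streaming.api.scala._

object SlotSharingSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val urls: DataStream[String] = env.fromElements("u1", "u2", "u3")

    // Give the latency-bound operator its own slot sharing group with a
    // high parallelism: its subtasks spend most of their time waiting on
    // the network, so many of them can share the group's slots.
    val fetched = urls
      .map { u => Thread.sleep(100); s"fetched:$u" } // stand-in for a high-latency call
      .setParallelism(32)
      .slotSharingGroup("io-heavy")

    fetched.print()
    env.execute("slot-sharing-sketch")
  }
}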
Oh, I solved the problem. Thank you, and sorry for not being precise enough.
On 17 April 2016 at 16:40, Matthias J. Sax wrote:
> Can you be a little bit more precise? It fails when you try to do
>
> bin/start-local.sh
>
> ?? Or what do you mean by "try to start the web interface"? The web
> interface is started automatically within the JobManager process.
Can you be a little bit more precise? It fails when you try to do
bin/start-local.sh
?? Or what do you mean by "try to start the web interface"? The web
interface is started automatically within the JobManager process.
What is the exact error message? Is there any stack trace? Any error in
the log?
Thanks for the heads up! I'm glad you figured it out.
On Sun, Apr 17, 2016, 13:35 neo21 zerro wrote:
> Nevermind, I've figured it out.
> I was skipping the tuples that were coming from Kafka based on some custom
> logic.
> That custom logic made sure that the Kafka operator did not emit any
> tuples. Hence, the missing metrics in the Flink UI.
For the sake of history (at the task manager level), in conf/flink-conf.yaml:
env.java.opts: -Dmy-prop=bla -Dmy-prop2=bla2
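A job running on that task manager can then read the property as usual, e.g.:

val myProp = System.getProperty("my-prop") // "bla", given the opts above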
On 17 April 2016 at 16:25, Igor Berman wrote:
> How do I provide java arguments while submitting job? Suppose I have some
> legacy component that is dependent on java argument configuration.
Sorry, the error is "can't find the path specified"*
On 17 April 2016 at 15:49, Ahmed Nader wrote:
> Thanks, I followed the instructions, and when I try to start the web
> interface I get an error: "can't find file specified". I tried to change the
> env.java.home variable to the path of the Java JDK or Java JRE on my
> machine, however I still get the same error.
Thanks, I followed the instructions, and when I try to start the web
interface I get an error: "can't find file specified". I tried to change the
env.java.home variable to the path of the Java JDK or Java JRE on my
machine, however I still get the same error.
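For reference, the line I have in conf/flink-conf.yaml looks like this (the
path below is an example, not my real one):

env.java.home: C:\Program Files\Java\jdk1.8.0_77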
Any idea how to solve this?
On 17 April 2016 at
How do I provide java arguments while submitting a job? Suppose I have some
legacy component that is dependent on java argument configuration.
I suppose Flink reuses the same JVM for all jobs, so in general I can start
the task manager with the desired arguments, but then all my jobs can't have
different system properties.
Nevermind, I've figured it out.
I was skipping the tuples that were coming from Kafka based on some custom
logic.
That custom logic made sure that the Kafka operator did not emit any tuples.
Hence, the missing metrics in the Flink UI.
On Thursday, April 14, 2016 1:12 AM, neo21 zerro wrote:
Did you double check that your jar does contain the Kafka connector
classes? I would assume that the jar is not assembled correctly.
See here for some help on how to package jars correctly:
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/cluster_execution.html#linking-with-modules-
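As a sketch of one way to get there with sbt-assembly (coordinates and
plugin version are assumptions on my side, adjust to your setup): mark the
core Flink dependencies as provided and bundle the connector, so that
`sbt assembly` produces a fat jar containing the Kafka classes.

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

// build.sbt
libraryDependencies ++= Seq(
  "org.apache.flink" % "flink-streaming-scala_2.11" % "1.0.0" % "provided",
  "org.apache.flink" % "flink-connector-kafka-0.8_2.11" % "1.0.0" // bundled into the job jar
)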
You need to download Flink and install it. Follow these instructions:
https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
-Matthias
On 04/16/2016 04:00 PM, Ahmed Nader wrote:
> Hello,
> I'm new to Flink so this might seem like a basic question. I added Flink to
Hi,
In the Spark shell I can load the Kafka jar file through the spark-shell option --jars:
spark-shell --master spark://50.140.197.217:7077 --jars
,/home/hduser/jars/spark-streaming-kafka-assembly_2.10-1.6.1.jar
This works fine.
In Flink I have added the jar file
/home/hduser/jars/flink-connector-kafka-0.10
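By analogy with the spark-shell command above, what I'd expect for the
Flink Scala shell (per the --addclasspath option mentioned earlier in this
thread; the jar path is a placeholder) is:

bin/start-scala-shell.sh local --addclasspath /path/to/flink-connector-kafka.jar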