Hi Ritesh,
We are using Flink 1.0.0 and don't have issues using the command line or
the web interface.
Can you please share some details of the command you are using, and possibly a
screenshot of the web interface with the values used for submission?
Are you running Flink standalone or over YARN?
It looks like the problem is due to the stack trace below.
Simply put, a connection failure to Kafka under the default settings causes
job submission to take over (flink.get-partitions.retry * tries by
SimpleConsumer * socket.timeout.ms * # of Kafka brokers) = (3 * 2 * 30 * (# of
Kafka brokers)) seconds.
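For example, with the defaults (3 retries, 2 tries per SimpleConsumer, a 30-second socket timeout) and assuming a cluster of three brokers, that works out to 3 * 2 * 30 * 3 = 540 seconds, i.e. roughly nine minutes before the submission attempt gives up.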
That's kinda strange. Have you tried setting the minimum allocation for
containers to a bigger size, too?
2016-06-02 0:04 GMT+02:00 prateekarora :
> Hi,
>
> I have not changed the "yarn.heap-cutoff-ratio" or
> "yarn.heap-cutoff-ratio.min" configuration.
>
> As per the log, Flink assigns 4608 MB out of 6 GB
Hi,
I have not changed the "yarn.heap-cutoff-ratio" or
"yarn.heap-cutoff-ratio.min" configuration.
As per the log, Flink assigns 4608 MB out of 6 GB, so I thought the
configuration was working fine:
/usr/lib/jvm/java-7-oracle-cloudera/bin/java -Xms4608m -Xmx4608m
-XX:MaxDirectMemorySize=4608m
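If I read the docs correctly, that number matches the default cutoff: assuming the default yarn.heap-cutoff-ratio of 0.25, a 6 GB (6144 MB) container gets 6144 * (1 - 0.25) = 4608 MB of JVM heap.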
Regards,
Prateek
Hi Prateek,
did you change "yarn.heap-cutoff-ratio" or "yarn.heap-cutoff-ratio.min"
[1]?
Cheers,
Konstantin
[1]
https://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#yarn
On 01.06.2016 17:46, prateekarora wrote:
> Hi
>
> Thanks for the reply
>
> I have a 6-node YARN cluster …
Awesome, thanks Robert!
On Wed, Jun 1, 2016 at 1:12 PM Robert Metzger wrote:
> Hi Kim,
>
> I agree with you. We should definitely filter out custom Flink
> properties. I've filed a JIRA for this:
> https://issues.apache.org/jira/browse/FLINK-4004
> Thanks for the report!
>
> Regards,
> Robert
Hi Kim,
I agree with you. We should definitely filter out custom Flink
properties. I've filed a JIRA for this:
https://issues.apache.org/jira/browse/FLINK-4004
Thanks for the report!
Regards,
Robert
On Wed, Jun 1, 2016 at 6:42 PM, David Kim
wrote:
> Hello!
>
> Using Flink 1.0.3.
>
> This is cosmetic but will help clean up logging, I think. …
Hi,
I have been trying to get a case class with a generic parameter working with
Filnk 1.0.3 and have been having some trouble. However when I compile I get the
following error:
debug-type-bug/src/main/scala/com/example/flink/jobs/CaseClassWithGeneric.scala:40:
error: could not find implicit va
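That error usually means the Scala API cannot materialize a TypeInformation for the generic parameter. A minimal sketch of the usual fix, with illustrative names rather than the original job's code: add a TypeInformation context bound so the evidence is threaded in from the call site.

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.scala._

// Illustrative generic case class (a stand-in for the original one).
case class Wrapper[T](value: T)

// The context bound `T: TypeInformation` lets the implicit macro build
// TypeInformation[Wrapper[T]] for the map call; without it the compiler
// fails with "could not find implicit value for evidence parameter".
def wrapAll[T: TypeInformation](input: DataSet[T]): DataSet[Wrapper[T]] =
  input.map(x => Wrapper(x))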
Hi,
I'm not sure if anyone has faced a similar issue with Flink v1.0.0: it
behaves differently when I submit a job via the command line and when I
do the same using the Flink web interface.
The web interface gives me NullPointerExceptions while the same job works
via the command line and from …
Hi folks,
I have deployed a Flink cluster on top of YARN in an AWS EMR cluster in my test
environment, and everything is working fine. However, I am unable to submit
jobs to the prod cluster.
Uploading the JAR containing a Flink job succeeds. However, the request to run
the job (the UI makes an API …
Hello!
Using Flink 1.0.3.
This is cosmetic but will help clean up logging, I think.
The Apache *KafkaConsumer* logs a warning [1] for any unused properties.
This is great in case the developer has a typo or needs to clean up
unused keys.
Flink's Kafka consumer and producer have some custom properties …
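The kind of filtering FLINK-4004 asks for could look roughly like the sketch below (illustrative only; the key set uses keys mentioned on this list, not the connector's full list):

import java.util.Properties
import scala.collection.JavaConverters._

// Sketch: copy the user-supplied properties, dropping Flink-specific keys
// before they reach the Kafka client, so KafkaConsumer stops warning about
// "unused" entries. The key set here is an example, not exhaustive.
def stripFlinkProperties(props: Properties): Properties = {
  val flinkKeys = Set("flink.get-partitions.retry", "flink.poll-timeout")
  val cleaned = new Properties()
  for (key <- props.stringPropertyNames().asScala if !flinkKeys(key)) {
    cleaned.setProperty(key, props.getProperty(key))
  }
  cleaned
}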
Hi
Thanks for the reply
I have a 6-node YARN cluster with 107.66 GB memory and 48 vcores in total.
Configuration:
5 nodes:
each configured with 19.53 GiB
(yarn.nodemanager.resource.memory-mb = 19.53 GB)
1 node:
configured with 10 GiB (yarn.nodemanager …
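(For reference, a Flink 1.0-era YARN session requesting containers within these limits would be started with something like `./bin/yarn-session.sh -n 5 -jm 1024 -tm 6144`; the flag values here are illustrative, not the exact command used.)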
For the last week I've been able to run the job several times without any
error. Then I just recompiled it and the error reappeared :(
This time I had:
java.lang.Exception: The data preparation for task 'CHAIN CoGroup
(CoGroup at main(DataInference.java:372)) -> Map (Map at
writeEntitonPojos(ParquetThr …
Hi,
Before giving the method you described above a try, I tried adding the
timestamp to my data directly at the stream source.
The following is my stream source:
http://pastebin.com/AsXiStMC
and I am using the stream source as follows:
DataStream tuples = env.addSource(new DataStreamGenerator(file …
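For reference, a source that attaches event timestamps itself generally looks something like the sketch below (a self-contained stand-in with synthetic data, not the pastebin code; it requires the event-time characteristic to be set):

import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext

class TimestampedSource extends SourceFunction[(String, Long)] {

  @volatile private var running = true

  override def run(ctx: SourceContext[(String, Long)]): Unit = {
    var i = 0L
    while (running && i < 100) {
      val ts = System.currentTimeMillis()
      // Emit the element together with its event timestamp.
      ctx.collectWithTimestamp((s"event-$i", ts), ts)
      i += 1
    }
  }

  override def cancel(): Unit = {
    running = false
  }
}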
Hi Simon,
You don't need to write any code to checkpoint the Keyed State. It is
done automatically by Flink. Just remove the `snapshotState(..)` and
`restoreState(..)` methods.
Cheers,
Max
On Wed, Jun 1, 2016 at 4:00 PM, simon peyer wrote:
> Hi Max
>
> I'm using a keyBy but would like to store
Hi Max
I'm using a keyBy but would like to store the state.
Thus, what's the way to go?
How do I have to handle the state in option 2)?
Could you give an example?
Thanks
--Simon
> On 01 Jun 2016, at 15:55, Maximilian Michels wrote:
>
> Hi Simon,
>
> There are two types of state:
>
>
> 1)
Hi Simon,
There are two types of state:
1) Keyed State
The state you access via `getRuntimeContext().getState(..)` is scoped
by key. If no key is in scope, the key is null and update
operations won't work. Use a `keyBy(..)` before your map function to
partition the state by key. The state is …
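A minimal sketch of that pattern (the Pathsection stand-in and the names are illustrative, not Simon's actual code):

import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

// Stand-in for the domain type, just to keep the sketch self-contained.
case class Pathsection(id: String)

class SectionTracker extends RichFlatMapFunction[(String, Pathsection), (String, Pathsection)] {

  // Scoped to the current key and checkpointed by Flink automatically,
  // so no snapshotState/restoreState methods are needed.
  private var lastSection: ValueState[Pathsection] = _

  override def open(conf: Configuration): Unit = {
    lastSection = getRuntimeContext.getState(
      new ValueStateDescriptor("last-section", createTypeInformation[Pathsection], null))
  }

  override def flatMap(in: (String, Pathsection), out: Collector[(String, Pathsection)]): Unit = {
    lastSection.update(in._2) // persisted per key with each checkpoint
    out.collect(in)
  }
}

// Usage: partition by key first, e.g. stream.keyBy(_._1).flatMap(new SectionTracker)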
Hi together,
I implemented a little pipeline which has some stateful computation,
containing a function that extends RichFlatMapFunction and Checkpointed.
The state is stored in the field:
private var state_item: ValueState[Option[Pathsection]] = null
override def open(conf: Configuration) …
Hi,
Yeah, in that case per-key watermarks would be useful for you. It won't be
possible to add such a feature, though, due to the (possibly) dynamic
nature of the key space and how watermark tracking works.
You should be able to implement it with relatively low overhead using a
RichFlatMapFunction.
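One possible shape of that idea (an illustrative sketch, not code from this thread; note it keeps its map on the heap, outside checkpointed state):

import scala.collection.mutable
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.util.Collector

// Tracks the maximum timestamp seen per key and drops elements that fall
// more than `allowedLateness` behind it.
class PerKeyWatermarkFilter(allowedLateness: Long)
    extends RichFlatMapFunction[(String, Long), (String, Long)] {

  private val maxTimestamps = mutable.Map.empty[String, Long]

  override def flatMap(in: (String, Long), out: Collector[(String, Long)]): Unit = {
    val (key, ts) = in
    val maxTs = math.max(maxTimestamps.getOrElse(key, Long.MinValue), ts)
    maxTimestamps(key) = maxTs
    if (ts >= maxTs - allowedLateness) {
      out.collect(in) // not behind this key's "watermark"
    }
  }
}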
Hi Martin,
No worries. Thanks for letting us know!
Cheers,
Max
On Mon, May 30, 2016 at 9:17 AM, Martin Junghanns
wrote:
> Hi again,
>
> I had a bug in my logic. It works as expected (which is perfect).
>
> So maybe for others:
>
> Problem:
> - execute superstep-dependent UDFs on datasets which
Hi Ashutosh,
I used the same connector to read from Kafka and it works fine, but
writing has the mentioned issue!
On Wed, Jun 1, 2016 at 12:37 PM, Ashutosh Kumar
wrote:
> How are you packaging and deploying your jar? I have tested with Flink
> and Kafka 0.9. It works fine for me.
>
> Thanks …
There is this in the Wiki:
https://cwiki.apache.org/confluence/display/FLINK/Data+exchange+between+tasks
Buffers for data exchange come from the network buffer pool (by
default 2048 * 32KB buffers). They are distributed to the running
tasks and each logical channel between tasks needs at least one buffer.
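By default that is 2048 * 32 KB = 64 MB of network memory per TaskManager; the pool can be enlarged via `taskmanager.network.numberOfBuffers`, and the segment size changed via `taskmanager.memory.segment-size`, in flink-conf.yaml.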
I run it in the Eclipse IDE.
On Wed, Jun 1, 2016 at 12:37 PM, Ashutosh Kumar
wrote:
> How are you packaging and deploying your jar? I have tested with Flink
> and Kafka 0.9. It works fine for me.
>
> Thanks
> Ashutosh
>
> On Wed, Jun 1, 2016 at 3:37 PM, ahmad Sa P wrote:
>
>> I did test it with Kafka 0.9.0.1, still the problem exists!
How are you packaging and deploying your jar? I have tested with Flink and
Kafka 0.9. It works fine for me.
Thanks
Ashutosh
On Wed, Jun 1, 2016 at 3:37 PM, ahmad Sa P wrote:
> I did test it with Kafka 0.9.0.1, still the problem exists!
>
> On Wed, Jun 1, 2016 at 11:50 AM, Aljoscha Krettek
>
I did test it with Kafka 0.9.0.1, still the problem exists!
On Wed, Jun 1, 2016 at 11:50 AM, Aljoscha Krettek
wrote:
> The Flink Kafka Consumer was never tested with Kafka 0.10; could you try
> it with 0.9? The 0.10 release is still very new and we have yet to provide
> a consumer for it.
>
>
I have a question regarding how tuples are buffered between (possibly
chained) subtasks.
Is it correct that there is a buffer for each vertex in the DAG of subtasks,
regardless of task slot sharing? If yes, then the primary optimization in
this regard is operator chaining.
Furthermore, how do …
The Flink Kafka Consumer was never tested with Kafka 0.10; could you try it
with 0.9? The 0.10 release is still very new and we have yet to provide a
consumer for it.
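For reference, wiring up the 0.9 consumer looks roughly like this (broker address, group id, and topic name are placeholders):

import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

object Kafka09Example {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // placeholder
    props.setProperty("group.id", "my-group")                // placeholder

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream = env.addSource(
      new FlinkKafkaConsumer09[String]("my-topic", new SimpleStringSchema(), props))
    stream.print()
    env.execute("kafka-09-read")
  }
}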
On Wed, 1 Jun 2016 at 10:47 ahmad Sa P wrote:
> Hi Aljoscha,
> I have tried different versions of Flink (1.0.0 and 1.0.3) with Kafka …
Hi Robertson,
You need to supply a TypeInformation for the data read from the InputFormat.
DataSet<Tuple1<String>> dataset = env.createInput(input,
    new TupleTypeInfo<Tuple1<String>>(Tuple1.class, BasicTypeInfo.STRING_TYPE_INFO));
should do the trick.
Cheers,
Max
On Tue, May 31, 2016 at 1:13 PM, Robertson Williams
wrote:
> I test
Hi Aljoscha,
I have tried different versions of Flink (1.0.0 and 1.0.3) with Kafka
version 0.10.0.0.
Ahmad
On Wed, Jun 1, 2016 at 10:39 AM, Aljoscha Krettek
wrote:
> This is unrelated to Joda-Time or Kryo; that's just an info message in the
> log.
>
> What version of Flink and Kafka are you using?
This is unrelated to Joda-Time or Kryo; that's just an info message in the
log.
What version of Flink and Kafka are you using?
On Wed, 1 Jun 2016 at 07:02 arpit srivastava wrote:
> Flink uses Kryo serialization, which doesn't support Joda-Time object
> serialization.
>
> Use java.util.Date or …
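Alternatively, Joda types can be kept if a Kryo serializer is registered for them. A sketch, assuming the de.javakaffee kryo-serializers artifact is on the classpath:

import de.javakaffee.kryoserializers.jodatime.JodaDateTimeSerializer
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.joda.time.DateTime

object JodaRegistration {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Tell Flink's Kryo fallback how to (de)serialize Joda's DateTime.
    env.registerTypeWithKryoSerializer(classOf[DateTime], classOf[JodaDateTimeSerializer])
  }
}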