Interestingly enough, the same job runs OK on Linux but not on Windows.
On Fri, Feb 17, 2017 at 4:54 PM, Mohit Anchlia
wrote:
I have this code trying to read from a topic; however, the Flink process
comes up and waits forever, even though there is data in the topic. I'm not
sure why. Has anyone else seen this problem?
StreamExecutionEnvironment env = StreamExecutionEnvironment
    .createLocalEnvironment();
Properties
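For context, a minimal sketch of how such a Kafka source is typically wired up against Flink 1.2: the topic name, broker address, and group id below are placeholders, not values from Mohit's code (his snippet is truncated above).

```java
// Sketch of a local Flink job reading from Kafka with the 0.9 connector.
// Topic, bootstrap servers and group id are placeholder assumptions.
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaReadSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "test-group");              // placeholder

        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer09<>("my-topic",            // placeholder
                        new SimpleStringSchema(), props));

        stream.print();
        env.execute("kafka-read-sketch");
    }
}
```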
Hi,
this looks like a bug to me.
Can you open a JIRA, and maybe add a small test case to reproduce the issue?
Thank you,
Fabian
2017-02-18 1:06 GMT+01:00 Kürşat Kurt :
Hi;
I have a Dataset like this:
(0,Auto,0.4,1,5.8317538999854194E-5)
(0,Computer,0.2,1,4.8828125E-5)
(0,Sports,0.4,2,1.7495261699956258E-4)
(1,Auto,0.4,1,1.7495261699956258E-4)
(1,Computer,0.2,1,4.8828125E-5)
(1,Sports,0.4,1,5.8317538999854194E-5)
This code;
Hi Daniel,
I've implemented a macro that generates MessagePack serializers in our
codebase.
The resulting code is basically a series of writes/reads, like in
hand-written structured serialization.
E.g. given
case class Data1(str: String, subdata: Data2)
case class Data2(num: Int)
serialization code
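This is not Dmitry's actual generated code (his example is truncated above), but a hand-written Java sketch of the kind of flat write/read sequence such a macro would emit for the two case classes: field order and the stream-based wire format are assumptions.

```java
import java.io.*;

public class FlatSerializationSketch {
    // Java mirrors of the Scala case classes from the mail.
    static class Data2 { final int num; Data2(int num) { this.num = num; } }
    static class Data1 {
        final String str; final Data2 subdata;
        Data1(String str, Data2 subdata) { this.str = str; this.subdata = subdata; }
    }

    // Generated-style serializer: a flat series of writes, no reflection.
    static byte[] write(Data1 d) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeUTF(d.str);         // Data1.str
        out.writeInt(d.subdata.num); // nested Data2 inlined as its one field
        out.flush();
        return bos.toByteArray();
    }

    static Data1 read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        String str = in.readUTF();
        int num = in.readInt();
        return new Data1(str, new Data2(num));
    }

    public static void main(String[] args) throws IOException {
        Data1 r = read(write(new Data1("hello", new Data2(42))));
        System.out.println(r.str + " " + r.subdata.num); // prints "hello 42"
    }
}
```

Because every field access is known at macro-expansion time, there is no reflection at runtime, which is where such serializers typically beat generic ones.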
Hi Till,
It happened during deserialization of a savepoint.
Best regards,
Dmitry
On Fri, Feb 17, 2017 at 2:48 PM, Till Rohrmann wrote:
A few of my jobs recently failed and showed this exception:
org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load user
class: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
ClassLoader info: URL ClassLoader:
file:
Hello Dmitry,
Could you please elaborate on your tuning of
environment.addDefaultKryoSerializer(..)?
I'm interested in knowing what you have done there for a boost of about
50%.
Some small or simple example would be very nice.
Thank you very much in advance.
Kind Regards,
Daniel
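For what it's worth, the registration itself looks roughly like this in Flink; MyType and MyTypeKryoSerializer below are hypothetical placeholders, and the actual speedup would come from whatever custom serializer gets plugged in.

```java
// Sketch only: MyType and MyTypeKryoSerializer are hypothetical placeholders,
// not anything from Dmitry's codebase.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Replace Kryo's generic reflective handling of MyType with a custom
// serializer (a com.esotericsoftware.kryo.Serializer subclass).
env.addDefaultKryoSerializer(MyType.class, MyTypeKryoSerializer.class);
```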
One network setting is mentioned here:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html#controlling-latency
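The setting that page describes is the network buffer timeout; a minimal sketch of adjusting it (the values here are only illustrative):

```java
// Trading latency against throughput via the buffer timeout (values illustrative).
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

env.setBufferTimeout(100); // flush network buffers at least every 100 ms
// env.setBufferTimeout(0);  // flush after every record: lowest latency, lower throughput
// env.setBufferTimeout(-1); // flush only when buffers are full: maximum throughput
```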
From: Dmitry Golubets
Date: Friday, February 17, 2017 at 6:43 AM
To:
Hi Till,
when you say parallel windows, what do you mean? Do you mean the use of
timeWindowAll, which has all the elements of a window in a single task?
Hi Dmitry,
I'm curious to know when exactly you observed the IllegalStateException. Did it
happen after resuming from a savepoint or did it already happen during the
first run of the program? If the latter is the case, then this might
indicate a bug where we don’t use the correct ExecutionConfig to
@Anton, these are the ideas I was mentioning (in the FLIP), and I'm afraid I
have nothing more to add.
On Fri, 17 Feb 2017 at 06:26 wangzhijiang999
wrote:
> yes, it is really a critical problem for large batch jobs because
> unexpected failures are a common case.
>
Yes, you're correct. :-)
On Thu, 16 Feb 2017 at 14:24 nsengupta wrote:
> Thanks, Aljoscha for the clarification.
>
> I understand that instead of using a flatMap() in the way I am using, I am
> better off using :
> * a fold (init, fold_func, window_func) first and
Hi,
I think atomic rename is not part of the requirements.
I'll add +Stephan who recently wrote this document in case he has any
additional input.
Cheers,
Aljoscha
On Thu, 16 Feb 2017 at 23:28 Vijay Srinivasaraghavan
wrote:
> Following up on my question regarding backed
Hi Howard,
We run Flink 1.2 on YARN without issues. Sorry, I don't have a specific
solution, but are you sure you don't have some sort of Flink version mix? In
your logs I can see:
The configuration directory ('/home/software/flink-1.1.4/conf') contains
both LOG4J and Logback configuration files.
Hi,
I'm trying to run Flink on YARN using the command: bin/flink run -m
yarn-cluster -yn 2 -ys 4 ./examples/batch/WordCount.jar
But I got the following error:
2017-02-17 15:52:40,746 INFO org.apache.flink.yarn.cli.FlinkYarnSessionCli
- No path for the flink
Hi to all,
in my use case I need to output my Row objects as CSV into an output folder
on HDFS, but creating/overwriting subfolders based on an attribute (for
example, creating a subfolder for each value of a specified column).
Then, it could be interesting to bucket the data inside those
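Flink aside, the grouping itself can be sketched in plain Java: partition the rows by the value of the bucketing column and derive one output subfolder per distinct value. The column index and path layout below are assumptions for illustration.

```java
import java.util.*;

public class BucketByColumnSketch {
    // Partition CSV rows by the value in the given column; each map key is the
    // subfolder that group of rows would be written to.
    static Map<String, List<String[]>> bucket(List<String[]> rows, int col, String baseDir) {
        Map<String, List<String[]>> buckets = new TreeMap<>();
        for (String[] row : rows) {
            String folder = baseDir + "/" + row[col]; // e.g. <baseDir>/Sports
            buckets.computeIfAbsent(folder, k -> new ArrayList<>()).add(row);
        }
        return buckets;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[]{"0", "Auto", "0.4"},
                new String[]{"0", "Sports", "0.4"},
                new String[]{"1", "Auto", "0.4"});
        Map<String, List<String[]>> buckets = bucket(rows, 1, "out");
        System.out.println(buckets.keySet()); // prints [out/Auto, out/Sports]
    }
}
```

In a real job each bucket's rows would then be written to its folder on HDFS; this sketch only shows the partitioning step.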