Some further observations: I had a job which was taking events off a Kafka
topic and sending them to two sinks, where for one of them a Map operation
would happen first. When creating one event stream and sending it to the two
sinks, the JMX representation was not showing both sinks, and the naming of
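For reference, a minimal sketch of such a topology, with made-up names and a
stub source standing in for the Kafka consumer:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class TwoSinkJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // stub source standing in for the Kafka consumer
            DataStream<String> events = env.fromElements("a", "b", "c");

            // sink 1: events go out directly (print() as a stand-in sink)
            events.print();

            // sink 2: a Map operation happens first
            events.map(new MapFunction<String, String>() {
                @Override
                public String map(String value) {
                    return value.toUpperCase();
                }
            }).print();

            env.execute("two-sink job");
        }
    }

Note that Flink chains operators together where it can, which also affects how
they show up in monitoring views.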
Hi,
wow, that's indeed a nice solution.
Your version still threw some errors:
> Caused by: org.apache.flink.api.common.InvalidProgramException: Object
> factorytest.Config$1@5143c662 not serializable
> Caused by: java.io.NotSerializableException: factorytest.factory.PearFactory
I fixed this
The functions are serialized when env.execute() is called. The thing is,
as I understand it, that your singleton is simply not part of the
serialized function, so it doesn't actually matter when the function
is serialized.
Storing the factory instance in the function shouldn't be too m
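A sketch of what that could look like; FruitFactory and FruitMapper are
hypothetical names, and the key point is that the stored factory must be
Serializable (otherwise you get exactly the NotSerializableException quoted
above):

    import java.io.Serializable;
    import org.apache.flink.api.common.functions.MapFunction;

    // hypothetical factory interface; extends Serializable so that an
    // instance can be shipped as part of the serialized function
    interface FruitFactory extends Serializable {
        String create();
    }

    // the configured factory is stored in the function itself, so it is
    // serialized with the function and available on the task managers
    class FruitMapper implements MapFunction<String, String> {
        private final FruitFactory factory;

        FruitMapper(FruitFactory factory) {
            this.factory = factory;
        }

        @Override
        public String map(String value) {
            return value + ": " + factory.create();
        }
    }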
Hi Chesnay,
thank you for looking into this!
Is there any way to tell Flink to (re)sync the changed classes, and/or
tell it to distribute the serialized classes at a given point (e.g.
first on an env.execute()) or so?
The thing is that I'm working on a small framework which is based on
Flink, so pa
Dear Apache Enthusiast,
ApacheCon Sevilla is now less than a month out, and we need your help
getting the word out. Please tell your colleagues, your friends, and
members of related technical communities, about this event. Rates go up
November 3rd, so register today!
ApacheCon, and Apache Big Dat
Hi Ufuk,
Thank you for looking into the issue. Please find my answers below:
(1) In detached mode the configuration seems not to be picked up
correctly. That should be independent of the savepoints. Can you
confirm this?
—> I tried starting a new job in detached mode and the job started on the
Hello,
admittedly I didn't look too deeply into this, but I would assume that
you are only modifying the factory on the client. When the operators are
deserialized on the cluster, your singleton instance is back to the
default, which is apples (I think), since the statement that changes the
fact
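Roughly, the pitfall looks like this (hypothetical names, modeled on the
proof of concept):

    // hypothetical names, modeled on the proof-of-concept repository
    interface FruitFactory { String create(); }

    class AppleFactory implements FruitFactory {
        public String create() { return "apple"; }
    }

    class PearFactory implements FruitFactory {
        public String create() { return "pear"; }
    }

    class Config {
        // static state is initialized independently in every JVM
        static FruitFactory factory = new AppleFactory();
    }

    // On the client, Config.factory = new PearFactory() runs before
    // env.execute(), but the static field is not part of any serialized
    // function, so on the task managers Config.factory is the default
    // AppleFactory again.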
Hi all,
I'm using httpclient with the following dependency:

    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.2</version>
    </dependency>

In local mode, the program works correctly, but when executed on the
cluster, I get the following exception:
java.lang.Exception: The user defined 'open(Configuration)' method in class
org.m
The Hadoop configuration of your Hadoop installation, which is loaded in
SequenceFileWriter.open(), needs to have "io.compression.codecs" set to
include the SnappyCodec. This is probably described in the Hadoop
documentation.
-Max
On Wed, Oct 19, 2016 at 6:09 PM, wrote:
> Hi Till,
>
>
>
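For what it's worth, a sketch of the relevant core-site.xml entry; the exact
codec list depends on your installation:

    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>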
Is it possible to process the same stream in two different ways? I can’t find
anything in the documentation definitively stating this is possible, but
neither do I find anything stating it isn’t. My attempt had some unexpected
results, which I’ll explain below:
Essentially, I have a stream of dat
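A minimal sketch of the pattern being asked about: the same DataStream can
simply be used by several downstream operators, and each of them receives
every element (the operations here are made up):

    import org.apache.flink.api.common.functions.FilterFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FanOutExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<Integer> numbers = env.fromElements(1, 2, 3, 4);

            // consumer 1: keeps only the even numbers
            numbers.filter(new FilterFunction<Integer>() {
                @Override
                public boolean filter(Integer n) {
                    return n % 2 == 0;
                }
            }).print();

            // consumer 2: sees every element again, unfiltered
            numbers.print();

            env.execute("fan-out example");
        }
    }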
Hi Till,
Thanks for the response. I’m hoping not to have to change Flink’s lib folder.
The path I specified does exist on each node, and -C is supposed to add the
path to the classpath on all nodes in the cluster. I might try to bundle the
snappy jar within my job jar to see if that works.
Hi,
I'm currently working with Flink for my bachelor thesis, and I'm running
into some odd issues with Flink in regard to factories.
I've built a small "proof of concept" and the code can be found here:
https://gitlab.tubit.tu-berlin.de/gehaxelt/flink-factorytest
The idea is that a Config-single
Thanks for the guide, Alberto!
-Max
On Tue, Oct 18, 2016 at 10:20 PM, Till Rohrmann wrote:
> Great to see, Alberto. Thanks for sharing it with the community :-)
>
> Cheers,
> Till
>
> On Tue, Oct 18, 2016 at 7:40 PM, Alberto Ramón
> wrote:
>>
>> Hello
>>
>> I made a small contribution / manual
This usually happens when you enable the 'build-jar' profile from
within IntelliJ. This profile assumes you have a Flink installation in
the class path which is only true if you submit the job to an existing
Flink cluster.
-Max
On Mon, Oct 17, 2016 at 10:50 AM, Stefan Richter
wrote:
> Hi,
>
> l
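For illustration, such a profile typically marks the Flink dependencies as
'provided', so they are missing when the job is run from the IDE; the
artifact id below is an assumption and depends on the quickstart version:

    <!-- sketch of a 'build-jar'-style profile: Flink itself is 'provided',
         i.e. expected to come from the cluster's class path -->
    <profile>
        <id>build-jar</id>
        <dependencies>
            <dependency>
                <groupId>org.apache.flink</groupId>
                <artifactId>flink-streaming-java_2.10</artifactId>
                <version>${flink.version}</version>
                <scope>provided</scope>
            </dependency>
        </dependencies>
    </profile>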
Hi Robert,
have you tried putting the snappy java jar in Flink's lib folder? When
specifying the classpath manually you have to make sure that all
distributed components are also started with this classpath.
Cheers,
Till
On Wed, Oct 19, 2016 at 1:07 PM, wrote:
> I have flink running on a stand
I have Flink running on a standalone cluster, but with a Hadoop cluster
available to it. I’m using a RollingSink and a SequenceFileWriter, and I’ve
set compression to Snappy:
.setWriter(new SequenceFileWriter("org.apache.hadoop.io.compress.SnappyCodec",
SequenceFile.CompressionType.BLOCK))
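In context, the sink setup looks roughly like this (the base path and the
stream's type are placeholders):

    // 'input' is a DataStream<Tuple2<LongWritable, Text>> defined elsewhere
    RollingSink<Tuple2<LongWritable, Text>> sink =
        new RollingSink<Tuple2<LongWritable, Text>>("hdfs:///base/path")
            .setWriter(new SequenceFileWriter<LongWritable, Text>(
                "org.apache.hadoop.io.compress.SnappyCodec",
                SequenceFile.CompressionType.BLOCK));
    input.addSink(sink);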
Hi,
are we talking about the Plan View in the JobManager dashboard? If yes,
then I expect there to be only one "box" for the combination of filter and
sink because they are chained together to avoid sending data.
For debugging, could you maybe change check() to always return true and see
if you th
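If the chaining is what hides the second box, disabling it should make the
operators show up separately, e.g.:

    // either globally for the whole job ...
    env.disableOperatorChaining();

    // ... or just for the operator in question (names are placeholders)
    stream.filter(myFilter)
          .disableChaining()
          .addSink(mySink);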
Hey Anirudh!
As you say, this looks like two issues:
(1) In detached mode the configuration seems not to be picked up
correctly. That should be independent of the savepoints. Can you
confirm this?
(2) The program was changed in a non-compatible way after the
savepoint. Did you change the program
Hello Kursat,
We don't have a multiclass classifier in FlinkML currently.
Regards,
Theodore
--
Sent from a mobile device. May contain autocorrect errors.
On Oct 19, 2016 12:33 AM, "Kürşat Kurt" wrote:
> Hi;
>
>
> I am trying to learn the Flink ML lib.
>
> Where can I find detailed multiclass cl