Thanks a lot Rong and Sameer.
Looks like this is what I wanted.
I will try the above projects.
Regards,
Abhishek Kumar Singh
Search Engineer
Mob: +91 7709735480
On Wed, May 15, 2019 at 8:00 AM Rong Rong wrote:
> Hi Abhishek,
>
> Based on your description, I think this FLIP
Hi,
We have Flink 1.4.2 in production, and we have started seeing the below
exception consistently. Could someone help me understand the real issue
happening here? I see that https://issues.apache.org/jira/browse/FLINK-8836
has fixed it, but since that needs an upgrade, we are exploring workarounds or
other opt
Hi Abhishek,
Based on your description, I think this FLIP proposal[1] seems to fit
perfectly for your use case.
You can also check out the GitHub repo by Boris (CCed) for the PMML
implementation[2]. This proposal is still under development [3]; you are
more than welcome to test it out and share your f
If you can save the model as a PMML file, you can apply it on a stream using one
of the Java PMML libraries.
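To make the PMML suggestion concrete, here is a minimal sketch of scoring each stream element inside a Flink `RichMapFunction`, assuming the JPMML-Evaluator library (`org.jpmml:pmml-evaluator`); the exact builder and `evaluate` signatures vary by JPMML version, and the model path is hypothetical:

```java
import java.io.File;
import java.util.Map;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.jpmml.evaluator.Evaluator;
import org.jpmml.evaluator.LoadingModelEvaluatorBuilder;

// Sketch only — applies a PMML model to each record on the stream.
public class PmmlScoringFunction
        extends RichMapFunction<Map<String, Object>, Map<String, ?>> {

    private transient Evaluator evaluator;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Load and parse the model once per task, not once per record.
        evaluator = new LoadingModelEvaluatorBuilder()
            .load(new File("model.pmml")) // hypothetical path
            .build();
    }

    @Override
    public Map<String, ?> map(Map<String, Object> features) {
        // Feature names must match the PMML input fields.
        return evaluator.evaluate(features);
    }
}
```

Loading the model in `open()` keeps the (non-serializable) evaluator out of the job graph and amortizes the parsing cost.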
Sent from my iPhone
> On May 14, 2019, at 4:44 PM, Abhishek Singh wrote:
>
> I was looking forward to using Flink ML for my project where I think I can
> use SVM.
>
> I have been able
I have some awkward code in a few Flink jobs which is converting a Scala stream
into a Java stream in order to pass it to AsyncDataStream.unorderedWait(), and
using a Java RichAsyncFunction, due to old versions of Flink not having the
ability to do async stuff with a Scala stream.
In newer vers
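For reference, the Java pattern being described — a `RichAsyncFunction` passed to `AsyncDataStream.unorderedWait()` — looks roughly like this sketch. Class and method bodies are hypothetical; `ResultFuture` is the callback type from Flink 1.5+, while older versions used `AsyncCollector`:

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Hypothetical enrichment function: look up each key asynchronously.
public class AsyncLookupFunction extends RichAsyncFunction<String, String> {

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        CompletableFuture
            .supplyAsync(() -> lookup(key)) // stand-in for an async client call
            .thenAccept(value ->
                resultFuture.complete(Collections.singleton(value)));
    }

    private String lookup(String key) {
        return key; // placeholder
    }
}

// Usage (timeout 1000 ms, at most 100 in-flight requests):
// DataStream<String> enriched = AsyncDataStream.unorderedWait(
//     input, new AsyncLookupFunction(), 1000, TimeUnit.MILLISECONDS, 100);
```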
I was looking forward to using Flink ML for my project where I think I can
use SVM.
I have been able to run a batch job using Flink ML and trained and tested my
data.
Now I want to do the following:
1. Applying the above-trained model to a stream of events from Kafka
(using DataStreams): For
BTW, looking at past posts on this issue[1], it should have been fixed? I'm
using version 1.7.2.
Also, the recommendation was to use a custom function, though that's exactly
what I'm doing with the conditionalArray function[2].
Thanks!
[1]
http://apache-flink-user-mailing-list-archive.2336050.n4.nabbl
In a subsequent run I get
Caused by: org.codehaus.janino.JaninoRuntimeException: Code of method
"split$3681$(LDataStreamCalcRule$3682;)V" of class "DataStreamCalcRule$3682"
grows beyond 64 KB
Hey,
While running a SQL query I get an OutOfMemoryError exception and "Table
program cannot be compiled" [2].
In my scenario I'm trying to enrich an event using an array of tags; each
tag has a boolean classification (like a WHERE clause), and with a custom
function I'm filtering the array to keep
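The recommended workaround — moving the per-tag logic into a custom function so the generated SQL code stays under the JVM's 64 KB method limit — can be sketched in plain Java. The class and signature below are hypothetical (the poster's actual conditionalArray UDF is not shown in the thread); in Flink SQL this would be wrapped in a `ScalarFunction` and registered with the table environment:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical conditionalArray-style helper: keep only the tags whose
// matching boolean condition is true. Centralizing this loop in one
// function keeps the Janino-generated calcite code small.
public class ConditionalArray {

    public static String[] eval(String[] tags, boolean[] conditions) {
        List<String> kept = new ArrayList<>();
        for (int i = 0; i < tags.length && i < conditions.length; i++) {
            if (conditions[i]) {
                kept.add(tags[i]);
            }
        }
        return kept.toArray(new String[0]);
    }
}
```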
Every time that I access Flink's WEB UI I get the following exception:
2019-05-14 12:31:47,837 WARN
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Unhandled
exception
org.apache.flink.shaded.netty4.io.netty.handler.codec.DecoderException:
javax.net.ssl.SSLException: Received fat
Hi,
Just to add to what Piotr already mentioned:
The community is working on adding support for this directly in Flink.
You can follow the efforts here:
https://issues.apache.org/jira/browse/FLINK-12047.
Cheers,
Gordon
On Tue, May 14, 2019 at 11:39 AM Piotr Nowojski wrote:
> Hi,
>
> Currently
Hi Gordon -
I have been trying out Flink 1.8 only recently, but this problem looks to
have existed for a long time. It's related to the way Flink handles
Avro serialization, which I guess has not changed in recent times.
regards.
On Tue, May 14, 2019 at 2:22 PM Tzu-Li (Gordon) Tai
wrote:
Hi,
Currently there is no native Flink support for modifying the state in such a
manner. However, there is an ongoing effort [1] and a third-party project [2]
to address exactly this. Both allow you to read a savepoint, modify it, and
write back the new modified savepoint from which you can r
Hi,
Sorry for late response, somehow I wasn’t notified about your e-mail.
>
> So you meant implementation in DataStream API with cutting corners would,
> generally, be shorter than a Table join. I thought that using Tables would be
> more intuitive and shorter, hence my initial question :)
It depends
Thanks Rafi .. will try it out ..
On Tue, 14 May 2019 at 1:26 PM, Rafi Aroch wrote:
> Hi Debasish,
>
> It would be a bit tedious, but in order to override the default
> AvroSerializer you could specify a TypeInformation object where needed.
> You would need to implement your own MyAvroTypeInfo i
Hi,
Aljoscha opened a JIRA just recently for this issue:
https://issues.apache.org/jira/browse/FLINK-12501.
Do you know if this is a regression from previous Flink versions?
I'm asking just to double check, since from my understanding of the issue,
the problem should have already existed before.
Hi Marc!
I know we talked offline about the issues mentioned in this topic already,
but I'm just relaying the result of the discussions here to make it
searchable by others bumping into the same issues.
On Thu, Mar 21, 2019 at 4:27 PM Marc Rooding wrote:
> Hi
>
> I’ve been trying to get state m
Hi Wouter
I have no idea about question 2. But for question 1, you could try adding the
steps already included in the "RUN" phase of your
https://github.com/mbode/flink-prometheus-example/blob/master/Dockerfile
to your k8s deployment yaml's "command" section before launching the
cluster in k8s.
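A minimal sketch of what that deployment fragment could look like, assuming a generic Flink image; the container name, image tag, and the commands themselves are placeholders and must mirror the actual `RUN` steps of the Dockerfile in question:

```yaml
# Hypothetical k8s deployment fragment — run the Dockerfile's RUN steps
# before starting the Flink entrypoint.
containers:
  - name: jobmanager
    image: flink:1.7
    command: ["/bin/sh", "-c"]
    args:
      - |
        <commands copied from the Dockerfile RUN phase> && \
        /docker-entrypoint.sh jobmanager
```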
Be
Hi Debasish,
It would be a bit tedious, but in order to override the default
AvroSerializer you could specify a TypeInformation object where needed.
You would need to implement your own MyAvroTypeInfo instead of the provided
AvroTypeInfo.
For example:
env.addSource(kafkaConsumer)
.return
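The snippet above is cut off by the archive; the pattern it describes — forcing a custom `TypeInformation` on a source with `returns()` — can be sketched as follows. `MyAvroTypeInfo`, `MyRecord`, and `kafkaConsumer` are hypothetical names taken from the discussion, not real classes:

```java
import org.apache.flink.streaming.api.datastream.DataStream;

// Sketch only — override the default AvroSerializer by supplying a custom
// TypeInformation at the source. MyAvroTypeInfo would extend AvroTypeInfo
// (or TypeInformation directly) and return the custom serializer from
// createSerializer().
DataStream<MyRecord> stream = env
    .addSource(kafkaConsumer)
    .returns(new MyAvroTypeInfo(MyRecord.class));
```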
Hello,
I would like to have some advice about splitting an operator with a state
into multiple operators.
The new operators would have states containing pieces of information from the
initial state.
We will "split" the state.
For example, I have an operator (process) with uid A, with a state containing
Hi,
This looks like a good solution to me.
The conversion mappers in step 1. and 4. should not cause a lot of overhead
as they are chained to their predecessors.
Best, Fabian
On Tue, May 14, 2019 at 01:08, Shahar Cizer Kobrinsky <
shahar.kobrin...@gmail.com> wrote:
> Hey Hequn & Fabian,
>
Hey,
I have collected some RocksDB logs for the snapshot itself, but I can't
really wrap my head around where exactly the time is spent:
https://gist.github.com/gyfora/9a37aa349f63c35cd6abe2da2cf19d5b
The general pattern where the time is spent is this:
2019/05/14-09:15:49.486455 7fbe6a8ee700 [db/d
Hi Konstantin -
I did take a look at the option you mentioned. Using that option I can
register a custom serializer for a custom type. But my requirement is a bit
different - I would like to have a custom AvroSerializer for *all* types
which implement SpecificRecordBase of Avro. The reason is I wo
Hi Debasish,
this should be possible via
env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class,
MyCustomSerializer.class);
You can check that the correct serializer is used with
TypeInformation.of(MyCustomType.class).createSerializer(env.getConfig());
In this case your serializer n