Hi Ryan,
AFAIK there is no way to override this behaviour.
Did you have a look at the ProcessFunction [1]? Depending on your
specific requirements you might be able to re-implement the desired
functionality on top of it, but you will have to do all the
time-handling yourself and you do not have w
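The ProcessFunction approach can be sketched without Flink. In this plain-Java simulation (all names here, such as EmptyAwareWindower, are illustrative, not Flink API), a timer fires at every window boundary and a result is emitted for each known key even when no elements arrived in that window:

```java
import java.util.*;

// Sketch of doing the time-handling yourself, as with a ProcessFunction:
// fire at every window boundary and emit a count per key, including 0.
// EmptyAwareWindower and its methods are illustrative, not Flink API.
public class EmptyAwareWindower {
    private final long windowSize;
    private final Map<String, Long> counts = new HashMap<>();
    private final List<String> output = new ArrayList<>();
    private long nextFire;

    public EmptyAwareWindower(long windowSize) {
        this.windowSize = windowSize;
        this.nextFire = windowSize;
    }

    // Called per (key, timestamp) element, analogous to processElement().
    public void processElement(String key, long timestamp) {
        advanceTime(timestamp);
        counts.merge(key, 1L, Long::sum);
    }

    // Called when event time advances, analogous to onTimer().
    public void advanceTime(long watermark) {
        while (watermark >= nextFire) {
            // Emit for every known key, even keys whose count is 0.
            for (Map.Entry<String, Long> e : counts.entrySet()) {
                output.add(e.getKey() + "@" + nextFire + "=" + e.getValue());
                e.setValue(0L); // reset for the next window, keep the key
            }
            nextFire += windowSize;
        }
    }

    public List<String> getOutput() { return output; }
}
```

The key difference from WindowOperator is that firing is driven purely by the timer, so an empty pane still produces a record.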
Hello Flinkers,
Is there a means of configuring a windowed stream such that its window
function is invoked for empty panes?
Looking at the source of WindowOperator in Flink 1.2.0 (line 473), I see
that the user's window function is invoked only when the pane isn't empty.
I am hoping for a means o
Thank you, Chesnay. My hope is to keep things computationally inexpensive,
and if I understand you correctly, that is satisfied even with this
rekeying.
Ryan
On Sat, Apr 15, 2017 at 4:22 AM, Chesnay Schepler
wrote:
> Hello,
>
> I think if you have multiple keyBy() transformations with identical
Hi Nico;
I found the problem. I am also using XGBoost, and its library bundles an old
version of Flink.
I removed XGBoost's jar-with-dependencies library.
Thank you for your interest.
Regards,
Kursat.
-Original Message-
From: Nico Kruber [mailto:n...@data-artisans.com]
Sent: Thursday, April 13, 201
Hi;
I have a label-index DataSet and a final DataSet whose indexes I want to
convert to labels.
ixDS: DataSet[(Long, String)]
(1,Car)
(2,Sports)
(3,Home)
...
finalDS:DataSet[(Long, String, Double, String, Double)]
(1,x,1,y,4)
(2,z,3,t,5)
...
If I want to convert finalDS's inde
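In Flink this is typically a join of finalDS with ixDS on the index field (or a map over finalDS with ixDS as a broadcast set, if ixDS is small). The lookup itself can be sketched in plain Java; IndexLabeler and its method names are illustrative, not Flink API:

```java
import java.util.*;

// Sketch: replace the Long index in each finalDS row with its label from
// ixDS, the way a Flink join on field 0 would. Illustrative names only.
public class IndexLabeler {
    public static List<List<Object>> toLabels(
            Map<Long, String> ixDS, List<List<Object>> finalDS) {
        List<List<Object>> out = new ArrayList<>();
        for (List<Object> row : finalDS) {
            List<Object> labeled = new ArrayList<>(row);
            labeled.set(0, ixDS.get((Long) row.get(0))); // index -> label
            out.add(labeled);
        }
        return out;
    }
}
```

With ixDS containing (1, "Car"), a row (1, x, 1, y, 4) becomes ("Car", x, 1, y, 4).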
Hi Gábor,
thanks a lot
Best,
Marc
> On 17.04.2017 at 20:32, Gábor Gévay wrote:
>
> Hello Marc,
>
> You can group by edge, and then sum:
>
> edges
> .groupBy(0,1) // make first two fields a composite key
> .sum(2); // sum the value field
>
> This will turn multiple edges that have the sam
Hello Marc,
You can group by edge, and then sum:
edges
.groupBy(0,1) // make first two fields a composite key
.sum(2); // sum the value field
This will turn multiple edges that have the same source and target
into one edge, whose value will be the sum of the values of the
original group of e
Hi,
how can I sum and reduce multiple edges in my entire graph?
e.g. my input graph looks like (source-ID, target-ID, value):
(1, 2, 30)
(1, 2, 10)
(2, 1, 55)
And I need:
(1, 2, 40)
(2, 1, 55)
Thanks!
Marc
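Gábor's edges.groupBy(0,1).sum(2) answer, applied to this example data, can be sketched without Flink in plain Java (EdgeSum is an illustrative name, not Flink API):

```java
import java.util.*;

// Sketch of edges.groupBy(0,1).sum(2): group edges by the composite key
// (source, target) and sum the value field. Plain Java, no Flink needed.
public class EdgeSum {
    public static Map<String, Long> sumEdges(long[][] edges) {
        // Keys are "source->target"; values are the summed edge values.
        Map<String, Long> sums = new LinkedHashMap<>();
        for (long[] e : edges) {
            sums.merge(e[0] + "->" + e[1], e[2], Long::sum);
        }
        return sums;
    }
}
```

For the input above, (1, 2, 30) and (1, 2, 10) collapse into a single entry with value 40, while (2, 1, 55) stays as-is.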
I was using the Flink Kafka 0.8 API for consuming from and producing to Kafka.
Yesterday I upgraded to Flink 1.2 and started using the Flink Kafka 0.10 API
for the consumer and producer.
The issue I found is that all accumulators I have added to the source or sink
are appearing in the Flink web API, which were there bef
Hi Benjamin,
In your case, the tumbling window subtasks would each have 3 input streams, 1
for each of the 3 FlinkKafkaConsumer operator subtasks.
I thought that each subtask of the window would get only elements from one
partition and therefore the watermarks would be calculated independently
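The behaviour Gordon describes follows from how an operator combines watermarks from multiple input channels: its event time only advances to the minimum watermark across all inputs, so the three streams are not independent. A plain-Java sketch of that rule (WatermarkMerger is an illustrative name, not Flink API):

```java
import java.util.Arrays;

// Sketch of how a window subtask combines watermarks from its 3 input
// streams (one per FlinkKafkaConsumer subtask): the operator's event time
// only advances to the MINIMUM watermark seen across all input channels.
public class WatermarkMerger {
    private final long[] channelWatermarks;

    public WatermarkMerger(int channels) {
        channelWatermarks = new long[channels];
        Arrays.fill(channelWatermarks, Long.MIN_VALUE);
    }

    // A watermark arrives on one channel; return the operator watermark.
    public long onWatermark(int channel, long watermark) {
        channelWatermarks[channel] =
            Math.max(channelWatermarks[channel], watermark);
        return Arrays.stream(channelWatermarks).min().getAsLong();
    }
}
```

This is why one slow or idle partition holds back the event time of the whole window operator.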
Thanks for the reply, Gordon. I am not doing any unit tests; it's production
code.
public class MyClass extends FlinkKafkaProducer010
and then
incrementCounter("items-not-found", this.getRuntimeContext());
Hi,
How are you using your extended sink?
The runtime context is provided to the UDF by the system when the job is
executed.
So, for example, if you’re just testing out your UDF in unit tests,
`getRuntimeContext` would return null (simply because it isn’t set). In such
cases, you would need to
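Gordon's point can be sketched in plain Java (these classes are illustrative stand-ins, not Flink's actual API): the context field is only populated when the framework injects it before invoking the function, so calling getRuntimeContext() outside a running job yields null.

```java
// Sketch of why getRuntimeContext() can be null: the framework sets the
// context before invoking the UDF; if nothing sets it, invoke() hits null.
// SinkSketch and its inner classes are illustrative, not Flink API.
public class SinkSketch {
    static class RuntimeContext {
        long counter = 0;
        void incrementCounter() { counter++; }
    }

    static class MySink {
        private RuntimeContext ctx; // null until the "framework" sets it

        void setRuntimeContext(RuntimeContext ctx) { this.ctx = ctx; }
        RuntimeContext getRuntimeContext() { return ctx; }

        void invoke(String record) {
            // Throws NullPointerException if ctx was never injected.
            getRuntimeContext().incrementCounter();
        }
    }
}
```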
I have a class extending FlinkKafkaProducer010. In the invoke method I am
using this.getRuntimeContext() to increment a Flink counter.
But I am getting a null RuntimeContext, which leads to a
NullPointerException.
Using Flink 1.2
Hi,
So I have been rearranging my architecture so that I only have one input
and one output topic, each with 3 partitions, and in my Flink job I have one
consumer and one producer running with a parallelism of 3. To run in
parallel, I extract the partition from the metadata information per Kafka
mes