Guowei
>
> Guowei Ma <guowei@gmail.com> wrote on Sun, 28 Apr 2019, 09:25:
> Hi,
> AFAIK, the TimerService in Flink could guarantee the semantics of
> Best,
> Guowei
>
>
> Mikhail Pryakhin <m.prya...@gmail.com> wrote on Fri, 26 Apr 2019, 19:57:
>
when and where (at
> which stage of the pipeline) the function will actually be executed. This
> characteristic doesn't align with TimerService and timely callbacks.
>
> Best,
>
> Dawid
>
> On 19/04/2019 17:41, Mikhail Pryakhin wrote:
>> Hello, Flink community!
Hello, Flink community!
It happens that I need to access a timer service in a RichAsyncFunction
implementation. I know it's normally accomplished via the StreamingRuntimeContext
instance available in a RichFunction, but unfortunately, RichAsyncFunction,
extending RichFunction, overrides “setRuntimeContext
Hello Flink community!
I've come across the "Iterator Data Sink"[1] approach to test
output from a streaming pipeline. The pipeline consists of a single
ProcessFunction which side-outputs some events. I'd like to collect both the
primary and the side-output streams in my test. I do
Hello Flink experts!
My streaming pipeline makes async IO calls via the recommended AsyncFunction.
The pipeline evolves and I've encountered a requirement to side output
additional events from the function.
As it turned out, the side output feature is only available in the following
functions:
Pr
> Hi Mike,
>
> have you tried whether the problem also occurs with Flink 1.6.2? If yes,
> then please share with us the Flink logs with DEBUG log level to further
> debug the problem.
>
> Cheers,
> Till
>
> On Fri, Oct 26, 2018 at 5:46 PM Mikhail Pryakhin <m.prya...@gmail.com> wrote:
> Hi community!
> Right after I've upg
ot!
>
> Cheers,
> Till
>
> On Fri, Oct 26, 2018 at 4:31 PM Mikhail Pryakhin <m.prya...@gmail.com> wrote:
> Hi Andrey, Thanks a lot for your reply!
>
>> What was the full job life cycle?
>
> 1. The job is deployed as a YARN cluster w
Hi community!
Right after I've upgraded Flink to flink-1.6.1 I get an exception during job
deployment as a YARN cluster.
The job is submitted with ZooKeeper HA enabled, in detached mode.
The flink yaml contains the following properties:
high-availability: zookeeper
high-availability.zookeeper
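(For context, a complete ZooKeeper-HA fragment for a 1.6-era flink-conf.yaml typically looks like the sketch below; the quorum address and paths are placeholders, not values taken from the original message.)

```yaml
high-availability: zookeeper
# ZooKeeper quorum used for leader election and metadata pointers
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
# Root znode under which Flink creates its per-cluster nodes
high-availability.zookeeper.path.root: /flink
# Durable storage for JobManager metadata; ZooKeeper only keeps pointers to it
high-availability.storageDir: hdfs:///flink/ha/
```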
> cleaned from Zookeeper upon cancellation.
>
> Best,
> Andrey
>
> [1] https://issues.apache.org/jira/browse/FLINK-10011
> <https://issues.apache.org/jira/browse/FLINK-10011>
>
>> On 25 Oct 2018, at 15:30, Mikhail Pryakhin <m.prya...@gmail.com>
Hi Flink community,
Could you please help me clarify the following question:
When a streaming job running in YARN gets manually killed via the yarn kill
command, is there any way to make a savepoint or perform other clean-up actions
before the job manager is killed?
Kind Regards,
Mike Pryakhin
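(One approach, assuming a 1.x-era Flink CLI; the job/application IDs and the savepoint directory below are placeholders: rather than killing the YARN application directly, cancel the job with a savepoint from the Flink CLI, which triggers the savepoint before the cluster shuts down.)

```shell
# Take a savepoint and cancel the job in one step
flink cancel -s hdfs:///flink/savepoints -yid <yarnApplicationId> <jobId>

# Or trigger a standalone savepoint first, then cancel or kill the application
flink savepoint <jobId> hdfs:///flink/savepoints -yid <yarnApplicationId>
```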
Hi Flink experts!
When a streaming job with Zookeeper-HA enabled gets cancelled, the
job-related Zookeeper nodes are not removed. Is there a reason behind that?
I noticed that the Zookeeper paths are created as "Container Nodes" (nodes
that are automatically removed once their last child is deleted) and fall back to
> On 24 Oct 2018, at 16:12, Mikhail Pryakhin wrote:
>
> Hi guys,
> I'm trying to substitute Zookeeper-based HA registry with YARN-based HA
> registry. (The idea was taken from the issue
> https://issues.apache.org/jira/browse/FLINK-5254
Hi guys,
I'm trying to substitute Zookeeper-based HA registry with YARN-based HA
registry. (The idea was taken from the issue
https://issues.apache.org/jira/browse/FLINK-5254)
In Flink 1.6.1, there exists an
org.apache.flink.runtime.highavailability.FsNegativeRunningJobsRegistry which
claims t
Exactly, at least it's worth mentioning in the javadoc the partitioner used by
default when none is specified, because the default behavior might not be
obvious.
Kind Regards,
Mike Pryakhin
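To make the suggestion concrete: the core of a round-robin partitioner is just a counter cycling over the available partitions. Below is a plain-Java sketch of that selection logic only; the class and method names are mine, not Flink's, and in a real job this would sit inside a custom Kafka partitioner's partition(...) method rather than stand alone.

```java
// Round-robin partition selection, as an alternative to FlinkFixedPartitioner's
// "one fixed partition per sink subtask" behavior. Plain-Java sketch; names
// are illustrative.
class RoundRobinChooser {
    private long counter = 0;

    // Returns the next partition id, cycling through all available partitions.
    int next(int[] partitions) {
        int p = partitions[(int) (counter % partitions.length)];
        counter++;
        return p;
    }
}
```

With three partitions, successive calls return each partition in turn and then wrap around, so records spread evenly regardless of the sink's parallelism.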
> On 3 Dec 2017, at 22:08, Stephan Ewen wrote:
>
> Sounds like adding a round robin partitioner
Hi all,
I've just come across a FlinkKafkaProducer misconfiguration issue: when a
FlinkKafkaProducer is created without specifying a Kafka partitioner,
a FlinkFixedPartitioner instance is used, and all messages end up in a
single Kafka partition (in case I have a single task manage
taskmanagers
> have access to all files (of any type) that are passed using the --ship
> command (or in the lib/ folder).
>
>
> On Wed, Sep 27, 2017 at 3:42 PM, Mikhail Pryakhin <m.prya...@gmail.com> wrote:
> Hi Nico,
>
> Thanks a lot for your help, but un
gards,
Mike Pryakhin
> On 21 Jun 2017, at 16:55, Mikhail Pryakhin wrote:
>
> Hi Nico!
> Sounds great, will give it a try and return back with results soon.
>
> Thank you so much for your help!!
>
> Kind Regards,
> Mike Pryakhin
>
>> On 21 Jun 2017,
java#L99
>
> public class MyFunction extends AbstractRichFunction {
>     private static final long serialVersionUID = 1L;
>
>     @Override
>     public void open(Configuration conf) throws IOException {
>         File file =
>             getRuntimeContext().getDistributedCache().getFile("cachedFile"); // entry name illustrative; line truncated in the original
ough, even in a HA setup, since the
> administrator might want to know what is happening and what Flink is doing
> under the hood.
>
> Everything's fine then.
>
> Nico
>
> On Tuesday, 20 June 2017 17:56:00 CEST Mikhail Pryakhin wrote:
>> Thanks a lot Niko!
>>
> I'd say, you
> don't have to worry about the messages.
> Till (cc'd) may elaborate a bit more on this.
>
>
> Nico
>
> On Tuesday, 20 June 2017 17:06:00 CEST Mikhail Pryakhin wrote:
>> Hi Niko,
>> Thanks for your reply!
>>
>> Having zoo
store JobManager state. Without it,
> a recovery would not know what to recover from.
>
>
> Nico
>
> [1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/
> jobmanager_high_availability.html#yarn-cluster-high-availability
>
> On Tuesday, 20 J
Hello,
I'm currently trying to check whether my job is restarted in case of Job
Manager failure.
The job is submitted as a single job on YARN with the following options set in
the flink-conf.yaml:
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-de
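(For reference, the fixed-delay strategy takes two settings; the fragment below completes the pair, with a placeholder delay value rather than the one from the original message.)

```yaml
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```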
Hi guys,
any news?
I've created a JIRA ticket: https://issues.apache.org/jira/browse/FLINK-6949.
Kind Regards,
Mike Pryakhin
> On 16 Jun 2017, at 16:35, Mikhail Pryakhin wrote:
>
> Hi all,
>
> I run my flink job on yar
Hi all,
I run my Flink job on a YARN cluster and need to supply job configuration
parameters via a configuration file alongside the job jar (the configuration
file can't be packaged into the job's jar file).
I tried to put the configuration file into the folder that is passed via the
--yarnship option to
> the Record API,
> the predecessor of the current DataSet API.
>
> Even in the DataSet API you can just pass arguments through the constructor.
> Feel free to open a JIRA, just make sure it is a subtask of FLINK-3957.
>
> On 13.06.2017 16:40, Mikhail Pryakhin wrote:
>>
gards,
Mike Pryakhin
> On 13 Jun 2017, at 17:20, Chesnay Schepler wrote:
>
> I'm not aware of any plans to replace it.
>
> For the Batch API it also works properly, so deprecating it would be
> misleading.
>
> On 13.06.2017 16:04, Mikhail Pryakhin wrote:
>
open() is a remnant of the past.
>
> We currently recommend to pass all arguments through the constructor and
> store them in fields.
> You can of course also pass a Configuration containing all parameters.
>
> On 13.06.2017 15:46, Mikhail Pryakhin wrote:
>> Hi all!
>
Hi all!
A RichMapFunction [1] provides a very handy setup method,
RichFunction#open(org.apache.flink.configuration.Configuration), which consumes
a Configuration instance as an argument, but this argument doesn't carry any
configuration parameters because it is always passed to the method as a new,
empty instance.
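The recommendation from the thread above, in sketch form. This is plain Java with illustrative names; in a real job the class would extend RichMapFunction<Integer, String>, and the final serializable fields would ship to the cluster together with the function.

```java
import java.io.Serializable;

// Sketch of the recommended pattern: pass parameters through the constructor
// and keep them in serializable fields, instead of reading them from the
// (empty) Configuration handed to open().
class PrefixingMapper implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String prefix;     // fixed at job-graph construction time
    private final int multiplier;    // travels to the cluster with the function

    PrefixingMapper(String prefix, int multiplier) {
        this.prefix = prefix;
        this.multiplier = multiplier;
    }

    String map(int value) {
        return prefix + (value * multiplier);
    }
}
```

Because the fields are set once when the job graph is built, no Configuration plumbing is needed at open() time.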
help.
——
Mike Pryakhin
> On 23 May 2017, at 16:04, Robert Metzger wrote:
>
> Hi Mike,
>
> I would recommend you to build a "fat jar" containing your application code
> and all required dependencies.
>
> On Tue, May 23, 2017 at 10:33 AM, Mikhail Pryakhi
——
Mike Pryakhin
> On 22 May 2017, at 23:06, Mikhail Pryakhin wrote:
>
> Hi Robert!
> Thanks a lot for your reply!
>
> >Can you double check if the
> >job-libs/flink-connector-kafka-0.9_2.11-1.2.1.jar contains
> >org/apache/flink/streaming/connectors/kaf
s in the log?
> From this SO answer, it seems that this is not really the classical
> ClassNotFoundException, but a bit different:
> https://stackoverflow.com/a/5756989/568695
>
> On Mon, May 22, 2017 at 5:12 PM, Mikhail Pryakhin <m.prya...@cleverdata.ru> wrote:
> Hi all!
> I'm playing with flink stream
Hi all!
I'm playing with a Flink streaming job on a YARN cluster. The job consumes events
from Kafka and prints them to standard out.
The job uses flink-connector-kafka-0.9_2.11-1.2.1.jar library that is passed
via the --yarnship option.
Here is the way I run the job:
export HADOOP_USER_NAME=hd
Hi all!
I'm currently choosing which API to stick with while implementing Flink jobs
(primarily streaming jobs).
Could you please shed some light on which API new features are implemented in
first? Are all Flink features available in both APIs?
Many thanks in advance.
Best Regards,
Mike Pry