Please correct me if I am wrong. `DataStream#print()` only prints to the
screen when running from the IDE, but does not print to the client's screen
when running on a cluster (even a local cluster).
Thanks,
Pankaj
On Mon, Nov 23, 2020 at 5:31 PM Austin Cawley-Edwards <
austin.caw...@gmail.com>
link-docs-stable/dev/stream/operators/windows.html#triggers
>
> On Thu, Oct 15, 2020 at 01:55, Pankaj Chand
> wrote:
>
>> Hi Piotrek,
>>
>> Thank you for replying! I want to process each record as soon as it is
>> ingested (or reaches an operator) without wai
ps://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
>
> On Wed, Oct 14, 2020 at 12:38, Pankaj Chand
> wrote:
>
>> Hi all,
>>
>> What is the recommended way to make a Flink job that processes each event
>> individually as soon as it comes and witho
Hi all,
What is the recommended way to make a Flink job that processes each event
individually as soon as it arrives, without waiting for a window, in order
to minimize latency in the entire DAG of operators?
For example, here is some sample WordCount code (without windows),
followed by some
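The sample code referenced above did not survive the archive. As a point of comparison, here is a minimal sketch of a WordCount that emits an updated count for every record with no window at all: keyBy plus a rolling sum produces output per element. This is an illustrative reconstruction, not the original code; the socket source, host, and port are placeholders.

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class NoWindowWordCount {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)   // placeholder source
           .flatMap(new Tokenizer())
           .keyBy(value -> value.f0)
           .sum(1)            // rolling aggregate: emits a result per record
           .print();

        env.execute("No-window WordCount");
    }

    public static final class Tokenizer
            implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    out.collect(Tuple2.of(word, 1));
                }
            }
        }
    }
}
```

Per-record latency after that is mostly governed by the network buffer timeout, which is what the buffer-timeout discussion elsewhere in this thread is about.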
Actually, I was wrong. It turns out I was setting the values the wrong way
in conf/flink-conf.yaml.
I set "metrics.latency.interval 100" instead of "metrics.latency.interval:
100". Sorry about that.
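For anyone hitting the same thing: conf/flink-conf.yaml uses YAML-style key/value pairs, so the colon is required. The working form is:

```yaml
# conf/flink-conf.yaml: a colon separates the key from the value
metrics.latency.interval: 100
```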
T
On Thu, Sep 10, 2020 at 7:05 AM Pankaj Chand
wrote:
> Thank you, D
{"id":"latency.source_id.cbc357ccb763df2852fee8c4fc7d55f2.operator_id.17fbfcaabad45985bbdf4da0490487e3.operator_subtask_index.0.latency_p999","value":"105.0"}]
>
> Best,
> David
>
>
> On Wed, Sep 9, 2020 at 3:16 PM Pankaj Chand
> w
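As an aside, a single entry of the latency-metric JSON quoted above can be picked apart without a JSON library. A quick stdlib-only sketch (the regex and the "value" field name mirror the response shown, but treat this as illustrative, not as a robust parser):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LatencyValueExtract {

    // Extracts the numeric "value" field from one latency-metric JSON entry,
    // like the REST response quoted above.
    static double extractValue(String json) {
        Matcher m = Pattern
                .compile("\"value\"\\s*:\\s*\"([0-9.]+)\"")
                .matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("no value field found");
        }
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        String sample = "{\"id\":\"latency...latency_p999\",\"value\":\"105.0\"}";
        System.out.println(extractValue(sample)); // prints 105.0
    }
}
```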
> You'll
> find them in
>
> /jobs//vertices//subtasks/metrics
>
> Regards,
> David
>
>
>
> On Tue, Sep 8, 2020 at 10:52 PM Pankaj Chand
> wrote:
>
>> Hello,
>>
>> How do I visualize (or extract) the results for Latency Tracking for a
>> Fli
Hello,
How do I visualize (or extract) the results for Latency Tracking for a
Flink local cluster? I set "metrics.latency.interval 100" in the
conf/flink-conf.yaml file, and started the cluster and
SocketWindowWordCount job. However, I could not find the latency
distributions anywhere in the web
Thank you so much, Yun! It is exactly what I needed.
On Mon, Aug 31, 2020 at 1:50 AM Yun Gao wrote:
> Hi Pankaj,
>
> I think it should be in
> org.apache.flink.runtime.io.network.api.writer.RecordWriter$OutputFlusher.
>
> Best,
> Yun
>
>
>
>
Hello,
The documentation gives the following two sample lines for setting the
buffer timeout for the streaming environment or transformation.
env.setBufferTimeout(timeoutMillis);
env.generateSequence(1, 10).map(new MyMapper()).setBufferTimeout(timeoutMillis);
I have been trying to find where
same code-base. The main difference between
> Oracle and OpenJDK is the branding and price.
>
>
> > On Aug 22, 2020, at 4:23 AM, Pankaj Chand
> wrote:
> >
> > Hello,
> >
> > The documentation says that to run Flink, we need Java 8 or 11.
> >
> >
Hello,
The documentation says that to run Flink, we need Java 8 or 11.
Will JDK 11 work for running Flink, programming Flink applications as well
as building Flink from source?
Also, can we use OpenJDK for the above three capabilities, or do any of
the capabilities require Oracle JDK?
Thanks,
, etc.).
> The K8s cluster could guarantee the isolation.
>
>
> Best,
> Yang
>
> On Mon, Mar 16, 2020 at 5:51 PM, Pankaj Chand wrote:
>
>> Hi Xintong,
>>
>> Thank you for the explanation!
>>
>> If I run Flink "natively" on Kubernetes, will I also be able to ru
Flink natively on Kubernetes" refers to deploying Flink as a
>> Kubernetes Job. The Flink Master will interact with the Kubernetes Master and
>> actively request pods/containers, like on Yarn/Mesos.
>>
>> Thank you~
>>
>> Xintong Song
>>
>>
>>
Hi all,
I want to run Flink, Spark and other processing engines on a single
Kubernetes cluster.
From the Flink documentation, I did not understand the difference between:
(1) running Flink on Kubernetes, versus (2) running Flink natively on
Kubernetes.
Could someone please explain the
Hi all,
Please tell me, is there anything in Flink that is similar to Spark's
structured streaming Run Once Trigger (or the Trigger.Once feature) as described
in the blog below:
https://databricks.com/blog/2017/05/22/running-streaming-jobs-day-10x-cost-savings.html
This feature allows you to call a
by
Flink (as shown in Flink's dashboard web UI).
This shouldn't happen since the TaskManager ID "should" be different,
though it would have the old index in the TaskManagers list.
Would this be a bug?
Thanks!
Pankaj
On Thu, Dec 12, 2019 at 5:59 AM Pankaj Chand
wrote:
> Thank
share the
> Flink client and job manager log with us?
> >
> > This information would help us to locate your problem.
> >
> > Best,
> > Vino
> >
> > On Thu, Dec 12, 2019 at 7:08 PM, Pankaj Chand wrote:
> >>
> >> Hello,
> >>
> >> When using Fli
Hello,
When using Flink on YARN in session mode, each Flink job client would
automatically know the YARN cluster to connect to. It says this somewhere
in the documentation.
So, I killed the Flink session cluster by simply killing the YARN
application using the "yarn kill" command. However, when
Thank you, Chesnay!
On Thu, Dec 12, 2019 at 5:46 AM Chesnay Schepler wrote:
> Yes, when a cluster was started it takes a few seconds for (any) metrics
> to be available.
>
> On 12/12/2019 11:36, Pankaj Chand wrote:
>
> Hi Vino,
>
> Thank you for the links regardin
/2019/06/05/flink-network-stack.html
> [2]: https://flink.apache.org/2019/07/23/flink-network-stack-2.html
>
> Best,
> Vino
>
> On Mon, Dec 9, 2019 at 12:07 PM, Pankaj Chand wrote:
>
>> Hello,
>>
>> Using Flink on Yarn, I could not understand the documentation for how to
>
Hello,
Using Flink on Yarn, I could not understand the documentation for how to
read the default metrics via code. In particular, I want to read
metrics such as CPU usage, the Task/Operator's numRecordsOutPerSecond
(throughput), and memory.
Is there any sample code for how to read such default metrics? Is
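David's reply elsewhere in this thread points at the REST endpoint under /jobs/.../vertices/.../subtasks/metrics. A stdlib-only sketch of polling it from Java follows; the JobManager address, job ID, vertex ID, and metric name are all hypothetical placeholders that need to be filled in from your own cluster (the web UI shows the IDs).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsProbe {
    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8081";  // JobManager REST address (placeholder)
        String jobId = "<job-id>";              // placeholder
        String vertexId = "<vertex-id>";        // placeholder

        // Request one aggregated subtask metric for the given operator.
        String url = base + "/jobs/" + jobId + "/vertices/" + vertexId
                + "/subtasks/metrics?get=numRecordsOutPerSecond";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());    // JSON list of metric values
    }
}
```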
Please disregard my last question. It is working fine with Hadoop 2.7.5.
Thanks
On Sat, Dec 7, 2019 at 2:13 AM Pankaj Chand
wrote:
> Is it required to use exactly the same versions of Hadoop as the
> pre-bundled hadoop version?
>
> I'm using Hadoop 2.7.1 cluster with
is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
MILLISECONDS)
Thanks!
Pankaj
On Wed, Nov 20, 2019 at 2:45 AM Pankaj Chand
wrote:
> Thank you, Ana and Yang!
>
> On Tue, Nov 19, 2019, 9:29 PM Yang Wang wrote:
>
>> Hi Pankaj,
>>
>> First, you need to prepare a
> On Wed, Nov 20, 2019 at 10:12 AM, Ana wrote:
>
>> Hi,
>>
>> I was able to run Flink on YARN by installing YARN and Flink separately.
>>
>> Thank you.
>>
>> Ana
>>
>> On Wed, Nov 20, 2019 at 10:42 AM Pankaj Chand
>> wrote:
>>
>>> Hello
Hello,
I want to run Flink on YARN upon a cluster of nodes. From the
documentation, I was not able to fully understand how to go about it. Some
of the archived answers are a bit old and had pending JIRA issues, so I
thought I would ask.
Am I first supposed to install YARN separately, and then
Thank you!
On Mon, Oct 28, 2019 at 3:53 AM vino yang wrote:
> Hi Pankaj,
>
> It seems it is a bug. You can report it by opening a Jira issue.
>
> Best,
> Vino
>
> On Mon, Oct 28, 2019 at 10:51 AM, Pankaj Chand wrote:
>
>> Hello,
>>
>> I am trying to modify the
Hello,
I am trying to modify the parallelism of a streaming Flink job (wiki-edits
example) multiple times on a standalone cluster (one local machine) having
two TaskManagers with 3 slots each (i.e. 6 slots total). However, the
"modify" command is only working once (e.g. when I change the
Hi Haibo Sun,
It's exactly what I needed. Thank you so much!
Best,
Pankaj
On Thu, Jun 27, 2019 at 7:45 AM Haibo Sun wrote:
> Hi, Pankaj Chand
>
> If you're running Flink on YARN, you can do this by limiting the number of
> applications in the cluster or in the queue. As far as I
Hi everyone,
Is there any way (parameter or function) I can limit the number of
concurrent jobs executing in my Flink cluster? Or alternatively, limit the
number of concurrent Job Managers (since there has to be one Job Manager
for every job)?
Thanks!
Pankaj
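The rest of Haibo's answer is cut off above, but on YARN one way to impose such a cap (an assumption based on the YARN capacity scheduler, not necessarily what was suggested) is the per-queue maximum-applications setting; each Flink per-job cluster counts as one YARN application:

```xml
<!-- etc/hadoop/capacity-scheduler.xml -->
<!-- Caps concurrently active (running or pending) applications in the
     default queue. The value 5 is illustrative. -->
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-applications</name>
  <value>5</value>
</property>
```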
Thank you!
On Thu, Jun 20, 2019, 5:49 AM Chesnay Schepler wrote:
> There is no version of the documentation that is more up-to-date. The
> documentation was simply not updated yet for the new architecture.
>
> On 20/06/2019 11:45, Pankaj Chand wrote:
>
> Based on the below con
s
Regards,
Eduardo
On Tue, 18 Jun 2019, 08:42 Pankaj Chand, wrote:
I am trying to understand the role of Job Manager in Flink, and have come
across two possibly distinct interpretations.
1. The online documentation v1.8 signifies that there is at least one Job
Manager in a cluster, and it is cl
Hello,
Please let me know how to get the updated documentation and tutorials of
Apache Flink.
The stable v1.8 and v1.9-snapshot release of the documentation seems to be
outdated.
Thanks!
Pankaj
>> there's one
>> Job Manager per application i.e. per jar (as in the example in the first
>> chapter). This is not to say there's one Job Manager per job. Actually I
>> don't think the word Job is defined in the book, I've seen Task defined,
>> and those do have Task Managers
>>
>>
I am trying to understand the role of Job Manager in Flink, and have come
across two possibly distinct interpretations.
1. The online documentation v1.8 signifies that there is at least one Job
Manager in a cluster, and it is closely tied to the cluster of machines, by
managing all jobs in that