On 2023-12-04 15:56:04, "nick toker" wrote:

Hi

restarting the job is ok and I do that, but I must cancel the job and
submit a new one, and I don't want the data from the state.
I forgot to mention that I use the parameter "-allowNonRestoredState"

my steps:
1. stop the job with savepoint
2. run the updated job (updated job graph) from the savepoint

I expect it to run; currently the result is that the job fails.

nick
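For reference, nick's two steps map onto Flink's REST API as a stop-with-savepoint call, followed by resubmitting the updated job with `-s <savepointPath> -allowNonRestoredState`. A minimal sketch of building that call (the JobManager URL, job id, and savepoint directory are placeholder values):

```python
import json

def stop_with_savepoint_request(base_url, job_id, target_dir, drain=False):
    """Build the POST /jobs/:jobid/stop request that triggers a graceful
    stop plus savepoint. The updated job is then resubmitted from the
    resulting savepoint with -allowNonRestoredState, so state belonging
    to removed operators is skipped instead of failing the restore."""
    url = f"{base_url.rstrip('/')}/jobs/{job_id}/stop"
    body = json.dumps({"targetDirectory": target_dir, "drain": drain})
    return url, body

url, body = stop_with_savepoint_request(
    "http://jobmanager:8081",
    "980d3ff229b7fbfe889e2bc93e526da0",
    "s3://savepoints/my-job")
print(url)  # http://jobmanager:8081/jobs/980d3ff229b7fbfe889e2bc93e526da0/stop
```

If the restore still fails after this, the failure message usually names the operator whose state could not be mapped, which helps distinguish a graph-change problem from a state-compatibility one.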
On Mon, Dec 4, 2023 at 8:41, Xuyang wrote:
Hi, nick.
> using savepoint i must cancel the job to be able run the new graph
Do you mean that you need to cancel and start the job using the new Flink job
graph in 1.17.1, and that in the past it was possible to make changes to the
new operator effective without restarting the job?
I think
Hi
when I add or remove an operator in the job graph, using a savepoint I must
cancel the job to be able to run the new graph,
e.g. after adding or removing an operator (like a new sink target).
it was working in the past.
I am using Flink 1.17.1.
1. is it a known bug? if so, when is it planned to be fixed?
2. do I need
This may or may not help, but you can get the execution plan from
inside the client, by doing something like this (I printed the plan to
stderr):

...
System.err.println(env.getExecutionPlan());
env.execute("my job");

The result is a JSON-encoded representation of the job graph, which
for the simple example I just tried it with, produced this output:

{
  "nodes" : [ {
    "id" : 1,
    "type"
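That JSON plan is easy to inspect programmatically. A small sketch (plain Python, using a trimmed hand-written plan in the same shape rather than real Flink output):

```python
import json

# Hand-written sample in the shape getExecutionPlan() returns (illustrative only)
plan_json = """
{
  "nodes": [
    {"id": 1, "type": "Source: Custom Source", "pact": "Data Source", "parallelism": 1},
    {"id": 2, "type": "Sink: Print to Std. Out", "pact": "Data Sink", "parallelism": 1,
     "predecessors": [{"id": 1, "ship_strategy": "FORWARD"}]}
  ]
}
"""

# Recover the graph edges from each node's "predecessors" list
plan = json.loads(plan_json)
edges = []
for node in plan["nodes"]:
    for pred in node.get("predecessors", []):
        edges.append((pred["id"], node["id"]))
print(edges)  # [(1, 2)]
```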
Hello folks,
I am trying to get the job graph of a running Flink job. I want to use Flink
libraries. For now, I have the RestClusterClient and the job IDs. Tell me
please how to get the job graph.
Thank you.
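One hedged sketch goes through the REST API rather than RestClusterClient directly: GET /jobs/&lt;jobid&gt;/plan is the endpoint behind the web UI's graph view and returns the JSON dataflow plan. The JobManager address and job id below are placeholders:

```python
import json
from urllib.request import urlopen

def plan_url(rest_url, job_id):
    # GET /jobs/<jobid>/plan returns the JSON dataflow plan of a running job
    return f"{rest_url.rstrip('/')}/jobs/{job_id}/plan"

def fetch_job_plan(rest_url, job_id):
    # Requires network access to a live JobManager
    with urlopen(plan_url(rest_url, job_id)) as resp:
        return json.load(resp)

print(plan_url("http://jobmanager:8081", "980d3ff229b7fbfe889e2bc93e526da0"))
```

On the Java side, RestClusterClient speaks to the same REST endpoints, and the returned JSON is roughly what `env.getExecutionPlan()` prints on the client.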
…Elkhan Dadashov wrote:

> Dear Flink developers,
>
> Wanted to check if there is a way to control the parallelism of the
> auto-generated Flink operators of the FlinkSQL job graph?
>
> In Java API, it is possible to have full control of the parallelism
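There is no per-operator knob for the planner-generated operators, but Flink SQL does expose a global default through configuration (option name as documented for recent Flink versions; a hedged sketch):

```sql
-- applies to the operators the planner generates for subsequent queries
SET 'table.exec.resource.default-parallelism' = '4';
```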
of the
box and we cannot expect users to run a SQL shell to run their production SQL
streaming services. Since we are dealing with the job graph generation
ourselves, we have run into the issue where our client needs to be compiled
with the same version of Flink that we are running, otherwise we
On 17 Jun 2021, at 23:55, Sonam Mandal wrote:
Hello,
We are exploring running multiple Flink clusters within a Kubernetes cluster
such that each Flink cluster can run with a specified Flink image version.
Since the Flink Job Graph needs to be compatible with the Flink version running
in the Flink cluster, this brings a challenge in how we
> …ducer, without restarting the current application or using option 1?
>
> --> (Option 2) What do you mean by adding a custom sink at
> coProcessFunction, how will it change the execution graph?
>
> Thanks
> Jessy
>
> On Tue, 16 Mar 2021 at 17:45, Timo Walther wrote:
>
>> Hi Jessy,
>>
>> to be precise, the JobGraph is not used at runtime. It is translated
>> into an ExecutionGraph.
>>
>> But nevertheless such patterns are possible but require a bit of manual
>> implementation.
>>
>> Option 1) You stop the job with a savepoint and restart the application
>> with slightly different parameters. If the pipeline has not changed
>> much, the old state can be remapped to the slightly modified job graph.
>> This is the easiest solution but with the downside of maybe a couple of
>> seconds downtime.
>>
>> Option 2) You introduce a dedicated control stream (i.e. by using the
>> connect() DataStream API [1]). Either you implement a c…
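Option 2, the connect()-plus-CoProcessFunction pattern, can be sketched without Flink at all. A toy Python model of the idea, where one input carries data and the other carries control events that toggle an optional sink (all names are made up for illustration):

```python
class ControlledRouter:
    """Toy model of a CoProcessFunction: one input is data, the other is
    control events that enable or disable an extra sink path."""

    def __init__(self):
        self.extra_sink_enabled = False  # "state" updated by the control stream
        self.main_out, self.extra_out = [], []

    def process_control(self, command):
        # In Flink this would be processElement2 updating keyed/broadcast state
        self.extra_sink_enabled = (command == "enable-extra-sink")

    def process_element(self, event):
        self.main_out.append(event)        # primary path always runs
        if self.extra_sink_enabled:
            self.extra_out.append(event)   # side path feeds the optional sink

router = ControlledRouter()
router.process_element("a")
router.process_control("enable-extra-sink")
router.process_element("b")
print(router.main_out, router.extra_out)  # ['a', 'b'] ['b']
```

Note the job graph itself never changes here: the extra sink must already be wired in at submission time, and the control stream only decides whether records flow into it.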
Hi Team,
Is it possible to edit the job graph at runtime? Suppose I want to add
a new sink to the Flink application at runtime, depending on specific
parameters in the incoming events. Can I edit the job graph of a
running Flink application?
Thanks
Jessy
<theo.diefent...@scoop-software.de> wrote:
Hi there,
I'm currently analyzing a weird behavior of one of our jobs running on YARN
with Flink 1.11.2. I have a kind of special situation here in that regard that
I submit a single streaming job with a disjoint job graph, i.e. that job
contains two graphs of the same kind but to
… - Trigger heartbeat request.
2020-09-01 11:50:37,354 DEBUG org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Received heartbeat from 4e4ae8b90f911787ac112c2847759512.
2020-09-01 11:50:43,904 DEBUG org.apache.flink.runtime.blob.FileSystemBlobStore - Copying from /tmp/flink-blobs/blobStore-6e468470-ba5f-4ea0-a8fd-cf31af663f11/job_980d3ff229b7fbfe889e2bc93e526da0/blob_p-90776d1fe82af438f6fe2c4385461fe6cb96d25a-86f63972fbf1e10b502f1640fe01b426 to gs:///blob/job_980d3ff229b7fbfe889e2bc93e526da0/blob_p-90776d1fe82af438f6fe2c4385461fe6cb96d25a-86f63972fbf1e10b502f1640fe01b426.
2020-09-01 11:50:43,904 DEBUG org.apache.flink.shaded.zookeeper.org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x30be3d929102460 after 2ms
2020-09-01 11:50:46,400 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Received JobGraph submission 980d3ff229b7fbfe889e2bc93e526da0 (cli-test-001).
… - Submitting job 980d3ff229b7fbfe889e2bc93e526da0 (cli-test-001).
2020-09-01 11:50:46,405 DEBUG org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore - Adding job graph 980d3ff229b7fbfe889e2bc93e526da0 to flink/cluster/jobgraphs/980d3ff229b7fbfe889e2bc93e526da0.
2020-09-01 11:50:47,325 …
Hi Prakhar,
have you enabled HA for your cluster? If yes, then Flink will try to store
the job graph to the configured high-availability.storageDir in order to be
able to recover it. If this operation takes long, then it is either the
filesystem which is slow or storing the pointer in ZooKeeper
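For context, the relevant HA settings look like this in flink-conf.yaml (values are illustrative). Every submitted job graph is persisted under high-availability.storageDir, so a slow filesystem there shows up directly as submission latency:

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
# job graphs and blobs are written here on submission and read on recovery
high-availability.storageDir: gs://my-bucket/flink/ha
```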
state
backend having GCS as remote storage.
On running the cluster in debug mode, we observed that generating the plan
itself takes around 6 seconds and copying job graph from local to the
remote folder takes around 10 seconds.
We were wondering whether this delay is expected or if it can be reduced.
Hello, I have an auto-generated job that creates too many tasks for web UI’s
job graph to handle. The browser pinwheels while the page attempts to load. Is
it possible to disable the job graph component in the web UI? For slightly
smaller jobs, once the graph loads the rest of the UI is usable
On Wed, Jun 29, 2016 at 9:19 PM, Bajaj, Abhinav wrote:
> Is there a plan to add the Job id or name to the logs?
This is now part of the YARN client output and should be part of the
1.1 release.
Regarding your other question: in standalone mode, you have to
manually make sure to not submit multiple…
Tuesday, June 21, 2016 at 8:23 AM
To: "user@flink.apache.org", Till Rohrmann <trohrm...@apache.org>
Cc: Aljoscha Krettek <aljos...@apache.org>
Subject: Re: Documentation for translation of Job graph to Execution graph
Hi,
the link has been newly added, yes.
Regarding Q1, since there is n
Hi,
Thanks for sharing this link. I have not seen it before. Maybe this is newly
added in the 1.0 docs. I will go through it.
In general, there are two things I am trying to understand and get comfortable
with:
1. How a Job graph is translated to an Execution graph. The logs and
   monitoring APIs are for the Execution graph, so I need to map them to the
   Job graph. I am trying to bridge this gap.
2. The job manager …
…Bajaj, Abhinav wrote:
Hi,
When troubleshooting a Flink job, it is tricky to map the Job graph
(application code) to the logs & monitoring REST APIs.
So, I am trying to find documentation on how a Job graph is translated to
Execution graph.
I found this -
https://ci.apache.org/projects/flink/flink-docs-release
45 matches