metrics for through
the various name() methods
2. Use the jobid API [1] and find the operator we’ve named in the
“vertices” array
[1]https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/ops/rest_api/#jobs-jobid
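The two steps above can be sketched as follows — a hypothetical parse of the `/jobs/:jobid` response (the sample payload below is a hand-written illustration of the response shape from the REST API docs, not real output):

```python
# Sketch: find a named operator's vertex in a /jobs/:jobid response.
# Field names ("vertices", "id", "name") follow the Flink REST API docs;
# the sample payload and operator names are made up.

def find_vertex_id(job_detail, operator_name):
    """Return the id of the vertex whose name matches the operator named via name()."""
    for vertex in job_detail.get("vertices", []):
        if vertex.get("name") == operator_name:
            return vertex.get("id")
    return None

sample = {
    "jid": "7684be6004e4e955c2a558a9bc463f65",
    "vertices": [
        {"id": "cbc357ccb763df2852fee8c4fc7d55f2", "name": "my-map-operator"},
        {"id": "90bea66de1c231edf33913ecd54406c1", "name": "my-sink"},
    ],
}

print(find_vertex_id(sample, "my-map-operator"))
```

With the vertex id in hand, the per-vertex metrics endpoints under `/jobs/:jobid/vertices/:vertexid` can then be queried.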
From: M Singh
Sent: Tuesday, June 7, 2022 4:51 PM
Hi Folks:
I am trying to find if I can get the number of records for an operator using
flinks REST API. I've checked the docs at
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/ops/rest_api/.
I did see some apis that use vertexid, but could not find how to get that info
without
Hi:
I just wanted to see if this is the right place for Apache Flink StateFun users
email list.
If it is not, please let me know and I apologize for any inconvenience.
Thanks
Hi Folks:
I am trying to understand details of module.yaml. The documentation at
(https://nightlies.apache.org/flink/flink-statefun-docs-release-3.2/docs/modules/overview/)
shows just one example, and I am trying to find out what other kinds can be
configured.
Also, in the greeter example
Hi:
I am working with Stateful Functions 3.2.0 using Java SDK.
I wanted to find out if there is functionality to broadcast an event to all
functions in Stateful Functions, just like broadcast process and keyed
broadcast processing in Apache Flink.
Also, how would we implement processing
Hi Folks:
I would like to run a StateFun application in a Flink cluster just like a Flink
application.
I found the instructions in
(https://nightlies.apache.org/flink/flink-statefun-docs-release-3.2/docs/deployment/overview/)
but could not find if there is any difference in how to build the jar
Hi Folks:
I am using 'kafka' connector and joining with data from jdbc source (using
connector). I am using Flink v 1.14.3. If I do a left outer join between
kafka source and jdbc source, and try to save it to another kafka sink using
connectors api, I get the following exception:
Exception
phase rather than when parsing,
but it shouldn't work anyway.
I suggest you just use the Table API for that, as it's richer. You can even use
withColumns(range(..)), which gives you more control.
Hope it helps,
FG
On Thu, Feb 17, 2022 at 1:34 AM M Singh wrote:
Hi:
I have a simple concatenate UDF
AM M Singh wrote:
Hi Folks:
I am trying to monitor a jdbc source and continuously stream data in an
application using the jdbc connector. However, the application stops after
reading the data in the table.
I've checked the docs
(https://nightlies.apache.org/flink/flink-docs-master/docs
Hi Folks:
I am trying to monitor a jdbc source and continuously stream data in an
application using the jdbc connector. However, the application stops after
reading the data in the table.
I've checked the docs
18, 2022 at 1:28 AM M Singh wrote:
Hi:
I have a simple application and am using file system connector to monitor a
directory and then print to the console (using datastream). However, the
application stops after reading the file in the directory (at the moment I have
a single file
Hi:
I have a simple application and am using file system connector to monitor a
directory and then print to the console (using datastream). However, the
application stops after reading the file in the directory (at the moment I have
a single file in the directory). I am using Apache Flink
Hi:
I have a simple concatenate UDF (for testing purposes) defined as:
public static class ConcatenateFunction extends ScalarFunction {
    public String eval(@DataTypeHint(inputGroup = InputGroup.ANY) Object... inputs) {
        return Arrays.stream(inputs).map(i ->
is NULL, and no
events will be considered late.
Hope this helps clarify things.
Regards,
David
On Sat, Feb 12, 2022 at 12:01 AM M Singh wrote:
I thought a little more about your references Martijn and wanted to confirm
one thing - the table is specifying the watermark and the downstream view needs
.
Mans
On Friday, February 11, 2022, 05:02:49 PM EST, M Singh
wrote:
Hi Martijn:
Thanks for the reference.
My understanding was that if we use a watermark then any event with event time
(in the above example) < event_time - 30 seconds will be dropped automatically.
My question
at 16:45, M Singh wrote:
Hi:
The Flink docs
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/)
indicate that CURRENT_WATERMARK(rowtime) can return null:
Note that this function can return NULL, and you may have to consider this
case. Fo
Hi:
The Flink docs
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/systemfunctions/)
indicate that CURRENT_WATERMARK(rowtime) can return null:
Note that this function can return NULL, and you may have to consider this
case. For example, if you want to filter
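The NULL-watermark rule discussed in this thread can be sketched in plain Python standing in for the SQL `CURRENT_WATERMARK` check (the function and names are hypothetical, not Flink API):

```python
# Sketch: when the current watermark is None (SQL NULL), no event is
# considered late; otherwise an event is late if its event time falls
# strictly behind the watermark.

def is_late(event_time_ms, current_watermark_ms):
    if current_watermark_ms is None:   # CURRENT_WATERMARK returned NULL
        return False                   # nothing is treated as late yet
    return event_time_ms < current_watermark_ms

print(is_late(1000, None))   # no watermark yet
print(is_late(1000, 2000))   # behind the watermark
print(is_late(3000, 2000))   # ahead of the watermark
```

A SQL filter on late rows would therefore need to handle the NULL case explicitly, as the docs note.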
],groupByType:byAccount,aggregationKey:'2'}
From: Colletta, Edward
Sent: Tuesday, January 25, 2022 1:29 PM
To: M Singh ; Caizhi Weng ;
User-Flink
Subject: RE: Apache Fink - Adding/Removing KeyedStreams at run time without
stopping the application
You don’t have to add keyBy’s
to the stream based on the value of groupByFields
The flatMap means one message may be emitted several times with different
values of aggregationKey so it may belong to multiple aggregations.
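The fan-out Edward describes can be sketched like this (field names such as `aggregationKey` come from the thread; the record layout and function are hypothetical):

```python
# Sketch: one input message is emitted once per group-by field, each copy
# tagged with its own aggregationKey, so a single message can feed
# several aggregations without adding new keyBy's.

def flat_map(message, group_by_fields):
    for field in group_by_fields:
        out = dict(message)                  # copy so each emission is independent
        out["aggregationKey"] = message[field]
        yield out

msg = {"account": "a1", "region": "us-east", "amount": 10}
copies = list(flat_map(msg, ["account", "region"]))
print([c["aggregationKey"] for c in copies])
```

Downstream, a single keyBy on `aggregationKey` then routes each copy to its aggregation.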
From: M Singh
Sent: Monday, January 24, 2022 9:52 PM
To: Caizhi Weng ; User-Flink
would be
SingleOutputStreamOperator, which cannot be parallel.
[1]
https://github.com/apache/flink/blob/master/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/AllWindowedStream.scala
Best,
Yun Tang
From: M Singh
Sent: Sunday, January 23, 2022 4:24
To: User-Flink
Subject
it is not possible to do so without restarting the job and as far as
I know there is no existing framework/pattern to achieve this.
By the way, why do you need this functionality? Could you elaborate more on
your use case?
M Singh wrote on Sat, Jan 22, 2022, at 21:32:
Hi Folks:
I am working on an exploratory project
Hi Folks:
The documentation for AllWindowedStream
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/overview/#datastream-rarr-allwindowedstream)
has a note:
This is in many cases a non-parallel transformation. All records will be
gathered in one task for the
Hi Folks:
I am working on an exploratory project in which I would like to add/remove
KeyedStreams
(https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/overview/#keyby)
without restarting the Flink streaming application.
Is it possible natively in Apache Flink?
Thanks Chesnay for your pointers. Mans
On Sunday, January 16, 2022, 06:19:09 AM EST, Chesnay Schepler
wrote:
see https://github.com/apache/flink/tree/master/docs
On 15/01/2022 15:04, M Singh wrote:
Hi:
I wanted to find out what's the command for building the flink
Hi:
I wanted to find out what's the command for building the flink docs locally and
the location of the final docs. Is it a part of the build commands (Building
Flink from Source) and can they be built separately?
Thanks
`, `JdbcOutputFormat.Builder` would
choose to create a `TableJdbcUpsertOutputFormat` or `JdbcOutputFormat` instance
depending on whether key fields are defined in the DML.
[1]
https://ci.apache.org/projects/flink/flink-docs-master/docs/connectors/table/jdbc/#idempotent-writes
Best,
Jing Zhang
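The builder behavior described above can be sketched as a simple decision (the class names come from the thread; the function itself is hypothetical, not Flink code):

```python
# Sketch: the builder picks the upsert variant only when the DML defines
# key fields, per the idempotent-writes docs referenced above; otherwise
# it falls back to plain append-only writes.

def choose_output_format(key_fields):
    if key_fields:                        # key fields defined in the DML
        return "TableJdbcUpsertOutputFormat"
    return "JdbcOutputFormat"             # append-only writes

print(choose_output_format(["cnt", "cTag"]))
print(choose_output_format(None))
```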
M Singh wrote on Oct 19, 2021
;
+ ")\n"
+ "GROUP BY cnt, cTag")
.await();
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/jdbc/#key-handling
[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/d
Hi Folks:
I am working on a Flink DataStream pipeline and would like to use JDBC upsert
functionality. I found a class TableJdbcUpsertOutputFormat but am not sure how
to use it with the JdbcSink as shown in the document
need schema registry.
Best,
Dawid
[1]https://avro.apache.org/docs/1.10.2/spec.html#Data+Serialization+and+Deserialization
[2] https://avro.apache.org/docs/1.10.2/spec.html#Schema+Resolution
On 15/07/2021 15:48, M Singh wrote:
Hello Dawid:
Thanks for your answers and references
ever used in a single parallel instance and
it is not sent over a network again. So basically there you use the writer
schema retrieved from schema registry as the reader schema.
I hope this answers your questions.
Best,
Dawid
[1] https://avro.apache.org/docs/1.10.2/spec.html
On 09/07/2021 03:09,
Hi:
I am trying to read avro encoded messages from Kafka with schema registered in
schema registry.
I am using the class
(https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/formats/avro/registry/confluent/ConfluentRegistryAvroDeserializationSchema.html)
using the
ts/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/joins/#event-time-temporal-join
M Singh wrote on Wed, Jul 7, 2021, at 5:22 PM:
Hi Jing:
Thanks for your explanation and references.
I looked at your reference
(https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/joi
//ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/joins/
Best regards,
Jing Zhang
M Singh wrote on Wed, Jul 7, 2021, at 8:23 AM:
Hey Folks:
I am trying to understand how LookupTableSource works and have a few questions:
1. Are there other examples/documentation on how to create
Hey Folks:
I am trying to understand how LookupTableSource works and have a few questions:
1. Are there other examples/documentation on how to create a query that uses it
vs ScanTableSource?
2. Are there any best practices for using this interface?
3. How does the planner decide to use
://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/tableapi/#groupby-window-aggregation
[3]: https://github.com/ververica/flink-sql-cookbook#aggregations-and-analytics
[4]: https://github.com/ververica/sql-training/wiki/Queries-and-Time
On Wed, May 12, 2021 at 1:30 PM M Singh wrote:
Hey
Hey Folks:
I have the following questions regarding Table API/SQL in streaming mode:
1. Is there a notion of triggers/evictors/timers when using the Table API or
SQL interfaces?
2. Is there anything like side outputs and the ability to define allowed
lateness when dealing with the Table API or SQL
.
The Akka ask timeout does not seem to be the root problem to me.
Thank you~
Xintong Song
On Sat, Jul 4, 2020 at 12:12 AM M Singh wrote:
Hi Xintong/LakeShen:
We have the following setting in flink-conf.yaml
akka.ask.timeout: 180 s
akka.tcp.timeout: 180 s
But still see this exception
would suggest to look
into the jobmanager logs and gc logs, and see if there's any problem that
prevents the process from handling the rpc messages in a timely manner.
Thank you~
Xintong Song
On Fri, Jul 3, 2020 at 3:51 AM M Singh wrote:
Hi:
I am using Flink 1.10 on AWS EMR cluster.
We are getting
Hi:
I am using Flink 1.10 on AWS EMR cluster.
We are getting AskTimeoutExceptions, which cause the flink jobs to die.
Caused by: akka.pattern.AskTimeoutException: Ask timed out on
[Actor[akka://flink/user/resourcemanager#-1602864959]] after [1 ms].
Message of type
the data to the consumer, instead of
the consumer periodically pulling the data).
Roman Grebennikov | g...@dfdx.me
On Wed, Jun 17, 2020, at 04:39, M Singh wrote:
Thanks Roman for your response and advice.
From my understanding, increasing shards will increase throughput, but still if
more
"1.5") // multiplying pause by 1.5 on each next step
conf.put(ConsumerConfigConstants.SHARD_GETRECORDS_RETRIES, "100") // and
make up to 100 retries
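The retry settings above imply a geometric back-off; a rough sketch of the resulting pause schedule (the 1.5 multiplier mirrors the thread, while the base pause, cap, and function are hypothetical):

```python
# Sketch: the getRecords pause grows by a factor of 1.5 per retry, up to
# a configured number of retries, with an upper cap on the pause.

def backoff_schedule(base_ms=200, factor=1.5, max_ms=5000, retries=5):
    pause = base_ms
    for _ in range(retries):
        yield min(pause, max_ms)   # never pause longer than the cap
        pause *= factor

schedule = [round(p) for p in backoff_schedule()]
print(schedule)
```

With 100 retries configured, the pause quickly saturates at the cap instead of growing unbounded.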
with best regards,
Roman Grebennikov | g...@dfdx.me
On Mon, Jun 15, 2020, at 13:45, M Singh wrote:
Hi:
I am using multiple (almost
Hi:
I am using multiple (almost 30 and growing) Flink streaming applications that
read from the same kinesis stream and get a
ProvisionedThroughputExceededException, which fails the job.
I have seen a reference
Hi:
I wanted to find out if we can access the savepoints created for a job or all
jobs using Flink CLI or REST API.
I took a look at the CLI (Apache Flink 1.10 Documentation: Command-Line
Interface) and REST API
-docs-release-1.6/api/java/org/apache/flink/api/common/functions/StoppableFunction.html
[2]
https://github.com/apache/flink/blob/release-1.6/flink-core/src/main/java/org/apache/flink/api/common/functions/StoppableFunction.java
On Sat, Jun 6, 2020 at 8:03 PM M Singh wrote:
Hi Arvid:
I check
if this SO thread [1] helps you already?
[1]
https://stackoverflow.com/questions/53735318/flink-how-to-solve-error-this-job-is-not-stoppable
On Thu, Jun 4, 2020 at 7:43 PM M Singh wrote:
Hi:
I am running a job which consumes data from Kinesis and sends data to another
Kinesis queue. I am using
Hi:
I am running a job which consumes data from Kinesis and sends data to another
Kinesis queue. I am using an older version of Flink (1.6), and when I try to
stop the job I get an exception
Caused by: java.util.concurrent.ExecutionException:
will recover the submitted jobs from a persistent
storage system.
Cheers,
Till
On Thu, May 28, 2020 at 4:05 PM M Singh wrote:
Hi Till/Zhu/Yang: Thanks for your replies.
So just to clarify - the job id remains the same if the job restarts have not
been exhausted. Does Yarn also resubmit the job in case
e) is also used for the same
purpose from the execution environments.
Cheers,
Kostas
On Wed, May 27, 2020 at 4:49 AM Yang Wang wrote:
>
> Hi M Singh,
>
> The Flink CLI picks up the correct ClusterClientFactory via java SPI. You
> could check YarnClusterClientFactory#isCompatibleWit
the checkpoints associated with
this job.
Cheers,
Till
On Tue, May 26, 2020 at 12:42 PM M Singh wrote:
Hi Zhu Zhu:
I have another clarification - it looks like if I run the same app multiple
times, its job id changes. So it looks like even though the graph is the same,
the job id is not dependent
Hi:
I wanted to find out which parameter/configuration allows flink cli pick up the
appropriate cluster client factory (especially in the yarn mode).
Thanks
know if I've missed anything.
Thanks
On Monday, May 25, 2020, 05:32:39 PM EDT, M Singh
wrote:
Hi Zhu Zhu:
Just to clarify - from what I understand, EMR also has a default number of
restarts (I think it is 3). So if EMR restarts the job, the job id is the same
since the job graph
On Saturday, May 23, 2020, 10:17:27 AM EDT, Chesnay Schepler
wrote:
You also have to set the boolean cancel-job parameter.
On 22/05/2020 22:47, M Singh wrote:
Hi:
I am using Flink 1.6.2 and trying to create a savepoint using the following
curl command using the following
side and persist in DFS for reuse
2. yes if high availability is enabled
Thanks,
Zhu Zhu
M Singh wrote on Sat, May 23, 2020, at 4:06 AM:
Hi Flink Folks:
If I have a Flink Application with 10 restarts, if it fails and restarts, then:
1. Does the job have the same id?
2. Does the automatically restarting
Hi:
I am using Flink 1.6.2 and trying to create a savepoint using the following
curl command using the following references
(https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/rest_api.html)
and
Hi Flink Folks:
If I have a Flink Application with 10 restarts, if it fails and restarts, then:
1. Does the job have the same id?
2. Does the automatically restarting application pick up from the last
checkpoint? I am assuming it does but just want to confirm.
Also, if it is running on AWS EMR
BTW - Is there any limit to the amount of data that can be stored on emptyDir
in K8s?
On Wednesday, February 26, 2020, 07:33:54 PM EST, M Singh
wrote:
Thanks Yang and Arvid for your advice and pointers. Mans
On Wednesday, February 26, 2020, 09:52:26 AM EST, Arvid Heise
wrote
at 3:36 AM Yang Wang wrote:
Hi M Singh,
> Mans - If we use the session based deployment option for K8s - I thought K8s
> will automatically restart any failed TM or JM.
In the case of a failed TM, the job will probably recover, but in the case of a
failed JM, perhaps we need to resubmit al
Thanks will try your recommendations and apologize for the delayed response.
On Wednesday, January 29, 2020, 09:58:26 AM EST, Till Rohrmann
wrote:
Hi M Singh,
have you checked the TaskManager logs of
ip-xx-xxx-xxx-xxx.ec2.internal/xx.xxx.xxx.xxx:39623 for any suspicious logging
should we use emptyDir ?
[1].
https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html
M Singh wrote on Sun, Feb 23, 2020, at 2:28 AM:
Hey Folks:
I am trying to figure out the options for running Flink on Kubernetes and am
trying to find out the pros and cons of running in F
Hey Folks:
I am trying to figure out the options for running Flink on Kubernetes and am
trying to find out the pros and cons of running in Flink Session vs Flink
Cluster mode
Hi Folks:
We have a streaming Flink application (using v1.6.2) and it dies within 12
hours. We have configured the number of restarts, which is 10 at the moment.
Sometimes the job runs for some time and then within a very short time has a
number of restarts and finally fails. In other instances, the
/KeyGroupStreamPartitioner.java#L58
Best,
Yun Tang
From: M Singh
Sent: Friday, January 10, 2020 23:29
To: User
Subject: Apache Flink - Sharing state in processors
Hi:
I have a few questions about how state is shared in processors in Flink.
1. If I have a processor instantiated in the Flink app, and use it
Hi:
I have a few questions about how state is shared in processors in Flink.
1. If I have a processor instantiated in the Flink app, and I use it
multiple times in the Flink app - (a) if the tasks are in the same slot, do
they share the same processor instance on the taskmanager?
(b) if the
ecommended for a streaming job?
Best,
Vino
M Singh wrote on Tue, Dec 24, 2019, at 4:02 PM:
Hi:
I wanted to find out what's the best way of collecting Flink metrics using
Prometheus in a streaming application on EMR/Hadoop.
Since the Flink streaming jobs could be running on any node - is there any
Prome
Hi:
I wanted to find out what's the best way of collecting Flink metrics using
Prometheus in a streaming application on EMR/Hadoop.
Since the Flink streaming jobs could be running on any node - is there any
Prometheus configuration or service discovery option available that will
dynamically
specify it to distinguish
different Flink cluster.[1]
[1]:
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/metrics.html#datadog-orgapacheflinkmetricsdatadogdatadoghttpreporter
Best,
Vino
M Singh wrote on Thu, Dec 19, 2019, at 2:54 AM:
Hi:
I am using AWS EMR with Flink application and two of the job
Hi:
I am using AWS EMR with a Flink application and two of the job managers are
running on the same host. I am looking at the metrics documentation (Apache
Flink 1.9 Documentation: Metrics) and see the following:
On Wed, Dec 11, 2019 at 1:40 PM M Singh wrote:
Thanks Timo for your answer. I will try the prototype but was wondering if I
can find some theoretical documentation to give me a sound understanding.
Mans
On Wednesday, December 11, 2019, 05:44:15 AM EST, Timo Walther
wrote:
Little mistake
Thanks Zhu for your advice. Mans
On Tuesday, December 10, 2019, 09:32:01 PM EST, Zhu Zhu
wrote:
Hi M Singh,
I think you would be able to know the request failure cause and whether it is
recoverable or not. You can handle the error as you like. For example, if you
think the error
time for testing the
> semantics.
>
> I hope this helps. Otherwise of course we can help you in finding the
> answers to the remaining questions.
>
> Regards,
> Timo
>
>
>
> On 10.12.19 20:32, M Singh wrote:
>> Hi:
>>
>> I have a few question
Hi:
I have a few questions about the side output late data.
Here is the API
stream
.keyBy(...) <- keyed versus non-keyed windows
.window(...) <- required: "assigner"
[.trigger(...)] <- optional: "trigger" (else default trigger)
:08:39 PM EST, Jingsong Li
wrote:
Hi M Singh,
We have this scenario internally too. As far as I know, Flink does not have
this internal mechanism in 1.9 either. I can share my solution:
- In the async function, start a thread factory.
- Send the call to the thread factory when this call has failed. Do
,
there should be no issue with having only side outputs in your operator. There
should also be no big drawbacks. I guess mostly some metrics will not be
properly populated, but you can always populate them manually or add new ones.
Best,
Arvid
On Mon, Dec 2, 2019 at 8:40 PM M Singh wrote:
Hi:
I am replacing
Hi Folks:
I am working on a project where I will be using Flink's async processing
capabilities. The job has to make http request using a token. The token
expires periodically and needs to be refreshed.
So, I was looking for patterns for handling async call failures and retries
when the token
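One hedged sketch of such a retry pattern — refresh the token on an auth failure and retry a bounded number of times (all names are hypothetical, not from the thread or any Flink API):

```python
# Sketch: wrap an outbound call so an expired token is refreshed and the
# request retried, bounded by max_retries.

class TokenExpired(Exception):
    """Raised by the request when the auth token is no longer valid."""

def call_with_refresh(make_request, refresh_token, token, max_retries=3):
    for _ in range(max_retries):
        try:
            return make_request(token)
        except TokenExpired:
            token = refresh_token()     # get a fresh token before retrying
    raise RuntimeError("retries exhausted")

# Tiny demo: the first call fails with an expired token, the retry succeeds.
def make_request(token):
    if token == "stale":
        raise TokenExpired()
    return "response-with-" + token

def refresh_token():
    return "fresh"

print(call_with_refresh(make_request, refresh_token, "stale"))
```

In an AsyncFunction the same idea applies, with the retry scheduled on the async client's executor rather than blocking the operator thread.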
Hi:
I am replacing SplitOperator in my flink application with a simple processor
with side outputs.
My question is: does the main stream from which we get the side outputs
need to have any events (i.e., produced using collector.collect)?
Or can we have all the output as side
Wednesday, November 27, 2019, 11:32:06 AM
EST, Rong Rong wrote:
Hi Mans,
is this what you are looking for [1][2]?
--Rong
[1] https://issues.apache.org/jira/browse/FLINK-11501
[2] https://github.com/apache/flink/pull/7679
On Mon, Nov 25, 2019 at 3:29 AM M Singh wrote:
Thanks Caizhi & Th
-to-all-operators-in-my-job
[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/state/savepoints.html#what-happens-if-i-delete-an-operator-that-has-state-from-my-job
Best,
Congxian
M Singh wrote on Tue, Nov 26, 2019, at 10:49 AM:
Hi Kostas/Congxian:
Thanks for your response.
Based on your
rovide some code from your job, just to see if
> there is anything problematic with the application code?
> Normally there should be no problem with not providing UIDs for some
> stateless operators.
>
> Cheers,
> Kostas
>
> On Sat, Nov 23, 2019 at 11:16 AM M Singh wrote:
Thanks Dian for your pointers. Mans
On Sunday, November 24, 2019, 08:57:53 PM EST, Dian Fu wrote:
Hi Mans,
Please see my reply inline below.
On Nov 25, 2019, at 5:42 AM, M Singh wrote:
Thanks Dian for your answers.
A few more questions:
1. If I do not assign uids to operators/sources
able/udfs.html
Here is a throttling iterator example:
https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/utils/ThrottledIterator.java
M Singh wrote on Mon, Nov 25, 2019, at 5:50 AM:
Hi:
I have a Flink streaming applica
Hi:
I have a Flink streaming application that invokes some other web services.
However, the web services have limited throughput. So I wanted to find out if
there are any recommendations on how to throttle the Flink datastream so that
it doesn't overload the downstream services. I am using
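In the spirit of the ThrottledIterator example linked above, a minimal rate-limit sketch in plain Python (not the Flink class; the rate and element count below are arbitrary):

```python
# Sketch: cap emission at rate_per_sec elements per second by sleeping
# whenever we are ahead of schedule.

import time

def throttled(iterable, rate_per_sec):
    """Yield items no faster than rate_per_sec, sleeping to stay on schedule."""
    interval = 1.0 / rate_per_sec
    start = time.monotonic()
    for i, item in enumerate(iterable):
        due = start + i * interval     # earliest time this element may be emitted
        now = time.monotonic()
        if now < due:
            time.sleep(due - now)
        yield item

t0 = time.monotonic()
out = list(throttled(range(5), rate_per_sec=100))   # ~40 ms minimum
elapsed = time.monotonic() - t0
print(out)
```

In a real job the throttle would sit in the source (as ThrottledIterator does), so back-pressure naturally slows the rest of the pipeline.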
ating a uid for the sink ?
>> Not sure what you mean by "attribute id". Could you give some more
>> detailed information about it?
Regards,
Dian
On Fri, Nov 22, 2019 at 6:27 PM M Singh wrote:
Hi Folks - Please let me know if you have any advice on the best practices
Hey Folks:
Please let me know how to resolve this issue since using
--allowNonRestoredState without knowing if any state will be lost seems risky.
Thanks
On Friday, November 22, 2019, 02:55:09 PM EST, M Singh
wrote:
Hi:
I have a flink application in which some of the operators have
Hi:
I have a flink application in which some of the operators have uid and name and
some stateless ones don't.
I've taken a save point and tried to start another instance of the application
from a savepoint - I get the following exception which indicates that the
operator is not available to
Hi Folks - Please let me know if you have any advice on the best practices for
setting uid for sources and sinks. Thanks. Mans
On Thursday, November 21, 2019, 10:10:49 PM EST, M Singh wrote:
Hi Folks:
I am assigning uid and name for all stateful processors in our application
Hi Folks:
I am assigning uid and name for all stateful processors in our application and
wanted to find out the following:
1. Should we assign uid and name to the sources and sinks too?
2. What are the pros and cons of adding uid to sources and sinks?
3. The sinks have uid and hashUid - which
-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointMetadataLoadingTest.java
Best,
Congxian
M Singh wrote on Thu, Nov 21, 2019, at 7:44 PM:
Hi Congxian:
For my application i see many uuids under the chk-6 directory ( I posted one in
the sample above). I am trying to understand that if I
, and the one
assigned in the application belongs to one operator; they are different.
Best,
Congxian
M Singh wrote on Thu, Nov 21, 2019, at 6:18 AM:
Hi Arvid:
Thanks for your clarification.
I am supplying a uid for the stateful operators and find the following
directory structure in the checkpoint directory
in any case.
Best,
Arvid
On Sat, Nov 16, 2019 at 6:40 PM M Singh wrote:
Thanks Jiayi for your response. I am thinking on the same lines.
Regarding using the same name and uuid, I believe the checkpoint state for an
operator will be easy to identify if the uuid is the same as name. But I am
-1.9/ops/state/checkpoints.html#resuming-from-a-retained-checkpoint
Best,
Congxian
M Singh wrote on Mon, Nov 18, 2019, at 2:54 AM:
Folks - Please let me know if you have any advice on this question. Thanks
On Saturday, November 16, 2019, 02:39:18 PM EST, M Singh
wrote:
Hi:
I have a Flink job
Folks - Please let me know if you have any advice on this question. Thanks
On Saturday, November 16, 2019, 02:39:18 PM EST, M Singh
wrote:
Hi:
I have a Flink job and sometimes I need to cancel and re-run it. From what I
understand the checkpoints for a job are saved under the job id
Hi:
I have a Flink job and sometimes I need to cancel and re-run it. From what I
understand the checkpoints for a job are saved under the job id directory at
the checkpoint location. If I run the same job again, it will get a new job id
and the checkpoint saved from the previously run job (which
ery operator, which gives me
a better experience in monitoring and scaling.
Hope this helps.
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/ops/upgrading.html#matching-operator-state
Best,
Jiayi Liao
At 2019-11-16 18:35:38, "M Singh" wrote:
Hi:
I a
Hi:
I am working on a project and wanted to find out what are the best practices
for setting name and uuid for operators:
1. Are there any restrictions on the length of the name and uuid attributes?
2. Are there any restrictions on the characters used for name and uuid (blank
spaces, etc.)?
3.
Hi:
Is there any configuration to get heap dump when job fails in an EMR ?
Thanks
t's OK for now. But I suggest not manipulating state in `Evictor` until there
is an official guarantee.
BTW, I'm just wondering why you want to manipulate state in `Evictor`. If
your requirement is reasonable, maybe we could support it officially.
M Singh wrote on Sun, Jul 14, 2019, at 9:07 PM:
Hi:
Is it s
ent `TimeCharacteristic` in one job at the
same time? I guess the answer is no. So I don't think there exists such a
scenario.
M Singh wrote on Fri, Jul 19, 2019, at 12:19 AM:
Hey Folks - Just checking if you have any pointers for me. Thanks for your
advice.
On Sunday, July 14, 2019, 03:12:25 PM EDT, M Singh
wr
not sure what exactly you mean. Could you describe more about your
requirements?
M Singh wrote on Sun, Jul 14, 2019, at 9:33 AM:
Hi:
I wanted to find out what is the timestamp associated with the elements of a
stream side output with different stream time characteristics.
Thanks
Man
Hey Folks - Just checking if you have any pointers for me. Thanks for your
advice.
On Sunday, July 14, 2019, 03:12:25 PM EDT, M Singh
wrote:
Also, are the event time timers and processing time timers handled separately
- i.e., if I register an event time timer and then use the same
Hey Folks - Just wanted to see if there are any thoughts on this question.
Thanks
On Saturday, July 13, 2019, 09:33:15 PM EDT, M Singh
wrote:
Hi:
I wanted to find out what is the timestamp associated with the elements of a
stream side output with different stream time characteristics