Hi Morrisa:
It seems that the Flink planner does not support returning Object (or a generic
type which, as you say, is subject to type erasure) from a ScalarFunction.
In ScalarFunctionCallGen:
val functionCallCode =
  s"""
     |${parameters.map(_.code).mkString("\n")}
     |$resultTypeTerm $resultTerm = $functionReference.eval(
     |
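As a workaround, here is a minimal sketch (ParseLong is a made-up example):
either declare a concrete return type in eval, or override getResultType(...)
so the planner knows the erased type:

import org.apache.flink.api.common.typeinfo.{TypeInformation, Types}
import org.apache.flink.table.functions.ScalarFunction

// Sketch only: eval declares a concrete return type instead of Object.
// If eval must stay generic, overriding getResultType(...) tells the
// planner what the erased type actually is.
class ParseLong extends ScalarFunction {
  def eval(s: String): java.lang.Long = java.lang.Long.valueOf(s)

  override def getResultType(signature: Array[Class[_]]): TypeInformation[_] =
    Types.LONG
}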
Hi Guys,
I observed some strange behaviors while using Queryable state with Flink
1.6.2. Here is the story:
My state is of type MapState[String, Map[String, String]]. The inner map is
frequently updated. Upon querying, the returned inner map sometimes misses
some fields. What's more, sometimes
Hi Soheil,
Do you mean you are running multiple jobs in the same Flink standalone
cluster? Or running multiple Flink standalone clusters on the same set of
machines?
For the former, I don't think there is an easy way to separate the logs.
The log file contains logs of common framework activities
This is probably a very subjective question, but nevertheless here are my
reasons for choosing Flink over KStreams or even Spark.
a) KStreams couples you tightly to Kafka, and I personally don't want my
stream processing engine to be married to my message bus. There are other
(even better
Hi, you need to handle the records that fail to be written to the database properly; for example, you can throw the exception directly, so the job keeps failing over until the database write no longer fails.
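A minimal sketch of that approach (writeToMysql is a hypothetical placeholder
for your actual JDBC insert):

import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

class MysqlSink extends RichSinkFunction[String] {
  override def invoke(value: String): Unit = {
    // If the insert fails, let the exception propagate: the job fails
    // over and restarts from the last checkpoint, so the offending
    // record is re-consumed instead of being silently skipped.
    writeToMysql(value)
  }

  // hypothetical helper doing the actual JDBC insert
  private def writeToMysql(value: String): Unit = ???
}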
方伟 wrote on Mon, May 20, 2019 at 6:02 PM:
> Hi there~:
>
>
Hi,
I have a Flink multinode cluster and I use the Flink standalone scheduler to
deploy applications on the cluster. When I deploy applications on the
cluster, I can see that some log files are created under FLINK_HOME/logs,
but there is no separate log file for each application, and all
Hi Greater Seattle folks!
We are hosting our next meetup with the AWS Kinesis Analytics team next
Thursday, May 30, in downtown Seattle.
We feature two talks this time:
1. *"AWS Kinesis Analytics: running Flink serverless in multi-tenant
environment"* by Kinesis Analytics team on:
-
Hi Morrisa,
usually, this means that your class is not recognized as a POJO. Please
check the requirements of a POJO again: a default constructor, getters and
setters for every field, etc. You can use
org.apache.flink.api.common.typeinfo.Types.POJO(...) to verify whether your
class is a POJO or not.
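For example, a minimal sketch (SensorReading is a made-up class);
Types.POJO(...) throws an exception if the class does not fulfill the POJO
rules:

import org.apache.flink.api.common.typeinfo.Types
import scala.beans.BeanProperty

// @BeanProperty generates the getters/setters Flink's POJO analysis
// expects; a parameterless Scala class also has a default constructor.
class SensorReading {
  @BeanProperty var id: String = _
  @BeanProperty var value: Double = _
}

// Returns the derived PojoTypeInfo, or throws if the class is no POJO.
val info = Types.POJO(classOf[SensorReading])
println(info)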
Hi Shahar,
yes, the number of parameters is likely the cause of the "cannot compile"
exception. If you move most of the constants into a member of the
function, it should actually work.
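A minimal sketch of what I mean (names are made up): keep the constants
inside the function instead of passing them all as call arguments, so the
generated eval(...) call stays small:

import org.apache.flink.table.functions.ScalarFunction

class TagFunction extends ScalarFunction {
  // constants moved from the SQL call into the function itself
  private val separator = ","
  private val prefix = "tag:"

  def eval(a: String, b: String): String =
    prefix + a + separator + b
}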
Do you have a little reproducible example somewhere?
Thanks,
Timo
On 16.05.19 at 19:59, shkob1 wrote:
Hi
Hi Harshith,
This is indeed an issue that is not resolved in 1.8. I added a comment to the
(closed) Jira issue, so it might be fixed in future releases.
Cheers,
Wouter
On Mon, 20 May 2019 at 16:18, Kumar Bolar, Harshith wrote:
> Hi Wouter,
>
>
>
> I’ve upgraded Flink to 1.8, but now I only see
Hi Wouter,
I’ve upgraded Flink to 1.8, but now I only see Internal server error on the
dashboard when a job deployment fails.
[attached screenshot: dashboard showing the Internal Server Error]
But in the logs I see the correct exception -
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at
Hello guys,
We are trying to track the network queues. After enabling
"taskmanager.network.detailed-metrics" as described in the docs [1], we do not
see any of the *QueueLen metrics from [2] in our metrics reporter (slf4j, for
debugging purposes).
We are using Flink 1.8.0 at the moment.
Do you guys
Hi XiangWei,
Thank you for the inputs. I agree with you that it is possible that
containers use extra memory in 1.8. As for native memory, it is memory
used by the JVM and by other processes outside the JVM, so it is not limited
by MaxDirectMemorySize.
The community is working on a refactoring plan
It helps! Thank you.
From: Aljoscha Krettek
Sent: 20 May 2019 12:45
To: Hanan Yehudai
Cc: user@flink.apache.org
Subject: Re: monitor finished files on a Continues Reader
Hi,
I think what you’re trying to achieve is not possible with the out-of-the-box
file source. The problem is that it is
Hi there~:
A question: I use Flink to consume data from Kafka, with a checkpoint every 5s recording the partition offsets and EXACTLY_ONCE configured, writing the consumed data into MySQL. How can I guarantee that when a database write fails (for example, a column length was set too small), the failed record is consumed again on restart? (My understanding is that the record may already be covered by a checkpoint, so the next consumption would skip it. Is that right? How do I solve this?) Thanks!
Hi,
I think what you’re trying to achieve is not possible with the out-of-the-box
file source. The problem is that it is hard to know when a file can be deleted, i.e.
there are multiple splits of a file and those are possibly read on different
parallel operators. Plus, deletion/move of files has
Hi,
You should be able to pass the Configuration via:
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment#createLocalEnvironment(int,
org.apache.flink.configuration.Configuration)
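For example, a minimal sketch (the config key and parallelism are just
illustrative):

import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

val conf = new Configuration()
conf.setString("rest.port", "8082") // example setting only

// parallelism 4 is an arbitrary example value
val env = StreamExecutionEnvironment.createLocalEnvironment(4, conf)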
Regards,
Dawid
On 19/05/2019 20:49, M Singh wrote:
> Hey Flink Folks:
>
> I was trying to find
Thanks! It works now.
baiyg25...@hundsun.com
From: Shi Quan
Sent: 2019-05-20 17:28
To: user-zh@flink.apache.org
Subject: RE: Does the flink Table API support the Date type?
You can use SqlTimeTypeInfo.DATE.
The type conversion mapping in FlinkTypeFactory typeInfoToSqlTypeName:
// temporal types
case SqlTimeTypeInfo.DATE => DATE
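For example, a minimal sketch (driver, URL and query are placeholders)
declaring the Date column via SqlTimeTypeInfo.DATE in the row type:

import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, SqlTimeTypeInfo}
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
import org.apache.flink.api.java.typeutils.RowTypeInfo

// declare the Date column as SqlTimeTypeInfo.DATE in the row type
val rowTypeInfo = new RowTypeInfo(
  BasicTypeInfo.STRING_TYPE_INFO,
  SqlTimeTypeInfo.DATE)

val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
  .setDrivername("com.mysql.jdbc.Driver")       // placeholder
  .setDBUrl("jdbc:mysql://localhost:3306/mydb") // placeholder
  .setQuery("SELECT name, birth_date FROM t")   // placeholder
  .setRowTypeInfo(rowTypeInfo)
  .finish()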
Hi Theo,
Could you try replacing the CEP operator with a simple flatMap to see if
the CEP is the reason for the backpressure? Another reason for this
behavior might be the time of serialization (what is the serialization
format?) of the records. You could also try enabling object reuse[1].
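Enabling object reuse is a one-liner:

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// avoids defensive copies between chained operators
env.getConfig.enableObjectReuse()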
Good
Hi all!
The flink Table API appears to support the Date type, but in practice I get an
exception: Caused by: org.apache.flink.table.api.TableException: Type is not supported: Date.
Scenario:
Step 1: read Date-typed data from a mysql database
new JDBCInputFormat
.JDBCInputFormatBuilder()
Hi Harshith,
I haven't tried it, but for Kafka you should be able to use the dynamic
SASL configuration of the underlying KafkaConsumer. Try setting the
`sasl.jaas.config` parameter on the FlinkKafkaConsumer properties as per
the Kafka documentation.
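A minimal sketch (broker, topic, keytab path and principal are placeholders):

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

val props = new Properties()
props.setProperty("bootstrap.servers", "kafka:9092") // placeholder
props.setProperty("security.protocol", "SASL_PLAINTEXT")
props.setProperty("sasl.mechanism", "GSSAPI")
props.setProperty("sasl.kerberos.service.name", "kafka")
// per-job JAAS config, so each job can point at its own keytab
props.setProperty("sasl.jaas.config",
  "com.sun.security.auth.module.Krb5LoginModule required " +
  "useKeyTab=true storeKey=true " +
  "keyTab=\"/path/to/team.keytab\" " + // placeholder
  "principal=\"team@EXAMPLE.COM\";")   // placeholder

val consumer = new FlinkKafkaConsumer[String](
  "my-topic", new SimpleStringSchema(), props) // placeholder topic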
As far as I know, if you use Flink's specific way of
Hi all,
Currently I am running my flink application in yarn session mode, using the
command below:
*bin/yarn-session.sh -d -s 3 -jm 1024 -tm 4096*
When the taskmanager has started, I found that the container launch command
is:
* bin/java -Xms2765m -Xmx2765m -XX:MaxDirectMemorySize=1331m
Hi all,
We have a central Flink cluster which will be used by multiple different teams
(Data Science, Engineering etc). Each team has their own user and keytab to
connect to services like Kafka, Cassandra etc. How should the jobs be
configured such that different jobs use different keytabs and
Hi Monika and Georgi,
it is quite hard to debug this problem remotely because one would need
access to the logs with DEBUG log level. Additionally, it would be great to
have a better understanding of what the individual operators do.
In general, 20 MB of state should be super easy to handle for
Hi,
Have you checked the logs for exceptions? Could you share the logs with
us? Have you tried switching to e.g. the FsStateBackend, or disabling
incremental checkpoints, on 1.7.2 to see in which configuration the
problem occurs?
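For example, a minimal sketch (the checkpoint path is a placeholder):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment

// option 1: the heap-based FsStateBackend
env.setStateBackend(new FsStateBackend("file:///tmp/checkpoints"))

// option 2: keep RocksDB but turn incremental checkpoints off
// env.setStateBackend(new RocksDBStateBackend("file:///tmp/checkpoints", false))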
Best,
Dawid
On 20/05/2019 09:09, Georgi Stoyanov wrote:
>
> Hi
Hi
I'm looking for a way to delete/rename files that are done loading.
I'm using env.readFile, monitoring a directory for all new files; once a file
is done with, I would like to delete it.
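For reference, my setup is roughly the following (the directory path is a
placeholder):

import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val dir = "/data/in" // placeholder directory
val format = new TextInputFormat(new Path(dir))
// continuously monitor the directory, polling every 1000 ms
val lines = env.readFile(format, dir,
  FileProcessingMode.PROCESS_CONTINUOUSLY, 1000L)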
Is there a way to monitor the closed splits in the continuous reader? Is there
a different way
Hi,
Currently no committers (or PMC members) are focusing on the queryable state
feature. This will probably mean that not much is going to happen there in the
near future. However, there is some discussion on the development by the larger
community about QS:
Hi
As Queryable State is in beta, can you please confirm the plan for the formal
release of the Queryable State feature?
Is there any timeline for when this would be added to Flink?
Thanks !!!
/// Regards
Praveen Chandna
Product Owner,
Mobile +91 9873597204 | ECN: 2864