OK, thanks Jark.
Thanks,
Simon
On 08/13/2019 14:05, Jark Wu wrote:
Hi Simon,
This is a temporary workaround for 1.9 release. We will fix the behavior in
1.10, see FLINK-13461.
Regards,
Jark
On Tue, 13 Aug 2019 at 13:57, Simon Su wrote:
Hi Jark
Thanks for your reply.
It’s weird that in this case the tableEnv provides an API called
“registerCatalog”, but it does not work in some cases (like mine).
Do you think it’s feasible to unify this behavior? I think the documentation is
necessary, but a unified way to use tableEnv is
I think we might need to improve the javadoc of
tableEnv.registerTableSource/registerTableSink.
Currently, the comment says
"Registers an external TableSink with already configured field names and
field types in this TableEnvironment's catalog."
But, what catalog? The current one or the default in-memory one?
Yes, tableEnv.registerTable(_) etc. always registers in the default catalog.
To create a table in your custom catalog, you could use
tableEnv.sqlUpdate("create table ").
Thanks,
Xuefu
On Mon, Aug 12, 2019 at 6:17 PM Simon Su wrote:
Hi Xuefu
Thanks for your reply.
Actually I have tried it as you advised. I have tried to call
tableEnv.useCatalog and useDatabase. Also I have tried to use
“catalogname.databasename.tableName” in SQL. I think the root cause is that
when I call tableEnv.registerTableSource, it always uses
Hi Eduardo,
JDBCSinkFunction is a package-private class which you can make use of
via JDBCAppendTableSink. A typical statement could be
JDBCAppendTableSink.builder()  // builder() is static, so no `new` here
    . ...
    .build()
    .consumeDataStream(upstream);
Also JDBCUpse
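A fuller sketch of the append path, in case it helps (the driver, URL and
query here are only illustrative):

    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;

    JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
        .setDrivername("org.apache.derby.jdbc.EmbeddedDriver") // illustrative driver
        .setDBUrl("jdbc:derby:memory:example")                 // illustrative URL
        .setQuery("INSERT INTO books (id) VALUES (?)")         // illustrative query
        .setParameterTypes(BasicTypeInfo.INT_TYPE_INFO)
        .build();
    sink.consumeDataStream(upstream);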
Hi all,
Could someone point me to the current advised way of adding a JDBC sink?
Online I've seen one can use DataStream.writeUsingOutputFormat(); however, I see
that OutputFormatSinkFunction is deprecated.
I can also see a JDBCSinkFunction (or JDBCUpsertSinkFunction) but that is
"package private" so I
Hi Vishwas,
Replacing ',' with ' ' (space) should work.
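For example, your entry would become (same placeholder paths, just
space-separated):

env.java.opts: "-Djava.security.auth.login.config={{ flink_installed_dir }}/kafka-jaas.conf -Djava.security.krb5.conf={{ flink_installed_dir }}/krb5.conf"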
Best,
tison.
Vishwas Siravara wrote on Tue, Aug 13, 2019 at 6:50 AM:
Hi guys,
I have this entry in flink-conf.yaml file for jvm options.
env.java.opts: "-Djava.security.auth.login.config={{ flink_installed_dir
}}/kafka-jaas.conf,-Djava.security.krb5.conf={{ flink_installed_dir
}}/krb5.conf"
Is this supposed to be a comma-separated list? I get a parse exception when
th
Thanks a lot Bowen.
I've started reading these docs. Really helpful. They give a good description of
the Hive integration in Flink and how to use it.
I'll continue my development.
See you soon
Hi All!
Join our next Seattle Flink Meetup at Uber Seattle, featuring talks on
[Flink + Kappa+ @ Uber] and [Flink + Pulsar for streaming-first, unified
data processing].
- TALK #1: Moving from Lambda and Kappa Architectures to Kappa+ with Flink
at Uber
- TALK #2: When Apache Pulsar meets Apache
Hi Simon,
Thanks for reporting the problem. There are some rough edges around the catalog
API and table environments, and we are improving them post the 1.9 release.
Nevertheless, tableEnv.registerCatalog() just puts a new catalog into
Flink's CatalogManager; it doesn't change the default catalog/database a
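In other words, roughly (where myCatalog is whatever Catalog instance you
built):

    // registering only makes the catalog known to the CatalogManager ...
    tableEnv.registerCatalog("ca1", myCatalog);
    // ... you still have to switch to it explicitly
    tableEnv.useCatalog("ca1");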
Hi David,
Check out Hive related documentations:
-
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/catalog.html
-
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive_integration.html
-
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive_integra
Hello Again,
I'm looking for some examples of how to implement custom
windows/triggers/evictors using Flink 1.6.4; a simple search gives me
surprisingly little on the subject. Can you point me to some repos,
talks, or tutorials?
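For context, the bare skeleton I have so far looks roughly like this (the
class name and firing logic are made up):

    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    // Illustrative trigger: fires the window on every element.
    public class FireOnEachElementTrigger extends Trigger<Object, TimeWindow> {
        @Override
        public TriggerResult onElement(Object element, long timestamp,
                                       TimeWindow window, TriggerContext ctx) {
            return TriggerResult.FIRE;
        }
        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window,
                                              TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }
        @Override
        public TriggerResult onEventTime(long time, TimeWindow window,
                                         TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }
        @Override
        public void clear(TimeWindow window, TriggerContext ctx) { }
    }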
--
Best Regards
Yoandy Rodríguez
`transaction.timeout.ms` is a producer setting, thus you can increase
it accordingly.
Note that brokers bound the range via `transaction.max.timeout.ms`;
thus, you may need to increase this broker config, too.
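For example, a sketch assuming the 0.11 connector (the topic and timeout
value are illustrative):

    Properties producerProps = new Properties();
    // must stay within the broker's transaction.max.timeout.ms
    producerProps.setProperty("transaction.timeout.ms", "900000"); // 15 min
    FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
        "my-topic",
        new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
        producerProps,
        FlinkKafkaProducer011.Semantic.EXACTLY_ONCE);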
-Matthias
On 8/12/19 2:43 AM, Pi
Hi there,
Currently, I'm trying to write a SQL query which shall execute a time
windowed/bounded JOIN on two data streams.
Suppose I have stream1 with attributes id, ts, user and stream2 with attributes
id, ts, userName. I want to receive the natural JOIN of both streams with
events of the sa
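Something like the following sketch is what I have in mind (assuming ts is
declared as a time attribute on both streams; the bounds are illustrative):

    Table result = tableEnv.sqlQuery(
        "SELECT s1.id, s1.`user`, s2.userName " +
        "FROM stream1 s1, stream2 s2 " +
        "WHERE s1.id = s2.id " +
        "AND s1.ts BETWEEN s2.ts - INTERVAL '10' MINUTE " +
        "               AND s2.ts + INTERVAL '10' MINUTE");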
Hi Cam,
Zili is correct.
Each shared slot can host at most one instance of each different
task (JobVertex). So you will have at most 13 tasks in each slot.
As shown in
https://ci.apache.org/projects/flink/flink-docs-release-1.8/concepts/runtime.html#task-slots-and-resources
.
To specify its parall
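For example (the operator and value below are illustrative), a
per-operator parallelism overrides the job-wide default:

    stream
        .map(new MapFunction<String, String>() {
            @Override
            public String map(String value) {
                return value.toUpperCase();
            }
        })
        .setParallelism(30);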
Hello Users,
I have a question that is perhaps not best solved within Flink: It has to
do with notifying a downstream application that a Flink window has
completed.
The (simplified) scenario is this:
- We have a Flink job that consumes from Kafka, does some preprocessing,
and then has a sliding w
Hi Cam,
If you set parallelism to 60, then you will make use of all 60 slots you have,
and in your case each slot executes a chained operator containing 13 tasks.
It is not the case that one slot executes at least 60 sub-tasks.
Best,
tison.
Cam Mach wrote on Mon, Aug 12, 2019 at 7:55 PM:
Hi Zhu and Abhishek,
Thanks for your response and pointers. That's correct: the parallelism will be
the number of slots used for a pipeline, and the parallelism also determines
the number of sub-tasks for each operator. In my case, I have a parallelism of 60; it
Hi everyone,
I have a job running in production whose structure is approximately this:
stream
    .filter(inboundData -> inboundData.hasToBeFiltered())
    .keyBy("myKey")
    .process(doSomething());
I've recently decided to test the extent to which I can change a job's
structure without breakin
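One thing I'm already doing is assigning explicit operator IDs, along these
lines (the IDs are made up), so state can be matched up after a change:

    stream
        .filter(inboundData -> inboundData.hasToBeFiltered()).uid("filter-1")
        .keyBy("myKey")
        .process(doSomething()).uid("process-1");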
Hi Zhu,
Looks like it's expected. Those are the cases that happened to our
cluster.
Thanks for your response, Zhu
Cam
On Sun, Aug 11, 2019 at 10:53 PM Zhu Zhu wrote:
> Another possibility is the JM is killed externally, e.g. K8s may kill
> JM/TM if it exceeds the resource limit.
>
> Than
Hi,
Ok, I see. You can try to rewrite your logic (or maybe the record schema, by adding
some ID fields) to manually deduplicate the records after processing them
with at-least-once semantics. Such a setup is usually simpler, with slightly
better throughput and significantly better latency (end-to-en
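A sketch of such a dedup step (assuming each record carries a unique ID and
the stream is keyed by it; the Record type and names are made up):

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // drops any record whose key (the record ID) has been seen before
    public class DedupFunction extends KeyedProcessFunction<String, Record, Record> {
        private transient ValueState<Boolean> seen;

        @Override
        public void open(Configuration parameters) {
            seen = getRuntimeContext().getState(
                new ValueStateDescriptor<>("seen", Boolean.class));
        }

        @Override
        public void processElement(Record record, Context ctx,
                                   Collector<Record> out) throws Exception {
            if (seen.value() == null) {
                seen.update(true);
                out.collect(record);
            }
        }
    }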
Hi,
It would be good if you could provide the job manager and task manager log
files, so that others can analyze the problem.
Thank you~
Xintong Song
Hi Piotr,
Thanks a lot. I need exactly-once in my use case, but rather than risk
losing data, at-least-once is more acceptable when an error occurs.
Best,
Tony Wei
Hi all,
some slots are not available when the job is not running.
I took a TM dump while the job was not running and analyzed it with Eclipse Memory
Analyzer. Here are some of the results, which look useful:
org.apache.flink.streaming.connectors.kafka.internal.KafkaConsumerThread @
0x7f9442c8
Kafka 0.1
Hi Simon,
First of all, for a more thorough discussion you might want to have a look
at this thread:
https://lists.apache.org/thread.html/b450df1a7bf10187301820e529cbc223ce63f233c1af0f0c7415e62b@%3Cdev.flink.apache.org%3E
TL;DR: All objects registered with registerTable/registerTableSource are
tempo
Hi All
I want to use a custom catalog with the name “ca1” and create a
database under this catalog. When I submit the
SQL, it raises an error like:
Exception in thread "main" org.apache.flink.table.api.ValidationException:
SQL validation failed. From line 1, column 98 to li
Hi,
Yes, if it’s due to transaction timeout you will lose the data.
Whether you can fall back to at least once depends on Kafka, not on Flink,
since it's Kafka that times out those transactions, and I don't see anything
in the documentation that could override this [1]. You might try dis