I tried using a regular join:
String sql2 = "insert into dws_b2b_trade_year_index\n" +
"WITH temp AS (\n" +
"select \n" +
" ta.gmtStatistical as gmtStatistical,\n" +
" ta.paymentMethod as paymentMethod,\n" +
" tb.CORP_ID as outCorpId,\n" +
" tc.CORP_ID as inCorpId,\n" +
" sum(ta.tradeAmt) as tranAmount,\n" +
"
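The quoted statement is cut off above; as a rough, self-contained sketch of the regular-join version (source/dimension table names and join keys beyond those visible in the fragment are assumptions, not taken from the original job):

```sql
INSERT INTO dws_b2b_trade_year_index
WITH temp AS (
  SELECT
    ta.gmtStatistical AS gmtStatistical,
    ta.paymentMethod  AS paymentMethod,
    tb.CORP_ID        AS outCorpId,
    tc.CORP_ID        AS inCorpId,
    SUM(ta.tradeAmt)  AS tranAmount
  FROM trade_source AS ta        -- assumed: the Kafka trade table
  JOIN account_dim AS tb         -- assumed: the MySQL account table (payer side)
    ON ta.outAcctNo = tb.ACCT_NO -- assumed join keys
  JOIN account_dim AS tc         -- assumed: the same account table (payee side)
    ON ta.inAcctNo = tc.ACCT_NO
  GROUP BY ta.gmtStatistical, ta.paymentMethod, tb.CORP_ID, tc.CORP_ID
)
SELECT gmtStatistical, paymentMethod, outCorpId, inCorpId, tranAmount FROM temp
```

With a regular (non-lookup) join, Flink keeps both inputs in state, so a trade row that arrives before its account row can still be matched, and the aggregate updated, once the account appears.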
Use a regular join, not a lookup join.
Zhiwen Sun
On Fri, Nov 11, 2022 at 11:10 AM Jason_H wrote:
>
>
> hi everyone,
>
> I am using Flink SQL to implement a dimension-table join: the source is Kafka (trade data) and the dimension table is MySQL (accounts). The problem I hit: when a record arrives from Kafka and its account is not yet in the dimension table, I insert the account manually; the next record then matches the account info, but the accumulated output is missing the record that failed to match. For example:
>
> Kafka input:
> account  amount  count
> 100 1
hi everyone,
I am using Flink SQL to implement a dimension-table join: the source is Kafka (trade data) and the dimension table is MySQL (accounts). The problem I hit: when a record arrives from Kafka and its account is not yet in the dimension table, I insert the account manually; the next record then matches the account info, but the accumulated output is missing the records that failed to match. For example:

Kafka input:
  account  amount  count
  100 1    -> no match
  100 1    -> no match
  100 1    -> matched

Dimension table:
  account  company
           -> account row inserted later

Actual output:
  company  amount
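This is the expected behavior of a processing-time lookup join: each incoming row probes MySQL exactly once on arrival, and rows that find no account are dropped and never retried. A minimal sketch contrasting the two join styles (table and column names here are assumed for illustration, not from the original job):

```sql
-- Lookup join: probes the MySQL table once per trade row at processing time.
-- A trade arriving before its account row is discarded (inner join) and is
-- NOT re-evaluated after the account is inserted.
SELECT t.account, d.corp, t.amt
FROM trades AS t
JOIN account_dim FOR SYSTEM_TIME AS OF t.proc_time AS d
  ON t.account = d.account;

-- Regular join: both sides are kept in Flink state, so an early trade row is
-- matched retroactively (the result is updated) once the account row arrives,
-- e.g. when the account table is read as a changelog/CDC source.
SELECT t.account, d.corp, t.amt
FROM trades AS t
JOIN account_dim_cdc AS d
  ON t.account = d.account;
```

The trade-off is that a regular join holds unbounded state for both inputs, so a state TTL or a bounded key space is usually needed in production.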
Excellent news -- welcome to the new era of easier, more timely and more
feature-rich releases for everyone!
Great job! Ryan
On Thu, Nov 10, 2022 at 3:15 PM Leonard Xu wrote:
> Thanks Chesnay and Martijn for the great work! I believe the
> flink-connector-shared-utils[1] you built will help
Thanks Chesnay and Martijn for the great work! I believe the
flink-connector-shared-utils[1] you built will help Flink connector developers
a lot.
Best,
Leonard
[1] https://github.com/apache/flink-connector-shared-utils
> On Nov 10, 2022, at 9:53 PM, Martijn Visser wrote:
>
> Really happy with the first externalized connector for Flink.
Really happy with the first externalized connector for Flink. Thanks a lot
to all of you involved!
On Thu, Nov 10, 2022 at 12:51 PM Chesnay Schepler
wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink Elasticsearch Connector 3.0.0.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data
> streaming applications.
The Apache Flink community is very happy to announce the release of
Apache Flink Elasticsearch Connector 3.0.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The release is available f