Do metadata columns need to be declared in the table's schema?
Hi community,

I want to query a metadata column from my table t. Do I need to declare it explicitly in the table schema? In Spark, metadata columns are hidden columns, which means we don't need to declare them in the table DDL; we only reference them explicitly in our queries, for instance: select *, _metadata from t.
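For reference, Flink SQL handles this differently from Spark: connector metadata is not implicit, and a metadata column has to be declared in the CREATE TABLE DDL with the METADATA keyword before it can be referenced in a query. A minimal sketch, assuming a Kafka-backed table (the table name, topic, and format here are placeholders):

```sql
-- Declare the connector metadata explicitly in the DDL.
CREATE TABLE t (
  id       BIGINT,
  payload  STRING,
  -- Exposes the Kafka record timestamp as a queryable column.
  event_ts TIMESTAMP_LTZ(3) METADATA FROM 'timestamp'
) WITH (
  'connector' = 'kafka',
  'topic'     = 'my-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format'    = 'json'
);

-- Once declared, the metadata column is referenced like any other column:
SELECT id, payload, event_ts FROM t;
```

Which metadata keys are available (and their types) depends on the connector; the 'timestamp' key above is the one exposed by the Kafka connector.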
Re: Table API function and expression vs SQL
Hi,

Please also keep in mind that restoring existing Table API jobs from savepoints when upgrading to a newer minor version of Flink (e.g. 1.16 -> 1.17) is not supported, as the topology might change between these versions due to optimizer changes. See here for more information: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/overview/#stateful-upgrades-and-evolution

Regards,
Mate

Hang Ruan wrote (on Sat, Mar 25, 2023, 13:38):

> Hi,
>
> I think the SQL job is better. Flink SQL jobs can be easily shared with
> others for debugging, and SQL is more suitable for unified stream and
> batch processing. For the small set of jobs that cannot be expressed in
> SQL, we choose the DataStream API.
>
> Best,
> Hang
>
> ravi_suryavanshi.yahoo.com via user wrote on Fri, Mar 24, 2023, 17:25:
>
>> Hello Team,
>> I need your advice on which method is recommended, given that I don't
>> want to change my query code when Flink is updated/upgraded to a higher
>> version.
>>
>> I am seeking advice on whether to write queries in Java code (Table API
>> functions and Expressions) or in pure SQL.
>>
>> I am assuming that SQL will not be affected by an upgrade to a higher
>> version.
>>
>> Thanks and Regards,
>> Ravi
Re: Table API function and expression vs SQL
Hi,

I think the SQL job is better. Flink SQL jobs can be easily shared with others for debugging, and SQL is more suitable for unified stream and batch processing. For the small set of jobs that cannot be expressed in SQL, we choose the DataStream API.

Best,
Hang

ravi_suryavanshi.yahoo.com via user wrote on Fri, Mar 24, 2023, 17:25:

> Hello Team,
> I need your advice on which method is recommended, given that I don't want
> to change my query code when Flink is updated/upgraded to a higher version.
>
> I am seeking advice on whether to write queries in Java code (Table API
> functions and Expressions) or in pure SQL.
>
> I am assuming that SQL will not be affected by an upgrade to a higher
> version.
>
> Thanks and Regards,
> Ravi
Re: Flink 1.17 upgrade issue when using azure storage account for checkpoints/savepoints
On Sat, Mar 25, 2023 at 02:01:24PM +0530, Jessy Ping wrote:
> Root cause: Caused by: java.util.concurrent.CompletionException:
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not found

We hit a similar error with Google Cloud Storage; there is a workaround in this Slack thread: https://apache-flink.slack.com/archives/C03G7LJTS2G/p1679320815257449

--
ChangZhuo Chen (陳昌倬) czchen@{czchen,debian}.org
http://czchen.info/
Key fingerprint = BA04 346D C2E1 FE63 C790 8793 CC65 B0CD EC27 5D5B
Flink 1.17 upgrade issue when using azure storage account for checkpoints/savepoints
Hi Team,

The application failed to start after upgrading from Flink 1.16.1 to 1.17.0, both on Kubernetes and in Docker. I didn't make any changes to the Flink configuration related to savepoints and checkpoints.

Root cause:

Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not found

Please find the complete error trace in the attached error.json. Are there any breaking changes in Flink 1.17.0 in terms of the Azure file system?

Thank you,
Jessy
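One thing worth checking while this is investigated: Flink loads its filesystem implementations (including the Azure one) through the plugins mechanism with an isolated classloader, and a ClassNotFoundException for a Hadoop class at startup is a common symptom of the filesystem jar not being picked up as a plugin after an upgrade. A sketch of the standard setup from the Flink docs, run from the Flink distribution directory (the jar version is an assumption and must match your distribution):

```shell
# Load the Azure filesystem as a plugin rather than from the main classpath.
mkdir -p ./plugins/azure-fs-hadoop
cp ./opt/flink-azure-fs-hadoop-1.17.0.jar ./plugins/azure-fs-hadoop/
```

In a container image this would typically be done in the Dockerfile; the same layout applies for the other shaded filesystem jars shipped in ./opt.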
Re: [ANNOUNCE] Apache Flink 1.17.0 released
Thanks for the great work! Congrats all!

Best,
Hang

Panagiotis Garefalakis wrote on Sat, Mar 25, 2023, 03:22:

> Congrats all! Well done!
>
> Cheers,
> Panagiotis
>
> On Fri, Mar 24, 2023 at 2:46 AM Qingsheng Ren wrote:
>
>> I'd like to say thank you to all contributors of Flink 1.17. Your support
>> and great work together made this giant step forward!
>>
>> Also, like Matthias mentioned, feel free to leave us any suggestions and
>> let's improve the releasing procedure together.
>>
>> Cheers,
>> Qingsheng
>>
>> On Fri, Mar 24, 2023 at 5:00 PM Etienne Chauchot wrote:
>>
>>> Congrats to all the people involved!
>>>
>>> Best,
>>>
>>> Etienne
>>>
>>> On 23/03/2023 at 10:19, Leonard Xu wrote:
>>>> The Apache Flink community is very happy to announce the release of
>>>> Apache Flink 1.17.0, which is the first release for the Apache Flink
>>>> 1.17 series.
>>>>
>>>> Apache Flink® is an open-source unified stream and batch data
>>>> processing framework for distributed, high-performing, always-available,
>>>> and accurate data applications.
>>>>
>>>> The release is available for download at:
>>>> https://flink.apache.org/downloads.html
>>>>
>>>> Please check out the release blog post for an overview of the
>>>> improvements in this release:
>>>> https://flink.apache.org/2023/03/23/announcing-the-release-of-apache-flink-1.17/
>>>>
>>>> The full release notes are available in Jira:
>>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351585
>>>>
>>>> We would like to thank all contributors of the Apache Flink community
>>>> who made this release possible!
>>>>
>>>> Best regards,
>>>> Qingsheng, Martijn, Matthias and Leonard