Re: Are metadata columns required to be declared in the table's schema?

2023-03-26 Thread Jie Han
Thank you for your response.
Actually, I noticed that the doc says 'However, declaring a metadata column in a
table’s schema is optional’.
So does it mean that we don’t need to declare a metadata column when we don’t
query it, rather than that we can query it without declaring it?

Best,
Jay


Are metadata columns required to be declared in the table's schema?

2023-03-25 Thread Jie Han
Hi community, I want to query a metadata column from my table t. Do I need to
declare it explicitly in the table schema?

In Spark, metadata columns are hidden columns, which means we don’t need to
declare them in the table DDL; we only reference them explicitly in the query.
For instance: select *, _metadata from t.
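
For reference, a minimal sketch of how this looks in Flink (the table name,
topic, and connector options are illustrative assumptions): unlike Spark's
hidden _metadata column, a Flink metadata column is only queryable once it is
declared in the schema with the METADATA keyword, and "optional" in the docs
means you declare only the metadata columns you actually need.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MetadataColumnSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Declaring `event_time` is what exposes the Kafka record timestamp;
        // without the declaration the column cannot be referenced in queries.
        tEnv.executeSql(
                "CREATE TABLE t ("
                        + "  f0 STRING,"
                        + "  event_time TIMESTAMP_LTZ(3) METADATA FROM 'timestamp'"
                        + ") WITH ("
                        + "  'connector' = 'kafka',"
                        + "  'topic' = 'events',"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',"
                        + "  'format' = 'json')");

        // This works only because `event_time` was declared above.
        tEnv.executeSql("SELECT f0, event_time FROM t").print();
    }
}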


Re: Example of dynamic table

2023-03-07 Thread Jie Han
I’ve got the concept figured out, but I don’t know how to apply it.

For example, I have 2 Kafka tables `a` and `b`, and want to execute a
continuous query like ’select a.f1, b.f1 from a left join b on a.f0 = b.f0’.
How do I write the SQL to tell Flink that it’s a continuous query?
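
(A minimal sketch, assuming `a` and `b` are already declared as Kafka-backed
tables with columns f0 and f1: nothing special goes into the SQL itself; in
streaming mode, any query over dynamic tables runs as a continuous query.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ContinuousJoinSketch {
    public static void main(String[] args) {
        // Streaming mode is what makes the query continuous;
        // the SQL text is identical to the batch version.
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // DDL for `a` and `b` omitted; assume both are Kafka-backed tables.
        // The left join then runs as an unbounded, continuously updating query.
        tEnv.executeSql(
                "SELECT a.f1, b.f1 FROM a LEFT JOIN b ON a.f0 = b.f0").print();
    }
}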

> On Mar 8, 2023, at 09:24, yuxia wrote:
> 
> What do you mean by "try the feature of dynamic table"? Do you want to know the 
> concept of dynamic table [1], or User-defined Sources & Sinks [2] with dynamic 
> tables?
> [1]: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/dynamic_tables/
> [2]: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sourcessinks/
> 
> Best regards,
> Yuxia
> 
> - Original Message -
> From: "Jie Han" 
> To: "User" 
> Sent: Wednesday, March 8, 2023, 7:54:06 AM
> 主题: Example of dynamic table
> 
> Hello community!
> I want to try the dynamic table feature but cannot find examples in the 
> official doc.
> Is this part missing?



Example of dynamic table

2023-03-07 Thread Jie Han
Hello community!
I want to try the dynamic table feature but cannot find examples in the
official doc.
Is this part missing?

Re: jobmaster's fatal error will kill the session cluster

2022-10-14 Thread Jie Han
(SchedulerBase.java:624) ~[flink-dist-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010) ~[flink-dist-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927) ~[flink-dist-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) ~[flink-dist-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612) ~[flink-rpc-akka_db70a2fa-991e-4392-9447-5d060aeb156e.jar:1.15.0]
    at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68) ~[flink-rpc-akka_db70a2fa-991e-4392-9447-5d060aeb156e.jar:1.15.0]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611) ~[flink-rpc-akka_db70a2fa-991e-4392-9447-5d060aeb156e.jar:1.15.0]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_db70a2fa-991e-4392-9447-5d060aeb156e.jar:1.15.0]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-scala_2.12-1.15.0.jar:1.15.0]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-scala_2.12-1.15.0.jar:1.15.0]
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[?:?]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-scala_2.12-1.15.0.jar:1.15.0]

I’m not sure whether it’s proper to kill the whole session cluster just because
a job used a wrong configuration (a relative path was set).
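
As Matthias explains below, a failing job is normally handled by the configured
restart strategy rather than by failing the cluster. A minimal sketch of
setting one programmatically (the attempt count and delay are illustrative):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategySketch {
    public static void main(String[] args) {
        // Restart the job up to 3 times, 10 s apart, before failing it;
        // a job-level failure then stays contained within that job.
        Configuration conf = new Configuration();
        conf.setString("restart-strategy", "fixed-delay");
        conf.setString("restart-strategy.fixed-delay.attempts", "3");
        conf.setString("restart-strategy.fixed-delay.delay", "10 s");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build and execute the job with `env` as usual.
    }
}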


> On Oct 14, 2022, at 19:53, Matthias Pohl via user wrote:
> 
> Hi Jie Han,
> welcome to the community. Just a little side note: these kinds of questions 
> are better asked on the user mailing list. The dev mailing list is used for 
> discussing feature development or project-related topics. 
> See [1] for further details.
> 
> About your question: The stacktrace you're providing indicates that something 
> went wrong while initiating the job execution. Unfortunately, the actual 
> reason is not clear because that's not included in your stacktrace (it should 
> be listed as a cause for the JobMasterException in your logs). You're right 
> in assuming that Flink is able to handle certain kinds of user code and 
> infrastructure-related errors by restarting the job. But there might be other 
> Flink cluster internal errors that could cause a Flink cluster shutdown. It's 
> hard to tell from the logs you provided. Usually, it's a good habit to share 
> a reasonable amount of logs to make investigating the issue easier right away.
> 
> Let's move the discussion into the user mailing list in case you have further 
> questions.
> 
> Best,
> Matthias
> 
> [1] https://flink.apache.org/community.html#mailing-lists
> On Fri, Oct 14, 2022 at 10:13 AM Jie Han <tunyu...@gmail.com> wrote:
> Hi guys, I’m new to Apache Flink. It’s exciting to join the community!
> 
> When I tried out Flink 1.15.0, I ran into some confusing problems. Here is the 
> streamlined log:
> 
> org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Could not start RpcEndpoint jobmanager_2.
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:617) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a.jar:1.15.0]
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a.jar:1.15.0]
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a.jar:1.15.0]
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a.jar:1.15.0]
>     at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a.jar:1.15.0]
>     at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a.jar:1.15.0]
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) ~[flink-rpc-akka_65043be6-9dc5-4303-a760-61bd044fb53a

Re: Flink SQL: How to define an `ANY` subtype in Constructed Data Types (MAP, ARRAY...)

2022-02-23 Thread Jie Han
There is no built-in LogicalType for ’ANY’; it’s an invalid token.
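
A possible direction, sketched under the assumption that accepting an
arbitrarily typed argument (rather than an ANY-typed MAP value) is acceptable:
Flink's type extraction supports input groups, so a whole parameter can take
any data type via InputGroup.ANY, while the key/value types of a MAP hint
themselves must stay concrete.

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.InputGroup;
import org.apache.flink.table.functions.ScalarFunction;

// Sketch: no ANY token exists inside constructed types, but an argument
// as a whole can be declared to accept any data type.
public class AnyArgFunction extends ScalarFunction {
    public String eval(@DataTypeHint(inputGroup = InputGroup.ANY) Object any) {
        return String.valueOf(any);
    }
}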

> On Feb 23, 2022, at 10:29 PM, zhouhaifengmath wrote:
> 
> 
> When I define UDF parameters like:
> public @DataTypeHint("Row") Row eval(@DataTypeHint("MAP") Map mapData)
> 
> It gives the error:
> Please check for implementation mistakes and/or provide a corresponding hint.
>     at org.apache.flink.table.types.extraction.ExtractionUtils.extractionError(ExtractionUtils.java:362)
>     at org.apache.flink.table.types.extraction.TypeInferenceExtractor.extractTypeInference(TypeInferenceExtractor.java:150)
>     at org.apache.flink.table.types.extraction.TypeInferenceExtractor.forScalarFunction(TypeInferenceExtractor.java:83)
>     at org.apache.flink.table.functions.ScalarFunction.getTypeInference(ScalarFunction.java:143)
>     at org.apache.flink.table.planner.catalog.FunctionCatalogOperatorTable.convertToBridgingSqlFunction(FunctionCatalogOperatorTable.java:160)
>     ... 20 more
> Caused by: org.apache.flink.table.api.ValidationException: Error in extracting a signature to output mapping.
>     at org.apache.flink.table.types.extraction.ExtractionUtils.extractionError(ExtractionUtils.java:362)
>     at org.apache.flink.table.types.extraction.FunctionMappingExtractor.extractOutputMapping(FunctionMappingExtractor.java:117)
>     at org.apache.flink.table.types.extraction.TypeInferenceExtractor.extractTypeInferenceOrError(TypeInferenceExtractor.java:161)
>     at org.apache.flink.table.types.extraction.TypeInferenceExtractor.extractTypeInference(TypeInferenceExtractor.java:148)
>     ... 23 more
> Caused by: org.apache.flink.table.api.ValidationException: Unable to extract a type inference from method:
> public org.apache.flink.types.Row com.netease.nie.sql.udfs.P1P2.eval(java.util.Map)
>     at org.apache.flink.table.types.extraction.ExtractionUtils.extractionError(ExtractionUtils.java:362)
>     at org.apache.flink.table.types.extraction.FunctionMappingExtractor.extractResultMappings(FunctionMappingExtractor.java:183)
>     at org.apache.flink.table.types.extraction.FunctionMappingExtractor.extractOutputMapping(FunctionMappingExtractor.java:114)
>     ... 25 more
> Caused by: org.apache.flink.table.api.TableException: User-defined types are not supported yet.
>     at org.apache.flink.table.catalog.DataTypeFactoryImpl.resolveType(DataTypeFactoryImpl.java:189)
>     at org.apache.flink.table.catalog.DataTypeFactoryImpl.access$100(DataTypeFactoryImpl.java:50)
>     at org.apache.flink.table.catalog.DataTypeFactoryImpl$LogicalTypeResolver.defaultMethod(DataTypeFactoryImpl.java:178)
>     at org.apache.flink.table.catalog.DataTypeFactoryImpl$LogicalTypeResolver.defaultMethod(DataTypeFactoryImpl.java:171)
>     at org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:202)
>     at org.apache.flink.table.types.logical.UnresolvedUserDefinedType.accept(UnresolvedUserDefinedType.java:104)
>     at org.apache.flink.table.types.logical.utils.LogicalTypeDuplicator.visit(LogicalTypeDuplicator.java:63)
>     at org.apache.flink.table.types.logical.utils.LogicalTypeDuplicator.visit(LogicalTypeDuplicator.java:44)
>     at org.apache.flink.table.types.logical.MapType.accept(MapType.java:115)
>     at org.apache.flink.table.catalog.DataTypeFactoryImpl.createDataType(DataTypeFactoryImpl.java:80)
>     at org.apache.flink.table.types.extraction.DataTypeTemplate.extractDataType(DataTypeTemplate.java:297)
>     at org.apache.flink.table.types.extraction.DataTypeTemplate.fromAnnotation(DataTypeTemplate.java:112)
>     at org.apache.flink.table.types.extraction.DataTypeExtractor.extractFromMethodParameter(DataTypeExtractor.java:145)
>     at org.apache.flink.table.types.extraction.FunctionMappingExtractor.extractDataTypeArgument(FunctionMappingExtractor.java:409)
>     at org.apache.flink.table.types.extraction.FunctionMappingExtractor.lambda$null$10(FunctionMappingExtractor.java:385)
>     at java.util.Optional.orElseGet(Optional.java:267)
>     at org.apache.flink.table.types.extraction.FunctionMappingExtractor.lambda$extractArgumentTemplates$11(FunctionMappingExtractor.java:383)
>     at java.util.stream.IntPipeline$4$1.accept(IntPipeline.java:250)
>     at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>     at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693)
>     at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>     at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>     at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>     at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at java.util.stream.ReferencePipeline.collec