Yao Zhang created HUDI-5725:
-------------------------------

             Summary: Creating Hudi table with misspelled table type in Flink 
leads to Flink cluster crash
                 Key: HUDI-5725
                 URL: https://issues.apache.org/jira/browse/HUDI-5725
             Project: Apache Hudi
          Issue Type: Bug
          Components: flink
    Affects Versions: 0.12.2, 0.11.1
         Environment: Flink 1.13.2
Hadoop 3.1.1
Hudi 0.14.0-SNAPSHOT
            Reporter: Yao Zhang
            Assignee: Yao Zhang
             Fix For: 0.14.0


Create a table with the following SQL:
{code:sql}
CREATE TABLE t1(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
)
PARTITIONED BY (`partition`)
WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///hudi/t1',
  'table.type' = 'MERGE_ON_REA
  D'
);
{code}

The value of 'table.type' contains an unexpected line break inside 'MERGE_ON_READ'.
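The failure can be reproduced outside Flink: {{Enum.valueOf}} matches the constant name character for character, so the embedded line break makes the lookup throw. A minimal sketch in plain Java (the class name {{TableTypeRepro}} is made up for illustration):

{code:java}
import org.apache.hudi.common.model.HoodieTableType;

public class TableTypeRepro {
    public static void main(String[] args) {
        // The option value as written in the DDL above, including the stray line break.
        String configured = "MERGE_ON_REA\nD";
        // Enum.valueOf requires an exact match of the constant name, so this throws:
        // java.lang.IllegalArgumentException: No enum constant
        // org.apache.hudi.common.model.HoodieTableType.MERGE_ON_REA\nD
        HoodieTableType type = HoodieTableType.valueOf(configured);
        System.out.println(type);
    }
}
{code}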

After the table is created, insert an arbitrary row of data. The Flink cluster will immediately crash with the exception below:

{code:java}
Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hudi.common.model.HoodieTableType.MERGE_ON_REA
  D
        at java.lang.Enum.valueOf(Enum.java:238) ~[?:1.8.0_121]
        at org.apache.hudi.common.model.HoodieTableType.valueOf(HoodieTableType.java:30) ~[hudi-flink1.13-bundle-0.14.0-SNAPSHOT.jar:0.14.0-SNAPSHOT]
        at org.apache.hudi.sink.StreamWriteOperatorCoordinator$TableState.<init>(StreamWriteOperatorCoordinator.java:630) ~[hudi-flink1.13-bundle-0.14.0-SNAPSHOT.jar:0.14.0-SNAPSHOT]
        at org.apache.hudi.sink.StreamWriteOperatorCoordinator$TableState.create(StreamWriteOperatorCoordinator.java:640) ~[hudi-flink1.13-bundle-0.14.0-SNAPSHOT.jar:0.14.0-SNAPSHOT]
        at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:187) ~[hudi-flink1.13-bundle-0.14.0-SNAPSHOT.jar:0.14.0-SNAPSHOT]
        at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:592) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        ... 12 more
{code}

The expected behavior is that the Flink cluster keeps running, reports a message such as 'Illegal table type', and gives the user a chance to correct the SQL statement.
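One possible way to get that behavior is to normalize and validate the option eagerly, before the operator coordinator starts, and raise a descriptive error instead of letting the raw {{Enum.valueOf}} exception escape. A hypothetical sketch, not the actual Hudi code (the helper name {{parseTableType}} is made up):

{code:java}
import java.util.Arrays;

import org.apache.hudi.common.model.HoodieTableType;

public final class TableTypeValidation {

  private TableTypeValidation() {
  }

  // Hypothetical helper: strips whitespace (including line breaks) from the
  // configured value and converts an invalid value into a readable error
  // instead of crashing the operator coordinator.
  public static HoodieTableType parseTableType(String raw) {
    String normalized = raw == null ? "" : raw.replaceAll("\\s+", "").toUpperCase();
    try {
      return HoodieTableType.valueOf(normalized);
    } catch (IllegalArgumentException e) {
      throw new IllegalArgumentException(
          "Illegal table type '" + raw + "'. Supported values: "
              + Arrays.toString(HoodieTableType.values()), e);
    }
  }
}
{code}

Whether stray whitespace should be silently stripped or rejected outright is a design choice; either way, the job would fail with a clear message at validation time rather than taking down the JobMaster.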



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
