James601232 opened a new issue #3963: URL: https://github.com/apache/hudi/issues/3963
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
- Join the mailing list to engage in conversations and get faster support at dev-subscr...@hudi.apache.org.
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

Creating a Hudi table with hudi-java-client fails: the program runs without errors, but the table is not created correctly on HDFS.

**To Reproduce**

Steps to reproduce the behavior:

1. Write the table-creation code:

```java
public static void main(String[] args) throws Exception {
    String tablePath = "/home/work/hudi_catalog1";
    String tableName = "hudi_clyang_table";
    // test data generator
    HoodieExampleDataGenerator<HoodieAvroPayload> dataGen = new HoodieExampleDataGenerator<>();

    // init Hadoop conf
    Configuration hadoopConf = new Configuration();
    // hadoopConf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
    hadoopConf.addResource(new Path("core-site.xml"));
    hadoopConf.addResource(new Path("hdfs-site.xml"));
    System.setProperty("HADOOP_USER_NAME", "work");

    // init properties
    Properties properties = new Properties();
    properties.setProperty(HoodieWriteConfig.TABLE_NAME, tableName);

    // initialize the table
    Path path = new Path(tablePath);
    FileSystem fs = FileSystem.newInstance(hadoopConf);
    HoodieTableMetaClient hoodieTableMetaClient = null;
    if (!fs.exists(path)) {
        hoodieTableMetaClient = HoodieTableMetaClient.initTableAndGetMetaClient(hadoopConf, tablePath, properties);
    }

    // build the write client config
    HoodieWriteConfig hudiWriteConf = HoodieWriteConfig.newBuilder()
        // record schema
        .withSchema(HoodieExampleDataGenerator.TRIP_EXAMPLE_SCHEMA)
        // insert/upsert parallelism
        .withParallelism(2, 2)
        // delete parallelism
        .withDeleteParallelism(2)
        // index type: in-memory
        .withIndexConfig(HoodieIndexConfig.newBuilder().withIndexType(HoodieIndex.IndexType.INMEMORY).build())
        // compaction / commit archival
        .withCompactionConfig(HoodieCompactionConfig.newBuilder().archiveCommitsWith(20, 30).build())
        .withPath(tablePath)
        .forTable(tableName)
        .build();

    // get the Hudi write client
    HoodieJavaWriteClient<HoodieAvroPayload> client =
        new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(hadoopConf), hudiWriteConf);

    properties = HoodieTableMetaClient.withPropertyBuilder()
        .setTableName(tableName)
        .setTableType(HoodieTableType.COPY_ON_WRITE)
        .setPayloadClass(HoodieAvroPayload.class)
        .fromProperties(properties)
        .build();
    HoodieTableMetaClient hoodieTableMetaClient1 =
        HoodieTableMetaClient.initTableAndGetMetaClient(hadoopConf, tablePath, properties);
    HoodieTable table = HoodieJavaTable.create(hudiWriteConf, new HoodieJavaEngineContext(hadoopConf), hoodieTableMetaClient1);
    client.close();
}
```

2. Execution succeeds with no errors. Log output:

```
0    [main] WARN  org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
707  [main] INFO  org.apache.hudi.client.embedded.EmbeddedTimelineService - Starting Timeline service !!
707  [main] WARN  org.apache.hudi.client.embedded.EmbeddedTimelineService - Unable to find driver bind address from spark config
717  [main] INFO  org.apache.hudi.common.table.view.FileSystemViewManager - Creating View Manager with storage type :MEMORY
718  [main] INFO  org.apache.hudi.common.table.view.FileSystemViewManager - Creating in-memory based Table View
728  [main] INFO  org.apache.hudi.org.apache.jetty.util.log - Logging initialized @1483ms to org.apache.hudi.org.apache.jetty.util.log.Slf4jLog
943  [main] INFO  io.javalin.Javalin - (Javalin ASCII banner) https://javalin.io/documentation
944  [main] INFO  io.javalin.Javalin - Starting Javalin ...
1072 [main] INFO  io.javalin.Javalin - Listening on http://localhost:57856/
1072 [main] INFO  io.javalin.Javalin - Javalin started in 131ms \o/
1072 [main] INFO  org.apache.hudi.timeline.service.TimelineService - Starting Timeline server on port :57856
1073 [main] INFO  org.apache.hudi.client.embedded.EmbeddedTimelineService - Started embedded timeline server at 0.0.0.0:57856
1079 [main] INFO  org.apache.hudi.common.table.HoodieTableMetaClient - Initializing /home/work/hudi_catalog1 as hoodie table /home/work/hudi_catalog1
2109 [main] INFO  org.apache.hudi.common.table.HoodieTableMetaClient - Loading HoodieTableMetaClient from /home/work/hudi_catalog1
2161 [main] INFO  org.apache.hudi.common.table.HoodieTableConfig - Loading table properties from /home/work/hudi_catalog1/.hoodie/hoodie.properties
2327 [main] INFO  org.apache.hudi.common.table.HoodieTableMetaClient - Finished Loading Table of type COPY_ON_WRITE(version=1, baseFileFormat=PARQUET) from /home/work/hudi_catalog1
2327 [main] INFO  org.apache.hudi.common.table.HoodieTableMetaClient - Finished initializing Table of type COPY_ON_WRITE from /home/work/hudi_catalog1
2331 [main] INFO  org.apache.hudi.common.table.view.FileSystemViewManager - Creating View Manager with storage type :REMOTE_FIRST
2331 [main] INFO  org.apache.hudi.common.table.view.FileSystemViewManager - Creating remote first table view
2332 [main] INFO  org.apache.hudi.client.AbstractHoodieClient - Stopping Timeline service !!
2332 [main] INFO  org.apache.hudi.client.embedded.EmbeddedTimelineService - Closing Timeline server
2332 [main] INFO  org.apache.hudi.timeline.service.TimelineService - Closing Timeline Service
2332 [main] INFO  io.javalin.Javalin - Stopping Javalin ...
2342 [main] INFO  io.javalin.Javalin - Javalin has stopped
2342 [main] INFO  org.apache.hudi.timeline.service.TimelineService - Closed Timeline Service
2342 [main] INFO  org.apache.hudi.client.embedded.EmbeddedTimelineService - Closed Timeline server
```

3.
Check the result in HDFS:

![image](https://user-images.githubusercontent.com/19258506/141141739-e47e5b58-99a5-489b-b372-885d600aa9a0.png)

**Expected behavior**

The Hudi table should be created correctly.

**Environment Description**

* JDK version : 1.8
* Hudi version : 0.9.0
* Spark version : 3.0.2
* Hive version : 2.3.6
* Hadoop version : 2.7
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no
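For comparison, a minimal sketch of how the java client is typically driven after table initialization, adapted from the `HoodieJavaWriteClientExample` shipped with Hudi. Note that `initTableAndGetMetaClient` only lays down `.hoodie` metadata; data files under the table path only appear after records are actually written. The method name `writeSample` and the config/conf parameters are assumptions for illustration; the client API calls (`startCommit`, `insert`, `generateInserts`) are taken from that bundled example.

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hudi.client.HoodieJavaWriteClient;
import org.apache.hudi.client.common.HoodieJavaEngineContext;
import org.apache.hudi.common.model.HoodieAvroPayload;
import org.apache.hudi.common.model.HoodieRecord;
import org.apache.hudi.config.HoodieWriteConfig;
import org.apache.hudi.examples.common.HoodieExampleDataGenerator;

public class WriteSketch {
  // Sketch: write a batch of generated records so data files appear
  // under the table path (not only the .hoodie metadata directory).
  public static void writeSample(HoodieWriteConfig cfg, Configuration hadoopConf) {
    HoodieExampleDataGenerator<HoodieAvroPayload> dataGen = new HoodieExampleDataGenerator<>();
    HoodieJavaWriteClient<HoodieAvroPayload> client =
        new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(hadoopConf), cfg);
    try {
      // open a new commit on the timeline and get its instant time
      String instantTime = client.startCommit();
      // generate sample trip records tagged with that instant time
      List<HoodieRecord<HoodieAvroPayload>> records = dataGen.generateInserts(instantTime, 10);
      // insert the batch; base files are written under the table path
      client.insert(records, instantTime);
    } finally {
      client.close();
    }
  }
}
```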