This is an automated email from the ASF dual-hosted git repository.
corgy pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git
The following commit(s) were added to refs/heads/dev by this push:
new 27c966daa4 [Feature][Connector-V2] Support HDFS ViewFs Schema (#10117)
27c966daa4 is described below
commit 27c966daa47dae68cc4919b9aa428618e787f59c
Author: xiaochen <[email protected]>
AuthorDate: Wed Nov 26 21:40:52 2025 +0800
[Feature][Connector-V2] Support HDFS ViewFs Schema (#10117)
---
docs/en/connector-v2/sink/HdfsFile.md | 36 +++-
docs/zh/connector-v2/sink/HdfsFile.md | 37 ++++-
.../seatunnel/file/config/HadoopConf.java | 10 +-
.../connector-file-hadoop-e2e/pom.xml | 55 ++++++
.../e2e/connector/file/hdfs/HdfsFileIT.java | 131 +++++++++++++++
.../e2e/connector/file/hdfs/HdfsFileViewFsIT.java | 185 +++++++++++++++++++++
.../src/test/resources/fake_to_hdfs_normal.conf | 56 +++++++
.../src/test/resources/fake_to_hdfs_viewfs.conf | 77 +++++++++
.../src/test/resources/hdfs_normal_to_assert.conf | 83 +++++++++
.../src/test/resources/hdfs_viewfs_to_assert.conf | 114 +++++++++++++
.../test/resources/viewfs/cluster1/core-site.xml | 29 ++++
.../test/resources/viewfs/cluster1/hdfs-site.xml | 39 +++++
.../test/resources/viewfs/cluster2/core-site.xml | 29 ++++
.../test/resources/viewfs/cluster2/hdfs-site.xml | 41 +++++
.../src/test/resources/viewfs/core-site.xml | 48 ++++++
seatunnel-e2e/seatunnel-connector-v2-e2e/pom.xml | 3 +-
16 files changed, 968 insertions(+), 5 deletions(-)
diff --git a/docs/en/connector-v2/sink/HdfsFile.md b/docs/en/connector-v2/sink/HdfsFile.md
index 5031f13594..073a56e345 100644
--- a/docs/en/connector-v2/sink/HdfsFile.md
+++ b/docs/en/connector-v2/sink/HdfsFile.md
@@ -50,7 +50,7 @@ Output data to hdfs file
| Name | Type | Required | Default | Description [...]
|---------------------------------------|---------|----------|--------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [...]
-| fs.defaultFS | string | yes | - | The hadoop cluster address that start with `hdfs://`, for example: `hdfs://hadoopcluster` [...]
+| fs.defaultFS | string | yes | - | Hadoop cluster address. Supports the following formats:<br/>- Standard HDFS: `hdfs://hadoopcluster` or `hdfs://namenode:9000`<br/>- ViewFS (Federated HDFS): `viewfs://mycluster`<br/>See ViewFS configuration example below. [...]
| path | string | yes | - | The target dir path is required. [...]
| tmp_path | string | yes | /tmp/seatunnel | The result file will write to a tmp path first and then use `mv` to submit tmp dir to target dir. Need a hdfs path. [...]
| hdfs_site_path | string | no | - | The path of `hdfs-site.xml`, used to load ha configuration of namenodes [...]
@@ -240,6 +240,40 @@ HdfsFile {
}
```
+### ViewFS (Federated HDFS) Configuration Example
+
+ViewFS allows you to unify multiple HDFS clusters or namespaces into a single logical namespace. This is very useful for HDFS Federation scenarios.
+
+```hocon
+HdfsFile {
+ fs.defaultFS = "viewfs://mycluster"
+ path = "/data/output"
+ file_format_type = "parquet"
+ hdfs_site_path = "/path/to/core-site.xml"
+ data_save_mode = "DROP_DATA"
+}
+```
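+
+Note that `hdfs_site_path` in this example points at a `core-site.xml` rather than an `hdfs-site.xml`: the option loads the referenced file into the job's Hadoop configuration, which is how the ViewFS mount table below reaches the connector (the e2e configs in this patch use it the same way).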
+
+Configure the mount table in `core-site.xml`:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./data</name>
+ <value>hdfs://namenode1:9000/data</value>
+ </property>
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./logs</name>
+ <value>hdfs://namenode2:9000/logs</value>
+ </property>
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./tmp</name>
+ <value>hdfs://namenode3:9000/tmp</value>
+ </property>
+</configuration>
+```
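+
+With the mount table above, a client path is resolved by longest-prefix match against the mount links: for example, a write to `/data/output` through `viewfs://mycluster` lands on `hdfs://namenode1:9000/data/output`. A minimal sketch of this resolution using the standard Hadoop client API (illustration only, not part of this patch; assumes `hadoop-client` on the classpath and the `core-site.xml` above as a classpath resource):
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class ViewFsResolutionSketch {
+    public static void main(String[] args) throws Exception {
+        Configuration conf = new Configuration(); // picks up core-site.xml from the classpath
+        conf.set("fs.defaultFS", "viewfs://mycluster");
+        FileSystem fs = FileSystem.get(conf); // returns a ViewFileSystem instance
+        // Resolved to hdfs://namenode1:9000/data/output via the /data mount link
+        fs.mkdirs(new Path("/data/output"));
+    }
+}
+```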
+
## Changelog
<ChangeLog />
\ No newline at end of file
diff --git a/docs/zh/connector-v2/sink/HdfsFile.md b/docs/zh/connector-v2/sink/HdfsFile.md
index 0e61ea9764..c1f7c2eb1c 100644
--- a/docs/zh/connector-v2/sink/HdfsFile.md
+++ b/docs/zh/connector-v2/sink/HdfsFile.md
@@ -48,7 +48,7 @@ import ChangeLog from '../changelog/connector-file-hadoop.md';
| Name | Type | Required | Default | Description |
|----------------------------------|---------|------|--------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| fs.defaultFS | string | yes | - | Hadoop cluster address starting with `hdfs://`, for example: `hdfs://hadoopcluster` |
+| fs.defaultFS | string | yes | - | Hadoop cluster address. Supports the following formats:<br/>- Standard HDFS: `hdfs://hadoopcluster` or `hdfs://namenode:9000`<br/>- ViewFS (Federated HDFS): `viewfs://mycluster`<br/>See the ViewFS configuration example below. |
| path | string | yes | - | The target directory path is required. |
| tmp_path | string | yes | /tmp/seatunnel | The result file is written to a tmp path first and then committed to the target dir with `mv`. An HDFS path is required. |
| hdfs_site_path | string | no | - | The path of `hdfs-site.xml`, used to load the HA configuration of NameNodes. |
@@ -235,6 +235,41 @@ HdfsFile {
}
```
+### ViewFS (Federated HDFS) Configuration Example
+
+ViewFS allows you to unify multiple HDFS clusters or namespaces into a single logical namespace. This is very useful for HDFS Federation scenarios.
+
+```hocon
+HdfsFile {
+ fs.defaultFS = "viewfs://mycluster"
+ path = "/data/output"
+ file_format_type = "parquet"
+ hdfs_site_path = "/path/to/core-site.xml"
+ data_save_mode = "DROP_DATA"
+}
+```
+
+Configure the mount table in `core-site.xml`:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+ <!-- ViewFS mount table for mycluster -->
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./data</name>
+ <value>hdfs://namenode1:9000/data</value>
+ </property>
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./logs</name>
+ <value>hdfs://namenode2:9000/logs</value>
+ </property>
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./tmp</name>
+ <value>hdfs://namenode3:9000/tmp</value>
+ </property>
+</configuration>
+```
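+
+With this mount table, a path such as `/data/output` accessed through `viewfs://mycluster` resolves to `hdfs://namenode1:9000/data/output` (see the resolution sketch in the English doc above).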
+
## Changelog
<ChangeLog />
\ No newline at end of file
diff --git a/seatunnel-connectors-v2/connector-file/connector-file-base/src/main/java/org/apache/seatunnel/connectors/seatunnel/file/config/HadoopConf.java b/seatunnel-connectors-v2/connector-file/connector-file-base/src/main/java/org/apache/seatunnel/connectors/seatunnel/file/config/HadoopConf.java
index 266c735b66..77333eea38 100644
--- a/seatunnel-connectors-v2/connector-file/connector-file-base/src/main/java/org/apache/seatunnel/connectors/seatunnel/file/config/HadoopConf.java
+++ b/seatunnel-connectors-v2/connector-file/connector-file-base/src/main/java/org/apache/seatunnel/connectors/seatunnel/file/config/HadoopConf.java
@@ -36,7 +36,9 @@ import static org.apache.parquet.avro.AvroWriteSupport.WRITE_OLD_LIST_STRUCTURE;
@Data
public class HadoopConf implements Serializable {
private static final String HDFS_IMPL = "org.apache.hadoop.hdfs.DistributedFileSystem";
+ private static final String VIEWFS_IMPL = "org.apache.hadoop.fs.viewfs.ViewFileSystem";
private static final String SCHEMA = "hdfs";
+ private static final String VIEWFS_SCHEMA = "viewfs";
protected Map<String, String> extraOptions = new HashMap<>();
protected String hdfsNameKey;
protected String hdfsSitePath;
@@ -52,11 +54,15 @@ public class HadoopConf implements Serializable {
}
public String getFsHdfsImpl() {
- return HDFS_IMPL;
+ return isViewFs() ? VIEWFS_IMPL : HDFS_IMPL;
}
public String getSchema() {
- return SCHEMA;
+ return isViewFs() ? VIEWFS_SCHEMA : SCHEMA;
+ }
+
+ protected boolean isViewFs() {
+ return hdfsNameKey != null && hdfsNameKey.startsWith("viewfs://");
}
public void setExtraOptionsForConfiguration(Configuration configuration) {
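For reference, the decision added by `isViewFs()` can be shown standalone. A minimal sketch of the same scheme switch (illustration only, not code from the patch; constant values match the ones above):

```java
public class SchemaSwitchSketch {

    static final String HDFS_IMPL = "org.apache.hadoop.hdfs.DistributedFileSystem";
    static final String VIEWFS_IMPL = "org.apache.hadoop.fs.viewfs.ViewFileSystem";

    // Mirrors HadoopConf.isViewFs(): the FileSystem implementation is chosen
    // from the scheme of the configured fs.defaultFS value.
    static String fsImplFor(String hdfsNameKey) {
        boolean viewFs = hdfsNameKey != null && hdfsNameKey.startsWith("viewfs://");
        return viewFs ? VIEWFS_IMPL : HDFS_IMPL;
    }

    public static void main(String[] args) {
        System.out.println(fsImplFor("hdfs://hadoopcluster")); // DistributedFileSystem
        System.out.println(fsImplFor("viewfs://mycluster"));   // ViewFileSystem
    }
}
```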
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/pom.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/pom.xml
new file mode 100644
index 0000000000..46c1cdd0e6
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/pom.xml
@@ -0,0 +1,55 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <parent>
+ <groupId>org.apache.seatunnel</groupId>
+ <artifactId>seatunnel-connector-v2-e2e</artifactId>
+ <version>${revision}</version>
+ </parent>
+
+ <artifactId>connector-file-hadoop-e2e</artifactId>
+ <name>SeaTunnel : E2E : Connector V2 : File Hadoop</name>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.apache.seatunnel</groupId>
+ <artifactId>connector-fake</artifactId>
+ <version>${project.version}</version>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.seatunnel</groupId>
+ <artifactId>connector-file-hadoop</artifactId>
+ <version>${project.version}</version>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.seatunnel</groupId>
+ <artifactId>connector-assert</artifactId>
+ <version>${project.version}</version>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.seatunnel</groupId>
+ <artifactId>seatunnel-e2e-common</artifactId>
+ <version>${project.version}</version>
+ <type>test-jar</type>
+ <scope>test</scope>
+ </dependency>
+ </dependencies>
+</project>
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/java/org/apache/seatunnel/e2e/connector/file/hdfs/HdfsFileIT.java b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/java/org/apache/seatunnel/e2e/connector/file/hdfs/HdfsFileIT.java
new file mode 100644
index 0000000000..40de8f03a2
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/java/org/apache/seatunnel/e2e/connector/file/hdfs/HdfsFileIT.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.e2e.connector.file.hdfs;
+
+import org.apache.seatunnel.e2e.common.TestResource;
+import org.apache.seatunnel.e2e.common.TestSuiteBase;
+import org.apache.seatunnel.e2e.common.container.TestContainer;
+import org.apache.seatunnel.e2e.common.junit.TestContainerExtension;
+
+import org.junit.jupiter.api.AfterAll;
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.TestTemplate;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.output.Slf4jLogConsumer;
+import org.testcontainers.containers.wait.strategy.Wait;
+import org.testcontainers.lifecycle.Startables;
+import org.testcontainers.utility.DockerImageName;
+import org.testcontainers.utility.DockerLoggerFactory;
+import org.testcontainers.utility.MountableFile;
+
+import lombok.extern.slf4j.Slf4j;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.stream.Stream;
+
+@Slf4j
+public class HdfsFileIT extends TestSuiteBase implements TestResource {
+
+ private static final String HADOOP_IMAGE = "apache/hadoop:3";
+
+ private GenericContainer<?> nameNode;
+ private GenericContainer<?> dataNode;
+
+ @TestContainerExtension
+ private final org.apache.seatunnel.e2e.common.container.ContainerExtendedFactory
+ extendedFactory = container -> {};
+
+ @BeforeAll
+ @Override
+ public void startUp() throws Exception {
+ nameNode =
+ new GenericContainer<>(DockerImageName.parse(HADOOP_IMAGE))
+ .withNetwork(NETWORK)
+ .withNetworkAliases("namenode1")
+ .withEnv("ENSURE_NAMENODE_DIR",
"/tmp/hadoop-root/dfs/name")
+ .withCopyFileToContainer(
+
MountableFile.forClasspathResource("viewfs/cluster1/core-site.xml"),
+ "/opt/hadoop/etc/hadoop/core-site.xml")
+ .withCopyFileToContainer(
+
MountableFile.forClasspathResource("viewfs/cluster1/hdfs-site.xml"),
+ "/opt/hadoop/etc/hadoop/hdfs-site.xml")
+ .withCommand("sh", "-c", "hdfs namenode -format -force
&& hdfs namenode")
+ .withExposedPorts(9870, 9000)
+ .waitingFor(
+ Wait.forHttp("/")
+ .forPort(9870)
+
.withStartupTimeout(Duration.ofMinutes(2)))
+ .withLogConsumer(
+ new Slf4jLogConsumer(
+
DockerLoggerFactory.getLogger(HADOOP_IMAGE + ":namenode")));
+
+ dataNode =
+ new GenericContainer<>(DockerImageName.parse(HADOOP_IMAGE))
+ .withNetwork(NETWORK)
+ .withNetworkAliases("datanode1")
+ .withCopyFileToContainer(
+
MountableFile.forClasspathResource("viewfs/cluster1/core-site.xml"),
+ "/opt/hadoop/etc/hadoop/core-site.xml")
+ .withCopyFileToContainer(
+
MountableFile.forClasspathResource("viewfs/cluster1/hdfs-site.xml"),
+ "/opt/hadoop/etc/hadoop/hdfs-site.xml")
+ .withCommand("hdfs", "datanode")
+ .dependsOn(nameNode)
+ .withLogConsumer(
+ new Slf4jLogConsumer(
+
DockerLoggerFactory.getLogger(HADOOP_IMAGE + ":datanode")));
+
+ Startables.deepStart(Stream.of(nameNode, dataNode)).join();
+ Thread.sleep(5000);
+ }
+
+ @AfterAll
+ @Override
+ public void tearDown() throws Exception {
+ if (dataNode != null) {
+ dataNode.stop();
+ log.info("HDFS DataNode stopped");
+ }
+ if (nameNode != null) {
+ nameNode.stop();
+ log.info("HDFS NameNode stopped");
+ }
+ }
+
+ @TestTemplate
+ public void testHdfsWrite(TestContainer container) throws IOException, InterruptedException {
+ org.testcontainers.containers.Container.ExecResult execResult =
+ container.executeJob("/fake_to_hdfs_normal.conf");
+ Assertions.assertEquals(0, execResult.getExitCode());
+ org.testcontainers.containers.Container.ExecResult lsResult =
+ nameNode.execInContainer("hdfs", "dfs", "-ls", "/normal/output");
+ Assertions.assertEquals(0, lsResult.getExitCode(), "Directory /normal/output should exist");
+ }
+
+ @TestTemplate
+ public void testHdfsRead(TestContainer container) throws IOException, InterruptedException {
+ org.testcontainers.containers.Container.ExecResult writeResult =
+ container.executeJob("/fake_to_hdfs_normal.conf");
+ Assertions.assertEquals(0, writeResult.getExitCode());
+ org.testcontainers.containers.Container.ExecResult readResult =
+ container.executeJob("/hdfs_normal_to_assert.conf");
+ Assertions.assertEquals(0, readResult.getExitCode());
+ }
+}
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/java/org/apache/seatunnel/e2e/connector/file/hdfs/HdfsFileViewFsIT.java b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/java/org/apache/seatunnel/e2e/connector/file/hdfs/HdfsFileViewFsIT.java
new file mode 100644
index 0000000000..2832526794
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/java/org/apache/seatunnel/e2e/connector/file/hdfs/HdfsFileViewFsIT.java
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.seatunnel.e2e.connector.file.hdfs;
+
+import org.apache.seatunnel.e2e.common.TestResource;
+import org.apache.seatunnel.e2e.common.TestSuiteBase;
+import org.apache.seatunnel.e2e.common.container.ContainerExtendedFactory;
+import org.apache.seatunnel.e2e.common.container.TestContainer;
+import org.apache.seatunnel.e2e.common.junit.TestContainerExtension;
+
+import org.junit.jupiter.api.AfterAll;
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.TestTemplate;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.output.Slf4jLogConsumer;
+import org.testcontainers.containers.wait.strategy.Wait;
+import org.testcontainers.lifecycle.Startables;
+import org.testcontainers.utility.DockerImageName;
+import org.testcontainers.utility.DockerLoggerFactory;
+import org.testcontainers.utility.MountableFile;
+
+import lombok.extern.slf4j.Slf4j;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.stream.Stream;
+
+@Slf4j
+public class HdfsFileViewFsIT extends TestSuiteBase implements TestResource {
+
+ private static final String HADOOP_IMAGE = "apache/hadoop:3";
+
+ private GenericContainer<?> nameNode1;
+ private GenericContainer<?> dataNode1;
+ private GenericContainer<?> nameNode2;
+ private GenericContainer<?> dataNode2;
+
+ @TestContainerExtension
+ private final ContainerExtendedFactory extendedFactory =
+ container -> {
+ container.copyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/core-site.xml"),
+ "/tmp/seatunnel/config/viewfs/core-site.xml");
+ log.info("ViewFS core-site.xml copied to container");
+ };
+
+ @BeforeAll
+ @Override
+ public void startUp() throws Exception {
+ nameNode1 =
+ new GenericContainer<>(DockerImageName.parse(HADOOP_IMAGE))
+ .withNetwork(NETWORK)
+ .withNetworkAliases("namenode1")
+ .withEnv("ENSURE_NAMENODE_DIR", "/tmp/hadoop-root/dfs/name")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster1/core-site.xml"),
+ "/opt/hadoop/etc/hadoop/core-site.xml")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster1/hdfs-site.xml"),
+ "/opt/hadoop/etc/hadoop/hdfs-site.xml")
+ .withCommand("sh", "-c", "hdfs namenode -format -force && hdfs namenode")
+ .withExposedPorts(9870, 9000)
+ .waitingFor(
+ Wait.forHttp("/")
+ .forPort(9870)
+ .withStartupTimeout(Duration.ofMinutes(2)))
+ .withLogConsumer(
+ new Slf4jLogConsumer(
+ DockerLoggerFactory.getLogger(
+ HADOOP_IMAGE + ":namenode1")));
+ dataNode1 =
+ new GenericContainer<>(DockerImageName.parse(HADOOP_IMAGE))
+ .withNetwork(NETWORK)
+ .withNetworkAliases("datanode1")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster1/core-site.xml"),
+ "/opt/hadoop/etc/hadoop/core-site.xml")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster1/hdfs-site.xml"),
+ "/opt/hadoop/etc/hadoop/hdfs-site.xml")
+ .withCommand("hdfs", "datanode")
+ .withExposedPorts(9864, 9866, 9867)
+ .withLogConsumer(
+ new Slf4jLogConsumer(
+ DockerLoggerFactory.getLogger(HADOOP_IMAGE + ":datanode1")))
+ .dependsOn(nameNode1);
+ nameNode2 =
+ new GenericContainer<>(DockerImageName.parse(HADOOP_IMAGE))
+ .withNetwork(NETWORK)
+ .withNetworkAliases("namenode2")
+ .withEnv("ENSURE_NAMENODE_DIR", "/tmp/hadoop-root/dfs/name")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster2/core-site.xml"),
+ "/opt/hadoop/etc/hadoop/core-site.xml")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster2/hdfs-site.xml"),
+ "/opt/hadoop/etc/hadoop/hdfs-site.xml")
+ .withCommand("sh", "-c", "hdfs namenode -format -force && hdfs namenode")
+ .withExposedPorts(9870, 9000)
+ .waitingFor(
+ Wait.forHttp("/")
+ .forPort(9870)
+ .withStartupTimeout(Duration.ofMinutes(2)))
+ .withLogConsumer(
+ new Slf4jLogConsumer(
+ DockerLoggerFactory.getLogger(
+ HADOOP_IMAGE + ":namenode2")));
+ dataNode2 =
+ new GenericContainer<>(DockerImageName.parse(HADOOP_IMAGE))
+ .withNetwork(NETWORK)
+ .withNetworkAliases("datanode2")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster2/core-site.xml"),
+ "/opt/hadoop/etc/hadoop/core-site.xml")
+ .withCopyFileToContainer(
+ MountableFile.forClasspathResource("viewfs/cluster2/hdfs-site.xml"),
+ "/opt/hadoop/etc/hadoop/hdfs-site.xml")
+ .withCommand("hdfs", "datanode")
+ .withExposedPorts(9864, 9866, 9867)
+ .withLogConsumer(
+ new Slf4jLogConsumer(
+ DockerLoggerFactory.getLogger(HADOOP_IMAGE + ":datanode2")))
+ .dependsOn(nameNode2);
+ Startables.deepStart(Stream.of(nameNode1, dataNode1, nameNode2, dataNode2)).join();
+ Thread.sleep(5000);
+ }
+
+ @AfterAll
+ @Override
+ public void tearDown() throws Exception {
+ if (dataNode1 != null) {
+ dataNode1.stop();
+ }
+ if (nameNode1 != null) {
+ nameNode1.stop();
+ log.info("HDFS Cluster 1 stopped");
+ }
+ if (dataNode2 != null) {
+ dataNode2.stop();
+ }
+ if (nameNode2 != null) {
+ nameNode2.stop();
+ log.info("HDFS Cluster 2 stopped");
+ }
+ }
+
+ @TestTemplate
+ public void testViewFsWrite(TestContainer container) throws IOException, InterruptedException {
+ org.testcontainers.containers.Container.ExecResult execResult =
+ container.executeJob("/fake_to_hdfs_viewfs.conf");
+ Assertions.assertEquals(
+ 0, execResult.getExitCode(), "SeaTunnel job should complete successfully");
+
+ // Verify files were written to cluster1 via ViewFS mount point /data
+ org.testcontainers.containers.Container.ExecResult lsResult =
+ nameNode1.execInContainer("hdfs", "dfs", "-ls", "/data/output");
+ Assertions.assertEquals(0, lsResult.getExitCode(), "Directory /data/output should exist");
+ }
+
+ @TestTemplate
+ public void testViewFsRead(TestContainer container) throws IOException, InterruptedException {
+ org.testcontainers.containers.Container.ExecResult writeResult =
+ container.executeJob("/fake_to_hdfs_viewfs.conf");
+ Assertions.assertEquals(0, writeResult.getExitCode());
+ org.testcontainers.containers.Container.ExecResult readResult =
+ container.executeJob("/hdfs_viewfs_to_assert.conf");
+ Assertions.assertEquals(0, readResult.getExitCode());
+ }
+}
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/fake_to_hdfs_normal.conf b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/fake_to_hdfs_normal.conf
new file mode 100644
index 0000000000..3622f515cc
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/fake_to_hdfs_normal.conf
@@ -0,0 +1,56 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+env {
+ parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+ FakeSource {
+ parallelism = 1
+ plugin_output = "fake"
+ row.num = 100
+ schema = {
+ fields {
+ c_map = "map<string, string>"
+ c_array = "array<int>"
+ c_string = string
+ c_boolean = boolean
+ c_float = float
+ c_double = double
+ c_date = date
+ c_decimal = "decimal(38, 18)"
+ c_timestamp = timestamp
+ }
+ }
+ }
+}
+
+sink {
+ HdfsFile {
+ fs.defaultFS = "hdfs://namenode1:9000"
+ path = "/normal/output"
+ tmp_path = "/normal/tmp"
+ file_format_type = "parquet"
+ data_save_mode = "DROP_DATA"
+ hadoop_conf = {
+ "dfs.replication" = 1
+ }
+ }
+}
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/fake_to_hdfs_viewfs.conf b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/fake_to_hdfs_viewfs.conf
new file mode 100644
index 0000000000..0ef285cd35
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/fake_to_hdfs_viewfs.conf
@@ -0,0 +1,77 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+env {
+ parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+ FakeSource {
+ row.num = 100
+ schema = {
+ fields {
+ c_map = "map<string, string>"
+ c_array = "array<int>"
+ c_string = string
+ c_boolean = boolean
+ c_tinyint = tinyint
+ c_smallint = smallint
+ c_int = int
+ c_bigint = bigint
+ c_float = float
+ c_double = double
+ c_bytes = bytes
+ c_date = date
+ c_decimal = "decimal(38, 18)"
+ c_timestamp = timestamp
+ c_row = {
+ c_map = "map<string, string>"
+ c_array = "array<int>"
+ c_string = string
+ c_boolean = boolean
+ c_tinyint = tinyint
+ c_smallint = smallint
+ c_int = int
+ c_bigint = bigint
+ c_float = float
+ c_double = double
+ c_bytes = bytes
+ c_date = date
+ c_decimal = "decimal(20, 18)"
+ c_timestamp = timestamp
+ }
+ }
+ }
+ }
+}
+
+sink {
+ HdfsFile {
+ fs.defaultFS = "viewfs://mycluster"
+ path = "/data/output"
+ tmp_path = "/data/tmp"
+ hdfs_site_path = "/tmp/seatunnel/config/viewfs/core-site.xml"
+ file_format_type = "json"
+ schema_save_mode = "CREATE_SCHEMA_WHEN_NOT_EXIST"
+ data_save_mode = "DROP_DATA"
+ hadoop_conf = {
+ "dfs.replication" = 1
+ }
+ }
+}
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/hdfs_normal_to_assert.conf b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/hdfs_normal_to_assert.conf
new file mode 100644
index 0000000000..67939fb4be
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/hdfs_normal_to_assert.conf
@@ -0,0 +1,83 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+env {
+ parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+ HdfsFile {
+ fs.defaultFS = "hdfs://namenode1:9000"
+ path = "/normal/output"
+ file_format_type = "parquet"
+ schema = {
+ fields {
+ c_map = "map<string, string>"
+ c_array = "array<int>"
+ c_string = string
+ c_boolean = boolean
+ c_float = float
+ c_double = double
+ c_date = date
+ c_decimal = "decimal(38, 18)"
+ c_timestamp = timestamp
+ }
+ }
+ hadoop_conf = {
+ "dfs.replication" = 1
+ }
+ }
+}
+
+sink {
+ Assert {
+ rules {
+ row_rules = [
+ {
+ rule_type = MAX_ROW
+ rule_value = 100
+ },
+ {
+ rule_type = MIN_ROW
+ rule_value = 100
+ }
+ ]
+ field_rules = [
+ {
+ field_name = c_string
+ field_type = string
+ field_value = [
+ {
+ rule_type = NOT_NULL
+ }
+ ]
+ },
+ {
+ field_name = c_boolean
+ field_type = boolean
+ field_value = [
+ {
+ rule_type = NOT_NULL
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/hdfs_viewfs_to_assert.conf b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/hdfs_viewfs_to_assert.conf
new file mode 100644
index 0000000000..2c98b3b440
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/hdfs_viewfs_to_assert.conf
@@ -0,0 +1,114 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+env {
+ parallelism = 1
+ job.mode = "BATCH"
+}
+
+source {
+ HdfsFile {
+ fs.defaultFS = "viewfs://mycluster"
+ path = "/data/output"
+ hdfs_site_path = "/tmp/seatunnel/config/viewfs/core-site.xml"
+ file_format_type = "json"
+ schema = {
+ fields {
+ c_map = "map<string, string>"
+ c_array = "array<int>"
+ c_string = string
+ c_boolean = boolean
+ c_tinyint = tinyint
+ c_smallint = smallint
+ c_int = int
+ c_bigint = bigint
+ c_float = float
+ c_double = double
+ c_bytes = bytes
+ c_date = date
+ c_decimal = "decimal(38, 18)"
+ c_timestamp = timestamp
+ c_row = {
+ c_map = "map<string, string>"
+ c_array = "array<int>"
+ c_string = string
+ c_boolean = boolean
+ c_tinyint = tinyint
+ c_smallint = smallint
+ c_int = int
+ c_bigint = bigint
+ c_float = float
+ c_double = double
+ c_bytes = bytes
+ c_date = date
+ c_decimal = "decimal(38, 18)"
+ c_timestamp = timestamp
+ }
+ }
+ }
+ hadoop_conf = {
+ "dfs.replication" = 1
+ }
+ }
+}
+
+sink {
+ Assert {
+ rules {
+ row_rules = [
+ {
+ rule_type = MAX_ROW
+ rule_value = 100
+ },
+ {
+ rule_type = MIN_ROW
+ rule_value = 100
+ }
+ ]
+ field_rules = [
+ {
+ field_name = c_string
+ field_type = string
+ field_value = [
+ {
+ rule_type = NOT_NULL
+ }
+ ]
+ },
+ {
+ field_name = c_boolean
+ field_type = boolean
+ field_value = [
+ {
+ rule_type = NOT_NULL
+ }
+ ]
+ },
+ {
+ field_name = c_int
+ field_type = int
+ field_value = [
+ {
+ rule_type = NOT_NULL
+ }
+ ]
+ }
+ ]
+ }
+ }
+}
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster1/core-site.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster1/core-site.xml
new file mode 100644
index 0000000000..e9346346b8
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster1/core-site.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+ <property>
+ <name>fs.defaultFS</name>
+ <value>hdfs://namenode1:9000</value>
+ </property>
+ <property>
+ <name>dfs.permissions.enabled</name>
+ <value>false</value>
+ </property>
+</configuration>
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster1/hdfs-site.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster1/hdfs-site.xml
new file mode 100644
index 0000000000..9bfcdeff35
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster1/hdfs-site.xml
@@ -0,0 +1,39 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+ <property>
+ <name>dfs.replication</name>
+ <value>1</value>
+ </property>
+ <property>
+ <name>dfs.namenode.name.dir</name>
+ <value>file:///tmp/hadoop-root/dfs/name</value>
+ </property>
+ <property>
+ <name>dfs.datanode.data.dir</name>
+ <value>file:///tmp/hadoop-root/dfs/data</value>
+ </property>
+ <property>
+ <name>dfs.permissions.enabled</name>
+ <value>false</value>
+ </property>
+ <property>
+ <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
+ <value>false</value>
+ </property>
+</configuration>
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster2/core-site.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster2/core-site.xml
new file mode 100644
index 0000000000..46f163517c
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster2/core-site.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+ <property>
+ <name>fs.defaultFS</name>
+ <value>hdfs://namenode2:9000</value>
+ </property>
+ <property>
+ <name>dfs.permissions.enabled</name>
+ <value>false</value>
+ </property>
+</configuration>
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster2/hdfs-site.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster2/hdfs-site.xml
new file mode 100644
index 0000000000..947fd61ee5
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/cluster2/hdfs-site.xml
@@ -0,0 +1,41 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+ <property>
+ <name>dfs.replication</name>
+ <value>1</value>
+ </property>
+ <property>
+ <name>dfs.namenode.name.dir</name>
+ <value>file:///tmp/hadoop-root/dfs/name</value>
+ </property>
+ <property>
+ <name>dfs.datanode.data.dir</name>
+ <value>file:///tmp/hadoop-root/dfs/data</value>
+ </property>
+ <property>
+ <name>dfs.permissions.enabled</name>
+ <value>false</value>
+ </property>
+ <property>
+ <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
+ <value>false</value>
+ </property>
+</configuration>
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/core-site.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/core-site.xml
new file mode 100644
index 0000000000..6f2702ffd6
--- /dev/null
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/connector-file-hadoop-e2e/src/test/resources/viewfs/core-site.xml
@@ -0,0 +1,48 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<configuration>
+ <!-- ViewFS default filesystem -->
+ <property>
+ <name>fs.defaultFS</name>
+ <value>viewfs://mycluster</value>
+ </property>
+
+ <!-- ViewFS mount table configuration -->
+ <!-- Mount /data to cluster1 -->
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./data</name>
+ <value>hdfs://namenode1:9000/data</value>
+ </property>
+ <property>
+ <name>fs.viewfs.mounttable.mycluster.link./tmp</name>
+ <value>hdfs://namenode2:9000/tmp</value>
+ </property>
+
+ <property>
+ <name>dfs.replication</name>
+ <value>1</value>
+ </property>
+ <property>
+ <name>dfs.permissions.enabled</name>
+ <value>false</value>
+ </property>
+</configuration>
+
+
diff --git a/seatunnel-e2e/seatunnel-connector-v2-e2e/pom.xml b/seatunnel-e2e/seatunnel-connector-v2-e2e/pom.xml
index 4cdb7af240..912026b79f 100644
--- a/seatunnel-e2e/seatunnel-connector-v2-e2e/pom.xml
+++ b/seatunnel-e2e/seatunnel-connector-v2-e2e/pom.xml
@@ -39,6 +39,7 @@
<module>connector-amazonsqs-e2e</module>
<module>connector-file-local-e2e</module>
<module>connector-file-cos-e2e</module>
+ <module>connector-file-hadoop-e2e</module>
<module>connector-file-sftp-e2e</module>
<module>connector-file-oss-e2e</module>
<module>connector-file-s3-e2e</module>
@@ -66,7 +67,7 @@
<module>connector-druid-e2e</module>
<module>connector-google-firestore-e2e</module>
<module>connector-rocketmq-e2e</module>
- <module>connector-file-obs-e2e</module>
+ <!-- <module>connector-file-obs-e2e</module>-->
<module>connector-file-ftp-e2e</module>
<module>connector-pulsar-e2e</module>
<module>connector-paimon-e2e</module>