[ 
https://issues.apache.org/jira/browse/HUDI-4485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yao Zhang updated HUDI-4485:
----------------------------
    Description: 
This issue is from: [[SUPPORT] Hudi cli got empty result for command show 
fsview all · Issue #6177 · apache/hudi 
(github.com)|https://github.com/apache/hudi/issues/6177]

**Describe the problem you faced**

The Hudi CLI returns an empty result after running the command `show fsview all`.

![image](https://user-images.githubusercontent.com/7007327/180346750-6a55f472-45ac-46cf-8185-3c4fc4c76434.png)

Table t1 is a COPY_ON_WRITE (COW) table, and I am sure the parquet files are actually 
generated inside the data folder. The parquet files are not damaged either: the data 
can be retrieved correctly both by reading it as a Hudi table and by reading each 
parquet file directly (using Spark).
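
A minimal spark-shell sketch of that verification, assuming the Hudi Spark bundle is on the classpath and using the placeholder table path from the reproduction steps below:

```scala
// Sketch only (not taken from the issue); run in spark-shell on a node with the cluster's Hadoop config.

// Read through the Hudi datasource -- should return the eight rows inserted via Flink.
val hudiDf = spark.read.format("hudi").load("hdfs:///path/to/table/")
hudiDf.show(false)

// Read the raw parquet files of one partition directly, bypassing Hudi.
// The _hoodie_* metadata columns confirm the files were written by Hudi.
// The directory may be `partition=par1` instead if hive-style partitioning is enabled.
val rawDf = spark.read.parquet("hdfs:///path/to/table/par1")
rawDf.show(false)
```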

**To Reproduce**

Steps to reproduce the behavior:

1. Enter the Flink SQL client.
2. Execute the following SQL and check that the data was written successfully.
```sql
CREATE TABLE t1(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
)
PARTITIONED BY (`partition`)
WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///path/to/table/',
  'table.type' = 'COPY_ON_WRITE'
);

-- insert data using values
INSERT INTO t1 VALUES
  ('id1','Danny',23,TIMESTAMP '1970-01-01 00:00:01','par1'),
  ('id2','Stephen',33,TIMESTAMP '1970-01-01 00:00:02','par1'),
  ('id3','Julian',53,TIMESTAMP '1970-01-01 00:00:03','par2'),
  ('id4','Fabian',31,TIMESTAMP '1970-01-01 00:00:04','par2'),
  ('id5','Sophia',18,TIMESTAMP '1970-01-01 00:00:05','par3'),
  ('id6','Emma',20,TIMESTAMP '1970-01-01 00:00:06','par3'),
  ('id7','Bob',44,TIMESTAMP '1970-01-01 00:00:07','par4'),
  ('id8','Han',56,TIMESTAMP '1970-01-01 00:00:08','par4');
```
3. Enter the Hudi CLI and execute `show fsview all`.
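
As a programmatic cross-check of what step 3 is expected to display, the file slices can also be listed through Hudi's file-system view API. This is a sketch based on the 0.11.x public classes, not on the CLI's own code, and it assumes non-hive-style partition paths such as `par1`:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hudi.common.table.HoodieTableMetaClient
import org.apache.hudi.common.table.view.HoodieTableFileSystemView

import scala.collection.JavaConverters._

// Meta client for the table written by Flink above (the Hadoop config must resolve the hdfs:/// path).
val metaClient = HoodieTableMetaClient.builder()
  .setConf(new Configuration())
  .setBasePath("hdfs:///path/to/table/")
  .build()

// File-system view over the completed commits of the active timeline.
val fsView = new HoodieTableFileSystemView(
  metaClient,
  metaClient.getActiveTimeline.getCommitsTimeline.filterCompletedInstants())

// Print every file slice of one partition; repeat per partition (par1..par4).
fsView.getAllFileSlices("par1").iterator().asScala.foreach(println)
```

If this listing shows the file slices while `show fsview all` still comes back empty, the table itself is healthy and the problem sits in the CLI layer, which matches the spring-shell workaround below.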

**Expected behavior**

`show fsview all` in the Hudi CLI should return all file slices.

**Environment Description**

* Hudi version : 0.11.1
* Spark version : 3.1.1
* Hive version : 3.1.0
* Hadoop version : 3.1.1
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no

**Additional context**

None.

**Stacktrace**

N/A

 

Temporary solution:

I modified and recompiled spring-shell 1.2.0.RELEASE. Please download the attached 
jar and replace the file of the same name in ${HUDI_CLI_DIR}/target/lib/.

> Hudi cli got empty result for command show fsview all
> -----------------------------------------------------
>
>                 Key: HUDI-4485
>                 URL: https://issues.apache.org/jira/browse/HUDI-4485
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: cli
>    Affects Versions: 0.11.1
>         Environment: Hudi version : 0.11.1
> Spark version : 3.1.1
> Hive version : 3.1.0
> Hadoop version : 3.1.1
>            Reporter: Yao Zhang
>            Priority: Minor
>             Fix For: 0.13.0
>
>         Attachments: spring-shell-1.2.0.RELEASE.jar
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
