[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965426924


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline
+- Persist each table service into an instance table of Table Management Service
+- notify a separate execution component/thread can start executing it
+- Monitor task execution status, update table information, and retry failed 
table services up to the maximum number of times
+
+### Storage
+
+- There are two types of stored information
+- Register with the hoodie table of the Table Management Service
+- Each table service instance is generated by Table Management Service
+
+ Lectotype
+
+**Requirements:** support single row ACID transactions. Almost all write 
operations require it, like operation creation,
+status changing and so on.
+
+There are the candidates,
+
+**Hudi table**
+
+pros:
+
+- No external components are introduced and maintained.
+
+crons:
+
+- Each write to hudi table will be a deltacommit, this will further lower the 
number of possible requests / sec that can
+  be served.
+
+**RDBMS**
+
+pros:
+
+- database that is suitable for structured data like metadata to store.
+
+- can describe the relation between many 

[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965424691


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline
+- Persist each table service into an instance table of Table Management Service
+- notify a separate execution component/thread can start executing it
+- Monitor task execution status, update table information, and retry failed 
table services up to the maximum number of times
+
+### Storage
+
+- There are two types of stored information
+- Register with the hoodie table of the Table Management Service
+- Each table service instance is generated by Table Management Service
+
+ Lectotype
+
+**Requirements:** support single row ACID transactions. Almost all write 
operations require it, like operation creation,
+status changing and so on.
+
+There are the candidates,
+
+**Hudi table**
+
+pros:
+
+- No external components are introduced and maintained.
+
+crons:
+
+- Each write to hudi table will be a deltacommit, this will further lower the 
number of possible requests / sec that can
+  be served.
+
+**RDBMS**
+
+pros:
+
+- database that is suitable for structured data like metadata to store.
+
+- can describe the relation between many 

[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965424454


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline
+- Persist each table service into an instance table of Table Management Service
+- notify a separate execution component/thread can start executing it
+- Monitor task execution status, update table information, and retry failed 
table services up to the maximum number of times
+
+### Storage
+
+- There are two types of stored information
+- Register with the hoodie table of the Table Management Service
+- Each table service instance is generated by Table Management Service
+
+ Lectotype
+
+**Requirements:** support single row ACID transactions. Almost all write 
operations require it, like operation creation,
+status changing and so on.
+
+There are the candidates,
+
+**Hudi table**
+
+pros:
+
+- No external components are introduced and maintained.
+
+crons:
+
+- Each write to hudi table will be a deltacommit, this will further lower the 
number of possible requests / sec that can
+  be served.
+
+**RDBMS**
+
+pros:
+
+- database that is suitable for structured data like metadata to store.
+
+- can describe the relation between many 

[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965423222


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline
+- Persist each table service into an instance table of Table Management Service
+- notify a separate execution component/thread can start executing it
+- Monitor task execution status, update table information, and retry failed 
table services up to the maximum number of times
+
+### Storage
+
+- There are two types of stored information
+- Register with the hoodie table of the Table Management Service
+- Each table service instance is generated by Table Management Service
+
+ Lectotype

Review Comment:
   Will update it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965421349


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline

Review Comment:
   I mean scheduling the corresponding table service to the hudi table‘s 
timeline on storage via TMS



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965390402


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.

Review Comment:
   The meaning expressed here is that it will not be implemented through the 
pull mode, because as the number of tables increases, it will face the problem 
of non-scalability.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965378497


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline
+- Persist each table service into an instance table of Table Management Service
+- notify a separate execution component/thread can start executing it
+- Monitor task execution status, update table information, and retry failed 
table services up to the maximum number of times

Review Comment:
   Separate threads in TMS.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965378497


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)
+
+### Processing flow
+
+- After receiving the request, the table management server schedules the 
relevant table service to the table's timeline
+- Persist each table service into an instance table of Table Management Service
+- notify a separate execution component/thread can start executing it
+- Monitor task execution status, update table information, and retry failed 
table services up to the maximum number of times

Review Comment:
   TMS 中的单独线程



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965378032


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)
+
+- Do not enable meta server
+- for every write/commit on the table, the table management server is 
notified.
+  We can set a heartbeat timeout for each hoodie table, and if it exceeds 
it, we will actively pull it once to prevent the commit request from being lost
+![](service_without_meta_server.png)

Review Comment:
   At present, this design has been modified to bring all pending instant every 
time the client requests TMS, and update the RFC document later.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r965376886


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+### Processing mode
+Different processing modes depending on whether the meta server is enabled
+
+- Enable meta server
+- The pull-based mechanism works for fewer tables. Scanning 1000s of 
tables for possible services is going to induce lots of a load of listing.
+- The meta server provides a listener that takes as input the uris of the 
Table Management Service and triggers a callback through the hook at each 
instant commit, thereby calling the Table Management Service to do the 
scheduling/execution for the table.
+![](service_with_meta_server.png)

Review Comment:
   The current design choice is REST.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-09-07 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r964884802


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+

Review Comment:
   This is expected to provide pluggable interfaces to support various 
scheduling strategies.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-07-18 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r924004121


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,316 @@
+
+
+# RFC-43: Implement Table Management ServiceTable Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+

Review Comment:
   > Trying to understand the comment "maintenance cost will become higher". 
Can you explain?
   
   If we have more than 1000 tables, each configured with a compaction and a 
clustering task, then we need to manage more than 2000 table service tasks, and 
this management cost is very huge. Including but not limited to task failure 
retry, exception cause troubleshooting, resource management.
   
   > In addition to this - We should also think about a pluggable triggering 
strategies for any table management service to run. Eventually, we can be more 
intelligent when we trigger a service I guess.
   
   We plan to support triggering service execution via API, is that what you 
mean?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-05-12 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r871367878


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,257 @@
+
+
+# RFC-43: Implement Table Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi
+compaction/clustering job.
+
+## Background
+
+In the current implementation, if the HUDI table needs do compact/cluster, it 
only has three ways:
+
+1. Use inline compaction/clustering, in this mode the job will be block 
writing job.
+
+2. Using Async compaction/clustering, in this mode the job execute async but 
also sharing the resource with HUDI to
+   write a job that may affect the stability of job writing, which is not what 
the user wants to see.
+
+3. Using independent compaction/clustering job is a better way to schedule the 
job, in this mode the job execute async
+   and do not sharing resources with writing job, but also has some questions:
+1. Users have to enable lock service providers so that there is not data 
loss. Especially when compaction/clustering
+   is getting scheduled, no other writes should proceed concurrently and 
hence a lock is required.
+2. The user needs to manually start an async compaction/clustering 
application, which means that the user needs to
+   maintain two jobs.
+3. With the increase in the number of HUDI jobs, there is no unified 
service to manage compaction/clustering jobs (
+   monitor, retry, history, etc...), which will make maintenance costs 
increase.
+
+With this effort, we want to provide an independent compaction/clustering 
Service, it will have these abilities:
+
+- Provides a pluggable execution interface that can adapt to multiple 
execution engines, such as Spark and Flink.
+
+- With the ability to failover, need to be persisted compaction/clustering 
message.
+
+- Perfect metrics and reuse HoodieMetric expose to the outside.
+
+- Provide automatic failure retry for compaction/clustering job.
+
+## Implementation
+
+![](service.jpg)

Review Comment:
   1. We can implement REST API for phase1, and have a common Request Handler.
   2 and 3 is very necessary, which is also reflected in the RFC.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hudi] yuzhaojing commented on a diff in pull request #4309: [HUDI-3016][RFC-43] Proposal to implement Table Management Service

2022-05-12 Thread GitBox


yuzhaojing commented on code in PR #4309:
URL: https://github.com/apache/hudi/pull/4309#discussion_r871364199


##
rfc/rfc-43/rfc-43.md:
##
@@ -0,0 +1,257 @@
+
+
+# RFC-43: Implement Table Management Service for Hudi
+
+## Proposers
+
+- @yuzhaojing
+
+## Approvers
+
+- @vinothchandar
+- @Raymond
+
+## Status
+
+JIRA: 
[https://issues.apache.org/jira/browse/HUDI-3016](https://issues.apache.org/jira/browse/HUDI-3016)
+
+## Abstract
+
+Hudi table needs table management operations. Currently, schedule these job 
provides Three ways:
+
+- Inline, execute these job and writing job in the same application, perform 
the these job and writing job serially.
+
+- Async, execute these job and writing job in the same application, Async 
parallel execution of these job and write job.
+
+- Independent compaction/clustering job, execute an async 
compaction/clustering job of another application.
+
+With the increase in the number of HUDI tables, due to a lack of management 
capabilities, maintenance costs will become
+higher. This proposal is to implement an independent compaction/clustering 
Service to manage the Hudi

Review Comment:
   Totally agree, we will evolve in this direction!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org