wsry commented on a change in pull request #10375: [FLINK-14845][runtime] Introduce data compression to reduce disk and network IO of shuffle.
URL: https://github.com/apache/flink/pull/10375#discussion_r354822455
 
 

 ##########
 File path: flink-core/src/main/java/org/apache/flink/configuration/NettyShuffleEnvironmentOptions.java
 ##########
 @@ -54,6 +54,28 @@
                        .withDescription("Enable SSL support for the 
taskmanager data transport. This is applicable only when the" +
                                " global flag for internal SSL (" + 
SecurityOptions.SSL_INTERNAL_ENABLED.key() + ") is set to true");
 
 +       /**
 +        * Boolean flag indicating whether the shuffle data will be compressed or not.
 +        *
 +        * <p>Note: Data is compressed per buffer (may be sliced buffer in pipeline mode) and compression can incur extra
 +        * CPU overhead so it is more effective for IO bounded scenario when data compression ratio is high.
 +        */
 +       public static final ConfigOption<Boolean> DATA_COMPRESSION_ENABLED =
 +               key("taskmanager.data.compression.enabled")
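
 The excerpt above is cut off after the key(...) call. For readers following the thread, a boolean option in this file is usually completed with a default value and a description; the snippet below is only an illustrative sketch (the PR's actual default and description text are not visible in this excerpt, and it assumes the file's existing static import of ConfigOptions.key):

        // Illustrative sketch only -- the real default value and description are not
        // shown in the excerpt above; false is assumed here purely for illustration.
        public static final ConfigOption<Boolean> DATA_COMPRESSION_ENABLED =
                key("taskmanager.data.compression.enabled")
                        .defaultValue(false)
                        .withDescription("Boolean flag indicating whether the shuffle data will be compressed or not.");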
 
 Review comment:
   Thanks for the comments. After reconsidering the problem, I think the user should only need to enable/disable compression and, in the future, choose which compression plugin to use. All other strategies should be left to the plugin, including which compression algorithm to use (it may differ for pipelined and blocking partitions), whether to enable other compression optimizations, and so on. So I would just leave it as it is now.
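
   To make the division of responsibilities concrete: the only user-facing surface would be the boolean flag above (plus, later, the choice of a compression plugin), while algorithm selection and any tuning stay inside the plugin. A rough sketch of such a plugin interface is shown below; the names (ShuffleCompressionFactory, BlockCompressor, BlockDecompressor) are hypothetical and not taken from this PR:

        /** Purely illustrative sketch -- hypothetical names, not the PR's actual API. */
        public interface ShuffleCompressionFactory {

                /** The factory decides the algorithm, which may differ for pipelined and blocking partitions. */
                BlockCompressor createCompressor();

                BlockDecompressor createDecompressor();

                /** Compresses one (possibly sliced) network buffer; returns the number of bytes written to dst. */
                interface BlockCompressor {
                        int compress(java.nio.ByteBuffer src, int srcOff, int srcLen, java.nio.ByteBuffer dst, int dstOff);
                }

                /** Decompresses one compressed buffer; returns the number of bytes written to dst. */
                interface BlockDecompressor {
                        int decompress(java.nio.ByteBuffer src, int srcOff, int srcLen, java.nio.ByteBuffer dst, int dstOff);
                }
        }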
