smallzhongfeng commented on code in PR #210:
URL: https://github.com/apache/incubator-uniffle/pull/210#discussion_r977185736
##########
coordinator/src/main/java/org/apache/uniffle/coordinator/SelectStorageStrategy.java:
##########
@@ -17,24 +17,12 @@
package org.apache.uniffle.coordinator;
+import java.util.List;
import java.util.Map;
-import org.apache.uniffle.common.RemoteStorageInfo;
import org.apache.uniffle.coordinator.LowestIOSampleCostSelectStorageStrategy.RankValue;
public interface SelectStorageStrategy {
- RemoteStorageInfo pickRemoteStorage(String appId);
-
- void incRemoteStorageCounter(String remoteStoragePath);
-
- void decRemoteStorageCounter(String storagePath);
-
- void removePathFromCounter(String storagePath);
-
- Map<String, RemoteStorageInfo> getAvailableRemoteStorageInfo();
-
- Map<String, RemoteStorageInfo> getAppIdToRemoteStorageInfo();
-
- Map<String, RankValue> getRemoteStoragePathRankValue();
+ List<Map.Entry<String, RankValue>> readAndWrite(String path);
Review Comment:
The purpose of the `readAndWrite` method is to perform anomaly detection on
remote paths. For HDFS, reading and writing files is the most direct way to
detect problems, hence the name; the actual comparison, however, is done by
`sortPathByRankValue`, which is not part of this method. I defined it as an
interface method because the objects to be sorted may differ, so each storage
type can provide its own implementation.
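For context, here is a minimal, hypothetical sketch of the idea, not the actual uniffle code: the `RankValue` fields, the probe-file name, and the local-disk I/O (standing in for HDFS reads/writes) are all illustrative, and the sketch takes a list of candidate paths for self-containment, whereas the real interface takes a single path and keeps ranking state inside the strategy.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ReadWriteProbe {

  // Stand-in for LowestIOSampleCostSelectStorageStrategy.RankValue.
  static class RankValue {
    final long costMillis;  // measured write+read latency for the probe file
    RankValue(long costMillis) { this.costMillis = costMillis; }
  }

  // Probe each candidate path by writing a small file and reading it back,
  // then rank paths by the measured cost (lowest first). Paths that fail the
  // probe are treated as anomalous and rank last.
  static List<Map.Entry<String, RankValue>> readAndWrite(List<String> paths) {
    Map<String, RankValue> ranks = new HashMap<>();
    for (String p : paths) {
      long start = System.currentTimeMillis();
      try {
        Path probe = Paths.get(p, "probe.tmp");
        Files.write(probe, "health-check".getBytes());
        Files.readAllBytes(probe);
        Files.deleteIfExists(probe);
        ranks.put(p, new RankValue(System.currentTimeMillis() - start));
      } catch (IOException e) {
        ranks.put(p, new RankValue(Long.MAX_VALUE));  // unreachable path ranks last
      }
    }
    // The sort here plays the role of sortPathByRankValue in the sketch.
    return ranks.entrySet().stream()
        .sorted(Comparator.comparingLong(e -> e.getValue().costMillis))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("probe");
    List<Map.Entry<String, RankValue>> sorted =
        readAndWrite(Arrays.asList(dir.toString(), "/nonexistent/dir"));
    System.out.println(sorted.get(0).getKey());  // healthy path ranks first
  }
}
```

The point of keeping the probe and the sort behind one interface method is that a non-HDFS storage could measure cost differently while the coordinator still consumes the same sorted list.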
##########
coordinator/src/main/java/org/apache/uniffle/coordinator/SelectStorageStrategy.java:
##########
@@ -17,24 +17,12 @@
package org.apache.uniffle.coordinator;
+import java.util.List;
import java.util.Map;
-import org.apache.uniffle.common.RemoteStorageInfo;
import
org.apache.uniffle.coordinator.LowestIOSampleCostSelectStorageStrategy.RankValue;
public interface SelectStorageStrategy {
- RemoteStorageInfo pickRemoteStorage(String appId);
-
- void incRemoteStorageCounter(String remoteStoragePath);
-
- void decRemoteStorageCounter(String storagePath);
-
- void removePathFromCounter(String storagePath);
-
- Map<String, RemoteStorageInfo> getAvailableRemoteStorageInfo();
-
- Map<String, RemoteStorageInfo> getAppIdToRemoteStorageInfo();
-
- Map<String, RankValue> getRemoteStoragePathRankValue();
+ List<Map.Entry<String, RankValue>> readAndWrite(String path);
Review Comment:
The meaning of the `readAndWrite` interface is to perform anomaly detection
of remote paths. For hdfs, reading and writing files is a more direct way to
detect, so I took this name, but the real comparison method comes from
`sortPathByRankValue`, which I do not have. The reason for defining it as an
interface is because the objects to be sorted may be different, and this is
left to different storage methods for implementation.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]