prashantwason commented on a change in pull request #2607:
URL: https://github.com/apache/hudi/pull/2607#discussion_r591853011



##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieAppendHandle.java
##########
@@ -377,9 +388,21 @@ public void write(HoodieRecord record, Option<IndexedRecord> insertValue) {
         writer.close();
 
         // update final size, once for all log files
+        long totalLogFileSize = 0;
         for (WriteStatus status: statuses) {
          long logFileSize = FSUtils.getFileSize(fs, new Path(config.getBasePath(), status.getStat().getPath()));
           status.getStat().setFileSizeInBytes(logFileSize);
+          totalLogFileSize += logFileSize;
+        }
+
+        if (config.isMetricsOn() && config.shouldCollectObservabilityMetrics()) {

Review comment:
       This code is repeated across handles, so it would be better to move it into the table itself and simplify the metrics-enabled check, e.g.:
   
   hoodieTable.updateObservabilityMetrics(xxx)
   
   or 
   
   hoodieTable.getObservabilityMetrics().ifPresent(....)
   

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieCreateHandle.java
##########
@@ -60,6 +65,8 @@
   protected long recordsDeleted = 0;
   private Map<String, HoodieRecord<T>> recordMap;
   private boolean useWriterSchema = false;
+  private HoodieTimer writeTimer = null;

Review comment:
       Used in all handles so maybe move it to the base class.
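       Hoisting the timer into the shared base class could be shaped roughly like this (illustrative class names only; the real hierarchy is `HoodieWriteHandle` and its subclasses, and the real field is a `HoodieTimer`):

```java
// Illustrative sketch of declaring the write timer once in the base
// handle class so every concrete handle inherits it; class names are
// stand-ins, not Hudi's actual handle hierarchy.
abstract class WriteHandleSketch {
    // Declared once here instead of separately in every handle.
    protected long totalWriteTimeNanos = 0;

    long getTotalWriteTimeNanos() {
        return totalWriteTimeNanos;
    }
}

class CreateHandleSketch extends WriteHandleSketch {
    void write(Runnable work) {
        long start = System.nanoTime();
        work.run();
        // Accumulate into the inherited field.
        totalWriteTimeNanos += System.nanoTime() - start;
    }
}
```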

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/config/HoodieWriteConfig.java
##########
@@ -143,6 +143,9 @@
  public static final String CLIENT_HEARTBEAT_NUM_TOLERABLE_MISSES_PROP = "hoodie.client.heartbeat.tolerable.misses";
  public static final Integer DEFAULT_CLIENT_HEARTBEAT_NUM_TOLERABLE_MISSES = 2;
 
+  public static final String COLLECT_OBSERVABILITY_METRICS = "hoodie.collect.observability.metrics";
+  public static final String DEFAULT_COLLECT_OBSERVABILITY_METRICS = "true";

Review comment:
       A default of false is better unless we are sure this does not cause scalability issues with the reporter/metrics platform.
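   The opt-in default could look like this (a sketch using a plain `Properties` lookup rather than Hudi's actual config machinery; `ConfigSketch` and `shouldCollectObservabilityMetrics` are illustrative names):

```java
import java.util.Properties;

// Hypothetical sketch of the suggested safer default: observability
// metrics are off unless explicitly enabled.
class ConfigSketch {
    static final String COLLECT_OBSERVABILITY_METRICS = "hoodie.collect.observability.metrics";
    static final String DEFAULT_COLLECT_OBSERVABILITY_METRICS = "false"; // opt-in by default

    static boolean shouldCollectObservabilityMetrics(Properties props) {
        return Boolean.parseBoolean(
            props.getProperty(COLLECT_OBSERVABILITY_METRICS, DEFAULT_COLLECT_OBSERVABILITY_METRICS));
    }
}
```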

##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieCreateHandle.java
##########
@@ -113,13 +121,15 @@ public void write(HoodieRecord record, Option<IndexedRecord> avroRecord) {
     Option recordMetadata = record.getData().getMetadata();
     try {
       if (avroRecord.isPresent()) {
+        writeTimer.startTimer();

Review comment:
       If an exception is thrown, endTimer() is not called. Is it ok to call startTimer() multiple times without endTimer()?
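   One way to guarantee the timer is closed on the exception path is a try/finally around the write, sketched here with a minimal stand-in timer (the real `HoodieTimer` semantics may differ):

```java
// Minimal stand-in timer to illustrate the concern; the real
// HoodieTimer API may behave differently.
class TimerSketch {
    private long startNanos = -1;

    void startTimer() {
        if (startNanos != -1) {
            // Double-start without endTimer(): fail loudly here so the
            // bug is visible instead of silently skewing the metric.
            throw new IllegalStateException("timer already running");
        }
        startNanos = System.nanoTime();
    }

    long endTimer() {
        long elapsed = System.nanoTime() - startNanos;
        startNanos = -1;
        return elapsed;
    }
}

class TimedWriteSketch {
    // try/finally guarantees endTimer() runs even when the write throws,
    // so a later startTimer() never sees a dangling timer.
    static void timedWrite(TimerSketch timer, Runnable write) {
        timer.startTimer();
        try {
            write.run();
        } finally {
            timer.endTimer();
        }
    }
}
```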

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/model/HoodieObservabilityStat.java
##########
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hudi.common.model;
+
+import org.apache.hudi.common.metrics.Registry;
+import org.apache.hudi.common.util.Option;
+
+import java.io.Serializable;
+
+/**
+ * Observability related metrics collection operations.
+ */
+public class HoodieObservabilityStat implements Serializable {
+  public static final String OBSERVABILITY_REGISTRY_NAME = "Observability";
+
+  // define a unique metric name string for each metric to be collected.
+  public static final String PARQUET_NORMALIZED_WRITE_TIME = "writeTimePerRecordInUSec";
+  public static final String PARQUET_CUMULATIVE_WRITE_TIME = "cumulativeParquetWriteTimeInUSec";
+  public static final String PARQUET_WRITE_TIME_PER_MB_IN_USEC = "writeTimePerMBInUSec";
+  public static final String PARQUET_WRITE_THROUGHPUT_MBPS = "writeThroughputMBps";
+  public static final String TOTAL_RECORDS_WRITTEN = "totalRecordsWritten";
+
+  public enum WriteType {
+    INSERT,
+    UPSERT,
+    UPDATE,
+  }
+
+  public static long ONE_MB = 1024 * 1024;
+  public static long USEC_PER_SEC = 1000 * 1000;
+  Option<Registry> observabilityRegistry;
+  String tableName;
+  WriteType writeType;
+  String hostName;
+  Long stageId;
+  Long partitionId;
+
+  public HoodieObservabilityStat(Option<Registry> registry, String tableName, WriteType type, String host,
+                                 long stageId, long partitionId) {
+    this.observabilityRegistry = registry;
+    this.tableName = tableName;
+    this.writeType = type;
+    this.hostName = host;
+    this.stageId = stageId;
+    this.partitionId = partitionId;
+  }
+
+  private String getWriteMetricWithTypeHostNameAndPartitionId(String tableName, String metric, String type,
+                                                              String host, long partitionId) {
+    return String.format("%s.%s.%s.%s.%d", tableName, metric, type, host, partitionId);
+  }
+
+  private String getConsolidatedWriteMetricKey(String tableName, String metric, String type) {
+    return String.format("%s.consolidated.%s.%s", tableName, metric, type);
+  }
+
+  public void recordWriteStats(long totalRecs, long cumulativeWriteTimeInMsec, long fileSizeInBytes) {

Review comment:
       Can this be done within the HoodieObservabilityMetrics while merging the updates from each executor?
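   Driver-side merging of per-executor counters could be shaped like this (a self-contained sketch; `ExecutorWriteStat`, `MergedObservabilityStats`, and the merge/derivation methods are hypothetical names, not actual Hudi APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-executor stat snapshot; field names are illustrative.
class ExecutorWriteStat {
    final long totalRecs;
    final long cumulativeWriteTimeMsec;

    ExecutorWriteStat(long totalRecs, long cumulativeWriteTimeMsec) {
        this.totalRecs = totalRecs;
        this.cumulativeWriteTimeMsec = cumulativeWriteTimeMsec;
    }
}

// Sketch of accumulating raw counters from each executor and deriving
// the normalized metrics once, driver-side, instead of in every handle.
class MergedObservabilityStats {
    private final Map<String, Long> counters = new HashMap<>();

    void merge(ExecutorWriteStat stat) {
        counters.merge("totalRecordsWritten", stat.totalRecs, Long::sum);
        counters.merge("cumulativeWriteTimeMsec", stat.cumulativeWriteTimeMsec, Long::sum);
    }

    // Derived metric (writeTimePerRecordInUSec), computed only after all merges.
    long writeTimePerRecordUsec() {
        long recs = counters.getOrDefault("totalRecordsWritten", 0L);
        long msec = counters.getOrDefault("cumulativeWriteTimeMsec", 0L);
        return recs == 0 ? 0 : (msec * 1000) / recs;
    }
}
```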
   
   

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDWriteClient.java
##########
@@ -70,25 +72,32 @@
    AbstractHoodieWriteClient<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>> {
 
  private static final Logger LOG = LogManager.getLogger(SparkRDDWriteClient.class);
+  protected final transient HoodieObservabilityMetrics observabilityMetrics;
 
  public SparkRDDWriteClient(HoodieEngineContext context, HoodieWriteConfig clientConfig) {
-    super(context, clientConfig);
+    this(context, clientConfig, Option.empty());
   }
 
   @Deprecated
  public SparkRDDWriteClient(HoodieEngineContext context, HoodieWriteConfig writeConfig, boolean rollbackPending) {
-    super(context, writeConfig);
+    this(context, writeConfig, rollbackPending, Option.empty());
   }
 
   @Deprecated
  public SparkRDDWriteClient(HoodieEngineContext context, HoodieWriteConfig writeConfig, boolean rollbackPending,
                              Option<EmbeddedTimelineService> timelineService) {
     super(context, writeConfig, timelineService);
+    this.observabilityMetrics = (HoodieObservabilityMetrics) Registry.getRegistry(
+        HoodieObservabilityStat.OBSERVABILITY_REGISTRY_NAME, HoodieObservabilityMetrics.class.getName());
+    observabilityMetrics.registerWithSpark(context, config);
   }
 
  public SparkRDDWriteClient(HoodieEngineContext context, HoodieWriteConfig writeConfig,
                              Option<EmbeddedTimelineService> timelineService) {
     super(context, writeConfig, timelineService);
+    this.observabilityMetrics = (HoodieObservabilityMetrics) Registry.getRegistry(

Review comment:
       There is an initialize function where other metric registries are initialized. Better to move this code there.
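   Consolidating the registry lookup in a single initialize method, instead of repeating it in each constructor, might look roughly like this (illustrative names; `RegistrySketch`/`initializeMetricsRegistries` are stand-ins for Hudi's actual Registry API and initialize function):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a named registry cache, mirroring the
// Registry.getRegistry(name, ...) pattern in the diff above.
class RegistrySketch {
    private static final Map<String, RegistrySketch> INSTANCES = new ConcurrentHashMap<>();

    static RegistrySketch getRegistry(String name) {
        return INSTANCES.computeIfAbsent(name, k -> new RegistrySketch());
    }
}

class WriteClientSketch {
    private RegistrySketch observabilityRegistry;

    WriteClientSketch() {
        // Single place where all metric registries are wired up, so
        // every constructor path goes through the same setup.
        initializeMetricsRegistries();
    }

    private void initializeMetricsRegistries() {
        observabilityRegistry = RegistrySketch.getRegistry("Observability");
    }

    RegistrySketch getObservabilityRegistry() {
        return observabilityRegistry;
    }
}
```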



