[ 
https://issues.apache.org/jira/browse/HADOOP-18190?focusedWorklogId=793329&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-793329
 ]

ASF GitHub Bot logged work on HADOOP-18190:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Jul/22 16:12
            Start Date: 20/Jul/22 16:12
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on code in PR #4458:
URL: https://github.com/apache/hadoop/pull/4458#discussion_r925717550


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/StreamStatisticNames.java:
##########
@@ -393,6 +393,12 @@ public final class StreamStatisticNames {
   public static final String STREAM_READ_PREFETCH_OPERATIONS
       = "stream_read_prefetch_operations";
 
+  /**
+   * Total number of failed prefetching operations.
+   */
+  public static final String STREAM_READ_FAILED_PREFETCH_OPERATIONS

Review Comment:
   not needed. every duration statistic you build for a store automatically 
gets .failures entries (count, min, mean, max); search for 
StoreStatisticNames.SUFFIX_FAILURES if you want to see the usages
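
   for context, a minimal sketch of that behaviour (the store wiring and 
doPrefetch() are illustrative, not the patch's code):

       import org.apache.hadoop.fs.statistics.DurationTracker;
       import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore;

       import static org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.iostatisticsStore;

       // registering a duration statistic creates the base entries and the
       // ".failures" variants (count/min/mean/max for each) automatically
       IOStatisticsStore store = iostatisticsStore()
           .withDurationTracking(STREAM_READ_PREFETCH_OPERATIONS)
           .build();

       DurationTracker tracker = store.trackDuration(STREAM_READ_PREFETCH_OPERATIONS);
       try {
         doPrefetch();      // hypothetical operation being timed
       } catch (Exception e) {
         tracker.failed();  // this duration now updates the .failures entries
         throw e;
       } finally {
         tracker.close();   // updates count/min/mean/max on completion
       }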



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3Reader.java:
##########
@@ -95,26 +102,24 @@ public void close() {
   private int readOneBlockWithRetries(ByteBuffer buffer, long offset, int size)
       throws IOException {
 
-    this.s3File.getStatistics().readOperationStarted(offset, size);
+    this.streamStatistics.readOperationStarted(offset, size);
     Invoker invoker = this.s3File.getReadInvoker();
 
-    int invokerResponse = invoker.retry(
-        "read", this.s3File.getPath(), true,
-        () -> {
+    int invokerResponse = invoker.retry("read", this.s3File.getPath(), true,
+        trackDurationOfOperation(streamStatistics, STREAM_READ_REMOTE_BLOCK_READ, () -> {

Review Comment:
   FYI, this will do the right thing here: it updates the stream statistics 
with the pass/fail stats differentiated
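
   a sketch of what the wrapper provides (buffer/offset/size and 
readOneBlock() stand in for the surrounding method's state):

       import org.apache.hadoop.util.functional.CallableRaisingIOE;

       import static org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfOperation;

       // wraps the callable so each invocation is timed against
       // STREAM_READ_REMOTE_BLOCK_READ; a thrown IOException is recorded
       // against the ".failures" entries before being rethrown
       CallableRaisingIOE<Integer> timedRead =
           trackDurationOfOperation(streamStatistics, STREAM_READ_REMOTE_BLOCK_READ,
               () -> readOneBlock(buffer, offset, size));   // hypothetical read body
       int bytesRead = timedRead.apply();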



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/common/TestBufferPool.java:
##########
@@ -37,28 +38,33 @@ public class TestBufferPool extends AbstractHadoopTestBase {
   @Test
   public void testArgChecks() throws Exception {
     // Should not throw.
-    BufferPool pool = new BufferPool(POOL_SIZE, BUFFER_SIZE);
+    BufferPool pool = new BufferPool(POOL_SIZE, BUFFER_SIZE,
+        EmptyS3AStatisticsContext.EMPTY_INPUT_STREAM_STATISTICS);
 
     // Verify it throws correctly.
     ExceptionAsserts.assertThrows(
         IllegalArgumentException.class,
         "'size' must be a positive integer",
-        () -> new BufferPool(0, 10));
+        () -> new BufferPool(0, 10, EmptyS3AStatisticsContext.EMPTY_INPUT_STREAM_STATISTICS));

Review Comment:
   use newInputStreamStatistics() which is part of S3AStatisticsContext.
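
   something like this (variable naming illustrative):

       import org.apache.hadoop.fs.s3a.statistics.S3AInputStreamStatistics;
       import org.apache.hadoop.fs.s3a.statistics.impl.EmptyS3AStatisticsContext;

       // obtain the stream statistics through the S3AStatisticsContext
       // factory method rather than reaching for the constant directly
       S3AInputStreamStatistics streamStatistics =
           new EmptyS3AStatisticsContext().newInputStreamStatistics();
       BufferPool pool = new BufferPool(POOL_SIZE, BUFFER_SIZE, streamStatistics);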



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/common/EmptyPrefetchingStatistics.java:
##########
@@ -0,0 +1,59 @@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements. See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership. The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License. You may obtain a copy of the License at
+  *
+  *   http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing,
+  * software distributed under the License is distributed on an
+  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  * KIND, either express or implied. See the License for the
+  * specific language governing permissions and limitations
+  * under the License.
+  */
+
+package org.apache.hadoop.fs.common;
+
+import java.time.Duration;
+
+public final class EmptyPrefetchingStatistics implements PrefetchingStatistics {

Review Comment:
   add a static final instance of the class and make the constructor private; 
no need to create more than one
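
   i.e. the usual stateless-singleton shape, roughly (the accessor name and 
no-op body are suggestions, not the patch's code):

       public final class EmptyPrefetchingStatistics implements PrefetchingStatistics {

         private static final EmptyPrefetchingStatistics INSTANCE =
             new EmptyPrefetchingStatistics();

         // stateless, so a single shared instance is enough
         private EmptyPrefetchingStatistics() {
         }

         public static EmptyPrefetchingStatistics getInstance() {
           return INSTANCE;
         }

         @Override
         public void prefetchOperationStarted() {
           // no-op
         }

         // ...the remaining interface methods stay as no-ops...
       }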



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/common/PrefetchingStatistics.java:
##########
@@ -0,0 +1,41 @@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements. See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership. The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License. You may obtain a copy of the License at
+  *
+  *   http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing,
+  * software distributed under the License is distributed on an
+  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  * KIND, either express or implied. See the License for the
+  * specific language governing permissions and limitations
+  * under the License.
+  */
+
+package org.apache.hadoop.fs.common;
+
+import java.time.Duration;
+
+import org.apache.hadoop.fs.statistics.IOStatisticsSource;
+
+public interface PrefetchingStatistics extends IOStatisticsSource {

Review Comment:
   needs javadocs i'm afraid
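
   e.g. something along these lines (wording illustrative; the method shown 
is the one visible elsewhere in this diff):

       /**
        * Statistics collected by the prefetching input stream.
        * Implementations publish them through the IOStatistics API.
        */
       public interface PrefetchingStatistics extends IOStatisticsSource {

         /**
          * A prefetch operation has started.
          */
         void prefetchOperationStarted();
       }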



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/common/CachingBlockManager.java:
##########
@@ -318,6 +328,7 @@ private void readBlock(BufferData data, boolean isPrefetch, BufferData.State...
         }
 
         if (isPrefetch) {
+          prefetchingStatistics.prefetchOperationStarted();

Review Comment:
   have this return a DurationTracker whose close() will update the statistic. 
on an exception, call its failed() method to update the .failures keys instead
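
   that pattern, sketched (assuming prefetchOperationStarted() is changed to 
return the tracker as suggested):

       DurationTracker tracker = prefetchingStatistics.prefetchOperationStarted();
       try {
         // ...issue the prefetch read...
       } catch (Exception e) {
         tracker.failed();   // mark as failed: the duration lands in the .failures keys
         throw e;
       } finally {
         tracker.close();    // always close: updates count/min/mean/max
       }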





Issue Time Tracking
-------------------

    Worklog Id:     (was: 793329)
    Time Spent: 2.5h  (was: 2h 20m)

> s3a prefetching streams to collect iostats on prefetching operations
> --------------------------------------------------------------------
>
>                 Key: HADOOP-18190
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18190
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Ahmar Suhail
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There is a lot more happening in reads, so there's a lot more data to collect 
> and publish in IO stats for us to view in a summary at the end of processes 
> as well as get from the stream while it is active.
> Some useful ones would seem to be:
> counters
>  * is in memory. using 0 or 1 here lets aggregation reports count the total 
> number of memory-cached files.
>  * prefetching operations executed
>  * errors during prefetching
> gauges
>  * number of blocks in cache
>  * total size of blocks
>  * active prefetches
>  * active memory used
> duration tracking count/min/mean/max
>  * time to fetch a block
>  * time queued before the actual fetch begins
>  * time a reader is blocked waiting for a block fetch to complete
> and some info on cache use itself
>  * number of blocks discarded unread
>  * number of prefetched blocks later used
>  * number of backward seeks to a prefetched block
>  * number of forward seeks to a prefetched block
> the key ones I care about are
>  # memory consumption
>  # can we determine if cache is working (reads with cache hit) and when it is 
> not (misses, wasted prefetches)
>  # time blocked on executors
> The stats need to be accessible on a stream even when closed, and aggregated 
> into the FS. Once we get per-thread stats contexts we can publish there too, 
> and collect in worker threads for reporting in task commits.
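
A sketch of how such a set of statistics could be wired up with the existing 
IOStatisticsStore builder; every statistic name below is a hypothetical 
placeholder, not a name from the patch:

    import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore;

    import static org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.iostatisticsStore;

    IOStatisticsStore stats = iostatisticsStore()
        // counters: prefetch activity and cache effectiveness
        .withCounters("stream_read_prefetch_operations",
            "stream_read_blocks_discarded_unread")
        // gauges: current cache and prefetch state
        .withGauges("stream_read_blocks_in_cache",
            "stream_read_active_prefetches",
            "stream_read_active_memory_in_use")
        // duration tracking: count/min/mean/max kept automatically
        .withDurationTracking("stream_read_block_fetch",
            "stream_read_block_queue_wait",
            "stream_read_blocked_on_prefetch")
        .build();

    stats.incrementCounter("stream_read_prefetch_operations");
    stats.setGauge("stream_read_blocks_in_cache", 4);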


