[ 
https://issues.apache.org/jira/browse/HDDS-1094?focusedWorklogId=299162&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-299162
 ]

ASF GitHub Bot logged work on HDDS-1094:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 22/Aug/19 04:40
            Start Date: 22/Aug/19 04:40
    Worklog Time Spent: 10m 
      Work Description: arp7 commented on pull request #1323: HDDS-1094. 
Performance test infrastructure : skip writing user data on Datanode. 
Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1323#discussion_r316493928
 
 

 ##########
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/ChunkManagerFactory.java
 ##########
 @@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.keyvalue.impl;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_PERSISTDATA_DEFAULT;
+
+/**
+ * Selects an appropriate ChunkManager implementation based on the
+ * configuration. The Ozone ChunkManager is a singleton.
+ */
+public final class ChunkManagerFactory {
+  static final Logger LOG = LoggerFactory.getLogger(ChunkManagerFactory.class);
+
+  // volatile is required for safe double-checked locking in getChunkManager
+  private static volatile ChunkManager instance = null;
+  private static volatile boolean syncChunks = false;
+
+  private ChunkManagerFactory() {
+  }
+
+  public static ChunkManager getChunkManager(Configuration config,
+      boolean sync) {
+    if (instance == null) {
+      synchronized (ChunkManagerFactory.class) {
+        if (instance == null) {
+          // record syncChunks before publishing instance so callers that
+          // skip the synchronized block observe a consistent value
+          syncChunks = sync;
+          instance = createChunkManager(config, sync);
+        }
+      }
+    }
+
+    Preconditions.checkArgument((syncChunks == sync),
+        "value of sync conflicts with previous invocation");
+    return instance;
+  }
+
+  private static ChunkManager createChunkManager(Configuration config,
+      boolean sync) {
+    ChunkManager manager = null;
+    boolean persist = config.getBoolean(HDDS_CONTAINER_PERSISTDATA,
+        HDDS_CONTAINER_PERSISTDATA_DEFAULT);
+
+    if (!persist) {
+      boolean scrubber = config.getBoolean(
+          HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED,
+          HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED_DEFAULT);
+      if (scrubber) {
+        // Data Scrubber needs to be disabled for non-persistent chunks.
+        LOG.warn("Failed to set " + HDDS_CONTAINER_PERSISTDATA + " to false."
+            + " Please also set " + HddsConfigKeys.HDDS_CONTAINERSCRUB_ENABLED
+            + " to false to enable non-persistent containers.");
+        persist = true;
+      }
+    }
+
+    if (persist) {
+      manager = new ChunkManagerImpl(sync);
+    } else {
+      LOG.warn(HDDS_CONTAINER_PERSISTDATA
 
 Review comment:
  Also augment this message to say that this setting should never be enabled 
outside of a test environment.
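For context, the factory in the diff above relies on double-checked locking. A minimal self-contained sketch of that pattern is shown below; the class name `LazySingleton` is illustrative and not code from the PR. The `volatile` qualifier is what makes the pattern safe on the JVM memory model.

```java
/**
 * Illustrative sketch of the double-checked locking idiom used by
 * ChunkManagerFactory. Not code from the PR.
 */
public final class LazySingleton {
  // volatile guarantees safe publication of the constructed instance
  private static volatile LazySingleton instance;

  private LazySingleton() {
  }

  public static LazySingleton getInstance() {
    LazySingleton result = instance;  // single volatile read on the fast path
    if (result == null) {
      synchronized (LazySingleton.class) {
        result = instance;            // re-check under the lock
        if (result == null) {
          instance = result = new LazySingleton();
        }
      }
    }
    return result;
  }
}
```

Without `volatile`, a second thread could observe a non-null but partially constructed instance; the same concern applies to the `instance` field in the factory above.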
 


Issue Time Tracking
-------------------

    Worklog Id:     (was: 299162)
    Time Spent: 1h  (was: 50m)

> Performance test infrastructure : skip writing user data on Datanode
> --------------------------------------------------------------------
>
>                 Key: HDDS-1094
>                 URL: https://issues.apache.org/jira/browse/HDDS-1094
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>            Reporter: Supratim Deka
>            Assignee: Supratim Deka
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Goal:
> Make Ozone chunk read/write operations CPU/network bound for specially 
> constructed performance micro-benchmarks.
> Remove disk bandwidth and latency constraints: running the Ozone data path 
> against extremely low-latency, high-throughput storage would expose 
> performance bottlenecks in the flow, but low-latency storage (NVMe flash 
> drives, storage-class memory, etc.) is expensive and of limited 
> availability. Is there a workaround that achieves similar running conditions 
> without actually having the low-latency storage, at least for specially 
> constructed datasets, for example zero-filled blocks (*not* zero-length 
> blocks)?
> Required characteristics of the solution:
> No changes to the Ozone client, OM, or SCM. Changes limited to the Datanode, 
> with a minimal footprint in Datanode code.
> Possible high-level approach:
> The ChunkManager and ChunkUtils can allow writeChunk to drop zero-filled 
> chunks without actually writing to the local filesystem. Similarly, 
> readChunk can construct a zero-filled buffer without reading from the local 
> filesystem whenever it detects a zero-filled chunk. Specifics of how to 
> detect and record a zero-filled chunk can be discussed on this jira, along 
> with how to control this behaviour and make it available only for internal 
> testing.
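The zero-fill detection the description proposes could be sketched as follows. This is a hypothetical helper, not code from the PR; the class and method names are illustrative only.

```java
import java.nio.ByteBuffer;

/**
 * Illustrative sketch only: detects whether a chunk buffer is entirely
 * zero-filled, so writeChunk could skip the filesystem write and readChunk
 * could synthesize the buffer instead of reading it back.
 */
public final class ZeroChunkDetector {

  private ZeroChunkDetector() {
  }

  /**
   * Returns true if every byte between position and limit is zero.
   * Uses absolute gets so the caller's buffer position is left untouched.
   */
  public static boolean isZeroFilled(ByteBuffer data) {
    for (int i = data.position(); i < data.limit(); i++) {
      if (data.get(i) != 0) {
        return false;
      }
    }
    return true;
  }
}
```

A real implementation would also need to record the zero-filled property in chunk metadata so readChunk can detect it without touching disk, which is exactly the design point the description leaves open for discussion.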



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
