[ https://issues.apache.org/jira/browse/HDDS-2334?focusedWorklogId=331081&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-331081 ]

ASF GitHub Bot logged work on HDDS-2334:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Oct/19 16:40
            Start Date: 20/Oct/19 16:40
    Worklog Time Spent: 10m 
      Work Description: adoroszlai commented on pull request #65: HDDS-2334. 
Dummy chunk manager fails with length mismatch error
URL: https://github.com/apache/hadoop-ozone/pull/65
 
 
   ## What changes were proposed in this pull request?
   
   Data size validation logic was recently 
[changed](https://github.com/apache/hadoop-ozone/commit/e70ea7b66ca3326c3b00ddc3e4af7144d48ea5f5#diff-92341865368a6b82a1430bcb40bd4264R83)
 for the real `ChunkManager`, but not for the dummy implementation.  This change 
extracts the validation logic and reuses it for the dummy implementation, too.  
This restores the ability to skip writing data to disk (for performance testing).
   
   https://issues.apache.org/jira/browse/HDDS-2334
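   
   A minimal sketch of what such a shared validation helper could look like 
(class and method names here are hypothetical, not the actual ones in the 
patch): it checks the declared chunk length against the bytes remaining from 
the buffer's current position, so extra leading "header" bytes do not fail the 
check.
   
   ```java
   // Hedged sketch of an extracted, shared length check; not the patch itself.
   import java.nio.ByteBuffer;
   
   public class ChunkLengthCheck {
   
       /**
        * Validates the declared chunk length against the bytes remaining in
        * the buffer from its current position, so that leading "header" bytes
        * (already consumed by positioning the buffer) do not fail the check.
        */
       public static void validateChunkSize(ByteBuffer data, long declaredLen) {
           long available = data.remaining();
           if (available < declaredLen) {
               throw new IllegalArgumentException(
                   "data array does not match the length specified."
                   + " DataLen: " + declaredLen + " Byte Array: " + available);
           }
       }
   }
   ```
   
   Both the real and the dummy `ChunkManager` could then call one such helper 
instead of duplicating the check.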
   
   ## How was this patch tested?
   
   Changed the existing unit test to use a buffer with an additional "header" 
at the beginning.  Added test cases for the dummy implementation.
   
   Tested on compose cluster with the following additional configs:
   
   ```
   OZONE-SITE.XML_hdds.container.chunk.persistdata=false
   OZONE-SITE.XML_ozone.client.verify.checksum=false
   ```
   
   ```
   $ ozone sh volume create vol1
   $ ozone sh bucket create vol1/buck1
   $ ozone sh key put vol1/buck1/key1 /etc/passwd
   $ ozone sh key get vol1/buck1/key1 asdf
   $ ls -l /etc/passwd
   -rw-r--r-- 1 root root 671 Jun 17 15:33 /etc/passwd
   $ wc asdf
     0   0 671 asdf
   ```
   
   Also tested the regular, "persistent" chunk manager:
   
   ```
   $ docker-compose exec scm ozone freon rk --numOfThreads 1 --numOfVolumes 1 
--numOfBuckets 1 --replicationType RATIS --factor ONE --validateWrites 
--keySize 1024 --numOfKeys 10 --bufferSize 1024
   ...
   Status: Success
   Git Base Revision: e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
   Number of Volumes created: 1
   Number of Buckets created: 1
   Number of Keys added: 10
   Ratis replication factor: ONE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,182
   Average Time spent in bucket creation: 00:00:00,030
   Average Time spent in key creation: 00:00:00,290
   Average Time spent in key write: 00:00:02,379
   Total bytes written: 10240
   Total number of writes validated: 10
   Writes validated: 100.0 %
   Successful validation: 10
   Unsuccessful validation: 0
   Total Execution time: 00:00:09,389
   ```
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 331081)
    Remaining Estimate: 0h
            Time Spent: 10m

> Dummy chunk manager fails with length mismatch error
> ----------------------------------------------------
>
>                 Key: HDDS-2334
>                 URL: https://issues.apache.org/jira/browse/HDDS-2334
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: test
>            Reporter: Attila Doroszlai
>            Assignee: Attila Doroszlai
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1094 added a config option ({{hdds.container.chunk.persistdata=false}}) 
> to drop chunks instead of writing them to disk.  Currently this option 
> triggers the following error with any key size:
> {noformat}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  data array does not match the length specified. DataLen: 16777216 Byte 
> Array: 16777478
>       at 
> org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerDummyImpl.writeChunk(ChunkManagerDummyImpl.java:87)
>       at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleWriteChunk(KeyValueHandler.java:695)
>       at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:176)
>       at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:277)
>       at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:150)
>       at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:413)
>       at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:423)
>       at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$1(ContainerStateMachine.java:458)
>       at 
> java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
>       at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>       at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>       at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
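
For context on the numbers in the quoted error (an illustration; the
interpretation that the surplus bytes are the buffer's leading "header" is an
assumption consistent with the PR description): the byte array is 262 bytes
longer than the declared chunk length, so a strict whole-array equality check
necessarily fails for any key size.

```java
// Illustration only: the figures come from the quoted error message; the
// "extra header bytes" reading is an assumption, not taken from the patch.
public class MismatchArithmetic {

    // Difference between the byte array length and the declared chunk data
    // length reported in the StorageContainerException.
    public static long extraBytes() {
        long dataLen = 16_777_216L;   // "DataLen" (16 MiB)
        long arrayLen = 16_777_478L;  // "Byte Array"
        return arrayLen - dataLen;
    }

    public static void main(String[] args) {
        System.out.println(extraBytes() + " extra bytes beyond the chunk data");
    }
}
```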



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
