[
https://issues.apache.org/jira/browse/HDDS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2528:
Summary: Sonar : code smell category issues in CommitWatcher (was: Sonar : change return type to interface instead of implementation in CommitWatcher)
[
https://issues.apache.org/jira/browse/HDDS-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2528:
Description:
Sonar issues for CommitWatcher.java:
use interface instead of implementation:
[
https://issues.apache.org/jira/browse/HDDS-2586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2586:
Description:
Sonar reports CC value of 34:
Supratim Deka created HDDS-2586:
---
Summary: Sonar : refactor getAvailableNodesCount in
NetworkTopologyImpl to reduce cognitive complexity
Key: HDDS-2586
URL: https://issues.apache.org/jira/browse/HDDS-2586
Supratim Deka created HDDS-2585:
---
Summary: Sonar : refactor getDistanceCost in NetworkTopologyImpl
to reduce cognitive complexity
Key: HDDS-2585
URL: https://issues.apache.org/jira/browse/HDDS-2585
Supratim Deka created HDDS-2584:
---
Summary: Sonar : refactor chooseNodeInternal in
NetworkTopologyImpl to reduce cognitive complexity
Key: HDDS-2584
URL: https://issues.apache.org/jira/browse/HDDS-2584
[
https://issues.apache.org/jira/browse/HDDS-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2582:
Description:
Sonar reports CC value 16 :
Supratim Deka created HDDS-2583:
---
Summary: Sonar : refactor getRangeKVs in RocksDBStore to reduce
cognitive complexity
Key: HDDS-2583
URL: https://issues.apache.org/jira/browse/HDDS-2583
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-2582:
---
Summary: Sonar : reduce cognitive complexity of getObject in
OzoneConfiguration
Key: HDDS-2582
URL: https://issues.apache.org/jira/browse/HDDS-2582
Project: Hadoop Distributed Data Store
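The cluster of tickets above (HDDS-2582 through HDDS-2586) all request the same kind of change: Sonar's cognitive-complexity rule flags deeply nested branching, and the standard fix is to flatten the nesting with guard clauses and extracted helpers. A minimal hypothetical sketch of the before/after shape — the class and method names are illustrative, not the actual OzoneConfiguration or NetworkTopologyImpl code:

```java
// Hypothetical sketch of a cognitive-complexity refactoring; names are
// illustrative, not taken from the Ozone codebase.
public final class ComplexityRefactor {

    // Before: each nesting level adds to Sonar's cognitive-complexity score.
    static String describeBefore(Integer value) {
        String result;
        if (value != null) {
            if (value > 0) {
                if (value % 2 == 0) {
                    result = "positive even";
                } else {
                    result = "positive odd";
                }
            } else {
                result = "non-positive";
            }
        } else {
            result = "missing";
        }
        return result;
    }

    // After: guard clauses exit early, so the same behavior scores far lower.
    static String describeAfter(Integer value) {
        if (value == null) {
            return "missing";
        }
        if (value <= 0) {
            return "non-positive";
        }
        return value % 2 == 0 ? "positive even" : "positive odd";
    }
}
```

Both versions are behaviorally identical; only the control-flow shape changes.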
[
https://issues.apache.org/jira/browse/HDDS-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2526:
Status: Patch Available (was: Open)
> Sonar : use format specifiers in Log inside HddsConfServlet
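The fix Sonar asks for in HDDS-2526 is SLF4J-style parameterized logging: string concatenation builds the message even when the log level is disabled, while `{}` placeholders defer formatting until the level check passes. The actual HddsConfServlet statements are not shown in this digest, so the sketch below demonstrates the pattern with a dependency-free stand-in for the SLF4J formatter:

```java
// Demonstrates the "use format specifiers in Log" fix.
// Before (flagged):  LOG.debug("config key " + key + " requested");
// After (SLF4J):     LOG.debug("config key {} requested", key);
// The helper mimics what the parameterized form does internally, so the
// deferred-formatting behavior is testable without the SLF4J dependency.
public final class LogFormatDemo {

    static String render(boolean debugEnabled, String template, Object arg) {
        if (!debugEnabled) {
            return null;                        // argument is never stringified
        }
        return template.replace("{}", String.valueOf(arg));
    }
}
```

With concatenation, the message string is built on every call; with the placeholder form, formatting cost is paid only when the level is enabled.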
[
https://issues.apache.org/jira/browse/HDDS-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2525:
Status: Patch Available (was: Open)
> Sonar : replace lambda with method reference in SCM BufferPool
[
https://issues.apache.org/jira/browse/HDDS-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2524:
Status: Patch Available (was: Open)
> Sonar : clumsy error handling in BlockOutputStream
Supratim Deka created HDDS-2532:
---
Summary: Sonar : fix issues in OzoneQuota
Key: HDDS-2532
URL: https://issues.apache.org/jira/browse/HDDS-2532
Project: Hadoop Distributed Data Store
Issue
[
https://issues.apache.org/jira/browse/HDDS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2531:
Description:
Sonar issue in executePutBlock, duplicate string literal "blockID" :
Supratim Deka created HDDS-2531:
---
Summary: Sonar : remove duplicate string literals in
BlockOutputStream
Key: HDDS-2531
URL: https://issues.apache.org/jira/browse/HDDS-2531
Project: Hadoop Distributed Data Store
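For HDDS-2531 the conventional fix is hoisting the repeated literal into a single `private static final` constant so that a rename or typo fix touches one place. A hypothetical sketch — the field and method names are illustrative, not BlockOutputStream's actual code:

```java
// Hypothetical sketch of the "duplicate string literal" fix; names are
// illustrative, not BlockOutputStream's actual members.
public final class LiteralConstants {

    // After the fix: one definition, referenced everywhere.
    static final String BLOCK_ID = "blockID";

    static String putBlockTrace(long id) {
        return BLOCK_ID + "=" + id;           // was: "blockID" + "=" + id
    }

    static String flushTrace(long id) {
        return BLOCK_ID + " flushed: " + id;  // was another "blockID" literal
    }
}
```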
[
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2530:
Description:
Sonar report :
Reduce cognitive complexity from 33 to 15
[
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2530:
Summary: Sonar : refactor verifyResourceName in HddsClientUtils to fix Sonar errors (was: Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive Complexity)
[
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2530:
Description:
Sonar report :
[
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2530:
Component/s: Ozone Client
> Sonar : refactor verifyResourceName in HddsClientUtils to reduce Cognitive Complexity
[
https://issues.apache.org/jira/browse/HDDS-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2530:
Summary: Sonar : refactor verifyResourceName in HddsClientUtils to reduce
Cognitive Complexity
Supratim Deka created HDDS-2530:
---
Summary: Sonar : refactor method to reduce Cognitive Complexity
Key: HDDS-2530
URL: https://issues.apache.org/jira/browse/HDDS-2530
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-2529:
---
Summary: Sonar : return interface instead of implementation class
in XceiverClientRatis getCommitInfoMap
Key: HDDS-2529
URL: https://issues.apache.org/jira/browse/HDDS-2529
Supratim Deka created HDDS-2528:
---
Summary: Sonar : change return type to interface instead of
implementation in CommitWatcher
Key: HDDS-2528
URL: https://issues.apache.org/jira/browse/HDDS-2528
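HDDS-2528 and HDDS-2529 both cover the Sonar rule that a method should declare its return type as the collection interface rather than a concrete implementation class. A minimal sketch of the pattern, assuming a hypothetical accessor; CommitWatcher's real fields and methods differ:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "return interface, not implementation" fix.
public final class InterfaceReturnDemo {

    private final ConcurrentHashMap<Long, String> commitIndexMap =
        new ConcurrentHashMap<>();

    // Before (flagged): ConcurrentHashMap<Long, String> getCommitIndexMap()
    // After: callers depend only on Map, leaving the implementation free to
    // change without touching call sites.
    public Map<Long, String> getCommitIndexMap() {
        return commitIndexMap;
    }

    public InterfaceReturnDemo track(long index, String blockId) {
        commitIndexMap.put(index, blockId);
        return this;  // fluent style, purely for convenience here
    }
}
```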
Supratim Deka created HDDS-2527:
---
Summary: Sonar : remove redundant temporary assignment in
HddsVersionProvider
Key: HDDS-2527
URL: https://issues.apache.org/jira/browse/HDDS-2527
Project: Hadoop Distributed Data Store
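The smell in HDDS-2527 is assigning a result to a local variable only to return it on the next line; Sonar asks for returning the expression directly. A hypothetical sketch (HddsVersionProvider's actual code is not shown in this digest):

```java
// Hypothetical sketch of removing a redundant temporary assignment.
public final class TempAssignDemo {

    // Before (flagged by Sonar):
    //   String[] version = new String[] {"Ozone " + tag};
    //   return version;
    // After: return the expression directly.
    static String[] buildVersion(String tag) {
        return new String[] {"Ozone " + tag};
    }
}
```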
Supratim Deka created HDDS-2526:
---
Summary: Sonar : use format specifiers in Log inside
HddsConfServlet
Key: HDDS-2526
URL: https://issues.apache.org/jira/browse/HDDS-2526
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-2525:
---
Summary: Sonar : replace lambda with method reference in SCM
BufferPool
Key: HDDS-2525
URL: https://issues.apache.org/jira/browse/HDDS-2525
Project: Hadoop Distributed Data Store
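The rule behind HDDS-2525 is that a lambda which only forwards to an existing method should be written as a method reference. A generic sketch of the before/after — BufferPool's real call site differs, only the pattern is shown:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the "replace lambda with method reference" fix.
public final class MethodRefDemo {

    static List<Integer> lengthsBefore(List<String> names) {
        return names.stream()
            .map(s -> s.length())           // lambda form (flagged)
            .collect(Collectors.toList());
    }

    static List<Integer> lengthsAfter(List<String> names) {
        return names.stream()
            .map(String::length)            // method reference, same behavior
            .collect(Collectors.toList());
    }
}
```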
Supratim Deka created HDDS-2524:
---
Summary: Sonar : clumsy error handling in BlockOutputStream
validateResponse
Key: HDDS-2524
URL: https://issues.apache.org/jira/browse/HDDS-2524
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2478:
Status: Patch Available (was: Open)
> Sonar : remove temporary variable in
[
https://issues.apache.org/jira/browse/HDDS-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2478:
Description:
Sonar issues :
[
https://issues.apache.org/jira/browse/HDDS-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2478:
Summary: Sonar : remove temporary variable in XceiverClientGrpc.sendCommand (was: Sonar : remove temporary variable in XceiverClientSpi.sendCommand)
Supratim Deka created HDDS-2480:
---
Summary: Sonar : remove log spam for exceptions inside
XceiverClientGrpc.reconnect
Key: HDDS-2480
URL: https://issues.apache.org/jira/browse/HDDS-2480
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-2479:
---
Summary: Sonar : replace instanceof with catch block in
XceiverClientGrpc.sendCommandWithRetry
Key: HDDS-2479
URL: https://issues.apache.org/jira/browse/HDDS-2479
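HDDS-2479 targets the pattern of catching a broad exception and dispatching on `instanceof`; Sonar prefers letting dedicated catch clauses do the type dispatch. A sketch under assumed exception types — XceiverClientGrpc's actual retry logic is not reproduced here:

```java
// Hypothetical sketch of replacing an instanceof dispatch with catch blocks.
public final class CatchBlockDemo {

    // Before (flagged):
    //   } catch (Exception e) {
    //     if (e instanceof IllegalStateException) { ... } else { ... }
    //   }
    // After: the more specific catch clause must come first.
    static String classify(Runnable action) {
        try {
            action.run();
            return "ok";
        } catch (IllegalStateException e) {
            return "state: " + e.getMessage();
        } catch (RuntimeException e) {
            return "runtime: " + e.getMessage();
        }
    }
}
```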
Supratim Deka created HDDS-2478:
---
Summary: Sonar : remove temporary variable in
XceiverClientSpi.sendCommand
Key: HDDS-2478
URL: https://issues.apache.org/jira/browse/HDDS-2478
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-2466:
---
Summary: Split OM Key into a Prefix Part and a Name Part
Key: HDDS-2466
URL: https://issues.apache.org/jira/browse/HDDS-2466
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2208:
Status: Patch Available (was: Open)
> Propagate System Exceptions from OM transaction apply phase
[
https://issues.apache.org/jira/browse/HDDS-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2208:
Description:
The change for HDDS-2206 tracks system exceptions during preExecute phase of OM
[
https://issues.apache.org/jira/browse/HDDS-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2208:
Summary: Propagate System Exceptions from OM transaction apply phase (was: OzoneManagerStateMachine does not track failures in applyTransaction)
[
https://issues.apache.org/jira/browse/HDDS-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2206:
Status: Patch Available (was: Open)
> Separate handling for OMException and IOException in the Ozone Manager
[
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2175:
Labels: (was: pull-request-available)
> Propagate System Exceptions from the OzoneManager
>
Supratim Deka created HDDS-2208:
---
Summary: OzoneManagerStateMachine does not track failures in
applyTransaction
Key: HDDS-2208
URL: https://issues.apache.org/jira/browse/HDDS-2208
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-2206:
---
Summary: Separate handling for OMException and IOException in the
Ozone Manager
Key: HDDS-2206
URL: https://issues.apache.org/jira/browse/HDDS-2206
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939841#comment-16939841
]
Supratim Deka commented on HDDS-2175:
-
Note from [~aengineer] posted on the github PR:
Also are these
[
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2175:
Description:
Exceptions encountered while processing requests on the OM are categorized as
[
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2175:
Description:
Exceptions encountered while processing requests on the OM are categorized as
[
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2175:
Summary: Propagate System Exceptions from the OzoneManager (was: Propagate stack trace for OM Exceptions to the Client)
Supratim Deka created HDDS-2175:
---
Summary: Propagate stack trace for OM Exceptions to the Client
Key: HDDS-2175
URL: https://issues.apache.org/jira/browse/HDDS-2175
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDFS-14843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928155#comment-16928155
]
Supratim Deka commented on HDFS-14843:
--
+1
Thanks for the patch [~belugabehr], looks good to me.
>
[
https://issues.apache.org/jira/browse/HDDS-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919516#comment-16919516
]
Supratim Deka commented on HDDS-2061:
-
[~adoroszlai], this configuration is strictly for developers
[
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2057:
Labels: pull-request-available (was: )
> Incorrect Default OM Port in Ozone FS URI Error Message
>
[
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919132#comment-16919132
]
Supratim Deka commented on HDDS-2057:
-
[https://github.com/apache/hadoop/pull/1377]
not sure why the
[
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2057:
Status: Patch Available (was: Open)
> Incorrect Default OM Port in Ozone FS URI Error Message
>
[
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2057:
Description:
The error message displayed from BasicOzoneFilesystem.initialize specifies 5678
as
[
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-2057:
Summary: Incorrect Default OM Port in Ozone FS URI Error Message (was: Incorrect Default OM Port in Ozone FS Error Message and ozonefs.html)
Supratim Deka created HDDS-2057:
---
Summary: Incorrect Default OM Port in Ozone FS Error Message and
ozonefs.html
Key: HDDS-2057
URL: https://issues.apache.org/jira/browse/HDDS-2057
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917369#comment-16917369
]
Supratim Deka commented on HDDS-168:
[~hanishakoneru] , what is the ScmGroupID being referred to in the
[
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16913161#comment-16913161
]
Supratim Deka commented on HDDS-1094:
-
yes, I understand. exactly why I said earlier, depends on what
[
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912954#comment-16912954
]
Supratim Deka commented on HDDS-1094:
-
[~anu] , depends I think. if we want to stress the entire
[
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1094:
Assignee: Supratim Deka
Status: Patch Available (was: Open)
> Performance test infrastructure : skip writing user data on Datanode
[
https://issues.apache.org/jira/browse/HDDS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1094:
Summary: Performance test infrastructure : skip writing user data on
Datanode (was: Performance
[
https://issues.apache.org/jira/browse/HDDS-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka resolved HDDS-1740.
-
Resolution: Not A Problem
On the Datanode, Container state changes are driven through
[
https://issues.apache.org/jira/browse/HDDS-1798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1798:
Status: Patch Available (was: Open)
> Propagate failure in writeStateMachineData to Ratis
>
[
https://issues.apache.org/jira/browse/HDDS-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887169#comment-16887169
]
Supratim Deka commented on HDDS-1818:
-
Useful to implement a test Container implementation which will
Supratim Deka created HDDS-1818:
---
Summary: Instantiate Ozone Containers using Factory pattern
Key: HDDS-1818
URL: https://issues.apache.org/jira/browse/HDDS-1818
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-1798:
---
Summary: Propagate failure in writeStateMachineData to Ratis
Key: HDDS-1798
URL: https://issues.apache.org/jira/browse/HDDS-1798
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-1783:
---
Summary: Latency metric for applyTransaction in
ContainerStateMachine
Key: HDDS-1783
URL: https://issues.apache.org/jira/browse/HDDS-1783
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-1781:
---
Summary: Add ContainerCache metrics in ContainerMetrics
Key: HDDS-1781
URL: https://issues.apache.org/jira/browse/HDDS-1781
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-1765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879243#comment-16879243
]
Supratim Deka commented on HDDS-1765:
-
similar symptom but not the same problem. Linking for
Supratim Deka created HDDS-1765:
---
Summary: destroyPipeline scheduled from finalizeAndDestroyPipeline
fails for short dead node interval
Key: HDDS-1765
URL: https://issues.apache.org/jira/browse/HDDS-1765
[
https://issues.apache.org/jira/browse/HDDS-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1754:
---
Assignee: Supratim Deka (was: Nanda kumar)
> getContainerWithPipeline fails with
[
https://issues.apache.org/jira/browse/HDDS-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1748:
---
Assignee: Supratim Deka
> Error message for 3 way commit failure is not verbose
>
Supratim Deka created HDDS-1740:
---
Summary: Handle Failure to Update Ozone Container YAML
Key: HDDS-1740
URL: https://issues.apache.org/jira/browse/HDDS-1740
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1603:
---
Assignee: Supratim Deka
> Handle Ratis Append Failure in Container State Machine
>
Supratim Deka created HDDS-1739:
---
Summary: Handle Apply Transaction Failure in State Machine
Key: HDDS-1739
URL: https://issues.apache.org/jira/browse/HDDS-1739
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1621:
Summary: writeData in ChunkUtils should not use AsynchronousFileChannel
(was:
[
https://issues.apache.org/jira/browse/HDDS-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855621#comment-16855621
]
Supratim Deka commented on HDDS-1621:
-
keeping the FileChannel around after writeData and passing it
[
https://issues.apache.org/jira/browse/HDDS-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854754#comment-16854754
]
Supratim Deka commented on HDDS-1621:
-
ChunkManagerImpl is initialised with the default value of
[
https://issues.apache.org/jira/browse/HDDS-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1621:
Issue Type: Sub-task (was: Bug)
Parent: HDDS-1595
> flushStateMachineData should ensure
Supratim Deka created HDDS-1603:
---
Summary: Handle Ratis Append Failure in Container State Machine
Key: HDDS-1603
URL: https://issues.apache.org/jira/browse/HDDS-1603
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1595:
Attachment: Handling IO Failures on the Datanode.pdf
> Handling IO Failures on the Datanode
>
Supratim Deka created HDDS-1595:
---
Summary: Handling IO Failures on the Datanode
Key: HDDS-1595
URL: https://issues.apache.org/jira/browse/HDDS-1595
Project: Hadoop Distributed Data Store
Issue
[
https://issues.apache.org/jira/browse/HDDS-700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847244#comment-16847244
]
Supratim Deka commented on HDDS-700:
looks like the checkstyle issues reported in patch 03 slipped by
[
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846789#comment-16846789
]
Supratim Deka commented on HDDS-1534:
-
+1
Thanks [~nilotpalnandi] for updating the patch.
> freon
[
https://issues.apache.org/jira/browse/HDDS-1534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844651#comment-16844651
]
Supratim Deka commented on HDDS-1534:
-
Thanks [~nilotpalnandi] for the patch.
Why not change the type
[
https://issues.apache.org/jira/browse/HDDS-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1454:
---
Assignee: Supratim Deka
> GC other system pause events can trigger pipeline destroy for all
Supratim Deka created HDDS-1559:
---
Summary: Include committedBytes to determine Out of Space in
VolumeChoosingPolicy
Key: HDDS-1559
URL: https://issues.apache.org/jira/browse/HDDS-1559
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-1535:
---
Summary: Space tracking for Open Containers : Handle Node Startup
Key: HDDS-1535
URL: https://issues.apache.org/jira/browse/HDDS-1535
Project: Hadoop Distributed Data Store
Supratim Deka created HDDS-1533:
---
Summary: JVM exit on TestHddsDatanodeService
Key: HDDS-1533
URL: https://issues.apache.org/jira/browse/HDDS-1533
Project: Hadoop Distributed Data Store
Issue
[
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1511:
Attachment: HDDS-1511.001.patch
> Space tracking for Open Containers in HDDS Volumes
>
[
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16836892#comment-16836892
]
Supratim Deka commented on HDDS-1511:
-
addressed comment from [~arpitagarwal] in patch 001. will add a
[
https://issues.apache.org/jira/browse/HDDS-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1511:
Attachment: HDDS-1511.000.patch
Status: Patch Available (was: Open)
unit test code added
Supratim Deka created HDDS-1511:
---
Summary: Space tracking for Open Containers in HDDS Volumes
Key: HDDS-1511
URL: https://issues.apache.org/jira/browse/HDDS-1511
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1206:
Summary: Handle Datanode volume out of space (was: need to handle in the
client when one of the
[
https://issues.apache.org/jira/browse/HDDS-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1315:
---
Assignee: Supratim Deka
> datanode process dies if it runs out of disk space
>
[
https://issues.apache.org/jira/browse/HDDS-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16810444#comment-16810444
]
Supratim Deka commented on HDDS-1315:
-
related to disk full handling across ozone components.
>
[
https://issues.apache.org/jira/browse/HDDS-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1206:
---
Assignee: Supratim Deka (was: Shashikant Banerjee)
> need to handle in the client when one
[
https://issues.apache.org/jira/browse/HDDS-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807897#comment-16807897
]
Supratim Deka commented on HDDS-1365:
-
hello [~linyiqun], I did consider losing the error code. Not
[
https://issues.apache.org/jira/browse/HDDS-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka reassigned HDDS-1200:
---
Assignee: (was: Supratim Deka)
> Ozone Data Scrubbing : Checksum verification for chunks
[
https://issues.apache.org/jira/browse/HDDS-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka updated HDDS-1365:
Attachment: HDDS-1365.000.patch
Status: Patch Available (was: Open)
> Fix error handling in KeyValueContainerCheck
Supratim Deka created HDDS-1365:
---
Summary: Fix error handling in KeyValueContainerCheck
Key: HDDS-1365
URL: https://issues.apache.org/jira/browse/HDDS-1365
Project: Hadoop Distributed Data Store
[
https://issues.apache.org/jira/browse/HDDS-1229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Supratim Deka resolved HDDS-1229.
-
Resolution: Not A Problem
This is not an issue because HDDS-1163 adopted a simple approach to