[ https://issues.apache.org/jira/browse/HDDS-116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495542#comment-16495542 ]
Hanisha Koneru commented on HDDS-116:
-------------------------------------

Thanks [~anu] and [~xyao] for the reviews.

{quote}VolumeSet# Since we don't use this anywhere, it may not matter for now, but just for curiosity's sake – what happens if VolumeSet throws?{quote}

VolumeSet initialization happens when the Datanode is started. If we encounter a DiskOutOfSpaceException at that point because no volumes are configured, we should probably shut down the Datanode?

{quote}HDDS currently uses a ContainerStorageLocation/StorageLocation and starts a DU thread per location (volume) to get the usage info. From the design spec, it seems that we are going to use VolumeSet/VolumeInfo to replace ContainerStorageLocation/StorageLocation. However, the current VolumeInfo does not have the ability to get the usage information itself like ContainerStorageLocation. We don't have to address it now if you plan to add it later with a different JIRA?{quote}

We plan to implement the usage computation through DU threads in subsequent Jiras.

> Implement VolumeSet to manage disk volumes
> ------------------------------------------
>
>                 Key: HDDS-116
>                 URL: https://issues.apache.org/jira/browse/HDDS-116
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>              Labels: ContainerIO
>             Fix For: 0.2.1
>
>         Attachments: HDDS-116-HDDS-48.001.patch, HDDS-116-HDDS-48.002.patch, HDDS-116-HDDS-48.003.patch
>
>
> VolumeSet would be responsible for managing volumes in the Datanode.
> Some of its functions are:
> # Initialize volumes on startup
> # Provide APIs to add/remove volumes
> # Choose and return a volume to the calling service based on the volume choosing policy (currently implemented: Round Robin choosing policy)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
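The comment above suggests failing fast when VolumeSet initialization throws because no volumes are configured. A minimal sketch of that startup handling, with hypothetical class names (the local DiskOutOfSpaceException and Datanode here are illustrative, not the actual HDDS types):

```java
import java.util.List;

// Illustrative exception; the real Hadoop class lives elsewhere.
class DiskOutOfSpaceException extends Exception {
    DiskOutOfSpaceException(String msg) { super(msg); }
}

// Sketch of a VolumeSet that refuses to initialize with no data dirs.
class VolumeSet {
    VolumeSet(List<String> dataDirs) throws DiskOutOfSpaceException {
        if (dataDirs.isEmpty()) {
            throw new DiskOutOfSpaceException("No volumes configured");
        }
        // ... initialize one volume per configured directory ...
    }
}

class Datanode {
    // Returns false (i.e. the datanode shuts down) when no usable
    // storage is available at startup.
    static boolean start(List<String> dataDirs) {
        try {
            new VolumeSet(dataDirs);
            return true;
        } catch (DiskOutOfSpaceException e) {
            return false;
        }
    }
}
```

Under this sketch, an empty volume list aborts startup instead of letting the datanode run with no storage.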
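The Round Robin choosing policy named in the description can be sketched as a rotating index over the volume list, skipping volumes without enough space. This is a simplified illustration under assumed names (the Volume type and method signatures here are not the actual HDDS classes):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for a volume with a path and free-space figure.
class Volume {
    final String path;
    long availableBytes;
    Volume(String path, long availableBytes) {
        this.path = path;
        this.availableBytes = availableBytes;
    }
}

class RoundRobinVolumeChoosingPolicy {
    private final AtomicInteger nextIndex = new AtomicInteger(0);

    // Returns the next volume in rotation that can hold requestedBytes,
    // or throws if no configured volume has enough space.
    Volume chooseVolume(List<Volume> volumes, long requestedBytes) {
        if (volumes.isEmpty()) {
            throw new IllegalStateException("No volumes configured");
        }
        int start = nextIndex.getAndIncrement();
        for (int i = 0; i < volumes.size(); i++) {
            Volume v = volumes.get(Math.floorMod(start + i, volumes.size()));
            if (v.availableBytes >= requestedBytes) {
                return v;
            }
        }
        throw new IllegalStateException("Out of space on all volumes");
    }
}
```

With two volumes that both have room, successive calls alternate between them, spreading container allocations evenly across disks.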