[ https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162491#comment-16162491 ]

Devaraj K commented on YARN-6620:
---------------------------------

Thanks [~leftnoteasy] for the patch, great work!

I have some comments on the patch.

1. For the XML file reading in GpuDeviceInformationParser.java, can we use 
existing libraries like javax.xml.bind.JAXBContext to unmarshal the XML 
document into a Java object instead of reading it tag by tag?
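
A minimal sketch of what I mean, assuming GpuDeviceInformation (from the 
patch) is annotated for JAXB (the annotations themselves are an assumption, 
they are not in the patch):

{code:java}
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

JAXBContext context = JAXBContext.newInstance(GpuDeviceInformation.class);
Unmarshaller unmarshaller = context.createUnmarshaller();
// Unmarshal the sanitized nvidia-smi XML output straight into the object,
// instead of walking the DOM tag by tag.
GpuDeviceInformation info = (GpuDeviceInformation) unmarshaller
    .unmarshal(new StringReader(sanitizeXmlInput(xmlStr)));
{code}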

2. If you prefer not to use an existing library for reading the XML file, the 
'in' stream should be closed after reading/parsing.

{code:java}
      InputStream in = IOUtils.toInputStream(sanitizeXmlInput(xmlStr), "UTF-8");
      doc = builder.parse(in);
{code}
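
For example, try-with-resources would guarantee the stream is closed even if 
parse() throws (a sketch based on the snippet above):

{code:java}
try (InputStream in =
    IOUtils.toInputStream(sanitizeXmlInput(xmlStr), "UTF-8")) {
  doc = builder.parse(in);
}
{code}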

3. Instead of hardcoding the BINARY_NAME, can it be included as part of 
DEFAULT_NM_GPU_PATH_TO_EXEC as the default value, so that it also becomes 
configurable in case users want to change it?
{code:java}
    public static final String DEFAULT_NM_GPU_PATH_TO_EXEC = "";

    protected static final String BINARY_NAME = "nvidia-smi";
{code}
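
Something along these lines is what I have in mind (just a sketch; the exact 
default value is up to you):

{code:java}
// The binary name becomes part of the configurable default, so users can
// override it through configuration instead of a hardcoded constant.
public static final String DEFAULT_NM_GPU_PATH_TO_EXEC = "nvidia-smi";
{code}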


4. Please update the inline comment here accordingly (it refers to Disk, but 
the constant is about GPU).
{code:java}
+  /**
+   * Disk as a resource is disabled by default.
+   **/
+  @Private
+  public static final boolean DEFAULT_NM_GPU_RESOURCE_ENABLED = false;
{code}
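
For example, something like:

{code:java}
  /**
   * GPU as a resource is disabled by default.
   **/
  @Private
  public static final boolean DEFAULT_NM_GPU_RESOURCE_ENABLED = false;
{code}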

5. Can we use spaces instead of tab characters for indentation in 
nvidia-smi-sample-output.xml?

6. Are we going to support multiple containers/processes (a limited number) 
sharing the same GPU device?

7. 

{code:title=GpuResourceAllocator.java|borderStyle=solid}
      for (int deviceNum : allowedGpuDevices) {
        if (!usedDevices.containsKey(deviceNum)) {
          usedDevices.put(deviceNum, containerId);
          assignedGpus.add(deviceNum);
          if (assignedGpus.size() == numRequestedGpuDevices) {
            break;
          }
        }
      }

      // Record in state store if we allocated anything
      if (!assignedGpus.isEmpty()) {
        List<Serializable> allocatedDevices = new ArrayList<>();
        for (int gpu : assignedGpus) {
          allocatedDevices.add(String.valueOf(gpu));
        }
{code}

Can you merge these two for loops into one, like below?

{code:java}
usedDevices.put(deviceNum, containerId);
assignedGpus.add(deviceNum);
allocatedDevices.add(String.valueOf(deviceNum));
{code}
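
Put together, the merged loop could look roughly like this (a sketch based on 
the GpuResourceAllocator.java snippet above; allocatedDevices just needs to be 
created before the loop instead of after it):

{code:java}
List<Serializable> allocatedDevices = new ArrayList<>();
for (int deviceNum : allowedGpuDevices) {
  if (!usedDevices.containsKey(deviceNum)) {
    usedDevices.put(deviceNum, containerId);
    assignedGpus.add(deviceNum);
    // Record the allocation for the state store in the same pass.
    allocatedDevices.add(String.valueOf(deviceNum));
    if (assignedGpus.size() == numRequestedGpuDevices) {
      break;
    }
  }
}
{code}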

Also, if the condition *if (assignedGpus.size() == numRequestedGpuDevices)* 
is never met, do we need to throw an exception or log the error?

8. I see that getGpuDeviceInformation() is getting invoked twice, which in 
turn executes a shell command and parses the XML output; both are costly 
operations. Do we need to execute it twice here?

{code:title=GpuResourceDiscoverPlugin.java|borderStyle=solid}
GpuDeviceInformation info = getGpuDeviceInformation();

LOG.info("Trying to discover GPU information ...");
        GpuDeviceInformation info = getGpuDeviceInformation();
{code}
Also, I'm not convinced about having logic other than assigning conf in the 
setConf() method.

{code:java}
public synchronized void setConf(Configuration conf) {
    this.conf = conf;
    numOfErrorExecutionSinceLastSucceed = 0;
    featureEnabled = conf.getBoolean(YarnConfiguration.NM_GPU_RESOURCE_ENABLED,
        YarnConfiguration.DEFAULT_NM_GPU_RESOURCE_ENABLED);

    if (featureEnabled) {
      String dir = conf.get(YarnConfiguration.NM_GPU_PATH_TO_EXEC,
      .........
{code}
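
One option is to keep setConf() to the bare assignment and move the rest into 
a separate initialization step, roughly like this (just a sketch; the 
initialize() method name is hypothetical):

{code:java}
public synchronized void setConf(Configuration conf) {
  this.conf = conf;
}

// Hypothetical init method, called once before the plugin is first used.
public synchronized void initialize() {
  numOfErrorExecutionSinceLastSucceed = 0;
  featureEnabled = conf.getBoolean(YarnConfiguration.NM_GPU_RESOURCE_ENABLED,
      YarnConfiguration.DEFAULT_NM_GPU_RESOURCE_ENABLED);
  if (featureEnabled) {
    // resolve the path to the nvidia-smi binary, etc.
  }
}
{code}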

Also, there are Hadoop QA reported issues which need to be fixed.

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -------------------------------------------------------------------------------------
>
>                 Key: YARN-6620
>                 URL: https://issues.apache.org/jira/browse/YARN-6620
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>         Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch
>
>
> This JIRA plan to add support of:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups. (Java side).
> 3) NM restart and recovery allocated GPU devices


