Andrew Onischuk created AMBARI-9493:
---------------------------------------

             Summary: Failed to start Datanode with non-default umask
                 Key: AMBARI-9493
                 URL: https://issues.apache.org/jira/browse/AMBARI-9493
             Project: Ambari
          Issue Type: Bug
            Reporter: Andrew Onischuk
            Assignee: Andrew Onischuk
             Fix For: 2.0.0


**STR** (steps to reproduce)

  1. Set umask to 0027 (one way to apply it is sketched below)
  2. Install ambari-server
  3. Deploy Ambari
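
The ticket does not say how the umask was applied on the test hosts; a minimal sketch of one way to make step 1 effective for the install (an assumption, not necessarily how this cluster was configured) is to set it for login shells before installing:

    # Assumption: apply umask 0027 persistently for root/login shells
    # before installing ambari-server and deploying the cluster.
    echo "umask 0027" >> /etc/profile
    umask 0027        # also apply it to the current shell
    umask             # verify: prints 0027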

**AR**: deployment failed at the Start Services stage.

    2015-02-03 13:52:02,306 - Error while executing command 'start':
    Traceback (most recent call last):
      File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 184, in execute
        method(env)
      File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 63, in start
        datanode(action="start")
      File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 61, in datanode
        create_log_dir=True
      File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 212, in service
        environment=hadoop_env_exports
      File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
        self.env.run()
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 151, in run
        self.run_action(resource, action)
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 117, in run_action
        provider_action()
      File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 276, in action_run
        raise ex
    Fail: Execution of '/usr/bin/sudo su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode'' returned 1. mkdir: cannot create directory `/grid/0/log': Permission denied
    chown: cannot access `/grid/0/log/hadoop/hdfs': Permission denied
    starting datanode, logging to /grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-umask-default-centos-3.out
    /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 165: /grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-umask-default-centos-3.out: Permission denied
    head: cannot open `/grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-umask-default-centos-3.out' for reading: Permission denied
    /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 183: /grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-umask-default-centos-3.out: Permission denied
    /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 184: /grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-umask-default-centos-3.out: Permission denied

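The output above is consistent with plain umask arithmetic: if the parent directories under /grid/0 were created by a root-owned process while umask 0027 was in effect (likely here, though the ticket does not show how /grid/0 was created), they end up mode 0750 (0777 & ~0027) and root-owned, so the unprivileged hdfs user can neither create nor write the DataNode log directories beneath them. A minimal shell illustration in a scratch path (the hdfs user and directory layout are borrowed from the log above; it must run as root and needs an existing unprivileged hdfs-like user):

    # Run as root; /tmp/grid stands in for /grid from the log above.
    umask 0027
    mkdir -p /tmp/grid/0            # created 0777 & ~0027 = 0750, owned root:root
    stat -c '%a %U:%G' /tmp/grid/0  # -> 750 root:root
    # Without group/other bits on the parents, the service user cannot
    # create the log tree beneath them:
    sudo -u hdfs mkdir -p /tmp/grid/0/log/hadoop/hdfs   # mkdir: Permission denied
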
Cluster is available via <http://172.18.145.54:8080/> (lifetime: 22h).  
**EXTERNAL HOSTNAMES**  
172.18.145.54  
172.18.145.46  
172.18.147.179  
172.18.145.49  
172.18.146.175  
**INTERNAL HOSTNAMES**  
umask-default-centos-8.cs1cloud.internal  
umask-default-centos-6.cs1cloud.internal  
umask-default-centos-3.cs1cloud.internal  
umask-default-centos-7.cs1cloud.internal  
umask-default-centos-4.cs1cloud.internal
