Andrii Tkach created AMBARI-7405:
------------------------------------

             Summary: Slider View: Error when creating new app not shown to user
                 Key: AMBARI-7405
                 URL: https://issues.apache.org/jira/browse/AMBARI-7405
             Project: Ambari
          Issue Type: Task
          Components: ambari-web
    Affects Versions: 1.7.0
            Reporter: Andrii Tkach
            Assignee: Andrii Tkach
            Priority: Critical
             Fix For: 1.7.0


I created a user in Ambari that didn't exist in HDFS, then attempted to create 
a new Slider App. The create-app API call returned the following error:
{code}
PUT http://c6401:8080/api/v1/views/SLIDER/versions/1.0.0/instances/A1/apps

{
  "typeName": "HBASE",
  "typeVersion": "0.98.4.2.2.0.0-703-hadoop2",
  "name": "fq",
  "typeComponents": [
    {
      "id": "HBASE_MASTER",
      "instanceCount": "1",
      "yarnMemory": "1024",
      "yarnCpuCores": "1",
      "priority": 1
    },
    {
      "id": "HBASE_REGIONSERVER",
      "instanceCount": "1",
      "yarnMemory": "1024",
      "yarnCpuCores": "1",
      "priority": 2
    },
    {
      "id": "HBASE_REST",
      "instanceCount": "0",
      "yarnMemory": "1024",
      "yarnCpuCores": "1",
      "priority": 3
    },
    {
      "id": "HBASE_THRIFT",
      "instanceCount": "0",
      "yarnMemory": "1024",
      "yarnCpuCores": "1",
      "priority": 4
    },
    {
      "id": "HBASE_THRIFT2",
      "instanceCount": "0",
      "yarnMemory": "1024",
      "yarnCpuCores": "1",
      "priority": 5
    }
  ],
  "typeConfigs": {
    "application.def": "./slider/package/HBASE/slider-hbase-app-package-0.98.4.2.2.0.0-703-hadoop2.zip",
    "create.default.zookeeper.node": "true",
    "java_home": "/usr/jdk64/jdk1.7.0_45",
    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/hbase-0.98.4.2.2.0.0-703-hadoop2",
    "site.global.app_user": "yarn",
    "site.global.ganglia_enabled": "true",
    "site.global.hbase_instance_name": "instancename",
    "site.global.hbase_rest_port": "${HBASE_REST.ALLOCATED_PORT}",
    "site.global.hbase_root_password": "secret",
    "site.global.hbase_thrift2_port": "${HBASE_THRIFT2.ALLOCATED_PORT}",
    "site.global.hbase_thrift_port": "${HBASE_THRIFT.ALLOCATED_PORT}",
    "site.global.monitor_protocol": "http",
    "site.global.security_enabled": "false",
    "site.global.user_group": "hadoop",
    "site.hbase-env.hbase_master_heapsize": "1024m",
    "site.hbase-env.hbase_regionserver_heapsize": "1024m",
    "site.hbase-site.hbase.local.dir": "${hbase.tmp.dir}/local",
    "site.hbase-site.hbase.master.info.port": "${HBASE_MASTER.ALLOCATED_PORT}",
    "site.hbase-site.hbase.master.port": "0",
    "site.hbase-site.hbase.regionserver.info.port": "0",
    "site.hbase-site.hbase.regionserver.port": "0",
    "site.hbase-site.hbase.rootdir": "${DEFAULT_DATA_DIR}",
    "site.hbase-site.hbase.superuser": "yarn",
    "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
    "site.hbase-site.hbase.zookeeper.quorum": "${ZK_HOST}",
    "site.hbase-site.zookeeper.znode.parent": "${DEF_ZK_PATH}",
    "system_configs": "core-site"
  }
}
{code}

The server responded with:
{code}
{
  "status": 500,
  "message": "Permission denied: user=user1, access=WRITE, inode=\"/user\":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5525)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5507)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5481)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3624)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3594)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3568)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:760)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)"
}
{code}

The server response should be shown to the user (as-is) in a dialog similar to 
Ambari's error dialog. Hitting OK should return the user to the Create App dialog.

A similar error dialog should appear for other app action failures: Stop, 
Start, Flex, and Destroy.
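A minimal sketch of the intended error handling, as plain JavaScript. The function names ({{parseSliderError}}, {{showErrorPopup}}, {{onAppActionError}}) are hypothetical, not actual ambari-web APIs; in the real view this would hang off the AJAX error callback and use Ambari's modal popup component:

```javascript
// Extract a user-displayable message from a Slider view error response.
// The response body is expected to be JSON like {"status":500,"message":"..."},
// as in the example above; fall back to the raw body if it is not JSON.
function parseSliderError(responseText) {
  try {
    var body = JSON.parse(responseText);
    return 'Status ' + body.status + ': ' + body.message;
  } catch (e) {
    return responseText;
  }
}

// Hypothetical modal helper; in ambari-web this would be a popup styled
// like Ambari's own error dialog, with a single OK button.
function showErrorPopup(message, onOk) {
  console.error(message);
  if (onOk) { onOk(); }
}

// AJAX error handler for app actions (Create, Stop, Start, Flex, Destroy).
// Hitting OK runs the callback, e.g. to reopen the Create App dialog.
function onAppActionError(xhr, onOk) {
  showErrorPopup(parseSliderError(xhr.responseText), onOk);
}
```

The key point is that the server's message is surfaced verbatim rather than swallowed, and the OK callback restores the dialog the user started from.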



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
