When I execute the configs.sh script, I am getting only the output below, not
the output you showed in your previous mail.


/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin
delete localhost BDP2_DEV hdfs-site “dfs.namenode.rpc-address”
USERID=admin
PASSWORD=admin
########## Performing 'delete' “dfs.namenode.rpc-address”: on
(Site:hdfs-site, Tag:version1446466158434)
########## PUTting json into: doSet_version1446525552539102324.json
########## NEW Site:hdfs-site, Tag:version1446466158434

I fetched the config properties based on the last modified version; please find
the attached file.
It looks like the script is unable to delete the property, or unable to update
to a new version. Kindly check it once.
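For what it's worth, the attached JSON can also be checked programmatically for the property. A minimal sketch (the sample data is abbreviated from the attachment; field names match the real response but most properties are omitted):

```python
import json

# Abbreviated sample of the attached configurations response from
# /api/v1/clusters/BDP2_DEV/configurations?type=hdfs-site&tag=version1446466158434
response = json.loads("""
{
  "items" : [
    {
      "tag" : "version1446466158434",
      "type" : "hdfs-site",
      "version" : 14,
      "properties" : {
        "dfs.namenode.rpc-address" : "mstd02.corp.com:8020",
        "dfs.namenode.rpc-address.bdp-dev-hadoop.nn1" : "mstd01.corp.com:8020",
        "dfs.namenode.rpc-address.bdp-dev-hadoop.nn2" : "mstd02.corp.com:8020"
      }
    }
  ]
}
""")

# Pick the item with the highest version number (the latest config)
latest = max(response["items"], key=lambda item: item["version"])
still_present = "dfs.namenode.rpc-address" in latest["properties"]
print(still_present)  # prints True: the property survived the delete
```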

On 3 November 2015 at 12:36, Shaik M <munna.had...@gmail.com> wrote:

> Sorry, my bad, it is showing the correct version.
>
> "href" : 
> "http://sv2lxbdp2mstd01.corp.equinix.com:8080/api/v1/clusters/BDP2_DEV/configurations?type=hdfs-site&tag=version1446466158434";,
>       "tag" : "version1446466158434",
>       "type" : "hdfs-site",
>       "version" : 14,
>       "Config" : {
>         "cluster_name" : "BDP2_DEV",
>         "stack_id" : "HDP-2.3"
>
>
>
> On 3 November 2015 at 12:25, Shaik M <munna.had...@gmail.com> wrote:
>
>> Hi Sumit,
>>
>> I have recently upgraded Ambari from 1.7 to 2.1.2-377 and HDP from 2.2 to 2.3.
>>
>> I ran
>> http://mstd01.corp.com:8080/api/v1/clusters/BDP2_DEV/configurations?type=hdfs-site
>> and the output of this URL is attached.
>>
>> There I saw it is still displaying "stack_id" : "HDP-2.2", but my cluster
>> was upgraded to HDP 2.3, and I can see the new stack version in the
>> *Admin->Stack and Versions* tab.
>>
>> Please suggest how can I proceed.
>>
>> Thanks,
>> Shaik
>>
>> On 3 November 2015 at 11:20, Sumit Mohanty <smoha...@hortonworks.com>
>> wrote:
>>
>>> Sorry, my bad. I meant to ask what "get" returns?
>>>
>>>
>>> /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin@DEV
>>> *get* mstd01.corp.com BDP2_DEV hdfs-site
>>>
>>>
>>> Similarly, what does
>>> http://mstd01.corp.com:8080/api/v1/clusters/BDP2_DEV/configurations?type=hdfs-site
>>> return?
>>>
>>>
>>> Also, I think the delete may have an issue - what version of Ambari
>>> are you using?
>>>
>>>
>>> This is what I see ...
>>>
>>>
>>> [root@c6402 vagrant]#
>>> /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin
>>> delete localhost c1  hdfs-site dfs.namenode.rpc-address
>>> USERID=admin
>>> PASSWORD=admin
>>> ########## Performing 'delete' dfs.namenode.rpc-address: on
>>> (Site:hdfs-site, Tag:version1)
>>> ########## Config found. Skipping origin value
>>> ########## PUTting json into: doSet_version1446520420271646783.json
>>> {
>>>   "resources" : [
>>>     {
>>>       "href" : "
>>> http://localhost:8080/api/v1/clusters/c1/configurations/service_config_versions?service_name=HDFS&service_config_version=2
>>> ",
>>>       "configurations" : [
>>>         {
>>>           "clusterName" : "c1",
>>>           "stackId" : {
>>>             "stackName" : "HDP",
>>>             "stackVersion" : "2.3",
>>>             "stackId" : "HDP-2.3"
>>>           },
>>>           "type" : "hdfs-site",
>>>           "versionTag" : "version1446520420271646783",
>>>           "version" : 2,
>>>           "serviceConfigVersions" : null,
>>>           "configs" : {
>>>             "dfs.replication" : "3",
>>>             "dfs.namenode.audit.log.async" : "true",​
>>>            ...
>>>
>>>
>>> ------------------------------
>>> *From:* Shaik M <munna.had...@gmail.com>
>>> *Sent:* Monday, November 02, 2015 6:05 PM
>>>
>>> *To:* user@ambari.apache.org
>>> *Subject:* Re: Rebalance HDFS - Issue
>>>
>>> I restarted HDFS after executing this. The output of this command:
>>>
>>> [root@mstd01~]# /var/lib/ambari-server/resources/scripts/configs.sh -u
>>> admin -p admin@DEV delete mstd01.corp.com BDP2_DEV hdfs-site
>>> “dfs.namenode.rpc-address”
>>> USERID=admin
>>> PASSWORD=admin@DEV
>>> ########## Performing 'delete' “dfs.namenode.rpc-address”: on
>>> (Site:hdfs-site, Tag:version1446466158434)
>>> ########## PUTting json into: doSet_version1446466579436571830.json
>>> ########## NEW Site:hdfs-site, Tag:version1446466158434
>>>
>>> But that property still exists in Ambari->HDFS->Config.
>>>
>>> On 2 November 2015 at 21:01, Sumit Mohanty <smoha...@hortonworks.com>
>>> wrote:
>>>
>>>> What do you get when you use this -
>>>>
>>>>
>>>> /var/lib/ambari-server/resources/scripts/configs.sh -u admin
>>>> -p admin@DEV delete mstd01.corp.com BDP2_DEV hdfs-site
>>>>
>>>>
>>>> Did you restart after the delete?
>>>> ------------------------------
>>>> *From:* Shaik M <munna.had...@gmail.com>
>>>> *Sent:* Monday, November 02, 2015 4:23 AM
>>>> *To:* user@ambari.apache.org
>>>> *Subject:* Re: Rebalance HDFS - Issue
>>>>
>>>> Hi Sumit,
>>>>
>>>> Thank you for your assistance.
>>>>
>>>> I tried to remove the "dfs.namenode.rpc-address" property using the command
>>>> below; it completed successfully, but the property still exists.
>>>>
>>>> [root@mstd01~]# sudo
>>>> /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin@DEV
>>>> delete mstd01.corp.com BDP2_DEV hdfs-site “dfs.namenode.rpc-address”
>>>> [sudo] password for root:
>>>> USERID=admin
>>>> PASSWORD=admin@DEV
>>>> ########## Performing 'delete' “dfs.namenode.rpc-address”: on
>>>> (Site:hdfs-site, Tag:version1446466158434)
>>>> ########## PUTting json into: doSet_version1446466579436571830.json
>>>> ########## NEW Site:hdfs-site, Tag:version1446466158434
>>>>
>>>> Please help me to fix this issue.
>>>>
>>>> https://issues.apache.org/jira/browse/AMBARI-13373
>>>>
>>>> Thanks,
>>>> Shaik
>>>>
>>>>
>>>> On 2 November 2015 at 19:04, Sumit Mohanty <smoha...@hortonworks.com>
>>>> wrote:
>>>>
>>>>> Likely "dfs.namenode.rpc-address" in hdfs-site has a wrong value. If
>>>>> it does, you can delete it - see
>>>>> https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations
>>>>> (section "Edit Configurations with configs.sh")
>>>>>
>>>>>
>>>>> ------------------------------
>>>>> *From:* Shaik M <munna.had...@gmail.com>
>>>>> *Sent:* Sunday, November 01, 2015 8:17 PM
>>>>> *To:* user@ambari.apache.org
>>>>> *Subject:* Rebalance HDFS - Issue
>>>>>
>>>>> Hi,
>>>>>
>>>>> I am trying to run the HDFS balancer from Ambari 2.1.2 with HDP 2.3.2,
>>>>> and it is failing with the following exception.
>>>>>
>>>>> 15/11/02 03:07:00 INFO block.BlockTokenSecretManager: Setting block keys
>>>>> 15/11/02 03:07:00 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
>>>>> java.io.IOException: Another Balancer is running..  Exiting ...
>>>>> Nov 2, 2015 3:07:00 AM   Balancing took 1.935 seconds
>>>>>
>>>>> I have verified the respective Hadoop NameNode; there is no balancer running on it.
>>>>>
>>>>> Could you please help us to fix this issue?
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Shaik M
>>>>>
>>>>>
>>>>
>>>
>>
>
{
  "href" : 
"http://mstd01.corp.com/api/v1/clusters/BDP2_DEV/configurations?type=hdfs-site&tag=version1446466158434";,
  "items" : [
    {
      "href" : 
"http://mstd01.corp.com/api/v1/clusters/BDP2_DEV/configurations?type=hdfs-site&tag=version1446466158434";,
      "tag" : "version1446466158434",
      "type" : "hdfs-site",
      "version" : 14,
      "Config" : {
        "cluster_name" : "BDP2_DEV",
        "stack_id" : "HDP-2.3"
      },
      "properties" : {
        "dfs.block.access.token.enable" : "true",
        "dfs.blockreport.initialDelay" : "120",
        "dfs.blocksize" : "134217728",
        "dfs.client.failover.proxy.provider.bdp-dev-hadoop" : 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
        "dfs.client.read.shortcircuit" : "true",
        "dfs.client.read.shortcircuit.streams.cache.size" : "4096",
        "dfs.client.retry.policy.enabled" : "false",
        "dfs.cluster.administrators" : " hdfs",
        "dfs.content-summary.limit" : "5000",
        "dfs.datanode.address" : "0.0.0.0:1019",
        "dfs.datanode.balance.bandwidthPerSec" : "6250000",
        "dfs.datanode.data.dir" : "/data/hadoop/hdfs/data",
        "dfs.datanode.data.dir.perm" : "750",
        "dfs.datanode.du.reserved" : "1073741824",
        "dfs.datanode.failed.volumes.tolerated" : "0",
        "dfs.datanode.http.address" : "0.0.0.0:1022",
        "dfs.datanode.https.address" : "0.0.0.0:50475",
        "dfs.datanode.ipc.address" : "0.0.0.0:8010",
        "dfs.datanode.kerberos.principal" : "dn/_h...@bdp2uat.org",
        "dfs.datanode.keytab.file" : "/etc/security/keytabs/dn.service.keytab",
        "dfs.datanode.max.transfer.threads" : "4096",
        "dfs.domain.socket.path" : "/var/lib/hadoop-hdfs/dn_socket",
        "dfs.encrypt.data.transfer.cipher.suites" : "AES/CTR/NoPadding",
        "dfs.encryption.key.provider.uri" : "",
        "dfs.ha.automatic-failover.enabled" : "true",
        "dfs.ha.fencing.methods" : "shell(/bin/true)",
        "dfs.ha.namenodes.bdp-dev-hadoop" : "nn1,nn2",
        "dfs.heartbeat.interval" : "3",
        "dfs.hosts.exclude" : "/etc/hadoop/conf/dfs.exclude",
        "dfs.http.policy" : "HTTP_ONLY",
        "dfs.https.port" : "50470",
        "dfs.journalnode.edits.dir" : "/hadoop/hdfs/journal",
        "dfs.journalnode.http-address" : "0.0.0.0:8480",
        "dfs.journalnode.https-address" : "0.0.0.0:8481",
        "dfs.journalnode.kerberos.internal.spnego.principal" : 
"HTTP/_h...@bdp2uat.org",
        "dfs.journalnode.kerberos.principal" : "jn/_h...@bdp2uat.org",
        "dfs.journalnode.keytab.file" : 
"/etc/security/keytabs/jn.service.keytab",
        "dfs.namenode.accesstime.precision" : "0",
        "dfs.namenode.audit.log.async" : "true",
        "dfs.namenode.avoid.read.stale.datanode" : "true",
        "dfs.namenode.avoid.write.stale.datanode" : "true",
        "dfs.namenode.checkpoint.dir" : "/data/hadoop/hdfs/namesecondary",
        "dfs.namenode.checkpoint.edits.dir" : "${dfs.namenode.checkpoint.dir}",
        "dfs.namenode.checkpoint.period" : "21600",
        "dfs.namenode.checkpoint.txns" : "1000000",
        "dfs.namenode.fslock.fair" : "false",
        "dfs.namenode.handler.count" : "40",
        "dfs.namenode.http-address" : "mstd01.corp.com:50070",
        "dfs.namenode.http-address.bdp-dev-hadoop.nn1" : 
"mstd01.corp.com:50070",
        "dfs.namenode.http-address.bdp-dev-hadoop.nn2" : 
"mstd02.corp.com:50070",
        "dfs.namenode.https-address" : "mstd01.corp.com:50470",
        "dfs.namenode.https-address.bdp-dev-hadoop.nn1" : 
"mstd01.corp.com:50470",
        "dfs.namenode.https-address.bdp-dev-hadoop.nn2" : 
"mstd02.corp.com:50470",
        "dfs.namenode.kerberos.internal.spnego.principal" : 
"HTTP/_h...@bdp2uat.org",
        "dfs.namenode.kerberos.principal" : "nn/_h...@bdp2uat.org",
        "dfs.namenode.keytab.file" : "/etc/security/keytabs/nn.service.keytab",
        "dfs.namenode.name.dir" : "/data/hadoop/hdfs/namenode",
        "dfs.namenode.name.dir.restore" : "true",
        "dfs.namenode.rpc-address" : "mstd02.corp.com:8020",
        "dfs.namenode.rpc-address.bdp-dev-hadoop.nn1" : "mstd01.corp.com:8020",
        "dfs.namenode.rpc-address.bdp-dev-hadoop.nn2" : "mstd02.corp.com:8020",
        "dfs.namenode.safemode.threshold-pct" : "0.99",
        "dfs.namenode.secondary.http-address" : "mstd01.corp.com:50090",
        "dfs.namenode.shared.edits.dir" : 
"qjournal://mstd01.corp.com:8485;mstd02.corp.com:8485;mstd03.corp.com:8485/bdp-dev-hadoop",
        "dfs.namenode.stale.datanode.interval" : "30000",
        "dfs.namenode.startup.delay.block.deletion.sec" : "3600",
        "dfs.namenode.write.stale.datanode.ratio" : "1.0f",
        "dfs.nameservices" : "bdp-dev-hadoop",
        "dfs.permissions.enabled" : "true",
        "dfs.permissions.superusergroup" : "hdfs",
        "dfs.replication" : "3",
        "dfs.replication.max" : "50",
        "dfs.support.append" : "true",
        "dfs.web.authentication.kerberos.keytab" : 
"/etc/security/keytabs/spnego.service.keytab",
        "dfs.web.authentication.kerberos.principal" : "HTTP/_h...@bdp2uat.org",
        "dfs.webhdfs.enabled" : "true",
        "fs.permissions.umask-mode" : "022",
        "nfs.exports.allowed.hosts" : "* rw",
        "nfs.file.dump.dir" : "/tmp/.hdfs-nfs"
      }
    }
  ]
}
