Re: api questions

2013-11-23 Thread Sumit Mohanty
Artem, forgive me if this is a delayed reply, as I am out of the country.

Assuming you just need to re-install oozie components:

I would expect calls similar to the following; these install the
components on the specified host.
curl --user admin:admin -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/HOST_NAME/host_components/OOZIE_SERVER
curl --user admin:admin -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/HOST_NAME/host_components/OOZIE_CLIENT

This will start the server.
curl --user admin:admin -i -X PUT -d '{"HostRoles": {"state": "STARTED"}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/HOST_NAME/host_components/OOZIE_SERVER

The calls shown below (respectively):
* Map an OOZIE_SERVER component to a host (this will fail, as it already
exists and is in the INIT state)
> * You can verify that with a GET on
> http://ambariserver:8080/api/v1/clusters/clustername/hosts?Hosts/host_name=machinename
* Expect OOZIE_SERVER as a service (but the name of the service is OOZIE)
> * Do a GET on http://ambariserver:8080/api/v1/clusters/clustername/services/OOZIE
> and verify that the state is INIT. If it is, then you can set the state to
> INSTALLED and it will install all the OOZIE components. Otherwise, you need
> to install the components one by one. My suggestion would be to try the
> INSTALL at the level of the OOZIE service. If the service is already in the
> INSTALLED/STARTED state, then you can do the same at the level of the
> components (OOZIE_SERVER and OOZIE_CLIENT). Also, after you have INSTALLED,
> you can use the Ambari Web FE to start.
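
The check-then-install flow described above can be sketched as a small script. The server host, cluster name, and admin:admin credentials are placeholders, and the curl commands are echoed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of the check-then-install flow from the mail above.
# "ambariserver" and "clustername" are placeholders; commands are
# echoed (dry run) -- pipe the output to sh to actually execute them.
BASE="http://ambariserver:8080/api/v1/clusters/clustername"

# 1. Check the current state of the OOZIE service.
echo "curl --user admin:admin -s $BASE/services/OOZIE"

# 2. If the state is INIT, installing at the service level installs
#    every OOZIE component at once.
echo "curl --user admin:admin -i -X PUT -d '{\"ServiceInfo\": {\"state\": \"INSTALLED\"}}' $BASE/services/OOZIE"
```

If the service is already INSTALLED or STARTED, the same PUT pattern applies at the host_components level instead.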
From:  Artem Ervits 
Reply-To:  
Date:  Friday, November 22, 2013 9:16 AM
To:  "ambari-u...@incubator.apache.org" 
Subject:  RE: api questions

Sumit,
 
I tried that and I get:
 
curl -u usr:pw -i -X POST -d '{"host_components" : [{"HostRoles":{"component_name":"OOZIE_SERVER"}}] }' http://ambariserver:8080/api/v1/clusters/clustername/hosts?Hosts/host_name=machinename
HTTP/1.1 409 Conflict
Set-Cookie: AMBARISESSIONID=k9e2ashgd3i4vlzq8c6l7w9k;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 253
Server: Jetty(7.6.7.v20120910)
 
{
  "status" : 409,
  "message" : "org.apache.ambari.server.controller.spi.ResourceAlreadyExistsException: Attempted to create a host_component which already exists: [clusterName=clustername, hostName=machinename, componentName=OOZIE_SERVER]"
}
 
 
And then:
 
curl -u usr:pw  -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}'
http://ambariserver:8080/api/v1/clusters/clustername/services/OOZIE_SERVER
HTTP/1.1 404 Not Found
Set-Cookie: AMBARISESSIONID=1xa5af3najngmuwjytkx6n6rt;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 205
Server: Jetty(7.6.7.v20120910)
 
{
  "status" : 404,
  "message" : "org.apache.ambari.server.controller.spi.NoSuchResourceException: The specified resource doesn't exist: Service not found, clusterName=clustername, serviceName=OOZIE_SERVER"
}
 
 
 

From: Sumit Mohanty [mailto:smoha...@hortonworks.com]
Sent: Wednesday, November 20, 2013 7:26 PM
To: ambari-u...@incubator.apache.org
Subject: Re: api questions
 

Looks like you can INSTALL OOZIE_SERVER directly, without moving it to the
MAINTENANCE state, as it is in the INIT state.

 

Step 5 at https://cwiki.apache.org/confluence/display/AMBARI/Add+a+host+and+deploy+components+using+APIs can give you details around installing specific components.

 

Before you reinstall Oozie, ensure that it is stopped.

 

From: Artem Ervits 
Reply-To: 
Date: Wednesday, November 20, 2013 12:52 PM
To: "ambari-u...@incubator.apache.org" 
Subject: api questions

 

Hello all,
 
I was messing with Hue and inadvertently messed up my Oozie installation. I
wanted to reinstall and got myself into a situation I'm having trouble
recovering from. Using the Ambari API, I am trying to put Oozie into
maintenance mode just so I can reinstall, and I am getting the following
message:
 
"message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Invalid transition for servicecomponenthost, clusterName=NYPTest, clusterId=2, serviceName=OOZIE, componentName=OOZIE_SERVER, hostname=machine02, currentState=INIT, newDesiredState=MAINTENANCE"
 
If I try to use api to install the service I get:
 
"status" : 404,
  "message" : 
"org.apache.ambari.server.controller.spi.NoSuchResourceExceptio

Re: Upgrading a cluster's stack version

2013-12-19 Thread Sumit Mohanty
Ambari does not have any automated way of doing it. The process is a mix of
manual steps and API calls.

You can refer to http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_using_Ambari_book/content/ambari-chap9_2x.html for the details of this process.

-Sumit

From:  JOAQUIN GUANTER GONZALBEZ 
Reply-To:  
Date:  Thursday, December 19, 2013 6:20 AM
To:  "ambari-u...@incubator.apache.org" 
Subject:  Upgrading a cluster's stack version

Hello,

If I have a cluster I deployed with Ambari on HDP1.3 and I want to upgrade
it to HDP2.0, is there any way to accomplish this through Ambari? I am fine
using the API directly instead of the Web UI.

Thanks,
Ximo



This message is intended exclusively for its addressee. We only send and
receive email on the basis of the terms set out at:
http://www.tid.es/ES/PAGINAS/disclaimer.aspx



-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: Does this repository still work :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

2014-01-12 Thread Sumit Mohanty
Can you check whether you can download the hdp.repo file?

wget  
http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/hdp.repo

I just tried and I can download it.

If you can download it, then on one of your agent hosts, verify that
/etc/zypp/repos.d/HDP.repo has the same baseurl value for [HDP-1.x] as the
downloaded file above.

If all of the above are in order, please verify that zypper is able to see the
packages. One way to check is
zypper se 'hadoop*' - to see if the host can access the repo and packages.
You can also call "zypper install hadoop-sbin".
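
The checks above can be strung together into one script. This is a dry-run sketch (SUSE paths assumed): the commands are echoed rather than executed, so remove the echo prefixes to run the real checks on an agent host.

```shell
#!/bin/sh
# Dry-run sketch of the repo sanity checks described above.
REPO="http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0"

echo "wget $REPO/hdp.repo"                      # 1. is the repo reachable?
echo "grep baseurl /etc/zypp/repos.d/HDP.repo"  # 2. does the agent use the same baseurl?
echo "zypper se 'hadoop*'"                      # 3. can zypper see the packages?
echo "zypper install hadoop-sbin"               # 4. does an install actually succeed?
```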

-Sumit 

From:  王明 
Reply-To:  
Date:  Sunday, January 12, 2014 5:08 AM
To:  
Subject:  Does this repository still work
:http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

Hi, I am using Ambari 1.4.2.104. When I was deploying HDP 1.3.3, I got
this error:

--ganglia-
notice: /Stage[2]/Hdp-ganglia/Hdp::User[gmond_user]/User[nobody]/groups: groups changed 'nogroup' to 'nobody,nogroup'
err: /Stage[2]/Hdp-ganglia::Monitor/Hdp::Package[ganglia-monitor]/Hdp::Package::Process_pkg[ganglia-monitor]/Package[ganglia-gmond-3.5.0-99]/ensure: change from absent to present failed: Could not find package ganglia-gmond-3.5.0-99
notice: /Stage[2]/Hdp-ganglia::Monitor/Hdp::Package[ganglia-monitor]/Hdp::Package::Process_pkg[ganglia-monitor]/Anchor[hdp::package::ganglia-monitor::end]: Dependency Package[ganglia-gmond-3.5.0-99] has failures: true
notice: /Stage[2]/Hdp-ganglia::Monitor/Hdp::Package[ganglia-gmond-modules-python]/Hdp::Package::Process_pkg[ganglia-gmond-modules-python]/Anchor[hdp::package::ganglia-gmond-modules-python::begin]: Dependency Package[ganglia-gmond-3.5.0-99] has failures: true


-datanode---
err: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-sbin]/ensure: change from absent to present failed: Could not find package hadoop-sbin
err: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-libhdfs]/ensure: change from absent to present failed: Could not find package hadoop-libhdfs
err: /Stage[main]/Hdp-hadoop/Hdp-hadoop::Package[hadoop]/Hdp::Package[hadoop 64]/Hdp::Package::Process_pkg[hadoop 64]/Package[hadoop-pipes]/ensure: change from absent to present failed: Could not find package hadoop-pipes

I think maybe Ambari cannot download the needed files from the repo.
So, is this repo still working:
http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
If not, which one should I use?
Thanks.





Re: Does this repository still work :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

2014-01-12 Thread Sumit Mohanty
Assuming the cluster is only installed and no services are started, you can
reset and reinstall the whole cluster. The reset does not uninstall the
already installed packages.

To reset:
* Log out of ambari-server
* Stop ambari-server and all instances of ambari-agent.
* Call "ambari-server reset" (this will reset the Ambari DB and thus Ambari's
knowledge of the cluster)
* Start the ambari-server and ambari-agent instances
* Access ambari-server (you may need to do a hard refresh)
Note that you will lose data, e.g. data stored in HDFS, when you reset. So
please evaluate if you can indeed reset and restart.
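
The reset sequence can be written down as a script for review. This is a sketch only: the ambari-agent steps must run on every agent host, and since a reset is destructive the plan is printed rather than executed.

```shell
#!/bin/sh
# Print the reset plan instead of executing it; a reset wipes the
# Ambari DB, so review the plan before running any of these commands.
reset_plan() {
    echo "ambari-agent stop    # on every agent host"
    echo "ambari-server stop"
    echo "ambari-server reset  # wipes the Ambari DB"
    echo "ambari-server start"
    echo "ambari-agent start   # on every agent host"
}
reset_plan
```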

-Sumit

From:  王明 
Reply-To:  
Date:  Sunday, January 12, 2014 8:22 PM
To:  
Subject:  Re: Does this repository still work
:http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

Thanks. It still does not work.
I have manually installed Ganglia, and the Ambari UI then shows Ganglia as
installed, but other components that I did not install manually failed.
So I think Ambari has skipped the component download and install steps;
how do I enable those two steps?


2014/1/13 Sumit Mohanty 
> Can you do a "zypper clean" on all the hosts?
> 
> Next time you try to install look for the log files at
> /var/lib/ambari-agent/data.
> This location contains logs for all tasks executed on an Ambari agent host.
> Each log name includes a specific number, for example N. Pay particular
> attention to the following three files:
> 
> * site-N.pp - the puppet file corresponding to a specific task.
> * output-N.txt - output from puppet file execution.
> * errors-N.txt - error messages.
> 
> Do installation of all components fail or there are some intermittent
> failures? If download speed is slow then the tasks may be timing out.
> 
> You can see the details of requests and tasks at
> http://ambariserverhost:8080/api/v1/clusters/c1/requests/
> http://ambariserverhost:8080/api/v1/clusters/c1/requests/REQUESTID/tasks
> 
> http://ambariserverhost:8080/api/v1/clusters/c1/requests/REQUESTID/tasks/TASKID
> 
> From:  王明 
> Reply-To:  
> Date:  Sunday, January 12, 2014 5:51 PM
> To:  
> Subject:  Re: Does this repository still work
> :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
> 
> Hi, Sumit , thanks for replying.
> I have checked that I can install packages using "zypper install hadoop-sbin",
> but I still encountered those "file not found" errors.
> 
> Here is my HDP repo on agent:
> 
> -/etc/zypp/repos.d/HDP.repo---
> [HDP-1.3.3]
> name=HDP
> baseurl=http://public-repo-1.hortonworks.com/HDP/suse11/1.x/GA
> path=/
> enabled=1
> gpgcheck=0
> 
> --
> Here is my UI setting on the select stack page:
> 
> 
> 
> 
> 2014/1/13 Sumit Mohanty 
>> Can you check whether you can download the hdp.repo file?
>> 
>> wget  
>> http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/hdp.repo
>> 
>> I just tried and I can download it.
>> 
>> If you can download it, then on one of your agent hosts, verify that
>> /etc/zypp/repos.d/HDP.repo has the same base url value for [HDP-1.x] as the
>> downloaded one above.
>> 
>> If all of the above are in order, pls. verify if zypper is able to see the
>> packages. One way to check will be
>> zypper se 'hadoop*' - to see if the host can access the repo and packages.
>> You can also call "zypper install hadoop-sbin".
>> 
>> -Sumit 
>> 
>> From:  王明 
>> Reply-To:  
>> Date:  Sunday, January 12, 2014 5:08 AM
>> To:  
>> Subject:  Does this repository still work
>> :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
>> 
>> Hi, I am using ambari 1.4.2.104 ,  when i am deploying HDP 1.3.3 ,  i got
>> such error:
>> 
>> --ganglia-
>> notice: /Stage[2]/Hdp-ganglia/Hdp::User[gmond_user]/User[nobody]/groups:
>> groups changed 'nogroup' to 'nobody,nogroup'
>> err: /Stage[2]/Hdp-ganglia::Monitor/Hdp::Package[ganglia-monitor]/Hdp::Package::Process_pkg[ganglia-monitor]/Package[ganglia-gmond-3.5.0-99]/ensure: change from absent to present failed: Could not find package ganglia-gmond-3.5.0-99
>> notice: /Stage[2]/Hdp-ganglia::Monitor/Hdp::Package[ganglia-monitor]/Hdp::Package::Process_pkg[ganglia-monitor]/Anchor[hdp::package::ganglia-monitor::end]: Dependency Package[ganglia-gmond-3.5.0-99] has failures: true
>> notice: /Stage[2]/Hdp-ganglia::Monitor/Hdp::Package[ganglia-gmond-modules-python]/Hdp::Package::Process_pkg[ganglia-gmond-modules-python]/Anchor[hdp::package::ganglia-gmond-modules-python::begin]: Dependency Package[ganglia-

Re: Does this repository still work :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

2014-01-13 Thread Sumit Mohanty
Just curious: did you intentionally try the centos6 repo url -
http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/ ?

I just tried "wget http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/zookeeper/zookeeper-3.4.5.1.3.3.0-58.noarch.rpm" and it is able to download.

Which Linux distro/version are you using?

-Sumit

From:  王明 
Reply-To:  
Date:  Sunday, January 12, 2014 11:24 PM
To:  
Subject:  Re: Does this repository still work
:http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

I have manually tried "zypper install zookeeper*" and got this error:
---using http://public-repo-1.hortonworks.com/HDP/suse11/1.x/GA---
2 new packages to install.
Overall download size: 8.5 MiB. After the operation, additional 12.9 MiB
will be used.
Continue? [y/n/?] (y): y
Retrieving package zookeeper-3.4.5.1.3.3.0-58.noarch (1/2), 8.5 MiB (12.9
MiB unpacked)
Retrieving: zookeeper-3.4.5.1.3.3.0-58.noarch.rpm [error]
File './zookeeper/zookeeper-3.4.5.1.3.3.0-58.noarch.rpm' not found on medium
'http://public-repo-1.hortonworks.com/HDP/suse11/1.x/GA

-using http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/
2 new packages to install.
Overall download size: 8.5 MiB. After the operation, additional 12.9 MiB
will be used.
Continue? [y/n/?] (y): y
Retrieving package zookeeper-3.4.5.1.3.3.0-58.noarch (1/2), 8.5 MiB (12.9
MiB unpacked)
Retrieving: zookeeper-3.4.5.1.3.3.0-58.noarch.rpm [error]
File './zookeeper/zookeeper-3.4.5.1.3.3.0-58.noarch.rpm' not found on medium
'http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/'

Both of the URLs failed to download zookeeper using "zypper install
zookeeper*", so I wonder if it is a repo base URL problem.
Have you used either of them?




2014/1/13 Sumit Mohanty 
> Assuming the cluster is only installed and no services are started, you can
> reset and reinstall the whole cluster. The reset does not uninstall the
> already installed packages.
> 
> To reset:
> * Log out of ambari-server
> * Stop ambari-server and all instances of ambari-agent.
> * Call "ambari-server reset" (this will reset the Ambari DB and thus Ambari's
> knowledge of the cluster)
> * Start the ambari-server and ambari-agent instances
> * Access ambari-server (you may need to do a hard refresh)
> Note that you will lose data, e.g. data stored in HDFS, when you reset. So
> please evaluate if you can indeed reset and restart.
> 
> -Sumit
> 
> From:  王明 
> Reply-To:  
> Date:  Sunday, January 12, 2014 8:22 PM
> 
> To:  
> Subject:  Re: Does this repository still work
> :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
> 
> Thanks. It still does not work.
> I have manually installed Ganglia, and the Ambari UI then shows Ganglia as
> installed, but other components that I did not install manually failed.
> So I think Ambari has skipped the component download and install steps;
> how do I enable those two steps?
> 
> 
> 2014/1/13 Sumit Mohanty 
>> Can you do a "zypper clean" on all the hosts?
>> 
>> Next time you try to install look for the log files at
>> /var/lib/ambari-agent/data.
>> This location contains logs for all tasks executed on an Ambari agent host.
>> Each log name includes a specific number, for example N. Pay particular
>> attention to the following three files:
>> 
>> * site-N.pp - the puppet file corresponding to a specific task.
>> * output-N.txt - output from puppet file execution.
>> * errors-N.txt - error messages.
>> 
>> Do installation of all components fail or there are some intermittent
>> failures? If download speed is slow then the tasks may be timing out.
>> 
>> You can see the details of requests and tasks at
>> http://ambariserverhost:8080/api/v1/clusters/c1/requests/
>> http://ambariserverhost:8080/api/v1/clusters/c1/requests/REQUESTID/tasks
>> http://ambariserverhost:8080/api/v1/clusters/c1/requests/REQUESTID/tasks/TASKID
>> 
>> From:  王明 
>> Reply-To:  
>> Date:  Sunday, January 12, 2014 5:51 PM
>> To:  
>> Subject:  Re: Does this repository still work
>> :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
>> 
>> Hi, Sumit , thanks for replying.
>> I have checked that I can install packages using "zypper install hadoop-sbin",
>> but I still encountered those "file not found" errors.
>> 
>> Here is my HDP repo on agent:
>> 
>> -/etc/zypp/repos.d/HDP.repo---
>> [HDP-1.3.3]
>> name=HDP
>> baseurl=http://public-repo-1.hortonworks.com/HDP/suse11/1.x/GA
>> path=/
>> enabled=1
>> gpgcheck=0
>> 
>> -

Re: Does this repository still work :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

2014-01-15 Thread Sumit Mohanty
Where exactly are you with respect to the deployment using Ambari? Can you
retry the installation?

Otherwise, if it is all right with your current installation, you can "reset"
and then try the installation again.

If the installation fails this time, we can debug the puppet code to get more
details.

The way installation, or any Ambari request execution, works is that it
creates one or more tasks, and each task is executed independently. For
example, installation will create tasks for installing the NameNode,
DataNodes, etc.

If the installation fails, you can find out which task failed and we can run
that in debug mode.

To run in debug mode, find the task, the host it ran on, and its id.
Then go to the host and run the following:

export MY_RUBY_HOME=/usr/lib/ambari-agent/lib/ruby-1.8.7-p370
export PATH=/usr/lib/ambari-agent/lib/ruby-1.8.7-p370/bin:$PATH
export RUBYLIB=/usr/lib/ambari-agent/lib/facter-1.6.10/lib/:/usr/lib/ambari-agent/lib/puppet-2.7.9/lib
/usr/lib/ambari-agent/lib/puppet-2.7.9/bin/puppet apply --debug --confdir=/var/lib/ambari-agent/puppet --detailed-exitcodes /var/lib/ambari-agent/data/site-N.pp

where site-N.pp should be replaced by the actual numbered .pp file. This
number corresponds to the task id.

Say, api/v1/clusters/CLUSTERNAME/requests/2/tasks/10 failed. Then you need
to run using site-10.pp
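
Putting the pieces together, the failed task id maps directly to the .pp file name. A minimal sketch (the task id 10 and all paths follow the example above; the final command is echoed rather than executed):

```shell
#!/bin/sh
# Derive the site-N.pp path from a failed task id and print the
# puppet debug command from the instructions above (dry run).
task_id=10
pp_file="/var/lib/ambari-agent/data/site-${task_id}.pp"

echo "/usr/lib/ambari-agent/lib/puppet-2.7.9/bin/puppet apply --debug --confdir=/var/lib/ambari-agent/puppet --detailed-exitcodes $pp_file"
```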

-Sumit

From:  王明 
Reply-To:  
Date:  Wednesday, January 15, 2014 6:54 PM
To:  
Subject:  Re: Does this repository still work
:http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?

Sorry for the late response.
I use SUSE and can download RPMs and components manually using "zypper install
xxx", but Ambari cannot download them automatically.



2014/1/14 Sumit Mohanty 
> Just curious: did you intentionally try the centos6 repo url -
> http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/ ?
> 
> I just tried "wget http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/zookeeper/zookeeper-3.4.5.1.3.3.0-58.noarch.rpm" and it is able to download.
> 
> Which Linux distro/version are you using?
> 
> -Sumit
> 
> From:  王明 
> Reply-To:  
> Date:  Sunday, January 12, 2014 11:24 PM
> 
> To:  
> Subject:  Re: Does this repository still work
> :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
> 
> I have manually tried "zypper install zookeeper*" and got this error:
> ---using http://public-repo-1.hortonworks.com/HDP/suse11/1.x/GA---
> 2 new packages to install.
> Overall download size: 8.5 MiB. After the operation, additional 12.9 MiB will
> be used.
> Continue? [y/n/?] (y): y
> Retrieving package zookeeper-3.4.5.1.3.3.0-58.noarch (1/2), 8.5 MiB (12.9 MiB
> unpacked)
> Retrieving: zookeeper-3.4.5.1.3.3.0-58.noarch.rpm [error]
> File './zookeeper/zookeeper-3.4.5.1.3.3.0-58.noarch.rpm' not found on medium
> 'http://public-repo-1.hortonworks.com/HDP/suse11/1.x/GA
> 
> -using 
> http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/'
> 2 new packages to install.
> Overall download size: 8.5 MiB. After the operation, additional 12.9 MiB will
> be used.
> Continue? [y/n/?] (y): y
> Retrieving package zookeeper-3.4.5.1.3.3.0-58.noarch (1/2), 8.5 MiB (12.9 MiB
> unpacked)
> Retrieving: zookeeper-3.4.5.1.3.3.0-58.noarch.rpm [error]
> File './zookeeper/zookeeper-3.4.5.1.3.3.0-58.noarch.rpm' not found on medium
> 'http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.3.0/'
> 
> Both of the URLs failed to download zookeeper using "zypper install
> zookeeper*", so I wonder if it is a repo base URL problem.
> Have you used either of them?
> 
> 
> 
> 
> 2014/1/13 Sumit Mohanty 
>> Assuming the cluster is only installed and no services are started, you can
>> reset and reinstall the whole cluster. The reset does not uninstall the
>> already installed packages.
>> 
>> To reset:
>> * Log out of ambari-server
>> * Stop ambari-server and all instances of ambari-agent.
>> * Call "ambari-server reset" (this will reset the Ambari DB and thus Ambari's
>> knowledge of the cluster)
>> * Start the ambari-server and ambari-agent instances
>> * Access ambari-server (you may need to do a hard refresh)
>> Note that you will lose data, e.g. data stored in HDFS, when you reset. So
>> please evaluate if you can indeed reset and restart.
>> 
>> -Sumit
>> 
>> From:  王明 
>> Reply-To:  
>> Date:  Sunday, January 12, 2014 8:22 PM
>> 
>> To:  
>> Subject:  Re: Does this repository still work
>> :http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/ ?
>> 
>> thanks,  it does still not work,
>> i have manually installed ganglia ,  then ambari UI shows ganglia installed,
>>

Re: Ambari upgrade 1.4.1 to 1.4.2

2014-01-24 Thread Sumit Mohanty
An upgrade should not wipe out the database. The only command that cleans up
the database is "ambari-server reset".

Can you share the document link you used for upgrade?

Is the DB Postgres?

-Sumit


On Fri, Jan 24, 2014 at 10:16 AM, Vinod Kumar Vavilapalli <
vino...@hortonworks.com> wrote:

> +user@ambari -user@hadoop
>
> Please post ambari related questions to the ambari user mailing list.
>
> Thanks
> +Vinod
> Hortonworks Inc.
> http://hortonworks.com/
>
>
> On Fri, Jan 24, 2014 at 9:15 AM, Kokkula, Sada <
> sadanandam.kokk...@bnymellon.com> wrote:
>
>>
>>
>> The Ambari-Server upgrade from 1.4.1 to 1.4.2 wipes out the Ambari database
>> during the upgrade. After that, I am not able to open the Ambari Server GUI.
>>
>> I reviewed the Hortonworks web site for help, but the steps in the doc did
>> not help fix the issue.
>>
>>
>>
>> Appreciated for any updates.
>>
>>
>>
>> Thanks,
>>
>> The information contained in this e-mail, and any attachment, is
>> confidential and is intended solely for the use of the intended recipient.
>> Access, copying or re-use of the e-mail or any attachment, or any
>> information contained therein, by any other person is not authorized. If
>> you are not the intended recipient please return the e-mail to the sender
>> and delete it from your computer. Although we attempt to sweep e-mail and
>> attachments for viruses, we do not guarantee that either are virus-free and
>> accept no liability for any damage sustained as a result of viruses.
>>
>> Please refer to http://disclaimer.bnymellon.com/eu.htm for certain
>> disclosures relating to European legal entities.
>>
>
>



Re: Fw: Re: Unable to load logon screen of Ambari Web ui

2014-02-04 Thread Sumit Mohanty
Meghavi,

Looks like you are using RHEL, and it may require registration to download
packages. Try searching for "This system is not registered with RHN" on
Google; there are a few articles on this topic.

You can use the command line to manually call "yum install lzo" and see if
that works.

-Sumit


On Tue, Feb 4, 2014 at 2:57 AM, Meghavi Sugandhi
wrote:

> Hi Yusaku,
>
> Thanks for your continuous help.
> Now we have come to the install step.
> When we try to install the cluster on the host on which we have manually
> installed the ambari-agent, it gives the following error during the
> installation of the datanode:
>
>
>
> warning: Could not retrieve fact fqdn
> warning: Host is missing hostname and/or domain: sandeep
> notice: Finished catalog run in 0.07 seconds
> err: /Stage[1]/Hdp/Hdp::Lzo::Package[64]/Hdp::Package[lzo
> 64]/Hdp::Package::Process_pkg[lzo 64]/Package[lzo]/ensure: change from
> absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install
> lzo' returned 1: This system is not registered with RHN.
> RHN support will be disabled.
> Error: Nothing to do
>
> err: /Stage[1]/Hdp/Hdp::Lzo::Package[64]/Hdp::Package[lzo
> 64]/Hdp::Package::Process_pkg[lzo 64]/Package[lzo-devel]/ensure: change
> from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y
> install lzo-devel' returned 1: This system is not registered with RHN.
> RHN support will be disabled.
> Error: Nothing to do
>
> err:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Package[snappy]/ensure:
> change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0
> -y install snappy' returned 1: This system is not registered with RHN.
> RHN support will be disabled.
> Error: Nothing to do
>
> notice: /Stage[1]/Hdp/Hdp::Lzo::Package[64]/Hdp::Package[lzo
> 64]/Hdp::Package::Process_pkg[lzo 64]/Anchor[hdp::package::lzo 64::end]:
> Dependency Package[lzo-devel] has failures: true
> notice: /Stage[1]/Hdp/Hdp::Lzo::Package[64]/Hdp::Package[lzo
> 64]/Hdp::Package::Process_pkg[lzo 64]/Anchor[hdp::package::lzo 64::end]:
> Dependency Package[lzo] has failures: true
> notice:
> /Stage[1]/Hdp/Hdp::Lzo::Package[64]/Anchor[hdp::lzo::package::64::end]:
> Dependency Package[lzo-devel] has failures: true
> notice:
> /Stage[1]/Hdp/Hdp::Lzo::Package[64]/Anchor[hdp::lzo::package::64::end]:
> Dependency Package[lzo] has failures: true
> err:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Package[snappy-devel]/ensure:
> change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0
> -y install snappy-devel' returned 1: This system is not registered with RHN.
> RHN support will be disabled.
> Error: Nothing to do
>
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Anchor[hdp::package::snappy::end]:
> Dependency Package[snappy-devel] has failures: true
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Package[snappy]/Hdp::Package::Process_pkg[snappy]/Anchor[hdp::package::snappy::end]:
> Dependency Package[snappy] has failures: true
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln
> 32]/Anchor[hdp::exec::hdp::snappy::package::ln 32::begin]: Dependency
> Package[snappy-devel] has failures: true
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln
> 32]/Anchor[hdp::exec::hdp::snappy::package::ln 32::begin]: Dependency
> Package[snappy] has failures: true
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[64]/Hdp::Exec[hdp::snappy::package::ln
> 64]/Anchor[hdp::exec::hdp::snappy::package::ln 64::begin]: Dependency
> Package[snappy-devel] has failures: true
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[64]/Hdp::Exec[hdp::snappy::package::ln
> 64]/Anchor[hdp::exec::hdp::snappy::package::ln 64::begin]: Dependency
> Package[snappy] has failures: true
> OK
> Licensed under the Apache License, Version 2.0.
> See third-party tools/resources that Ambari uses and their respective
> authors
>
> Please provide some solution.
>
>
>
> Thanks & Regards,
> Meghavi Sugandhi
> Tata Consultancy Services
> Mailto: meghavi.sugan...@tcs.com
> Website: http://www.tcs.com
>
>
> -Yusaku Sako  wrote: -
>
> To: user@ambari.apache.org
> From: Yusaku Sako 
> Date: 02/01/2014 01:13AM
>
> Subject: Re: Fw: Re: Unable to load logon screen of Ambari Web ui
>
> Hi Meghavi,
>
> Glad to see that you can load the Web UI now.
> It is the expected behavior; you first need to deploy a cluster with
> Ambari in order to manage it.
> You only see HDP stacks because that's what's been contributed to Ambari
> so far, but Ambari is designed to support other stacks; for example there
> has been an effort to deploy a custom stack using a non-HDFS distributed
> file system.
> Also, there have been success stories of having Ambari manage existing,
> already-installed clusters of various d

Re: Option to not install a master component

2014-02-07 Thread Sumit Mohanty
Are you using the APIs to install services? If so, you can install only
specific components.
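
For example, a single component of a custom service can be mapped to a host and then installed with two calls. This is a sketch: the cluster name c1, host host1, and component name MY_MASTER are placeholders, and the commands are echoed rather than executed.

```shell
#!/bin/sh
# Dry-run sketch: map one component to a host, then install it.
# c1, host1 and MY_MASTER are placeholders for your cluster.
BASE="http://ambariserver:8080/api/v1/clusters/c1"

echo "curl -u admin:admin -i -X POST $BASE/hosts/host1/host_components/MY_MASTER"
echo "curl -u admin:admin -i -X PUT -d '{\"HostRoles\": {\"state\": \"INSTALLED\"}}' $BASE/hosts/host1/host_components/MY_MASTER"
```

Components left out of the POST step are simply never installed on that host.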

-Sumit


On Fri, Feb 7, 2014 at 1:01 PM, Anisha Agarwal wrote:

>  Hi,
>
>  We have a custom service, containing multiple components which we add to
> the ambari cluster.
> I wanted to have the option of not installing a master
> component. This could also mean
>
>1. Adding a "None" option to the available hostnames during install
>time.
>2. Being able to select which components from a service we want to
>install.
>
> Can anyone please give me a few pointers on how to achieve this?
>
>  Thanks,
> Anisha
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: Client-only service differences between 1.4.1 and 1.4.3

2014-02-20 Thread Sumit Mohanty
The state of the service is now calculated based on the actual states of
the *master host components* that belong to the service. E.g. if a master
component (NAMENODE for HDFS) is INSTALLED then the state of the service
will be INSTALLED.

What is happening in the case of client-only services is that the code picks
up the default value, which happens to be STARTED. This could instead be
computed based on the state of all non-master components - e.g. if all are
INSTALLED then the service state is INSTALLED. Please open a JIRA and we can
discuss it over there.
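
The suggested rule can be sketched as a tiny shell function. This is purely illustrative of the proposal above, not Ambari's actual implementation: a client-only service would be INSTALLED only when every one of its component states is INSTALLED, otherwise it falls back to the current STARTED default.

```shell
# Illustrative sketch of the proposed rule, not Ambari's real code:
# a client-only service is INSTALLED iff all its components are INSTALLED.
service_state() {
  for comp in "$@"; do
    if [ "$comp" != "INSTALLED" ]; then
      echo STARTED   # fall back to the current default
      return
    fi
  done
  echo INSTALLED
}

service_state INSTALLED INSTALLED   # -> INSTALLED
service_state INSTALLED STARTED     # -> STARTED
```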

I would recommend that, as part of the above JIRA, we add some unit tests
covering client-only services and the state of the service with respect to
the state of the client.

-Sumit


On Thu, Feb 20, 2014 at 6:05 AM, JOAQUIN GUANTER GONZALBEZ wrote:

>  Hello,
>
>  I am evaluating upgrading Ambari from 1.4.1 to 1.4.3 and I am seeing a
> difference in behavior; I can't quite tell if it is a bug or a change in
> design. In 1.4.1, services that consisted only of CLIENT components would
> not transition to the "STARTED" state and would remain in "INSTALLED"
> state. It seems like this is no longer the case in 1.4.3, where this
> client-only service is now in the "STARTED" state.
>
>  Is this a design change which I should account for in my code, or is this a
> bug and the 1.4.1 behavior will eventually come back in newer releases?
>
>  Thanks,
> Ximo
>
> --
>
> Este mensaje se dirige exclusivamente a su destinatario. Puede consultar
> nuestra política de envío y recepción de correo electrónico en el enlace
> situado más abajo.
> This message is intended exclusively for its addressee. We only send and
> receive email on the basis of the terms set out at:
> http://www.tid.es/ES/PAGINAS/disclaimer.aspx
>



Re: UPGRADE

2014-02-25 Thread Sumit Mohanty
Hi Aaron,

Ambari does not support automatic upgrade of the stack yet. What you see
are remnants of the feature that we started working on but it is not
complete.

Upgrade, for now, is manual and there are documented steps on how to do it.

-Sumit


On Tue, Feb 25, 2014 at 11:14 AM, Aaron Cody  wrote:

> Hello
> I'm trying to figure out how upgrade works in Ambari ... I see there is an
> UPGRADE role and various upgrade states in the state machine, but so far I
> haven't spotted any information as to how it is supposed to work .. (i'm
> using 1.2.4) .. is it a work in progress?
> - how do you upgrade from the GUI?
> - how do you upgrade from the REST api?
>
> thanks
>
>



Re: UPGRADE

2014-02-25 Thread Sumit Mohanty
We do plan to support automatic upgrade as a feature. At this point, this
effort is not scheduled for any specific release yet.

-Sumit


On Tue, Feb 25, 2014 at 11:41 AM, Aaron Cody  wrote:

> ok thanks Sumit - is this a feature you plan to finish or is it now
> abandoned for the 'manual' method?
>
> From: Sumit Mohanty 
> Reply-To: 
> Date: Tue, 25 Feb 2014 11:26:41 -0800
> To: 
> Subject: Re: UPGRADE
>
> Hi Aaron,
>
> Ambari does not support automatic upgrade of the stack yet. What you see
> are remnants of the feature that we started working on but it is not
> complete.
>
> Upgrade, for now, is manual and there are documented steps on how to do it.
>
> -Sumit
>
>
> On Tue, Feb 25, 2014 at 11:14 AM, Aaron Cody  wrote:
>
>> Hello
>> I'm trying to figure out how upgrade works in Ambari ... I see there is an
>> UPGRADE role and various upgrade states in the state machine, but so far I
>> haven't spotted any information as to how it is supposed to work .. (i'm
>> using 1.2.4) .. is it a work in progress?
>> - how do you upgrade from the GUI?
>> - how do you upgrade from the REST api?
>>
>> thanks
>>
>>
>
>



Re: UPGRADE

2014-02-25 Thread Sumit Mohanty
We have not thought of that yet. Some of the current design will remain, but
Ambari now supports custom commands/actions as well as declarative
application specifications. That provides more tools to use during an
upgrade, so we will definitely try to take advantage of them.

Let me dig up the JIRAs with the details and share them.

-Sumit


On Tue, Feb 25, 2014 at 12:26 PM, Aaron Cody  wrote:

> Will you be finishing the existing implementation or going in a different
> direction? Can you share any preliminary design docs?
> thanks
>
> From: Sumit Mohanty 
> Reply-To: 
> Date: Tue, 25 Feb 2014 12:03:18 -0800
>
> To: 
> Subject: Re: UPGRADE
>
> We do plan to support automatic upgrade as a feature. At this point, this
> effort is not scheduled for any specific release yet.
>
> -Sumit
>
>
> On Tue, Feb 25, 2014 at 11:41 AM, Aaron Cody  wrote:
>
>> ok thanks Sumit - is this a feature you plan to finish or is it now
>> abandoned for the 'manual' method?
>>
>> From: Sumit Mohanty 
>> Reply-To: 
>> Date: Tue, 25 Feb 2014 11:26:41 -0800
>> To: 
>> Subject: Re: UPGRADE
>>
>> Hi Aaron,
>>
>> Ambari does not support automatic upgrade of the stack yet. What you see
>> are remnants of the feature that we started working on but it is not
>> complete.
>>
>> Upgrade, for now, is manual and there are documented steps on how to do
>> it.
>>
>> -Sumit
>>
>>
>> On Tue, Feb 25, 2014 at 11:14 AM, Aaron Cody wrote:
>>
>>> Hello
>>> I'm trying to figure out how upgrade works in Ambari ... I see there is an
>>> UPGRADE role and various upgrade states in the state machine, but so far I
>>> haven't spotted any information as to how it is supposed to work .. (i'm
>>> using 1.2.4) .. is it a work in progress?
>>> - how do you upgrade from the GUI?
>>> - how do you upgrade from the REST api?
>>>
>>> thanks
>>>
>>>
>>
>>
>
>
>



Re: UPGRADE

2014-02-26 Thread Sumit Mohanty
https://issues.apache.org/jira/browse/AMBARI-1425 has the overall design
and all related JIRAs. A number of the commits have since been removed or
commented out as the feature was never used/regression tested.

In any case, we should apply a fresh perspective and focus on upgrades of
the latest stack releases and beyond. Let's open a new JIRA (let me know if
you want to do it) and discuss the design and implementation requirements.

-Sumit


On Tue, Feb 25, 2014 at 12:36 PM, Sumit Mohanty wrote:

> We have not thought of that yet. Some of the current design will remain
> but Ambari has support for custom command/action as well as declarative
> application specification now. That allows for some more tools to use
> during upgrade. So we will definitely try to take advantage of that.
>
> Let me dig up the JIRAs with the details and share them.
>
> -Sumit
>
>
> On Tue, Feb 25, 2014 at 12:26 PM, Aaron Cody  wrote:
>
>> Will you be finishing the existing implementation or going in a different
>> direction? Can you share any preliminary design docs?
>> thanks
>>
>> From: Sumit Mohanty 
>> Reply-To: 
>> Date: Tue, 25 Feb 2014 12:03:18 -0800
>>
>> To: 
>> Subject: Re: UPGRADE
>>
>> We do plan to support automatic upgrade as a feature. At this point, this
>> effort is not scheduled for any specific release yet.
>>
>> -Sumit
>>
>>
>> On Tue, Feb 25, 2014 at 11:41 AM, Aaron Cody wrote:
>>
>>> ok thanks Sumit - is this a feature you plan to finish or is it now
>>> abandoned for the 'manual' method?
>>>
>>> From: Sumit Mohanty 
>>> Reply-To: 
>>> Date: Tue, 25 Feb 2014 11:26:41 -0800
>>> To: 
>>> Subject: Re: UPGRADE
>>>
>>> Hi Aaron,
>>>
>>> Ambari does not support automatic upgrade of the stack yet. What you see
>>> are remnants of the feature that we started working on but it is not
>>> complete.
>>>
>>> Upgrade, for now, is manual and there are documented steps on how to do
>>> it.
>>>
>>> -Sumit
>>>
>>>
>>> On Tue, Feb 25, 2014 at 11:14 AM, Aaron Cody wrote:
>>>
>>>> Hello
>>>> I'm trying to figure out how upgrade works in Ambari ... I see there is
>>>> an UPGRADE role and various upgrade states in the state machine, but so far
>>>> I haven't spotted any information as to how it is supposed to work .. (i'm
>>>> using 1.2.4) .. is it a work in progress?
>>>> - how do you upgrade from the GUI?
>>>> - how do you upgrade from the REST api?
>>>>
>>>> thanks
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>



Re: Error installing datanode and tasktracker

2014-02-26 Thread Sumit Mohanty
Looks like the datanode start script succeeded but the datanode failed soon
after. Check whether the datanode is running by looking for the process
whose id is in /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid.

If not, check the datanode log file at
/var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log.
It should say why the datanode failed. Similarly, you can check the
tasktracker log file under /var/log/hadoop as well. Look for log files with
tasktracker in the name.
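
The two checks above can be scripted. The paths below are the ones mentioned in this thread and may differ on your install; the function just reports the daemon state and, if it is down, points at the newest log file.

```shell
# Paths as described above; adjust if your layout differs.
PID_FILE=/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
LOG_DIR=/var/log/hadoop/hdfs

check_datanode() {
  if [ -f "$PID_FILE" ] && ps -p "$(cat "$PID_FILE")" >/dev/null 2>&1; then
    echo "datanode running (pid $(cat "$PID_FILE"))"
  else
    echo "datanode not running; newest log:"
    # The most recent log file usually contains the failure reason.
    ls -t "$LOG_DIR"/hadoop-hdfs-datanode-*.log 2>/dev/null | head -n 1
  fi
}

check_datanode
```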

Share the errors and we can help debug it. What version of Ambari and stack
you are using?

-Sumit


On Wed, Feb 26, 2014 at 3:36 AM, AKSHATHA SATHYANARAYAN <
akshath...@samsung.com> wrote:

>  Hello All,
>
>
>
> I am getting the following error while installing datanode and
> tasktracker. Any help/suggestion is appreciated.
>
>
>
> err:
> /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep
> 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps
> `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null
> 2>&1]/Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
> >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid`
> >/dev/null 2>&1]/returns: change from notrun to 0 failed: sleep 5; ls
> /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat
> /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1 returned 1
> instead of one of [0] at
> /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:487
>
>
>
> Thanks,
>
> Akshata
>
>
>



Re: Re: Error installing datanode and tasktracker

2014-02-26 Thread Sumit Mohanty
Akshatha,

Looks like dfs.datanode.data.dir is set to "/hdfs/hadoop/hdfs/data/storage".
Is this the value provided manually when you configured HDFS?

Can you check whether the folder exists and is writable by the HDFS user?
The default HDFS user name is "hdfs", but it's also possible to change the
username; you can check whether a custom username is being used by looking
at "hdfs_user" under the global config type.
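
A quick way to run the permission check. The data directory is the value reported in the log above; the check runs as the current user, so run it (or the equivalent `sudo -u hdfs` one-liner in the comment) as the actual HDFS user on the datanode host.

```shell
DATA_DIR=/hdfs/hadoop/hdfs/data/storage   # value reported in the datanode log

check_writable() {
  # "writable" if the directory exists and the current user can write to it.
  if [ -d "$1" ] && [ -w "$1" ]; then echo writable; else echo "not writable"; fi
}

# Run as the HDFS user, e.g.:
#   sudo -u hdfs sh -c '[ -w /hdfs/hadoop/hdfs/data/storage ] && echo writable'
check_writable "$DATA_DIR"
```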

Is it a fresh install?

-Sumit


On Wed, Feb 26, 2014 at 4:10 PM, AKSHATHA SATHYANARAYAN <
akshath...@samsung.com> wrote:

>  Hello Sumit,
>
>
>
> Thanks for your reply.
>
> I am using Hadoop 1.2.0 and ambari 1.4.4
>
>
>
> datanode log file shows the following error:
>
>
>
> STARTUP_MSG:   version = 1.2.0.1.3.2.0-111
> STARTUP_MSG:   build = git://c64-s8/ on branch comanche-branch-1 -r
> 3e43bec958e627d53f02d2842f6fac24a93110a9; compiled by 'jenkins' on Mon Aug
> 19 18:34:32 PDT 2013
> STARTUP_MSG:   java = 1.6.0_31
> /
> 2014-02-26 23:34:12,010 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2014-02-26 23:34:12,051 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 2014-02-26 23:34:12,052 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2014-02-26 23:34:12,052 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
> started
> 2014-02-26 23:34:12,137 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 2014-02-26 23:34:12,436 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> java.io.FileNotFoundException: /hdfs/hadoop/hdfs/data/storage (Permission
> denied)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.isConversionNeeded(DataStorage.java:194)
> at
> org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:689)
> at
> org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:57)
> at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:458)
> at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:111)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> 2014-02-26 23:34:12,438 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>
>
>
> Thanks,
>
> Akshatha
>
>
>
>
>
>
>
>
>
> --- *Original Message* ---
>
> *Sender* : Sumit Mohanty
>
> *Date* : Feb 27, 2014 02:07 (GMT+09:00)
>
> *Title* : Re: Error installing datanode and tasktracker
>
>
>  Looks like datanode start script succeeded but datanode failed soon
> after.
> Try to see if datanode is running by checking the process with id in
> /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid.
>
> If not then check the datanode log file at 
> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log.
> This should say why datanode failed. Similarly, you can check the task
> tracker log file at /var/log/hadoop as well. Look for log files with
> trasktracker in the name.
>
> Share the errors and we can help debug it. What version of Ambari and
> stack you are using?
>
> -Sumit
>
>
> On Wed, Feb 26, 2014 at 3:36 AM, AKSHATHA SATHYANARAYAN <
> akshath...@samsung.com> wrote:
>
>>  Hello All,
>>
>>
>>
>> I am getting the following error while installing datanode and
>> tasktracker. Any help/suggestion is appreciated.
>>
>>
>>
>> err:
>> /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep
>> 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps
>> `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null

Re: REST api question

2014-02-26 Thread Sumit Mohanty
You need to use this API to stop the component. The reason it's different is
that stopping a host component is processed on the host-component resource.

curl -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Stop
Component"},"Body":{"HostRoles":{"state":"INSTALLED"}}}'
http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1/hosts/HOSTNAME/host_components/COMPONENT_NAME

You can use the wiki to get sample API calls -
https://cwiki.apache.org/confluence/display/AMBARI/API+usage+scenarios%2C+troubleshooting%2C+and+other+FAQs
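
On the original question of stopping every instance of a component at once: the host_components endpoint accepts a query predicate, so a single PUT can target all hosts running a given component. Treat this as a sketch to verify against your Ambari version; the cluster name c1 is carried over from the examples above, and the function echoes the command (dry run) rather than issuing it.

```shell
BASE="http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1"

# Echo (dry run) the bulk-stop call; remove 'echo' to actually send it.
stop_all_instances() {
  echo curl -u admin:admin -i -X PUT \
    -d "'{\"RequestInfo\":{\"context\":\"Stop all $1\"},\"Body\":{\"HostRoles\":{\"state\":\"INSTALLED\"}}}'" \
    "$BASE/host_components?HostRoles/component_name=$1"
}

stop_all_instances DATANODE
```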


On Wed, Feb 26, 2014 at 4:30 PM, Aaron Cody  wrote:

> we can stop an entire service like so:
>
> curl --user admin:admin -i -X PUT -d
> '{"ServiceInfo":{"state":"INSTALLED"}}'
> http://ambari-master.foo.com:8080/api/v1/clusters/c1/services/HDFS
>
> but how can I stop all instances of a running component? I had hoped with
> something like :
>
> curl --user admin:admin -i -X PUT -d
> '{"ServiceInfo":{"state":"INSTALLED"}}'
> http://ambari-master.foo.com:8080/api/v1/clusters/c1/services/HDFS/components/DATANODE
>
> but this throws an invalid property exception
>
> is my URL wrong or are we not allowed to do this sort of thing?
>
>



Re: 1.4.4 install guide uses the wrong link

2014-03-05 Thread Sumit Mohanty
Thanks Gunnar.

You may have already figured it out, but the RHEL 6 link will be
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.4.23/ambari.repo

We will edit the document.


On Wed, Mar 5, 2014 at 9:10 PM, Tapper, Gunnar  wrote:

>
> https://cwiki.apache.org/confluence/display/AMBARI/Install+Ambari-1.4.4+from+public+repositories
>
>
>
> RHel6 link is wrong.
>
>
>
> Thanks,
>
>
>
> Gunnar
>
>
>
> *The person that says it cannot be done should not interrupt the person
> doing it.*
>
>
>



Re: changing conf using API > is it persisted

2014-03-19 Thread Sumit Mohanty
There are no automatic updates. The set call using configs.sh will persist
the change. Persisting a config means adding a new version of the config
and applying it to the cluster resource.

Where did you see it reverted?

Go to the cluster resource - e.g.
http://c6402.ambari.apache.org:8080/api/v1/clusters/c1 and check the
applied version of the hdfs-site (see desired_configs field).
Then look at the config with the same version - e.g.
http://c6402.ambari.apache.org:8080/api/v1/clusters/c1/configurations?type=hdfs-site&tag=version1
Does it have your change? If not, you can look at other versions of the
hdfs-site config type and use them to figure out what may have happened.
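
The verification steps above can be collected into two calls. The hostname and the version1 tag are the placeholders from the example URLs; both functions echo the GET (dry run), so remove the leading `echo` to actually run them.

```shell
BASE="http://c6402.ambari.apache.org:8080/api/v1/clusters/c1"

# Echo (dry run) the GETs described above; remove 'echo' to run them.
show_desired_configs() {   # inspect Clusters/desired_configs in the response
  echo curl -u admin:admin "$BASE"
}
show_config_version() {    # e.g. show_config_version hdfs-site version1
  echo curl -u admin:admin "$BASE/configurations?type=$1&tag=$2"
}

show_desired_configs
show_config_version hdfs-site version1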

As far as the leprechaun is concerned, it would need to be REST-aware to
make this change. So let us know.


On Wed, Mar 19, 2014 at 9:31 AM, Mathieu Despriee wrote:

> Hi folks,
>
> I have a cluster on which I changed the dfs.namenode.http-address in
> hdfs-site using the script
> /var/lib/ambari-server/resources/scripts/configs.sh .
>
> Everything went fine. But after some time, I noticed this parameter value
> had been reverted to the previous value.
> Is there any automatic thing running that I am not aware of?
> Is there any action needed to persist changes?
> Is there a leprechaun on my cluster?
>
> Thanks for your help
>
> Mathieu
>



Re: Creating new stacks and services....target 1.4.x or 1.5?

2014-03-20 Thread Sumit Mohanty
In fact, the Python support makes it much easier to add custom services and
custom scripts, so I encourage you to try it and provide feedback.

Python support is at par with Puppet support, so it's mature.


On Thu, Mar 20, 2014 at 12:42 PM, Erin Boyd  wrote:

> It's my understanding that the idea is to move away from puppet to Python
> in order to ease testing and deployment of singular components within the
> stack.
> I believe the release for 1.5.0 is going to be in April.
> I also believe the changes for Python (using the new services
> architecture) have been backported all the way to 1.3; therefore you should
> be able to deploy those stacks using 1.5.
>
> Erin
>
>
> - Original Message -
> From: "Chris Mildebrandt" 
> To: user@ambari.apache.org, d...@ambari.apache.org
> Sent: Thursday, March 20, 2014 12:21:12 PM
> Subject: Creating new stacks and services....target 1.4.x or 1.5?
>
> Hey all,
>
> We'd like to create a custom stack and scripts for managing components.
> From my initial searching, I see support for both puppet and python right
> now in the 1.5 branch, and 1.4.x is purely puppet. So, I have the following
> questions:
>
> 1) Does this reflect a movement away from puppet in the future, like in
> 1.6.x?
> 2) How mature will the python support be once 1.5 is released?
> 3) What is the expected release date of 1.5?
> 4) Will I continue to be able to deploy HDP 1.3.x and 2.0.x using Ambari
> 1.5?
>
> Since we're just starting out, I'd like to make sure we're moving in a
> direction that will be supported for quite some time. And personally I'd
> rather use python rather than puppet. Can someone shed some light on the
> direction so I can make an informed decision for my code?
>
> Thanks,
> -Chris
>



Re: stuck in historyServer installation

2014-03-20 Thread Sumit Mohanty
These could be the reason:
  useradd: Can't get unique system GID (no more available GIDs)
  useradd: can't create group

See if
http://superuser.com/questions/666505/creating-new-user-using-useradd-but-fails-since-its-unable-to-create-group-in-os
solves it?
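
To confirm that GID exhaustion is the culprit, compare the system GIDs already allocated against the system GID range of the host. The 1000 cutoff below is a common system/user boundary but not universal; the authoritative range is SYS_GID_MIN/SYS_GID_MAX in /etc/login.defs.

```shell
# System GIDs already in use (1000 is a common system/user boundary;
# the real range is SYS_GID_MIN..SYS_GID_MAX in /etc/login.defs).
used_system_gids() {
  awk -F: '$3 < 1000 { print $3 }' /etc/group | sort -n
}

used_system_gids | tail -n 5   # the highest few system GIDs in use
grep -E 'SYS_GID_(MIN|MAX)' /etc/login.defs 2>/dev/null || true
```

If the highest used system GID is at (or near) SYS_GID_MAX, `useradd` has no free system GID left, which matches the "Can't get unique system GID" error above.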


On Thu, Mar 20, 2014 at 4:55 PM, Anfernee Xu  wrote:

> Hi,
>
> I followed the instruction mentioned in
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_using_Ambari_book/content/ambari-chap1-1.html,
> my goal is evaluating the single host deployment before put it into
> production.
>
> I have got Ambari server/agent running and got some
> components(datanode/client) installed without problem, but keep failing at
> historyserver installation with below error.
>
> BTW, because my environment is kind of restricted by my IT, so I used my
> local account/group for all service user/group.
>
> Thanks for your help.
>
>
> stdout:
>
> notice: Finished catalog run in 0.14 seconds
> notice: /Stage[1]/Hdp::Snmp/Exec[snmpd_autostart]/returns: executed
> successfully
> notice:
> /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln
> 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
> err:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Package[yarn-common]/Hdp::Package[yarn-common]/Hdp::Package::Process_pkg[yarn-common]/Package[hadoop-yarn]/ensure:
> change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0
> -y install hadoop-yarn' returned 1: Error in PREIN scriptlet in rpm package
> hadoop-yarn-2.2.0.2.0.6.0-102.el6.x86_64
> useradd: Can't get unique system GID (no more available GIDs)
> useradd: can't create group
> error: %pre(hadoop-yarn-2.2.0.2.0.6.0-102.el6.x86_64) scriptlet failed,
> exit status 4
> error:   install: %pre scriptlet failed (2), skipping
> hadoop-yarn-2.2.0.2.0.6.0-102.el6
>
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Package[yarn-common]/Hdp::Package[yarn-common]/Hdp::Package::Process_pkg[yarn-common]/Anchor[hdp::package::yarn-common::end]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Package[yarn-common]/Anchor[hdp-yarn::package::yarn-common::end]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp::Configfile[/etc/security/limits.d/yarn.conf]/File[/etc/security/limits.d/yarn.conf]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Hdp::Configfile[/etc/hadoop/conf/hadoop-env.sh]/File[/etc/hadoop/conf/hadoop-env.sh]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Hdp::Configfile[/etc/hadoop/conf/yarn-env.sh]/File[/etc/hadoop/conf/yarn-env.sh]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[capacity-scheduler]/File[/etc/hadoop/conf/capacity-scheduler.xml]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[core-site]/File[/etc/hadoop/conf/core-site.xml]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[yarn-site]/File[/etc/hadoop/conf/yarn-site.xml]:
> Dependency Package[hadoop-yarn] has failures: true
> notice:
> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[mapred-site]/File[/etc/hadoop/conf/mapred-site.xml]:
> Dependency Package[hadoop-yarn] has failures: true
> notice: /Stage[2]/Hdp-yarn::Initialize/Anchor[hdp-yarn::initialize::end]:
> Dependency Package[hadoop-yarn] has failures: true
> notice: Finished catalog run in 24.29 seconds
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: stuck in historyServer installation

2014-03-20 Thread Sumit Mohanty
Try running "/usr/bin/yum -y install hadoop-yarn" as the same user as
ambari-agent and ensure it succeeds.
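The underlying failure is useradd running out of free system GIDs. A quick way to confirm that before retrying the install (the 201-999 range is a common RHEL 6 default and an assumption here; check SYS_GID_MIN/SYS_GID_MAX in /etc/login.defs):

```shell
# Count how much of the system GID range is already taken, which is
# what makes the package's useradd fail.
SYS_GID_MIN=201
SYS_GID_MAX=999
used=$(awk -F: -v lo="$SYS_GID_MIN" -v hi="$SYS_GID_MAX" \
        '$3 >= lo && $3 <= hi' /etc/group | wc -l)
total=$((SYS_GID_MAX - SYS_GID_MIN + 1))
echo "system GIDs in use: $used of $total"

# If the range is exhausted, pre-creating the group with an explicit
# free GID lets the package's %pre scriptlet succeed, e.g. (as root;
# the GID and group name are examples):
#   groupadd -g 1200 yarn
```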


On Thu, Mar 20, 2014 at 5:14 PM, Sumit Mohanty wrote:

> These could be the reason:
>   useradd: Can't get unique system GID (no more available GIDs)
>   useradd: can't create group
>
> See if
> http://superuser.com/questions/666505/creating-new-user-using-useradd-but-fails-since-its-unable-to-create-group-in-os
> solves it?
>
>
> On Thu, Mar 20, 2014 at 4:55 PM, Anfernee Xu wrote:
>
>> Hi,
>>
>> I followed the instruction mentioned in
>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_using_Ambari_book/content/ambari-chap1-1.html,
>> my goal is evaluating the single host deployment before put it into
>> production.
>>
>> I have got Ambari server/agent running and got some
>> components(datanode/client) installed without problem, but keep failing at
>> historyserver installation with below error.
>>
>> BTW, because my environment is kind of restricted by my IT, so I used my
>> local account/group for all service user/group.
>>
>> Thanks for your help.
>>
>>
>> stdout:
>>
>> notice: Finished catalog run in 0.14 seconds
>> notice: /Stage[1]/Hdp::Snmp/Exec[snmpd_autostart]/returns: executed
>> successfully
>> notice:
>> /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln
>> 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
>> err:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Package[yarn-common]/Hdp::Package[yarn-common]/Hdp::Package::Process_pkg[yarn-common]/Package[hadoop-yarn]/ensure:
>> change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0
>> -y install hadoop-yarn' returned 1: Error in PREIN scriptlet in rpm package
>> hadoop-yarn-2.2.0.2.0.6.0-102.el6.x86_64
>> useradd: Can't get unique system GID (no more available GIDs)
>> useradd: can't create group
>> error: %pre(hadoop-yarn-2.2.0.2.0.6.0-102.el6.x86_64) scriptlet failed,
>> exit status 4
>> error:   install: %pre scriptlet failed (2), skipping
>> hadoop-yarn-2.2.0.2.0.6.0-102.el6
>>
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Package[yarn-common]/Hdp::Package[yarn-common]/Hdp::Package::Process_pkg[yarn-common]/Anchor[hdp::package::yarn-common::end]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Package[yarn-common]/Anchor[hdp-yarn::package::yarn-common::end]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp::Configfile[/etc/security/limits.d/yarn.conf]/File[/etc/security/limits.d/yarn.conf]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Hdp::Configfile[/etc/hadoop/conf/hadoop-env.sh]/File[/etc/hadoop/conf/hadoop-env.sh]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Hdp::Configfile[/etc/hadoop/conf/yarn-env.sh]/File[/etc/hadoop/conf/yarn-env.sh]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[capacity-scheduler]/File[/etc/hadoop/conf/capacity-scheduler.xml]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[core-site]/File[/etc/hadoop/conf/core-site.xml]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[yarn-site]/File[/etc/hadoop/conf/yarn-site.xml]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice:
>> /Stage[2]/Hdp-yarn::Initialize/Hdp-yarn::Generate_common_configs[yarn-common-configs]/Configgenerator::Configfile[mapred-site]/File[/etc/hadoop/conf/mapred-site.xml]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice: /Stage[2]/Hdp-yarn::Initialize/Anchor[hdp-yarn::initialize::end]:
>> Dependency Package[hadoop-yarn] has failures: true
>> notice: Finished catalog run in 24.29 seconds
>>
>
>



Re: stuck in historyServer installation

2014-03-20 Thread Sumit Mohanty
Which version of Ambari are you using?

https://cwiki.apache.org/confluence/display/AMBARI/Adding+a+New+Service+to+an+Existing+Cluster
has the steps for using the API to add a service.

The detail you have to take care of is adding the necessary configs for the
service being added. Look at the properties that Ganglia requires at
https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=blob;f=ambari-server/src/main/resources/stacks/HDP/2.0.6/services/GANGLIA/configuration/global.xml;h=49d38ae660fbfc6c9faad9ed7a6a5b99382948d6;hb=trunk
These configurations should be added to config type "global". Refer to
https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations
for details on how to add/modify configs.
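The wiki steps boil down to a handful of REST calls. A hedged sketch for Ganglia (AMBARI_SERVER_HOST, CLUSTER_NAME, HOST_NAME, and admin:admin are placeholders; the X-Requested-By header is required on newer Ambari versions):

```shell
# Hedged sketch of adding the GANGLIA service via the REST API.
BASE="http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME"
AUTH="admin:admin"

add_ganglia() {
  # 1. register the service
  curl -u "$AUTH" -H "X-Requested-By: ambari" -X POST "$BASE/services/GANGLIA"
  # 2. declare its components
  curl -u "$AUTH" -H "X-Requested-By: ambari" -X POST \
    "$BASE/services/GANGLIA/components/GANGLIA_SERVER"
  curl -u "$AUTH" -H "X-Requested-By: ambari" -X POST \
    "$BASE/services/GANGLIA/components/GANGLIA_MONITOR"
  # 3. map the components to a host
  curl -u "$AUTH" -H "X-Requested-By: ambari" -X POST \
    "$BASE/hosts/HOST_NAME/host_components/GANGLIA_SERVER"
  # 4. install, then start, by setting the desired state
  curl -u "$AUTH" -H "X-Requested-By: ambari" -X PUT \
    -d '{"ServiceInfo": {"state": "INSTALLED"}}' "$BASE/services/GANGLIA"
  curl -u "$AUTH" -H "X-Requested-By: ambari" -X PUT \
    -d '{"ServiceInfo": {"state": "STARTED"}}' "$BASE/services/GANGLIA"
}
# Run only after the "global" configs are in place:
#   add_ganglia
```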

In the next release you can add services through the Web front end, where
the UI guides you through the configuration options.

-Sumit




On Thu, Mar 20, 2014 at 5:41 PM, Anfernee Xu  wrote:

> Thanks, it works now.
>
> BTW, how can I add one more service to the existing cluster, for instance
> I want to add Ganglia?
>
> Thanks
>
>
> On Thu, Mar 20, 2014 at 5:32 PM, Sumit Mohanty 
> wrote:
>
>> Try running "'/usr/bin/yum -y install hadoop-yarn" as the same user as
>> ambari-agent ensure it succeeds.
>>
>>
>> On Thu, Mar 20, 2014 at 5:14 PM, Sumit Mohanty 
>> wrote:
>>
>>> These could be the reason:
>>>   useradd: Can't get unique system GID (no more available GIDs)
>>>   useradd: can't create group
>>>
>>> See if
>>> http://superuser.com/questions/666505/creating-new-user-using-useradd-but-fails-since-its-unable-to-create-group-in-osolves
>>>  it?
>>>
>>>
>>> On Thu, Mar 20, 2014 at 4:55 PM, Anfernee Xu wrote:
>>>
>>>> Hi,
>>>>
>>>> I followed the instruction mentioned in
>>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_using_Ambari_book/content/ambari-chap1-1.html,
>>>> my goal is evaluating the single host deployment before put it into
>>>> production.
>>>>
>>>> I have got Ambari server/agent running and got some
>>>> components(datanode/client) installed without problem, but keep failing at
>>>> historyserver installation with below error.
>>>>
>>>> BTW, because my environment is kind of restricted by my IT, so I used
>>>> my local account/group for all service user/group.
>>>>
>>>> Thanks for your help.
>>>>
>>>

Re: stuck in historyServer installation

2014-03-20 Thread Sumit Mohanty
After you have set the configs using configs.sh you do not need the step
above. You can start adding the Ganglia service now.

If you are adding multiple config name-value pairs, the easiest way is to
export the config to a file (configs.sh has an option for that), edit the
file, and then set the whole file.
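The export/edit/set round trip looks roughly like this (host and cluster names are placeholders; the file-based form of `set` follows configs.sh's usage text, so double-check by running the script without arguments first):

```shell
# Hedged sketch of the bulk-edit workflow with configs.sh
# (AMBARI_HOST and CLUSTER_NAME are placeholders).
CONFIGS=/var/lib/ambari-server/resources/scripts/configs.sh

bulk_edit_global() {
  # 1. export the current "global" config type to a local JSON file
  #    (the output may include header lines that need trimming)
  "$CONFIGS" get AMBARI_HOST CLUSTER_NAME global > global.json
  # 2. edit global.json, adding/changing as many properties as needed
  "${EDITOR:-vi}" global.json
  # 3. push the whole file back as one new config version
  "$CONFIGS" set AMBARI_HOST CLUSTER_NAME global global.json
}
```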

FYI, these are the Ganglia properties from my single-node cluster, added
as a reference; the values you have set seem quite alright.

"ganglia_runtime_dir" : "/var/run/ganglia/hdp",
"gmetad_user" : "nobody",
"gmond_user" : "nobody",
"rrdcached_base_dir" : "/var/lib/ganglia/rrds",

-Sumit



On Thu, Mar 20, 2014 at 8:32 PM, Anfernee Xu  wrote:

> Thanks for your response, I followed the steps, but encounter an error
> when applying configuration
>
>
>
> sh-4.1# /var/lib/ambari-server/resources/scripts/configs.sh  set slc05eqg
> TIE_DC global ganglia_conf_dir /etc/ganglia/hdp
> ## Performing 'set' ganglia_conf_dir:/etc/ganglia/hdp on
> (Site:global, Tag:version1)
> ## PUTting json into: doSet_version1395371752000.json
> ## NEW Site:global, Tag:version1395371752000
>
> sh-4.1# /var/lib/ambari-server/resources/scripts/configs.sh  set slc05eqg
> TIE_DC global ganglia_runtime_dir /var/run/ganglia/hdp
> ## Performing 'set' ganglia_runtime_dir:/var/run/ganglia/hdp on
> (Site:global, Tag:version1395371752000)
> ## Config found. Skipping origin value
> ## PUTting json into: doSet_version1395371832000.json
> ## NEW Site:global, Tag:version1395371832000
>
>
> sh-4.1# /var/lib/ambari-server/resources/scripts/configs.sh  set slc05eqg
> TIE_DC global gmetad_user xinx
> ## Performing 'set' gmetad_user:xinx on (Site:global,
> Tag:version1395371832000)
> ## Config found. Skipping origin value
> ## PUTting json into: doSet_version1395371852000.json
> ## NEW Site:global, Tag:version1395371852000
>
>
> sh-4.1# /var/lib/ambari-server/resources/scripts/configs.sh  set slc05eqg
> TIE_DC global gmond_user xinx
> ## Performing 'set' gmond_user:xinx on (Site:global,
> Tag:version1395371852000)
> ## Config found. Skipping origin value
> ## PUTting json into: doSet_version1395371873000.json
> ## NEW Site:global, Tag:version1395371873000
>
>
> sh-4.1# /var/lib/ambari-server/resources/scripts/configs.sh  set slc05eqg
> TIE_DC global rrdcached_base_dir /var/lib/ganglia/rrds
> ## Performing 'set' rrdcached_base_dir:/var/lib/ganglia/rrds on
> (Site:global, Tag:version1395371873000)
> ## PUTting json into: doSet_version1395371922000.json
> ## NEW Site:global, Tag:version1395371922000
>
>
> ### Apply configuration
> sh-4.1# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d
> '{"Clusters": {"desired_configs": { "type": "global", "tag"
> :"version1395371922000" }}}' http://slc05eqg:8080/api/v1/clusters/TIE_DC
> HTTP/1.1 400 Bad Request
> Server: Jetty(7.6.7.v20120910)
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: AMBARISESSIONID=18ppjcrarhp3wrk9polzxpulh;Path=/
> Content-Type: text/plain
> Content-Length: 98
> Proxy-Connection: Keep-Alive
>
> {
>   "status" : 400,
>   "message" : "Stack information should be provided when creating a
> cluster"
> }sh-4.1# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d
> '{"Clusters": {"desired_configs": { "type": "global", "tag"
> :"version1395371922000" }}}' http://slc05eqg:8080/api/v1/clusters/TIE_D
> HTTP/1.1 400 Bad Request
> Server: Jetty(7.6.7.v20120910)
> Expires: Thu, 01 Jan 1970 00:00:00 GMT
> Set-Cookie: AMBARISESSIONID=t8dkubedmm46e26n5v901w14;Path=/
> Content-Type: text/plain
> Content-Length: 98
> Proxy-Connection: Keep-Alive
>
> {
>   "status" : 400,
>   "message" : "Stack information should be provided when creating a
> cluster"
> }
>
>
>
> On Thu, Mar 20, 2014 at 6:01 PM, Sumit Mohanty 
> wrote:
>
>> Which version of Ambari are you using?
>>
>>
>> https://cwiki.apache.org/confluence/display/AMBARI/Adding+a+New+Service+to+an+Existing+Clusterhas
>>  the steps using API to add a service.
>>
>> The detail you have to take care is to add the necessary configs for the
>> service being added. Look at the properties that ganglia requires at -
>> https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=blob;f=ambari-server/src/main/resources/stacks/HDP/2

Re: Host re-installation

2014-03-21 Thread Sumit Mohanty
You can use Ambari's support for adding/removing hosts.

For example, if you have not already removed the host (using Ambari), you
can do so now on the "Hosts" page through Delete Host.

After that you can do an "Add Host" on the "Hosts" page and add the host
back. "Add Host" will install the agent and install components based on
what you choose.
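The Delete Host half of this is also available over the REST API. A hedged sketch (untested; server, cluster, and component names are placeholders, and each component on the host must be removed before the host itself):

```shell
# Hedged sketch: removing a host via the REST API.
BASE="http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME"

delete_host() {
  host="$1"
  # host components must be deleted first; repeat per component on the host
  curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
    "$BASE/hosts/$host/host_components/DATANODE"
  # then remove the host itself from the cluster
  curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
    "$BASE/hosts/$host"
}
# usage: delete_host HOST_NAME
```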



On Fri, Mar 21, 2014 at 10:32 AM, Anoop Rajendra
wrote:

> Hi,
>
> I had to take down a host from a running cluster for maintenance. Once
> I re-installed the OS on it. I'd like to bring it back to the same
> state as the host was before reinstallation.
>
> I can't seem to find hooks in Ambari to bring the host back into the
> HDP cluster. How do I do this?
>
> I'm using Ambari 1.4.4.32 and HDP 2.0.6.1
>
> -a
>



Re: Host re-installation

2014-03-21 Thread Sumit Mohanty
When you do Add Host, you can install components on the host.

Assuming you did not do a "Delete Host" when the host was taken out, Ambari
still thinks that the components are deployed on the host. It is reacting
to the fact that there is no agent heartbeat and has therefore marked those
components as being in the Unknown state.

What was deployed on the host before? Can you also share the "Host" UI
page - it will give me a good idea of the current state.


On Fri, Mar 21, 2014 at 10:47 AM, Anoop Rajendra
wrote:

> Does this mean that I have to re-associate the services to the host?
>
> -a
>
> On Fri, Mar 21, 2014 at 10:39 AM, Sumit Mohanty
>  wrote:
> > You can use Ambari support for add/remove host.
> >
> > For example, if you have not removed (using Ambari) the host already you
> can
> > do so now in the "host" page through Delete Host.
> >
> > After that you can do a "Add Host" on the "hosts" and add the host back.
> > "Add Host" will install agent and install components based on what you
> > choose.
> >
> >
> >
> > On Fri, Mar 21, 2014 at 10:32 AM, Anoop Rajendra <
> anoop.rajen...@gmail.com>
> > wrote:
> >>
> >> Hi,
> >>
> >> I had to take down a host from a running cluster for maintenance. Once
> >> I re-installed the OS on it. I'd like to bring it back to the same
> >> state as the host was before reinstallation.
> >>
> >> I can't seem to find hooks in Ambari to bring the host back into the
> >> HDP cluster. How do I do this?
> >>
> >> I'm using Ambari 1.4.4.32 and HDP 2.0.6.1
> >>
> >> -a
> >
> >
>



Re: Host re-installation

2014-03-21 Thread Sumit Mohanty
Oh! So I assume that after the OS re-install you installed/started the
agent manually.

What components are on this host?


On Fri, Mar 21, 2014 at 10:59 AM, Anoop Rajendra
wrote:

> Actually, I solved the problem without deleting the host.
>
> I stopped the service, and restarted it, and apparently that was
> enough to get the components installed  on the newly reinstalled host.
>
> -a
>
> On Fri, Mar 21, 2014 at 10:52 AM, Sumit Mohanty
>  wrote:
> > When you do add host, you can install components on the host.
> >
> > Assuming you did not do a "Delete Host" when the host was taken out,
> Ambari
> > still thinks that the components are deployed on the host. It is
> reacting to
> > the fact that there is no agent heartbeat and thus has marked those
> > components to be in Unknown state.
> >
> > What were deployed on the host before? Can you also share the "Host" UI
> page
> > - it will give me a good idea of the current state.
> >
> >
> > On Fri, Mar 21, 2014 at 10:47 AM, Anoop Rajendra <
> anoop.rajen...@gmail.com>
> > wrote:
> >>
> >> Does this mean that I have to re-associate the services to the host?
> >>
> >> -a
> >>
> >> On Fri, Mar 21, 2014 at 10:39 AM, Sumit Mohanty
> >>  wrote:
> >> > You can use Ambari support for add/remove host.
> >> >
> >> > For example, if you have not removed (using Ambari) the host already
> you
> >> > can
> >> > do so now in the "host" page through Delete Host.
> >> >
> >> > After that you can do a "Add Host" on the "hosts" and add the host
> back.
> >> > "Add Host" will install agent and install components based on what you
> >> > choose.
> >> >
> >> >
> >> >
> >> > On Fri, Mar 21, 2014 at 10:32 AM, Anoop Rajendra
> >> > 
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> I had to take down a host from a running cluster for maintenance.
> Once
> >> >> I re-installed the OS on it. I'd like to bring it back to the same
> >> >> state as the host was before reinstallation.
> >> >>
> >> >> I can't seem to find hooks in Ambari to bring the host back into the
> >> >> HDP cluster. How do I do this?
> >> >>
> >> >> I'm using Ambari 1.4.4.32 and HDP 2.0.6.1
> >> >>
> >> >> -a
> >> >
> >> >
> >
>



Re: Provisioning 2 clusters which sharing the same HDFS

2014-03-22 Thread Sumit Mohanty
This should be possible through the API (I have not tried it myself).

Here is what you are trying:
* Define a cluster with no HDFS (say just YARN and ZK)
* Add necessary configs for YARN and ZK
* Add/modify core-site and hdfs-site to have the correct property values to
point to the other cluster
* Start all services
You can do all the above with APIs.

A way to achieve it with as much help as possible from the Web FE:
* Create a cluster with HDFS, YARN, ZK (possibly Ganglia and Nagios if you
need them)
* After everything is setup and started correctly - stop all services
* Delete HDFS using APIs
* Modify hdfs-site and core-site to point to the other cluster (use
configs.sh)
* Start all services
* Afterwards, you can clean up the left over HDFS files/folders on this
cluster.
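The delete-and-repoint steps above might look like this in API terms (untested, per the caveat below; host, cluster, and NameNode address are placeholders, and HDFS must be stopped first):

```shell
# Hedged sketch: delete HDFS from this cluster's stack, then point
# core-site at the other cluster's NameNode.
BASE="http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME"
CONFIGS=/var/lib/ambari-server/resources/scripts/configs.sh

drop_hdfs_and_repoint() {
  # delete the (stopped) HDFS service
  curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
    "$BASE/services/HDFS"
  # point fs.defaultFS at cluster A's NameNode
  "$CONFIGS" set AMBARI_SERVER_HOST CLUSTER_NAME core-site \
    fs.defaultFS hdfs://NAMENODE_HOST:8020
}
```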

*The above strategy is theoretically possible but I have not tried it*. So
do try it on a test cluster first. The Apache Ambari WIKI has pages talking
about sample API calls.

Feel free to write up a summary if you go the above route and we can add it
to the wiki.

-Sumit


On Fri, Mar 21, 2014 at 1:15 PM, Anfernee Xu  wrote:

> Hi,
>
> Here's my situation, I have 2 Yarn clusters(A and B), the provisioning
> process is straightforward for A, it will have HDFS, Yarn, and MR.  NN and
> RM is running on master machines of cluster A, and DataNode and NodeManager
> is running on slave machines as usual. But the special requirement comes
> from cluster B, in cluster B I only run Yarn components(RM and NM) and
> having access to HDFS provisioned in cluster A(like a HDFS client). Without
> Ambari, I could copy core-site.xml/hdfs-site.xml from A to B, so is it
> possible to do it in Ambari? and how?
>
> --
> --Anfernee
>



Re: Provisioning 2 clusters which sharing the same HDFS

2014-03-23 Thread Sumit Mohanty
Glad it worked. I will add it to the wiki for others.


On Sun, Mar 23, 2014 at 3:37 PM, Anfernee Xu  wrote:

> Thanks, your suggestions is really helpful. Here's what  did
>
> 1. Create a normal cluster(with HDFS)
> 2. shutdown cluster
> 3. remove HDFS service from stack.
>curl -H "X-Requested-By: ambari" -u admin:admin -X DELETE  http://
> /services/HDFS
> 4. Configure core-site
>su - hadoop
> /var/lib/ambari-server/resources/scripts/configs.sh -port  set
>   core-site "fs.defaultFS"
> "hdfs://slc00dgd:55310"
>
>
>
>
> On Sat, Mar 22, 2014 at 7:56 AM, Sumit Mohanty 
> wrote:
>
>> This should be possible through API (I have not tried it myself).
>>
>> Here is what you are trying:
>> * Define a cluster with no HDFS (say just YARN and ZK)
>> * Add necessary configs for YARN and ZK
>> * Add/modify core-site and hdfs-site to have the correct property values
>> to point to the other cluster
>> * Start all services
>> You can do all the above with APIs.
>>
>> A way to achieve it with as much help as possible from the Web FE:
>> * Create a cluster with HDFS, YARN, ZK (possibly Ganglia and Nagios if
>> you need them)
>> * After everything is setup and started correctly - stop all services
>> * Delete HDFS using APIs
>> * Modify hdfs-site and core-site to point to the other cluster (use
>> configs.sh)
>> * Start all services
>> * Afterwards, you can clean up the left over HDFS files/folders on this
>> cluster.
>>
>> *The above strategy is theoretically possible but I have not tried it*.
>> So do try it on a test cluster first. The Apache Ambari WIKI has pages
>> talking about sample API calls.
>>
>> Feel free to write up a summary if you go the above route and we can add
>> it to the wiki.
>>
>> -Sumit
>>
>>
>> On Fri, Mar 21, 2014 at 1:15 PM, Anfernee Xu wrote:
>>
>>> Hi,
>>>
>>> Here's my situation, I have 2 Yarn clusters(A and B), the provisioning
>>> process is straightforward for A, it will have HDFS, Yarn, and MR.  NN and
>>> RM is running on master machines of cluster A, and DataNode and NodeManager
>>> is running on slave machines as usual. But the special requirement comes
>>> from cluster B, in cluster B I only run Yarn components(RM and NM) and
>>> having access to HDFS provisioned in cluster A(like a HDFS client). Without
>>> Ambari, I could copy core-site.xml/hdfs-site.xml from A to B, so is it
>>> possible to do it in Ambari? and how?
>>>
>>> --
>>> --Anfernee
>>>
>>
>
>
>
>
> --
> --Anfernee
>



Re: How to configure

2014-03-27 Thread Sumit Mohanty
Which version of Ambari are you using?

Depending on the version, the hadoop-env.sh.j2 (latest trunk) or
hadoop-env.sh.erb (1.4.4 and earlier) template file shows how to modify it.

Alternatively, you can drop your jars into the same location as the other
jars - i.e. a path that is already on the classpath.
-Sumit


On Mon, Mar 24, 2014 at 10:11 AM, Anfernee Xu  wrote:

> Hi,
>
> As I implemented my own resource scheduler, so I need to change
> $HADOOP_CLASSPATH in hadoop-env.sh, so how could I do this in Ambari UI or
> some other way so that the change will propagate to other nodes in the
> cluster.
>
> Thanks
>
>
>
> --
> --Anfernee
>



Re: what triggers restart indicators?

2014-04-01 Thread Sumit Mohanty
If a config is changed and saved, it gets stored as a newer version. When
Ambari detects a mismatch between the config versions desired at the
cluster/host level and the versions reported by the agent, it flags the
component as requiring a restart.
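In recent Ambari versions that mismatch is exposed through the `stale_configs` field, so the components behind the restart icons can be listed over the API (an assumption for older releases; server and cluster names are placeholders):

```shell
# Hedged: list the host components currently flagged for restart.
list_restart_needed() {
  curl -u admin:admin \
    "http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/host_components?HostRoles/stale_configs=true"
}
# usage: list_restart_needed
```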


On Tue, Apr 1, 2014 at 6:25 AM, Gerd Koenig
wrote:

> Hi,
>
> most probably a configuration change.
>
>
> On 1 April 2014 14:42, Brian Jeltema wrote:
>
>> I'm running a recent version of Ambari. The dashboard is showing restart
>> icons on a number
>> of services, even though everything seems to be running correctly. What
>> causes these to appear?
>
>
>



Re: Using Ambari with vanilla Apache releases

2014-04-12 Thread Sumit Mohanty
Count me in as well for Ambari.

-Sumit


On Fri, Apr 11, 2014 at 6:26 PM, Siddharth Wagle wrote:

> I can join you guys as well with Ambari backend support.
>
> -Sid
>
>
> On Fri, Apr 11, 2014 at 6:24 PM, Yusaku Sako wrote:
>
>> Hi Roman,
>>
>> This is a great idea!
>> I'm interested in providing some Ambari expertise.
>>
>> Yusaku
>>
>>
>> On Fri, Apr 11, 2014 at 4:57 PM, Roman Shaposhnik 
>> wrote:
>>
>>> On Fri, Apr 11, 2014 at 4:44 PM, Mahadev Konar 
>>> wrote:
>>> > This would definitely be of a lot of interest to others as well in the
>>> > community. Good suggestion Roman. Want to propose something/someday on
>>> the
>>> > mailing list? Unfortunately I'll be out for the next couple of weeks
>>> but am
>>> > sure others on the list will be interested in a meetup/hackathon as
>>> well.
>>>
>>> Hi!
>>>
>>> I'd be more than happy to host at Pivotal's Palo Alto office in April.
>>> I think the
>>> quorum we need for this type of event is a few (at least one ;-))
>>> folks intimately
>>> familiar with Ambari and a few folks who are Bigtop experts (I can
>>> personally
>>> volunteer) plus as many innocent bystanders as possible ;-)
>>>
>>> Please reply to this thread if interested!
>>>
>>> Thanks,
>>> Roman.
>>>
>>
>>
>>
>
>
>



Re: ambari-shell POC

2014-04-16 Thread Sumit Mohanty
The shell is indeed useful.

Please go ahead and create an Apache Ambari JIRA (at
https://issues.apache.org/jira/browse/AMBARI) to integrate the shell into
Ambari. Looks like contrib/ambari-shell might be a good location.

Those two blueprint-related commands would be good candidates. We should
also add some default blueprints to the Ambari package.

thanks
Sumit


On Wed, Apr 16, 2014 at 1:40 AM, Lajos Papp wrote:

> I'd also like to get suggestions on which command to implement next.
> My plan is:
>
> - create blueprint
> - create cluster from blueprint
>
> that way one could automate the process of setting up a cluster.
>
>
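For reference, the REST calls such shell commands would wrap look roughly
like the sketch below. The blueprint name, stack version, components, and
hostnames are illustrative, not taken from this thread; the two endpoints are
the blueprint registration and cluster creation resources of the Ambari API.

```shell
# Illustrative blueprint JSON; the name, stack, and components are made up.
BP=single-node-sketch
cat > "/tmp/${BP}.json" <<'EOF'
{
  "Blueprints": { "stack_name": "HDP", "stack_version": "2.0" },
  "host_groups": [
    { "name": "host_group_1", "cardinality": "1",
      "components": [ { "name": "ZOOKEEPER_SERVER" },
                      { "name": "ZOOKEEPER_CLIENT" } ] }
  ]
}
EOF

# 1. Register the blueprint:
#   curl --user admin:admin -X POST -d @/tmp/${BP}.json \
#     http://AMBARI_SERVER_HOST:8080/api/v1/blueprints/${BP}
# 2. Create a cluster from it, mapping host groups to real hosts:
#   curl --user admin:admin -X POST \
#     -d '{"blueprint":"single-node-sketch","host_groups":[{"name":"host_group_1","hosts":[{"fqdn":"node1.example.com"}]}]}' \
#     http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME
echo "wrote /tmp/${BP}.json"
```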



Re: ambari-shell POC

2014-04-16 Thread Sumit Mohanty
Let's create a JIRA for that as well.

-Sumit


On Wed, Apr 16, 2014 at 8:13 AM, Lajos Papp wrote:

> Hi Sumit,
>
> > Please go ahead and create an Apache Ambari JIRA (at
> https://issues.apache.org/jira/browse/AMBARI) to integrate the shell into
> Ambari. Looks like contrib/ambari-shell might be a good location.
>
> Created as:  https://issues.apache.org/jira/browse/AMBARI-5482
>
> How about the wiki change suggested by Yusaku? Should I create a JIRA
> issue for that as well?
>
> cheers,
> Lajos
>
>
>



Re: ambari 1.5.1 install problem ..

2014-04-23 Thread Sumit Mohanty
Can you also share /etc/ambari-agent/conf/ambari-agent.ini and check if
there is any Error/Warning in /var/log/ambari-agent/ambari-agent.log?

-Sumit
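A quick way to pull such lines out of the log (a sketch; the log path comes
from the message above, and the pattern is deliberately broad):

```shell
# Surface the most recent error/warning lines from an agent log file.
agent_log_problems() {
  grep -iE 'error|warning' "$1" | tail -n 20
}

# Usage on a real host:
#   agent_log_problems /var/log/ambari-agent/ambari-agent.log
```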


On Wed, Apr 23, 2014 at 11:30 AM, Erin Boyd  wrote:

> Hi EI,
> Is your agent.ini file pointing to the server?
> Can your agent get out on the network? Can you yum install anything from
> the agent node?
> Erin
>
>
> - Original Message -
> From: "EMINE ILOGLU" 
> To: user@ambari.apache.org
> Sent: Wednesday, April 23, 2014 12:13:50 PM
> Subject: RE: ambari 1.5.1  install problem ..
>
> Hi,
>
> I have installed the ambari server without any problems, but when I
> install ambari-agent, I get the following. Any ideas?
>
> Thanks!
> EI
>
> Resolving Dependencies
> --> Running transaction check
> ---> Package ambari-agent.x86_64 0:1.5.1.110-1 will be installed
> ---> Package ambari-log4j.noarch 0:1.5.1.110-1 will be installed
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
>
> 
> PackageArch Version Repository
>  Size
>
> 
> Installing:
> ambari-agent   x86_64   1.5.1.110-1 Updates-ambari
> 6.7 M
> ambari-log4j   noarch   1.5.1.110-1 Updates-ambari
> 536 k
>
> Transaction Summary
>
> 
> Install   2 Package(s)
>
> Total download size: 7.2 M
> Installed size: 28 M
> Downloading Packages:
>
> 
> Total59 MB/s | 7.2 MB 00:00
> Running rpm_check_debug
> Running Transaction Test
> Transaction Test Succeeded
> Running Transaction
>
>   Installing : ambari-agent-1.5.1.110-1.x86_64
>  1/2
>
>   Installing : ambari-log4j-1.5.1.110-1.noarch
>  2/2
> Installed products updated.
>
> Installed:
>   ambari-agent.x86_64 0:1.5.1.110-1  ambari-log4j.noarch 0:1.5.1.110-1
>
> Complete!
> + '[' -e /usr/sbin/ambari-agent ']'
> + mkdir /tmp/ambari-agent
> + echo 'HDP_AMBARI_SERVER = zlxv2256.vci.att.com'
> HDP_AMBARI_SERVER = zlxv2256.vci.att.com
> + sh
> /opt/app/swm/aftswmnode/stage/zlxv2263/com/att/hortonworks/ambari-agent/1.4-3/common/utils/findreplace.sh
> /opt/app/ambari-agent/ambari-agent.ini
> /etc/ambari-agent/conf/ambari-agent.ini
> ++ ps -ef
> ++ grep '^root .*/ambari_agent/AmbariAgent.py'
> ++ grep -v grep
> + PS_CHK=
> + cd /var/tmp
> + '[' -n '' ']'
> + /usr/sbin/ambari-agent start
> Verifying Python version compatibility...
> Using python  /usr/bin/python2.6
> Checking for previously running Ambari Agent...
> Starting ambari-agent
> Verifying ambari-agent process status...
> ERROR: ambari-agent start failed. For more details, see
> /var/log/ambari-agent/ambari-agent.out:
> 
>   File "/usr/lib/python2.6/site-packages/ambari_agent/ActionQueue.py",
> line 81, in __init__
> controller)
>   File
> "/usr/lib/python2.6/site-packages/ambari_agent/CustomServiceOrchestrator.py",
> line 55, in __init__
> self.file_cache = FileCache(config)
>   File "/usr/lib/python2.6/site-packages/ambari_agent/FileCache.py", line
> 55, in __init__
> config.get('agent','tolerate_download_failures').lower() == 'true'
>   File "/usr/lib64/python2.6/ConfigParser.py", line 321, in get
> raise NoOptionError(option, section)
> NoOptionError: No option 'tolerate_download_failures' in section: 'agent'
>
> 
> Agent out at: /var/log/ambari-agent/ambari-agent.out
> Agent log at: /var/log/ambari-agent/ambari-agent.log
>
>
>



Re: ambari 1.5.1 install problem ..

2014-04-23 Thread Sumit Mohanty
That file seems to be from an older version. For example, this is what I see
under [agent] on a clean install of 1.5.1:

[agent]
prefix=/var/lib/ambari-agent/data
;loglevel=(DEBUG/INFO)
loglevel=INFO
data_cleanup_interval=86400
data_cleanup_max_age=2592000
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true

Rpm details:
rpm -qa | grep ambari-agent
ambari-agent-1.5.1.110-1.x86_64

rpm -qf ambari-agent.ini
ambari-agent-1.5.1.110-1.x86_64

-Sumit
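If reinstalling the agent package is not an option, the file could also be
patched by hand along these lines. This is a sketch demonstrated on a
throwaway copy; on a real host the target would be
/etc/ambari-agent/conf/ambari-agent.ini, the file should be backed up first,
the other options Sumit lists (data_cleanup_interval, data_cleanup_max_age,
ping_port, cache_dir) added the same way, and the agent restarted afterwards.

```shell
# Demonstration on a throwaway copy of an old-style ini; adapt the path
# for a real host (and back the file up first).
INI=/tmp/ambari-agent-sketch.ini
printf '[agent]\nprefix=/var/lib/ambari-agent/data\nloglevel=INFO\n' > "$INI"

# Append the option FileCache needs, but only if it is missing:
grep -q '^tolerate_download_failures=' "$INI" || \
  sed -i '/^\[agent\]/a tolerate_download_failures=true' "$INI"
```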



On Wed, Apr 23, 2014 at 11:40 AM, ILOGLU, EMINE  wrote:

>  Here is etc/ambari-agent/conf/ambari-agent.ini and no errors on the log
> file.
>
>
>
> .
>
> # distributed under the License is distributed on an "AS IS" BASIS,
>
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>
> # See the License for the specific
>
>
>
> [server]
>
> hostname=zlxv2256.vci.att.com #AMBARI SERVER
>
> url_port=8440
>
> secured_url_port=8441
>
>
>
> [agent]
>
> prefix=/var/lib/ambari-agent/data
>
> ;loglevel=(DEBUG/INFO)
>
> loglevel=INFO
>
>
>
> [stack]
>
> installprefix=/var/ambari-agent/
>
> upgradeScriptsDir=/var/lib/ambari-agent/upgrade_stack
>
>
>
> [puppet]
>
> puppetmodules=/var/lib/ambari-agent/puppet
>
> ruby_home=/usr/lib/ambari-agent/lib/ruby-1.8.7-p370
>
> puppet_home=/usr/lib/ambari-agent/lib/puppet-2.7.9
>
> facter_home=/usr/lib/ambari-agent/lib/facter-1.6.10
>
>
>
> [command]
>
> maxretries=2
>
> sleepBetweenRetries=1
>
>
>
> [security]
>
> keysdir=/var/lib/ambari-agent/keys
>
> server_crt=ca.crt
>
> passphrase_env_var_name=AMBARI_PASSPHRASE
>
>
>
> [services]
>
> pidLookupPath=/var/run/
>
>
>
> [heartbeat]
>
> state_interval=6
>
>
> dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,
>
>   /etc/sqoop,/etc/ganglia,/etc/nagios,
>
>
> /var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,
>
>
> /var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive,
>
>   /var/log/nagios
>
> rpms=nagios,ganglia,
>
>
> hadoop,hadoop-lzo,hbase,oozie,sqoop,pig,zookeeper,hive,libconfuse,ambari-log4j
>
>
>

Re: ambari 1.5.1 install problem ..

2014-04-23 Thread Sumit Mohanty
It should have been installed when you installed ambari-agent on the host.

Can you check the output of the two commands:

rpm -qa | grep ambari-agent

rpm -qf /etc/ambari-agent/conf/ambari-agent.ini

-Sumit



On Wed, Apr 23, 2014 at 12:13 PM, ILOGLU, EMINE  wrote:

>  That sure explains! Thanks.. Where can I find the new version of
> ambari-agent.ini?
>
>
>
> Thanks a lot
>
>
>

Re: ambari 1.5.1 install problem ..

2014-04-23 Thread Sumit Mohanty
That seems odd.

Can you check the ambari.repo file at /etc/yum.repos.d (assuming it's RHEL
or CentOS) and let me know the content?

Did you do a fresh install or an upgrade? What OS is it?

Can you also check if you have the folder
/var/lib/ambari-agent/cache/stacks?

-Sumit


On Wed, Apr 23, 2014 at 12:27 PM, ILOGLU, EMINE  wrote:

>  [ei947t@zlxv2263 1.4-3]$ rpm -qa |grep ambari-agent
>
> ambari-agent-1.5.1.110-1.x86_64
>
> [ei947t@zlxv2263 1.4-3]$ rpm -qf /etc/ambari-agent/conf/ambari-agent.ini
>
> ambari-agent-1.5.1.110-1.x86_64
>
>
>
>
>
>
>

Re: ambari 1.5.1 install problem ..

2014-04-23 Thread Sumit Mohanty
For your reference, here is the latest file from one of my hosts:

[server]
hostname=localhost
url_port=8440
secured_url_port=8441

[agent]
prefix=/var/lib/ambari-agent/data
;loglevel=(DEBUG/INFO)
loglevel=INFO
data_cleanup_interval=86400
data_cleanup_max_age=2592000
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true

[puppet]
puppetmodules=/var/lib/ambari-agent/puppet
ruby_home=/usr/lib/ambari-agent/lib/ruby-1.8.7-p370
puppet_home=/usr/lib/ambari-agent/lib/puppet-2.7.9
facter_home=/usr/lib/ambari-agent/lib/facter-1.6.10

[command]
maxretries=2
sleepBetweenRetries=1

[security]
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE

[services]
pidLookupPath=/var/run/

[heartbeat]
state_interval=6
dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,
  /etc/sqoop,/etc/ganglia,/etc/nagios,

/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,

/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive,
  /var/log/nagios
rpms=nagios,ganglia,

hadoop,hadoop-lzo,hbase,oozie,sqoop,pig,zookeeper,hive,libconfuse,ambari-log4j
; 0 - unlimited
log_lines_count=300


On Wed, Apr 23, 2014 at 12:37 PM, ILOGLU, EMINE  wrote:

>  Thank you very much Sumit! I found the problem: an internal script was
> overwriting the ambari-agent.ini file right after yum install with an older
> version. I will rerun it with the latest file, and the
> tolerate_download_failures=true setting should solve the problem.
>
> Emine
>
>
>
>
>

Re: Using Ambari with vanilla Apache releases

2014-04-23 Thread Sumit Mohanty
I am not going to HBaseCon either.

I would prefer the 7th/8th if we are open to that week.

-Sumit


On Wed, Apr 23, 2014 at 3:13 PM, Yusaku Sako  wrote:

> All,
>
> I can be pretty flexible next week or the week after (though I'm not
> going to HBaseCon).
> Hortonworks should be able to host as well.
>
> Thanks,
> Yusaku
>
> On Wed, Apr 23, 2014 at 1:37 PM, Roman Shaposhnik 
> wrote:
> > On Wed, Apr 23, 2014 at 1:26 PM, Konstantin Boudnik 
> wrote:
> >> I am out on Hbasecon on 5th & 6th. And next week is pretty crazy for me
> with
> >> Bigtop 0.8
> >
> > Actually, this gives me an idea: would it  be totally crazy to have this
> as
> > a hackathon/meetup at HBaseCON? Who should we talk to to make it
> > happen? I'm sure there'll be quite a few folks there -- we might as well
> > exploit it.
> >
> > Thanks,
> > Roman.
>
>



Re: Using Ambari with vanilla Apache releases

2014-04-23 Thread Sumit Mohanty
In that case, let's do it next Thursday, May 1st, 6:00 PM onwards. If this
time does not work, please suggest an alternative.

Roman, let us know if it can be hosted at Pivotal's Palo Alto office or
Yusaku and I should arrange for it to be hosted at Hortonworks' Palo Alto
office.

-Sumit


On Wed, Apr 23, 2014 at 3:51 PM, Roman Shaposhnik wrote:

> On Wed, Apr 23, 2014 at 3:21 PM, Sumit Mohanty 
> wrote:
> > I am not going to HBaseCon either.
> >
> > I will prefer 7th/8th if we are open for that week.
>
> I'm flying off to a LinuxTAG on 7th, returning back
> next week.
>
> I guess what I am saying is I'd really appreciate if
> we could get together next week.
>
> Thanks,
> Roman.
>



Re: Using Ambari with vanilla Apache releases

2014-04-25 Thread Sumit Mohanty
Great. Yusaku and I will look into the logistics and get back.


On Fri, Apr 25, 2014 at 9:14 AM, Roman Shaposhnik wrote:

> On Wed, Apr 23, 2014 at 6:24 PM, Sumit Mohanty 
> wrote:
> > In that case, lets do it next Thursday - May 1st, 6:00 PM onwards. If
> this
> > time does not work pls. suggest an alternative.
> >
> > Roman, let us know if it can be hosted at Pivotal's Palo Alto office or
> > Yusaku and I should arrange for it to be hosted at Hortonworks' Palo Alto
> > office.
>
> That would be perfect. Turns out I need a little bit more lead time to
> host it at Pivotal, so if we can get together at HW office next week,
> that'll be ideal -- personally I can be as flexible as needed, except
> for afternoon on Tue (but I can do Tue before 4pm).
>
> Thanks,
> Roman.
>



Re: HBase HDFS Security

2014-04-29 Thread Sumit Mohanty
Which version of Ambari are you using? The 1.5.x release and also latest
from trunk set them to 711 - based on some installs I checked.

-Sumit


On Mon, Apr 28, 2014 at 11:33 PM, Tapper, Gunnar wrote:

>  Ambari seems to set up /apps/hbase as follows:
>
> [hdfs@bronto03 ~]$ hadoop fs -lsr /apps/hbase
> drwx--   - hbase hdfs  0 2014-04-29 06:18 /apps/hbase/data
> drwx--   - hbase hdfs  0 2014-03-06 06:52
> /apps/hbase/data/-ROOT-
>
> How can this be changed so that users other than the owner can list files
> and their sizes in Ambari?
>
> Thanks,
>
> Gunnar
>
> *The person that says it cannot be done should not interrupt the person
> doing it.*
>
> Download a free version of HPDSM, a unified big-data administration tool
> for Vertica and Hadoop at: 
> *http://www.vertica.com/marketplace*
>
>
>
>
>



Re: HBase HDFS Security

2014-05-16 Thread Sumit Mohanty
Which version of Ambari are you using? The 1.5.x release, as well as the
latest from trunk, sets the permissions to 711, based on some installs I
checked.

You can also manually change the permissions using HDFS shell commands if
it's a one-time fix you are looking for.
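For example, a one-time fix might look like this (a sketch only - the path comes from the earlier message, and running as the "hdfs" superuser is an assumption about your setup):

```shell
# Sketch: loosen /apps/hbase so other users can list files under it.
# 755 = owner rwx, group/other r-x; run as the HDFS superuser (often "hdfs").
# Add -R to apply recursively, at your own risk for HBase internals.
sudo -u hdfs hadoop fs -chmod 755 /apps/hbase

# Verify the change
sudo -u hdfs hadoop fs -ls /apps
```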

-Sumit



On Thu, May 15, 2014 at 9:46 AM, Tapper, Gunnar wrote:

>  Any comments on this?
>
>
>
> Sincerely,
>
>
>
> Gunnar
>
>
>
> *From:* Tapper, Gunnar
> *Sent:* Tuesday, April 29, 2014 12:34 AM
> *To:* user@ambari.apache.org
> *Subject:* HBase HDFS Security
>
>
>
> Ambari seems to set up /apps/hbase as follows:
>
>
>
> [hdfs@bronto03 ~]$ hadoop fs -lsr /apps/hbase
>
> drwx--   - hbase hdfs  0 2014-04-29 06:18 /apps/hbase/data
>
> drwx--   - hbase hdfs  0 2014-03-06 06:52
> /apps/hbase/data/-ROOT-
>
>
>
> How can this be changed so that users other than the owner can list files
> and their sizes in Ambari?
>
>
>
> Thanks,
>
>
>
> Gunnar
>
>
>
> *The person that says it cannot be done should not interrupt the person
> doing it.*
>
>
>
> Download a free version of HPDSM, a unified big-data administration tool
> for Vertica and Hadoop at: http://www.vertica.com/marketplace
>
>
>
>
>
>
>
>
>



Re: re-integrating name node

2014-05-16 Thread Sumit Mohanty
What was the state of the host after the maintenance? Just asking, because
if the directory structure was left untouched (e.g. the NameNode directory)
then you may be able to start the agent on that host and start the NameNode
and other mapped components.

I am trying to figure out if there is an easier way than re-installing the
NameNode.

If you need to reinstall the NameNode, you can install it with a different
path and then modify the config to point to the original path afterwards.
Ensure that all services are stopped before this. You will need to modify
the hdfs-site config using the API or UI, then install the NameNode and
start and stop it once. After that, use the UI or API to change the path
back to the original, and then start the NameNode.

I have not personally tried it, so ensure that you have backed up the
NameNode directory and data directories as needed.
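As a rough sketch of that config dance - the cluster name and scratch path below are placeholders, and the configs.sh helper (shipped under /var/lib/ambari-server/resources/scripts in recent Ambari releases) should be verified against your version before use:

```shell
# Placeholder names throughout: cluster "c1", Ambari server on localhost.
cd /var/lib/ambari-server/resources/scripts

# Inspect the current hdfs-site configuration
./configs.sh -u admin -p admin get localhost c1 hdfs-site

# Point the NameNode at a scratch path for the reinstall...
./configs.sh -u admin -p admin set localhost c1 hdfs-site \
    dfs.namenode.name.dir /hadoop/hdfs/namenode_tmp

# ...reinstall the NameNode, start and stop it once, then restore the path
./configs.sh -u admin -p admin set localhost c1 hdfs-site \
    dfs.namenode.name.dir /state/partition1/hadoop/hdfs/namenode
```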

Usually, a gmond (Ganglia Monitor) instance is also deployed on the host.
So you will need to install that as well.


On Fri, May 9, 2014 at 3:43 PM, Anoop Rajendra wrote:

> Hey,
>
> I have deployed HDP 2.1.2 with Ambari 1.5.1 on a small test cluster of 7
> nodes.
>
> The host that was running the namenode server had to undergo
> maintenance today, and I need to redeploy the namenode software on
> that system.
>
> After the host came back I tried to install the namenode software
> using the API, but I got the error,
>
> ERROR: Namenode directory(s) is non empty. Will not format the
> namenode. List of non-empty namenode dirs
> /state/partition1/hadoop/hdfs/namenode
>
> I didn't want to erase the namenode directory in case I could use the
> information on it.
>
> Is this possible?
>
> -Anoop
>



Re: HBase HDFS Security

2014-05-16 Thread Sumit Mohanty
You can use rpm -qa | grep ambari

You can also use (if it's a recent enough release)
ambari-server --version
ambari-server --hash

-Sumit


On Fri, May 16, 2014 at 1:08 PM, Tapper, Gunnar wrote:

>  Hi Sumit,
>
>
>
> That is a good question. What’s the easy way to figure the Ambari version?
>
>
>
> Admin>Cluster shows the Cluster Stack Version as HDP1.3.3. but I can’t
> find something that shows the Ambari version readily.
>
>
>
> Sincerely,
>
>
>
> Gunnar
>
>
>
> Download a free version of HPDSM, a unified big-data administration tool
> for Vertica and Hadoop at: http://www.vertica.com/marketplace
>
>
>
> *“People don’t know what they want until you show it to them… Our task is
> to read things that are not yet on the page.” *— Steve Jobs
>
>
>
> *From:* Sumit Mohanty [mailto:smoha...@hortonworks.com]
> *Sent:* Tuesday, April 29, 2014 11:41 AM
> *To:* user@ambari.apache.org
> *Subject:* Re: HBase HDFS Security
>
>
>
> Which version of Ambari are you using? The 1.5.x release and also latest
> from trunk set them to 711 - based on some installs I checked.
>
>
>
> -Sumit
>
>
>
> On Mon, Apr 28, 2014 at 11:33 PM, Tapper, Gunnar 
> wrote:
>
> Ambari seems to set up /apps/hbase as follows:
>
>
>
> [hdfs@bronto03 ~]$ hadoop fs -lsr /apps/hbase
>
> drwx--   - hbase hdfs  0 2014-04-29 06:18 /apps/hbase/data
>
> drwx--   - hbase hdfs  0 2014-03-06 06:52
> /apps/hbase/data/-ROOT-
>
>
>
> How can this be changed so that users other than the owner can list files
> and their sizes in Ambari?
>
>
>
> Thanks,
>
>
>
> Gunnar
>
>
>
> *The person that says it cannot be done should not interrupt the person
> doing it.*
>
>
>
> Download a free version of HPDSM, a unified big-data administration tool
> for Vertica and Hadoop at: http://www.vertica.com/marketplace
>
>
>
>
>
>
>
>
>
>
>
>
>



RE: HBase HDFS Security

2014-05-17 Thread Sumit Mohanty
Unlikely, but if it's a one-time change, do the chmod manually and it will
remain that way after the upgrade.

Sent from my Windows Phone
 --
From: Tapper, Gunnar 
Sent: ‎5/‎17/‎2014 11:10 AM
To: user@ambari.apache.org
Subject: RE: HBase HDFS Security

1.4.4.23

I’ll try to upgrade to 1.5.x but does that change the existing environment?



Sincerely,



Gunnar



Download a free version of HPDSM, a unified big-data administration tool
for Vertica and Hadoop at: http://www.vertica.com/marketplace



*“People don’t know what they want until you show it to them… Our task is
to read things that are not yet on the page.” *— Steve Jobs



*From:* Sumit Mohanty [mailto:smoha...@hortonworks.com]
*Sent:* Friday, May 16, 2014 4:47 PM
*To:* user@ambari.apache.org
*Subject:* Re: HBase HDFS Security



You can use rpm -qa | grep ambari



You can also use (if it's a recent enough release)

ambari-server --version

ambari-server --hash



-Sumit



On Fri, May 16, 2014 at 1:08 PM, Tapper, Gunnar 
wrote:

Hi Sumit,



That is a good question. What’s the easy way to figure the Ambari version?



Admin>Cluster shows the Cluster Stack Version as HDP1.3.3. but I can’t find
something that shows the Ambari version readily.



Sincerely,



Gunnar



Download a free version of HPDSM, a unified big-data administration tool
for Vertica and Hadoop at: http://www.vertica.com/marketplace



*“People don’t know what they want until you show it to them… Our task is
to read things that are not yet on the page.” *— Steve Jobs



*From:* Sumit Mohanty [mailto:smoha...@hortonworks.com]
*Sent:* Tuesday, April 29, 2014 11:41 AM
*To:* user@ambari.apache.org
*Subject:* Re: HBase HDFS Security



Which version of Ambari are you using? The 1.5.x release and also latest
from trunk set them to 711 - based on some installs I checked.



-Sumit



On Mon, Apr 28, 2014 at 11:33 PM, Tapper, Gunnar 
wrote:

Ambari seems to set up /apps/hbase as follows:



[hdfs@bronto03 ~]$ hadoop fs -lsr /apps/hbase

drwx--   - hbase hdfs  0 2014-04-29 06:18 /apps/hbase/data

drwx--   - hbase hdfs  0 2014-03-06 06:52
/apps/hbase/data/-ROOT-



How can this be changed so that users other than the owner can list files
and their sizes in Ambari?



Thanks,



Gunnar



*The person that says it cannot be done should not interrupt the person
doing it.*



Download a free version of HPDSM, a unified big-data administration tool
for Vertica and Hadoop at: http://www.vertica.com/marketplace















Re: building trunk

2014-06-02 Thread Sumit Mohanty
How are you setting the version?

This is what I have used in the past: mvn -B -e versions:set
-DnewVersion=1.6.1.7
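For context, a full trunk build might look like the following - the rpm goal and flags are from memory of the build docs of that era, so treat this as a starting point rather than the canonical recipe:

```shell
git clone https://git-wip-us.apache.org/repos/asf/ambari.git
cd ambari

# Four-digit version: the first three digits match the branch name,
# the fourth is a build number of your choosing
mvn -B -e versions:set -DnewVersion=1.6.1.7

# Build the RPMs, skipping tests
mvn -B clean install package rpm:rpm -DskipTests -Dpython.ver="python >= 2.6"
```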

-Sumit


On Mon, Jun 2, 2014 at 1:20 PM, Aaron Cody  wrote:

>  what should I be setting AMBARI_VERSION to?  I tried 1.6.0 but got some
> regex errors…
> How do I figure this out in general?
> thanks
>
>



Re: building trunk

2014-06-02 Thread Sumit Mohanty
I do not think the version number is stored in the branch - perhaps we
should store it once the branch is final.

Currently, the version number is applied during the build with the 4th
digit being the build number. The first three are aligned with the branch
name - e.g. branch-1.6.0.

-Sumit


On Mon, Jun 2, 2014 at 1:46 PM, Aaron Cody  wrote:

>  ok four digits… that worked - thanks
> so how do we figure out this version number for a particular branch ? is
> it stored in a file somewhere?
>
>   From: Sumit Mohanty 
> Reply-To: "user@ambari.apache.org" 
> Date: Monday, June 2, 2014 at 1:27 PM
> To: "user@ambari.apache.org" 
> Subject: Re: building trunk
>
>   How are you setting the version?
>
>  This is what I have used in past - mvn -B -e versions:set
> -DnewVersion=1.6.1.7.
>
>  -Sumit
>
>
> On Mon, Jun 2, 2014 at 1:20 PM, Aaron Cody  wrote:
>
>>  what should I be setting AMBARI_VERSION to?  I tried 1.6.0 but got some
>> regex errors…
>> How do I figure this out in general?
>> thanks
>>
>>
>
>



Re: Running smoke test

2014-06-03 Thread Sumit Mohanty
What a smoke test does depends on the service. Smoke tests shipped with the
stacks test the services for basic capabilities - e.g. the HBase smoke test
creates a table. From the perspective of creating a table, it does not make
much sense to run it on all hosts.

A client instance is randomly chosen and the test is run from only one
client host. As the choice is random, you may notice that a different host
gets used on different runs of the same test.

-Sumit


On Tue, Jun 3, 2014 at 3:41 PM, Anisha Agarwal 
wrote:

>  Hi,
>
>  I have a couple of questions about running smoke tests.
>
>  1. Both during installation, and post installation (using
> maintenance->run smoke test), I see that the smoke test run on a single
> host. How is this host decided when running smoke tests? It is different
> for me during installation, and post-installation.
>
>  2. Is it possible to run smoke tests on all hosts in the cluster?
>
>  Thanks,
> Anisha
>



Re: Running smoke test

2014-06-05 Thread Sumit Mohanty
Can you provide some details on what type of check you want to do?

You can use custom-action support for that. You will need to define a
custom action first. There is API-only support for issuing a custom action
on all hosts; the action can then perform the test and report back the
results.
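For illustration, issuing a custom action against a set of hosts looks roughly like this - "check_host" is a hypothetical action name you would have defined first, and the host and cluster names are placeholders:

```shell
# Hypothetical sketch of a custom-action request via the /requests endpoint.
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d '{
        "RequestInfo": {
          "action": "check_host",
          "context": "Run my check on all hosts"
        },
        "Requests/resource_filters": [
          {"hosts": "host1.example.com,host2.example.com"}
        ]
      }' \
  "http://<ambari-server-host>:8080/api/v1/clusters/<clustername>/requests"
```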

What version of Ambari are you using? I need to check whether custom
actions are supported.

-Sumit



On Wed, Jun 4, 2014 at 10:23 AM, Anisha Agarwal 
wrote:

>  Thanks for the explanation Sumit.
> I have some tests which I want to run on all hosts on the cluster each
> time.
> How can I make that happen?
>
>  Thanks,
> Anisha
>
>   From: Sumit Mohanty 
> Reply-To: "user@ambari.apache.org" 
> Date: Tuesday, June 3, 2014 at 9:45 PM
> To: "user@ambari.apache.org" 
> Subject: Re: Running smoke test
>
>   The implementation of smoke test depends on what will happen when you
> run smoke tests. Smoke tests shipped with the stacks test the services for
> basic capabilities - e.g. hbase smoke test will create a table. From the
> perspective of creating a table it does not make much sense to test it on
> all hosts.
>
>  A client instance is randomly chosen and the test is ran only from one
> client host. As its random, you may notice that different host got used on
> different runs of the same test.
>
>  -Sumit
>
>
> On Tue, Jun 3, 2014 at 3:41 PM, Anisha Agarwal 
> wrote:
>
>>  Hi,
>>
>>  I have a couple of questions about running smoke tests.
>>
>>  1. Both during installation, and post installation (using
>> maintenance->run smoke test), I see that the smoke test run on a single
>> host. How is this host decided when running smoke tests? It is different
>> for me during installation, and post-installation.
>>
>>  2. Is it possible to run smoke tests on all hosts in the cluster?
>>
>>  Thanks,
>> Anisha
>>
>
>
>



Re: Running smoke test

2014-06-06 Thread Sumit Mohanty
Ambari 1.2.4 does not have the feature where you can add a custom action of
your own. Are you planning to upgrade to the latest version?


On Thu, Jun 5, 2014 at 11:35 AM, Anisha Agarwal 
wrote:

>  I am using ambari-1.2.4.
>
>   From: Sumit Mohanty 
> Reply-To: "user@ambari.apache.org" 
> Date: Thursday, June 5, 2014 at 7:10 AM
>
> To: "user@ambari.apache.org" 
> Subject: Re: Running smoke test
>
>   Can you provide some details on what type of check you want to do?
>
>  You can use custom actions support for that. You will need to define a
> custom action first. There is API only support for issuing custom action on
> all hosts and then the action can perform the test and then report back
> results.
>
>  What version of Ambari are you using? I need to check if custom action
> is supported.
>
>  -Sumit
>
>
>
> On Wed, Jun 4, 2014 at 10:23 AM, Anisha Agarwal 
> wrote:
>
>>  Thanks for the explanation Sumit.
>> I have some tests which I want to run on all hosts on the cluster each
>> time.
>> How can I make that happen?
>>
>>  Thanks,
>> Anisha
>>
>>   From: Sumit Mohanty 
>> Reply-To: "user@ambari.apache.org" 
>> Date: Tuesday, June 3, 2014 at 9:45 PM
>> To: "user@ambari.apache.org" 
>> Subject: Re: Running smoke test
>>
>>The implementation of smoke test depends on what will happen when you
>> run smoke tests. Smoke tests shipped with the stacks test the services for
>> basic capabilities - e.g. hbase smoke test will create a table. From the
>> perspective of creating a table it does not make much sense to test it on
>> all hosts.
>>
>>  A client instance is randomly chosen and the test is ran only from one
>> client host. As its random, you may notice that different host got used on
>> different runs of the same test.
>>
>>  -Sumit
>>
>>
>> On Tue, Jun 3, 2014 at 3:41 PM, Anisha Agarwal 
>> wrote:
>>
>>>  Hi,
>>>
>>>  I have a couple of questions about running smoke tests.
>>>
>>>  1. Both during installation, and post installation (using
>>> maintenance->run smoke test), I see that the smoke test run on a single
>>> host. How is this host decided when running smoke tests? It is different
>>> for me during installation, and post-installation.
>>>
>>>  2. Is it possible to run smoke tests on all hosts in the cluster?
>>>
>>>  Thanks,
>>> Anisha
>>>
>>
>>
>>
>
>
>



Re: Existing postgres db

2014-06-25 Thread Sumit Mohanty
The configs that determine the postgres DB are in the ambari.properties file:


   - server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
   - server.jdbc.user.name=ambari
   - server.jdbc.database=ambari

If you are restoring the database on the original host, and ambari-server is
running on the same host, then you likely do not need to make any config
changes. Ensure that ambari-server is stopped while you restore the
database. You may want to refer to the details of how to restore a
postgres database. Usually pg_dump is the way to take a DB backup, so
restoring from a tar of /var/lib/pgsql seems a little unconventional. See if
you can find any documentation on restoring that way.
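For reference, the conventional pg_dump-based cycle looks like this - a sketch only, using the "ambari" database and user names from ambari.properties; verify against the Postgres documentation for your version before relying on it:

```shell
# Stop Ambari so nothing writes to the DB during backup/restore
ambari-server stop

# Backup
sudo -u postgres pg_dump ambari > /tmp/ambari-backup.sql

# Restore into a freshly created database
sudo -u postgres psql -c 'DROP DATABASE IF EXISTS ambari;'
sudo -u postgres psql -c 'CREATE DATABASE ambari OWNER ambari;'
sudo -u postgres psql ambari < /tmp/ambari-backup.sql

ambari-server start
```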


On Wed, Jun 25, 2014 at 11:58 AM, ILOGLU, EMINE  wrote:

> Hi all,
>
> Does anybody know how I can start ambari using an existing postgres db.  I
> tar'ed /var/lib/pgsql directory before making some changes and need to
> revert to that.
>
> Thanks,
> Emine
>



Re: Change my host disk options?

2014-06-27 Thread Sumit Mohanty
The host check only reports disk-space issues based on the available space
at "/". It does not look into the various mounts.

When you deploy the stack using Ambari, there are configuration properties
you can change so that the various default folders (e.g. HDFS data dirs,
log dirs) point to sub-folders within /grid or /home.
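For instance, the usual properties to repoint include the following - the exact names vary by stack version, and the /grid paths are just examples:

```
dfs.datanode.data.dir       = /grid/hadoop/hdfs/data      (hdfs-site)
dfs.namenode.name.dir       = /grid/hadoop/hdfs/namenode  (hdfs-site)
yarn.nodemanager.local-dirs = /grid/hadoop/yarn/local     (yarn-site)
```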

-Sumit


On Fri, Jun 27, 2014 at 2:11 PM, Yixiao Lin  wrote:

> Hi all,
>
> Is there a way to change my disk mount settings? I ran into a "not enough
> disk space" problem later while installing services. Now I am resetting
> everything. The host checks returned the warning:
>
>   The following registered hosts have issues related to disk space:
>   Not enough disk space
>
> I have little space on my mount at /,
> but I have more space at /grid or /home.
> Where can I change that setting?
>
> Thank you!
> Yixiao
>



Re: ambari upgrade

2014-06-30 Thread Sumit Mohanty
What version did you upgrade from? What is the version of stack?

What do these calls return (this assumes the default login/password; change
as needed)?

curl -u admin:admin http://<ambari-server-host>:8080/api/v1/clusters/
curl -u admin:admin http://<ambari-server-host>:8080/api/v1/clusters/<clustername>/services


On Mon, Jun 30, 2014 at 6:45 AM, ILOGLU, EMINE  wrote:

>  Hi all,
>
>
>
> After I upgrade ambari to 1.6.0, I don’t see any servers on the ambari UI,
> however they are all in postgress in hostcomponentdesiredstate and
> servicecomponentdesiredstate tables.
>
>
>
> Any ideas?
>
>
>
> *Emine Iloglu*
>
> AT&T - Common Services Systems Architecture (CSSA)
>
> T: 848-218-2108 | Q: ei947t
>
> (Remote, Eastern Timezone)
>



Re: ambari upgrade

2014-06-30 Thread Sumit Mohanty
Can you list the contents of
/var/lib/ambari-server/resources/stacks/
and /var/lib/ambari-server/resources/stacks/HDP

looks like the definition of HDP-1.3.3 stack is not accessible.

Can you also check, via the following call, that HDP-1.3.3 is indeed the
stack version?

curl -u admin:admin http://<ambari-server-host>:8080/api/v1/clusters/<clustername>





On Mon, Jun 30, 2014 at 8:05 AM, ILOGLU, EMINE  wrote:

>  This is a mystery to me. At each attempt a different service/services
> disappear from the UI and sometimes they are all fine.. Any idea what could
> be the root cause? The postgress db shows all the services as
> INSTALLED/STARTED state.
>
>
>
> *Emine Iloglu*
>
> AT&T - Common Services Systems Architecture (CSSA)
>
> T: 848-218-2108 | Q: ei947t
>
> (Remote, Eastern Timezone)
>
>
>
>
>
> *From:* Nate Cole [mailto:nc...@hortonworks.com]
> *Sent:* Monday, June 30, 2014 11:02 AM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: ambari upgrade
>
>
>
> The public-repo is only trying to determine information about stacks, but
> does not affect existing clusters.
>
>  On 6/30/14 10:51 AM, ILOGLU, EMINE wrote:
>
> From 1.4.4 to 1.6.0
>
> HDP is 1.3.3 , using an internal repo
>
>
>
> After the upgrade, I see this in the log file:
>
> 07:38:22,307  INFO [main] StackExtensionHelper:358 - No services defined
> for stack: HDP-1.3.3
>
> 07:38:23,229  INFO [Stack Version Loading Thread] LatestRepoCallable:73 -
> Loading latest URL info from
> http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json
>
> 07:38:25,233 ERROR [Stack Version Loading Thread] LatestRepoCallable:90 -
> Could not load the URI
> http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json (connect timed
> out)
>
> 07:38:25,233  INFO [Stack Version Loading Thread] LatestRepoCallable:73 -
> Loading latest URL info from
> http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json
>
> 07:38:27,236 ERROR [Stack Version Loading Thread] LatestRepoCallable:90 -
> Could not load the URI
> http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json (connect timed
> out)
>
> 07:38:27,237  INFO [Stack Version Loading Thread] LatestRepoCallable:73 -
> Loading latest URL info from
> http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json
>
> 07:38:29,244 ERROR [Stack Version Loading Thread] LatestRepoCallable:90 -
> Could not load the URI
> http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json (Read timed out)
>
> 07:38:29,248  WARN [main] ActionDefinitionManager:117 - Ignoring action
> definition as a different definition by that name already exists.
> ActionDefinition: actionName: ambari_hdfs_rebalancer actionType: SYSTEM
> inputs: threshold,[principal],[keytab] description: HDFS Rebalance
> targetService: HDFS targetComponent: NAMENODE defaultTimeout: 600
> targetType: ANY
>
> 07:38:29,248  WARN [main] ActionDefinitionManager:117 - Ignoring action
> definition as a different definition by that name already exists.
> ActionDefinition: actionName: nagios_update_ignore actionType: SYSTEM
> inputs: [nagios_ignore] description: Used to create an alert blackout
> targetService: NAGIOS targetComponent: NAGIOS_SERVER defaultTimeout: 60
> targetType: ANY
>
>
>
> And at this attempt to upgrade, I see HDFS and Nagios are missing and all
> the other services do exist.
>
> Why is it trying to go public-repo? That might be the problem right?
>
>
>
> Thanks
>
> *Emine Iloglu*
>
> AT&T - Common Services Systems Architecture (CSSA)
>
> T: 848-218-2108 | Q: ei947t
>
> (Remote, Eastern Timezone)
>
>
>
>
>
> *From:* Sumit Mohanty [mailto:smoha...@hortonworks.com
> ]
> *Sent:* Monday, June 30, 2014 10:46 AM
> *To:* user@ambari.apache.org
> *Subject:* Re: ambari upgrade
>
>
>
> What version did you upgrade from? What is the version of stack?
>
>
>
> What do these calls return (assumes the default login/password; change it
> as needed)?
>
>
>
> curl -u admin:admin http://<ambari-server-host>:8080/api/v1/clusters/
>
> curl -u admin:admin http://<ambari-server-host>:8080/api/v1/clusters/<cluster-name>/services
>
>
>
> On Mon, Jun 30, 2014 at 6:45 AM, ILOGLU, EMINE  wrote:
>
> Hi all,
>
>
>
> After I upgraded Ambari to 1.6.0, I don’t see any servers on the Ambari UI;
> however, they are all in Postgres, in the hostcomponentdesiredstate and
> servicecomponentdesiredstate tables.
>
>
>
> Any ideas?
>
>
>
> *Emine Iloglu*
>
> AT&T - Common Services Systems Architecture (CSSA)
>
> T: 848-218-2108 | Q: ei947t
>
> (Remote, Eastern Timezone)
>
>
>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or 

Re: REST API: Starting/stopping a service on a host

2014-07-08 Thread Sumit Mohanty
The wiki has a few samples of API usage -
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=41812517

Top level link -
https://cwiki.apache.org/confluence/display/AMBARI/API+usage+scenarios%2C+troubleshooting%2C+and+other+FAQs

thanks
Sumit
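
The pattern those wiki samples use is a PUT of a desired state against a host_component resource: "INSTALLED" stops a component, "STARTED" starts it. Below is a minimal sketch that only builds the request pieces — the server address, cluster, host, and component names are placeholders; actually issuing the call is then a basic-auth PUT (e.g. curl --user admin:admin -i -X PUT -d "$body" "$url"):

```python
import json

AMBARI = "http://ambari-host:8080"  # placeholder server address, not a real endpoint


def host_component_request(cluster, host, component, state):
    """Build the URL and body for an Ambari host-component state change.

    state "INSTALLED" stops the component, "STARTED" starts it,
    mirroring the curl samples on the wiki pages linked above.
    """
    url = (f"{AMBARI}/api/v1/clusters/{cluster}"
           f"/hosts/{host}/host_components/{component}")
    body = json.dumps({"HostRoles": {"state": state}})
    return url, body


url, body = host_component_request("c1", "node1.example.com", "DATANODE", "STARTED")
print(url)   # .../clusters/c1/hosts/node1.example.com/host_components/DATANODE
print(body)  # {"HostRoles": {"state": "STARTED"}}
```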


On Tue, Jul 8, 2014 at 9:38 AM, Tapper, Gunnar  wrote:

>  Hi,
>
> After reading the REST API documentation, I still don’t understand how to
> stop/start a service on a specific host.
>
> Can someone provide an example?
>
> Thank you,
>
> Gunnar
>
> *The person that says it cannot be done should not interrupt the person
> doing it.*
>
> Download a free version of HP DSM, a unified big-data administration tool
> for Vertica and Hadoop at: *HP DSM Download*
> 
>
>
>
>
>



Re: All processes are waiting during Cluster install

2014-07-13 Thread Sumit Mohanty
By "I restarted the process" do you mean that you restarted the installation?

Can you share the command logs for tasks (e.g. 10, 42, 58, etc.)? These
would help debug why the tasks are still active.

If you look at the Ambari UI and look at the past requests (top left) then
the task specific UI will show you the hosts and the local file names on
the host. The files are named as /var/lib/ambari-agent/data/output-10.txt
and /var/lib/ambari-agent/data/errors-10.txt for task id 10.

What I can surmise from the above is that the agents are still stuck
executing the older tasks, so they cannot execute the new commands sent by
the Ambari Server when you retried the installation. I suggest looking at the
command logs to see why they are stuck. Restarting the Ambari server may not
help; you may need to restart the agents if they are stuck executing the
tasks.

-Sumit


On Sun, Jul 13, 2014 at 8:00 AM, Suraj Nayak M  wrote:

> Hi,
>
> I am trying to install HDP 2.1 using Ambari on 4 nodes: 2 NN and 2 slaves.
> The install failed due to a Python script timeout. I restarted the process.
> For the past 2 hrs there has been no progress in the installation. Is it
> safe to kill the Ambari server and restart the process? How can I terminate
> the ongoing process in Ambari gracefully?
>
> Below is tail of the Ambari-Server logs.
>
> 20:12:08,530  WARN [qtp527311109-183] HeartBeatHandler:369 - Operation
> failed - may be retried. Service component host: HIVE_CLIENT, host:
> slave2.hdp.somedomain.com Action id1-1
> 20:12:08,530  INFO [qtp527311109-183] HeartBeatHandler:375 - Received
> report for a command that is no longer active. 
> CommandReport{role='HIVE_CLIENT',
> actionId='1-1', status='FAILED', exitCode=999, clusterName='HDP2_CLUSTER1',
> serviceName='HIVE', taskId=57, roleCommand=INSTALL, configurationTags=null,
> customCommand=null}
> 20:12:08,530  WARN [qtp527311109-183] ActionManager:143 - The task 57 is
> not in progress, ignoring update
> 20:12:08,966  WARN [qtp527311109-183] ActionManager:143 - The task 26 is
> not in progress, ignoring update
> 20:12:12,319  WARN [qtp527311109-183] ActionManager:143 - The task 58 is
> not in progress, ignoring update
> 20:12:12,605  WARN [qtp527311109-183] ActionManager:143 - The task 42 is
> not in progress, ignoring update
> 20:12:14,872  WARN [qtp527311109-183] ActionManager:143 - The task 10 is
> not in progress, ignoring update
> 20:12:19,039  WARN [qtp527311109-184] ActionManager:143 - The task 26 is
> not in progress, ignoring update
> 20:12:22,382  WARN [qtp527311109-183] ActionManager:143 - The task 58 is
> not in progress, ignoring update
> 20:12:22,655  WARN [qtp527311109-183] ActionManager:143 - The task 42 is
> not in progress, ignoring update
> 20:12:24,919  WARN [qtp527311109-184] ActionManager:143 - The task 10 is
> not in progress, ignoring update
> 20:12:29,086  WARN [qtp527311109-184] ActionManager:143 - The task 26 is
> not in progress, ignoring update
> 20:12:32,576  WARN [qtp527311109-183] ActionManager:143 - The task 58 is
> not in progress, ignoring update
> 20:12:32,704  WARN [qtp527311109-183] ActionManager:143 - The task 42 is
> not in progress, ignoring update
> 20:12:34,955  WARN [qtp527311109-183] ActionManager:143 - The task 10 is
> not in progress, ignoring update
> 20:12:39,132  WARN [qtp527311109-183] ActionManager:143 - The task 26 is
> not in progress, ignoring update
> 20:12:42,629  WARN [qtp527311109-184] ActionManager:143 - The task 58 is
> not in progress, ignoring update
> 20:12:42,754  WARN [qtp527311109-184] ActionManager:143 - The task 42 is
> not in progress, ignoring update
> 20:12:45,137  WARN [qtp527311109-183] ActionManager:143 - The task 10 is
> not in progress, ignoring update
> 20:12:49,320  WARN [qtp527311109-183] ActionManager:143 - The task 26 is
> not in progress, ignoring update
> 20:12:52,962  WARN [qtp527311109-184] ActionManager:143 - The task 58 is
> not in progress, ignoring update
> 20:12:53,093  WARN [qtp527311109-184] ActionManager:143 - The task 42 is
> not in progress, ignoring update
> 20:12:55,184  WARN [qtp527311109-184] ActionManager:143 - The task 10 is
> not in progress, ignoring update
> 20:12:59,366  WARN [qtp527311109-184] ActionManager:143 - The task 26 is
> not in progress, ignoring update
> 20:13:03,013  WARN [qtp527311109-184] ActionManager:143 - The task 58 is
> not in progress, ignoring update
> 20:13:03,257  WARN [qtp527311109-184] ActionManager:143 - The task 42 is
> not in progress, ignoring update
> 20:13:05,231  WARN [qtp527311109-184] ActionManager:143 - The task 10 is
> not in progress, ignoring update
>
>
> --
> Thanks
> Suraj Nayak
>


[ANNOUNCE] Apache Ambari 1.6.1

2014-07-19 Thread Sumit Mohanty
The Apache Ambari team is proud to announce Apache Ambari version 1.6.1

Apache Ambari is a tool for provisioning, managing, and monitoring Apache
Hadoop clusters. Ambari consists of a set of RESTful APIs and a
browser-based management console UI.

The release bits are at:
http://www.apache.org/dyn/closer.cgi/ambari/ambari-1.6.1

To use the released bits please use the following documentation:

https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+1.6.1

We would like to thank all the contributors that made the release possible.

Regards,

The Ambari Team



Re: EC2 instance type for Ambari

2014-08-06 Thread Sumit Mohanty
It really depends on the workload you want to run on the cluster.

Are you asking about the node that will host the Ambari Server or all the
nodes in the cluster?

If it's for the node hosting the Ambari Server, then you should have around
4 GB of RAM and run Ambari Server, Ganglia, and Nagios on the same host. That
will leave 19 nodes for your other workload.

The requirements of the other nodes depend on what workload you want to run:
what types of MR jobs, how much HBase capacity is required, etc.


On Tue, Aug 5, 2014 at 7:53 PM, Anand Nalya  wrote:

> Hi,
>
> I'll be using Ambari to deploy a cluster of around 20 nodes on EC2. Is a
> t2.small instance (2 GB RAM, 1 vCPU) fine for that, or could it be a
> bottleneck?
>
> Regards,
> Anand
>



Re: client vs slave

2014-08-15 Thread Sumit Mohanty
Whether a component is a client or a slave is driven by the metainfo.xml for
the service type in the stack definition.


On Fri, Aug 15, 2014 at 10:58 AM, Anisha Agarwal 
wrote:

>  Hi,
>
>  I was looking at the code to understand how a slave component differs
> from a client component.
>
>1. Is there a rest call to specify that the component is a client
>while creating it?
>2. I can see the flag on the js side of the code, but how does the
>server differentiate among these? I don’t see anything in the REST APIs
>which say whether a component is a client or a slave.
>
> Is there anything else which I need to know when creating a client
> component?
>
>  Thanks,
> Anisha
>



Re: question on [STACK]/[SERVICE]/metainfo.xml inheritance rules

2014-09-05 Thread Sumit Mohanty
Could we save these as FAQs on the Ambari wiki?

-Sumit


On Thu, Sep 4, 2014 at 5:53 PM, Siddharth Wagle 
wrote:

> Hi Alex,
>
> Replies inline.
>
> 1. If a component exists in the parent stack and is defined again in the
> child stack with just a few attributes, are these values just to override
> the parent's values or the whole component definition is replaced.
>
> We go property by property and merge them from parent to child. So if you
> remove a category, for example, from the child, it will be inherited from the
> parent; that goes for pretty much all properties.
> So the question is how we tackle the existence of a property in both the
> parent and the child. Here, most of the decisions still follow the same
> paradigm: take the child value instead of the parent's, and every property in
> the parent that is not explicitly deleted from the child using a marker tag
> is included in the merge.
>
> - For config-dependencies, we take an all-or-nothing approach: if this
> property exists in the child, use it and all of its children; else take it
> from the parent.
> - The custom commands are merged based on names, such that the merged
> definition is a union of commands, with child commands of the same name
> overriding those from the parent.
> - Cardinality is overwritten by the child, or taken from the parent if the
> child has not provided one.
>
> You could look at this method for more details:
> org.apache.ambari.server.api.util.StackExtensionHelper#mergeServices
>
> 2. If a component is missing in the new definition but is present in the
> parent, does it get inherited ?
>
> Generally yes.
>
> 3. Configuration dependencies for the service -- are they overwritten or
> merged ?
>
> Overwritten.
>
> 4. What about other elements in metainfo.xml -- which rules apply ?
>
> Answered in 1.
>
> -Sid
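>

The merge rules from answer 1 can be sketched in a few lines of Python. This is a toy model with made-up data shapes — the authoritative logic lives in org.apache.ambari.server.api.util.StackExtensionHelper#mergeServices:

```python
def merge_component(parent, child):
    """Toy sketch of the parent/child stack merge rules described above.

    Simplifying assumptions (not Ambari's real data model):
      - ordinary properties: the child value wins; missing ones are inherited
      - "config-dependencies": all-or-nothing, the child replaces the whole list
      - "custom-commands": union by command name, child wins on name conflicts
    """
    merged = dict(parent)
    for key, value in child.items():
        if key == "custom-commands":
            by_name = {c["name"]: c for c in parent.get(key, [])}
            by_name.update({c["name"]: c for c in value})
            merged[key] = list(by_name.values())
        else:  # includes config-dependencies: child replaces outright
            merged[key] = value
    return merged


parent = {"cardinality": "1", "category": "MASTER",
          "config-dependencies": ["core-site", "mapred-site"],
          "custom-commands": [{"name": "DECOMMISSION", "timeout": 600}]}
child = {"cardinality": "1-2",
         "config-dependencies": ["yarn-site"]}

m = merge_component(parent, child)
print(m["cardinality"])           # 1-2 (child overrides)
print(m["category"])              # MASTER (inherited from parent)
print(m["config-dependencies"])   # ['yarn-site'] (replaced wholesale)
```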
>
>
>
>
>
>
> On Thu, Sep 4, 2014 at 5:02 PM, Alexander Denissov 
> wrote:
>
>> I am trying to understand the inheritance rules that govern services
>> metainfo.xml file contents. I looked at
>> https://issues.apache.org/jira/browse/AMBARI-2819 but it didn't answer
>> the following:
>>
>> 1. If a component exists in the parent stack and is defined again in the
>> child stack with just a few attributes, are these values just to override
>> the parent's values or the whole component definition is replaced.
>>
>> Example: HDP-2.1 YARN/metainfo.xml contains a definition of RESOURCEMANAGER
>> with just 4 attributes, out of which only the value for "cardinality"
>> differs from the one in the HDP-2.0.6 definition. But the 2.0.6 definition
>> also has a lot more attributes (such as custom commands) that are not
>> mentioned in 2.1. Will these "missing" attributes be inherited by the 2.1
>> stack? If yes, why are other attributes (category and
>> configuration-dependencies) defined again with the same values instead of
>> being inherited?
>>
>> 2. If a component is missing in the new definition but is present in the
>> parent, does it get inherited ?
>>
>> 3. Configuration dependencies for the service -- are they overwritten or
>> merged ?
>>
>> Example: HDP-2.1 YARN/metainfo.xml contains a configuration-dependencies
>> element with 4 config-type entries, whereas in HDP-2.0.6 the same element
>> has 5 config-type entries (the extra one is mapred-site). So will
>> mapred-site be inherited and present in the 2.1 definition, or was
>> this the way to get rid of this specific line for the new stack?
>>
>> 4. What about other elements in metainfo.xml -- which rules apply ?
>>
>> --
>> Thanks,
>> Alex.
>>
>
>



Re: How to try 1.7?

2014-11-17 Thread Sumit Mohanty
The Apache Ambari Quick Start Guide describes a way to get the latest Ambari
(trunk or 1.7.0) and use it to install the latest of any supported stack.

https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide

See if that helps.

On Mon, Nov 17, 2014 at 1:47 PM, hsy...@gmail.com  wrote:

> Hi guys,
>
> Is there an easy way to try 1.7, probably with HDP2.2?
>
> Thanks!
>



Re: Access Ambari API through Service Install Script

2014-11-25 Thread Sumit Mohanty
There is another way to find out.

The command-*.json file created while invoking the INSTALL command will
have a property bag by the name of "clusterHostInfo". That has lists of
hosts for the various component types.

For example, one can get the list of rs_hosts using

  rs_hosts = default('/clusterHostInfo/hbase_rs_hosts', [])

In your case, you can look for
'/clusterHostInfo/<component_name>_hosts'


Sample of the relevant section from a command-*.json

...
   "clusterHostInfo": {
"ganglia_monitor_hosts": [
"c6403.ambari.apache.org"
],
"all_hosts": [
"c6403.ambari.apache.org"
],
"namenode_host": [
"c6403.ambari.apache.org"
],
"ambari_server_host": [
"c6403.ambari.apache.org"
],
"zookeeper_hosts": [
"c6403.ambari.apache.org"
],
...
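
As an illustration of how such a path lookup behaves, here is a hand-written stand-in for the default() helper operating on a command JSON shaped like the sample above. The real helper comes from Ambari's resource_management library and reads the live command context, so treat this purely as a sketch:

```python
import json


def default(path, command, fallback):
    """Stand-in for Ambari's default() helper: walk a '/x/y' path
    through the parsed command JSON, returning fallback if any
    segment is missing."""
    node = command
    for part in path.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return fallback
        node = node[part]
    return node


command = json.loads("""{
  "clusterHostInfo": {
    "namenode_host": ["c6403.ambari.apache.org"],
    "zookeeper_hosts": ["c6403.ambari.apache.org"]
  }
}""")

print(default("/clusterHostInfo/namenode_host", command, []))   # ['c6403.ambari.apache.org']
print(default("/clusterHostInfo/hbase_rs_hosts", command, []))  # [] (key absent)
```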

On Tue, Nov 25, 2014 at 4:46 PM, Yusaku Sako  wrote:

> Can someone help?
>
> Yusaku
>
> On Sat, Nov 22, 2014 at 4:48 PM, Brian de la Motte <
> bdelamo...@zdatainc.com> wrote:
>
>> Hi everyone,
>>
>> I'm trying to integrate a custom service into Ambari. The problem is that
>> the service's master needs to know, during installation, about the hosts
>> that are to be the service's slaves. The only way I was able to find that
>> info was by using the API.
>>
>> Is it a good idea to have the master's install script call the API to get
>> that info or is there an easier way? I think it would work but the script
>> would need to know the username and password for Ambari's API call.
>>
>> Is there a way to get the service's slave's hosts without the API or for
>> a Python script to get the admin username and password from a config in
>> order to call the API?
>>
>> The call I would possibly use is something like this:
>>
>> curl  --user admin:admin
>> http://127.0.0.1:8080/api/v1/clusters/abc/services/CUST_SERVICE/components/CUST_SERVICE_SLAVE?fields=host_components/HostRoles/host_name
>>
>>
>> I could parse out the hostnames from this but how do I make this call
>> without knowing what username and password to use.
>>
>> Any ideas or alternative methods?
>>
>> Thank you!
>>
>> Brian
>>
>
>



Re: Access Ambari API through Service Install Script

2014-11-25 Thread Sumit Mohanty
https://issues.apache.org/jira/browse/AMBARI-4223 has the background.

-Sumit

On Tue, Nov 25, 2014 at 5:12 PM, Sumit Mohanty 
wrote:

> There is another way to find out.
>
> The command-*.json file created while invoking the INSTALL command will
> have a property bag by the name of "clusterHostInfo". That has lists of
> hosts for the various component types.
>
> For example, one can get the list of rs_hosts using
>
>   rs_hosts = default('/clusterHostInfo/hbase_rs_hosts', [])
>
> In your case, you can look for
> '/clusterHostInfo/<component_name>_hosts'
>
>
> Sample of the relevant section from a command-*.json
>
> ...
>"clusterHostInfo": {
> "ganglia_monitor_hosts": [
> "c6403.ambari.apache.org"
> ],
> "all_hosts": [
> "c6403.ambari.apache.org"
> ],
> "namenode_host": [
> "c6403.ambari.apache.org"
> ],
> "ambari_server_host": [
> "c6403.ambari.apache.org"
> ],
> "zookeeper_hosts": [
> "c6403.ambari.apache.org"
> ],
> ...
>
> On Tue, Nov 25, 2014 at 4:46 PM, Yusaku Sako 
> wrote:
>
>> Can someone help?
>>
>> Yusaku
>>
>> On Sat, Nov 22, 2014 at 4:48 PM, Brian de la Motte <
>> bdelamo...@zdatainc.com> wrote:
>>
>>> Hi everyone,
>>>
>>> I'm trying to integrate a custom service into Ambari. The problem is the
>>> service's master during installation needs to know about the hosts that are
>>> to be the service's slaves. The only way I was able to find that info was
>>> by using the API.
>>>
>>> Is it a good idea to have the master's install script call the API to
>>> get that info or is there an easier way? I think it would work but the
>>> script would need to know the username and password for Ambari's API call.
>>>
>>> Is there a way to get the service's slave's hosts without the API or for
>>> a Python script to get the admin username and password from a config in
>>> order to call the API?
>>>
>>> The call I would possibly use is something like this:
>>>
>>> curl  --user admin:admin
>>> http://127.0.0.1:8080/api/v1/clusters/abc/services/CUST_SERVICE/components/CUST_SERVICE_SLAVE?fields=host_components/HostRoles/host_name
>>>
>>>
>>> I could parse out the hostnames from this but how do I make this call
>>> without knowing what username and password to use.
>>>
>>> Any ideas or alternative methods?
>>>
>>> Thank you!
>>>
>>> Brian
>>>
>>
>>
>
>
>



Re: Is Zookeeper mandatory?

2014-11-30 Thread Sumit Mohanty
ZK is required when you use the Ambari Web UI. If you install using the
APIs then you can pick and choose.

The best option is to stop the ZK service post-installation. You can put the
ZK service in maintenance mode, and Start/Stop All Services will skip ZK.

On Sun, Nov 30, 2014 at 6:13 PM, Fabio  wrote:

> Hi guys,
> not sure if it's a bug or a feature, but for some reason Ambari 1.6.1
> doesn't let me remove ZooKeeper as a component. It keeps saying it's
> required by something else I selected, even if I just try to install the
> base Hadoop. I am working with VMs on a quite old computer and I just need
> the essentials (Hadoop, HDFS, Tez, Ganglia). Is ZooKeeper really mandatory,
> or is there some kind of issue in your opinion?
> (by now I installed it, but I'd rather know if it's possible to avoid this)
>
> Thanks
>
> Fabio
>



Re: Ambari API questions

2015-02-08 Thread Sumit Mohanty
1. Is there a way via the API to force it to update the DecomHosts field with 
fresh data?   There's a slight delay after the decommission process finishes 
before it is returned in the DecomHosts field of the NAMENODE, which is 
creating a race condition in my automation (sometimes it doesn't see the 
decommissioning hosts and just goes ahead and removes the DATANODE before it 
has finished re-replicating blocks).
Are you referring to "DecomNodes"? That is populated through the JMX data
from the NameNode itself. You may have to add a delay.
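
Rather than a fixed sleep, one way to sidestep the race is to poll until the decommissioning host shows up. A sketch with the fetch function injected — the stub below stands in for the real NAMENODE/JMX query, and all names are hypothetical:

```python
import time


def wait_for_decom(fetch_decom_nodes, host, attempts=30, delay=2.0, sleep=time.sleep):
    """Poll until `host` appears in the NameNode's decommissioning data.

    fetch_decom_nodes is injected (in practice it would GET the NAMENODE
    component and read the DecomNodes metric); injecting it keeps the
    workaround testable. Returns True if seen, False after `attempts` polls.
    """
    for _ in range(attempts):
        if host in fetch_decom_nodes():
            return True
        sleep(delay)
    return False


# Stubbed fetcher: the host shows up on the third poll.
responses = iter([[], [], ["dn1.example.com"]])
seen = wait_for_decom(lambda: next(responses), "dn1.example.com",
                      attempts=5, delay=0, sleep=lambda s: None)
print(seen)  # True
```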

2. Where in the API does the UI detect that components have stale configs and 
need to be restarted? I haven't been able to find that yet.
The staleness of config is detected at the level of host components.
E.g.
http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/RESOURCEMANAGER

{
  "href" : 
"http://c6401.ambari.apache.org:8080/api/v1/clusters/c1/hosts/c6401.ambari.apache.org/host_components/RESOURCEMANAGER",
  "HostRoles" : {
...
"stale_configs" : false,
"state" : "STARTED",
"actual_configs" : { ... specifies what version of config is applied ...}
}

The host resource reports the desired_config versions, in case you are 
curious about what the difference is.
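
As a sketch, given host_component entries shaped like the response fragment above (hypothetical sample data), the stale ones can be collected like this; in practice you could also let the server filter with a predicate such as ?HostRoles/stale_configs=true&fields=HostRoles/stale_configs:

```python
def stale_components(host_components):
    """Return (host, component) pairs whose configs are stale, given a
    list of host_component dicts shaped like the API fragment above."""
    return [(hc["HostRoles"]["host_name"], hc["HostRoles"]["component_name"])
            for hc in host_components
            if hc["HostRoles"].get("stale_configs")]


# Hypothetical sample of what the API might return for two host components.
items = [
    {"HostRoles": {"host_name": "c6401", "component_name": "RESOURCEMANAGER",
                   "stale_configs": False}},
    {"HostRoles": {"host_name": "c6402", "component_name": "DATANODE",
                   "stale_configs": True}},
]
print(stale_components(items))  # [('c6402', 'DATANODE')]
```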






From: Greg Hill 
Sent: Sunday, February 08, 2015 7:22 AM
To: user@ambari.apache.org
Subject: Ambari API questions

1. Is there a way via the API to force it to update the DecomHosts field with 
fresh data?   There's a slight delay after the decommission process finishes 
before it is returned in the DecomHosts field of the NAMENODE, which is 
creating a race condition in my automation (sometimes it doesn't see the 
decommissioning hosts and just goes ahead and removes the DATANODE before it 
has finished re-replicating blocks).
2. Where in the API does the UI detect that components have stale configs and 
need to be restarted? I haven't been able to find that yet.

Thanks in advance.

Greg


Re: 'resetting' Ambari

2015-03-03 Thread Sumit Mohanty
By original, do you mean that you want to reset Ambari so that you can do a 
fresh installation?

From: Brian Jeltema 
Sent: Tuesday, March 03, 2015 9:20 AM
To: user@ambari.apache.org
Subject: 'resetting' Ambari

I have a small cluster that I recently set up, but then operations wiped all of 
the nodes.
I’d like to restore Ambari to its original installation state. Is that 
possible (and how)?

Brian


Re: Adding install priority to custom services

2015-03-11 Thread Sumit Mohanty
Brian,


All INSTALLs are scheduled before all STARTs.


Does the install of your service require HDFS to be started? What operations do 
you perform on HDFS during the INSTALL? Could you move them to the START of 
CUSTOM_MASTER/SLAVE? If you can, the role_command_order you specified should 
take care of the dependencies.


-Sumit


From: Brian de la Motte 
Sent: Wednesday, March 11, 2015 4:34 PM
To: user@ambari.apache.org
Cc: bbarn...@zdatainc.com
Subject: Re: Adding install priority to custom services

Hello Sid,

I believe I tried what you asked for, but it still didn't work. I separated it 
like this...

"CUSTOM_MASTER-INSTALL": ["CUSTOM_SLAVE-INSTALL", "NAMENODE-INSTALL", 
"DATANODE-INSTALL"],
"CUSTOM_SLAVE-START": ["CUSTOM_MASTER-START"],
"CUSTOM_MASTER-START": ["NAMENODE-START"],

I searched for "START" in the ambari-agent.log but there were not start 
commands issued for NAMENODE or DATANODE before the install began.

Am I missing something here?

Thank you for your help.

Sincerely,
Brian


On Fri, Mar 6, 2015 at 7:18 PM, Siddharth Wagle wrote:
Hi Brian,

Make sure to define the rules separately for install from start, since ambari 
creates separate stages for these commands. This should resolve the issue, do 
let us know the outcome.

BR,
Sid


Sent by Outlook for Android
_
From:Brian de la Motte
Subject:Re: Adding install priority to custom services
To:user@ambari.apache.org
Cc:Ben Barnett



Hello,

I believe we are having the same issue as Satya. We have a service that is 
dependent on HDFS being started and have declared it in our 
role_command_order.json, but Ambari is installing the custom service before 
HDFS has started and erring out. It does install HDFS before the custom 
service, just not start HDFS, even though I have this in my 
role_command_order.json:

"CUSTOM_SERVICE-INSTALL": ["CUSTOM_SERVICE-INSTALL", "NAMENODE-INSTALL", 
"NAMENODE-START", "DATANODE-START"],

When reviewing the ambari-agent.log, it installs the namenode and datanode, but 
doesn't start the namenode at all. It gets the status of the namenode, and 
shows it's not running, but doesn't start it. Does the role_command_order.json 
need anything else inside it to force a service to be in a started state before 
installing another service?

Thanks,
Brian


On Thu, Mar 5, 2015 at 3:36 AM, Satyanarayana Jampa wrote:
Hi Sid,
Below are the steps I followed to keep the service installation 
in order.

1.   I have modified the below file and restarted ambari-server:
vi 
/var/lib/ambari-server/resources/stacks/HDP/2.0.6/role_command_order.json
#added below lines
"A_HANDLER-INSTALL" : ["B_HANDLER-INSTALL", 
"C_HANDLER-INSTALL"],
"A_HANDLER-START": ["B_HANDLER-START", 
"C_HANDLER-START"],


2.   But, while installing the services, the installation was 
happening in alphabetical order.

3.   I want the order to be C, B and A.

Am I missing something here.


Thanks,
Satya.
From: Siddharth Wagle 
[mailto:swa...@hortonworks.com]
Sent: 05 March 2015 01:58
To: user@ambari.apache.org
Subject: Re: Adding install priority to custom services


Hi Satya,



Take a look at the

ambari-server/src/main/resources/role_command_order.json



This json structure is used to build the dependency graph between components.

Every stack overrides this file to add order between new components that the 
stack introduces, so that the base copy of this file remains unchanged.



In your case, if you add INSTALL-time or START-time dependencies between 
service2 and service1 components, Ambari will re-order the commands 
automatically.

Note: Format of the keys in the json is "ComponentName"-"Command" (Commands: 
START, INSTALL, UPGRADE, SERVICE_CHECK).
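
One way to catch typos in these keys before restarting ambari-server is a quick format check over the edited role_command_order.json. The command set and component-name pattern below are assumptions based on the note above, not Ambari's actual parser:

```python
import re

COMMANDS = {"START", "INSTALL", "UPGRADE", "SERVICE_CHECK"}


def check_role_command_order(rules):
    """Return the malformed keys in a role_command_order-style mapping.

    rules maps e.g. "CUSTOM_MASTER-INSTALL" -> list of dependency keys;
    every key and dependency must look like COMPONENT-COMMAND.
    """
    bad = []
    for key, deps in rules.items():
        for k in [key] + list(deps):
            comp, sep, cmd = k.rpartition("-")
            if not sep or cmd not in COMMANDS or not re.fullmatch(r"[A-Z0-9_]+", comp):
                bad.append(k)
    return bad


rules = {"CUSTOM_MASTER-INSTALL": ["NAMENODE-INSTALL", "DATANODE-INSTALL"],
         "CUSTOM_MASTER-START": ["NAMENODE-RESTART"]}  # RESTART is not a valid command
print(check_role_command_order(rules))  # ['NAMENODE-RESTART']
```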



BR,

Sid




From: Satyanarayana Jampa
Sent: Wednesday, March 04, 2015 3:08 AM
To: user@ambari.apache.org
Subject: Adding install priority to custom services

I have created some custom services, and in my case the "service2" needs 
"service1" to be installed first as it is needed for service2 to function 
properly.
How can I specify the dependency, or the order in which the services should be 
installed?

Thanks,
Satya.








Re: Adding install priority to custom services

2015-03-12 Thread Sumit Mohanty
I do not think issuing START for HDFS from INSTALL of HAWQ is a good idea. It 
may not be the right level of dependency between services.


I think the right workaround is what you described - add HAWQ via Add Service 
after the cluster is installed.


The right solution would be to move the logic that verifies HDFS into the START 
of HAWQ. This is also helpful if the HDFS configuration has changed and is no 
longer suitable for HAWQ. Start will fail and the user will need to reconfigure 
HAWQ - think HA getting enabled for the NameNode.


As a feature improvement to Ambari, you can file a JIRA to provide support for 
INITIALIZE which may be executed post INSTALL but before START.  INITIALIZE may 
also demand that it be executed by only one component instance.
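As a sketch of moving the verification into START - a shell helper whose name and probe path are assumptions, not anything Ambari provides:

```shell
# Probe HDFS writability by creating and then removing a temporary file.
# Returns 0 only if both operations succeed.
check_hdfs_writable() {
  probe="/tmp/service_start_probe_$$"
  hdfs dfs -touchz "$probe" && hdfs dfs -rm -skipTrash "$probe"
}

# Example use at the top of a START script:
#   check_hdfs_writable || { echo "HDFS not writable; failing START" >&2; exit 1; }
```

Failing here, rather than during INSTALL, means a later HDFS change (such as enabling NameNode HA) surfaces as a start failure the user can fix by reconfiguring.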


-Sumit


From: Brian de la Motte 
Sent: Thursday, March 12, 2015 9:47 AM
To: user@ambari.apache.org
Cc: bbarn...@zdatainc.com
Subject: Re: Adding install priority to custom services

Hi Sumit,

I guess that explains the problem we're having. The custom service is HAWQ and 
requires HDFS to be started. During the install of HAWQ, it runs 'gpinitsystem', 
which initializes everything and verifies it can talk to HDFS correctly. It 
tries writing to HDFS and, if it can't for any reason, errors out and stops the 
rest of the installation. One workaround I noticed: if I install just HDFS 
and ZOOKEEPER, and then install the HAWQ custom service, everything works 
correctly since HDFS is in a started state.

Could I start a component from another component? So in the INSTALL function of 
the custom service,  start HDFS if it's not started? Or would that be 
considered bad practice?

For now, I will try moving some of the INSTALL code to the START section as you 
mentioned, and add some logic to only run if needed. Thank you for the help and 
explanation!

Sincerely,
Brian


On Wed, Mar 11, 2015 at 5:47 PM, Sumit Mohanty 
mailto:smoha...@hortonworks.com>> wrote:

Brian,


All INSTALLs are scheduled before all STARTs.


Does the install of your service require HDFS to be started? What operations do 
you perform on HDFS during the INSTALL? Could you move them to the START of the 
CUSTOM_MASTER/SLAVE? If you can, the role_command_order you specified should 
take care of the dependencies.


-Sumit


From: Brian de la Motte 
mailto:bdelamo...@zdatainc.com>>
Sent: Wednesday, March 11, 2015 4:34 PM
To: user@ambari.apache.org
Cc: bbarn...@zdatainc.com

Subject: Re: Adding install priority to custom services

Hello Sid,

I believe I tried what you asked for, but it still didn't work. I separated it 
like this...

"CUSTOM_MASTER-INSTALL": ["CUSTOM_SLAVE-INSTALL", "NAMENODE-INSTALL", 
"DATANODE-INSTALL"],
"CUSTOM_SLAVE-START": ["CUSTOM_MASTER-START"],
"CUSTOM_MASTER-START": ["NAMENODE-START"],

I searched for "START" in the ambari-agent.log but there were no start 
commands issued for NAMENODE or DATANODE before the install began.

Am I missing something here?

Thank you for your help.

Sincerely,
Brian


On Fri, Mar 6, 2015 at 7:18 PM, Siddharth Wagle 
mailto:swa...@hortonworks.com>> wrote:
Hi Brian,

Make sure to define the rules separately for install and start, since Ambari 
creates separate stages for these commands. This should resolve the issue; do 
let us know the outcome.

BR,
Sid


Sent by Outlook for Android
_
From: Brian de la Motte
Subject: Re: Adding install priority to custom services
To: user@ambari.apache.org
Cc: Ben Barnett



Hello,

I believe we are having the same issue as Satya. We have a service that is 
dependent on HDFS being started and have declared that in our 
role_command_order.json, but Ambari installs the custom service before HDFS has 
started and it errors out. It does install HDFS before the custom service; it 
just does not start HDFS, even though I have this in my 
role_command_order.json:

"CUSTOM_SERVICE-INSTALL": ["CUSTOM_SERVICE-INSTALL", "NAMENODE-INSTALL", 
"NAMENODE-START", "DATANODE-START"],

When reviewing the ambari-agent.log, it installs the namenode and datanode, but 
doesn't start the namenode at all. It gets the status of the namenode, and 
shows it's not running, but doesn't start it. Does the role_command_order.json 
need anything else inside it to force a service to be in a started state before 
installing another service?

Thanks,
Brian


On Thu, Mar 5, 2015 at 3:36 AM, Satyanarayana Jampa 
mailto:sja...@innominds.com>> wrote:
Hi Sid,
Below are the steps I followed to keep the service installation 
in an order.

1.   I have modified the below file and restarted ambari-s

Re: Ganglia metrics on HDP2 Setup with Ambari 1.7.

2015-03-13 Thread Sumit Mohanty
It is possible that host components, such as HBASE_REGIONSERVER, DATANODE are 
not able to push metrics to Ganglia.


Can you check if /var/lib/ganglia/data on the Ganglia metad Server host to see 
if metrics files are being created? You can also check /var/log/messages on the 
machines where host components are running.
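That check can be scripted; a minimal sketch, where the helper name and the messages are assumptions:

```shell
# Report whether a Ganglia metrics directory exists and contains files.
check_metric_dirs() {
  dir="$1"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "metrics present in $dir"
  else
    echo "no metrics in $dir - check gmond on the component hosts"
  fi
}

# Example: check_metric_dirs /var/lib/ganglia/data
```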


From: Dimitris Bouras 
Sent: Friday, March 13, 2015 6:21 AM
To: user@ambari.apache.org
Subject: Ganglia metrics on HDP2 Setup with Ambari 1.7.

Hi,

I have successfully set-up HDP 2.2 using Ambari 1.7 on Centos 6.5. I have done 
this using ec2 instances.

However, when I select the Ganglia UI I only see the option to select the node 
Ganglia is installed on. All the other machines are running gmond. My gmetad 
file has the following data sources registered by default:

data_source "HDPSlaves" ip-10-200-0-101.ec2.internal:8660
data_source "HDPNodeManager" ip-10-200-0-101.ec2.internal:8657
data_source "HDPNimbus" ip-10-200-0-101.ec2.internal:8649
data_source "HDPResourceManager" ip-10-200-0-101.ec2.internal:8664
data_source "HDPKafka" ip-10-200-0-101.ec2.internal:8671
data_source "HDPHBaseRegionServer" ip-10-200-0-101.ec2.internal:8656
data_source "HDPDataNode" ip-10-200-0-101.ec2.internal:8659
data_source "HDPNameNode" ip-10-200-0-101.ec2.internal:8661
data_source "HDPHBaseMaster" ip-10-200-0-101.ec2.internal:8663
data_source "HDPSupervisor" ip-10-200-0-101.ec2.internal:8650
data_source "HDPHistoryServer" ip-10-200-0-101.ec2.internal:8666

Am I missing some configuration in order to enable the other nodes in my cluster?
Thanks


Re: Start/stop all services programmatically

2015-04-09 Thread Sumit Mohanty
I do not think the predicate is implemented yet.


Looking into to UI (as Yusaku suggested) this is what it is issuing.


{"RequestInfo": {
    "context": "Start All Host Components",
    "operation_level": {"level": "HOST", "cluster_name": "c1",
                        "host_names": "u1201.ambari.apache.org"},
    "query": "HostRoles/component_name.in(APP_TIMELINE_SERVER,DATANODE,HISTORYSERVER,NAMENODE,NODEMANAGER,RESOURCEMANAGER,SECONDARY_NAMENODE,ZOOKEEPER_SERVER)"},
 "Body": {"HostRoles": {"state": "STARTED"}}}


As you can see all component names are explicit.


From: Krzysztof Adamski 
Sent: Thursday, April 09, 2015 1:38 AM
To: user@ambari.apache.org
Subject: Re: Start/stop all services programmatically

Still no luck.
The current status: I can stop all components that were originally started 
(excluding client components - thanks Sumit).
curl -u admin:admin -H "X-Requested-By: ambari" -i -X PUT -d 
'{"RequestInfo":{"context":"Stop All Host 
Components"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' 
"http://u1201.ambari.apache.org:8080/api/v1/clusters/c1/host_components?HostRoles/host_name=u1201.ambari.apache.org&HostRoles/state=STARTED";

However I cannot start the components, as the client ones complain. I wish 
there was a third state, STOPPED. Any ideas how to make a one-liner to start all 
host components excluding clients? I was only able to make a GET work:
curl -u admin:admin -H "X-Requested-By: ambari" -i -X GET 
"http://ambari:8080/api/v1/clusters/HADOOP_LAB/components?ServiceComponentInfo/category.in(MASTER,SLAVE)&host_components/HostRoles/host_name=host1&host_components/HostRoles/state=INSTALLED"
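One way to approximate the missing one-liner - a sketch only: the cluster/host names are placeholders, and component names are parsed client-side with sed since the category predicate cannot be applied to the PUT:

```shell
AMBARI=http://ambari:8080; CLUSTER=HADOOP_LAB; HOST=host1

# Extract component_name values from the API's JSON response on stdin.
list_components() {
  sed -n 's/.*"component_name" *: *"\([A-Z_0-9]*\)".*/\1/p' | sort -u
}

# Start every MASTER/SLAVE (i.e. non-client) component on $HOST.
start_non_clients() {
  curl -s -u admin:admin "$AMBARI/api/v1/clusters/$CLUSTER/components?ServiceComponentInfo/category.in(MASTER,SLAVE)&host_components/HostRoles/host_name=$HOST&fields=host_components/HostRoles/component_name" \
    | list_components \
    | while read comp; do
        curl -s -u admin:admin -H "X-Requested-By: ambari" -X PUT \
          -d '{"RequestInfo":{"context":"Start '"$comp"'"},"Body":{"HostRoles":{"state":"STARTED"}}}' \
          "$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/$comp"
      done
}
```

Each PUT targets a single host_component, so client components simply never appear in the loop.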





On Thu, Apr 9, 2015 at 9:26 AM Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>> wrote:
Couldn't make the PUT work :(

I see there is another option
https://cwiki.apache.org/confluence/display/AMBARI/Restarting+host+components+via+the+API


On Thu, Apr 9, 2015 at 8:34 AM Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>> wrote:
Almost there - the GET works:
curl -u admin:admin -H "X-Requested-By: ambari" -i -X GET 
"http://ambari:8080/api/v1/clusters/HADOOP_LAB/components?ServiceComponentInfo/category.in(MASTER,SLAVE)&host_components/HostRoles/host_name=host1&host_components/HostRoles/state=INSTALLED"


On Thu, Apr 9, 2015 at 7:56 AM Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>> wrote:
That's it. A clever solution indeed.

How about starting the services then? How to quickly exclude client components?
 "message" : "java.lang.IllegalArgumentException: Invalid desired state for a 
client component"

There is a filter like this: 
ServiceComponentInfo/category.in(SLAVE,MASTER)


On Thu, Apr 9, 2015 at 7:41 AM Sumit Mohanty 
mailto:smoha...@hortonworks.com>> wrote:

You can try something like this 


curl -u admin:admin -H "X-Requested-By: ambari" -i -X PUT -d 
'{"RequestInfo":{"context":"Stop All Host 
Components"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' 
"http://u1201.ambari.apache.org:8080/api/v1/clusters/c1/host_components?HostRoles/host_name=u1201.ambari.apache.org&HostRoles/state=STARTED";


Predicates such as 
"HostRoles/host_name=u1201.ambari.apache.org&HostRoles/state=STARTED" 
will help narrow down the choices.



From: Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>>
Sent: Wednesday, April 08, 2015 10:11 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>

Subject: Re: Start/stop all services programmatically
Hi Yusaku,

Many thanks for pointing me to the dependencies file. It would help a lot.

The problem with the stop all command you sent is that this for all services 
within the cluster and I want to do this explicitly on a host basis to perform 
a rolling OS patching.
Another issue is that specifying state:INSTALLED would result in installing 
client components on hosts where I do not want to have them, e.g. data nodes.
Unless I did something wrong.

Regards,
Krzysztof


On Thu, Apr 9, 2015 at 2:34 AM Yusaku Sako 
mailto:yus...@hortonworks.com>> wrote:
Sorry, forgot to answer your second question regarding dependencies.
Such dependencies are specified in a file called role_command_order.json as 
part of the stack definition.

https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json

If you try to start/stop all services in bulk, the command order rules will be 
followed automatically by the server.

Yusaku

From: Yusaku

Re: Start/stop all services programmatically

2015-04-08 Thread Sumit Mohanty
You can try something like this 


curl -u admin:admin -H "X-Requested-By: ambari" -i -X PUT -d 
'{"RequestInfo":{"context":"Stop All Host 
Components"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' 
"http://u1201.ambari.apache.org:8080/api/v1/clusters/c1/host_components?HostRoles/host_name=u1201.ambari.apache.org&HostRoles/state=STARTED";


Predicates such as 
"HostRoles/host_name=u1201.ambari.apache.org&HostRoles/state=STARTED" will 
help narrow down the choices.



From: Krzysztof Adamski 
Sent: Wednesday, April 08, 2015 10:11 PM
To: user@ambari.apache.org
Subject: Re: Start/stop all services programmatically

Hi Yusaku,

Many thanks for pointing me to the dependencies file. It would help a lot.

The problem with the stop all command you sent is that this for all services 
within the cluster and I want to do this explicitly on a host basis to perform 
a rolling OS patching.
Another issue is that specifying state:INSTALLED would result in installing 
client components on hosts where I do not want to have them, e.g. data nodes.
Unless I did something wrong.

Regards,
Krzysztof


On Thu, Apr 9, 2015 at 2:34 AM Yusaku Sako 
mailto:yus...@hortonworks.com>> wrote:
Sorry, forgot to answer your second question regarding dependencies.
Such dependencies are specified in a file called role_command_order.json as 
part of the stack definition.

https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json

If you try to start/stop all services in bulk, the command order rules will be 
followed automatically by the server.

Yusaku

From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, April 8, 2015 5:27 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: Start/stop all services programmatically

Hi Krzysztof,

You can do everything that the UI does with the API.
The best way to learn what API calls the UI is making is to use the browser's 
developer tool and watch the network traffic.

Stop all services:
curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d '
{"RequestInfo":{"context":"Stop all 
services","operation_level":{"level":"CLUSTER","cluster_name":"ing_hdp"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
' http://ambari:8080/api/v1/clusters/ing_hdp/services

Start all services:
curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d '
{"RequestInfo":{"context":"Start all 
services","operation_level":{"level":"CLUSTER","cluster_name":"ing_hdp"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}
' http://ambari:8080/api/v1/clusters/ing_hdp/services

I hope this helps.
Yusaku

From: Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, April 7, 2015 12:15 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Start/stop all services programmatically

Hello,

I am implementing a manual job to stop all of a host's services via script 
before rebooting the OS. The examples I found in the wiki are per service or 
per component.

1. Is there any way to invoke stop/start for all of a host's components, just 
like from the web interface?
2. How does Ambari determine the proper order for the services to start/stop, 
e.g. stopping HiveServer before stopping MySQL?

curl -s --user admin:admin -H "X-Requested-By: ambari" -X GET 
"http://ambari:8080/api/v1/clusters/ing_hdp/components/?ServiceComponentInfo/category.in(SLAVE,MASTER)&host_components/HostRoles/host_name=host1&fields=host_components/HostRoles/component_name,host_components/HostRoles/state"
 | jq -r '[[.items[].host_components[].HostRoles.component_name]]|tostring' | 
sed -r 's/[\["]//g' | sed -r 's/[]]//g'
  function stop(){
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": 
{"context": "Stop '"$1"' via REST"}, "Body": {"HostRoles": {"state": 
"INSTALLED"}}}' 
http://ambari:8080/api/v1/clusters/ing_hdp/hosts/host1/host_components/$1
}
Thanks for any guide.


Re: Configure Database in Ambari

2015-04-14 Thread Sumit Mohanty
If you have an Ambari UI-based deployment that is configured to use MySQL, then 
you can export a blueprint from it. 
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints has pointers on 
how to export.
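For reference, the database-related part of such an exported blueprint looks roughly like this (a sketch for the Hive metastore; the property names come from the HDP stack, while the host, user, and password values are placeholders):

```json
{
  "configurations": [
    {
      "hive-env": {
        "hive_database": "Existing MySQL Database"
      }
    },
    {
      "hive-site": {
        "javax.jdo.option.ConnectionURL": "jdbc:mysql://mysql.example.com/hive",
        "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
        "javax.jdo.option.ConnectionUserName": "hive",
        "javax.jdo.option.ConnectionPassword": "hivepassword"
      }
    }
  ]
}
```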


thanks

Sumit


From: Pratik Gadiya 
Sent: Tuesday, April 14, 2015 2:04 AM
To: user@ambari.apache.org
Subject: Configure Database in Ambari


Hi All,

I want to configure all the hadoop services as well as ambari to use MySQL 
database instead of using their default databases.

Please let me know how can I configure this in the blueprint json file.

Note:-
By default Databases used in Ambari are listed in the link below
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Ref_Gd_v22/supported_database_matrix/index.html#Item1.1


With Regards,
Pratik Gadiya

DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.


Re: No Heartbeat after upgrade to Ambari 2.0

2015-04-16 Thread Sumit Mohanty
Can you check ambari-agent logs (/var/log/ambari-agent/ambari-agent.log or 
/var/log/ambari-agent/ambari-agent.out) and ambari-server logs 
(/var/log/ambari-server/ambari-server.log)? 

From: Frank Eisenhauer 
Sent: Thursday, April 16, 2015 2:32 PM
To: Ambari User
Subject: No Heartbeat after upgrade to Ambari 2.0

Hi All,

I've just upgraded my test cluster (3 nodes) to Ambari 2.0 according to
the upgrade documentation from Hortonworks.
The upgrade went through without failures.
But after logging into Ambari Web, all services show "Heartbeat lost".
I already restarted the ambari agents on each host but the status remains
"Heartbeat lost".

Has anyone encountered a similar result of the upgrade?


Re: No Heartbeat after upgrade to Ambari 2.0

2015-04-16 Thread Sumit Mohanty
Those do not seem related to the error. Try this:

You can pick one agent for which ambari-server is reporting Heartbeat lost.

* Set the agent log to DEBUG (edit /etc/ambari-agent/conf/ambari-agent.ini)
* Stop the agent and backup its log file
* Start the agent and let it run for ~2 minutes

Can you share the log through some public share? I think Apache strips off all 
attachments.

From: Frank Eisenhauer 
Sent: Thursday, April 16, 2015 3:06 PM
To: user@ambari.apache.org
Subject: Re: No Heartbeat after upgrade to Ambari 2.0

Already checked them.
There are entries like:
INFO [pool-1-thread-5473] URLStreamProvider:144 - Received
WWW-Authentication header:Negotiate, for
URL:http://:8744/api/v1/cluster/summary
ERROR [pool-1-thread-5473] AppCookieManager:122 - SPNego authentication
failed, can not get hadoop.auth cookie for URL:
http://:8744/api/v1/cluster
ERROR [pool-1-thread-5473] AppCookieManager:122 - SPNego authentication
failed, can not get hadoop.auth cookie for URL:
http://:8744/api/v1/cluster

Am 16.04.2015 um 23:47 schrieb Sumit Mohanty:
> Can you check ambari-agent logs (/var/log/ambari-agent/ambari-agent.log or 
> /var/log/ambari-agent/ambari-agent.out) and ambari-server logs 
> (/var/log/ambari-server/ambari-server.log)?
> 
> From: Frank Eisenhauer 
> Sent: Thursday, April 16, 2015 2:32 PM
> To: Ambari User
> Subject: No Heartbeat after upgrade to Ambari 2.0
>
> Hi All,
>
> I've just upgraded my testcluster (3 Nodes) to Ambari 2.0 according to
> the upgrade documentation of hortonworts.
> The upgrade went without failures.
> But after logging into Ambari Web, all Services show "Heartbeat lost".
> I already restarted ambari agents on each host but status remains
> "Heartbeat lost".
>
> Has anyone encoutered a similar result of the upgrade?



Re: delete using API problem

2015-04-17 Thread Sumit Mohanty
These steps seem fine to me. In fact, I just tried them and deleted a service in 
my test cluster (using the latest trunk code base, though).


What does GET return

curl -u admin:password -H "X-Requested-By: ambari" -X GET  
http://localhost:8080/api/v1/clusters/c1/services/STORM   ?


From: dbis...@gmail.com  on behalf of Artem Ervits 

Sent: Friday, April 17, 2015 7:02 AM
To: user@ambari.apache.org
Subject: delete using API problem

Hello,

I have an issue where I need to delete the Storm service, but I may have 
botched the order of steps.

Here are my commands:

curl -u admin:password -H "X-Requested-By: ambari" -X PUT -d 
'{"RequestInfo":{"context":"Stop 
Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' 
http://localhost:8080/api/v1/clusters/c1/services/STORM

curl -u admin:password -H "X-Requested-By: ambari" -X DELETE  
http://localhost:8080/api/v1/clusters/c1/services/STORM 
{
  "status" : 500,
  "message" : "org.apache.ambari.server.controller.spi.SystemException: An 
internal system exception occurred: Cannot remove STORM. Desired state STARTED 
is not removable.  Service must be stopped or disabled."

What do I do in this case?

Thanks

Artem Ervits


Re: No Heartbeat after upgrade to Ambari 2.0

2015-04-17 Thread Sumit Mohanty
Can you share the output of

curl -u admin:admin -H "X-Requested-By: ambari" -X GET 
http://localhost:8080/api/v1/hosts
curl -u admin:admin -H "X-Requested-By: ambari" -X GET 
http://localhost:8080/api/v1/clusters/c1/hosts

Generic pattern of the calls being
curl -u : -H "X-Requested-By: ambari" -X GET 
http://:8080/api/v1/clusters//hosts

perhaps there is a mismatch between the hostnames used when the agents 
registered initially vs. when they register now.

From: Frank Eisenhauer 
Sent: Thursday, April 16, 2015 10:51 PM
To: user@ambari.apache.org
Subject: Re: No Heartbeat after upgrade to Ambari 2.0

Hi Sumit,
I stopped ambari agent, cleared the logs and started the agent again.
The log seems to be ok, there is only one warning which might be related
to the issue:

WARNING 2015-04-17 07:50:47,784 AlertSchedulerHandler.py:92 - There are
no alert definition commands in the heartbeat; unable to update definitions

There are entries in the log which seem to me as if the heartbeat was
sent to ambari server:
DEBUG 2015-04-17 07:50:57,796 Heartbeat.py:78 - Heartbeat:
{'componentStatus': [],
  'hostname': 'srv233.x.xxx',
  'nodeStatus': {'cause': 'NONE', 'status': 'HEALTHY'},
  'reports': [],
  'responseId': 0,
  'timestamp': 1429249857795}

We experienced some problems with upper case hostnames in the past.
Might that be a problem?

If you need more information from the log file, I'll look for a way to
share the log.


Am 17.04.2015 um 00:19 schrieb Sumit Mohanty:
> Those do not seem related to the error. Try this:
>
> You can pick one agent for which ambari-server is reporting Heartbeat lost.
>
> * Set the agent log to DEBUG (edit /etc/ambari-agent/conf/ambari-agent.ini)
> * Stop the agent and backup its log file
> * Start the agent and let it run for ~2 minutes
>
> Can you share the log through some public share? I think Apache strips off 
> all attachments.
> 
> From: Frank Eisenhauer 
> Sent: Thursday, April 16, 2015 3:06 PM
> To: user@ambari.apache.org
> Subject: Re: No Heartbeat after upgrade to Ambari 2.0
>
> Already checked them.
> There are entries like:
> INFO [pool-1-thread-5473] URLStreamProvider:144 - Received
> WWW-Authentication header:Negotiate, for
> URL:http://:8744/api/v1/cluster/summary
> ERROR [pool-1-thread-5473] AppCookieManager:122 - SPNego authentication
> failed, can not get hadoop.auth cookie for URL:
> http://:8744/api/v1/cluster
> ERROR [pool-1-thread-5473] AppCookieManager:122 - SPNego authentication
> failed, can not get hadoop.auth cookie for URL:
> http://:8744/api/v1/cluster
>
> Am 16.04.2015 um 23:47 schrieb Sumit Mohanty:
>> Can you check ambari-agent logs (/var/log/ambari-agent/ambari-agent.log or 
>> /var/log/ambari-agent/ambari-agent.out) and ambari-server logs 
>> (/var/log/ambari-server/ambari-server.log)?
>> 
>> From: Frank Eisenhauer 
>> Sent: Thursday, April 16, 2015 2:32 PM
>> To: Ambari User
>> Subject: No Heartbeat after upgrade to Ambari 2.0
>>
>> Hi All,
>>
>> I've just upgraded my testcluster (3 Nodes) to Ambari 2.0 according to
>> the upgrade documentation of hortonworts.
>> The upgrade went without failures.
>> But after logging into Ambari Web, all Services show "Heartbeat lost".
>> I already restarted ambari agents on each host but status remains
>> "Heartbeat lost".
>>
>> Has anyone encoutered a similar result of the upgrade?


Re: adjust the agent heartbeat?

2015-04-17 Thread Sumit Mohanty
Not without a code change. This is probably a good feature to add. Can you 
create a task?


From: Greg Hill 
Sent: Friday, April 17, 2015 8:32 AM
To: user@ambari.apache.org
Subject: adjust the agent heartbeat?

https://github.com/apache/ambari/blob/trunk/ambari-agent/src/main/python/ambari_agent/NetUtil.py#L34

Is there any way to tweak that heartbeat interval setting?  If I'm reading the 
code right, it checks in with the server every 10s.  I'd like to tweak that and 
see if I can speed up build times by making it check in more frequently.  I 
don't see any way to override that setting, but maybe there's something in the 
.ini file?

Greg


Re: delete using API problem

2015-04-17 Thread Sumit Mohanty
That error is something I am not familiar with. Perhaps someone else can chime 
in.


From: dbis...@gmail.com  on behalf of Artem Ervits 

Sent: Friday, April 17, 2015 8:24 AM
To: user@ambari.apache.org
Subject: Re: delete using API problem

I think the answer lies in the last line: "couldn't resolve host ''". How do I 
go about this?

{
  "href" : "http://localhost:8080/api/v1/clusters/c1/services/STORM";,
  "ServiceInfo" : {
"cluster_name" : "c1",
"maintenance_state" : "ON",
"service_name" : "STORM",
"state" : "UNKNOWN"
  },
  "alerts_summary" : {
"CRITICAL" : 0,
"MAINTENANCE" : 1,
"OK" : 0,
"UNKNOWN" : 0,
"WARNING" : 0
  },
  "alerts" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/alerts/12";,
  "Alert" : {
"cluster_name" : "c1",
"definition_id" : 22,
"definition_name" : "storm_supervisor_process_percent",
"host_name" : null,
"id" : 12,
"service_name" : "STORM"
  }
}
  ],
  "components" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/DRPC_SERVER";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "DRPC_SERVER",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/NIMBUS";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "NIMBUS",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_UI_SERVER";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "STORM_UI_SERVER",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/SUPERVISOR";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "SUPERVISOR",
"service_name" : "STORM"
  }
}
  ],
  "artifacts" : [ ]
curl: (6) Couldn't resolve host '?'



Re: No Heartbeat after upgrade to Ambari 2.0

2015-04-17 Thread Sumit Mohanty
Looks like two hosts originally registered with all CAPS. 
  HADOOP01.BIGDATA.LOCAL
  HADOOP02.BIGDATA.LOCAL

Does hostname -f or socket.getfqdn() [python] return names in all CAPS?
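A quick way to check on each agent host (the helper name is an assumption):

```shell
# Classify a host name by whether it contains uppercase letters.
host_case() {
  case "$1" in
    *[A-Z]*) echo "uppercase" ;;
    *)       echo "lowercase" ;;
  esac
}

# Example: host_case "$(hostname -f)"
```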

Ambari 2.0 code converts host names to lowercase, and that could be the reason 
for the mismatch.

The work-around is to use the custom hostname script option for the agents.

How to:
To echo the customized name of the host to which the Ambari agent registers,
for every host, create a script like the following example, named
/var/lib/ambari-agent/hostname.sh. Be sure to chmod the script so it is
executable by the Agent.

#!/bin/sh
echo <ambari_hostname>

  where <ambari_hostname> is the host name to use for Agent registration.

Open /etc/ambari-agent/conf/ambari-agent.ini on every host, using a text editor.
Add to the [agent] section the following line:
hostname_script=/var/lib/ambari-agent/hostname.sh
  where /var/lib/ambari-agent/hostname.sh is the name of your custom echo 
script.

To generate a public host name for every host, create a script like the
following example, named /var/lib/ambari-agent/public_hostname.sh, to show the
name for that host in the UI. Be sure to chmod the script so it is executable
by the Agent.

#!/bin/sh
echo <public_hostname>

  where <public_hostname> is the host name to show for that host in the UI.

Open /etc/ambari-agent/conf/ambari-agent.ini on every host, using a text editor.
Add to the [agent] section the following line:
public_hostname_script=/var/lib/ambari-agent/public_hostname.sh

Restart the Agent on every host for these changes to take effect.
ambari-agent restart

-Sumit

From: Frank Eisenhauer 
Sent: Friday, April 17, 2015 9:37 AM
To: user@ambari.apache.org
Subject: Re: No Heartbeat after upgrade to Ambari 2.0

Sure. Please find the output attached.

The configuration worked without any problems in Ambari 1.7

[root@* ~]# curl -u admin:* -H "X-Requested-By: ambari" -X GET
http://localhost:8080/api/v1/hosts
{
   "href" : "http://localhost:8080/api/v1/hosts";,
   "items" : [
 {
   "href" : "http://localhost:8080/api/v1/hosts/HADOOP03.BIGDATA.LOCAL";,
   "Hosts" : {
 "host_name" : "HADOOP03.BIGDATA.LOCAL"
   }
 },
 {
   "href" : "http://localhost:8080/api/v1/hosts/hadoop02.bigdata.local";,
   "Hosts" : {
 "host_name" : "hadoop02.bigdata.local"
   }
 },
 {
   "href" : "http://localhost:8080/api/v1/hosts/hadoop01.bigdata.local";,
   "Hosts" : {
 "host_name" : "hadoop01.bigdata.local"
   }
 },
 {
   "href" : "http://localhost:8080/api/v1/hosts/HADOOP01.BIGDATA.LOCAL";,
   "Hosts" : {
 "cluster_name" : "BIGDATA_LAB",
 "host_name" : "HADOOP01.BIGDATA.LOCAL"
   }
 },
 {
   "href" : "http://localhost:8080/api/v1/hosts/HADOOP02.BIGDATA.LOCAL";,
   "Hosts" : {
 "cluster_name" : "BIGDATA_LAB",
 "host_name" : "HADOOP02.BIGDATA.LOCAL"
   }
 },
 {
   "href" : "http://localhost:8080/api/v1/hosts/hadoop03.bigdata.local";,
   "Hosts" : {
 "cluster_name" : "BIGDATA_LAB",
 "host_name" : "hadoop03.bigdata.local"
   }
 }
   ]
}

[root@* ~]# curl -u admin:* -H "X-Requested-By: ambari" -X
GET http://localhost:8080/api/v1/clusters/BIGDATA_LAB/hosts
{
   "href" : "http://localhost:8080/api/v1/clusters/BIGDATA_LAB/hosts";,
   "items" : [
 {
   "href" :
"http://localhost:8080/api/v1/clusters/BIGDATA_LAB/hosts/HADOOP01.BIGDATA.LOCAL";,
   "Hosts" : {
 "cluster_name" : "BIGDATA_LAB",
 "host_name" : "HADOOP01.BIGDATA.LOCAL"
   }
     },
     {
   "href" :
"http://localhost:8080/api/v1/clusters/BIGDATA_LAB/hosts/HADOOP02.BIGDATA.LOCAL";,
   "Hosts" : {
 "cluster_name" : "BIGDATA_LAB",
 "host_name" : "HADOOP02.BIGDATA.LOCAL"
   }
 },
 {
   "href" :
"http://localhost:8080/api/v1/clusters/BIGDATA_LAB/hosts/hadoop03.bigdata.local";,
   "Hosts" : {
 "cluster_name" : "BIGDATA_LAB",
 "host_name" : "hadoop03.bigdata.local"
   }
 }
   ]



Am 17.04.2015 um 17:50 schrieb Sumit Mohanty:
> curl -u admin:admin -H "X-Requested-By: ambari" -X 
> GEThttp://localhost:8080/api/v1/hosts


Re: delete using API problem

2015-04-17 Thread Sumit Mohanty
FailsafeHandlerList.java:130)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:363)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:627)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:51)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
17 Apr 2015 14:30:44,864  INFO [qtp-client-4136] ClusterImpl:1755 - Deleting 
service for cluster, clusterName=c1, serviceName=STORM
17 Apr 2015 14:30:44,866  INFO [qtp-client-4136] ServiceImpl:529 - Deleting all 
components for service, clusterName=c1, serviceName=STORM


:


On Fri, Apr 17, 2015 at 1:53 PM, Artem Ervits 
mailto:artemerv...@gmail.com>> wrote:
I am seeing a lot of these:

17 Apr 2015 13:52:18,828  WARN [pool-2-thread-684] 
RestMetricsPropertyProvider:204 - Unable to get component REST metrics. No host 
name for STORM_UI_SERVER.
17 Apr 2015 13:52:25,183  WARN [pool-2-thread-685] 
RestMetricsPropertyProvider:204 - Unable to get component REST metrics. No host 
name for STORM_UI_SERVER.
17 Apr 2015 13:52:31,615  WARN [pool-2-thread-678] 
RestMetricsPropertyProvider:204 - Unable to get component REST metrics. No host 
name for STORM_UI_SERVER.

I must confess that I erased all storm packages from each server prior to doing 
any API calls, if that is of any help.


On Fri, Apr 17, 2015 at 12:29 PM, Yusaku Sako 
mailto:yus...@hortonworks.com>> wrote:
Wow, this is bizarre.
Artem, do you see anything in ambari-server.log corresponding to the GET 
http://localhost:8080/api/v1/clusters/c1/services/STORM call?

Yusaku

From: Sumit Mohanty mailto:smoha...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, April 17, 2015 9:07 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>

Subject: Re: delete using API problem


That error is something I am not familiar with. Perhaps someone else can chime 
in.


From: dbis...@gmail.com<mailto:dbis...@gmail.com> 
mailto:dbis...@gmail.com>> on behalf of Artem Ervits 
mailto:artemerv...@gmail.com>>
Sent: Friday, April 17, 2015 8:24 AM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Re: delete using API problem

I think the answer lies in the last line: "Couldn't resolve host ''". How do I 
go about fixing this?

{
  "href" : "http://localhost:8080/api/v1/clusters/c1/services/STORM";,
  "ServiceInfo" : {
"cluster_name" : "c1",
"maintenance_state" : "ON",
"service_name" : "STORM",
"state" : "UNKNOWN"
  },
  "alerts_summary" : {
"CRITICAL" : 0,
"MAINTENANCE" : 1,
"OK" : 0,
"UNKNOWN" : 0,
"WARNING" : 0
  },
  "alerts" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/alerts/12";,
  "Alert" : {
"cluster_name" : "c1",
"definition_id" : 22,
"definition_name" : "storm_supervisor_process_percent",
"host_name" : null,
"id" : 12,
"service_name" : "STORM"
  }
}
  ],
  "components" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/DRPC_SERVER";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "DRPC_SERVER",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/NIMBUS";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "NIMBUS",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_UI_SERVER";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "STORM_UI_SERVER",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/SUPERVISOR";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "SUPERVISOR",
"service_name" : "STORM"
  }
}
  ],
  "artifacts" : [ ]
curl: (6) Couldn't resolve host '​'
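For what it's worth, curl's "Couldn't resolve host ''" at the very end usually means the pasted command contained an invisible character (for example a zero-width space picked up from rich-text mail) that curl parsed as a second, empty URL. A small sketch for spotting such characters in a pasted command (the command string below is just an example):

```python
# Detect invisible/zero-width characters that curl would treat as a
# separate, unresolvable host name when pasted into a shell.
import unicodedata

SUSPECTS = {"\u200b", "\u200c", "\u200d", "\ufeff", "\u00a0"}

def find_invisible(s):
    """Return (index, character name) for each invisible character in s."""
    hits = []
    for i, ch in enumerate(s):
        if ch in SUSPECTS:
            hits.append((i, unicodedata.name(ch, "U+%04X" % ord(ch))))
    return hits

pasted = "curl -u admin:admin http://localhost:8080/api/v1/clusters/c1 \u200b"
print(find_invisible(pasted))  # flags the trailing zero-width space
```

Stripping the flagged characters ("".join(ch for ch in pasted if ch not in SUSPECTS)) before running the command avoids the error.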






Re: Change hostname on running cluster

2015-04-18 Thread Sumit Mohanty
+Alejandro

In theory, you can stop ambari-server, modify all occurrences of the hostname, 
and that should be it. There is no first-class support for it.

Alejandro, did you look at the possibility of manually changing all host names 
to rename a host (https://issues.apache.org/jira/browse/AMBARI-10167)?

-Sumit

From: Frank Eisenhauer 
Sent: Saturday, April 18, 2015 12:31 AM
To: Ambari User
Subject: Change hostname on running cluster

Hi All,

we have a running hadoop cluster where we unfortunately have a hostname
in uppercase, e.g. SRV-HADOOP01.BIGDATA.LOCAL.

On Ambari 1.7 we are experiencing a lot of side effects which are
presumably caused by the uppercase hostnames.

I would like to rename the particular hosts(e.g.
srv-hadoop01.bigdata.local), so that there are only hosts with lowercase
names in the cluster.

Is it possible to change the hostname? I came across a few blogs, but in
general renaming hostnames seems not to be recommended.

Has anyone performed a hostname change?

Many thanks in advance.
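Many of these side effects come down to case-sensitive hostname comparisons between what Ambari stores and what the agents report. Before any rename, a quick audit of the host list can show which names are affected and whether lowercasing would collide with an existing entry (a small sketch; the hostnames are examples):

```python
def audit_hostnames(hostnames):
    """Flag names that are not all-lowercase, plus case-insensitive duplicates."""
    not_lower = [h for h in hostnames if h != h.lower()]
    seen, dupes = {}, []
    for h in hostnames:
        key = h.lower()
        if key in seen and seen[key] != h:
            # Two entries that differ only by case would collide after renaming.
            dupes.append((seen[key], h))
        seen[key] = h
    return not_lower, dupes

hosts = ["srv-hadoop01.bigdata.local", "SRV-HADOOP02.BIGDATA.LOCAL"]
print(audit_hostnames(hosts))
```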


Re: Configure Database in Ambari

2015-04-22 Thread Sumit Mohanty
Can you check oozie-site?


From: Pratik Gadiya 
Sent: Wednesday, April 22, 2015 2:49 AM
To: user@ambari.apache.org
Subject: RE: Configure Database in Ambari

Thanks Sumit!

I followed the same approach and have a few queries on the same.

When I try to deploy the cluster using Ambari UI, I can see that I can pass the 
values for database username, database host etc. from the UI.
Once the cluster was deployed via the Ambari UI, I grabbed the blueprint from 
the deployed cluster, and here is what I got.

{
  "hive-env" : {
"hive_database" : "Existing MySQL Database",
"hive_database_name" : "hive",
"hive_database_type" : "mysql",
"hive_existing_mysql_database" : "MySQL",
"hive_hostname" : "%HOSTGROUP::host_group_1%",
   "hive_user" : "hive",
"webhcat_user" : "hcat"
  }
},
{
  "oozie-env" : {
"oozie_ambari_database" : "MySQL",
"oozie_database" : "Existing MySQL Database",
"oozie_derby_database" : "Derby",
"oozie_existing_mysql_database" : "MySQL",
"oozie_existing_oracle_database" : "Oracle",
"oozie_existing_postgresql_database" : "PostgreSQL",
"oozie_user" : "oozie"
  }
}

In the above configuration which is grabbed from the deployed cluster, I can't 
see keys such as oozie_database_name as we have for hive_database_name.
Apart from that, I couldn't figure out why I couldn't see oozie_hostname as we 
have hive_hostname here.

Please let me know how I can pass the oozie_database_name and the 
oozie_hostname in the configuration part.
Also, do let me know if I can skip the entries highlighted in yellow in the 
above configs if they are not needed when simply configuring all the services 
to use MySQL as the database instead of their default databases.

Thanks,
Pratik Gadiya

From: Sumit Mohanty [mailto:smoha...@hortonworks.com]
Sent: Tuesday, April 14, 2015 8:06 PM
To: user@ambari.apache.org
Subject: Re: Configure Database in Ambari


If you have an Ambari UI-based deployment that is configured to use MySQL, then 
you can export a blueprint from it. 
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints has pointers on 
how to export.
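For reference, the export described on that wiki page is a single GET with "format=blueprint" appended to the cluster resource; a sketch of building the request URL (host, port and cluster name are placeholders):

```python
def blueprint_export_url(ambari_host, cluster_name, port=8080):
    """URL that returns the cluster's current configuration as a blueprint."""
    return ("http://%s:%d/api/v1/clusters/%s?format=blueprint"
            % (ambari_host, port, cluster_name))

# Equivalent curl call (credentials and names are placeholders):
#   curl -u admin:admin -H 'X-Requested-By: ambari' \
#        http://localhost:8080/api/v1/clusters/c1?format=blueprint
print(blueprint_export_url("localhost", "c1"))
```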



thanks

Sumit


From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Sent: Tuesday, April 14, 2015 2:04 AM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Configure Database in Ambari


Hi All,

I want to configure all the hadoop services as well as ambari to use MySQL 
database instead of using their default databases.

Please let me know how can I configure this in the blueprint json file.

Note:-
By default Databases used in Ambari are listed in the link below
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.2.0/HDP_Ref_Gd_v22/supported_database_matrix/index.html#Item1.1


With Regards,
Pratik Gadiya

DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.



Re: Configure Database in Ambari

2015-04-23 Thread Sumit Mohanty
It's new to me too. I might use your example as a sample :-)


Just a reflection of different services using somewhat different patterns.


But looks good to me.


From: Pratik Gadiya 
Sent: Thursday, April 23, 2015 11:59 AM
To: user@ambari.apache.org
Subject: RE: Configure Database in Ambari

Sumit, it looks like the following is the appropriate configuration that needs 
to be set up in the blueprint for my case, i.e. configuring Oozie and Hive to 
use an existing MySQL database:

{
  "hive-env" : {
"hive_database" : "Existing MySQL Database",
"hive_database_name" : "hive",
"hive_database_type" : "mysql",
"hive_existing_mysql_database" : "MySQL",
"hive_user" : "hive",
"hive_metastore_user_passwd": ""
  }
},
{
  "hive-site" : {
"ambari.hive.db.schema.name" : "hive",
   "javax.jdo.option.ConnectionPassword": ""
  }
},
{
  "oozie-env" : {
"oozie_ambari_database" : "MySQL",
"oozie_database" : "Existing MySQL Database",
"oozie_existing_mysql_database" : "MySQL",
"oozie_user" : "oozie"
  }
},
{
  "oozie-site" : {
"oozie.db.schema.name" : "oozie",
    "oozie.service.JPAService.create.db.schema" : "false",
"oozie.service.JPAService.jdbc.driver" : "com.mysql.jdbc.Driver",
"oozie.service.JPAService.jdbc.username" : "oozie"
}
}


Let me know if there has to be any modification in the same.

Thanks,
Pratik
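A note on the two missing keys asked about earlier in the thread: for Oozie, the database host and schema name are carried in oozie-site rather than oozie-env; the schema via oozie.db.schema.name and the host inside the JDBC URL (oozie.service.JPAService.jdbc.url). A sketch of assembling that fragment (the host and schema values are placeholders):

```python
import json

def oozie_mysql_config(db_host, schema="oozie"):
    """Build an oozie-site fragment for an existing MySQL database.

    The database host appears only inside the JDBC URL; there is no
    separate oozie_hostname-style key in oozie-site.
    """
    return {
        "oozie-site": {
            "oozie.db.schema.name": schema,
            "oozie.service.JPAService.jdbc.driver": "com.mysql.jdbc.Driver",
            "oozie.service.JPAService.jdbc.url":
                "jdbc:mysql://%s/%s" % (db_host, schema),
            "oozie.service.JPAService.jdbc.username": "oozie",
        }
    }

print(json.dumps(oozie_mysql_config("db01.example.com"), indent=2))
```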


Re: Dividing Master Services

2015-04-23 Thread Sumit Mohanty
What's your goal in dividing this over two nodes? Generally, such a division 
depends on the kind of workload you are running (I am no expert here). I can 
easily see moving HBASE_REGIONSERVER and SUPERVISOR to one node so that the 
Storm and HBase workloads can use the available resources.


OTOH, if you just want to split the load arbitrarily, then I think you can 
create an arbitrary split, as long as you make sure that clients are installed 
where needed.


You can refer to the common-services definitions for dependencies - e.g. 
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/metainfo.xml


-Sumit
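As one concrete illustration of the arbitrary split described above (this grouping is only an example, not a recommendation; it merely keeps the Storm and HBase daemons together and must still be checked against each component's metainfo.xml dependencies):

```python
# Illustrative split only -- verify dependencies against each service's
# metainfo.xml before applying. Node 2 groups the Storm and HBase
# daemons; everything else stays on node 1.
NODE2 = {
    "NIMBUS", "SUPERVISOR", "STORM_UI_SERVER", "DRPC_SERVER",
    "HBASE_MASTER", "HBASE_REGIONSERVER", "KAFKA_BROKER",
}
ALL_COMPONENTS = {
    "PIG", "HISTORYSERVER", "KAFKA_BROKER", "HBASE_REGIONSERVER",
    "OOZIE_CLIENT", "HBASE_CLIENT", "NAMENODE", "SUPERVISOR",
    "FALCON_SERVER", "HCAT", "KNOX_GATEWAY", "SLIDER", "AMBARI_SERVER",
    "APP_TIMELINE_SERVER", "HDFS_CLIENT", "HIVE_CLIENT", "FLUME_HANDLER",
    "WEBHCAT_SERVER", "RESOURCEMANAGER", "ZOOKEEPER_SERVER",
    "ZOOKEEPER_CLIENT", "STORM_UI_SERVER", "HBASE_MASTER", "HIVE_SERVER",
    "OOZIE_SERVER", "FALCON_CLIENT", "SECONDARY_NAMENODE", "TEZ_CLIENT",
    "HIVE_METASTORE", "GANGLIA_SERVER", "SQOOP", "YARN_CLIENT",
    "MAPREDUCE2_CLIENT", "MYSQL_SERVER", "GANGLIA_MONITOR", "DRPC_SERVER",
    "NIMBUS",
}
node1 = sorted(ALL_COMPONENTS - NODE2)
node2 = sorted(NODE2)
assert set(node1) | set(node2) == ALL_COMPONENTS  # nothing lost in the split
print("node1:", node1)
print("node2:", node2)
```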



From: Pratik Gadiya 
Sent: Thursday, April 23, 2015 11:11 AM
To: user@ambari.apache.org
Subject: Dividing Master Services

Hi All,

I have deployed 2 node cluster using Ambari ( 1 Master Node and 1 Compute Node).
I want to scale this cluster to 2 Master Node and 1 Compute Node.

My Services deployed on Master Node are as follows :

  "components" : [
{  "name" : "PIG"},
{  "name" : "HISTORYSERVER"},
{  "name" : "KAFKA_BROKER"},
{  "name" : "HBASE_REGIONSERVER"},
{  "name" : "OOZIE_CLIENT"},
{  "name" : "HBASE_CLIENT"},
{  "name" : "NAMENODE"},
{  "name" : "SUPERVISOR"},
{  "name" : "FALCON_SERVER"},
{  "name" : "HCAT"},
{  "name" : "KNOX_GATEWAY"},
{  "name" : "SLIDER"},
{  "name" : "AMBARI_SERVER"},
{  "name" : "APP_TIMELINE_SERVER"},
{  "name" : "HDFS_CLIENT"},
{  "name" : "HIVE_CLIENT"},
{  "name" : "FLUME_HANDLER"},
{  "name" : "WEBHCAT_SERVER"},
{  "name" : "RESOURCEMANAGER"},
{  "name" : "ZOOKEEPER_SERVER"},
{  "name" : "ZOOKEEPER_CLIENT"},
{  "name" : "STORM_UI_SERVER"},
{  "name" : "HBASE_MASTER"},
{  "name" : "HIVE_SERVER"},
{  "name" : "OOZIE_SERVER"},
{  "name" : "FALCON_CLIENT"},
{  "name" : "SECONDARY_NAMENODE"},
{  "name" : "TEZ_CLIENT"},
{  "name" : "HIVE_METASTORE"},
{  "name" : "GANGLIA_SERVER"},
{  "name" : "SQOOP"},
{  "name" : "YARN_CLIENT"},
{  "name" : "MAPREDUCE2_CLIENT"},
{  "name" : "MYSQL_SERVER"},
{  "name" : "GANGLIA_MONITOR"},
{  "name" : "DRPC_SERVER"},
{  "name" : "NIMBUS"}
  ],

Can someone please let me know how I can divide those services between the two 
master nodes.

It would be of great help if someone could reply to this mail as follows:

Components on Master Node 1: ,
Components on Master Node 2: .

Reason for dividing as such 

NOTE: I do have same configuration of machines for both the master nodes.

Thanks,
Pratik Gadiya

DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.


Re: XML Schema documentation for metainfo.xml?

2015-04-24 Thread Sumit Mohanty
Some documentation exists at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133


Rather than the java/python code a better start would be existing metainfo.xml 
files such as - 
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/metainfo.xml


From: Dmitry Vasilenko 
Sent: Friday, April 24, 2015 10:42 AM
To: user@ambari.apache.org
Subject: XML Schema documentation for metainfo.xml?

I am trying to find some information about the subject. Does anyone know if 
such document exists in some form? Or the only way to figure that out is to 
look at the java/python code on github?

Regards
Dmitry Vasilenko


Re: Ambari Custom Service Questions.

2015-05-09 Thread Sumit Mohanty
One addition to question 4.


From: Alejandro Fernandez 
Sent: Saturday, May 09, 2015 1:44 PM
To: user@ambari.apache.org; Christopher Jackson
Subject: Re: Ambari Custom Service Questions.

Hi Christopher, these are all very good questions, and it would be useful to 
supplement the wiki with them.
Comments inline.

On 5/9/15, 11:59 AM, "Christopher Jackson" 
mailto:jackson.christopher@gmail.com>> 
wrote:

Hi All,

I’ve been in the process of creating a custom Ambari service over the past week 
and have quite a few general questions for which I haven’t found answers in the 
documentation or on the wiki. I was hoping some of you could help answer any of 
the following questions. Thanks in advance.

1) I’ve noticed that when restarting a service component of type ‘CLIENT’, its 
install and configure methods are invoked. I’m wondering if this is intended 
and, if so, why? For components of type ‘MASTER’ a restart doesn’t seem to 
invoke install and configure again; it just invokes stop, then start. I ask 
about this because my custom service has a CLIENT component with some steps in 
the install stage that I don’t want repeated every time it’s restarted.

Alejandro> For clients, a "Restart" or "Refresh Configs" essentially only needs 
to make sure that the client libraries are present and the configs are setup. 
Since technically a client cannot be restarted because it is not a daemon, the 
code is written in such a way that it is idempotent, so no harm in installing 
libs that are already present, or setting configs that are already there.
For Masters, they have independent commands to Install the libs, and Restart 
the daemon process.
For your client, is there any artifact you can check to avoid running your 
one-time-install multiple times?

2) Can someone explain the implication of the auto-deploy and its child 
elements in the context where the following snippet would be placed in the 
metainfo.xml file of a custom service component 
(MY_SERVICE/MY_COMPONENT_MASTER)?


<dependency>
  <name>HBASE/HBASE_CLIENT</name>
  <scope>host</scope>
  <auto-deploy>
    <enabled>true</enabled>
    <co-locate>MY_SERVICE/MY_COMPONENT_CLIENT</co-locate>
  </auto-deploy>
</dependency>



Alejandro> This is a really good question. Some components depend on others, 
and those dependencies need to be installed automatically, either anywhere in 
the cluster, the same host, or on the same host that contains another 
component. You probably don't need the "co-locate" tag, since it is used to 
indicate masters that must be together, and is configured in the UI during the 
Service Install Wizard.

3) When creating a configuration file for a custom service, what are the valid 
entries for a <property-type> tag? PASSWORD, and what else? Are there any other 
child elements of ‘property’ that are useful? Perhaps anything that allows you 
to provide a regular expression for validation?

Alejandro> These are PASSWORD, USER, GROUP, TEXT.  These only contain name, 
value, description. If you need to use regex to validate a property, that means 
the UI should take care of it, so take a look at ambari-web module, 
particularly, config_property_helper.js

4) Is there some function to restart a service in resource_management or other 
ambari python library? Or should I just be restarting services using the 
command line tools and ensuring to update the appropriate pid files? I ask this 
question because I’ve noticed I cannot restart a service using the Ambari API 
as part of the installation/configuration steps of my custom service, as the 
restart commands are queued while the custom service installation/configuration 
is running, and will cause a timeout. I’m looking for a solution to this 
problem, what’s recommended if not one of the approaches I’ve asked about above?

Alejandro> Once the service is defined in the metainfo.xml file, along with the 
python file to use, it's up to that script to decide how to install and restart 
your service. Ambari doesn't couple config changes with forcing the service to 
restart automatically, this is because if a user makes a config change or 
installs something, Ambari only highlights that the service needs to be 
restarted, but it's up to the user to decide when to do it. If you wanted to do 
automatic restarts upon config changes, then that your python script would then 
have to call the restart() method. Take a look at script.py

Sumit> In general, the pattern of calling Ambari Server APIs from the 
implementation of install/configure/start of a component definition is not 
supported. This is because only one command can be executed at any time on a 
host. In theory, you could make the call from the install/configure/start 
implementation and not wait for the call to complete. Can you explain the 
scenario a bit more? Are you restarting your custom service or some other 
service from the install/configuration of the custom service?

5) How can I allow for the removal of a custom service from the ambari console? 
I know there is a sequence of Ambari API commands I can run to: stop the 
se

Re: Ambari Custom Service Questions.

2015-05-10 Thread Sumit Mohanty
Inline ...


From: Christopher Jackson 
Sent: Sunday, May 10, 2015 8:49 AM
To: user@ambari.apache.org
Cc: Sumit Mohanty; Alejandro Fernandez
Subject: Re: Ambari Custom Service Questions.

Thanks for this information. I have a few follow up questions asked inline. 
Thanks.

Regards,
Christopher Jackson

On May 9, 2015, at 9:25 PM, Sumit Mohanty 
mailto:smoha...@hortonworks.com>> wrote:

One addition to question 4.

From: Alejandro Fernandez 
mailto:afernan...@hortonworks.com>>
Sent: Saturday, May 09, 2015 1:44 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>; Christopher Jackson
Subject: Re: Ambari Custom Service Questions.

Hi Christopher, these are all very good questions, and it would be useful to 
supplement the wiki with them.
Comments inline.

On 5/9/15, 11:59 AM, "Christopher Jackson" 
mailto:jackson.christopher@gmail.com>> 
wrote:

Hi All,

I’ve been in the process of creating a custom Ambari service over the past week 
and have quite a few general questions for which I haven’t found answers in the 
documentation or on the wiki. I was hoping some of you could help answer any of 
the following questions. Thanks in advance.

1) I’ve noticed that when restarting a service component of type ‘CLIENT’, its 
install and configure methods are invoked. I’m wondering if this is intended 
and, if so, why? For components of type ‘MASTER’ a restart doesn’t seem to 
invoke install and configure again; it just invokes stop, then start. I ask 
about this because my custom service has a CLIENT component with some steps in 
the install stage that I don’t want repeated every time it’s restarted.

Alejandro> For clients, a "Restart" or "Refresh Configs" essentially only needs 
to make sure that the client libraries are present and the configs are setup. 
Since technically a client cannot be restarted because it is not a daemon, the 
code is written in such a way that it is idempotent, so no harm in installing 
libs that are already present, or setting configs that are already there.
For Masters, they have independent commands to Install the libs, and Restart 
the daemon process.
For your client, is there any artifact you can check to avoid running your 
one-time-install multiple times?

Chris> Thank you for the explanation. Yes I can work around this by checking if 
certain artifacts exist.
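The artifact check mentioned above can be as simple as a marker file written after the one-time work, which makes the client's install method idempotent across restarts. A minimal sketch (the marker path and the do_one_time_setup step are placeholders, not Ambari API):

```python
import os
import tempfile

# Placeholder marker path; a real service would keep this under its own dir.
MARKER = os.path.join(tempfile.gettempdir(), ".my_client_installed")

def do_one_time_setup():
    # Placeholder for the expensive work that must not repeat on restart.
    pass

def install_client():
    """Skip the one-time work if the marker from a previous run exists."""
    if os.path.exists(MARKER):
        return "skipped"
    do_one_time_setup()
    with open(MARKER, "w") as f:
        f.write("done")  # written only after the setup succeeded
    return "installed"

print(install_client())  # first call does the work
print(install_client())  # subsequent restarts skip it
```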


2) Can someone explain the implication of the auto-deploy and its child 
elements in the context where the following snippet would be placed in the 
metainfo.xml file of a custom service component 
(MY_SERVICE/MY_COMPONENT_MASTER)?


<dependency>
  <name>HBASE/HBASE_CLIENT</name>
  <scope>host</scope>
  <auto-deploy>
    <enabled>true</enabled>
    <co-locate>MY_SERVICE/MY_COMPONENT_CLIENT</co-locate>
  </auto-deploy>
</dependency>



Alejandro> This is a really good question. Some components depend on others, 
and those dependencies need to be installed automatically, either anywhere in 
the cluster, the same host, or on the same host that contains another 
component. You probably don't need the "co-locate" tag, since it is used to 
indicate masters that must be together, and is configured in the UI during the 
Service Install Wizard.

Chris> As part of my custom service I am adding libraries to both the HDFS and 
HBASE Services. My custom services Client component ensures these libraries get 
installed on the system and symlinked to the appropriate lib folder. Here is my 
concern. If a user installs a cluster with my service and then later adds an 
additional node to the cluster with HDFS or HBASE installed on that node how 
can I ensure that my custom services client component also gets installed? Is 
there a way to do that without defining a custom stack or modifying the HBASE 
and HDFS Service definitions?

Sumit> At this point the user needs to explicitly install the custom service 
client as well. The capability that is needed is for a component to specify in 
its own metainfo that it is a mandatory dependency of another component. Feel 
free to create a JIRA for this feature.


3) When creating a configuration file for a custom service, what are the valid 
entries for a <property-type> tag? PASSWORD, and what else? Are there any other 
child elements of ‘property’ that are useful? Perhaps anything that allows you 
to provide a regular expression for validation?

Alejandro> These are PASSWORD, USER, GROUP, TEXT.  These only contain name, 
value, description. If you need to use regex to validate a property, that means 
the UI should take care of it, so take a look at ambari-web module, 
particularly, config_property_helper.js

4) Is there some function to restart a service in resource_management or other 
ambari python library? Or should I just be restarting services using the 
command line tools and ensuring to update the appropriate pid files? I ask this 
question because I’ve noticed I cannot restart a service using the Ambari API 
as part of the installation/configuration s

Re: Ambari 2.0 DECOMMISSION

2015-05-14 Thread Sumit Mohanty
Occasions where I do not see a node go to decommissioned state are when the 
replication factor (dfs.replication) is equal to or greater than the number of 
active DataNodes.


Hosts get removed from exclude file when the host gets deleted. This was added 
at some point so that when the host is added back the DN can join normally. 
Host component start/stop should not trigger this.


From: Greg Hill 
Sent: Thursday, May 14, 2015 11:46 AM
To: user@ambari.apache.org; Sean Roberts
Subject: Re: Ambari 2.0 DECOMMISSION

Some further testing results:

1. Turning on maintenance mode beforehand didn't seem to affect it.
2. The datanodes do go to decommissioning briefly before they go back to live, 
so it is at least trying to decommission them.  Shouldn't they go to 
'decommissioned' after it finishes though?
3. Some operation I'm doing (either stop host components or deleting host 
components) is causing Ambari to automatically do a request like this for each 
node that's been decommissioned:
Remove host slave-6.local from exclude file
When that's done is when they get marked "dead" by the Namenode.

This worked fine in Ambari 1.7, so I'm guessing the "remove host from exclude 
file" thing is what's breaking it as that's new.  Is there some way to disable 
that?  Can someone explain the rationale behind it?  I'd like to be able to 
remove nodes without having to restart the Namenode.

Greg

From: Greg mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Thursday, May 14, 2015 at 10:59 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>, Sean Roberts 
mailto:srobe...@hortonworks.com>>
Subject: COMMERCIAL:Ambari 2.0 DECOMMISSION

Did anything change with DECOMMISSION in the 2.0 release?  The process appears 
to decommission fine (the request completes and says it updated the dfs.exclude 
file), but the datanodes aren't decommissioned and HDFS now says they're dead 
and I need to restart the Namenode.  For YARN, the nodemanagers appear to have 
decommissioned ok and are in decommissioned status, but it says I need to 
restart the resource manager (this didn't used to be the case in 1.7.0).

The only difference is that I don't set maintenance mode on the datanodes until 
after the decommission completes, because that wasn't working for me at one 
point (turns out hitting the API slightly differently would have made it work). 
 Is that the cause maybe?  Is restarting the master services now required after 
a decommission?


Task output:

DataNode Decommission: slave-2.local,slave-4.local

stderr:
None
 stdout:
2015-05-14 14:45:48,439 - u"File['/etc/hadoop/conf/dfs.exclude']" {'owner': 
'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2015-05-14 14:45:48,670 - Writing u"File['/etc/hadoop/conf/dfs.exclude']" 
because contents don't match
2015-05-14 14:45:48,864 - u"Execute['']" {'user': 'hdfs'}
2015-05-14 14:45:48,968 - u"ExecuteHadoop['dfsadmin -refreshNodes']" 
{'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'conf_dir': 
'/etc/hadoop/conf', 'kinit_override': True, 'user': 'hdfs'}
2015-05-14 14:45:49,011 - u"Execute['hadoop --config /etc/hadoop/conf dfsadmin 
-refreshNodes']" {'logoutput': None, 'try_sleep': 0, 'environment': {}, 
'tries': 1, 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}

DataNodes Status: 3 live / 2 dead / 0 decommissioning

NodeManager Decommission: slave-2.local,slave-4.local

stderr:
None
 stdout:
2015-05-14 14:47:16,491 - u"File['/etc/hadoop/conf/yarn.exclude']" {'owner': 
'yarn', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2015-05-14 14:47:16,866 - Writing u"File['/etc/hadoop/conf/yarn.exclude']" 
because contents don't match
2015-05-14 14:47:17,057 - u"Execute[' yarn --config /etc/hadoop/conf rmadmin 
-refreshNodes']" {'environment': {'PATH': 
'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/sbin:/usr/sbin:/bin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hadoop-client/bin:/usr/hdp/current/hadoop-yarn-resourcemanager/bin'},
 'user': 'yarn'}

NodeManagers Status: 3 active / 0 lost / 0 unhealthy / 0 rebooted / 2 
decommissioned



Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM service/components by using API

2015-05-16 Thread Sumit Mohanty
Can you try the delete at the level of "components" - 
http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_REST_API

This is what succeeded for me after getting STORM_REST_API to INSTALLED.

[root@c6403 vagrant]# curl -i -uadmin:admin -H "X-Requested-By: ambari" -X GET 
http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_REST_API
HTTP/1.1 200 OK
Set-Cookie: AMBARISESSIONID=1ok5l711vxq3f9h27i1be8l63;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 673
Server: Jetty(7.6.7.v20120910)

{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_REST_API";,
  "ServiceComponentInfo" : {
"category" : "MASTER",
"cluster_name" : "c1",
"component_name" : "STORM_REST_API",
"installed_count" : 1,
"service_name" : "STORM",
"started_count" : 0,
"state" : "INSTALLED",
"total_count" : 1
  },
  "host_components" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/hosts/c6403.ambari.apache.org/host_components/STORM_REST_API";,
  "HostRoles" : {
"cluster_name" : "c1",
"component_name" : "STORM_REST_API",
"host_name" : "c6403.ambari.apache.org"
  }
}
  ]
}

[root@c6403 vagrant]#
[root@c6403 vagrant]# curl -i -uadmin:admin -H "X-Requested-By: ambari" -X 
DELETE 
http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_REST_API
HTTP/1.1 200 OK
Set-Cookie: AMBARISESSIONID=1ant3i1sk3s715va5ha573cyl;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 0
Server: Jetty(7.6.7.v20120910)

[root@c6403 vagrant]# ambari-server --version
2.0.0

From: Eirik Thorsnes 
Sent: Saturday, May 16, 2015 2:40 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM 
service/components by using API

On 14.05.15 19:43, Artem Ervits wrote:
 > Hello Eirik,
 > I also struggled with this, eventually I figured it out by poking
around in the Ambari database. In essence, you need to make sure the
state is installed before you can delete it. Once you put it in the
correct state, you should be able to delete all components by first
listing which are still available. I apologize that I cannot give you
the steps I've taken but since it was a lot of trial and error, I don't
have a conclusive procedure.


Hello,

I've now checked that the state/desiredstate in the ambari-db is set to
INSTALLED for both component and service.
I'm able to delete all other Storm host-components, except
STORM_REST_API. For this I get:

curl -u admin:admin -X DELETE -H 'X-Requested-By: ambari' http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/STORM_REST_API
{
  "status" : 500,
  "message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Could not find service for component, componentName=STORM_REST_API, clusterName=helm, stackInfo=HDP-2.2"
}

The db shows that it is still there:

ambari=> select * from servicecomponentdesiredstate where service_name = 'STORM';
 component_name | cluster_id |          desired_stack_version           | desired_state | service_name
----------------+------------+------------------------------------------+---------------+--------------
 STORM_REST_API |          2 | {"stackName":"HDP","stackVersion":"2.2"} | INSTALLED     | STORM
(1 row)

ambari=> SELECT * from hostcomponentstate where service_name = 'STORM';
 cluster_id | component_name |          current_stack_version           | current_state |     host_name      | service_name | upgrade_state |  version  | security_state
------------+----------------+------------------------------------------+---------------+--------------------+--------------+---------------+-----------+----------------
          2 | STORM_REST_API | {"stackName":"HDP","stackVersion":"2.2"} | INSTALLED     | service-10-0.local | STORM        | NONE          | 2.2.4.2-2 | UNSECURED
(1 row)

ambari=> SELECT * from hostcomponentdesiredstate where service_name = 'STORM';
 cluster_id | component_name |          desired_stack_version           | desired_state |     host_name      | service_name | admin_state | maintenance_state | restart_required | security_state
------------+----------------+------------------------------------------+---------------+--------------------+--------------+-------------+-------------------+------------------+----------------
          2 | STORM_REST_API | {"stackName":"HDP","stackVersion":"2.2"} | INSTALLED     | service-10-0.local | STORM        | INSERVICE   | OFF               |                0 | UNSECURED
(1 row)

ambari=> select * from servicedesiredstate where service_name = 'STORM';
 cluster_id | desired_host_role_mapping | desired_stack_version | desired_state | service_name | maintenance_state | security_state
------------+---------------------------+-----------------------+---------------+--------------

Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM service/components by using API

2015-05-17 Thread Sumit Mohanty
Can you restart Ambari Server in case the DB and the server are out of sync (especially 
if the DB was updated manually)?

What are the outputs of

GET http://localhost:8080/api/v1/clusters/helm/services/STORM

and 

GET http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components
 

From: Eirik Thorsnes 
Sent: Sunday, May 17, 2015 6:54 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM 
service/components by using API

On 16.05.15 17:45, Sumit Mohanty wrote:
> Can you try the delete at the level of "components" -
> http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_REST_API
>
> This is what succeeded for me after getting STORM_REST_API to INSTALLED.

Hi,

unfortunately, that fails with a 404:

curl -u admin:admin -i -X GET -H 'X-Requested-By: ambari' http://localhost:8080/api/v1/clusters/helm/services/STORM/components/STORM_REST_API
HTTP/1.1 404 Not Found
Set-Cookie: AMBARISESSIONID=174fu9l7vnyfr1e9paelrd5ee9;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 176
Server: Jetty(7.6.7.v20120910)

{
  "status" : 404,
  "message" : "The requested resource doesn't exist: ServiceComponent not found, clusterName=helm, serviceName=STORM, serviceComponentName=STORM_REST_API"
}

It seems that since the upgrade to the HDP-2.2 stack, Ambari is clearly
"brain-split" about what exists. I've looked in the ambari db for clues
as to where the "missing link" could be, but so far I've not found any
connection.

Regards,
Eirik


Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM service/components by using API

2015-05-17 Thread Sumit Mohanty
Can you call a DELETE on 
http://localhost:8080/api/v1/clusters/helm/services/STORM and check if it 
succeeds?

From: Eirik Thorsnes 
Sent: Sunday, May 17, 2015 7:36 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM 
service/components by using API

Hi,

I had already tried restarting the Ambari server, because the db was edited
at some point to make sure the host-component was in a known state (set
version number and set INSTALLED). I tried a new restart now, including
the postgresql db for good measure.
Still the same 404 for service+component, and 500 for host+component (for
DELETEs).

The GETs are listed below. I double-checked and see that service-10-0
still has STORM_REST_API listed as a component, and
servicecomponentdesiredstate still has STORM_REST_API as well.

Thanks,
Eirik

  curl -u admin:admin -i -X GET -H 'X-Requested-By: ambari'
http://localhost:8080/api/v1/clusters/helm/services/STORM
HTTP/1.1 200 OK
Set-Cookie: AMBARISESSIONID=17av8wwjl78ozqn2wki23f242;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 729
Server: Jetty(7.6.7.v20120910)

{
  "href" : "http://localhost:8080/api/v1/clusters/helm/services/STORM",
  "ServiceInfo" : {
    "cluster_name" : "helm",
    "maintenance_state" : "OFF",
    "service_name" : "STORM",
    "state" : "UNKNOWN"
  },
  "alerts_summary" : {
    "CRITICAL" : 1,
    "MAINTENANCE" : 0,
    "OK" : 0,
    "UNKNOWN" : 0,
    "WARNING" : 0
  },
  "alerts" : [
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/services/STORM/alerts/8",
      "Alert" : {
        "cluster_name" : "helm",
        "definition_id" : 24,
        "definition_name" : "storm_supervisor_process_percent",
        "host_name" : null,
        "id" : 8,
        "service_name" : "STORM"
      }
    }
  ],
  "components" : [ ],
  "artifacts" : [ ]
}

=

curl -i -u admin:admin -X GET http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components
HTTP/1.1 200 OK
Set-Cookie: AMBARISESSIONID=tjn6m87o7t7ox0juqnrt8fvx;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 3258
Server: Jetty(7.6.7.v20120910)

{
  "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components",
  "items" : [
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/HBASE_MASTER",
      "HostRoles" : {
        "cluster_name" : "helm",
        "component_name" : "HBASE_MASTER",
        "host_name" : "service-10-0.local"
      },
      "host" : {
        "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local"
      }
    },
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/HDFS_CLIENT",
      "HostRoles" : {
        "cluster_name" : "helm",
        "component_name" : "HDFS_CLIENT",
        "host_name" : "service-10-0.local"
      },
      "host" : {
        "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local"
      }
    },
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/MAPREDUCE2_CLIENT",
      "HostRoles" : {
        "cluster_name" : "helm",
        "component_name" : "MAPREDUCE2_CLIENT",
        "host_name" : "service-10-0.local"
      },
      "host" : {
        "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local"
      }
    },
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/METRICS_MONITOR",
      "HostRoles" : {
        "cluster_name" : "helm",
        "component_name" : "METRICS_MONITOR",
        "host_name" : "service-10-0.local"
      },
      "host" : {
        "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local"
      }
    },
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/NAMENODE",
      "HostRoles" : {
        "cluster_name" : "helm",
        "component_name" : "NAMENODE",
        "host_name" : "service-10-0.local"
      },
      "host" : {
        "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local"
      }
    },
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/SPARK_JOBHISTORYSERVER",
      "HostRoles" : {
        "cluster_name" : "helm",
        "component_name" : "SPARK_JOBHISTORYSERVER",
        "host_name" : "service-10-0.local"
      },
      "host" : {
        "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local"
      }
    },
    {
      "href" : "http://localhost:8080/api/v1/clusters/helm/hosts/service-10-0.local/host_components/TEZ_CLIENT",
      "HostRoles" : {
        "cluster_na

Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM service/components by using API

2015-05-18 Thread Sumit Mohanty
Eirik, you are right. STORM_REST_API does not exist for STORM in HDP-2.2. 

The happy path that should work (verified it yesterday) is:

* Start at HDP-2.1
* Stop STORM
* Upgrade to HDP-2.2
* Before starting the STORM service, delete STORM_REST_API at the level of the component (not host_component):
  - DELETE .../api/v1/clusters/MyCluster/services/STORM/components/STORM_REST_API

This should be true for any stack definition if a component is deleted in the 
new version.

Of course it did not work for you. I will look into the thread where you 
described your initial steps and see what issue prevented it.

Thanks
Sumit
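The happy path above can be expressed as an ordered sequence of REST calls. A sketch; the base URL and cluster name are illustrative placeholders, and the manual stack-upgrade step happens between the first and second calls:

```python
def storm_rest_api_removal_plan(base, cluster):
    """Ordered (method, url, body) calls for removing STORM_REST_API
    around an HDP-2.1 -> HDP-2.2 upgrade, following the steps above."""
    svc = f"{base}/api/v1/clusters/{cluster}/services/STORM"
    return [
        # Stop STORM before upgrading the stack.
        ("PUT", svc, '{"ServiceInfo": {"state": "INSTALLED"}}'),
        # After the stack upgrade, delete at the component level.
        ("DELETE", svc + "/components/STORM_REST_API", None),
        # Then start STORM again.
        ("PUT", svc, '{"ServiceInfo": {"state": "STARTED"}}'),
    ]

for method, url, _body in storm_rest_api_removal_plan("http://localhost:8080", "MyCluster"):
    print(method, url)
```

Each tuple maps directly onto a curl invocation like the ones shown earlier in this thread (with -u admin:admin and the X-Requested-By header).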


From: Eirik Thorsnes 
Sent: Monday, May 18, 2015 7:12 AM
To: Artem Ervits; user@ambari.apache.org
Subject: Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM 
service/components by using API

On 18. mai 2015 15:03, Artem Ervits wrote:
> I am following the thread, I remember when I had this issue, I believe I
> added the service and then was able to delete it. See if you can try to
> add the component with API and then try to delete the service.

Unfortunately, that also fails. From all of these errors, it seems
to me that in the HDP-2.2 stack there is no link between the service "STORM"
and the component "STORM_REST_API", and thus neither creation nor
deletion works.

I'm going now to try the direct database deletion process.

Regards,
Eirik

--
Eirik Thorsnes
Group Leader at Parallab, Uni Research Computing
Høyteknologisenteret, Thormøhlensgate 55, N-5008 Bergen, Norway
tel: (+47) 555 84153  fax: (+47) 555 84295
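The direct database deletion Eirik mentions amounts to removing the orphaned STORM_REST_API rows from the tables inspected earlier in the thread. A sketch against an in-memory SQLite stand-in: the real database is Ambari's Postgres, the column sets here are abbreviated for illustration, and you should stop ambari-server and back up the DB before touching it:

```python
import sqlite3

# Abbreviated stand-ins for the Ambari tables queried earlier in the thread.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE servicecomponentdesiredstate (component_name TEXT, service_name TEXT);
CREATE TABLE hostcomponentstate          (component_name TEXT, service_name TEXT);
CREATE TABLE hostcomponentdesiredstate   (component_name TEXT, service_name TEXT);
INSERT INTO servicecomponentdesiredstate VALUES ('STORM_REST_API', 'STORM');
INSERT INTO hostcomponentstate           VALUES ('STORM_REST_API', 'STORM');
INSERT INTO hostcomponentdesiredstate    VALUES ('STORM_REST_API', 'STORM');
""")

# Remove the orphaned component everywhere it is referenced:
# host-level state first, then the service-component row.
for table in ("hostcomponentdesiredstate", "hostcomponentstate",
              "servicecomponentdesiredstate"):
    db.execute(f"DELETE FROM {table} WHERE component_name = ?", ("STORM_REST_API",))
db.commit()

remaining = db.execute(
    "SELECT count(*) FROM servicecomponentdesiredstate").fetchone()[0]
print(remaining)
```

After such a cleanup, restart ambari-server so its in-memory model is rebuilt from the database.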


Re: Q Regarding Ambari QuickStart

2015-05-24 Thread Sumit Mohanty
What is the error you see or what step are you blocked on?

From: Frans Thamura 
Sent: Sunday, May 24, 2015 6:39 AM
To: user@ambari.apache.org
Subject: Q Regarding Ambari QuickStart

Hi All

I use the Ambari Quick Start on my notebook.

Things were working well until the Installation Wizard.

I believe this vagrant step hits a similar error when I try to deploy it
on my own server and on bare metal (1 node).

Is there a missing step that I must find that does not exist in the quickstart?

thx

Ref: https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide

F



Re: Upgrade process for custom services?

2015-06-01 Thread Sumit Mohanty
In general, if you remove the service and re-add it, Ambari will call the 
install command. If yum (assuming you are on rhel/centos) is refusing to 
upgrade, then likely it is not finding the new package, or is expecting a "yum 
upgrade" call. In that case, you have to manually upgrade the package.

While not a lot of thought has been given to this (read: there are no open tasks 
discussing the designs), the general approach for upgrading a custom service is 
as follows:

* Stop the service
* Upgrade the service binaries on all host where its components are deployed
* Modify the service definition within the stack definition on the 
ambari-server host
* Restart ambari-server
* Start the service using Ambari (ambari-agents download the modified service 
definitions when you issue svc mgmt commands)

In addition, depending on what the new version of the service needs, you may 
have one or more of the following:
* Modify the service config if some old config properties are not valid or some 
new configs do not have valid default
* Restart other services if they need to react to the new service or the new 
config

-Sumit
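The stop/start steps in the list above go through the same REST API used elsewhere in these threads; the PUT body for a service state change looks like this (the context string is free-form and only used for display in the requests UI):

```python
import json

def service_state_body(state):
    """Build the PUT body for changing a service's state via the Ambari
    REST API -- "INSTALLED" stops the service, "STARTED" starts it."""
    return json.dumps({
        "RequestInfo": {"context": f"Set state to {state} via REST"},
        "Body": {"ServiceInfo": {"state": state}},
    })

print(service_state_body("INSTALLED"))  # stop the service
print(service_state_body("STARTED"))    # start the service
```

PUT this to .../api/v1/clusters/CLUSTER_NAME/services/SERVICE_NAME with the X-Requested-By header, as in the curl examples earlier in this archive.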



From: Christopher Jackson 
Sent: Monday, June 01, 2015 3:44 PM
To: user@ambari.apache.org
Subject: Upgrade process for custom services?

Hello all,

What did you guys have in mind (if anything) for being able to upgrade a custom 
service? Let’s say a cluster has my custom service installed and I release a 
new version by distributing an updated service definition and update the RPMs 
needed by my service. Is there an upgrade mechanism for an individual service 
in Ambari? I tried simply removing the service from Ambari by using the Ambari 
API and then attempted to install the updated service. But it skips over 
installing the RPM because it knows its already installed (even though its a 
different version). I don’t want to remove the old version first then install 
the new version as there are some things I would like to migrate from the old 
to new install.

Perhaps I’m not hosting my local RPM repository correctly? Or something else 
I’m missing? Any guidance would be appreciated.

Regards,

Christopher Jackson


Re: custom services / status

2015-06-03 Thread Sumit Mohanty
It's the implementation of the start command that should create the pid file - 
essentially you have to do it yourself in the start script.


You can use the pid file to make the start script idempotent. It's a best 
practice for the stop command to delete the pid file after stopping the component 
instance.


-Sumit
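The pid-file idiom Sumit describes, stripped of Ambari's resource_management helpers, is roughly the following. A sketch only; the file path is illustrative, and real Ambari scripts use the stack's helper functions for the status check:

```python
import os
import tempfile

def write_pid(pid_file, pid):
    """start: record the daemon's pid so later status checks can find it."""
    with open(pid_file, "w") as f:
        f.write(str(pid))

def is_running(pid_file):
    """status: True if the pid file exists and that process is alive."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)  # signal 0: existence check only, no signal delivered
        return True
    except (OSError, ValueError):
        return False

def remove_pid(pid_file):
    """stop: delete the pid file after stopping the process (best practice)."""
    if os.path.exists(pid_file):
        os.remove(pid_file)

pid_file = os.path.join(tempfile.gettempdir(), "my_service.pid")
write_pid(pid_file, os.getpid())  # pretend our own pid is the daemon's
print(is_running(pid_file))
remove_pid(pid_file)
print(is_running(pid_file))
```

The existence of a live pid behind the file is what Ambari's status command reports back, which is why a start script that never writes the file shows the service as not started.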


From: Donald Hansen 
Sent: Wednesday, June 03, 2015 5:08 PM
To: user@ambari.apache.org
Subject: Re: custom services / status

Thanks for the quick reply. I was looking at some examples and was curious about 
the pid file. Do I need to create that myself, or is there some code that 
creates it for me automatically?


On Wednesday, June 3, 2015, Yusaku Sako <yus...@hortonworks.com> wrote:
Have you implemented the "status" command for the component(s) in your custom 
service?  
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133
For most components, the status is based on the PID file.
You can look at some examples in the common-services directory: 
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase_regionserver.py#L55-L59

Yusaku

From: Donald Hansen 
Reply-To: "user@ambari.apache.org" 
Date: Wednesday, June 3, 2015 3:39 PM
To: "user@ambari.apache.org" 
Subject: custom services / status

I'm trying to create a custom service in Ambari and am curious how it tells Ambari 
whether the service started successfully. I was able to add a python function 
that starts my service, and my service does start correctly, but Ambari still 
shows the service as not started.

Thanks.
Donald


Re: Client components install order.

2015-06-17 Thread Sumit Mohanty
Hi Christopher,

Ambari does not support ordering the installation of clients. The dependency, for now, only 
ensures that they are installed on the same host. 

Let's open a JIRA. It's easy to implement an install order as well - basically 
call the same helper method as is done for START.

thanks
Sumit

From: Christopher Jackson 
Sent: Wednesday, June 17, 2015 8:03 PM
To: user@ambari.apache.org
Subject: Client components install order.

Hi all,

Is it possible to ensure clients are installed in a certain order? I have 
written a custom service with a CLIENT component which lists HDFS_CLIENT and 
HBASE_CLIENT as dependencies (scope: host). As part of my client install I add 
some libraries to HDFS and HBASE so I need for those clients to be installed 
before my client. I would think listing them as a dependency would achieve 
this, but it does not in Ambari 2.0.0.

How is client install order determined? How can I ensure my custom client is 
installed after a particular client?

Regards,

Christopher Jackson


Re: Ambari 2.1.0-snap: Alert configuration location

2015-06-18 Thread Sumit Mohanty
Does this have the details you need?
https://github.com/apache/ambari/blob/branch-2.0.0/ambari-server/docs/api/v1/alert-definitions.md

-Sumit

From: Eirik Thorsnes 
Sent: Thursday, June 18, 2015 2:58 AM
To: user@ambari.apache.org
Subject: Ambari 2.1.0-snap: Alert configuration location

Hi,

I'm looking at Ambari 2.1.0 compiled from git.
In JIRA AMBARI-10816 it was added the possibility to use configuration
instead of hard-coded values for alerts (as I understand it).

Where do I set these configurations?
For e.g.: the configuration percent.used.space.warning.threshold in the
alert_disk_space.py script.

Thanks,
Eirik

--
Eirik Thorsnes


Re: Creating a second cluster via REST api corrupts ambari configuration

2015-06-26 Thread Sumit Mohanty
While it is possible the data model is indeed corrupt, more than likely 
there are sections of code that stop at the first cluster they see, and that is 
creating some confusion. I think you can delete the second/third cluster from 
the cluster* tables, and after a restart Ambari Server should get back in shape.


Tables to check ...


* clusters

* clusterstate

* cluster_version

* clusterservices

* clusterconfig

* clusterhostmapping?


Also, please go ahead and create bugs based on your observations.
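A sketch of the cleanup Sumit suggests, generating DELETE statements for the tables listed above. This only builds the SQL as strings; table and column names are assumptions that should be verified against your Ambari version's schema, and you should stop ambari-server and back up the DB before running anything:

```python
# Tables Sumit lists above; "clusterhostmapping" may key on cluster_id
# differently in some versions -- verify against your schema first.
CLUSTER_TABLES = ["clusterstate", "cluster_version", "clusterservices",
                  "clusterconfig", "clusterhostmapping", "clusters"]

def cleanup_statements(cluster_name):
    """DELETE statements removing a half-created cluster's rows:
    child tables first, the clusters table itself last."""
    stmts = [f"DELETE FROM {t} WHERE cluster_id = "
             f"(SELECT cluster_id FROM clusters WHERE cluster_name = '{cluster_name}');"
             for t in CLUSTER_TABLES[:-1]]
    stmts.append(f"DELETE FROM clusters WHERE cluster_name = '{cluster_name}';")
    return stmts

for s in cleanup_statements("test"):
    print(s)
```

Deleting child rows before the clusters row avoids foreign-key violations; after the cleanup, restart Ambari Server as suggested above.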


From: clarkbrey...@gmail.com  on behalf of Clark 
Breyman 
Sent: Friday, June 26, 2015 5:18 PM
To: user@ambari.apache.org
Subject: Creating a second cluster via REST api corrupts ambari configuration

While experimenting with the REST api, I attempted to create a second cluster 
via POST to /api/v1/clusters/test. The result was the service being left in a 
corrupted state, as follows:

- Clusters lists the second cluster I created with the status "cluster creation 
is in progress". The original cluster and the third attempt were not listed.
- All three clusters are listed in the popup associated with the Versions 
"Install on" button.

I understand (now) that Ambari does not yet support multiple clusters but it 
seems odd that the data model can be corrupted by an API operation.

Is there any way to fix my database without scrapping and reprovisioning the 
entire cluster?

